paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: categorical (18 classes)
meta_review: string (length 29–10k)
label: categorical (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
nips_2021_Twf_XYunk5j
ConE: Cone Embeddings for Multi-Hop Reasoning over Knowledge Graphs
Query embedding (QE)---which aims to embed entities and first-order logical (FOL) queries in low-dimensional spaces---has shown great power in multi-hop reasoning over knowledge graphs. Recently, embedding entities and queries with geometric shapes has become a promising direction, as geometric shapes can naturally represent answer sets of queries and logical relationships among them. However, existing geometry-based models have difficulty in modeling queries with negation, which significantly limits their applicability. To address this challenge, we propose a novel query embedding model, namely \textbf{Con}e \textbf{E}mbeddings (ConE), which is the first geometry-based QE model that can handle all the FOL operations, including conjunction, disjunction, and negation. Specifically, ConE represents entities and queries as Cartesian products of two-dimensional cones, where the intersection and union of cones naturally model the conjunction and disjunction operations. By further noticing that the closure of the complement of a cone remains a cone, we design geometric complement operators in the embedding space for the negation operations. Experiments demonstrate that ConE significantly outperforms existing state-of-the-art methods on benchmark datasets.
accept
The paper attempts to improve reasoning over knowledge bases. In this regard, the authors introduce a novel query representation using cones. As the set of cones is closed under intersection, union, and complement, it is claimed that such a representation can be very expressive. Empirical performance of the proposed method is very good. We thank the reviewers and authors for engaging in an active discussion, which resulted in clearing a lot of the concerns (e.g. the mismatch between motivation and method), and a lot of constructive feedback was provided to improve the paper. The authors provided extensive empirical results as part of the discussion; please include them in the final version of the paper, as they add great value and understanding to the model as a whole.
train
[ "XHf06LtfgCu", "lZu8h--Bbw7", "PnhAmy389J", "VqXEkGQ3kku", "wd2VpHY96i", "NuM8ADqs_O-", "VQ9CGKslrrz", "Zz1MFo0KJm0", "vD7H2_qOnAk", "OUWM9UlgVd1", "LXbz2ujkmZQ", "dDw8CIaiOY", "HPcquTSSKza", "X-kJIRYldlk", "uWGfhSSGr9", "zsJh2ttYThH", "6b4sHaEZzj", "Si0i9cZoIe", "HyWZwb269P", ...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_r...
[ " Dear Reviewer 65PB,\n\n\n\nMany thanks for your great effort in reviewing our paper and replying to our response. We believe your valuable suggestions will significantly strengthen the quality of this submission. We will try our best to improve our manuscript based on the discussions in the rebuttal period. Thank...
[ -1, -1, 8, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 5 ]
[ "VqXEkGQ3kku", "NuM8ADqs_O-", "nips_2021_Twf_XYunk5j", "VQ9CGKslrrz", "nips_2021_Twf_XYunk5j", "vD7H2_qOnAk", "Zz1MFo0KJm0", "6b4sHaEZzj", "OUWM9UlgVd1", "uWGfhSSGr9", "dDw8CIaiOY", "zsJh2ttYThH", "nips_2021_Twf_XYunk5j", "uWGfhSSGr9", "PnhAmy389J", "064GeyC8nG_", "wd2VpHY96i", "H_...
nips_2021_p99rWde9fVJ
Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing
Tuning hyperparameters is a crucial but arduous part of the machine learning pipeline. Hyperparameter optimization is even more challenging in federated learning, where models are learned over a distributed network of heterogeneous devices; here, the need to keep data on device and perform local training makes it difficult to efficiently train and evaluate configurations. In this work, we investigate the problem of federated hyperparameter tuning. We first identify key challenges and show how standard approaches may be adapted to form baselines for the federated setting. Then, by making a novel connection to the neural architecture search technique of weight-sharing, we introduce a new method, FedEx, to accelerate federated hyperparameter tuning that is applicable to widely-used federated optimization methods such as FedAvg and recent variants. Theoretically, we show that a FedEx variant correctly tunes the on-device learning rate in the setting of online convex optimization across devices. Empirically, we show that FedEx can outperform natural baselines for federated hyperparameter tuning by several percentage points on the Shakespeare, FEMNIST, and CIFAR-10 benchmarks—obtaining higher accuracy using the same training budget.
accept
This paper studies hyperparameter optimization in FL, which is often neglected but a crucial part of FL. The paper provides interesting ideas to tackle this challenging problem. After the discussion phase, the reviewers are all in favor of accepting the paper. I recommend acceptance. I suggest the authors revise the paper to address the reviewers' concerns.
val
[ "_YK3k0TkcF-", "B-cAbsZeSl-", "QS87nHcOdC8", "fef8H8_IJo2", "D4NfgAkrl68", "v8zIhXBBhiy", "OjqmkM8Fesx", "JdpeaP6HZYI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper analyzes three challenges in hyperparameters tuning under the federated learning setting. The paper proposed two straightforward solutions (random search and successive halving) as baselines and one method, namely FedEx, as the solution to tackle those challenges. The two baselines tune hyperparameters d...
[ 6, -1, -1, 6, -1, -1, -1, 7 ]
[ 3, -1, -1, 4, -1, -1, -1, 4 ]
[ "nips_2021_p99rWde9fVJ", "v8zIhXBBhiy", "OjqmkM8Fesx", "nips_2021_p99rWde9fVJ", "JdpeaP6HZYI", "_YK3k0TkcF-", "fef8H8_IJo2", "nips_2021_p99rWde9fVJ" ]
nips_2021_U7SBcmRf65
Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time
In several real world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually along time, leading to a drift between the train and test distributions. Such models are often re-trained on new data periodically, and they hence need to generalize to data not too far into the future. In this context, there is much prior work on enhancing temporal generalization, e.g. continuous transportation of past data, kernel smoothed time-sensitive parameters and more recently, adversarial learning of time-invariant features. However, these methods share several limitations, e.g, poor scalability, training instability, and dependence on unlabeled data from the future. Responding to the above limitations, we propose a simple method that starts with a model with time-sensitive parameters but regularizes its temporal complexity using a Gradient Interpolation (GI) loss. GI allows the decision boundary to change along time and can still prevent overfitting to the limited training time snapshots by allowing task-specific control over changes along time. We compare our method to existing baselines on multiple real-world datasets, which show that GI outperforms more complicated generative and adversarial approaches on the one hand, and simpler gradient regularization methods on the other.
accept
The reviewers feel that this paper introduces an interesting and potentially useful approach to learning in changing environments. They consider the approach sensible. Various reviewers originally had concerns both about the computational cost and about the experimental validation (especially details of hyperparameter optimization and choices of tasks and baselines). After some discussion, the reviewers feel like the authors have addressed their main concerns, and all of them recommend acceptance. I concur.
val
[ "dLQzm6ON0ui", "l0zBkqsDzGi", "V2W8rfDMCoj", "_YGXc1PqCAC", "lNQfqAfjiMc", "hp15eIJPY1W", "3edKay8beOh", "40mIpsXnmn2", "xrpvXyA2eVm", "eUT-od5N6Yx", "G3v_PBCAKYM", "lZC8nui9zB", "uJVAawN33di", "EqDCtD0doD", "6DGeETlkV80", "Cz5p1WMhNP" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ " We thank the reviewer for their valuable comments.\n\na: Please note that our IncFineTune algorithm can be considered as an instance of FTL. For non-convex learner’s like ours, we checked that recent deep learning methods (e.g. the Online Meta Learning paper of Finn et al[2]) call this as the FTL approach. We al...
[ -1, 6, -1, -1, -1, 6, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "3edKay8beOh", "nips_2021_U7SBcmRf65", "_YGXc1PqCAC", "G3v_PBCAKYM", "xrpvXyA2eVm", "nips_2021_U7SBcmRf65", "6DGeETlkV80", "hp15eIJPY1W", "nips_2021_U7SBcmRf65", "nips_2021_U7SBcmRf65", "EqDCtD0doD", "uJVAawN33di", "Cz5p1WMhNP", "l0zBkqsDzGi", "hp15eIJPY1W", "eUT-od5N6Yx" ]
nips_2021_QcwJmp1sTnk
Agent Modelling under Partial Observability for Deep Reinforcement Learning
Modelling the behaviours of other agents is essential for understanding how agents interact and making effective decisions. Existing methods for agent modelling commonly assume knowledge of the local observations and chosen actions of the modelled agents during execution. To eliminate this assumption, we extract representations from the local information of the controlled agent using encoder-decoder architectures. Using the observations and actions of the modelled agents during training, our models learn to extract representations about the modelled agents conditioned only on the local observations of the controlled agent. The representations are used to augment the controlled agent's decision policy which is trained via deep reinforcement learning; thus, during execution, the policy does not require access to other agents' information. We provide a comprehensive evaluation and ablation studies in cooperative, competitive and mixed multi-agent environments, showing that our method achieves significantly higher returns than baseline methods which do not use the learned representations.
accept
This paper considers the problem of partial observability in multi-agent learning, and takes the approach of learning the latent representations of other agents from local observations. This is shown to be beneficial during learning. The reviewers were generally positive about this paper, which presents an interesting and effective solution to an important problem. The reviewer scores were improved as a result of the rebuttals, and the authors should include the promised revisions in the paper.
train
[ "F7uDHe_tUEw", "_pOrZCcPUEB", "nqYhkIRPRL8", "tLr365V1ebh", "oQnMadADSA1", "nCqJwFzvCaF", "yCj7n_uPp7z", "KQlnSMHWYT", "alCWQIraFZL", "-pvSilGI9Nw", "xjv1V7UJF4O", "iCRrwp963Il", "yie3P81MMT", "stOSZ-2zGqj", "9gae1T61x79" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes an encoder-decoder type neural network for agent modelling in partially observable domains. The method seeks to learn a useful encoding of a controlled agent's action-observation histories to a latent vector. By useful, it is meant that the learned representation used as an input to a downstrea...
[ 7, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_QcwJmp1sTnk", "nips_2021_QcwJmp1sTnk", "nips_2021_QcwJmp1sTnk", "nCqJwFzvCaF", "KQlnSMHWYT", "yCj7n_uPp7z", "xjv1V7UJF4O", "stOSZ-2zGqj", "-pvSilGI9Nw", "stOSZ-2zGqj", "nqYhkIRPRL8", "F7uDHe_tUEw", "9gae1T61x79", "_pOrZCcPUEB", "nips_2021_QcwJmp1sTnk" ]
nips_2021_dYGFRxCf7P
Leveraging Distribution Alignment via Stein Path for Cross-Domain Cold-Start Recommendation
Cross-Domain Recommendation (CDR) has been popularly studied to utilize different domain knowledge to solve the cold-start problem in recommender systems. In this paper, we focus on the Cross-Domain Cold-Start Recommendation (CDCSR) problem. That is, how to leverage the information from a source domain, where items are 'warm', to improve the recommendation performance of a target domain, where items are 'cold'. Unfortunately, previous approaches on cold-start and CDR cannot reduce the latent embedding discrepancy across domains efficiently and lead to model degradation. To address this issue, we propose DisAlign, a cross-domain recommendation framework for the CDCSR problem, which utilizes both rating and auxiliary representations from the source domain to improve the recommendation performance of the target domain. Specifically, we first propose Stein path alignment for aligning the latent embedding distributions across domains, and then further propose its improved version, i.e., proxy Stein path, which can reduce the operation consumption and improve efficiency. Our empirical study on Douban and Amazon datasets demonstrates that DisAlign significantly outperforms the state-of-the-art models under the CDCSR setting.
accept
Reviews are quite mixed, and admittedly the one most positive review is somewhat of an outlier, though there's enough here to recommend acceptance. Still somewhat borderline given the two marginal scores (2x5), though with two strong voices for acceptance I'd lean positive. In spite of the mixed scores, and the detailed rebuttal, opinions didn't really move during the discussion phase. So mostly my judgement is based on the initial scores, as well as what appears to be a reasonable rebuttal (even if it didn't move the needle on scores). Reviewers were not totally aligned in their opinions. Some praised the originality and significance of the paper (especially the most positive review), while others found the technical contribution more limited. Other issues centered around lack of clarity w.r.t. various components. Ultimately the negative points seemed not to be deal-breakers and could be fixed easily. Again, enough here to recommend acceptance, and nothing that seems like a red flag, though still somewhat borderline in terms of scores / distribution of scores.
train
[ "RfQSRgUKA46", "QdL_bK290EK", "h7pvZhYsetI", "hbturZkCx9C", "Bu_XU3X7inn", "PvuH9MHSd6c", "p_6ZZtuOJu0", "PgUGPCj7v5", "Zeh5vQFQp5s", "ose68ssaqUF", "TvrKTfA4pgm", "rSFBT4OaflD", "afCIj_Jyrtl" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank all the reviewers for your detailed and valuable comments, and particularly, for recognizing the potential of our work in solving the Cross-Domain Cold-Start Recommendation (CDCSR) problem. \n\nWe hope our response has adequately addressed the main concerns regarding the motivation, technic...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 9, 5, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_dYGFRxCf7P", "nips_2021_dYGFRxCf7P", "hbturZkCx9C", "ose68ssaqUF", "QdL_bK290EK", "rSFBT4OaflD", "TvrKTfA4pgm", "rSFBT4OaflD", "afCIj_Jyrtl", "QdL_bK290EK", "nips_2021_dYGFRxCf7P", "nips_2021_dYGFRxCf7P", "nips_2021_dYGFRxCf7P" ]
nips_2021_Z2vksUFuVst
Conservative Offline Distributional Reinforcement Learning
Many reinforcement learning (RL) problems in practice are offline, learning purely from observational data. A key challenge is how to ensure the learned policy is safe, which requires quantifying the risk associated with different actions. In the online setting, distributional RL algorithms do so by learning the distribution over returns (i.e., cumulative rewards) instead of the expected return; beyond quantifying risk, they have also been shown to learn better representations for planning. We propose Conservative Offline Distributional Actor Critic (CODAC), an offline RL algorithm suitable for both risk-neutral and risk-averse domains. CODAC adapts distributional RL to the offline setting by penalizing the predicted quantiles of the return for out-of-distribution actions. We prove that CODAC learns a conservative return distribution---in particular, for finite MDPs, CODAC converges to a uniform lower bound on the quantiles of the return distribution; our proof relies on a novel analysis of the distributional Bellman operator. In our experiments, on two challenging robot navigation tasks, CODAC successfully learns risk-averse policies using offline data collected purely from risk-neutral agents. Furthermore, CODAC is state-of-the-art on the D4RL MuJoCo benchmark in terms of both expected and risk-sensitive performance.
accept
All reviewers agree that this is a good paper with sufficient novelty and relevance to the NeurIPS community. Reviewers had several questions, which were adequately answered by the authors. Therefore, I would recommend the *acceptance* of this paper. I encourage the authors to consider the feedback by reviewers in their revision of the paper, including the following: - discuss the tightness of the lower bound - discuss what other risk measures CODAC can be applied to - discuss CVaR's not being time-consistent, and how you avoid it - cite and compare with the literature on moment matching for DRL - fix the typos, including the one in Theorem 3.6
train
[ "p-lcKzvbOwg", "4DmZ5H5qmEP", "vt15g8Que_", "nWhxjKNRTU3", "hXgttzMMa1", "1eaTCfGfW1U", "8PJekvAxiiT", "8kZPNeKlPn", "1ka3mAunZW", "AhmSRW9D-o", "X0WOAZluLu5", "Oj8cbrb8vzy" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed response. I have read your response to my questions and those of the other reviewers and happy to keep my score. I would recommend to include the tightness proof and explain its intuition in the main paper, as well as the updated results you provided above. I also do believe that illus...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "8kZPNeKlPn", "hXgttzMMa1", "8PJekvAxiiT", "nips_2021_Z2vksUFuVst", "X0WOAZluLu5", "Oj8cbrb8vzy", "AhmSRW9D-o", "1ka3mAunZW", "nips_2021_Z2vksUFuVst", "nips_2021_Z2vksUFuVst", "nips_2021_Z2vksUFuVst", "nips_2021_Z2vksUFuVst" ]
nips_2021_WKxmP7bcFvt
Separation Results between Fixed-Kernel and Feature-Learning Probability Metrics
Carles Domingo i Enrich, Youssef Mroueh
accept
The authors present interesting theoretical analysis with wide-reaching implications; they are able to exhibit sequences of distributions which are distinguished by F1 integral probability metrics (based on an infinite width neural network function class that performs "feature learning") but not F2 integral probability metrics (a function class that does not perform "feature learning"), and in doing so they provide theoretical support for the notion that feature learning is an important property for discriminators (e.g. an argument against the use of "MMD GANs", which have previously appeared in NeurIPS). The analysis is extended to cover Stein discrepancy, again based on F1 and F2. All reviewers agreed that the paper was well-written and correct. I believe these results will be of significant interest to the participants of NeurIPS.
train
[ "3m-xRScFRYe", "TIwDXkM_qp", "mfPVdwwD3un", "KJ2fV6KpxaO", "GLL9RVSLYMc", "dDFz1Vf3jws", "Eo_W7Q6T3SA", "qYGZ1NusqyI", "1m8hUhFvYTt", "CjWkJKIk8dF", "eA7rD4sNv_", "Wxs0FXVRtXs", "eTh7c2pOIId", "Y6Mw4lRqVxQ" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose theoretical results about separation between probability metrics with fixed-kernel and feature-learning discriminators in overparameterized two-layer neural network spaces. The authors propose theoretical results about separation between probability metrics with fixed-kernel and feature-learni...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 9, 8 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "nips_2021_WKxmP7bcFvt", "mfPVdwwD3un", "dDFz1Vf3jws", "GLL9RVSLYMc", "Eo_W7Q6T3SA", "qYGZ1NusqyI", "1m8hUhFvYTt", "Wxs0FXVRtXs", "eTh7c2pOIId", "Y6Mw4lRqVxQ", "3m-xRScFRYe", "nips_2021_WKxmP7bcFvt", "nips_2021_WKxmP7bcFvt", "nips_2021_WKxmP7bcFvt" ]
nips_2021_2FDhSA_yxY
Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning
Empirical risk minimization (ERM) is the workhorse of machine learning, whether for classification and regression or for off-policy policy learning, but its model-agnostic guarantees can fail when we use adaptively collected data, such as the result of running a contextual bandit algorithm. We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class and provide first-of-their-kind generalization guarantees and fast convergence rates. Our results are based on a new maximal inequality that carefully leverages the importance sampling structure to obtain rates with good dependence on the exploration rate in the data. For regression, we provide fast rates that leverage the strong convexity of squared-error loss. For policy learning, we provide regret guarantees that close an open gap in the existing literature whenever exploration decays to zero, as is the case for bandit-collected data. An empirical investigation validates our theory.
accept
This looks like a strong paper that makes significant contributions. Simultaneously, the current presentation is very technically dense; this was the conclusion of absolute experts and so is not to be taken lightly. Consequently, despite the strength of the results in this work, the results will have a tough time being appreciated even by general readers in learning theory (and definitely going outside of learning theory). Also, there is no space in the current version to add anything, and so it is implausible that the authors would be able to add in things like proof sketches without a major revision of the paper. On that note, the authors mentioned that they do not believe the paper needs a major rewrite (the reviewers and I disagree), and a major revision would require another round of reviewing in any case. Therefore, and I admit the decision is not easy, I really do feel that this paper should not be accepted in the current form. My feeling is that a resubmit to a conference like ALT, which is more generous with space and would allow for considerable improvements to the narrative (plus proof sketches, etc.) would be in service to the authors by letting their work be better appreciated by the field. On that note, I advise the authors to consider the reviewer feedback about significantly changing the presentation in their paper. It’s their choice whether or not to rewrite the paper, but to get it accepted somewhere, I (and evidently many reviewers) really think this is needed. In the rest of this meta-review, I highlight some of the positive aspects that came out in the reviews, as well as some specific negative aspects. On the positive side, the bound presented in this paper is the first of its kind for adaptively collected data. There certainly is novelty here, and the result is significant. The analysis is highly non-trivial (i.e., quite sophisticated). 
Based on the author response to some of the reviews and the paper itself, there also is interesting follow-up work. On the negative side, the paper could really use more intuition (by way of informal statements) and proof sketches. However, there simply isn't space in the current form of the paper; it's incredibly dense in terms of filling the space as it is. The authors disagree about reducing the lengthy, highly technical comparisons with previous papers (they seem unwilling to do a serious rewrite), even though several reviewers mentioned that this would be a good idea. Especially the fundamental result, Theorem 1, should be given more justice/attention in the paper. In addition, one technical weakness is a continuing concern about the comparison with the VC/Natarajan setup of the lower bound of Zhan et al. (when the authors claim that they have matched the lower bound). It sounds like the authors simply need to technically qualify in what way they match, and so I hope they clarify this in any future version.
train
[ "XGQB_BfgCEv", "HCWvBnB1qc7", "JyJk_ApEgUw", "7cPv2xR6dmp", "OzffrByvrT", "920jsLS5MgK", "SXWZwJpY6Va", "YP59EvU2pV", "em8k-Smtm8c", "qnnpaqIaqMr", "chkKRoShl8r", "fbHct87QvAV", "QHlsMp5UYPe", "PC3RzM7mlfV" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We greatly appreciate your taking the time to read our paper closely and consider our responses and others' reviews, and we thank you for the very useful feedback. We respectfully disagree about the fit for NeurIPS and, while we sincerely appreciate the incredibly useful feedback pointing out at where can clarity...
[ -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 2, 2 ]
[ "JyJk_ApEgUw", "nips_2021_2FDhSA_yxY", "920jsLS5MgK", "nips_2021_2FDhSA_yxY", "YP59EvU2pV", "qnnpaqIaqMr", "PC3RzM7mlfV", "7cPv2xR6dmp", "QHlsMp5UYPe", "HCWvBnB1qc7", "fbHct87QvAV", "nips_2021_2FDhSA_yxY", "nips_2021_2FDhSA_yxY", "nips_2021_2FDhSA_yxY" ]
nips_2021_vDo__0UwFNo
Bayesian Optimization with High-Dimensional Outputs
Bayesian optimization is a sample-efficient black-box optimization procedure that is typically applied to a small number of independent objectives. However, in practice we often wish to optimize objectives defined over many correlated outcomes (or “tasks”). For example, scientists may want to optimize the coverage of a cell tower network across a dense grid of locations. Similarly, engineers may seek to balance the performance of a robot across dozens of different environments via constrained or robust optimization. However, the Gaussian Process (GP) models typically used as probabilistic surrogates for multi-task Bayesian optimization scale poorly with the number of outcomes, greatly limiting applicability. We devise an efficient technique for exact multi-task GP sampling that combines exploiting Kronecker structure in the covariance matrices with Matheron’s identity, allowing us to perform Bayesian optimization using exact multi-task GP models with tens of thousands of correlated outputs. In doing so, we achieve substantial improvements in sample efficiency compared to existing approaches that model solely the outcome metrics. We demonstrate how this unlocks a new class of applications for Bayesian optimization across a range of tasks in science and engineering, including optimizing interference patterns of an optical interferometer with 65,000 outputs.
accept
The reviewers have provided thoughtful and constructive comments. They have responded to the authors' feedback and have reached consensus to recommend acceptance. I hope the authors will take the reviewers' comments to heart and encourage them to incorporate their thoughts in preparing the camera-ready version of their manuscript.
train
[ "DLy9_Dt-o7Y", "qJffBY64kw-", "3XUaXPP-ZAu", "RUK1yHmxZnS", "AQ09Qb4gt8N", "j81Iq1umnh1", "ZIoe9L1Yfhv", "Y7jNn1ePgS", "on_GwdPgh00", "dUEev-dd6fi", "23dwu-8YNCw", "oO3a44_qDhz", "84h8fPQZCpg" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the pointers to these interesting references. \n\nWith additional work, both Paria et al, UAI, '19 (\"A flexible framework...\") and Lukovic et al, NeurIPS, '20 (\"Diversity-Guided Multi-Objective...\") can be extended to use MTGPs rather than batch GPs; we can make a note of these when describing futu...
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 6, 8, 7 ]
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "qJffBY64kw-", "Y7jNn1ePgS", "j81Iq1umnh1", "ZIoe9L1Yfhv", "nips_2021_vDo__0UwFNo", "AQ09Qb4gt8N", "23dwu-8YNCw", "84h8fPQZCpg", "oO3a44_qDhz", "nips_2021_vDo__0UwFNo", "nips_2021_vDo__0UwFNo", "nips_2021_vDo__0UwFNo", "nips_2021_vDo__0UwFNo" ]
nips_2021_g0wang64Zjd
Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks
Chen Ma, Xiangyu Guo, Li Chen, Jun-Hai Yong, Yisen Wang
accept
The paper proposes a slightly novel approach for black-box adversarial attacks, with extensive sets of experiments that seem to confirm some performance benefits compared to the related work. This one should be better positioned (e.g., targeted vs non-targeted attack settings), and the active discussion with the reviewers should help in clarifying the true benefits of the proposal in the revision of the manuscript towards any eventual publication.
train
[ "dnnFYKQnJeW", "nrvRmZN8Fi", "d9zbn_08kUu", "CD_3H5vK09", "WhFMAbXWYL", "QrEWhbOq7t", "-PxY8EO-BnL", "LJoOgos6Sj", "jUWJyujIVtB", "8E7VzxHVYg2", "xIH6bl9hFdy", "0tzCajyvXjb", "mw7vJ_p4KbS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper discovers that searching the optimal tangent point of a virtual hemisphere could help find minimal adversarial distortion; therefore, this paper proposes an effective method of hard-label attack Pros. \n\n1 The paper is well-written and well-organized. In particular, I like the related work part, which...
[ 6, 6, -1, -1, -1, -1, -1, 5, -1, -1, -1, 7, 3 ]
[ 4, 2, -1, -1, -1, -1, -1, 4, -1, -1, -1, 3, 5 ]
[ "nips_2021_g0wang64Zjd", "nips_2021_g0wang64Zjd", "LJoOgos6Sj", "QrEWhbOq7t", "mw7vJ_p4KbS", "WhFMAbXWYL", "LJoOgos6Sj", "nips_2021_g0wang64Zjd", "dnnFYKQnJeW", "nrvRmZN8Fi", "0tzCajyvXjb", "nips_2021_g0wang64Zjd", "nips_2021_g0wang64Zjd" ]
nips_2021_bhEAWsS9-Sb
Scalable Diverse Model Selection for Accessible Transfer Learning
With the preponderance of pretrained deep learning models available off-the-shelf from model banks today, finding the best weights to fine-tune to your use-case can be a daunting task. Several methods have recently been proposed to find good models for transfer learning, but they either don't scale well to large model banks or don't perform well on the diversity of off-the-shelf models. Ideally the question we want to answer is, "given some data and a source model, can you quickly predict the model's accuracy after fine-tuning?" In this paper, we formalize this setting as "Scalable Diverse Model Selection" and propose several benchmarks for evaluating on this task. We find that existing model selection and transferability estimation methods perform poorly here and analyze why this is the case. We then introduce simple techniques to improve the performance and speed of these algorithms. Finally, we iterate on existing methods to create PARC, which outperforms all other methods on diverse model selection. We have released the benchmarks and method code in the hope of inspiring future work in model selection for accessible transfer learning.
accept
In summary, the reviewers rate this paper as follows: * 3 of 5 reviewers view it as being close to the acceptance threshold (2 of 5 slightly below, 1 of 5 slightly above; this score was raised from "5" to "6" in a discussion comment but not in the official rating). * 2 of 5 reviewers recommend the paper for acceptance (with scores "7" and "8"). Overall, the reviewers are leaning towards acceptance, with no reviewer opposing it. The paper discusses several methods for model selection and ranking and proposes a new method (PARC) for this task. Experiments compare the methods on a new benchmark created for that purpose. Highlights of the paper mentioned by the reviewers include: * The presented study is enlightening and thorough. * The subject is of high practical importance. * Open-sourcing the benchmark will be valuable. * The paper is well written. * The authors' responses addressed several of the reviewers' initial concerns. Some concerns expressed by the reviewers include the following: * Limited improvements of the introduced method; similar performance of other approaches. * Some over-claiming regarding the novelty of the approach. * A lack of a definition of "scalability". * The value of ranking models vs. selecting the best models is debated. * Some related work is missing or could be discussed more appropriately. If the paper is published, it would be appreciated if the authors could address these concerns as far as possible. Specifically, the results presented in the authors' responses would be valuable to include. Overall, I recommend the paper for acceptance as a poster.
train
[ "gWHtYejcTZ1", "afNDBDPIphS", "iRt-O-Yv1UR", "wECjzqE5D5F", "RsoWlj8_S5A", "LB45hOTz-4", "kM4mEcbu-A", "OXGTis4OqJp", "NMo8J2-sX4y", "AN_8cDjx2GE", "Hmb72fyROLq", "NgvnEDs4vH", "ENTP8vhmyom" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We appreciate the reviewer’s concern and would like to further clarify. We believe the problem setting of model selection among a large number of models which may be trained in diverse ways across distinct architectures is important — we called this scalable and diverse model selection to distinguish our focus fr...
[ -1, -1, 7, 5, 5, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ -1, -1, 4, 5, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "afNDBDPIphS", "Hmb72fyROLq", "nips_2021_bhEAWsS9-Sb", "nips_2021_bhEAWsS9-Sb", "nips_2021_bhEAWsS9-Sb", "NMo8J2-sX4y", "wECjzqE5D5F", "iRt-O-Yv1UR", "RsoWlj8_S5A", "ENTP8vhmyom", "NgvnEDs4vH", "nips_2021_bhEAWsS9-Sb", "nips_2021_bhEAWsS9-Sb" ]
nips_2021_q0h6av9Vi8
Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence. Emerging 3D-structured neural scene representations are a promising approach to 3D scene understanding. In this work, we propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural implicit representation. Rendering a ray from an LFN requires only a single network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric based renderers in 3D-structured neural scene representations. In the setting of simple scenes, we leverage meta-learning to learn a prior over LFNs that enables multi-view consistent light field reconstruction from as little as a single image observation. This results in dramatic reductions in time and memory complexity, and enables real-time rendering. The cost of storing a 360-degree light field via an LFN is two orders of magnitude lower than conventional methods such as the Lumigraph. Utilizing the analytical differentiability of neural implicit representations and a novel parameterization of light space, we further demonstrate the extraction of sparse depth maps from LFNs.
accept
The submission was thoroughly reviewed and discussed. All four reviewers support acceptance. The work was found to be stimulating and can usefully inform follow-up efforts in this area. The AC supports the reviewers' recommendation. The authors are encouraged to thoroughly address the reviewers' concerns and recommendations in the revision.
train
[ "R9f0UVSGVrs", "wfpn8vxvw0f", "pFePOKpWpjS", "oi64D5NqsaV", "PfGrfMB8J3", "ck72_SypD8Q", "eOasG0HSdYO", "Y7nbH5EsiN", "KIudmbdxJEs", "syqjomBssv2", "JjUH2AB5SIH", "xiwI0ww6_Z", "3u-wYS47uSL", "Dyio81CSt0K", "voyXdQ2kfQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "The authors propose a novel neural scene representation called Light Field Networks (LFNs), where geometry and appearance of the considered scene are represented in a 360-degree, 4D light field that is parameterized via a neural implicit representation. In contrast to other ray-marching-based or volumetric-renderi...
[ 7, 8, 7, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_q0h6av9Vi8", "nips_2021_q0h6av9Vi8", "nips_2021_q0h6av9Vi8", "R9f0UVSGVrs", "eOasG0HSdYO", "nips_2021_q0h6av9Vi8", "KIudmbdxJEs", "syqjomBssv2", "Dyio81CSt0K", "nips_2021_q0h6av9Vi8", "wfpn8vxvw0f", "ck72_SypD8Q", "R9f0UVSGVrs", "pFePOKpWpjS", "nips_2021_q0h6av9Vi8" ]
nips_2021_-JJy-Hw8TFB
ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction
We introduce ViSER, a method for recovering articulated 3D shapes and dense3D trajectories from monocular videos. Previous work on high-quality reconstruction of dynamic 3D shapes typically relies on multiple camera views, strong category-specific priors, or 2D keypoint supervision. We show that none of these are required if one can reliably estimate long-range correspondences in a video, making use of only 2D object masks and two-frame optical flow as inputs. ViSER infers correspondences by matching 2D pixels to a canonical, deformable 3D mesh via video-specific surface embeddings that capture the pixel appearance of each surface point. These embeddings behave as a continuous set of keypoint descriptors defined over the mesh surface, which can be used to establish dense long-range correspondences across pixels. The surface embeddings are implemented as coordinate-based MLPs that are fit to each video via consistency and contrastive reconstruction losses.Experimental results show that ViSER compares favorably against prior work on challenging videos of humans with loose clothing and unusual poses as well as animals videos from DAVIS and YTVOS. Our code is available at viser-shape.github.io.
accept
This submission received 4 positive final ratings: 7, 7, 8, 6. The reviewers mostly appreciated novelty (while noting high similarity with LASR), clear presentation and strong empirical performance. The remaining questions and concerns seemed to be addressed in the rebuttal, as acknowledged by the reviewers. The final recommendation is therefore to accept as a spotlight.
val
[ "Dr9KTb_G5v", "QBhd-lt9iH", "rLUvOTW-EIs", "Tnj_lV_Omh", "svoh3ltJMWm", "P3Q5UdXughH", "1HK4BGl5rG3", "Rx6hqF0znjQ", "a6w49jAWe0K" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper presents an optimization-based framework that recovers 3D shape, articulated pose, and weak texture from a monocular video of an articulated object. The proposed method extends a recent method (LASR) with a surface embedding matching module for establishing long-range correspondences, and achieves signi...
[ 8, 7, -1, 7, 6, -1, -1, -1, -1 ]
[ 4, 4, -1, 4, 4, -1, -1, -1, -1 ]
[ "nips_2021_-JJy-Hw8TFB", "nips_2021_-JJy-Hw8TFB", "P3Q5UdXughH", "nips_2021_-JJy-Hw8TFB", "nips_2021_-JJy-Hw8TFB", "Tnj_lV_Omh", "svoh3ltJMWm", "Dr9KTb_G5v", "QBhd-lt9iH" ]
nips_2021_oAog3W9w6R
Understanding the Effect of Stochasticity in Policy Optimization
Jincheng Mei, Bo Dai, Chenjun Xiao, Csaba Szepesvari, Dale Schuurmans
accept
The paper studies the effect of stochasticity on policy optimization in Reinforcement Learning, i.e., the impact of not having the true gradient. The paper argues that there are significant differences between the case of the true gradient and a stochastic version. While this is not surprising, the reviewers agree that the paper makes a good contribution to the literature by pointing out the differences. There was some concern that many of the results are for a one-state MDP, but the reviewers felt that the paper makes some interesting points which can potentially stimulate further work.
train
[ "N9YSj8lsbV", "qw5UbfEvnqs", "9vRdmRfUmLE", "Z8aeHMyY9k0", "5oUCAvwgCR", "QXOD6PJm3I", "xgMQ_u5R7ux", "l1ZhzM6cfZY", "YJN4SlTWOH", "kg_oknCM-ix", "6OXwl_gZc6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the answer and the details. Indeed for the 2 actions example I missed your positive reward assumption, this makes sense!\nI will keep my score as it is, please in your manuscript make sure to precise which results hold for one-state MDPs (totally fine for counter examples) vs more general MDPs (more...
[ -1, -1, -1, 6, 6, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, -1, 4, 4, -1, -1, -1, -1, 4, 4 ]
[ "l1ZhzM6cfZY", "xgMQ_u5R7ux", "QXOD6PJm3I", "nips_2021_oAog3W9w6R", "nips_2021_oAog3W9w6R", "Z8aeHMyY9k0", "6OXwl_gZc6", "kg_oknCM-ix", "5oUCAvwgCR", "nips_2021_oAog3W9w6R", "nips_2021_oAog3W9w6R" ]
nips_2021_RqAzAoL8BER
Fine-Grained Zero-Shot Learning with DNA as Side Information
Fine-grained zero-shot learning task requires some form of side-information to transfer discriminative information from seen to unseen classes. As manually annotated visual attributes are extremely costly and often impractical to obtain for a large number of classes, in this study we use DNA as a side information for the first time for fine-grained zero-shot classification of species. Mitochondrial DNA plays an important role as a genetic marker in evolutionary biology and has been used to achieve near perfect accuracy in species classification of living organisms. We implement a simple hierarchical Bayesian model that uses DNA information to establish the hierarchy in the image space and employs local priors to define surrogate classes for unseen ones. On the benchmark CUB dataset we show that DNA can be equally promising, yet in general a more accessible alternative than word vectors as a side information. This is especially important as obtaining robust word representations for fine-grained species names is not a practicable goal when information about these species in free-form text is limited. On a newly compiled fine-grained insect dataset that uses DNA information from over a thousand species we show that the Bayesian approach outperforms state-of-the-art by a wide margin.
accept
This paper addresses the problem of zero shot learning (ZSL) in the context of fine-grained visual categorization using images from the natural world with (for the first time) DNA as side information. Existing work in this space typically exploits alternative forms of side information such as visual attributes or textual information. Experiments are performed using an existing Bayesian ZSL method on the CUB dataset (with additional DNA information) and a newly proposed dataset of insect images, and show that the models using DNA are competitive and often better than those using only textual side information. Issues that should be addressed in the final text. * The current related work text is missing a discussion of alternative methods for learning DNA embeddings. This needs to be remedied. (wkgT) * Include the already performed comparison to DNABert. This is important as the CNN encoder is listed as one of the contributions of the paper but no comparisons are made to alternative methods for encoding this information. * Discuss the limitations of DNA side-information e.g. that it is not applicable to all visual categories (wkgT). * Fix the issues with the writing and paper structure as noted by wkgT. * Fix the issues with the text as noted by 2AZs and fQEh. e.g. Clarify in the caption for Table 3, what US, S, and H represent. The reviewers raised legitimate concerns regarding the level of technical contribution, the writing quality, and lack of discussion of other approaches for learning DNA embeddings. The final two of these concerns can be addressed in a revision of the text. Despite these concerns, the reviewers were supportive of the paper, with the most critical raising their score after the discussion. The novelty of the application and the merit of the new dataset were quoted as the reasons for recommending acceptance. The authors are strongly encouraged to address the above issues in the final camera ready text. 
As a final note, the authors should be aware that how they cited [5] was not consistent with the NeurIPS guidelines. As the paper was already published it should have been cited with the authors names and treated no differently than any other paper: "If you need to cite one of your own papers, you should do so with adequate anonymization to preserve double-blind reviewing. For instance, write “In the previous work of Smith et al. [1]…” rather than “In our previous work [1]...”). If you need to cite one of your own papers that is in submission to NeurIPS and not available as a non-anonymous preprint, then include a copy of the cited submission in the supplementary material and write “Anonymous et al. [1] concurrently show...”)." https://nips.cc/Conferences/2021/CallForPapers
train
[ "Rz_7-MgScij", "uUuentXbOvp", "lstzVB4CgbE", "B_gnnJp56yd", "7tSffpNiF9Q", "giPm0vDgwWD", "R_PNpiO-6lf", "5Lx5moeDTu3", "t_MHk-97GKM", "s8giCxPwGzI", "7F-QuWPv0J4", "_gfFvgMMBS9", "2eUF5EXLbnN", "TLcQ8nu1PM1", "1fxBGWHqcV_", "IvG3YDCnqSU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response and clarification. I would like to keep my original score, but I would suggest authors include the results of using different embeddings in the final version and the discussions about the tradeoff between seen and unseen classes.", " Thank you for the follow-up clarification. I have chan...
[ -1, -1, 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "2eUF5EXLbnN", "B_gnnJp56yd", "nips_2021_RqAzAoL8BER", "7tSffpNiF9Q", "7F-QuWPv0J4", "nips_2021_RqAzAoL8BER", "IvG3YDCnqSU", "s8giCxPwGzI", "nips_2021_RqAzAoL8BER", "IvG3YDCnqSU", "lstzVB4CgbE", "giPm0vDgwWD", "1fxBGWHqcV_", "nips_2021_RqAzAoL8BER", "nips_2021_RqAzAoL8BER", "nips_2021_...
nips_2021_6wuE1-G4pu6
Optimal Underdamped Langevin MCMC Method
Zhengmian Hu, Feihu Huang, Heng Huang
accept
This paper proposes a randomized algorithm for approximating a trajectory of underdamped Langevin dynamics, uses variance reduction to improve the error over decomposable problems, and establishes an information-theoretic lower bound. There have been extensive discussions between the authors and the reviewers, and I have gone through them carefully. I agree with the majority of the reviewers that the paper has made interesting technical contributions to the field. I am happy to recommend acceptance.
train
[ "Nw_pdloSL5_", "9OSFZsjh9IR", "Lxc0MF7INZZ", "2-SZQv4T4yp", "86AOIG2WH2", "S4Ct9o41vta", "V2msA0pjOU", "JTZ3x5zSyf5", "4wF2NrrPTY", "27uPnBMKzNH", "LFlQbzwWTeA", "pTk_BNz6yBM", "aanQMzbFwoG", "jIZCYvBm4rb", "jwzEK9YMt3", "FULPeZEecEj", "3k8s0tn8fs", "lo6cMMyhvfU", "3R-4TE94bJn", ...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "...
[ "The paper considers methods for approximating underdamped Langevin diffusion processes with sum-decomposable strongly convex potential. The main contribution in the paper is the proposal of the Accelerated ULD-MCMC algorithm and its variance-reduced variants. This is followed by a detailed analysis of convergence ...
[ 7, 7, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 5, 8, 5 ]
[ 3, 3, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_6wuE1-G4pu6", "nips_2021_6wuE1-G4pu6", "JTZ3x5zSyf5", "4wF2NrrPTY", "nips_2021_6wuE1-G4pu6", "FULPeZEecEj", "4wF2NrrPTY", "27uPnBMKzNH", "jwzEK9YMt3", "LFlQbzwWTeA", "pTk_BNz6yBM", "lo6cMMyhvfU", "nips_2021_6wuE1-G4pu6", "cuJyuB2phA", "DXlQDGX7qV", "86AOIG2WH2", "8yFPzR46d...
nips_2021_B_jLF98u13
Scheduling jobs with stochastic holding costs
Dabeen Lee, Milan Vojnovic
accept
The paper concerns an online scheduling algorithm that aims to minimize the expected cumulative holding costs incurred by jobs with a priori unknown parameters. The reviewers found the problem tackled by the paper interesting and novel, and the framework of unknown costs challenging as compared to prior settings. They liked the fact that the paper gives a simple scheduling algorithm whose performance (almost) matches lower bounds for the problem. The paper was also generally assessed as clear and well-written. There were also some reservations raised, namely: the algorithm is rather straightforward, and the specific problem considered has limited relevance (its assumptions might be too strong in practice). However, the overall evaluation of the paper was mainly positive, and thus I recommend acceptance. I suggest that the authors incorporate the reviewers' remarks in the final version of the manuscript.
train
[ "76mez0Zza6", "5165ZnbE16w", "NOGU9uGXkSq", "Qz5j8goGzY8", "UecZ9d-81Mz", "UUMCsp5euDV", "w11GvMLteKN", "SMMdLkbJ5FC" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents algorithms for the problem of ordering jobs on a machine so as to minimize the cumulative holding cost. The assumption is that the holding cost in each timestep follows a sub-Gaussian distribution (i.i.d. across timesteps), but the the mean values of these distributions for individuals jobs are ...
[ 5, -1, -1, -1, -1, 6, 7, 7 ]
[ 4, -1, -1, -1, -1, 3, 3, 4 ]
[ "nips_2021_B_jLF98u13", "SMMdLkbJ5FC", "w11GvMLteKN", "UUMCsp5euDV", "76mez0Zza6", "nips_2021_B_jLF98u13", "nips_2021_B_jLF98u13", "nips_2021_B_jLF98u13" ]
nips_2021_-AV3AKwgiG
REMIPS: Physically Consistent 3D Reconstruction of Multiple Interacting People under Weak Supervision
The three-dimensional reconstruction of multiple interacting humans given a monocular image is crucial for the general task of scene understanding, as capturing the subtleties of interaction is often the very reason for taking a picture. Current 3D human reconstruction methods either treat each person independently, ignoring most of the context, or reconstruct people jointly, but cannot recover interactions correctly when people are in close proximity. In this work, we introduce \textbf{REMIPS}, a model for 3D \underline{Re}construction of \underline{M}ultiple \underline{I}nteracting \underline{P}eople under Weak \underline{S}upervision. \textbf{REMIPS} can reconstruct a variable number of people directly from monocular images. At the core of our methodology stands a novel transformer network that combines unordered person tokens (one for each detected human) with positional-encoded tokens from image features patches. We introduce a novel unified model for self- and interpenetration-collisions based on a mesh approximation computed by applying decimation operators. We rely on self-supervised losses for flexibility and generalisation in-the-wild and incorporate self-contact and interaction-contact losses directly into the learning process. With \textbf{REMIPS}, we report state-of-the-art quantitative results on common benchmarks even in cases where no 3D supervision is used. Additionally, qualitative visual results show that our reconstructions are plausible in terms of pose and shape and coherent for challenging images, collected in-the-wild, where people are often interacting.
accept
This submission received 4 diverging final ratings: 4, 6, 7, 6. On the positive side, the reviewers mentioned the importance of the problem, an overall interesting approach to the multi-person setting, strong performance, and clear presentation. At the same time, some of the reviewers were skeptical about the overall novelty (a collection of known components) and pointed to insufficient motivation of certain design choices, high overall complexity, and a lack of important ablations and evaluation on common datasets. Some of these concerns were addressed in the rebuttal, as acknowledged by the reviewers. The most skeptical reviewer did not engage in further discussion with the authors. Overall, the AC agrees with the positively-inclined reviewers that the strengths of this work outweigh its weaknesses, and the paper presents an interesting approach that can potentially have an impact. The authors are highly encouraged to address the remaining concerns in the camera ready version. The final recommendation is to accept as a poster.
train
[ "HcTVv6btinh", "muFf5iR--N1", "yZKuaIlIQty", "Q5nxRw1Hgw", "ZKU06LnRcJF", "ePffN5TG_P", "sqpKCt-V-ZW", "0ZIOYXstNv7", "i0XinN8Wbw", "bR7ZTDAIzKv", "qBGJ5D7lswj", "jgLGo0aScAL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response from authors. The answers have addressed most of my concerns. Overall speaking, this paper includes interesting ideas. To further improve the experiments, it is also recommended to perform an evaluation on COCO and report 2D keypoint localization results along with other approaches to 3D h...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "Q5nxRw1Hgw", "ZKU06LnRcJF", "ePffN5TG_P", "jgLGo0aScAL", "qBGJ5D7lswj", "bR7ZTDAIzKv", "i0XinN8Wbw", "nips_2021_-AV3AKwgiG", "nips_2021_-AV3AKwgiG", "nips_2021_-AV3AKwgiG", "nips_2021_-AV3AKwgiG", "nips_2021_-AV3AKwgiG" ]
nips_2021_6rqjgrL7Lq
Differentiable Annealed Importance Sampling and the Perils of Gradient Noise
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation, but are not fully differentiable due to the use of Metropolis-Hastings correction steps. Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective using gradient-based methods. To this end, we propose Differentiable AIS (DAIS), a variant of AIS which ensures differentiability by abandoning the Metropolis-Hastings corrections. As a further advantage, DAIS allows for mini-batch gradients. We provide a detailed convergence analysis for Bayesian linear regression which goes beyond previous analyses by explicitly accounting for the sampler not having reached equilibrium. Using this analysis, we prove that DAIS is consistent in the full-batch setting and provide a sublinear convergence rate. Furthermore, motivated by the problem of learning from large-scale datasets, we study a stochastic variant of DAIS that uses mini-batch gradients. Surprisingly, stochastic DAIS can be arbitrarily bad due to a fundamental incompatibility between the goals of last-iterate convergence to the posterior and elimination of the accumulated stochastic error. This is in stark contrast with other settings such as gradient-based optimization and Langevin dynamics, where the effect of gradient noise can be washed out by taking smaller steps. This indicates that annealing-based marginal likelihood estimation with stochastic gradients may require new ideas.
accept
This paper suggests a method for using annealed importance sampling (AIS) together with Hamiltonian Monte Carlo (HMC). The central idea is that if the accept/reject step is dropped, one can still define a ratio of augmented distributions that is tractable to estimate. The advantage of dropping the accept/reject step is that the estimator is now differentiable. A variational objective is defined in terms of this estimator, and reparameterization is used to optimize the objective. One might worry that dropping the accept/reject step could have serious consequences by changing the properties of the estimator. This is addressed, at least for the case of a simple Bayesian linear regression model, by an analysis that shows a convergence rate of 1/\sqrt{K} can be obtained, where K is the number of annealing distributions. Another analysis is given where gradients are corrupted with additive noise. Here, a more negative result is given: even in the limit as K goes to infinity, the variational bound never becomes tight. This suggests that a significant price is paid for computing gradients on subsets of data. Strictly speaking, this negative result also applies only to the Bayesian linear regression setting; however, this shows that any stronger guarantee would have to exclude that setting, which is typically considered the "easiest" one. Generally speaking, all reviewers felt the algorithm was novel, correct, and potentially significant. There were some fairly minor technical concerns about the experiments, and some questions about the intended "scope" of the contribution (e.g. regarding the title). The authors appear to have good answers for these questions that could easily be integrated into the paper for a final submission.
train
[ "-BCWrK2W2MP", "dhBBK7KOTtL", "UC6oJAMCjkG", "2w9DrVKXiDY", "XeMOPcUjeZD", "nfjFjY6H4m_", "XxDQtOM0Cz", "7IA6rIhTKXd" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for responding and thank you for such a well-written paper!", " Thank you for the answers and clarifications. The questions in my review were well addressed.", " Thank you for your thorough review and many insightful comments. We address your concerns and questions in order below.\n\n- In terms of n...
[ -1, -1, -1, -1, -1, 7, 9, 8 ]
[ -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "XeMOPcUjeZD", "UC6oJAMCjkG", "7IA6rIhTKXd", "nfjFjY6H4m_", "XxDQtOM0Cz", "nips_2021_6rqjgrL7Lq", "nips_2021_6rqjgrL7Lq", "nips_2021_6rqjgrL7Lq" ]
nips_2021_HyQskgZwXO
PSD Representations for Effective Probability Models
Finding a good way to model probability densities is key to probabilistic inference. An ideal model should be able to concisely approximate any probability while being also compatible with two main operations: multiplications of two models (product rule) and marginalization with respect to a subset of the random variables (sum rule). In this work, we show that a recently proposed class of positive semi-definite (PSD) models for non-negative functions is particularly suited to this end. In particular, we characterize both approximation and generalization capabilities of PSD models, showing that they enjoy strong theoretical guarantees. Moreover, we show that we can perform efficiently both sum and product rule in closed form via matrix operations, enjoying the same versatility of mixture models. Our results open the way to applications of PSD models to density estimation, decision theory, and inference.
accept
This article considers using a positive semi-definite (PSD) construction for flexible probability models. The basic idea is to parameterize a probability density as $f(x; M, \phi) = \phi(x)^\top M \phi(x)$ where $x\mapsto\phi(x)$ is a mapping to a Hilbert space and $M$ is a positive semi-definite operator (Equation 1). In particular, this provides a generalization of mixture models that allows negative mixture weights. The paper focuses on the special case of Gaussian PSD models, in which $\phi(x)$ is the feature map associated with the Gaussian kernel. This class of PSD models has some attractive properties: it is closed under multiplication (product rule, see Prop 2) and marginalization over a subset of variables (sum rule, see Prop 1). Further, it exhibits universal consistency: it can approximate any probability density arbitrarily well; see Prop 4. This paper builds upon recent work by Marteau-Ferey et al (2020), which showed the properties above. The paper extends this previous work by showing that PSD models obtain the optimal rate of convergence for density approximation (similar to kernel density estimation with positive and negative weights); see Theorem 7. This is in contrast with the slow, sub-optimal rate exhibited by kernel density estimation using non-negative weights only. Additionally, the paper applies the PSD framework to several nontrivial applications in decision theory, classification and regression, and hidden Markov models. The paper is well-written and clear. The reviewers and I found the ideas to be interesting and well-developed. The PSD framework seems potentially very useful for nonparametric modeling. In my view, the main limitations of the paper are as follows. 1) Novelty. Since many of the results are closely related to the previous work of Marteau-Ferey et al (2020), it would be helpful for readers if the paper could more clearly delineate the novel contributions of this work. 
Nonetheless, the reviewers and I found the work to contain many innovative contributions. 2) Computational complexity. The computational cost can be large, particularly under the multiplication rule. The article discusses using Nystrom projections as a way of mitigating this, but this is an approximation that may hinder performance. 3) Limited experiments. The experimental results shown are compelling, but the experiments section is quite brief. More experimental investigation would help demonstrate the utility of the methodology. Marteau-Ferey, U., Bach, F., & Rudi, A. (2020). *Non-parametric models for non-negative functions.* arXiv preprint arXiv:2007.03926.
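The parameterization described in this meta-review, $f(x; M, \phi) = \phi(x)^\top M \phi(x)$ with Gaussian features, can be sketched numerically. The anchor points, bandwidth, and the randomly generated PSD matrix below are illustrative assumptions for this sketch, not the construction used in the paper; the example only demonstrates why such a model is guaranteed to be non-negative.

```python
import numpy as np

def gaussian_features(x, centers, s=1.0):
    # phi(x): vector of Gaussian kernel evaluations against anchor points
    d = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d / (2.0 * s ** 2))

def psd_model(x, centers, M, s=1.0):
    # f(x) = phi(x)^T M phi(x); non-negative whenever M is PSD,
    # since phi^T (A A^T) phi = ||A^T phi||^2 >= 0
    phi = gaussian_features(x, centers, s)
    return phi @ M @ phi

rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 2))   # illustrative anchor points
A = rng.normal(size=(5, 5))
M = A @ A.T                          # PSD by construction

val = psd_model(np.zeros(2), centers, M)
```

Unlike a mixture model with non-negative weights, the off-diagonal entries of M may be negative, yet f stays non-negative everywhere — the property the meta-review contrasts with plain kernel density estimation.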
train
[ "MNL7tbPv15N", "ItAVTnVCAz9", "6yvOxKyb8PH", "-amlVqBWCO2", "BtN7M4igadM", "3p1SA40Ku05", "fhHw5QYGmu", "x4aTegOpAIq", "O0aE9dzso-E", "dgwvkK-4cxl" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors discuss an application of recent work in nonnegative function estimation to the problem of density estimation. In particular, they argue that under mild assumptions, PSD models can concisely model a large range of interesting densities and that these densities can be (more or less) efficiently learned...
[ 6, -1, -1, 7, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, -1, 3, -1, -1, -1, -1, 2, 3 ]
[ "nips_2021_HyQskgZwXO", "x4aTegOpAIq", "fhHw5QYGmu", "nips_2021_HyQskgZwXO", "dgwvkK-4cxl", "O0aE9dzso-E", "-amlVqBWCO2", "MNL7tbPv15N", "nips_2021_HyQskgZwXO", "nips_2021_HyQskgZwXO" ]
nips_2021_nlR7LzSArtK
Exploiting a Zoo of Checkpoints for Unseen Tasks
There are so many models in the literature that it is difficult for practitioners to decide which combinations are likely to be effective for a new task. This paper attempts to address this question by capturing relationships among checkpoints published on the web. We model the space of tasks as a Gaussian process. The covariance can be estimated from checkpoints and unlabeled probing data. With the Gaussian process, we can identify representative checkpoints by a maximum mutual information criterion. This objective is submodular. A greedy method identifies representatives that are likely to "cover'' the task space. These representatives generalize to new tasks with superior performance. Empirical evidence is provided for applications from both computational linguistics as well as computer vision.
accept
PRELIMINARY: The reviewers generally agree that the paper is interesting and can be accepted for publication at NeurIPS.
train
[ "7XSPGTuCBs", "ddT4HBYS_S", "zV7ypOdPjwY", "guo-7ALVC3M", "y2okhFyyLfe", "57KJGZYbxA0", "RqoqSJCdepL", "93LBb02RKai", "zgzziX58hjV" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors addressed all points. \n1. As for A-map that would be still an interesting result to be seen.\n2. it is a good outcome\n4. That is a reasonable reply, means one can say it is useful when one is on a computational budget and wants a choice by prior guess. \nall the others are satisfactory.\nThe reviewe...
[ -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "guo-7ALVC3M", "57KJGZYbxA0", "93LBb02RKai", "zgzziX58hjV", "93LBb02RKai", "RqoqSJCdepL", "nips_2021_nlR7LzSArtK", "nips_2021_nlR7LzSArtK", "nips_2021_nlR7LzSArtK" ]
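The abstract for this record sketches a concrete procedure: model the task space as a Gaussian process, then greedily select checkpoints that maximize a submodular mutual-information criterion. A minimal sketch of that greedy selection, assuming a toy RBF covariance in place of the paper's checkpoint-estimated one (the function names and the Krause–Guestrin-style MI gain are my own assumptions, not the paper's code):

```python
import numpy as np

def conditional_variance(K, s, S):
    """Variance of point s given an observed index set S under a zero-mean GP
    with covariance matrix K."""
    if not S:
        return K[s, s]
    S = list(S)
    K_SS = K[np.ix_(S, S)]
    k_sS = K[s, S]
    return K[s, s] - k_sS @ np.linalg.solve(K_SS, k_sS)

def greedy_mi_selection(K, k):
    """Greedily pick k indices with the (submodular) MI criterion:
    gain(s) ∝ Var(s | selected) / Var(s | all other unselected points)."""
    n = K.shape[0]
    selected, remaining = [], set(range(n))
    for _ in range(k):
        best, best_gain = None, -np.inf
        for s in sorted(remaining):
            rest = sorted(remaining - {s})
            num = conditional_variance(K, s, selected)
            den = max(conditional_variance(K, s, rest), 1e-12)  # guard near-zero
            if num / den > best_gain:
                best, best_gain = s, num / den
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because the MI objective is submodular, this greedy choice comes with the usual (1 − 1/e)-factor guarantee relative to the optimal subset.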
nips_2021_l7ULU2q6mvY
Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach
We target open-world feature extrapolation problem where the feature space of input data goes through expansion and a model trained on partially observed features needs to handle new features in test data without further retraining. The problem is of much significance for dealing with features incrementally collected from different fields. To this end, we propose a new learning paradigm with graph representation and learning. Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data. Based on our framework, we design two training strategies, a self-supervised approach and an inductive learning approach, to endow the model with extrapolation ability and alleviate feature-level over-fitting. We also provide theoretical analysis on the generalization error on test data with new features, which dissects the impact of training features and algorithms on generalization performance. Our experiments over several classification datasets and large-scale advertisement click prediction datasets demonstrate that our model can produce effective embeddings for unseen features and significantly outperforms baseline methods that adopt KNN and local aggregation.
accept
Overall, this paper’s reviews have reached a consensus. Three of the reviewers wish for the paper to be accepted, in particular after some high-quality clarifications from the authors. I agree with this decision. There is only one reviewer who is opposed, but that reviewer’s disagreement is based on a potential similarity between the paper and previous work, which the authors have successfully dispelled. I assume the reviewer is comfortable with this response; in any case, there are clearly sufficient distinctions from prior work and sufficient novelty here. From my own read, I think this paper identifies a fairly neat problem setting (ie, feature space expansion at test time) and makes it concrete, then offers reasonable solutions to it. The problem is a practical one, but it hasn’t been tackled in this exact form before (at least, to the best of my knowledge). While theoretically the problem could be posed in a far more general framework on domain shift, it would be hard to find a good solution. The current identification leads to some pretty nice results, and it could serve to drive a bunch of new research questions and solutions.
train
[ "u33CWL7d6P", "LPJiNG-QwWc", "JgV3AkTLp3", "kmj14NRTKRM", "cnbcRn1Cra", "HUCrsyyGYUm", "r4u-Zk2ryCF", "jsjFTiTiLEk", "v90J4n4YiV8", "VwmGUTcteQh", "Ar9w3YJvxor", "QLbhOo19v5U", "a27KIQ636Xc", "YSwtEOUJJ9W", "ww0CpiM0xkK", "_QS16STDun", "mgRXFugwpX", "K6pY34x5gFG", "0AoSY8ACj-", ...
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " Dear Reviewer fz9M,\n\nSince it is approaching the end of the discussion period, we would like to kindly ask if our previous response clarifies your concerns and if there are any further questions that we could answer to facilitate the review process. Thanks a lot for your time!", " Thank you for the feedbacks ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "K6pY34x5gFG", "JgV3AkTLp3", "kmj14NRTKRM", "mgRXFugwpX", "K6pY34x5gFG", "mgRXFugwpX", "jsjFTiTiLEk", "QLbhOo19v5U", "K6pY34x5gFG", "a27KIQ636Xc", "K6pY34x5gFG", "_QS16STDun", "ww0CpiM0xkK", "mgRXFugwpX", "4b9FHQf4TX", "0AoSY8ACj-", "nips_2021_l7ULU2q6mvY", "nips_2021_l7ULU2q6mvY",...
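The abstract for this record describes extrapolating embeddings for unseen features via message passing over a feature-data bipartite graph. A hedged one-round mean-pooling sketch of that idea (my own simplification; the paper trains a GNN for this, and all names here are assumptions):

```python
import numpy as np

def extrapolate_feature_embeddings(X, W_obs):
    """One round of mean-pooling message passing on a feature-data graph.

    X:     (n_samples, n_feats) indicator matrix; a nonzero entry is an edge
           between a data node and a feature node.
    W_obs: (n_obs_feats, d) embeddings of the observed (training) features.
    Columns of X beyond n_obs_feats are new features whose embeddings are
    extrapolated from the data nodes they connect to."""
    n_obs = W_obs.shape[0]
    A = (X != 0).astype(float)
    # Data-node states: average embedding of the observed features each sample has.
    deg_d = A[:, :n_obs].sum(axis=1, keepdims=True).clip(min=1)
    H_data = (A[:, :n_obs] @ W_obs) / deg_d
    # New-feature states: average over the data nodes touching each new feature.
    A_new = A[:, n_obs:]
    deg_f = A_new.sum(axis=0, keepdims=True).clip(min=1)
    return (A_new.T @ H_data) / deg_f.T
```

A new feature connected to a single sample simply inherits that sample's aggregated state; the learned GNN in the paper replaces these fixed averages with trained message and update functions.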
nips_2021_gKyyBfMM4Y
Adversarial Teacher-Student Representation Learning for Domain Generalization
Domain generalization (DG) aims to transfer the learning task from a single or multiple source domains to unseen target domains. To extract and leverage the information which exhibits sufficient generalization ability, we propose a simple yet effective approach of Adversarial Teacher-Student Representation Learning, with the goal of deriving the domain generalizable representations via generating and exploring out-of-source data distributions. Our proposed framework advances Teacher-Student learning in an adversarial learning manner, which alternates between knowledge-distillation based representation learning and novel-domain data augmentation. The former progressively updates the teacher network for deriving domain-generalizable representations, while the latter synthesizes data out-of-source yet plausible distributions. Extensive image classification experiments on benchmark datasets in multiple and single source DG settings confirm that, our model exhibits sufficient generalization ability and performs favorably against state-of-the-art DG methods.
accept
This paper was recommended for acceptance by all reviewers. The reviewers appreciated the fact that the paper was well-written, had comprehensive experiments, and presented a simple yet intuitive method. Though the reviewers originally had a few questions, the authors' response sufficiently answered those questions and convinced the reviewers of the merit of this work. The paper addresses the problem of domain generalization and the key aspect the reviewers appreciated the most was the relaxation of the assumption of domain labels and the effective performance shown from the method.
train
[ "vX7nbzWodCE", "d9053iHnDLA", "LKtsmFQwXv3", "MhG74t3d2s_", "4Ec0mNwDHrj", "I0oho_OLUXw", "EhUEJ0fh7W0", "Sm-OE1V7eFy", "VYEMvKTEr0j", "QUSjEuBsceK", "GxxD72KwnX1", "mQYVBNC995s", "k6bQwAFsLlu", "AO2bF_R5wVK", "VfDG6idxKQW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies the problem of Domain Generalization (DG), where a model trained on single or multiple source distributions / domains is expected to generalize to data from unseen / novel distributions at test-time. The authors propose an adversarial teacher-student representation learning setup consisting of tw...
[ 7, -1, 6, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, 4, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_gKyyBfMM4Y", "QUSjEuBsceK", "nips_2021_gKyyBfMM4Y", "4Ec0mNwDHrj", "GxxD72KwnX1", "Sm-OE1V7eFy", "nips_2021_gKyyBfMM4Y", "VYEMvKTEr0j", "EhUEJ0fh7W0", "vX7nbzWodCE", "VfDG6idxKQW", "LKtsmFQwXv3", "AO2bF_R5wVK", "nips_2021_gKyyBfMM4Y", "nips_2021_gKyyBfMM4Y" ]
nips_2021_RcIorZrz88d
Stochastic bandits with groups of similar arms.
Fabien Pesquerel, Hassan SABER, Odalric-Ambrym Maillard
accept
The committee was divided about this paper: while the technical value is recognized by all and praised by one reviewer, it seems like at least two reviewers were puzzled by the clarity of the paper. I decided to read the draft myself to form my own opinion on whether the pass needed was minor (and does not require another cycle of review) or major. I think it is rather the former situation, though I see how a few typos in the wrong places may have misled readers. Please update your work for the final version, removing typos and adding comments on motivation as well as citations to further highlight the connection with existing work and the potential impact of your discoveries.
train
[ "Ljf7nDokBH", "rV7xmcNR7mH", "APo70HXQ1fX", "bcd4EwBed4G", "eRVItEP7c4j", "HMgic2iOAc", "GHOKK2oI35z", "OkW0OAeAiO0", "DpgL6WsSgMm", "qCaZYEF2niR" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Hi authors,\n\nThank you for your replies to my comments. I still consider this to be a strong submission and will keep my score\n\n", "In this work the authors present an algorithm for the stochastic multi-armed bandits setting where it is known a priori that arms come in group of size at least q, all having t...
[ -1, 5, 6, -1, -1, -1, -1, -1, 7, 5 ]
[ -1, 4, 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "HMgic2iOAc", "nips_2021_RcIorZrz88d", "nips_2021_RcIorZrz88d", "qCaZYEF2niR", "APo70HXQ1fX", "DpgL6WsSgMm", "rV7xmcNR7mH", "nips_2021_RcIorZrz88d", "nips_2021_RcIorZrz88d", "nips_2021_RcIorZrz88d" ]
nips_2021_fbAHHm_jyo2
Tracking Without Re-recognition in Humans and Machines
Imagine trying to track one particular fruitfly in a swarm of hundreds. Higher biological visual systems have evolved to track moving objects by relying on both their appearance and their motion trajectories. We investigate if state-of-the-art spatiotemporal deep neural networks are capable of the same. For this, we introduce PathTracker, a synthetic visual challenge that asks human observers and machines to track a target object in the midst of identical-looking "distractor" objects. While humans effortlessly learn PathTracker and generalize to systematic variations in task design, deep networks struggle. To address this limitation, we identify and model circuit mechanisms in biological brains that are implicated in tracking objects based on motion cues. When instantiated as a recurrent network, our circuit model learns to solve PathTracker with a robust visual strategy that rivals human performance and explains a significant proportion of their decision-making on the challenge. We also show that the success of this circuit model extends to object tracking in natural videos. Adding it to a transformer-based architecture for object tracking builds tolerance to visual nuisances that affect object appearance, establishing the new state of the art on the large-scale TrackingNet challenge. Our work highlights the importance of understanding human vision to improve computer vision.
accept
Having read the paper carefully and looked at the reviews, I am pretty convinced this is a useful contribution and should definitely be accepted. Two of the reviewers gave the paper high scores -- but not a free pass, having asked detailed questions and gotten apparently satisfying responses from the authors. I see that one of the reviewers is very skeptical, and remains unsatisfied by the authors' responses. But looking at the author responses, and having tried to probe that reviewer's thoughts myself, it seems to me that the reviewer's skepticism is rather biased (e.g. not responding to logical questions, but just kind of baked in). So I've decided to largely discount that low reviewer's comments in making my final decision.
train
[ "I9vPL4-ooy0", "zRmv0IvbR_P", "uy85sRJQKN", "8ZgS6awnyD-", "Uf3lI_5h4TF", "EcSMDaL6U5A", "cHZoEn8g-cn", "qquLgQWSKv", "I7a1X13ilYS", "UcAp4dT99BB", "8SRwIAXPFGY", "Ve_Lsf2HxIm", "zvhD8Zmoa_b", "LIfRs6o3JnI", "_e7WijJ_7CO", "PCh21rFCS67", "tPIUC-wOZrs", "9TFVKHK3-7l", "9BMhPU0748X...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "...
[ " If we were randomly tweaking the prior state of the art we would agree that a percentage point of improvement might not constitute an intellectual contribution. However, our InT circuit is first and foremost designed to solve the synthetic PathTracker challenge and explain human decision making when tracking with...
[ -1, -1, -1, 3, 7, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "zRmv0IvbR_P", "uy85sRJQKN", "8ZgS6awnyD-", "nips_2021_fbAHHm_jyo2", "nips_2021_fbAHHm_jyo2", "I7a1X13ilYS", "nips_2021_fbAHHm_jyo2", "PCh21rFCS67", "_e7WijJ_7CO", "Ve_Lsf2HxIm", "LIfRs6o3JnI", "zvhD8Zmoa_b", "8SRwIAXPFGY", "tPIUC-wOZrs", "9BMhPU0748X", "cHZoEn8g-cn", "Uf3lI_5h4TF", ...
nips_2021_Hox8lKfr82L
Rethinking conditional GAN training: An approach using geometrically structured latent manifolds
Conditional GANs (cGAN), in their rudimentary form, suffer from critical drawbacks such as the lack of diversity in generated outputs and distortion between the latent and output manifolds. Although efforts have been made to improve results, they can suffer from unpleasant side-effects such as the topology mismatch between latent and output spaces. In contrast, we tackle this problem from a geometrical perspective and propose a novel training mechanism that increases both the diversity and the visual quality of a vanilla cGAN, by systematically encouraging a bi-lipschitz mapping between the latent and the output manifolds. We validate the efficacy of our solution on a baseline cGAN (i.e., Pix2Pix) which lacks diversity, and show that by only modifying its training mechanism (i.e., with our proposed Pix2Pix-Geo), one can achieve more diverse and realistic outputs on a broad set of image-to-image translation tasks.
accept
The reviewers were in agreement that the method is of broad interest to the community, and is sufficiently general to improve many types of GANs. In the initial reviews, the reviewers noted that the GAN used was not sufficiently recent. The authors addressed this and other concerns more than satisfactorily, and all reviewers agree that this paper should be accepted. I agree. Nice work!
train
[ "d1-Ntp2yU9", "Y-tZg4mFHYk", "mY9LeaWat03", "Ngqd5ZTgPvw", "ISaMpGSSq22", "YntqwiPX2cL", "nheohKgk7wJ", "vmNCjhw2Tsj", "WOVVFcHtS8U", "p_YnvZoxRIy", "l4uaual-7gc", "m7H0BpOMSAA", "mlShiTtZ6LM", "lGbPbjIc0FD" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed response. ", " We thank the reviewer for raising the score. We gratefully acknowledge that the questions and suggestions from the reviewer helped us to improve our current manuscript. ", " We appreciate the positive feedback from the reviewer and are grateful for raising the score. We...
[ -1, -1, -1, 7, -1, 7, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "WOVVFcHtS8U", "ISaMpGSSq22", "nheohKgk7wJ", "nips_2021_Hox8lKfr82L", "p_YnvZoxRIy", "nips_2021_Hox8lKfr82L", "l4uaual-7gc", "mlShiTtZ6LM", "mlShiTtZ6LM", "Ngqd5ZTgPvw", "YntqwiPX2cL", "lGbPbjIc0FD", "nips_2021_Hox8lKfr82L", "nips_2021_Hox8lKfr82L" ]
nips_2021_q2JWz371le
How to transfer algorithmic reasoning knowledge to learn new algorithms?
Learning to execute algorithms is a fundamental problem that has been widely studied. Prior work (Veličković et al., 2019) has shown that to enable systematic generalisation on graph algorithms it is critical to have access to the intermediate steps of the program/algorithm. In many reasoning tasks, where algorithmic-style reasoning is important, we only have access to the input and output examples. Thus, inspired by the success of pre-training on similar tasks or data in Natural Language Processing (NLP) and Computer vision, we set out to study how we can transfer algorithmic reasoning knowledge. Specifically, we investigate how we can use algorithms for which we have access to the execution trace to learn to solve similar tasks for which we do not. We investigate two major classes of graph algorithms, parallel algorithms such as breadth-first search and Bellman-Ford and sequential greedy algorithms such as Prim's and Dijkstra's. Due to the fundamental differences between algorithmic reasoning knowledge and feature extractors such as those used in Computer vision or NLP, we hypothesize that standard transfer techniques will not be sufficient to achieve systematic generalisation. To investigate this empirically, we create a dataset including 9 algorithms and 3 different graph types. We validate this empirically and show how, instead, multi-task learning can be used to achieve the transfer of algorithmic reasoning knowledge.
accept
This paper investigates how algorithms for which we have access to the execution trace can be leveraged to learn to solve similar tasks for which we do not have execution traces. The authors create a dataset that covers 9 algorithms and 3 different graph types to investigate transfer learning in two major classes of graph algorithms, parallel and sequential. They also introduce modifications to the existing NeuralExtractor model that improve its performance in some experimental settings. The paper makes an interesting empirical discovery: according to their results, standard methods for transfer learning are ineffective in transferring algorithmic reasoning knowledge, while multi-task learning does effectively transfer this kind of knowledge. I think this observation is important to share with the community, particularly as it arises from experiments that Reviewer JA8z says "are extensive and cover many different settings." One possible concern with the reliability of the finding is the large standard deviation in some of the results, as pointed out by Reviewer ZoqR. A relevant issue raised by several reviewers is that the paper offered no analysis of why MTL is better than pre-training approaches for algorithmic transfer learning. In their response, the authors admit that they cannot provide a conclusive explanation for this observation, although they do argue for the exclusion of some standard hypotheses like catastrophic forgetting. They also conducted two more experiments during the review period that reveal more about why transfer learning might fail (although they don't prove their hypothesis on generalizing minima conclusively). However, I think it's fine for papers to identify important open questions and leave their answers as future work for the community.
Some reviewers questioned the paper's technical novelty (the modelling innovation amounts only to a modified NeuralExtractor that extends prior work), but I don't think that's what this paper is really about. Its impact lies in the novelty of its empirical observations. Finally, the authors pledged to address reviewers' concerns around the clarity of the results presentation and the detail of their model description. I'm confident that they can do this straightforwardly. For these reasons, I lean towards accepting the paper even though its aggregate review score puts it slightly below the acceptance threshold.
train
[ "EZGsfOceZZW", "8Ze07VysOhJ", "XkaLl0XttYw", "Zgu_57tlU4", "iv3cRKPZRna", "T-4LmQW83FI", "pRzGy9aEAFL", "KDZWleZHk_b", "WeCLlyY6Zf", "0sQlpfkZ5Do" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper discusses how to transfer the knowledge from known step-by-step algorithms to learn new algorithms with only input and output pairs. The authors reach the conclusion: transfer learning does not help, but multi-task learning helps with the systematic generalization.\nThe main contributions of this paper ...
[ 6, 4, -1, 5, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, 3, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_q2JWz371le", "nips_2021_q2JWz371le", "nips_2021_q2JWz371le", "nips_2021_q2JWz371le", "nips_2021_q2JWz371le", "EZGsfOceZZW", "8Ze07VysOhJ", "Zgu_57tlU4", "0sQlpfkZ5Do", "nips_2021_q2JWz371le" ]
nips_2021_16Pv9PFDJB8
Fast Axiomatic Attribution for Neural Networks
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
accept
This paper focuses on feature attribution techniques (which, in a frustrating linguistic turn are often called “explanations”). These techniques are all heuristics chosen for their ability, on the basis of anecdotal evidence to highlight features that the beholder finds important. What if anything they are “explaining” is seldom explained, and I would encourage the authors to strip the paper of gratuitous uses of the term explanation wherever it is wielded in a quasitechnical form: this includes the title “for Training Neural Networks with Explanations” and in the italicized phrase, as though introducing a technical term: "efficiently explainable”. That said, for better or ill, and the jury is still out, some of these techniques are objects of interest of a large part of the ML community, and while there has been some progress to positing axioms that these attribution maps ought to satisfy, not all methods that satisfy those axioms are known to be efficiently computable. In particular, the authors present a new method for computing Integrated Gradients efficiently, showing that when bias terms are removed from neural networks they are efficiently computable. Whether integrated gradients are a useful concept in the first place remains a dubious proposition (what should the baseline be and what basis could anyone possibly have for choosing one?). However, insofar as they are an object of interest and the community is willing to publish lots of papers about them, this paper strikes me as unusually useful and concrete in its offering. The reviewers debated the surprisingness of the finding and whether it was already known to some in the XAI community. However, while I have seen many papers computing integrated gradients, I have no knowledge of anyone computing integrated gradients efficiently via this trick. It seems sufficiently useful, that if it were known, you would have seen it. 
Questions of the value of the underlying method aside, I do not believe that a finding must require an earthshaking analysis or be surprising to a great mathematician to be valuable and useful. If that were the case, most useful papers in ML would never have been published. The reviewer's suggestions that a key result was already known have been rebutted by the authors (who claim that the result is only known for linear models) and the reviewer did not take the opportunity to respond. In this light, I will give the authors the benefit of the doubt. The authors had a constructive discussion with Reviewers bst1 and gwR5 and I trust that they will act in good faith and add the discussed modifications, clarifications, and exposition to the final draft. I will recommend acceptance and hope that the authors will tone down the quasitechnical language about "explanation" and improve the paper per the promises that arose in the discussion period.
train
[ "7tVazeUaOAH", "rN5UJHs5csq", "IoESnD7atg5", "GMNk95rkRSi", "hwBtcW84Pyh", "6FNla45QCWD", "laXWtcWtX-w", "nglL7xxs_jp", "6aMJgJ3LIbC", "scqs7XsdXy" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors, \n\nThanks for your response and my apology for my late response. I believe most of my questions are resolved and I believe the rest questions in my feedback are good to have but not necessary to include in this paper. I will increase my score from 5 to 7 to reflect this change. I will appreciate If...
[ -1, 7, -1, -1, -1, -1, -1, 5, 4, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "hwBtcW84Pyh", "nips_2021_16Pv9PFDJB8", "6FNla45QCWD", "nglL7xxs_jp", "rN5UJHs5csq", "scqs7XsdXy", "6aMJgJ3LIbC", "nips_2021_16Pv9PFDJB8", "nips_2021_16Pv9PFDJB8", "nips_2021_16Pv9PFDJB8" ]
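The meta-review above turns on a concrete identity: for bias-free (positively homogeneous) ReLU networks, Integrated Gradients with a zero baseline collapses to Input×Gradient, because the gradient is constant along the straight-line path from the origin. A toy numpy check of that identity (my own two-layer network and names, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, w2 = rng.normal(size=(5, 4)), rng.normal(size=5)

def f(x):
    """Bias-free two-layer ReLU network (positively homogeneous of degree 1)."""
    return w2 @ np.maximum(W1 @ x, 0.0)

def grad_f(x):
    """Gradient of f at x; the ReLU mask depends only on the sign of W1 @ x."""
    pre = W1 @ x
    return W1.T @ ((pre > 0) * w2)

def integrated_gradients(x, steps=64):
    """Midpoint Riemann-sum Integrated Gradients with a zero baseline."""
    alphas = (np.arange(steps) + 0.5) / steps
    g = np.mean([grad_f(a * x) for a in alphas], axis=0)
    return x * g

x = rng.normal(size=4)
ig = integrated_gradients(x)
fast = x * grad_f(x)  # one backward pass instead of an integral
```

Since the ReLU mask is identical at every point `a * x` with `a > 0`, the path-averaged gradient equals the gradient at `x`, and completeness (`ig.sum() == f(x) - f(0)`) follows from Euler's homogeneous function theorem.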
nips_2021_Me-tuhUjhKK
OSOA: One-Shot Online Adaptation of Deep Generative Models for Lossless Compression
Chen Zhang, Shifeng Zhang, Fabio Maria Carlucci, Zhenguo Li
accept
The paper proposes to use a deep generative model optimized over a large dataset as the entropy model for a lossless compression algorithm. The main idea is to divide the data to be compressed into separate batches and to adapt the pre-trained model on each batch before coding the next batch. Two reviewers oppose acceptance of the paper and two are in favor. The main drawbacks pointed out are: - Potentially limited originality because “backward adaptation” is commonly used in image and video coding. - Writing quality could be significantly improved. - Some experiments seem to be less informative. - Older (non-DL) literature is missing. All reviewers appreciated the quality and clarity of the paper. Two reviewers especially valued the idea of adapting the model on data being compressed with deep generative models. The rebuttal was positively received by the reviewers. In my assessment, even though the originality of the paper is put in question, the paper is an interesting contribution to the field of data compression with deep generative modeling. Therefore, I tend to accept the paper.
train
[ "TOCdhUrEXP2", "zUGSD3Z-1Qn", "NfRiYELHsu", "gz3l0XOspcX", "l7aKK9y2gM0", "35tiZv4Ks9o", "M9HCutFX8E", "zsfqlwG1SC_", "ivlvkCxtFg", "kGBkn_rpXDu", "NCiLwLGA79t" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the response, which addressed my concerns.", " Thank you, I appreciate the clarifications provided by the authors.", " Thanks for the clarifications.", " The authors thank the reviewer for the comments and we would like to further clarify the confusions with our response.\n\nNovelty:...
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "M9HCutFX8E", "gz3l0XOspcX", "35tiZv4Ks9o", "ivlvkCxtFg", "zsfqlwG1SC_", "kGBkn_rpXDu", "NCiLwLGA79t", "nips_2021_Me-tuhUjhKK", "nips_2021_Me-tuhUjhKK", "nips_2021_Me-tuhUjhKK", "nips_2021_Me-tuhUjhKK" ]
nips_2021_ZYX1ff6H0Bs
Compressive Visual Representations
Learning effective visual representations that generalize well without human supervision is a fundamental problem in order to apply Machine Learning to a wide variety of tasks. Recently, two families of self-supervised methods, contrastive learning and latent bootstrapping, exemplified by SimCLR and BYOL respectively, have made significant progress. In this work, we hypothesize that adding explicit information compression to these algorithms yields better and more robust representations. We verify this by developing SimCLR and BYOL formulations compatible with the Conditional Entropy Bottleneck (CEB) objective, allowing us to both measure and control the amount of compression in the learned representation, and observe their impact on downstream tasks. Furthermore, we explore the relationship between Lipschitz continuity and compression, showing a tractable lower bound on the Lipschitz constant of the encoders we learn. As Lipschitz continuity is closely related to robustness, this provides a new explanation for why compressed models are more robust. Our experiments confirm that adding compression to SimCLR and BYOL significantly improves linear evaluation accuracies and model robustness across a wide range of domain shifts. In particular, the compressed version of BYOL achieves 76.0% Top-1 linear evaluation accuracy on ImageNet with ResNet-50, and 78.8% with ResNet-50 2x.
accept
This paper applies conditional entropy bottleneck (CEB) objective to two existing frameworks for self-supervised learning: SimCLR and BYOL. The paper clearly demonstrates the advantages of this approach and is well-grounded in the existing theory on CEB. Adapting CEB to SimCLR and BYOL seems to be novel. Interestingly, presented approach can be applied in several similar settings, thus the results can be potentially impactful also beyond the main scope of the paper.
train
[ "bO_nNfKZv_p", "eLD6psh29a", "--m5pBp0uj4", "eum_eaZv73a", "As68sE9-b1D", "rTicfc-CJpX", "G2my-GfdUOd", "F121icziFNE", "l4F-0i_3Ncx", "Y48uVKOl9HA", "lClONBFKNN", "sfIJLl0PWuJ", "6pgrmA5_ivR", "4lMR1enHKg", "uHAgInvstyh" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking the time to address my concerns.", " Thanks for the detailed reply. The clarifications were helpful. Looking forward to read the final version.", " Thank you for your response! I am keeping my original score and look forward to the updated version of the paper. ", " Thank you for the de...
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "lClONBFKNN", "Y48uVKOl9HA", "l4F-0i_3Ncx", "rTicfc-CJpX", "nips_2021_ZYX1ff6H0Bs", "G2my-GfdUOd", "F121icziFNE", "As68sE9-b1D", "uHAgInvstyh", "6pgrmA5_ivR", "4lMR1enHKg", "nips_2021_ZYX1ff6H0Bs", "nips_2021_ZYX1ff6H0Bs", "nips_2021_ZYX1ff6H0Bs", "nips_2021_ZYX1ff6H0Bs" ]
nips_2021_sKWgT8WppC3
Multi-Armed Bandits with Bounded Arm-Memory: Near-Optimal Guarantees for Best-Arm Identification and Regret Minimization
Arnab Maiti, Vishakha Patil, Arindam Khan
accept
The committee was divided on this paper, but the discussions have highlighted that, despite several writing issues, the paper presents novel and important contributions. We suggest that the authors revise their draft for clarity, but we recommend accepting this work to NeurIPS 2021.
train
[ "F6BkOXF6de5", "CrNeMlzBxbr", "WbVYm2KGIY9", "ZRhf3dFZDE1", "c8iRR24rX6S", "KQESZfHvdP7", "Ow3PaLKxJQm", "r2pGBpNdjxW", "qI1InzRDSQ4", "Bq3d4q1Fla7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the comment. We will clarify and incorporate the suggested changes in the updated version. Indeed, MAB instance is specified by the distributions corresponding to the arms and not just the means of the distributions. In fact, the knowledge of the distributions is not required in our anal...
[ -1, -1, 5, -1, -1, -1, -1, 6, 7, 8 ]
[ -1, -1, 3, -1, -1, -1, -1, 3, 4, 3 ]
[ "CrNeMlzBxbr", "ZRhf3dFZDE1", "nips_2021_sKWgT8WppC3", "qI1InzRDSQ4", "WbVYm2KGIY9", "r2pGBpNdjxW", "Bq3d4q1Fla7", "nips_2021_sKWgT8WppC3", "nips_2021_sKWgT8WppC3", "nips_2021_sKWgT8WppC3" ]
nips_2021_p7GujbewmRY
Grounding inductive biases in natural images: invariance stems from variations in data
To perform well on unseen and potentially out-of-distribution samples, it is desirable for machine learning models to have a predictable response with respect to transformations affecting the factors of variation of the input. Here, we study the relative importance of several types of inductive biases towards such predictable behavior: the choice of data, their augmentations, and model architectures. Invariance is commonly achieved through hand-engineered data augmentation, but do standard data augmentations address transformations that explain variations in real data? While prior work has focused on synthetic data, we attempt here to characterize the factors of variation in a real dataset, ImageNet, and study the invariance of both standard residual networks and the recently proposed vision transformer with respect to changes in these factors. We show standard augmentation relies on a precise combination of translation and scale, with translation recapturing most of the performance improvement---despite the (approximate) translation invariance built in to convolutional architectures, such as residual networks. In fact, we found that scale and translation invariance was similar across residual networks and vision transformer models despite their markedly different architectural inductive biases. We show the training data itself is the main source of invariance, and that data augmentation only further increases the learned invariances. Notably, the invariances learned during training align with the ImageNet factors of variation we found. Finally, we find that the main factors of variation in ImageNet mostly relate to appearance and are specific to each class.
accept
This paper brought about very divergent opinions from the reviewers — they all agreed that the analyses across the three sections are technically solid, and the topic is of broad interest to the community, but the main points of disagreement were over the novelty of the results, the paper being hard to read since it is packed with results, and whether the paper provides any actionable insights for future modeling approaches. I don’t necessarily view actionable insights as 100% necessary (although they would certainly add value to the paper), but upon reading the paper myself, I agree with reviewer WyYA’s assessment that Sections 3 and 4 are probably the more interesting ones and I wish the authors had dived deeper into them and perhaps had a more cohesive messaging for the paper (it is indeed packed with many, many results which makes it hard to digest). Overall however, I do think some of the results presented in this paper are surprising and will inform ongoing and future research in this field.
train
[ "qk-pVqUoUqv", "W-8kbgEJRlu", "jYHlyX7SoBd", "yM0V9T14W-a", "09JvDjc8gdR", "_6ZbYW_gZso", "7CrDtvlOVr", "zEZEJ7JaXlz", "Qczvaq7mWWb", "3EIl4dlh0Zu", "vT2xB4PyzHf", "m0Tiq4jdfc", "MsOsFsIzdl", "CgJTrvlyLSr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper provides an analysis of the invariance of neural networks (ResNet) trained on image classification (ImageNet) to image transformations, and of the role of data augmentation in encouraging invariance. The paper provides different types of results. First, in Section 2, the authors analyse what components ...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 4, 7 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "nips_2021_p7GujbewmRY", "yM0V9T14W-a", "vT2xB4PyzHf", "09JvDjc8gdR", "3EIl4dlh0Zu", "qk-pVqUoUqv", "MsOsFsIzdl", "CgJTrvlyLSr", "MsOsFsIzdl", "qk-pVqUoUqv", "m0Tiq4jdfc", "nips_2021_p7GujbewmRY", "nips_2021_p7GujbewmRY", "nips_2021_p7GujbewmRY" ]
nips_2021_s6JD_xBS31
Directed Graph Contrastive Learning
Graph Contrastive Learning (GCL) has emerged to learn generalizable representations from contrastive views. However, it is still in its infancy with two concerns: 1) changing the graph structure through data augmentation to generate contrastive views may mislead the message passing scheme, as such changes discard intrinsic graph structural information, especially the directional structure in directed graphs; 2) since GCL usually uses predefined contrastive views with hand-picked parameters, it does not take full advantage of the contrastive information provided by data augmentation, resulting in incomplete structural information for model learning. In this paper, we design a directed graph data augmentation method called Laplacian perturbation and theoretically analyze how it provides contrastive information without changing the directed graph structure. Moreover, we present a directed graph contrastive learning framework, which dynamically learns from all possible contrastive views generated by Laplacian perturbation. Then we train it using multi-task curriculum learning to progressively learn from multiple easy-to-difficult contrastive views. We empirically show that our model can retain more structural features of directed graphs than other GCL models because of its ability to provide complete contrastive information. Experiments on various benchmarks show that our approach outperforms state-of-the-art methods.
accept
The paper proposes a novel idea of self-supervised learning with digraphs. The scores of the paper are somewhat borderline, and the reviewers had thorough discussions with the authors. The pros and cons of the paper are well discussed by the reviewers in the reviews. The AC finds the concerns on the novelty and significance of the idea slightly outweigh the pros. The reviewers also pointed out the lack of large-scale experiments, and the authors responded that the paper's main contribution is the novel algorithm proposed and not large-scale experiments. This also seems to raise the bar for the theoretical contribution, which the paper does not seem to meet so far.
train
[ "2AbODgA2pPf", "os7QWTmrIEx", "eEA0I50s-AL", "OJRW9X2JgOL", "hXzrOnPsIfU", "8bseQ-i3jr2", "bZsedvtEBal", "27jNzDx695", "EawXUA2C2YW", "Jfi_-mMQpV4", "uZ2NYPkObB", "EU86KbBJnCq", "HBPvvDT_zen", "DhPIa8MtWra", "Uvk_o4fWP5a", "10cKhnaotOo", "DtiKjT58Au_" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper focuses on learning with directed graphs (digraphs), and aims to address the problems with existing GCL methods that 1) the augmentations undesirably alter meaningful structural information, and 2) the number of contrastive views is limited.\n\nAs the first contribution, the authors propose a solution c...
[ 5, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6 ]
[ 3, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_s6JD_xBS31", "HBPvvDT_zen", "nips_2021_s6JD_xBS31", "hXzrOnPsIfU", "8bseQ-i3jr2", "nips_2021_s6JD_xBS31", "Jfi_-mMQpV4", "os7QWTmrIEx", "Jfi_-mMQpV4", "uZ2NYPkObB", "2AbODgA2pPf", "DtiKjT58Au_", "10cKhnaotOo", "8bseQ-i3jr2", "nips_2021_s6JD_xBS31", "nips_2021_s6JD_xBS31", ...
nips_2021_QgX15Mdi1E_
Space-time Mixing Attention for Video Transformer
This paper is on video recognition using Transformers. Very recent attempts in this area have demonstrated promising results in terms of recognition accuracy, yet they have also been shown to induce, in many cases, significant computational overheads due to the additional modelling of the temporal information. In this work, we propose a Video Transformer model the complexity of which scales linearly with the number of frames in the video sequence and hence induces no overhead compared to an image-based Transformer model. To achieve this, our model makes two approximations to the full space-time attention used in Video Transformers: (a) It restricts time attention to a local temporal window and capitalizes on the Transformer's depth to obtain full temporal coverage of the video sequence. (b) It uses efficient space-time mixing to jointly attend to spatial and temporal locations without inducing any additional cost on top of a spatial-only attention model. We also show how to integrate two very lightweight mechanisms for global temporal-only attention which provide additional accuracy improvements at minimal computational cost. We demonstrate that our model produces very high recognition accuracy on the most popular video recognition datasets while at the same time being significantly more efficient than other Video Transformer models.
accept
The authors propose a novel transformer architecture for video recognition whose computational cost scales linearly with the number of frames (as opposed to the usual quadratic scaling). To achieve this, the authors restrict the time attention to a local temporal window and introduce an efficient space-time mixing procedure. The proposed approach offers competitive results in terms of accuracy/FLOPs tradeoffs on several popular video recognition benchmarks. The paper was reviewed by 4 expert reviewers and received borderline ratings. The rebuttal managed to address points raised by most reviewers, who found it a valuable contribution to the community. One reviewer maintains the view that the work lacks novelty in terms of technical contributions, and that more ablation studies are necessary. After considering the manuscript, the reviews, and the discussion, I felt that the work should be accepted for publication and that a minor revision is sufficient to address the raised criticisms.
test
[ "5VoqROO279X", "TbfiRkupjbk", "QdJsImAJig_", "Jwmz5ZID37C", "6NBrVOlOHxe", "mnnnHkYoa0", "SMBbJK7peyC", "PgVvj9yZy-s", "ry1vhSY7Neb", "Y5l50FCu7Pf", "LRSOLoQRF0W", "COtTYVkdFO", "V46YZit_vfv", "vHC1u7bIRn8", "uKI6N71NOMo", "SvIY55riXCn" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the rebuttal. It addresses my concerns (and I think the points raised by the other reviewers) quite well.\nAs pointed out by Reviewer qij4, I would not call the difference between X-ViT and TSM-ViT a \"mathematically sound approximation\", but the authors have described in the rebuttal how their met...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "LRSOLoQRF0W", "nips_2021_QgX15Mdi1E_", "vHC1u7bIRn8", "V46YZit_vfv", "mnnnHkYoa0", "SMBbJK7peyC", "PgVvj9yZy-s", "COtTYVkdFO", "nips_2021_QgX15Mdi1E_", "vHC1u7bIRn8", "TbfiRkupjbk", "SvIY55riXCn", "uKI6N71NOMo", "nips_2021_QgX15Mdi1E_", "nips_2021_QgX15Mdi1E_", "nips_2021_QgX15Mdi1E_"...
nips_2021_jbcRU9dkxs
Particle Dual Averaging: Optimization of Mean Field Neural Network with Global Convergence Rate Analysis
We propose the particle dual averaging (PDA) method, which generalizes the dual averaging method in convex optimization to the optimization over probability distributions with a quantitative runtime guarantee. The algorithm consists of an inner loop and an outer loop: the inner loop utilizes the Langevin algorithm to approximately solve for a stationary distribution, which is then optimized in the outer loop. The method can be interpreted as an extension of the Langevin algorithm to naturally handle nonlinear functionals on the probability space. An important application of the proposed method is the optimization of neural networks in the mean field regime, which is theoretically attractive due to the presence of nonlinear feature learning, but for which a quantitative convergence rate can be challenging to obtain. By adapting finite-dimensional convex optimization theory to the space of measures, we not only establish global convergence of PDA for two-layer mean field neural networks under more general settings and with a simpler analysis, but also provide a quantitative polynomial runtime guarantee. Our theoretical results are supported by numerical simulations on neural networks of reasonable size.
accept
The authors introduce a novel Particle Dual Averaging technique to minimize a non-linear functional of probability distributions. In terms of applications, they focus on the training of two-layer neural networks. In this context, they present quantitative convergence results for an entropy-regularized loss function. They also discuss the finite particle algorithm and present generalization bounds. There was a short but informative discussion between reviewers. All the reviewers agree that the paper presents some original, interesting and solid theoretical contributions. However, it was felt by some reviewers that the experimental section was fairly weak, and I encourage the authors to improve it before publication. I recommend acceptance of the paper.
train
[ "1NbA8Kh-swT", "U9YYTVjUDwe", "2fmsxCRuBfR", "n_vh4i-KRGF", "xkN0lAM8v9p", "mA67pCTjetQ", "PIiVO_JnTAk", "GtSNDwHdQXN", "11ofTuLrV0Q", "9zf5p7JlYac", "o4Cxi2sJGG9", "WimuNUN2mq", "5VEYJsnlO3", "hoWXNZFyS6w", "ETRCfC0agQ4", "R_B6WgAizWv", "47RkYD2IU4X", "E7Eye-2FZvF", "cuzMFqS6EbF...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_re...
[ " Thank you for the close reading. Indeed the mini-batch version of PDA used in our experiments is only for the outer loop, and the inner loop utilizes the full gradient Langevin algorithm. Thus as you pointed out, the per-iteration costs of PDA and SGD in Appendix G.1 Figure 3 are not comparable (we however note t...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 4, 4 ]
[ "U9YYTVjUDwe", "2fmsxCRuBfR", "xkN0lAM8v9p", "nips_2021_jbcRU9dkxs", "mA67pCTjetQ", "o4Cxi2sJGG9", "WimuNUN2mq", "9zf5p7JlYac", "hoWXNZFyS6w", "E7Eye-2FZvF", "R_B6WgAizWv", "47RkYD2IU4X", "nips_2021_jbcRU9dkxs", "ETRCfC0agQ4", "5VEYJsnlO3", "rjzlltwlDKC", "n_vh4i-KRGF", "cuzMFqS6Eb...
nips_2021_jb5fp_wQGHU
Learning Tree Interpretation from Object Representation for Deep Reinforcement Learning
Interpreting Deep Reinforcement Learning (DRL) models is important to enhance trust and comply with transparency regulations. Existing methods typically explain a DRL model by visualizing the importance of low-level input features with super-pixels, attentions, or saliency maps. Our approach provides an interpretation based on high-level latent object features derived from a disentangled representation. We propose a Represent And Mimic (RAMi) framework for training 1) an identifiable latent representation to capture the independent factors of variation for the objects and 2) a mimic tree that extracts the causal impact of the latent features on DRL action values. To jointly optimize both the fidelity and the simplicity of a mimic tree, we derive a novel Minimum Description Length (MDL) objective based on the Information Bottleneck (IB) principle. Based on this objective, we describe a Monte Carlo Regression Tree Search (MCRTS) algorithm that explores different splits to find the IB-optimal mimic tree. Experiments show that our mimic tree achieves strong approximation performance with significantly fewer nodes than baseline models. We demonstrate the interpretability of our mimic tree by showing latent traversals, decision rules, causal impacts, and human evaluation results.
accept
The paper proposes an approach to learning an interpretable distillation of an RL agent's advantage function by first adopting an object-disentangling model (MONet) and subsequently fitting regression trees to allow extraction of causal relationships therein. A key issue of deciding between tree size/complexity and expressiveness is handled using the Information Bottleneck (IB). The reviewers agree that the paper tackles an interesting and relevant problem, that the use of IB to drive the tree learning is novel, and that the experiments are quite thorough. The primary issues with the work, however, appear to be with some of the framing (with a conditioned version of MONet implying novelty) and the model setup, with questions on whether the end-to-end setup is overly complicated compared with a two-stage pipelined model. On balance, it appears as if the paper has more merits than issues, and most of the issues raised could be addressed with a bit of work. I would strongly urge the authors to actually make the edits for model renaming, detail, and the utility of end-to-end here, as requested by the reviewers.
train
[ "WzryzbyGpPB", "sZ8dLfHhPrs", "pwcG9PTBeK", "Wk6XHvEQ-C", "BwmkE7DuUU4", "LgyY-Nav9eL", "XzdMPqEUV4", "ObDCIRahAyp", "Yf3K4uG-3ys", "nPnY4u4vmEe" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes Represent and Mimic (RAMi), a framework for extracting an interpretable mimic tree from the advantage function learned by a deep RL agent. This is achieved by (1) using a variation of MONet [34] to extract disentangled object representations from visual state observations that are interpretable...
[ 6, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_jb5fp_wQGHU", "pwcG9PTBeK", "ObDCIRahAyp", "BwmkE7DuUU4", "LgyY-Nav9eL", "nPnY4u4vmEe", "Yf3K4uG-3ys", "WzryzbyGpPB", "nips_2021_jb5fp_wQGHU", "nips_2021_jb5fp_wQGHU" ]
nips_2021_p5rMPjrcCZq
Only Train Once: A One-Shot Neural Network Training And Pruning Framework
Structured pruning is a commonly used technique in deploying deep neural networks (DNNs) onto resource-constrained devices. However, the existing pruning methods are usually heuristic, task-specific, and require an extra fine-tuning procedure. To overcome these limitations, we propose a framework that compresses DNNs into slimmer architectures with competitive performance and significant FLOPs reductions by Only-Train-Once (OTO). OTO contains two key steps: (i) we partition the parameters of DNNs into zero-invariant groups, enabling us to prune zero groups without affecting the output; and (ii) to promote zero groups, we then formulate a structured-sparsity optimization problem, and propose a novel optimization algorithm, Half-Space Stochastic Projected Gradient (HSPG), to solve it, which outperforms the standard proximal methods on group sparsity exploration and maintains comparable convergence. To demonstrate the effectiveness of OTO, we train and compress full models simultaneously from scratch without fine-tuning for inference speedup and parameter reduction, and achieve state-of-the-art results on VGG16 for CIFAR10, ResNet50 for CIFAR10 and Bert for SQuAD, and a competitive result on ResNet50 for ImageNet. The source code is available at https://github.com/tianyic/onlytrainonce.
accept
The authors propose a fine-tuning free structured pruning method (OTO). The idea is to first partition the parameters into zero-invariant groups, prune the zero groups, and solve a structured sparsity optimization problem with projections. The experiment results on CIFAR10, ImageNet, and SQuAD show competitive performance on FLOPs and parameter-count reduction. The paper initially received split reviews. I would like to thank the authors and reviewers for the time and effort they spent engaging in the active discussion during the rebuttal phase. I strongly recommend that the authors revise the submission and include the ablation study and the baseline comparisons the reviewers requested. Specifically, please include 1. The ablation study of initialization stage vs projection stage 2. Baseline comparison with related methods [1,2,3,4,5]. [1] Learning Structured Sparsity in Deep Neural Networks, NeurIPS 16 [2] DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures, ICLR 20 [3] Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks, NeurIPS 20. [4] Operation-Aware Soft Channel Pruning using Differentiable Masks, ICML2020 [5] Lossless CNN Channel Pruning via Decoupling Remembering and Forgetting. ICCV 2021.
test
[ "WPkVqG6nJLg", "zNxJOjLP3Pt", "YF8n5u0sPhN", "W8kvvsGDReN", "hlhcLpsbuMT", "F8u5r5nyUG", "IVdQGSRZsX", "ogwmhkXnDg", "wpU46ACHxwd", "kham1-6PFyR", "qnxkUj9mGYT", "6sVPeNFqCpk", "w7JOpWFsG5t" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. Please find the below duplicated and rephrased comments along with our responses for addressing the concerns.\n\n\n- **Q9: Authors insist that employing more epochs onto OTO leads to the further improvement on Top-1 accuracy. Then, would you share me the improved accuracy?**\n\n A9: W...
[ -1, 6, -1, -1, -1, 5, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "YF8n5u0sPhN", "nips_2021_p5rMPjrcCZq", "W8kvvsGDReN", "zNxJOjLP3Pt", "F8u5r5nyUG", "nips_2021_p5rMPjrcCZq", "6sVPeNFqCpk", "zNxJOjLP3Pt", "w7JOpWFsG5t", "IVdQGSRZsX", "F8u5r5nyUG", "nips_2021_p5rMPjrcCZq", "nips_2021_p5rMPjrcCZq" ]
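The OTO abstract above contrasts HSPG with "standard proximal methods" for promoting zero groups. As background, here is a minimal sketch of that standard baseline — block soft-thresholding, the proximal operator of the group-L2 penalty. The function name, grouping, and numbers are illustrative and not taken from the paper; this is not HSPG itself.

```python
import numpy as np

def prox_group_l2(groups, lam):
    """Block soft-thresholding: the proximal operator of lam * sum_g ||w_g||_2.

    Any group whose Euclidean norm falls below lam is zeroed out as a
    whole -- the mechanism by which group-sparse training produces
    entire prunable structures (channels, neurons) rather than
    scattered zero weights.
    """
    out = []
    for w in groups:
        n = np.linalg.norm(w)
        if n <= lam:
            out.append(np.zeros_like(w))   # whole group pruned
        else:
            out.append((1.0 - lam / n) * w)  # group kept, shrunk toward zero
    return out

# A small group (norm ~0.054) is zeroed entirely; a large one is only shrunk.
groups = [np.array([0.05, -0.02]), np.array([1.0, 1.0])]
pruned = prox_group_l2(groups, lam=0.1)
```

Zeroing a whole group at once is what makes the pruning "structured": when the group is chosen to be zero-invariant (as in the abstract), removing it leaves the network output unchanged.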
nips_2021_J64lDCrYGi
Referring Transformer: A One-step Approach to Multi-task Visual Grounding
As an important step towards visual reasoning, visual grounding (e.g., phrase localization, referring expression comprehension / segmentation) has been widely explored. Previous approaches to referring expression comprehension (REC) or segmentation (RES) either suffer from limited performance, due to a two-stage setup, or require the designing of complex task-specific one-stage architectures. In this paper, we propose a simple one-stage multi-task framework for visual grounding tasks. Specifically, we leverage a transformer architecture, where two modalities are fused in a visual-lingual encoder. In the decoder, the model learns to generate contextualized lingual queries which are then decoded and used to directly regress the bounding box and produce a segmentation mask for the corresponding referred regions. With this simple but highly contextualized model, we outperform state-of-the-art methods by a large margin on both REC and RES tasks. We also show that a simple pre-training schedule (on an external dataset) further improves the performance. Extensive experiments and ablations illustrate that our model benefits greatly from contextualized information and multi-task training.
accept
The work enables one-stage, end-to-end visual grounding using a Transformer. The architecture is designed based on DETR, replacing the object queries with phrase encodings and adding several components for the grounding task. Overall, the reviewers unanimously agree the work makes a nice contribution with a simple and effective architecture, which cleverly extends the DETR architecture originally designed for object detection. The reviewers also found the experimentation sufficient, with the work showing good accuracy on a number of datasets. The authors' rebuttal clarified a few reviewer concerns, which further strengthened their confidence in accepting the paper.
train
[ "7IYMA6lGLWd", "3AZXLipPx3o", "_TzcijQ3WHu", "HdcGOlOYdE", "jhUFsa0IY9v", "OvBedgwCqlH", "F8ADGOtmys", "_Vd2A7h6uc", "Z8MCfi_8s2-", "vNFpmMHiuDd", "NTSc16hP6mF", "wL0LsnjIvM5" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your responses.\n\n1. I find both this paper's setting and the one in MDETR reasonable and interesting. The additional analyses are interesting and help readers to better understand the pros and cons when compared with MDETR. \n\n2. The presented experiment addressed my previous concern on the REC+R...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 5 ]
[ "jhUFsa0IY9v", "_TzcijQ3WHu", "F8ADGOtmys", "OvBedgwCqlH", "wL0LsnjIvM5", "NTSc16hP6mF", "vNFpmMHiuDd", "Z8MCfi_8s2-", "nips_2021_J64lDCrYGi", "nips_2021_J64lDCrYGi", "nips_2021_J64lDCrYGi", "nips_2021_J64lDCrYGi" ]
nips_2021__IY3_4psXuf
Decoupling the Depth and Scope of Graph Neural Networks
State-of-the-art Graph Neural Networks (GNNs) have limited scalability with respect to the graph and model sizes. On large graphs, increasing the model depth often means exponential expansion of the scope (i.e., receptive field). Beyond just a few layers, two fundamental challenges emerge: 1. degraded expressivity due to oversmoothing, and 2. expensive computation due to neighborhood explosion. We propose a design principle to decouple the depth and scope of GNNs – to generate representation of a target entity (i.e., a node or an edge), we first extract a localized subgraph as the bounded-size scope, and then apply a GNN of arbitrary depth on top of the subgraph. A properly extracted subgraph consists of a small number of critical neighbors, while excluding irrelevant ones. The GNN, no matter how deep it is, smooths the local neighborhood into informative representation rather than oversmoothing the global graph into “white noise”. Theoretically, decoupling improves the GNN expressive power from the perspectives of graph signal processing (GCN), function approximation (GraphSAGE) and topological learning (GIN). Empirically, on seven graphs (with up to 110M nodes) and six backbone GNN architectures, our design achieves state-of-the-art accuracy with orders of magnitude reduction in computation and hardware cost.
accept
This paper proposes a new way to perform graph sampling by using small neighborhoods of nodes for training and testing. The resulting method enables very deep networks to be used on a range of graph problems. The reviewers agree that the method is simple, and that this is a merit. While there is some disagreement over the usefulness of the theory, there seems to be consensus around the point of view that the presence of theoretical results adds to the paper.
test
[ "9qom5LVlO4v", "IrEqj6Kaz6u", "CClIWPDfBEB", "guGWsZnyGOX", "YrU0cxYe8E", "etmMslSeWGb", "KKcgW6JXTtL", "Yedd4z0YN8e", "TDsiXEWpS5g", "SVK6joKSDiY" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer k7tB,\n\n\nFirst of all, we thank you again for the valuable feedback on our paper! As there is only about 1 week left for the discussion phase, we would greatly appreciate it if you could confirm whether our response has addressed your concerns. \n\nPlease allow us to briefly summarize our previous...
[ -1, -1, -1, -1, -1, -1, -1, 5, 9, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "Yedd4z0YN8e", "guGWsZnyGOX", "Yedd4z0YN8e", "KKcgW6JXTtL", "SVK6joKSDiY", "CClIWPDfBEB", "TDsiXEWpS5g", "nips_2021__IY3_4psXuf", "nips_2021__IY3_4psXuf", "nips_2021__IY3_4psXuf" ]
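The decoupling principle in the abstract above — first bound the scope, then apply a GNN of arbitrary depth to it — can be illustrated with a plain BFS scope extractor over a toy adjacency-list graph. This is a hedged sketch of the general idea only; the paper's actual subgraph selection picks a small number of critical neighbors rather than a raw bounded k-hop ball, and the names here are illustrative.

```python
from collections import deque

def khop_scope(adj, target, num_hops, max_nodes):
    """Collect a bounded scope around `target`: BFS out to `num_hops`,
    capping the subgraph at `max_nodes` nodes.

    A GNN of arbitrary depth is then run only on this fixed subgraph,
    so adding layers smooths within the scope instead of expanding the
    receptive field exponentially (no neighborhood explosion).
    """
    seen = {target}
    frontier = deque([(target, 0)])
    while frontier and len(seen) < max_nodes:
        node, hop = frontier.popleft()
        if hop == num_hops:
            continue  # reached the hop budget along this branch
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, hop + 1))
                if len(seen) >= max_nodes:
                    break  # size budget exhausted
    return seen

# Path graph 0-1-2-3-4: the 2-hop scope of node 0 is {0, 1, 2}.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
scope = khop_scope(path, target=0, num_hops=2, max_nodes=10)
```

The key property is that the scope size is bounded independently of GNN depth, which is exactly the decoupling the abstract describes.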
nips_2021_WwZbupAKWo
Fast and Memory Efficient Differentially Private-SGD via JL Projections
Differentially Private-SGD (DP-SGD) of Abadi et al. and its variations are the only known algorithms for private training of large scale neural networks. This algorithm requires computation of per-sample gradient norms, which is extremely slow and memory intensive in practice. In this paper, we present a new framework to design differentially private optimizers called DP-SGD-JL and DP-Adam-JL. Our approach uses Johnson–Lindenstrauss (JL) projections to quickly approximate the per-sample gradient norms without exactly computing them, thus making the training time and memory requirements of our optimizers closer to those of their non-DP versions. Unlike previous attempts to make DP-SGD faster, which work only on a subset of network architectures or use compiler techniques, we propose an algorithmic solution which works for any network in a black-box manner, which is the main contribution of this paper. To illustrate this, on the IMDb dataset, we train a Recurrent Neural Network (RNN) to achieve a good privacy-vs-accuracy tradeoff, while being significantly faster than DP-SGD and with a similar memory footprint as non-private SGD.
accept
This submission makes significant time and memory efficiency improvements for differentially private SGD using random projections, at the cost of a worse privacy/utility tradeoff when compared with vanilla DP-SGD, as emphasized by one reviewer. However, the authors gave a compelling rebuttal: despite this worse tradeoff, DP-SGD-JL still has value for fast hyperparameter tuning in private training. The contribution of this submission thus seems strong.
train
[ "B422neLFQ0L", "6BGODvpdUwf", "yjkyMVjcln-", "H7ucFmDN395", "qVChB9KV4x", "fQKucPG9921", "iXHwrj9bOSa", "1ib7Rxgpvcb" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your positive review. We are glad that you found our privacy analysis using FFT interesting. We appreciate it.", " Thanks for the positive review and helpful comments and suggestions. We appreciate your effort and time spent in reviewing this paper. Now we address some of your comments mentioned in...
[ -1, -1, -1, -1, 7, 3, 6, 6 ]
[ -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "1ib7Rxgpvcb", "iXHwrj9bOSa", "fQKucPG9921", "qVChB9KV4x", "nips_2021_WwZbupAKWo", "nips_2021_WwZbupAKWo", "nips_2021_WwZbupAKWo", "nips_2021_WwZbupAKWo" ]
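The JL trick described in the abstract above can be sketched on explicit gradient rows: for a Gaussian direction v, E[(g·v)²] = ||g||², so a handful of random projections estimate every per-sample norm at once. This toy version (names and constants are illustrative) materializes the gradients it projects; the point of DP-SGD-JL is to obtain such projections without ever forming per-sample gradients, which this sketch does not capture.

```python
import numpy as np

def jl_norm_estimates(grads, k=64, seed=0):
    """Estimate the Euclidean norm of each row of `grads` (n x d)
    using k random Gaussian projections instead of exact computation.

    For v ~ N(0, I_d), E[(g . v)^2] = ||g||^2, so averaging squared
    projections over k independent directions gives an unbiased
    estimate of each squared norm, concentrating as k grows.
    """
    rng = np.random.default_rng(seed)
    n, d = grads.shape
    V = rng.standard_normal((d, k))       # k shared random directions
    proj = grads @ V                      # n x k: one dot product per direction
    return np.sqrt(np.mean(proj ** 2, axis=1))

# With enough projections, estimates land close to the true norms.
g = np.array([[3.0, 4.0, 0.0],           # true norm 5.0
              [0.0, 0.0, 1.0]])          # true norm 1.0
est = jl_norm_estimates(g, k=20000)
```

In DP-SGD these (approximate) norms are what the per-sample clipping step needs, which is why avoiding their exact computation removes the main time and memory bottleneck.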
nips_2021_hOG8swMRmY
Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations
Studying the sensitivity of neural networks to weight perturbation and its impact on model performance, including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks. In this paper, we provide the first integrated study and analysis of feed-forward neural networks in terms of the robustness of the pairwise class margin and its generalization behavior under weight perturbation. We further design a new theory-driven loss function for training generalizable and robust neural networks against weight perturbations. Experiments are conducted to validate our theoretical analysis. Our results offer fundamental insights for characterizing the generalization and robustness of neural networks against weight perturbations.
accept
Even though the paper received mixed scores, all the reviewers agreed on the originality and significance of the paper's contributions. The authors’ extensive rebuttals later clarified several concerns and confusions, especially regarding the lack of empirical evaluation. While the paper is borderline, I believe that most raised issues can be properly addressed **provided the authors implement all the required changes, which they addressed in their rebuttal**. Hence I am recommending a weak accept.
train
[ "agOHC1BV5Bv", "mOPIVOYg5Mo", "3gOuftREHFO", "0tgO05-zjzk", "H3sLiB7cHc", "HgKcwLf2HDc", "f1hZK7d93cx", "Fk9x1kF8MoV", "FUQDfjhPP33", "ONHFyrwe7tH", "z-3XFWwVPoD", "uF6SB3RNF_W", "AuWNGQoIg0", "nMfRkS6buLL", "Vp80LiUZ5MU", "TEg2Hv3Mlki", "j8_SCFBX2c", "NulDVrm5T20", "FNoxbxzzzzD"...
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ " ### Additional Experiments Based on Reviewer's Comments\n___\n\nWe thank the reviewer for the recent suggestion and response regarding the interest of inspecting the generalization gap and test accuracy under weight perturbation (see https://openreview.net/forum?id=hOG8swMRmY&noteId=FUQDfjhPP33). \n\nAs a follow-...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, 7, -1, -1, -1, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, 4, -1, -1, -1, 4, 4 ]
[ "FNoxbxzzzzD", "NulDVrm5T20", "nMfRkS6buLL", "FNoxbxzzzzD", "NulDVrm5T20", "f1hZK7d93cx", "Fk9x1kF8MoV", "FUQDfjhPP33", "0tgO05-zjzk", "AuWNGQoIg0", "Vp80LiUZ5MU", "nips_2021_hOG8swMRmY", "j8_SCFBX2c", "nips_2021_hOG8swMRmY", "3gOuftREHFO", "nips_2021_hOG8swMRmY", "uF6SB3RNF_W", "n...
nips_2021_uFORMPcA_b
Pipeline Combinators for Gradual AutoML
Automated machine learning (AutoML) can make data scientists more productive. But if machine learning is totally automated, that leaves no room for data scientists to apply their intuition. Hence, data scientists often prefer not total but gradual automation, where they control certain choices and AutoML explores the rest. Unfortunately, gradual AutoML is cumbersome with state-of-the-art tools, requiring large non-compositional code changes. More concise compositional code can be achieved with combinators, a powerful concept from functional programming. This paper introduces a small set of orthogonal combinators for composing machine-learning operators into pipelines. It describes a translation scheme from pipelines and associated hyperparameter schemas to search spaces for AutoML optimizers. On that foundation, this paper presents Lale, an open-source sklearn-compatible AutoML library, and evaluates it with a user study.
accept
Overall there is not enough support from reviewers for me to recommend acceptance. Reviewers agreed on some real strengths in the paper, including (1) that is well-written and well-organized, and (2) that it tackles an important problem. On #1: * Reviewer G6BM: "I really enjoyed reading this paper! The paper is extremely well-written and well-organized" * Reviewer cjD3: "The authors introduced their AutoML library in a sorted, logical way" On #2: * Reviewer G6BM: "The paper considers a very important problem and provides a very satisfactory solution both in technical and practical point of view" * Reviewer D3TC: "AutoML is an important application, and if successful, can greatly reduce data scientists' effort" * Reviewer H7JW: "AutoML is a useful tool [...] and efficient autoML system is an interesting research topic" But the weight of opinion was that the paper (1) doesn't present a clear and significant scientific contribution, (2) shows limited empirical validation, and (3) doesn't concern a software package with large enough practical impact on the NeurIPS community. On #1: * Reviewer cjD3: "Furthermore, I don’t see a clear path for scientific future work building on top of Lale" and "I'm not convinced that the new operators can express substantially more than the formats of already existing AutoML tools [...] If the expressiveness of the format would be larger than prior work, I would have expected that we need also specialized optimizers for this [...] I have the impression that Lale could be a convenient package for new AutoML users, but this alone does not justify a NeurIPS paper" * Reviewer H7JW: "be more clear about the scientific contribution from the very beginning of the paper, e.g., a more expressive formalization of the AutoML system." * Reviewer D3TC: "The high-level scientific contribution is missing. The paper provides too many low level details without describing what are the main challenges to implement combinators." 
and "It is not clear why the proposed method is contributing to AutoML" On #2: * Reviewer cjD3: "the user studies only include 9 participants, making statements not very meaningful. Additionally, the survey might even be biased towards Lale" * Reviewer H7JW: "organize the presentation of the empirical study following the general scientific principles; and to recruit more participants if possible so that one could draw some statistically significant conclusions." On #3: * Reviewer cjD3: "Overall, I’m not fully convinced that there will be many users of Lale at the end of the day." and "Lale is not even close to the level of pytorch. It has 225 stars on github and was forked 52 times. If that would be the level of impact we expect from a NeurIPS software paper, we will have thousands of papers of those each year." Reviewer G6BM disagreed on #1 and #3, and felt the paper does clear the bar, but offered this follow-up commentary: "I suggest the authors address those concerns in the next version of the paper to make the paper stronger, e.g., by highlighting the first point more and providing concrete/quantitative evidence on how much useful the Lale library is (and will be) for the NeurIPS community." The other reviewers offered several suggestions for the authors to improve the paper, in addition to those quoted above: * Reviewer cjD3: "I liked Section 3 regarding the gradual automation which is a real problem for new (Auto)ML practitioners and I believe that there is a lot of untouched potential here. Unfortunately, this is not really the main focus of the paper." * Reviewer cjD3: "Maybe JMLR MLOSS would be a better fit for this paper" Overall, we weren't able to justify acceptance based either on the significance of the scientific contribution or on the practical impact of the software described here within the NeurIPS community. For that reason I recommend rejecting the paper, but I hope that the reviewers' feedback is helpful to the authors in improving it.
val
[ "GNuZexsVC3", "nQMS0UiH_3G", "5ahnEpsJ71", "c34WxydEJQb", "hNOvO7qHDwY", "oAThYj8H5e", "qqg-u4YmyOl", "ogiUYIbxm2k", "n7ivTK7ITDY" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper aims at constructing a system for gradual AutoML that is concise, modular (or compositional), and easy-to-use. To this end, the paper introduces three orthogonal combinators (i.e., higher-order functions) which enable compositional code for gradual AutoML, and hyperparameter schemas which describe searc...
[ 8, -1, 5, -1, -1, -1, -1, 3, 4 ]
[ 3, -1, 3, -1, -1, -1, -1, 4, 5 ]
[ "nips_2021_uFORMPcA_b", "5ahnEpsJ71", "nips_2021_uFORMPcA_b", "n7ivTK7ITDY", "5ahnEpsJ71", "GNuZexsVC3", "ogiUYIbxm2k", "nips_2021_uFORMPcA_b", "nips_2021_uFORMPcA_b" ]
nips_2021_zaqGp90Od4y
Boost Neural Networks by Checkpoints
Training multiple deep neural networks (DNNs) and averaging their outputs is a simple way to improve the predictive performance. Nevertheless, the multiplied training cost prevents this ensemble method from being practical and efficient. Several recent works attempt to save and ensemble the checkpoints of DNNs, which only requires the same computational cost as training a single network. However, these methods suffer from either marginal accuracy improvements due to the low diversity of checkpoints or high risk of divergence due to the cyclical learning rates they adopted. In this paper, we propose a novel method to ensemble the checkpoints, where a boosting scheme is utilized to accelerate model convergence and maximize the checkpoint diversity. We theoretically prove that it converges by reducing exponential loss. The empirical evaluation also indicates our proposed ensemble outperforms a single model and existing ensembles in terms of accuracy and efficiency. With the same training budget, our method achieves 4.16% lower error on Cifar-100 and 6.96% on Tiny-ImageNet with the ResNet-110 architecture. Moreover, the adaptive sample weights in our method make it an effective solution to address the imbalanced class distribution. In the experiments, it yields up to 5.02% higher accuracy over a single EfficientNet-B0 on the imbalanced datasets.
accept
I recommend accepting this paper. In this paper, the authors propose a boosting method to ensemble checkpoints during the training of neural networks, called Checkpoint-Boosted Neural Network (CBNN), to improve the performance. In particular, a boosting scheme with both theoretical guarantees and empirical justification is proposed to accelerate model convergence and maximize the checkpoint diversity. In the post-rebuttal discussion, all the reviewers agree that most concerns in the original reviews are properly addressed. Reviewer Ca6wKai also raised the score. I suggest the authors take the suggestions from reviewers into account in the preparation of the camera-ready version.
train
[ "ETJ62RPX7sx", "GSJbfvekpP9", "2pF6MJs9KNA", "zz3gCgA7Czn", "tEZWN6N9a30", "PpB8fchvl1S", "E4k9fgMhiDY", "AJQM8t520K7", "aAc-mkOM3T5", "db7PjNTtz-R", "VvnAa6pwIDz", "RrIJwmIbXZA", "hYV9jRvLzdB", "RMbAyAwVkpC", "Bn5RqI6zDyU" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " It is glad to address your concerns. We are also exploring the wider application of our algorithm. Again, thank you very much for your detailed comments!", " Thanks for submitting the author feedback. The additional experiments addressed my concerns about the ablation study. My concerns regarding experiments sc...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "GSJbfvekpP9", "RrIJwmIbXZA", "zz3gCgA7Czn", "PpB8fchvl1S", "nips_2021_zaqGp90Od4y", "db7PjNTtz-R", "AJQM8t520K7", "hYV9jRvLzdB", "db7PjNTtz-R", "VvnAa6pwIDz", "tEZWN6N9a30", "Bn5RqI6zDyU", "RMbAyAwVkpC", "nips_2021_zaqGp90Od4y", "nips_2021_zaqGp90Od4y" ]
nips_2021_KPLf9FhwEqZ
Model Selection for Bayesian Autoencoders
We develop a novel method for carrying out model selection for Bayesian autoencoders (BAEs) by means of prior hyper-parameter optimization. Inspired by the common practice of type-II maximum likelihood optimization and its equivalence to Kullback-Leibler divergence minimization, we propose to optimize the distributional sliced-Wasserstein distance (DSWD) between the output of the autoencoder and the empirical data distribution. The advantages of this formulation are that we can estimate the DSWD based on samples and handle high-dimensional problems. We carry out posterior estimation of the BAE parameters via stochastic gradient Hamiltonian Monte Carlo and turn our BAE into a generative model by fitting a flexible Dirichlet mixture model in the latent space. Thanks to this approach, we obtain a powerful alternative to variational autoencoders, which are the preferred choice in modern application of autoencoders for representation learning with uncertainty. We evaluate our approach qualitatively and quantitatively using a vast experimental campaign on a number of unsupervised learning tasks and show that, in small-data regimes where priors matter, our approach provides state-of-the-art results, outperforming multiple competitive baselines.
accept
This paper introduces a model selection method for Bayesian autoencoders, based on a Bayesian formulation. Learning is achieved by optimization of the distributional sliced-Wasserstein distance (DSWD), and posterior estimation is carried out via stochastic gradient Hamiltonian Monte Carlo, which is useful for representation learning with uncertainty. The paper has been perceived quite positively. The authors address three challenges in working with BAEs: 1) lack of generative modeling, 2) inference intractability, and 3) picking a prior over model parameters. They address 1 with latent-space density estimation, 2 with MCMC methods, and 3 by optimizing the distributional sliced-Wasserstein distance between the BAE's induced distribution and the data-generating distribution. The authors also provide solid and convincing experimental results. However, there is also a potentially misleading problem with the notation, related to the issue raised by Reviewer 7aYu: the apparent cyclic definition in the probabilistic model. In this respect, it is important to highlight that an autoencoder (or its Bayesian treatment, the BAE) should be viewed as a supervised model. This is in contrast to a VAE, where the encoder is an inference distribution, so strictly speaking the statistical model is the decoder only. To highlight the supervised nature of the problem, one remedy could be to introduce $y$ as a supervised target and let $y_n = x_n$. Eq. (1) can then be expressed in the familiar form used for regular Bayesian NNs: $p(w|x,y) = p(y|w,x) p(w) / p(y|x)$. Starting with this notation seems to resolve the potential confusion (raised as a concern by Reviewer 7aYu) that I feel is not fully resolved during the discussion and must be addressed in the final manuscript. I suggest that the authors do not hide the notation, clearly define the Bayesian hierarchical model, and explicitly spell out the conditional in (2). 
It could also be helpful to move the discussion of the key differences from a VAE, which currently starts at line 214 in the experimental section, to the introduction. I believe that the authors can address this in the final manuscript, hence I am suggesting acceptance as a poster, acknowledging the positive points raised by the reviewers.
train
[ "OL_8buJYCHU", "vYjgjySh2Bi", "4-1F6aUGg6p", "kaOkKbNNcbQ", "HL64EAuIPCo", "tYTJ5wFDqQ7", "BZ0DRV7xufD", "Q0O0CMNzooE", "2fyg61Flt1", "TPtb1tr3fDe", "aRYE8ofhWf0", "ARlyVF8k5Zw", "YjnRNt9AXnw", "Er2NvcR8RU9", "g8u4uim2rvT", "d0HQMBKVrwj", "TsUEvEtXjv" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for your insightful feedback. For the camera-ready, we are going to include a discussion of the potential issues of the empirical Bayes approach.", " Thank you for your response.\n\n- Regarding your 1st concern, we believe there is a possible misunderstanding here. \n - In the paper, we did not claim...
[ -1, -1, 6, -1, -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, 4 ]
[ -1, -1, 3, -1, -1, -1, 2, -1, -1, 5, -1, -1, -1, -1, -1, -1, 5 ]
[ "kaOkKbNNcbQ", "HL64EAuIPCo", "nips_2021_KPLf9FhwEqZ", "g8u4uim2rvT", "YjnRNt9AXnw", "Q0O0CMNzooE", "nips_2021_KPLf9FhwEqZ", "2fyg61Flt1", "aRYE8ofhWf0", "nips_2021_KPLf9FhwEqZ", "Er2NvcR8RU9", "TPtb1tr3fDe", "TsUEvEtXjv", "BZ0DRV7xufD", "4-1F6aUGg6p", "nips_2021_KPLf9FhwEqZ", "nips_...
nips_2021_crnXK0jC2F
Three Operator Splitting with Subgradients, Stochastic Gradients, and Adaptive Learning Rates
Alp Yurtsever, Alex Gu, Suvrit Sra
accept
All reviewers agree that this paper is worthy of publication. Although some reviewers had initial remarks regarding missing reference, the authors have provided a strong rebuttal that addressed these concerns. I encourage the authors to take into account the feedback provided by the reviewers, in particular: * Reviewers 7qvs, ofGn and eqKr make excellent clarification suggestions on motivation and presentation. * Reviewer fPkS suggests some improvements to the experimental section. * Reviewer eqKr raises the issue of an important missing reference. This review also raises issues regarding clarity, motivation and comparison with other methods.
train
[ "ho1wo4Ftvxj", "eP76oAtLxqI", "fbVre-8A3_", "wlnFPcw82YG", "FPAti7YtgZx", "oA3V3vUU3Rp", "5nyObvYnIJ3", "3GkeYmpadc", "5_Vy3jQX3HY", "KMng21cPj1c", "Bv5yPduYhKg", "nrKoBn9IDw" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors analyze a variant of the classical three operator splitting algorithm where the smooth gradient (or co-coercive operator) is replaced by a subgradient (which is only a monotone operator). This enables the minimization of the sum of three functions, two of which have explicit proximity o...
[ 6, -1, -1, 6, -1, 7, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, -1, 5, -1, 3, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_crnXK0jC2F", "KMng21cPj1c", "5_Vy3jQX3HY", "nips_2021_crnXK0jC2F", "Bv5yPduYhKg", "nips_2021_crnXK0jC2F", "nips_2021_crnXK0jC2F", "ho1wo4Ftvxj", "wlnFPcw82YG", "nrKoBn9IDw", "oA3V3vUU3Rp", "nips_2021_crnXK0jC2F" ]
nips_2021__cXX-Dr7sf0
Knowledge-Adaptation Priors
Humans and animals have a natural ability to quickly adapt to their surroundings, but machine-learning models, when subjected to changes, often require a complete retraining from scratch. We present Knowledge-adaptation priors (K-priors) to reduce the cost of retraining by enabling quick and accurate adaptation for a wide variety of tasks and models. This is made possible by a combination of weight and function-space priors to reconstruct the gradients of the past, which recovers and generalizes many existing, but seemingly-unrelated, adaptation strategies. Training with simple first-order gradient methods can often recover the exact retrained model to an arbitrary accuracy by choosing a sufficiently large memory of the past data. Empirical results show that adaptation with K-priors achieves performance similar to full retraining, but only requires training on a handful of past examples.
accept
The paper presents a family of approaches for adapting pre-trained models to a variety of changes to the model architecture, training data or other aspects of the training setup. Given the cost of training models from scratch and the generality of the presented approach, this is a highly relevant piece of work with potential for impact. After the discussion, three reviews recommend acceptance and one review recommends rejection. The negative review points to certain issues with the current version, but doesn't convince me that these issues rise to the level of rejection. After considering both positive and negative points, I lean towards recommending acceptance. **Comments to the authors**: After reading the paper, I was left with the impression that clarity could be improved, and that there are places where details are missing. I see that some of these missing details and some additional experiments were supplied during the discussion with the reviewers. I would like to see these, together with any other reviewer feedback, incorporated into the final version of the paper.
test
[ "z-jD7RU3LR", "GNGexe7MjYz", "-cJqq4qld_d", "suEarzBhoBE", "iA_wF-4QpAa", "u0dDwpk5vyz", "HlcHLg2lP4d", "nDsH3KfVgM", "cIhEFgKO5ck", "sVy9IUeWQtV", "zyY91Ay0IRG", "aFVv0JCkU5_", "64M30KnY-fs", "35kzBhUo6y3", "h5K7gS0npL1", "GbaQmPnN0M", "0Btcb97ce_-", "v2b6kL4N4f-", "_OSD5Xmfxx7"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "...
[ "- This paper tackles an important problem: Suppose we have trained a model on a dataset, but we wish to adapt it quickly. For example, we might add new training points, delete training points, or wish to change the model and regularization. We could re-train a model, but that's slow - can we get a similar result b...
[ 7, -1, -1, -1, 6, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, -1, -1, -1, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021__cXX-Dr7sf0", "v2b6kL4N4f-", "suEarzBhoBE", "u0dDwpk5vyz", "nips_2021__cXX-Dr7sf0", "nDsH3KfVgM", "sVy9IUeWQtV", "h5K7gS0npL1", "nips_2021__cXX-Dr7sf0", "0Btcb97ce_-", "h5K7gS0npL1", "64M30KnY-fs", "GbaQmPnN0M", "nips_2021__cXX-Dr7sf0", "iA_wF-4QpAa", "_OSD5Xmfxx7", "cIhEF...
nips_2021_qPOeyokHXT8
Provably efficient multi-task reinforcement learning with model transfer
We study multi-task reinforcement learning (RL) in tabular episodic Markov decision processes (MDPs). We formulate a heterogeneous multi-player RL problem, in which a group of players concurrently face similar but not necessarily identical MDPs, with a goal of improving their collective performance through inter-player information sharing. We design and analyze a model-based algorithm, and provide gap-dependent and gap-independent regret upper and lower bounds that characterize the intrinsic complexity of the problem.
accept
The paper presents a method to transform a multi-task MDP into a multi-player game where agents act in related but slightly different environments and share information. Several reviewers commented on the quality and clarity of the writing, and while there were many specific technical questions, they were mostly clarified in the responses. One point of criticism from several reviewers is the lack of technical novelty. However, in the discussion, the reviewers also agreed on the importance of multi-task RL and that the theoretical results are state-of-the-art. As stated by two of the reviewers, experiments in common domains could be useful to demonstrate the practical applicability. I encourage the authors to consider this point in future work.
test
[ "JD3nzpCEJ6O", "8L_SObtVs1C", "xsOo_pD_ai3", "pkm1-HcBmSb", "wbeUmSYC86f", "WAS19EeV31Y", "OX3r3drkwK", "rOOZnWUFdS", "8ARU8bxAaPj", "kXZky7Marj2", "uqALhXrKLnU" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks authors for the response.", " Thank you for your detailed comments. We have also noticed some of the minor issues after the main paper submission deadline, and addressed them in the errata in the supplementary material; we will make sure to make a pass over the paper and incorporate your comments that ha...
[ -1, -1, -1, -1, -1, -1, 6, 7, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 2, 4, 2 ]
[ "wbeUmSYC86f", "OX3r3drkwK", "kXZky7Marj2", "uqALhXrKLnU", "8ARU8bxAaPj", "rOOZnWUFdS", "nips_2021_qPOeyokHXT8", "nips_2021_qPOeyokHXT8", "nips_2021_qPOeyokHXT8", "nips_2021_qPOeyokHXT8", "nips_2021_qPOeyokHXT8" ]
nips_2021_hMY6nm9lld
Predicting Molecular Conformation via Dynamic Graph Score Matching
Predicting stable 3D conformations from 2D molecular graphs has been a long-standing challenge in computational chemistry. Recently, machine learning approaches have demonstrated very promising results compared to traditional experimental and physics-based simulation methods. These approaches mainly focus on modeling the local interactions between neighboring atoms on the molecular graphs and overlook the long-range interactions between non-bonded atoms. However, these non-bonded atoms may be proximal to each other in 3D space, and modeling their interactions is of crucial importance to accurately determine molecular conformations, especially for large molecules and multi-molecular complexes. In this paper, we propose a new approach called Dynamic Graph Score Matching (DGSM) for molecular conformation prediction, which models both the local and long-range interactions by dynamically constructing graph structures between atoms according to their spatial proximity during both training and inference. Specifically, the DGSM directly estimates the gradient fields of the logarithm density of atomic coordinates according to the dynamically constructed graphs using score matching methods. The whole framework can be efficiently trained in an end-to-end fashion. Experiments across multiple tasks show that the DGSM outperforms state-of-the-art baselines by a large margin, and it is capable of generating conformations for a broader range of systems such as proteins and multi-molecular complexes.
accept
The reviewers all agreed this work should be accepted. They appreciated the clarity, the results (particularly the small-molecule results), the motivation of the problem being addressed, the end-to-end aspect of the model, and the related work section. The authors should take into account the detailed suggestions of the reviewers for the camera-ready version, to make an already nice paper even better.
train
[ "86xe2xAcLG", "SomNa7msUb", "Z0LvD2Jt8y", "zPpd4L7uKPO", "RIRvCQHAhPv", "MZtSR7JV2F", "3lhGm6T0T_9", "Gx7NH5ikNa", "xIy5I5ZA-JM", "eI7aqd8Iic6" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors develop an improved method for 3D molecular conformation generation from 2D molecular graphs that can more directly incorporate non-bonded, topologically long-range interactions into the generation process. Their starting point is a recent method that leverages score matching with a GNN based on the 2D...
[ 7, -1, -1, 6, -1, -1, -1, -1, 6, 7 ]
[ 5, -1, -1, 3, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_hMY6nm9lld", "Z0LvD2Jt8y", "RIRvCQHAhPv", "nips_2021_hMY6nm9lld", "eI7aqd8Iic6", "xIy5I5ZA-JM", "86xe2xAcLG", "zPpd4L7uKPO", "nips_2021_hMY6nm9lld", "nips_2021_hMY6nm9lld" ]
nips_2021_CONAi0Bh26d
When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting
Accurate and trustworthy epidemic forecasting is an important problem for public health planning and disease mitigation. Most existing epidemic forecasting models disregard uncertainty quantification, resulting in mis-calibrated predictions. Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations; e.g., it is difficult to specify proper priors in Bayesian NNs, while methods like deep ensembling can be computationally expensive. In this paper, we propose to use neural functional processes to fill this gap. We model epidemic time-series with a probabilistic generative process and propose a functional neural process model called EpiFNP, which directly models the probability distribution of the forecast value in a non-parametric way. In EpiFNP, we use a dynamic stochastic correlation graph to model the correlations between sequences, and design different stochastic latent variables to capture functional uncertainty from different perspectives. Our experiments in a real-time flu forecasting setting show that EpiFNP significantly outperforms state-of-the-art models in both accuracy and calibration metrics, up to 2.5x in accuracy and 2.4x in calibration. Additionally, as EpiFNP learns the relations between the current season and similar patterns of historical seasons, it enables interpretable forecasts. Beyond epidemic forecasting, EpiFNP can be of independent interest for advancing uncertainty quantification in deep sequential models for predictive analytics.
accept
There was significant interest in this paper, as the topic of epidemic forecasting is indeed one that has the attention of the community at the current time. Reviewers had mixed opinions about this paper, but overall found that the most important concerns raised were indeed addressed by the author responses. In particular, reviewer mtym's concern that the model may be prone to overfitting was reasonably addressed by the empirical evaluation. (And indeed, this would not be the first model in the ML literature to show that over-parameterization does not necessarily lead to overfitting.) Reviewer EENP's questions around whether the model is specifically designed for this application area were also well addressed in the rebuttal, along with useful additional detail around computational efficiency and the potential impact of post-hoc calibration. Overall, I can see no reason not to accept this paper as a poster at this point, with the clear expectation of course that the authors will incorporate the feedback from reviewers into revising their paper, and include the additional results from the responses in the final paper.
train
[ "dcXFCJ-xIn", "wyR0JCzvOdQ", "8O14VAtEhaB", "dB5rXPTu3MS", "2NO71BbJiW6", "gXxi0eJjEu2", "xwS_Y43BjP", "P7oOMbC96G9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a method for probabilistic time series forecasting for univariate real-valued data (they just consider Influenza-Like Illness) using a semi-parametric neural network. The basic idea is to embed the prefix of observations for the current season into a fixed-sized latent vector using a GRU, and t...
[ 6, -1, -1, -1, -1, -1, 5, 7 ]
[ 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_CONAi0Bh26d", "2NO71BbJiW6", "dB5rXPTu3MS", "P7oOMbC96G9", "dcXFCJ-xIn", "xwS_Y43BjP", "nips_2021_CONAi0Bh26d", "nips_2021_CONAi0Bh26d" ]
nips_2021_HX0eMs5YYMA
Bounds all around: training energy-based models with bidirectional bounds
Energy-based models (EBMs) provide an elegant framework for density estimation, but they are notoriously difficult to train. Recent work has established links to generative adversarial networks, where the EBM is trained through a minimax game with a variational value function. We propose a bidirectional bound on the EBM log-likelihood, such that we maximize a lower bound and minimize an upper bound when solving the minimax game. We link one bound to a gradient penalty that stabilizes training, thereby providing grounding for best engineering practice. To evaluate the bounds we develop a new and efficient estimator of the Jacobi-determinant of the EBM generator. We demonstrate that these developments significantly stabilize training and yield high-quality density estimation and sample generation.
accept
The main contribution of the paper is the development of lower and upper bounds for training EBMs in a stable manner, by drawing inspiration from the WGAN objective. The methodology presented in the paper can be useful for the research community. The reviewers raised some concerns about the paper that were adequately addressed during the rebuttal phase. I therefore recommend the paper for acceptance, but strongly encourage the authors to address the following points for the camera-ready version: + Rephrase the points about the instability & grad clipping of WGAN, as discussed with reviewer rhEU. + Add the experiment details to the appendix, as suggested by reviewer y6Eh. + Add some discussion and references in the related work section. In particular, it'd be useful to include as much of the similarities and differences with previous methods (from the response to reviewer vzR3) as possible. + Add some clarification about the invertibility of the Jacobian determinant and the change-of-variables formula, as discussed with reviewer xLGS and myself. In particular, the assumptions under which Eq. (8) holds should be clearly stated in the paper - these are pointed out by the authors in their rebuttal, but some additional questions are: Does this limit the architecture of the generative NN? For example, does each hidden layer need to have a number of hidden units between $d$ and $D$, with non-decreasing sizes along the layers? Is any non-linearity valid? (e.g., ReLUs map many input values to 0, so it shouldn't be used). If any assumption is made, it should be clearly stated in the final version. In addition, the text should clarify that $p(G(z))$ in Eq. (8) is *not* the Lebesgue measure on $\mathbb{R}^D$ but it is restricted to some low-dim manifold. + Incorporate the experiments that were run during the rebuttal period. Besides the items above, I also have two technical questions that should be clarified in the paper: + Below Eq. 
(20), the paper says that $\zeta=1$ allows for the simplified bound from Eq. (15). Shouldn't it be $\zeta=M$ instead, which corresponds to $m=1$ in Eqs. (13)-(15)? + Theorem 2 guarantees the existence of constants ($m$, $M$, and $p\geq 1$) such that the bound in Eq. (13) holds, but unless I'm missing something, it doesn't say what values of the constants guarantee the bound in Eq. (13). But, according to the rebuttal, Eq. (15) is a valid bound for any $p\geq 1$ (as long as the "expression$\geq 1$" assumption is met). So, even though it was obtained from Eq. (13), Eq. (15) makes an explicit statement about what values of the constants constitute a valid bound. How is such discrepancy possible? Is the "expression$\geq 1$" assumption the explanation for that? Does Theorem 2 actually say something about the values of the constants that guarantee the correctness of the bound?
train
[ "TZrA08kJV02", "PfOtQwERMBF", "obX5bpIAuj", "pHC3ZCFY08b", "hdWVRZCJ_h_", "iJVBtV99NN", "KlZoGZ7Vtzu", "mUcE8fTFg_c", "M0qx29qbIWx", "YxCke2pkTGk", "_0V5nFc2PAA", "DAKii-x83P", "tRJuaegybgi", "Vu1SlsNYGrV", "99k_W9njTX0", "DwmhBixE4jy", "ssxVGp4-P09", "sEiSKFsF6Ho", "GmIJgaP8giR"...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "a...
[ " Thank you for your answer, this clarifies my question.", " Thank you for the question -- we are thankful for the engagement.\n\nEq. 8 should be read as the expression of the density *where it has support*. \n\nLet us first consider the linear case. Assume $z \\sim \\mathcal{N}(0, I)$ with $z \\in \\mathbb{R}^d$...
[ -1, -1, 5, -1, -1, -1, -1, 6, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, 5, -1, -1, -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "PfOtQwERMBF", "hdWVRZCJ_h_", "nips_2021_HX0eMs5YYMA", "iJVBtV99NN", "M0qx29qbIWx", "KlZoGZ7Vtzu", "tRJuaegybgi", "nips_2021_HX0eMs5YYMA", "DAKii-x83P", "sEiSKFsF6Ho", "nips_2021_HX0eMs5YYMA", "DwmhBixE4jy", "Vu1SlsNYGrV", "ssxVGp4-P09", "mUcE8fTFg_c", "GmIJgaP8giR", "obX5bpIAuj", ...
nips_2021_cnWSyJNmeCE
CogView: Mastering Text-to-Image Generation via Transformers
Text-to-Image generation in the general domain has long been an open problem, which requires both a powerful generative model and cross-modal understanding. We propose CogView, a 4-billion-parameter Transformer with VQ-VAE tokenizer to advance this problem. We also demonstrate the finetuning strategies for various downstream tasks, e.g. style learning, super-resolution, text-image ranking and fashion design, and methods to stabilize pretraining, e.g. eliminating NaN losses. CogView achieves the state-of-the-art FID on the blurred MS COCO dataset, outperforming previous GAN-based models and a recent similar work DALL-E.
accept
This work presents a text-to-image generation model using VQ-VAE to discretize images, which is then followed by a LM on text + image tokens. There was a debate about whether this work is concurrent to or a follow-up of DALL-E (blog post on Jan 5, 2021 and arXiv on Feb 24, 2021, more than two months before the NeurIPS deadline), given the similarities between the two approaches. When viewed as a follow-up work, I found that there is a reasonable amount of contributions (including technical ones) in this paper, e.g., the finetuning ideas for super-resolution (which greatly enhances the quality and is actually a neat idea) and self-ranking (without having to train a full CLIP model). The detailed findings of using nearest neighbors in VQ-VAE, setting equal weight to the LM loss for texts, etc., together with tricks for stabilizing training, seem informative and valuable to the community. In addition, the authors promised to open-source this work. Because of these reasons, I recommend Accept.
train
[ "b1e19pQb3rC", "aaAyAOLBZW", "nIBVqZonviu", "v741GqBHkP0", "tnNormMFdOV", "fcrQqtb1UB", "kHVIbElHGr", "4Xc42fyg3N1", "fw4KLo_I4v", "yk1mHwxmVJ-", "R-CBI78R-_n", "YJC5rfRUwe" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- CogView is an text-to-image model similar to DALL-E, except it uses a traditional VQ-VAE instead of dVAE. \n- A Chinese (image, text) dataset is collected for supervised pre-training. (30M pairs)\n- A couple of techniques are described to stabilize FP16 training: PB-Relax (identity that is more stable for comput...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "nips_2021_cnWSyJNmeCE", "nIBVqZonviu", "kHVIbElHGr", "kHVIbElHGr", "fcrQqtb1UB", "b1e19pQb3rC", "YJC5rfRUwe", "R-CBI78R-_n", "yk1mHwxmVJ-", "nips_2021_cnWSyJNmeCE", "nips_2021_cnWSyJNmeCE", "nips_2021_cnWSyJNmeCE" ]
nips_2021_tNT4APQ0Wgj
Time-independent Generalization Bounds for SGLD in Non-convex Settings
We establish generalization error bounds for stochastic gradient Langevin dynamics (SGLD) with constant learning rate under the assumptions of dissipativity and smoothness, a setting that has received increased attention in the sampling/optimization literature. Unlike existing bounds for SGLD in non-convex settings, ours are time-independent and decay to zero as the sample size increases. Using the framework of uniform stability, we establish time-independent bounds by exploiting the Wasserstein contraction property of the Langevin diffusion, which also allows us to circumvent the need to bound gradients using Lipschitz-like assumptions. Our analysis also supports variants of SGLD that use different discretization methods, incorporate Euclidean projections, or use non-isotropic noise.
accept
The paper has been received with mixed impressions. I mostly agree with the bottom line of the reviews: the paper does introduce novel results and techniques to the line of work on generalization of SGLD, and there should be sufficient interest at NeurIPS for such a contribution; yet on the other hand, it is unclear how significant the actual bounds derived are, and the paper does not provide ample discussion/comparison around this. As some of the reviews alluded to, the significance of the dimension-dependent bounds is questionable (under the assumptions the paper makes, $\sqrt{d/n}$-type bounds can be obtained, independently of the algorithm, from generic uniform-convergence arguments; the authors' response to this point is unconvincing). While the dimension-independent bounds (Theorem 4.1) seem more compelling, to truly appreciate them I would have liked to see a more nuanced discussion of their relation to existing bounds, as well as a more careful analysis of their precise dependence on the problem parameters. All considered, my decision is to accept the paper - but I urge the authors to carefully address the primary concerns raised in the discussion for the final version and to clarify their contribution and its relation to existing generalization bounds.
train
[ "TAHMjfbAMrr", "LvqNxgTYKRt", "8P4NJPhZAKu", "cEseTIPO_Hs", "lsMFMYsfji", "CTc_un7tZYy", "JpfcXIWTRU", "Cufuy3klPZt", "TDwwAVDVDTt", "bZvyRe8BC4G", "79SkzuDA_B", "gcywTz4KApB", "ijcx7Ief2YH", "bXXbZEMO-u" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nJust a quick follow up regarding our response to your comments. Specifically, If you have any other outstanding concerns, do let us know as we can provide additional clarification.\n\nOtherwise, if you have no other concerns, we would kindly ask that you consider updating your review in light of...
[ -1, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "ijcx7Ief2YH", "nips_2021_tNT4APQ0Wgj", "nips_2021_tNT4APQ0Wgj", "8P4NJPhZAKu", "8P4NJPhZAKu", "JpfcXIWTRU", "nips_2021_tNT4APQ0Wgj", "TDwwAVDVDTt", "bXXbZEMO-u", "LvqNxgTYKRt", "8P4NJPhZAKu", "ijcx7Ief2YH", "nips_2021_tNT4APQ0Wgj", "nips_2021_tNT4APQ0Wgj" ]
nips_2021_v4vuGbNIv71
Nonuniform Negative Sampling and Log Odds Correction with Rare Events Data
We investigate the issue of parameter estimation with nonuniform negative sampling for imbalanced data. We first prove that, with imbalanced data, the available information about unknown parameters is only tied to the relatively small number of positive instances, which justifies the usage of negative sampling. However, if the negative instances are subsampled to the same level of the positive cases, there is information loss. To maintain more information, we derive the asymptotic distribution of a general inverse probability weighted (IPW) estimator and obtain the optimal sampling probability that minimizes its variance. To further improve the estimation efficiency over the IPW method, we propose a likelihood-based estimator by correcting log odds for the sampled data and prove that the improved estimator has the smallest asymptotic variance among a large class of estimators. It is also more robust to pilot misspecification. We validate our approach on simulated data as well as a real click-through rate dataset with more than 0.3 trillion instances, collected over a period of a month. Both theoretical and empirical results demonstrate the effectiveness of our method.
accept
In this paper, an optimal negative sampling strategy from imbalanced data for rare event prediction is studied. The problem of class imbalance in rare event prediction is of considerable importance in various applied problems. The theoretical properties of the proposed method are clearly presented, and its advantages are clearly demonstrated by experimental validation on artificial and real data. All reviewers found the paper useful for the ML community. The authors are encouraged to consider updating the paper based on the reviewers' comments and suggestions.
train
[ "GrlpgpZQNX", "yo5fzk-ff0v", "zxm6TNTnvXS", "T62Sm3GwPvl", "DViPj9LMvgw", "NkC78BnD6JM", "EaF6LmFxciH", "GM5ZcykizS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper considers the estimation of the parameters of a binary response model when the positive instances are rare. Rareness is modeled via a scaling regime on the logits, with the key restriction being that the scaling occurs _uniformly over features_. The paper first shows that the fraction of positive instan...
[ 7, 6, -1, -1, -1, -1, -1, 8 ]
[ 4, 2, -1, -1, -1, -1, -1, 2 ]
[ "nips_2021_v4vuGbNIv71", "nips_2021_v4vuGbNIv71", "T62Sm3GwPvl", "yo5fzk-ff0v", "GrlpgpZQNX", "GM5ZcykizS", "nips_2021_v4vuGbNIv71", "nips_2021_v4vuGbNIv71" ]
nips_2021_huxo_Mqh76
Algorithmic stability and generalization of an unsupervised feature selection algorithm
Feature selection, as a vital dimension reduction technique, reduces data dimension by identifying an essential subset of input features, which can facilitate interpretable insights into learning and inference processes. Algorithmic stability is a key characteristic of an algorithm regarding its sensitivity to perturbations of input samples. In this paper, we propose an innovative unsupervised feature selection algorithm attaining this stability with provable guarantees. The architecture of our algorithm consists of a feature scorer and a feature selector. The scorer trains a neural network (NN) to globally score all the features, and the selector adopts a dependent sub-NN to locally evaluate the representation abilities for selecting features. Further, we present algorithmic stability analysis and show that our algorithm has a performance guarantee via a generalization error bound. Extensive experimental results on real-world datasets demonstrate superior generalization performance of our proposed algorithm to strong baseline methods. Also, the properties revealed by our theoretical analysis and the stability of our algorithm-selected features are empirically confirmed.
accept
This paper proposes an unsupervised feature selection algorithm motivated by _algorithmic stability_, in the sense that the algorithm should not be overly sensitive to changes in the dataset. The algorithm consists of a feature _scorer_ and a feature _selector_, which uses the scorer to select features. These components can be instantiated as neural nets and trained jointly. The paper analyzes the algorithm's _uniform stability_ (the strongest notion of stability), proving that it is $O(1/n)$, which is sufficient for generalization. Experiments show that the algorithm performs well on multiple datasets, and confirms the theoretical claims about stability. The reviews were mixed, but after much discussion, and back-and-forth with the authors, **I believe the correct decision is to accept.** Let me first address some concerns raised by Reviewer 4y4c (paraphrased for brevity). 1. _The stability bound is uninformative because it is not fully controlled by the regularization parameter, $\lambda_1$; in particular, if over-regularized ($\lambda_1 \to \infty$), the bound does not go to zero._ Since the stability bound is $O(\frac{1}{n} + \frac{1}{\lambda_1 n})$, over-regularizing does not make it vanish. However, $O(1/n)$-uniform stability is sufficient for generalization, so I don't see this as an issue. Another reviewer also made a good point about this: "I have to remind myself that the role of the regularizer is not to control complexity. In particular, if the regularization term penalized the complexity of the model, then taking $\lambda_1 \to \infty$ should have vanishing stability, independent of $n$. But in their formulation, taking $\lambda_1$ to infinity would mean that only the feature score is effectively minimized and it is not surprising that $O(1/n)$ is what you might expect in this setting." 2. _According to the stability bound, $\lambda_1$ can be any constant (w.r.t. $n$) and guarantee stability. So why not just take $\lambda_1$ as small as possible? 
How then should one choose $\lambda_1$?_ While it is true that any constant $\lambda_1$ ensures $O(1/n)$ stability, and that this bound vanishes as $n \to \infty$, it is important to remember that what we ultimately care about is generalization error, and that we always have a finite amount of data. Given a certain data size, $n$, we want a value of $\lambda_1$ that balances stability (hence, generalization) with our ability to fit the data (i.e., reduce empirical risk). Determining this optimal value analytically is likely infeasible, so most people do it empirically via cross-validation (like you would any other hyper-parameter). Indeed, this is precisely what the authors do. 3. _The experiments borrow some results from prior work, and in doing so give the proposed method an advantage over some baselines. In particular, the baselines were optimized for a nonlinear decoder, while the proposed method uses a linear decoder, and the evaluation metrics involve linear decoders._ I believe this is simply a misunderstanding. The authors clarified in the discussion period that, "our decoder is linear and shallow, and it is the same as the baseline methods, such as CAE and AEFS, for which we used the original implementation in Ref. [1]." Thus, I think the comparison is fair. Having settled these concerns, I don't see anything particularly wrong with the paper. The theory is sound; the experiments are sound; and the results are positive. That said, why am I not making a stronger Accept recommendation? 1. For starters, the paper could use some polishing. The fact that there were so many misunderstandings indicates that the presentation could be improved. I myself find the notation used in the stability analysis to be a little overwhelming at times. But this can be fixed. 2. I don't find it all that surprising that the algorithm is uniformly stable, given that its objective function is strongly convex (under certain conditions on the hypothesis class). 
This is not to say that every paper must come to an unexpected conclusion, but that it seems straightforward to prove uniform stability in this case; you just need to make the right assumptions and then follow the proof techniques laid out in prior work. That being said, the main insight of the theory is that just optimizing a selector alone is insufficient to guarantee stability (and generalization); one must also optimize the scoring function for stability. That takeaway is valuable, even if the proof follows from prior work. 3. None of the reviews seemed all that excited about the paper. Not every paper has to be revolutionary, but at a competitive conference, such as NeurIPS, it helps to have at least one high-scoring review. So, consider my recommendation "accept if possible." Note for the authors: In Assumptions 1 & 2, the order of quantifiers seems off. They go, roughly, "$\forall x$, $\exists$ ..." This means that for any given $x$, there exists something (either a subset of the data or a constant) such that some property holds. But I think for the bound to hold uniformly what they really want is, "$\exists$ ... such that $\forall x$ ..."
train
[ "g_h1wsHoglv", "5d5FP2gRhY_", "ZwRNViZsji9", "rx5nzejovGn", "cI5eLKaMPl", "k752UQOdFxC", "Btjeyorn0NZ", "78d8o2f4I5", "C_32FUYkVTM", "zXxQQis5cMd", "G54wabEwxqd", "4aifslQLwWS", "d0SXkIFLRM", "jf2OYg_xwTZ", "426Wr7WwkQp", "P4QRCGD4Lt1", "nwPUdWf-ZC4", "wK4Vpj5KLSE", "Qw4d3XEYqfB"...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "...
[ " It was our pleasure to explain them. We sincerely appreciate your valuable time and effort in reviewing our responses, discussing, clarifying the uncertainties, and giving insightful comments. Thanks!", " Thanks for the explanation!", " Our pleasure to clarify these questions.", "This paper proposes an unsu...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "5d5FP2gRhY_", "k752UQOdFxC", "cI5eLKaMPl", "nips_2021_huxo_Mqh76", "78d8o2f4I5", "Btjeyorn0NZ", "nips_2021_huxo_Mqh76", "C_32FUYkVTM", "zXxQQis5cMd", "G54wabEwxqd", "4aifslQLwWS", "d0SXkIFLRM", "jf2OYg_xwTZ", "Qw4d3XEYqfB", "P4QRCGD4Lt1", "wK4Vpj5KLSE", "64ZYsRE3ZMn", "53KKrlaYj4U...
nips_2021_6k0bAbb6m6
On learning sparse vectors from mixture of responses
Nikita Polyanskii
accept
Based on the reviews and subsequent discussion, it is clear that this paper is tackling a non-trivial problem and contains good results. However there are some concerns with clarity and writing, and we ask the authors to incorporate the suggestions given by the reviewers. There is also a suggestion to make it clear in the abstract and in the beginning of the introduction that the problems considered were actually studied before, and are not introduced by the authors.
train
[ "BIOTvZruoT", "BfOO-cVWApa", "Hxcv-Y2RRO1", "w-1tpThrr3N", "vs3kRVNCxLH", "oOXJpV22bRn", "rQ0lqoM8PXT", "4NGmICOXkD_", "lk7mnyXAC9w", "1ro10yElnTf", "hLhPY12IcXz", "qezgERwpe07", "0BoXa1epi3R", "9VCt2tP2vL" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper provides statistical upper bounds for the problem of learning a mixture of sparse vectors from signs of linear queries, both in the noiseless and noisy settings. The paper studies the problem of proper learning of a mixture of sparse vectors from possibly noisy 1-bit quantized linear queries. \n\nIn th...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 6, 4 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 2 ]
[ "nips_2021_6k0bAbb6m6", "Hxcv-Y2RRO1", "w-1tpThrr3N", "lk7mnyXAC9w", "4NGmICOXkD_", "BIOTvZruoT", "9VCt2tP2vL", "0BoXa1epi3R", "hLhPY12IcXz", "qezgERwpe07", "nips_2021_6k0bAbb6m6", "nips_2021_6k0bAbb6m6", "nips_2021_6k0bAbb6m6", "nips_2021_6k0bAbb6m6" ]
nips_2021_1QhRTsqYPB
Convergence and Alignment of Gradient Descent with Random Backpropagation Weights
Stochastic gradient descent with backpropagation is the workhorse of artificial neural networks. It has long been recognized that backpropagation fails to be a biologically plausible algorithm. Fundamentally, it is a non-local procedure---updating one neuron's synaptic weights requires knowledge of synaptic weights or receptive fields of downstream neurons. This limits the use of artificial neural networks as a tool for understanding the biological principles of information processing in the brain. Lillicrap et al. (2016) propose a more biologically plausible "feedback alignment" algorithm that uses random and fixed backpropagation weights, and show promising simulations. In this paper we study the mathematical properties of the feedback alignment procedure by analyzing convergence and alignment for two-layer networks under squared error loss. In the overparameterized setting, we prove that the error converges to zero exponentially fast, and also that regularization is necessary in order for the  parameters to become aligned with the random backpropagation weights. Simulations are given that are consistent with this analysis and suggest further generalizations. These results contribute to our understanding of how biologically plausible algorithms might carry out weight learning in a manner different from Hebbian learning, with performance that is comparable with the full non-local backpropagation algorithm.
accept
This paper studies the convergence of Feedback Alignment (FA), an alternative to backpropagation that has been found empirically to work well but, until very recently, had not received any theoretical justification, as the weights of the network being trained are replaced with random matrices in the back-propagation step. This paper is an interesting addition to the study of these "biologically plausible" algorithms. The authors study two-layer neural networks in the over-parametrised regime and mainly provide two results: a) a proof of linear convergence of the loss to zero error and b) an analysis of the alignment between the second-layer weights of the network and the feedback matrix used in the feedback alignment algorithm. The consensus among the reviewers was that, despite FA not being a mainstream algorithm, given its interest as a potential solution to the weight transport problem, the study was of interest to the NeurIPS community, and that the paper provided a worthy contribution. Given the rigour and the level of detail in the paper, this work was judged to also provide a blueprint for theoretical analysis of other biologically plausible learning rules, and therefore has the potential to impact future research. The interactions between the authors and the reviewers during the rebuttal phase saw some reviewers increase their grades for the paper. Currently, all reviewers are largely in favor of acceptance, and thus the area chair recommends acceptance as well.
train
[ "S1BVuzYpOZx", "XdGPyUJK-Cp", "1NSb26xYz8F", "gQTq5bQfNFN", "Ig8dlU05B7S", "_bGejnAdhj", "V6rEUx8ni10", "Vm0keWebwih", "CLpxG3km41Z", "AJPaNOqdwoj", "BB8qwhdRxX_", "ZgfEd2zwNe", "EkUVQh_tdrV", "Fsb5hg2ctdL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper uses analysis techniques from Neural Tangent Kernels to prove that the feedback alignment algorithm converges in neural networks with a single non-linear hidden layer. In addition, they prove that the forward weights do not converge to alignment with the random backward weights. \n\n This is a well-writ...
[ 7, -1, 6, -1, -1, 7, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ 3, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "nips_2021_1QhRTsqYPB", "1NSb26xYz8F", "nips_2021_1QhRTsqYPB", "ZgfEd2zwNe", "BB8qwhdRxX_", "nips_2021_1QhRTsqYPB", "CLpxG3km41Z", "1NSb26xYz8F", "_bGejnAdhj", "S1BVuzYpOZx", "EkUVQh_tdrV", "Fsb5hg2ctdL", "nips_2021_1QhRTsqYPB", "nips_2021_1QhRTsqYPB" ]
nips_2021_5Ld5bRB9jzY
Adder Attention for Vision Transformer
Transformer is a new kind of computation paradigm for deep learning which has shown strong performance on a large variety of computer vision tasks. However, compared with conventional deep models (e.g., convolutional neural networks), vision transformers require more computational resources and cannot be easily deployed on mobile devices. To this end, we propose to reduce the energy consumption using adder neural networks (AdderNet). We first theoretically analyze the mechanism of self-attention and the difficulty of applying the adder operation to this module. Specifically, the feature diversity, i.e., the rank of the attention map, cannot be well preserved using only additions. Thus, we develop an adder attention layer that includes an additional identity mapping. With the new operation, vision transformers constructed using additions can also provide powerful feature representations. Experimental results on several benchmarks demonstrate that the proposed approach can achieve highly competitive performance to that of the baselines while achieving an approximately 2~3× reduction in energy consumption.
accept
The reviewers are all in agreement that this paper provides novel and interesting results, exploring a theoretical analysis of self-attention and introducing an adder attention layer that helps reduce energy requirements. I would encourage the authors to incorporate some of the additional results on NLP tasks and CNN comparisons into their main text.
train
[ "vd_VCSJieRm", "wzgwoFNT621", "MP5vTM_AYP3", "hSGfVxkn327", "ymN4IafkXOM", "tcaGexNpi6X", "eGrUEwetntK", "xi3nFtZ-3mx", "OkNi04rO95", "dBppNprxg-S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " This paper designs the novel Adder paradigm for vision transformer. Specifically, the attention module is conducted with additions which saves energy consumption. This paper demonstrate the expressive power of addition computation paradigm both theoretically and experimentally on vision transformers. Experiments ...
[ 7, 7, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_5Ld5bRB9jzY", "nips_2021_5Ld5bRB9jzY", "tcaGexNpi6X", "ymN4IafkXOM", "OkNi04rO95", "dBppNprxg-S", "wzgwoFNT621", "vd_VCSJieRm", "nips_2021_5Ld5bRB9jzY", "nips_2021_5Ld5bRB9jzY" ]
nips_2021_gRlsFQMo_ze
Reverse engineering learned optimizers reveals known and novel mechanisms
Learned optimizers are parametric algorithms that can themselves be trained to solve optimization problems. In contrast to baseline optimizers (such as momentum or Adam) that use simple update rules derived from theoretical principles, learned optimizers use flexible, high-dimensional, nonlinear parameterizations. Although this can lead to better performance, their inner workings remain a mystery. How is a given learned optimizer able to outperform a well tuned baseline? Has it learned a sophisticated combination of existing optimization techniques, or is it implementing completely new behavior? In this work, we address these questions by careful analysis and visualization of learned optimizers. We study learned optimizers trained from scratch on four disparate tasks, and discover that they have learned interpretable behavior, including: momentum, gradient clipping, learning rate schedules, and new forms of learning rate adaptation. Moreover, we show how dynamics and mechanisms inside of learned optimizers orchestrate these computations. Our results help elucidate the previously murky understanding of how learned optimizers work, and establish tools for interpreting future learned optimizers.
accept
This paper was very actively discussed amongst the reviewers. Points in favor were summarized by reviewer QfSG as follows: 1. The analytical tools presented in the paper are a strong foundation for characterizing learned optimizers. Such tools are lacking in the field. In addition to the tools themselves (which are useful for other L2O researchers), the systematic approach taken to the experiments and explanations could also be helpful for others seeking to contribute such tools (for and beyond L2O methods). 2. The methodology, experiments, and conclusions drawn from the experiments are well-grounded and solid, especially if one takes into consideration the significant additional depth provided in the appendix. 3. The results show that learned optimizers match or mimic techniques in state-of-the-art optimizers, despite being trained on diverse types of optimization problems, which gives merit to both L2O approaches and the tools presented. That is, even if it is not surprising, given how learned optimizers are derived, that they learn techniques similar to those of known optimizers, that finding brings merit to L2O approaches as recovering 'trusted' approaches. 4. The authors demonstrate applicability beyond just the 4 tasks, and responded to the concerns. Reviewer sWNV replied to these 4 points as follows: 1. I agree that these tools are lacking and that having a paper like this (and especially a toolbox) would lay a good foundation for a better understanding of learned optimizers. This is an undeniable contribution and I'm happy that this paper tackles it. 2. The techniques provided in the paper are good; however, some questions still remain. For example, the momentum analysis (sec 5.1 and App C) relies on the linear approximation of the recurrent dynamics. It is not clear to me that this approximation is useful and actually holds in practice. 
It would be useful to actually run the momentum optimizer with the parameters recovered by this analysis and see if it is close to the training dynamics of the LO. Of course, this would also not be fully accurate, since the LO also does much more than just momentum... 3. I think that it is a bit of an overstatement to say that LOs match or mimic known optimizers; they are specifically being tested to match or mimic the specific property of a given optimizer. I would love to see an example where this does not happen and the LO solely memorizes the task without exhibiting the properties of the known optimizers. 4. My main criticism is that, from the title, introduction, and discussion, it seems like the analysis holds and applies to many learned optimizers, whereas in reality only one is being analyzed. This by itself is not a problem and doesn't deny the technical contribution, but it is misleading to the reader (especially the cursory reader that relies on Abstract + Intro for a quick understanding of the paper). I would really encourage the authors to pivot the narrative of the paper and instead focus on the techniques and methods of general LO analysis instead of claims about learned optimizers in general. To this end, I would like to re-iterate that the release of a general and easy-to-use toolbox that can generate an extensive report given a user-provided LO and a task (ideally a benchmark of many tasks) should be an essential part of this paper. After this internal discussion, the authors offered to change the narrative and title of the paper to alleviate the concerns in 4. above. Overall, the paper presents a solid contribution to the field of learned optimizers, and I recommend acceptance. I encourage the authors to act on the suggestions by reviewer sWNV, especially adapting the narrative + title.
train
[ "hUbdfa49z2", "OmCedmA23et", "JZBuJyPxWi", "brxgBAIVnXM", "0D2rScdQNsp", "8aVjG_BwmKZ", "nXy0mp6Tw7S", "kt758r7NNBo", "qQkQBY9MmF2", "A9eEyYRf06", "taxTMBsDWG9", "ZQantEmxLfR", "pYsMWMRHz9z", "WUY8_MGVT3O" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Hi,\n\nGreat questions/comments. Responses below:\n- This is a great sanity check, and empirically, we are fairly confident that you would exactly recover things like momentum if that was the optimizer used (as opposed to a learned optimizer). This depends on the optimization problem used to generate the data for...
[ -1, 6, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "JZBuJyPxWi", "nips_2021_gRlsFQMo_ze", "brxgBAIVnXM", "0D2rScdQNsp", "taxTMBsDWG9", "nips_2021_gRlsFQMo_ze", "A9eEyYRf06", "nips_2021_gRlsFQMo_ze", "8aVjG_BwmKZ", "WUY8_MGVT3O", "OmCedmA23et", "pYsMWMRHz9z", "nips_2021_gRlsFQMo_ze", "nips_2021_gRlsFQMo_ze" ]
nips_2021_Sgqb8b8swh7
Matching a Desired Causal State via Shift Interventions
Transforming a causal system from a given initial state to a desired target state is an important task permeating multiple fields including control theory, biology, and materials science. In causal models, such transformations can be achieved by performing a set of interventions. In this paper, we consider the problem of identifying a shift intervention that matches the desired mean of a system through active learning. We define the Markov equivalence class that is identifiable from shift interventions and propose two active learning strategies that are guaranteed to exactly match a desired mean. We then derive a worst-case lower bound for the number of interventions required and show that these strategies are optimal for certain classes of graphs. In particular, we show that our strategies may require exponentially fewer interventions than the previously considered approaches, which optimize for structure learning in the underlying causal graph. In line with our theoretical results, we also demonstrate experimentally that our proposed active learning strategies require fewer interventions compared to several baselines.
accept
The paper considers the problem of transforming a causal system from a given initial state to a desired target state using a set of continuous shift interventions to match a desired mean when the causal graph structure is unknown. The authors provide active learning strategies for choosing interventions that are guaranteed to exactly match a desired mean. Post rebuttal, the reviewers seem to be on the positive side and there is some consensus to accept the paper. The paper provides an interesting contribution, and the fact that, for certain classes of problems, the proposed method requires exponentially fewer interventions than the state of the art is interesting. There is some concern about the restrictiveness of the assumptions made by the authors. Nonetheless, I am inclined to accept the paper.
train
[ "-rj2E0G83PN", "40hjIAvW_qz", "-j30XNRPtbr", "s1IalM3Tq3a", "6d67L1CmaUR", "pn2PsPk5fBt", "ko_hYU_tz2b", "yeD4ew9YBz", "K7D0CXfFt2C", "lwzrT_IRsmu", "DnSAzBQuIgf", "EnHknyMEO8X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper focuses on the problem of identifying shift interventions that match the desired mean of a system by active learning. It provides both theoretical guarantees that there always exists a unique shift intervention, and practical algorithms. \n Overall, the paper is well-written, and to the best of ...
[ 6, 6, -1, -1, -1, 7, -1, 7, -1, -1, -1, -1 ]
[ 4, 4, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1 ]
[ "nips_2021_Sgqb8b8swh7", "nips_2021_Sgqb8b8swh7", "lwzrT_IRsmu", "40hjIAvW_qz", "ko_hYU_tz2b", "nips_2021_Sgqb8b8swh7", "DnSAzBQuIgf", "nips_2021_Sgqb8b8swh7", "yeD4ew9YBz", "40hjIAvW_qz", "pn2PsPk5fBt", "-rj2E0G83PN" ]
nips_2021_R6U4-Qkcg21
Unsupervised Noise Adaptive Speech Enhancement by Discriminator-Constrained Optimal Transport
This paper presents a novel discriminator-constrained optimal transport network (DOTN) that performs unsupervised domain adaptation for speech enhancement (SE), which is an essential regression task in speech processing. The DOTN aims to estimate clean references of noisy speech in a target domain, by exploiting the knowledge available from the source domain. The domain shift between training and testing data has been reported to be an obstacle to learning problems in diverse fields. Although rich literature exists on unsupervised domain adaptation for classification, the methods proposed, especially in regressions, remain scarce and often depend on additional information regarding the input data. The proposed DOTN approach tactically fuses the optimal transport (OT) theory from mathematical analysis with generative adversarial frameworks, to help evaluate continuous labels in the target domain. The experimental results on two SE tasks demonstrate that by extending the classical OT formulation, our proposed DOTN outperforms previous adversarial domain adaptation frameworks in a purely unsupervised manner.
accept
This paper proposes a discriminator-constrained optimal transport network (DOTN) for unsupervised speech enhancement. The authors apply joint distribution optimal transport with a discriminator under Wasserstein GAN to generate enhanced speech signals. The novelty of the work is the combination of joint distribution OT and WGAN for both domain adaptation and unsupervised learning simultaneously in the application of speech enhancement, which is a regression setting. The authors conduct experiments on the Voice Bank and TIMIT datasets under various conditions and show good performance under the PESQ and STOI metrics. The paper is well written. The work is theoretically solid and the performance under PESQ and STOI is decent. However, there are a few outstanding concerns. First of all, the authors should make it clear that both joint distribution optimal transport and WGAN are existing techniques by directly citing the previous work when mathematically formulating the problem in Section 3.2, even though they are mentioned in the related work. Second, there has been a significant concern regarding the quality of the speech generated by DOTN in the demo, where distortions are quite noticeable perceptually. It is not clear whether this is due to the unsupervised setting. As suggested by one of the reviewers, the authors uploaded the comparative performance between MetricGAN+ and the proposed DOTN in terms of PESQ and STOI. It shows that DOTN has slightly better STOI and is quite a bit lower in PESQ, since the SOTA MetricGAN+ is optimized for PESQ. This appears to clear up the concern to some degree. All reviewers consider the work interesting and the authors were responsive in the discussion period. I would recommend acceptance, but the authors should include the additional results and discussion from the rebuttal in the revised version. Furthermore, it would be greatly helpful to include subjective evaluation results in the revision to make the work more convincing.
val
[ "oGRnvPzkvgv", "oKMeAEzMAJs", "z2ww5OZlPFj", "dIXB1E7uEvL", "ACXeyM9rzQx", "qr10BjRtLxj", "sK0AtyZzCdo", "JG2KAkJ2bW2", "yagHiLgi1t", "-N2PUou938", "LNnELtgl8ce", "FaBHkyNUlU", "GAV2escP_5V", "cUAzUuYoMO0" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " > **(update 1) The follow-up MetricGAN+ (SOTA) results provided show that the approach achieves slightly better STOI scores, but significantly worse PESQ scores. However, the fact that the MetricGAN+ is optimized on PESQ makes this comparison more difficult to interpret.**\n\n- Indeed, direct optimization of Metr...
[ -1, 5, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 6, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3, 4 ]
[ "ACXeyM9rzQx", "nips_2021_R6U4-Qkcg21", "dIXB1E7uEvL", "ACXeyM9rzQx", "qr10BjRtLxj", "oKMeAEzMAJs", "yagHiLgi1t", "nips_2021_R6U4-Qkcg21", "JG2KAkJ2bW2", "oKMeAEzMAJs", "cUAzUuYoMO0", "GAV2escP_5V", "nips_2021_R6U4-Qkcg21", "nips_2021_R6U4-Qkcg21" ]
nips_2021_iYzsR0JNaa2
Optimality of variational inference for stochastic block model with missing links
Variational methods are extremely popular in the analysis of network data. Statistical guarantees obtained for these methods typically provide asymptotic normality for the problem of estimation of global model parameters under the stochastic block model. In the present work, we consider the case of networks with missing links that is important in application and show that the variational approximation to the maximum likelihood estimator converges at the minimax rate. This provides the first minimax optimal and tractable estimator for the problem of parameter estimation for the stochastic block model with missing links. We complement our results with numerical studies of simulated and real networks, which confirm the advantages of this estimator over current methods.
accept
The paper proves that mean-field variational inference is minimax optimal for estimating the parameters of the stochastic block model (SBM) even when some of the links are missing. While the mean-field variational method has been studied for the SBM, this paper still addresses an interesting question. In particular, it shows that there is no computational gap in the problem of link prediction for the SBM by providing an estimator that is minimax optimal and computationally feasible (this question had been open for a while). Hence, given this contribution, I agree with the reviewers that the paper should be accepted at NeurIPS.
train
[ "BwM_B2ZJj6f", "7aFJKrT5zV5", "9YLPkvdYom", "Pj3SkRAMYqw", "YxbzjNCmEUJ", "k5nenMXIoCg", "rcPu9NHhRua", "op0sy6Sr1p", "JGiEFlx4kSb", "Lr2gMKnX0A-", "0Rsnbwp9C1j" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The main contribution of this paper is to propose and analyze a variational approximation (VA) method for the inference of the SBM. The main result is to establish that VA leads to a tractable and minimax optimal algorithm, specifically the latter is based on the EM principle with a variational approximation schem...
[ 6, -1, 7, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "nips_2021_iYzsR0JNaa2", "Pj3SkRAMYqw", "nips_2021_iYzsR0JNaa2", "YxbzjNCmEUJ", "k5nenMXIoCg", "9YLPkvdYom", "BwM_B2ZJj6f", "0Rsnbwp9C1j", "Lr2gMKnX0A-", "nips_2021_iYzsR0JNaa2", "nips_2021_iYzsR0JNaa2" ]
nips_2021_UZgQhsTYe3R
Policy Learning Using Weak Supervision
Most existing policy learning solutions require the learning agents to receive high-quality supervision signals, e.g., rewards in reinforcement learning (RL) or high-quality expert demonstrations in behavioral cloning (BC). These quality supervisions are either infeasible or prohibitively expensive to obtain in practice. We aim for a unified framework that leverages the available cheap weak supervisions to perform policy learning efficiently. To handle this problem, we treat the "weak supervision" as imperfect information coming from a peer agent, and evaluate the learning agent's policy based on a "correlated agreement" with the peer agent's policy (instead of simple agreements). Our approach explicitly punishes a policy for overfitting to the weak supervision. In addition to theoretical guarantees, extensive evaluations on tasks including RL with noisy reward, BC with weak demonstrations, and standard policy co-training (RL + BC) show that our method leads to substantial performance improvements, especially when the complexity or the noise of the learning environments is high.
accept
This paper was an exemplary case of the value of discussion between the reviewers and the authors. After multiple clarifications, additional experiments, and even code snippet examples, the reviewers agreed that the paper provides a valuable contribution to the problem of weak supervision in RL. I encourage the authors to address the reviewers' extensive comments and update the paper for the final revision.
test
[ "TDqaQFVk5ta", "7HJi_T2mZMv", "S0WWa3q3MiI", "qR5JshBAgyb", "gjiV2qp_Yq0", "4aofirAj8b9", "Hf4Xm0BWa2", "6qy6fGcqojt", "FxWNCK0An3w", "jUEZ5mhBG5j", "kM5EkgFFJO", "3nojF6ZOX3S", "QsrqeZB2JzZ", "gduG3fAatLF", "QaPbr3jsajj", "NAGMqGe8NUQ", "iuQqB6NgRsL", "ZQ2Zf_Pilq0", "D71NA_cov3R...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_rev...
[ "This paper proposes a method for RL and imitation learning in the setting where the reward function or expert actions are corrupted with noise. The main idea is to regularize the learned policy to have *high* loss on unaligned state-action-reward examples. Experiments show that the proposed method outperforms base...
[ 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_UZgQhsTYe3R", "qR5JshBAgyb", "nips_2021_UZgQhsTYe3R", "gjiV2qp_Yq0", "4aofirAj8b9", "Hf4Xm0BWa2", "ZQ2Zf_Pilq0", "FxWNCK0An3w", "jUEZ5mhBG5j", "kM5EkgFFJO", "3nojF6ZOX3S", "QsrqeZB2JzZ", "QaPbr3jsajj", "nips_2021_UZgQhsTYe3R", "NAGMqGe8NUQ", "D71NA_cov3R", "ZQ2Zf_Pilq0", ...
nips_2021_LKoMTwTuQnC
Chasing Sparsity in Vision Transformers: An End-to-End Exploration
Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
accept
Four experts reviewed this paper and gave ratings 6, 6, 6, and 7, respectively. The reviewers expressed concerns about novelty, writing, and some details, but they generally appreciated the study of sparsity in Transformers. Some reviewers especially liked the results. AC felt that the concerns can be addressed in a light revision of the paper after reading the rebuttal and the post-rebuttal discussions. Hence, the decision is to recommend the paper for acceptance. The authors are encouraged to make necessary changes in the camera-ready to address the reviewers' questions to the best of their ability. We congratulate the authors on the acceptance of their paper!
train
[ "olC4pBAYz-U", "WZOAln_Z25U", "h6TnblNCC6Y", "_j-MQQS0Os", "G9n_zDes9v5", "kp4AaMckD9L", "k5zUvDC_2z", "s626m6vQFky", "z97vvbGRHVM", "16Qz1vwgjXg", "9DQDHb7H0vq" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work proposes sparse ViT algorithms, named as SViTE and its variants S^2ViTE, and SViTE+, to explore high-quality sparse patterns in both ViT’s architecture and input token embeddings. The proposed methods can jointly optimize model parameters and explore connectivity throughout training. Experiments are cond...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, 6, 7 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_LKoMTwTuQnC", "G9n_zDes9v5", "olC4pBAYz-U", "nips_2021_LKoMTwTuQnC", "k5zUvDC_2z", "9DQDHb7H0vq", "_j-MQQS0Os", "olC4pBAYz-U", "16Qz1vwgjXg", "nips_2021_LKoMTwTuQnC", "nips_2021_LKoMTwTuQnC" ]
nips_2021_w1FvEPcwTnI
Graphical Models in Heavy-Tailed Markets
Jose Vinicius de Miranda Cardoso, Jiaxi Ying, Daniel Palomar
accept
This paper formulates a graphical model learning problem by maximizing the likelihood of a multivariate Student-t distribution subject to constraints on the Laplacian-structured parameter matrix. The formulation is non-convex. An explicit ADMM iteration is derived to arrive at a stationary point. The primary strength of the paper is in its multiple experiments demonstrating, in the context of financial time series, the utility of its modeling assumptions over the more commonly studied Gaussian graphical model setup. The main weakness of the paper is that it does not contain a great deal of conceptual depth or technical novelty: the ideas, and their implementation, appear to be relatively straightforward. Overall, the paper seems likely to have impact by spurring further work on estimation of graphical models of various types.
train
[ "gA1Q-0esk0D", "gcEJYtXO8w6", "DZgN9UTH-X5", "34HUIuRLCn7", "fpupL80il_v", "wwcF7L-o5--", "ne4Rfv6MXYK", "FbA7a4KKCLT", "K5-yqx6NC1n" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thanks for the response. Based on the points you mentioned, I would like to increase the score to 7.", "The paper proposes a new estimator for learning graphical models under the t-distribution assumption for data generation process. For the new estimator, an ADMM-based algorithm is designed to solve the associ...
[ -1, 7, -1, -1, 7, -1, -1, -1, 8 ]
[ -1, 4, -1, -1, 5, -1, -1, -1, 3 ]
[ "DZgN9UTH-X5", "nips_2021_w1FvEPcwTnI", "34HUIuRLCn7", "FbA7a4KKCLT", "nips_2021_w1FvEPcwTnI", "fpupL80il_v", "K5-yqx6NC1n", "gcEJYtXO8w6", "nips_2021_w1FvEPcwTnI" ]
nips_2021_k-0oq5eNjh
A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
The advancement of generative radiance fields has pushed the boundary of 3D-aware image synthesis. Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images. Despite the progress, they often fall short of capturing accurate 3D shapes due to the shape-color ambiguity, limiting their applicability in downstream tasks. In this work, we address this ambiguity by proposing a novel shading-guided generative implicit model that is able to learn a starkly improved shape representation. Our key insight is that an accurate 3D shape should also yield a realistic rendering under different lighting conditions. This multi-lighting constraint is realized by modeling illumination explicitly and performing shading with various lighting conditions. Gradients are derived by feeding the synthesized images to a discriminator. To compensate for the additional computational burden of calculating surface normals, we further devise an efficient volume rendering strategy via surface tracking, reducing the training and inference time by 24% and 48%, respectively. Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis while capturing accurate underlying 3D shapes. We demonstrate improved performance of our approach on 3D shape reconstruction against existing methods, and show its applicability on image relighting. Our code is available at https://github.com/XingangPan/ShadeGAN.
accept
The submission initially received mixed reviews. After the rebuttal, all reviewers became positive (though reviewer YNf7 didn't update the score). The AC agrees with the reviewers that the proposed idea is neat and well-executed. The authors are strongly encouraged to address the remaining concerns in the camera-ready version. This includes adding the results in response to reviewers TVTi, mEVj, and YNf7, addressing the concern on view dependence, among others.
train
[ "vQ7EzXotkp0", "lYSGXLxU7Fe", "eJYTgTJy6ri", "-ZIcKzsJoJ6", "FmkoePGg4Fg", "ca_cmaBmpeo", "Ch2VwYtdLli", "O0QxixIYUXH", "1bjRr1hLlMX", "WXaTsvnQ143", "V0vEmRSgXP2", "telxtP1FiYf", "tX3yJiXY2_m", "HfDwKD0oL2H", "aQuyDJ81KRJ", "z7MQHa-UZ3" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer again for leaning towards acceptance. We would like to remind the reviewer to kindly update final rating before the end of the discussion period. Thanks.", " Thanks for your comments. We will align the color space in the revision.", " Thank you for your feedback.\n\nWe will include resul...
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "1bjRr1hLlMX", "O0QxixIYUXH", "-ZIcKzsJoJ6", "tX3yJiXY2_m", "nips_2021_k-0oq5eNjh", "telxtP1FiYf", "1bjRr1hLlMX", "V0vEmRSgXP2", "WXaTsvnQ143", "z7MQHa-UZ3", "aQuyDJ81KRJ", "FmkoePGg4Fg", "HfDwKD0oL2H", "nips_2021_k-0oq5eNjh", "nips_2021_k-0oq5eNjh", "nips_2021_k-0oq5eNjh" ]
nips_2021_kzPtpIpF8o
XCiT: Cross-Covariance Image Transformers
Following their success in natural language processing, transformers have recently shown much promise for computer vision. The self-attention operation underlying transformers yields global interactions between all tokens, i.e., words or image patches, and enables flexible modelling of image data beyond the local interactions of convolutions. This flexibility, however, comes with a quadratic complexity in time and memory, hindering application to long sequences and high-resolution images. We propose a "transposed" version of self-attention that operates across feature channels rather than tokens, where the interactions are based on the cross-covariance matrix between keys and queries. The resulting cross-covariance attention (XCA) has linear complexity in the number of tokens, and allows efficient processing of high-resolution images. Our cross-covariance image transformer (XCiT) is built upon XCA. It combines the accuracy of conventional transformers with the scalability of convolutional architectures. We validate the effectiveness and generality of XCiT by reporting excellent results on multiple vision benchmarks, including image classification and self-supervised feature learning on ImageNet-1k, object detection and instance segmentation on COCO, and semantic segmentation on ADE20k. We will open-source our code and trained models to reproduce the reported results.
accept
Initially, the paper received three acceptances and one rejection. One reviewer had a concern about the novelty of the paper given its similarity with EA. Upon rebuttal and discussion, the reviewer converged with the other reviewers on the novelty of this paper. The AC agrees with the reviewers and recommends accepting the paper. The authors are encouraged to improve the final version of the paper following the suggestions from the reviewers.
train
[ "-H_4SJN__Rt", "-UZBr5BHuRK", "zlKQJW2BpWO", "9WBfWKcpc2Q", "SFNg5-4PyPs", "rUm4thCaNB", "zOXR7t8_eqi", "a7i3t3oXqyW", "blwOBRlchjC" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an efficient method to compute self-attention in Transformers without sacrificing accuracy. The idea is to compute the covariance self-attention over the feature dimension followed by a 3x3 conv to model the patch interaction. With additional techniques such as feature grouping and L2 norm, the...
[ 7, 6, -1, -1, -1, -1, -1, 7, 7 ]
[ 3, 5, -1, -1, -1, -1, -1, 5, 5 ]
[ "nips_2021_kzPtpIpF8o", "nips_2021_kzPtpIpF8o", "SFNg5-4PyPs", "-H_4SJN__Rt", "-UZBr5BHuRK", "blwOBRlchjC", "a7i3t3oXqyW", "nips_2021_kzPtpIpF8o", "nips_2021_kzPtpIpF8o" ]
nips_2021_YXy_2b5wufe
Row-clustering of a Point Process-valued Matrix
Structured point process data harvested from various platforms poses new challenges to the machine learning community. To cluster repeatedly observed marked point processes, we propose a novel mixture model of multi-level marked point processes for identifying potential heterogeneity in the observed data. Specifically, we study a matrix whose entries are marked log-Gaussian Cox processes and cluster rows of such a matrix. An efficient semi-parametric Expectation-Solution (ES) algorithm combined with functional principal component analysis (FPCA) of point processes is proposed for model estimation. The effectiveness of the proposed framework is demonstrated through simulation studies and real data analyses.
accept
As pointed out by the reviewers, the paper is missing comparisons against more recent baselines. There are plenty of these baselines that the authors simply did not even mention. The paper also focuses on a single real data example. Including more real-data examples is necessary to assess the generality of the method. I believe that in its current version the paper is not ready yet for publication.
train
[ "-PVOJP1iwt", "gRj5b_2mWpv", "zJwidZm39XZ", "GAmVjlBawvh", "h3e3hLoGNad", "2e74dq7nvT7", "rLn3wBEobw6", "HFOTf_allTJ", "N4yEwrXcweq", "76UVS8ZkxJX", "L0n_I02fI0C", "NFNzLiFXAjp" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a mixture model of Multi-level Marked Point Process (MM-MPP) for repeatedly observed multi-category event sequences and develops an efficient semi-parametric Expectation-Solution (ES) algorithm for event sequence clustering. Experiments through simulation studies and a real-world data analysis s...
[ 4, -1, -1, -1, 5, -1, 7, -1, -1, -1, -1, 5 ]
[ 3, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, 4 ]
[ "nips_2021_YXy_2b5wufe", "HFOTf_allTJ", "GAmVjlBawvh", "2e74dq7nvT7", "nips_2021_YXy_2b5wufe", "nips_2021_YXy_2b5wufe", "nips_2021_YXy_2b5wufe", "-PVOJP1iwt", "h3e3hLoGNad", "NFNzLiFXAjp", "rLn3wBEobw6", "nips_2021_YXy_2b5wufe" ]
nips_2021_HglgPZAYhcG
Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information
One principal approach for illuminating a black-box neural network is feature attribution, i.e. identifying the importance of input features for the network’s prediction. The predictive information of features is recently proposed as a proxy for the measure of their importance. So far, the predictive information is only identified for latent features by placing an information bottleneck within the network. We propose a method to identify features with predictive information in the input domain. The method results in fine-grained identification of input features' information and is agnostic to network architecture. The core idea of our method is leveraging a bottleneck on the input that only lets input features associated with predictive latent features pass through. We compare our method with several feature attribution methods using mainstream feature attribution evaluation experiments. The code is publicly available.
accept
The reviewers are all in consensus that the paper is worthy of acceptance. I want to thank the authors for their extensive responses to the reviews. It appears as though the reviews will benefit the paper and lead to improvements for the final form.
train
[ "4_mkQcP-SPB", "DH-rI_deW98", "HXLkw7DXmMZ", "9qYkMpqPiBe", "IGmwKPRfkYB", "25XF9kGX2nM", "IRL2pcyEJK-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " I truly appreciate the author's extensive response to my questions, and interest in further explaining their methods. I am happy with their responses, and have learned a lot during this process of reviewing. I'd like to change my review from \"Good paper, accept\" to \"Top 50% of accepted NeurIPS papers, clear ac...
[ -1, 8, 7, -1, -1, -1, 6 ]
[ -1, 3, 4, -1, -1, -1, 3 ]
[ "9qYkMpqPiBe", "nips_2021_HglgPZAYhcG", "nips_2021_HglgPZAYhcG", "DH-rI_deW98", "IRL2pcyEJK-", "HXLkw7DXmMZ", "nips_2021_HglgPZAYhcG" ]
nips_2021_jfDaBf8PAE
Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints
Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio
accept
The paper proposes a fast minimum-norm (FMN) attack that works with different norm perturbation models and is robust to hyperparameter choices. Some reviewers had concerns regarding the novelty of the work with respect to prior works (e.g., DDN). The authors clarified some of these concerns in the discussion period. Overall, I think the paper makes good contributions. I suggest the authors take the reviewers' suggestions into account in the final draft of their work.
test
[ "s0_rM1nxsnV", "9NKSdP9hxM", "icWWvppOwWV", "kTh0jWEna5g", "JHYVE7FREab", "taeYh0EEMJm", "F3MKQ0WJE-w", "veJ0ab6JQx", "36yPXk1T-j", "BNNcegebNzb", "qwA2JJUl47t", "DwoJWuj2th" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents an adversarial attack (FMN) that generates minimum-norm adversarial examples. The attack builds on the DDN attack but generalizes to $l_p$ norms beyond the $l_2$ norm that the DDN attack is specialized for.\n\n- For the $l_0$, $l_1$ and $l_\\infty$ norms, the attack has better performance: it h...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "nips_2021_jfDaBf8PAE", "kTh0jWEna5g", "F3MKQ0WJE-w", "JHYVE7FREab", "s0_rM1nxsnV", "DwoJWuj2th", "nips_2021_jfDaBf8PAE", "qwA2JJUl47t", "BNNcegebNzb", "nips_2021_jfDaBf8PAE", "nips_2021_jfDaBf8PAE", "nips_2021_jfDaBf8PAE" ]
nips_2021_wg_kD_nyAF
Uncertainty Quantification and Deep Ensembles
Deep Learning methods are known to suffer from calibration issues: they typically produce over-confident estimates. These problems are exacerbated in the low data regime. Although the calibration of probabilistic models is well studied, calibrating extremely over-parametrized models in the low-data regime presents unique challenges. We show that deep-ensembles do not necessarily lead to improved calibration properties. In fact, we show that standard ensembling methods, when used in conjunction with modern techniques such as mixup regularization, can lead to less calibrated models. This text examines the interplay between three of the most simple and commonly used approaches to leverage deep learning when data is scarce: data-augmentation, ensembling, and post-processing calibration methods. We demonstrate that, although standard ensembling techniques certainly help to boost accuracy, the calibration of deep ensembles relies on subtle trade-offs. We also find that calibration methods such as temperature scaling need to be slightly tweaked when used with deep-ensembles and, crucially, need to be executed after the averaging process. Our simulations indicate that, in the low data regime, this simple strategy can halve the Expected Calibration Error (ECE) on a range of benchmark classification problems when compared to standard deep-ensembles.
accept
This paper studies the quality of the uncertainty of ensembles of deep networks through the lens of calibration. The authors demonstrate that, contrary to common belief, ensembling can lead to even worse calibration. They identify that an altered version of temperature scaling, a common strategy, can help to significantly improve the calibration of these ensembles. During the discussion period a question came up regarding novelty compared to an existing published paper that was very related. Given the timeline of that paper compared to this one, the reviewers were instructed to treat that paper as concurrent work. There was significant discussion for this paper during the author response period; multiple reviewers raised their scores and a consensus of weak accept was reached. Quoting from the discussion, the reviewers all "agreed that 1) the paper studies an important problem (i.e. the interplay between ensembling, data augmentation and post-hoc calibration) that is of high relevance and interest to the NeurIPS community, 2) the empirical evaluation fairly convincingly supports the key hypotheses (with some minor caveats in terms of breadth of experimental settings), and 3) the resulting practical suggestions (e.g. to apply post-hoc calibration after ensembling) are interesting and useful. Furthermore, there were no concerns raised w.r.t. the writing, which was clear and of high quality." However, the reviewers agreed that there is not a strong algorithmic, methodological, or theoretical contribution, and thus no reviewers were willing to champion the paper with a rating higher than a marginal accept. The fact that all reviewers were unwilling to recommend an "accept" rating or champion this paper seems to indicate that while they believe it meets the bar for acceptance, they don't think it will be tremendously impactful in its current form. This appears to be partially owing to the fact that the paper has been on arXiv for a considerable time and in the meantime other papers have reported similar results. Therefore, the recommendation is to accept the paper as a poster. The paper could be more impactful if there was some additional theoretical justification for the observed phenomenon or some additional analysis, e.g. from the loss landscape perspective.
train
[ "Auu0jzLBNe5", "sja402JPY0n", "1zpcEJyg3i9", "pJjzUAQMDLj", "O3ji-Z7DNcz", "ETMixlwT2Zn", "Vm9PiKUMuz7", "a7Iw9e8APKf", "Mpi_u_spOcS", "Ce4wHbZStC", "K5Bw7OlWs5", "5fYERSEb7BK", "aTwQjfydRlQ", "E5zn3ZdYED", "1Dq7rKk69-Y" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ " Thank you for explaining further your comments, it is now clearer what was exactly meant. \n\nYour suggestions have been very useful to us: thank you for the time you have spent reading our manuscript, and for the overall pleasant, efficient and worthwhile review process.", " Dear authors,\n\nThank you for the ...
[ -1, -1, -1, -1, 6, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, 4, -1, 4, -1, -1, -1, -1 ]
[ "1zpcEJyg3i9", "pJjzUAQMDLj", "Vm9PiKUMuz7", "ETMixlwT2Zn", "nips_2021_wg_kD_nyAF", "1Dq7rKk69-Y", "Ce4wHbZStC", "5fYERSEb7BK", "nips_2021_wg_kD_nyAF", "aTwQjfydRlQ", "nips_2021_wg_kD_nyAF", "E5zn3ZdYED", "Mpi_u_spOcS", "K5Bw7OlWs5", "O3ji-Z7DNcz" ]
nips_2021_fzkU-UMKJIv
Directed Probabilistic Watershed
The Probabilistic Watershed is a semi-supervised learning algorithm applied on undirected graphs. Given a set of labeled nodes (seeds), it defines a Gibbs probability distribution over all possible spanning forests disconnecting the seeds. It calculates, for every node, the probability of sampling a forest connecting a certain seed with the considered node. We propose the "Directed Probabilistic Watershed", an extension of the Probabilistic Watershed algorithm to directed graphs. Building on the Probabilistic Watershed, we apply the Matrix Tree Theorem for directed graphs and define a Gibbs probability distribution over all incoming directed forests rooted at the seeds. Similar to the undirected case, this turns out to be equivalent to the Directed Random Walker. Furthermore, we show that in the limit case in which the Gibbs distribution has infinitely low temperature, the labeling of the Directed Probabilistic Watershed is equal to the one induced by the incoming directed forest of minimum cost. Finally, for illustration, we compare the empirical performance of the proposed method with other semi-supervised segmentation methods for directed graphs.
accept
In this paper the authors propose a semi-supervised learning algorithm for segmenting directed graphs, extending previously proposed methods designed for undirected graphs. The proposed method builds upon the watershed algorithm, an approach based on building "separating forests" and defining a Gibbs distribution, as in statistical physics, on this set of forests. The main contribution, beyond the generalisation of this method to directed graphs, is given in the form of related theoretical results. After the first round of rebuttal, the paper was found interesting and looked upon favourably for its theoretical part. The new theorems (for instance an equivalence with the random walker method) were all found strong and interesting by the reviewers. The numerical part, however, was found rather weak and incomplete, and the paper initially received mixed grading. The authors did present a very good case in their rebuttal, giving new additional data and quantifying the comparative performance in their response to all the criticisms of the reviewers. After the rebuttal, it seemed that all reviewers (but one who did not reply, though their criticism seemed to have been addressed) agreed that the paper was stronger and definitely of value to the NeurIPS community, actually increasing their scores and assessment of the paper. Given that the authors successfully answered the reviewers' criticisms, I therefore recommend acceptance to NeurIPS.
train
[ "6-LdIIgB3dg", "3M5jNyGJYmE", "YbrZii9vIEv", "19p0R5IJJ8z", "KZqyS8Xj9xv", "EuVQGydkfwb", "skWkfl64qL", "kw_0xsTPJ7D", "B77xXKMceGU", "IPyKeUagidB", "TrU9HNd5Kv", "byl2clYtbbH", "xJxcxdfeXI", "rgHTitpVuAK", "ZWLGRF4ybNx", "hKKym7WOu6o", "V74PH2wUlPk", "dq94AszKOJE", "k04FRm821Wa"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_re...
[ " I have read the rebuttal and maintain my score. The authors' response to point #2 demystifies things somewhat and is perhaps worth including in the final paper.", "The paper introduces an extension of the Probabilistic Watershed method in semi-supervised learning from undirected graphs to directed graphs. The p...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 4, 4, 4 ]
[ "rgHTitpVuAK", "nips_2021_fzkU-UMKJIv", "19p0R5IJJ8z", "KZqyS8Xj9xv", "EuVQGydkfwb", "kw_0xsTPJ7D", "3dqwiKghvMz", "B77xXKMceGU", "IPyKeUagidB", "byl2clYtbbH", "3dqwiKghvMz", "3M5jNyGJYmE", "bjcjuduAOgf", "k04FRm821Wa", "dq94AszKOJE", "V74PH2wUlPk", "nips_2021_fzkU-UMKJIv", "nips_2...
nips_2021_gDcaUj4Myhn
Laplace Redux - Effortless Bayesian Deep Learning
Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection. The Laplace approximation (LA) is a classic, and arguably the simplest family of approximations for the intractable posteriors of deep neural networks. Yet, despite its simplicity, the LA is not as popular as alternatives like variational Bayes or deep ensembles. This may be due to assumptions that the LA is expensive due to the involved Hessian computation, that it is difficult to implement, or that it yields inferior results. In this work we show that these are misconceptions: we (i) review the range of variants of the LA including versions with minimal cost overhead; (ii) introduce "laplace", an easy-to-use software library for PyTorch offering user-friendly access to all major flavors of the LA; and (iii) demonstrate through extensive experiments that the LA is competitive with more popular alternatives in terms of performance, while excelling in terms of computational cost. We hope that this work will serve as a catalyst to a wider adoption of the LA in practical deep learning, including in domains where Bayesian approaches are not typically considered at the moment.
accept
A well-written paper proposing a software package. The paper additionally gives a compact review of the Laplace approximation (LA) for DNNs and benchmark experiments where the LA works better than, or is comparable to, more accurate approximation methods. The software is easy to use, but reviewers had concerns about its flexibility (e.g., the prior is hardcoded and cannot be changed) and maintainability. We hope that the authors will try to keep "Laplace" useful for the community.
train
[ "-AfkUhGkMli", "3ZRarZ82AW8", "qzhJV2ZEtd5", "H9oi4Iizn7s", "-6FbJBlmTml", "Q5MYpeaCoym", "Y74H2SP7UCL", "zoaYl294lCQ", "qkOJmIbL6pI", "fZGmCoY5cC", "1wqneGZgQ1-", "JCyvoYQDBCw", "7P8AL7htVSV", "DDdyYCzn8dm", "QLfin-Yd_r", "feM2dGGORBS", "v0vVUKY_dw", "q0tTF5u4qEp", "Nb5sj29WzKV"...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_re...
[ " Dear reviewer, thank you very much for your response and again for your insightful review, which will help us improve our paper and library!", " First of all, many thanks for the reply. I wanted to wait for the discussion with the other reviewers before making my final decision. The main concern for me remains ...
[ -1, -1, 6, -1, -1, -1, -1, -1, 7, -1, -1, -1, 7, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, 5 ]
[ "3ZRarZ82AW8", "v0vVUKY_dw", "nips_2021_gDcaUj4Myhn", "-6FbJBlmTml", "zoaYl294lCQ", "JCyvoYQDBCw", "fZGmCoY5cC", "1wqneGZgQ1-", "nips_2021_gDcaUj4Myhn", "QLfin-Yd_r", "q0tTF5u4qEp", "feM2dGGORBS", "nips_2021_gDcaUj4Myhn", "nips_2021_gDcaUj4Myhn", "qkOJmIbL6pI", "7P8AL7htVSV", "Nb5sj2...
nips_2021_o-RYNVOlxA8
Hessian Eigenspectra of More Realistic Nonlinear Models
Given an optimization problem, the Hessian matrix and its eigenspectrum can be used in many ways, ranging from designing more efficient second-order algorithms to performing model analysis and regression diagnostics. When nonlinear models and non-convex problems are considered, strong simplifying assumptions are often made to make Hessian spectral analysis more tractable.This leads to the question of how relevant the conclusions of such analyses are for realistic nonlinear models. In this paper, we exploit tools from random matrix theory to make a precise characterization of the Hessian eigenspectra for a broad family of nonlinear models that extends the classical generalized linear models, without relying on strong simplifying assumptions used previously. We show that, depending on the data properties, the nonlinear response model, and the loss function, the Hessian can have qualitatively different spectral behaviors: of bounded or unbounded support, with single- or multi-bulk, and with isolated eigenvalues on the left- or right-hand side of the main eigenvalue bulk. By focusing on such a simple but nontrivial model, our analysis takes a step forward to unveil the theoretical origin of many visually striking features observed in more realistic machine learning models.
accept
This paper develops new mathematical results in random matrix theory that characterize the limiting spectral distribution of Hessian matrices corresponding to so-called generalized generalized linear models (G-GLMs) in the high-dimensional regime. Using the technique of deterministic equivalents, the authors are able to relax a number of the simplifying distributional assumptions that prior work has used to pursue analyses of this type. The result is an asymptotically exact characterization of the limiting spectrum that exhibits the types of structures that often show up in practice, including isolated outlier eigenvalues, multimodal densities, etc. As such, this paper offers the NeurIPS community a new and powerful perspective for understanding the spectra of high-dimensional models, and potentially paves the way for new developments in second-order optimization and beyond. The phrase in the title of the paper, "more realistic," actually provides a good characterization of the scope and generality of the results: the focus here does not actually touch on realistic models, NNs, etc., but the G-GLM setup and the general distributional assumptions move the needle significantly in the "realistic" direction. While this paper does observe some phenomena that do tend to appear in practical configurations (such as large outliers), it remains unclear whether the explanations offered here are in fact the same ones that underlie the phenomena more generally. An expanded version of this paper would surely benefit from some detailed (empirical) analysis of this question. Still, absent these additions, the techniques and insights derived in this paper will nevertheless be of interest to the community, and this paper will make a great addition to NeurIPS.
train
[ "D263nxOC-cY", "2P37NVFrb5", "XcrwKAAXwVA", "TWfIbILVA-K", "PKoa33ZffUN", "1HCYKKdeJdV", "fPlQFKdB5sp", "wmoJIQkU3X1", "_7jtAhHU6m9", "RJ-hN6Cs48f", "sx-8JBweQtG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the explanations, this is helpful! Nice work!", "The paper explores the spectrum of Hessians for a class of generalized generalized linear models which includes convex problems such as logistic loss and non-convex problems such as the phase retrieval. The authors make some simplifying assumptions ...
[ -1, 8, -1, -1, -1, -1, -1, -1, 9, 8, 9 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "fPlQFKdB5sp", "nips_2021_o-RYNVOlxA8", "PKoa33ZffUN", "wmoJIQkU3X1", "2P37NVFrb5", "sx-8JBweQtG", "RJ-hN6Cs48f", "_7jtAhHU6m9", "nips_2021_o-RYNVOlxA8", "nips_2021_o-RYNVOlxA8", "nips_2021_o-RYNVOlxA8" ]
nips_2021_yw5KKWraUk7
Explicable Reward Design for Reinforcement Learning Agents
We study the design of explicable reward functions for a reinforcement learning agent while guaranteeing that an optimal policy induced by the function belongs to a set of target policies. By being explicable, we seek to capture two properties: (a) informativeness so that the rewards speed up the agent's convergence, and (b) sparseness as a proxy for ease of interpretability of the rewards. The key challenge is that higher informativeness typically requires dense rewards for many learning tasks, and existing techniques do not allow one to balance these two properties appropriately. In this paper, we investigate the problem from the perspective of discrete optimization and introduce a novel framework, ExpRD, to design explicable reward functions. ExpRD builds upon an informativeness criterion that captures the (sub-)optimality of target policies at different time horizons in terms of actions taken from any given starting state. We provide a mathematical analysis of ExpRD, and show its connections to existing reward design techniques, including potential-based reward shaping. Experimental results on two navigation tasks demonstrate the effectiveness of ExpRD in designing explicable reward functions.
accept
The reviewers agree that the problem of reward design is important and particularly relevant for safe reinforcement learning (RL). It is also a timely subject now that the deployment of RL systems in the real world is becoming increasingly common. There was also a consensus among the reviewers that the proposed approach is technically sound. Based on the submission, the reviews, the rebuttal and the subsequent discussions, we strongly advise the authors to make two modifications to the paper. First, the placement of the proposed approach should be much clearer and stated at the outset. Some passages of the text suggest that the motivation underlying the proposed method is to find a reward to replace the original, sparse, reward in order to solve a specific task. The connections made with Ng et al.'s PBRS and variants are one example. One of the requirements of the proposed approach is that one has access to (an approximation of) the optimal value function $\bar{V}^*$. One of the reviewers called attention to this fact and asked how realistic this is in practice, to which the authors responded that "similar to PBRS with $\bar{V}^*$ as the potential function, our framework does require solving the task w.r.t. the original reward function $\bar{R}$". Although it is true that $\bar{V}^*$ can be used to derive PBRS' potential function, and Ng et al. point out that this choice gives rise to a particularly easy problem, their results apply to *any* potential function derived from a function $\Phi$ defined over states (see Theorem 1 in Ng et al. 1999). In fact, Ng et al. emphasize that their method does *not* depend on $\bar{V}^*$, and also illustrate this point in their experiments by handcrafting a $\Phi$ that is intuitive but considerably different from $\bar{V}^*$. PBRS and variants aim at finding a reward $\hat{R}$ to replace the original, potentially sparse, reward $\bar{R}$ in order to solve a specific task.
If we view the proposed approach as an alternative to these methods, the fact that it needs $\bar{V}^*$ is a strong limitation. An alternative view is that we *can* solve the task $\bar{R}$ but want a different version of it, $\hat{R}$, to be used in the future when we solve the task again. This new version of the reward, $\hat{R}$, should have two desirable properties: it should be informative and interpretable. Under this view, the fact that we need $\bar{V}^*$ seems to be less of an issue. Although this interpretation of the proposed approach seems to be favored in parts of the paper, it should be more clearly stated, and any passages that suggest otherwise should be modified to avoid ambiguities. A second modification to the paper we strongly suggest regards the use of sparsity as a proxy for interpretability. This concern came up in most of the reviews and was a point of contention in the discussions. We advise the authors to add the good points made during the discussion to the paper. Specifically, they should clarify why sparsity is a good measure of interpretability and explain how the proposed approach can be modified to accommodate other reward structures. They should also add the provided list with examples of applications where sparsity structure in the reward design is important. We hope the provided feedback will be helpful in making your paper even stronger.
train
[ "NZ2-f-gJTDy", "9yVcvG0YyAL", "tmgsYz3X8r-", "nIMuwr2NQc", "NR7xCQauyO", "zpzF5ovaXCv", "eyr6KseVUye", "yVuhpI0Zl3J", "-7DISNp0Pu6", "GKWl2IIy0ta", "HWL5Dbk5PlN", "qjdAwA2tJD-", "4TcHhUTpv4v", "rcZ3KID8wXH", "0rpPC96HGv", "am2WRun8lhp", "1jqdBaVDmT", "2Pl7lgnSDyR" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank the reviewer for the valuable feedback. As suggested, we will revise the paper appropriately to tackle the concerns raised by reviewers. We are grateful for all the feedback that will help in improving the paper. Thank you again for the review!", "The paper introduces ExpRD, a novel optimizat...
[ -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, 2, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "tmgsYz3X8r-", "nips_2021_yw5KKWraUk7", "qjdAwA2tJD-", "yVuhpI0Zl3J", "eyr6KseVUye", "nips_2021_yw5KKWraUk7", "GKWl2IIy0ta", "0rpPC96HGv", "qjdAwA2tJD-", "HWL5Dbk5PlN", "rcZ3KID8wXH", "9yVcvG0YyAL", "nips_2021_yw5KKWraUk7", "zpzF5ovaXCv", "2Pl7lgnSDyR", "1jqdBaVDmT", "nips_2021_yw5KK...
nips_2021_Q32U7dzWXpc
A Minimalist Approach to Offline Reinforcement Learning
Offline reinforcement learning (RL) defines the task of learning from a fixed batch of data. Due to errors in value estimation from out-of-distribution actions, most offline RL algorithms take the approach of constraining or regularizing the policy with the actions contained in the dataset. Built on pre-existing RL algorithms, modifications to make an RL algorithm work offline come at the cost of additional complexity. Offline RL algorithms introduce new hyperparameters and often leverage secondary components such as generative models, while adjusting the underlying RL algorithm. In this paper we aim to make a deep RL algorithm work while making minimal changes. We find that we can match the performance of state-of-the-art offline RL algorithms by simply adding a behavior cloning term to the policy update of an online RL algorithm and normalizing the data. The resulting algorithm is a simple-to-implement-and-tune baseline, while more than halving the overall run time by removing the additional computational overheads of previous methods.
accept
Overall this paper makes a nice contribution and is arguably an example of the type of work our field needs more of. My only reservation is that the authors should have done more to highlight deficiencies of the approach, which may not be present for some of the other approaches. In particular, the approach obviously should suffer when the data has coverage of good policies but does not itself correspond to a good policy that one should aim to imitate. The authors are aware of this and will ideally make this point more strongly in the revision. Regardless, the paper still makes a nice contribution.
train
[ "qleC8jufjr9", "-vZ9L0Bq-cr", "G4X-VAiuJe", "pPV3HPyalaw", "kh-NEuVeEf1", "Prm1hBgApir", "SpA3irJIhVY", "Ct7PNODT2q", "5am1mJ7nBLB", "UKwS39l_2xO" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Ultimately the choice of hyperparameter is around balancing maximizing the value function and the BC term, which means the value should need to be within an order of magnitude around 1, or one term will dominate. There is, for example, the option to directly learn the hyperparameter such that both terms (Q and BC...
[ -1, -1, -1, -1, -1, -1, 8, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "-vZ9L0Bq-cr", "pPV3HPyalaw", "UKwS39l_2xO", "5am1mJ7nBLB", "Ct7PNODT2q", "SpA3irJIhVY", "nips_2021_Q32U7dzWXpc", "nips_2021_Q32U7dzWXpc", "nips_2021_Q32U7dzWXpc", "nips_2021_Q32U7dzWXpc" ]
nips_2021_YSzTMntO1KY
SIMONe: View-Invariant, Temporally-Abstracted Object Representations via Unsupervised Video Decomposition
To help agents reason about scenes in terms of their building blocks, we wish to extract the compositional structure of any given scene (in particular, the configuration and characteristics of objects comprising the scene). This problem is especially difficult when scene structure needs to be inferred while also estimating the agent’s location/viewpoint, as the two variables jointly give rise to the agent’s observations. We present an unsupervised variational approach to this problem. Leveraging the shared structure that exists across different scenes, our model learns to infer two sets of latent representations from RGB video input alone: a set of "object" latents, corresponding to the time-invariant, object-level contents of the scene, as well as a set of "frame" latents, corresponding to global time-varying elements such as viewpoint. This factorization of latents allows our model, SIMONe, to represent object attributes in an allocentric manner which does not depend on viewpoint. Moreover, it allows us to disentangle object dynamics and summarize their trajectories as time-abstracted, view-invariant, per-object properties. We demonstrate these capabilities, as well as the model's performance in terms of view synthesis and instance segmentation, across three procedurally generated video datasets.
accept
This paper presents a method for learning disentanglement of scenes (as seen by an agent) into scene-related content and object-related content. It is based on transformers and a latent representation separated into time-varying and time-invariant slots. The paper received 4 expert reviews, and in the discussion phase a consensus on acceptance emerged very quickly. Some weaknesses were raised, in particular with the experiments, the positioning and mid- and long-term objectives of this line of work, and the advantages or disadvantages of key design choices (missing ablations). The authors provided a thorough and detailed response, which answered most questions related to framing the method and led to increased ratings. All reviewers agreed that this paper is of interest to the community and recommend acceptance. The AC concurs.
val
[ "zeHfsNT8zEZ", "C1oUM5dbmv", "XSDGGqkf9Fa", "dtzVOs47qi", "GULSi7c9qfj", "cdUCzuqMs9", "ydnbELHAZxO", "_pH9BRNTQ0j", "DWW-9PzN38I", "xekpXwRvyxz", "WFAl4JjOBT5", "qw-e6I8920v", "KjRvf9OmFf", "F3TN8xButrn", "4Wt64PBYoH2", "TtLbXGZPWV", "O7Oq7kzSUnD" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks again for the time you spent studying the paper in detail. We address the points you raised below.\n\n### *This is no fully generative model*\nIndeed, that is a limitation. Thanks for validating the potential extensions discussed in the paper as ways to overcome it.\n\n### *Experimental results are shown o...
[ -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "TtLbXGZPWV", "XSDGGqkf9Fa", "nips_2021_YSzTMntO1KY", "cdUCzuqMs9", "nips_2021_YSzTMntO1KY", "WFAl4JjOBT5", "GULSi7c9qfj", "nips_2021_YSzTMntO1KY", "GULSi7c9qfj", "GULSi7c9qfj", "xekpXwRvyxz", "O7Oq7kzSUnD", "XSDGGqkf9Fa", "TtLbXGZPWV", "O7Oq7kzSUnD", "nips_2021_YSzTMntO1KY", "nips_2...
nips_2021_paxcakYWwIu
Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances. It instantiates many important machine learning tasks such as bipartite ranking and metric learning. A popular approach to handle streaming data in pairwise learning is an online gradient descent (OGD) algorithm, where one needs to pair the current instance with a buffering set of previous instances with a sufficiently large size and therefore suffers from a scalability issue. In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning. A notable difference from the existing studies is that we only pair the current instance with the previous one in building a gradient direction, which is efficient in both the storage and computational complexity. We develop novel stability results, optimization, and generalization error bounds for both convex and nonconvex as well as both smooth and nonsmooth problems. We introduce novel techniques to decouple the dependency of models and the previous instance in both the optimization and generalization analysis. Our study resolves an open question on developing meaningful generalization bounds for OGD using a buffering set with a very small fixed size. We also extend our algorithms and stability analysis to develop differentially private SGD algorithms for pairwise learning which significantly improves the existing results.
accept
All the reviewers agree upon the quality of the paper and support its acceptance. There is no argument to go against this consensus and I recommend the acceptance of this paper. We count on the authors to take into account the small fixes recommended by the reviewers.
train
[ "vYmw8hH3r6T", "SEkqP8-EDI6", "UAXVIb9NM5", "H3Tl3i4QXRv", "qANOUi7Wso", "nAoz8IQXfwv", "159-IIZ2JCC", "GcTcgUSPzZ", "ZskeHctcJZ_", "v6Z3Dj2RDw", "0NVdGrrEmbf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposed \"stochastic gradient descent\" (SGD) and \"online gradient descent\" (OGD) algorithms for pairwise learning problems. Stability results, optimization, and generalization error bounds for both convex and nonconvex as well as both smooth and nonsmooth problems are provided. The authors also devel...
[ 6, -1, -1, -1, -1, -1, -1, -1, 8, 7, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 2, 2, 3 ]
[ "nips_2021_paxcakYWwIu", "qANOUi7Wso", "nAoz8IQXfwv", "GcTcgUSPzZ", "vYmw8hH3r6T", "ZskeHctcJZ_", "0NVdGrrEmbf", "v6Z3Dj2RDw", "nips_2021_paxcakYWwIu", "nips_2021_paxcakYWwIu", "nips_2021_paxcakYWwIu" ]
nips_2021_PqiCvohYSAx
User-Level Differentially Private Learning via Correlated Sampling
Badih Ghazi, Ravi Kumar, Pasin Manurangsi
accept
This paper considers the problem of private PAC learning in the setting of user-level privacy, where each user holds several labeled samples. They show that if every user holds sufficiently many samples, then very few users are needed to provide user-level privacy. They also provide lower bounds on the number of required users. The reviewers agree that this is an interesting paper with a solid contribution that should be accepted to NeurIPS.
train
[ "INBx_5dyi5F", "kuTZLw4U9Q", "Ms0eyy8hsqX", "oEaXWcSZM8k", "oCkpJGx1mpM", "4Edl97ex99h", "6G5tv7WbyeM", "VoYlCk2qTSb", "-i36Tm3Y2al" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. After the discussion period, my evaluation remains the same.", " Thank you for the detailed review and suggestions. We will consider incorporating the suggestions on rearranging the proofs/lemmas/definitions in the revision.\n\n**Large number of samples per user.**\nWe do agree with th...
[ -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 4, 3, 1, 3 ]
[ "Ms0eyy8hsqX", "4Edl97ex99h", "-i36Tm3Y2al", "VoYlCk2qTSb", "6G5tv7WbyeM", "nips_2021_PqiCvohYSAx", "nips_2021_PqiCvohYSAx", "nips_2021_PqiCvohYSAx", "nips_2021_PqiCvohYSAx" ]
nips_2021_VNYKJfYvoCq
Asynchronous Decentralized Online Learning
Most existing algorithms in decentralized online learning are conducted in the synchronous setting. However, synchronization makes these algorithms suffer from the straggler problem, i.e., fast learners have to wait for slow learners, which significantly reduces such algorithms' overall efficiency. To overcome this problem, we study decentralized online learning in the asynchronous setting, which allows different learners to work at their own pace. We first formulate the framework of Asynchronous Decentralized Online Convex Optimization, which specifies the whole process of asynchronous decentralized online learning using a sophisticated event indexing system. Then we propose the Asynchronous Decentralized Online Gradient-Push (AD-OGP) algorithm, which performs asymmetric gossiping communication and instantaneous model averaging. We further derive a regret bound of AD-OGP, which is a function of the network topology, the levels of processing delays, and the levels of communication delays. Extensive experiments show that AD-OGP runs significantly faster than its synchronous counterpart and also verify the theoretical results.
accept
The reviewers agree that this is generally a good paper, although not entirely without (minor) flaws. Please take the reviewers' comments into consideration when preparing a revision. The answers provided by the authors were given due consideration.
val
[ "LBZVAZDqbP", "TODuJurQSEU", "htOJF6CikfY", "BWjQmTA7v3O", "dpbm7-UqiZ4", "i4u3NL8lP4", "oEi2nN9LfE0", "HRSBKOw3kxL", "SM1xDBkUkaT", "b9O0ImNNEB6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, authors propose asynchronous algorithm for convex optimization in decentralized distributed model. In contrast with other algorithms, the averaging procedure is asymmetric that leads to faster convergence comparing to the synchronous updates. In this paper, authors consider distributed decentralize...
[ 7, -1, 6, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_VNYKJfYvoCq", "SM1xDBkUkaT", "nips_2021_VNYKJfYvoCq", "nips_2021_VNYKJfYvoCq", "htOJF6CikfY", "dpbm7-UqiZ4", "nips_2021_VNYKJfYvoCq", "b9O0ImNNEB6", "LBZVAZDqbP", "nips_2021_VNYKJfYvoCq" ]
nips_2021_hUx6pv-lwWJ
Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs
Bayesian optimization (BO) is a sample-efficient approach to optimizing costly-to-evaluate black-box functions. Most BO methods ignore how evaluation costs may vary over the optimization domain. However, these costs can be highly heterogeneous and are often unknown in advance in many practical settings, such as hyperparameter tuning of machine learning algorithms or physics-based simulation optimization. Moreover, those few existing methods that acknowledge cost heterogeneity do not naturally accommodate a budget constraint on the total evaluation cost. This combination of unknown costs and a budget constraint introduces a new dimension to the exploration-exploitation trade-off, where learning about the cost incurs a cost itself. Existing methods do not reason about the various trade-offs of this problem in a principled way, leading often to poor performance. We formalize this claim by proving that the expected improvement and the expected improvement per unit of cost, arguably the two most widely used acquisition functions in practice, can be arbitrarily inferior with respect to the optimal non-myopic policy. To overcome the shortcomings of existing approaches, we propose the budgeted multi-step expected improvement, a non-myopic acquisition function that generalizes classical expected improvement to the setting of heterogeneous and unknown evaluation costs. We show that our acquisition function outperforms existing methods in a variety of synthetic and real problems.
accept
The paper studied a variant of the Bayesian optimization problem with the additional twists of unknown costs and a budget constraint. All reviewers agree that the problem studied in this paper is practically relevant, and that the solution (based on dynamic programming and Monte Carlo tree search) is intuitive with rigorous theoretical justifications. Empirical results on several practical (both synthetic and realistic) tasks seem promising. The authors provided effective feedback during the discussion phase, which helped clarify several concerns (e.g., empirical behavior on large-scale problems under a large budget constraint). The authors are encouraged to take into account the reviews, in particular, to further strengthen the empirical analysis and clarity of the presentation if possible, when preparing a revision.
train
[ "joOS0nHCYhX", "qkyCIVLyz1Z", "O6r1RODF0X", "demJli4g9H3", "j2DrW6bLs4q", "CSeAghOH4Yw", "zrKD12Uxnwv", "XLKCMoUdRR", "OpbgT0W5rsQ", "I2KYBAYqNw3", "6FlWlXSgEBU", "pGh9-P8oGTJ", "9bjiUaJVbKX", "5HglJ0LH90" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer AFWF,\n\nWe would like to thank you for your confirmation and also again for your valuable feedback. We will make sure to thoroughly take into account all the suggestions and concerns raised by the reviewing team in the revised version of our paper.\n\nSincerely,\n\nThe authors", " Thanks for addr...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "qkyCIVLyz1Z", "I2KYBAYqNw3", "demJli4g9H3", "XLKCMoUdRR", "zrKD12Uxnwv", "nips_2021_hUx6pv-lwWJ", "OpbgT0W5rsQ", "9bjiUaJVbKX", "CSeAghOH4Yw", "5HglJ0LH90", "pGh9-P8oGTJ", "nips_2021_hUx6pv-lwWJ", "nips_2021_hUx6pv-lwWJ", "nips_2021_hUx6pv-lwWJ" ]
nips_2021_JOxB9h40A-1
Model-Based Domain Generalization
Despite remarkable success in a variety of applications, it is well-known that deep learning can fail catastrophically when presented with out-of-distribution data. Toward addressing this challenge, we consider the \emph{domain generalization} problem, wherein predictors are trained using data drawn from a family of related training domains and then evaluated on a distinct and unseen test domain. We show that under a natural model of data generation and a concomitant invariance condition, the domain generalization problem is equivalent to an infinite-dimensional constrained statistical learning problem; this problem forms the basis of our approach, which we call Model-Based Domain Generalization. Due to the inherent challenges in solving constrained optimization problems in deep learning, we exploit nonconvex duality theory to develop unconstrained relaxations of this statistical problem with tight bounds on the duality gap. Based on this theoretical motivation, we propose a novel domain generalization algorithm with convergence guarantees. In our experiments, we report improvements of up to 30% over state-of-the-art domain generalization baselines on several benchmarks including ColoredMNIST, Camelyon17-WILDS, FMoW-WILDS, and PACS.
accept
The reviewers reached a consensus that the paper has an interesting and novel idea and achieves good improvements over prior work on many (though not all) benchmark datasets. The AC recommends acceptance.
train
[ "L78--jjyPVm", "ISiReC4yxh", "Om9DEjWzGgA", "5zh2Qmxb5X_", "Fp79jNPwDfk", "5dk0IhTpvcf", "sqUEKoabH7T", "JQ1bban_TdW", "b4VRHl_tN5e", "53b1kldhJyp", "vxIFPliboGZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I've slightly updated my reviews as well as increasing my the score. All the best!", "This paper considers the domain generalization setting where data in all domains can be transformed into one another by a transformation G. Then, authors then propose to learn a predictor invariant to this transformation. Expe...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "53b1kldhJyp", "nips_2021_JOxB9h40A-1", "Fp79jNPwDfk", "sqUEKoabH7T", "5dk0IhTpvcf", "vxIFPliboGZ", "53b1kldhJyp", "b4VRHl_tN5e", "ISiReC4yxh", "nips_2021_JOxB9h40A-1", "nips_2021_JOxB9h40A-1" ]
nips_2021_rbdKZJxDWWx
$\alpha$-IoU: A Family of Power Intersection over Union Losses for Bounding Box Regression
Jiabo He, Sarah Erfani, Xingjun Ma, James Bailey, Ying Chi, Xian-Sheng Hua
accept
Reviewers agreed that this is a solid paper that deserves acceptance. Authors are highly encouraged to address the key comments reported by reviewers as well as to implement all the improvements (as indicated by authors in the rebuttal) in the final camera-ready version.
train
[ "OFnvr7pL_S", "U5qcBlSU98X", "Lwg-TTcaaKg", "oPDpeWBSMKb", "pPoHM9WYEIU", "JZERAXHoLH1", "C6FHUcdvfZg", "AemPZbLrXv", "BuylAN5ySH", "Uh9zyt_nyQ_", "nOaCuAQUn2b", "oeFa7adLo17", "T1EtbYlnbSh", "oPip6Lcu1z", "B8TzzgXsPgY", "81d3YJ3-cBi", "4w7rimgCgMr", "Uw77u1D6hyS", "W0oNfOAjWQ", ...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_rev...
[ " We genuinely thank the reviewer for the helpful discussion.", " Thanks to the authors for providing so detailed answers. I believe this paper will be an excellent contribution for the conference.", " We truly thank the reviewer for the valuable comments.", " After reading the author's response and other rev...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, 7, -1, -1, -1, -1, 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, -1, 4, -1, -1, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "U5qcBlSU98X", "LFuTvVaSOoZ", "oPDpeWBSMKb", "T1EtbYlnbSh", "4w7rimgCgMr", "B8TzzgXsPgY", "AemPZbLrXv", "3hRrdT0R6R9", "nips_2021_rbdKZJxDWWx", "oeFa7adLo17", "nips_2021_rbdKZJxDWWx", "W0oNfOAjWQ", "BuylAN5ySH", "nOaCuAQUn2b", "-WbijIPgF-S", "nips_2021_rbdKZJxDWWx", "1DUUF_ySzv", "...
nips_2021__eXwwWOyqT_
Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient
David Applegate, Mateo Diaz, Oliver Hinder, Haihao Lu, Miles Lubin, Brendan O'Donoghue, Warren Schudy
accept
Thank you for your submission to NeurIPS. All four reviewers agree that the paper is valuable because it solves an important practical problem well. Initially, there were some concerns regarding the novelty and heuristic nature of the methods. Fortunately, most of these concerns were adequately addressed after discussion with the authors. Please ensure that the camera-ready version is adequately revised to capture the points raised during the discussion period, as these were important in helping the reviewers come to their final decisions.
train
[ "vkLlX3-2_a5", "p7nB0K-H0SZ", "LmrqLYxjunS", "--tgZAgFWHC", "Ze8qOEL7rrB", "z6vm37wRQkX", "a2Tdr823S2E", "p84vJeNhSCb", "XR8r0TqUC53", "3SGYlOs-Blo", "PPUSMMVRFy", "l64HZtsMpX", "JitxxeDriN5", "yuUKDwFm6Y1" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " > Is there an intuition why the 4 problems being left unsolved are particularly hard for PDLP? For example, are they also difficult for other linear programming solvers?\n\nThe four MIP relaxation problems left unsolved are shs1014, shs1023, shs1042, and supportcase19. We noticed that the three 'shs' instances ha...
[ -1, -1, -1, -1, 7, -1, -1, 6, -1, -1, -1, -1, 5, 8 ]
[ -1, -1, -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, 4, 5 ]
[ "p7nB0K-H0SZ", "PPUSMMVRFy", "z6vm37wRQkX", "Ze8qOEL7rrB", "nips_2021__eXwwWOyqT_", "a2Tdr823S2E", "XR8r0TqUC53", "nips_2021__eXwwWOyqT_", "--tgZAgFWHC", "JitxxeDriN5", "yuUKDwFm6Y1", "p84vJeNhSCb", "nips_2021__eXwwWOyqT_", "nips_2021__eXwwWOyqT_" ]
nips_2021_nxrP9J_nG3
On the Provable Generalization of Recurrent Neural Networks
Lifu Wang, Bo Shen, Bo Hu, Xing Cao
accept
This paper presents new generalization bounds for RNNs. While the paper follows the line of NTK theory, it proposes new and non-trivial techniques to deal with RNNs.
train
[ "HWkdBkcB4XO", "LIXKA4bFHEM", "K7ib5PDOxSN", "xx88yQRDB8", "0WlXnGyX1dF", "gj1igX8_oSZ", "1yRkc6Z1LmB", "GlykxOJaFRA" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nWe thank the reviewer for these helpful comments, while we are sorry that due to Neurips rebuttal policy, we cannot provide a revision to improve the present form during the rebuttal period. \n\nAs a summary of the discussion, please allow me to emphasize that in this paper, as written in the abstract and intro...
[ -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "1yRkc6Z1LmB", "gj1igX8_oSZ", "GlykxOJaFRA", "1yRkc6Z1LmB", "gj1igX8_oSZ", "nips_2021_nxrP9J_nG3", "nips_2021_nxrP9J_nG3", "nips_2021_nxrP9J_nG3" ]
nips_2021_jTEGbvLjgp
Differentiable Spline Approximations
The paradigm of differentiable programming has significantly enhanced the scope of machine learning via the judicious use of gradient-based optimization. However, standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable, limiting their applicability. Our goal in this paper is to use a new, principled approach to extend gradient-based optimization to functions well modeled by splines, which encompass a large family of piecewise polynomial models. We derive the form of the (weak) Jacobian of such functions and show that it exhibits a block-sparse structure that can be computed implicitly and efficiently. Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis. We also open-source the code at \url{https://github.com/idealab-isu/DSA}.
accept
Congratulations, the paper is accepted to NeurIPS 2021! Please reformulate your contributions in light of reviewer qxvV's comments. Please also incorporate other corrections and additions from the rebuttal/reviews.
val
[ "ApvOwLQQz3d", "Rq9P_znzUPA", "VtSJng_qR9F", "vI-U2si793", "qdAfnaLj_b", "KXZBPo3Pgnp", "bcMArkZsP3w" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors propose to derive a technique allowing gradient based learning of splines (or piecewise polynomials) and to apply such a method to standard tasks such as PDE solving/approximation and other function approximation problems. Some notations are counter intuitive e.g. f_I(i) since F is a vector and F_I is...
[ 5, 7, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, -1, 3 ]
[ "nips_2021_jTEGbvLjgp", "nips_2021_jTEGbvLjgp", "nips_2021_jTEGbvLjgp", "ApvOwLQQz3d", "bcMArkZsP3w", "Rq9P_znzUPA", "nips_2021_jTEGbvLjgp" ]
nips_2021__KhlwS9oFBp
Rate-Optimal Subspace Estimation on Random Graphs
Zhixin Zhou, Fan Zhou, Ping Li, Cun-Hui Zhang
accept
The reviewers gave very consistent marks to this paper, generally recognizing its quality and the value of its contribution, in particular in filling the gap of identifying minimax-optimal estimators for the estimation of an average connectivity matrix and of its column space from an observed bipartite random graph with M as its expected adjacency matrix. The paper thus appears well suited for acceptance. We recommend that the authors take into account the reviewers' comments, especially about positioning their results, when preparing the final version.
train
[ "R3okRMGGFy", "38w34aMM5kp", "TzNTvRVUdES", "PS1yti2ZBNu", "RA9frlBGrSN", "NFODOaWe-Sc", "IHK4v0D1cyc", "6T5S554rAZ", "olOloQi7SRW" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank their reviewer for their response, it will be good to add in the revised version the suggested literature for a more complete literature review. It seems also good to resolve the technical issue of Theorem 5.", "The authors study the problem of estimating the connectivity matrix $M \\in [0,1]^{n_1 \\tim...
[ -1, 6, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, 4, 3, 3 ]
[ "PS1yti2ZBNu", "nips_2021__KhlwS9oFBp", "38w34aMM5kp", "olOloQi7SRW", "6T5S554rAZ", "IHK4v0D1cyc", "nips_2021__KhlwS9oFBp", "nips_2021__KhlwS9oFBp", "nips_2021__KhlwS9oFBp" ]
nips_2021_LeW4XOVCrl
Estimating the Unique Information of Continuous Variables
The integration and transfer of information from multiple sources to multiple targets is a core motive of neural systems. The emerging field of partial information decomposition (PID) provides a novel information-theoretic lens into these mechanisms by identifying synergistic, redundant, and unique contributions to the mutual information between one and several variables. While many works have studied aspects of PID for Gaussian and discrete distributions, the case of general continuous distributions is still uncharted territory. In this work we present a method for estimating the unique information in continuous distributions, for the case of one versus two variables. Our method solves the associated optimization problem over the space of distributions with fixed bivariate marginals by combining copula decompositions and techniques developed to optimize variational autoencoders. We obtain excellent agreement with known analytic results for Gaussians, and illustrate the power of our new approach in several brain-inspired neural models. Our method is capable of recovering the effective connectivity of a chaotic network of rate neurons, and uncovers a complex trade-off between redundancy, synergy and unique information in recurrent networks trained to solve a generalized XOR task.
accept
This work presents a new method for partial information decomposition that was found to be novel and interesting. Given the narrowing of the scope (away from the RNN application and more focused on the PID estimation) proposed by the authors in response to the reviewers initial comments, the reviewers agreed that the work was relevant to the NeurIPS community and that the method and applications would be greatly appreciated. I thus am happy to recommend this paper be accepted.
train
[ "QKPaZaq5mcj", "Z_V-tLsDmqA", "Zm-4kxx9sr", "QbSDeKvC5VZ", "HmqZfBgv0b", "1CsUTpPGKio", "wyddYLSSS3f", "QQtA6MNxNXg", "_HGqbDL4w3j", "mIArG86chcX", "wD8Mp9015Ny" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for addressing the points I raised. I hope that these changes will help the paper reach a broad audience and achieve the impact it deserves.", "(I am not an expert in information theory)\nThe authors extend partial information decomposition to the case of continuous variables with arbitrary distributi...
[ -1, 7, -1, 7, -1, -1, -1, -1, -1, 7, 8 ]
[ -1, 2, -1, 4, -1, -1, -1, -1, -1, 2, 4 ]
[ "1CsUTpPGKio", "nips_2021_LeW4XOVCrl", "_HGqbDL4w3j", "nips_2021_LeW4XOVCrl", "wyddYLSSS3f", "wD8Mp9015Ny", "QbSDeKvC5VZ", "mIArG86chcX", "Z_V-tLsDmqA", "nips_2021_LeW4XOVCrl", "nips_2021_LeW4XOVCrl" ]
nips_2021_R-ZAZ-K1ILb
Reliable Causal Discovery with Improved Exact Search and Weaker Assumptions
Many of the causal discovery methods rely on the faithfulness assumption to guarantee asymptotic correctness. However, the assumption can be approximately violated in many ways, leading to sub-optimal solutions. Although there is a line of research in Bayesian network structure learning that focuses on weakening the assumption, such as exact search methods with well-defined score functions, they do not scale well to large graphs. In this work, we introduce several strategies to improve the scalability of exact score-based methods in the linear Gaussian setting. In particular, we develop a super-structure estimation method based on the support of inverse covariance matrix which requires assumptions that are strictly weaker than faithfulness, and apply it to restrict the search space of exact search. We also propose a local search strategy that performs exact search on the local clusters formed by each variable and its neighbors within two hops in the super-structure. Numerical experiments validate the efficacy of the proposed procedure, and demonstrate that it scales up to hundreds of nodes with a high accuracy.
accept
There is consensus on acceptance among the reviewers. There are a few suggestions and discussions that could help improve the manuscript, in particular to appeal to the potential audience at NeurIPS 2021.
val
[ "IkVIrHaB3Db", "K7THykTicOI", "RfscTaeraV2", "ogW1i7Xocx8", "KtDgs5LLX5d", "Yj8zfUHoMsr", "pp_AvrisxjN", "DYMj6gqN8HA", "uGV4qrsi3l", "kjpZKtJ9sw-", "P-xHfcEfaZk" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes two steps for learning the DAG structure from linear Gaussian data. First, it is shown that under a condition weaker than several forms of faithfulness, the inverse of the observational covariance matrix can be used to find a graph guaranteed to contain all edges in the DAG. Second, another the...
[ 7, -1, 6, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_R-ZAZ-K1ILb", "RfscTaeraV2", "nips_2021_R-ZAZ-K1ILb", "KtDgs5LLX5d", "Yj8zfUHoMsr", "P-xHfcEfaZk", "RfscTaeraV2", "IkVIrHaB3Db", "kjpZKtJ9sw-", "nips_2021_R-ZAZ-K1ILb", "nips_2021_R-ZAZ-K1ILb" ]
nips_2021_ekKaTdleJVq
Node Dependent Local Smoothing for Scalable Graph Learning
Recent works reveal that feature or label smoothing lies at the core of Graph Neural Networks (GNNs). Concretely, they show feature smoothing combined with simple linear regression achieves comparable performance with the carefully designed GNNs, and a simple MLP model with label smoothing of its prediction can outperform the vanilla GCN. Though an interesting finding, smoothing has not been well understood, especially regarding how to control the extent of smoothness. Intuitively, too small or too large smoothing iterations may cause under-smoothing or over-smoothing and can lead to sub-optimal performance. Moreover, the extent of smoothness is node-specific, depending on its degree and local structure. To this end, we propose a novel algorithm called node-dependent local smoothing (NDLS), which aims to control the smoothness of every node by setting a node-specific smoothing iteration. Specifically, NDLS computes influence scores based on the adjacency matrix and selects the iteration number by setting a threshold on the scores. Once selected, the iteration number can be applied to both feature smoothing and label smoothing. Experimental results demonstrate that NDLS enjoys high accuracy -- state-of-the-art performance on node classifications tasks, flexibility -- can be incorporated with any models, scalability and efficiency -- can support large scale graphs with fast training.
accept
This paper proposes a node-dependent local smoothing (NDLS) algorithm to control the smoothness of every node. NDLS provides a bound to guide how to control the extent of smoothness for different nodes. Extensive experiments on seven real-world graph datasets demonstrate that the NDLS pipeline enjoys state-of-the-art performance on node classification tasks, can be combined with any GNN model, and is scalable and efficient. The proposed NDLS algorithm is novel and generalizes some existing smoothing techniques. The NDLS kernel can act as a building block to replace other graph kernels and be combined with some existing models. The authors also provide a theoretical analysis of the space and time complexities of the proposed NDLS pipeline. The paper is very well written and easy to follow, with intuitive figures that aid understanding of the concept. NDLS shows state-of-the-art performance across various datasets, and the results are nicely interpreted; thus the algorithm is robust and generally reliable in practice. Therefore, we recommend accepting this paper.
test
[ "ea2Ja0qJtq", "D_h_97MEnfo", "xvMZmych23", "h4OkaemrTWt", "Q67ufMs3Fk", "0-kOckaiTEv", "fAcZ1S_J8EM", "D0YyT9EnBf7", "XRJPt8iw7jC", "-_8TmhXsgtR", "_jhtpbZFRNK", "AZPHUWrk5vJ", "N_KTowoIcL-", "8bEXCFk5NU_", "q5IyFC3-AUA", "LqafgQcuNX2" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for your insightful feedback. We appreciate your assessment about this paper being \"well written and easy to follow\". The answers to your concerns are as follows.\n\n\n### 1. Monotonous decreasing distance: \nThe reviewer raises a good question. The definition of LSI does not require a decreasing distanc...
[ -1, -1, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "q5IyFC3-AUA", "D0YyT9EnBf7", "nips_2021_ekKaTdleJVq", "nips_2021_ekKaTdleJVq", "8bEXCFk5NU_", "XRJPt8iw7jC", "q5IyFC3-AUA", "xvMZmych23", "h4OkaemrTWt", "_jhtpbZFRNK", "N_KTowoIcL-", "xvMZmych23", "LqafgQcuNX2", "h4OkaemrTWt", "nips_2021_ekKaTdleJVq", "nips_2021_ekKaTdleJVq" ]
nips_2021_bhdntUKwA1
Parallel and Efficient Hierarchical k-Median Clustering
Vincent Cohen-Addad, Silvio Lattanzi, Ashkan Norouzi-Fard, Christian Sohler, Ola Svensson
accept
The authors present an algorithm for the distributed hierarchical k-median problem that requires only logarithmic memory on the machines, logarithmic rounds of communication, and has an O(d*log(n)) approximation factor for each k simultaneously. The approach taken combines tree embeddings with a careful parallelization of the clustering operations. The algorithm is of practical interest to the community, and the approach may lead to further innovation and efficient approaches to other clustering problems.
train
[ "J4geK9y0Edk", "dFvxU8Nqbjz", "DdrGbBG8zlz", "uGJApZOXD0H", "lPaTANYkrfd", "DvrYdFik9u-", "aEeFssHF1uw", "H8P2xMfB62", "yzf9TvhV0jF", "HrJ8JqnYWv", "2-YUWcxeiAb", "JaIDOyJg0I", "wx2ldQDyQv7" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the comment, we are sorry for the difficulty that you experienced. The model is formally described in the main body (please refer to line 33 for definition of hierarchical k-median and line 66 for the definition of the distributed setting). As mentioned in the paper, some of the proofs are omitted in t...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, 7, 4, 5, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "dFvxU8Nqbjz", "2-YUWcxeiAb", "yzf9TvhV0jF", "nips_2021_bhdntUKwA1", "wx2ldQDyQv7", "JaIDOyJg0I", "uGJApZOXD0H", "2-YUWcxeiAb", "HrJ8JqnYWv", "nips_2021_bhdntUKwA1", "nips_2021_bhdntUKwA1", "nips_2021_bhdntUKwA1", "nips_2021_bhdntUKwA1" ]
nips_2021_vsCCDVdTAx
Human-Adversarial Visual Question Answering
Performance on the most commonly used Visual Question Answering dataset (VQA v2) is starting to approach human accuracy. However, in interacting with state-of-the-art VQA models, it is clear that the problem is far from being solved. In order to stress test VQA models, we benchmark them against human-adversarial examples. Human subjects interact with a state-of-the-art VQA model, and for each image in the dataset, attempt to find a question where the model’s predicted answer is incorrect. We find that a wide range of state-of-the-art models perform poorly when evaluated on these examples. We conduct an extensive analysis of the collected adversarial examples and provide guidance on future research directions. We hope that this Adversarial VQA (AdVQA) benchmark can help drive progress in the field and advance the state of the art.
accept
This paper introduces a new Adversarial VQA dataset collected with human-and-model in the loop by directly asking humans to write questions to attack SOTA VQA models. After author rebuttal, it has received 4 accept recommendations. All the reviewers are happy about the paper, and agree that this is a solid new benchmark worth sharing with the community, and has the potential to be the next generation of a generic VQA dataset for testing future VQA methods. Therefore, the AC is happy to recommend acceptance of the paper.
train
[ "ixTtSeT3AX", "_6HMhTMn7ce", "TY2zCfx2Lt", "5i_Mkpyh1yo", "wWBHDOlGlk", "mhLbrxqDVtB", "py_HQqzM1nS", "y3WCVkYg4R-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper collects a new validation dataset via crowdsourcing to benchmark the progress of state-of-the-art VQA models. \nThe dataset (containing 22K questions and 10x answers) is adversarially collected such that annotators have to come up with questions that can successfully fool a state-of-the-art VQA model, a...
[ 6, 6, 6, -1, -1, -1, -1, 8 ]
[ 4, 4, 4, -1, -1, -1, -1, 4 ]
[ "nips_2021_vsCCDVdTAx", "nips_2021_vsCCDVdTAx", "nips_2021_vsCCDVdTAx", "_6HMhTMn7ce", "y3WCVkYg4R-", "ixTtSeT3AX", "TY2zCfx2Lt", "nips_2021_vsCCDVdTAx" ]
nips_2021_85BzB3WP-qj
Across-animal odor decoding by probabilistic manifold alignment
Identifying the common structure of neural dynamics across subjects is key for extracting unifying principles of brain computation and for many brain machine interface applications. Here, we propose a novel probabilistic approach for aligning stimulus-evoked responses from multiple animals in a common low dimensional manifold and use hierarchical inference to identify which stimulus drives neural activity in any given trial. Our probabilistic decoder is robust to a range of features of the neural responses and significantly outperforms existing neural alignment procedures. When applied to recordings from the mouse olfactory bulb, our approach reveals low-dimensional population dynamics that are odor specific and have consistent structure across animals. Thus, our decoder can be used for increasing the robustness and scalability of neural-based chemical detection.
accept
This paper presents a novel probabilistic alignment of stimulus-driven neural activity for decoding stimulus identity. The reviewers agree that this is an important contribution that can help advance neuroscientific understanding. At the same time, the reviewers also note assumptions that may limit its applicability outside the olfactory system. Please make sure to incorporate the discussions from the rebuttal into the final version.
val
[ "Frrjvpv9NvN", "XhFRVfCdrHw", "_ixvfPV56T-", "fu-LjCxv-tk", "AcwB4tSiJvj", "Ed1H_jpXEU", "yXQhop1elrA", "nC6ZCqBtM1d", "UYa9FqKvbvL", "M7TWRcmEH0", "I_2indSscLm", "dsFujLedIz3" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed response and clarifications. The assumption does seem to hold well in the case of olfactory bulb for this experimental setting as demonstrated by the impressive results. I have updated my score to reflect the same. I agree that identifying a shared topological solution is an important g...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 8, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "nC6ZCqBtM1d", "nips_2021_85BzB3WP-qj", "Ed1H_jpXEU", "yXQhop1elrA", "UYa9FqKvbvL", "dsFujLedIz3", "M7TWRcmEH0", "XhFRVfCdrHw", "I_2indSscLm", "nips_2021_85BzB3WP-qj", "nips_2021_85BzB3WP-qj", "nips_2021_85BzB3WP-qj" ]
nips_2021_8kk8a_zvWua
Excess Capacity and Backdoor Poisoning
A backdoor data poisoning attack is an adversarial attack wherein the attacker injects several watermarked, mislabeled training examples into a training set. The watermark does not impact the test-time performance of the model on typical data; however, the model reliably errs on watermarked examples. To gain a better foundational understanding of backdoor data poisoning attacks, we present a formal theoretical framework within which one can discuss backdoor data poisoning attacks for classification problems. We then use this to analyze important statistical and computational issues surrounding these attacks. On the statistical front, we identify a parameter we call the memorization capacity that captures the intrinsic vulnerability of a learning problem to a backdoor attack. This allows us to argue about the robustness of several natural learning problems to backdoor attacks. Our results favoring the attacker involve presenting explicit constructions of backdoor attacks, and our robustness results show that some natural problem settings cannot yield successful backdoor attacks. From a computational standpoint, we show that under certain assumptions, adversarial training can detect the presence of backdoors in a training set. We then show that under similar assumptions, two closely related problems we call backdoor filtering and robust generalization are nearly equivalent. This implies that it is both asymptotically necessary and sufficient to design algorithms that can identify watermarked examples in the training set in order to obtain a learning algorithm that both generalizes well to unseen data and is robust to backdoors.
accept
All reviewers agree that this is an important result that fills a gap in the theoretical understanding of backdoor attacks.
val
[ "Hy_qDhlaobu", "AC52nr3Vm4", "ImZ6jCaY-Qd", "amIIUt23X1j", "yTIym3ZBbju", "ClZxSFxK4Lx", "HCVdF0xQd5Y", "nE2klS2A3Me", "q_DHLT369AD", "_Aiv7_aq7fo" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their responses, and I am looking forward to reading their updated work.", "This work proposes a theoretical framework to draw connections between several phenomena in machine learning observed empirically but never adequately understood- the relationship between excess mod...
[ -1, 8, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "HCVdF0xQd5Y", "nips_2021_8kk8a_zvWua", "ClZxSFxK4Lx", "_Aiv7_aq7fo", "q_DHLT369AD", "AC52nr3Vm4", "nE2klS2A3Me", "nips_2021_8kk8a_zvWua", "nips_2021_8kk8a_zvWua", "nips_2021_8kk8a_zvWua" ]
nips_2021_vCWztO0ppL
A Convergence Analysis of Gradient Descent on Graph Neural Networks
Pranjal Awasthi, Abhimanyu Das, Sreenivas Gollapudi
accept
This paper analyzes convergence properties of gradient descent for graph neural networks by using the neural tangent kernel technique. More specifically, it shows convergence with iteration complexity $\epsilon^{-2}\log(1/\epsilon)$ for one-hidden-layer ReLU-GNNs and shows linear convergence for deep linear GNNs. The theory is verified through a numerical experiment. In previous research, NTK is analyzed mainly for FNNs, but this paper extends it to GNN settings. This requires some technical novelty and is not trivial. The second analysis, for the deep linear model, is a bit restrictive because it requires linear activation and the parameters in all layers to be the same. However, it also requires some technical novelty to extend the proof for FNNs to GNNs. In that sense, this paper has novelty. The numerical experiments justify the theoretical results, but I recommend that the authors present the graphs in log-log or semi-log scale to verify the convergence rates given in the theorems. Although there are some concerns as stated above, this paper still possesses novelty, and the convergence analysis for GNNs is an important issue in the literature. Hence, I recommend this paper for acceptance.
train
[ "PC9sQveWJDo", "JVOnKt1OD-A", "2YKX8agTTEi", "oONh5NWB2G", "8N_fINYOFY8", "4CTj4hDT-0", "Ct6rfdaaGW", "wdntX66i4Q7", "QIm9eS4H8tq", "eq3OyaaEZ1i" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " > [So1] I agree with the authors in that the analyses of this paper can extend to the case where GNNs cannot observed all labels. I want to note that GNNs need to know features of ALL nodes. That is, we need to consider the transductive setting where a learner needs to know features of test instances.\n\nWe agree...
[ -1, 6, -1, 6, -1, -1, -1, -1, 6, 6 ]
[ -1, 3, -1, 4, -1, -1, -1, -1, 4, 3 ]
[ "2YKX8agTTEi", "nips_2021_vCWztO0ppL", "Ct6rfdaaGW", "nips_2021_vCWztO0ppL", "oONh5NWB2G", "JVOnKt1OD-A", "eq3OyaaEZ1i", "QIm9eS4H8tq", "nips_2021_vCWztO0ppL", "nips_2021_vCWztO0ppL" ]
nips_2021_EPceRw--ZWr
Differentiable rendering with perturbed optimizers
Reasoning about 3D scenes from their 2D image projections is one of the core problems in computer vision. Solutions to this inverse and ill-posed problem typically involve a search for models that best explain observed image data. Notably, images depend both on the properties of observed scenes and on the process of image formation. Hence, if optimization techniques are to be used to explain images, it is crucial to design differentiable functions for the projection of 3D scenes into images, also known as differentiable rendering. Previous approaches to differentiable rendering typically replace non-differentiable operations by smooth approximations, impacting the subsequent 3D estimation. In this paper, we take a more general approach and study differentiable renderers through the prism of randomized optimization and the related notion of perturbed optimizers. In particular, our work highlights the link between some well-known differentiable renderer formulations and randomly smoothed optimizers, and introduces differentiable perturbed renderers. We also propose a variance reduction mechanism to alleviate the computational burden inherent to perturbed optimizers and introduce an adaptive scheme to automatically adjust the smoothing parameters of the rendering process. We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction. By providing informative gradients that can be used as a strong supervisory signal, we demonstrate the benefits of perturbed renderers to obtain more accurate solutions when compared to the state-of-the-art alternatives using smooth gradient approximations.
accept
This paper received mixed reviews. Most reviewers agree with the novelty and contribution of proposing a new differentiable rendering method, differentiable perturbed renderers. The whole paper is solid and the experiments are good enough to validate the contribution. Reviewer yvH4 had some serious concerns about the practical issues of the proposed methods, including generalization ability to other 3D shape representations, being computationally expensive, and not being competitive with existing approaches. The authors clarified some of these points. In the final version, the authors are advised to further integrate the rebuttal and improve the paper's quality.
train
[ "rEod3tflXz", "4uNe63hKpiI", "3bPt1rbyAP", "LMNBpQPwZRI", "bS7SkZypHyw", "CUN9USkEvrL", "l1kfsZqB5Fp", "uejZd5fZKQ5", "S4P_PUKyAE0", "4oJ4Wt8zKQk", "c7JpE38n9Sd" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new differentiable renderer by introducing perturbed optimizers. The paper rewrites the non-differentiable rasterization and aggregation steps as Heaviside functions and relaxes them to be differentiable by adding noise. \nThe author shows that previous work, especially softras, can be tre...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "nips_2021_EPceRw--ZWr", "3bPt1rbyAP", "bS7SkZypHyw", "c7JpE38n9Sd", "rEod3tflXz", "4oJ4Wt8zKQk", "S4P_PUKyAE0", "nips_2021_EPceRw--ZWr", "nips_2021_EPceRw--ZWr", "nips_2021_EPceRw--ZWr", "nips_2021_EPceRw--ZWr" ]
nips_2021_yUNQBMsLGA
BCORLE($\lambda$): An Offline Reinforcement Learning and Evaluation Framework for Coupons Allocation in E-commerce Market
Yang Zhang, Bo Tang, Qingyu Yang, Dou An, Hongyin Tang, Chenyang Xi, Xueying LI, Feiyu Xiong
accept
This paper received mixed reviews. Its strength is that it seems successful in proposing a budget-constrained offline reinforcement learning and evaluation framework for the coupon allocation problem on e-commerce platforms. The drawbacks of the paper are the limited novelty of the approach (it appears to combine/unify several prior methods) and the specificity of the application. Given that there is no evidence of wider applicability of the proposed method, I recommend rejecting this paper for NeurIPS. However, the authors are encouraged to submit the manuscript to a more applied venue.
train
[ "wAt4Y1Tihz", "S8IIBJnAtcB", "j-f94gPPyYD", "bFhQphdixmS", "yZ5dJ2LbIC_", "8p8Tz-wX-sG", "6uR1rdnyBQI", "AXn_o4eiot" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed review and valuable feedback on our work. We address the concerns that the reviewer stated point by point as follows.\n\n_**The motivation and explanation of R-BCQ and REME algorithms**_\n\nThe R-BCQ method is proposed for learning a coupons allocation policy in an offline manner. For exis...
[ -1, -1, -1, -1, 4, 6, 7, 5 ]
[ -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "yZ5dJ2LbIC_", "AXn_o4eiot", "6uR1rdnyBQI", "8p8Tz-wX-sG", "nips_2021_yUNQBMsLGA", "nips_2021_yUNQBMsLGA", "nips_2021_yUNQBMsLGA", "nips_2021_yUNQBMsLGA" ]
nips_2021_i2vd6-7bgBi
Nested Variational Inference
We develop nested variational inference (NVI), a family of methods that learn proposals for nested importance samplers by minimizing a forward or reverse KL divergence at each level of nesting. NVI is applicable to many commonly-used importance sampling strategies and provides a mechanism for learning intermediate densities, which can serve as heuristics to guide the sampler. Our experiments apply NVI to (a) sample from a multimodal distribution using a learned annealing path, (b) learn heuristics that approximate the likelihood of future observations in a hidden Markov model, and (c) perform amortized inference in hierarchical deep generative models. We observe that optimizing nested objectives leads to improved sample quality in terms of log average weight and effective sample size.
accept
The paper introduces an approach to sampling from intractable distributions (NVI) by learning sequences of proposal distributions for nested importance sampling, with the proposal at each level of nesting learned by minimizing the (local) reverse KL-divergence. The reviewers found the method interesting and novel but felt that the presentation was too unfocused, and that the evaluation, while broad, was insufficiently systematic to be convincing. In particular, they found the annealed sampling study in Section 4.1 informative but thought that it was compromised by the use of a toy dataset. The paper would be strengthened by including a more realistic dataset in the study, providing a clearer motivation for the choice of the baselines (i.e. making it clear that the baselines correspond to ablations), and providing more detail about how exactly NVI is applied in each setting. In summary, the paper clearly has a lot of potential, but needs some more work to deliver on it.
train
[ "Gd1smAaA6VJ", "stwIxRRxBuD", "ZBo_pUDL9FT", "He5MjM-5ZMZ", "SNBP_BzS2Zw", "-DOPQooARa", "hC2Wbq8AdLa", "68KTmbcOZDk", "FtlKUYjwodp", "k1DCBN_buc7", "H4uvtWh_DxW" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors present a method to jointly learn a series of proposals and intermediate densities for e.g. SMC, or annealed importance sampling. They use a novel objective which sums divergences between distributions over each adjacent pair of variables in the chain, and argue that this leads to lower-variance gradie...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_i2vd6-7bgBi", "He5MjM-5ZMZ", "68KTmbcOZDk", "SNBP_BzS2Zw", "-DOPQooARa", "hC2Wbq8AdLa", "H4uvtWh_DxW", "k1DCBN_buc7", "Gd1smAaA6VJ", "nips_2021_i2vd6-7bgBi", "nips_2021_i2vd6-7bgBi" ]
nips_2021_IUqgofswxo
Exponential Bellman Equation and Improved Regret Bounds for Risk-Sensitive Reinforcement Learning
We study risk-sensitive reinforcement learning (RL) based on the entropic risk measure. Although existing works have established non-asymptotic regret guarantees for this problem, they leave open an exponential gap between the upper and lower bounds. We identify the deficiencies in existing algorithms and their analysis that result in such a gap. To remedy these deficiencies, we investigate a simple transformation of the risk-sensitive Bellman equations, which we call the exponential Bellman equation. The exponential Bellman equation inspires us to develop a novel analysis of Bellman backup procedures in risk-sensitive RL algorithms, and further motivates the design of a novel exploration mechanism. We show that these analytic and algorithmic innovations together lead to improved regret upper bounds over existing ones.
accept
Reviewers believe that the improved regret bound in this paper makes a significant contribution and that the paper should be considered for acceptance on the basis of that contribution alone. However, the one sticking point during reviewer discussion concerned the distributional RL connections in the paper. More precisely, a long nested discussion started with nUcE's initial concerns about the mostly undeveloped link to Distributional RL (also reflected in other reviews) and continued towards the end of the discussion chain with reviewer tvfS. The authors presented a detailed mathematical presentation in their final nested comment that convinced reviewer tvfS of the connection being claimed. While ultimately the reviewer concerns were addressed with this final author response, it is critically important that this mathematical presentation and discussion be included on revision, perhaps in an Appendix. Furthermore, while reviewer tvfS is satisfied with the author response, reviewer nUcE would still prefer that a revised paper focus on the regret bound contributions and present the distributional RL discussion as "a side remark that opens to future interesting research directions". The authors are asked to carefully consider all of these comments as they prepare their final version.
val
[ "PTFyRfv7N7w", "BI7WjT0rLc7", "nj6d7CcEtKS", "-Pfb0Sy-o6b", "2YAh7KBtRg-", "n_cW8Qq6bB-", "WnfhjQcdpTg", "_Fx7GiXmMUO", "JhGYK4i7_Jy", "Ryo2-1yS1nf", "Z8k5zj1tM0O", "z1U8MNx9_E7", "xrak967go36", "WyJm6I61ds5", "WTP0zYZX_Gq", "IC44cMPjUTN" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Yes, your understanding is correct. More formally, we should say that $G$ is an *optimistic* estimate of the risk-sensitive (exponentiated) Q-function. When we use one sample in Line 5 and leave out the bonus and thresholding operation in Line 7, the iterate $V$ in Line 8 coincides with the one-sample estimate of...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "BI7WjT0rLc7", "nj6d7CcEtKS", "-Pfb0Sy-o6b", "z1U8MNx9_E7", "n_cW8Qq6bB-", "WnfhjQcdpTg", "_Fx7GiXmMUO", "JhGYK4i7_Jy", "IC44cMPjUTN", "WTP0zYZX_Gq", "xrak967go36", "WyJm6I61ds5", "nips_2021_IUqgofswxo", "nips_2021_IUqgofswxo", "nips_2021_IUqgofswxo", "nips_2021_IUqgofswxo" ]
nips_2021_Tv0O_cAdKtW
On sensitivity of meta-learning to support data
Meta-learning algorithms are widely used for few-shot learning. For example, they enable image recognition systems that readily adapt to unseen classes after seeing only a few labeled examples. Despite their success, we show that modern meta-learning algorithms are extremely sensitive to the data used for adaptation, i.e. support data. In particular, we demonstrate the existence of (unaltered, in-distribution, natural) images that, when used for adaptation, yield accuracy as low as 4\% or as high as 95\% on standard few-shot image classification benchmarks. We explain our empirical findings in terms of class margins, which in turn suggests that robust and safe meta-learning requires larger margins than supervised learning.
accept
The submission investigates the extent to which few-shot classification algorithms are sensitive to the selection of support images. It introduces a greedy algorithm for finding the worst/best support sets for a learning episode and reports that the performance of all six evaluated approaches drastically varies as a function of the support set composition. When adversarial training is employed to increase robustness, the authors report success for training episodes, but crucially these benefits do not translate to test episodes. Finally, the submission presents theory that suggests that robustness in metric-based few-shot learners could be obtained by encouraging inter-class separation and tighter intra-class clusters. Reviewers appreciated the paper's writing clarity and found the problem setting to be interesting and of practical significance to the community. The authors provided the clarifications requested by the reviewers. Some reviewers pointed out that the authors failed to propose an effective solution, to which the authors rightfully replied that "problem-reporting" submissions can still make a substantial contribution to the field, citing Szegedy et al.'s paper on adversarial examples as evidence. Some reviewers noted that the paper's impact could be increased by considering domains beyond image classification, but ultimately found that it meets the bar for acceptance as it stands. I therefore recommend acceptance.
train
[ "JxI8dNtkd16", "9iH-lqiVYBm", "3_qvTjvd8a", "icuDZnKa8Bp", "LvLps6l2eBz", "DKLMnsqSY90", "cNdjcZD8HqF", "-wGyAQYSZkv", "mAkiyNQGkVh", "oQSaPNFdQU", "NC0g4iYjCDe", "MbyTTxUdV9R", "DUrfPDiOt-J", "3PckogHsrLt", "xyBFH1g9C4c", "4K1WBf-gnPr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper empirically finds that popular meta-learning algorithms are very sensitive to the selection of the support images. The authors show that six meta-learning algorithms can achieve test accuracy within a wide range (worst-case accuracy is extremely poor) given different support samples. They also fi...
[ 6, -1, -1, 7, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, -1, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_Tv0O_cAdKtW", "DUrfPDiOt-J", "oQSaPNFdQU", "nips_2021_Tv0O_cAdKtW", "DKLMnsqSY90", "3PckogHsrLt", "mAkiyNQGkVh", "nips_2021_Tv0O_cAdKtW", "MbyTTxUdV9R", "icuDZnKa8Bp", "nips_2021_Tv0O_cAdKtW", "-wGyAQYSZkv", "JxI8dNtkd16", "4K1WBf-gnPr", "icuDZnKa8Bp", "nips_2021_Tv0O_cAdKtW...
nips_2021_Kb26p7chwhf
On Large-Cohort Training for Federated Learning
Federated learning methods typically learn a model by iteratively sampling updates from a population of clients. In this work, we explore how the number of clients sampled at each round (the cohort size) impacts the quality of the learned model and the training dynamics of federated learning algorithms. Our work poses three fundamental questions. First, what challenges arise when trying to scale federated learning to larger cohorts? Second, what parallels exist between cohort sizes in federated learning, and batch sizes in centralized learning? Last, how can we design federated learning methods that effectively utilize larger cohort sizes? We give partial answers to these questions based on extensive empirical evaluation. Our work highlights a number of challenges stemming from the use of larger cohorts. While some of these (such as generalization issues and diminishing returns) are analogs of large-batch training challenges, others (including catastrophic training failures and fairness concerns) are unique to federated learning.
accept
The work is a well written and well executed empirical study of phenomena associated with large cohort sizes when training practical cross-device FL models. All reviewers ultimately recommended to accept this paper for publication, and I concur. Moreover, I enjoyed the rich discussion between the reviewers and the authors. I have read the paper in detail myself, and have enjoyed the experience. However, since no theoretical explanation is offered, and because the scope of the experiments is necessarily limited, I am not fully convinced about the robustness of the conclusions. Because of this, I consider this paper to be a borderline accept. --- A few caveats: - I believe that some of these observations may have natural theoretical explanations from existing/known theory (granted, under some constraining assumptions on the models and losses trained), which the authors are not exploring. - Given the experimental setup, whatever the results would be, *something* would certainly be observed, and a commentary on that something could always be made. Would that mean that a pattern or a phenomenon was discovered? Maybe, but not necessarily. Many more experiments across a vast array of model types and sizes would be needed for a more convincing argument to be made. It is very hard to say whether the presented observations are robust enough - would they be observed for other models and datasets? What would happen, for example, if you experimented with simple linear models that lead to convex optimization problems? What happens if local steps are removed and one uses compressed communication instead? Having said that, the observations, however fragile, are useful for further theoretical and empirical studies, and when replicated by other researchers in other settings, may serve as an inspiration to design practical mitigation strategies, such as those outlined in the work.
val
[ "xnL8ZYBPdDi", "jWRtoh2-ikG", "x8XxOrpnMT", "BihCmpsIHpy", "_EJk5E96GwZ", "NBiMSWUJEts", "XXUOa_CFP5E", "_yKIVKr2FpP", "WghyPY25zrZ", "Y_ZjVHfkVHs", "2c5qN1ol9N", "BvPg82_Ys6a", "EBeSx3PwpgU", "VuPGksWP425" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your clarifications! I think this work provides a wide range of experimental results. These results are insightful and informative. I understand that it is impossible to cover all aspects in one paper, so I hope this work can be influential and topics introduced in this paper will be explored further...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "jWRtoh2-ikG", "x8XxOrpnMT", "Y_ZjVHfkVHs", "_EJk5E96GwZ", "NBiMSWUJEts", "WghyPY25zrZ", "2c5qN1ol9N", "Y_ZjVHfkVHs", "EBeSx3PwpgU", "VuPGksWP425", "BvPg82_Ys6a", "nips_2021_Kb26p7chwhf", "nips_2021_Kb26p7chwhf", "nips_2021_Kb26p7chwhf" ]