Dataset schema (column : type, observed value range):

paper_id            string, length 19-21
paper_title         string, length 8-170
paper_abstract      string, length 8-5.01k
paper_acceptance    string, 18 distinct values
meta_review         string, length 29-10k
label               string, 3 distinct values (train / val / test)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
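Each record describes one submission: six scalar string fields (paper_id, paper_title, paper_abstract, paper_acceptance, meta_review, label) plus six parallel lists holding one entry per forum comment. Below is a minimal loading sketch; the file name peer_reviews.jsonl and the assumption that records are exported as JSON Lines are hypothetical and not specified by the card.

```python
import json

# Hypothetical export path; the schema above does not name a file.
DATA_PATH = "peer_reviews.jsonl"

# The six per-comment lists are parallel: index i of each list describes the
# same forum comment (its id, writer role, text, rating, confidence, parent).
LIST_FIELDS = [
    "review_ids", "review_writers", "review_contents",
    "review_ratings", "review_confidences", "review_reply_tos",
]

def load_records(path):
    """Read one JSON object per line and check that the parallel lists align."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            rec = json.loads(line)
            lengths = {len(rec[field]) for field in LIST_FIELDS}
            assert len(lengths) == 1, f"misaligned lists in {rec['paper_id']}"
            records.append(rec)
    return records

if __name__ == "__main__":
    records = load_records(DATA_PATH)
    print(f"{len(records)} records, splits:",
          sorted({rec["label"] for rec in records}))
```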
nips_2021_Z2LtauFNu2r
Deeply Shared Filter Bases for Parameter-Efficient Convolutional Neural Networks
Modern convolutional neural networks (CNNs) have massive identical convolution blocks, and, hence, recursive sharing of parameters across these blocks has been proposed to reduce the number of parameters. However, naive sharing of parameters poses many challenges such as limited representational power and the vanishing/exploding gradients problem of recursively shared parameters. In this paper, we present a recursive convolution block design and training method, in which a recursively shareable part, or a filter basis, is separated and learned while effectively avoiding the vanishing/exploding gradients problem during training. We show that the unwieldy vanishing/exploding gradients problem can be controlled by enforcing the elements of the filter basis to be orthonormal, and empirically demonstrate that the proposed orthogonality regularization improves the flow of gradients during training. Experimental results on image classification and object detection show that our approach, unlike previous parameter-sharing approaches, does not trade performance to save parameters and consistently outperforms over-parameterized counterpart networks. This superior performance demonstrates that the proposed recursive convolution block design and the orthogonality regularization not only prevent performance degradation, but also consistently improve the representation capability while a significant number of parameters is recursively shared.
accept
The authors propose to use a recursively shared filter basis for compact CNNs. I agree with most reviewers that the idea of a shared filter basis in CNNs is not new and similar systems have been discussed before. However, ideas such as imposing an orthogonality constraint on the shared basis seem interesting, and their effectiveness is well explained. Extensive experiments have been conducted to show improvement. 3 out of 4 reviewers are inclined to accept the paper.
val
[ "1RG-zUVEqr", "7af6jMyNzvm", "zV65UwdyVl", "RqaP9h-hNU6", "Rl9xA6j97ta", "kpSQIa8etA", "1SkdSPw96Q", "zRovRNTL0RV" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper describes a method to share parameters between iterations of a residual block. Rather than sharing all parameters or none, a filter basis W_basis and mixture coefficients alpha are defined; the basis is shared between layers, while the coefficients are unshared. A theoretical argument shows a potentia...
[ 6, 6, -1, -1, -1, -1, 6, 5 ]
[ 4, 5, -1, -1, -1, -1, 3, 5 ]
[ "nips_2021_Z2LtauFNu2r", "nips_2021_Z2LtauFNu2r", "7af6jMyNzvm", "1RG-zUVEqr", "1SkdSPw96Q", "zRovRNTL0RV", "nips_2021_Z2LtauFNu2r", "nips_2021_Z2LtauFNu2r" ]
nips_2021_iCoK73Q9TW2
On Optimal Robustness to Adversarial Corruption in Online Decision Problems
Shinji Ito
accept
The reviewers unanimously support acceptance of the paper.
val
[ "cRpntbVGvhO", "so3_7kenod", "j-bQILespz", "AyFYe53mcOK", "RaczEkNWJQo", "bfVAZ3hMad7", "YYF4uyXFyjG", "S3C7Il2d4SH", "-dhlZRBvn2", "TWLl4o6yJz3", "NgdCCHHKbT", "JfZfrp57-tc" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer M84b,\n\nThank you for reading and commenting on our response.\n\n> This suggests that some existing results should be comparable to your work. May you include some comparisons and discussions among the theoretical results in the paper?\n\nYes, there are several comparable existing studies, as you p...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "j-bQILespz", "nips_2021_iCoK73Q9TW2", "S3C7Il2d4SH", "RaczEkNWJQo", "NgdCCHHKbT", "JfZfrp57-tc", "NgdCCHHKbT", "so3_7kenod", "TWLl4o6yJz3", "nips_2021_iCoK73Q9TW2", "nips_2021_iCoK73Q9TW2", "nips_2021_iCoK73Q9TW2" ]
nips_2021_AhlzUugOFIo
Directed Spectrum Measures Improve Latent Network Models Of Neural Populations
Systems neuroscience aims to understand how networks of neurons distributed throughout the brain mediate computational tasks. One popular approach to identify those networks is to first calculate measures of neural activity (e.g. power spectra) from multiple brain regions, and then apply a linear factor model to those measures. Critically, despite the established role of directed communication between brain regions in neural computation, measures of directed communication have been rarely utilized in network estimation because they are incompatible with the implicit assumptions of the linear factor model approach. Here, we develop a novel spectral measure of directed communication called the Directed Spectrum (DS). We prove that it is compatible with the implicit assumptions of linear factor models, and we provide a method to estimate the DS. We demonstrate that latent linear factor models of DS measures better capture underlying brain networks in both simulated and real neural recording data compared to available alternatives. Thus, linear factor models of the Directed Spectrum offer neuroscientists a simple and effective way to explicitly model directed communication in networks of neural populations.
accept
Through a productive discussion with the authors, the reviewers came to a borderline/accept consensus. The proposed "directed spectrum" measure is simple (in fact, the authors say it was introduced as an intermediate quantity in a 1982 paper by Geweke), but appears to have some nice properties. While some reviewers praised the clarity of this paper, Reviewer 3jsZ and I found the paper rather dense (and the unnecessary subscripts didn't help). I hope the authors will work to improve this aspect of the paper before publication.
val
[ "k0WUPqFJBn3", "PbgeO636Xi", "GXldP4_noWK", "sFn52g866_k", "SpQ7L_kE0rj", "v19AJRTaPkv", "VhSX4TwyIfS", "tLacMTjQFfs", "gmKk2YPGczW", "35xq0rD3xT", "oUxgbU3YZn3", "pX6rNl7lCdh" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The main goal of the Directed Spectrum is to provide a simple method for estimating directed communication within latent brain networks. With regards to d), we emphasize that using fft or other ‘raw’ features is a complementary, rather than competing, method that achieves a different goal. Based on your request, ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "GXldP4_noWK", "sFn52g866_k", "tLacMTjQFfs", "gmKk2YPGczW", "v19AJRTaPkv", "pX6rNl7lCdh", "oUxgbU3YZn3", "35xq0rD3xT", "nips_2021_AhlzUugOFIo", "nips_2021_AhlzUugOFIo", "nips_2021_AhlzUugOFIo", "nips_2021_AhlzUugOFIo" ]
nips_2021_ZUvaSolQZh3
Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble
Offline reinforcement learning (offline RL), which aims to find an optimal policy from a previously collected static dataset, bears algorithmic difficulties due to function approximation errors from out-of-distribution (OOD) data points. To this end, offline RL algorithms adopt either a constraint or a penalty term that explicitly guides the policy to stay close to the given dataset. However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, which themselves can be a non-trivial problem. Moreover, these methods under-utilize the generalization ability of deep neural networks and often fall into suboptimal solutions too close to the given dataset. In this work, we propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution. We show that the clipped Q-learning, a technique widely used in online RL, can be leveraged to successfully penalize OOD data points with high prediction uncertainties. Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with the clipped Q-learning. Based on this observation, we propose an ensemble-diversified actor-critic algorithm that reduces the number of required ensemble networks down to a tenth compared to the naive ensemble while achieving state-of-the-art performance on most of the D4RL benchmarks considered.
accept
The authors note that using an ensemble of N networks in SAC and using the minimum for the Bellman backup performs well on offline RL. This is a simple approach and an interesting observation. As using a large N is computationally expensive, they develop an approach to achieve similar results with many fewer networks. After the author response, all reviewers and I found the empirical section to clearly demonstrate the strength of their approach. Although there are some concerns about novelty, I think the strong empirical results and simplicity of their approach make this paper a worthwhile contribution. The clarifications made during the response phase were important, so I encourage the authors to revise the paper with these in mind.
train
[ "Zl_-NhxT95H", "1PMXhPCbEeb", "CHs_gHImdc", "AYVD6dTr38b", "t4X1GapmzWX", "4UuK8MPjZS", "_JgknzlouZ", "f057_XRafDh", "RZEGnR4gK_1", "3QRAxhtivHH", "ZK3JykrmoK7", "RCj1J1OkVzW", "BfBCh9Du82", "pIaEBkttDJE", "tYCAnXWQWK3" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response.\n\nYour feedback has helped us improve our paper in terms of conciseness. We also appreciate your suggesting a notation for the update rule of our algorithm.\n\nPlease let us know if there are additional questions!", " Thank you for the response.\n\nWe are glad to have addressed some...
[ -1, -1, 6, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "4UuK8MPjZS", "AYVD6dTr38b", "nips_2021_ZUvaSolQZh3", "ZK3JykrmoK7", "nips_2021_ZUvaSolQZh3", "_JgknzlouZ", "f057_XRafDh", "RZEGnR4gK_1", "t4X1GapmzWX", "nips_2021_ZUvaSolQZh3", "CHs_gHImdc", "tYCAnXWQWK3", "pIaEBkttDJE", "nips_2021_ZUvaSolQZh3", "nips_2021_ZUvaSolQZh3" ]
nips_2021_q88AMOYEKLa
Distribution-free inference for regression: discrete, continuous, and in between
Yonghoon Lee, Rina Barber
accept
The consensus of the reviewing committee is that the nice theoretical contributions of this paper are sufficient for a NeurIPS publication. There are some concerns about the importance of these results in applications, and the authors are encouraged to address this concern in their revision.
train
[ "6A1ImNn5kCh", "fxomUPtz08D", "hX12bSN44Id", "kY58TmLKsWq", "Olaku0HsP3M", "LORFiC7j3am", "nhovBeX_r2P", "vrU8--lAE4", "KufVw2o6B9", "YRp4jaeuXr", "Vtu0BF5OkMY", "_cDYMoaM0H", "IpjGjkKFqWm", "JUohQjGgwty", "HdWbdvc_6q0" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper the authors provide theoretical results on the type of problems that can be accurately learned (from a confidence interval standpoint) depending on the properties of the data distribution.\nThis departs from the usual convention of assuming as little as possible on the distribution, by adding a fairl...
[ 7, -1, 5, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 8 ]
[ 2, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_q88AMOYEKLa", "Vtu0BF5OkMY", "nips_2021_q88AMOYEKLa", "vrU8--lAE4", "nhovBeX_r2P", "nips_2021_q88AMOYEKLa", "_cDYMoaM0H", "HdWbdvc_6q0", "hX12bSN44Id", "JUohQjGgwty", "6A1ImNn5kCh", "LORFiC7j3am", "nips_2021_q88AMOYEKLa", "nips_2021_q88AMOYEKLa", "nips_2021_q88AMOYEKLa" ]
nips_2021_TJOQw_vMlAj
Statistical Inference with M-Estimators on Adaptively Collected Data
Bandit algorithms are increasingly used in real-world sequential decision-making problems. Associated with this is an increased desire to be able to use the resulting datasets to answer scientific questions like: Did one type of ad lead to more purchases? In which contexts is a mobile health intervention effective? However, classical statistical approaches fail to provide valid confidence intervals when used with data collected with bandit algorithms. Alternative methods have recently been developed for simple models (e.g., comparison of means). Yet there is a lack of general methods for conducting statistical inference using more complex models on data collected with (contextual) bandit algorithms; for example, current methods cannot be used for valid inference on parameters in a logistic regression model for a binary reward. In this work, we develop theory justifying the use of M-estimators---which includes estimators based on empirical risk minimization as well as maximum likelihood---on data collected with adaptive algorithms, including (contextual) bandit algorithms. Specifically, we show that M-estimators, modified with particular adaptive weights, can be used to construct asymptotically valid confidence regions for a variety of inferential targets.
accept
The expert reviewers all appreciated the paper and agree it provides useful results and that the paper should be accepted. The authors are to be congratulated for an interesting contribution that nicely handles a very timely topic. The authors are expected to address the points raised by reviewers in a final version, including as they outlined in their response. In my opinion the most important thing to address is to make very clear early on the setting where the method is appropriate, namely well-specified parametric models, and that this excludes, for example, a constant model for the mean reward of an arm or policy. This is not to detract from the work -- rather, making clear what the paper sets out to do and what it does not will significantly improve clarity for the reader and therefore the paper's impact. A discussion regarding what would actually happen under misspecification (and whether slight misspecification results in only slight aberrations from the results) would potentially add a lot too (I believe the target estimand would then depend on the logging policies, if I understand correctly).
test
[ "0Sw0Dbyyhqj", "l7kLGA8qACN", "QPOSpQhQvg-", "zRP-DH5G_OA", "Jraj2oqYxXy", "lKXQ3-IUGj4", "Q9sJULFk0T", "mD0V_APq0s" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for taking the time to read our paper and for your thoughtful comments! We really appreciate that you found our paper to be well-written and that it makes a significant contribution to the area.\n\nRegarding the minor comments that you had:\n\n1. The step we take in equality (a) actually can be ...
[ -1, -1, -1, -1, 7, 6, 7, 6 ]
[ -1, -1, -1, -1, 4, 4, 5, 5 ]
[ "Jraj2oqYxXy", "lKXQ3-IUGj4", "Q9sJULFk0T", "mD0V_APq0s", "nips_2021_TJOQw_vMlAj", "nips_2021_TJOQw_vMlAj", "nips_2021_TJOQw_vMlAj", "nips_2021_TJOQw_vMlAj" ]
nips_2021_VKVShLsAuZ
NeuroLKH: Combining Deep Learning Model with Lin-Kernighan-Helsgaun Heuristic for Solving the Traveling Salesman Problem
We present NeuroLKH, a novel algorithm that combines deep learning with the strong traditional heuristic Lin-Kernighan-Helsgaun (LKH) for solving the Traveling Salesman Problem. Specifically, we train a Sparse Graph Network (SGN) with supervised learning for edge scores and unsupervised learning for node penalties, both of which are critical for improving the performance of LKH. Based on the output of SGN, NeuroLKH creates the edge candidate set and transforms edge distances to guide the search process of LKH. Extensive experiments firmly demonstrate that, by training one model on a wide range of problem sizes, NeuroLKH significantly outperforms LKH and generalizes well to much larger sizes. Also, we show that NeuroLKH can be applied to other routing problems such as the Capacitated Vehicle Routing Problem (CVRP), the Pickup and Delivery Problem (PDP), and CVRP with Time Windows (CVRPTW).
accept
The reviewers all agreed that the combination of learning techniques to augment traditional problem solvers is an exciting direction, and this paper adds to that direction. Reviewer dK7t was ultimately convinced by the author's response and material in the appendix (Figure S.1) that the contribution was worthwhile even if the objective improvement was small in many of the TSPs. I think the paper could be improved by directly addressing this in the paper: both the discussion of relative improvement and highlighting the larger improvements in other domains.
train
[ "iYD1gSpffut", "q-kASJ_M7B9", "EaXPy-kK41", "eKPGI17uNzI", "-lyyL2Yku7U", "o1eFFaZCmp1", "jdhk4SPubRJ", "x5gH8Oh6fHP", "TbgcBhC9NBW", "WxRKXU0MjPU", "cBbk4jOzuEJ", "y2-V8fjYefm", "DC2JZ6fG2B", "-jKY_9VywBz", "c33cUSDBYxJ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you very much for the response. We are very glad that most of your concerns have been addressed. Regarding the last concern on the reasons behind the good generalization ability of our proposed method, like we briefly discussed in the point 8.(2) of our response, it is more related to the task formulation a...
[ -1, 6, -1, -1, 7, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 3, -1, -1, 5, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "EaXPy-kK41", "nips_2021_VKVShLsAuZ", "y2-V8fjYefm", "o1eFFaZCmp1", "nips_2021_VKVShLsAuZ", "DC2JZ6fG2B", "TbgcBhC9NBW", "nips_2021_VKVShLsAuZ", "-jKY_9VywBz", "c33cUSDBYxJ", "q-kASJ_M7B9", "q-kASJ_M7B9", "-lyyL2Yku7U", "x5gH8Oh6fHP", "nips_2021_VKVShLsAuZ" ]
nips_2021_Pf9RjFoUdLZ
LSH-SMILE: Locality Sensitive Hashing Accelerated Simulation and Learning
The advancement of deep neural networks over the last decade has enabled progress in scientific knowledge discovery in the form of learning Partial Differential Equations (PDEs) directly from experiment data. Nevertheless, forward simulation and backward learning of large-scale dynamic systems require handling billions of mutually interacting elements, the scale of which overwhelms current computing architectures. We propose Locality Sensitive Hashing Accelerated Simulation and Learning (LSH-SMILE), a unified framework to scale up both forward simulation and backward learning of physics systems. LSH-SMILE takes advantage of (i) the locality of PDE updates, (ii) similar temporal dynamics shared by multiple elements. LSH-SMILE hashes elements with similar dynamics into a single hash bucket and handles their updates at once. This allows LSH-SMILE to scale with respect to the number of non-empty hash buckets, a drastic improvement over conventional approaches. Theoretically, we prove a novel bound on the errors introduced by LSH-SMILE. Experimentally, we demonstrate that LSH-SMILE simulates physics systems at comparable quality with exact approaches, but with way less time and space complexity. Such savings also translate to better learning performance due to LSH-SMILE's ability to propagate gradients over a long duration.
accept
All the reviewers concurred that this paper is above the bar for publication. The rebuttal reaffirmed that sentiment. Reviewers like the idea of using LSH for speeding up simulations, in particular applying the LSH algorithm to a completely new domain, i.e., physics systems. The results look promising, and since the paper belongs to interdisciplinary research, reviewers agree that even though the paper is less advanced in algorithmic techniques, it will be an instructive read for a broader community. Please take into account the comments from the reviewers to improve the paper.
train
[ "kJ9fx-56Qek", "fOCBlMO018f", "RucR0qhP8Hr", "hGFt1X30zH_", "I0e2lsqQpQ", "OdOsjT1Hmv", "J8xcxHFKc2q", "ECntxxRrep5", "IOm2E3DBkXA" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a method (LSH-SMILE) that applies LSH to speed up PDE forward simulation and backward learning.\nObserving that most of the elements are very similar, they can be hashed into buckets and each bucket can be represented by a single value, thus reducing computational and memory cost. \nThe method i...
[ 6, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "nips_2021_Pf9RjFoUdLZ", "RucR0qhP8Hr", "IOm2E3DBkXA", "ECntxxRrep5", "J8xcxHFKc2q", "kJ9fx-56Qek", "nips_2021_Pf9RjFoUdLZ", "nips_2021_Pf9RjFoUdLZ", "nips_2021_Pf9RjFoUdLZ" ]
nips_2021_MTs2adH_Qq
Meta-learning with an Adaptive Task Scheduler
To benefit the learning of a new task, meta-learning has been proposed to transfer a well-generalized meta-model learned from various meta-training tasks. Existing meta-learning algorithms randomly sample meta-training tasks with a uniform probability, under the assumption that tasks are of equal importance. However, it is likely that tasks are detrimental due to noise or imbalanced, given a limited number of meta-training tasks. To prevent the meta-model from being corrupted by such detrimental tasks or dominated by tasks in the majority, in this paper, we propose an adaptive task scheduler (ATS) for the meta-training process. In ATS, for the first time, we design a neural scheduler to decide which meta-training tasks to use next by predicting the probability of being sampled for each candidate task, and train the scheduler to optimize the generalization capacity of the meta-model to unseen tasks. We identify two meta-model-related factors as the input of the neural scheduler, which characterize the difficulty of a candidate task to the meta-model. Theoretically, we show that a scheduler taking the two factors into account improves the meta-training loss and also the optimization landscape. Under the setting of meta-learning with noise and limited budgets, ATS improves the performance on both miniImageNet and a real-world drug discovery benchmark by up to 13% and 18%, respectively, compared to state-of-the-art task schedulers.
accept
The idea of the task scheduler for meta-learning is interesting. The solution is technically sound and intuitive. The theoretical analysis is also interesting, although it requires more discussion of when the assumption can hold. The experiments are quite exhaustive. The discussion of the computational cost is non-trivial, and a more detailed analysis is needed.
test
[ "lV44hVTgPC", "Nho_CRSa4P", "vLI9VC5-pNn", "NmHNyYVsC_", "zY-vIsu0HL8", "6edUd_ozWmJ", "SN-PVmJh8Bx", "Uv0vzSTrJzN", "k6o4WyAHPIk", "mLT40Mella", "16xU_hwWHW6", "tSX-DCORlS", "fYdoFxtuKD", "YCPFU-PlsI", "vnVoVnXfEj", "jQlb4WU19BE", "EmQGCkbXllS", "WuSA8DX1rns" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed Adaptive Task Scheduler (ATS)\u0010 for meta-learning, which trains a neural network to adaptively select tasks. A bi-level optimization strategy is used to optimize the the meta-model and neural scheduler. Experiments are conducted on miniImagenet and drug compounds, which beats the baseline ...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_MTs2adH_Qq", "NmHNyYVsC_", "6edUd_ozWmJ", "zY-vIsu0HL8", "SN-PVmJh8Bx", "jQlb4WU19BE", "tSX-DCORlS", "k6o4WyAHPIk", "mLT40Mella", "fYdoFxtuKD", "nips_2021_MTs2adH_Qq", "YCPFU-PlsI", "16xU_hwWHW6", "lV44hVTgPC", "EmQGCkbXllS", "WuSA8DX1rns", "nips_2021_MTs2adH_Qq", "nips_...
nips_2021_iPHnzuU6S94
Neural Active Learning with Performance Guarantees
We investigate the problem of active learning in the streaming setting in non-parametric regimes, where the labels are stochastically generated from a class of functions on which we make no assumptions whatsoever. We rely on recently proposed Neural Tangent Kernel (NTK) approximation tools to construct a suitable neural embedding that determines the feature space the algorithm operates on and the learned model computed atop. Since the shape of the label requesting threshold is tightly related to the complexity of the function to be learned, which is a-priori unknown, we also derive a version of the algorithm which is agnostic to any prior knowledge. This algorithm relies on a regret balancing scheme to solve the resulting online model selection problem, and is computationally efficient. We prove joint guarantees on the cumulative regret and number of requested labels which depend on the complexity of the labeling function at hand. In the linear case, these guarantees recover known minimax results of the generalization error as a function of the label complexity in a standard statistical learning setting.
accept
In the context of streaming data, this paper addresses active learning through a neural contextual bandit and derives theoretical guarantees by relying on the theory of Neural Tangent Kernel (NTK) approximation. The contributions (a novel algorithm and its theoretical analysis) are original and relevant. Most of the questions from the reviewers were addressed during the rebuttal. There are still a few concerns about related work and the experiments that the authors are kindly asked to take into account when preparing the final version.
train
[ "zBc4k9rK7QB", "m099szUaFC1", "7FHYkPawF4", "zSHr7ok6W17", "QGIv_k_TVM", "MVUOFk9TGip", "qIGfNOeA2rk", "XKRIBrCOcL3", "eXjteEa0Gs", "zVvC8OcwVCk", "XPtsvVw96bG", "sieEOcV364y" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you. Your response was enlightening.", " Thank you for clarifying these points!", " We thank the reviewer for their additional suggestions which we will take into duly consideration in drafting the revised version of this paper.\n\nThe Authors", " Thank you for your reply. I read the rebuttal and chec...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 4, 1 ]
[ "qIGfNOeA2rk", "XKRIBrCOcL3", "zSHr7ok6W17", "QGIv_k_TVM", "XPtsvVw96bG", "sieEOcV364y", "zVvC8OcwVCk", "eXjteEa0Gs", "nips_2021_iPHnzuU6S94", "nips_2021_iPHnzuU6S94", "nips_2021_iPHnzuU6S94", "nips_2021_iPHnzuU6S94" ]
nips_2021_-sQ1LLWIAAJ
A Gradient Method for Multilevel Optimization
Ryo Sato, Mirai Tanaka, Akiko Takeda
accept
The reviewers unanimously agree that this paper proposes a novel and non-trivial method for an important problem. Some reviewers also raised concerns regarding clarity that have been addressed by the authors during the rebuttal phase. I urge the authors to incorporate their feedback in the camera-ready.
train
[ "hnEgzKOmqg5", "0-YK2ntk15i", "nEm15xBDFz6", "6dsFp0fYXz", "GSjLknsNpRG", "vz2L8byttrJ", "sq1FPNbnFuJ", "JLbc6NDABri", "5BuvduE2K7A", "INNHJ0gbPz", "GedTAKq2BCF", "0lLe0qiITZ", "J1K1I-WkdL", "KLjmSABV0Ae", "IAlu2cNZB6h" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ " First, we have to say that there was confusion among the authors\ndue to the complicated structure of multilevel optimization.\nWe now have come to an agreement and we would like to share it to respond to your questions.\nIn this reply, we call the upper and lower problems in the above two problems (A3.1) and (A3...
[ -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nEm15xBDFz6", "IAlu2cNZB6h", "GSjLknsNpRG", "sq1FPNbnFuJ", "GedTAKq2BCF", "nips_2021_-sQ1LLWIAAJ", "JLbc6NDABri", "5BuvduE2K7A", "INNHJ0gbPz", "J1K1I-WkdL", "0-YK2ntk15i", "KLjmSABV0Ae", "vz2L8byttrJ", "nips_2021_-sQ1LLWIAAJ", "nips_2021_-sQ1LLWIAAJ" ]
nips_2021_vwgsqRorzz
Edge Representation Learning with Hypergraphs
Graph neural networks have recently achieved remarkable success in representing graph-structured data, with rapid progress in both the node embedding and graph pooling methods. Yet, they mostly focus on capturing information from the nodes considering their connectivity, and not much work has been done in representing the edges, which are essential components of a graph. However, for tasks such as graph reconstruction and generation, as well as graph classification tasks for which the edges are important for discrimination, accurately representing edges of a given graph is crucial to the success of the graph representation learning. To this end, we propose a novel edge representation learning framework based on Dual Hypergraph Transformation (DHT), which transforms the edges of a graph into the nodes of a hypergraph. This dual hypergraph construction allows us to apply message-passing techniques for node representations to edges. After obtaining edge representations from the hypergraphs, we then cluster or drop edges to obtain holistic graph-level edge representations. We validate our edge representation learning method with hypergraphs on diverse graph datasets for graph representation and generation performance, on which our method largely outperforms existing graph representation learning methods. Moreover, our edge representation learning and pooling method also largely outperforms state-of-the-art graph pooling methods on graph classification, not only because of its accurate edge representation learning, but also due to its lossless compression of the nodes and removal of irrelevant edges for effective message-passing.
accept
This paper proposes an edge representation learning framework for graphs based on a hypergraph transformation. The reviewers agreed that the problem motivation has some merits. However, all the reviewers had a hard time discerning the novelty of the proposed work and the significance of the contribution. In a future submission, it would help to have a better description of the novelty and significance of the contribution, and to contrast it with existing work more clearly.
test
[ "eC_nbf8Yayb", "9T2Lx7W-wFn", "q843IQ6xuFf", "gBSvBNI4ZAW", "xcDQ5pNRUZH", "fKLOAxHZCRh", "sUeOCyabGMJ", "wctiG_ZFfo8", "RkLn67nKhcX", "-bkzFwiV2UY", "cyCsgaASaXL", "3aLz67_TWyp" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your constructive and helpful comments. We appreciate your positive comments that our work has a good motivation focusing on an interesting problem, which is original and novel, and further the proposed idea is good. We address all your concerns below:\n\n---\n**Question 1-1:** In terms...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "-bkzFwiV2UY", "q843IQ6xuFf", "gBSvBNI4ZAW", "xcDQ5pNRUZH", "fKLOAxHZCRh", "RkLn67nKhcX", "cyCsgaASaXL", "3aLz67_TWyp", "nips_2021_vwgsqRorzz", "nips_2021_vwgsqRorzz", "nips_2021_vwgsqRorzz", "nips_2021_vwgsqRorzz" ]
nips_2021_e8blYRui3j
One Question Answering Model for Many Languages with Cross-lingual Dense Passage Retrieval
We present Cross-lingual Open-Retrieval Answer Generation (CORA), the first unified many-to-many question answering (QA) model that can answer questions across many languages, even for ones without language-specific annotated data or knowledge sources. We introduce a new dense passage retrieval algorithm that is trained to retrieve documents across languages for a question. Combined with a multilingual autoregressive generation model, CORA answers directly in the target language without any translation or in-language retrieval modules as used in prior work. We propose an iterative training method that automatically extends annotated data available only in high-resource languages to low-resource ones. Our results show that CORA substantially outperforms the previous state of the art on multilingual open QA benchmarks across 26 languages, 9 of which are unseen during training. Our analyses show the significance of cross-lingual retrieval and generation in many languages, particularly under low-resource settings.
accept
The paper tackles the problem of multilingual open domain question answering, introducing an iterative approach to mining answer passages. There is not a clear consensus amongst the reviewers. It is clear that this is an interesting and under-explored problem, and that the proposed solution works well. Reviewer qfFE argues strongly for acceptance based on the novelty of the setting, the strong empirical comparisons, and the potential for inspiring future work. The major criticism made of the paper is its lack of technical novelty, and reviewers agree that similar iterative retrieval ideas have been proposed in models such as RAG and CRISS. However, there is a non-trivial modeling contribution over these papers to extend them successfully to multilingual open domain QA. Reviewer nAYP also feels that the baselines are weak, but I agree with reviewer qfFE that the strong set of baselines is a significant contribution of the work. Overall, I recommend acceptance.
train
[ "I9K8SyKQT5P", "PrTAdNHc0Ac", "rvE7I0-iHxe", "8dBSktwwkzZ", "SgwCfm0dEuf", "CFTLF0UpOkP", "aEVzhpI3b-V", "lR113ewz9dw", "_sFWXQ0bFPg", "R1q90572LEx", "klEigqHs8gL", "ri7x9kCJLeq", "0aZCOTpjL-3" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for involving the discussion. \nAs listed in our author response, we politely disagree that this work is similar to CRISS and is a simple extension of it. Again, the training schema is significantly different from CRISS (our response, 1-2. Difference in training), which is necessary to achieve many-to-m...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5, 9, 4 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "SgwCfm0dEuf", "8dBSktwwkzZ", "nips_2021_e8blYRui3j", "R1q90572LEx", "klEigqHs8gL", "aEVzhpI3b-V", "0aZCOTpjL-3", "ri7x9kCJLeq", "klEigqHs8gL", "rvE7I0-iHxe", "nips_2021_e8blYRui3j", "nips_2021_e8blYRui3j", "nips_2021_e8blYRui3j" ]
nips_2021_HD6CxZtbmIx
LEADS: Learning Dynamical Systems that Generalize Across Environments
When modeling dynamical systems from real-world data samples, the distribution of data often changes according to the environment in which they are captured, and the dynamics of the system itself vary from one environment to another. Generalizing across environments thus challenges the conventional frameworks. The classical settings suggest either considering data as i.i.d. and learning a single model to cover all situations or learning environment-specific models. Both are sub-optimal: the former disregards the discrepancies between environments leading to biased solutions, while the latter does not exploit their potential commonalities and is prone to scarcity problems. We propose LEADS, a novel framework that leverages the commonalities and discrepancies among known environments to improve model generalization. This is achieved with a tailored training formulation aiming at capturing common dynamics within a shared model while additional terms capture environment-specific dynamics. We ground our approach in theory, exhibiting a decrease in sample complexity w.r.t. classical alternatives. We show how theory and practice coincide on the simplified case of linear dynamics. Moreover, we instantiate this framework for neural networks and evaluate it experimentally on representative families of nonlinear dynamics. We show that this new setting can exploit knowledge extracted from environment-dependent data and improves generalization for both known and novel environments.
accept
From the SAC: This is an instance where a good rebuttal has helped. The original decision on this paper was a reject, but the SAC felt that your rebuttal was very thorough and detailed. Hence I am recommending moving this paper to an accept. Please do make sure in the next version of the paper to take all reviewer comments into account as well as to merge into the paper the additional discussion in your rebuttal (perhaps, due to space considerations, into the appendices).
train
[ "BfBiyCci14j", "D_ISc5X0xe3", "BnORH9Vd_RS", "nePZTUKTeLI", "5Hk4Z5CeGjx", "AbJrCS8L5wa", "DP3CMBmouXu", "Pk5nWoSxVz", "R4ifD6tOqJ1", "pQWugyIsUDe", "CQkkYVlrXH", "5Q2PG_AJgxB", "qth090c1bzp", "55dYWDQvTg5", "LDWQgUw5jFT" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks, and thanks a lot again for your time and consideration", "This paper proposes a new framework (LEADS) for learning dynamical systems (ODEs, PDEs) with the aim of generalising across related, but slightly different environments. It uses an additive two-component model, comprising a shared model and an en...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "BnORH9Vd_RS", "nips_2021_HD6CxZtbmIx", "5Hk4Z5CeGjx", "D_ISc5X0xe3", "AbJrCS8L5wa", "5Q2PG_AJgxB", "CQkkYVlrXH", "nips_2021_HD6CxZtbmIx", "LDWQgUw5jFT", "qth090c1bzp", "55dYWDQvTg5", "D_ISc5X0xe3", "nips_2021_HD6CxZtbmIx", "nips_2021_HD6CxZtbmIx", "nips_2021_HD6CxZtbmIx" ]
nips_2021_KAFyFabsK88
Storchastic: A Framework for General Stochastic Automatic Differentiation
Modelers use automatic differentiation (AD) of computation graphs to implement complex Deep Learning models without defining gradient computations. Stochastic AD extends AD to stochastic computation graphs with sampling steps, which arise when modelers handle the intractable expectations common in Reinforcement Learning and Variational Inference. However, current methods for stochastic AD are limited: They are either only applicable to continuous random variables and differentiable functions, or can only use simple but high variance score-function estimators. To overcome these limitations, we introduce Storchastic, a new framework for AD of stochastic computation graphs. Storchastic allows the modeler to choose from a wide variety of gradient estimation methods at each sampling step, to optimally reduce the variance of the gradient estimates. Furthermore, Storchastic is provably unbiased for estimation of any-order gradients, and generalizes variance reduction techniques to higher-order gradient estimates. Finally, we implement Storchastic as a PyTorch library at github.com/HEmile/storchastic.
accept
This paper introduces a new framework for stochastic gradient estimation that encompasses various existing gradient estimation techniques. The unification of several gradient estimation techniques into a single framework is recognized by the reviewers as a somewhat significant contribution. The reviewers did not raise any severe concerns, so overall, I think this is a worthwhile contribution as a poster. One reviewer asked for a case study, which I would like the authors to include in a revision given that they promised to do so.
train
[ "__7WZ6PtVAm", "EcMs9mw0ZJn", "zwfwbnVrnLy", "n2IDEIOceNm", "29FnOFsfFzD", "cdod3MdgsKr", "AAxO9EVnMoA", "Ykbscs0QhWD", "qUNXhzn38G4", "kmCmYotEFve" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the reply, and agreeing to include a case study in the paper. The motivating example is also helpful to a certain extent.\n\nAlthough my instinct is still that this is a good paper and I am leaning towards acceptance even in its current form, I will keep my score at 6 and remain lower confidence. Se...
[ -1, -1, -1, -1, -1, -1, 6, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 2, 3, 3, 3 ]
[ "EcMs9mw0ZJn", "kmCmYotEFve", "qUNXhzn38G4", "Ykbscs0QhWD", "AAxO9EVnMoA", "nips_2021_KAFyFabsK88", "nips_2021_KAFyFabsK88", "nips_2021_KAFyFabsK88", "nips_2021_KAFyFabsK88", "nips_2021_KAFyFabsK88" ]
nips_2021_WJPAqX5M-2
Concentration inequalities under sub-Gaussian and sub-exponential conditions
We prove analogues of the popular bounded difference inequality (also called McDiarmid's inequality) for functions of independent random variables under sub-Gaussian and sub-exponential conditions. Applied to vector-valued concentration and the method of Rademacher complexities, these inequalities allow an easy extension of uniform convergence results for PCA and linear regression to the case of potentially unbounded input and output variables.
accept
The referees are in agreement that this submission provides novel concentration-of-measure inequalities. It is very much within the conference scope and of sufficient interest and novelty. All of the referee objections have been addressed during the discussion phase.
train
[ "k9dUN1LXMoR", "SmCQAsc0xWq", "1pJck-QIB8Q", "gpqcn8HC6rC", "7XHoQMfGi1h", "WCBJ404x9BE", "267j-QR21iE", "thUzUCwCdV4", "xCzuH7L7APw", "tjrxQ9E4lwj", "0nEHaOiJfW6", "W_44S5qd-D0", "FxUGGkEKOTO" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " The reviewer thanks the authors for their response, and changes the score accordingly from 6 to 7: Accept.\n", "McDiarmid's inequality is one of the workhorses of modern machine learning. In its simplest form, however, it requires the function under consideration to verify a bounded differences property. This w...
[ -1, 7, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 9 ]
[ -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "tjrxQ9E4lwj", "nips_2021_WJPAqX5M-2", "gpqcn8HC6rC", "thUzUCwCdV4", "nips_2021_WJPAqX5M-2", "267j-QR21iE", "7XHoQMfGi1h", "W_44S5qd-D0", "FxUGGkEKOTO", "SmCQAsc0xWq", "7XHoQMfGi1h", "nips_2021_WJPAqX5M-2", "nips_2021_WJPAqX5M-2" ]
nips_2021_FKmcLhJ4mn
Variance-Aware Off-Policy Evaluation with Linear Function Approximation
We study the off-policy evaluation (OPE) problem in reinforcement learning with linear function approximation, which aims to estimate the value function of a target policy based on the offline data collected by a behavior policy. We propose to incorporate the variance information of the value function to improve the sample efficiency of OPE. More specifically, for time-inhomogeneous episodic linear Markov decision processes (MDPs), we propose an algorithm, \texttt{VA-OPE}, which uses the estimated variance of the value function to reweight the Bellman residual in Fitted Q-Iteration. We show that our algorithm achieves a tighter error bound than the best-known result. We also provide a fine-grained characterization of the distribution shift between the behavior policy and the target policy. Extensive numerical experiments corroborate our theory.
accept
This paper studies off-policy evaluation in linear MDPs. The authors extend the FQI work of [10] by reweighting the Bellman residual by an estimate of the value function variance. This slightly improves the instance-dependent sample efficiency, by up to a factor of H. The paper is fairly incremental compared to [10] and of somewhat limited scope, but is executed well enough overall to warrant acceptance.
train
[ "PDyJNwpJL5B", "nppUzMhXosr", "ycNtGycUmK6", "DAglMuLl37e", "nAk2GibNmID", "yWcGVYgmnjM", "e9vK7OMGVdo", "32CjcSHzP6L", "wyYmoR_QmDS", "TDER4geAJHE" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are glad that our response has addressed your concerns.", "This paper proposes a new off-policy evaluation (OPE) algorithm in the finite horizon MDP setting. The reviewer finds the new results are sightly incremental compared with the existing results and thus suggests not accepting the paper. This paper pr...
[ -1, 6, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "ycNtGycUmK6", "nips_2021_FKmcLhJ4mn", "e9vK7OMGVdo", "32CjcSHzP6L", "TDER4geAJHE", "wyYmoR_QmDS", "nppUzMhXosr", "nips_2021_FKmcLhJ4mn", "nips_2021_FKmcLhJ4mn", "nips_2021_FKmcLhJ4mn" ]
nips_2021_AvVDR8R-kQX
A Provably Efficient Sample Collection Strategy for Reinforcement Learning
Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric
accept
This paper was well-received by the reviewers who all agreed that the paper studies an interesting problem and offers a solid solution. There were only minor concerns raised by one reviewer, but these were adequately addressed in the author response. Eventually, the reviewers all agreed that the paper should be accepted for publication at the conference.
train
[ "9LTB2XvSGjq", "39CpVefnnyy", "Ab5Etuk6b5p", "YT9zXL1aE-G", "YpqhVjm8lru", "2F_I7fT1p-", "fD9cI7c54LS", "CUbYWQECJX8", "AOAjNMhha0Z", "XgRTUba7WQm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their thorough replies. I am still convinced this work provides a strong contribution and I will stick with my positive evaluation.", " I have read the response, and thank the authors for answering my questions. I stick with my original (positive) evaluation of this paper."...
[ -1, -1, -1, -1, -1, -1, 7, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "Ab5Etuk6b5p", "YT9zXL1aE-G", "XgRTUba7WQm", "AOAjNMhha0Z", "fD9cI7c54LS", "CUbYWQECJX8", "nips_2021_AvVDR8R-kQX", "nips_2021_AvVDR8R-kQX", "nips_2021_AvVDR8R-kQX", "nips_2021_AvVDR8R-kQX" ]
nips_2021_V08W9xadLPV
Improved Regret Bounds for Tracking Experts with Memory
We address the problem of sequential prediction with expert advice in a non-stationary environment with long-term memory guarantees in the sense of Bousquet and Warmuth [4]. We give a linear-time algorithm that improves on the best known regret bound [27]. This algorithm incorporates a relative entropy projection step. This projection is advantageous over previous weight-sharing approaches in that weight updates may come with implicit costs as in for example portfolio optimization. We give an algorithm to compute this projection step in linear time, which may be of independent interest.
accept
The reviewers have agreed that the presented results are novel, the paper is nicely written, and contains good ideas. On the other hand, the paper is probably only interesting to a small part of the community, providing a little improvement to a well-studied problem, and the algorithm also needs to know the problem parameters in advance. Nevertheless, the positives outweigh the negatives, hence I recommend acceptance.
train
[ "F9PnV01yG2N", "QAACmFvW_A5", "Zxs_AAoGEDX", "vSGBrhgGBKj", "QkrNAJ0bC9", "4lzSALHK6N", "fq4OblTPeQ", "3nTZG2BWQkl", "xifqJl3Lwd", "9jnXul7yoT", "ms33ohv7022", "jX9mMPruKdj", "058vW5iAaE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies 'prediction with expert advice' in a non-stationary environment where the best expert is switching. The main contribution is an improved regret bound in the 'memory' setting (where a small number of experts is optimal), which is achieved with an efficient algorithm (in fact two variants are prop...
[ 6, -1, -1, 7, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "nips_2021_V08W9xadLPV", "vSGBrhgGBKj", "3nTZG2BWQkl", "nips_2021_V08W9xadLPV", "4lzSALHK6N", "vSGBrhgGBKj", "ms33ohv7022", "jX9mMPruKdj", "058vW5iAaE", "F9PnV01yG2N", "nips_2021_V08W9xadLPV", "nips_2021_V08W9xadLPV", "nips_2021_V08W9xadLPV" ]
nips_2021_v_4XcXsAZUn
Robustness of Graph Neural Networks at Scale
Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of applications. Yet, existing studies of their vulnerability to adversarial attacks rely on relatively small graphs. We address this gap and study how to attack and defend GNNs at scale. We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation despite optimizing over a number of parameters which is quadratic in the number of nodes. We show that common surrogate losses are not well-suited for global attacks on GNNs. Our alternatives can double the attack strength. Moreover, to improve GNNs' reliability we design a robust aggregation function, Soft Median, resulting in an effective defense at all scales. We evaluate our attacks and defense with standard GNNs on graphs more than 100 times larger compared to previous work. We even scale one order of magnitude further by extending our techniques to a scalable GNN.
accept
This paper tackles the adversarial robustness of large-scale GNNs, which is practically important but has been overlooked. The authors provide a novel observation that the conventional cross-entropy (and Carlini-Wagner) loss guides the attacker to attack nodes that are already misclassified when attacking large GNNs, which wastes the computing budget. Then, the authors show that under certain assumptions a budget-aware loss can achieve the optimal solution, and propose a realization of such a surrogate loss, Masked Cross Entropy (MCE), which only considers correctly classified samples and is thus more efficient. The authors further propose scalable attacks and a scalable defense mechanism which modifies the aggregation function in the message-passing framework. The experimental validation of the proposed attacks and defenses shows that they are indeed effective, and efficient in terms of computation and memory cost.

The following are the pros and cons of the paper mentioned in the initial reviews.

Pros
- The tackled problem, ensuring robustness of large-scale GNNs against adversarial attacks, is an important yet unexplored problem.
- The study of the drawbacks of the cross-entropy loss when attacking GNNs is both novel and interesting.
- The proposed surrogate loss, Masked Cross Entropy, is both novel and effective for attacking large-scale GNNs, and has the potential to be further explored for other adversarial defense problems.
- The proposed attack and defense methods are efficient and scalable, which makes the model practical.
- The paper is well-written and is accompanied by well-structured source code.

Cons
- The proposed attack and defense methods are heuristic and lack theoretical justification.
- The assumption of a white-box attack for large-scale GNNs is impractical.
- The assumption on the expected budget required to ensure the optimality of the solution with the budget-aware loss is unrealistic.
- The effectiveness of the proposed attacks is shown on a small-scale graph.
- Lack of discussion and experimental comparison against relevant baselines, such as Li et al. 20, which also considers defense against adversarial attacks on large graphs.
- Missing memory and time complexities of the different types of attacks.

While the initial reviews were a bit split, during the discussion period the reviewers and the authors actively engaged in very thorough, constructive discussions. This cleared away many of the concerns from the reviewers, and the reviewers unanimously agreed to accept the paper at the end. The authors convinced the reviewers that the assumption of a white-box attack is reasonable from the defender's point of view, and acknowledged the reviewers' arguments that a black-box attack could be more practical when considering the tradeoff between clean and robust accuracy. The authors provided a more detailed explanation of the assumption, asymptotic time and memory complexities, and a detailed discussion and experimental comparison against Li et al. 20. The lack of strict theoretical guarantees for the proposed attacks and defenses is not critical, as their effectiveness has been verified with extensive empirical analysis. In sum, this is a strong paper that tackles a novel and practical problem, analyzes the problem with an existing approach, and proposes an effective solution backed up with both theoretical and empirical analysis, which makes it a clear accept.

I praise the authors and the reviewers for their constructive and active discussions, and advise the authors to incorporate them into the revision, as well as the new results provided in the responses.
val
[ "pmJIwexmO19", "NkK2a3RnQUA", "hf2YyLxjar", "bnd9xw1Zsjo", "QhOJO5IGvb", "6CHdGhu32yQ", "-cb2_55YA9D", "1kQpjQay5_Q", "2OXesLuToya", "9Td8XZ731N", "h9FPzKIpH1G", "euEOLq5eyXI", "tk1grfOowIY", "X5os4MzXQAt", "D1zvSQi0OZ0", "G_s8SfWq_4j", "K4OIl3rnnkb", "eOUYqvyuU7V", "FzjO7HyGbV",...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "...
[ " Thank you very much for your efforts on experimental comparisons against the relevant work. I strongly believe that the additional results make the paper much more solid. I think this is the last comment that I sincerely hope the authors carefully reflect all the missing or unclear parts pointed out by reviewers ...
[ -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 2, 3 ]
[ "NkK2a3RnQUA", "2OXesLuToya", "bnd9xw1Zsjo", "9Td8XZ731N", "G_s8SfWq_4j", "h9FPzKIpH1G", "D1zvSQi0OZ0", "nips_2021_v_4XcXsAZUn", "tk1grfOowIY", "eOUYqvyuU7V", "K4OIl3rnnkb", "eOUYqvyuU7V", "M9nCcpvgMoI", "1kQpjQay5_Q", "1kQpjQay5_Q", "FzjO7HyGbV", "nips_2021_v_4XcXsAZUn", "nips_202...
nips_2021_ZPSD4xZc6j8
Random Noise Defense Against Query-Based Black-Box Attacks
Query-based black-box attacks pose serious threats to machine learning models in many real applications. In this work, we study a lightweight defense method, dubbed Random Noise Defense (RND), which adds proper Gaussian noise to each query. We conduct a theoretical analysis of the effectiveness of RND against query-based black-box attacks and the corresponding adaptive attacks. Our theoretical results reveal that the defense performance of RND is determined by the magnitude ratio between the noise induced by RND and the noise added by the attackers for gradient estimation or local search. A large magnitude ratio leads to stronger defense performance of RND, and it is also critical for mitigating adaptive attacks. Based on our analysis, we further propose to combine RND with a plausible Gaussian augmentation Fine-tuning (RND-GF). It enables RND to add larger noise to each query while maintaining the clean accuracy, yielding a better trade-off between clean accuracy and defense performance. Additionally, RND can be flexibly combined with existing defense methods, such as adversarial training (AT), to further boost adversarial robustness. Extensive experiments on CIFAR-10 and ImageNet verify our theoretical findings and the effectiveness of RND and RND-GF.
accept
This paper shows that adding a small amount of random noise to the output of a neural network can prevent current black-box attacks. The theory is convincing, the experiments cover a wide range of attacks, and the reviewers are satisfied by the author response. Even though there are still some questions about the generality of the proposed defense and if it could be evaded by stronger attacks, the proposal is interesting and will motivate future attack research to improve on randomized defenses.
train
[ "y3jw7L9Cs36", "yVTkGzDHgdz", "astMl8Gug65", "FlmM86VImsP", "fxhgZM18jbo", "a-eL3UC5Rst", "1WbDKoKZYaL", "Mtewt98hRkV", "v7kDjw-kEw", "_U6PfMPn-Jh", "vPvfNTeP_CH", "ei0aM5opb3w" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer RFP8,\n\nWe sincerely hope our posted responses can help to address your concerns. Although the discussion phase is due soon, we are still very glad to provide further responses to any remaining concern. That will be greatly appreciated. \n\nSincerely,\n\nAuthors ", " Dear Reviewer opkE,\n\nThanks...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "a-eL3UC5Rst", "astMl8Gug65", "1WbDKoKZYaL", "a-eL3UC5Rst", "ei0aM5opb3w", "_U6PfMPn-Jh", "v7kDjw-kEw", "vPvfNTeP_CH", "nips_2021_ZPSD4xZc6j8", "nips_2021_ZPSD4xZc6j8", "nips_2021_ZPSD4xZc6j8", "nips_2021_ZPSD4xZc6j8" ]
nips_2021_HEzEy_V7LF3
SADGA: Structure-Aware Dual Graph Aggregation Network for Text-to-SQL
The Text-to-SQL task, which aims to translate natural language questions into SQL queries, has drawn much attention recently. One of the most challenging problems of Text-to-SQL is how to generalize the trained model to unseen database schemas, also known as the cross-domain Text-to-SQL task. The key lies in the generalizability of (i) the encoding method to model the question and the database schema and (ii) the question-schema linking method to learn the mapping between words in the question and tables/columns in the database schema. Focusing on the above two key issues, we propose a \emph{Structure-Aware Dual Graph Aggregation Network} (SADGA) for cross-domain Text-to-SQL. In SADGA, we adopt the graph structure to provide a unified encoding model for both the natural language question and the database schema. Based on the proposed unified modeling, we further devise a structure-aware aggregation method to learn the mapping between the question-graph and the schema-graph. The structure-aware aggregation method features \emph{Global Graph Linking}, \emph{Local Graph Linking} and a \emph{Dual-Graph Aggregation Mechanism}. We not only study the performance of our proposal empirically but also achieve 3rd place on the challenging Text-to-SQL benchmark Spider at the time of writing.
accept
This paper proposes SADGA, a method for improving cross-domain Text-to-SQL, where the model has to generalize to unseen database schemas. The core method parses both the natural language query and the database schema into graphs and then computes an alignment between the two graphs, which is then passed to the module producing the SQL output. The assumption is that learning the alignment between the query graph and the database schema is more transferable/robust across novel database schemas. The paper shows improvements on the cross-domain Text-to-SQL benchmark (Spider). -- Reviewers are positive about this paper. Although query and schema graphs have already been used in the literature for Text-to-SQL, the central idea of aligning query and schema graphs via local-global aggregation is considered novel and promising. Overall, the model achieves strong empirical performance. One reviewer noted how SADGA outperforms other models without any pre-trained component, which strengthens the claim of the paper. The main criticisms revolve around clarity of exposition and lack of interpretation. The authors promised in the rebuttal to add interpretable examples of how SADGA solves the cross-domain Text-to-SQL benchmark. I additionally suggest that the authors strengthen the main motivation (e.g., that matching in graph space leads to more transferable model behaviour). Figure 1 needs some work (e.g., a longer and more explanatory caption), and the intro needs proof-reading. The authors responded to reviewers' concerns about clarity issues, and I believe all the concerns can be easily addressed in the revised version. Overall, this paper provides a well-executed empirical contribution to the cross-domain Text-to-SQL problem with positive empirical results. The graph-matching architecture proposed here can possibly inform and be extended to other tasks. Therefore, I recommend this paper for acceptance.
test
[ "NX16IDOZb1", "ZEfuwxTY2p", "vLzQc5q6Rm_", "pawZNUaavx", "43UI8jTI3i", "C0CRFpkfpiT", "ZN-bNtSdlNL", "DNgooKDi-0e", "hah8UY6TjZo", "OsTUlcmY0Iz", "J5xU5_eYcSg", "SoWXwum4o5u", "YLGjL991UWg" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nWe are grateful for your careful reading and consideration of our response! Thanks for your patience.\n\nBest regards, \nAuthors of Submission 7668", " Thank you for the response. My score remains at 7.", " Thank you for the detailed response to my questions/suggestions - they make sense to...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nips_2021_HEzEy_V7LF3", "OsTUlcmY0Iz", "DNgooKDi-0e", "hah8UY6TjZo", "nips_2021_HEzEy_V7LF3", "ZN-bNtSdlNL", "43UI8jTI3i", "SoWXwum4o5u", "YLGjL991UWg", "J5xU5_eYcSg", "nips_2021_HEzEy_V7LF3", "nips_2021_HEzEy_V7LF3", "nips_2021_HEzEy_V7LF3" ]
nips_2021_MlFcgL2AP4d
Near-Optimal Offline Reinforcement Learning via Double Variance Reduction
Ming Yin, Yu Bai, Yu-Xiang Wang
accept
The submission proposes OPDVR for offline policy optimization, and sample complexities for the finite-horizon case and the discounted infinite-horizon case are presented. Though some concerns on the practical side were raised, all reviewers agree that the paper makes a great theoretical contribution. Thus I recommend acceptance, and also encourage the authors to include some practical results in the camera-ready version.
train
[ "f_mTOibnfAR", "pVepF5mCLAU", "GQQ2kGEE8B", "IcL7ui2BEor", "UJdXoVW9jCF", "S9-kmCLpBkF", "X7hYGdtSBQ_" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarifications. ", " We appreciate the reviewer for the encouraging comments and the positive feedback! The followings are our detailed responses.\n\n----- \"theoretical work is to inspire practitioners ...\" reviewer 9aNL:\"why someone should care about improving sample complexity by a factor of...
[ -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, 3, 4, 2 ]
[ "pVepF5mCLAU", "X7hYGdtSBQ_", "S9-kmCLpBkF", "UJdXoVW9jCF", "nips_2021_MlFcgL2AP4d", "nips_2021_MlFcgL2AP4d", "nips_2021_MlFcgL2AP4d" ]
nips_2021_CVmU4xzMIFo
Joint Modeling of Visual Objects and Relations for Scene Graph Generation
An in-depth scene understanding usually requires recognizing all the objects and their relations in an image, encoded as a scene graph. Most existing approaches for scene graph generation first independently recognize each object and then predict their relations independently. Though these approaches are very efficient, they ignore the dependency between different objects as well as between their relations. In this paper, we propose a principled approach to jointly predict the entire scene graph by fully capturing the dependency between different objects and between their relations. Specifically, we establish a unified conditional random field (CRF) to model the joint distribution of all the objects and their relations in a scene graph. We carefully design the potential functions to enable relational reasoning among different objects according to knowledge graph embedding methods. We further propose an efficient and effective algorithm for inference based on mean-field variational inference, in which we first provide a warm initialization by independently predicting the objects and their relations according to the current model, followed by a few iterations of relational reasoning. Experimental results on both the relationship retrieval and zero-shot relationship retrieval tasks prove the efficiency and efficacy of our proposed approach.
accept
Thank you for submitting your work to NeurIPS. The paper introduces a supervised approach to scene graph generation from images: objects and the relations among them are captured using a CRF conditioned on a (deep) object detector and using, e.g., distance-based learnable prototypes. Overall, the rolling discussion helped to clarify many of the issues raised by the reviewers. This is a solid, if somewhat incremental, paper that, to its great credit, shows the value of hybrid methods for challenging AI tasks. In any case, please incorporate the feedback from the rolling discussion into the final version.
train
[ "jABZ-Rekwu6", "1jqVImZo2ss", "RsYHRW36aDc", "hHWS9tW4OZq", "UQ5ymO_2u6", "yPgkcx4RdNo", "x_mGr1jRn5i", "SAzF0zZlvom", "4fEnKBK4HN", "_ux_mx0fKc0", "Tx5fLOU0pFg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "* The paper proposes a method for generating scene graphs of images by modeling the objects and relationships with a Conditional Random Field. The paper claims that other existing methods do not model the dependency between all the objects and relations in an image. The proposed method is meant for modeling all th...
[ 7, -1, -1, 6, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_CVmU4xzMIFo", "UQ5ymO_2u6", "SAzF0zZlvom", "nips_2021_CVmU4xzMIFo", "4fEnKBK4HN", "_ux_mx0fKc0", "Tx5fLOU0pFg", "hHWS9tW4OZq", "jABZ-Rekwu6", "nips_2021_CVmU4xzMIFo", "nips_2021_CVmU4xzMIFo" ]
nips_2021_ot2ORiBqTa1
Going Beyond Linear Transformers with Recurrent Fast Weight Programmers
Transformers with linearised attention (''linear Transformers'') have demonstrated the practical scalability and effectiveness of outer product-based Fast Weight Programmers (FWPs) from the '90s. However, the original FWP formulation is more general than the one of linear Transformers: a slow neural network (NN) continually reprograms the weights of a fast NN with arbitrary architecture. In existing linear Transformers, both NNs are feedforward and consist of a single layer. Here we explore new variations by adding recurrence to the slow and fast nets. We evaluate our novel recurrent FWPs (RFWPs) on two synthetic algorithmic tasks (code execution and sequential ListOps), Wikitext-103 language models, and on the Atari 2600 2D game environment. Our models exhibit properties of Transformers and RNNs. In the reinforcement learning setting, we report large improvements over LSTM in several Atari games. Our code is public.
accept
This paper uses the connection between linear attention Transformers and fast weight programmers to introduce a novel Transformer architecture, recurrent fast weight programmers, which includes recurrence in the fast weights. The authors demonstrate competitive performance on three distinct benchmarks and provide interesting conceptual insights into the functioning of their new architecture. The discussion focussed on the role of the recurrence (“Why is it needed?”) and on novel baselines/ablations. In particular, the authors provided new experimental evidence showing that the recurrence indeed helps, even when compared to a feedforward NN with the same number of parameters. The clarifications and new evidence provided made one reviewer increase their score, while all other reviews remained unchanged. Given the positive reviews, the discussion, and the overall contributions of the paper, I recommend that this paper be accepted.
train
[ "tQ4pHf4BGgh", "fXRIanPDsrL", "-ewj0kR1NCR", "Lw3aJLbEWp0", "cmoAJikw6NA", "VJuFJ1gqO_5", "PeEvlZVeZv", "zgyjCSb18WI", "CqN8ERiFD2z", "zu72BFcHFZB", "t2U1tBWWu7", "aWjlQvZHIec", "bDt3EhBz4B-", "RvTjehwf0m" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " This is just a friendly reminder about the NeurIPS rebuttal deadline.\nPlease let us know if you have any remaining questions. Thank you!", " This is just a friendly reminder about the NeurIPS rebuttal deadline.\nPlease let us know if you have any remaining questions. Thank you!", " Thank you very much for yo...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "zgyjCSb18WI", "t2U1tBWWu7", "cmoAJikw6NA", "nips_2021_ot2ORiBqTa1", "CqN8ERiFD2z", "PeEvlZVeZv", "RvTjehwf0m", "bDt3EhBz4B-", "zu72BFcHFZB", "Lw3aJLbEWp0", "aWjlQvZHIec", "nips_2021_ot2ORiBqTa1", "nips_2021_ot2ORiBqTa1", "nips_2021_ot2ORiBqTa1" ]
nips_2021_MSr3u_FCRW
Reinforced Few-Shot Acquisition Function Learning for Bayesian Optimization
Bayesian optimization (BO) conventionally relies on handcrafted acquisition functions (AFs) to sequentially determine the sample points. However, it has been widely observed in practice that the best-performing AF in terms of regret can vary significantly under different types of black-box functions. It has remained a challenge to design one AF that can attain the best performance over a wide variety of black-box functions. This paper aims to attack this challenge through the perspective of reinforced few-shot AF learning (FSAF). Specifically, we first connect the notion of AFs with Q-functions and view a deep Q-network (DQN) as a surrogate differentiable AF. While it serves as a natural idea to combine DQN and an existing few-shot learning method, we identify that such a direct combination does not perform well due to severe overfitting, which is particularly critical in BO due to the need of a versatile sampling policy. To address this, we present a Bayesian variant of DQN with the following three features: (i) It learns a distribution of Q-networks as AFs based on the Kullback-Leibler regularization framework. This inherently provides the uncertainty required in sampling for BO and mitigates overfitting. (ii) For the prior of the Bayesian DQN, we propose to use a demo policy induced by an off-the-shelf AF for better training stability. (iii) On the meta-level, we leverage the meta-loss of Bayesian model-agnostic meta-learning, which serves as a natural companion to the proposed FSAF. Moreover, with the proper design of the Q-networks, FSAF is general-purpose in that it is agnostic to the dimension and the cardinality of the input domain. Through extensive experiments, we demonstrate that the FSAF achieves comparable or better regrets than the state-of-the-art benchmarks on a wide variety of synthetic and real-world test functions.
accept
The paper proposes a novel approach that combines meta-learning and reinforcement learning for designing few-shot acquisition functions for Bayesian Optimization. All reviewers find the problem setup interesting and appreciate the novelty and applicability of the proposed algorithm. After a few rounds of interaction during the discussion phase, the reviewers are convinced of the empirical significance of the proposed work. When preparing a revision, the authors are strongly encouraged to take into account the reviews and accommodate the changes reflected in the author discussions---in particular, to further strengthen the empirical analysis, incorporate new references, clarify the technical challenge, and elaborate on the details of the FSAF algorithm and its application scope.
val
[ "LcCrxsiHKWH", "WI8IMa7gf3G", "Wjk5kfJBmaV", "CFoCpNlO8MG", "nmzF8Ec5VMJ", "HS-KmYcpTNT", "NHN1599KmoE", "wWP9iNoJ27L", "j-KQfM14yX", "sHZ7zMnwH4V", "4Dp3YPO4FEu", "S9tb9kt8lKB", "qJCvXUiU23P", "TeOAnhAJpvE", "4S-zDJm-ag0" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We truly appreciate your valuable feedback and thank you for taking the time to participate in the discussion. We will incorporate your suggestions into the final version of the paper.", " Thanks for the additional experiment results. It is nice to see that FSAF still remains its advantage after including the m...
[ -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 2, 5, 4 ]
[ "Wjk5kfJBmaV", "CFoCpNlO8MG", "nmzF8Ec5VMJ", "HS-KmYcpTNT", "HS-KmYcpTNT", "wWP9iNoJ27L", "nips_2021_MSr3u_FCRW", "4S-zDJm-ag0", "TeOAnhAJpvE", "NHN1599KmoE", "qJCvXUiU23P", "nips_2021_MSr3u_FCRW", "nips_2021_MSr3u_FCRW", "nips_2021_MSr3u_FCRW", "nips_2021_MSr3u_FCRW" ]
nips_2021_l4DQWgjbZg
Forster Decomposition and Learning Halfspaces with Noise
A Forster transform is an operation that turns a multivariate distribution into one with good anti-concentration properties. While a Forster transform does not always exist, we show that any distribution can be efficiently decomposed as a disjoint mixture of few distributions for which a Forster transform exists and can be computed efficiently. As the main application of this result, we obtain the first polynomial-time algorithm for distribution-independent PAC learning of halfspaces in the Massart noise model with strongly polynomial sample complexity, i.e., independent of the bit complexity of the examples. Previous algorithms for this learning problem incurred sample complexity scaling polynomially with the bit complexity, even though such a dependence is not information-theoretically necessary.
accept
This paper proposes Forster decomposition, a new analysis tool for learning linear classifiers, and applies it to derive an algorithm that learns halfspaces with Massart noise without the dependence on bit complexity (which was necessary for previous algorithms). The contribution is technically very strong and addresses one of the classical problems in ML. While it has downsides, such as being technically rather complex and likely to be fully appreciated only by a handful of experts, I recommend acceptance.
test
[ "5DMQPBXCc8", "5cmKItJZ5R", "twM9mSqKea8", "cgymNgACgw4", "VL-ac_f9QhO", "cowIcUabhEa", "gWbXZI3JigJ", "1-3PJZIrJyO", "35t8aNABfI", "Rn0kZU4HJtC", "Fz5J9Hv-jV_" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. I will be leaving my evaluation unchanged.", " Thanks for the clarification regarding [HM13]/[AKS20]! And the conceptual contribution of identifying radial isotropic position as the right pre-processing tool for this problem is indeed quite nice. After reading the other reviews, I think...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "twM9mSqKea8", "cgymNgACgw4", "Fz5J9Hv-jV_", "Rn0kZU4HJtC", "35t8aNABfI", "1-3PJZIrJyO", "nips_2021_l4DQWgjbZg", "nips_2021_l4DQWgjbZg", "nips_2021_l4DQWgjbZg", "nips_2021_l4DQWgjbZg", "nips_2021_l4DQWgjbZg" ]
nips_2021_G7kpTaNrfe
Cortico-cerebellar networks as decoupling neural interfaces
The brain solves the credit assignment problem remarkably well. For credit to be assigned across neural networks they must, in principle, wait for specific neural computations to finish. How the brain deals with this inherent locking problem has remained unclear. Deep learning methods suffer from similar locking constraints in both the forward and feedback phases. Recently, decoupled neural interfaces (DNIs) were introduced as a solution to the forward and feedback locking problems in deep networks. Here we propose that a specialised brain region, the cerebellum, helps the cerebral cortex solve similar locking problems akin to DNIs. To demonstrate the potential of this framework we introduce a systems-level model in which a recurrent cortical network receives online temporal feedback predictions from a cerebellar module. We test this cortico-cerebellar recurrent neural network (ccRNN) model on a number of sensorimotor (line and digit drawing) and cognitive tasks (pattern recognition and caption generation) that have been shown to be cerebellar-dependent. In all tasks, we observe that ccRNNs facilitate learning while reducing ataxia-like behaviours, consistent with classical experimental observations. Moreover, our model also explains recent behavioural and neuronal observations while making several testable predictions across multiple levels. Overall, our work offers a novel perspective on the cerebellum as a brain-wide decoupling machine for efficient credit assignment and opens a new avenue between deep learning and neuroscience.
accept
This paper proposes that the previously published synthetic gradient approach, decoupled neural interfaces (Jaderberg et al. 2017), may serve as a model of cortico-cerebellar learning. The reviewers found the proposed connection between DNI and cortico-cerebellar learning to be a promising lead for future biological investigation. Since understanding cerebellar function and its role in human cognition is a deep and important problem, the initial proposal of this possible connection to a recent AI approach is considered a valuable contribution. While this connection is not validated in this work, the connection is considered in light of what is already generally known about the cerebellum, and the work explores the consequences of this comparison. The results of the paper involve comparisons between a baseline RNN model which uses truncated BPTT and a ccRNN model that learns using the DNI mechanism. While the DNI mechanism is not new, the tasks chosen are ones that are supposed to further illustrate the implications of the connection to cortico-cerebellar learning. Personally, I found the submission to be creative in identifying the potential relationship between the existing ML model and the architecture of the brain. I would emphasize that the ML portion is not new, so the contribution of this paper is specifically the proposal of how the DNI approach might serve as an analogy to cortico-cerebellar learning. I initially found some details, especially around the presentation of the results, a bit unclear. However, the authors and reviewers engaged during the discussion phase in a way that I believe will make the final version of the paper sufficiently clear. Multiple reviewers raised their scores during the exchange. I'm willing to endorse the reviewer consensus that this paper be accepted.
train
[ "dm-9wm3R1XB", "IcXSZcC68j", "SmYCG9TfW2-", "Xwm42pQJM84", "AIDMtL8Rvpc", "wcnxozMOwlB", "QSMh2uq0cK", "yqnKzZouvO5", "DqjpM3LUyTs", "OP2Y9M3oj4", "t_doAjuhphG", "iIEDybPv_5B", "NQTeHJLORz", "tVg__DBpN4o" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We would also like to thank you for the constructive review interaction and reading our (long) rebuttal.\n\nThe feedback locking problem in the brain is also something that we have learnt to appreciate throughout this project. Glad that you can also appreciate this as an interesting problem.", "During a network...
[ -1, 8, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "SmYCG9TfW2-", "nips_2021_G7kpTaNrfe", "yqnKzZouvO5", "nips_2021_G7kpTaNrfe", "OP2Y9M3oj4", "QSMh2uq0cK", "t_doAjuhphG", "DqjpM3LUyTs", "IcXSZcC68j", "Xwm42pQJM84", "tVg__DBpN4o", "NQTeHJLORz", "nips_2021_G7kpTaNrfe", "nips_2021_G7kpTaNrfe" ]
nips_2021_AWMU04iXQ08
To The Point: Correspondence-driven monocular 3D category reconstruction
We present To The Point (TTP), a method for reconstructing 3D objects from a single image using 2D to 3D correspondences given only foreground masks, a category specific template and optionally sparse keypoints for supervision. We recover a 3D shape from a 2D image by first regressing the 2D positions corresponding to the 3D template vertices and then jointly estimating a rigid camera transform and non-rigid template deformation that optimally explain the 2D positions through the 3D shape projection. By relying on correspondences we use a simple per-sample optimization problem to replace CNN-based regression of camera pose and non-rigid deformation and thereby obtain substantially more accurate 3D reconstructions. We treat this optimization as a differentiable layer and train the whole system in an end-to-end manner using geometry-driven losses. We report systematic quantitative improvements on multiple categories and provide qualitative results comprising diverse shape, poses and texture prediction examples.
accept
The paper presents a new, iterative optimization procedure in the reconstruction branch of the conventional 3D reconstruction framework, instead of training networks to predict shape and pose directly as in UMR and ACSM. In general, the reviewers were a bit mixed in their initial reviews. After the rebuttal and the discussion phase, R1 increased their rating. Please include many of the rebuttal points in the final version, especially the runtime analysis and the comparison with UMR.
train
[ "rbvzn0JqtCR", "ExmjlIlsFrU", "7QGFmV4bSFS", "i3A_W7t7OBN", "4_6DoSLy8D5", "hvyWc9Ywm-", "NxojWfiTbb", "AiyGzTX7TLw" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper tackles the problem of learning single-image 3D object reconstruction from single-view collections. The proposed method couples a correspondence prediction network with an iterative optimization procedure that solves shape and pose based on the predicted correspondences, starting with a category templat...
[ 6, 6, 8, -1, -1, -1, -1, 6 ]
[ 4, 3, 4, -1, -1, -1, -1, 4 ]
[ "nips_2021_AWMU04iXQ08", "nips_2021_AWMU04iXQ08", "nips_2021_AWMU04iXQ08", "rbvzn0JqtCR", "AiyGzTX7TLw", "7QGFmV4bSFS", "ExmjlIlsFrU", "nips_2021_AWMU04iXQ08" ]
nips_2021_aXbuWbta0V8
Proper Value Equivalence
Christopher Grimm, Andre Barreto, Greg Farquhar, David Silver, Satinder Singh
accept
All reviewers are positive about this work, and they believe the paper provides novel insights into the value equivalence principle. Some issues are brought up in the reviews, most of which are satisfactorily answered in the authors' responses (we had private discussions). Please consult their reviews for the details. Some of them are: - The paper is sometimes too rushed in its explanations. Perhaps the authors can move some of the derivations (e.g., Eqs. 13-18) to an appendix, so that they can expand their discussions elsewhere. - Some reviewers found the details insufficient for reproducibility. - A clearer discussion of the relation between bisimulation and value equivalence would be helpful. The revisions required to improve the paper are minor enough that another round of reviews is not required. Therefore, I recommend the *acceptance* of this paper. === In addition to these comments from reviewers, I have some questions and comments that I would appreciate the authors addressing in their revisions. These are not critical, so they are inconsequential to the decision. (1) The loss functions in Eqs. (8) and (9) have a summation over policies (and value functions). But (P)VE requires an exact match for all policies (and values, for VE). Requiring an exact match suggests that we need a loss that encourages the error to be small *uniformly* over values and policies. This means that we may need to consider the supremum over $v$ and $\pi$, instead of a summation over them. The summation can be just too relaxed to impose the required equivalence. This might be more of an issue when the value and policy space is very large, e.g., with an infinite number of elements. In that case, a large error in a small (say, zero-measure) subset of the value/policy space does not affect the loss at all, but it violates value equivalence. If we agree that this is the right/better way to write the loss function, then the losses (8) and (9) might be written as (8') $\sup_{\pi \in \Pi} \sup_{v \in V} || T_\pi^k v - \tilde{T}_\pi^k v||$ and (9') $\sup_{\pi \in \Pi} || v_\pi - \tilde{T}_\pi^k v||$. These show some similarities with VAML. VAML uses the Bellman optimality operator, but we can easily have a version for the Bellman operator of a policy $\pi$. In that case, with k = 1, the inner optimization $\sup_{v \in V} || T_\pi^k v - \tilde{T}_\pi^k v||$ would be the original VAML's, and for k > 1, it would be a multi-step extension of VAML (which was not introduced in that paper, though). (2) Is there a typo in Proposition 1(ii)? Currently, we have $\mathbb{M}^k$ there, but shouldn't it be $\mathcal{M}^k$? The same comment applies to Proposition 3.
test
[ "WQBtitsik-", "mKLa02AGwNB", "R2p2Pn10ZuN", "z3fByVa0kf0", "UOo6v8So6Xu", "lnwhVNhaR6U", "c0CdINWTOtq", "1D3Y2E0ycAX", "63_9dba6hWw", "uSgfbxlqHmg" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Authors,\nThank you for the reply. \nI believe you have addressed most of my and other reviewers' concerns.\nI keep with my score and recommend accepting the paper.", " Dear authors, thanks for engaging so actively with our feedback. I think you covered all the major points raised by me and the other revie...
[ -1, -1, -1, -1, -1, -1, 8, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "UOo6v8So6Xu", "lnwhVNhaR6U", "uSgfbxlqHmg", "63_9dba6hWw", "1D3Y2E0ycAX", "c0CdINWTOtq", "nips_2021_aXbuWbta0V8", "nips_2021_aXbuWbta0V8", "nips_2021_aXbuWbta0V8", "nips_2021_aXbuWbta0V8" ]
nips_2021__A4-JP8d_f
Challenges and Opportunities in High Dimensional Variational Inference
Current black-box variational inference (BBVI) methods require the user to make numerous design choices – such as the selection of variational objective and approximating family – yet there is little principled guidance on how to do so. We develop a conceptual framework and set of experimental tools to understand the effects of these choices, which we leverage to propose best practices for maximizing posterior approximation accuracy. Our approach is based on studying the pre-asymptotic tail behavior of the density ratios between the joint distribution and the variational approximation, then exploiting insights and tools from the importance sampling literature. Our framework and supporting experiments help to distinguish between the behavior of BBVI methods for approximating low-dimensional versus moderate-to-high-dimensional posteriors. In the latter case, we show that mass-covering variational objectives are difficult to optimize and do not improve accuracy, but flexible variational families can improve accuracy and the effectiveness of importance sampling – at the cost of additional optimization challenges. Therefore, for moderate-to-high-dimensional posteriors we recommend using the (mode-seeking) exclusive KL divergence since it is the easiest to optimize, and improving the variational family or using model parameter transformations to make the posterior and optimal variational approximation more similar. On the other hand, in low-dimensional settings, we show that heavy-tailed variational families and mass-covering divergences are effective and can increase the chances that the approximation can be improved by importance sampling.
accept
This paper is concerned with some broad questions in variational inference, namely what variational families should be used (e.g. light-tailed, heavy-tailed, flows), what divergence should be optimized (e.g. inclusive KL, exclusive KL), and how performance can be diagnosed after inference is complete. On a strict technical level, this paper appears to contain few contributions. Nevertheless, reviewers were overall positive about the paper's attempt to establish and support some grand (albeit somewhat informal) themes. These are 1) that mode-seeking divergences are easier to optimize than mode-spanning divergences, 2) that this difficulty can be understood by considering the polynomial dependence of the divergence on importance weights, and 3) that one can fit a generalized Pareto distribution and use the k statistic to diagnose inference success. One weakness that several reviewers agreed on was an inadequate discussion of prior work. The paper would be stronger with a better review of prior work related to all three of the themes here, i.e., on the difficulty of optimizing different divergences, on using different variational families for VI, and on inference diagnostics. There are some citations now, but with somewhat cursory discussions. In particular, the paper would be stronger if it were self-contained so that someone not familiar with the PSIS framework could follow it. The authors have agreed to expand their discussion of prior work. Reviewers also had some specific comments about the experimental results (both what was done and the presentation). The authors have also been receptive to this feedback.
train
[ "LYDPySY16cM", "ReelISzRX_", "zqw3284Ysb", "Ms_2rnwN8F", "L3uYsU1qmN", "joh-6lJZ5P", "WghxXEImqFc", "QF8BBwJalXQ", "xlvIB8xoNqq", "mVvHe6sFILu", "A3ld2tFGJo4", "VEJQlKNRbGD" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Apologies for the oversight! We will be sure expand our discussion of prior work. While we borrow some ideas from [14,30], the goal of our paper is quite different: to understand the trade-offs and limitations of different choices for variational objective and approximating family. That is, we aim to provide guid...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "zqw3284Ysb", "L3uYsU1qmN", "QF8BBwJalXQ", "nips_2021__A4-JP8d_f", "joh-6lJZ5P", "Ms_2rnwN8F", "A3ld2tFGJo4", "mVvHe6sFILu", "VEJQlKNRbGD", "nips_2021__A4-JP8d_f", "nips_2021__A4-JP8d_f", "nips_2021__A4-JP8d_f" ]
nips_2021_9DlCh34E1bN
On the Expressivity of Markov Reward
Reward is the driving force for reinforcement-learning agents. This paper is dedicated to understanding the expressivity of reward as a way to capture tasks that we would want an agent to perform. We frame this study around three new abstract notions of “task” that might be desirable: (1) a set of acceptable behaviors, (2) a partial ordering over behaviors, or (3) a partial ordering over trajectories. Our main results prove that while reward can express many of these tasks, there exist instances of each task type that no Markov reward function can capture. We then provide a set of polynomial-time algorithms that construct a Markov reward function that allows an agent to optimize tasks of each of these three types, and correctly determine when no such reward function exists. We conclude with an empirical study that corroborates and illustrates our theoretical findings.
accept
Reviewers agree that this paper is interesting, relevant, novel, clear, well-written, and technically sound. I congratulate the authors for their work and I invite them to modify their paper following the reviewers' suggestions.
train
[ "DY045XT4edv", "qzOhZTazjd", "On20e7Kx5Up", "Yg_w7jIkg4b", "3HpYF701qIG", "Q9lsrGHwJFF", "778cq7LZTi3", "NXxaNNzo4M", "DofPZ5mkr9E", "Fm7h7sRnU_Y" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you to the authors for your response. As stated in the review, most of my comments were reflections and possible extensions. I am very happy with the draft, apart from minor changes for presentation and clarifications. ", "This paper studies the ability of Markovian reward functions to represent tasks def...
[ -1, 8, -1, 8, -1, -1, -1, -1, 7, 8 ]
[ -1, 3, -1, 4, -1, -1, -1, -1, 4, 3 ]
[ "3HpYF701qIG", "nips_2021_9DlCh34E1bN", "Q9lsrGHwJFF", "nips_2021_9DlCh34E1bN", "Fm7h7sRnU_Y", "qzOhZTazjd", "Yg_w7jIkg4b", "DofPZ5mkr9E", "nips_2021_9DlCh34E1bN", "nips_2021_9DlCh34E1bN" ]
nips_2021_PmJVah9D8B
One More Step Towards Reality: Cooperative Bandits with Imperfect Communication
The cooperative bandit problem is increasingly becoming relevant due to its applications in large-scale decision-making. However, most research for this problem focuses exclusively on the setting with perfect communication, whereas in most real-world distributed settings, communication is often over stochastic networks, with arbitrary corruptions and delays. In this paper, we study cooperative bandit learning under three typical real-world communication scenarios, namely, (a) message-passing over stochastic time-varying networks, (b) instantaneous reward-sharing over a network with random delays, and (c) message-passing with adversarially corrupted rewards, including byzantine communication. For each of these environments, we propose decentralized algorithms that achieve competitive performance, along with near-optimal guarantees on the incurred group regret as well. Furthermore, in the setting with perfect communication, we present an improved delayed-update algorithm that outperforms the existing state-of-the-art on various network topologies. Finally, we present tight network-dependent minimax lower bounds on the group regret. Our proposed algorithms are straightforward to implement and obtain competitive empirical performance.
accept
The paper investigates 3 scenarios in the multi-agent cooperative bandit model where the communication between agents is not perfect. In particular, the paper covers the cases where: (i) communication links fail; (ii) communications are delayed; and (iii) the communication messages are corrupted. For each of these cases, the authors propose a UCB-based algorithm that exploits the graph structure of communication channels, and show that the group regret is smaller than the sum of individual regrets (when agents choose not to collaborate at all), even in the case of imperfect communication. That is, it is better to form a grand coalition than to act independently. While the majority of the reviewers found the authors' responses sufficient and felt they clarified most of their concerns, the 3rd reviewer still argues that the results (especially Theorems 1 and 3) are rather incremental. Let me express my own opinion here: I respectfully disagree with this reviewer, as I think each of these problems is quite essential in itself and therefore worth investigating. The proof techniques might not be too difficult, but I also agree with one of the reviewers that this should be a positive thing, rather than a reason for rejection (if the motivation and positioning of the problem are well justified). Therefore, I would like to follow the opinion of the majority of the reviewers, that is, to accept this paper (albeit as a poster).
train
[ "8xlAt0Pt-9q", "roAGl4P2giZ", "svc7YeEOBPR", "xz5utHXKKra", "BTbZoj6Vj6", "5eRm394QYp0", "6nUw8lFCxuz", "24yepYrQWkZ", "OtFcUjVpi_I", "Q_3In2ddccB", "x-_UVl6nNvI", "5xPAHj0GGJc" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ " We thank the reviewer for the response.\n\n---**Results are novel and not incremental:** Here we consider three types of communication imperfections and we discuss the novelty and non incremental nature of results in each imperfection as follows. Despite the simplicity of our ideas, theoretical analysis poses sig...
[ -1, 5, -1, -1, 7, -1, 7, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1 ]
[ "roAGl4P2giZ", "nips_2021_PmJVah9D8B", "5eRm394QYp0", "24yepYrQWkZ", "nips_2021_PmJVah9D8B", "5xPAHj0GGJc", "nips_2021_PmJVah9D8B", "OtFcUjVpi_I", "6nUw8lFCxuz", "roAGl4P2giZ", "nips_2021_PmJVah9D8B", "BTbZoj6Vj6" ]
nips_2021_NHX9w7ex3fW
Multi-Agent Reinforcement Learning in Stochastic Networked Systems
We study multi-agent reinforcement learning (MARL) in a stochastic network of agents. The objective is to find localized policies that maximize the (discounted) global reward. In general, scalability is a challenge in this setting because the size of the global state/action space can be exponential in the number of agents. Scalable algorithms are only known in cases where dependencies are static, fixed and local, e.g., between neighbors in a fixed, time-invariant underlying graph. In this work, we propose a Scalable Actor Critic framework that applies in settings where the dependencies can be non-local and stochastic, and provide a finite-time error bound that shows how the convergence rate depends on the speed of information spread in the network. Additionally, as a byproduct of our analysis, we obtain novel finite-time convergence results for a general stochastic approximation scheme and for temporal difference learning with state aggregation, which apply beyond the setting of MARL in networked systems.
accept
While the initial reviews were mixed, the rebuttals helped the reviewers to better appreciate the work. The major issue raised by the reviewers concerns the organization of the work. All the reviewers feel that the paper should be re-organized, as the authors discuss in their rebuttals. I invite the authors to do that in the camera-ready version. A further, but secondary, issue concerns the lack of empirical results. My opinion is that these are not strictly necessary as the paper is mainly theoretical. However, having them in the paper would make it stronger. So, I invite the authors to add some empirical results in the final version of the paper.
train
[ "yqF0px6RPWy", "0w0jClf_lvC", "EqaY6FoeZDI", "IFWhCkD4nM1", "VRvhRuPLoNe", "pnpkQiofQt", "NIdjngQ6EKs", "Vi8zmTAKCVf", "SXJWV6ct1Gu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors propose a scalable actor-critic framework for a class of multi-agent reinforcement learning (MARL) problems with a stochastic and non-local network of agents. The authors provide a finite-time error bound for the algorithm and show the dependency of the convergence rate on the speed of i...
[ 7, 6, -1, 6, -1, -1, -1, -1, 7 ]
[ 4, 2, -1, 4, -1, -1, -1, -1, 3 ]
[ "nips_2021_NHX9w7ex3fW", "nips_2021_NHX9w7ex3fW", "pnpkQiofQt", "nips_2021_NHX9w7ex3fW", "yqF0px6RPWy", "IFWhCkD4nM1", "0w0jClf_lvC", "SXJWV6ct1Gu", "nips_2021_NHX9w7ex3fW" ]
nips_2021_4c1EiEvivpx
Neural Scene Flow Prior
Before the deep learning revolution, many perception algorithms were based on runtime optimization in conjunction with a strong prior/regularization penalty. A prime example of this in computer vision is optical and scene flow. Supervised learning has largely displaced the need for explicit regularization. Instead, supervised methods rely on large amounts of labeled data to capture prior statistics, which are not always readily available for many problems. Although optimization is employed to learn the neural network, at runtime the weights of this network are frozen. As a result, these learning solutions are domain-specific and do not generalize well to other statistically different scenarios. This paper revisits the scene flow problem with an approach that relies predominantly on runtime optimization and strong regularization. A central innovation here is the inclusion of a neural scene flow prior, which utilizes the architecture of neural networks as a new type of implicit regularizer. Unlike learning-based scene flow methods, optimization occurs at runtime, and our approach needs no offline datasets---making it ideal for deployment in new environments such as autonomous driving. We show that an architecture based exclusively on multilayer perceptrons (MLPs) can be used as a scene flow prior. Our method attains competitive---if not better---results on scene flow benchmarks. Also, our neural prior's implicit and continuous scene flow representation allows us to estimate dense long-term correspondences across a sequence of point clouds. The dense motion information is represented by scene flow fields where points can be propagated through time by integrating motion vectors. We demonstrate such a capability by accumulating a sequence of lidar point clouds.
accept
This interesting and well-written paper proposes to use neural network training at runtime (i.e., during inference) to fit a scene flow function g (a plain MLP) that maps 3D points from a point cloud at time t to time t'. The function is formulated and applied per point, as an alternative form of flow regularisation, meaning that the several thousand points in a typical self-driving car dataset are sufficient for it to converge. Strikingly, the method is very competitive with supervised methods (which are faster at runtime but need to be trained) as well as with ICP. After a lengthy discussion between the 4 reviewers and the authors (longer than the paper itself), the consensus was to accept the paper with scores 6, 7, 7, 7, once the specifics of the evaluations (training the supervised methods on 2k points vs. training on 8k points) were elucidated and the authors agreed to make some revisions to the manuscript. I personally commend all reviewers and authors for exemplary thoroughness and collaboration on this review.
train
[ "1HY8qA0wb74", "_UdGmjb-_Ct", "Ax03F218VHh", "KH1I2thSGUC", "fnwU3QbJfSX", "FyDZjgcWj4", "q4BBk-dV8T", "2yZ4bDEN1-O", "jeJco0J6Kbm", "jB0yQl7yipR", "c14erCMtflb", "IKOEad4FlHx", "tKQAWdQtclJ", "0qci8VVsLU", "MtvkZlmCOl", "xQ86zffs-cz", "IthKmarKnGo", "VkSKIU-Ucp" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "o...
[ "\nThis paper highlights the fact that deep learning has caused a paradigm shift in many computer vision applications from methods based on optimization with strong priors to ones that optimize feed-forward neural networks in an offline process. The paper argues that these feed-forward models are bad at generaliza...
[ 7, -1, -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_4c1EiEvivpx", "2yZ4bDEN1-O", "xQ86zffs-cz", "nips_2021_4c1EiEvivpx", "0qci8VVsLU", "nips_2021_4c1EiEvivpx", "jeJco0J6Kbm", "jB0yQl7yipR", "c14erCMtflb", "IthKmarKnGo", "IKOEad4FlHx", "tKQAWdQtclJ", "MtvkZlmCOl", "KH1I2thSGUC", "FyDZjgcWj4", "VkSKIU-Ucp", "1HY8qA0wb74", "...
nips_2021_-h99IwQN-f
The future is log-Gaussian: ResNets and their infinite-depth-and-width limit at initialization
Mufan Li, Mihai Nica, Dan Roy
accept
The paper studies the infinite width and depth limit of ReLU ResNets, in particular the limiting distribution of the outputs of the network, which, under certain conjectured assumptions, is claimed to be log-Gaussian. While the reviewers have mixed opinions, after an extensive discussion between authors and reviewers and among the reviewers themselves, most agree that the result is interesting and novel. While there are still some questions on technical points, in particular whether some of the conjectures hold true, in the interest of encouraging novelty and new insights, this meta-review recommends acceptance of the paper as a poster at the conference.
train
[ "ZnuYwLCBqrp", "2mMEK_v40RL", "ulHLiRseIG5", "IEmmKnJpT9x", "oLT2s8r6iaR", "HoHgMaaoPM", "plGaRHVf9Ro", "AgqB7qtxH65", "WUCmEfCA6E5", "oo0dKO7eocn", "edVHBzGKck", "1k8gq11knp", "XYZa0TYYcr3", "5YDdPxOndne", "irCvKSpSsB", "owWlHozzr8", "ARxHpOFIVq", "R8SUmOwQJEt", "HbyIYpOVE9m", ...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_r...
[ " Thank you for the timely and constructive response! We are glad to have the opportunity to engage with you while the window is still open. \n\n> Regarding the conjecture, while I think it would be a solid paper if the conjecture is resolved, I do not think it is a fair point to penalize the paper for not resolvin...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "2mMEK_v40RL", "ulHLiRseIG5", "UMjE4N8jBh", "HoHgMaaoPM", "nips_2021_-h99IwQN-f", "AgqB7qtxH65", "WUCmEfCA6E5", "WUCmEfCA6E5", "tLedChqAoID", "XYZa0TYYcr3", "oLT2s8r6iaR", "nips_2021_-h99IwQN-f", "R8SUmOwQJEt", "oLT2s8r6iaR", "1k8gq11knp", "UMjE4N8jBh", "oLT2s8r6iaR", "HbyIYpOVE9m"...
nips_2021_VJQMp5xu24
Grammar-Based Grounded Lexicon Learning
Jiayuan Mao, Haoyue Shi, Jiajun Wu, Roger Levy, Josh Tenenbaum
accept
This paper presents a novel CCG-based neuro-symbolic framework for grounded language tasks like VQA. Experiments are conducted on both VQA (CLEVR) and navigation tasks. Results demonstrate marked improvements in generalization to particularly challenging question types (e.g. counting in CLEVR). Overall, reviewers are borderline -- half in favor of acceptance, half weakly against. Several reviewers praised the paper's clarity and well-articulated motivations. Several also viewed the demonstrated improvements to generalization as impactful, and found the proposed training method (which helps address computational costs associated with latent derivations) to be a valuable contribution. However, reviewers also raised several important concerns. First, one reviewer raised concerns about the potential rigidity of the proposed formalism (e.g. that aspects of the grammar must be specified in advance) -- though the authors have pointed out how simple and generalizable (e.g. between VQA and navigation tasks) the pre-specified component is. Second, one reviewer was concerned that the main hypothesis of the paper -- that taking a lexicalist approach to grounded language tasks may offer benefits -- could be better motivated, analyzed, and justified. The same reviewer raised a broader but related point about the readability of the current draft. The draft, while clear, is quite dense and depends to some extent on strong familiarity with both deep learning and specific branches of linguistics, potentially limiting its audience and impact. Taking these points in balance, I lean towards acceptance. The paper does present a novel and well-motivated approach that may, apart from this specific application, have influence on others working with models that combine discrete syntactic formalisms with neural representations. The paper does demonstrate strong improvements in generalization to unseen question types, matching the initial motivation for the general shape of the approach. However, I strongly agree that final revisions should include a better survey of and introduction to the relevant linguistic concepts and, perhaps even more importantly, a more in-depth discussion of how these specific results relate to the lexicalist hypothesis, around which the framework is centered.
train
[ "4dYvhhiinx7", "rR6WVXty0W_", "AKEMDMWMwgs", "KyBe2JT3ss", "NbtgKgpfrel", "zvkYOy1-gMq", "lkb255u8IjE", "16t6VL7bs3g", "JvrTYfwwuS6", "0W4Cv2srZZk", "HZs_SPKAEYT", "GnmSHO1J6k", "8kSSi1iZcr-" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 45Px,\n\nThanks for your time and consideration. We deeply appreciate the points you raised about adding more background material, a more thorough and conceptual presentation of our datasets and results, and more discussion of representation and future directions. We are already incorporating these ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "8kSSi1iZcr-", "JvrTYfwwuS6", "KyBe2JT3ss", "HZs_SPKAEYT", "nips_2021_VJQMp5xu24", "8kSSi1iZcr-", "8kSSi1iZcr-", "GnmSHO1J6k", "0W4Cv2srZZk", "nips_2021_VJQMp5xu24", "nips_2021_VJQMp5xu24", "nips_2021_VJQMp5xu24", "nips_2021_VJQMp5xu24" ]
nips_2021_FYHktcK-7v
Distributed Deep Learning In Open Collaborations
Modern deep learning applications require increasingly more compute to train state-of-the-art models. To address this demand, large corporations and institutions use dedicated High-Performance Computing clusters, whose construction and maintenance are both environmentally costly and well beyond the budget of most organizations. As a result, some research directions become the exclusive domain of a few large industrial and even fewer academic actors. To alleviate this disparity, smaller groups may pool their computational resources and run collaborative experiments that benefit all participants. This paradigm, known as grid- or volunteer computing, has seen successful applications in numerous scientific areas. However, using this approach for machine learning is difficult due to high latency, asymmetric bandwidth, and several challenges unique to volunteer computing. In this work, we carefully analyze these constraints and propose a novel algorithmic framework designed specifically for collaborative training. We demonstrate the effectiveness of our approach for SwAV and ALBERT pretraining in realistic conditions and achieve performance comparable to traditional setups at a fraction of the cost. Finally, we provide a detailed report of successful collaborative language model pretraining with nearly 50 participants.
accept
Although there was a spread of opinions from the reviewers, they did all agree that the problem area addressed by the paper is both interesting and important. Indeed, finding ways to create more opportunities for those outside of select industry or academic groups to participate in massive-scale model training is important both from a research perspective and from a community perspective that takes the overall health of the field into account. Additionally, while the reviewers initially raised some important concerns, these have been well addressed by the author responses. In particular, the scaling results and the convergence analysis are both very helpful, and should be included in the final version in some way. One thing to note is that the spirit of reviewer HY7V's comment that the paper is trying to address many (important) issues is indeed a factor here, as evidenced by the repeated references to the (extensive) appendix. I recognize that the page limits of NeurIPS are a significant constraint, and that the "best" form of this paper is likely a journal article that incorporates the appendix information more fully into the main text and narrative. That said, taking all of the reviews and the ensuing discussion into account, I do think that this conference-paper version has significant merit and will spark interest and discussion within the field.
train
[ "uDAPxYb49Lg", "Jg5L2I_KCzB", "dqngogBwnz-", "BJ2-asZ7WT1", "6DQVCmbO7R-", "xdNeDVlm1UQ", "Zb7c8Y_8Jly", "P4XPyfXbmc7", "9AT4C6KVxIi", "o9-uvIdhjZ", "fG-zMRAYZ0N" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper discusses a distributed training setup where multiple small institutes/groups pool computational resources together for training ML models collaboratively. The paper focuses on two key problems: (1) maintaining consistent training outcomes under dynamic composition of participants. (2) determining the co...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, 7, 8 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_FYHktcK-7v", "nips_2021_FYHktcK-7v", "nips_2021_FYHktcK-7v", "fG-zMRAYZ0N", "o9-uvIdhjZ", "uDAPxYb49Lg", "Jg5L2I_KCzB", "Jg5L2I_KCzB", "uDAPxYb49Lg", "nips_2021_FYHktcK-7v", "nips_2021_FYHktcK-7v" ]
nips_2021_HiYDAwAGWud
Neural Ensemble Search for Uncertainty Estimation and Dataset Shift
Ensembles of neural networks achieve superior performance compared to standalone networks in terms of accuracy, uncertainty calibration and robustness to dataset shift. Deep ensembles, a state-of-the-art method for uncertainty estimation, only ensemble random initializations of a fixed architecture. Instead, we propose two methods for automatically constructing ensembles with varying architectures, which implicitly trade-off individual architectures’ strengths against the ensemble’s diversity and exploit architectural variation as a source of diversity. On a variety of classification tasks and modern architecture search spaces, we show that the resulting ensembles outperform deep ensembles not only in terms of accuracy but also uncertainty calibration and robustness to dataset shift. Our further analysis and ablation studies provide evidence of higher ensemble diversity due to architectural variation, resulting in ensembles that can outperform deep ensembles, even when having weaker average base learners. To foster reproducibility, our code is available: https://github.com/automl/nes
accept
This paper extends prior work on deep ensembles by introducing methods for constructing ensembles of varying architectures instead of relying on multiple random initializations of the same architecture. The methods introduced in the paper represent a novel combination of existing approaches applied specifically to developing architecturally diverse ensembles to improve uncertainty estimation. The writing is clear and the methods are technically correct. The authors present extensive experiments showing that the proposed method outperforms prior approaches. The reviewers had many questions and suggestions for the authors in their initial reviews. Following the discussion, the reviewers were in agreement that their primary questions had been adequately addressed and that the paper should be accepted. The authors need to be sure to include all of the discussed updates in the final version of the paper.
train
[ "c0kzoSU8OK", "ZuMelffnlY", "Sox7H5ilhh", "Qx8HOBJYjOM", "aii0bmOUfod", "m9xwOYz7DiX", "wFLEkbOYbKU", "y9nKPiREi4U", "kaIGZOp1_Q-", "0jA__dTZjPl", "yZ00svfljGO" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper proposes a novel approach for searching ensembles of neural networks to improve the quality of uncertainty estimation and performance improvement under the dataset shift. The main idea of the paper is to construct a pool of architectures, from which a new ensemble member is selected using a forward gre...
[ 7, 6, 6, -1, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_HiYDAwAGWud", "nips_2021_HiYDAwAGWud", "nips_2021_HiYDAwAGWud", "wFLEkbOYbKU", "nips_2021_HiYDAwAGWud", "nips_2021_HiYDAwAGWud", "yZ00svfljGO", "c0kzoSU8OK", "Sox7H5ilhh", "ZuMelffnlY", "aii0bmOUfod" ]
nips_2021_fhDSTihtiB6
Finding Bipartite Components in Hypergraphs
Hypergraphs are important objects to model ternary or higher-order relations of objects, and have a number of applications in analysing many complex datasets occurring in practice. In this work we study a new heat diffusion process in hypergraphs, and employ this process to design a polynomial-time algorithm that approximately finds bipartite components in a hypergraph. We theoretically prove the performance of our proposed algorithm, and compare it against the previous state-of-the-art through extensive experimental analysis on both synthetic and real-world datasets. We find that our new algorithm consistently and significantly outperforms the previous state-of-the-art across a wide range of hypergraphs.
accept
The paper provides a clean solution for a new problem, but the motivations and applications of the paper are not completely clear. The results are not straightforward. The applications and motivating examples presented in the paper, on the other hand, are not convincing. Given that the results are mathematically interesting and the area is of interest to the community, the paper could be accepted as a poster.
train
[ "CRDlILbu8NS", "_eZIAZaUF9", "9VmKdioGL52", "JOfBnNBsBF0", "5Iw9ggJ7z4V", "cddq43kLEf0" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed and helpful review. We have addressed your comments below and will be happy to continue the discussion if you have any remaining questions.\n\n### Why does the diffusion involve only the maximum and minimum values?\nThis is an excellent question and it warrants some discussion, particu...
[ -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, 3, 3, 4 ]
[ "cddq43kLEf0", "JOfBnNBsBF0", "5Iw9ggJ7z4V", "nips_2021_fhDSTihtiB6", "nips_2021_fhDSTihtiB6", "nips_2021_fhDSTihtiB6" ]
nips_2021_onyYGbBJ2Mh
Hit and Lead Discovery with Explorative RL and Fragment-based Molecule Generation
Recently, utilizing reinforcement learning (RL) to generate molecules with desired properties has been highlighted as a promising strategy for drug design. Molecular docking program -- a physical simulation that estimates protein-small molecule binding affinity -- can be an ideal reward scoring function for RL, as it is a straightforward proxy of the therapeutic potential. Still, two imminent challenges exist for this task. First, the models often fail to generate chemically realistic and pharmacochemically acceptable molecules. Second, the docking score optimization is a difficult exploration problem that involves many local optima and less smooth surface with respect to molecular structure. To tackle these challenges, we propose a novel RL framework that generates pharmacochemically acceptable molecules with large docking scores. Our method -- Fragment-based generative RL with Explorative Experience replay for Drug design (FREED) -- constrains the generated molecules to a realistic and qualified chemical space and effectively explores the space to find drugs by coupling our fragment-based generation method and a novel error-prioritized experience replay (PER). We also show that our model performs well on both de novo and scaffold-based schemes. Our model produces molecules of higher quality compared to existing methods while achieving state-of-the-art performance on two of three targets in terms of the docking scores of the generated molecules. We further show with ablation studies that our method, predictive error-PER (FREED(PE)), significantly improves the model performance.
accept
The submission has its merits but, even after carefully checking the author rebuttal, the reviewers unanimously agree that the submission is not strong enough for publication at NeurIPS. The main reasons for that are the lack of novelty, clarity, and supporting evidence.
train
[ "pAKBaYEy-jd", "T2UFbLsNK0I", "-6AOj-M3ZP6", "JoqsJ_AU1Sj", "Xx85gmwf1VP", "UVlq169ZDAv", "z8d6M6tcwJh", "E0DTvCOqNNA", "K4KNrzL-2bW", "YDKkDa7yi9s", "l1UOUi1ib0D", "LxQFGoqraHE", "1MTqgoKOcFm", "vmb-scxCxNG", "8-FQFuIjdfk", "HSeVl94Obra", "JM9OSk-YmB1" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their answers. I also appreciate the fact that they included a generative model in their baselines. Unfortunately, even though the additions and clarifications improve the paper, I still believe that it does not meet the NeurIPS bar mostly due to what I believe is a somehow l...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 3 ]
[ "E0DTvCOqNNA", "UVlq169ZDAv", "JoqsJ_AU1Sj", "1MTqgoKOcFm", "nips_2021_onyYGbBJ2Mh", "l1UOUi1ib0D", "K4KNrzL-2bW", "HSeVl94Obra", "JM9OSk-YmB1", "HSeVl94Obra", "Xx85gmwf1VP", "8-FQFuIjdfk", "vmb-scxCxNG", "nips_2021_onyYGbBJ2Mh", "nips_2021_onyYGbBJ2Mh", "nips_2021_onyYGbBJ2Mh", "nip...
nips_2021_NVpGLJUuPx5
Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent
Although the optimization objectives for learning neural networks are highly non-convex, gradient-based methods have been wildly successful at learning neural networks in practice. This juxtaposition has led to a number of recent studies on provable guarantees for neural networks trained by gradient descent. Unfortunately, the techniques in these works are often highly specific to the particular setup in each problem, making it difficult to generalize across different settings. To address this drawback in the literature, we propose a unified non-convex optimization framework for the analysis of neural network training. We introduce the notions of proxy convexity and proxy Polyak-Lojasiewicz (PL) inequalities, which are satisfied if the original objective function induces a proxy objective function that is implicitly minimized when using gradient methods. We show that stochastic gradient descent (SGD) on objectives satisfying proxy convexity or the proxy PL inequality leads to efficient guarantees for proxy objective functions. We further show that many existing guarantees for neural networks trained by gradient descent can be unified through proxy convexity and proxy PL inequalities.
accept
This paper introduces the notions of proxy convexity and proxy PL condition, to analyze convergence in optimization of non-convex functions. This is of particular interest in non-convex neural network training, for which a unified analysis is not yet available in the existing literature. The paper shows that these notions enable a unified analysis of convergence in various settings including the Neural Tangent Kernel (NTK), i.e., infinite-width regime, the fixed width regime in two-layer networks and learning a single ReLU neuron. Overall, the paper is clearly written and provides good progress towards this direction, although most of the presented convergence results can be established via other methods. The reviewers all agree that the paper presents a nice framework, and expressed minor concerns and suggestions. Please take into account the updated reviews when preparing the final version to accommodate the requested changes. Thank you for your submission to NeurIPS.
train
[ "Xq1OaYlzu7v", "cLSvt3Tfzh-", "YFSCNf7WKdH", "LLGHDlqOB60", "Rm-lY3WLq9a", "NNBpreHYmh9", "jCmmxbt_RMp", "9Fl0bBpjJ3b", "X1kDLDM33We" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for addressing my concerns. I have raised my score. ", "This papers introduces two notions, proxy convexity and proxy PL condition, in order to prove convergence results for non-convex functions, in particular for the analysis of neural network training. \n1. A function $f$ is $(g,h)$-proxy convex if $\\...
[ -1, 7, 7, -1, -1, -1, -1, 6, 7 ]
[ -1, 4, 3, -1, -1, -1, -1, 3, 4 ]
[ "NNBpreHYmh9", "nips_2021_NVpGLJUuPx5", "nips_2021_NVpGLJUuPx5", "X1kDLDM33We", "YFSCNf7WKdH", "cLSvt3Tfzh-", "9Fl0bBpjJ3b", "nips_2021_NVpGLJUuPx5", "nips_2021_NVpGLJUuPx5" ]
nips_2021_INBO6h9gtG
Covariance-Aware Private Mean Estimation Without Private Covariance Estimation
Gavin Brown, Marco Gaboardi, Adam Smith, Jonathan Ullman, Lydia Zakynthinou
accept
All reviewers agree that this paper provides a non-trivial advancement for the important problem of differentially private mean estimation. Reviewers 6nea, 8L9y, and ubCi found the proof techniques and algorithmic strategies insightful and novel. Reviewer ubCi found the assumptions underlying this paper to be significantly more realistic than prior work. The only complaint about the paper is that the proposed algorithms have exponential running time, but reviewers 8L9y and ubCi feel that these algorithms could be the starting point for more practical algorithms. I therefore recommend that this paper be accepted.
train
[ "PwC4lwvUDr", "MZ_LV_-Dv1f", "CROUCPP6Z59", "PBRZRVmZ1L", "G7rxjnadv4M" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "This paper investigates mean estimation of high dimensional Gaussian (and sub-Gaussian) distributions under the constraint of differential privacy (DP). More specifically, the authors aim at privately estimating the mean of a high-dimensional Gaussian in the Mahalanobis distance (w.r.t. to the covariance matrix). ...
[ 8, 7, -1, 7, 8 ]
[ 4, 3, -1, 4, 3 ]
[ "nips_2021_INBO6h9gtG", "nips_2021_INBO6h9gtG", "MZ_LV_-Dv1f", "nips_2021_INBO6h9gtG", "nips_2021_INBO6h9gtG" ]
nips_2021__FPtOcc0ygy
Label consistency in overfitted generalized $k$-means
Linfan Zhang, Arash Amini
accept
This paper shows that in certain settings, overparameterized k-means can recover a refinement of the true clusters even when recovering the true clusters themselves is impossible. The reviewers agree that this is a novel observation that broadens the applicability of k-means and our understanding of when performance guarantees can be obtained. The paper is also timely and adds to the recent literature on overspecified models. The reviewers also point out several weaknesses of this work: - The assumptions, in particular the separation condition, appear strong. While the authors promise to add a lower bound in their response, the lower bound does not reflect dependence on the dimension. - The conclusion seems unsurprising under their assumptions, and the analysis involves standard techniques. The paper can be made stronger by incorporating several suggestions from the reviewers, including - Adding a lower bound. - Clarifying the difference between the two clustering tasks involved. - Discussion of the relation of their assumption to "distribution stability" (mentioned by Reviewer wtkZ), as well as to the assumptions made in prior work on Gaussian mixture models.
train
[ "slif_-BB1P8", "KyscsV4nYlz", "lg5bJYdjT5a", "zJJmau10y9U", "aV50e69r2ts", "jagaMAx5_Ab", "VYnhr-mhbms", "pbDX20m6vNF", "jvhnKQiM2og", "BHcIdT75F9G", "WVqZiwuO73p", "DADFQ7T39fp", "rljPxmQhd4M", "qDxIm4pCEuA", "nIsLsEa1_s", "1d8Ghs6NUE0", "8Az3HSZCn9F" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper considers the problem of label recovery from data clustering. Generally speaking, these two tasks are unrelated: the label recovery is a supervised learning problem, where each data point has a unknonwn true label, and the goal is to infer the label for each data point; while the clustering problem is to...
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021__FPtOcc0ygy", "nips_2021__FPtOcc0ygy", "nips_2021__FPtOcc0ygy", "aV50e69r2ts", "rljPxmQhd4M", "nips_2021__FPtOcc0ygy", "jagaMAx5_Ab", "WVqZiwuO73p", "slif_-BB1P8", "8Az3HSZCn9F", "DADFQ7T39fp", "KyscsV4nYlz", "nIsLsEa1_s", "slif_-BB1P8", "lg5bJYdjT5a", "8Az3HSZCn9F", "nips...
nips_2021_sK6CtXIgKcp
Open-set Label Noise Can Improve Robustness Against Inherent Label Noise
Learning with noisy labels is a practically challenging problem in weakly supervised learning. In the existing literature, open-set noises are always considered to be poisonous for generalization, similar to closed-set noises. In this paper, we empirically show that open-set noisy labels can be non-toxic and even benefit the robustness against inherent noisy labels. Inspired by the observations, we propose a simple yet effective regularization by introducing Open-set samples with Dynamic Noisy Labels (ODNL) into training. With ODNL, the extra capacity of the neural network can be largely consumed in a way that does not interfere with learning patterns from clean data. Through the lens of SGD noise, we show that the noises induced by our method are random-direction, conflict-free and biased, which may help the model converge to a flat minimum with superior stability and enforce the model to produce conservative predictions on Out-of-Distribution instances. Extensive experimental results on benchmark datasets with various types of noisy labels demonstrate that the proposed method not only enhances the performance of many existing robust algorithms but also achieves significant improvement on Out-of-Distribution detection tasks even in the label noise setting.
accept
UPDATE: The revision from the authors has been reviewed. After some back-and-forth with the authors to discuss the details of the 300K Random Images dataset that they have chosen to use, the paper has been officially accepted. ---- Reviewers generally appreciated the paper's observation regarding the benefit of open-set label noise as being interesting and non-obvious. The simplicity and effectiveness of the technique, coupled with the theoretical analysis, were also appreciated. Some concerns were however raised on: (1) choice of hyper-parameters (2) relation to OAT and other out-of-distribution regularization schemes Point (1) was convincingly addressed in the response. There was some debate about point (2). From my reading, I do agree that there are conceptual similarities between the proposed technique and OAT. However, I also take the authors' point about the similarity being closer to OE, a baseline which the authors discuss and compare against. Further, even when considered as a variant of OAT for the label noise setting, the present paper contributes to theoretical understanding of such out-of-distribution regularization schemes. The authors are encouraged to incorporate the reviewers' suggestions, include a discussion of OAT, and contrast their method to other regularization schemes for label noise (Wasserstein Adversarial Regularization (WAR) on label noise, ICLR 2020; Robust early-learning: hindering the memorization of noisy labels, ICLR 2021). It was discovered late in the review process that this paper makes use of the 80 million tiny images dataset, which has been retracted (https://groups.csail.mit.edu/vision/TinyImages/). Following the NeurIPS ethical guidelines (https://neurips.cc/public/EthicsGuidelines), this dataset should not be used. For this reason, the paper is being conditionally accepted.
train
[ "u42kzu_xfUe", "sEyGOCFII0Q", "fWkaJAJCp_P", "G0WfbaZ6WT9", "pMluVTU-31E", "71CVYr1QwI6", "YEDyF99JPj", "z6oJsGrqAiZ", "KO3L3ShOBuB", "Rx487EvZmNS", "L4FWsIqq2-1", "HfxvpfXGMjq", "vrVxJtYcfz", "pMopCaGcWY", "esbaQVGyC6f", "5dxC6KDKqdW" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank all the reviewers for your recognition and encouragement. Thanks to the rebuttal, all concerns of three reviewers have been well addressed in discussion, while we are still looking forward to receiving feedback from Reviewer wWfP. Specifically, the main concern from Reviewer wWfP is about the differences...
[ -1, 7, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_sK6CtXIgKcp", "nips_2021_sK6CtXIgKcp", "HfxvpfXGMjq", "71CVYr1QwI6", "nips_2021_sK6CtXIgKcp", "z6oJsGrqAiZ", "Rx487EvZmNS", "KO3L3ShOBuB", "YEDyF99JPj", "L4FWsIqq2-1", "pMluVTU-31E", "5dxC6KDKqdW", "esbaQVGyC6f", "sEyGOCFII0Q", "nips_2021_sK6CtXIgKcp", "nips_2021_sK6CtXIgKcp...
nips_2021_FKCTeO1fsvH
The Complexity of Sparse Tensor PCA
Davin Choo, Tommaso d'Orsi
accept
This paper considers the "Sparse Tensor PCA" problem, an interesting generalisation of the spike-matrix model. It is a theoretical paper focusing on computational complexity, and the main results (Theorems 1 and 2) discuss the performance of a family of algorithms that smoothly interpolates between polynomial-time algorithms and the exponential-time exhaustive search algorithm. Overall, there is a clear agreement on the reviewers' side to accept the paper for publication at NeurIPS. The consensus is that this is a good and solid paper, in terms of results and clarity. While the paper adapts existing algorithmic and lower bound techniques to the sparse tensor PCA problem, it has been judged well written and enjoyable to read. Some of the reviewers actually increased their score after the rebuttal, acknowledging that the authors successfully answered their comments.
test
[ "FiocEBQrr5V", "My8qtCJs7WJ", "Mh0JUDmt4bR", "rczoa554bGt", "Rw8UNBHMiR", "3owgrU7jFmb", "IkOdpAEtzl0", "Xiaee0qDNl", "qZdyt9YJ0eq", "5-ZiAe3xYxX", "CxB_WuOp-Vx", "TkdC9pMwAZx", "6ZxcBIU1dTb", "-aOEf_McYoD", "63oTfHcCj8_", "02smXZi30ef" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the reviewer for the response and her/his suggestion of studying first the case $p\\rightarrow \\infty$.\n\nIt is indeed interesting to understand how to \"fill\" the gap between efficient algorithms and computational lower-bounds observed in this manuscript.", " > I think we are on the same page wit...
[ -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 7, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "IkOdpAEtzl0", "Mh0JUDmt4bR", "rczoa554bGt", "Rw8UNBHMiR", "CxB_WuOp-Vx", "nips_2021_FKCTeO1fsvH", "TkdC9pMwAZx", "3owgrU7jFmb", "-aOEf_McYoD", "6ZxcBIU1dTb", "63oTfHcCj8_", "02smXZi30ef", "nips_2021_FKCTeO1fsvH", "nips_2021_FKCTeO1fsvH", "nips_2021_FKCTeO1fsvH", "nips_2021_FKCTeO1fsvH...
nips_2021_7HQiArc-sKf
Learning to Elect
Voting systems have a wide range of applications including recommender systems, web search, product design and elections. Limited by the lack of general-purpose analytical tools, it is difficult to hand-engineer desirable voting rules for each use case. For this reason, it is appealing to automatically discover voting rules geared towards each scenario. In this paper, we show that set-input neural network architectures such as Set Transformers, fully-connected graph networks and DeepSets are both theoretically and empirically well-suited for learning voting rules. In particular, we show that these network models can not only mimic a number of existing voting rules to compelling accuracy --- both position-based (such as Plurality and Borda) and comparison-based (such as Kemeny, Copeland and Maximin) --- but also discover near-optimal voting rules that maximize different social welfare functions. Furthermore, the learned voting rules generalize well to different voter utility distributions and election sizes unseen during training.
accept
The majority of the reviewers were excited about the novel applications of ML techniques to social choice and were impressed by the good performance of the solution proposed in this paper. The paper is well-written and the work is solid. Some concerns were raised about the significance and motivation of the actual technical problem solved in this paper (i.e., learning social welfare maximizers) and about its technical depth in ML. The response was generally effective and clarified some points raised by the reviewers. After the discussions, reviewers' opinions did not change much (except that the score of one reviewer was raised from 6 to 7). The overall sentiment remained positive. After all, the novelty and potential to stimulate future work and discussions outweigh the cons, which is the main reason behind the recommendation.
val
[ "60p3Sb80J3", "PuRugPzRBZq", "e7Qod9dBsL1", "b5g0dkmVMzT", "w_uN792YoRW", "WKw2bSfGwxz", "1Ek7A0B13cJ", "6ihdDHPqTZ5", "_nR3JPzE5Md", "3vw06YF7zk", "nmqjGxQ3B75", "2xxm3KS5X2d" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper empirically studies the ability of PIN (Set Transformer, GIN, DeepSets and MLP) applied to predicting the (top-1) winner of voting rules (Plurality, Borda, Copeland, Maximin and Kemeny) and predicting the social-welfare-maximizing (utilitarian and egalitarian) candidates. The PINs are trained on synthet...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "nips_2021_7HQiArc-sKf", "nips_2021_7HQiArc-sKf", "b5g0dkmVMzT", "PuRugPzRBZq", "60p3Sb80J3", "2xxm3KS5X2d", "nmqjGxQ3B75", "3vw06YF7zk", "nips_2021_7HQiArc-sKf", "nips_2021_7HQiArc-sKf", "nips_2021_7HQiArc-sKf", "nips_2021_7HQiArc-sKf" ]
nips_2021_ZBeCVICs1Ua
KALE Flow: A Relaxed KL Gradient Flow for Probabilities with Disjoint Support
We study the gradient flow for a relaxed approximation to the Kullback-Leibler (KL) divergencebetween a moving source and a fixed target distribution.This approximation, termed theKALE (KL approximate lower-bound estimator), solves a regularized version ofthe Fenchel dual problem defining the KL over a restricted class of functions.When using a Reproducing Kernel Hilbert Space (RKHS) to define the functionclass, we show that the KALE continuously interpolates between the KL and theMaximum Mean Discrepancy (MMD). Like the MMD and other Integral ProbabilityMetrics, the KALE remains well defined for mutually singulardistributions. Nonetheless, the KALE inherits from the limiting KL a greater sensitivity to mismatch in the support of the distributions, compared with the MMD. These two properties make theKALE gradient flow particularly well suited when the target distribution is supported on a low-dimensional manifold. Under an assumption of sufficient smoothness of the trajectories, we show the global convergence of the KALE flow. We propose a particle implementation of the flow given initial samples from the source and the target distribution, which we use to empirically confirm the KALE's properties.
accept
This paper develops the KALE particle flow. Although the paper is incremental and the noise injection scheme is an adaptation from MMD flow, the reviewers and the AC agreed that the theoretical contribution of the paper is reasonable; the experimental side of the paper, however, is rather weak. Weak accept
train
[ "ihD5YCCDhjZ", "Ynn9fcOFkMS", "I6bsvgE-FhJ", "cS3WJ2tM_mm", "2MzqDXAAO3n", "xoQTayQvnz", "2UwwzMGbqKf", "4Ne8wu79tsK", "956jVXOQjk-", "FJXeE5zoIJm", "G9QwCeubHHl", "U2acOGyxOLl", "_mI2jOwDFJq" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed response, which adequately addressed my questions. I think the discussion raised some interesting points, especially regarding the sample complexity: the dimension-free approximation bounds (A1(b)) is another nice feature of KALE. I recommend acceptance for this work. ", " Thank you ...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "FJXeE5zoIJm", "2MzqDXAAO3n", "nips_2021_ZBeCVICs1Ua", "956jVXOQjk-", "4Ne8wu79tsK", "2UwwzMGbqKf", "_mI2jOwDFJq", "U2acOGyxOLl", "I6bsvgE-FhJ", "G9QwCeubHHl", "nips_2021_ZBeCVICs1Ua", "nips_2021_ZBeCVICs1Ua", "nips_2021_ZBeCVICs1Ua" ]
nips_2021_lHvy0DLYWm
When Is Generalizable Reinforcement Learning Tractable?
Agents trained by reinforcement learning (RL) often fail to generalize beyond the environment they were trained in, even when presented with new scenarios that seem similar to the training environment. We study the query complexity required to train RL agents that generalize to multiple environments. Intuitively, tractable generalization is only possible when the environments are similar or close in some sense. To capture this, we introduce Weak Proximity, a natural structural condition that requires the environments to have highly similar transition and reward functions and share a policy providing optimal value. Despite such shared structure, we prove that tractable generalization is impossible in the worst case. This holds even when each individual environment can be efficiently solved to obtain an optimal linear policy, and when the agent possesses a generative model. Our lower bound applies to the more complex task of representation learning for efficient generalization to multiple environments. On the positive side, we introduce Strong Proximity, a strengthened condition which we prove is sufficient for efficient generalization.
accept
The paper studies the important problem of generalization in RL. The authors propose different metrics of similarity between MDPs and derive negative or positive results depending on the strength of the assumption. Based on the reviews and rebuttal, I believe this is a solid paper with interesting and non-trivial results. Some assumptions seem rather strong, but I'm persuaded that the paper is an interesting starting point for further research on the topic. So I'm proposing acceptance for it.
train
[ "JkXNEBcOUK7", "CYN6atucK7Y", "RveDn-I12p9", "6ENtmaRohVS", "QtH_eDdh35h", "0alS63T62XZ", "Y2vhkYpogUL", "QsYMZ_ghOjh", "uffdirM8rjW", "3-td7UTin7W", "biJiypsoRrl", "ojtwdTt5jDj", "_nJexlC1uXg", "MP5lhcbLHQy", "Id9Zfgsh83" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Re: Alternate Mixture**\nSince this is a different problem setting, it is not immediately clear to us whether the learning difficulty will apply, and this would merit future investigation.\n\n**Re: Unique Optimal Policy**\nIndeed, if the optimal policy is unique and shared by all the MDPs, and optimizing for th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 2 ]
[ "CYN6atucK7Y", "Y2vhkYpogUL", "6ENtmaRohVS", "uffdirM8rjW", "0alS63T62XZ", "QsYMZ_ghOjh", "ojtwdTt5jDj", "Id9Zfgsh83", "MP5lhcbLHQy", "_nJexlC1uXg", "nips_2021_lHvy0DLYWm", "nips_2021_lHvy0DLYWm", "nips_2021_lHvy0DLYWm", "nips_2021_lHvy0DLYWm", "nips_2021_lHvy0DLYWm" ]
nips_2021_DLKakJ2W-In
Relational Self-Attention: What's Missing in Attention for Video Understanding
Convolution has been arguably the most important feature transform for modern neural networks, leading to the advance of deep learning. Recent emergence of Transformer networks, which replace convolution layers with self-attention blocks, has revealed the limitation of stationary convolution kernels and opened the door to the era of dynamic feature transforms. The existing dynamic transforms, including self-attention, however, are all limited for video understanding where correspondence relations in space and time, i.e., motion information, are crucial for effective representation. In this work, we introduce a relational feature transform, dubbed the relational self-attention (RSA), that leverages rich structures of spatio-temporal relations in videos by dynamically generating relational kernels and aggregating relational contexts. Our experiments and ablation studies show that the RSA network substantially outperforms convolution and self-attention counterparts, achieving the state of the art on the standard motion-centric benchmarks for video action recognition, such as Something-Something-V1&V2, Diving48, and FineGym.
accept
The paper received three borderline ratings (1 negative, 2 positive) and one accept rating. The main weaknesses were: 1. The results on Kinetics are weak. 2. The ideas are kind of incremental. 3. The presentation of this paper is unclear. * The AC does not consider 1 an important concern. It would be nice to have strong results on Kinetics as well, but Kinetics relies more on appearance than on motion and long-range interactions (the focus of this paper). * The AC shares the same concerns as 2 and 3. The technical contributions and the writing quality do not meet the bar of NeurIPS. Due to these issues, the AC recommends rejection. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere.
train
[ "CFeTapLZJfl", "gqy5BR1mQHA", "A767ngEzJcc", "yW93RUYOmVR", "VqWS7jHw3ne", "wDwkn9ARVM", "aAgTpvHkGlT", "EZYKIWWuLBC", "tAqe-hEuHr", "YFgIyPzhAPe", "9kzVFSQPrbm", "zx89PsBCWqv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their clarifications and extra experiments.\n\nAfter reading the other reviews, I agree that the paper could be more clear and terms like content-to-content or content-to-position should be better explained. At the moment they are mainly self-explained by the terms in the equations but thi...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "aAgTpvHkGlT", "EZYKIWWuLBC", "VqWS7jHw3ne", "wDwkn9ARVM", "tAqe-hEuHr", "YFgIyPzhAPe", "9kzVFSQPrbm", "zx89PsBCWqv", "nips_2021_DLKakJ2W-In", "nips_2021_DLKakJ2W-In", "nips_2021_DLKakJ2W-In", "nips_2021_DLKakJ2W-In" ]
nips_2021_Htnjc4kHsNF
Towards Enabling Meta-Learning from Target Models
Su Lu, Han-Jia Ye, Le Gan, De-Chuan Zhan
accept
The reviewers agree that the paper has interesting ideas and represents an exciting direction in the meta-learning domain. Hopefully, the reviewers' comments will help in preparing the next draft.
train
[ "IKBxD1LPfX", "QLAo2L87xAt", "ui18hCgg1b", "1gYRUFGvbuh", "giO2QMNgra1", "vv4zpVRe_9v", "x0DyRKWVDwg", "3w7E4v1aMIq", "wEJkzMdZmlg", "lTBgtsxuCH", "RplHdijRZyT", "UQi7C3ktSkG", "soBZJn2er4t" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an efficient S/T protocol meta-learning algorithm. Specifically, in order to reduce the number of required target models and the high computational cost, only the target models on those hardest tasks are constructed by fine-tunning the pre-trained network, then the knowledge distillation is use...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "nips_2021_Htnjc4kHsNF", "giO2QMNgra1", "x0DyRKWVDwg", "vv4zpVRe_9v", "wEJkzMdZmlg", "RplHdijRZyT", "lTBgtsxuCH", "IKBxD1LPfX", "IKBxD1LPfX", "soBZJn2er4t", "UQi7C3ktSkG", "nips_2021_Htnjc4kHsNF", "nips_2021_Htnjc4kHsNF" ]
nips_2021_H5TBqNFPKSJ
A Near-Optimal Algorithm for Debiasing Trained Machine Learning Models
We present a scalable post-processing algorithm for debiasing trained models, including deep neural networks (DNNs), which we prove to be near-optimal by bounding its excess Bayes risk. We empirically validate its advantages on standard benchmark datasets across both classical algorithms as well as modern DNN architectures and demonstrate that it outperforms previous post-processing methods while performing on par with in-processing. In addition, we show that the proposed algorithm is particularly effective for models trained at scale where post-processing is a natural and practical choice.
accept
The balance of the reviews is in favor of accepting the paper. I want to follow this consensus. While some of the reviewers described significant issues with the paper, I do not view these issues as disqualifying. I expect the camera-ready version of the paper will be able to address most of them as per the author response.
train
[ "QSAhaiK2jYr", "SkdjJygDe38", "3r8fxQCAGla", "o7omGfCOXq", "wcnVL6KkbUR", "7TYSCD60lrM" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a near-optimal post-processing algorithm (via a regularized optimization objective) that is theoretically guaranteed to debias trained machine learning models for the problem of statistical parity, which is both scalable and performs favorably relative to several existing post-processing and in...
[ 7, -1, -1, -1, 5, 7 ]
[ 4, -1, -1, -1, 4, 3 ]
[ "nips_2021_H5TBqNFPKSJ", "QSAhaiK2jYr", "7TYSCD60lrM", "wcnVL6KkbUR", "nips_2021_H5TBqNFPKSJ", "nips_2021_H5TBqNFPKSJ" ]
nips_2021_ws4BkjI1l-q
GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement
Advances in unsupervised learning of object-representations have culminated in the development of a broad range of methods for unsupervised object segmentation and interpretable object-centric scene generation. These methods, however, are limited to simulated and real-world datasets with limited visual complexity. Moreover, object representations are often inferred using RNNs which do not scale well to large images or iterative refinement which avoids imposing an unnatural ordering on objects in an image but requires the a priori initialisation of a fixed number of object representations. In contrast to established paradigms, this work proposes an embedding-based approach in which embeddings of pixels are clustered in a differentiable fashion using a stochastic stick-breaking process. Similar to iterative refinement, this clustering procedure also leads to randomly ordered object representations, but without the need of initialising a fixed number of clusters a priori. This is used to develop a new model, GENESIS-v2, which can infer a variable number of object representations without using RNNs or iterative refinement. We show that GENESIS-v2 performs strongly in comparison to recent baselines in terms of unsupervised image segmentation and object-centric scene generation on established synthetic datasets as well as more complex real-world datasets.
accept
- The proposed method is tackling an important problem. The reviewers found that the approach is reasonable and has some novelty. Demonstration of the method on the real-world dataset is a step forward in this line of research. - The major concerns from the reviewers are addressed well enough by the rebuttal. - The clarity is fair enough, but some clarification, as pointed out by the reviewers, would make the paper better.
train
[ "B-fXZovELFM", "o17m_YC1rw", "vz4N1KMvJKx", "TXAuUESjpNX", "qyX5if6dQDv", "bCi1ZSG3ex_", "a6pxhFIBikI", "3Ae5m-3IaHf", "QLXhEHFltIp", "aev0jUYXzwT", "ja6yvdptdsG", "O-O_OSF4S2q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the comments.\nI keep my original rating.\n\nAdditional notes:\n If you make the claim that KL regularization is the key to high segmentation quality, then you probably want to explain why the KL regularization in your baseline cannot achieve a similar effect.", " Thanks for the response. After r...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "qyX5if6dQDv", "bCi1ZSG3ex_", "3Ae5m-3IaHf", "a6pxhFIBikI", "O-O_OSF4S2q", "QLXhEHFltIp", "ja6yvdptdsG", "aev0jUYXzwT", "nips_2021_ws4BkjI1l-q", "nips_2021_ws4BkjI1l-q", "nips_2021_ws4BkjI1l-q", "nips_2021_ws4BkjI1l-q" ]
nips_2021_wRFj6EKvpl
How Data Augmentation affects Optimization for Linear Regression
Though data augmentation has rapidly emerged as a key tool for optimization in modern machine learning, a clear picture of how augmentation schedules affect optimization and interact with optimization hyperparameters such as learning rate is nascent. In the spirit of classical convex optimization and recent work on implicit bias, the present work analyzes the effect of augmentation on optimization in the simple convex setting of linear regression with MSE loss.We find joint schedules for learning rate and data augmentation scheme under which augmented gradient descent provably converges and characterize the resulting minimum. Our results apply to arbitrary augmentation schemes, revealing complex interactions between learning rates and augmentations even in the convex setting. Our approach interprets augmented (S)GD as a stochastic optimization method for a time-varying sequence of proxy losses. This gives a unified way to analyze learning rate, batch size, and augmentations ranging from additive noise to random projections. From this perspective, our results, which also give rates of convergence, can be viewed as Monro-Robbins type conditions for augmented (S)GD.
accept
As far as I am aware, this is the first work that presents theoretical results for optimization dynamics with data augmentation, and three of the four reviewers found the paper to be of sufficient significance and impact to warrant acceptance. I ultimately found the argument that the paper should be rejected on the grounds that it analyzes optimization dynamics instead of generalization error unconvincing. While I generally agree that the primary motivation for data augmentation is to prevent overfitting, it's not obvious to me that the effects of data augmentation on optimization are well understood or trivial, and it's not even a priori obvious to me that optimization should necessarily converge for all data augmentation schemes outside of basic ones like additive Gaussian noise.
train
[ "D-HgVDf7O60", "U8w8-baqgvd", "4d_OYlbnG_b", "Hd_pBqNY6Ww", "aUGRiHTCqyv", "TDIgbsX66Nx", "PSQRWJihqsD", "0AdlLL4E6W", "H9f3qp1HICk", "EQVA70H7Jqj", "Q6JEaxWyUOP" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for this note. We wanted to note in case it was not clear that our general Theorems 3.1 and 3.2 apply to other couplings of data augmentation and learning rate decay. More specifically, the expressions in conditions (3.7) -- (3.12) are explicit functions of those quantities, and for any such schedule it ...
[ -1, 6, -1, -1, -1, -1, -1, -1, 3, 6, 7 ]
[ -1, 2, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "4d_OYlbnG_b", "nips_2021_wRFj6EKvpl", "TDIgbsX66Nx", "aUGRiHTCqyv", "Q6JEaxWyUOP", "U8w8-baqgvd", "EQVA70H7Jqj", "H9f3qp1HICk", "nips_2021_wRFj6EKvpl", "nips_2021_wRFj6EKvpl", "nips_2021_wRFj6EKvpl" ]
nips_2021_XnIYa2OG2sr
An Exact Characterization of the Generalization Error for the Gibbs Algorithm
Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm. However, existing bounds are often loose and lack of guarantees. As a result, they may fail to characterize the exact generalization ability of a learning algorithm.Our main contribution is an exact characterization of the expected generalization error of the well-known Gibbs algorithm (a.k.a. Gibbs posterior) using symmetrized KL information between the input training samples and the output hypothesis. Our result can be applied to tighten existing expected generalization error and PAC-Bayesian bounds. Our approach is versatile, as it also characterizes the generalization error of the Gibbs algorithm with data-dependent regularizer and that of the Gibbs algorithm in the asymptotic regime, where it converges to the empirical risk minimization algorithm. Of particular relevance, our results highlight the role the symmetrized KL information plays in controlling the generalization error of the Gibbs algorithm.
accept
This paper studies the Gibbs posterior (called Gibbs algorithm in the paper), an extension of the posterior in Bayesian statistics, where the negative log-likelihood is replaced by a general loss function. The Gibbs posterior is motivated by information theory, PAC-Bayes bounds and many approaches that aim at generalizing the Bayesian techniques in ML. The main result of the paper (Theorem 1) is an exact characterization of the Gibbs posterior expected risk, as the symmetrized KL information between the sample and the parameter over the inverse-temperature. The authors then provide applications of this result: improvements of existing PAC-Bayes bounds in the sub-Gaussian case (Section 4), and an asymptotic study in the case where the inverse temperature grows to infinity (Section 5; this case is not covered by existing PAC-Bayes bounds). Finally, in Section 6, the author(s) generalize Theorem 1 to the case where one adds a possibly data-dependent regularizer to the loss function. Theorem 1 is new. It leads to interesting generalizations or improvements of existing results. The paper should be of interest for researchers not only in the PAC-Bayes community, but more generally for all researchers in Bayesian ML. My opinion is that the paper is overall very well written. This is also the opinion of all the Reviewers, even though some of them mentioned that the paper is a little dense in parts. Some of them raised minor points / typos; please take them into account. I would just point out what I believe is a major terminology problem. The probability distribution in (3) is sometimes referred to as "pseudo-posterior", "generalized posterior", "Gibbs posterior" (e.g. [4]) or "Gibbs estimator" (e.g. [19]). It is simply a Gibbs distribution, but "posterior" or "estimator" emphasize the fact that in this case, this distribution is used as a tool for statistics / machine learning. This is the first case I see it referred to as "Gibbs algorithm", and this is problematic for two reasons: - this is not an algorithm, - the term "Gibbs algorithm" is sometimes used to refer to the "Gibbs sampling" algorithm https://en.wikipedia.org/wiki/Gibbs_sampling introduced and studied by [Geman, S., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on pattern analysis and machine intelligence, (6), 721-741]. To keep calling the generalized posterior in (3) the "Gibbs algorithm" would thus lead to confusion. I strongly recommend that the authors change to "generalized posterior", "Gibbs posterior" or even "Gibbs distribution". For these reasons, I strongly recommend changing the name of the paper from "Gibbs algorithm", for example to "Gibbs posterior".
train
[ "hNRgFuUPmf", "NgJMjm43LMb", "uiqxJ8PwvM", "lgC_buvaFIn", "jhvySeNhxiQ", "6Q2acY38hx", "MhH5nAyjXzv", "LT8V0H_K_fc", "aBi2OrKQmT3" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper gives an exact characterization of the expected generalization error for the Gibbs algorithm by way of a symmetrized KL divergence between the input training data and the output hypothesis. This allows for tightening some existing generalization bounds under certain assumptions. \n The Gibbs algorithm ...
[ 7, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ 3, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "nips_2021_XnIYa2OG2sr", "uiqxJ8PwvM", "aBi2OrKQmT3", "hNRgFuUPmf", "LT8V0H_K_fc", "MhH5nAyjXzv", "nips_2021_XnIYa2OG2sr", "nips_2021_XnIYa2OG2sr", "nips_2021_XnIYa2OG2sr" ]
nips_2021__8vCV7AxPZ
Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning
Importance Sampling (IS) is a widely used building block for a large variety of off-policy estimation and learning algorithms. However, empirical and theoretical studies have progressively shown that vanilla IS leads to poor estimations whenever the behavioral and target policies are too dissimilar. In this paper, we analyze the theoretical properties of the IS estimator by deriving a novel anticoncentration bound that formalizes the intuition behind its undesired behavior. Then, we propose a new class of IS transformations, based on the notion of power mean. To the best of our knowledge, the resulting estimator is the first to achieve, under certain conditions, two key properties: (i) it displays a subgaussian concentration rate; (ii) it preserves the differentiability in the target distribution. Finally, we provide numerical simulations on both synthetic examples and contextual bandits, in comparison with off-policy evaluation and learning baselines.
accept
Importance sampling (IS, aka IPS) is an oft-used and oft-studied technique for counterfactual reasoning. While it is unbiased (under a "full support" assumption), it can have very large variance when the target policy's distribution is substantially different from that of the data collection (i.e., logging) policy, and this can have an adverse effect on policy evaluation and learning. This paper analyzes the concentration properties of the IS estimator, proving a new "anti-concentration bound," which shows that the reward estimated by IS can be bounded away from the true expected reward with probability $\geq \delta$. The paper then proposes a power-mean transformation of the importance weights, and proves that the resulting estimator has subgaussian concentration. Experiments with the new estimator round out the paper, and the results show that the transformed importance weights generally outperform the "vanilla" importance weights. The reviews agree that this is an interesting, novel analysis, and that it is well written and easy to follow. There were some concerns, but they have mostly been answered by the authors' responses. One unresolved comment (from Reviewers 8Vhk and 9nhQ) is that the case for subgaussianity would be more convincing with more empirical evidence that it helps in practice. The reviewers were puzzled why the model-based baselines, which don't exhibit subgaussian tails, outperformed the proposed methods in some experiments. Regardless, this is a strong paper that deserves to be accepted. Based on the novelty and relevance of the topic, I would go as far as to nominate it for a spotlight talk.
train
[ "JkS3l_qstpu", "OpPNi4JjB2t", "S-VoCkPDl-6", "oUVr7_wsH3o", "cfDr4vSE6Ku", "wb0Wk_CLQrs", "RjxAUncmpFv", "Jj3QuzCBes2", "qrywGCvjlJ", "0abtmfkJXwP", "IwnRFk2F5me", "frC3NiAfRiC", "0pYk2heNFk", "i722KaaEUhR", "UUhiqh78-A", "bbv6-FWQgcn" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new estimator for off-policy evaluation and learning: the estimator modifies original importance sampling by adding a weight correction term controlled by one additional hyper-parameter. The paper analyzed the theoretical properties of the proposed estimator and used experiments to show the e...
[ 7, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 2, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "nips_2021__8vCV7AxPZ", "S-VoCkPDl-6", "i722KaaEUhR", "wb0Wk_CLQrs", "nips_2021__8vCV7AxPZ", "RjxAUncmpFv", "Jj3QuzCBes2", "qrywGCvjlJ", "0abtmfkJXwP", "0pYk2heNFk", "bbv6-FWQgcn", "UUhiqh78-A", "cfDr4vSE6Ku", "JkS3l_qstpu", "nips_2021__8vCV7AxPZ", "nips_2021__8vCV7AxPZ" ]
nips_2021_XL9DWRG7mJn
Rethinking gradient sparsification as total error minimization
Atal Sahu, Aritra Dutta, Ahmed M. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis
accept
This paper reformulates an existing problem (how to sparsify gradients in distributed training) and proposes to minimize a new objective (the total compression error subject to communication constraint as opposed to per-iteration compression error). This change of viewpoint leads to a new algorithm (hard threshold algorithm with variable sparsity). The authors show the effectiveness of the proposed algorithm through theoretical bounds and experiments. All reviewers agree that this is a valuable contribution. Comments from previous submission to ICML are adequately addressed.
train
[ "O7Pa-FIoBJ8", "sb4i9bO5c6F", "Inez-R5X9DB", "hO-JUkgT_HK", "izpo_kBPLuT", "8BKgSsPRwr8", "wpfn0XbVep", "oiEJMsDqEUb", "aChnufbJY_s", "-lB5hoBJkV" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " We thank the reviewer for the positive feedback, and for pointing out a meaningful direction for future research. We will clearly elaborate our use of the word optimal, as mentioned in the response to reviewer o9PJ.", " Many thanks to the authors' detailed replies and other reviewers' insights into this paper. ...
[ -1, -1, 7, 7, 7, -1, -1, -1, -1, 7 ]
[ -1, -1, 4, 5, 3, -1, -1, -1, -1, 3 ]
[ "sb4i9bO5c6F", "8BKgSsPRwr8", "nips_2021_XL9DWRG7mJn", "nips_2021_XL9DWRG7mJn", "nips_2021_XL9DWRG7mJn", "Inez-R5X9DB", "izpo_kBPLuT", "-lB5hoBJkV", "hO-JUkgT_HK", "nips_2021_XL9DWRG7mJn" ]
nips_2021_r6Khc1Lq9z1
Approximate optimization of convex functions with outlier noise
We study the problem of minimizing a convex function given by a zeroth order oracle that is possibly corrupted by {\em outlier noise}. Specifically, we assume the function values at some points of the domain are corrupted arbitrarily by an adversary, with the only restriction being that the total volume of corrupted points is bounded. The goal then is to find a point close to the function's minimizer using access to the corrupted oracle.We first prove a lower bound result showing that, somewhat surprisingly, one cannot hope to approximate the minimizer {\em nearly as well} as one might expect, even if one is allowed {\em an unbounded number} of queries to the oracle. Complementing this negative result, we then develop an efficient algorithm that outputs a point close to the minimizer of the convex function, where the specific distance matches {\em exactly}, up to constant factors, the distance bound shown in our lower bound result.
accept
This paper studies the problem of minimizing a convex function given an evaluation oracle to it -- with the twist that the oracle has outlier noise: for some parameter $K$, a volume equal to that of a ball of radius $K$ can be corrupted adversarially (thus, when evaluated at any point inside this corrupted region, the oracle can return an arbitrary, potentially adversarial value). This immediately puts a lower bound of $K$ on the distance to the optimum. The paper proves a non-trivial lower bound of $K\sqrt{\beta/\alpha}$ on the best distance guarantee to the optimum for any $\alpha$-convex $\beta$-smooth function and shows that one can achieve this bound within a constant factor with $\mathrm{poly}(\beta/\alpha)$ queries. The idea of the algorithm is simple and natural (get somewhat close to the optimum by gradient descent and then do a clever bootstrapping by averaging gradients in a ball). But the paper studies and proves a clean result about a basic problem of interest to the optimization and machine learning community, broadly construed. We recommend acceptance.
train
[ "XhsJkWN6-z8", "tzabXqr0_2Q", "YjscT5GBwql", "92Vr-cUaBO", "kzTGq_ZwA4t", "nyyQtHkR3l4" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper studies the problem of convex optimization using a zeroth order oracle with adversarial noise. In particular, the paper studies the setting where there is some convex function $f: \\mathbb{R}^d \\rightarrow \\mathbb{R}$ and the learner may interact with an oracle that returns the value of $f$ at a given...
[ 6, 6, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, 3 ]
[ "nips_2021_r6Khc1Lq9z1", "nips_2021_r6Khc1Lq9z1", "XhsJkWN6-z8", "tzabXqr0_2Q", "nyyQtHkR3l4", "nips_2021_r6Khc1Lq9z1" ]
nips_2021_LEqVjnffcWo
Fair Classification with Adversarial Perturbations
L. Elisa Celis, Anay Mehrotra, Nisheeth Vishnoi
accept
This paper provides novel theoretical results regarding fair classification in the presence of adversarial perturbations of the training examples, as well as supporting experiments on real world datasets. The reviewers found the problem formulation to be well motivated by application, and they deemed the theoretical results to be sound, novel, and interesting. Therefore, I recommend acceptance of the paper. Throughout the discussion there are a number of concerns that should be addressed in the next version of the paper, and I strongly encourage the authors to do so. To name a few: (1) definitions and terminology should be defined early and clearly, (2) the additional experiments discussed in the responses should be added to the paper, and (3) the implementation strategy of solving the proposed programs in practice should be described clearly for the paper to be self-contained.
train
[ "Mk2URs6c1U_", "XG68Egqhm7", "OjiE6p_AY57", "CEO2-iLBMHE", "B88bGt1k2pr", "_iwSClyfMY", "JCia8IFcZF1", "6D7udG439_n", "aJGL6EG83yC", "ayYOTy7M6nX", "8fVEDhymcz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for answering my questions! I'll keep my score at 7.", " Thanks for answering my questions. I do not have any further concerns. I agree to accept the paper.", " Thanks for your response. The added experiments results make the paper more convincing. I champion the acceptance of the paper.", "This pape...
[ -1, -1, -1, 7, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, 3, 3, 4 ]
[ "B88bGt1k2pr", "6D7udG439_n", "_iwSClyfMY", "nips_2021_LEqVjnffcWo", "8fVEDhymcz", "ayYOTy7M6nX", "CEO2-iLBMHE", "aJGL6EG83yC", "nips_2021_LEqVjnffcWo", "nips_2021_LEqVjnffcWo", "nips_2021_LEqVjnffcWo" ]
nips_2021_dJcUhDVu1G
Distributed Saddle-Point Problems Under Data Similarity
Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, Alexander Gasnikov
accept
Overall, the majority of reviewers liked the paper and found the contributions interesting for NeurIPS. I also went over the paper, and I think it provides an interesting and worthy contribution to distributed algorithms, studying saddle-point optimization problems under data similarity (mostly motivated by statistical settings) and providing lower bounds on communication together with matching upper bounds. I recommend acceptance.
test
[ "sETmR2C949M", "rjEQ_UDVN3E", "UEMqrrDDJyf", "U-XgS7zKYQ0", "8dXOjCAKzRN", "UGLyz80lI_", "04o2CMSLaCw", "QuGfU1Q3La6", "o6TLBQn9r6", "HfY19gtnbL", "tRjHDxBICK6" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies distributed algorithms for convex-concave saddle point problem under both centralized and decentralized setting. Under similarity assumption, the proposed algorithms with nearly matching the lower bounds over either types of networks.\n This paper studies distributed convex-concave saddle point...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 2, 2, 3 ]
[ "nips_2021_dJcUhDVu1G", "UEMqrrDDJyf", "UGLyz80lI_", "tRjHDxBICK6", "sETmR2C949M", "HfY19gtnbL", "o6TLBQn9r6", "nips_2021_dJcUhDVu1G", "nips_2021_dJcUhDVu1G", "nips_2021_dJcUhDVu1G", "nips_2021_dJcUhDVu1G" ]
nips_2021_fxHzZlo4dxe
Combining Latent Space and Structured Kernels for Bayesian Optimization over Combinatorial Spaces
We consider the problem of optimizing combinatorial spaces (e.g., sequences, trees, and graphs) using expensive black-box function evaluations. For example, optimizing molecules for drug design using physical lab experiments. Bayesian optimization (BO) is an efficient framework for solving such problems by intelligently selecting the inputs with high utility guided by a learned surrogate model. A recent BO approach for combinatorial spaces is through a reduction to BO over continuous spaces by learning a latent representation of structures using deep generative models (DGMs). The selected input from the continuous space is decoded into a discrete structure for performing function evaluation. However, the surrogate model over the latent space only uses the information learned by the DGM, which may not have the desired inductive bias to approximate the target black-box function. To overcome this drawback, this paper proposes a principled approach referred to as LADDER. The key idea is to define a novel structure-coupled kernel that explicitly integrates the structural information from decoded structures with the learned latent space representation for better surrogate modeling. Our experiments on real-world benchmarks show that LADDER significantly improves over the BO over latent space method, and performs better than or similarly to state-of-the-art methods.
accept
Recent work in Bayesian optimization has proposed latent space optimization, where Bayesian optimization is performed in the latent space of an autoencoder. This paper makes a potentially interesting advancement over the standard approach of using only latent codes as input to the GP. In particular, this paper defines a kernel jointly over a latent code z and the decoded structure \Phi(z). Ultimately, I think the simplicity of the authors' approach is its strength. An obvious weakness of latent space optimization is that, because the decoder is not perfect, the decoded structure \Phi(z) may not have the predicted objective value of z, even if the surrogate model were highly accurate. Directly incorporating the decoded structure into the surrogate model is an obvious and natural baseline approach to this, but (crucially) one that I have not seen elsewhere. Please incorporate the additional results run during the author feedback period and other reviewer feedback in the final version of the paper.
train
[ "s7ckLdW9Mp3", "FnkWbpTs3s4", "gUIgppba7AF", "_DIgyVQGjTt", "O_sSE7dpmPT", "XLp52CtMlgd", "3Nw7rLM-7Y7", "ZnE6MMhy16j" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a method for Bayesian optimization in combinatorial spaces, i.e. spaces of general objects, e.g. trees and graphs. Existing methods usually map these objects into a supspace of $\\mathbb{R}^n$ , then perform traditional BO in that latent space. However, doing this ignores the structural inform...
[ 5, 6, 6, -1, -1, -1, -1, 7 ]
[ 4, 4, 4, -1, -1, -1, -1, 3 ]
[ "nips_2021_fxHzZlo4dxe", "nips_2021_fxHzZlo4dxe", "nips_2021_fxHzZlo4dxe", "s7ckLdW9Mp3", "FnkWbpTs3s4", "ZnE6MMhy16j", "gUIgppba7AF", "nips_2021_fxHzZlo4dxe" ]
nips_2021_jZ6FlEB78CG
Gradual Domain Adaptation without Indexed Intermediate Domains
The effectiveness of unsupervised domain adaptation degrades when there is a large discrepancy between the source and target domains. Gradual domain adaptation (GDA) is one promising way to mitigate such an issue, by leveraging additional unlabeled data that gradually shift from the source to the target. Through sequentially adapting the model along the "indexed" intermediate domains, GDA substantially improves the overall adaptation performance. In practice, however, the extra unlabeled data may not be separated into intermediate domains and indexed properly, limiting the applicability of GDA. In this paper, we investigate how to discover the sequence of intermediate domains when it is not already available. Concretely, we propose a coarse-to-fine framework, which starts with a coarse domain discovery step via progressive domain discriminator training. This coarse domain sequence then undergoes a fine indexing step via a novel cycle-consistency loss, which encourages the next intermediate domain to preserve sufficient discriminative knowledge of the current intermediate domain. The resulting domain sequence can then be used by a GDA algorithm. On benchmark data sets of GDA, we show that our approach, which we name Intermediate DOmain Labeler (IDOL), can lead to comparable or even better adaptation performance compared to the pre-defined domain sequence, making GDA more applicable and robust to the quality of domain sequences. Codes are available at https://github.com/hongyouc/IDOL.
accept
The paper considers gradual domain adaptation without access to an intermediate sequence of distributions between source and target. This is a relatively new problem and a very important one. Author discussions helped address the reviewer concerns, and therefore I suggest that the paper be accepted. I ask the authors to please include the additional information discussed in the rebuttal period in the camera ready. I also strongly recommend the authors add more explanations about the limitations of the current experiment setting (GDA) and discuss the possible ways to actively address these issues. Although the paper is interesting and the results are a good first step to tackle the problem of gradual domain adaptation, the experiments are limited, limitations of the proposed method are not investigated, and the baselines are simple. Therefore, the paper does not qualify for a spotlight.
train
[ "Ba_wjBtk0NL", "L0B2-ggMyV", "YOtahfDJKD", "IOskf5kg7u", "DSmhgdrKGQc", "uE5jKCp2Jbp", "lW4oRZvgv1", "4vuyrsTZJRB", "NaQ6E6tIja_", "C_JfZMESiKe", "A6zeN3YSm50", "iKGlWZfCRdq", "DO8BEQU_RH4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I appreciate the detailed response by the authors.\nMy main concern about the scope of experiments is overall addressed, so I would like to increase my rating to \"accept\".\nHowever, I strongly recommend the authors add more explanations about the limitations of the current experiment setting (GDA) and discuss t...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "IOskf5kg7u", "nips_2021_jZ6FlEB78CG", "NaQ6E6tIja_", "lW4oRZvgv1", "uE5jKCp2Jbp", "C_JfZMESiKe", "4vuyrsTZJRB", "L0B2-ggMyV", "iKGlWZfCRdq", "DO8BEQU_RH4", "nips_2021_jZ6FlEB78CG", "nips_2021_jZ6FlEB78CG", "nips_2021_jZ6FlEB78CG" ]
nips_2021_Nb03vOtUfz
K-level Reasoning for Zero-Shot Coordination in Hanabi
Brandon Cui, Hengyuan Hu, Luis Pineda, Jakob Foerster
accept
The authors revisit classic ideas and show that with small modifications they can be made to achieve state-of-the-art coordination with held-out partners in Hanabi. Additionally, the authors provide some insight into why their modifications work, namely in curbing overfitting. While there were some concerns about novelty, the (mostly well-presented) results speak for themselves and make an interesting contribution to the ad-hoc teamplay / zero-shot coordination literature. That said, I have serious concerns with the current discussion of "human-AI coordination" that I expect to be addressed for the camera-ready. The current draft repeatedly refers to demonstrating "human-AI coordination"; however, this claim is misleading, since evaluation is done only with a human proxy behavioral cloning (BC) agent, and not actual humans. Although the authors refer to previous circumstantial evidence showing agent scores with BC bots are similar to those with actual humans, this does not imply that this should be true of all agents, and seems an especially dangerous claim for coordination games. Moreover, using human proxies over actual humans removes the possibility of qualitative feedback from human players, as well as subjective preferences over agents, both of which are important components of evaluating human-AI coordination. Indeed, my confidence in the authors' claim of BC-human equivalence is further weakened by claims in the draft that their results "confirm that ZSC [aka cross-play (XP)] is a great proxy setting for human-AI coordination and ad-hoc team play." While there seems to be a correlation in all score types, this claim is overblown. For example, in table 3, OP and synchronous KLR achieve similar cross-play scores, but the latter gets nearly double the score of the former with the human proxy - that's a big difference that would be missed if only doing cross-play! If the match between evaluation with human proxies and real humans is this tenuous, the human-AI coordination statements could be worse than misleading and actually wrong. Thus, I expect to see the human-AI coordination claims removed throughout the draft, including section titles, and clarified to specifically refer to coordination with human proxies or BC bots. Moreover, I would like to suggest that the authors in the future do not double down on coordination with human proxy bots alone, since this may well lead to a false sense of progress on their stated goal of coordination with real humans. One more minor point of feedback - while the set of baselines included is fairly strong and appropriate, there are a couple more that would make the paper's claims stronger. One is simultaneous training but with a fully connected interaction graph. While the authors evaluate a few different interaction graphs, they don't seem to evaluate this very simple one, as far as I can tell. Another is BRs to the various bots (especially to the human proxy), which would help place in context how well the evaluated agents are performing. Finally, as pointed out by Reviewer VvuS, there are issues with the current CH baseline; however, the authors seem aware and are working on fixing them. A few more minor points on presentation: Multiple reviewers found figure 1 confusing. This should be easy to improve. Also, I personally found figure 2 to be an odd way to represent probability distributions. The line segments connecting probabilities here are meaningless. Plotting points on a simplex, for example, might be more clear.
Finally, [1] is a relevant study on different interaction graphs in multi-agent RL that e.g. should likely be cited for the claims on lines 64 and 351. [1] Garnelo et al, Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity, AAMAS 2021
train
[ "CrCSL-BgL4", "faJAlZI2V0C", "y3FICco2ioF", "YdduPFAIosD", "TlY2tD83Q0G", "h5ShVL27wi", "YdNMdGl_mFj", "0_oF1swr73r", "sx47eCnx3V8", "d1Zeekc4pC_", "0pvOcBVkeJB", "6Bm6c7xj8Wf", "GZI2DQQTSzB", "zOZiR1rMXQq", "M48i-8dmS3C", "LZGWrIY5giI", "qdzlS2U3l5D", "iocuyqym_0", "gEO67Go11f0"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_r...
[ "An important problem in multi-agent RL is how to train agents to cooperate and coordinate well with humans. One approach has been to train agents that can coordinate well with themselves, using self-play. Recently more attention has been paid to the problem of coordinating well with independent training runs: refe...
[ 7, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "nips_2021_Nb03vOtUfz", "h5ShVL27wi", "YdduPFAIosD", "nips_2021_Nb03vOtUfz", "d1Zeekc4pC_", "0_oF1swr73r", "6Bm6c7xj8Wf", "GZI2DQQTSzB", "d1Zeekc4pC_", "qdzlS2U3l5D", "qdzlS2U3l5D", "iocuyqym_0", "M48i-8dmS3C", "LZGWrIY5giI", "CrCSL-BgL4", "gEO67Go11f0", "YdduPFAIosD", "F1mek6WG-hz...
nips_2021_jVzGglbNuW5
Learning Markov State Abstractions for Deep Reinforcement Learning
A fundamental assumption of reinforcement learning in Markov decision processes (MDPs) is that the relevant decision process is, in fact, Markov. However, when MDPs have rich observations, agents typically learn by way of an abstract state representation, and such representations are not guaranteed to preserve the Markov property. We introduce a novel set of conditions and prove that they are sufficient for learning a Markov abstract state representation. We then describe a practical training procedure that combines inverse model estimation and temporal contrastive learning to learn an abstraction that approximately satisfies these conditions. Our novel training objective is compatible with both online and offline training: it does not require a reward signal, but agents can capitalize on reward information when available. We empirically evaluate our approach on a visual gridworld domain and a set of continuous control benchmarks. Our approach learns representations that capture the underlying structure of the domain and lead to improved sample efficiency over state-of-the-art deep reinforcement learning with visual features---often matching or exceeding the performance achieved with hand-designed compact state information.
accept
All reviewers agree that the proposed approach is an important contribution to the area of representation learning in reinforcement learning. Although some lingering concerns regarding the empirical evaluation of the method still remain, overall the reviewers think that the paper introduces a fairly elegant and potentially useful recipe for enforcing the Markov property. We strongly advise the authors to carefully consider the suggestions made in the reviews and also during the discussion phase. For example, the fact that there exist multiple state abstractions that satisfy the Markov property and that other, additional, properties may also play an important role is a point that should be more clear in the paper. Another observation that came up multiple times in the reviews and was a point of contention in the discussion that followed was the lack of comparisons with a version of the proposed approach that only uses the “inverse loss”. The authors should add a comment to the paper explaining the reasons not to have such a comparison. There were also a few concerns regarding the presentation that should be taken into account when preparing a new version of the submission.
train
[ "-z4d1WCCrWN", "0Cb2M4KPsuE", "LJdiUQHg3sJ", "qiUYVofsNCa", "0avUUcgQ00q", "90fac6n16Gv", "_V6gK7mJV4L", "1Zz9MwY9MW3" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\n-------\n\n\nThis paper analyzes the problem of learning Markov state abstractions in\nreinforcement learning. The authors first prove that the abstraction\nmust satisfy two conditions: an inverse model equality and a density\nratio equality. These two conditions cannot be enforced in the training\nobject...
[ 6, 6, -1, -1, -1, -1, 7, 6 ]
[ 3, 4, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_jVzGglbNuW5", "nips_2021_jVzGglbNuW5", "1Zz9MwY9MW3", "0Cb2M4KPsuE", "_V6gK7mJV4L", "-z4d1WCCrWN", "nips_2021_jVzGglbNuW5", "nips_2021_jVzGglbNuW5" ]
nips_2021_PesaDDyvSk
Towards Deeper Deep Reinforcement Learning with Spectral Normalization
In computer vision and natural language processing, innovations in model architecture that increase model capacity have reliably translated into gains in performance. In stark contrast with this trend, state-of-the-art reinforcement learning (RL) algorithms often use small MLPs, and gains in performance typically originate from algorithmic innovations. It is natural to hypothesize that small datasets in RL necessitate simple models to avoid overfitting; however, this hypothesis is untested. In this paper we investigate how RL agents are affected by exchanging the small MLPs with larger modern networks with skip connections and normalization, focusing specifically on actor-critic algorithms. We empirically verify that naively adopting such architectures leads to instabilities and poor performance, likely contributing to the popularity of simple models in practice. However, we show that dataset size is not the limiting factor, and instead argue that instability from taking gradients through the critic is the culprit. We demonstrate that spectral normalization (SN) can mitigate this issue and enable stable training with large modern architectures. After smoothing with SN, larger models yield significant performance improvements --- suggesting that more ``easy'' gains may be had by focusing on model architectures in addition to algorithmic innovations.
accept
This paper argues that, in RL applications (specifically focusing on the SAC algorithm), typical deeper and wider architectures don't provide the big gains that we see in supervised learning. The paper then shows that spectral normalization could be used to improve the performance of deep architectures. The paper focuses on continuous control problems. The authors did a very good job during the rebuttal period and managed to address most of the concerns raised by the reviewers. Specifically, the authors have addressed the limited scope of discussing only SAC by adding DDPG results. The paper could be improved by having the idea tested on a wider class of algorithms and with more smoothness-controlling techniques, which the authors have identified as limitations of the work. The results provided in this paper would still be valuable for the NeurIPS community. I would recommend that the authors cite and discuss the concurrent work suggested by reviewer FNPT [1]. [1] Spectral Normalisation for Deep Reinforcement Learning: An Optimisation Perspective. Florin Gogianu, Tudor Berariu, Mihaela Rosca, Claudia Clopath, Lucian Busoniu, Razvan Pascanu, ICML 2021
val
[ "Y-iuA4AtaB", "6Bqd5JWW7a1", "35JHRcSeL23", "ay3O8dw3NX1", "4Kk8et0b7Wd", "gXd9edGYbhA", "UUpRGoEH-2f", "W04pAx-uUDV", "e_UBawrJUSv", "7LJIFNCte3m", "GRmbyfpn_sC" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank reviewer vkHq for thoughtful feedback and for updating the review!\n\nWe will make sure to mention that we have obtained baseline numbers through private communications to avoid confusion. It pleases us that there is consensus that removing the minor theoretical claim (proposition 1) will make the paper ...
[ -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "ay3O8dw3NX1", "nips_2021_PesaDDyvSk", "nips_2021_PesaDDyvSk", "gXd9edGYbhA", "6Bqd5JWW7a1", "nips_2021_PesaDDyvSk", "35JHRcSeL23", "GRmbyfpn_sC", "6Bqd5JWW7a1", "6Bqd5JWW7a1", "nips_2021_PesaDDyvSk" ]
nips_2021_Dti5bw14YZF
Functionally Regionalized Knowledge Transfer for Low-resource Drug Discovery
More recently, there has been a surge of interest in employing machine learning approaches to expedite the drug discovery process where virtual screening for hit discovery and ADMET prediction for lead optimization play essential roles. One of the main obstacles to the wide success of machine learning approaches in these two tasks is that the number of compounds labeled with activities or ADMET properties is too small to build an effective predictive model. This paper seeks to remedy the problem by transferring the knowledge from previous assays, namely in-vivo experiments, by different laboratories and against various target proteins. To accommodate these wildly different assays and capture the similarity between assays, we propose a functional rationalized meta-learning algorithm FRML for such knowledge transfer. FRML constructs the predictive model with layers of neural sub-networks or so-called functional regions. Building on this, FRML shares an initialization for the weights of the predictive model across all assays, while customizing it to each assay with a region localization network choosing the pertinent regions. The compositionality of the model improves the capacity of generalization to various and even out-of-distribution tasks. Empirical results on both virtual screening and ADMET prediction validate the superiority of FRML over state-of-the-art baselines powered with interpretability in assay relationship.
accept
This paper seeks to remedy the low-resource drug discovery problem by transferring the knowledge from previous assays, namely in-vivo experiments, by different laboratories and against various target proteins. The authors propose a functional rationalized meta-learning algorithm FRML for such knowledge transfer. The approach appears well motivated and the empirical results appear to advance the state-of-the-art. The reviewers provided detailed reviews with a long list of pros and a rather short list of cons of the approach/manuscript. They all come to the same conclusion that this is a good paper for NeurIPS. I follow the recommendation of the reviewers to accept the manuscript. From the scores, it may not be in the spotlight or oral presentation range (7); however, I think it could be bumped up if one aims to increase health-related topics in the oral presentation program.
val
[ "xxCQSy6JF8J", "8L4rQgVxjOx", "hR6yLQXMCHy", "YA0L1kYNqlD", "x-tb28g83U", "vimp1zyj4yN" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review and your valuable comments and suggestions. We address your comments point by point as follows. Could you please kindly let us know if you have any additional reservations, or whether our response adequately addresses your concerns. \n\n**Q1**: Choice of baseline methods\n\n**A1**: As di...
[ -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, 4, 2, 4 ]
[ "vimp1zyj4yN", "x-tb28g83U", "YA0L1kYNqlD", "nips_2021_Dti5bw14YZF", "nips_2021_Dti5bw14YZF", "nips_2021_Dti5bw14YZF" ]
nips_2021_90EVPQ7uCV
Memory-Efficient Approximation Algorithms for Max-k-Cut and Correlation Clustering
Nimita Shinde, Vishnu Narayanan, James Saunderson
accept
This submission was universally appreciated for its application of SDP-based methods to clustering with lower memory requirements. However, there were some suggestions that the paper might be a more natural fit for a TCS conference than for NeurIPS.
train
[ "uYIsBQvtTpN", "Xsjqse0dQXg", "Stua2dbWb6", "oXVxX6eGb9f", "kac6ibc7XP", "H6BmHxAnOZm", "wUrHdRLfpEH", "YQ1wmjbwLqV" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies several methods for speeding up / reducing memory of computing max k-way cuts, via methods such as sampling and sparsification. They rigorously prove that the semi-definite program can be approximated to an error of \\epsilon in about m \\epsilon^{-2} space, and furthermore, the quality of appro...
[ 7, -1, -1, -1, -1, 7, 6, 4 ]
[ 3, -1, -1, -1, -1, 2, 4, 2 ]
[ "nips_2021_90EVPQ7uCV", "YQ1wmjbwLqV", "wUrHdRLfpEH", "uYIsBQvtTpN", "H6BmHxAnOZm", "nips_2021_90EVPQ7uCV", "nips_2021_90EVPQ7uCV", "nips_2021_90EVPQ7uCV" ]
nips_2021_BwzggTWi8bM
Panoptic 3D Scene Reconstruction From a Single RGB Image
Richly segmented 3D scene reconstructions are an integral basis for many high-level scene understanding tasks, such as for robotics, motion planning, or augmented reality. Existing works in 3D perception from a single RGB image tend to focus on geometric reconstruction only, or geometric reconstruction with semantic segmentation or instance segmentation. Inspired by 2D panoptic segmentation, we propose to unify the tasks of geometric reconstruction, 3D semantic segmentation, and 3D instance segmentation into the task of panoptic 3D scene reconstruction -- from a single RGB image, predicting the complete geometric reconstruction of the scene in the camera frustum of the image, along with semantic and instance segmentations. We propose a new approach for holistic 3D scene understanding from a single RGB image which learns to lift and propagate 2D features from an input image to a 3D volumetric scene representation. Our panoptic 3D reconstruction metric evaluates both geometric reconstruction quality as well as panoptic segmentation. Our experiments demonstrate that our approach for panoptic 3D scene reconstruction outperforms alternative approaches for this task.
accept
The paper proposes a method to learn panoptic 3D scene reconstruction from a single image. Depth and instance segmentation from an RGB image instantiate a 3D feature volume through back-projection which is subsequently refined to produce semantics, instance segmentations and 3D reconstruction. All reviewers acknowledge the good empirical performance of the method and the high relevance of the problem tackled for the NeurIPS audience. Concerns were raised regarding comparisons of the paper to previous methods that did not perform jointly the tasks of segmentation and 3D reconstruction, which the rebuttal addressed. Authors are encouraged to include all additional experiments in the final version.
train
[ "A8YhfjFlyuq", "jcM8FoAdT74", "0UG6Ww5eSut", "InZJsc3g_7o", "yBV1NiInO5u", "G6nn3y-29mx", "rEjoShf2njj", "R3rG7_5ODfn", "UueNO5kgvZ0", "dsIucCCkC8S", "6xpy9aNhQI", "tsqsXArtUe", "ufhf7sMUAfd", "NwvpQ_2jht", "BIHs1EFuWpS" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the clarifications in the rebuttal. I still believe this is a strong paper and would like to maintain my rating of accepting this paper at NeurIPS.", " Thanks for the suggestions - we are happy to include the additional experiments and clarifications to help improve the pap...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "ufhf7sMUAfd", "0UG6Ww5eSut", "rEjoShf2njj", "UueNO5kgvZ0", "nips_2021_BwzggTWi8bM", "R3rG7_5ODfn", "NwvpQ_2jht", "yBV1NiInO5u", "BIHs1EFuWpS", "ufhf7sMUAfd", "yBV1NiInO5u", "nips_2021_BwzggTWi8bM", "nips_2021_BwzggTWi8bM", "nips_2021_BwzggTWi8bM", "nips_2021_BwzggTWi8bM" ]
nips_2021_Yt89iqqswiM
Measuring Generalization with Optimal Transport
Understanding the generalization of deep neural networks is one of the most important tasks in deep learning. Although much progress has been made, theoretical error bounds still often behave disparately from empirical observations. In this work, we develop margin-based generalization bounds, where the margins are normalized with optimal transport costs between independent random subsets sampled from the training distribution. In particular, the optimal transport cost can be interpreted as a generalization of variance which captures the structural properties of the learned feature space. Our bounds robustly predict the generalization error, given training data and network parameters, on large scale datasets. Theoretically, we demonstrate that the concentration and separation of features play crucial roles in generalization, supporting empirical results in the literature.
accept
The paper proposes a novel generalization bound that is theoretically grounded and works in practice. This is an important achievement that has been acknowledged by all reviewers. The least convinced reviewer appreciated your answers, and the consensus during the discussion was for accepting the paper. Still, the discussion with the reviewer about the constant c(k,d) is important and should be added to the final version of the paper.
train
[ "s0HPpIBUJz", "TiixiVx86G", "czxALmXfca1", "11qAiFMCYeX", "jQCtrzXawfo", "6C6cwr1bdA7", "MBuFkl38J4l", "30KZUXP3IAp", "KYjQDAakCG", "2Oqh5PtcGLs", "v6RdDxP544h", "ke8Ea3xzhQ2" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper develops generalization bounds based on class margins and the optimal transport cost (in Wasserstein-1 distance) between two independent random subsets sampled from the training distribution. The authors argue that the optimal transport cost term explains the correlation between clustered representation...
[ 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_Yt89iqqswiM", "11qAiFMCYeX", "nips_2021_Yt89iqqswiM", "KYjQDAakCG", "MBuFkl38J4l", "30KZUXP3IAp", "czxALmXfca1", "ke8Ea3xzhQ2", "s0HPpIBUJz", "v6RdDxP544h", "nips_2021_Yt89iqqswiM", "nips_2021_Yt89iqqswiM" ]
nips_2021_cSVl6MtPIEX
Uniform Concentration Bounds toward a Unified Framework for Robust Clustering
Debolina Paul, Saptarshi Chakraborty, Swagatam Das, Jason Xu
accept
The authors develop a unified framework for robust clustering using the Median of Means approach, and provide an analysis of the consistency and convergence of the algorithms derived through this approach. The framework applies to a diverse set of clustering problems, and empirical comparisons show convincingly that the proposed method is indeed more robust than the baselines considered. Given the wide usage of clustering methods, this work will be of wide interest to the community.
train
[ "Fgugjoe3B3v", "09U-5zK6uW", "9VNuzH9cab", "_py3q6me4Zt", "mRyzdoYsR1A", "SzivLiF62T", "jnMbGij_jt3", "MZQPc38lZK" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors consider the minimization of a Median-of-Means (MoM) objective, which is suitable to yield minimizers of a centroid-based clustering which is more robust to outliers. The type of the centroid-based clustering is defined over the employed measurement of distances, which can be any Bregman divergence. T...
[ 7, -1, -1, -1, -1, 7, 7, 7 ]
[ 3, -1, -1, -1, -1, 4, 4, 3 ]
[ "nips_2021_cSVl6MtPIEX", "Fgugjoe3B3v", "SzivLiF62T", "jnMbGij_jt3", "MZQPc38lZK", "nips_2021_cSVl6MtPIEX", "nips_2021_cSVl6MtPIEX", "nips_2021_cSVl6MtPIEX" ]
nips_2021_3rjYr0K-OGC
Learning Signal-Agnostic Manifolds of Neural Fields
Deep neural networks have been used widely to learn the latent structure of datasets, across modalities such as images, shapes, and audio signals. However, existing models are generally modality-dependent, requiring custom architectures and objectives to process different classes of signals. We leverage neural fields to capture the underlying structure in image, shape, audio and cross-modal audiovisual domains in a modality-independent manner. We cast our task as one of learning a manifold, where we aim to infer a low-dimensional, locally linear subspace in which our data resides. By enforcing coverage of the manifold, local linearity, and local isometry, our model -- dubbed GEM -- learns to capture the underlying structure of datasets across modalities. We can then travel along linear regions of our manifold to obtain perceptually consistent interpolations between samples, and can further use GEM to recover points on our manifold and glean not only diverse completions of input images, but cross-modal hallucinations of audio or image signals. Finally, we show that by walking across the underlying manifold of GEM, we may generate new samples in our signal domains.
accept
The paper proposes GEM, an approach for multi-modal manifold learning, which is novel and comes with interesting results. I found the rebuttal very well done and I think it clarified many open issues with the approach, although not to the degree of reaching total agreement among the reviewers. Nevertheless, I find the clarifications sufficient and I am inclined to accept the work.
train
[ "kcnQ2TVVNE", "EQDZVwSFWqM", "5bzEXRW73ja", "_OYJoqvARXF", "mZ2kwyoWKuJ", "QPIXLHkWdet", "kgnFARV7RmH", "s20HiR7qLrI", "c7X3E4tLHFP", "89KHVYW3TwI", "t4syT7PNbDy", "hno8Md5WBWf" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank reviewers for their detailed comments and feedback. We’re glad that reviewers unanimously agree that Generative Manifold Learning (GEM), is an interesting and different research direction (“an interesting idea with various interesting applications” (C9Je), “interesting and important” (w67Z), “novel appro...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_3rjYr0K-OGC", "hno8Md5WBWf", "t4syT7PNbDy", "nips_2021_3rjYr0K-OGC", "QPIXLHkWdet", "c7X3E4tLHFP", "t4syT7PNbDy", "hno8Md5WBWf", "_OYJoqvARXF", "kcnQ2TVVNE", "nips_2021_3rjYr0K-OGC", "nips_2021_3rjYr0K-OGC" ]
nips_2021_UYI6Sk_3Nox
Low-dimensional Structure in the Space of Language Representations is Reflected in Brain Responses
How related are the representations learned by neural language models, translation models, and language tagging tasks? We answer this question by adapting an encoder-decoder transfer learning method from computer vision to investigate the structure among 100 different feature spaces extracted from hidden representations of various networks trained on language tasks. This method reveals a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings. We call this low-dimensional structure a language representation embedding because it encodes the relationships between representations needed to process language for a variety of NLP tasks. We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI. Additionally, we find that the principal dimension of this structure can be used to create a metric which highlights the brain's natural language processing hierarchy. This suggests that the embedding captures some part of the brain's natural language representation structure.
accept
Reviewers found that the method presented in this submission is useful and casts our understanding of language processing in the brain in a new light. I encourage the authors to update their manuscript to include the results from the discussion with the reviewers, as these were decisive.
train
[ "ATzrSIyxtvx", "O0LaKR1tWGE", "w3DUmHT4ZXS", "RYpKSIJotv-", "qMs3gRLrfdI", "yEFQVK_QSam", "su9VGuso8eF", "SrQu3CXEHMm", "d1qEoS7OT5x", "AFrMwbxlV_n", "3HTq4ADNZiP", "yPaONfWm5hL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper is about two things: understanding learned NLP representations by finding and understanding similarities between *different* representations; and comparing these representations to fMRI data. The paper takes a new approach to both problems, and finds a dimension in the space of language representations ...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_UYI6Sk_3Nox", "nips_2021_UYI6Sk_3Nox", "RYpKSIJotv-", "qMs3gRLrfdI", "AFrMwbxlV_n", "ATzrSIyxtvx", "SrQu3CXEHMm", "yPaONfWm5hL", "3HTq4ADNZiP", "O0LaKR1tWGE", "nips_2021_UYI6Sk_3Nox", "nips_2021_UYI6Sk_3Nox" ]
nips_2021_sjLs5OXcL7j
On the Suboptimality of Thompson Sampling in High Dimensions
In this paper we consider Thompson Sampling for combinatorial semi-bandits. We demonstrate that, perhaps surprisingly, Thompson Sampling is sub-optimal for this problem in the sense that its regret scales exponentially in the ambient dimension, and its minimax regret scales almost linearly. This phenomenon occurs under a wide variety of assumptions including both non-linear and linear reward functions in the Bernoulli distribution setting. We also show that including a fixed amount of forced exploration to Thompson Sampling does not alleviate the problem. We complement our theoretical results with numerical results and show that in practice Thompson Sampling indeed can perform very poorly in some high dimension situations.
accept
The paper has received mixed reviews in the first round, but the author response has successfully addressed the concerns of the reviewers. After some internal discussion, the reviewers all agreed that the paper offers a strong and interesting contribution and should thus be accepted for publication at the conference.
train
[ "RyZI4ssdFn", "Q0M5FxQzdhZ", "Zy9eNQvlXP2", "wxEqDBBXS5v", "lCoE_tS0mhH", "Dg2316r6CvK", "tZZQyh9oyCG", "fq7CxusvPKl", "jjKKyi-Vuq0" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Dear Authors,\n\nThank you for your convincing and detailed rebuttals. I have increased my score to a 7 to reflect my opinion that the paper is interesting and will be strengthened by the edits outlined in the rebuttal.", "This paper focusses on Thompson Sampling for stochastic combinatorial semibandits. The ma...
[ -1, 7, 7, 6, -1, -1, -1, -1, 8 ]
[ -1, 3, 4, 3, -1, -1, -1, -1, 4 ]
[ "Dg2316r6CvK", "nips_2021_sjLs5OXcL7j", "nips_2021_sjLs5OXcL7j", "nips_2021_sjLs5OXcL7j", "Zy9eNQvlXP2", "Q0M5FxQzdhZ", "wxEqDBBXS5v", "jjKKyi-Vuq0", "nips_2021_sjLs5OXcL7j" ]
nips_2021_sUFdZqWeMM
Learning Debiased and Disentangled Representations for Semantic Segmentation
Deep neural networks are susceptible to learn biased models with entangled feature representations, which may lead to subpar performances on various downstream tasks. This is particularly true for under-represented classes, where a lack of diversity in the data exacerbates the tendency. This limitation has been addressed mostly in classification tasks, but there is little study on additional challenges that may appear in more complex dense prediction problems including semantic segmentation. To this end, we propose a model-agnostic and stochastic training scheme for semantic segmentation, which facilitates the learning of debiased and disentangled representations. For each class, we first extract class-specific information from the highly entangled feature map. Then, information related to a randomly sampled class is suppressed by a feature selection process in the feature space. By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes, and the model is able to learn more debiased and disentangled feature representations. Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks, with especially notable performance gains on under-represented classes.
accept
This work describes a new training scheme for semantic segmentation models (“DropClass”) that is intended to encourage the learning of feature representations that avoid entangling features from objects (e.g. car & road) that tend to co-occur/co-locate. Reviewers were generally positive about the work, and found it to be interesting and novel. Reviewers and authors had something of a philosophical debate about the nature of dataset bias and entanglement, and it will be helpful for the authors to clarify these terms in the resulting paper, and to ensure that limitations of the method (e.g. the tendency for low-frequency classes to become entangled with a semantically similar class) are appropriately highlighted for readers. All in all, this is strong work that should be of interest to the NeurIPS community.
train
[ "Pi-qipmOUh", "v_mlokE0U80", "kJQZl4PBjJ", "ym5vmI3Tfx", "jgQf_nWylVC", "PnIKRpPcgNh", "YnY6t2TEF9y", "wFb3Ox0oTI-", "N8e5SuejNaA", "xDOdl1kfZeX", "17Va4tMvuLi", "BTveAz8bQYF", "iWPOeuP-a_p", "vogprqlCuC3", "-SGNSLLshL9" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Dear Reviewer 8XQh,\n\nThank you so much for your encouraging comments. We will revise our paper as discussed and improve the presentation.\n\nBest wishes,\n\nAuthors", " Dear Reviewer 4WPr,\n\nYour comments are indeed helpful to improve our paper. We will reflect on your feedback, especially for formalizing th...
[ -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, 5, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "wFb3Ox0oTI-", "ym5vmI3Tfx", "nips_2021_sUFdZqWeMM", "jgQf_nWylVC", "PnIKRpPcgNh", "N8e5SuejNaA", "nips_2021_sUFdZqWeMM", "iWPOeuP-a_p", "xDOdl1kfZeX", "BTveAz8bQYF", "-SGNSLLshL9", "kJQZl4PBjJ", "YnY6t2TEF9y", "nips_2021_sUFdZqWeMM", "nips_2021_sUFdZqWeMM" ]
nips_2021_f_eOQN87eXc
Diversity Matters When Learning From Ensembles
Deep ensembles excel in large-scale image classification tasks both in terms of prediction accuracy and calibration. Despite being simple to train, the computation and memory cost of deep ensembles limits their practicability. While some recent works propose to distill an ensemble model into a single model to reduce such costs, there is still a performance gap between the ensemble and distilled models. We propose a simple approach for reducing this gap, i.e., making the distilled performance close to the full ensemble. Our key assumption is that a distilled model should absorb as much function diversity inside the ensemble as possible. We first empirically show that the typical distillation procedure does not effectively transfer such diversity, especially for complex models that achieve near-zero training error. To fix this, we propose a perturbation strategy for distillation that reveals diversity by seeking inputs for which ensemble member outputs disagree. We empirically show that a model distilled with such perturbed samples indeed exhibits enhanced diversity, leading to improved performance.
accept
The authors propose a method for distilling an ensemble to a single model by training on inputs which are perturbed to make ensemble member outputs disagree. Most reviewers (1b8g, 7xWm, z2qY) seemed to agree that the method was sensible, elegant, and clearly-presented, and that the gradient-matching intuition was clever and well-supported. Reviewer p6SF raised significant concerns about correctness of the authors' transferability assumption, but after rebuttal and discussion these were mostly addressed. p6SF also raised concerns about clarity, but these too were partially addressed and not in my opinion sufficient to justify rejection. Another common concern (1b8g, 7xWm, z2qY) was the lack of large-scale experiments, but the authors address this with TinyImageNet experiments in rebuttal to the satisfaction of multiple reviewers (1b8g, z2qY). Therefore I recommend acceptance.
val
[ "JD4-WFSqVPr", "Sst46rOoLYT", "YLsyX8bwu8S", "aL1I-kE9V9-", "Yvzs60eWV8C", "pb6yUg1Q0KT", "vfS87Yy8WCv", "NKqHCcCbZoS", "5_57OAN3UK8", "R0O0otRhlRp", "SIGxgIsHK8L", "i-clwMQqIO", "n2DQhs4JwpC", "pvVi4hPNyLJ", "-HxrIqyOKbz", "dbHV5mtTaJ", "b_VHXaWIuA", "j3hkWYRrGV2", "GbtBsf_XIc" ...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_re...
[ " We thank you for your reevaluation and positive comments.", "The paper studies the question of ensemble function diversity and how to preserve it on ensemble distillation to a single model. The authors use input perturbations that make the models, especially well trained models that are near 0 loss on the train...
[ -1, 7, -1, -1, 5, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "YLsyX8bwu8S", "nips_2021_f_eOQN87eXc", "j3hkWYRrGV2", "NKqHCcCbZoS", "nips_2021_f_eOQN87eXc", "aL1I-kE9V9-", "nips_2021_f_eOQN87eXc", "5_57OAN3UK8", "dbHV5mtTaJ", "n2DQhs4JwpC", "pvVi4hPNyLJ", "nips_2021_f_eOQN87eXc", "-HxrIqyOKbz", "b_VHXaWIuA", "i-clwMQqIO", "Yvzs60eWV8C", "GbtBsf...
nips_2021_xfDXF0I_bt
Locally Valid and Discriminative Prediction Intervals for Deep Learning Models
Crucial for building trust in deep learning models for critical real-world applications is efficient and theoretically sound uncertainty quantification, a task that continues to be challenging. Useful uncertainty information is expected to have two key properties: It should be valid (guaranteeing coverage) and discriminative (more uncertain when the expected risk is high). Moreover, when combined with deep learning (DL) methods, it should be scalable and affect the DL model performance minimally. Most existing Bayesian methods lack frequentist coverage guarantees and usually affect model performance. The few available frequentist methods are rarely discriminative and/or violate coverage guarantees due to unrealistic assumptions. Moreover, many methods are expensive or require substantial modifications to the base neural network. Building upon recent advances in conformal prediction [13, 33] and leveraging the classical idea of kernel regression, we propose Locally Valid and Discriminative prediction intervals (LVD), a simple, efficient, and lightweight method to construct discriminative prediction intervals (PIs) for almost any DL model. With no assumptions on the data distribution, such PIs also offer finite-sample local coverage guarantees (contrasted to the simpler marginal coverage). We empirically verify, using diverse datasets, that besides being the only locally valid method for DL, LVD also exceeds or matches the performance (including coverage rate and prediction accuracy) of existing uncertainty quantification methods, while offering additional benefits in scalability and flexibility.
accept
Three reviewers indicated acceptance, two of them with clearly positive scores. Their main arguments in favor of this paper were good methodological contributions, convincing experiments, clarity of motivation and arguments, and a convincing rebuttal that added many details and addressed most points of criticism. On the other hand, there was one clearly negative review, but during the discussion period, the other members of the review committee (including myself) had the impression that the authors' rebuttal addressed all these points of criticism in a detailed and convincing way. So I recommend acceptance of this paper.
train
[ "HQYzh3FgTD", "70R8awMnJoj", "Un9jaa6ZrAf", "ipksCfR0bx", "XjMR1on2g9N", "9OPwNSfZogn", "ABAVGxzX6u", "ZNrVlWt5MYN", "MEzihNg7B4-", "CuywJTYSI5U", "vhTp5l9ixSz", "6WuGowCwxRv", "TSgO2umePU6", "ObIPRYwAcnp" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors the Locally Valid and Discriminative (LVD) method to construct confidence intervals. The idea has three steps. (1) First is to take the embeddings from a neural network and train a kernel regression model on the task, learning a Gaussian kernel $K$. (2) Second is to, on a separate held-out \"conformal\...
[ 7, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 3, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_xfDXF0I_bt", "Un9jaa6ZrAf", "CuywJTYSI5U", "XjMR1on2g9N", "ZNrVlWt5MYN", "nips_2021_xfDXF0I_bt", "nips_2021_xfDXF0I_bt", "9OPwNSfZogn", "TSgO2umePU6", "ObIPRYwAcnp", "HQYzh3FgTD", "nips_2021_xfDXF0I_bt", "nips_2021_xfDXF0I_bt", "nips_2021_xfDXF0I_bt" ]
nips_2021_byCQ9Uu4PD
Personalized Federated Learning With Gaussian Processes
Federated learning aims to learn a global model that performs well on client devices with limited cross-client communication. Personalized federated learning (PFL) further extends this setup to handle data heterogeneity between clients by learning personalized models. A key challenge in this setting is to learn effectively across clients even though each client has unique data that is often limited in size. Here we present pFedGP, a solution to PFL that is based on Gaussian processes (GPs) with deep kernel learning. GPs are highly expressive models that work well in the low data regime due to their Bayesian nature. However, applying GPs to PFL raises multiple challenges. Mainly, GP performance depends heavily on access to a good kernel function, and learning a kernel requires a large training set. Therefore, we propose learning a shared kernel function across all clients, parameterized by a neural network, with a personal GP classifier for each client. We further extend pFedGP to include inducing points using two novel methods: the first helps to improve generalization in the low data regime and the second reduces the computational cost. We derive a PAC-Bayes generalization bound on novel clients and empirically show that it gives non-vacuous guarantees. Extensive experiments on standard PFL benchmarks with CIFAR-10, CIFAR-100, and CINIC-10, and on a new setup of learning under input noise show that pFedGP achieves well-calibrated predictions while significantly outperforming baseline methods, reaching up to 21% in accuracy gain.
accept
The paper adopts and adapts a number of known GP learning techniques to develop a method for personalised federated learning. The proposed method is computationally intensive, but performs well in practice and even comes with a theoretical utility analysis. After author feedback, all reviewers appear happy with the paper and recommend acceptance, assuming the authors revise the paper to incorporate the material from the author feedback.
train
[ "Cd3bu4kDEXJ", "khxI7xlq6tt", "j1vizvUtx0D", "W0n15sFQOBJ", "nbo--PidB5w", "xkylW6HRKz", "yGOMm13jg2f", "COKEWy0bp_C", "i4K1XrLxq_C", "EZQh0V6KyG", "LJB6gBYDUbN", "KWD8zVs67QX", "zVK-ORNrW3b", "VtexWzHKQpp", "K3LYVZlh24E" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " We thank the reviewer for revaluating the paper and for increasing the score. We greatly appreciate that. We will include points 2 & 6 as well as other comments that were raised in the revised version of the paper.", "This paper proposes to use Gaussian processes for personalized federated learning (PFL), and u...
[ -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "j1vizvUtx0D", "nips_2021_byCQ9Uu4PD", "i4K1XrLxq_C", "nbo--PidB5w", "LJB6gBYDUbN", "COKEWy0bp_C", "nips_2021_byCQ9Uu4PD", "KWD8zVs67QX", "EZQh0V6KyG", "khxI7xlq6tt", "K3LYVZlh24E", "zVK-ORNrW3b", "yGOMm13jg2f", "nips_2021_byCQ9Uu4PD", "nips_2021_byCQ9Uu4PD" ]
nips_2021_ChWy1anEuow
Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
Modern machine learning systems such as deep neural networks are often highly over-parameterized so that they can fit the noisy training data exactly, yet they can still achieve small test errors in practice. In this paper, we study this "benign overfitting" phenomenon of the maximum margin classifier for linear classification problems. Specifically, we consider data generated from sub-Gaussian mixtures, and provide a tight risk bound for the maximum margin linear classifier in the over-parameterized setting. Our results precisely characterize the condition under which benign overfitting can occur in linear classification problems, and improve on previous work. They also have direct implications for over-parameterized logistic regression.
accept
This paper is favored by two reviewers and is acceptable to all. Reviewers agree that it is clearly written and organized, even enjoyable to read, and that the results are solid on their own. Throughout the various exchanges between reviewers and authors, some valid suggestions emerged that may strengthen the paper, for instance: * Discussing the possibility that d > n^2 is required and situating that relative to assumptions in related work (from discussion with GwQP and nXks), namely highlighting the sense in which this is comparatively mild. * Considering commenting on how to approach an extension to data that is not centered (from discussion with 1YJ4). This may be more of an optional suggestion, but again may be useful for understanding the setup and analysis. Overall, the paper makes a number of technical contributions to the actively developing topic of studying benign overfitting. Together with mostly favorable reviews, I recommend it for acceptance.
train
[ "lIsVgrQeJek", "k0fEmaSYR1B", "CgsnrO9qqgS", "DqFMfiqptAK", "9Zkehca6C4q", "Xa68IJ_bHbn", "Rtn4x0LugAu", "0O57raFrAc4", "XPKaPAx6u4K", "4Czf1Pq_m4Z", "whAFJQR76kL" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for addressing my concerns. I am keeping my score. ", " Thank you for your further comment and question.\n\n1. Regarding point 5, in your \"corrected\" condition above, the LHS should be squared, $|| \\mathbf{\\mu} ||_2^2 \\geq C || \\mathbf{\\mu} ||\\_{\\mathbf{\\Sigma}}$?\n\nSorry for the typo. It s...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "Xa68IJ_bHbn", "CgsnrO9qqgS", "9Zkehca6C4q", "whAFJQR76kL", "4Czf1Pq_m4Z", "XPKaPAx6u4K", "0O57raFrAc4", "nips_2021_ChWy1anEuow", "nips_2021_ChWy1anEuow", "nips_2021_ChWy1anEuow", "nips_2021_ChWy1anEuow" ]
nips_2021_9Jsop0faZtU
Implicit SVD for Graph Representation Learning
Sami Abu-El-Haija, Hesham Mostafa, Marcel Nassar, Valentino Crespi, Greg Ver Steeg, Aram Galstyan
accept
The reviewers were generally positive about this paper -- they liked the SVD-based approach to simplifying or initializing deep neural networks for graph representation learning, and appreciated the positive empirical results. Thus, the paper is recommended for acceptance. Connections to prior work should be clarified in the revised submission -- the abstract of the paper seems to imply that the idea of computing a partial SVD of an implicit matrix is new. There is significant work in the numerical linear algebra community on matrix-free SVD methods -- in fact, this is a critically important feature of nearly all Krylov-based and sketching methods for partial SVD. See this link for a number of references to background literature: https://stats.stackexchange.com/questions/159325/what-fast-algorithms-exist-for-computing-truncated-svd
test
[ "M8hmKePEq3w", "t0fJ6DEw4hJ", "pRdlC2v-bg", "is6_123hRY1", "4LFi6SDjqhy", "-h7WneqDT51", "N4Aw_AC_2n", "erIHnstFqgh", "1xTzdpgt8BY" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors suggest to express large matrices as operations over given manageable leaf matrices. They keep track of the resulting abstract syntax tree (AST) and offer a specialised SVD decomposition implementation that can perform the computation on the final expression by traversing the AST and operating on the i...
[ 9, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_9Jsop0faZtU", "pRdlC2v-bg", "1xTzdpgt8BY", "nips_2021_9Jsop0faZtU", "erIHnstFqgh", "M8hmKePEq3w", "1xTzdpgt8BY", "nips_2021_9Jsop0faZtU", "nips_2021_9Jsop0faZtU" ]
nips_2021_lrdXc17jm6
Offline Model-based Adaptable Policy Learning
In reinforcement learning, a promising direction to avoid online trial-and-error costs is learning from an offline dataset. Current offline reinforcement learning methods commonly learn in the policy space constrained to in-support regions by the offline dataset, in order to ensure the robustness of the outcome policies. Such constraints, however, also limit the potential of the outcome policies. In this paper, to release the potential of offline policy learning, we investigate the decision-making problems in out-of-support regions directly and propose offline Model-based Adaptable Policy LEarning (MAPLE). By this approach, instead of learning in in-support regions, we learn an adaptable policy that can adapt its behavior in out-of-support regions when deployed. We conduct experiments on MuJoCo control tasks with offline datasets. The results show that the proposed method can make robust decisions in out-of-support regions and achieve better performance than SOTA algorithms.
accept
Most reviewers agreed that the paper has an interesting new way of performing offline RL (i.e., how to do well in out-of-support regions), and the empirical results on standard benchmarks are very promising.
train
[ "H0E03cooxn2", "hXfMdyq0AvG", "BEPIiRgrhJD", "TwjPKKgd5La", "0oqY1GU956c", "o5OQj5ueg5", "Frnrq-tdqt", "qPHBuiw9cWO", "BnvU1-k-49X", "yVC7NfaclpC", "rlNm_nvMytH", "BHJvsJL40p3", "-i833FVqVR", "2V4uzMzu_ZQ", "34modfTQoJs", "49Uv07VrAqp", "2JcJb6Hsq9", "3lpS8FqymbG", "HF3R-dHLiln" ...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response and appreciation of the work!", " Thank you for addressing my comments! My score will remain as-is.", " Thank you for running this additional experiment! The performance of MAPLE is indeed good, as one would hope. This provides additional strength to the claim that MAPLE can succeed o...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "BEPIiRgrhJD", "34modfTQoJs", "rlNm_nvMytH", "nips_2021_lrdXc17jm6", "Frnrq-tdqt", "qPHBuiw9cWO", "BnvU1-k-49X", "BHJvsJL40p3", "3lpS8FqymbG", "HF3R-dHLiln", "2JcJb6Hsq9", "HF3R-dHLiln", "49Uv07VrAqp", "-i833FVqVR", "2JcJb6Hsq9", "nips_2021_lrdXc17jm6", "nips_2021_lrdXc17jm6", "nip...
nips_2021_34kP--v0qVT
Multilingual Pre-training with Universal Dependency Learning
The pre-trained language model (PrLM) demonstrates domination in downstream natural language processing tasks, in which multilingual PrLM takes advantage of language universality to alleviate the issue of limited resources for low-resource languages. Despite its successes, the performance of multilingual PrLM is still unsatisfactory when multilingual PrLMs only focus on plain text and ignore obvious universal linguistic structure clues. Existing PrLMs have shown that monolingual linguistic structure knowledge may bring about better performance. Thus we propose a novel multilingual PrLM that supports both explicit universal dependency parsing and implicit language modeling. Syntax in terms of universal dependency parse serves as not only pre-training objective but also learned representation in our model, which brings unprecedented PrLM interpretability and convenience in downstream task use. Our model outperforms two popular multilingual PrLMs, multilingual-BERT and XLM-R, on cross-lingual natural language understanding (NLU) benchmarks and linguistic structure parsing datasets, demonstrating the effectiveness and stronger cross-lingual modeling capabilities of our approach.
accept
This paper presents an approach to multilingual pretraining that (a) incorporates supervised dependency parsing as an auxiliary objective and (b) incorporates dependency scores back into the encoder itself. Results on XNLI, XQuAD, and UD dependency and constituency parsing show gains over baselines that do not use syntactic structure. Reviewers are split with two in favor of rejection and two voting for acceptance. Most reviewers viewed both the proposed approach and the positive results as potentially impactful. However, a serious concern was surfaced. Specifically, that the experimental comparisons conflate the two potential contributions: (1) gains from incorporating parsing scores back into the encoder, and (2) gains from training in a multilingual setting. These points need to be separately evaluated in the context of missing, but closely related baselines. For example, how does the method compare with related baselines like Struct-BERT or LIMIT-BERT that also consider syntactic structure, but in a monolingual setting? Does incorporating parsing scores back into these encoders increase performance in a monolingual setting? Do these baselines improve in a multi-lingual setting without other architectural changes? These concerns sway me to recommend rejection for the current draft. But I strongly encourage authors to resubmit with additional experiments.
train
[ "Gnb3CWAyob", "L2y5ObEUAxf", "F3OGBm6zoZz", "kZBroqeuum", "wSgDQVS-0FW", "_ns5xsrUHVz", "fkewDvOxbtG", "s7uCtnw60Pu", "JEDcVQEC5ne", "Oz1mIC6AqfA" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all the reviewers for the insightful feedback with the score increased. Below we make some responses to the comments we received recently.\nWhile we have clarified the innovation of our work in our previous response, if we may, let us emphasize it again here by comparing our work to recent related resear...
[ -1, 6, -1, -1, -1, -1, -1, 5, 7, 4 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nips_2021_34kP--v0qVT", "nips_2021_34kP--v0qVT", "nips_2021_34kP--v0qVT", "Oz1mIC6AqfA", "s7uCtnw60Pu", "JEDcVQEC5ne", "L2y5ObEUAxf", "nips_2021_34kP--v0qVT", "nips_2021_34kP--v0qVT", "nips_2021_34kP--v0qVT" ]
nips_2021_0DBYkHfkZlk
Parameter-free HE-friendly Logistic Regression
Junyoung Byun, WOOJIN LEE, Jaewook Lee
accept
This paper presents an interesting solution to the problem of training models on private data. The progress is made by first noticing that not all features are equally sensitive and then using it to reduce the classification problem, which is hard to train using HE, to a regression problem that is easier to solve. Most reviewers agreed that the paper makes a nice contribution to the field and should be accepted. The presentation of the work is fine but could be improved. Adding to the comments made by the reviewers, note, for example, that on line 122 “knowledge distillation” is discussed as if it was already presented before. The line reads “…which mimics the first phase of the knowledge distillation in that …” but this is the first time knowledge distillation is being discussed. In this case, removing the “the” before “knowledge distillation” and adding a reference to a relevant paper could help the reader. We encourage the authors to revisit the paper and try to improve its readability.
train
[ "N0vZDkDRHqh", "mTqQWGxjoW", "SlA1YxCeoyr", "pbvDVq5oiZH", "SOgKOwiff8R", "54LSUuVq-25", "UIU9j_W1g-", "XhdO6waXqXY", "Ua4__kNk1Kf", "keeeosHgMz", "O30KUH1_Zn8" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer WSkd:\n\nYour feedback is highly appreciated and will help us to improve our work. Please let us know if our response is not clear enough to answer your questions, or if there are any more points that we need to explain. We are looking forward to your response.", " Dear Reviewer XQWo:\n\nYour feed...
[ -1, -1, -1, 6, -1, -1, -1, -1, 6, 7, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, 3, 3, 3 ]
[ "O30KUH1_Zn8", "keeeosHgMz", "Ua4__kNk1Kf", "nips_2021_0DBYkHfkZlk", "pbvDVq5oiZH", "O30KUH1_Zn8", "Ua4__kNk1Kf", "keeeosHgMz", "nips_2021_0DBYkHfkZlk", "nips_2021_0DBYkHfkZlk", "nips_2021_0DBYkHfkZlk" ]
nips_2021_EocGDCLaw-d
Active clustering for labeling training data
Gathering training data is a key step of any supervised learning task, and it is both critical and expensive. Critical, because the quantity and quality of the training data has a high impact on the performance of the learned function. Expensive, because most practical cases rely on humans-in-the-loop to label the data. The process of determining the correct labels is much more expensive than comparing two items to see whether they belong to the same class. Thus motivated, we propose a setting for training data gathering where the human experts perform the comparatively cheap task of answering pairwise queries, and the computer groups the items into classes (which can be labeled cheaply at the very end of the process). Given the items, we consider two random models for the classes: one where the set partition they form is drawn uniformly, the other one where each item chooses its class independently following a fixed distribution. In the first model, we characterize the algorithms that minimize the average number of queries required to cluster the items and analyze their complexity. In the second model, we analyze a specific algorithm family, propose as a conjecture that they reach the minimum average number of queries and compare their performance to a random approach. We also propose solutions to handle errors or inconsistencies in the experts' answers.
accept
This is a well-written paper that makes a nice theoretical contribution in analyzing two models for the problem of gathering training data, where the human experts are able to answer pairwise queries. There is a noted lack of experiments, but the theoretical contributions alone are enough to warrant acceptance.
train
[ "GxxY88aZ2a0", "mnIugvG-RbT", "-xToN3ncLWm", "YPk4B6F1Y5N", "6n1V7kDE4Xz" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We agree that additional information on the items, such as a classifier trained on the answers to the first queries, improves the active clustering algorithms. One could even use less than n queries to classify n items if the classifier was trusted on the answers it considers as most certain. However, there is an...
[ -1, -1, 5, 7, 7 ]
[ -1, -1, 4, 3, 1 ]
[ "-xToN3ncLWm", "YPk4B6F1Y5N", "nips_2021_EocGDCLaw-d", "nips_2021_EocGDCLaw-d", "nips_2021_EocGDCLaw-d" ]
nips_2021_iNUKmzaL-M5
Exploring Social Posterior Collapse in Variational Autoencoder for Interaction Modeling
Multi-agent behavior modeling and trajectory forecasting are crucial for the safe navigation of autonomous agents in interactive scenarios. Variational Autoencoder (VAE) has been widely applied in multi-agent interaction modeling to generate diverse behavior and learn a low-dimensional representation for interacting systems. However, existing literature did not formally discuss if a VAE-based model can properly encode interaction into its latent space. In this work, we argue that one of the typical formulations of VAEs in multi-agent modeling suffers from an issue we refer to as social posterior collapse, i.e., the model is prone to ignoring historical social context when predicting the future trajectory of an agent. It could cause significant prediction errors and poor generalization performance. We analyze the reason behind this under-explored phenomenon and propose several measures to tackle it. Afterward, we implement the proposed framework and experiment on real-world datasets for multi-agent trajectory prediction. In particular, we propose a novel sparse graph attention message-passing (sparse-GAMP) layer, which helps us detect social posterior collapse in our experiments. In the experiments, we verify that social posterior collapse indeed occurs. Also, the proposed measures are effective in alleviating the issue. As a result, the model attains better generalization performance when historical social context is informative for prediction.
accept
The paper identifies a failure mode of a common approach to modeling interaction using Variational Autoencoders in multi-agent behavior and trajectory forecasting, which the authors term "Social Posterior Collapse". The identified failure mode is a tendency to underfit in settings where relevant social context could improve prediction and generalization performance. The authors propose an approach to solving this problem using a contextual VAE formulation and auxiliary prediction task, and demonstrate that these lead to the expected empirical improvements in multi-agent interaction settings. Reviewers assessed the focus of the paper as an important and interesting problem that is well diagnosed theoretically. Given that VAEs are commonly used to model multi-agent interactions, the identified failure mode, proposed metric for diagnosing it, and the novel solution to this problem have good potential for impact. A number of concerns were raised in the initial reviews. In particular, reviewers suggested additional empirical validation to more clearly pinpoint the failure case, and to probe its generality as well as the generality of the newly proposed solution. In addition, reviewers made valuable suggestions for improving clarity as well as further increasing the contribution of the paper by moving an experiment from the appendix to the main paper. In the rebuttal and discussion, reviewers indicated that their concerns were largely addressed. The most negative reviewer unfortunately did not follow up on the discussion. The AC reviewed their concerns and the author response, and assessed the concerns as sufficiently addressed. While the paper can be improved further, remaining issues can be addressed in the camera-ready version and do not prevent acceptance of the paper. The AC recommends that the paper be accepted, and strongly encourages the authors to take on board all reviewer suggestions to further improve the camera-ready version.
train
[ "CQAQbBQV7t", "cEt6N8kgDhY", "5USgbicVrqL", "wYhaR8Cw2TO", "7eX9_oGoeXH", "UpkqQPgvDDO", "ExKT9RctTWU", "U3BEGP50p26", "tTpLgOwf77J", "8CQD6-gowVT", "l9_YtGl1l-E" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for their clarifications about the first and second comments. However, I still believe that in order to validate their hypothesis, the authors could demonstrate this problem in many more frameworks using VAE in social interaction context (trajectory prediction, human motion interaction etc)....
[ -1, -1, -1, 7, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, 2, 4, 3 ]
[ "U3BEGP50p26", "UpkqQPgvDDO", "7eX9_oGoeXH", "nips_2021_iNUKmzaL-M5", "wYhaR8Cw2TO", "l9_YtGl1l-E", "8CQD6-gowVT", "tTpLgOwf77J", "nips_2021_iNUKmzaL-M5", "nips_2021_iNUKmzaL-M5", "nips_2021_iNUKmzaL-M5" ]
nips_2021_lmm2W2ICtjk
Ensembling Graph Predictions for AMR Parsing
In many machine learning tasks, models are trained to predict structure data such as graphs. For example, in natural language processing, it is very common to parse texts into dependency trees or abstract meaning representation (AMR) graphs. On the other hand, ensemble methods combine predictions from multiple models to create a new one that is more robust and accurate than individual predictions. In the literature, there are many ensembling techniques proposed for classification or regression problems, however, ensemble graph prediction has not been studied thoroughly. In this work, we formalize this problem as mining the largest graph that is the most supported by a collection of graph predictions. As the problem is NP-Hard, we propose an efficient heuristic algorithm to approximate the optimal solution. To validate our approach, we carried out experiments in AMR parsing problems. The experimental results demonstrate that the proposed approach can combine the strength of state-of-the-art AMR parsers to create new predictions that are more accurate than any individual models in five standard benchmark datasets.
accept
This paper details the problem of (labeled) graph ensembling, in particular for the task of semantic (AMR) parsing, and proposes a heuristic algorithm for solving it. This is a challenging problem of interest. The reviewers found the work well-motivated and well explained, and raised some issues about comparisons, which the authors satisfactorily addressed with additional experiments in the discussion period. One outstanding point would be to clarify, early on, the scope of graphs for which the method works. (However, the reviewers do not feel strongly that the title should mention AMR, as the work is indeed more general.) Indeed, in some settings like link prediction, averaging of arc weights can be much simpler, and it might not be clear for all readers right away why and when the problem gets harder, based on the readers' background. The reviewers made plenty of editing suggestions that I strongly encourage the authors to implement. In addition, I would like to add a few: - while I agree with the reviewer that it's better to rename the title of Section 3, the idiom provides helpful context so I would encourage you to keep it in the body (if not in the section title.) - line 83 and on: `$support$` should be styled as `$\operatorname{support}$` or defined as a macro with `$\DeclareMathOperator{\support}{support}$`. - line 146, the subscript `$g_{pivot}$` should be `$g_\text{pivot}$` -- same everywhere else. - all tables: try to align numbers on their decimal point (e.g. by right-aligning with a tabular font.) - Check the capitalization and formatting in your references. (e.g. "Amr" should be "AMR", "Machine learning" should be "Machine Learning" etc.)
train
[ "plKC1qmvEOm", "tWbjcj-hC_B", "LPtR2OFlMW-", "K11KPZ3-AvT", "GK_hAyCRYBY", "YxH6Ghbq7Um", "TfZQ9bR8sOR", "lzkd3w3wU2A", "XUdFK_0a_gK", "LnE2xito86n" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper presents an ensemble approach for graph predictions.\nIt consists in an algorithm that finds the graph that have the most nodes and edges matching, among different graphs produced by different systems or different checkpoints of the same system.\nThe authors tested the system on AMR graphs produced by st...
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 5, 4, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_lmm2W2ICtjk", "nips_2021_lmm2W2ICtjk", "nips_2021_lmm2W2ICtjk", "lzkd3w3wU2A", "YxH6Ghbq7Um", "tWbjcj-hC_B", "LPtR2OFlMW-", "LnE2xito86n", "plKC1qmvEOm", "nips_2021_lmm2W2ICtjk" ]
nips_2021_ZYJ1r6sStU
On the interplay between data structure and loss function in classification problems
One of the central features of modern machine learning models, including deep neural networks, is their generalization ability on structured data in the over-parametrized regime. In this work, we consider an analytically solvable setup to investigate how properties of data impact learning in classification problems, and compare the results obtained for quadratic loss and logistic loss. Using methods from statistical physics, we obtain a precise asymptotic expression for the train and test errors of random feature models trained on a simple model of structured data. The input covariance is built from independent blocks allowing us to tune the saliency of low-dimensional structures and their alignment with respect to the target function. Our results show in particular that in the over-parametrized regime, the impact of data structure on both train and test error curves is greater for logistic loss than for mean-squared loss: the easier the task, the wider the gap in performance between the two losses at the advantage of the logistic. Numerical experiments on MNIST and CIFAR10 confirm our insights.
accept
The paper presents an interesting analytical result on generalization which all three reviewers considered worthwhile. Based on the given scores (5,6,7) and the fact that the machine learning field would likely benefit from some more theoretical contributions, I recommend acceptance as a poster presentation.
train
[ "NLCM5LdFKy", "463c6S9H638", "yFWe0_BJV_h", "JkcUv1Xy9v5", "QIQ8AYonnGp", "Lq6ZcwxMZif" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors study the loss achieved by random feature models that learns from simple (but structured) data.\nThe main result consists in the comparison of two losses (square and logistic) under three settings in the data structure (misaligned, isotropic and aligned that roughly represent the difficulty of the task...
[ 6, -1, -1, -1, 7, 5 ]
[ 4, -1, -1, -1, 3, 4 ]
[ "nips_2021_ZYJ1r6sStU", "NLCM5LdFKy", "Lq6ZcwxMZif", "QIQ8AYonnGp", "nips_2021_ZYJ1r6sStU", "nips_2021_ZYJ1r6sStU" ]
nips_2021_B83B16bWvuI
Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems
Suhas Kowshik, Dheeraj Nagaraj, Prateek Jain, Praneeth Netrapalli
accept
The contributions were considered to be significant with high potential for impact. The main concern was that the paper contains, in a sense, "too much": there are many results, which made it harder for some reviewers to appreciate the level of contribution/improvement, and which raises the question of how to package the results into a coherent final version of the paper. Perhaps the authors can also consider submitting to a journal.
train
[ "gt7DzHOHMf", "5G3P54zc_I", "c2eHZwbFWMU", "-fTwVd-cVO", "xsiAJd9nHf5", "3uYYuBw6tAk", "ruBH5tZxEEa", "vwUW2AdCpq", "XOh5JVQNGoi", "LyEZ37zvpWY", "ibPq7x35Dgq", "UzNQPiEQqsH", "Ol7NpGLigY_" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the prompt response.", " Thank you for the useful feedback regarding readability. We will indeed move some of the less important details to the appendix in order to make some space for relevant discussions regarding the results, especially regarding the interpretation of the results of...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, 3, 3, 2 ]
[ "xsiAJd9nHf5", "c2eHZwbFWMU", "ruBH5tZxEEa", "nips_2021_B83B16bWvuI", "vwUW2AdCpq", "nips_2021_B83B16bWvuI", "Ol7NpGLigY_", "-fTwVd-cVO", "UzNQPiEQqsH", "ibPq7x35Dgq", "nips_2021_B83B16bWvuI", "nips_2021_B83B16bWvuI", "nips_2021_B83B16bWvuI" ]
nips_2021_bJz3cFePTna
Mixture Proportion Estimation and PU Learning: A Modern Approach
Saurabh Garg, Yifan Wu, Alexander J. Smola, Sivaraman Balakrishnan, Zachary Lipton
accept
This paper proposes new methods for the related problems of mixture proportion estimation and positive-unlabeled learning, with theoretical support, and shows state-of-the-art performance, especially for large-scale problems. I tend to agree with one of the reviewers that the MPE method is not really that novel, sharing many conceptual similarities with previous ROC-based methods. This should be clearly addressed in the final revision, as should all reviewer comments. In addition, it would be desirable to have some theory for the iterative scheme. Without such, the authors also need to address possible failure cases and limitations of $(TED)^n$. Nonetheless, there is still sufficient novelty and merit to warrant publication. Additional comments: While the experimental contributions are clear, I'd like to ask the authors to comment on how the MPE theory compares to prior work. Is this theory merely "supporting", or does it offer advances in MPE theory in any substantive way? Relevant reference: Henry Reeve and Ata Kaban. Exploiting geometric structure in mixture proportion estimation with generalised Blanchard-Lee-Scott estimators, ALT 2019
train
[ "mgGiR6BKpyz", "H-GCH-xbdxP", "pwntmX1K1Cv", "U-1hF0d9lUT", "jRv-fbssv3N", "1b-Zj3uaIYU", "u-20fguyHGY", "u1Y9RSPDim-", "YYdkcY2r2XZ", "r3ogcVYjFZ0", "E1MLBHjQJ7p", "7URS0pP3GYn" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you. I very much appreciate the detailed response.", " Dear authors: \n\nThanks for your detailed and careful reply to my review comments. I have also read the other reviews for this paper and still stand by this paper.\n\nBest, \nReviewer 31y4\n", "The paper proposes two methods for PU learning. First...
[ -1, -1, 8, -1, 7, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "YYdkcY2r2XZ", "u-20fguyHGY", "nips_2021_bJz3cFePTna", "u1Y9RSPDim-", "nips_2021_bJz3cFePTna", "r3ogcVYjFZ0", "7URS0pP3GYn", "pwntmX1K1Cv", "E1MLBHjQJ7p", "jRv-fbssv3N", "nips_2021_bJz3cFePTna", "nips_2021_bJz3cFePTna" ]
nips_2021_lEf52hTHq0Q
Escape saddle points by a simple gradient-descent based algorithm
Chenyi Zhang, Tongyang Li
accept
This paper proposes a robust Hessian power method to find a negative curvature direction, based on which (stochastic) gradient descent can find local minima with improved gradient complexity (the improvement is in the poly-log dependence on the problem dimension $n$). All the reviewers are in strong support of this paper. I, therefore, recommend acceptance.
train
[ "cDV-Y57PDq8", "ltRpBMZewA6", "J1DnLEZmuqf", "kwUcT2T1Lre", "3psYZ2MDUn", "BmvGsYZeNu7", "QE-NTGHMqMJ", "y6y0oH2UX_6", "S8kVVceX7y5", "i46ENHnAxZr" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We appreciate for your detailed comments and suggestions!\n\nRegarding your first comment on related works based on the Hessian-vector product (HVP) oracle, we agree with you that if speaking in the strict oracle complexity sense, our result is not totally new. We are happy to clarify this more and modify our Tab...
[ -1, -1, 6, 8, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, 3, 3, -1, -1, -1, -1, 4, 4 ]
[ "ltRpBMZewA6", "nips_2021_lEf52hTHq0Q", "nips_2021_lEf52hTHq0Q", "nips_2021_lEf52hTHq0Q", "J1DnLEZmuqf", "kwUcT2T1Lre", "i46ENHnAxZr", "S8kVVceX7y5", "nips_2021_lEf52hTHq0Q", "nips_2021_lEf52hTHq0Q" ]
nips_2021_T3_AJr9-R5g
AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks
The increasing computational requirements of deep neural networks (DNNs) have led to significant interest in obtaining DNN models that are sparse, yet accurate. Recent work has investigated the even harder case of sparse training, where the DNN weights are, for as much as possible, already sparse to reduce computational costs during training. Existing sparse training methods are mainly empirical and often have lower accuracy relative to the dense baseline. In this paper, we present a general approach called Alternating Compressed/DeCompressed (AC/DC) training of DNNs, demonstrate convergence for a variant of the algorithm, and show that AC/DC outperforms existing sparse training methods in accuracy at similar computational budgets; at high sparsity levels, AC/DC even outperforms existing methods that rely on accurate pre-trained dense models. An important feature of AC/DC is that it allows co-training of dense and sparse models, yielding accurate sparse-dense model pairs at the end of the training process. This is useful in practice, where compressed variants may be desirable for deployment in resource-constrained settings without re-doing the entire training flow, and also provides us with insights into the accuracy gap between dense and compressed models.
accept
The paper suggests a network compression scheme which alternates, in training, between dense (uncompressed) phases and sparse (compressed) phases. The compressed phases work along the lines of stochastic IHT. The paper’s merits are mostly on the experimental side. There is also a theoretical part which in my opinion does not directly explain the experimental success but is still somewhat relevant to the main idea. The reviewers seem to have done a thorough review and seem to slightly lean toward acceptance. Given the importance of the field of network compression, together with the merits of this paper, I am inclined to accept.
test
[ "7P6jH04HitX", "51amt5OfcNm", "-oKnA1LcwS_", "V_jEDTjZS9A", "KvKqA3fVbTg", "eMzy2z9XLj_", "oBfB9XmC2kJ", "aFOF2lLpw0q", "J7bOGZxLL7e" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " I would like to thank the authors for their response that addressed my questions and concerns. I stress that the issue regarding practical relevance does not negatively affect my rating and I appreciate work on sparse models even though practical implications are limited. I appreciate your comments and clarificat...
[ -1, 7, 6, -1, -1, -1, -1, -1, 5 ]
[ -1, 4, 3, -1, -1, -1, -1, -1, 3 ]
[ "oBfB9XmC2kJ", "nips_2021_T3_AJr9-R5g", "nips_2021_T3_AJr9-R5g", "eMzy2z9XLj_", "-oKnA1LcwS_", "51amt5OfcNm", "J7bOGZxLL7e", "nips_2021_T3_AJr9-R5g", "nips_2021_T3_AJr9-R5g" ]
nips_2021_31NfehDva-h
HyperSPNs: Compact and Expressive Probabilistic Circuits
Probabilistic circuits (PCs) are a family of generative models which allows for the computation of exact likelihoods and marginals of its probability distributions. PCs are both expressive and tractable, and serve as popular choices for discrete density estimation tasks. However, large PCs are susceptible to overfitting, and only a few regularization strategies (e.g., dropout, weight-decay) have been explored. We propose HyperSPNs: a new paradigm of generating the mixture weights of large PCs using a small-scale neural network. Our framework can be viewed as a soft weight-sharing strategy, which combines the greater expressiveness of large models with the better generalization and memory-footprint properties of small models. We show the merits of our regularization strategy on two state-of-the-art PC families introduced in recent literature -- RAT-SPNs and EiNETs -- and demonstrate generalization improvements in both models on a suite of density estimation benchmarks in both discrete and continuous domains.
accept
Thank you for submitting your work to NeurIPS. The paper targets training large-scale sum-product networks with a given structure. To this end, it pushes the idea of generating the mixture weights of large PCs using a small-scale neural network. While there is a connection to Conditional SPNs as presented at PGM 2020, it actually covers a different aspect: learning SPNs rather than conditional ones. It is very interesting to see that a shared neural network can be used during training to map the embeddings into the PC's original parameter space. Scaling PCs (and not only conditional ones) is a really important question. Simple but highly effective and nice idea.
train
[ "TTqVkDbmjH3", "NNOLYuHPFGA", "4KO1Mr8Afru", "oMc20QK1ywb", "_ysCxazF3G6", "dr9MM9Qz7B4", "er0tDc27GRH", "z0FtgLttZte", "j5DPworMATJ", "jUsmyNNSh60", "El9DX-EMjQn", "azq-BLFLX46" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the additional results. They have addressed my concerns, and I am therefore leaving my rating at 7.", " Please let us know if you would like further discussion around the provided references. We hope we have clarified the originality of our work. We have also incorporated your suggestion on examin...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "oMc20QK1ywb", "_ysCxazF3G6", "dr9MM9Qz7B4", "azq-BLFLX46", "El9DX-EMjQn", "jUsmyNNSh60", "j5DPworMATJ", "nips_2021_31NfehDva-h", "nips_2021_31NfehDva-h", "nips_2021_31NfehDva-h", "nips_2021_31NfehDva-h", "nips_2021_31NfehDva-h" ]
nips_2021_NGPmH3vbAA_
Scaling Vision with Sparse Mixture of Experts
Sparsely-gated Mixture of Experts networks (MoEs) have demonstrated excellent scalability in Natural Language Processing. In Computer Vision, however, almost all performant networks are "dense", that is, every input is processed by every parameter. We present a Vision MoE (V-MoE), a sparse version of the Vision Transformer, that is scalable and competitive with the largest dense networks. When applied to image recognition, V-MoE matches the performance of state-of-the-art networks, while requiring as little as half of the compute at inference time. Further, we propose an extension to the routing algorithm that can prioritize subsets of each input across the entire batch, leading to adaptive per-image compute. This allows V-MoE to trade-off performance and compute smoothly at test-time. Finally, we demonstrate the potential of V-MoE to scale vision models, and train a 15B parameter model that attains 90.35% on ImageNet.
accept
All the reviewers agree that this submission made a significant contribution to the community. The combination of MoE and spatial dynamic computation is interesting and smart. The experimental study of this paper is valuable to the community. Although combining MoE with transformers has been studied in the NLP community, and thus the originality of this submission is somewhat limited, the submission still makes valuable contributions, as mentioned by the reviewers. The AC has read the submission, reviews, and discussion and agrees with the reviewers on their recommendation.
val
[ "isZGnNb8DNO", "oR2ScWc2eN", "zUL_2dIqKkd", "0fC582mneJP", "OoJn0xHNZEd", "W0jyPXuNl0y", "2gQOxTec686", "yjdoejS-16W", "xWn0HbNy5L", "HC18TRQWzh", "6oEJAhFM6AK", "WXr4pvslpyp", "0Zi8QeDoYHO", "Xtuq3XsN3TG", "DGFArY3UO25", "LkuifKIVLKE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an approach to scale vision transformers by using sparse mixture of experts. The idea is simple and straightforward---using a subset of experts defined by a topk operation. The authors conducted sufficient experiments on ImageNet and the experiments demontraste the approach can achieve a nice t...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 9, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ "nips_2021_NGPmH3vbAA_", "0Zi8QeDoYHO", "0fC582mneJP", "yjdoejS-16W", "xWn0HbNy5L", "2gQOxTec686", "6oEJAhFM6AK", "nips_2021_NGPmH3vbAA_", "nips_2021_NGPmH3vbAA_", "Xtuq3XsN3TG", "DGFArY3UO25", "LkuifKIVLKE", "isZGnNb8DNO", "nips_2021_NGPmH3vbAA_", "nips_2021_NGPmH3vbAA_", "nips_2021_N...
nips_2021_uPWdkoZHgba
Two-sided fairness in rankings via Lorenz dominance
We consider the problem of generating rankings that are fair towards both users and item producers in recommender systems. We address both usual recommendation (e.g., of music or movies) and reciprocal recommendation (e.g., dating). Following concepts of distributive justice in welfare economics, our notion of fairness aims at increasing the utility of the worse-off individuals, which we formalize using the criterion of Lorenz efficiency. It guarantees that rankings are Pareto efficient, and that they maximally redistribute utility from better-off to worse-off, at a given level of overall utility. We propose to generate rankings by maximizing concave welfare functions, and develop an efficient inference procedure based on the Frank-Wolfe algorithm. We prove that unlike existing approaches based on fairness constraints, our approach always produces fair rankings. Our experiments also show that it increases the utility of the worse-off at lower costs in terms of overall utility.
accept
Reviewers were very appreciative of the melding of traditional economic concepts such as Lorenz dominance with the topical problem of two-sided fairness in ranking/matching platforms. Parallels to Pareto efficiency and other realistic desiderata and concerns were also appreciated. Some comparisons to baselines/related work seemed marginal or weak. In a subsequent or final version, I would ask that the authors do indeed make the changes promised during the post-rebuttal discussion (e.g., addition of discussion of sponsored search / other paid settings, all typos/method clarifications, newer results, and so on). This is a case where the rebuttal and discussion directly and positively influenced the reviewers' (and this AC's) opinion.
train
[ "4h4XEleo9WG", "JdKOti5ktaG", "mFDagfdnZLQ", "FtII5w-tNx", "BOOMkcmKbxr", "Nz37tC3Lys", "IgwUyZ79r9", "PsthaaaOW7P", "f3vLT-i5kBn", "eFv6X_E_pYj", "mwrKXoabhsy", "ZAbM8wwFLmI", "3zibM-p2wo4", "SrvBXEe9pA", "EKQDIEIH3oG" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Dear Reviewer oh3N,\n\nThank you for your comment, we will add point 2 to the discussion in the paper. If any issues remain, please let us know. ", " Dear Reviewer WERu,\n\nDid our response clarify the trade-offs involved in reciprocal recommendation? Thank you in advance for letting us know if any issues remai...
[ -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "PsthaaaOW7P", "BOOMkcmKbxr", "nips_2021_uPWdkoZHgba", "eFv6X_E_pYj", "Nz37tC3Lys", "f3vLT-i5kBn", "nips_2021_uPWdkoZHgba", "3zibM-p2wo4", "mwrKXoabhsy", "SrvBXEe9pA", "ZAbM8wwFLmI", "EKQDIEIH3oG", "IgwUyZ79r9", "mFDagfdnZLQ", "nips_2021_uPWdkoZHgba" ]
nips_2021_JOOsoL_J6Fc
Stability & Generalisation of Gradient Descent for Shallow Neural Networks without the Neural Tangent Kernel
We revisit on-average algorithmic stability of Gradient Descent (GD) for training overparameterised shallow neural networks and prove new generalisation and excess risk bounds without the Neural Tangent Kernel (NTK) or Polyak-Łojasiewicz (PL) assumptions. In particular, we show oracle-type bounds which reveal that the generalisation and excess risk of GD are controlled by an interpolating network with the shortest GD path from initialisation (in a sense, an interpolating network with the smallest relative norm). While this was known for kernelised interpolants, our proof applies directly to networks trained by GD without intermediate kernelisation. At the same time, by relaxing oracle inequalities developed here we recover existing NTK-based risk bounds in a straightforward way, which demonstrates that our analysis is tighter. Finally, unlike most of the NTK-based analyses we focus on regression with label noise and show that GD with early stopping is consistent.
accept
This paper presents a bound on the expected risk of a two-layer neural network with smooth activation, where the first layer is trained by gradient descent. The bounds cover noisy and noiseless cases and depend on the distance from the initialization to the closest interpolating solution. The reviewers agree that the paper provides a theoretical analysis of the generalization gap of two-layer neural networks via algorithmic stability, using a proof technique that is not based on the Neural Tangent Kernel, while obtaining results of a generality similar to that of NTK-based analyses for two-layer networks. Moreover, as observed by some reviewers, the bounds require the width of the networks to be larger than a polynomial in the number of iterations (specifically, the number of iterations times the step size), which indicates that the analysis also applies outside the NTK regime. In any case, according to the reviewers, the paper provides a relatively novel proof strategy for analyzing generalization in deep learning that is conceptually simpler, and is also able to shed light on the properties of the first non-trivial example of a DNN, i.e., a two-layer network. We think that this contribution is enough for publication. However, we strongly encourage the authors to improve the writing of the paper, in particular Section 3, as observed by some reviewers.
train
[ "9Galc1ktAtP", "SlC9b59ZVe-", "vTy3H9_4q3X", "yArzGHLpbCH", "qzqbt7chHqS", "kn3Ca2yxVfm", "GOPlFsHQAVg", "fWDdRbWJHE", "EvPrL6XrSo8", "2EwqEOwDEHB", "TC6IcC7G61", "5dP8KPYFzi4", "25Vn0_27adQ", "vJXXEI4EHS-", "YrlewYUNCQN", "5qCs5mspcD_", "6iGbGcyTxLP", "6i-BR27ryr", "hx0KyaG99_",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_r...
[ " Thank you for your comment.\n\nThe scaling can yield a network whose output remains $O(1)$ as both: the activation can grow linearly at 0 e.g. hyperbolic tangent $\\text{tanh}$, Swish or smoothed RELU; and the output consists of summing over the $m$ neurons.\n\nSpecifically, suppose each neuron weight $w_i$ for $...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "SlC9b59ZVe-", "vTy3H9_4q3X", "yArzGHLpbCH", "qzqbt7chHqS", "kn3Ca2yxVfm", "5dP8KPYFzi4", "iza423S9Pw1", "TAgax0lWgEr", "iza423S9Pw1", "TC6IcC7G61", "5qCs5mspcD_", "25Vn0_27adQ", "6i-BR27ryr", "TAgax0lWgEr", "iza423S9Pw1", "J9qATtjwL92", "nips_2021_JOOsoL_J6Fc", "hx0KyaG99_", "ni...
nips_2021_GYr3qnFKgU
Adversarial Intrinsic Motivation for Reinforcement Learning
Learning with an objective to minimize the mismatch with a reference distribution has been shown to be useful for generative modeling and imitation learning. In this paper, we investigate whether one such objective, the Wasserstein-1 distance between a policy's state visitation distribution and a target distribution, can be utilized effectively for reinforcement learning (RL) tasks. Specifically, this paper focuses on goal-conditioned reinforcement learning where the idealized (unachievable) target distribution has full measure at the goal. This paper introduces a quasimetric specific to Markov Decision Processes (MDPs) and uses this quasimetric to estimate the above Wasserstein-1 distance. It further shows that the policy that minimizes this Wasserstein-1 distance is the policy that reaches the goal in as few steps as possible. Our approach, termed Adversarial Intrinsic Motivation (AIM), estimates this Wasserstein-1 distance through its dual objective and uses it to compute a supplemental reward function. Our experiments show that this reward function changes smoothly with respect to transitions in the MDP and directs the agent's exploration to find the goal efficiently. Additionally, we combine AIM with Hindsight Experience Replay (HER) and show that the resulting algorithm accelerates learning significantly on several simulated robotics tasks when compared to other rewards that encourage exploration or accelerate learning.
accept
This paper presents an elegant idea and a fine investigation -- an important contribution to the community. After clarifications made by the authors, all reviewers agreed that this paper should be accepted. When preparing the final version, please make sure to address the reviewers' feedback.
train
[ "xH-HUeS-667", "dQ3o379yDl", "xnYyiPLAMlE", "GNYuBNsuDET", "NKWY2-HDxnJ", "28CB4ODpxI4", "KjKaaf8bCNh", "RL8MduxZGP", "GgJRDc_6UqB", "ZWc-MHjz09Y", "xyDMSeNAFS1", "loCl5-aCoab", "ABPI6Jmo94C", "Ci8qPQHFcGW", "_zr13bi6hM4", "clj7lT2-St", "ZWdadlGEaHW", "WAXpaGXrpRE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a method for goal-conditioned RL. The main idea is to learn a function that looks like a value-function and then use that value function as a reward function for RL. This method is motivated as minimizing a certain Wasserstein distance. Empirically, the proposed method outperforms goal-conditio...
[ 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_GYr3qnFKgU", "ABPI6Jmo94C", "nips_2021_GYr3qnFKgU", "NKWY2-HDxnJ", "KjKaaf8bCNh", "GgJRDc_6UqB", "RL8MduxZGP", "loCl5-aCoab", "ZWc-MHjz09Y", "xyDMSeNAFS1", "_zr13bi6hM4", "Ci8qPQHFcGW", "clj7lT2-St", "ZWdadlGEaHW", "xH-HUeS-667", "WAXpaGXrpRE", "xnYyiPLAMlE", "nips_2021_...
nips_2021_w5fW0TNWPyc
Machine Learning for Variance Reduction in Online Experiments
Yongyi Guo, Dominic Coey, Mikael Konutgan, Wenting Li, Chris Schoener, Matt Goldman
accept
The reviewers praise the quality of the write-up (which seems to be a significant achievement with respect to the previous version of the paper) and the crafty combination of existing methods to build an estimator of causal effects. Both theoretical and empirical results are provided that back up the relevance of the contribution. This makes the paper on par with the NeurIPS standard. The authors are, however, expected to include the following in a subsequent version of the paper: - an updated list of references provided by the reviewers and a discussion on how those works compare with the present work; - empirical results based on competing models that are more elaborate than the ones proposed here: if the results of the method proposed in the present paper turn out not to be significantly better than those of the more elaborate models, then the authors are expected to thoroughly share their insights on why this is the case.
train
[ "m7JLiY2y8jX", "5ZIhfppoSi6", "HUnFtRvhR4Z", "wLhlglKLKzg", "crDObp0BE7N", "FiGfbA4bVJC", "iRNA5VNUyJZ", "TUYmV7aOQLw" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new approach for variance reduction in randomized controlled trials. The proposed method first predicts the outcome variable Y from a set of covariates X that are independent of the treatment assignment T, and measures the average treatment effect by regressing the difference between this pre...
[ 7, 5, -1, -1, -1, -1, 7, 9 ]
[ 3, 4, -1, -1, -1, -1, 3, 5 ]
[ "nips_2021_w5fW0TNWPyc", "nips_2021_w5fW0TNWPyc", "5ZIhfppoSi6", "iRNA5VNUyJZ", "m7JLiY2y8jX", "TUYmV7aOQLw", "nips_2021_w5fW0TNWPyc", "nips_2021_w5fW0TNWPyc" ]
nips_2021_RF7AA89cfzl
L2ight: Enabling On-Chip Learning for Optical Neural Networks via Efficient in-situ Subspace Optimization
Silicon-photonics-based optical neural network (ONN) is a promising hardware platform that could represent a paradigm shift in efficient AI with its CMOS-compatibility, flexibility, ultra-low execution latency, and high energy efficiency. In-situ training on the online programmable photonic chips is appealing but still encounters challenging issues in on-chip implementability, scalability, and efficiency. In this work, we propose a closed-loop ONN on-chip learning framework L2ight to enable scalable ONN mapping and efficient in-situ learning. L2ight adopts a three-stage learning flow that first calibrates the complicated photonic circuit states under challenging physical constraints, then performs photonic core mapping via combined analytical solving and zeroth-order optimization. A subspace learning procedure with multi-level sparsity is integrated into L2ight to enable in-situ gradient evaluation and fast adaptation, unleashing the power of optics for real on-chip intelligence. Extensive experiments demonstrate our proposed L2ight outperforms prior ONN training protocols with 3-order-of-magnitude higher scalability and over 30x better efficiency, when benchmarked on various models and learning tasks. This synergistic framework is the first scalable on-chip learning solution that pushes this emerging field from intractable to scalable and further to efficient for next-generation self-learnable photonic neural chips. From a co-design perspective, L2ight also provides essential insights for hardware-restricted unitary subspace optimization and efficient sparse training. We open-source our framework at the link.
accept
The paper proposes a calibration and training method for optical neural networks, a possible alternative to the current implementation in silico. The main topic of the paper is therefore hardware, and the optical implementation of machine learning. The approach presented here is suited for Mach-Zehnder interferometer arrays. The framework supports on-chip learning by fine-tuning the implemented weights to compensate for manufacturing imperfections in the chip. The consensus among reviewers is that this is a very original paper, discussing a new direction for the hardware implementation of machine learning algorithms that could be important in the future. The paper was found to be well organised and to discuss the state of the art well. The newly proposed method was found to be clearly described, and its performance to be well backed by simulations. Expert reviewers in both optics and ML assessed that the work was interesting and promising. In the initial round of review, the lack of discussion of the limitations of the proposed framework was pointed out. After the rebuttal and discussion between the authors and the reviewers, this was, however, addressed to the satisfaction of the reviewers. In conclusion, the proposed approach, L2ight, was found to be a significant improvement over the state of the art, clearly building upon and improving previous work. Given the importance of the topic of hardware implementation of machine learning, the area chair thus recommends acceptance to NeurIPS.
train
[ "jZbwIMp11S", "6E929dnjg5", "2CZk3-fg3_X", "InjxIT966LI", "WIZAAZvDwSA", "XBTV8CHzVM_", "2-pRxMuji0Q", "88ntId5Dsb", "GKXWsR6zjk", "exlgZz8ciIf", "KK5Ld_P_5R", "rNnlFvyH4MX", "LbHQXVTxa7h", "AUCXTtJnM7", "gSri8IAsGI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Optical Neural Networks (ONN) with their attojoule per Multiply Accumulate energy efficiency and sub-nanosecond latency are becoming useful implementation hardware for large scale deep learning models and datasets. Despite their efficiency and speed, ONNs suffer significant loss in accuracy due to manufacturing de...
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_RF7AA89cfzl", "WIZAAZvDwSA", "nips_2021_RF7AA89cfzl", "exlgZz8ciIf", "XBTV8CHzVM_", "LbHQXVTxa7h", "jZbwIMp11S", "KK5Ld_P_5R", "rNnlFvyH4MX", "KK5Ld_P_5R", "AUCXTtJnM7", "gSri8IAsGI", "jZbwIMp11S", "2CZk3-fg3_X", "nips_2021_RF7AA89cfzl" ]
nips_2021_b83ibRX55T
Towards Gradient-based Bilevel Optimization with Non-convex Followers and Beyond
In recent years, Bi-Level Optimization (BLO) techniques have received extensive attention from both learning and vision communities. A variety of BLO models in complex and practical tasks are of non-convex follower structure in nature (a.k.a., without Lower-Level Convexity, LLC for short). However, this challenging class of BLOs lacks developments in both efficient solution strategies and solid theoretical guarantees. In this work, we propose a new algorithmic framework, named Initialization Auxiliary and Pessimistic Trajectory Truncated Gradient Method (IAPTT-GM), to partially address the above issues. In particular, by introducing an auxiliary as initialization to guide the optimization dynamics and designing a pessimistic trajectory truncation operation, we construct a reliable approximate version of the original BLO in the absence of the LLC hypothesis. Our theoretical investigations establish the convergence of solutions returned by IAPTT-GM towards those of the original BLO without LLC. As an additional bonus, we also theoretically justify the quality of our IAPTT-GM embedded with Nesterov's accelerated dynamics under LLC. The experimental results confirm both the convergence of our algorithm without LLC, and the theoretical findings under LLC.
accept
The paper proposes a truncated unrolling-type method for solving bi-level optimization problems with a non-convex lower level. It contains two interesting ideas, namely, Initialization Auxiliary and Pessimistic Trajectory Truncation. The proposed method admits a convergence guarantee while reducing the computation cost. The techniques and analytic framework are novel, and enhance the understanding of gradient-based bi-level optimization methods. In summary, this work will possibly inspire some new bi-level optimization algorithms and technical analyses.
val
[ "bvXYyOxG72G", "8qGvhgJPLmc", "60A9xMU0a38", "d_hZ0xGpkQx", "dP6JJjg0BUP", "g9X1AkAH3e-", "sokzdYGfYvN", "2arglpb3hU", "IYwC--ptIC", "Vpc7ZYIuHG", "FONlDnILBp", "89Yt6rcr5t", "PxaRSo4lnKn" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a bilevel optimization method based on unrolled gradient-descent iterations of the lower-level (LL) problem, while providing a theoretical connection between the ideal bilevel optimization and the unrolled one when LL problem might not be convex. The main idea is two-fold: Regarding the initial...
[ 7, 7, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, 7 ]
[ 4, 4, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, 3 ]
[ "nips_2021_b83ibRX55T", "nips_2021_b83ibRX55T", "d_hZ0xGpkQx", "FONlDnILBp", "IYwC--ptIC", "89Yt6rcr5t", "Vpc7ZYIuHG", "nips_2021_b83ibRX55T", "PxaRSo4lnKn", "8qGvhgJPLmc", "bvXYyOxG72G", "2arglpb3hU", "nips_2021_b83ibRX55T" ]
nips_2021_JbqW3KmmE6
Multi-Facet Clustering Variational Autoencoders
Work in deep clustering focuses on finding a single partition of data. However, high-dimensional data, such as images, typically feature multiple interesting characteristics one could cluster over. For example, images of objects against a background could be clustered over the shape of the object and separately by the colour of the background. In this paper, we introduce Multi-Facet Clustering Variational Autoencoders (MFCVAE), a novel class of variational autoencoders with a hierarchy of latent variables, each with a Mixture-of-Gaussians prior, that learns multiple clusterings simultaneously, and is trained fully unsupervised and end-to-end. MFCVAE uses a progressively-trained ladder architecture which leads to highly stable performance. We provide novel theoretical results for optimising the ELBO analytically with respect to the categorical variational posterior distribution, correcting earlier influential theoretical work. On image benchmarks, we demonstrate that our approach separates out and clusters over different aspects of the data in a disentangled manner. We also show other advantages of our model: the compositionality of its latent space and that it provides controlled generation of samples.
accept
The paper is well-written and the proposed method shows good performance (quantitative results were shown in the rebuttal). Correcting a flaw in previous work is a plus. The reviewers' major concerns, including the lack of quantitative results, dependence on hyperparameter choice, etc., have been addressed in the rebuttal.
train
[ "P6H20RWaIu1", "fA8trXTNa8T", "AUfYvMYbtKK", "Xfp1zCqi2L6", "5RP43wdFhr_", "i8XCGTOoCng", "852uxi2-8D", "n7GN_DYGTC5", "OqFaIRMN0d8", "Hnrcqeci4YK", "C1OvShJWj9F", "LSapdklIqM", "t6Td3gLh3iU", "DxgHzI5zFlK", "yyeC9Qn3ZEL", "_HT4wpPojJP", "N53aT-vPxl", "0UkRZB1BWoX" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you so much again for your brilliant feedback! Your insights and suggestions have been extremely helpful. Thank you for your kind words of support for our methodological idea, proof-of-concept experiments and clarity of presentation. We are delighted that you feel our additional experiments and explanations...
[ -1, -1, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "fA8trXTNa8T", "_HT4wpPojJP", "5RP43wdFhr_", "nips_2021_JbqW3KmmE6", "i8XCGTOoCng", "yyeC9Qn3ZEL", "n7GN_DYGTC5", "t6Td3gLh3iU", "nips_2021_JbqW3KmmE6", "N53aT-vPxl", "0UkRZB1BWoX", "nips_2021_JbqW3KmmE6", "DxgHzI5zFlK", "OqFaIRMN0d8", "Xfp1zCqi2L6", "Hnrcqeci4YK", "nips_2021_JbqW3Km...
nips_2021_lS_rOGT9lfG
Synthetic Design: An Optimization Approach to Experimental Design with Synthetic Controls
We investigate the optimal design of experimental studies that have pre-treatment outcome data available. The average treatment effect is estimated as the difference between the weighted average outcomes of the treated and control units. A number of commonly used approaches fit this formulation, including the difference-in-means estimator and a variety of synthetic-control techniques. We propose several methods for choosing the set of treated units in conjunction with the weights. Observing the NP-hardness of the problem, we introduce a mixed-integer programming formulation which selects both the treatment and control sets and unit weightings. We prove that these proposed approaches lead to qualitatively different experimental units being selected for treatment. We use simulations based on publicly available data from the US Bureau of Labor Statistics that show improvements in terms of mean squared error and statistical power when compared to simple and commonly used alternatives such as randomized trials.
accept
The expert reviewers for the most part appreciated the paper and were guardedly positive. The paper is commended for posing an interesting new question. At the same time, there were concerns about the relevance of the targeted estimand as well as the formal guarantees. The authors suggested possible ways to address this that should be incorporated into the paper. Crucially, the permutation inference result would add an important aspect to the paper that would merit its acceptance. Moreover, the authors should give a detailed discussion of their ATET estimand and carefully explain its limitations in the absence of homogeneity, which is the common setting in practice, and how possible slight violations might affect the interpretation of the results.
test
[ "Yq7Cr0neuk8", "tvA3iSzPR39", "qoi_t8tSVH3", "SP4cU5bb0cx", "y6Dmx4rT6wz", "GpwD5JWPNrZ", "KIpwHvq21CN", "JcX5UXJ5JUO", "5_Q0y2zmco2", "Xp0sND331VD" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This work studies an approach for designing experiments in panel data settings. Based on data from T time periods, the proposed approach selects weights and treatment assignments for a subsequent time period to minimize an empirical estimate of the mean-squared error of an individual-level or weighted average trea...
[ 6, 8, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "nips_2021_lS_rOGT9lfG", "nips_2021_lS_rOGT9lfG", "SP4cU5bb0cx", "y6Dmx4rT6wz", "GpwD5JWPNrZ", "JcX5UXJ5JUO", "Yq7Cr0neuk8", "tvA3iSzPR39", "Xp0sND331VD", "nips_2021_lS_rOGT9lfG" ]