Dataset schema (column: type and observed value range):
paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
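The columns above follow Hugging Face `datasets` conventions (string columns with observed length ranges, categorical string columns, and parallel list columns describing the discussion thread of each paper). Below is a minimal sketch of loading and inspecting one record; the dataset identifier "peer-review-benchmark" is a placeholder assumption, not the actual repository name.

```python
# Minimal sketch (assumes the Hugging Face `datasets` library is installed).
# "peer-review-benchmark" is a hypothetical identifier, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("peer-review-benchmark", split="train")

record = ds[0]
print(record["paper_id"], record["paper_acceptance"], record["label"])

# The list columns are parallel: entry i of every review_* list describes the
# post whose id is review_ids[i] (reviews, author responses, and follow-ups).
for post_id, writer, parent in zip(
    record["review_ids"], record["review_writers"], record["review_reply_tos"]
):
    print(f"{post_id} by {writer}, replying to {parent}")
```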
nips_2021_IwazGtnJgF6
CBP: backpropagation with constraint on weight precision using a pseudo-Lagrange multiplier method
Backward propagation of errors (backpropagation) is a method to minimize objective functions (e.g., loss functions) of deep neural networks by identifying optimal sets of weights and biases. Imposing constraints on weight precision is often required to alleviate prohibitive workloads on hardware. Despite the remarkable success of backpropagation, the algorithm itself is not capable of considering such constraints unless additional algorithms are applied simultaneously. To address this issue, we propose the constrained backpropagation (CBP) algorithm based on the pseudo-Lagrange multiplier method to obtain the optimal set of weights that satisfy a given set of constraints. The defining characteristic of the proposed CBP algorithm is the utilization of a Lagrangian function (loss function plus constraint function) as its objective function. We considered various types of constraints — binary, ternary, one-bit shift, and two-bit shift weight constraints. As a post-training method, CBP was applied to AlexNet, ResNet-18, ResNet-50, and GoogLeNet on ImageNet, which were pre-trained using conventional backpropagation. In most cases, the proposed algorithm outperforms the state-of-the-art methods on ImageNet, e.g., 66.6\%, 74.4\%, and 64.0\% top-1 accuracy for ResNet-18, ResNet-50, and GoogLeNet with binary weights, respectively. This highlights CBP as a learning algorithm that can address diverse constraints with minimal performance loss by employing appropriate constraint functions. The code for CBP is publicly available at \url{https://github.com/dooseokjeong/CBP}.
accept
This paper uses a Lagrange multiplier method to train neural networks with constraints on the weight values that allow them to be quantized. While some reviewers appreciated the simplicity and effectiveness of this method, they found that the paper didn't properly explain the additional costs of the method or justify its practical benefits. There are also some questions about whether the baseline methods compared against represent the SOTA for weight quantization. The quality of the writing also seems pretty rough in places, based on my own quick scan. While I think this paper may contain a solid contribution to the field, the draft could use some work.
train
[ "QTOZHSs888-", "lLxQmFFuf1p", "WRveStTBJUl", "siQO3BsTSb7", "EEH20I5ugVE", "KZmPT0-bJmA", "U3OErFehH5L" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' responses. I agree with the authors that space complexity of weights is important especially when considering the transmission cost of data. However, the space complexity of acitvations is also important during the inference computations on devices with limited on-chip memory since activ...
[ -1, 4, -1, -1, -1, 6, 4 ]
[ -1, 4, -1, -1, -1, 3, 4 ]
[ "EEH20I5ugVE", "nips_2021_IwazGtnJgF6", "lLxQmFFuf1p", "U3OErFehH5L", "KZmPT0-bJmA", "nips_2021_IwazGtnJgF6", "nips_2021_IwazGtnJgF6" ]
nips_2021_Kzuys6WghCV
On the Sample Complexity of Privately Learning Axis-Aligned Rectangles
Menachem Sadigurschi, Uri Stemmer
accept
This paper gives a differentially private algorithm for learning axis-aligned rectangles with a sample complexity that is linear in the underlying dimension, improving the bound of prior works by a factor of $\sqrt{d}$. This is accomplished via a neat trick that might find further applications in the design of private learning algorithms. The reviewers thought that the writing was rushed and at times sloppy, but the contributions were interesting and substantial. We recommend acceptance.
train
[ "H96PoxvXsw7", "Cz0UJIMr7jm", "Hm7bES4btq5", "ipW7NpEwX2v", "9aI-Fr73cu3", "oWeYgCLX-3y", "Iw-x-VcVcMY", "aORd4b_8nJJ", "h4UokQoihqO" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am happy with the way the authors addressed my comments and would like to keep my score.", " Thank you very much for your time and your helpful comments and suggestions. We will take all comments to our attention and make corrections and improvements accordingly. In the following, we respond to a specific poi...
[ -1, -1, -1, -1, -1, 9, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "Hm7bES4btq5", "h4UokQoihqO", "aORd4b_8nJJ", "Iw-x-VcVcMY", "oWeYgCLX-3y", "nips_2021_Kzuys6WghCV", "nips_2021_Kzuys6WghCV", "nips_2021_Kzuys6WghCV", "nips_2021_Kzuys6WghCV" ]
nips_2021_QM8oG0bz1o
Implicit Sparse Regularization: The Impact of Depth and Early Stopping
Jiangyuan Li, Thanh Nguyen, Chinmay Hegde, Ka Wai Wong
accept
In this paper the authors study the implicit bias of gradient descent for the sparse linear regression problem. They focus on a particular parameterization where the model is the Hadamard product of two vectors, and more generally a deep diagonal linear network. Prior literature has shown the propensity or implicit bias of gradient descent towards sparse solutions in some settings, including deep linear cases, but with simplified assumptions and no noise. The authors' result, however, applies in the general deep diagonal linear case and also considers noise. The authors show that under proper incoherence assumptions there is an early stopping window in which the reconstruction error is sufficiently small. The authors also utilize their theory to provide a variety of insights. The reviewers overall liked the generality of assumptions compared to prior work and thought the writing was clear. The reviewers did raise some concerns about (1) related work, (2) originality, (3) the condition number of the signal, (4) too strong an incoherence assumption, and (5) lack of special insight for deep networks. During the discussion period the authors addressed some of the questions raised, which led to reviewers raising their scores. My own reading of the paper is similar. The paper has some nice results but with a few caveats that limit its utility. Therefore, I recommend acceptance as a poster. I strongly urge the authors to revise the paper per reviewer suggestions.
train
[ "RvDk6zxqsM3", "JQjZsyV6-ci", "F8ZYgbYwOT_", "M9ROomsdKu", "JBTztOLnNZY", "iur-qgUX5J", "KTbSi2NfKqC", "N4ZBPSCTa52", "HabA3KpRSn3", "zGAlgJnMwur", "YcUujXJ7rcw" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this work, the authors study the implicit bias of gradient descent for sparse linear regression when parameterizing the ground truth signal as a depth-$N$ diagonal linear network. Previous works have shown that when utilizing a Hadamard parameterization $w = u \\odot u - v \\odot v$, gradient descent exhibits a...
[ 7, -1, 6, -1, 6, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, 4, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_QM8oG0bz1o", "zGAlgJnMwur", "nips_2021_QM8oG0bz1o", "N4ZBPSCTa52", "nips_2021_QM8oG0bz1o", "KTbSi2NfKqC", "JBTztOLnNZY", "F8ZYgbYwOT_", "YcUujXJ7rcw", "RvDk6zxqsM3", "nips_2021_QM8oG0bz1o" ]
nips_2021_S3e377aHvS9
Efficient Generalization with Distributionally Robust Learning
Soumyadip Ghosh, Mark Squillante, Ebisa Wollega
accept
This paper gives a new method for solving the distributionally robust learning problem. The new method uses a subsampling technique for the inner maximization problem, which leads to faster convergence. The paper also demonstrates the effectiveness of the new method in practical settings. The reviewers initially raised several concerns about novelty and theoretical guarantees, but these were addressed in the response period. Now the reviewers believe that the paper is technically sound and the empirical results are convincing.
train
[ "44Pgw785OuO", "PqT_wQ08d8", "5hGVi4RgOl7", "C6oZdDPgC1t", "U8_gRyaBCo7", "14-bUsHFh1", "_53tqFyDr4I", "fmOxZOInl0Y" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for your feedback! Now, the supplementary material is accessible. I'm not sure what happened. All of my technical concerns have been addressed. I'm happy to increase my score accordingly.", "This paper develop and analyze an efficient double-loop-based stochastic algorithms for solving DRO problems. Th...
[ -1, 6, -1, -1, -1, -1, 8, 6 ]
[ -1, 3, -1, -1, -1, -1, 3, 4 ]
[ "U8_gRyaBCo7", "nips_2021_S3e377aHvS9", "PqT_wQ08d8", "_53tqFyDr4I", "PqT_wQ08d8", "fmOxZOInl0Y", "nips_2021_S3e377aHvS9", "nips_2021_S3e377aHvS9" ]
nips_2021_y8y6GJUL01H
No-regret Online Learning over Riemannian Manifolds
We consider online optimization over Riemannian manifolds, where a learner attempts to minimize a sequence of time-varying loss functions defined on Riemannian manifolds. Though many Euclidean online convex optimization algorithms have been proven useful in a wide range of areas, less attention has been paid to their Riemannian counterparts. In this paper, we study Riemannian online gradient descent (R-OGD) on Hadamard manifolds for both geodesically convex and strongly geodesically convex loss functions, and the Riemannian bandit algorithm (R-BAN) on Hadamard homogeneous manifolds for geodesically convex functions. We establish upper bounds on the regrets of the problem with respect to time horizon, manifold curvature, and manifold dimension. We also find a universal lower bound for the achievable regret by constructing an online convex optimization problem on Hadamard manifolds. All the obtained regret bounds match the corresponding results provided in Euclidean spaces. Finally, some numerical experiments validate our theoretical results.
accept
This paper considers online optimization problems in Hadamard manifolds - i.e., simply connected, complete Riemannian manifolds with everywhere non-positive sectional curvature. The authors consider several different settings - full-gradient versus gradient-free algorithms against both convex or strongly-convex objectives - and they provide bounds that mirror the corresponding bounds for the Euclidean case. The reviews speak for themselves and the concerns raised during the review phase were addressed by the authors. As a result, there were no reservations about making an "accept" recommendation. In preparing the camera-ready version of their paper, the authors should make sure to include all the comments made by the reviewers. In addition, I would have the following recommendations/questions: - An aspect which has regrettably been overlooked in the paper is what the Hadamard requirement really means. By the Cartan-Hadamard theorem, every $n$-dimensional Hadamard manifold is diffeomorphic to $\mathbb{R}^n$, so the topology of these manifolds is trivial. This is a crucial limitation, because barycentric constructions cannot be defined otherwise (at the very least, not easily), and it is not possible to use the theory of the paper to solve optimization problems defined on, say, a matrix group – like the set of invertible matrices $\mathrm{GL}(n)$, or the set of orthogonal matrices $\mathrm{O}(n)$. The paper doesn't make any such allusions but, at the same time, it's important to state clearly - and early - that Hadamard manifolds are topologically (and diffeomorphically) trivial. - The section on Riemannian manifolds should be expanded, and the authors should be careful with the notation they employ: I am strongly in favor of representing vector fields as differential operators (which they do, sometimes explicitly other times implicitly), but this should be carefully explained, and some elements of the appendix should be transferred to the main text. In this regard, the authors might find helpful Lee's textbooks on smooth and Riemannian manifolds - the notation is already quite close, so this would essentially be a matter of fixing a few glitches, as in the definition of the Hessian. - I would also be curious to see a comparison between the authors' work and recent Riemannian approaches to online optimization like the 2019 ICLR paper by Bécigneul and Ganea ("Riemannian adaptive optimization methods") and the 2020 ICLR paper by Antonakopoulos et al. ("Online and Stochastic Optimization beyond Lipschitz Continuity: A Riemannian Approach"). A more detailed presentation of previous works on the topic - like Bonnabel's (2013) paper - would also help with the positioning of this work. [To be clear, the settings and results are quite different and there is no issue of an overlap, but explaining these differences would be helpful to the reader]
val
[ "fo3ZUixzL08", "FmqWzJxi6_R", "HfibYNBY30D", "qwBiXNp9lLg", "h4FC70JV7MV", "w4srLQuwxro", "UW1bW-f0BHo", "LEgsRqGtUIX", "jaWVOLP2r8Q", "TzaodE8M25", "ADEebnkqzUG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " I am not convinced by the necessity of homogeneity but I understand that it is non trivial to remove it. The exponential dependence of $\\rho$ on $D$ seems a bit ugly, but it is the case with other papers in the area, and it happens often in negative curvature because volumes increase exponentially fast. The disc...
[ -1, 7, 7, -1, -1, 6, -1, -1, -1, -1, 7 ]
[ -1, 4, 3, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "LEgsRqGtUIX", "nips_2021_y8y6GJUL01H", "nips_2021_y8y6GJUL01H", "jaWVOLP2r8Q", "UW1bW-f0BHo", "nips_2021_y8y6GJUL01H", "w4srLQuwxro", "FmqWzJxi6_R", "HfibYNBY30D", "ADEebnkqzUG", "nips_2021_y8y6GJUL01H" ]
nips_2021_IWhFd34QSSj
Landmark-Guided Subgoal Generation in Hierarchical Reinforcement Learning
Goal-conditioned hierarchical reinforcement learning (HRL) has shown promising results for solving complex and long-horizon RL tasks. However, the action space of the high-level policy in goal-conditioned HRL is often large, which results in poor exploration and leads to inefficient training. In this paper, we present HIerarchical reinforcement learning Guided by Landmarks (HIGL), a novel framework for training a high-level policy with a reduced action space guided by landmarks, i.e., promising states to explore. The key components of HIGL are twofold: (a) sampling landmarks that are informative for exploration and (b) encouraging the high-level policy to generate a subgoal towards a selected landmark. For (a), we consider two criteria: coverage of the entire visited state space (i.e., dispersion of states) and novelty of states (i.e., prediction error of a state). For (b), we select a landmark as the very first landmark on the shortest path in a graph whose nodes are landmarks. Our experiments demonstrate that our framework outperforms prior arts across a variety of control tasks, thanks to efficient exploration guided by landmarks.
accept
Reviewers found the paper to contain novel ideas with solid justifications. The author response was particularly useful in addressing some of the initially raised issues. We therefore recommend acceptance. The authors are encouraged to take the reviews into account to improve presentation and results, especially to include some of the new ones that were provided in the rebuttal. I’d also strongly encourage the authors to strengthen the related work section. The paper unfortunately ignores over two decades of significant hierarchical RL work, some of which appear related enough to deserve a discussion. Examples are the following subgoal discovery work that rely on graph structures, and some of them have a notion of novelty (although different from this work): https://dl.acm.org/doi/abs/10.1007/3-540-36755-1_25 https://dl.acm.org/doi/abs/10.1145/1015330.1015355 https://dl.acm.org/doi/abs/10.1145/1015330.1015353 A potential challenge mentioned by the reviewers is to scale to high-dimensional space, and the authors suggested working in the goal space. The following paper works in the goal space, and relies on successful trajectories to find goals in high-dimensional state spaces: https://aclanthology.org/D18-1253 (https://aclanthology.org/D18-1253/).
train
[ "I5seJsQworg", "x5NCY9XoNyg", "YX7UqMTS7xS", "zn5uD0w_tfN", "57WXGnIGbCQ", "2gTBYniLm1_", "4eZV1iAh-Jc", "Jwb15aViaip", "C9ovV0Ny1GO", "ajUBpO7T4zN", "xeQd-_G91Px", "F06jxyNkLS", "g-1XT2q4e8" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ " We are happy to hear that our rebuttal addressed your concerns well.\n\nWe will add the suggested experimental results and discussion into the final version.\n\nThank you again for the valuable suggestions and comments, which we believe further strengthen our paper !\n\nBest, Authors.\n\n", " We are happy to he...
[ -1, -1, -1, 7, 6, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, -1, 4, 5, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "2gTBYniLm1_", "YX7UqMTS7xS", "F06jxyNkLS", "nips_2021_IWhFd34QSSj", "nips_2021_IWhFd34QSSj", "4eZV1iAh-Jc", "xeQd-_G91Px", "g-1XT2q4e8", "57WXGnIGbCQ", "zn5uD0w_tfN", "C9ovV0Ny1GO", "Jwb15aViaip", "nips_2021_IWhFd34QSSj" ]
nips_2021_Rk7B9kmp7R8
Minimax Regret for Stochastic Shortest Path
Alon Cohen, Yonathan Efroni, Yishay Mansour, Aviv Rosenberg
accept
This paper makes a solid contribution to the stochastic shortest path problem. Reviewers are all in favor of acceptance. Please do incorporate all the suggestions from the reviews into the final version.
train
[ "l7kGZpt8rci", "lD7TpxEPWA1", "lL5phaZdsHj", "cOqIc9g_MjD", "wPQbB3xGKPN", "9dSxLK1Heu5", "KoK538ExTXk", "gi9JyJX63bD" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "As the title suggests, this paper established the minimax regret for the stochastic shortest path (SSP) problem. Specifically, given a state space S, action space A, an upper bound B_\\star on the total cost of the shortest path, the rate-optimal regret (up to logarithmic factors) with K episodes is of the order \...
[ 7, -1, -1, -1, -1, 6, 7, 6 ]
[ 3, -1, -1, -1, -1, 3, 4, 4 ]
[ "nips_2021_Rk7B9kmp7R8", "gi9JyJX63bD", "l7kGZpt8rci", "KoK538ExTXk", "9dSxLK1Heu5", "nips_2021_Rk7B9kmp7R8", "nips_2021_Rk7B9kmp7R8", "nips_2021_Rk7B9kmp7R8" ]
nips_2021_qGn3Rlgul5F
Parametrized Quantum Policies for Reinforcement Learning
With the advent of real-world quantum computing, the idea that parametrized quantum computations can be used as hypothesis families in a quantum-classical machine learning system is gaining increasing traction. Such hybrid systems have already shown the potential to tackle real-world tasks in supervised and generative learning, and recent works have established their provable advantages in special artificial tasks. Yet, in the case of reinforcement learning, which is arguably the most challenging and where learning boosts would be extremely valuable, no proposal has been successful in solving even standard benchmarking tasks, nor in showing a theoretical learning advantage over classical algorithms. In this work, we achieve both. We propose a hybrid quantum-classical reinforcement learning model using very few qubits, which we show can be effectively trained to solve several standard benchmarking environments. Moreover, we demonstrate, and formally prove, the ability of parametrized quantum circuits to solve certain learning tasks that are intractable to classical models, including current state-of-the-art deep neural networks, under the widely believed classical hardness of the discrete logarithm problem.
accept
The authors present a quantum reinforcement learning regime that utilizes a parameterized quantum circuit. The paper is clearly written. The reviewers raised a few concerns with the scientific contributions and some side points, especially why the PQC plays an essential role in the learning model. However, the authors have replied explicitly to the concerns in the discussion, and the reviewers and the authors have reached consensus. The authors also explain in detail possible applications of the work and address the questions on the complexity analysis. In short, I recommend this work to be accepted.
train
[ "T1_yZ1WzDu", "Fb5QnMv9mZG", "iOVgeGxBKV-", "cpruG2RMZJf", "M0hNKUcOsNF", "IgUezXdrZ2_", "uXxlfvyzyf", "rYVQP8UjYg", "d0zJKftFPl", "ZmAxuUy7i1", "8mk--yJKNmR" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "I thank the authors for their response. I think the empirical results are encouraging and the idea is reasonably novel, so will keep the score as is.\n\n--- \nThe paper discusses a parametrized quantum computations as a policy trained via policy gradient. The policy is parametrized with a quantum computer whereas ...
[ 6, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_qGn3Rlgul5F", "nips_2021_qGn3Rlgul5F", "nips_2021_qGn3Rlgul5F", "IgUezXdrZ2_", "uXxlfvyzyf", "rYVQP8UjYg", "ZmAxuUy7i1", "d0zJKftFPl", "iOVgeGxBKV-", "Fb5QnMv9mZG", "T1_yZ1WzDu" ]
nips_2021_sS8rRmgAatA
On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations
KL-regularized reinforcement learning from expert demonstrations has proved highly successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks. However, we show that KL-regularized reinforcement learning with behavioral policies derived from expert demonstrations suffers from previously unrecognized pathological behavior that can lead to slow, unstable, and suboptimal online training. We show empirically that the pathology occurs for commonly chosen behavioral policy classes and demonstrate its impact on sample efficiency and online policy performance. Finally, we show that we can resolve this pathology by specifying a non-parametric behavioral policy and that doing so allows KL-regularized RL to significantly outperform state-of-the-art approaches on a variety of challenging locomotion and dexterous hand manipulation tasks - without ad-hoc algorithmic design choices.
accept
There has been a split amongst the reviewers for this paper. Although each expressed interest in the pathology of KL-regularization stated by the authors, there has not been a consensus about whether the authors' approach to it is impactful. I recommend following the position of the majority: 3 reviewers found that the set of baselines and architectures attempted is suitable to explore the problem at hand, and the empirical results are promising enough to be of value to the community as written (and with promised helpful additions in the discussion). Therefore, I recommend acceptance. I do also sympathize with the position of reviewer hW6R, in that I may not be likely to use KL-regularization to solve the RL-from-expert-demonstrations problem as of this year and have concerns about scaling GPs to high-dimensional problems. While these concerns are relevant, and could inform helpful discussion in the final version or the authors' next papers, I still believe the contribution of a focused analysis of this widely used form of KL-regularization is sufficient in its current form.
train
[ "icevRt_bqfD", "yTimS5st2bh", "MeWwMDNJ-gt", "KsniBHwZD7", "ldB_CuUMAUg", "cprwXAK1yZ", "Cc0T_O-uIev", "G8sWvaBSmLb", "JMlXKEMP7E", "OfbzcjO10b", "S_K0Bfjlap4", "iTCNnGFDKlK", "PLwybpUMpee" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking the time to read and reply to our response. We appreciate your feedback and your contribution to the discussion. We will include the clarifications and additional experiments from our response in the updated draft.", " I thank the authors for their candid responses. This allays most of the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 4 ]
[ "yTimS5st2bh", "Cc0T_O-uIev", "nips_2021_sS8rRmgAatA", "PLwybpUMpee", "PLwybpUMpee", "iTCNnGFDKlK", "S_K0Bfjlap4", "OfbzcjO10b", "nips_2021_sS8rRmgAatA", "nips_2021_sS8rRmgAatA", "nips_2021_sS8rRmgAatA", "nips_2021_sS8rRmgAatA", "nips_2021_sS8rRmgAatA" ]
nips_2021_wPA_5Wsjt8i
Conditional Generation Using Polynomial Expansions
Generative modeling has evolved into a notable field of machine learning. Deep polynomial neural networks (PNNs) have demonstrated impressive results in unsupervised image generation, where the task is to map an input vector (i.e., noise) to a synthesized image. However, the success of PNNs has not been replicated in conditional generation tasks, such as super-resolution. Existing PNNs focus on single-variable polynomial expansions, which do not fare well with two-variable inputs, i.e., the noise variable and the conditional variable. In this work, we introduce a general framework, called CoPE, that enables a polynomial expansion of two input variables and captures their auto- and cross-correlations. We exhibit how CoPE can be trivially augmented to accept an arbitrary number of input variables. CoPE is evaluated in five tasks (class-conditional generation, inverse problems, edges-to-image translation, image-to-image translation, attribute-guided generation) involving eight datasets. The thorough evaluation suggests that CoPE can be useful for tackling diverse conditional generation tasks. The source code of CoPE is available at https://github.com/grigorisg9gr/polynomialnetsforconditionalgeneration.
accept
This paper proposed an extension of polynomial neural networks (PNNs) for application to conditional generation tasks. Experiments show that the proposed conditional model can be applied to five different tasks (class-conditional generation, inverse problems, edges-to-image translation, image-to-image translation, attribute-guided generation) with relevance to the NeurIPS conference. The reviewers all agree that the contribution is useful and novel. All reviewers were satisfied with the author response to the minor concerns that were raised in the first round of reviewing, with one reviewer raising their score. The reviewers engaged in committee discussion and reached a consensus recommendation to accept the paper.
train
[ "kO3Ulr8qejz", "OeGHmBsPLiZ", "V1_B56uvNKv", "1J_Kx-XBNQ8", "f9We2tHy_Dy", "tKe1Sl8ZypB", "1ZbUfGfWkxp", "t8jc92-I9fv", "PfMokfTnAen" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for the response. The response, especially the tables, has addressed most of my concerns, and I decide to increase my score.", "Deep polynomial neural networks (PNNs) have demonstrated great results on unsupervised image generation, but has not been applied to conditional generation tasks. This paper ...
[ -1, 6, 8, 7, -1, -1, -1, -1, 7 ]
[ -1, 3, 4, 4, -1, -1, -1, -1, 3 ]
[ "t8jc92-I9fv", "nips_2021_wPA_5Wsjt8i", "nips_2021_wPA_5Wsjt8i", "nips_2021_wPA_5Wsjt8i", "V1_B56uvNKv", "1J_Kx-XBNQ8", "PfMokfTnAen", "OeGHmBsPLiZ", "nips_2021_wPA_5Wsjt8i" ]
nips_2021_vh7qBSDZW3G
Efficient constrained sampling via the mirror-Langevin algorithm
We propose a new discretization of the mirror-Langevin diffusion and give a crisp proof of its convergence. Our analysis uses relative convexity/smoothness and self-concordance, ideas which originated in convex optimization, together with a new result in optimal transport that generalizes the displacement convexity of the entropy. Unlike prior works, our result both (1) requires much weaker assumptions on the mirror map and the target distribution, and (2) has vanishing bias as the step size tends to zero. In particular, for the task of sampling from a log-concave distribution supported on a compact set, our theoretical results are significantly better than the existing guarantees.
accept
The authors have identified an assumption that extends the work of [Zhang, Peyre, Fadili, Pereyra] to obtain guarantees for the mirror-Langevin algorithm -- not to be confused with the mirrored Langevin dynamics of [Hsieh et al]. However, the assumption cannot be implemented in practice since the authors cannot perform the perfect discretization in the flow step. The standard Euler discretization normally results in the Cox-Ingersoll-Ross (CIR) process, which has a notoriously slow convergence rate. While the authors use a different discretization process, it cannot be implemented. The additional assumptions of relative smoothness + strong convexity seem restrictive. Even in the case of simplex constraints, the authors use the log-barrier and not the usual entropy, which is the staple of mirror descent methods. Moreover, there is no comparison with [Hsieh et al] for the same example. Note that those authors use the entropic mirror map. There are weaknesses, as there is no analysis of the proposed discretization and the ensuing bias introduced, while the paper is beautifully written with solid mathematical content. In particular, as one of the reviewers points out, this is not really a discrete-time paper nor a continuous-time paper - it is somewhere in between. A revised paper with some analysis of the discretization would make it super compelling.
train
[ "rMzTiCXWVOR", "ApwfLd6VOxo", "5FPvrZSPSsF", "B50VDfbOHC2", "5kK_ie_sfzV", "OShD6pB2bHT", "n1POnmFj8Iw", "nFzhYEvaXT6", "el-rX1JN3Hm" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " OK, thanks. I will keep my score as it is.\n\n(3): Please clarify in the paper. It was not clear at all for me.", " Thank you for your review. We will address your points in turn.\n\n(1) Discretization: Theoretically, our work follows the well-established Nemirovsky-Yudin model of computation which measures the...
[ -1, -1, -1, -1, -1, 5, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "5kK_ie_sfzV", "OShD6pB2bHT", "el-rX1JN3Hm", "nFzhYEvaXT6", "n1POnmFj8Iw", "nips_2021_vh7qBSDZW3G", "nips_2021_vh7qBSDZW3G", "nips_2021_vh7qBSDZW3G", "nips_2021_vh7qBSDZW3G" ]
nips_2021_0zvTBoQb5PA
Adaptive Online Packing-guided Search for POMDPs
Chenyang Wu, Guoyu Yang, Zongzhang Zhang, Yang Yu, Dong Li, Wulong Liu, Jianye Hao
accept
The paper describes a new online planning technique for POMDPs that introduces two new ideas: re-sampling based on KLD and belief packing. The paper is well written and comprehensive in the sense that it includes a good empirical evaluation and a theoretical analysis. The reviewers raised concerns about the scalability of the approach, the notation and the assumptions of the theoretical analysis. The author response was very helpful in terms of clarifying those issues. If the paper is accepted, the authors are strongly encouraged to follow the reviewers' advice.
train
[ "Lnwzl_S_rVI", "UkNWdirbWnJ", "Iny-Kld7y5l", "pVuH9c_dUyv", "bJTC_0sZPzr", "3xz3gYZGJ_U", "HfotsAPWH4", "EKqI8jEtjln", "qv3rapVUZrc", "Sw79phUe0f3", "vgzlCrUc3d", "8-asTU7zyOp", "rpLCIg_JwkG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " \nThanks for your comments. We answer your questions one by one as follows.\n\n**Q1**: Does the analysis only consider discrete states and observations? Could $d_{\\infty}^{\\max}$ be infinite?\n\n**A1**: The analysis does consider continuous states and observations. In fact, it is built assuming continuous state...
[ -1, -1, 7, -1, -1, 6, -1, -1, -1, -1, -1, 7, 5 ]
[ -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "UkNWdirbWnJ", "qv3rapVUZrc", "nips_2021_0zvTBoQb5PA", "bJTC_0sZPzr", "HfotsAPWH4", "nips_2021_0zvTBoQb5PA", "Sw79phUe0f3", "3xz3gYZGJ_U", "rpLCIg_JwkG", "Iny-Kld7y5l", "8-asTU7zyOp", "nips_2021_0zvTBoQb5PA", "nips_2021_0zvTBoQb5PA" ]
nips_2021_IWJ9jvXAoVQ
Turing Completeness of Bounded-Precision Recurrent Neural Networks
Previous works have proved that recurrent neural networks (RNNs) are Turing-complete. However, in the proofs, the RNNs allow for neurons with unbounded precision, which is neither practical in implementation nor biologically plausible. To remove this assumption, we propose a dynamically growing memory module made of neurons of fixed precision. The memory module dynamically recruits new neurons when more memories are needed, and releases them when memories become irrelevant. We prove that a 54-neuron bounded-precision RNN with growing memory modules can simulate a Universal Turing Machine, with time complexity linear in the simulated machine's time and independent of the memory size. The result is extendable to various other stack-augmented RNNs. Furthermore, we analyze the Turing completeness of both unbounded-precision and bounded-precision RNNs, revisiting and extending the theoretical foundations of RNNs.
accept
This paper has gone through deep discussions among the reviewers. Assuming all the proofs are correct, a major weakness of this paper is its disconnection from the community. Stack-RNNs and related models are not discussed in the paper. It is unclear what the community can learn from this work. However, the contribution that fixed-precision RNN models with stacks are as powerful as a Turing machine is novel and significant. The analyzed RNN model is more or less practical. The theory may have a wide impact. Meanwhile, the authors are highly encouraged to complete the connection to existing related work.
train
[ "viCTMPSQIqZ", "TuATu0WrgG4", "bo5iRRk0Kr", "iSj2n8bTyOE", "gRGARIom-Fj", "2E03lNFRApz", "gsSov1D3uFF", "VZqDZ1FouMA", "ZAl09BX4MJg", "AYLpXnoR83D", "sC4XfMHznJ", "IRPQL7Ogak4", "y0RsqF7e2_", "jPqO16WLA7Q", "hwgJkwMqLEj" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Given the extended discussion with the reviewer on the relationship of the growing memory module with the stack RNNs, which we acknowledge to be important, we will add a separate section in the revised draft as below. \n\nAt last, we would also like to emphasize that the contribution of our paper lies in the proo...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 3, 9, 7 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 5, 3, 2 ]
[ "bo5iRRk0Kr", "bo5iRRk0Kr", "iSj2n8bTyOE", "gRGARIom-Fj", "AYLpXnoR83D", "nips_2021_IWJ9jvXAoVQ", "VZqDZ1FouMA", "ZAl09BX4MJg", "2E03lNFRApz", "y0RsqF7e2_", "hwgJkwMqLEj", "jPqO16WLA7Q", "nips_2021_IWJ9jvXAoVQ", "nips_2021_IWJ9jvXAoVQ", "nips_2021_IWJ9jvXAoVQ" ]
nips_2021_ybw2U70q_Vd
End-to-end Multi-modal Video Temporal Grounding
We address the problem of text-guided video temporal grounding, which aims to identify the time interval of a certain event based on a natural language description. Different from most existing methods that only consider RGB images as visual features, we propose a multi-modal framework to extract complementary information from videos. Specifically, we adopt RGB images for appearance, optical flow for motion, and depth maps for image structure. While RGB images provide abundant visual cues of certain events, the performance may be affected by background clutter. Therefore, we use optical flow to focus on large motion and depth maps to infer the scene configuration when the action is related to objects recognizable by their shapes. To integrate the three modalities more effectively and enable inter-modal learning, we design a dynamic fusion scheme with transformers to model the interactions between modalities. Furthermore, we apply intra-modal self-supervised learning to enhance feature representations across videos for each modality, which also facilitates multi-modal learning. We conduct extensive experiments on the Charades-STA and ActivityNet Captions datasets, and show that the proposed method performs favorably against state-of-the-art approaches.
accept
This paper introduces an approach for multi-modal video temporal grounding. The paper received diverging review scores (4, 6, 6, 7), with the positive reviewer championing the paper. All reviewers agree that the approach is somewhat incremental, but it is one of the first to introduce the use of depth information. The rebuttal addressed a significant part of the reviewers' concerns. The recommendation is accept as a poster, with the expectation that the authors include the information provided in the rebuttal.
train
[ "WP0QFHQBZdX", "BwncHCg608x", "7EQ-jooz2xn", "3lHXThoGmw4", "3TrZNSlopxN", "ft9LRjWhVfJ", "Ja2Ue3-0ASY", "BUzTcslWY03", "WdutDJokR5", "0H5vI2Q9hXV" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the problem of multi-modal video grounding. The authors exploit multiple modalities including RGB, depth, optical flow to extract complementary information for improving video grouding. A dynamic fusion scheme with transformer is proposed to better learn the interaction and integration of multip...
[ 6, 6, -1, -1, -1, -1, -1, -1, 4, 7 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "nips_2021_ybw2U70q_Vd", "nips_2021_ybw2U70q_Vd", "WdutDJokR5", "3TrZNSlopxN", "WdutDJokR5", "BwncHCg608x", "0H5vI2Q9hXV", "WP0QFHQBZdX", "nips_2021_ybw2U70q_Vd", "nips_2021_ybw2U70q_Vd" ]
nips_2021_6RB77-6-_oI
How Powerful are Performance Predictors in Neural Architecture Search?
Early methods in the rapidly developing field of neural architecture search (NAS) required fully training thousands of neural networks. To reduce this extreme computational cost, dozens of techniques have since been proposed to predict the final performance of neural architectures. Despite the success of such performance prediction methods, it is not well-understood how different families of techniques compare to one another, due to the lack of an agreed-upon evaluation metric and optimization for different constraints on the initialization time and query time. In this work, we give the first large-scale study of performance predictors by analyzing 31 techniques ranging from learning curve extrapolation, to weight-sharing, to supervised learning, to zero-cost proxies. We test a number of correlation- and rank-based performance measures in a variety of settings, as well as the ability of each technique to speed up predictor-based NAS frameworks. Our results act as recommendations for the best predictors to use in different settings, and we show that certain families of predictors can be combined to achieve even better predictive power, opening up promising research directions. We release our code, featuring a library of 31 performance predictors.
accept
This is an in-depth investigation of existing techniques. The experiments support the claims/questions in the paper well. The authors engaged well during the discussion phase to clarify all reviewers' concerns. The findings are likely going to be very useful to the NAS community.
train
[ "vNCNAy1Hxm", "spPnNtIjhWX", "3SC8-PM4eN2", "V51iaigpEEL", "UjKzecjKwTI", "5B37xjYpwDE", "0IVrUM6iFpi", "c-0rneHkZ56", "LptZOJcG7mN", "gL0r8az9RKM", "yXUnHRhETmz", "eE08eBzix5s", "mnB7na7R1OT", "_l1LaKvW14B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " You are right. I apologize for overlooking the fact that experimental papers have been accepted. \nYou have addressed all my concerns. Thanks for including additional predictors, testing on the entire test set for NASBench datasets and looking at the impact of random seed. I am increasing my score.", "This pape...
[ -1, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "yXUnHRhETmz", "nips_2021_6RB77-6-_oI", "nips_2021_6RB77-6-_oI", "UjKzecjKwTI", "LptZOJcG7mN", "3SC8-PM4eN2", "nips_2021_6RB77-6-_oI", "yXUnHRhETmz", "3SC8-PM4eN2", "_l1LaKvW14B", "spPnNtIjhWX", "mnB7na7R1OT", "nips_2021_6RB77-6-_oI", "nips_2021_6RB77-6-_oI" ]
nips_2021_khZGbgRQjjM
Stylized Dialogue Generation with Multi-Pass Dual Learning
Stylized dialogue generation, which aims to generate a given-style response for an input context, plays a vital role in intelligent dialogue systems. Considering there is no parallel data between the contexts and the responses of target style S1, existing works mainly use back translation to generate stylized synthetic data for training, where the data about the context, the target style S1, and an intermediate style S0 is used. However, the interaction among these texts is not fully exploited, and the pseudo contexts are not adequately modeled. To overcome the above difficulties, we propose multi-pass dual learning (MPDL), which leverages the duality among the context, the response of style S1, and the response of style S0. MPDL builds mappings among the above three domains, where the context should be reconstructed by the MPDL framework, and the reconstruction error is used as the training signal. To evaluate the quality of synthetic data, we also introduce discriminators that effectively measure how well a pseudo sequence matches the specific domain, and the evaluation result is used as the weight for that data. Evaluation results indicate that our method obtains significant improvements over previous baselines.
accept
This paper proposes a new method for stylized dialog generation. The task is to learn a model that automatically generates a stylized response given a context in dialogue. The authors apply the technique of dual learning to address the problem by learning bi-directional transformations of texts between the contexts C, the original responses S0, and responses with the desired style S1. Pros * The paper is generally clearly written. * A new approach has been proposed, which appears to be reasonable and technically sound. * The proposed approach appears to be new. * Experimental results have shown that the proposed method outperforms the baselines on two large datasets. An ablation study has also been conducted. * The authors also plan to release their code and data. Cons * There are some issues with the presentation. The authors have addressed some of them in the rebuttal. They are encouraged to reflect the discussion results in the new version of the paper. * The authors also conducted additional experiments during the rebuttal. They are asked to include the results in the new version of the paper. * The proposed method appears to be complicated. One may doubt whether it can be really used in practice.
train
[ "O8vM_cMz9y-", "T2ikjIJ68m8", "TPQmV9oN9dh", "QaXWYZrjQYh", "OKnPMvEpy9R", "1sVTz2zIaq", "E6rSkkUZBbx", "kWhmkt5a1e", "WXT2qN1D6b1", "hvp85AgF7a", "-0wmPlMB0Wt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks a lot for addressing my concerns.\nThe additional experiments look valuable.\nHence, I have raised the score to 7.", "This paper focuses on proposing multi-pass dual learning (MPDL) to build mappings among dialogue context and responses in two styles. The proposed model is reasonable and is the extension...
[ -1, 7, -1, 6, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "1sVTz2zIaq", "nips_2021_khZGbgRQjjM", "OKnPMvEpy9R", "nips_2021_khZGbgRQjjM", "QaXWYZrjQYh", "T2ikjIJ68m8", "hvp85AgF7a", "-0wmPlMB0Wt", "QaXWYZrjQYh", "nips_2021_khZGbgRQjjM", "nips_2021_khZGbgRQjjM" ]
nips_2021_6VMXq5GCB9R
Entropy-based adaptive Hamiltonian Monte Carlo
Hamiltonian Monte Carlo (HMC) is a popular Markov Chain Monte Carlo (MCMC) algorithm to sample from an unnormalized probability distribution. A leapfrog integrator is commonly used to implement HMC in practice, but its performance can be sensitive to the choice of mass matrix used therein. We develop a gradient-based algorithm that allows for the adaptation of the mass matrix by encouraging the leapfrog integrator to have high acceptance rates while also exploring all dimensions jointly. In contrast to previous works that adapt the hyperparameters of HMC using some form of expected squared jumping distance, the adaptation strategy suggested here aims to increase sampling efficiency by maximizing an approximation of the proposal entropy. We illustrate that using multiple gradients in the HMC proposal can be beneficial compared to a single gradient step in Metropolis-adjusted Langevin proposals. Empirical evidence suggests that the adaptation method can outperform different versions of HMC schemes by adjusting the mass matrix to the geometry of the target distribution and by providing some control on the integration time.
accept
The paper introduces a gradient-based sampling algorithm with an adaptive mass matrix to account for the integration error. The referees find the writing of the paper excellent and, after multiple rounds of discussion, also agree that the numerical experiments are convincing. The paper should be accepted to the conference as a poster.
train
[ "bp9gyHyUOL-", "iGEsAPixufe", "s6I7yg3WIU", "_Lt8FpSzfxh", "Xoka0mSnbVi", "lgs6dY8o_ij", "7CEa2fJuMY9", "V8_GwqIUl-I", "dCB3ccOXsTH", "Y08FSVENn30", "3Qb3Fgd9pv4", "r4SS1vaoxR", "loFhMVMVFSS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper develops a gradient-based algorithm that enables adaptation of the mass matrix by taking into account the leapfrog integration error. The adaptation strategy incorporates an approximation of the proposal entropy in comparison to previous approaches that use other techniques such as the expected squared j...
[ 7, 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_6VMXq5GCB9R", "nips_2021_6VMXq5GCB9R", "Y08FSVENn30", "lgs6dY8o_ij", "nips_2021_6VMXq5GCB9R", "7CEa2fJuMY9", "V8_GwqIUl-I", "dCB3ccOXsTH", "loFhMVMVFSS", "3Qb3Fgd9pv4", "iGEsAPixufe", "bp9gyHyUOL-", "Xoka0mSnbVi" ]
nips_2021_5qsptDcsdEj
Continual World: A Robotic Benchmark For Continual Reinforcement Learning
Continual learning (CL) --- the ability to continuously learn, building on previously acquired knowledge --- is a natural requirement for long-lived autonomous reinforcement learning (RL) agents. While building such agents, one needs to balance opposing desiderata, such as constraints on capacity and compute, the ability to not catastrophically forget, and to exhibit positive transfer on new tasks. Understanding the right trade-off is conceptually and computationally challenging, which we argue has led the community to overly focus on catastrophic forgetting. In response to these issues, we advocate for the need to prioritize forward transfer and propose Continual World, a benchmark consisting of realistic and meaningfully diverse robotic tasks built on top of Meta-World as a testbed. Following an in-depth empirical evaluation of existing CL methods, we pinpoint their limitations and highlight unique algorithmic challenges in the RL setting. Our benchmark aims to provide a meaningful and computationally inexpensive challenge for the community and thus help better understand the performance of existing and future solutions. Information about the benchmark, including the open-source code, is available at https://sites.google.com/view/continualworld.
accept
I think reviewers were generally satisfied that this was a useful contribution. I found the reviews generally fair and the paper itself generally meritorious. Obviously there are some concerns, mostly related to the inherent difficulty of producing a satisfactory continual learning benchmark. This paper is somewhere near the current Pareto frontier of the trade-off between "doing something useful but hard" and "doing something in a completely satisfactory way". I don't completely know if that frontier is close enough to the absolute quality threshold for NeurIPS, but I tend to think it is.
train
[ "oHbBEHdN6WY", "Rhl2kCJHdnU", "Buf88sO8BE", "Fe1U0AvZp1e", "jjZwfXw7lE", "e3o_-kLfH1", "vLJ2U5tfnXO", "JeD2zCcEH2", "9wiCZ080bkB", "YMU3l0-44bf" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your suggestion. It is indeed interesting and might provide a more fine-grained measurement of forward transfer. We will consider it in the future version of our benchmark. \n\nAs a side remark: in some experiments (not included in the paper), we considered semantically similar tasks (e.g., a task i...
[ -1, -1, 7, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, 4, 3, 3 ]
[ "Rhl2kCJHdnU", "e3o_-kLfH1", "nips_2021_5qsptDcsdEj", "YMU3l0-44bf", "9wiCZ080bkB", "JeD2zCcEH2", "Buf88sO8BE", "nips_2021_5qsptDcsdEj", "nips_2021_5qsptDcsdEj", "nips_2021_5qsptDcsdEj" ]
nips_2021_1gLyEmOsKE8
Towards Best-of-All-Worlds Online Learning with Feedback Graphs
Liad Erez, Tomer Koren
accept
This paper designs a novel, bi-level regularization procedure to successfully achieve a best-of-all-worlds regret guarantee. I’ll start with the negatives and then move on to the positives. On the negative side, all of the reviewers acknowledge that the results in the stochastic setting are suboptimal. Specifically, it is unclear if the exponent of 5 on the $\log(T)$ term is essential or if this dependence could be reduced (it does seem suboptimal). Another issue is that the authors’ algorithm is restricted to a fixed clique covering. All the reviewers agreed that this is a significant disadvantage, as the choice of clique covering influences the stochastic-setting regret bound (by way of the minimum gaps in the corresponding cliques), and hence it is not possible for the algorithm to select the optimal clique covering (since the gaps are unknown). This is one reason why some assessments were lower than they originally were. In my view, this is the most significant weakness of this work. Also, the current regret bounds are based on clique coverings, whereas it would be desirable to obtain results based on the independence number. On the positive side, there are important technical advances in this work. All the reviewers (and I) thought that this is a very difficult problem, and the authors do make important first progress on this problem. Their contribution has significant novelty, and even if they have not yet found the right regularizer for this problem, they have taken an important step in this direction. So, to be clear, even making this progress for a fixed clique covering is considered to be significant. Therefore, this paper meets the bar for publication at NeurIPS, and it will be exciting to see what additional progress can be made going forward. For the final version, I would like to stress that the authors follow the advice of reviewer 3tpU regarding avoiding the informal presentation by way of $\theta(G)$. Not all clique coverings are the same, and among clique coverings of the same size, the corresponding regret bounds (based on the minimum gaps) could be very different. On that note, I encourage the authors to mention adaptivity to the best clique covering as a good direction for future work.
train
[ "fiBLhfU-CMz", "q3YGr_f8qrx", "xSPpKFOObQ2", "3buIfhVt6iA", "ONqJmayDoP1", "GxO7tPZw4E", "7jXKdEAD4qI", "kxqeeBEaxZ", "N8PANmSnQK", "Fd6o-0kffne", "7n-vzSLfYTO", "35-g879jrr", "kwSOk_T6BD8", "daF5bak-uu" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks to the authors for the reply. After reading the author response and other reviews, I agree that this paper makes some important progress to a difficult problem. I increased my score to 6.", "This paper studies a best-of-all-worlds algorithm for online learning with feedback graphs by incorporating a nove...
[ -1, 6, -1, 7, -1, -1, -1, -1, 6, -1, -1, -1, -1, 6 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3 ]
[ "7n-vzSLfYTO", "nips_2021_1gLyEmOsKE8", "ONqJmayDoP1", "nips_2021_1gLyEmOsKE8", "35-g879jrr", "7jXKdEAD4qI", "kxqeeBEaxZ", "kwSOk_T6BD8", "nips_2021_1gLyEmOsKE8", "daF5bak-uu", "q3YGr_f8qrx", "3buIfhVt6iA", "N8PANmSnQK", "nips_2021_1gLyEmOsKE8" ]
nips_2021__RnHyIeu5Y5
ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias
Transformers have shown great potential in various computer vision tasks owing to their strong capability in modeling long-range dependency using the self-attention mechanism. Nevertheless, vision transformers treat an image as a 1D sequence of visual tokens, lacking an intrinsic inductive bias (IB) in modeling local visual structures and dealing with scale variance. Instead, they require large-scale training data and longer training schedules to learn the IB implicitly. In this paper, we propose a new Vision Transformer Advanced by Exploring intrinsic IB from convolutions, i.e., ViTAE. Technically, ViTAE has several spatial pyramid reduction modules to downsample and embed the input image into tokens with rich multi-scale context by using multiple convolutions with different dilation rates. In this way, it acquires an intrinsic scale-invariance IB and is able to learn robust feature representations for objects at various scales. Moreover, in each transformer layer, ViTAE has a convolution block in parallel to the multi-head self-attention module, whose features are fused and fed into the feed-forward network. Consequently, it has the intrinsic locality IB and is able to learn local features and global dependencies collaboratively. Experiments on ImageNet as well as downstream tasks prove the superiority of ViTAE over the baseline transformer and concurrent works. Source code and pretrained models will be available at https://github.com/Annbless/ViTAE.
accept
This work proposes a new method for combining convolution layers with attention in computer vision problems by allowing the attention layers to focus on capturing long-range correlation structures. In addition, this work analyzes multiple dimensions of how to combine convolutional networks with vision transformers in order to achieve a favorable compute vs. performance trade-off. All of the reviewers were impressed by the soundness of the experiments, the clarity of the exposition, and the significance of the results. For all of these reasons, this paper will be accepted.
train
[ "VCNB12_lhMH", "URvj0Q6wtB", "t6BjHxU1huP", "D9ZXna5G7hs", "Fut42_BvyRN", "Lv3a4OJJjtY", "HOyw1NgMG7K", "sHKbhcJ_ivq", "izhbPJG4kE4", "i_KJhMwcKST", "dYB9Ugfg6o2" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The standard vision transformer model lacks the capability to model inductive biases desirable for images e.g., local features encoding, and scale invariance. It, therefore, requires large-scale training data (300 million images), or other variants (e.g., Data Efficient ViT) distilling Knowledge from a pertained C...
[ 7, -1, 8, 8, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, -1, 4, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021__RnHyIeu5Y5", "sHKbhcJ_ivq", "nips_2021__RnHyIeu5Y5", "nips_2021__RnHyIeu5Y5", "Lv3a4OJJjtY", "D9ZXna5G7hs", "dYB9Ugfg6o2", "VCNB12_lhMH", "t6BjHxU1huP", "nips_2021__RnHyIeu5Y5", "nips_2021__RnHyIeu5Y5" ]
nips_2021_Tku-9lhJC5
Open Rule Induction
Rules have a number of desirable properties. They are easy to understand, can infer new knowledge, and can communicate with other inference systems. One weakness of previous rule induction systems is that they only find rules within a knowledge base (KB) and therefore cannot generalize to more open and complex real-world rules. Recently, language model (LM)-based rule generation has been proposed to enhance the expressive power of the rules. In this paper, we revisit the differences between KB-based rule induction and LM-based rule generation. We argue that, while KB-based methods induce rules by discovering data commonalities, the current LM-based methods are ``learning rules from rules''. This limits these methods to only produce ``canned'' rules whose patterns are constrained by the annotated rules, while discarding the rich expressive power of LMs for free text. Therefore, in this paper, we propose the open rule induction problem, which aims to induce open rules utilizing the knowledge in LMs. Besides, we propose the Orion (\underline{o}pen \underline{r}ule \underline{i}nducti\underline{on}) system to automatically mine open rules from LMs without supervision of annotated rules. We conducted extensive experiments to verify the quality and quantity of the induced open rules. Surprisingly, when applying the open rules in downstream tasks (i.e., relation extraction), these automatically induced rules even outperformed the manually annotated rules.
accept
Automatically extracting logic rules from unstructured data can be practically useful. The proposed rule extraction method based on pre-trained language models is novel and flexible. The empirical results are good on different tasks. The rebuttals clarified many of the issues raised in the initial reviews. An immediate limitation of this work is that the bias of pre-trained LMs could result in unexpected rules.
train
[ "8OLiayTunK3", "FOGDQIp_kSF", "tW09rq-X1Di", "GT_KmCK1DC", "btiBUczEGJM", "csJLJxiDtgn", "g7xaf882vS8", "n7-onhR4hPW", "zLR69-sjbiG", "M_yEwq_fUST", "1vAgeL2SQwg", "wnaD0XyAXzM", "Q0Rv80-U08", "4DJ00pFsPRd", "TYGNnWBncqa", "uJk2c2b1U1f" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Glad to see that our approach has convinced you more. Thank you for your advice. Our paper will focus on open rules with one atom as we defined in Definition 1 (line 82). While in addition, to clarify the problem boundary as you suggested, we will add the discussion on existential rules after the problem definiti...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "FOGDQIp_kSF", "tW09rq-X1Di", "csJLJxiDtgn", "nips_2021_Tku-9lhJC5", "1vAgeL2SQwg", "g7xaf882vS8", "n7-onhR4hPW", "wnaD0XyAXzM", "M_yEwq_fUST", "TYGNnWBncqa", "GT_KmCK1DC", "4DJ00pFsPRd", "uJk2c2b1U1f", "nips_2021_Tku-9lhJC5", "nips_2021_Tku-9lhJC5", "nips_2021_Tku-9lhJC5" ]
nips_2021_405l3VpbqRA
Post-Contextual-Bandit Inference
Contextual bandit algorithms are increasingly replacing non-adaptive A/B tests in e-commerce, healthcare, and policymaking because they can both improve outcomes for study participants and increase the chance of identifying good or even the best policies. Nonetheless, to support credible inference on novel interventions at the end of the study, we still want to construct valid confidence intervals on average treatment effects, subgroup effects, or the value of new policies. The adaptive nature of the data collected by contextual bandit algorithms, however, makes this difficult: standard estimators are no longer asymptotically normally distributed, and classic confidence intervals fail to provide correct coverage. While this has been addressed in non-contextual settings by using stabilized estimators, variance-stabilized estimators in the contextual setting pose unique challenges that we tackle for the first time in this paper. We propose the Contextual Adaptive Doubly Robust (CADR) estimator, a novel estimator for policy value that is asymptotically normal under contextual adaptive data collection. The main technical challenge in constructing CADR is designing adaptive and consistent conditional standard deviation estimators for stabilization. Extensive numerical experiments using 57 OpenML datasets demonstrate that confidence intervals based on CADR uniquely provide correct coverage.
accept
Thank you to the authors and the reviewers for their contributions! There was a long discussion on this paper, and we sought external opinions to add clarifications. The final consensus is that the paper makes an interesting technical contribution to the literature on inference over adaptively collected data, specifically via contextual bandits. However, the paper is not very clear in terms of its key results and their limitations, and I urge the authors to clarify the exposition in the next version. Key issues have already been noted in the discussion period; one reviewer also specifically requested clarifications on whether Assumptions 5 and 6 hold in the experiments (i.e., whether the $\epsilon$ is decaying too fast) or to adapt the experiment parameters accordingly.
train
[ "glJgIqXjXlL", "fxyuDFO39_m", "3r303Hb36q6", "C6h9zzmoqXW", "aoDTAokztx-", "qAFAz6jyRr", "nL6aAJpyll4", "UFqRY4Y4c2", "OVja-8myxLF", "dU184HNIaRu", "GtBlV0S0a4", "RyfNQXjtDe", "PagZMIo4vSm", "wGxvndKN5jb", "l4X41tIqeeL", "-tZFOuL2E4V", "gzoF8agwqeW", "P0NMp7IN2z", "wiT_SCdO4hQ", ...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "...
[ " (fixed a typo below re $\\sigma_{0,t}\\gtrsim \\delta_t^{-1/2}$)\n\nRe Lindeberg condition: \n\nThank you for clarifying your point. You are right that the way we write our proof we do need $\\omega$ and not $\\Omega$, but at the same time one does not really actually need Assumption 3 for Theorem 1. Let us expla...
[ -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 7 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "fxyuDFO39_m", "Y5sX1ts9-eY", "nips_2021_405l3VpbqRA", "aoDTAokztx-", "P0NMp7IN2z", "nips_2021_405l3VpbqRA", "UFqRY4Y4c2", "OVja-8myxLF", "dU184HNIaRu", "RyfNQXjtDe", "PagZMIo4vSm", "GtBlV0S0a4", "wGxvndKN5jb", "l4X41tIqeeL", "wiT_SCdO4hQ", "z6165HufpIw", "os3MD_Bo9H3", "nzjASFynrP...
nips_2021_79xCSCP6qs
Revisiting Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme
Recently, a series of algorithms have been explored for GAN compression, which aims to reduce the tremendous computational overhead and memory usage incurred when deploying GANs on resource-constrained edge devices. However, most existing GAN compression work focuses only on how to compress the generator, while failing to take the discriminator into account. In this work, we revisit the role of the discriminator in GAN compression and design a novel generator-discriminator cooperative compression scheme for GAN compression, termed GCC. Within GCC, a selective activation discriminator automatically selects and activates convolutional channels according to a local capacity constraint and a global coordination constraint, which help maintain the Nash equilibrium with the lightweight generator during adversarial training and avoid mode collapse. The original generator and discriminator are also optimized from scratch to play as a teacher model that progressively refines the pruned generator and the selective activation discriminator. A novel online collaborative distillation scheme is designed to take full advantage of the intermediate features of the teacher generator and discriminator to further boost the performance of the lightweight generator. Extensive experiments on various GAN-based generation tasks demonstrate the effectiveness and generalization of GCC. Among them, GCC reduces computational costs by 80% while maintaining comparable performance in image translation tasks.
accept
This paper proposes a generator-discriminator co-training compression method to balance the model capacity of the lightweight generator and discriminator during GAN compression. The reviewers appreciate the idea of balancing the generator and discriminator as well as the comprehensive results and clear writing, although there are still concerns regarding the technical novelty and the ablation studies. The rebuttal addressed most of the reviewers' concerns and highlighted the difference from prior works (e.g., knowledge distillation for the discriminator, NAS for discriminator architecture search). I agree with most reviewers and believe that the paper's contribution is significant enough, and the results are convincing. Therefore, I recommend accepting the paper.
train
[ "NZMCHy79dJ4", "yMgsHB05rNA", "3gNiOxMZbZv", "0HeuusNVKET", "6wntc40aPMQ", "xZVqpO-aNde", "OUnrUZKEiqs", "ItOwezZClKp", "6ZVBsm2RU3i", "vYivjsU0g9", "WlX0Do3nCD", "gRXW_YMkF-" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ " Thanks for the clarification! I am more convinced than before. Nevertheless, given the known challenge in balancing the generator and discriminator during compression, it would be good to see not only the strengths but also the limits of the proposed method. I, therefore, keep my rating to 6, and suggest authors ...
[ -1, -1, 6, -1, 5, 6, -1, 7, -1, -1, -1, -1 ]
[ -1, -1, 3, -1, 4, 4, -1, 4, -1, -1, -1, -1 ]
[ "gRXW_YMkF-", "NZMCHy79dJ4", "nips_2021_79xCSCP6qs", "OUnrUZKEiqs", "nips_2021_79xCSCP6qs", "nips_2021_79xCSCP6qs", "6ZVBsm2RU3i", "nips_2021_79xCSCP6qs", "xZVqpO-aNde", "6wntc40aPMQ", "ItOwezZClKp", "3gNiOxMZbZv" ]
nips_2021__y2G1-i7L8
Asymptotically Exact Error Characterization of Offline Policy Evaluation with Misspecified Linear Models
We consider the problem of offline policy evaluation (OPE) with Markov decision processes (MDPs), where the goal is to estimate the utility of given decision-making policies based on static datasets. Recently, theoretical understanding of OPE has been rapidly advanced under (approximate) realizability assumptions, i.e., where the environments of interest are well approximated with the given hypothetical models. On the other hand, OPE under unrealizability has not been understood as well as the realizable setting, despite its importance in real-world applications. To address this issue, we study the behavior of a simple existing OPE method called the linear direct method (DM) under unrealizability. Consequently, we obtain an asymptotically exact characterization of the OPE error in a doubly robust form. Leveraging this result, we also establish the nonparametric consistency of the tile-coding estimators under quite mild assumptions.
accept
The paper characterizes the asymptotic bias of off-policy evaluation using linear function approximation when realizability assumptions do not hold, with a convergence rate analysis. The major concerns of the reviewers focused on the paper's lack of discussion of closely related results in MIS (e.g., Uehara et al, Tang et al); in fact, some of the high-level claims in the paper (e.g., first to discover the connection between direct methods and density-ratio learning, first to provide a "doubly robust" guarantee for LSTDQ) are already known in the prior works. After extensive post-rebuttal discussions, the AC decides that this is a borderline paper that can go either way, but their personal recommendation is acceptance: the paper's exact characterization of the bias in LSTDQ is a significant result. As mentioned above, prior works have noticed the curious nature of LSTDQ and how it connects to MIS methods, but an exact characterization of its bias is missing, and different approaches to bounding the bias (e.g., under either realizability of the value function or of the density ratio) yield different results, lacking unification. This paper solves this problem by providing the *exact* characterization of the bias, which takes the form of the product of the realizability errors of the value function and the density ratio, respectively. For reference, the most comparable result previously known takes the form of a minimum over these errors; see the post-rebuttal discussions (also Remark 2 in Appendix A of Uehara et al, which does not require realizability). While multiplicative errors are known for doubly robust estimators (e.g., Tang et al, Kallus and Uehara), these are meta-estimators and require an approximate value function and density ratio as inputs, so the results are somewhat orthogonal to the current paper. Furthermore, the paper complements the bias analysis with a convergence rate analysis, which takes care of non-i.i.d. data and finite-sample effects. With all these, the AC feels that the paper will be a valuable reference for anyone who wants to investigate the statistical behavior of linear OPE estimators, and therefore recommends acceptance. Despite the AC's recommendation, the authors should note that missing significant related works is a major issue, and a rejection would also be reasonable given that the revision needed is substantial enough that it may warrant another round of reviewing. Regardless of the final decision, the authors are urged to fix the overclaiming and draw connections to related works. Just to give a few pointers: - Abstract: "OPE ... has been advanced under the (approximate) realizability assumptions. However, such assumptions undermine the applicability of the results since the given environmental models may be completely wrong." This sentence suggests a qualitative difference from existing works, whereas the actual difference is more quantitative. - Line 60: "new interpretation of the linear DMs as marginal density ratio estimator" is already known in the literature. - A comparison to previous upper bounds on the bias should be provided. Finally, some additional detailed comments from the AC: 1. The authors might want to reconsider the title, which is not very informative in its current form given that the central result is the exact characterization of the bias and the connection between DM (LSTDQ) and MIS is somewhat known. Also, linear function approximation is a crucial component of the setup, which is not reflected in the title. 2. The authors mentioned the lower bound of Wang et al several times during the discussions, but may have some slight misunderstanding of their results. It was mentioned that they assumed bounded $1/C_{\mu}$, which is not true. Wang et al's data assumption is about the minimal eigenvalue of $\Sigma$, which is a much weaker one; the authors should be careful when discussing related issues in the paper. On a somewhat related note, the paper's derivation requires invertible $\Sigma$, but that may not be necessary. In the usual LSTD analyses, $\Sigma$ and $(I-\gamma F^{\#})$ are often multiplied together as a single matrix (usually in the form of $E[\phi(s)\phi(s) - \gamma \phi(s) \phi(s')]$), and the requirement of invertibility of $\Sigma$ may be an artifact of extracting out $\Sigma$. 3. Def 9 for "Bellman residual" is not as interpretable as Def 8. For Def 8, one can easily see that the residual is 0 if the linear class realizes the true density ratio function. For Def 9, it is unclear/unintuitive why the residual is 0 when the linear class realizes the true value function (though I think $Q^{\#}=Q^\pi$ when $Q^\pi$ is realizable and therefore the residual is $0$ in this case); the reference to $Q^{\#}$ in the definition is somewhat undesirable since the quantity is a complicated expression in the estimator which does not make direct sense in the MDP. Also, if $Q^{\#}$ is allowed in the expression, the bias can be trivially written as $E_{p_0^\pi}[Q^{\#} - Q^\pi]$, which is vacuous. Is it possible to rewrite it as an expression that looks like more standard forms of Bellman errors, e.g., replacing $B Q^{\#} - q$ with $B q - q$? Uehara et al. Minimax Weight and Q-Function Learning for Off-Policy Evaluation. Kallus and Uehara. Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes. Tang et al. Doubly robust bias reduction in infinite horizon off-policy estimation. Wang et al. What are the Statistical Limits of Offline RL with Linear Function Approximation?
train
[ "LHXj49QqXta", "4_X8-tMUPS_", "GmZCakZyXv", "oGBlAROr15N", "RrY-aAwfkYr", "EetG33Ed-a", "8dH8BccHh3q", "K2icu_f-a93", "J9qdKjd_B1P", "OUGnsB6C0bf", "cEauQhTXXti", "YURLqT4LiD", "_b5VL580RK5", "98kWTSW3LPs", "fgzyJjDU2yR", "2inao3Ay1Lu", "DUVhg6TB6U3", "cXgVm_SFnJL", "pz1VkwXhaxa"...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "a...
[ " > alternative expressions of the residual that does not refer to $Q^\\\\sharp$ .\n\nYes, as you said, it may be not possible to solve the issue immediately, but at least we will mention it.\n\nAlso, thank you so much for the additional reference.", " Ok I think we are on the same page then. The authors are stil...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 3 ]
[ "4_X8-tMUPS_", "GmZCakZyXv", "RrY-aAwfkYr", "EetG33Ed-a", "EetG33Ed-a", "8dH8BccHh3q", "K2icu_f-a93", "J9qdKjd_B1P", "OUGnsB6C0bf", "cEauQhTXXti", "YURLqT4LiD", "cXgVm_SFnJL", "2inao3Ay1Lu", "fgzyJjDU2yR", "_b5VL580RK5", "nips_2021__y2G1-i7L8", "nips_2021__y2G1-i7L8", "pz1VkwXhaxa"...
nips_2021_AVWROGUWpu
Topographic VAEs learn Equivariant Capsules
In this work we seek to bridge the concepts of topographic organization and equivariance in neural networks. To accomplish this, we introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables. We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST. Furthermore, through topographic organization over time (i.e. temporal coherence), we demonstrate how predefined latent space transformation operators can be encouraged for observed transformed input sequences -- a primitive form of unsupervised learned equivariance. We demonstrate that this model successfully learns sets of approximately equivariant features (i.e. "capsules") directly from sequences and achieves higher likelihood on correspondingly transforming test sequences. Equivariance is verified quantitatively by measuring the approximate commutativity of the inference network and the sequence transformations. Finally, we demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
accept
Solid work making an original contribution on the link between topographic representations (such as those found in the brain) to equivariance learning.
train
[ "thjZImgPXn3", "QQ7Xc7PELIj", "ftGxFw2_2My", "fYR6fJjaJE", "eXXEooBUx2Y", "x3t8-DFe5aM", "2-PRM8nIoL_", "uWC4SxIUn8x", "EptjgnPiajl", "G-EFbOVnS_W", "V6AwMtH1bXE", "ZXh4jv4Cmxh", "WmAiGkVxOjd", "1JZEo4dXsk" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate the reviewer’s interest in our work. Given the relative infancy of such approximate equivariance as a topic, we are unaware of any references where ‘output-state dependent’ transformations would be observed. Our aim was to provide the convolution example as a thought experiment demonstrati...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "QQ7Xc7PELIj", "ftGxFw2_2My", "x3t8-DFe5aM", "nips_2021_AVWROGUWpu", "EptjgnPiajl", "2-PRM8nIoL_", "uWC4SxIUn8x", "V6AwMtH1bXE", "fYR6fJjaJE", "nips_2021_AVWROGUWpu", "1JZEo4dXsk", "WmAiGkVxOjd", "nips_2021_AVWROGUWpu", "nips_2021_AVWROGUWpu" ]
nips_2021__Rtm4rYnIIL
MobILE: Model-Based Imitation Learning From Observation Alone
This paper studies Imitation Learning from Observations alone (ILFO) where the learner is presented with expert demonstrations that consist only of states visited by an expert (without access to actions taken by the expert). We present a provably efficient model-based framework MobILE to solve the ILFO problem. MobILE involves carefully trading off exploration against imitation - this is achieved by integrating the idea of optimism in the face of uncertainty into the distribution matching imitation learning (IL) framework. We provide a unified analysis for MobILE, and demonstrate that MobILE enjoys strong performance guarantees for classes of MDP dynamics that satisfy certain well studied notions of complexity. We also show that the ILFO problem is strictly harder than the standard IL problem by reducing ILFO to a multi-armed bandit problem indicating that exploration is necessary for solving ILFO efficiently. We complement these theoretical results with experimental simulations on benchmark OpenAI Gym tasks that indicate the efficacy of MobILE. Code for implementing the MobILE framework is available at https://github.com/rahulkidambi/MobILE-NeurIPS2021.
accept
I appreciate the authors for the detailed rebuttal and additional experimental results, and the reviewers for engaging in detailed feedback (especially reviewers axQZ and eB6p for active discussions). The novelty and impact were better communicated after the discussions. I like that the paper studies the generic problem of ILFO from a model-based perspective, discusses its fundamental differences from IL theoretically, and proposes better strategic exploration through models as a practical solution. One concern is some missing references/baselines for model-based GAIL/GAIFO-like algorithms, e.g., MAIL [Baram et al. 2016], but the MobILE ablation study without optimism appears to cover similar cases.
val
[ "pYiwA62IguM", "7gL03Mmmr0N", "qYJqoKL3STK", "oqVUtDgv73r", "xjKmNOe1vQ", "PKNDssFZ-k", "WuDv-QjhPT", "P7aVOVDwp3b", "eLuGxxHzuaH", "25wujYUv3M", "wWYrIz94tK", "_LQIsQn9xb", "7SjGCHRm81v", "e164v-nvU9i", "6WvEz2IEQn5", "JlwP5BpvB7F", "vWOppfCilAg" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I now have less doubts about the significance of the approach. I've raised my score. ", "This paper introduces MobILE, an approach for training agents to imitate from expert states only, i.e., when actions are unavailable. In particular, the approach utilizes an exploration bonus to more efficiently explore wit...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "qYJqoKL3STK", "nips_2021__Rtm4rYnIIL", "oqVUtDgv73r", "P7aVOVDwp3b", "nips_2021__Rtm4rYnIIL", "WuDv-QjhPT", "_LQIsQn9xb", "6WvEz2IEQn5", "e164v-nvU9i", "7SjGCHRm81v", "_LQIsQn9xb", "xjKmNOe1vQ", "vWOppfCilAg", "JlwP5BpvB7F", "7gL03Mmmr0N", "nips_2021__Rtm4rYnIIL", "nips_2021__Rtm4rY...
nips_2021_ZgUZmeV1Mtu
Few-Round Learning for Federated Learning
In federated learning (FL), a number of distributed clients targeting the same task collaborate to train a single global model without sharing their data. The learning process typically starts from a randomly initialized or some pretrained model. In this paper, we aim at designing an initial model based on which an arbitrary group of clients can obtain a global model for its own purpose, within only a few rounds of FL. The key challenge here is that the downstream tasks for which the pretrained model will be used are generally unknown when the initial model is prepared. Our idea is to take a meta-learning approach to construct the initial model so that any group with a possibly unseen task can obtain a high-accuracy global model within only R rounds of FL. Our meta-learning itself could be done via federated learning among willing participants and is based on an episodic arrangement to mimic the R rounds of FL followed by inference in each episode. Extensive experimental results show that our method generalizes well for arbitrary groups of clients and provides large performance improvements given the same overall communication/computation resources, compared to other baselines relying on known pretraining methods.
accept
This paper proposes a meta-learning approach for federated learning which allows arbitrary groups of clients to obtain a good global model in a few communication steps. The reviews were quite positive, and the author response (which contained additional experimental results) helped to further consolidate the scores. Overall, the problem of few-round FL and the proposed approach were deemed to be nicely motivated, interesting, and useful in practice. Therefore, the paper is accepted.
train
[ "ClbLVVt7qi", "G2zZc64S-r_", "ydoX6rJIer", "GTBlKTns4cp", "bldblwNbqtM", "IXZVdnIV0t7", "Ioq0lKaHD8", "PTQc5ZY3Ua", "vk9jrklxQWW", "yMTRwTMR6D4", "Fe4L-wosy_", "AkVt02ebYge" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Please allow us to add a clarification regarding your comment on real federated architecture. When **comparing the training times** of different schemes, we agree that simulation with multiple machines (GPUs) could be more convincing, but we stress that we are **comparing classification accuracies after a fixed n...
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, 4, 3, 4 ]
[ "GTBlKTns4cp", "bldblwNbqtM", "GTBlKTns4cp", "vk9jrklxQWW", "nips_2021_ZgUZmeV1Mtu", "bldblwNbqtM", "Fe4L-wosy_", "AkVt02ebYge", "yMTRwTMR6D4", "nips_2021_ZgUZmeV1Mtu", "nips_2021_ZgUZmeV1Mtu", "nips_2021_ZgUZmeV1Mtu" ]
nips_2021_kaIcRYq-NpG
On Path Integration of Grid Cells: Group Representation and Isotropic Scaling
Understanding how grid cells perform path integration calculations remains a fundamental problem. In this paper, we conduct a theoretical analysis of a general representation model of path integration by grid cells, where the 2D self-position is encoded as a higher dimensional vector, and the 2D self-motion is represented by a general transformation of the vector. We identify two conditions on the transformation. One is a group representation condition that is necessary for path integration. The other is an isotropic scaling condition that ensures locally conformal embedding, so that the error in the vector representation translates conformally to the error in the 2D self-position. Then we investigate the simplest transformation, i.e., the linear transformation, uncover its explicit algebraic and geometric structure as a matrix Lie group of rotations, and explore the connection between the isotropic scaling condition and a special class of hexagon grid patterns. Finally, with our optimization-based approach, we manage to learn hexagon grid patterns that share similar properties with the grid cells in the rodent brain. The learned model is capable of accurate long-distance path integration. Code is available at https://github.com/ruiqigao/grid-cell-path.
accept
The reviewers are overall positive. The presented approach is mathematically sound and shows that grid cells can emerge from an optimization framework with minimal assumptions. The main criticism raised is whether the approach is useful beyond already existing models. The authors should clarify potential applications more explicitly in the revision, as well as include the additional experiments they cited in the rebuttal.
val
[ "1GJ5CI2X_5k", "WIvRrgc81PF", "BTfNlgMgaGJ", "bgsFWYDhXqP", "A5IGMyDWj06", "Ala9FlDBU6", "qKBt0NMuJiL", "U0zc4Lwoc30", "wfBGhszfU4H" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Following your suggestion on adding experiments on applications**, we conduct the following two experiments and have obtained initial results. The first experiment is to integrate our grid cells model with egocentric vision. The second experiment is on path planning. The two experiments further illustrate the u...
[ -1, -1, -1, -1, -1, -1, 5, 10, 7 ]
[ -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "BTfNlgMgaGJ", "BTfNlgMgaGJ", "A5IGMyDWj06", "wfBGhszfU4H", "qKBt0NMuJiL", "U0zc4Lwoc30", "nips_2021_kaIcRYq-NpG", "nips_2021_kaIcRYq-NpG", "nips_2021_kaIcRYq-NpG" ]
nips_2021_M3lIEwZLmvI
Online Convex Optimization with Continuous Switching Constraint
Guanghui Wang, Yuanyu Wan, Tianbao Yang, Lijun Zhang
accept
This paper studies an interesting variant of OCO with continuous switching constraints. I have found the contributions of the paper quite interesting and novel. This is a very nice extension of the work by Chen et al. (2020). So I recommend acceptance.
train
[ "PQv3TiyryWA", "mUCDrh45Cmo", "5YFreo2GhHX", "Ye0wFey-NFU", "9Cz6-Ltj1vg", "BO2gsr2NGfW", "GYui4qQ9Gfc", "mjwbcvN8v5u", "yav0CXAyBss" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work studies the well-known Online Convex Optimization (OCO) problem under an additional $\\ell_2$-norm hard switching constraint of $\\sum_{t=2}^T |w_t - w_{t-1}| \\leq S$ where $w_t$'s are the actions of the algorithm and $S$ is the switching budget. They first provide hardness results to show that the regi...
[ 6, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ 5, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "nips_2021_M3lIEwZLmvI", "GYui4qQ9Gfc", "yav0CXAyBss", "PQv3TiyryWA", "mjwbcvN8v5u", "GYui4qQ9Gfc", "nips_2021_M3lIEwZLmvI", "nips_2021_M3lIEwZLmvI", "nips_2021_M3lIEwZLmvI" ]
nips_2021_8twKpG5s8Qh
Why Do Better Loss Functions Lead to Less Transferable Features?
Previous work has proposed many new loss functions and regularizers that improve test accuracy on image classification tasks. However, it is not clear whether these loss functions learn better representations for downstream tasks. This paper studies how the choice of training objective affects the transferability of the hidden representations of convolutional neural networks trained on ImageNet. We show that many objectives lead to statistically significant improvements in ImageNet accuracy over vanilla softmax cross-entropy, but the resulting fixed feature extractors transfer substantially worse to downstream tasks, and the choice of loss has little effect when networks are fully fine-tuned on the new tasks. Using centered kernel alignment to measure similarity between hidden representations of networks, we find that differences among loss functions are apparent only in the last few layers of the network. We delve deeper into representations of the penultimate layer, finding that different objectives and hyperparameter combinations lead to dramatically different levels of class separation. Representations with higher class separation obtain higher accuracy on the original task, but their features are less useful for downstream tasks. Our results suggest there exists a trade-off between learning invariant features for the original task and features relevant for transfer tasks.
accept
This paper performed an empirical study of the impact of loss functions on the transferability of deep features for downstream tasks, and pinpointed several interesting findings which were previously unknown. Reviewers generally believed that this study is novel and significant to the community, the experimentation with several quantitative measures is solid, and the findings are conclusive and inspiring. Reviewers also raised some relevant concerns about the limited scope of this study, such as the limitation to ImageNet pre-training and less interesting downstream datasets, and to vision classification backbones and SGD optimizers. While these concerns are largely a matter of exhaustiveness, the authors are encouraged to extend the study to at least more interesting downstream datasets (such as Chest X-ray in the transfusion work, NeurIPS 2019) embodying a larger diversity of domain shifts, because this is the most relevant perspective to make their findings more conclusive for practical problems.
train
[ "hZwx3zc-iC", "KlyqOI3rwwl", "ZjQ4jSFOuP", "YK1f7Op8wA", "1D9HnPa2r1", "5MJIriNnKoJ", "jQuaxzrTMOq", "An78uNJtLtk", "DXK1RQoGSFb", "zFhXZFslOxv", "W_IW0rbtQyY", "RmuROY8tVKW", "vxlUpt0eyG6", "alY1_OqqKaK", "4nx8nlOlxQ4" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the impact of common supervised objectives and regularization on transfer learning. With extensive experiments, the authors find out that although these objectives (or regularizers) lead to better performance on the pre-training dataset, their performance on the downstream tasks are always worse...
[ 5, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 9, 6, 7 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nips_2021_8twKpG5s8Qh", "W_IW0rbtQyY", "RmuROY8tVKW", "DXK1RQoGSFb", "zFhXZFslOxv", "nips_2021_8twKpG5s8Qh", "An78uNJtLtk", "5MJIriNnKoJ", "4nx8nlOlxQ4", "vxlUpt0eyG6", "hZwx3zc-iC", "alY1_OqqKaK", "nips_2021_8twKpG5s8Qh", "nips_2021_8twKpG5s8Qh", "nips_2021_8twKpG5s8Qh" ]
nips_2021_FMPuzXV1fR
Breaking the centralized barrier for cross-device federated learning
Federated learning (FL) is a challenging setting for optimization due to the heterogeneity of the data across different clients which gives rise to the client drift phenomenon. In fact, obtaining an algorithm for FL which is uniformly better than simple centralized training has been a major open problem thus far. In this work, we propose a general algorithmic framework, Mime, which i) mitigates client drift and ii) adapts arbitrary centralized optimization algorithms such as momentum and Adam to the cross-device federated learning setting. Mime uses a combination of control-variates and server-level statistics (e.g. momentum) at every client-update step to ensure that each local update mimics that of the centralized method run on iid data. We prove a reduction result showing that Mime can translate the convergence of a generic algorithm in the centralized setting into convergence in the federated setting. Further, we show that when combined with momentum based variance reduction, Mime is provably faster than any centralized method--the first such result. We also perform a thorough experimental exploration of Mime's performance on real world datasets.
accept
Most of the concerns of the reviewers were addressed during the rebuttal period, and the reviewers unanimously believe that the paper is above the acceptance threshold, as the theoretical results are novel. Please revise the paper according to the reviewers' comments to clarify the points of confusion. In addition, please address the following concern, which was raised by the reviewers in the discussion period: -- All plots are for test accuracy/error. Please include the optimization error convergence rate plots to have a better connection with the rest of the paper.
train
[ "QRTirU2lZEk", "mgCKEnqkKTr", "XpeNtWQ-y5a", "ObHj0HioIK", "xTd36mI5Fox", "bjDQzS0Bx4w", "A53FDrbKit8", "2ljiqa0eOWQ", "3sNIw-lHhGr", "-CqzYW_Hvh", "VacOma8luwM", "OYGZQPYO9-B", "IWtctTeKSyo", "RyrSZ6dkh63", "fEqUzJQpV-7", "LSjcKXzdpaO", "EI_6epfWQT", "3hIrTDeQli5" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ " We appreciate the updated and the increased score :)\n\n> One issue I have is the claim that their algorithm beats the centralized lower bound, which is expected because they allow multiple local updates.\n\nIndeed, we circumvent centralized lower bounds using local steps. While it may be intuitive that local st...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, 7, -1, 6, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, 4, -1, 3, -1, -1, -1, -1, -1 ]
[ "XpeNtWQ-y5a", "nips_2021_FMPuzXV1fR", "EI_6epfWQT", "nips_2021_FMPuzXV1fR", "2ljiqa0eOWQ", "A53FDrbKit8", "-CqzYW_Hvh", "ObHj0HioIK", "EI_6epfWQT", "OYGZQPYO9-B", "nips_2021_FMPuzXV1fR", "LSjcKXzdpaO", "nips_2021_FMPuzXV1fR", "VacOma8luwM", "nips_2021_FMPuzXV1fR", "ObHj0HioIK", "mgC...
nips_2021_VOjwYOfGZcL
Adversarially robust learning for security-constrained optimal power flow
Priya Donti, Aayushya Agarwal, Neeraj Vijay Bedmutha, Larry Pileggi, J. Zico Kolter
accept
This paper considers the N-k security-constrained optimal power flow problem using ideas from adversarially robust training. In particular, the authors design gradient-based methods to solve this problem and apply them to N-3 SCOPF, which has traditionally been considered out of reach of existing approaches. All the reviewers agree that this paper makes a nice and interesting contribution to this problem and recommend acceptance.
train
[ "rpkbUNhThz6", "3tMXZXL12Vd", "kLsOBFho2PA", "OcVivq7TIW8", "WsII8Mrss8", "9iPJ91bVDJ3", "8R3aw24opTn", "2H7KloPdhFX", "HSLL7-MTqjQ", "mzG86WsyfCc", "THRkx4YmcZh", "iZ0vsmnA0vB", "VbtoHFgsJyU", "rUAFTz9-sH6" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors develop a new method for solving the N-k SCOPF problem, an optimal power flow problem robust to any $k$ outages. Specifically, they relax the problem to allow for any convex combination of outages that has total 1-norm less than $k$. They then leverage recently popularized implicit differentiation tech...
[ 8, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 1, 2 ]
[ "nips_2021_VOjwYOfGZcL", "kLsOBFho2PA", "OcVivq7TIW8", "8R3aw24opTn", "HSLL7-MTqjQ", "nips_2021_VOjwYOfGZcL", "2H7KloPdhFX", "rpkbUNhThz6", "mzG86WsyfCc", "9iPJ91bVDJ3", "rUAFTz9-sH6", "VbtoHFgsJyU", "nips_2021_VOjwYOfGZcL", "nips_2021_VOjwYOfGZcL" ]
nips_2021_m0l7vTv70BK
Learning a Single Neuron with Bias Using Gradient Descent
Gal Vardi, Gilad Yehudai, Ohad Shamir
accept
The reviewers generally find the paper to be a valuable addition to the NeurIPS proceedings, despite some concerns about the originality (i.e., the paper seems to be a direct extension of prior works in terms of the problem setup and proof techniques). The AC recommends that the authors discuss the relationship with prior works (on both a conceptual level and a low-level technical level) more prominently.
train
[ "qCvNJc9efE", "u_KpBQHKXa2", "sBRYUffFOAc", "XaAZ9HwP2I", "vd15wdFDBl", "rrhw4XBgKud", "CgUQTGxCNB4", "HXIC2kLn7Zn", "BCXtquGCF5b" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarification. I choose to retain my score. ", "In this paper, the authors consider the task of learning a single ReLu neuron with bias using gradient descent, which has a significantly different behavior than the unbiased case. For example, adding a bias leads to examples where gradient desce...
[ -1, 7, 7, -1, -1, -1, -1, 6, 5 ]
[ -1, 3, 3, -1, -1, -1, -1, 4, 5 ]
[ "XaAZ9HwP2I", "nips_2021_m0l7vTv70BK", "nips_2021_m0l7vTv70BK", "BCXtquGCF5b", "sBRYUffFOAc", "u_KpBQHKXa2", "HXIC2kLn7Zn", "nips_2021_m0l7vTv70BK", "nips_2021_m0l7vTv70BK" ]
nips_2021_rXppDp76U9
Making a (Counterfactual) Difference One Rationale at a Time
Rationales, snippets of extracted text that explain an inference, have emerged as a popular framework for interpretable natural language processing (NLP). Rationale models typically consist of two cooperating modules: a selector and a classifier, with the goal of maximizing the mutual information (MMI) between the "selected" text and the document label. Despite their promise, MMI-based methods often pick up on spurious text patterns and result in models with nonsensical behaviors. In this work, we investigate whether counterfactual data augmentation (CDA), without human assistance, can improve the performance of the selector by lowering the mutual information between spurious signals and the document label. Our counterfactuals are produced in an unsupervised fashion using class-dependent generative models. Through an information-theoretic lens, we derive properties of the unaugmented dataset for which our CDA approach would succeed. The effectiveness of CDA is empirically evaluated by comparing against several baselines, including an improved MMI-based rationale schema, on two multi-aspect datasets. Our results show that CDA produces rationales that better capture the signal of interest.
accept
This paper initially received diverging scores, but after the author response, two reviewers updated their reviews and increased their scores. The reviewers are now unanimous in their assessment that the paper tackles an important problem using an interesting and novel approach, and draws a unique connection between explanations and spurious correlations. It also shows good improvements compared to the baseline. There are some additional experiments and analyses requested by the reviewers that I urge the authors to add: more extraction methods, more careful experimental settings (multiple runs, report mean and std), other datasets besides the two (the fact that there are several versions of them isn't enough), and qualitative analyses. Finally, while not mentioned by the reviewers, one should acknowledge that the evaluation only targets plausibility (by comparing with human explanations) and not faithfulness. This is especially concerning considering the issues with select-predict models raised by Jacovi and Goldberg, "Aligning Faithful Interpretations with their Social Attribution". Therefore, the paper should at the very least discuss these limitations, and more ideally, also evaluate faithfulness (e.g., via sufficiency and comprehensiveness metrics; see Wiegreffe and Marasović, "Teach Me to Explain: A Review of Datasets for Explainable NLP", for a good summary of these concepts). Despite these issues, the paper makes valuable contributions and I therefore recommend acceptance.
train
[ "HdHJVcQLD07", "rskkJpQP6wo", "hzl2NprBt03", "EbZgaik6NwV", "1ZE92ITU-m", "BUukyTAsU-F", "147wxqhfIni", "aUu0B3xbaO", "_F_aKtVVJd", "yVCL2_h4Gjf", "EgmWIG6TU9I", "ZdpPkJMrve", "bIVkAIY0mNF", "dKBUFurw7z" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We will for sure add details about the dataset choices. Thank-you for your time and notes!", " We really appreciate your time and suggestions. Thanks!", " Thank-you again for your time, questions, and comments.\n\n[Question] I am not sure what are the 7 datasets considered are -- from the paper and Supp mater...
[ -1, -1, -1, -1, 7, -1, -1, 8, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, -1, -1, 5, -1, -1, 4, -1, -1, -1, -1, 4, 3 ]
[ "147wxqhfIni", "BUukyTAsU-F", "EbZgaik6NwV", "yVCL2_h4Gjf", "nips_2021_rXppDp76U9", "ZdpPkJMrve", "EgmWIG6TU9I", "nips_2021_rXppDp76U9", "aUu0B3xbaO", "dKBUFurw7z", "bIVkAIY0mNF", "1ZE92ITU-m", "nips_2021_rXppDp76U9", "nips_2021_rXppDp76U9" ]
nips_2021_wEFC5PY0g_0
3D Siamese Voxel-to-BEV Tracker for Sparse Point Clouds
3D object tracking in point clouds is still a challenging problem due to the sparsity of LiDAR points in dynamic environments. In this work, we propose a Siamese voxel-to-BEV tracker, which can significantly improve tracking performance in sparse 3D point clouds. Specifically, it consists of a Siamese shape-aware feature learning network and a voxel-to-BEV target localization network. The Siamese shape-aware feature learning network can capture 3D shape information of the object to learn discriminative features of the object, so that the potential target can be identified from the background in sparse point clouds. To this end, we first perform template feature embedding to embed the template's feature into the potential target and then generate a dense 3D shape to characterize the shape information of the potential target. For localizing the tracked target, the voxel-to-BEV target localization network regresses the target's 2D center and the z-axis center from the dense bird's eye view (BEV) feature map in an anchor-free manner. Concretely, we compress the voxelized point cloud along the z-axis through max pooling to obtain a dense BEV feature map, where the regression of the 2D center and the z-axis center can be performed more effectively. Extensive evaluation on the KITTI tracking dataset shows that our method significantly outperforms the current state-of-the-art methods by a large margin.
accept
Initially, one of the reviewers expressed concerns about the paper (lack of clarity and limited novelty) and ranked it below acceptance. Another area of concern related to a number of claims that the authors made but did not fully justify experimentally. As the ensuing rebuttal managed to successfully address most of the reviewers' concerns, the ACs and the majority of the reviewers agreed that this is a strong paper that deserves acceptance. The authors are highly encouraged to address the key comments reported by the reviewers as well as to implement all the improvements (as indicated by the authors in the rebuttal, including addressing the overstatements/claims discussed above) in the final camera-ready version.
train
[ "8El_C2FdoI", "SoB68Edyia6", "TNe3qht9sQN", "rTll8d26gnK", "HLux-e7dDmZ", "Mqe-QjU4CR", "58x-b-BTPoa", "mvDpTJHxPz2", "widLntAC1l", "sweR5ZWHKbG", "LP04VPV3UQz", "Q22z63qcv5", "OGiI8RFZ3OS", "37FkRsHERl0", "Dlsn7LvU_j-", "80ds-xXACf6", "yZ-yVZ6S2hE", "QaoZxeB4YZG", "jsYdRF9In1Z",...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "...
[ "This paper proposes a 3D object tracker that localizes the target object given its template and a small search space. Their proposed model largely consists of the following two modules: Siamese Shape-Aware Feature Learning Network and Voxel-to-BEV target localization. The feature learning network takes the templat...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 6, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4, 4, 4 ]
[ "nips_2021_wEFC5PY0g_0", "LP04VPV3UQz", "58x-b-BTPoa", "widLntAC1l", "37FkRsHERl0", "mvDpTJHxPz2", "sweR5ZWHKbG", "OGiI8RFZ3OS", "yZ-yVZ6S2hE", "mxdJ9VPAmfa", "8El_C2FdoI", "80ds-xXACf6", "XDbZYoLNFzh", "P1ER53b79TU", "jsYdRF9In1Z", "QaoZxeB4YZG", "nips_2021_wEFC5PY0g_0", "nips_202...
nips_2021_lxj5ksjmwnq
Stateful Strategic Regression
Automated decision-making tools increasingly assess individuals to determine if they qualify for high-stakes opportunities. A recent line of research investigates how strategic agents may respond to such scoring tools to receive favorable assessments. While prior work has focused on the short-term strategic interactions between a decision-making institution (modeled as a principal) and individual decision-subjects (modeled as agents), we investigate interactions spanning multiple time-steps. In particular, we consider settings in which the agent's effort investment today can accumulate over time in the form of an internal state - impacting both his future rewards and those of the principal. We characterize the Stackelberg equilibrium of the resulting game and provide novel algorithms for computing it. Our analysis reveals several intriguing insights about the role of multiple interactions in shaping the game's outcome: First, we establish that in our stateful setting, the class of all linear assessment policies remains as powerful as the larger class of all monotonic assessment policies. While recovering the principal's optimal policy requires solving a non-convex optimization problem, we provide polynomial-time algorithms for recovering both the principal's and the agent's optimal policies under common assumptions about the process by which effort investments convert to observable features. Most importantly, we show that with multiple rounds of interaction at her disposal, the principal is more effective at incentivizing the agent to accumulate effort in her desired direction. Our work addresses several critical gaps in the growing literature on the societal impacts of automated decision-making - by focusing on longer time horizons and accounting for the compounding nature of decisions individuals receive over time.
accept
There is broad agreement among the reviewers on what the paper contributes, but the reviewers evaluate it differently, with one quite strongly in favor and the others more on the fence but not opposed to accepting; part of this is how to value the conceptual aspect of the paper, which seems to be its main strength. I'd recommend a weak accept for this one. There are some specific comments the authors need to address, and generally we encourage them to be open and transparent, in particular highlighting and explaining the assumption that socially undesirable efforts don't accumulate.
train
[ "pP6c6PrOnlr", "sbNM8a_An1R", "hJBMGOvnAD", "ZjJskWyfc0G", "jK3BzLRy5Bd", "2yB3IDN5LpI", "-xAxnYqWjwI", "MFPbTsxsBP", "akYUCMkVvmH", "-uemgKXHfCf", "VkgVXuXewq", "D93eB-3vEXJ" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper discusses the problem of stateful strategic regression, where the basic idea is to model the interaction between a principal and a strategic agent over multiple time steps. Specifically, the principal announces a sequence of assessment policies, and the agent invests effort to play a best response to thi...
[ 8, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_lxj5ksjmwnq", "ZjJskWyfc0G", "nips_2021_lxj5ksjmwnq", "akYUCMkVvmH", "2yB3IDN5LpI", "-uemgKXHfCf", "pP6c6PrOnlr", "VkgVXuXewq", "hJBMGOvnAD", "D93eB-3vEXJ", "nips_2021_lxj5ksjmwnq", "nips_2021_lxj5ksjmwnq" ]
nips_2021_wRXzOa2z5T
Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning
We challenge a common assumption underlying most supervised deep learning: that a model makes a prediction depending only on its parameters and the features of a single input. To this end, we introduce a general-purpose deep learning architecture that takes as input the entire dataset instead of processing one datapoint at a time. Our approach uses self-attention to reason about relationships between datapoints explicitly, which can be seen as realizing non-parametric models using parametric attention mechanisms. However, unlike conventional non-parametric models, we let the model learn end-to-end from the data how to make use of other datapoints for prediction. Empirically, our models solve cross-datapoint lookup and complex reasoning tasks unsolvable by traditional deep learning models. We show highly competitive results on tabular data, early results on CIFAR-10, and give insight into how the model makes use of the interactions between points.
accept
This paper explores alternative modeling paradigms for supervised deep learning: the authors describe an approach wherein the input to the model is the entire dataset along with the query instance. Predictions are made collectively via attention between (1) data points and (2) attributes. This model, which attends over the entire dataset for prediction, is named Non-Parametric Transformers (NPTs). The authors describe how the NPT can be applied in various settings, such as classification. Through extensive experiments and ablation studies, the effectiveness of NPT is demonstrated on established benchmark tasks. Moreover, I (and all the reviewers) found the paper well-written and very clear in its exposition and idea presentation. We thank the reviewers and authors for engaging in an active discussion, which cleared a lot of the concerns (e.g., speed/FLOPs), and a lot of constructive feedback was provided to improve the paper. The authors provided extensive new empirical results/ablation studies as part of the discussion; please include them in the final version of the paper as they add great value and understanding to the model as a whole. Overall, the reviewers reached the consensus that the paper has many merits and should be accepted.
train
[ "iUE_ZKFh4le", "zzgIsrLQcTT", "mi56Fevw5Ac", "VACfXp1x7qR", "pq6sauYBS0b", "7euIu5kyMV-", "nw9tZjyKSoE", "xKrH_ooJx-D", "Qv786_0u2n", "5aaiA01F721", "JsnbOqQ9Qmz", "7OUdD486_Dq", "Q_KPfVlJofJ", "Y8Zftr02T9K", "dNAUsm3s6f8", "gN4_rLidezv", "g-v3xZFjPfA", "fXEW5YH8Qs", "fOmR5zaACAA...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ "Deep parametric models have demonstrated tremendous success in NLP, computer vision, and many other settings. These models typically take as input an instance and output a prediction / label(s) for that instance. In this paper, the authors describe an approach that stands in contrast, the input to the model is the...
[ 8, -1, -1, -1, -1, 7, -1, 8, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1 ]
[ "nips_2021_wRXzOa2z5T", "mi56Fevw5Ac", "fOmR5zaACAA", "nw9tZjyKSoE", "Qv786_0u2n", "nips_2021_wRXzOa2z5T", "g-v3xZFjPfA", "nips_2021_wRXzOa2z5T", "5aaiA01F721", "JsnbOqQ9Qmz", "fXEW5YH8Qs", "nips_2021_wRXzOa2z5T", "Y8Zftr02T9K", "gN4_rLidezv", "nips_2021_wRXzOa2z5T", "dNAUsm3s6f8", "...
nips_2021_6p-zJaheTW
Your head is there to move you around: Goal-driven models of the primate dorsal pathway
Neurons in the dorsal visual pathway of the mammalian brain are selective for motion stimuli, with the complexity of stimulus representations increasing along the hierarchy. This progression is similar to that of the ventral visual pathway, which is well characterized by artificial neural networks (ANNs) optimized for object recognition. In contrast, there are no image-computable models of the dorsal stream with comparable explanatory power. We hypothesized that the properties of dorsal stream neurons could be explained by a simple learning objective: the need for an organism to orient itself during self-motion. To test this hypothesis, we trained a 3D ResNet to predict an agent's self-motion parameters from visual stimuli in a simulated environment. We found that the responses in this network accounted well for the selectivity of neurons in a large database of single-neuron recordings from the dorsal visual stream of non-human primates. In contrast, ANNs trained on an action recognition dataset through supervised or self-supervised learning could not explain responses in the dorsal stream, despite also being trained on naturalistic videos with moving objects. These results demonstrate that an ecologically relevant cost function can account for dorsal stream properties in the primate brain.
accept
This paper received 4 accepts (including a marginal accept). The paper makes an original and complete contribution to the field and addresses an important gap in biological vision. I would add that unlike much of the current work in computational models of vision, where the goal appears to be simply to quantitatively fit model responses to neural data, this paper performs extensive in silico electrophysiology on the model responses to highlight qualitative properties using classic stimuli that have been used to characterize the dorsal stream. The AC recommends accepting as a spotlight.
train
[ "HpHbhV3OmU", "bFAUROE0Gfq", "ZJIr3beB70n", "F3Nm9NR8JY-", "51mqy5t6LP9", "VDqa0q5LZoj", "yXXJJ895vxf", "soxE5KAWiuq", "2QVKWG8BYr", "CmSP5NPskZ" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your patience as we refit the models at multiple scales. We find that changing the scale from .66X to 1X to 1.5X broadly maintains the layer assignments in dorsalnet, with some minor shifts in the expected direction: the median mean layer assignment for V1 is 1.0 at scale .66X, 1.14 at scale 1X, 1.3...
[ -1, 6, -1, 7, -1, -1, -1, -1, 7, 8 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, 4, 4 ]
[ "ZJIr3beB70n", "nips_2021_6p-zJaheTW", "soxE5KAWiuq", "nips_2021_6p-zJaheTW", "F3Nm9NR8JY-", "2QVKWG8BYr", "CmSP5NPskZ", "bFAUROE0Gfq", "nips_2021_6p-zJaheTW", "nips_2021_6p-zJaheTW" ]
nips_2021_KYrenOCQuM
Achieving Rotational Invariance with Bessel-Convolutional Neural Networks
For many applications in image analysis, learning models that are invariant to translations and rotations is paramount. This is the case, for example, in medical imaging, where the objects of interest can appear at arbitrary positions, with arbitrary orientations. As of today, Convolutional Neural Networks (CNNs) are one of the most powerful tools for image analysis. They achieve, thanks to convolutions, invariance with respect to translations. In this work, we present a new type of convolutional layer that takes advantage of Bessel functions, well known in physics, to build Bessel-CNNs (B-CNNs) that are invariant to the entire continuous set of possible rotation angles by design.
accept
After a lively discussion, all reviewers found this paper interesting and thought it ought to be accepted. Everyone found the idea of using Bessel functions to achieve local rotational invariance clever and thought the technique could be quite impactful, especially in the low-data regime. Reviewers also found the experimental results compelling.
train
[ "CJuzHUGHBtA", "DLMpmClQvHU", "eT7FKrhNzxx", "igkoWmPh_R2", "aU3MhF1kH6V", "ALYyzjcOgzX", "t0WYFplcUjJ", "MEKQQxXJdP5", "CNivyMAyTo-" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'd like to thank the authors for a detailed response. I keep my score (7, accept). I think this is a good paper which gives a good solution to an important problem. I still have some minor concerns over effects of interpolation, statements about low-pass images, imperfect invariance and related guarantees.", "...
[ -1, 6, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, 3, 4, 4 ]
[ "igkoWmPh_R2", "nips_2021_KYrenOCQuM", "t0WYFplcUjJ", "MEKQQxXJdP5", "CNivyMAyTo-", "DLMpmClQvHU", "nips_2021_KYrenOCQuM", "nips_2021_KYrenOCQuM", "nips_2021_KYrenOCQuM" ]
nips_2021_u7Qb7pQk8tF
Unsupervised Domain Adaptation with Dynamics-Aware Rewards in Reinforcement Learning
Unsupervised reinforcement learning aims to acquire skills without prior goal representations, where an agent automatically explores an open-ended environment to represent goals and learn the goal-conditioned policy. However, this procedure is often time-consuming, limiting rollouts in some potentially expensive target environments. The intuitive approach of training in another interaction-rich environment disrupts the reproducibility of trained skills in the target environment due to dynamics shifts and thus inhibits direct transfer. Assuming free access to a source environment, we propose an unsupervised domain adaptation method to identify and acquire skills across dynamics. In particular, we introduce a KL-regularized objective to encourage the emergence of skills, rewarding the agent both for discovering skills and for aligning its behaviors with respect to dynamics shifts. This suggests that both dynamics (source and target) shape the reward to facilitate the learning of adaptive skills. We also conduct empirical experiments to demonstrate that our method can effectively learn skills that can be smoothly deployed in the target environment.
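As a hedged illustration of what a dynamics-aware reward correction can look like in practice (this follows the classifier-based log-ratio estimator used in DARC-style methods and is not the paper's exact KL-regularized objective; all names are my own): two classifiers trained to distinguish source from target transitions yield an estimate of log p_target(s'|s,a) - log p_source(s'|s,a), which can be added to a skill-discovery reward to penalize behaviors whose outcomes disagree across the two dynamics.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim=2):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

class DynamicsGapReward(nn.Module):
    """Classifier-based estimate of log p_target(s'|s,a) - log p_source(s'|s,a).
    Both classifiers are trained on transitions labeled by environment
    (label 1 = target, label 0 = source)."""
    def __init__(self, s_dim, a_dim):
        super().__init__()
        self.clf_sas = mlp(2 * s_dim + a_dim)   # sees (s, a, s')
        self.clf_sa = mlp(s_dim + a_dim)        # sees (s, a); cancels the marginal term

    def forward(self, s, a, s_next):
        log_sas = self.clf_sas(torch.cat([s, a, s_next], dim=-1)).log_softmax(-1)
        log_sa = self.clf_sa(torch.cat([s, a], dim=-1)).log_softmax(-1)
        # By Bayes' rule, the (s, a, s') log-odds minus the (s, a) log-odds
        # equals the log-ratio of the transition dynamics alone.
        return (log_sas[..., 1] - log_sas[..., 0]) - (log_sa[..., 1] - log_sa[..., 0])
```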
accept
The reviewers highly appreciated the author response and the additional details. Reviewer UYK4 didn't update the review but indicated in the discussion that they had read the replies. We had quite an extensive discussion among the reviewers on whether the setting of this paper is too narrow and whether that would actually speak against publishing it. In the end they came up with a few examples (please add some motivating examples with practical relevance to the paper) and everybody was convinced that this is a sound and interesting paper.
test
[ "J9MJ6Vp3bnm", "pFSRrvDBKI4", "SkFAA7gGT-B", "RVFAsFfR4fq", "e1ioTkW5bkF", "AdHBubd58KH", "ikwiKVzQGHb", "_ct4yXOJMN", "FdM4399aJR0" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response and thank you for providing the pointer concerning the a-priori access to the target environment.", " Thank you! We are very glad that you agree with the importance of our problem, and we appreciate your insightful comments. \n\n(3) \"It is possible that \"GPIM, finetuned in target\"...
[ -1, -1, 7, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, 3, 3 ]
[ "AdHBubd58KH", "RVFAsFfR4fq", "nips_2021_u7Qb7pQk8tF", "e1ioTkW5bkF", "SkFAA7gGT-B", "FdM4399aJR0", "_ct4yXOJMN", "nips_2021_u7Qb7pQk8tF", "nips_2021_u7Qb7pQk8tF" ]
nips_2021_yILzFBjR0Y
GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph
Representation learning on textual graphs aims to generate low-dimensional embeddings for the nodes based on individual textual features and neighbourhood information. Recent breakthroughs in pretrained language models and graph neural networks push forward the development of corresponding techniques. Existing works mainly rely on a cascaded model architecture: the textual features of nodes are first encoded independently by language models; the textual embeddings are then aggregated by graph neural networks. However, this architecture is limited due to the independent modeling of textual features. In this work, we propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models. With the proposed architecture, text encoding and graph aggregation are fused into an iterative workflow, making each node's semantics accurately comprehended from a global perspective. In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on the graph. Extensive evaluations are conducted on three large-scale benchmark datasets, where GraphFormers outperform the SOTA baselines with comparable running efficiency. The source code is released at https://github.com/microsoft/GraphFormers .
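A minimal sketch of the nested design described above, assuming a toy fully connected node set and using each node's leading ("CLS"-like) token state as its summary (the real model uses the actual graph neighbourhood and a more careful, asymmetric attention scheme; the class and variable names are my own):

```python
import torch
import torch.nn as nn

class GNNNestedLayer(nn.Module):
    """One GraphFormers-style layer (illustrative): before the standard
    Transformer block runs over each node's tokens, the nodes exchange
    information through an attention step over their leading-token states,
    and the result is prepended as a per-node "graph token"."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.graph_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.text_block = nn.TransformerEncoderLayer(
            dim, heads, dim_feedforward=4 * dim, batch_first=True)

    def forward(self, tokens):                     # tokens: (num_nodes, seq_len, dim)
        cls = tokens[:, 0, :].unsqueeze(0)         # (1, num_nodes, dim): one node "sequence"
        g, _ = self.graph_attn(cls, cls, cls)      # graph aggregation across nodes
        g = g.squeeze(0).unsqueeze(1)              # (num_nodes, 1, dim)
        fused = torch.cat([g, tokens], dim=1)      # prepend a graph token to every node
        return self.text_block(fused)[:, 1:, :]    # drop it again after the block

layer = GNNNestedLayer(dim=64)
out = layer(torch.randn(5, 16, 64))                # a center node and 4 neighbours
print(out.shape)                                   # torch.Size([5, 16, 64])
```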
accept
This paper proposes a new model for learning representations on a textual graph. The key idea is to combine a Transformer and a graph neural network. Experimental results show that the proposed approach works better than the traditional cascaded approach. Strength * The paper is generally clearly written, although there is still room for improvement in the presentation. * The proposed model appears to be reasonable and technically sound. * Experimental results have demonstrated the effectiveness of the proposed method. * The reviewers pointed out some issues with the paper. The authors have successfully addressed most of them; in particular, they provide new experimental results. Weakness * There are details that are not explained very clearly. The authors are encouraged to further improve the presentation. * There are still problems with the English of the paper. For example, “graph aggregation and text encoding are iterative performed” --> it should be "iteratively". The authors are encouraged to do further proofreading. Please significantly revise the paper based on your replies in the rebuttal if the paper is accepted. Minor comments: From the explanation in the abstract and introduction, it is not clear to non-experts what the downstream tasks are. Please add that.
train
[ "Dg-TrJacDX", "WmgEa5y7DTD", "pH2tTeeKS83", "3zVbh9_YaOI", "nniS0wzpf7x", "p2_eyK3q6R1", "8NU6ukz6erH", "e1bygXnuo_D", "n_9aKF0Kcy2", "iE8X-h6saOT", "8gFbaxq84Yd", "7h5KzMFS9u", "8RdQwLFc_-q", "ZnVwC7mKKet", "fWmD1M1fCuA", "nShK-eCtXAs", "LekmNA4Pq36", "WHuNMh4Nb2V" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Reading the other reviews and responses, I raised my score.", "This paper proposes a means to encode text feature for nodes in textual graphs, where a layerwise aggregation module (implemented as a multi-head self-attention) is appended after each Transformer encoder block that aggregates the hidden representat...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "8gFbaxq84Yd", "nips_2021_yILzFBjR0Y", "3zVbh9_YaOI", "nips_2021_yILzFBjR0Y", "p2_eyK3q6R1", "8NU6ukz6erH", "nShK-eCtXAs", "n_9aKF0Kcy2", "8RdQwLFc_-q", "WHuNMh4Nb2V", "7h5KzMFS9u", "WmgEa5y7DTD", "LekmNA4Pq36", "3zVbh9_YaOI", "nips_2021_yILzFBjR0Y", "ZnVwC7mKKet", "nips_2021_yILzFBj...
nips_2021_z71OSKqTFh7
A Universal Law of Robustness via Isoperimetry
Sebastien Bubeck, Mark Sellke
accept
This paper adds an interesting ingredient to the substantial literature on interpolating / memorizing data using neural networks by asking how things change if the neural network is required to be smooth (Lipschitz). The results essentially show that there is a cost to smoothness, appearing as a multiplicative factor of d in the number of parameters of the model. Smoothness of the model loosely corresponds to robustness against adversarial examples, and thus, the results shed light on the empirically observed phenomenon that sufficiently large models can be made more robust. Significantly, they give fairly precise numerical predictions that may help to guide the practice of deep learning.
train
[ "kcs6OcQ-fwt", "eKc1JF09CvC", "zZ8gCQz5KJk", "_Q_OKEwOfMy", "VUawC3vChcv", "20PNSice_9D", "RxsyiZqVoVM", "9kTCeX23s9z", "t8wevSYJxA5", "8qCHFfsIsl3", "tmP3PTtaY-o", "gJQbFBdCN0p" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper provides a lower bound on the *global* Lipschitz constant (with respect to the 2-norm) of any function within a parameterized function class that fits data below the noise level that scales inversely in the number of parameters p, showing that overparameterization is *necessary* to ensure a smooth fit o...
[ 7, -1, 10, -1, -1, 6, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, 5, -1, -1, 4, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_z71OSKqTFh7", "t8wevSYJxA5", "nips_2021_z71OSKqTFh7", "VUawC3vChcv", "tmP3PTtaY-o", "nips_2021_z71OSKqTFh7", "9kTCeX23s9z", "20PNSice_9D", "kcs6OcQ-fwt", "gJQbFBdCN0p", "zZ8gCQz5KJk", "nips_2021_z71OSKqTFh7" ]
nips_2021_sAaymAJB_OW
On Contrastive Representations of Stochastic Processes
Learning representations of stochastic processes is an emerging problem in machine learning, with applications from meta-learning to physical object models to time series. Typical methods rely on exact reconstruction of observations, but this approach breaks down as observations become high-dimensional or noise distributions become complex. To address this, we propose a unifying framework for learning contrastive representations of stochastic processes (CReSP) that does away with exact reconstruction. We dissect potential use cases for stochastic process representations and propose methods that accommodate each. Empirically, we show that our methods are effective for learning representations of periodic functions, 3D objects, and dynamical processes. Our methods tolerate noisy high-dimensional observations better than traditional approaches, and the learned representations transfer to a range of downstream tasks.
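For concreteness, here is a minimal sketch of a reconstruction-free contrastive objective of the kind the abstract describes, pairing a context view and a target view of the same process realisation (CReSP's actual objectives additionally condition on index sets and cover several distinct use cases; the interface and names below are my own):

```python
import torch
import torch.nn.functional as F

def process_infonce(enc, ctx_obs, tgt_obs, temp=0.1):
    """InfoNCE over process realisations: the embedding of a context set and of
    a held-out observation from the *same* realisation form a positive pair;
    other realisations in the batch act as negatives. No reconstruction of the
    (possibly high-dimensional, noisy) observations is ever required.
    ctx_obs, tgt_obs: batched views paired by realisation; enc: obs -> (batch, d)."""
    z_ctx = F.normalize(enc(ctx_obs), dim=-1)
    z_tgt = F.normalize(enc(tgt_obs), dim=-1)
    logits = z_ctx @ z_tgt.t() / temp                 # (batch, batch) similarities
    labels = torch.arange(len(logits), device=logits.device)
    return F.cross_entropy(logits, labels)           # diagonal entries are positives
```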
accept
One of the reviews lacks thoroughness - I'm happy to disregard this. The work proposes meta-learning through neural processes and contrastive learning. There is some disagreement about the level of contribution here, with several reviewers pointing out that the FCLR method also contributes contrastive learning, and another reviewer pointing out that this is not so incremental because of extensions in several directions. One reviewer raises a strong technical objection: that the "stochastic process perspective taken in the paper seems only tenuously related to the actual training and prediction schemes". The authors refute this - "using a deterministic embedding of context data that is sampled from a stochastic process is (not) invalid". The authors agree to amend the work to improve clarity, and the reviewer conceded that clarifying this point in the manuscript would make for a strong contribution. One reviewer raised issues with the experiments - that they do not illuminate the method or support the proposed ideas. Yet I'm inclined to agree with another reviewer who found the detailed analysis, descriptions, and hyperparameters in the supplementary material. I also approve of the experiment showing limitations of the method in the strongly multimodal case. Overall, I think the authors have enough feedback from the reviewers to make some tweaks to the explanations in the paper, making this a contribution to the NeurIPS community.
val
[ "coj4Jd4ahi4", "FT46bO842c", "pdouWOZI_-i", "ldVLBATw6y", "AyMZNm2jSLN", "PrErOIxfQ4z", "bvATs99IAx0", "TRJ7J-DoTQB", "arQ6DU3tvWh", "MjM2SqIcv3-", "WHOtRtmtOed", "DuHtgzaSQt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for their response and clarification of the main questions that I included in my review. I still think that the introduction of contrastive learning into the neural process mechanism is interesting, particularly on the point of making tasks likelihood-free. My main concerns about the clarity...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "AyMZNm2jSLN", "pdouWOZI_-i", "TRJ7J-DoTQB", "nips_2021_sAaymAJB_OW", "MjM2SqIcv3-", "DuHtgzaSQt", "WHOtRtmtOed", "arQ6DU3tvWh", "nips_2021_sAaymAJB_OW", "nips_2021_sAaymAJB_OW", "nips_2021_sAaymAJB_OW", "nips_2021_sAaymAJB_OW" ]
nips_2021_MR4I3CjpeCv
A Domain-Shrinking based Bayesian Optimization Algorithm with Order-Optimal Regret Performance
Sudeep Salgia, Sattar Vakili, Qing Zhao
accept
The reviewers are all in agreement that this work is suitable for publication at NeurIPS, particularly as it is one of the very few works to attain near-optimal cumulative regret in kernelized bandits. The reviewers also judged that their concerns were adequately addressed by the authors. Although the comments/suggestions given are largely minor, I ask the authors to carefully consider them in the camera-ready version. I particularly highlight the following regarding the related work: - "[5] seems potentially very closely related, but is only mentioned in passing. Please provide a clear comparison in the responses and also in the final paper, both in Section 1.3 and in any relevant technical sections." - "[the recent paper] "High-Dimensional Experimental Design and Kernel Bandits” (ICML 2021) should also be mentioned" (it is appreciated that this paper was only made public very close to the NeurIPS deadline)
train
[ "R3x4pblsuii", "XPl9ZW2Qg_", "k-BW3xDHeMs", "T2OObG_UHgX", "M2TFclBS3bB", "WiWoZULHlk0", "UgqXrAdI2om" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response and for answering my questions. On the whole, my opinion remains the same that this is a strong theoretical paper that will benefit from an extra page and some extra time for its experiment section. I'd definitely appreciate seeing a regret vs. # of samples plot in the final version. "...
[ -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, 4, 2, 4 ]
[ "XPl9ZW2Qg_", "UgqXrAdI2om", "M2TFclBS3bB", "WiWoZULHlk0", "nips_2021_MR4I3CjpeCv", "nips_2021_MR4I3CjpeCv", "nips_2021_MR4I3CjpeCv" ]
nips_2021_ba27-RzNaIv
Scalars are universal: Equivariant machine learning, structured like classical physics
Soledad Villar, David Hogg, Kate Storey-Fisher, Weichi Yao, Ben Blum-Smith
accept
Equivariant neural networks are very important for applications of ML to physics problems. This paper explores the idea of building equivariant neural networks from invariant scalar functions, which are simpler to build. This principle is well known and commonly used in physics but has not been extensively explored in the ML literature for equivariant model building. The paper was originally presented as a general mechanism for building gauge equivariance; however, all the presented theory only considers global (coordinate) symmetries. A few extra ingredients are needed to build gauge equivariance on top of the presented theory, such as the parallel transport of tensors at different locations to build local invariants. The authors promised that the paper would be revised to its correct scope. Concerns were raised regarding the lack of experiments in the original submission, but the authors added an implementation of these ideas and experiments to an anonymised git repository. The experimental results are a bit preliminary but look promising.
train
[ "-49lhd1MD4O", "-UyNc3C84z-", "ZqN9g2wQqnQ", "vXof7-ZtxJ", "OczDz1M_He", "qyKSgDyP9wU", "BDRgo5YULEv", "ZcDVgYZTDBm", "jjjmabNS-O0", "AxVC8mZ7b1m", "Vk5HFftTAk9", "ZhmIT1G44_o", "Gt_RnbxNN86", "TiRpHBmJSzo" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper makes an interesting observation using standard ideas from classical invariant theory. Specifically, that the design of networks equivariant to various symmetry groups in classical physics can be done using a collection of invariant scalars. This suggests an attractive alternative approach for the desig...
[ 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 2, 3, 2 ]
[ "nips_2021_ba27-RzNaIv", "AxVC8mZ7b1m", "qyKSgDyP9wU", "nips_2021_ba27-RzNaIv", "nips_2021_ba27-RzNaIv", "ZcDVgYZTDBm", "ZhmIT1G44_o", "OczDz1M_He", "TiRpHBmJSzo", "-49lhd1MD4O", "Gt_RnbxNN86", "nips_2021_ba27-RzNaIv", "nips_2021_ba27-RzNaIv", "nips_2021_ba27-RzNaIv" ]
nips_2021_X2K8KVEaAXG
Unsupervised Object-Level Representation Learning from Scene Images
Contrastive self-supervised learning has largely narrowed the gap to supervised pre-training on ImageNet. However, its success highly relies on the object-centric priors of ImageNet, i.e., different augmented views of the same image correspond to the same object. Such a heavily curated constraint becomes immediately infeasible when pre-training on more complex scene images with many objects. To overcome this limitation, we introduce Object-level Representation Learning (ORL), a new self-supervised learning framework for scene images. Our key insight is to leverage image-level self-supervised pre-training as the prior to discover object-level semantic correspondence, thus realizing object-level representation learning from scene images. Extensive experiments on COCO show that ORL significantly improves the performance of self-supervised learning on scene images, even surpassing supervised ImageNet pre-training on several downstream tasks. Furthermore, ORL improves downstream performance when more unlabeled scene images are available, demonstrating its great potential for harnessing unlabeled data in the wild. We hope our approach can motivate future research on more general-purpose unsupervised representation learning from scene data.
accept
- The proposed method tackles an important problem. The reviewers found the approach reasonable but technically not very novel. - The many concerns from the reviewers are addressed well enough by the rebuttal. - The strength is mainly the experimental results, which are broad, consistent, and reasonably convincing. - The clarity is good enough but can be improved.
train
[ "IbkPXUyBQM6", "-lg8JFxZZqC", "WoBjS4b46x8", "bB66Pri-41h", "y4yheRoZr-v", "PYwEWNGDV9d", "FZgeezoz7Tm", "5bgWlVBg7j", "GvUdWA8F0c", "DVkmpe_CAYc", "bbscr-6UA9" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work proposes a technique to adapt self-supervised contrastive learning to scene images where multiple objects are present in each image. Specifically, the proposed technique has three stages: 1. image-level pre-training with standard contrastive learning; 2. Finding object pairs images (crops) using selectiv...
[ 6, -1, -1, 7, -1, -1, -1, -1, -1, 4, 6 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_X2K8KVEaAXG", "DVkmpe_CAYc", "PYwEWNGDV9d", "nips_2021_X2K8KVEaAXG", "GvUdWA8F0c", "bB66Pri-41h", "DVkmpe_CAYc", "IbkPXUyBQM6", "bbscr-6UA9", "nips_2021_X2K8KVEaAXG", "nips_2021_X2K8KVEaAXG" ]
nips_2021_OeWooOxFwDa
Do Transformers Really Perform Badly for Graph Representation?
The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture and attains excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight for utilizing Transformers on graphs is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and show that, with our ways of encoding the structural information of graphs, many popular GNN variants are covered as special cases of Graphormer. The code and models of Graphormer will be made publicly available at \url{https://github.com/Microsoft/Graphormer}.
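To make the "structural encoding" idea concrete, here is a single-head, bias-only sketch (the actual Graphormer uses per-head learned biases, edge encodings along shortest paths, and a virtual node; this simplification and its names are my own): degree information enters as an additive node embedding ("centrality encoding"), and shortest-path distances enter as an additive attention bias ("spatial encoding").

```python
import torch
import torch.nn as nn

class GraphormerStyleAttention(nn.Module):
    """Illustrative structural encodings for attention over graph nodes."""
    def __init__(self, dim, max_degree=64, max_dist=32):
        super().__init__()
        self.degree_emb = nn.Embedding(max_degree, dim)   # centrality encoding
        self.spatial_bias = nn.Embedding(max_dist, 1)     # spatial encoding

    def forward(self, x, degree, spd):
        # x: (n, dim) node features; degree: (n,) ints; spd: (n, n) shortest-path dists
        x = x + self.degree_emb(degree.clamp(max=self.degree_emb.num_embeddings - 1))
        bias = self.spatial_bias(spd.clamp(max=self.spatial_bias.num_embeddings - 1)).squeeze(-1)
        logits = (x @ x.t()) / x.shape[-1] ** 0.5 + bias  # single-head logits, for brevity
        return torch.softmax(logits, dim=-1) @ x

m = GraphormerStyleAttention(dim=32)
x = torch.randn(6, 32)
deg = torch.randint(0, 5, (6,))
spd = torch.randint(0, 4, (6, 6))
print(m(x, deg, spd).shape)   # torch.Size([6, 32])
```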
accept
This paper presents a set of simple modifications to the popular transformer architecture so that they can work well on graph structured data. The paper is quite clear and well-written. Extensive empirical results show that these architectures can outperform graph neural networks on a wide range of benchmarks. All reviewers consistently recommend accepting this paper, and are optimistic about the adoption of the proposed techniques due to their simplicity and strong empirical results. The authors should consider the reviewers’ suggestions to improve the paper, in particular the additional related works mentioned in the reviews that can better help put this work into context.
train
[ "KqPacq39vib", "bObGuRr92vx", "IKKiD7CwHi0", "W3GmNAwdOgg", "MGbeBZ3Gmt", "sKbsJ0ZdRB0", "opAK5FImnz6", "6LmpndyKGI2", "sFmIFrEe6-Z", "1FHMgeN8NRF", "EaCpKsIZjzk", "Xh1pOl2t8UY" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the valuable comments.\n\nWe follow your suggestion \"...experiments on smaller molecular data sets (a few hundreds/thousands graphs)...\" to conduct experiment on QM7 dataset which only contains 7165 molecule samples. Could you please kindly name a few \"data constrained\" datasets so that we would...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "bObGuRr92vx", "opAK5FImnz6", "nips_2021_OeWooOxFwDa", "MGbeBZ3Gmt", "sKbsJ0ZdRB0", "Xh1pOl2t8UY", "EaCpKsIZjzk", "1FHMgeN8NRF", "IKKiD7CwHi0", "nips_2021_OeWooOxFwDa", "nips_2021_OeWooOxFwDa", "nips_2021_OeWooOxFwDa" ]
nips_2021_ags1UxpXAl
Powerpropagation: A sparsity inducing weight reparameterisation
The training of sparse neural networks is becoming an increasingly important tool for reducing the computational footprint of models at training and evaluation, as well as enabling the effective scaling up of models. Whereas much work over the years has been dedicated to specialised pruning techniques, little attention has been paid to the inherent effect of gradient-based training on model sparsity. In this work, we introduce Powerpropagation, a new weight parameterisation for neural networks that leads to inherently sparse models. Exploiting the behaviour of gradient descent, our method gives rise to weight updates exhibiting a “rich get richer” dynamic, leaving low-magnitude parameters largely unaffected by learning. Models trained in this manner exhibit similar performance, but have a distribution with markedly higher density at zero, allowing more parameters to be pruned safely. Powerpropagation is general, intuitive, cheap and straightforward to implement, and can readily be combined with various other techniques. To highlight its versatility, we explore it in two very different settings: Firstly, following a recent line of work, we investigate its effect on sparse training for resource-constrained settings. Here, we combine Powerpropagation with a traditional weight-pruning technique as well as recent state-of-the-art sparse-to-sparse algorithms, showing superior performance on the ImageNet benchmark. Secondly, we advocate the use of sparsity in overcoming catastrophic forgetting, where compressed representations allow accommodating a large number of tasks at fixed model capacity. In all cases, our reparameterisation considerably increases the efficacy of the off-the-shelf methods.
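The reparameterisation is compact enough to sketch directly, assuming the power form w = v * |v|^(alpha - 1) with alpha > 1 (the initialisation scale and the choice of a plain linear layer here are illustrative, not the paper's exact setup):

```python
import torch
import torch.nn as nn

class PowerpropLinear(nn.Module):
    """Linear layer with a Powerpropagation-style reparameterisation
    w = v * |v|**(alpha - 1); alpha = 1 recovers an ordinary layer.
    Since dw/dv = alpha * |v|**(alpha - 1), low-magnitude parameters receive
    vanishing gradient signal: the "rich get richer" dynamic that biases
    training toward weight distributions with high density at zero."""
    def __init__(self, in_features, out_features, alpha=2.0):
        super().__init__()
        self.alpha = alpha
        self.v = nn.Parameter(torch.randn(out_features, in_features) * in_features ** -0.5)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w = self.v * self.v.abs().pow(self.alpha - 1.0)   # effective weights
        return x @ w.t() + self.bias
```

After training, magnitude pruning is applied to the effective weights w, where the concentration near zero lets more parameters be removed safely.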
accept
This paper suggests a simple re-parameterization, motivated by recent theoretical results, which improves SOTA results in the areas of pruning and continual learning. All reviewers seemed to like the idea, the writing, and the results. The rebuttal addressed remaining concerns, and all reviewers voted for acceptance.
train
[ "sOXZ5G1bx9o", "mLvg7C46PU-", "pO-2pKYSh3V", "Cq7ecmAN6n2", "t5y1q8LOCw", "GB3U0ZR5PTQ", "8vEpzMIL8oI", "yNK4hWSbHM8", "_kkG4_Fn3sW", "p6totM0l7t3", "vPQShvFPJb3", "tS5kYW-I5", "ekKldhTswHn" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear all,\n\nWe would like to share additional sparsity results on Language Modelling using the popular TransformerXL model. We feel that this helps strengthen the argument that Powerpropagation is truly model-agnostic and general reparametrisation and also ensures that the experimental evaluation is comparable t...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "nips_2021_ags1UxpXAl", "t5y1q8LOCw", "yNK4hWSbHM8", "nips_2021_ags1UxpXAl", "p6totM0l7t3", "yNK4hWSbHM8", "vPQShvFPJb3", "ekKldhTswHn", "tS5kYW-I5", "Cq7ecmAN6n2", "nips_2021_ags1UxpXAl", "nips_2021_ags1UxpXAl", "nips_2021_ags1UxpXAl" ]
nips_2021_chwaxchpG3
Stronger NAS with Weaker Predictors
Neural Architecture Search (NAS) often trains and evaluates a large number of architectures. Recent predictor-based NAS approaches attempt to alleviate such heavy computation costs with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor. Given limited samples, these predictors, however, are far from accurate enough to locate top architectures, due to the difficulty of fitting the huge search space. This paper reflects on a simple yet crucial question: if our final goal is to find the best architecture, do we really need to model the whole space well? We propose a paradigm shift from fitting the whole architecture space using one strong predictor to progressively fitting a search path towards the high-performance sub-space through a set of weaker predictors. As a key property of the weak predictors, their probabilities of sampling better architectures keep increasing. Hence we only sample a few well-performing architectures guided by the previously learned predictor and estimate a new, better weak predictor. This embarrassingly simple framework, dubbed WeakNAS, produces a coarse-to-fine iteration that gradually refines the ranking of the sampling space. Extensive experiments demonstrate that WeakNAS requires fewer samples to find top-performing architectures on NAS-Bench-101 and NAS-Bench-201. Compared to state-of-the-art (SOTA) predictor-based NAS methods, WeakNAS outperforms all of them with notable margins, e.g., requiring at least 7.5x fewer samples to find the global optimum on NAS-Bench-101. WeakNAS can also incorporate their ideas to further boost performance. Further, WeakNAS sets a new SOTA result of 81.3% in the ImageNet MobileNet search space. The code is available at: https://github.com/VITA-Group/WeakNAS.
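A minimal sketch of the alternating loop described above (the interfaces and the random candidate pool are my simplifications of the paper's sampling strategy; architectures are assumed hashable):

```python
def weaknas_search(space, evaluate, fit_predictor, iters=10, batch=20, pool=2000):
    """Illustrative WeakNAS-style loop: alternate between fitting a weak
    predictor on the architectures evaluated so far and sampling the next batch
    from the predictor's currently top-ranked region. `space()` yields a random
    architecture, `evaluate` returns its (proxy) accuracy, and `fit_predictor`
    returns a scoring function arch -> predicted accuracy."""
    history = {}
    samples = [space() for _ in range(batch)]           # initial random batch
    for _ in range(iters):
        for arch in samples:
            if arch not in history:
                history[arch] = evaluate(arch)
        score = fit_predictor(list(history.items()))    # weak predictor on all labels
        candidates = [space() for _ in range(pool)]
        candidates.sort(key=score, reverse=True)
        # Probe only the predicted-good sub-space: the predictor need not be
        # accurate globally, only good enough to point toward better regions.
        samples = [a for a in candidates if a not in history][:batch]
    return max(history, key=history.get)
```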
accept
This paper proposes a Neural Architecture Search method which utilizes a series of weak performance predictors that are trained to focus on the samples that are more likely to perform better according to the previous weak predictor. This is done by alternating between the learning of the weak predictor based on the sampled architectures, and the sampling with this weak predictor. The authors validate their method with a proof-of-concept experiment, as well as experimental validation on the NAS-Bench-101 and NAS-Bench-201 benchmarks, against relevant predictor-based and BO-based NAS baselines. The results show that the proposed alternating scheme with the weak predictors is significantly more effective and sample-efficient. The reviewers in general found the high-level idea of using a series of weak predictors to narrow down the search space, and the proposed alternating search scheme, to be novel and reasonable. They also found the experimental validation convincing, and the proposed scheme to be highly effective and efficient. However, at the same time, the reviewers had the following concerns. - The paper lacks many details, such as the pseudo-code of the algorithm, complexity analysis, implementation details, and the details of the retraining strategy. - The proposed method is not validated against some of the SOTA methods for NAS. - The novelty of the method over existing iterative search-space refining strategies, or NAS based on Bayesian optimization, has not been sufficiently discussed. During the discussion period, the authors provided the code, a discussion of the computational complexity, the results of the SOTA methods, and a discussion of the novelty of the proposed method over existing methods. This cleared away most of the concerns from the reviewers, resulting in a unanimous agreement to accept the paper. However, I have additional concerns, which I want the authors to clearly address in the revision. Most come from the lack of formal definitions and analysis. - There is no formal definition of “weak” and “strong” performance predictors. These should be formally defined in order to clarify what are weak and what are strong predictors. - There is no visualization of the actual search space while running the proposed algorithm, similar to Figure 2, which makes it difficult to see what happens during the iterative search process. - There is no analysis of failure cases - in what cases does the weak predictor miss high-performing architectures? - There is no theoretical or empirical analysis of how many steps, or weak predictors, are necessary to find an optimal architecture from a pool of $N$ architectures, which seems crucial to the use of the proposed algorithm. I recommend accepting the paper since the proposed idea is interesting and seems to work well in practice, and all reviewers are unanimously positive toward it. Yet, I believe that a lot of effort should be put into analyzing how the proposed method works. The paper is too informal and looks half-baked in its current state, and I strongly recommend the authors incorporate the discussions with the reviewers, as well as the answers to my concerns above.
train
[ "gc0baKxJOqP", "ziHwGCN6V6y", "0IQ1hMADa03", "BP7yvIur6H0", "MgVvFU8jQFx", "PJeZjGybpI0", "ALCWPDlgIE8", "4jjs7ZFVB5w", "7M-bQ1e0lh3", "VBCsUQyOKc", "lQaY_DOEb9g", "hP5tTUnGyH", "V1JpklrLoo-", "pANDfBkSN_B", "fOc_1Xzt3d5", "B_3IPUC70EY", "Dko1zaniomO", "cBpLpFDBrlx", "iqTYrEsXju0...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "au...
[ " Q1: Lack of theoretical justification. Assume that the search space is uniformly at random distributed, what value is selected to be the cutoff threshold? In this paper the number of iterations is used as shown in Figure 3, however, this is still implicitly shrink the search space rather than explicitly, making i...
[ -1, 7, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "MgVvFU8jQFx", "nips_2021_chwaxchpG3", "BP7yvIur6H0", "ALCWPDlgIE8", "nips_2021_chwaxchpG3", "ALCWPDlgIE8", "VBCsUQyOKc", "Fg6-AOwLq5B", "iqTYrEsXju0", "B_3IPUC70EY", "V1JpklrLoo-", "cBpLpFDBrlx", "hP5tTUnGyH", "zzaw1zrEarx", "nips_2021_chwaxchpG3", "Dko1zaniomO", "ziHwGCN6V6y", "g...
nips_2021_P3268DYnsXh
Convolutional Normalization: Improving Deep Convolutional Network Robustness and Training
Normalization techniques have become a basic component in modern convolutional neural networks (ConvNets). In particular, many recent works demonstrate that promoting the orthogonality of the weights helps train deep models and improve robustness. For ConvNets, most existing methods are based on penalizing or normalizing weight matrices derived from concatenating or flattening the convolutional kernels. These methods often destroy or ignore the benign convolutional structure of the kernels; therefore, they are often expensive or impractical for deep ConvNets. In contrast, we introduce a simple and efficient ``Convolutional Normalization'' (ConvNorm) method that can fully exploit the convolutional structure in the Fourier domain and serve as a simple plug-and-play module to be conveniently incorporated into any ConvNets. Our method is inspired by recent work on preconditioning methods for convolutional sparse coding and can effectively promote each layer's channel-wise isometry. Furthermore, we show that our ConvNorm can reduce the layerwise spectral norm of the weight matrices and hence improve the Lipschitzness of the network, leading to easier training and improved robustness for deep ConvNets. Applied to classification under noise corruptions and to generative adversarial networks (GANs), we show that ConvNorm improves the robustness of common ConvNets such as ResNet and the performance of GANs. We verify our findings via numerical experiments on CIFAR and ImageNet. Our implementation is available online at \url{https://github.com/shengliu66/ConvNorm}.
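As a rough, assumption-laden reading of the Fourier-domain mechanics (this is not ConvNorm's exact preconditioner or its plug-and-play module; it only illustrates how kernel spectra can be normalised channel-wise via the FFT, with all names my own):

```python
import torch
import torch.fft

def channelwise_fourier_norm(kernels, size, eps=1e-6):
    """Illustrative channel-wise spectral normalisation in the Fourier domain.
    kernels: (c_out, c_in, k, k); size: spatial size the convolution acts on.
    For each output channel, the joint frequency response is divided by its
    magnitude aggregated over input channels, flattening the per-channel
    spectrum toward an isometry."""
    k_hat = torch.fft.fft2(kernels, s=(size, size))          # (c_out, c_in, size, size)
    energy = (k_hat.abs() ** 2).sum(dim=1, keepdim=True)     # per-output-channel spectrum
    k_hat = k_hat / (energy + eps).sqrt()
    return torch.fft.ifft2(k_hat).real                       # normalised kernels, full support

normalized = channelwise_fourier_norm(torch.randn(16, 3, 3, 3), size=32)
print(normalized.shape)  # torch.Size([16, 3, 32, 32])
```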
accept
In this paper, the authors present a simple and efficient ConvNorm method that can fully exploit the convolutional structure in the Fourier domain. The paper is clearly written and well motivated. Extensive experiments empirically demonstrate the effectiveness of such normalization. As all reviewers reached a consensus that the paper is well written and can be accepted, I vote for acceptance. The authors are expected to make a thorough revision by considering the reviews.
val
[ "vyiBjGbo0aF", "LobgzSPffn", "PO6ONHcm5AD", "uG9d3PjEM9H", "K0qZ2s0Sqf", "WJ-xgx5MOGM", "YARINSTQxoM", "HS8fgiQEsf", "ZDTN4L1TRzq", "zEISb4gKr_t", "PiT8mGVmyQu", "JDgLM8aChgJ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes to orthogonalize the convolutional kernel in a channel-wise manner. To do the computation more efficiently, they calculate the matrix inversion via FFT in the frequency domain. They compare with the baselines on the tasks of adversarial attack, image classification and GAN. Originality: I think...
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_P3268DYnsXh", "nips_2021_P3268DYnsXh", "K0qZ2s0Sqf", "WJ-xgx5MOGM", "YARINSTQxoM", "zEISb4gKr_t", "vyiBjGbo0aF", "JDgLM8aChgJ", "PiT8mGVmyQu", "LobgzSPffn", "nips_2021_P3268DYnsXh", "nips_2021_P3268DYnsXh" ]
nips_2021_eW8HEhY9F7C
Nearly-Tight and Oblivious Algorithms for Explainable Clustering
Buddhima Gamlath, Xinrui Jia, Adam Polak, Ola Svensson
accept
Considering that the ICML papers are concurrent works, the results in this paper are a substantial improvement over previous works on several interesting problems. In the camera-ready version, please include a comparison with the concurrent works and highlight the unique insights and techniques developed in this paper.
train
[ "HOay6hRAxZ4", "iGkxoYvco5N", "le-HOXccKs", "J-Z6ULzFIjh", "KYvxODGvadz", "ewQ116-2ji", "lhqK3UzonF_", "PU9ViRsan7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your time in reviewing our paper. We agree with your comments on the independent work.\n\nIndeed, we were unaware of these independent works (including the ICML paper by Makarychev and Shan) until these papers became available on ArXiv (about five weeks after the NeurIPS deadline). We became aware o...
[ -1, -1, -1, -1, 8, 7, 7, 6 ]
[ -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "KYvxODGvadz", "lhqK3UzonF_", "PU9ViRsan7", "ewQ116-2ji", "nips_2021_eW8HEhY9F7C", "nips_2021_eW8HEhY9F7C", "nips_2021_eW8HEhY9F7C", "nips_2021_eW8HEhY9F7C" ]
nips_2021_SFFFiGtdAt
Deep Networks Provably Classify Data on Curves
Data with low-dimensional nonlinear structure are ubiquitous in engineering and scientific problems. We study a model problem with such structure---a binary classification task that uses a deep fully-connected neural network to classify data drawn from two disjoint smooth curves on the unit sphere. Aside from mild regularity conditions, we place no restrictions on the configuration of the curves. We prove that when (i) the network depth is large relative to certain geometric properties that set the difficulty of the problem and (ii) the network width and number of samples are polynomial in the depth, randomly-initialized gradient descent quickly learns to correctly classify all points on the two curves with high probability. To our knowledge, this is the first generalization guarantee for deep networks with nonlinear data that depends only on intrinsic data properties. Our analysis proceeds by a reduction to dynamics in the neural tangent kernel (NTK) regime, where the network depth plays the role of a fitting resource in solving the classification problem. In particular, via fine-grained control of the decay properties of the NTK, we demonstrate that when the network is sufficiently deep, the NTK can be locally approximated by a translationally invariant operator on the manifolds and stably inverted over smooth functions, which guarantees convergence and generalization.
accept
This paper studies classification problems where the problem data are two disjoint curves, using deep neural networks, and provides a Neural Tangent Kernel (NTK) based analysis of generalization. The authors identify key concepts related to the geometric difficulty of the problem and relate them to the depth of the network. The reviewers all agreed that the paper contains interesting ideas and insightful theoretical results, and recommended acceptance. A few reviewers expressed minor concerns and suggestions. Please take into account the updated reviews when preparing the final version to accommodate the requested changes. Thank you for your submission to NeurIPS.
train
[ "ieCWjFNEvQn", "2hU6nepWSbO", "KTLXQ7wY5e6", "jlcpoP2IOl6", "sqvXSITXMRx", "O_lusVyYo25", "XRDUmbFxiNp", "CVRyyJy0F3", "64HWco4UDz2", "l5zDGv1yZd", "jKmT4TJxKsU", "S2BlpK1JDHA", "3_JI2JiO5Di" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. The authors addressed my concerns on the NTK regime. I think this is a solid theoretical contribution to the analysis of NTK and kernel regression under structured data. I would like to keep my current positive review. ", " After reading the detailed response, I would like to keep the s...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "64HWco4UDz2", "CVRyyJy0F3", "nips_2021_SFFFiGtdAt", "XRDUmbFxiNp", "l5zDGv1yZd", "nips_2021_SFFFiGtdAt", "KTLXQ7wY5e6", "jKmT4TJxKsU", "3_JI2JiO5Di", "S2BlpK1JDHA", "nips_2021_SFFFiGtdAt", "nips_2021_SFFFiGtdAt", "nips_2021_SFFFiGtdAt" ]
nips_2021_dUEpGV2mhf
COMBO: Conservative Offline Model-Based Policy Optimization
Model-based reinforcement learning (RL) algorithms, which learn a dynamics model from logged experience and perform conservative planning under the learned model, have emerged as a promising paradigm for offline reinforcement learning (offline RL). However, practical variants of such model-based algorithms rely on explicit uncertainty quantification for incorporating conservatism. Uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable. We empirically find that uncertainty estimation is not accurate and leads to poor performance in certain scenarios in offline model-based RL. We overcome this limitation by developing a new model-based offline RL algorithm, COMBO, that trains a value function using both the offline dataset and data generated using rollouts under the model, while additionally regularizing the value function on out-of-support state-action tuples generated via model rollouts. This results in a conservative estimate of the value function for out-of-support state-action tuples, without requiring explicit uncertainty estimation. Theoretically, we show that COMBO satisfies a policy improvement guarantee in the offline setting. Through extensive experiments, we find that COMBO attains greater performance compared to prior offline RL methods on problems that demand generalization to related but previously unseen tasks, and also consistently matches or outperforms prior offline RL methods on widely studied offline RL benchmarks, including image-based tasks.
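A compressed sketch of the kind of critic update the abstract describes: Bellman backups on a mix of real and model-generated transitions, plus a conservatism term pushing Q down on model rollouts and up on dataset samples, with no uncertainty estimation (COMBO's actual regularizer samples actions from the current policy under an interpolated state distribution; the batch layout and names here are my own):

```python
import torch

def combo_style_critic_loss(q, q_target, batch_real, batch_model, beta=1.0, gamma=0.99):
    """Illustrative conservative critic objective. Each batch is a dict of
    tensors: states s, actions a, rewards r, next states s2, and next actions
    a2 (sampled from the current policy, by assumption)."""
    def bellman(b):
        with torch.no_grad():
            target = b["r"] + gamma * q_target(b["s2"], b["a2"])
        return ((q(b["s"], b["a"]) - target) ** 2).mean()

    # TD error is computed on both offline and model-generated transitions.
    td = 0.5 * (bellman(batch_real) + bellman(batch_model))
    # Conservatism: lower Q on model rollouts, raise it on dataset samples.
    conservatism = (q(batch_model["s"], batch_model["a"]).mean()
                    - q(batch_real["s"], batch_real["a"]).mean())
    return td + beta * conservatism
```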
accept
The reviewers all agreed that this work is promising. The authors also provided important clarifications in the response, including increasing the number of runs and clarifying that hyperparameter selection was actually done with an approach that they developed for this paper, suitable for the offline setting. These improvements undoubtedly make the paper stronger. However, the original omission of the strategy to select hyperparameters offline is a significant one, as this strategy is a critical part of the algorithm. It should be discussed and contrasted with other approaches. The theory was not extensively discussed (unfortunately), and relies heavily on the CQL theory. The CQL theory already has some constants that make for relatively loose bounds. Here, there are additional issues with the introduction of beta. The first main result relies on having a potentially very large beta, especially if you look at the proof and see the terms that beta*(small positive constant) is overcoming. The second result, on policy improvement, relies again on a potentially large beta and on a term being positive that is not guaranteed to be positive (there is simply a discussion in the paper on why it reasonably could be positive in settings of interest). Theory for offline RL is hard, and so progress should be acknowledged. But, at the same time, given the complexity of these results, it is important to more clearly explain the limitations and what this theory can truly guarantee. As a note, the concern about per-environment hyperparameter tuning was resolved in the discussion. It is reasonable to pick hyperparameters per environment, since you have an automated algorithm to do so. This is very different from optimizing (tuning) hyperparameters with sweeps, which is not an algorithm to be used in practice but rather an approach to evaluate methods. As it stands, this work is borderline, and would highly benefit from another round to incorporate these changes and be re-reviewed. Some benefit of the doubt can be given that these changes may be made for the final paper, and so if there is space in the program, this paper could be accepted.
val
[ "oGxKOhRasrc", "QbN0AxvlFhN", "vILxVW5otkG", "HetBua3fiU", "EJLsUSh6xQ2", "SvO4RIhxeDj", "h-XSs9KES_A", "oG9i8MYjdJq", "kVrt2sBf-no", "4g3ee5Kg40", "SUdj6kBAnV", "P4l_S-pThmN", "vnIAKDtyxmO", "EeratEJLuxS", "3AAWb51Z3DU", "J8K4K2PAIxj", "xv-UDihhJp" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents a new approach for model-based offline RL. Here, the model is not used to estimate the return directly using roll-outs, but to generate synthetic data that is then used to train the Q-function. In a way, the method is a combination of CQL and MOPO. What is new is the idea of using this synthetic...
[ 8, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "nips_2021_dUEpGV2mhf", "HetBua3fiU", "nips_2021_dUEpGV2mhf", "SvO4RIhxeDj", "h-XSs9KES_A", "h-XSs9KES_A", "oG9i8MYjdJq", "4g3ee5Kg40", "vnIAKDtyxmO", "vILxVW5otkG", "vILxVW5otkG", "nips_2021_dUEpGV2mhf", "oGxKOhRasrc", "J8K4K2PAIxj", "xv-UDihhJp", "nips_2021_dUEpGV2mhf", "nips_2021_...
nips_2021_RHZs3GqLBwg
Time-series Generation by Contrastive Imitation
Consider learning a generative model for time-series data. The sequential setting poses a unique challenge: Not only should the generator capture the conditional dynamics of (stepwise) transitions, but its open-loop rollouts should also preserve the joint distribution of (multi-step) trajectories. On one hand, autoregressive models trained by MLE allow learning and computing explicit transition distributions, but suffer from compounding error during rollouts. On the other hand, adversarial models based on GAN training alleviate such exposure bias, but transitions are implicit and hard to assess. In this work, we study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy, where the reinforcement signal is provided by a global (but stepwise-decomposable) energy model trained by contrastive estimation. At training, the two components are learned cooperatively, avoiding the instabilities typical of adversarial objectives. At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality. By expressly training a policy to imitate sequential behavior of time-series features in a dataset, this approach embodies "generation by imitation". Theoretically, we illustrate the correctness of this formulation and the consistency of the algorithm. Empirically, we evaluate its ability to generate predictively useful samples from real-world datasets, verifying that it performs at the standard of existing benchmarks.
accept
After very detailed replies by the authors, all reviewers of the paper recommended acceptance.
train
[ "t-IAIUAB74-", "kiyl7FgqRxY", "7dspyq3Oelv", "2neJuT3ALU9", "49k4Nye60vL", "oDlhubGd2as", "801MXDseVzU", "0Owx0V_pGLz", "VCoO-2Y7cBn", "MrA_AtLLaMi", "CnU78YT8gWk", "aAO4IkG7JlG", "t2toKsjPIFK", "uSkHxk_xiH0", "evMNugFgGfP", "h51jlQmTZYH", "PoARuUCXdJA", "Hg-kZQwPgLu", "PoQqsElRp...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ "The paper proposes an imitation learning method that simultaneously learns a policy and an energy (“discriminator”) function. This is similar to a variational approach to learn a Gibbs distribution over trajectories, where the variational parameters consist of the policy and variational inference consists of runni...
[ 7, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_RHZs3GqLBwg", "2neJuT3ALU9", "nips_2021_RHZs3GqLBwg", "t-IAIUAB74-", "7dspyq3Oelv", "nips_2021_RHZs3GqLBwg", "CnU78YT8gWk", "aAO4IkG7JlG", "aAO4IkG7JlG", "aAO4IkG7JlG", "aAO4IkG7JlG", "evMNugFgGfP", "7dspyq3Oelv", "t-IAIUAB74-", "oDlhubGd2as", "nips_2021_RHZs3GqLBwg", "oDl...
nips_2021_6PoupJO89MG
Differentially Private Sampling from Distributions
Sofya Raskhodnikova, Satchit Sivakumar, Adam Smith, Marika Swanberg
accept
This paper initiates the study of differentially private *sampling* (as opposed to learning) discrete distributions. The authors obtain nearly tight results for this problem, when the underlying distribution is discrete on a finite alphabet or a binary product distribution. The reviewers agreed that the question studied is interesting and the results are technically novel. Overall, there was a consensus that this paper is above the acceptance threshold.
train
[ "Eo_iEnluIzd", "Df2XiI7t6v3", "7IGcMnqiJyi", "PookEW0qot", "YXjjHkpxMOB", "TMX6NAHkSI", "xJkkhr_AUVh", "OhpLNQ07wu" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies the problem of sampling from a distribution in a differentially private manner. More formally, the learner is given i.i.d samples from some unknown distribution $P$ over a domain $\\mathcal{U}$ and its goal is to output an element of $\\mathcal{U}$ such that the distribution of the output is clo...
[ 6, -1, -1, -1, -1, 7, 7, 5 ]
[ 3, -1, -1, -1, -1, 3, 4, 4 ]
[ "nips_2021_6PoupJO89MG", "7IGcMnqiJyi", "YXjjHkpxMOB", "Eo_iEnluIzd", "OhpLNQ07wu", "nips_2021_6PoupJO89MG", "nips_2021_6PoupJO89MG", "nips_2021_6PoupJO89MG" ]
nips_2021_9dZ4oIjkv76
On the Expected Complexity of Maxout Networks
Learning with neural networks relies on the complexity of their representable functions, but more importantly, their particular assignment of typical parameters to functions of different complexity. Taking the number of activation regions as a complexity measure, recent works have shown that the practical complexity of deep ReLU networks is often far from the theoretical maximum. In this work, we show that this phenomenon also occurs in networks with maxout (multi-argument) activation functions and when considering the decision boundaries in classification tasks. We also show that the parameter space has a multitude of full-dimensional regions with widely different complexity, and obtain nontrivial lower bounds on the expected complexity. Finally, we investigate different parameter initialization procedures and show that they can increase the speed of convergence in training.
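Since the object of study is the piecewise-linear structure of maxout networks, a small sketch helps fix ideas (the region-counting probe below is a crude 1-D proxy for the activation-region counts analysed in the paper; the slope-change threshold is heuristic):

```python
import torch
import torch.nn as nn

class Maxout(nn.Module):
    """Rank-k maxout layer: each output coordinate is the maximum of k affine
    functions of the input, so the layer is piecewise linear with regions
    delimited by where the argmax changes."""
    def __init__(self, in_features, out_features, k=3):
        super().__init__()
        self.k = k
        self.linear = nn.Linear(in_features, out_features * k)

    def forward(self, x):
        z = self.linear(x)
        return z.view(*z.shape[:-1], -1, self.k).max(dim=-1).values

# Count the linear pieces of a random maxout network restricted to a line.
torch.manual_seed(0)
net = nn.Sequential(Maxout(1, 16), Maxout(16, 16), nn.Linear(16, 1))
t = torch.linspace(-3, 3, 20001).unsqueeze(1)
with torch.no_grad():
    y = net(t).squeeze(1)
slopes = (y[1:] - y[:-1]) / (t[1:, 0] - t[:-1, 0])
pieces = 1 + (slopes.diff().abs() > 1e-4).sum().item()
print("approximate number of linear pieces along the line:", pieces)
```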
accept
This paper considers the number of linear regions (and pieces of the decision boundary) of maxout networks at initialization, showing that these numbers grow polynomially in expectation, instead of exponentially as in the worst case. This is an interesting theoretical investigation building on prior work answering the same question for ReLU networks. The reviewers note that the approaches used in this paper are somewhat incremental to this prior work, and that the upper bound on the constant C_grad presented in the paper's formal results has an exponential dependence on depth for natural initializations. However, the reviewers agree that these points do not detract from the interest and validity of the paper's results, and I recommend acceptance.
train
[ "FVqqZ7JeBmb", "TwrK1XDRA3F", "3W4a95Ex-EB", "30Lf7snpFzO", "Lu0Oxo1kf30", "iSidmz2r78g", "DpUGZBlduag", "UfA6Ok86ouX", "_AHCM4iO7GK", "cwHZP1yRAjS", "znas4jC4HIQ", "mgJLlrAn35V", "Opmcyi6sDBD", "hS4fRdXuiUi", "DbCf43-zNZ" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This articles provides bounds for both average case and worst case complexity (in terms of number of linear regions and related measures) for max-out networks. I found the article to have a few results that were interesting (to do with the difference between ReLU and max-out), a few results that were correct but r...
[ 7, -1, -1, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, 7 ]
[ 5, -1, -1, -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_9dZ4oIjkv76", "3W4a95Ex-EB", "DpUGZBlduag", "Opmcyi6sDBD", "UfA6Ok86ouX", "nips_2021_9dZ4oIjkv76", "mgJLlrAn35V", "nips_2021_9dZ4oIjkv76", "nips_2021_9dZ4oIjkv76", "znas4jC4HIQ", "_AHCM4iO7GK", "FVqqZ7JeBmb", "DbCf43-zNZ", "iSidmz2r78g", "nips_2021_9dZ4oIjkv76" ]
nips_2021_tQgj7CDTfKB
Cross-view Geo-localization with Layer-to-Layer Transformer
In this work, we address the problem of cross-view geo-localization, which estimates the geospatial location of a street-view image by matching it with a database of geo-tagged aerial images. The cross-view matching task is extremely challenging due to drastic appearance and geometry differences across views. Unlike existing methods that predominantly fall back on CNNs, here we devise a novel layer-to-layer Transformer (L2LTR) that utilizes the properties of self-attention in Transformers to model global dependencies, thus significantly decreasing visual ambiguities in cross-view geo-localization. We also exploit the positional encoding of the Transformer to help the L2LTR understand and correspond geometric configurations between ground and aerial images. Compared to state-of-the-art methods that impose strong assumptions on geometry knowledge, the L2LTR flexibly learns the positional embeddings through the training objective. It hence becomes more practical in many real-world scenarios. Although the Transformer is well suited to our task, its vanilla self-attention mechanism operates on image patches independently within each layer, overlooking correlations between layers. Instead, this paper proposes a simple yet effective self-cross attention mechanism to improve the quality of learned representations. Self-cross attention models global dependencies between adjacent layers and creates short paths for effective information flow. As a result, the proposed self-cross attention leads to more stable training, improves the generalization ability, and prevents the learned intermediate features from being overly similar. Extensive experiments demonstrate that our L2LTR performs favorably against state-of-the-art methods on standard, fine-grained, and cross-dataset cross-view geo-localization tasks. The code is available online.
accept
This paper proposes a siamese convolutional + vision transformer architecture for cross-view geo-localisation, from ground-view equirectangular panorama images to satellite views (cartesian or polar coordinates). The transformer has a cross-attention mechanism between consecutive layers (I was confused by the word “evolving”, since no genetic algorithms were involved). The method is thoroughly evaluated on a US cross-modal localisation dataset where it achieves state-of-the-art results. Reviewer kVJu praised the idea and significance of results but criticised some speculative or unsubstantiated claims in the paper, as well as the clarity, including the usage of the term “evolution”; the authors responded with a long list of modifications and additional results. Reviewer JyiQ was uncertain why positional encoding and cross-layer self-attention specifically helped with the localisation task (the authors' response was only empirical and did not provide intuition about why it worked). Reviewer Bnto had questions about ground-view image orientations, which were answered by the authors through additional experiments. Reviewer 9M5m made the usual comment about novelty (which I will dismiss) and narrowness of scope (again, that's an easy critique to make), as well as a claim about learning rate (addressed by the authors). All scores but reviewer 9M5m's (4) were positive (6, 6, 7). Reviewer 9M5m said they would update their score to 5 (which makes the paper average equal to 6). I have dismissed claims of “novelty” and “narrowness of scope” and decided to accept the paper.
train
[ "BV7UTvzZORm", "Yyop2wCaCY5", "ysfFkA219cb", "O34DvbnYF4x", "NVK5F-fWfKl", "3m7uKFrZx96", "qHRXBVool0", "TmB0zXXAFdv", "U9QVLmYekq", "PfBrJS8eBUR", "IlU142syFj", "1ZfayI_gzi", "HthqCO-P3mx", "DZBP99r3iB" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks for thoroughly addressing the comments and adding detail where requested. I am reassured in maintaining my recommendation to accept.", " Dear Reviewer 9M5m,\n\nAfter carefully reading the paper and other reviews, and reading your own comment (\"After reading other reviews, I may change my rating to ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "TmB0zXXAFdv", "ysfFkA219cb", "O34DvbnYF4x", "NVK5F-fWfKl", "DZBP99r3iB", "nips_2021_tQgj7CDTfKB", "1ZfayI_gzi", "U9QVLmYekq", "IlU142syFj", "HthqCO-P3mx", "nips_2021_tQgj7CDTfKB", "nips_2021_tQgj7CDTfKB", "nips_2021_tQgj7CDTfKB", "nips_2021_tQgj7CDTfKB" ]
nips_2021_PCLsRp_4R7C
TAAC: Temporally Abstract Actor-Critic for Continuous Control
We present temporally abstract actor-critic (TAAC), a simple but effective off-policy RL algorithm that incorporates closed-loop temporal abstraction into the actor-critic framework. TAAC adds a second-stage binary policy to choose between the previous action and a new action output by an actor. Crucially, its "act-or-repeat" decision hinges on the actually sampled action instead of the expected behavior of the actor. This post-acting switching scheme lets the overall policy make more informed decisions. TAAC has two important features: a) persistent exploration, and b) a new compare-through Q operator for multi-step TD backup, specially tailored to the action repetition scenario. We demonstrate TAAC's advantages over several strong baselines across 14 continuous control tasks. Our surprising finding reveals that while achieving top performance, TAAC is able to "mine" a significant number of repeated actions with the trained policy even on continuous tasks whose problem structures on the surface seem to repel action repetition. This suggests that aside from encouraging persistent exploration, action repetition can find its place in a good policy behavior. Code is available at https://github.com/hnyu/taac.
accept
The discussion helped clarify points that were misunderstood by some reviewers initially. It also helped improve the clarity of the paper. There is a consensus on the scores and, now, the contribution is regarded as significant and simple. The method performs very well, is simple enough to be implemented easily, and is computationally neutral.
train
[ "Ne-W8fJeDJU", "XQqh_N6OOwi", "F4GIqFXPyjv", "AUEASixezt", "EeiE9tJUSZ", "GaEk1QAH8Nv", "mBFNR8buV3", "GdYJsYjP-wI", "3LhzqthBqnT", "zGKw3WJWxxI", "p_RmHHJegWH", "K5gVNSm5Suk" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes to add a component to the standard RL actor-critic architecture, that learns when to repeat (or not) the last action executed by the agent. This component is a simple binary classifier, conditioned on $s_t$, $a^-$ (the previous action) and $\\hat{a}$ (what the actor wants to do now), that choose...
[ 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_PCLsRp_4R7C", "mBFNR8buV3", "nips_2021_PCLsRp_4R7C", "EeiE9tJUSZ", "F4GIqFXPyjv", "p_RmHHJegWH", "Ne-W8fJeDJU", "K5gVNSm5Suk", "nips_2021_PCLsRp_4R7C", "F4GIqFXPyjv", "nips_2021_PCLsRp_4R7C", "nips_2021_PCLsRp_4R7C" ]
nips_2021_OumxnZ9lrg-
Learning Robust Hierarchical Patterns of Human Brain across Many fMRI Studies
Multi-site fMRI studies face the challenge that the pooling introduces systematic non-biological site-specific variance due to hardware, software, and environment. In this paper, we propose to reduce site-specific variance in the estimation of hierarchical Sparsity Connectivity Patterns (hSCPs) in fMRI data via a simple yet effective matrix factorization while preserving biologically relevant variations. Our method leverages unsupervised adversarial learning to improve the reproducibility of the components. Experiments on simulated datasets display that the proposed method can estimate components with higher accuracy and reproducibility, while preserving age-related variation on a multi-center clinical data set.
accept
Dear authors, despite some initial concerns reviewers have now, after discussion and reading your rebuttal, endorsed the paper (although not strongly). I will therefore suggest a rather positive outcome for this work. Best regards, The AC
train
[ "Qv91Fc61lP5", "J03O9c50tBV", "CPS5n-WX98S", "nLGtid8sXrx", "8x6BgFfPn2r", "boJ9pBPUQfk", "UwTOWmahxCV", "GeCA-Fe_Qym", "r01oF63UfzR", "bP1aSx0YnCn", "DoI5xwjugO4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks for the author's response. They addressed most of my concerns. So I would like to increase the score to 6. ", "This paper proposes a combined technique based on matrix factorization and adversarial models to reduce the batch effect problem in fMRI data. The new model, called rshSCP, is designed to be rob...
[ -1, 6, 6, -1, -1, 6, -1, -1, -1, -1, 7 ]
[ -1, 4, 4, -1, -1, 4, -1, -1, -1, -1, 3 ]
[ "UwTOWmahxCV", "nips_2021_OumxnZ9lrg-", "nips_2021_OumxnZ9lrg-", "GeCA-Fe_Qym", "r01oF63UfzR", "nips_2021_OumxnZ9lrg-", "J03O9c50tBV", "CPS5n-WX98S", "DoI5xwjugO4", "boJ9pBPUQfk", "nips_2021_OumxnZ9lrg-" ]
nips_2021_z4L8_Egn5Ey
Global Convergence to Local Minmax Equilibrium in Classes of Nonconvex Zero-Sum Games
We study gradient descent-ascent learning dynamics with timescale separation in unconstrained continuous action zero-sum games where the minimizing player faces a nonconvex optimization problem and the maximizing player optimizes a Polyak-Lojasiewicz (PL) or strongly-concave (SC) objective. In contrast to past work, we assess convergence in relation to game-theoretic equilibria instead of only notions of stationarity. In pursuit of this goal, we prove that the only locally stable points of the continuous-time limiting system correspond to strict local minmax equilibria in each class of games. For the class of nonconvex-PL zero-sum games, we exploit timescale separation to construct a potential function that when combined with the stability characterization and an asymptotic saddle avoidance result gives a global asymptotic almost-sure convergence guarantee to a set of the strict local minmax equilibrium. For the class of nonconvex-SC zero-sum games, we show the surprising property that the function of the game can be made a potential with timescale separation. Combining this insight with the stability characterization allows us to generalize methods for efficiently escaping saddle points from nonconvex optimization to this class of zero-sum games and obtain a global finite-time convergence guarantee for gradient descent-ascent with timescale separation to approximate local minmax equilibria.
accept
The authors study the behavior of simultaneous gradient descent-ascent with timescale separation (i.e. a slower learning rate for the descent dynamics compared to the ascent dynamics) to solve unconstrained and sequential min_x max_y f(x,y) optimization problems for objectives f(x,y) that are nonconvex wrt the variables of the outer (min) player and strongly concave or satisfy the PL condition wrt the variables of the inner (y) player. They focus on the stability analysis and convergence rates of GDA (and its perturbed gradient variant) to local minimax points. The definition of local min-max points is that the max player plays a global best response to min, while the min player chooses a local minimum for the function \max_{y \in B} f(.,y) where B is a ball of small radius. The authors show that local min-max points are stable/attracting for GDA dynamics, as long as the learning rate of the inner/max player is large enough compared to that of the min player. For nonconvex-strongly concave landscapes, they also show convergence in \tilde{O}(1/\eps^2) of perturbed GDA to an \eps-local min-max and \tilde{O}(1/\eps^4) of perturbed SGDA. Prior work had shown convergence to stationary points of \max_{y \in B} f(.,y) (for a slightly different algorithm) and this work does stability analysis to argue that the points it converges to are actually local minima. Stability analyses had also been done for related algorithms. So at a high level the results here are not surprising. Yet, they are not trivial. Doing the complete analysis to establish the results requires quite a bit of work, and this is the first paper that goes beyond stationarity. Given that min-max optimization for objectives that are not convex-concave is an interesting direction for the field, this is a solid contribution.
train
[ "K5giymVGZef", "9CzNSOBtlWl", "-Js0BTR_Zph", "3PJLZX05JAf", "BmwDOdeu18c", "lhcear9pJ3u", "cs6qqVlwu7H", "OTde--j1gCN", "LPdsJfR_JmE", "ckU4Un7dQg0", "yDLesRRn2v2", "l1BjRP-HFX" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper studies gradient descent ascent learning dynamics with timescale separation in unconstrained continuous action zero-sum games. The minimizing player faces a nonconvex optimization problem, and the maximizing player optimizes a Polyak-Łojasiewicz (PŁ) or strongly-concave (SC) objective. Like [15], this p...
[ 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_z4L8_Egn5Ey", "nips_2021_z4L8_Egn5Ey", "nips_2021_z4L8_Egn5Ey", "yDLesRRn2v2", "LPdsJfR_JmE", "cs6qqVlwu7H", "ckU4Un7dQg0", "nips_2021_z4L8_Egn5Ey", "K5giymVGZef", "l1BjRP-HFX", "-Js0BTR_Zph", "nips_2021_z4L8_Egn5Ey" ]
nips_2021_mxowVJFe8D5
Bandit Quickest Changepoint Detection
Many industrial and security applications employ a suite of sensors for detecting abrupt changes in temporal behavior patterns. These abrupt changes typically manifest locally, rendering only a small subset of sensors informative. Continuous monitoring of every sensor can be expensive due to resource constraints, and serves as a motivation for the bandit quickest changepoint detection problem, where sensing actions (or sensors) are sequentially chosen, and only measurements corresponding to chosen actions are observed. We derive an information-theoretic lower bound on the detection delay for a general class of finitely parameterized probability distributions. We then propose a computationally efficient online sensing scheme, which seamlessly balances the need for exploration of different sensing options with exploitation of querying informative actions. We derive expected delay bounds for the proposed scheme and show that these bounds match our information-theoretic lower bounds at low false alarm rates, establishing optimality of the proposed method. We then perform a number of experiments on synthetic and real datasets demonstrating the effectiveness of our proposed method.
accept
The paper studies the problem of change point detection in a multi-sensor setting, where the learner can read observations from a set of available sensors to monitor whether any change in observation distribution happened or not. While this problem is clearly connected with change point detection in the offline setting and to bandits in non-stationary environments, it has an interest of its own and it poses very specific challenges that need to be carefully analyzed, starting from an accurate definition of the performance measure. The proposed algorithm builds on solid intuitions and some known techniques (such as CUMSUM) and it is shown to achieve near-optimal performance in the limit where the false alarm rate tends to zero. I think these contributions are valuable and they represent a very interesting step toward understanding this problem. For this reason, I'm proposing acceptance. Nonetheless, the paper generated a considerable amount of discussion and the authors provided very much needed clarification on some points including the interpretation of the theoretical results and the gap between upper and lower bound. In particular, the latter point shows an existing gap depending on the dimensionality d between the upper and lower bound. While it is perfectly fine to have such a gap, the authors should clearly state this aspect in the revised version so that other people can understand where the limitations of the current work lie. Overall, the paper would greatly benefit from integrating the additional insights provided during the rebuttal into the final version of the paper.
train
[ "veGQ4EOXnV", "1Kms4AQaPBM", "64vxjvgpbc", "yyqYk9hlSl", "NJBBfW0F5v", "n4J4-8BH8Kf", "KoWeH4t5ws", "2e29ISbMsE", "PzekUXf4uVF", "z4vwFmGmxAf", "dR0rZVU1cE", "8t88tw4U1zR", "74XdV4B0GWe", "F-cBtgAgjPI", "RpJq3fcXpv-", "67VX3ySVdl", "9jhLWVwCPwM", "718yE44SE0J", "d__puEBfdiR", "...
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official...
[ " * We would like to point out that this paper is the first to exhibit detection delay bounds, both upper and lower, as a function of the false alarm rate parameter $\\alpha$, for the problem of adaptive, sequential quickest changepoint detection. \n\n* We completely characterise optimal detection delay performance...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "1Kms4AQaPBM", "64vxjvgpbc", "yyqYk9hlSl", "2e29ISbMsE", "KoWeH4t5ws", "2e29ISbMsE", "yyqYk9hlSl", "dR0rZVU1cE", "z4vwFmGmxAf", "718yE44SE0J", "Y4sXwobKXs", "74XdV4B0GWe", "Br9uxQqlY2", "dR0rZVU1cE", "Br9uxQqlY2", "jaXt83DHSQC", "dR0rZVU1cE", "8t88tw4U1zR", "dR0rZVU1cE", "nips_...
nips_2021_enKhMfthDFS
Can multi-label classification networks know what they don’t know?
Estimating out-of-distribution (OOD) uncertainty is a major challenge for safely deploying machine learning models in the open-world environment. Improved methods for OOD detection in multi-class classification have emerged, while OOD detection methods for multi-label classification remain underexplored and use rudimentary techniques. We propose JointEnergy, a simple and effective method, which estimates the OOD indicator scores by aggregating label-wise energy scores from multiple labels. We show that JointEnergy can be mathematically interpreted from a joint likelihood perspective. Our results show consistent improvement over previous methods that are based on the maximum-valued scores, which fail to capture joint information from multiple labels. We demonstrate the effectiveness of our method on three common multi-label classification benchmarks, including MS-COCO, PASCAL-VOC, and NUS-WIDE. We show that JointEnergy can reduce the FPR95 by up to 10.05% compared to the previous best baseline, establishing state-of-the-art performance.
accept
I recommend to accept this paper. In this paper, the authors proposed a novel method to address an important yet under-explored problem: out-of-distribution detection for multi-label classification models. This paper is well-written, addressing an important problem and proposing a simple but effective method. All the reviewers are inclined to accept this paper. I would suggest that the authors add the additional experimental results done in the rebuttal phase and incorporate the reviewers' suggestions into the camera-ready version.
val
[ "Skhk4tqQ8Wb", "A9t1CKPdhhk", "TdKQrcoKT1b", "hq8f5WjtQvm", "FoU8TkDW3Y", "lJXgW29W5B5", "8O5-L8b9pFB", "-D6zuEmHG6-", "cVQA1q7Xmo" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new method for OOD detection in the multi-label setting, called JointEnergy. \nThe authors claim that JointEnergy can be mathematically interpreted from a joint likelihood perspective, and that their model achieves sota results on a variety of benchmarks. **Significance**: high - the problem ...
[ 7, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ 3, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "nips_2021_enKhMfthDFS", "nips_2021_enKhMfthDFS", "8O5-L8b9pFB", "cVQA1q7Xmo", "Skhk4tqQ8Wb", "-D6zuEmHG6-", "nips_2021_enKhMfthDFS", "nips_2021_enKhMfthDFS", "nips_2021_enKhMfthDFS" ]
nips_2021_0lz4QxW2tDf
Balanced Chamfer Distance as a Comprehensive Metric for Point Cloud Completion
Chamfer Distance (CD) and Earth Mover’s Distance (EMD) are two broadly adopted metrics for measuring the similarity between two point sets. However, CD is usually insensitive to mismatched local density, and EMD is usually dominated by global distribution while overlooks the fidelity of detailed structures. Besides, their unbounded value range induces a heavy influence from the outliers. These defects prevent them from providing a consistent evaluation. To tackle these problems, we propose a new similarity measure named Density-aware Chamfer Distance (DCD). It is derived from CD and benefits from several desirable properties: 1) it can detect disparity of density distributions and is thus a more intensive measure of similarity compared to CD; 2) it is stricter with detailed structures and significantly more computationally efficient than EMD; 3) the bounded value range encourages a more stable and reasonable evaluation over the whole test set. We adopt DCD to evaluate the point cloud completion task, where experimental results show that DCD pays attention to both the overall structure and local geometric details and provides a more reliable evaluation even when CD and EMD contradict each other. We can also use DCD as the training loss, which outperforms the same model trained with CD loss on all three metrics. In addition, we propose a novel point discriminator module that estimates the priority for another guided down-sampling step, and it achieves noticeable improvements under DCD together with competitive results for both CD and EMD. We hope our work could pave the way for a more comprehensive and practical point cloud similarity evaluation. Our code will be available at https://github.com/wutong16/DensityawareChamfer_Distance.
accept
The reviewers all agreed that this submission should be rejected. The reviewers find that the method lacks sufficient novelty and significance, and that the experimental results are not strong enough. In particular, the results show that the proposed method does not provide a significant improvement when used as a training objective, and the experiments are not comprehensive enough for the task of evaluation. As a (very) minor side note, I also noticed that the paper claims that the EMD requires the point sets to have the same number of points, which certainly isn't true. The EMD is just the Wasserstein distance, which can be evaluated exactly between any two discrete distributions regardless of the (finite) number of points in the support.
val
[ "dITI4QAgqbn", "--7eIyZCApr", "ZR3LctyLDu", "P7UC8r95cs7", "udKW7zlx-_o", "OWp_an0DNM", "br9mnAMaZb", "nwdzXSk5tiy", "lkJaB9ROwp5", "tMvn7LoHKiL" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi,\n\nThank you for the response.\n\nThe reviewer is leaning to reject the paper and keep the original rating.\n\nHere the reviewer chooses to 'expose' the concerns during the internal discussion. Hopefully, this could help the authors improve the current submission.\n\nMy main concern is that the paper is still...
[ -1, -1, -1, -1, -1, -1, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 5, 5, 3, 4 ]
[ "OWp_an0DNM", "udKW7zlx-_o", "nwdzXSk5tiy", "lkJaB9ROwp5", "tMvn7LoHKiL", "br9mnAMaZb", "nips_2021_0lz4QxW2tDf", "nips_2021_0lz4QxW2tDf", "nips_2021_0lz4QxW2tDf", "nips_2021_0lz4QxW2tDf" ]
nips_2021_7SGgWl2uVG-
Optimal Gradient-based Algorithms for Non-concave Bandit Optimization
Bandit problems with linear or concave reward have been extensively studied, but relatively few works have studied bandits with non-concave reward. This work considers a large family of bandit problems where the unknown underlying reward function is non-concave, including the low-rank generalized linear bandit problems and two-layer neural network with polynomial activation bandit problem. For the low-rank generalized linear bandit problem, we provide a minimax-optimal algorithm in the dimension, refuting both conjectures in \cite{lu2021low,jun2019bilinear}. Our algorithms are based on a unified zeroth-order optimization paradigm that applies in great generality and attains optimal rates in several structured polynomial settings (in the dimension). We further demonstrate the applicability of our algorithms in RL in the generative model setting, resulting in improved sample complexity over prior approaches. Finally, we show that the standard optimistic algorithms (e.g., UCB) are sub-optimal by dimension factors. In the neural net setting (with polynomial activation functions) with noiseless reward, we provide a bandit algorithm with sample complexity equal to the intrinsic algebraic dimension. Again, we show that optimistic approaches have worse sample complexity, polynomial in the extrinsic dimension (which could be exponentially worse in the polynomial degree).
accept
The reviewers are in agreement that this paper provides important fundamental results on non-linear bandit problems. The only score below the acceptance threshold gives the lowest possible confidence, and was accordingly downweighted to form the final decision. Despite the general positive assessment, all reviewers agreed that the paper could be quite difficult to read for many readers. I ask that in the camera-ready version, the authors very carefully incorporate the reviewer suggestions on improving clarity. Here are some excerpts from the reviewer discussion, in case they were missed in the reviews: - "the current Section 2 uses a quite non-standard structure --- it merges the problem setting (which is technical) and the paper outline (which needs to be clear and easy to find) together. The notations are also quite complicated" - "the two tables in the paper are difficult to understand without additional efforts. I have pointed out two issues in my review" - ", Section 3 contain lots of lemmas, theorems, definitions and assumptions. The presentation is split by multiple "paragraph titles", which makes the flow fragmented" - "Section 3.4 seems to be quite isolated from other parts in the main paper, and the results are of somehow different favors" [and could be moved to the appendix] - "After some removal of non-essential results, the authors should be able to save quite some space, and they should use such space to add clear explanations to their key results, as well as providing more high-level intuition." In addition, based on the reviewer discussion and my own reading, I will mention the following (these are just examples and are far from exhaustive): - Terms like "eluder dimension" should always be defined clearly. For terms that are uncommon but not as central to your paper, you can consider defining them in an appendix, and cross-referencing that appendix in the main text. - Be careful with the consistency of explanations and notation, e.g., Line 146 doesn't quite match Line 8 of Algorithm 1, and T(.) in Algorithm 2 may not have been defined - Consider adding more citations (and possibly mentioning the main differences) when similar ideas have appeared previously, e.g., phased elimination - Be careful for English grammar issues, e.g., wording of Theorem 3.13
train
[ "Zlcp6NHuXs1", "Q-UDMgr0jy", "twPKARlde82", "c02o9y9VIwm", "V5K1Pol17Rm", "TTj0vEsLc5H", "SOAHB3LFnf", "Qr-eQEHKvmR" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies nonlinear bandit optimization with structured polynomial reward functions, with a focus on the pure exploration setting. The paper proposes some stochastic zeroth-order gradient ascent algorithms with phase elimination and show that they attain minimax sample complexity (in terms of $1/\\epsilon$...
[ 8, 8, -1, -1, -1, -1, 7, 5 ]
[ 4, 3, -1, -1, -1, -1, 2, 1 ]
[ "nips_2021_7SGgWl2uVG-", "nips_2021_7SGgWl2uVG-", "Q-UDMgr0jy", "Qr-eQEHKvmR", "SOAHB3LFnf", "Zlcp6NHuXs1", "nips_2021_7SGgWl2uVG-", "nips_2021_7SGgWl2uVG-" ]
nips_2021_l7Yjt_8WvJ
On Optimal Interpolation in Linear Regression
Understanding when and why interpolating methods generalize well has recently been a topic of interest in statistical learning theory. However, systematically connecting interpolating methods to achievable notions of optimality has only received partial attention. In this paper, we ask the question of what is the optimal way to interpolate in linear regression using functions that are linear in the response variable (as is the case for the Bayes optimal estimator in ridge regression) and depend on the data, the population covariance of the data, the signal-to-noise ratio and the covariance of the prior for the signal, but do not depend on the value of the signal itself nor the noise vector in the training data. We provide a closed-form expression for the interpolator that achieves this notion of optimality and show that it can be derived as the limit of preconditioned gradient descent with a specific initialization. We identify a regime where the minimum-norm interpolator provably generalizes arbitrarily worse than the optimal response-linear achievable interpolator that we introduce, and validate with numerical experiments that the notion of optimality we consider can be achieved by interpolating methods that only use the training data as input in the case of an isotropic prior. Finally, we extend the notion of optimal response-linear interpolation to random features regression under a linear data-generating model.
accept
This is a good paper and I recommend to accept it. There are plenty of points in the reviews that suggest ways to improve the paper and I recommend that the authors follow up on these.
train
[ "nnWxae_84cl", "H6WVgcMiVcb", "G8naOY1q-61", "9xkk1ZJY-0W", "36Or6mpDu3L", "-i0fo1Kj5RA", "LIjySrJj4Lh", "Yj0mCRYdp0j", "QQDqkE2hMrX", "AtUlOFZn4ul", "OLCgEmfSiAw" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nThank you for your comment regarding our response (in section \"Post rebuttal comments\" of the original review). We will certainly incorporate the suggestions raised by the reviewers in the very next version of the paper. The comments illustrate important points which are certain to improve the...
[ -1, -1, 7, -1, 6, -1, -1, -1, -1, 6, 4 ]
[ -1, -1, 4, -1, 3, -1, -1, -1, -1, 4, 3 ]
[ "Yj0mCRYdp0j", "LIjySrJj4Lh", "nips_2021_l7Yjt_8WvJ", "-i0fo1Kj5RA", "nips_2021_l7Yjt_8WvJ", "QQDqkE2hMrX", "AtUlOFZn4ul", "G8naOY1q-61", "36Or6mpDu3L", "nips_2021_l7Yjt_8WvJ", "nips_2021_l7Yjt_8WvJ" ]
nips_2021_aZEgelE6kJN
Differentiable Optimization of Generalized Nondecomposable Functions using Linear Programs
Zihang Meng, Lopamudra Mukherjee, Yichao Wu, Vikas Singh, Sathya Narayanan Ravi
accept
Thank you for your submission to NeurIPS. The reviewers and I are in agreement that the proposed work presents some substantial advances to the topic of optimizing non-decomposable metrics, in this case using a nice application of unrolled smoothed LP solving. The reviewers provided several comments, and in some cases have already improved their scores after rebuttal. I would only suggest that the authors be sure to make these modifications, but I have no concerns, as these are largely easy to make. I'm happy to recommend the paper for acceptance.
test
[ "RPVVYg95Sl", "nI0Z6Qstggw", "ar0wAnPgXi4", "sPrQJ2S9Eh", "hvXKJULrri", "PFZ19QxYwhV", "UibY2UyO3R4", "dTe6XLunteD" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " We thank Reviewer Gi3P for reading our response and updating the review&score. We will modify our paper to have better clarity and add a more thorough related work section (as mentioned in our response).", " We thank Reviewer mCRV for reading our response and the updated review. We will make the changes about t...
[ -1, -1, 7, 6, -1, -1, -1, 7 ]
[ -1, -1, 4, 2, -1, -1, -1, 3 ]
[ "sPrQJ2S9Eh", "ar0wAnPgXi4", "nips_2021_aZEgelE6kJN", "nips_2021_aZEgelE6kJN", "sPrQJ2S9Eh", "ar0wAnPgXi4", "dTe6XLunteD", "nips_2021_aZEgelE6kJN" ]
nips_2021_q7wQ3Z6_keU
Towards Understanding Cooperative Multi-Agent Q-Learning with Value Factorization
Value factorization is a popular and promising approach to scaling up multi-agent reinforcement learning in cooperative settings, which balances the learning scalability and the representational capacity of value functions. However, the theoretical understanding of such methods is limited. In this paper, we formalize a multi-agent fitted Q-iteration framework for analyzing factorized multi-agent Q-learning. Based on this framework, we investigate linear value factorization and reveal that multi-agent Q-learning with this simple decomposition implicitly realizes a powerful counterfactual credit assignment, but may not converge in some settings. Through further analysis, we find that on-policy training or richer joint value function classes can improve its local or global convergence properties, respectively. Finally, to support our theoretical implications in practical realization, we conduct an empirical analysis of state-of-the-art deep multi-agent Q-learning algorithms on didactic examples and a broad set of StarCraft II unit micromanagement tasks.
accept
The reviewers and AC discussed the paper. There was agreement that the approach leads to useful insight about value function decomposition in MARL. The author response was also very helpful in clarifying several points about the paper. Still, a number of improvements need to be made. Some of these updates have already been discussed in the author response (e.g., clarifications to the theory and fitted Q-iteration, discussion of QTRAN and QPLEX), but the authors should thoroughly update the paper as suggested by the reviewers.
train
[ "H0zdCGZ51PN", "XerGtg71ywn", "G8yq_lBWrBs", "dE90Yjlm0_O", "WyVz_-PT00", "w-wf39bUlEv", "Sy_l35SDY3m", "r7smCYLjOms", "D66d97U3EWt", "d6y13yFCH6V", "_p70FJv_0K4", "Qd8qpgL1IpI", "MGJBf1pHMv", "dOaFK8OyFK6", "UjQaeRG4YU-", "LR-oj366UYc", "VeIdFhIzT2B", "dOaEUyUbrf2" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for inspiring comments and increasing the score. In the revision, we will incorporate our discussions in the rebuttal to clarify the questions raised in the initial submission. We conclude a list of update changes in the global response of **Summary of the revision**.\n\nRegarding the offlin...
[ -1, -1, 6, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "G8yq_lBWrBs", "nips_2021_q7wQ3Z6_keU", "nips_2021_q7wQ3Z6_keU", "w-wf39bUlEv", "nips_2021_q7wQ3Z6_keU", "Sy_l35SDY3m", "r7smCYLjOms", "dOaFK8OyFK6", "d6y13yFCH6V", "_p70FJv_0K4", "Qd8qpgL1IpI", "UjQaeRG4YU-", "dOaEUyUbrf2", "WyVz_-PT00", "G8yq_lBWrBs", "VeIdFhIzT2B", "nips_2021_q7wQ...
nips_2021_XK4eVsG2LKw
Margin-Independent Online Multiclass Learning via Convex Geometry
Guru Guruganesh, Allen Liu, Jon Schneider, Joshua Wang
accept
The reviewers are in consensus after discussion that this paper should be accepted even if the score of one reviewer does not reflect this. It is strongly suggested that the presentation around the Introduction be updated (see reviews) and also to make sure that terms with technical meanings such as adversarial are defined for the general reader.
train
[ "qBLxlfMHK4", "jA9pgcUmJJg", "UvbHdUBFcHp", "GS9KLFmhXdx", "8TRfH7rv5Rc", "BLYbgTq3Bu", "GKr2nciftsj", "1_daGm1Wa0" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your thoughtful review of the paper. We agree with your comments regarding the presentation and will take them into account when we update the paper. We respond to some specific comments below:\n\n- L48: Yes, v here should be a unit vector (or at least bounded in norm). We will change this to make t...
[ -1, -1, -1, -1, 7, 5, 7, 7 ]
[ -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "1_daGm1Wa0", "GKr2nciftsj", "8TRfH7rv5Rc", "BLYbgTq3Bu", "nips_2021_XK4eVsG2LKw", "nips_2021_XK4eVsG2LKw", "nips_2021_XK4eVsG2LKw", "nips_2021_XK4eVsG2LKw" ]
nips_2021_p9dySshcS0q
STEP: Out-of-Distribution Detection in the Presence of Limited In-Distribution Labeled Data
Existing semi-supervised learning (SSL) studies typically assume that unlabeled and test data are drawn from the same distribution as labeled data. However, in many real-world applications, it is desirable to have SSL algorithms that not only classify the samples drawn from the same distribution as labeled data but also detect out-of-distribution (OOD) samples drawn from an unknown distribution. In this paper, we study a setting called semi-supervised OOD detection. Two main challenges compared with previous OOD detection settings are i) the lack of labeled data and in-distribution data; ii) OOD samples could be unseen during training. Efforts in this direction remain limited. In this paper, we present an approach, STEP, that significantly improves OOD detection performance by introducing a new technique: Structure-Keep Unzipping. It learns a new representation space in which OOD samples could be separated well. An efficient optimization algorithm is derived to solve the objective. Comprehensive experiments across various OOD detection benchmarks clearly show that our STEP approach outperforms other methods by a large margin and achieves remarkable detection performance on several benchmarks.
accept
The paper studied the out-of-distribution detection problem and proposed a semi-supervised OOD detection setting with a new technique called structure-keep unzipping to solve the new setting. The writing is clear, the motivation is strong, the idea is novel, and the results are significant. Thus, I think it should be accepted for publication. A reviewer had three concerns but the authors wrote a nice rebuttal which addressed two concerns. I think the third concern, which was discussed internally, is also sensible. The problem setting is closely related to positive-unlabeled learning; see [1]. If PU learning can be used to detect in-distribution data, it can of course be used to detect out-of-distribution data. Please add PU learning into the section 4 related work. A minor comment is that OOD in the title should be given the full term instead of the abbreviation --- first, OOD is not as well known as SVM and it may hurt the understanding (the title should be understandable to every NeurIPS participant), and second, in-distribution is not abbreviated. [1] Yixing Xu, Yunhe Wang, Hanting Chen, Kai Han, Chunjing Xu, Dacheng Tao, and Chang Xu. Positive-Unlabeled Compression on the Cloud. NeurIPS 2019.
train
[ "LmrFk7mt2-K", "ljx7R9M95Sd", "nHHQjValVZ", "6NeojWa_x9G", "fCYNLp0gsyO", "aul6Fa2anXd", "gkZzV_qmAZJ", "gMtNHt3iOC", "Ehpr3vWUgUS", "UF7jBF9P_Xi", "ITas_6mZp_e" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the updated results. I raised my score since the authors have addressed the major concerns. In the final version, I strongly recommend the authors include the complete results on broader OOD datasets (including some of the more challenging ones such as Textures and CIFAR-10 vs. CIFAR-100), together ...
[ -1, 6, -1, -1, -1, -1, -1, -1, 9, 8, 7 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ "nHHQjValVZ", "nips_2021_p9dySshcS0q", "6NeojWa_x9G", "fCYNLp0gsyO", "ljx7R9M95Sd", "ITas_6mZp_e", "Ehpr3vWUgUS", "UF7jBF9P_Xi", "nips_2021_p9dySshcS0q", "nips_2021_p9dySshcS0q", "nips_2021_p9dySshcS0q" ]
nips_2021_SPrVNsXnGd
Renyi Differential Privacy of The Subsampled Shuffle Model In Distributed Learning
We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from whom we need privacy. Motivated by stochastic optimization and the federated learning (FL) paradigm, we focus on the case where a small fraction of data samples are randomly sub-sampled in each round to participate in the learning process, which also enables privacy amplification. To obtain even stronger local privacy guarantees, we study this in the shuffle privacy model, where each client randomizes its response using a local differentially private (LDP) mechanism and the server only receives a random permutation (shuffle) of the clients' responses without their association to each client. The principal result of this paper is a privacy-optimization performance trade-off for discrete randomization mechanisms in this sub-sampled shuffle privacy model. This is enabled through a new theoretical technique to analyze the Renyi Differential Privacy (RDP) of the sub-sampled shuffle model. We numerically demonstrate that, for important regimes, with composition our bound yields significant improvement in privacy guarantee over the state-of-the-art approximate Differential Privacy (DP) guarantee (with strong composition) for sub-sampled shuffled models. We also demonstrate numerically significant improvement in privacy-learning performance operating point using real data sets. Despite these advances, an open question is to bridge the gap between lower and upper privacy bounds in our RDP analysis.
accept
There is a consensus that the paper provides an interesting theoretical contribution to an important and timely problem, the shuffle model being an actively studied model of privacy that is also increasingly studied in federated learning (FL). The authors' result on analyzing the privacy loss in the subsampled shuffle model and the accompanying empirical evaluation are valuable contributions towards tightening the privacy/utility trade-offs of DP-SGD in a natural FL setting. However, there is also a consensus that the paper has a somewhat limited technical novelty, with proofs very similar to ones in recent work. Also, some concerns are raised regarding the remaining gap between the obtained lower and upper bounds. That said, the reviewers are in agreement in leaning towards acceptance.
train
[ "IgEf1YE1iYb", "m5V8KMoj4dV", "y-OVFI2B2V", "4Mf1sTvvp3e", "zteccEe93h", "G0FSWXLmHPv", "QwvyL6CiR9k", "o97seDP5V-", "w4zVqCcFeJX", "JgtXWQ7-Glp", "QIUfgQLg25", "udGl6DuTEqr", "u2P2SEQV_vC", "RQlzzGB5Nv2", "8X3Epmmssg", "aSoMOCu7Emy", "yr4wagY0ho", "yGYeRtHdI2V", "47e0BoFlBRg", ...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ "The focus of the paper is on Rényi differential privacy (RDP) in the shuffle model of DP. The main result in the paper is to introduce (at least somewhat loose) upper and lower RDP bounds for an arbitrary subsampled discrete mechanisms in the shuffle model. I think this is a novel, if fairly limited contribution. ...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_SPrVNsXnGd", "y-OVFI2B2V", "zteccEe93h", "nips_2021_SPrVNsXnGd", "G0FSWXLmHPv", "QwvyL6CiR9k", "o97seDP5V-", "w4zVqCcFeJX", "4Mf1sTvvp3e", "IgEf1YE1iYb", "IgEf1YE1iYb", "IgEf1YE1iYb", "IgEf1YE1iYb", "IgEf1YE1iYb", "8EtcVNBBLeF", "8EtcVNBBLeF", "4Mf1sTvvp3e", "4Mf1sTvvp3e...
nips_2021_gL8btosnTj
Gradient-based Editing of Memory Examples for Online Task-free Continual Learning
We explore task-free continual learning (CL), in which a model is trained to avoid catastrophic forgetting in the absence of explicit task boundaries or identities. Among many efforts on task-free CL, a notable family of approaches are memory-based that store and replay a subset of training examples. However, the utility of stored seen examples may diminish over time since CL models are continually updated. Here, we propose Gradient based Memory EDiting (GMED), a framework for editing stored examples in continuous input space via gradient updates, in order to create more "challenging" examples for replay. GMED-edited examples remain similar to their unedited forms, but can yield increased loss in the upcoming model updates, thereby making the future replays more effective in overcoming catastrophic forgetting. By construction, GMED can be seamlessly applied in conjunction with other memory-based CL algorithms to bring further improvement. Experiments validate the effectiveness of GMED, and our best method significantly outperforms baselines and previous state-of-the-art on five out of six datasets.
accept
This paper introduces the Gradient based Memory Editing (GMED) framework for task-free online continual learning that edits stored examples using gradient updates. The paper is well-written. The proposed method is interesting, novel, and has simple understandable intuition, though theoretical justifications are lacking. Experiments are extensive. While improvements are somewhat marginal, the experiments are thorough enough to show efficacy of the technique.
train
[ "dkH0H96Ss85", "-rDA804VSSm", "ChKBZRqlWvR", "49jrZjMPsYD", "J2DFvmXq2xV", "K1JM-L1S24g", "MCgwrN8vPK1", "upfoTrS8clt", "HZ1oat96xld", "gLmMqz4b9xj", "CETCmsCq3yx", "9MMmHsAJlpI", "i4qvID0avM8", "3p3N0jxy7Tk", "YSVhp0JTx3_", "6nlR-U5T4ha", "OwlxGUoyOki", "I0E92pLu2s", "u0NCC9YQWP...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", ...
[ " Thank you very much for your insightful questions, Reviewer vw9x! We appreciate this discussion and are glad that you find it helpful. We hope that you can reconsider the evaluation of our work. Please do let us know if you have follow-up questions.\n\n", " Thanks for your response and the additional experiment...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "-rDA804VSSm", "Y2bul47jbNa", "J2DFvmXq2xV", "upfoTrS8clt", "XIWMsI-BQF", "XHasm4DqFV", "Y5edUPgtxfV", "YSVhp0JTx3_", "xj6MeTi7THr", "Y5edUPgtxfV", "XHasm4DqFV", "nips_2021_gL8btosnTj", "3p3N0jxy7Tk", "u0NCC9YQWPo", "z9qNT_wmXY", "9MMmHsAJlpI", "Y5edUPgtxfV", "XHasm4DqFV", "PmaXa...
nips_2021_8pOPKfibVN
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
From CNNs to attention mechanisms, encoding inductive biases into neural networks has been a fruitful source of improvement in machine learning. Adding auxiliary losses to the main objective function is a general way of encoding biases that can help networks learn better representations. However, since auxiliary losses are minimized only on training data, they suffer from the same generalization gap as regular task losses. Moreover, by adding a term to the loss function, the model optimizes a different objective than the one we care about. In this work we address both problems: first, we take inspiration from transductive learning and note that after receiving an input but before making a prediction, we can fine-tune our networks on any unsupervised loss. We call this process tailoring, because we customize the model to each input to ensure our prediction satisfies the inductive bias. Second, we formulate meta-tailoring, a nested optimization similar to that in meta-learning, and train our models to perform well on the task objective after adapting them using an unsupervised loss. The advantages of tailoring and meta-tailoring are discussed theoretically and demonstrated empirically on a diverse set of examples.
accept
This paper proposes tailoring, a technique for incorporating unsupervised objectives during test time. The main distinction from past work is that tailoring is applied to each individual test datapoint. The paper includes some theoretical justification of the approach as well as rather extensive experiments. Reviewers were generally positive about the paper. The main criticism of the paper came down to a lack of clarity in writing, but the authors have addressed this concern through their rebuttal. Before the camera-ready version, please incorporate the proposed changes to the writing.
train
[ "2ghKvYiZWwd", "PzIlTCeY5kr", "QwklLM0GxKo", "Q3MeZG7G9E", "QPw6RV22B-4", "SbTMGwEd6AT", "w_pvJf8QNkZ", "Dt45tj4RMl3", "4zzDMZEUgX7", "15M3dFN7DjK", "QUbG7t7V2O", "0Tsm-tzLN9H", "oBkVTM6TU10" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " That's a good question! In this paper, we suggest tailoring losses serve a similar role to auxiliary losses, but better serve the outer objective and, in the experiments shown in the paper, work better. Intuitively, you want tailoring losses to be informative of the true task, so that the gradient minimizing the ...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "PzIlTCeY5kr", "15M3dFN7DjK", "QPw6RV22B-4", "nips_2021_8pOPKfibVN", "SbTMGwEd6AT", "w_pvJf8QNkZ", "Q3MeZG7G9E", "oBkVTM6TU10", "0Tsm-tzLN9H", "QUbG7t7V2O", "nips_2021_8pOPKfibVN", "nips_2021_8pOPKfibVN", "nips_2021_8pOPKfibVN" ]
nips_2021_vvi7KqHQiA
Implicit Bias of SGD for Diagonal Linear Networks: a Provable Benefit of Stochasticity
Understanding the implicit bias of training algorithms is of crucial importance in order to explain the success of overparametrised neural networks. In this paper, we study the dynamics of stochastic gradient descent over diagonal linear networks through its continuous time version, namely stochastic gradient flow. We explicitly characterise the solution chosen by the stochastic flow and prove that it always enjoys better generalisation properties than that of gradient flow. Quite surprisingly, we show that the convergence speed of the training loss controls the magnitude of the biasing effect: the slower the convergence, the better the bias. To fully complete our analysis, we provide convergence guarantees for the dynamics. We also give experimental results which support our theoretical claims. Our findings highlight the fact that structured noise can induce better generalisation and they help explain the greater performances of stochastic gradient descent over gradient descent observed in practice.
accept
Reviewers generally agreed that the paper is clearly written, technically solid and interesting. Reservations with regards to the significance of the analyzed setting (diagonal linear neural networks) were raised, but given that this setting already received notable attention, the committee reached a decision by which this paper comprises a meaningful step forward (analysis of stochasticity), and should thus be accepted to the conference.
train
[ "Co5wObEugJv", "hhcvC9UoCj", "RrmQr5A7xl7", "bSjVezvDYmM", "aFh7jY8whV", "VpoL4v-QV_q", "Qa3y5oRaoBU", "qibwMgRPA4i", "y2FLjrUujDR", "hlIhCwGb3a2", "kMSDRc_iVQY" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies the implicit bias of SGD in diagonal linear networks. Specifically, through the study of SGD continues time version, i.e., stochastic gradient flow, the authors investigate the noise influence on the generalization performance. The authors show that with high probability stochastic gradient flow ...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, 2, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_vvi7KqHQiA", "nips_2021_vvi7KqHQiA", "bSjVezvDYmM", "VpoL4v-QV_q", "qibwMgRPA4i", "hlIhCwGb3a2", "Co5wObEugJv", "kMSDRc_iVQY", "hhcvC9UoCj", "nips_2021_vvi7KqHQiA", "nips_2021_vvi7KqHQiA" ]
nips_2021_aLkuboH1SQX
Iterative Teacher-Aware Learning
In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency. The teacher adjusts her teaching method for different students, and the student, after getting familiar with the teacher’s instruction mechanism, can infer the teacher’s intention to learn faster. Recently, the benefits of integrating this cooperative pedagogy into machine concept learning in discrete spaces have been proved by multiple works. However, how cooperative pedagogy can facilitate machine parameter learning hasn’t been thoroughly studied. In this paper, we propose a gradient optimization based teacher-aware learner who can incorporate teacher’s cooperative intention into the likelihood function and learn provably faster compared with the naive learning algorithms used in previous machine teaching works. We give theoretical proof that the iterative teacher-aware learning (ITAL) process leads to local and global improvements. We then validate our algorithms with extensive experiments on various tasks including regression, classification, and inverse reinforcement learning using synthetic and real data. We also show the advantage of modeling teacher-awareness when agents are learning from human teachers.
accept
The paper studies a teacher-aware learning process in the context of iterative machine teaching. In particular, a new learner model is proposed that incorporates the teacher's intention into the likelihood function to learn faster. Extensive experiments are performed on various tasks with synthetic and real data, including a user study on learning from human teachers. The reviewers acknowledged the importance of the studied problem setting and generally appreciated the results. However, the reviewers raised several concerns with the current version of the paper, including: (a) the heuristic nature of the teaching model is not analyzed; (b) the learner feedback as an inner product is not well motivated or explained.; (c) the theoretical guarantees only show that under certain assumptions the proposal cannot do worse than a naive learner; (d) the learning mechanism referred to in the paper is not well-defined; (e) some of the important details of the experimental set-up for the online IRL experiments and the study with human teachers are missing from the main paper. I want to thank the authors for their detailed responses that helped in answering some of the reviewers' questions. While the overall assessment is positive, the paper still stands as borderline. Nevertheless, this is interesting and potentially impactful work. The reviewers have provided detailed and constructive feedback to the authors. We strongly encourage the authors to incorporate the reviewers' feedback when preparing a revision of the paper.
train
[ "6vyKMmnSXm", "EGt-cz7d7w", "Lk0EsRSG1-C", "7E0Q1-yCkyO", "-W6W2Gm7Oby", "54evhei0cmH", "rCI4EVl0CQ", "s4kO57LwRef", "hgFYQL0aZ6", "R05XKPkW5dy", "WCCB0T1ZBUZ", "khQjRvGqn-T", "JNIKLK6Yy9x" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ " Thanks for your response. Comments on your response:\n\n1. Readability: I think supplementary material has some very good details. It is obvious that the authors have put in lots of effort into this paper. I appreciate that! I highly recommend moving Algorithm 1 into the main paper. There is a typo in Algorithm 1...
[ -1, -1, 6, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1 ]
[ -1, -1, 4, -1, -1, 3, -1, -1, 4, -1, -1, -1, -1 ]
[ "-W6W2Gm7Oby", "7E0Q1-yCkyO", "nips_2021_aLkuboH1SQX", "s4kO57LwRef", "rCI4EVl0CQ", "nips_2021_aLkuboH1SQX", "WCCB0T1ZBUZ", "JNIKLK6Yy9x", "nips_2021_aLkuboH1SQX", "nips_2021_aLkuboH1SQX", "54evhei0cmH", "hgFYQL0aZ6", "Lk0EsRSG1-C" ]
nips_2021_fU7-so5RRhW
Clockwork Variational Autoencoders
Deep learning has enabled algorithms to generate realistic images. However, accurately predicting long video sequences requires understanding long-term dependencies and remains an open challenge. While existing video prediction models succeed at generating sharp images, they tend to fail at accurately predicting far into the future. We introduce the Clockwork VAE (CW-VAE), a video prediction model that leverages a hierarchy of latent sequences, where higher levels tick at slower intervals. We demonstrate the benefits of both hierarchical latents and temporal abstraction on 4 diverse video prediction datasets with sequences of up to 1000 frames, where CW-VAE outperforms top video prediction models. Additionally, we propose a Minecraft benchmark for long-term video prediction. We conduct several experiments to gain insights into CW-VAE and confirm that slower levels learn to represent objects that change more slowly in the video, and faster levels learn to represent faster objects.
accept
This paper proposes a novel architecture for long-term video generation. Reviewers agree that the approach is novel, that it focuses on an important and under-explored problem of video generation, and that the experimental section supports the value of the contribution.
train
[ "Hpc_hnoTiGW", "5WeY6unJlIA", "7zHaoS600Co", "7BuqYS_dtU-", "coL5kSuhkf", "yO16n9xUk-", "OltLMQjmtYv", "8nWoU73ix9U", "k6dzraXAmt8", "8CX7r6VNjEl", "e7xVqlVny9H", "mOFpbEVEz1p", "5OZN7EIpRf", "nRICTcce1ws", "A4RZyX8zYXJ", "BpLdBGgRVAt", "DomWFjBAO-F" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes Clockwork-VAEs, a hierarchical latent dynamics video prediction model. CW-VAE uses different clock speeds at each level to model dependencies occurring at different frequencies. This allows it to perform well on long-term video prediction. The paper presents results on four diverse datasets, in...
[ 7, 6, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_fU7-so5RRhW", "nips_2021_fU7-so5RRhW", "7BuqYS_dtU-", "e7xVqlVny9H", "yO16n9xUk-", "A4RZyX8zYXJ", "nips_2021_fU7-so5RRhW", "k6dzraXAmt8", "mOFpbEVEz1p", "5OZN7EIpRf", "5OZN7EIpRf", "nRICTcce1ws", "5WeY6unJlIA", "Hpc_hnoTiGW", "DomWFjBAO-F", "OltLMQjmtYv", "nips_2021_fU7-so...
nips_2021_JuNatTaGZ6J
How Does it Sound?
One of the primary purposes of video is to capture people and their unique activities. It is often the case that the experience of watching the video can be enhanced by adding a musical soundtrack that is in-sync with the rhythmic features of these activities. How would this soundtrack sound? Such a problem is challenging since little is known about capturing the rhythmic nature of free body movements. In this work, we explore this problem and propose a novel system, called `RhythmicNet', which takes as an input a video which includes human movements and generates a soundtrack for it. RhythmicNet works directly with human movements by extracting skeleton keypoints and implements a sequence of models which translate the keypoints to rhythmic sounds. RhythmicNet follows the natural process of music improvisation which includes the prescription of streams of the beat, the rhythm and the melody. In particular, RhythmicNet first infers the music beat and the style pattern from body keypoints per each frame to produce rhythm. Next, it implements a transformer-based model to generate the hits of drum instruments and implements a U-net based model to generate the velocity and the offsets of the instruments. Additional types of instruments are added to the soundtrack by further conditioning on the generated drum sounds. We evaluate RhythmicNet on large scale datasets of videos that include body movements with inherent sound association, such as dance, as well as 'in the wild' internet videos of various movements and actions. We show that the method can generate plausible music that aligns well with different types of human movements.
accept
This paper received 4 positive reviews and 1 negative review. In the rebuttal, the authors addressed most of the concerns. The AC feels this work is very interesting and deserves to be published at NeurIPS 2021. The reviewers did raise some valuable concerns (e.g., including more human studies) that should be addressed in the final camera-ready version of the paper. The authors are encouraged to make other necessary changes.
train
[ "6-SgBFgmCSp", "uAXz6Ju4eXI", "HSMBR6O89ET", "TlijojXgGE9", "icfnlPgbRUf", "1HltUuPqpXQ", "dhgttivU_3f", "0SFp1wSG3i", "Aq69iGMviX2", "UWJ8MsqPHpw", "pBemXpC5RFv", "-H7BfMSuzNE", "N0G5UThcUt0", "BCxZURjUWpW", "d7sEzEWxy2r", "VMPkPD9irVT" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for reiterating the meaning of style from our first reply. We realized that style and rhythm in the first sentence were switched. Instead of “We define the 'style' as a combination of beat and incidences of transitional movements of the human body, such as rapid and sudden movements.” The co...
[ -1, -1, -1, -1, -1, 7, 7, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "uAXz6Ju4eXI", "-H7BfMSuzNE", "TlijojXgGE9", "icfnlPgbRUf", "Aq69iGMviX2", "nips_2021_JuNatTaGZ6J", "nips_2021_JuNatTaGZ6J", "N0G5UThcUt0", "VMPkPD9irVT", "d7sEzEWxy2r", "1HltUuPqpXQ", "BCxZURjUWpW", "dhgttivU_3f", "nips_2021_JuNatTaGZ6J", "nips_2021_JuNatTaGZ6J", "nips_2021_JuNatTaGZ6...
nips_2021_yxg-i8DAHK
Stabilizing Dynamical Systems via Policy Gradient Methods
Stabilizing an unknown control system is one of the most fundamental problems in control systems engineering. In this paper, we provide a simple, model-free algorithm for stabilizing fully observed dynamical systems. While model-free methods have become increasingly popular in practice due to their simplicity and flexibility, stabilization via direct policy search has received surprisingly little attention. Our algorithm proceeds by solving a series of discounted LQR problems, where the discount factor is gradually increased. We prove that this method efficiently recovers a stabilizing controller for linear systems, and for smooth, nonlinear systems within a neighborhood of their equilibria. Our approach overcomes a significant limitation of prior work, namely the need for a pre-given stabilizing control policy. We empirically evaluate the effectiveness of our approach on common control benchmarks.
accept
All reviewers find an interesting and important idea in this paper. Although some reviewers point out that a similar idea is found in some existing works, the authors clarify the significant differences of their work from those. Overall, this is a nice paper that could appear in NeurIPS. Therefore, my recommendation is acceptance (poster) for this paper.
train
[ "uRPSVuxg10X", "S2xtOqZ0W3h", "_Dh2VFZe-PS", "oSKd7CuPd2b", "V1SdqQc4sKh", "6dapXGUqRwD", "gv3YYshhhOs", "X0z_WspMT-n", "KJ2vvI774H9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a model-free algorithm for stabilizing linear and nonlinear dynamical systems. The algorithm solves a series of discounted LQR problems using policy gradient, and provably converges to a stabilizing controller for LTI systems; for nonlinear systems, the algorithm recovers a stabilizing controll...
[ 7, -1, 7, -1, 7, -1, -1, -1, -1 ]
[ 3, -1, 3, -1, 4, -1, -1, -1, -1 ]
[ "nips_2021_yxg-i8DAHK", "KJ2vvI774H9", "nips_2021_yxg-i8DAHK", "X0z_WspMT-n", "nips_2021_yxg-i8DAHK", "gv3YYshhhOs", "V1SdqQc4sKh", "_Dh2VFZe-PS", "uRPSVuxg10X" ]
nips_2021_uXc42E9ZPFs
Language models enable zero-shot prediction of the effects of mutations on protein function
Modeling the effect of sequence variation on function is a fundamental problem for understanding and designing proteins. Since evolution encodes information about function into patterns in protein sequences, unsupervised models of variant effects can be learned from sequence data. The approach to date has been to fit a model to a family of related sequences. The conventional setting is limited, since a new model must be trained for each prediction task. We show that using only zero-shot inference, without any supervision from experimental data or additional training, protein language models capture the functional effects of sequence variation, performing at state-of-the-art.
accept
While evaluating this paper, the reviewers had an extensive discussion about the relative strengths, and in particular what counts as "novelty". While the reviewers did not come to unanimous consensus here, I am swayed by the majority of reviewers who cited the strong empirical rigor in analyzing the behavior of the proposed models, in addition to the careful combination of many small improvements that lead to strong results on an important application area in predicting the functional effects of protein mutations. Indeed, my own view is that the term "novelty" is often overly constrained to mean "a brand new modeling technique" when instead it should be interpreted to mean "something important that adds to the field's overall understanding and knowledge". Indeed, the recent ML literature is filled with important papers that usefully shine new light on previously known modeling techniques or methods. Given the strong reviews from the majority of reviewers, and the acknowledgement of the strong empirical results and careful explication of the rigorous evaluations, I am happy to recommend acceptance of this paper. I do expect that the authors will use the extensive reviewer feedback to further revise and strengthen the paper for its final form.
val
[ "XTIV6V2OKRp", "ML6fLVXDr5", "UR5DbJECm9W", "IsZ3uKybbyw", "JVZlIGC3RTl", "vOOFykCBfvZ", "XqjhjaQUbM", "DUkMONJtmiF", "NSUof4lGhRC", "-XRrfrIlDfT", "wMfthWoW-IM", "eBQ7_ZON7aW", "wEogvsbj7Fo", "zmPQMfLIxxN", "DL1dIpJVHnG", "pVIbW19Z8vu", "JQ7VitCN26x", "sUIrRbGvpcQ", "H4wTMBiyMse...
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_re...
[ "The paper adds a follow-up experiment to a recent paper demonstrating the power of large language models (particularly, BERT and the MSA transformer) for a variety of protein structure and function prediction tasks. The new task (zero-shot scoring of the effects of mutations) has a variety of useful applications f...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 6, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "nips_2021_uXc42E9ZPFs", "wMfthWoW-IM", "-XRrfrIlDfT", "XqjhjaQUbM", "NSUof4lGhRC", "DL1dIpJVHnG", "DUkMONJtmiF", "vOOFykCBfvZ", "JQ7VitCN26x", "zmPQMfLIxxN", "wEogvsbj7Fo", "nips_2021_uXc42E9ZPFs", "XTIV6V2OKRp", "DseSRk94eqb", "nXQoxUSNBDB", "H4wTMBiyMse", "sUIrRbGvpcQ", "nips_20...
nips_2021_uqv8-U4lKBe
Deep Reinforcement Learning at the Edge of the Statistical Precipice
Deep reinforcement learning (RL) algorithms are predominantly evaluated by comparing their relative performance on a large suite of tasks. Most published results on deep RL benchmarks compare point estimates of aggregate performance such as mean and median scores across tasks, ignoring the statistical uncertainty implied by the use of a finite number of training runs. Beginning with the Arcade Learning Environment (ALE), the shift towards computationally-demanding benchmarks has led to the practice of evaluating only a small number of runs per task, exacerbating the statistical uncertainty in point estimates. In this paper, we argue that reliable evaluation in the few run deep RL regime cannot ignore the uncertainty in results without running the risk of slowing down progress in the field. We illustrate this point using a case study on the Atari 100k benchmark, where we find substantial discrepancies between conclusions drawn from point estimates alone versus a more thorough statistical analysis. With the aim of increasing the field's confidence in reported results with a handful of runs, we advocate for reporting interval estimates of aggregate performance and propose performance profiles to account for the variability in results, as well as present more robust and efficient aggregate metrics, such as interquartile mean scores, to achieve small uncertainty in results. Using such statistical tools, we scrutinize performance evaluations of existing algorithms on other widely used RL benchmarks including the ALE, Procgen, and the DeepMind Control Suite, again revealing discrepancies in prior comparisons. Our findings call for a change in how we evaluate performance in deep RL, for which we present a more rigorous evaluation methodology, accompanied with an open-source library rliable, to prevent unreliable results from stagnating the field.
accept
I thank the authors for their submission and active participation in the discussions. The reviewers unanimously agree that this is a very strong paper with a convincing motivation [9zHZ] and thorough experiments [9zHZ,GYfs,9Qic]. It raises an important issue in deep RL research that could significantly help future work [KFpZ,9Qic]. Furthermore, reviewers appreciate the release of open source code and tools that will help future researchers [9zHZ,GYfs] with their evaluation protocol. Overall, this paper presents an important step in more rigorous evaluation of deep RL research. It identified shortcomings of prior evaluation protocols, questioning some of the conclusions made in recent years, as well as providing clear guidance on how to improve evaluation in the future. I agree with the reviewers and strongly recommend acceptance. I also encourage the authors to use the feedback from the reviewers to further improve their paper, in particular its clarity.
train
[ "hZW1kufA_Hb", "S7FsNCyONXS", "7B1GSJZZQKZ", "5CYtv3dj1F", "aWYOVqljJNN", "RJsiabzEvj-", "Wed2GqvI6_T", "4xH37PivD3_", "0fwbtUoFd2J", "oEFwGwzz-C4", "SZmRBxUy23g", "sc2ai_yDHls", "2HrvadMdvy" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " We thank the reviewer again for their suggestions and clarifications. We kindly request the reviewer to raise their confidence in the main review, so that it's clearly visible to the AC.", "This paper discusses the evaluation protocol dominant in deep reinforcment learning (evaluating 3-5 runs across a range of...
[ -1, 9, -1, -1, 7, -1, -1, 8, -1, -1, -1, -1, 7 ]
[ -1, 3, -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "7B1GSJZZQKZ", "nips_2021_uqv8-U4lKBe", "SZmRBxUy23g", "nips_2021_uqv8-U4lKBe", "nips_2021_uqv8-U4lKBe", "0fwbtUoFd2J", "sc2ai_yDHls", "nips_2021_uqv8-U4lKBe", "aWYOVqljJNN", "2HrvadMdvy", "S7FsNCyONXS", "4xH37PivD3_", "nips_2021_uqv8-U4lKBe" ]
nips_2021_sthiz9zeXGG
DRONE: Data-aware Low-rank Compression for Large NLP Models
The representations learned by large-scale NLP models such as BERT have been widely used in various tasks. However, the increasing model size of the pre-trained models also brings efficiency challenges, including inference speed and model size when deploying models on mobile devices. Specifically, most operations in BERT consist of matrix multiplications. These matrices are not low-rank and thus canonical matrix decomposition could not find an efficient approximation. In this paper, we observe that the learned representation of each layer lies in a low-dimensional space. Based on this observation, we propose DRONE (data-aware low-rank compression), a provably optimal low-rank decomposition of weight matrices, which has a simple closed-form solution that can be efficiently computed. DRONE can be applied to both fully connected and self-attention layers appearing in the BERT model. In addition to compressing standard models, our method can also be used on distilled BERT models to further improve the compression rate. Experimental results show that DRONE is able to improve both model size and inference speed with limited loss in accuracy. Specifically, DRONE alone achieves 1.92x speedup on the MRPC task with only 1.5% loss in accuracy, and when DRONE is combined with distillation, it further achieves over 12.3x speedup on various natural language inference tasks.
accept
This paper proposes DRONE, a data-aware low-rank compression algorithm for BERT, achieving a significant speed-up with marginal accuracy loss. The authors show that DRONE can be used orthogonally with knowledge distillation and quantization. In this sense, the paper has clear value to the community. The reviewers raised some concerns on the technical novelty, the precise quantification of the speed-up, and the experimental details. Overall, the authors did a good job in their rebuttal, and some of the reviewers’ concerns were successfully addressed. The reviewers held several rounds of discussion, and the conclusion is that it would be good to accept the paper as a poster.
train
[ "OulEuVl7Ikn", "MHh8-hL51r", "Z6Np7aR4seG", "Gmg8t4k4YHT", "1_aT63YETY0", "kwlc_si1-Je", "5NIG6t8QIKa", "fr1DzWBHkR4", "d8gAOjGP9An", "8kgDtl9HYz9", "Vs3Rd0GLiuX", "ZlndKrXTw8", "_GLV2lFJiBv", "irZu2RLO6XC", "4SOABrBxrY", "Ksenxogp0lt", "gZvlvqXz7b0", "Oeo48UwqICF", "MPII6S80hc_"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "au...
[ "This paper proposes DRONE, a data-aware low-rank model compression method designed specifically for large language models, e.g., BERT. The method is motivated by an observation that the weight matrices in a pre-trained BERT model are generally not low rank, but the data/intermediate results usually lie in a low-ra...
[ 6, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_sthiz9zeXGG", "1_aT63YETY0", "nips_2021_sthiz9zeXGG", "Vs3Rd0GLiuX", "8kgDtl9HYz9", "nips_2021_sthiz9zeXGG", "fr1DzWBHkR4", "d8gAOjGP9An", "ZlndKrXTw8", "1HJRV-ahABd", "irZu2RLO6XC", "_GLV2lFJiBv", "4SOABrBxrY", "MPII6S80hc_", "Ksenxogp0lt", "Oeo48UwqICF", "d_EgaC-Z11K", ...
nips_2021_tKlYQJLYN8v
DSelect-k: Differentiable Selection in the Mixture of Experts with Applications to Multi-Task Learning
The Mixture-of-Experts (MoE) architecture is showing promising results in improving parameter sharing in multi-task learning (MTL) and in scaling high-capacity neural networks. State-of-the-art MoE models use a trainable "sparse gate" to select a subset of the experts for each input example. While conceptually appealing, existing sparse gates, such as Top-k, are not smooth. The lack of smoothness can lead to convergence and statistical performance issues when training with gradient-based methods. In this paper, we develop DSelect-k: a continuously differentiable and sparse gate for MoE, based on a novel binary encoding formulation. The gate can be trained using first-order methods, such as stochastic gradient descent, and offers explicit control over the number of experts to select. We demonstrate the effectiveness of DSelect-k on both synthetic and real MTL datasets with up to 128 tasks. Our experiments indicate that DSelect-k can achieve statistically significant improvements in prediction and expert selection over popular MoE gates. Notably, on a real-world, large-scale recommender system, DSelect-k achieves over 22% improvement in predictive performance compared to Top-k. We provide an open-source implementation of DSelect-k.
accept
All four reviewers agreed that the paper is very well written and easy to read, the idea is novel, and the experimental results are convincing and consistent. In the discussion, there was no disagreement about accepting the paper. As the authors promised, I would like to encourage the authors to include comparisons against Gumbel-softmax-based approaches or some other stochastic subset selection methods. It would be great to prepare text and figures to highlight the shortcomings of these approaches (e.g., the impossibility of per-example gating, etc.).
train
[ "CiIL88EFBr-", "A8xUA8xq6pG", "fX0RHxWJSx", "oa-37L0PO6V", "b_vlzucPFDH", "LcbKsrtScyM", "HStPwn9XWmk", "XzXCisAflLU", "rS2BcGqp9eO", "Y05isTaLxE", "bgAfziNP8o3" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the follow-up response, I keep my positive score (6) for this work.", " Thank you for the discussion and for increasing the score.\n\n1\\. We will emphasize the advantage of the closed-form expressions for the forward/backward passes in the introduction and the related work section. Thank you agai...
[ -1, -1, 6, -1, -1, -1, -1, -1, 6, 6, 9 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "A8xUA8xq6pG", "oa-37L0PO6V", "nips_2021_tKlYQJLYN8v", "XzXCisAflLU", "bgAfziNP8o3", "Y05isTaLxE", "rS2BcGqp9eO", "fX0RHxWJSx", "nips_2021_tKlYQJLYN8v", "nips_2021_tKlYQJLYN8v", "nips_2021_tKlYQJLYN8v" ]
nips_2021_73OmmrCfSyy
Mind the Gap: Assessing Temporal Generalization in Neural Language Models
Our world is open-ended, non-stationary, and constantly evolving; thus what we talk about and how we talk about it change over time. This inherent dynamic nature of language contrasts with the current static language modelling paradigm, which trains and evaluates models on utterances from overlapping time periods. Despite impressive recent progress, we demonstrate that Transformer-XL language models perform worse in the realistic setup of predicting future utterances from beyond their training period, and that model performance becomes increasingly worse with time. We find that, while increasing model size alone—a key driver behind recent progress—does not solve this problem, having models that continually update their knowledge with new information can indeed mitigate this performance degradation over time. Hence, given the compilation of ever-larger language modelling datasets, combined with the growing list of language-model-based NLP applications that require up-to-date factual knowledge about the world, we argue that now is the right time to rethink the static way in which we currently train and evaluate our language models, and develop adaptive language models that can remain up-to-date with respect to our ever-changing and non-stationary world. We publicly release our dynamic, streaming language modelling benchmarks for WMT and arXiv to facilitate language model evaluation that takes temporal dynamics into account.
accept
This paper presents a study of temporal generalization of language models to time periods beyond which they are trained on, demonstrating that current “static” LMs do not generalize well to text from the future and degrade over time. The paper argues for studying dynamic language modeling and presents new benchmarks for the same. All the reviewers and the AC found this to be novel, solid and timely work, especially given the recent proliferation of large-scale language models in a wide variety of real-world applications.
train
[ "N3fdqBH3AI", "gLDbxwnxlOL", "Sf046G4m3Yr", "yjkdfvXvZC-", "2gV_GyTCV42", "zl2zSp44lEL", "r32Krbp88P", "UFkWqSe2irq", "btKOOpGTekG", "VsUR4QJyx7I", "iGhWer3HbUf", "vHf8Dpq2NEh" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Glad to see that the authors continue analyzing the counter-intuitive results from Fig 4 and commit to explaining them in the next version of the paper. ", "Motivated by the fact that in practice models are applied to text generated later than their training data, this paper shows that splitting the datasets in...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, 7, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "UFkWqSe2irq", "nips_2021_73OmmrCfSyy", "r32Krbp88P", "zl2zSp44lEL", "vHf8Dpq2NEh", "iGhWer3HbUf", "gLDbxwnxlOL", "VsUR4QJyx7I", "nips_2021_73OmmrCfSyy", "nips_2021_73OmmrCfSyy", "nips_2021_73OmmrCfSyy", "nips_2021_73OmmrCfSyy" ]
nips_2021_ErNCn2kr1OZ
Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks
Melih Barsbey, Milad Sefidgaran, Murat A. Erdogdu, Gaël Richard, Umut Simsekli
accept
Four reviewers recommend that this paper be accepted. The paper provides results addressing why and when pruning is possible. A concern is that the main result is based on a condition that might be too ideal. Taking the discussion into consideration, I find that the manuscript makes valuable contributions outweighing its limitations. Hence I am recommending the submission for publication. I ask the authors to implement the improvements promised in their responses and to carefully consider the reviewers' comments in the preparation of the final manuscript.
test
[ "VZu0rGGRHgI", "TpQ9-cHn61", "95NYwLVz-0k", "0l4fBCvM3Og", "SXoqkjpV7h", "_or5e5zcLqW", "TsfOvJr8eHs", "2WiBW14mn4", "hX49RyWyya1", "zvXk4c15yec" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. I think clarifying the generalization bound is helpful. I'll keep my score.", " We thank the reviewer for their valuable and detailed feedback. \n\nWhile we agree with the reviewer (and also acknowledge in the paper) that we repeatedly use a seminal result from compressive sensing to d...
[ -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "0l4fBCvM3Og", "zvXk4c15yec", "hX49RyWyya1", "2WiBW14mn4", "TsfOvJr8eHs", "nips_2021_ErNCn2kr1OZ", "nips_2021_ErNCn2kr1OZ", "nips_2021_ErNCn2kr1OZ", "nips_2021_ErNCn2kr1OZ", "nips_2021_ErNCn2kr1OZ" ]
nips_2021_dnDkuSzNh8
Targeted Neural Dynamical Modeling
Latent dynamics models have emerged as powerful tools for modeling and interpreting neural population activity. Recently, there has been a focus on incorporating simultaneously measured behaviour into these models to further disentangle sources of neural variability in their latent space. These approaches, however, are limited in their ability to capture the underlying neural dynamics (e.g. linear) and in their ability to relate the learned dynamics back to the observed behaviour (e.g. no time lag). To this end, we introduce Targeted Neural Dynamical Modeling (TNDM), a nonlinear state-space model that jointly models the neural activity and external behavioural variables. TNDM decomposes neural dynamics into behaviourally relevant and behaviourally irrelevant dynamics; the relevant dynamics are used to reconstruct the behaviour through a flexible linear decoder and both sets of dynamics are used to reconstruct the neural activity through a linear decoder with no time lag. We implement TNDM as a sequential variational autoencoder and validate it on simulated recordings and recordings taken from the premotor and motor cortex of a monkey performing a center-out reaching task. We show that TNDM is able to learn low-dimensional latent dynamics that are highly predictive of behaviour without sacrificing its fit to the neural data.
accept
This paper proposes a new sequential VAE that simultaneously models behavioral and neural time series. The reviewers agree that it provides meaningful contribution to neural data analysis. The discussion among the reviewers and the authors was very constructive and productive, and the overall writing of the final version should reflect the nuances raised through the process and results from the additional experiments.
train
[ "0bSewcXyy7y", "msU2Jv1fZBg", "g_xX8Il0CEi", "xugDbKEiwI5", "0B1MiDMESw", "M1pbUdn9s44", "Y1hBl9EcNrA", "XqlfPzRWuJA", "NLdRQZtHQ3k", "XJAWBXN8hra", "0BfQyqL-CNR", "Z9IxSRiVqpe", "bTNkjsb2Kfo", "IRAT3BEcE_S", "s1QNv0rt8IY", "SmUxzDviWG", "ahTYzQa8bI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I am satisfied with the responses. ", "In this work, the authors propose a state-space model and inference algorithm that aims to recover the latent nonlinear dynamics from sequential data with particular interests in neuroscience. The proposed model inherits the latent factor analy...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ -1, 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "IRAT3BEcE_S", "nips_2021_dnDkuSzNh8", "M1pbUdn9s44", "nips_2021_dnDkuSzNh8", "XJAWBXN8hra", "Y1hBl9EcNrA", "0BfQyqL-CNR", "xugDbKEiwI5", "bTNkjsb2Kfo", "XqlfPzRWuJA", "Z9IxSRiVqpe", "msU2Jv1fZBg", "SmUxzDviWG", "ahTYzQa8bI", "nips_2021_dnDkuSzNh8", "nips_2021_dnDkuSzNh8", "nips_2021...
nips_2021_yhjpeuWepoj
Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation
Domain adaptation (DA) aims to alleviate the domain shift between source domain and target domain. Most DA methods require access to the source data, but often that is not possible (e.g. due to data privacy or intellectual property). In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data. Our method is based on the observation that target data, which might no longer align with the source domain classifier, still forms clear clusters. We capture this intrinsic structure by defining local affinity of the target data, and encourage label consistency among data with high local affinity. We observe that higher affinity should be assigned to reciprocal neighbors, and propose a self regularization loss to decrease the negative impact of noisy neighbors. Furthermore, to aggregate information with more context, we consider expanded neighborhoods with small affinity values. In the experimental results we verify that the inherent structure of the target features is an important source of information for domain adaptation. We demonstrate that this local structure can be efficiently captured by considering the local neighbors, the reciprocal neighbors, and the expanded neighborhood. Finally, we achieve state-of-the-art performance on several 2D image and 3D point cloud recognition datasets. Code is available in https://github.com/Albert0147/SFDA_neighbors.
accept
Following the rebuttal and discussion period, this paper received borderline scores with three leaning towards acceptance and one recommending rejection. All reviewers agreed that this paper proposes a novel algorithm that leverages local structure in a way different from prior work (using nearest neighbor consistency instead of global cluster structure through K-means). Most reviewers found the empirical results to be compelling, though eN4k did have doubts about whether the results would scale (both in terms of performance and complexity) to larger datasets such as DomainNet. However, after considering all reviews, discussion, and rebuttal comments, the AC is inclined to agree that the algorithmic novelty is interesting and sufficiently empirically justified with the existing datasets to warrant publication. The authors should be sure to include the discussion around K-NN vs K-means structure and clarify their claims and the other minor points brought up in their final version. Furthermore, both scalability and how to set the hyperparameters K and M were brought up by multiple reviewers; please add a discussion around these points.
train
[ "7AzoltXHb8t", "tSRw9Hfs3Sm", "1-o18Q-yg2p", "bqGUkQuRF5A", "RMyODYCa6hx", "nYpU-4xYf7Z", "IyvQlG76Spp", "stQW4kJQnoF", "2bYvh_0PsY", "WuRsSdOPTPl", "3M1jQtZeYD_", "ljpqIFPNqb-", "xwvMi73e7kx", "KF5Y6TmFE9P", "1f3evwOrabB", "UelqsZH8Lfe" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The score is corrected.", "This paper proposes a method for source-free domain adaptation (SFDA). While methods for unsupervised domain adaptation access the labeled source domain data during adaptation, SFDA does not allow models to do so. The proposed method utilizes the neighborhood affinity of the target da...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "bqGUkQuRF5A", "nips_2021_yhjpeuWepoj", "nYpU-4xYf7Z", "RMyODYCa6hx", "IyvQlG76Spp", "stQW4kJQnoF", "tSRw9Hfs3Sm", "3M1jQtZeYD_", "xwvMi73e7kx", "ljpqIFPNqb-", "KF5Y6TmFE9P", "UelqsZH8Lfe", "1f3evwOrabB", "nips_2021_yhjpeuWepoj", "nips_2021_yhjpeuWepoj", "nips_2021_yhjpeuWepoj" ]
nips_2021_S9ZyhWC17wJ
Learning with Noisy Correspondence for Cross-modal Matching
Cross-modal matching, which aims to establish the correspondence between two different modalities, is fundamental to a variety of tasks such as cross-modal retrieval and vision-and-language understanding. Although a huge number of cross-modal matching methods have been proposed and achieved remarkable progress in recent years, almost all of these methods implicitly assume that the multimodal training data are correctly aligned. In practice, however, such an assumption is extremely expensive, even impossible, to satisfy. Based on this observation, we reveal and study a latent and challenging direction in cross-modal matching, named noisy correspondence, which could be regarded as a new paradigm of noisy labels. Different from the traditional noisy labels which mainly refer to the errors in category labels, our noisy correspondence refers to mismatched paired samples. To solve this new problem, we propose a novel method for learning with noisy correspondence, named Noisy Correspondence Rectifier (NCR). In brief, NCR divides the data into clean and noisy partitions based on the memorization effect of neural networks and then rectifies the correspondence via an adaptive prediction model in a co-teaching manner. To verify the effectiveness of our method, we conduct experiments by using image-text matching as a showcase. Extensive experiments on Flickr30K, MS-COCO, and Conceptual Captions verify the effectiveness of our method. The code can be accessed from www.pengxi.me.
accept
The paper initially received positive reviews from four reviewers (i.e., 6 7 7 8). In the rebuttal, the authors address the concerns from the reviewers very well (e.g., the comparison with CLIP), and all reviewers then further raise their scores (to 7 8 9 9). After carefully reading the manuscript, comments, and the corresponding response, I agree with the reviewers that this paper makes a significant contribution to the community, namely, revealing a NEW problem (i.e., noisy correspondence/mismatched pairs) in practice for the first time. The work is certainly valuable to the community, including but not limited to cross-modal matching, because the essence of many tasks such as ReID, visual grounding, and VQA is exploring the correspondence of paired data. Overall, this paper has strong motivation, a technically sound approach, extensive experiments, and good writing.
train
[ "YsOotnKbUjN", "-Rx4LcKEQZ", "f1WUuq1Vr07", "na_ViLxWkJv", "FMik58fiqj0", "3V07qj7Xg7", "_-zZvGcZIX", "Xpv8JvGZQF4", "5kz50s6sn-q", "_W_RUfawNo0", "diZmRfjK6CI", "Vzn6iGitADC", "VtdOrmDMFBn", "Cw9xI8n03IA", "B3Fii9wWEGl", "My0HLAFpTd" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ " We sincerely thank you for your positive recognition and evaluation of our work and will correct the typos in the next version.", " Thanks for your positive comments. We would consider adding more discussion to clarify the roles of the proposed losses and soft margins in the next submission. ", " Thanks for y...
[ -1, -1, -1, -1, -1, 8, 9, 7, -1, -1, 9, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 5, 5, 5, -1, -1, 5, -1, -1, -1, -1, -1 ]
[ "5kz50s6sn-q", "_W_RUfawNo0", "FMik58fiqj0", "Xpv8JvGZQF4", "Cw9xI8n03IA", "nips_2021_S9ZyhWC17wJ", "nips_2021_S9ZyhWC17wJ", "nips_2021_S9ZyhWC17wJ", "Vzn6iGitADC", "diZmRfjK6CI", "nips_2021_S9ZyhWC17wJ", "diZmRfjK6CI", "_-zZvGcZIX", "3V07qj7Xg7", "My0HLAFpTd", "Xpv8JvGZQF4" ]
nips_2021_fL9_f9hIzaZ
Offline Reinforcement Learning with Reverse Model-based Imagination
In offline reinforcement learning (offline RL), one of the main challenges is to deal with the distributional shift between the learning policy and the given dataset. To address this problem, recent offline RL methods attempt to introduce conservatism bias to encourage learning in high-confidence areas. Model-free approaches directly encode such bias into policy or value function learning using conservative regularizations or special network structures, but their constrained policy search limits the generalization beyond the offline dataset. Model-based approaches learn forward dynamics models with conservatism quantifications and then generate imaginary trajectories to extend the offline datasets. However, due to limited samples in offline datasets, conservatism quantifications often suffer from overgeneralization in out-of-support regions. The unreliable conservative measures will mislead forward model-based imaginations to undesired areas, leading to overaggressive behaviors. To encourage more conservatism, we propose a novel model-based offline RL framework, called Reverse Offline Model-based Imagination (ROMI). We learn a reverse dynamics model in conjunction with a novel reverse policy, which can generate rollouts leading to the target goal states within the offline dataset. These reverse imaginations provide informed data augmentation for model-free policy learning and enable conservative generalization beyond the offline dataset. ROMI can effectively combine with off-the-shelf model-free algorithms to enable model-based generalization with proper conservatism. Empirical results show that our method can generate more conservative behaviors and achieve state-of-the-art performance on offline RL benchmark tasks.
accept
This paper presents an offline RL method based on reverse models. The contributions are original, and while further analysis would be helpful in rigorously understanding how reverse models are helping, the empirical analysis convincingly shows that the improvements do not simply come from better model fitting. The empirical results are quite strong, including on challenging offline RL benchmarks. I recommend acceptance. Please see the reviews and other comments on this page for further feedback when preparing the final version of the paper!
train
[ "kbsd-0Xxrrj", "QaU98XN4RXI", "pHnLK5YV7_Q", "XnUBhtFJpnx", "MmWkAX0f3J_", "N3sE1CE7PqO", "U7cRgE2Ya_h", "uR8XHgUpb_v", "2283-Sz-eRc", "8gz63EVY-gA", "ZYpMRYbPQYy", "ndKDsU0dLoH", "JskskBCPTjO", "tBdw1iUfsJ6", "hpJtXr2L97g", "vtCeCZTdjS", "nkUupyOqVwd", "UB5qb-bM_EK", "Jqij8paK2l...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " We thank the reviewer very much for the inspiring feedback! We really appreciate your comments and will incorporate empirical evidence of the rebuttal into the next revision.", " I thank the authors for providing additional empirical evidence in support of the idea that reverse models lead to more conservatism,...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "QaU98XN4RXI", "pHnLK5YV7_Q", "UB5qb-bM_EK", "MmWkAX0f3J_", "tBdw1iUfsJ6", "U7cRgE2Ya_h", "uR8XHgUpb_v", "2283-Sz-eRc", "ZYpMRYbPQYy", "ZYpMRYbPQYy", "nkUupyOqVwd", "hpJtXr2L97g", "YbuGZVABZ4", "Jqij8paK2l5", "JskskBCPTjO", "nips_2021_fL9_f9hIzaZ", "nips_2021_fL9_f9hIzaZ", "nips_20...
nips_2021_vqHak8NLk25
Parameter Prediction for Unseen Deep Architectures
Deep learning has been successful in automating the design of features in machine learning pipelines. However, the algorithms optimizing neural network parameters remain largely hand-designed and computationally inefficient. We study if we can use deep learning to directly predict these parameters by exploiting the past knowledge of training other networks. We introduce a large-scale dataset of diverse computational graphs of neural architectures - DeepNets-1M - and use it to explore parameter prediction on CIFAR-10 and ImageNet. By leveraging advances in graph neural networks, we propose a hypernetwork that can predict performant parameters in a single forward pass taking a fraction of a second, even on a CPU. The proposed model achieves surprisingly good performance on unseen and diverse networks. For example, it is able to predict all 24 million parameters of a ResNet-50 achieving a 60% accuracy on CIFAR-10. On ImageNet, top-5 accuracy of some of our networks approaches 50%. Our task along with the model and results can potentially lead to a new, more computationally efficient paradigm of training networks. Our model also learns a strong representation of neural architectures enabling their analysis.
accept
This paper presents a technique for prediction of parameters in deep architectures. After the initial reviews, the reviewers were concerned about 1) performance in the low-data regime, 2) training cost of GHNs, 3) relevance/utility for NAS application, and 4) improvement via finetuning. The authors clarified 1) and 4) well. 2) is high (as one would expect) but still reasonable, considering that predictions for new architectures can be quickly generated after initial training and that practitioners often adopt even more expensive approaches, such as OFA. The advantage of the OFA framework is that each architecture can be trained in parallel on a cluster, but then again, only a finite set of architectures can be trained this way, and the cost increases linearly with the number of architectures. 3) was briefly discussed in the paper and is relatively strong for the DARTS search space on CIFAR-10, although a comparison to standard/broader NAS benchmarks would have made a stronger point here.
train
[ "FVek9-QYJD3", "e_a4IcKFu-j", "repBJtrMeVX", "inYtvK7T6vi", "UeGkw3tpA8r", "imvePD0Rd2Q", "5X_q6jcK5TP", "iZEPMnBYQvv", "M-QiOsHT5GI", "dHHpJa4HcQ", "aYSG56yaOf", "pPiPLXjHbq", "LXrRMkC6wCp", "Aeq4qJ0cmv4", "kBWZw-uneL" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We first thank the reviewers for the additional discussion and increasing the scores.\n\nWe have cleaned up our code and confirmed the reproducibility of our results. We are planning to release the code soon, once the usability testing is complete.\n\nAs a toy working example of using our code, below is the code ...
[ -1, -1, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_vqHak8NLk25", "M-QiOsHT5GI", "nips_2021_vqHak8NLk25", "nips_2021_vqHak8NLk25", "LXrRMkC6wCp", "iZEPMnBYQvv", "nips_2021_vqHak8NLk25", "5X_q6jcK5TP", "repBJtrMeVX", "5X_q6jcK5TP", "kBWZw-uneL", "Aeq4qJ0cmv4", "inYtvK7T6vi", "nips_2021_vqHak8NLk25", "nips_2021_vqHak8NLk25" ]
nips_2021_YTKwvw7XI1
FMMformer: Efficient and Flexible Transformer via Decomposed Near-field and Far-field Attention
Tan Nguyen, Vai Suliafu, Stanley Osher, Long Chen, Bao Wang
accept
This paper proposed a fast and memory-efficient transformer approximation, inspired by the fast multipole method (FMM), which combines sparse and low-rank approximations. The method is well motivated and easy to follow. The authors demonstrated the benefits of the proposed neural architecture empirically on synthetic data, the LRA benchmark, and language modeling on WikiText-103. There are several existing works on combining sparse transformers with low-rank transformers, as pointed out by the reviewers (pp7v and MHtR); however, since these are concurrent work, the novelty of the submission should not be discounted. On the contrary, it actually strengthens the significance of the method. I personally like Sec 2.1, which motivates the FMM. The major criticism from most of the reviewers lies in the empirical part, which makes the claims of the paper relatively weak. In my opinion, these suggestions are reasonable. For example, a comprehensive comparison with more baselines, a detailed ablation study on the trade-off between accuracy, speed, and memory vs. architecture parameters, and more practical tasks on real-world benchmarks should be included. The authors' rebuttal addressed some of them, but some are still missing. In sum, the idea and new architecture proposed by this paper are novel and well motivated; however, the benefits of this architecture are not well empirically justified. I hope that in the final version, the authors can take the reviewers' comments into account to address the weaknesses of the paper.
train
[ "N7nuMdi_jOx", "C6ykZ1O9no-", "wbn0dtshMBm", "_kEEO8WPQ4N", "LaixDHkM8SB", "nkcfeoNtA-", "8IIpmUh7FuK", "FgvGixwJNJZ", "29uWj9WKxA", "KldT6RXMr6", "nQHU0Y3VReQ", "jbJ0YtX_J7O" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your feedback.", " Dear authors,\n\nThank you for the additional responses. To clarify where I stand after the rebuttal: while you provided a response to the different points that were raised and alleviated some of the concerns on the experiment design, my overall appreciation of the paper remains un...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, 4, 4, 5 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, 4, 4, 5 ]
[ "C6ykZ1O9no-", "wbn0dtshMBm", "_kEEO8WPQ4N", "8IIpmUh7FuK", "nips_2021_YTKwvw7XI1", "nQHU0Y3VReQ", "KldT6RXMr6", "jbJ0YtX_J7O", "LaixDHkM8SB", "nips_2021_YTKwvw7XI1", "nips_2021_YTKwvw7XI1", "nips_2021_YTKwvw7XI1" ]
nips_2021_NNZ0caVe2ak
Square Root Principal Component Pursuit: Tuning-Free Noisy Robust Matrix Recovery
We propose a new framework -- Square Root Principal Component Pursuit -- for low-rank matrix recovery from observations corrupted with noise and outliers. Inspired by the square root Lasso, this new formulation does not require prior knowledge of the noise level. We show that a single, universal choice of the regularization parameter suffices to achieve reconstruction error proportional to the (a priori unknown) noise level. In comparison, previous formulations such as stable PCP rely on noise-dependent parameters to achieve similar performance, and are therefore challenging to deploy in applications where the noise level is unknown. We validate the effectiveness of our new method through experiments on simulated and real datasets. Our simulations corroborate the claim that a universal choice of the regularization parameter yields near optimal performance across a range of noise levels, indicating that the proposed method outperforms the (somewhat loose) bound proved here.
accept
The paper studies the low-rank matrix recovery problem from observations corrupted by noise and sparse outliers. It proposes a framework called square root principal component pursuit, which is a combination of stable principal component pursuit and the square root Lasso. The proposed technique has the advantage that the choice of the regularization parameter is noise-independent, so it is easier to deploy in applications where the noise level is unknown. The main concern is about the estimation error bound obtained in the paper, which is sub-optimal by a dimension-dependent factor. More explanation of why this sub-optimality occurs compared to prior works should be provided.
train
[ "sck-ALhmP2p", "daqgZVwMXnO", "D6nOr9WNlXa", "npD01KBlXXi", "yX_kMQ6ZDiI", "InPcaNticUY", "eIFp7iU58PA", "dyMesJn1AdT", "-kFVWa73Zhw" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Low-rank matrix recovery in the presence of sparse corruptions and random noise, under incoherence assumptions on the low-rank and on the sparse components, have been considered. Following an existing line of work, a tuning-free optimization programs has been proposed for demixing and denoising. The analysis, util...
[ 5, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_NNZ0caVe2ak", "nips_2021_NNZ0caVe2ak", "-kFVWa73Zhw", "sck-ALhmP2p", "dyMesJn1AdT", "eIFp7iU58PA", "nips_2021_NNZ0caVe2ak", "nips_2021_NNZ0caVe2ak", "nips_2021_NNZ0caVe2ak" ]
nips_2021_DEsIX_D_vR
Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction
Link prediction is a very fundamental task on graphs. Inspired by traditional path-based methods, in this paper we propose a general and flexible representation learning framework based on paths for link prediction. Specifically, we define the representation of a pair of nodes as the generalized sum of all path representations, with each path representation as the generalized product of the edge representations in the path. Motivated by the Bellman-Ford algorithm for solving the shortest path problem, we show that the proposed path formulation can be efficiently solved by the generalized Bellman-Ford algorithm. To further improve the capacity of the path formulation, we propose the Neural Bellman-Ford Network (NBFNet), a general graph neural network framework that solves the path formulation with learned operators in the generalized Bellman-Ford algorithm. The NBFNet parameterizes the generalized Bellman-Ford algorithm with 3 neural components, namely the Indicator, Message, and Aggregate functions, which correspond to the boundary condition, multiplication operator, and summation operator, respectively. The NBFNet covers many traditional path-based methods, and can be applied to both homogeneous graphs and multi-relational graphs (e.g., knowledge graphs) in both transductive and inductive settings. Experiments on both homogeneous graphs and knowledge graphs show that the proposed NBFNet outperforms existing methods by a large margin in both transductive and inductive settings, achieving new state-of-the-art results.
accept
This paper proposed NBFNet a framework based on GNNs for representation learning tasks on graphs. All the reviewers agreed that the paper presents quite impressive empirical results and the proposed method is novel. There was also a general agreement amongst the reviewers that the paper seemed to be written in a rush and hence clarity was an issue. After taking into account the authors' responses it was decided that the paper is marginally above the acceptance threshold. I recommend acceptance and strongly urge the authors to improve the final camera ready version and take the reviewers' comments into account.
train
[ "mE2YLOeW9Mg", "YtTa8r8yGSB", "TZcU7PebxdF", "GLGXg-44cPU", "FocWc8PpA-c", "FFGPO35xN9Z", "5xtvTLT6eOj", "HrTsThYX6aw", "dQWXOyY_o-L", "PbN5AtCrNcM", "T2HBy9fAPuL", "ZXmEwxyhiAZ", "MTFEVFL51tj", "Ay97axZmq2Y", "xjrwjzI0ji6", "c6px47BL6y", "3xsrfU1xlKE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "Summary: In this study the authors propose a general framework to learn neural INDICATOR, MESSAGE and AGGREGATE functions for the generalized Bellman-Ford (BF) algorithm. They apply the neural BF to link prediction problems in multirelational knowledge graphs in the in-sample setting as well as in the out-of-sampl...
[ 6, 6, -1, -1, 6, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, -1, -1, 4, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_DEsIX_D_vR", "nips_2021_DEsIX_D_vR", "xjrwjzI0ji6", "dQWXOyY_o-L", "nips_2021_DEsIX_D_vR", "T2HBy9fAPuL", "YtTa8r8yGSB", "FocWc8PpA-c", "mE2YLOeW9Mg", "ZXmEwxyhiAZ", "nips_2021_DEsIX_D_vR", "3xsrfU1xlKE", "nips_2021_DEsIX_D_vR", "T2HBy9fAPuL", "YtTa8r8yGSB", "FocWc8PpA-c", ...
nips_2021_wD6GWHqLuhS
CorticalFlow: A Diffeomorphic Mesh Transformer Network for Cortical Surface Reconstruction
In this paper, we introduce CorticalFlow, a new geometric deep-learning model that, given a 3-dimensional image, learns to deform a reference template towards a targeted object. To conserve the template mesh’s topological properties, we train our model over a set of diffeomorphic transformations. This new implementation of a flow Ordinary Differential Equation (ODE) framework benefits from a small GPU memory footprint, allowing the generation of surfaces with several hundred thousand vertices. To reduce topological errors introduced by its discrete resolution, we derive numeric conditions which improve the manifoldness of the predicted triangle mesh. To exhibit the utility of CorticalFlow, we demonstrate its performance for the challenging task of brain cortical surface reconstruction. In contrast to the current state-of-the-art, CorticalFlow produces superior surfaces while reducing the computation time from nine and a half minutes to one second. More significantly, CorticalFlow enforces the generation of anatomically plausible surfaces; the absence of which has been a major impediment restricting the clinical relevance of such surface reconstruction methods.
accept
The paper presents a geometric deep learning model that learns to diffeomorphically deform a regular template mesh towards a targeted object. The paper was found interesting from a methods perspective and it led to impressive results. The methodology is strong and elegant, making a strong case for a "deep diffeomorphic" approach. The experimental baselines are well chosen. The only controversy concerned the state of the art, which may not have been sufficiently well acknowledged. Part of the contributions are grounded in a set of now classical contributions on LDDMM, which was not made clear in the writing. The authors however received this comment well and should address it. In summary, this is a clear accept, but the updated version should dedicate more space to acknowledging prior work.
train
[ "-TuqLcwubky", "wkeZteL_3WN", "bzS2TP486-", "rUJa_xtCO_u", "3QJuZ_FNaB6", "yPC0A4vD84", "wjoo3A4tV3Y", "rUHhgPpqs_j", "ghMDQuzFyCz", "0xk5pRBx4P2", "nzQl68PmcS" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors wish to thank the reviewer for his answer and supportive comments.\n\nWe are aware of this literature and we will refocus our statement in the final version of our paper. In our last comment we were referring to the deep-learning-based approaches and their pervasive use of the scaling and squaring met...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "bzS2TP486-", "bzS2TP486-", "rUJa_xtCO_u", "rUHhgPpqs_j", "ghMDQuzFyCz", "nzQl68PmcS", "0xk5pRBx4P2", "nips_2021_wD6GWHqLuhS", "nips_2021_wD6GWHqLuhS", "nips_2021_wD6GWHqLuhS", "nips_2021_wD6GWHqLuhS" ]
nips_2021_4S4nbt-rD6
Bridging the Gap Between Practice and PAC-Bayes Theory in Few-Shot Meta-Learning
Despite recent advances in its theoretical understanding, there still remains a significant gap in the ability of existing PAC-Bayesian theories on meta-learning to explain performance improvements in the few-shot learning setting, where the number of training examples in the target tasks is severely limited. This gap originates from an assumption in the existing theories which supposes that the number of training examples in the observed tasks and the number of training examples in the target tasks follow the same distribution, an assumption that rarely holds in practice. By relaxing this assumption, we develop two PAC-Bayesian bounds tailored for the few-shot learning setting and show that two existing meta-learning algorithms (MAML and Reptile) can be derived from our bounds, thereby bridging the gap between practice and PAC-Bayesian theories. Furthermore, we derive a new computationally-efficient PACMAML algorithm, and show it outperforms existing meta-learning algorithms on several few-shot benchmark datasets.
accept
All reviewers and I agree on the interest of the topic for the NeurIPS community, on the significance of the results, and on the clarity of the contributions. The discussion phase allowed the few concerns expressed in the reviews to be satisfactorily addressed. As pointed out by a reviewer, the submission provides valuable insights on meta-learning generalisation bounds when the base learner has access to small amounts of data, which in some cases leads to tighter bounds. This observation helps to mitigate the shift in the distribution over the number of samples.
train
[ "_UYzLyUkueP", "vfz1J303HrA", "95R7MYXFgw0", "wEzcDG0BCxt", "CzIDovap0gZ", "n39Cum_boea", "5AcQTIVPj8P" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the Few-shot meta-learning setting according to PAC-Bayes theory. The authors focus on a context where the number of examples from source tasks and the target ones do not follow the same distribution. The authors develop two PAC-Bayes bounds tailored for the few-shot learning setting. Additional...
[ 7, -1, 8, -1, -1, -1, 8 ]
[ 4, -1, 3, -1, -1, -1, 2 ]
[ "nips_2021_4S4nbt-rD6", "wEzcDG0BCxt", "nips_2021_4S4nbt-rD6", "95R7MYXFgw0", "_UYzLyUkueP", "5AcQTIVPj8P", "nips_2021_4S4nbt-rD6" ]
nips_2021_KpKWDyXq17d
SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression
Steve Yadlowsky, Taedong Yun, Cory McLean, Alexander D'Amour
accept
Following the discussion phase, the reviewers and I have reached a consensus on this submission, and all agree on its interest to the NeurIPS community, its significance, and its potential impact in disseminating earlier ideas to a broader group of practitioners through an astute algorithm. To quote from the discussion with the reviewers: the submission builds on the solid foundations of Sur and Candès, and delivers non-trivial theoretical insights on a new (provably correct) algorithm, which we anticipate will be impactful for practitioners of logistic regression in high-dimensional settings.
train
[ "8wqCqxMyABf", "u0A9mv1UGzP", "gULwWXI5hdl", "GE6Gc2K_wck", "qxKjrtsXlWS", "1AIkw-wZP0C", "qMTOeUwvIv", "MYZ8dhUw-Dc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Building off recent work by Sur & Candes (2019), this paper proposes a method for computing point estimates and well-calibrated confidence intervals for high-dimensional logistic regression (specifically, when the ratio $d/n$ converges to a constant as $n \\to \\infty$). \n Originality: The method builds on the w...
[ 9, 6, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_KpKWDyXq17d", "nips_2021_KpKWDyXq17d", "GE6Gc2K_wck", "MYZ8dhUw-Dc", "u0A9mv1UGzP", "8wqCqxMyABf", "nips_2021_KpKWDyXq17d", "nips_2021_KpKWDyXq17d" ]
nips_2021_VvUldGZ3izR
ELLA: Exploration through Learned Language Abstraction
Building agents capable of understanding language instructions is critical to effective and robust human-AI collaboration. Recent work focuses on training these agents via reinforcement learning in environments with synthetic language; however, instructions often define long-horizon, sparse-reward tasks, and learning policies requires many episodes of experience. We introduce ELLA: Exploration through Learned Language Abstraction, a reward shaping approach geared towards boosting sample efficiency in sparse reward environments by correlating high-level instructions with simpler low-level constituents. ELLA has two key elements: 1) A termination classifier that identifies when agents complete low-level instructions, and 2) A relevance classifier that correlates low-level instructions with success on high-level tasks. We learn the termination classifier offline from pairs of instructions and terminal states. Notably, in departure from prior work in language and abstraction, we learn the relevance classifier online, without relying on an explicit decomposition of high-level instructions to low-level instructions. On a suite of complex BabyAI environments with varying instruction complexities and reward sparsity, ELLA shows gains in sample efficiency relative to language-based shaping and traditional RL methods.
accept
The paper describes an approach to training an agent to follow instructions for tasks composed of multiple low-level subtasks. The method uses reward shaping, whereby the agent receives supplementary rewards when subtasks are achieved. The shaped rewards are determined using two classifiers: a "termination classifier" that identifies the completion of a low-level subtask, and a "relevance classifier" that assesses the relationship between low-level tasks and the success of the high-level task. The latter is learned online without the need to break high-level instructions into their lower-level components. Experimental results on the BabyAI task and a grid-world task demonstrate gains in sample efficiency relative to recent baselines. The reviewers agree that the paper considers an important and challenging domain. The problem of understanding natural language instructions has long been the focus of AI and robotics research, and has recently received renewed attention within the broader machine learning community. The idea of exploiting the hierarchical nature of the tasks as a means of encouraging exploration through reward shaping is reasonable and well motivated. The reviewers point out that the approach relies on several strong assumptions (e.g., involving the existence of terminal states and access to low-level instructions). Two of the reviewers also raise concerns about the baselines and the amount of information that they are provided with compared to ELLA. The author response helped to resolve some of these concerns, but the authors are encouraged to ensure that they are addressed in any subsequent version of the paper.
val
[ "xGPorZGYUYA", "xOOUmwHwQL", "ga7PyArL0kq", "8tNRl0LaUSK", "m4PXfW8bEXD", "NqkIyeMNOhI", "LFivwAcHvZZ", "lm23guyDcse", "OYRVunmwM6X", "bevRD4bcauR", "izUTcy63rYN", "kAtWO_gScd" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " We understand that this review format is taxing, so we've strived to crisply address all the points in your review. We hope that if there are any remaining questions or additional feedback you have, we can work together to answer them, and make this paper the best it can be!", "The paper present ELLA, a three-p...
[ -1, 7, -1, -1, -1, 6, -1, -1, -1, -1, -1, 5 ]
[ -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, 2 ]
[ "m4PXfW8bEXD", "nips_2021_VvUldGZ3izR", "OYRVunmwM6X", "LFivwAcHvZZ", "izUTcy63rYN", "nips_2021_VvUldGZ3izR", "bevRD4bcauR", "nips_2021_VvUldGZ3izR", "xOOUmwHwQL", "NqkIyeMNOhI", "kAtWO_gScd", "nips_2021_VvUldGZ3izR" ]
nips_2021_ZRcjSOmYraB
Learning Distilled Collaboration Graph for Multi-Agent Perception
To promote better performance-bandwidth trade-off for multi-agent perception, we propose a novel distilled collaboration graph (DiscoGraph) to model trainable, pose-aware, and adaptive collaboration among agents. Our key novelties lie in two aspects. First, we propose a teacher-student framework to train DiscoGraph via knowledge distillation. The teacher model employs an early collaboration with holistic-view inputs; the student model is based on intermediate collaboration with single-view inputs. Our framework trains DiscoGraph by constraining post-collaboration feature maps in the student model to match the correspondences in the teacher model. Second, we propose a matrix-valued edge weight in DiscoGraph. In such a matrix, each element reflects the inter-agent attention at a specific spatial region, allowing an agent to adaptively highlight the informative regions. During inference, we only need to use the student model named as the distilled collaboration network (DiscoNet). Attributed to the teacher-student framework, multiple agents with the shared DiscoNet could collaboratively approach the performance of a hypothetical teacher model with a holistic view. Our approach is validated on V2X-Sim 1.0, a large-scale multi-agent perception dataset that we synthesized using CARLA and SUMO co-simulation. Our quantitative and qualitative experiments in multi-agent 3D object detection show that DiscoNet could not only achieve a better performance-bandwidth trade-off than the state-of-the-art collaborative perception methods, but also bring more straightforward design rationale. Our code is available on https://github.com/ai4ce/DiscoNet.
accept
This paper proposes a new architecture for the collaborative multi-agent perception problem. The core idea is to train a (privileged) teacher which receives a holistic-view point cloud from all agents and to distill the teacher's feature map to individual agents, encouraging each agent to incorporate the other agents' information without direct access to it. The paper also introduces a large-scale multi-agent 3D object detection dataset and shows that the proposed method outperforms the baselines with much less communication. All of the reviewers agreed that the proposed architecture is reasonable and novel, and the overall results look good as well. In addition, the new dataset introduced by the paper would be valuable to the multi-agent perception community. At the same time, there was a concern that the main compression benefit comes from the 1x1 convolution rather than the main idea of the paper. However, the authors promised to revise the paper so that the contribution of each idea becomes clear, and the reviewers acknowledged that the 1x1 convolution is also a part of the proposed architecture. Another concern was that it was not entirely clear why the proposed method is not evaluated on the 2D datasets on which the baselines were evaluated. The authors clarified that the proposed architecture is specifically designed for 3D perception, though some of the ideas could be generally applicable. I encourage the authors to clarify this in the camera-ready version. Assuming that the authors will reflect these comments, I recommend accepting the paper.
train
[ "6WMjJJ40Iy", "ykGdadesAsp", "mFXiITAjPaw", "gJ57fPFZfMz", "Xo5KRjLNHRp", "ZeZeAbuT4Bh", "FB376Yi3kCC", "f-QUe0fKpAl", "WuLmztHVI6u", "vY2HQ94ehvl", "VJrrZkwST-Y", "fR5jNDjXgM" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ### 1. Design rationale of matrix-valued edge weight.\n In the proposed collaboration graph, each edge models the collaboration between two agents and the corresponding edge weight reflects the attention level of such a pairwise collaboration. *Since each agent has its own spatial visibility due to occlusion or l...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "ykGdadesAsp", "WuLmztHVI6u", "Xo5KRjLNHRp", "nips_2021_ZRcjSOmYraB", "ZeZeAbuT4Bh", "gJ57fPFZfMz", "vY2HQ94ehvl", "VJrrZkwST-Y", "fR5jNDjXgM", "nips_2021_ZRcjSOmYraB", "nips_2021_ZRcjSOmYraB", "nips_2021_ZRcjSOmYraB" ]
nips_2021_KSLNajziJeA
Federated-EM with heterogeneity mitigation and variance reduction
The Expectation Maximization (EM) algorithm is the default algorithm for inference in latent variable models. As in any other field of machine learning, applications of latent variable models to very large datasets make the use of advanced parallel and distributed architecture mandatory. This paper introduces FedEM, which is the first extension of the EM algorithm to the federated learning context. FedEM is a new communication efficient method, which handles partial participation of local devices, and is robust to heterogeneous distribution of the datasets. To alleviate the communication bottleneck, FedEM compresses appropriately defined complete data sufficient statistics. We also develop and analyze an extension of FedEM to further incorporate a variance reduction scheme. In all cases, we derive finite-time complexity bounds for smooth non-convex problems. Numerical results are presented to support our theoretical findings, as well as an application to federated missing values imputation for biodiversity monitoring.
accept
The reviewers appreciate the incorporation of the EM algorithm into federated learning for dealing with data heterogeneity. The paper makes good theoretical contributions but also has weaknesses in its experiments. Overall, I am in favor of acceptance. Please incorporate the new results in the final version, and please revise the paper based on the reviews and rebuttals.
train
[ "IVA2jmfgLJb", "OwNRkZet7N", "yLNsaH6DxOi", "kT4qbzLucs", "Gq_g6F81Uv7", "9rB5dtitBZZ", "D1uOq6HX0Qo", "CtqvtrOwlm_", "UGi1zXgt3Pu", "hSf3uVdDw4g", "C2RMurSdWc0", "kEQQQ50NMu0" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " We would like to thank you for taking the time to consider our rebuttal and for updating your score. \n\nRegarding the data-heterogeneity setting in the paper, this point has been clarified in the revised version of the paper, and we would like to summarise some aspects of our approach. \nOur paper copes with da...
[ -1, -1, 8, 5, -1, 7, -1, -1, -1, -1, -1, 3 ]
[ -1, -1, 1, 5, -1, 3, -1, -1, -1, -1, -1, 4 ]
[ "Gq_g6F81Uv7", "C2RMurSdWc0", "nips_2021_KSLNajziJeA", "nips_2021_KSLNajziJeA", "CtqvtrOwlm_", "nips_2021_KSLNajziJeA", "hSf3uVdDw4g", "kT4qbzLucs", "kEQQQ50NMu0", "9rB5dtitBZZ", "yLNsaH6DxOi", "nips_2021_KSLNajziJeA" ]
nips_2021_32eyjxaRxp
On the Role of Optimization in Double Descent: A Least Squares Study
Empirically it has been observed that the performance of deep neural networks steadily improves as we increase model size, contradicting the classical view on overfitting and generalization. Recently, the double descent phenomenon has been proposed to reconcile this observation with theory, suggesting that the test error has a second descent when the model becomes sufficiently overparametrized, as the model size itself acts as an implicit regularizer. In this paper we add to the growing body of work in this space, providing a careful study of learning dynamics as a function of model size for the least squares scenario. We show an excess risk bound for the gradient descent solution of the least squares objective. The bound depends on the smallest non-zero eigenvalue of the sample covariance matrix of the input features, via a functional form that has the double descent behaviour. This gives a new perspective on the double descent curves reported in the literature, as our analysis of the excess risk allows us to decouple the effect of optimization and generalization error. In particular, we find that in the case of noiseless regression, double descent is explained solely by optimization-related quantities, which was missed in studies focusing on the Moore-Penrose pseudoinverse solution. We believe that our derivation provides an alternative view compared to existing work, shedding some light on a possible cause of this phenomenon, at least in the considered least squares setting. We empirically explore if our predictions hold for neural networks, in particular whether the spectrum of the sample covariance of intermediary hidden layers has a similar behaviour as the one predicted by our derivations.
accept
This paper considers the double descent phenomenon for least-squares regression with gradient descent optimization, showing a double descent behavior for the excess risk, and relating this peak around interpolation to the learning dynamics of the optimization algorithm. All reviewers agreed that the paper makes a useful contribution to the community.
train
[ "m3ojRpvNEoi", "obOgFBshpho", "OkeJwCgQFeY", "N-Tul0PdaU", "XkAGYGpWqa", "k22K5GLazR", "HAooNRjyKjA" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper deals with the double descent phenomenon in the setting of least-squares regression by gradient descent. Similarly to a number of recent works, it shows a double descent behavior for the excess risk: a U shaped curve until the number of features coincides with the number of datapoints. The main contribu...
[ 8, 7, -1, -1, -1, -1, 7 ]
[ 4, 2, -1, -1, -1, -1, 3 ]
[ "nips_2021_32eyjxaRxp", "nips_2021_32eyjxaRxp", "m3ojRpvNEoi", "obOgFBshpho", "HAooNRjyKjA", "nips_2021_32eyjxaRxp", "nips_2021_32eyjxaRxp" ]
nips_2021_NO_cSsVghGb
Neural Architecture Dilation for Adversarial Robustness
With the tremendous advances in the architecture and scale of convolutional neural networks (CNNs) over the past few decades, they can easily reach or even exceed the performance of humans in certain tasks. However, a recently discovered shortcoming of CNNs is that they are vulnerable to adversarial attacks. Although the adversarial robustness of CNNs can be improved by adversarial training, there is a trade-off between standard accuracy and adversarial robustness. From the neural architecture perspective, this paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy. Under a minimal computational overhead, the introduction of a dilation architecture is expected to be friendly with the standard performance of the backbone CNN while pursuing adversarial robustness. Theoretical analyses on the standard and adversarial error bounds naturally motivate the proposed neural architecture dilation algorithm. Experimental results on real-world datasets and benchmark neural networks demonstrate the effectiveness of the proposed algorithm to balance the accuracy and adversarial robustness.
accept
The tradeoff between natural accuracy and adversarial accuracy is critical in adversarial training, and this paper proposes a quite different point of view: instead of neural architecture compression, *neural architecture dilation* may be more appropriate for achieving the goal. I personally think the new point of view may open the door to a new world. While there were some concerns at the beginning, the authors did a particularly good job in their rebuttal. In the end, three reviewers hold positive opinions and one reviewer is fine with either acceptance or rejection. Thus, I think the paper should be accepted for publication.
train
[ "4Gb6QbfEvN", "f6YI7BJ4sa7", "YQhGnjLErzW", "23YNZ0NxBG8", "16eMBEC0cL", "ggc5ZDPgWs6", "POpTu2D8c3R", "3nuQfaHf3XT" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors proposed NADAR to balance the natural accuracy and adversarial robustness by dilating the standard trained model. Specifically, given a backbone trained on the standard dataset, the authors added some cells as the dilation network to enhance the adversarial robustness with adversarial tr...
[ 6, -1, -1, -1, -1, 6, 5, 7 ]
[ 4, -1, -1, -1, -1, 4, 5, 4 ]
[ "nips_2021_NO_cSsVghGb", "POpTu2D8c3R", "4Gb6QbfEvN", "ggc5ZDPgWs6", "3nuQfaHf3XT", "nips_2021_NO_cSsVghGb", "nips_2021_NO_cSsVghGb", "nips_2021_NO_cSsVghGb" ]
nips_2021_Xhj3PdCf4q9
Clustering Effect of Adversarial Robust Models
Adversarial robustness has received increasing attention along with the study of adversarial examples. So far, existing works show that robust models not only obtain robustness against various adversarial attacks but also boost the performance in some downstream tasks. However, the underlying mechanism of adversarial robustness is still not clear. In this paper, we interpret adversarial robustness from the perspective of linear components, and find that there exist some statistical properties for comprehensively robust models. Specifically, robust models show an obvious hierarchical clustering effect on their linearized sub-networks, when removing or replacing all non-linear components (e.g., batch normalization, maximum pooling, or activation layers). Based on these observations, we propose a novel understanding of adversarial robustness and apply it to more tasks including domain adaptation and robustness boosting. Experimental evaluations demonstrate the rationality and superiority of our proposed clustering strategy.
accept
The paper targets the problem of understanding the mechanism of adversarial robustness, which is a hot research topic. The authors showed that adversarially robust models have a clustering effect. Specifically, robust models show a clear hierarchical clustering effect on their linear sub-networks. By exploiting this effect, the authors illustrated how to further boost adversarial robustness. We think the paper will interest many readers.
train
[ "-uDW6Y-5wBl", "9KUEYMnBFi4", "rtZk0WAMP8T", "F6rZJHWPHG", "_h34CUHWCpT", "DeqKD2OD7Nt" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your valuable suggestions.\n\n**Q1.** In Eq. 9, how did you compute the features $z_i$?\n \n**A1.** Thanks for your detailed reading. They are directly extracted from the feature before the FC layer.\n\n**Q2.** Does the improvement in robustness come from the regularization term itself or the class hie...
[ -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, 3, 4, 5 ]
[ "DeqKD2OD7Nt", "_h34CUHWCpT", "F6rZJHWPHG", "nips_2021_Xhj3PdCf4q9", "nips_2021_Xhj3PdCf4q9", "nips_2021_Xhj3PdCf4q9" ]