paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
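The schema above can be sketched as a single Python record. This is a hypothetical illustration, not output of any real loader: the field names come from the schema, the `label` values (`train`/`val`/`test`) and the `-1` placeholder for non-review comments are taken from the rows below, and all example values are abbreviated.

```python
# One hypothetical record in this dataset's schema (values illustrative).
record = {
    "paper_id": "nips_2021_bc-f0ZBNker",           # string, length 19-21
    "paper_title": "Representation Learning Beyond Linear Prediction Functions",
    "paper_abstract": "Recent papers on ...",       # string, length 8-5.01k
    "paper_acceptance": "accept",                   # one of 18 classes
    "meta_review": "The reviewers agreed ...",      # string, length 29-10k
    "label": "train",                               # one of: train / val / test
    # The six list fields are aligned: entry i of each describes comment i.
    "review_ids": ["WSHQipJV-Rd"],
    "review_writers": ["official_reviewer"],        # or "author"
    "review_contents": ["This paper studies ..."],
    "review_ratings": [7],                          # -1 for non-review comments
    "review_confidences": [3],                      # -1 for non-review comments
    "review_reply_tos": ["nips_2021_bc-f0ZBNker"],  # parent id of each comment
}

# Consistency checks implied by the schema and the rows below.
aligned = [record[k] for k in (
    "review_ids", "review_writers", "review_contents",
    "review_ratings", "review_confidences", "review_reply_tos")]
assert len({len(v) for v in aligned}) == 1          # all lists share one length
assert 19 <= len(record["paper_id"]) <= 21
assert record["label"] in {"train", "val", "test"}
```

Comments whose `review_reply_tos` entry equals the `paper_id` are top-level reviews; entries pointing at another comment id are replies in the discussion thread.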
nips_2021_bc-f0ZBNker
Representation Learning Beyond Linear Prediction Functions
Recent papers on the theory of representation learning have shown the importance of a quantity called diversity when generalizing from a set of source tasks to a target task. Most of these papers assume that the function mapping shared representations to predictions is linear, for both source and target tasks. In practice, researchers in deep learning use different numbers of extra layers following the pretrained model based on the difficulty of the new task. This motivates us to ask whether diversity can be achieved when source tasks and the target task use different prediction function spaces beyond linear functions. We show that diversity holds even if the target task uses a neural network with multiple layers, as long as source tasks use linear functions. If source tasks use nonlinear prediction functions, we provide a negative result by showing that depth-1 neural networks with ReLU activation functions need exponentially many source tasks to achieve diversity. For a general function class, we find that the eluder dimension gives a lower bound on the number of tasks required for diversity. Our theoretical results imply that simpler tasks generalize better. Though our theoretical results are shown for the global minimizer of empirical risks, their qualitative predictions still hold true for gradient-based optimization algorithms, as verified by our simulations on deep neural networks.
accept
The reviewers agreed that this paper studies a practically relevant problem (transfer/representation learning) and provides novel upper and lower bounds on transfer learning performance, as well as experimental results to complement the theoretical guarantees. While many of the technical tools (e.g., eluder dimension) will be familiar within research on contextual bandits and reinforcement learning, their use in this context is novel, and is likely to find broader use. The results here also suggest a number of interesting new follow-up directions. The paper is also well-written.
train
[ "WSHQipJV-Rd", "JSIuXddEMSa", "6CXeD8fXlB1", "0VKL-3S_0tj", "M8NJcJ9CrFl", "uRqTkULRy-3", "fnYtR94uYVk", "cNimAJdq355", "RzxtbObEAeH", "A66J7yV_Wh" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the importance of diversity in representation learning. The novelty over previous work is that this work studies the case where the prediction classes for the source and target can be different and nonlinear.\n\nThis paper first shows that diversity over the source prediction class $F_{so}$ impl...
[ 7, 7, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "nips_2021_bc-f0ZBNker", "nips_2021_bc-f0ZBNker", "0VKL-3S_0tj", "M8NJcJ9CrFl", "WSHQipJV-Rd", "JSIuXddEMSa", "A66J7yV_Wh", "RzxtbObEAeH", "nips_2021_bc-f0ZBNker", "nips_2021_bc-f0ZBNker" ]
nips_2021_GlEWs-V9boR
Volume Rendering of Neural Implicit Surfaces
Neural volume rendering has recently become increasingly popular due to its success in synthesizing novel views of a scene from a sparse set of input images. So far, the geometry learned by neural volume rendering techniques was modeled using a generic density function. Furthermore, the geometry itself was extracted using an arbitrary level set of the density function, leading to a noisy, often low-fidelity reconstruction. The goal of this paper is to improve geometry representation and reconstruction in neural volume rendering. We achieve that by modeling the volume density as a function of the geometry. This is in contrast to previous work modeling the geometry as a function of the volume density. In more detail, we define the volume density function as Laplace's cumulative distribution function (CDF) applied to a signed distance function (SDF) representation. This simple density representation has three benefits: (i) it provides a useful inductive bias to the geometry learned in the neural volume rendering process; (ii) it facilitates a bound on the opacity approximation error, leading to an accurate sampling of the viewing ray. Accurate sampling is important to provide a precise coupling of geometry and radiance; and (iii) it allows efficient unsupervised disentanglement of shape and appearance in volume rendering. Applying this new density representation to challenging scene multiview datasets produced high quality geometry reconstructions, outperforming relevant baselines. Furthermore, switching shape and appearance between scenes is possible due to the disentanglement of the two.
accept
This submission introduces a timely and important contribution that integrates SDF representations with neural volume rendering. All reviewers are positive, and three of the four reviewers strongly recommend acceptance. The AC agrees. The authors should try to address the reviewers' concerns in the camera-ready version. This includes adding the ablation studies from the rebuttal period, incorporating the discussion as L3KT suggested, among others.
val
[ "ZDp1MhHANJI", "CLl5mCd_ntq", "x3W7VfbSiuD", "TPxS7IVPQcR", "F-6MPIh5L5X", "gXo5RprR5qH", "Z4fYVYNeKfS", "xSZm1pJS0K3", "XtE7gZfZM-N", "N9XL4qIOYEz", "E8RHK795HGW", "lOTe7L1Xx6T", "dABuXaMXlFb", "NPb4dMiSWb_" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a neural approach based on NeRF, for the purpose of learning a 3D representation of a shape from captured images. Instead of allowing volume density to be freely allocated along rays, as in NeRF, this paper proposes concentrating volume density around a single surface. To do this, the method re...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9, 9, 9 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nips_2021_GlEWs-V9boR", "x3W7VfbSiuD", "xSZm1pJS0K3", "E8RHK795HGW", "gXo5RprR5qH", "N9XL4qIOYEz", "XtE7gZfZM-N", "ZDp1MhHANJI", "NPb4dMiSWb_", "dABuXaMXlFb", "lOTe7L1Xx6T", "nips_2021_GlEWs-V9boR", "nips_2021_GlEWs-V9boR", "nips_2021_GlEWs-V9boR" ]
nips_2021_Tqx7nJp7PR
MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers
As major progress is made in open-ended text generation, measuring how close machine-generated text is to human language remains a critical open problem. We introduce Mauve, a comparison measure for open-ended text generation, which directly compares the learnt distribution from a text generation model to the distribution of human-written text using divergence frontiers. Mauve scales up to modern text generation models by computing information divergences in a quantized embedding space. Through an extensive empirical study on three open-ended generation tasks, we find that Mauve identifies known properties of generated text, scales naturally with model size, and correlates with human judgments, with fewer restrictions than existing distributional evaluation metrics.
accept
This paper proposes Mauve, a novel metric for evaluating open-ended text generation. The paper is exceptionally well written, the idea makes a lot of sense and the experimental results are comprehensive, thorough and convincing. The reviewers unanimously agree that this is a clear accept.
train
[ "qULX711DnZu", "g9hg8CpTEF-", "ejhO4G_fu3W", "_iShVBR2EWU", "BnsNvQrQSXv", "iHzMo8dqwHN", "SESvJHumRMP", "Rchz1190-o", "kqlgS9OjSci", "BfUV-EPpas", "VM12b9r1cuf" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response! I am satisfied.", "This paper proposes comparison metrics for open-ended text generation. The method, named Mauve, compares the model’s distribution against the distribution of human-written text. \nThere are two types of errors identified and established to motivate the metric. The...
[ -1, 8, -1, -1, -1, -1, -1, -1, 8, 8, 8 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "SESvJHumRMP", "nips_2021_Tqx7nJp7PR", "VM12b9r1cuf", "iHzMo8dqwHN", "g9hg8CpTEF-", "BfUV-EPpas", "kqlgS9OjSci", "nips_2021_Tqx7nJp7PR", "nips_2021_Tqx7nJp7PR", "nips_2021_Tqx7nJp7PR", "nips_2021_Tqx7nJp7PR" ]
nips_2021_r2uzPR4AYo
Accurately Solving Rod Dynamics with Graph Learning
Iterative solvers are widely used to accurately simulate physical systems. These solvers require initial guesses to generate a sequence of improving approximate solutions. In this contribution, we introduce a novel method to accelerate iterative solvers for rod dynamics with graph networks (GNs) by predicting the initial guesses to reduce the number of iterations. Unlike existing methods that aim to learn physical systems in an end-to-end manner, our approach guarantees long-term stability and therefore leads to more accurate solutions. Furthermore, our method improves the run time performance of traditional iterative solvers for rod dynamics. To explore our method we make use of position-based dynamics (PBD) as a common solver for physical systems and evaluate it by simulating the dynamics of elastic rods. Our approach is able to generalize across different initial conditions, discretizations, and realistic material properties. We demonstrate that it also performs well when taking discontinuous effects into account such as collisions between individual rods. Finally, to illustrate the scalability of our approach, we simulate complex 3D tree models composed of over a thousand individual branch segments swaying in wind fields.
accept
This paper considers accelerating the simulation of rod dynamics by training a graph neural network to act as an initialisation for an iterative solver. This is an interesting and original paper. While reviewers were generally in favour of this paper, they had a number of concerns. The primary question, including after the rebuttal, is around the significance of the results. Despite that, there is value in this contribution. The authors should make the promised changes to the manuscript.
train
[ "1e_oGG6MK9B", "M8p6gHSi0my", "kxN_xdW7qP", "GSyt_AeRyWv", "YaAU58MaUGl", "w-e90unbu8P", "LSfKymy7piX" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your additional feedback. We will make sure to carefully address these concerns in the final version. Specifically, we are happy to add the “advanced CG” experiment described in the rebuttal letter as requested under point (1) and provide a clarifying discussion, also w.r.t. other preconditioners.\n...
[ -1, -1, 6, 6, -1, 6, 6 ]
[ -1, -1, 3, 3, -1, 4, 3 ]
[ "M8p6gHSi0my", "LSfKymy7piX", "nips_2021_r2uzPR4AYo", "nips_2021_r2uzPR4AYo", "nips_2021_r2uzPR4AYo", "nips_2021_r2uzPR4AYo", "nips_2021_r2uzPR4AYo" ]
nips_2021_jg9LM8QItms
Limiting fluctuation and trajectorial stability of multilayer neural networks with mean field training
The mean field theory of multilayer neural networks centers around a particular infinite-width scaling, in which the learning dynamics is shown to be closely tracked by the mean field limit. A random fluctuation around this infinite-width limit is expected from a large-width expansion to the next order. This fluctuation has been studied only in the case of shallow networks, where previous works employ heavily technical notions or additional formulation ideas amenable only to that case. Treatment of the multilayer case has been missing, the chief difficulty being to find a formulation that captures the stochastic dependency across not only time but also depth. In this work, we initiate the study of the fluctuation in the case of multilayer networks, at any network depth. Leveraging the neuronal embedding framework recently introduced by Nguyen and Pham, we systematically derive a system of dynamical equations, called the second-order mean field limit, that captures the limiting fluctuation distribution. We demonstrate through the framework the complex interaction among neurons in this second-order mean field limit, the stochasticity with cross-layer dependency and the nonlinear time evolution inherent in the limiting fluctuation. A limit theorem is proven to relate quantitatively this limit to the fluctuation realized by large-width networks. We apply the result to show a stability property of gradient descent mean field training: in the large-width regime, along the training trajectory, it progressively biases towards a solution with "minimal fluctuation" (in fact, vanishing fluctuation) in the learned output function, even after the network has been initialized at or has converged (sufficiently fast) to a global optimum.
This extends a similar phenomenon previously shown only for shallow networks with a squared loss in the empirical risk minimization setting, to multilayer networks with a loss function that is not necessarily convex in a more general setting.
accept
The paper studies the fluctuation of a finite-width multi-layer neural network around its mean-field limit, based on the infinite-width mean-field limit under the neuronal embedding framework. While referees have concerns on the practicality of the parametrization, they find the result interesting and novel, especially with the newly added numerical experiments. The meta-reviewer recommends acceptance of the paper as a poster.
train
[ "RS2IoWwtFtl", "-qcZAMENahX", "1bhFJ5KFyWW", "82P6y_TNy8-", "Ydu2pwQq4Gj", "XZSlXbliGrq", "VHr7hmubqu", "nofUk9D6yTc", "mE2-fQMSFpl", "OibEz6PQV-B", "IY8QKvWiZep" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Based on the infinite-width mean-field (MF) limit of multi-layer neural networks (NNs) under the neuronal embedding framework proposed previously, this paper studies the fluctuation of a finite-width multi-layer NN (under random sampling and gradient flow training) around its mean-field limit. The authors show tha...
[ 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, 6 ]
[ 2, -1, -1, -1, -1, 4, -1, -1, -1, -1, 2 ]
[ "nips_2021_jg9LM8QItms", "82P6y_TNy8-", "Ydu2pwQq4Gj", "RS2IoWwtFtl", "nofUk9D6yTc", "nips_2021_jg9LM8QItms", "XZSlXbliGrq", "IY8QKvWiZep", "RS2IoWwtFtl", "nips_2021_jg9LM8QItms", "nips_2021_jg9LM8QItms" ]
nips_2021_4CRpaV4pYp
Medical Dead-ends and Learning to Identify High-Risk States and Treatments
Machine learning has successfully framed many sequential decision making problems as either supervised prediction, or optimal decision-making policy identification via reinforcement learning. In data-constrained offline settings, both approaches may fail as they assume fully optimal behavior or rely on exploring alternatives that may not exist. We introduce an inherently different approach that identifies "dead-ends" of a state space. We focus on patient condition in the intensive care unit, where a "medical dead-end" indicates that a patient will expire, regardless of all potential future treatment sequences. We postulate "treatment security" as avoiding treatments with probability proportional to their chance of leading to dead-ends, present a formal proof, and frame discovery as an RL problem. We then train three independent deep neural models for automated state construction, dead-end discovery and confirmation. Our empirical results show that dead-ends exist in real clinical data among septic patients, and further reveal gaps between secure treatments and those administered.
accept
The reviewers appreciated the paper and agree it provides an interesting new problem and a useful new method. The authors are expected to address the points raised by reviewers in a final version, including as they outlined in their response. Most crucially, the authors should address the issues raised by the ethics reviewers: the additional discussion outlined by the authors in their response was deemed an appropriate way to do this and the authors need to implement this.
val
[ "CG5XkBeg1vt", "1lMWkgXzRqU", "jtlhKJJVy7L", "P_TDTTR-p78", "MIslN_9CHTg", "ffKffJ5aAk", "jF0a9tOepKu", "i9pADxigQO", "nYY_-sjomxc", "MrdsBR-LHxG", "u_BDxgctIop", "z9UAtNGly-S", "WKtrpAqj-o6", "bnPugDn1L3w", "q6Ng6_mFMMM" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your continued constructive comments. These will certainly help us improve the paper's camera-ready version.\n\n`1.` All the items you mentioned will indeed help to better clarify our contributions. We will update the Introduction as suggested.\n\n`2.` We agree. We should also note that the security...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "1lMWkgXzRqU", "i9pADxigQO", "bnPugDn1L3w", "WKtrpAqj-o6", "jF0a9tOepKu", "u_BDxgctIop", "nips_2021_4CRpaV4pYp", "q6Ng6_mFMMM", "P_TDTTR-p78", "z9UAtNGly-S", "nips_2021_4CRpaV4pYp", "nips_2021_4CRpaV4pYp", "nips_2021_4CRpaV4pYp", "nips_2021_4CRpaV4pYp", "nips_2021_4CRpaV4pYp" ]
nips_2021_JXREUkyHi7u
Overcoming the Convex Barrier for Simplex Inputs
Harkirat Singh Behl, M. Pawan Kumar, Philip Torr, Krishnamurthy Dvijotham
accept
The paper studies the robustness verification problem for neural networks over simplex inputs. Unlike the Linf case where a tight bound requires exponential complexity, the paper proposes an efficient method to propagate the simplex through ReLU networks where the size of relaxation remains linear in the number of neurons and shows the algorithm can successfully compute tight bounds on several benchmarking datasets. After discussions, the reviewers think that although the method is an extension of (Tjandraatmadja et al.), the derived closed-form expression is new and it is interesting to see that such a nice form can be obtained in the L1 case. Therefore we recommend acceptance of the paper. The reviewers also think the paper is not well written and hope the authors improve the presentation of the paper based on the review comments.
test
[ "_ErQsCucdIa", "Fn_RoasF6bW", "P2lz9CpQoSr", "ASu_1EC1TK", "56vraGYAf9_", "gdAvBrN-otC", "E0xmbEDV3AV", "NIJqMv8mr3e", "K1Tl_XmxsL8", "VjQZWv5jJu", "oyqjiNnl1P9", "IRmU5bEeQ7o", "dPDxLmReKkD", "hPh5yZVt4QI", "es05mApoNf3", "ZRl_gUsEJ-C", "X03_puKurV8" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Authors,\n\nThank you for the reply.\nAfter reading the other reviews, including the discussion with aXAA, as well as your replies I retain my initial score.\nIf all discussed items are included in a revision I don't have major concerns.\n\nBest,\nReviewer vqRE", "This paper proposes to use probability sim...
[ -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "IRmU5bEeQ7o", "nips_2021_JXREUkyHi7u", "nips_2021_JXREUkyHi7u", "gdAvBrN-otC", "es05mApoNf3", "E0xmbEDV3AV", "NIJqMv8mr3e", "K1Tl_XmxsL8", "VjQZWv5jJu", "oyqjiNnl1P9", "dPDxLmReKkD", "X03_puKurV8", "P2lz9CpQoSr", "Fn_RoasF6bW", "ZRl_gUsEJ-C", "nips_2021_JXREUkyHi7u", "nips_2021_JXRE...
nips_2021_XeeTWJvAQl
High-probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails
Ashok Cutkosky, Harsh Mehta
accept
The paper considers non-convex SO using first-order algorithms for which the gradient estimates may have heavy-tailed distributions. Using gradient clipping, normalized GD with momentum is shown to converge to near-critical points with high probability, with the best-known rates for smooth losses when the gradients only have bounded p-th moments for some p between 1 and 2. Next, high-probability bounds are obtained for the case of second-order smooth losses. The reviewers are unanimously strongly in favor of accepting the paper and I concur. This seems like a nice contribution.
train
[ "Az-_N9inlJ", "FWMh88dtDv", "gC2vsttwZO", "TpT3jtAN_W", "4OKDW3kvn59", "kKtoEs4g8p_", "yi5CEZ76Opz", "j76n3qQkW7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " I'm satisfied with this paper. I increased my score. Please make sure you include explicit constants somewhere either in the main body or appendix of the paper, and fix all the typos!", "The paper studies non-convex stochastic optimization using normalized SGD with clipping and momentum. They obtain convergence...
[ -1, 7, -1, 8, -1, -1, -1, 7 ]
[ -1, 3, -1, 5, -1, -1, -1, 4 ]
[ "4OKDW3kvn59", "nips_2021_XeeTWJvAQl", "kKtoEs4g8p_", "nips_2021_XeeTWJvAQl", "FWMh88dtDv", "TpT3jtAN_W", "j76n3qQkW7", "nips_2021_XeeTWJvAQl" ]
nips_2021__RSgXL8gNnx
Batch Normalization Orthogonalizes Representations in Deep Random Networks
This paper underlines an elegant property of batch-normalization (BN): successive batch normalizations with random linear updates make samples increasingly orthogonal. We establish a non-asymptotic characterization of the interplay between depth, width, and the orthogonality of deep representations. More precisely, we prove, under a mild assumption, that the deviation of the representations from orthogonality rapidly decays with depth, up to a term inversely proportional to the network width. This result has two main theoretical and practical implications: 1) Theoretically, as the depth grows, the distribution of the outputs contracts to a Wasserstein-2 ball around an isotropic normal distribution. Furthermore, the radius of this Wasserstein ball shrinks with the width of the network. 2) Practically, the orthogonality of the representations directly influences the performance of stochastic gradient descent (SGD). When representations are initially aligned, we observe that SGD wastes many iterations to disentangle representations before the classification. Nevertheless, we experimentally show that starting optimization from orthogonal representations is sufficient to accelerate SGD, with no need for BN.
accept
This paper provides a setting and analysis where batch normalization increasingly orthogonalizes data as it is mapped through a network. Reviewers are uniformly supportive and I recommend acceptance. Even so, a few of the reviews (and responses) were quite detailed, and I request the authors adjust the paper to clarify these points in their revisions.
train
[ "coUnCDP0UYv", "HmkgRBW27I", "2I4kTQvb3U", "y35HI_4PfE", "fez18r0r3F", "qyoCesQrf0", "smwXHQ-u1T", "nG8NiDYIcWT", "ngUOh7ExGq8", "kuip1FdJvX", "5nvWLrhS-m", "CuWmuR2Dflb", "w-rY4wvmIW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for your detailed response! I agree the assumptions are mild if the batch size is small. I will keep my score.", "This papers focuses on the analysis of batch normalization which has been attracting a lot of attention recently. The key contribution of the paper is in the finding that batch normalizat...
[ -1, 7, -1, 7, -1, -1, 7, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, 3 ]
[ "5nvWLrhS-m", "nips_2021__RSgXL8gNnx", "ngUOh7ExGq8", "nips_2021__RSgXL8gNnx", "qyoCesQrf0", "kuip1FdJvX", "nips_2021__RSgXL8gNnx", "nips_2021__RSgXL8gNnx", "y35HI_4PfE", "smwXHQ-u1T", "w-rY4wvmIW", "HmkgRBW27I", "nips_2021__RSgXL8gNnx" ]
nips_2021_9bqxRuRwBlu
Support vector machines and linear regression coincide with very high-dimensional features
Navid Ardeshir, Clayton Sanford, Daniel J. Hsu
accept
The paper improves existing bounds for the support vector proliferation regime. While initially there were some concerns among the reviewers, the authors managed to convince the more skeptical reviewers during the rebuttal phase. On the negative side, the paper, like its predecessors, deals with a rather restricted data model, and direct practical implications remain vague.
train
[ "yJA3OP0DMf-", "wnqgY9m9ba3", "Zm0kxhOLWw4", "o8ilbHH7zb4", "hQFj2UlMwP1", "A47RplgeT_1", "OrlIxDQKfMk", "Xpm_wsacHyk", "yIUsA_Ch5U1", "xRdKCHIiEsQ", "KTunoLjOQns", "n39BS2WOVj4", "CjUl_akBox4", "qGrwhPzoWqP" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper explores the generality of support vector proliferation and makes the following three contributions: (1) proving a super-linear lower bound on the dimension (in terms of sample size) required for support vector proliferation in independent feature models. (2) identifying a sharp phase transition in Gaus...
[ 7, -1, -1, 7, -1, 6, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ 4, -1, -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_9bqxRuRwBlu", "OrlIxDQKfMk", "hQFj2UlMwP1", "nips_2021_9bqxRuRwBlu", "CjUl_akBox4", "nips_2021_9bqxRuRwBlu", "xRdKCHIiEsQ", "yJA3OP0DMf-", "qGrwhPzoWqP", "A47RplgeT_1", "CjUl_akBox4", "o8ilbHH7zb4", "nips_2021_9bqxRuRwBlu", "nips_2021_9bqxRuRwBlu" ]
nips_2021_vRwnHlAgK5x
Coupled Segmentation and Edge Learning via Dynamic Graph Propagation
Image segmentation and edge detection are both central problems in perceptual grouping. It is therefore interesting to study how these two tasks can be coupled to benefit each other. Indeed, segmentation can be easily transformed into contour edges to guide edge learning. However, the converse is nontrivial since general edges may not always form closed contours. In this paper, we propose a principled end-to-end framework for coupled edge and segmentation learning, where edges are leveraged as pairwise similarity cues to guide segmentation. At the core of our framework is a recurrent module termed as dynamic graph propagation (DGP) layer that performs message passing on dynamically constructed graphs. The layer uses learned gating to dynamically select neighbors for message passing using max-pooling. The output from message passing is further gated with an edge signal to refine segmentation. Experiments demonstrate that the proposed framework is able to let both tasks mutually improve each other. On Cityscapes validation, our best model achieves 83.7% mIoU in semantic segmentation and 78.7% maximum F-score in semantic edge detection. Our method also leads to improved zero-shot robustness on Cityscapes with natural corruptions (Cityscapes-C).
accept
The reviewers appreciated the clear and intuitive presentation of the proposed graph propagation strategy. The author responses addressed well the requests for clarification and additional experiments made by reviewers, including results on COCO, the discussion of computational cost, and the details of performance gains over DGP. All reviewers recommend acceptance. The ACs concur with this recommendation.
train
[ "-TBCMnFuyv9", "PZsA_s8Z-nX", "cZSkt7M52fU", "2eyf40xfKB_", "41zfMbXwdKs", "xEsu10E-4m", "eeUPFPFMwwd", "TshicSaPoz", "8c1MCL27E5m", "5fdJRZkELt", "gHc2S_VKZdU", "gCOYfJi_QO", "vRpr1ZtoUOG" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for the kind support and constructive feedback!", " Thank you for the kind support and constructive feedback!", " Thank you for your additional experiment on COCO. It seems that your proposed architecture also leads to improvement on a more general and more challenging semantic segmentation task/dat...
[ -1, -1, -1, 7, -1, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, 5 ]
[ "cZSkt7M52fU", "41zfMbXwdKs", "gHc2S_VKZdU", "nips_2021_vRwnHlAgK5x", "8c1MCL27E5m", "TshicSaPoz", "nips_2021_vRwnHlAgK5x", "5fdJRZkELt", "vRpr1ZtoUOG", "eeUPFPFMwwd", "2eyf40xfKB_", "nips_2021_vRwnHlAgK5x", "nips_2021_vRwnHlAgK5x" ]
nips_2021_LU687itn08w
Offline RL Without Off-Policy Evaluation
Most prior approaches to offline reinforcement learning (RL) have taken an iterative actor-critic approach involving off-policy evaluation. In this paper we show that simply doing one step of constrained/regularized policy improvement using an on-policy Q estimate of the behavior policy performs surprisingly well. This one-step algorithm beats the previously reported results of iterative algorithms on a large portion of the D4RL benchmark. The one-step baseline achieves this strong performance while being notably simpler and more robust to hyperparameters than previously proposed iterative algorithms. We argue that the relatively poor performance of iterative approaches is a result of the high variance inherent in doing off-policy evaluation and magnified by the repeated optimization of policies against those estimates. In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and behavior policy.
accept
Reviewers are positive, and a consensus is reached for acceptance (spotlight) through rebuttal + internal discussions. While the method is very simple, the message is clear and the authors have done an excellent job of concise and thorough writing and experimentation. As offline RL and the D4RL benchmark are becoming mainstream, such work can likely guide the community to explore more impactful research directions.
val
[ "F-iofoPNyp_", "b3uq8EZXuZz", "j1nC5-1dj5", "pmM-xCtxws", "uYUd2XjjWka", "L56dBRO1lI", "GTkK3x3QRi2", "Q53MFnsIkbm", "OpLMpx-8tqK", "cQjBl68Ll-5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper investigates the well-known 1-step heuristic for Q-function based offline RL methods.\nThe method is evaluated for a model-free, Q-function based offline RL method on various benchmarks and the overall good performance of the approach is shown.\nThe paper differs from results published in [1] mainly by t...
[ 7, 8, -1, -1, -1, -1, -1, -1, 9, 7 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_LU687itn08w", "nips_2021_LU687itn08w", "uYUd2XjjWka", "GTkK3x3QRi2", "b3uq8EZXuZz", "OpLMpx-8tqK", "F-iofoPNyp_", "cQjBl68Ll-5", "nips_2021_LU687itn08w", "nips_2021_LU687itn08w" ]
nips_2021_iX0TSH45eOd
Continuous vs. Discrete Optimization of Deep Neural Networks
Existing analyses of optimization in deep learning are either continuous, focusing on (variants of) gradient flow, or discrete, directly treating (variants of) gradient descent. Gradient flow is amenable to theoretical analysis, but is stylized and disregards computational efficiency. The extent to which it represents gradient descent is an open question in the theory of deep learning. The current paper studies this question. Viewing gradient descent as an approximate numerical solution to the initial value problem of gradient flow, we find that the degree of approximation depends on the curvature around the gradient flow trajectory. We then show that over deep neural networks with homogeneous activations, gradient flow trajectories enjoy favorable curvature, suggesting they are well approximated by gradient descent. This finding allows us to translate an analysis of gradient flow over deep linear neural networks into a guarantee that gradient descent efficiently converges to global minimum almost surely under random initialization. Experiments suggest that over simple deep neural networks, gradient descent with conventional step size is indeed close to gradient flow. We hypothesize that the theory of gradient flows will unravel mysteries behind deep learning.
accept
The paper is an important theoretical contribution to the field of deep neural networks providing one of the first rigorous mathematical analysis of the commonly used discrete gradient-descent methods via the rich literature on continuous gradient flows. All the reviewers agree that the manuscript is a strong contribution to the field. Furthermore, reviewers' comments were addressed by the authors in detail in the rebuttal phase.
train
[ "Zc5_DU8U-WQ", "IKQGSu6YjH", "7HH2LxYozFv", "UiJ_n88SxNf", "C0une14Vrwu", "eWNFZgG--Q0", "8pmfRk8beEK", "PYWeOJjEAt4", "A_KcEfBBwBp", "DI1ERXS2fyX", "FLk1IoA-Erj", "cZ0m256nlOf", "34iZG6ZfJL7", "KTuwgzmG7g6", "rBw-nxHP3yY", "4Ej_KplEyCK", "DhY0WYOD2d", "Atp-aQDD69F" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for the support! Per your suggestion, we will take measures to make the text more intuitive.\n\nWith regards to high level question (3):\n\n* By \"almost always\" we mean \"with probability one\". There could still be initializations with which gradient flow will not converge to global minimum (e.g. s...
[ -1, 7, -1, 6, -1, -1, -1, 7, -1, -1, -1, 8, -1, -1, -1, -1, -1, 7 ]
[ -1, 2, -1, 4, -1, -1, -1, 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, 1 ]
[ "IKQGSu6YjH", "nips_2021_iX0TSH45eOd", "C0une14Vrwu", "nips_2021_iX0TSH45eOd", "eWNFZgG--Q0", "34iZG6ZfJL7", "A_KcEfBBwBp", "nips_2021_iX0TSH45eOd", "rBw-nxHP3yY", "FLk1IoA-Erj", "DhY0WYOD2d", "nips_2021_iX0TSH45eOd", "UiJ_n88SxNf", "IKQGSu6YjH", "PYWeOJjEAt4", "Atp-aQDD69F", "cZ0m25...
nips_2021_dwJyEMPZ04I
CrypTen: Secure Multi-Party Computation Meets Machine Learning
Secure multi-party computation (MPC) allows parties to perform computations on data while keeping that data private. This capability has great potential for machine-learning applications: it facilitates training of machine-learning models on private data sets owned by different parties, evaluation of one party's private model using another party's private data, etc. Although a range of studies implement machine-learning models via secure MPC, such implementations are not yet mainstream. Adoption of secure MPC is hampered by the absence of flexible software frameworks that "speak the language" of machine-learning researchers and engineers. To foster adoption of secure MPC in machine learning, we present CrypTen: a software framework that exposes popular secure MPC primitives via abstractions that are common in modern machine-learning frameworks, such as tensor computations, automatic differentiation, and modular neural networks. This paper describes the design of CrypTen and measures its performance on state-of-the-art models for text classification, speech recognition, and image classification. Our benchmarks show that CrypTen's GPU support and high-performance communication between (an arbitrary number of) parties allow it to perform efficient private evaluation of modern machine-learning models under a semi-honest threat model. For example, two parties using CrypTen can securely predict phonemes in speech recordings using Wav2Letter faster than real-time. We hope that CrypTen will spur adoption of secure MPC in the machine-learning community.
accept
There is an agreement in the reviews that the paper’s secure multi-party computation (SMPC) framework for deep learning training and inference provides a valuable contribution. While most of the techniques are already known and the protection is only against honest-but-curious (as opposed to malicious) adversaries, the paper seems to fill an important gap and the contributions are a step forward towards broader deployment of SMPC protocols for ML primitives, and could stimulate future work in the area. I therefore lean towards acceptance.
train
[ "dnGnVOG3Z5", "IaKwSaGyNDu", "LfiNp4VGZ1", "8wTDGQe80ky", "kyhDSz-amm0", "vPtF5DWJ-C", "od8jkBtIw_-", "pcYb-2W7Sah" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your insightful comments and positive feedback on our work!\n\nMPC Compilers: Compilers have the potential to generate more efficient code because they know the computation graph before they execute it, and can apply optimizations to it. However, from past experience with deep-learning frameworks, i...
[ -1, -1, -1, -1, 8, 6, 8, 7 ]
[ -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "pcYb-2W7Sah", "od8jkBtIw_-", "vPtF5DWJ-C", "kyhDSz-amm0", "nips_2021_dwJyEMPZ04I", "nips_2021_dwJyEMPZ04I", "nips_2021_dwJyEMPZ04I", "nips_2021_dwJyEMPZ04I" ]
nips_2021_ud-WYSo9JSL
Can contrastive learning avoid shortcut solutions?
The generalization of representations learned via contrastive learning depends crucially on what features of the data are extracted. However, we observe that the contrastive loss does not always sufficiently guide which features are extracted, a behavior that can negatively impact the performance on downstream tasks via “shortcuts", i.e., by inadvertently suppressing important predictive features. We find that feature extraction is influenced by the difficulty of the so-called instance discrimination task (i.e., the task of discriminating pairs of similar points from pairs of dissimilar ones). Although harder pairs improve the representation of some features, the improvement comes at the cost of suppressing previously well represented features. In response, we propose implicit feature modification (IFM), a method for altering positive and negative samples in order to guide contrastive models towards capturing a wider variety of predictive features. Empirically, we observe that IFM reduces feature suppression, and as a result improves performance on vision and medical imaging tasks.
accept
The reviewers agreed that this paper addresses an important practical question (feature suppression in contrastive learning), and provides a theoretically-motivated and empirically-validated way to address it.
train
[ "gwJWP1-ubH", "4tjyLAE45R", "8I2j7kkMaOC", "N1eIGlrggRe", "9-4MT-pMgX1", "nzTEVLHVWeV", "N_88dmLtIls", "itPdFz5dLpL", "bA-pCsYkmGv", "Fgd3nXG-e-W", "ii5wJ0yXa2I", "feVEzVcTi0k", "5sRAvRXIxFN", "N-CcpJl78h", "KE3Y9ZH9qxG" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I will keep my score and still recommend for acceptance.", " Thank you for confirming your rating. We are adding / have added each of these points to the manuscript, and it will be fully completed for a final draft. We are grateful for your suggestions which are actionable, and will...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "feVEzVcTi0k", "8I2j7kkMaOC", "Fgd3nXG-e-W", "9-4MT-pMgX1", "bA-pCsYkmGv", "nips_2021_ud-WYSo9JSL", "nzTEVLHVWeV", "ii5wJ0yXa2I", "N_88dmLtIls", "5sRAvRXIxFN", "KE3Y9ZH9qxG", "N-CcpJl78h", "nips_2021_ud-WYSo9JSL", "nips_2021_ud-WYSo9JSL", "nips_2021_ud-WYSo9JSL" ]
nips_2021_NWYlZ5z8Q-R
See More for Scene: Pairwise Consistency Learning for Scene Classification
Scene classification is a valuable classification subtask with its own characteristics that still need more in-depth study. Basically, scene characteristics are distributed over the whole image, which causes the need for "seeing" comprehensive and informative regions. Previous works mainly focus on region discovery and aggregation, but rarely involve the inherent properties of CNNs along with their potential ability to satisfy the requirements of scene classification. In this paper, we propose to understand scene images and scene classification CNN models in terms of the focus area. From this new perspective, we find that a large focus area is preferred in scene classification CNN models as a consequence of learning scene characteristics. Meanwhile, the analysis of existing training schemes helps us to understand the effects of the focus area, and also raises the question of the optimal training method for scene classification. Pursuing better usage of scene characteristics, we propose a new learning scheme with a tailored loss with the goal of activating a larger focus area on scene images. Since supervision of the target regions to be enlarged is usually lacking, our alternative learning scheme is to erase the already activated area, and allow the CNN models to activate more area during training. The proposed scheme is implemented by keeping pairwise consistency between the output of the erased image and that of its original. In particular, a tailored loss is proposed to keep such pairwise consistency by leveraging category-relevance information. Experiments on Places365 show the significant improvements of our method with various CNNs. Our method shows an inferior result on the object-centric dataset, ImageNet, which experimentally indicates that it captures the unique characteristics of scenes.
accept
This paper presents a systematic study on the potential recognition ability of CNN for scenes in terms of the effective receptive field (ERF). It is interesting to see that the proposed model is based on exploiting the characteristic of scene classification rather than following the same architecture of object recognition. Moreover, all the reviewers agree that the proposed method provides a key contribution to the fundamental CV tasks.
train
[ "SjwgdWP2TfE", "EYiQ6V-P3tg", "wRpuaxXSZI2", "Jg9FFa42wvh", "0bZJ0bGa_r", "jk7gL3oG2xno", "neagdD3cHJh", "wvEafa_vKX9" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "In this work, the authors enlarge effective receptive fields (ERF) of CNNs for scene image classification. To this goal, the authors propose a new learning scheme and a tailored loss. The proposed learning target is to make the outputs of an input image and its modified version to be consistent. The tailored loss ...
[ 7, 7, -1, -1, -1, -1, -1, 7 ]
[ 5, 4, -1, -1, -1, -1, -1, 5 ]
[ "nips_2021_NWYlZ5z8Q-R", "nips_2021_NWYlZ5z8Q-R", "EYiQ6V-P3tg", "SjwgdWP2TfE", "SjwgdWP2TfE", "wvEafa_vKX9", "EYiQ6V-P3tg", "nips_2021_NWYlZ5z8Q-R" ]
nips_2021_mjyMGFL8N2
Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss
Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm, which learns representations by pushing positive pairs, or similar examples from the same class, closer together while keeping negative pairs far apart. Despite the empirical successes, theoretical foundations are limited -- prior analyses assume conditional independence of the positive pairs given the same class label, but recent empirical applications use heavily correlated positive pairs (i.e., data augmentations of the same image). Our work analyzes contrastive learning without assuming conditional independence of positive pairs using a novel concept of the augmentation graph on data. Edges in this graph connect augmentations of the same data, and ground-truth classes naturally form connected sub-graphs. We propose a loss that performs spectral decomposition on the population augmentation graph and can be succinctly written as a contrastive learning objective on neural net representations. Minimizing this objective leads to features with provable accuracy guarantees under linear probe evaluation. By standard generalization bounds, these accuracy guarantees also hold when minimizing the training contrastive loss. In all, this work provides the first provable analysis for contrastive learning where the guarantees can apply to realistic empirical settings.
accept
This paper provides a new theoretical framework for contrastive learning. In particular, the authors find a new type of assumption, connecting spectral graph theory with self-supervised contrastive learning, which is much more realistic than what is common in prior statistical theory for contrastive learning. All reviewers agree on the significance of the theoretical contribution provided by the paper. One of the major concerns was the connectivity assumption on the augmentation graph raised by Reviewer 1aqU. As the authors did in their rebuttal, it is quite useful to provide some concrete visual examples in the final draft, to avoid some misunderstanding. AC thinks it is also useful to provide some (real-world or synthetic) scenarios where the assumption breaks, possibly with supporting experiments (to see how crucial the assumption is). Overall, AC thinks that the paper is very well written and could be a pioneering work, not only for theoretical purposes, but also for better algorithmic solutions in the future.
val
[ "FTW0BAUdHqQ", "Jn5QVVoEScS", "Otakpx603mq", "MxRzLThjyip", "KFxfsN3k7H", "azrjZ1ojw6", "77Tjc7jITz-", "mr9wjE9Y8dh", "3QE_SmGPTH", "hrTltDUYxI", "tI1DgOccPM2" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper theoretically analyzes contrastive learning relying on a concept of population augmentation graph. In the graph, two augmented samples are connected if they can be augmented from the same original instance. Based on this setup,\n- This paper shows that spectral decomposition can be reformulated as a con...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_mjyMGFL8N2", "nips_2021_mjyMGFL8N2", "KFxfsN3k7H", "KFxfsN3k7H", "3QE_SmGPTH", "FTW0BAUdHqQ", "tI1DgOccPM2", "hrTltDUYxI", "Jn5QVVoEScS", "nips_2021_mjyMGFL8N2", "nips_2021_mjyMGFL8N2" ]
nips_2021_XOSrNXGp_qJ
Greedy Approximation Algorithms for Active Sequential Hypothesis Testing
Kyra Gan, Su Jia, Andrew Li
accept
The paper considers the problem of active sequential hypothesis testing, where a sequence of actions are adaptively chosen to find the best hypothesis without testing all of them. The paper considers the important novel setting when the test returns a result that is stochastic, and proposes a greedy approach by linking the problem to submodular function ranking. Theoretical analysis and empirical results on the COSMIC cancer data are presented. Three expert reviewers carefully considered the strengths and weaknesses of the paper. All reviewers agree that the problem setting is important, and the link to submodularity and the resulting greedy algorithm is interesting. They identified several issues with clarity and precision in the paper, which the authors tried to address during the rebuttal period, which the reviewers appreciated (and changed their scores). Post rebuttal, the reviewers had a brief discussion and agreed that the paper is a good one, and would be a valuable contribution to the NeurIPS program. The authors are strongly advised to take the reviewer feedback into account in the final version. I am pleased to be able to recommend this paper to be accepted to NeurIPS. Congratulations!
train
[ "aMkOhQKz1-n", "iqR-7w32hiZ", "fdA2dYQeQfj", "bG0qvoiXMQ5", "Ck8otlLoC0g", "gSqnDqQg5E_", "xXSp0O1VAan", "EBwNvKCi2ec" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This work presents approximation algorithms for the problem of active sequential hypotheses testing. The proposed algorithms leverage a connection with submodular function ranking that is novel and interesting. Both \"fully adaptive\" and \"partially adaptive\" algorithms are given, with different features and gua...
[ 6, 6, -1, -1, -1, -1, -1, 7 ]
[ 3, 5, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_XOSrNXGp_qJ", "nips_2021_XOSrNXGp_qJ", "nips_2021_XOSrNXGp_qJ", "nips_2021_XOSrNXGp_qJ", "iqR-7w32hiZ", "EBwNvKCi2ec", "aMkOhQKz1-n", "nips_2021_XOSrNXGp_qJ" ]
nips_2021_K_8bE0OQ9vC
When False Positive is Intolerant: End-to-End Optimization with Low FPR for Multipartite Ranking
Multipartite ranking is a basic task in machine learning, where the Area Under the receiver operating characteristics Curve (AUC) is generally applied as the evaluation metric. Despite that AUC reflects the overall performance of the model, it is inconsistent with the expected performance in some application scenarios, where only a low False Positive Rate (FPR) is meaningful. To leverage high performance under low FPRs, we consider an alternative metric for multipartite ranking evaluating the True Positive Rate (TPR) at a given FPR, denoted as TPR@FPR. Unfortunately, the key challenge of direct TPR@FPR optimization is two-fold: \textbf{a)} the original objective function is not differentiable, making gradient backpropagation impossible; \textbf{b)} the loss function could not be written as a sum of independent instance-wise terms, making mini-batch based optimization infeasible. To address these issues, we propose a novel framework on top of the deep learning framework named \textit{Cross-Batch Approximation for Multipartite Ranking (CBA-MR)}. In face of \textbf{a)}, we propose a differentiable surrogate optimization problem where the instances having a short-time effect on FPR are rendered with different weights based on the random walk hypothesis. To tackle \textbf{b)}, we propose a fast ranking estimation method, where the full-batch loss evaluation is replaced by a delayed update scheme with the help of an embedding cache. Finally, experimental results on four real-world benchmarks are provided to demonstrate the effectiveness of the proposed method.
accept
TPR@FPR is a useful metric in many important practical applications (e.g. biometrics). The paper makes a valuable contribution to the challenging problem of optimizing TPR@FPR for multipartite ranking. On top of deep learning models, it proposes a novel framework by introducing a differentiable surrogate and a fast ranking estimation method to optimize the non-decomposable objective function. Overall, the problem is well motivated and the proposed framework/algorithms for optimizing TPR@FPR are novel and interesting.
train
[ "HK4JsJsxTrP", "W_g43k_zPnx", "A_3vSCOhGW", "Im1XqWH9zy8", "u3iQENMTDR", "fHUJRg934Z", "eMisKvDUy2u", "MvrhryvSFyG", "XnbeDrm0MXn", "ToM5aF3EL2Y", "6MsFnFEVgDq", "k7BdSX4h94V" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for providing a detailed response. Most of my concerns are now addressed. After reading the responses of other reviewers, I found the proposed framework could improve TPR@FPR without dropping the efficiency or the performance of traditional metrics. More interestingly, the authors also prese...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "eMisKvDUy2u", "k7BdSX4h94V", "fHUJRg934Z", "k7BdSX4h94V", "k7BdSX4h94V", "6MsFnFEVgDq", "ToM5aF3EL2Y", "XnbeDrm0MXn", "nips_2021_K_8bE0OQ9vC", "nips_2021_K_8bE0OQ9vC", "nips_2021_K_8bE0OQ9vC", "nips_2021_K_8bE0OQ9vC" ]
nips_2021_MvGKpmPsN7c
Convex Polytope Trees and its Application to VAE
A decision tree is commonly restricted to use a single hyperplane to split the covariate space at each of its internal nodes. It often requires a large number of nodes to achieve high accuracy. In this paper, we propose convex polytope trees (CPT) to expand the family of decision trees by an interpretable generalization of their decision boundary. The splitting function at each node of CPT is based on the logical disjunction of a community of differently weighted probabilistic linear decision-makers, which also geometrically corresponds to a convex polytope in the covariate space. We use a nonparametric Bayesian prior at each node to infer the community's size, encouraging simpler decision boundaries by shrinking the number of polytope facets. We develop a greedy method to efficiently construct CPT and scalable end-to-end training algorithms for the tree parameters when the tree structure is given. We empirically demonstrate the efficiency of CPT over existing state-of-the-art decision trees in several real-world classification and regression tasks from diverse domains.
accept
Reviewers were very split about this paper, so much so that they did not come to a consensus. After looking through the paper and discussion I have decided to vote to accept for the following reasons: (i) usefulness: trees are an important class of ML models due to their speed, interpretability, accuracy, and ability to handle data with different scales. This work is able to maintain these properties while improving accuracy over recent work; (ii) novelty: while convex polytopes have been used as decision boundaries before it is non-trivial to integrate them into a learning procedure for trees: the proposed approach frames it in a principled way as a maximization of mutual information; (iii) clarity: the paper clearly presents the ideas it introduces. Alongside the recommendations made by reviewers (which the authors should carefully take into account when preparing the camera-ready version) I would urge the authors to consider the following things: (a) remove generative modeling: I don't think the generative modelling part of the paper adds much to the paper and I would consider removing this part and retitling the paper "Convex Polytope Trees". I think most readers interested in decision trees will not be interested in this aspect, and it detracts from the most important contributions (the new splitting function, the fully differentiable training). Instead I would emphasize (possibly with an instructive diagram) that your approach can be used as a drop-in NN layer (because it is fully differentiable). This will get the point across without distracting reviewers with additional experiments; (b) details on the gamma process prior: it would be nice to see an experiment of the effect of the gamma process prior. At the moment this is mentioned very briefly in the paper, but it would be very instructive to see the role it plays in the trees that are built (e.g., how do the trees change without this prior?); (c) algorithm: include a more succinct version of algorithm 1 in the main body of the paper, this will make the training of the method much clearer to the reader; (d) instructive figures: it would be nice to have another instructive figure explaining how the splits in this work differ from classical and modern tree methods; (e) additional baseline: it would be great for the authors to compare against Tanno, R., Arulkumaran, K., Alexander, D., Criminisi, A., and Nori, A. Adaptive neural trees. In Proc. of ICML, 2019 as it performs very competitively in prior work (I am not an author of this work). If the authors are able to make these changes and the ones mentioned by the reviewers it will make a good paper even better, and a nice contribution to the conference.
train
[ "vYGcRVi6aWP", "Ix_txyR_gn4", "xlAE1FtDiFI", "U9Z9fi1uu5", "m9OioVNGaKS", "PzUhy1pLiG", "TJN8ZAVX-bG", "HWs9OU2nZGe", "rSGneFZEtCC", "w-dYWuD4-RC", "pSfL1ReaGrI", "YD9hG_a1UJj" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nWe sincerely appreciate your recognizing our contributions and giving us additional feedback. There are many insightful comments and suggestions, which we will take into careful consideration when revising our paper.\n\n", " Dear authors,\nthank you for your patience and please find an update ...
[ -1, -1, 6, -1, -1, 3, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 5, -1, -1, 4, -1, -1, -1, -1, 4, 3 ]
[ "Ix_txyR_gn4", "m9OioVNGaKS", "nips_2021_MvGKpmPsN7c", "PzUhy1pLiG", "xlAE1FtDiFI", "nips_2021_MvGKpmPsN7c", "PzUhy1pLiG", "xlAE1FtDiFI", "YD9hG_a1UJj", "pSfL1ReaGrI", "nips_2021_MvGKpmPsN7c", "nips_2021_MvGKpmPsN7c" ]
nips_2021_dvyUaK4neD0
The Skellam Mechanism for Differentially Private Federated Learning
We introduce the multi-dimensional Skellam mechanism, a discrete differential privacy mechanism based on the difference of two independent Poisson random variables. To quantify its privacy guarantees, we analyze the privacy loss distribution via a numerical evaluation and provide a sharp bound on the Rényi divergence between two shifted Skellam distributions. While useful in both centralized and distributed privacy applications, we investigate how it can be applied in the context of federated learning with secure aggregation under communication constraints. Our theoretical findings and extensive experimental evaluations demonstrate that the Skellam mechanism provides the same privacy-accuracy trade-offs as the continuous Gaussian mechanism, even when the precision is low. More importantly, Skellam is closed under summation and sampling from it only requires sampling from a Poisson distribution -- an efficient routine that ships with all machine learning and data analysis software packages. These features, along with its discrete nature and competitive privacy-accuracy trade-offs, make it an attractive practical alternative to the newly introduced discrete Gaussian mechanism.
accept
The reviewers were convinced by the technical content of the paper, but were somewhat unconvinced of the value over the distributed discrete Gaussian. The main claimed advantages seemed to be that the primitives were readily available in more libraries (which reviewers did not think was a strong argument), and that the discrete Gaussian is not closed under summation. There is some discussion of this comparison at the end of Section 4. It is requested that the authors place this comparison front-and-center, and expand more upon the points, in order to make the comparison crystal clear. In particular, the authors must be more detailed with their critiques of the discrete Gaussian. What is stopping the discrete Gaussian from being implemented in libraries besides initiative from some interested researcher? In terms of how the discrete Gaussian does not compose: how do you square this with the following sentence from [19]? "The bound of the theorem is surprisingly strong; if σ^2 = τ^2 = 3, then the bound is ≤ 10^{-12}". The plots (Figure 2 and 3) seem to indicate larger differences than what I would have expected from this statement, even in the parameter regime corresponding to this statement. The authors must precisely, thoroughly, and honestly detail why this method is favorable to the discrete Gaussian in the final version. Despite some cynicism about the motivation, this paper had a very thorough analysis of a natural mechanism, and thus should be accepted.
train
[ "67wiCFWQN10", "hcsUZoKjmA4", "JorHy8WEF1w", "Ku_V-n6zj2h", "7NnkoNvP-Gn", "8yGnv42ym5_", "SmI19BKdQGk", "NlQon9aTht", "0lrlQaA1l2Q", "qOn3YQVW7Fr", "vGfoFDfo96c" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for replying to my questions and other reviewers' questions. I also appreciate that the authors have performed a comparison of the sampling time, which shows the advantages of proposed method. The reason I ask this question is that when I perform the discrete gaussian mechanism in a distributed setting,...
[ -1, 6, 7, -1, 6, -1, -1, -1, -1, -1, 7 ]
[ -1, 3, 3, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "NlQon9aTht", "nips_2021_dvyUaK4neD0", "nips_2021_dvyUaK4neD0", "SmI19BKdQGk", "nips_2021_dvyUaK4neD0", "qOn3YQVW7Fr", "JorHy8WEF1w", "vGfoFDfo96c", "hcsUZoKjmA4", "7NnkoNvP-Gn", "nips_2021_dvyUaK4neD0" ]
nips_2021_yaxePRTOhqk
Stability and Deviation Optimal Risk Bounds with Convergence Rate $O(1/n)$
Yegor Klochkov, Nikita Zhivotovskiy
accept
The reviewers are unanimous that this work provides a significant contribution in the area of uniform stability, which should interest the learning theory community as well as the wider NeurIPS community. A key consequence of the authors' results is the positive resolution of a long-standing open question of Shalev-Shwartz et al.'s COLT 2009 paper: this question is about whether ERM can obtain fast rates of convergence for Lipschitz, stochastic strongly convex optimization with high probability, where the dependence on the failure probability $\delta$ is logarithmic. While the authors do heavily draw from previous work, certain technical components like the novel excess risk decomposition are highly original, provide an important technical contribution, and might lead to progress in future works. In addition, the work is very well written and clear. This work would be a welcome contribution to NeurIPS.
train
[ "pszu4479vVI", "KDhNAIp0lUm", "GRSlf9o8lp6", "xo2-nlM0H43", "-jRtvt4W7L", "BHG9JYW50bk", "ylPxcGvyL6K", "IhoA8iGdC0c", "EtEs0-CCc3M", "x7BqHnLrYha", "UhYn9K_sbsv", "oBHl0nZcTc" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors presented an improved variance-type excess risk analysis for uniformly stable learning algorithms. Particularly, it was proved that if the population risk satisfies the Bernstein condition, then a $\\gamma$-stable algorithm with optimization error $\\Delta_{opt}$ over a set of $n$ observations can ach...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, 9, 7 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "nips_2021_yaxePRTOhqk", "GRSlf9o8lp6", "ylPxcGvyL6K", "IhoA8iGdC0c", "BHG9JYW50bk", "oBHl0nZcTc", "pszu4479vVI", "x7BqHnLrYha", "nips_2021_yaxePRTOhqk", "nips_2021_yaxePRTOhqk", "nips_2021_yaxePRTOhqk", "nips_2021_yaxePRTOhqk" ]
nips_2021_Oeb2LbHAfJ4
SketchGen: Generating Constrained CAD Sketches
Computer-aided design (CAD) is the most widely used modeling approach for technical design. The typical starting point in these designs is 2D sketches which can later be extruded and combined to obtain complex three-dimensional assemblies. Such sketches are typically composed of parametric primitives, such as points, lines, and circular arcs, augmented with geometric constraints linking the primitives, such as coincidence, parallelism, or orthogonality. Sketches can be represented as graphs, with the primitives as nodes and the constraints as edges. Training a model to automatically generate CAD sketches can enable several novel workflows, but is challenging due to the complexity of the graphs and the heterogeneity of the primitives and constraints. In particular, each type of primitive and constraint may require a record of different size and parameter types. We propose SketchGen as a generative model based on a transformer architecture to address the heterogeneity problem by carefully designing a sequential language for the primitives and constraints that allows distinguishing between different primitive or constraint types and their parameters, while encouraging our model to re-use information across related parameters, encoding shared structure. A particular highlight of our work is the ability to produce primitives linked via constraints that enables the final output to be further regularized via a constraint solver. We evaluate our model by demonstrating constraint prediction for given sets of primitives and full sketch generation from scratch, showing that our approach significantly outperforms the state-of-the-art in CAD sketch generation.
accept
The paper contributes a generative model for CAD sketches. In particular, the work proposes a language to address the heterogeneity in sketch primitives and constraints, which allows a CAD sketch to be tokenized such that it can be handled using sequence models (such as Transformers) followed by a constraint optimizer. Overall, the reviewers are positive about the paper. The reviewers think the work is well motivated and the paper is clearly written. While several reviewers think the experiments can be strengthened with more comparison, the reviewers generally agree the work is novel and makes reasonable modeling choices, which are valuable to the field. The reviewers raised a set of questions for clarification, which were mostly addressed by the authors in the rebuttal. The authors should further address these points in the revision.
val
[ "7T8r_AYWM5Y", "MHqoFIy2WtF", "eVZ69j_-SPV", "n1hocl2r-A_", "1Q6789pWUd2", "gClteelqrb-", "21Pqpu_rUHc", "xRNN9FWakh", "BfdaCaJjJI_" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new CAD sketches generation method, which designs a new language for CAD primitives and constraints and proposes a transformer-based CAD sketch generator. Results on sketch generation and auto-constraining sketches show the effectiveness of the proposed method.\n Strengths:\n\n1. The paper de...
[ 6, 6, -1, -1, -1, -1, -1, 5, 7 ]
[ 3, 3, -1, -1, -1, -1, -1, 4, 5 ]
[ "nips_2021_Oeb2LbHAfJ4", "nips_2021_Oeb2LbHAfJ4", "nips_2021_Oeb2LbHAfJ4", "MHqoFIy2WtF", "xRNN9FWakh", "BfdaCaJjJI_", "7T8r_AYWM5Y", "nips_2021_Oeb2LbHAfJ4", "nips_2021_Oeb2LbHAfJ4" ]
nips_2021_1ODSsnoMBav
CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain invariant predictive models. However, the application of well-known UDA approaches does not generalize well in Semi-Supervised Domain Adaptation (SSDA) scenarios where few labeled samples from the target domain are available. This paper proposes a simple Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap between the labeled and unlabeled target distributions and the inter-domain gap between the source and unlabeled target distributions in SSDA. We suggest employing class-wise contrastive learning to reduce the inter-domain gap and instance-level contrastive alignment between the original (input image) and strongly augmented unlabeled target images to minimize the intra-domain discrepancy. We have empirically shown that both of these modules complement each other to achieve superior performance. Experiments on three well-known domain adaptation benchmark datasets, namely DomainNet, Office-Home, and Office31, demonstrate the effectiveness of our approach. CLDA achieves state-of-the-art results on all the above datasets.
accept
Overall, the reviewers found the proposed method simple but interesting, the paper well-written, and the experiments/ablations thorough. The reviewers raised a number of concerns including comparisons to related works (and concurrent work) and fairness of comparisons [aZTZ, bD47, MSYR], pseudo-labeling noise [aZTZ, bD47], as well as smaller clarifications and analysis. The authors provided a thorough rebuttal with significant additional experiments, including even comparisons to the concurrent CVPR works. The reviewers have been overall satisfied with the responses and some have decided to increase their scores. As a result, I recommend acceptance and strongly encourage the authors to incorporate the many experimental results and clarifications into the revised paper, including discussions on the experimental setup and noise resilience.
train
[ "MREJxbZ3IZ", "97BYQpamXGQ", "9KGaI2XHvFD", "BGDIE-pMmho", "9uwSdqeoGW7", "-K1O39ZHfmI", "3YcqYSpvSur", "5ONGy-g086", "8NGO2rzbmFN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a contrastive learning method for semi-supervised domain adaptation (CLDA), where few labelled samples from target domain are available. The CLDA consists in inter-domain contrastive alignment, which decreases distance between same classes from source and target domains, and instance contrastiv...
[ 6, 6, 6, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, 5, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_1ODSsnoMBav", "nips_2021_1ODSsnoMBav", "nips_2021_1ODSsnoMBav", "5ONGy-g086", "MREJxbZ3IZ", "97BYQpamXGQ", "8NGO2rzbmFN", "9KGaI2XHvFD", "nips_2021_1ODSsnoMBav" ]
nips_2021_IQOawME4sqW
Differentially Private n-gram Extraction
Kunho Kim, Sivakanth Gopi, Janardhan Kulkarni, Sergey Yekhanin
accept
This paper proposes a DP algorithm to extract all n-grams (sequence of n consecutive words appearing in the data) of length 1 up to some maximum length T, where each user’s data is represented by a single string. It uses an earlier algorithm to find all 1-grams, then iteratively builds the set of n-grams of length from 2 to T by effectively pruning the search space. The reviewers agree that this is an interesting paper, and all of them support acceptance.
train
[ "lNbIdpdDyXU", "7XNKSdEiYS", "rGV_BYgubM", "krQDiU4Abha", "m3zW6paRXm2", "0UvJx6TXUub", "hLxz5LpaWve", "qDvZeDG_ukh" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Paper studies the problem of n-gram extraction with differential privacy as a generalization of differentially private set union. paper proposes to deal with the problem of extracting n-grams with differential privacy posed as a generalization of DPSU. Paper is clearly written and deals with an important issue as...
[ 7, -1, -1, -1, -1, 6, 6, 7 ]
[ 3, -1, -1, -1, -1, 3, 4, 4 ]
[ "nips_2021_IQOawME4sqW", "qDvZeDG_ukh", "lNbIdpdDyXU", "hLxz5LpaWve", "0UvJx6TXUub", "nips_2021_IQOawME4sqW", "nips_2021_IQOawME4sqW", "nips_2021_IQOawME4sqW" ]
nips_2021_mqWkNXJBX4h
Capturing implicit hierarchical structure in 3D biomedical images with self-supervised hyperbolic representations
We consider the task of representation learning for unsupervised segmentation of 3D voxel-grid biomedical images. We show that models that capture implicit hierarchical relationships between subvolumes are better suited for this task. To that end, we consider encoder-decoder architectures with a hyperbolic latent space, to explicitly capture hierarchical relationships present in subvolumes of the data. We propose utilizing a 3D hyperbolic variational autoencoder with a novel gyroplane convolutional layer to map from the embedding space back to 3D images. To capture these relationships, we introduce an essential self-supervised loss---in addition to the standard VAE loss---which infers approximate hierarchies and encourages implicitly related subvolumes to be mapped closer in the embedding space. We present experiments on synthetic datasets along with a dataset from the medical domain to validate our hypothesis.
accept
This paper addresses the unsupervised segmentation of 3D biomedical images. The authors propose to leverage hyperbolic representations as continuous models for modeling hierarchical structures. A VAE based on Gyroplane convolutional layers at the decoder level is introduced, and trained with a self-supervised loss, in addition to the ELBO data-fitting term, that encourages implicitly related subvolumes to be mapped closer in the embedding space. The paper initially received three weak accept and one weak reject recommendations. The main concerns pointed out by reviewers were about the lack of novelty of the proposed method, and the need to compare to stronger baselines and on broader datasets. After rebuttal, all reviewers stuck to their initial ratings. The AC carefully read the submission and the authors' feedback. Although the approach uses known tools (hyperbolic representation, self-supervised learning), the AC considers that its adaptation for unsupervised 3D biomedical image segmentation is meaningful. The paper is also well written and clear. The experiments are overall convincing, although the AC agrees with the reviewers' concerns that the synthetic dataset could have been designed more thoroughly. The AC thus recommends acceptance, but highly encourages the authors to carefully take into account the reviewers' comments and feedback to improve the final version of the paper. In particular, the different metrics should be included in the main paper, and the authors should also verify that their reimplementation of the baselines reaches the performance reported in the published papers. The submission has been discussed with the senior area chair, who agrees with the recommendation.
train
[ "iiFnyVp6bw", "sHCvK6NaoDb", "0Bxt5b3lNbP", "pMxnesfZlg", "NtDS2QL9xW", "uFDrJSOT-AN", "8cn2NOK91Sb", "0ZczqMtkRge", "zqZwLLKxAum" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for the responses, which address most of my questions. After reading the reviews and rebuttals, I maintain my score. Although the technical novelty is limited since all main components of the proposed method already exist, I see its sufficient originality in exploring the feasibility of comb...
[ -1, -1, -1, -1, -1, 6, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "pMxnesfZlg", "0ZczqMtkRge", "zqZwLLKxAum", "8cn2NOK91Sb", "uFDrJSOT-AN", "nips_2021_mqWkNXJBX4h", "nips_2021_mqWkNXJBX4h", "nips_2021_mqWkNXJBX4h", "nips_2021_mqWkNXJBX4h" ]
nips_2021_mf9XiRCEgZu
Noisy Recurrent Neural Networks
We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states. Specifically, we consider RNNs that can be viewed as discretizations of stochastic differential equations driven by input data. This framework allows us to study the implicit regularization effect of general noise injection schemes by deriving an approximate explicit regularizer in the small noise regime. We find that, under reasonable assumptions, this implicit regularization promotes flatter minima; it biases towards models with more stable dynamics; and, in classification tasks, it favors models with larger classification margin. Sufficient conditions for global stability are obtained, highlighting the phenomenon of stochastic stabilization, where noise injection can improve stability during training. Our theory is supported by empirical results which demonstrate that the RNNs have improved robustness with respect to various input perturbations.
accept
This paper received overall favorable reviews. The paper was found to be well written, and the topic to be interesting and innovative. Please address the reviewers' comments in the final version.
train
[ "XFz8smJTCHi", "XVSjc8cAtCT", "J8ay9NSE2aS", "HBykubzHm0F", "78ICNyX5Ft7", "UhkaJ4dvOF", "luz8D50d6jZ", "OZUKUMKhKn", "0CcleVA-q-m", "6hutZZ0301H" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for reading our response and reconsidering to change the score. We regret to hear that you still think that this paper is only slightly above the acceptance threshold, and we would like to better understand the reasons for this and what we can do to push the paper towards a clear accept. \n\nW...
[ -1, -1, 6, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, 3, 2, 4 ]
[ "XVSjc8cAtCT", "OZUKUMKhKn", "nips_2021_mf9XiRCEgZu", "6hutZZ0301H", "J8ay9NSE2aS", "0CcleVA-q-m", "OZUKUMKhKn", "nips_2021_mf9XiRCEgZu", "nips_2021_mf9XiRCEgZu", "nips_2021_mf9XiRCEgZu" ]
nips_2021_C__ChZs8WjU
Matrix encoding networks for neural combinatorial optimization
Machine Learning (ML) can help solve combinatorial optimization (CO) problems better. A popular approach is to use a neural net to compute on the parameters of a given CO problem and extract useful information that guides the search for good solutions. Many CO problems of practical importance can be specified in a matrix form of parameters quantifying the relationship between two groups of items. There is currently no neural net model, however, that takes in such matrix-style relationship data as an input. Consequently, these types of CO problems have been out of reach for ML engineers. In this paper, we introduce Matrix Encoding Network (MatNet) and show how conveniently it takes in and processes parameters of such complex CO problems. Using an end-to-end model based on MatNet, we solve asymmetric traveling salesman (ATSP) and flexible flow shop (FFSP) problems as the earliest neural approach. In particular, for a class of FFSP we have tested MatNet on, we demonstrate a far superior empirical performance to any methods (neural or not) known to date.
accept
The paper presents a general approach to approximately solving a large class of matrix-based combinatorial optimization problems with GNNs. After the discussion, the paper is: * an interesting contribution to combinatorial optimization problem solving from the machine learning perspective; * likely to be helpful for other problems not covered in the paper; * nice to read. However, there are shortcomings: * the proposed method does not outperform existing methods (on ATSP); we agree, however, that outperforming highly optimized codes like LKH is not necessarily expected. Due to the novelty of the architecture, the tackled setting, and the relevant results, the paper is a good fit for NeurIPS. We urge the authors to reflect the discussion and the additional experiments in the final paper and to include the promised changes in the final version.
train
[ "qEQXtkJr30a", "d2IsE-flB1", "iyO4oqZ7FSL", "LWlL8J451-w", "mbstG_IcWli", "YcNM89a8ol8", "3clBWlUjWO6", "-nh8K58KiVo", "8yIKOSLjTVI", "Nelya06G6Q6", "nMdEbKtQkT", "qJhph442MZa", "mhPG9rtC4N", "qXaHRFXALMt", "2kvktbBacJ", "_k1vQXjq3Tq", "5RSm8ycOi7M" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " \n\nWe thank the reviewers, once again, for their time and many helpful comments. Here we summarize the planned changes of our manuscript (main text + Appendix) for the camera-ready version that we are working on.\n\n \n\n1. A subsection describing related works that have used bipartite graph encoding or have so...
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_C__ChZs8WjU", "5RSm8ycOi7M", "YcNM89a8ol8", "3clBWlUjWO6", "nips_2021_C__ChZs8WjU", "-nh8K58KiVo", "8yIKOSLjTVI", "nMdEbKtQkT", "mhPG9rtC4N", "nips_2021_C__ChZs8WjU", "qXaHRFXALMt", "5RSm8ycOi7M", "mbstG_IcWli", "Nelya06G6Q6", "_k1vQXjq3Tq", "nips_2021_C__ChZs8WjU", "nips_...
nips_2021_XqEF9riB93S
When Is Unsupervised Disentanglement Possible?
A common assumption in many domains is that high dimensional data are a smooth nonlinear function of a small number of independent factors. When is it possible to recover the factors from unlabeled data? In the context of deep models this problem is called “disentanglement” and was recently shown to be impossible without additional strong assumptions [17, 19]. In this paper, we show that the assumption of local isometry together with non-Gaussianity of the factors, is sufficient to provably recover disentangled representations from data. We leverage recent advances in deep generative models to construct manifolds of highly realistic images for which the ground truth latent representation is known, and test whether modern and classical methods succeed in recovering the latent factors. For many different manifolds, we find that a spectral method that explicitly optimizes local isometry and non-Gaussianity consistently finds the correct latent factors, while baseline deep autoencoders do not. We propose how to encourage deep autoencoders to find encodings that satisfy local isometry and show that this helps them discover disentangled representations. Overall, our results suggest that in some realistic settings, unsupervised disentanglement is provably possible, without any domain-specific assumptions.
accept
This paper examines the hotly debated topic of whether one can learn disentangled representations from unlabelled data. The key contribution is to show that a combination of non-Gaussianity and local isometry is sufficient to make unsupervised disentanglement theoretically possible. This is then backed up by a proof-of-concept algorithm and experiments. After receiving initially quite mixed review scores, this paper has undergone substantial discussion by the reviewers, both publicly with the authors and behind closed doors, resulting in two reviewers who initially recommended rejection increasing their scores to back marginal acceptance. Perhaps the key underlying issue that has been debated is how strong the assumptions required by the theoretical results actually are in practice. After much back and forth, the general consensus is that the assumptions are actually not unreasonable, but need to be much more carefully explained and discussed in the paper. More generally, the overall sentiment of the reviewers, which I concur with, is that the underlying ideas in the paper are interesting and potentially quite significant, but that noticeable work on the writing of the paper is needed to improve clarity and ensure the work can be properly appreciated by its audience. Though, at the time of writing, Reviewer pb6M still has concerns about the relationship between the theory and experiments in the paper and about which mapping the local isometry assumption applies to, I agree with the authors that this concern seems to be based on a misunderstanding rather than a genuine issue. Putting this together, I believe that the decision on acceptance comes down mostly to whether the clarity issues are deemed too severe for publication or require an unacceptably large amount of changes for the camera-ready. I think this is quite a tight call, but recommend giving the authors the benefit of the doubt that the required changes will be adequately incorporated. The basis for this is that the work has the potential to stimulate notable interest in an area that is moving quickly, such that the risk of the clarity issues not being solved is outweighed by the potential advantages of ensuring the ideas are published. If this decision is upheld, I sincerely hope that the authors will make the required efforts to improve the paper for the camera-ready, as the changes required are non-trivial and it would be a shame to let the writing let down the underlying technical content. I would also very strongly suggest the authors change the title of the paper, which is inappropriately general and does a disservice to previous work that has considered the same question.
train
[ "cZxY2_peyO_", "FjybfEMhVkc", "a15-asnj09", "OGP-cHxUOrb", "xKMSyE-ffxT", "_uVYN1ydwfi", "UrD1A1jEfM_", "NW0YkpuJw9w", "I3LBJRfjHwW", "_PcNIS0GQne", "I6YMloGr50D", "67u7GI50Kp8", "Mdq1vb6DhN", "9SwZs6MlOR", "4Z9viDPpQA", "5P8DqejvBbj" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Our paper is divided into a theoretical section and an experiments section. The former is the more significant part; the latter is there to support it.\n\nIn the theoretical part, we prove (theoretically, without considering any specific manifold or dataset) that disentanglement is possible, if the assumptions (n...
[ -1, -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4 ]
[ -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "FjybfEMhVkc", "a15-asnj09", "OGP-cHxUOrb", "Mdq1vb6DhN", "nips_2021_XqEF9riB93S", "nips_2021_XqEF9riB93S", "67u7GI50Kp8", "I3LBJRfjHwW", "9SwZs6MlOR", "nips_2021_XqEF9riB93S", "xKMSyE-ffxT", "_uVYN1ydwfi", "5P8DqejvBbj", "4Z9viDPpQA", "nips_2021_XqEF9riB93S", "nips_2021_XqEF9riB93S" ]
nips_2021_KzYIEQ_B1BX
Continuous Latent Process Flows
Partial observations of continuous time-series dynamics at arbitrary time stamps exist in many disciplines. Fitting this type of data using statistical models with continuous dynamics is not only promising at an intuitive level but also has practical benefits, including the ability to generate continuous trajectories and to perform inference on previously unseen time stamps. Despite exciting progress in this area, the existing models still face challenges in terms of their representational power and the quality of their variational approximations. We tackle these challenges with continuous latent process flows (CLPF), a principled architecture decoding continuous latent processes into continuous observable processes using a time-dependent normalizing flow driven by a stochastic differential equation. To optimize our model using maximum likelihood, we propose a novel piecewise construction of a variational posterior process and derive the corresponding variational lower bound using trajectory re-weighting. Our ablation studies demonstrate the effectiveness of our contributions in various inference tasks on irregular time grids. Comparisons to state-of-the-art baselines show our model's favourable performance on both synthetic and real-world time-series data.
accept
This paper explores a continuous latent time series model, the Continuous Latent Process Flow (CLPF). CLPFs can be seen as a generalisation of CTFPs where, instead of transforming a base Wiener process with a time-conditional flow, the model first introduces a latent SDE which conditions a CTFP-like decoder (one that uses an OU base process instead of a Wiener process). In that sense, CLPFs combine elements from VRNNs, latent SDEs and CTFPs. In terms of motivation, it is clear why CLPFs should be strictly better than CTFPs, but it is not clear why they should be better than latent SDEs (at least not "a priori"), so adding more discussion of this could improve the draft. The experiments show good performance on real-world data relative to CTFPs, latent SDEs and VRNNs. In spite of some hesitance amongst reviewers, I find the experimental results compelling. But I also agree that the paper could benefit from more detailed experiments comparing to VRNNs, for instance to clarify whether the difference with respect to VRNNs is due to implementation differences (e.g. passing absolute time or not, or small architectural differences).
train
[ "USTbk1hTOkd", "r-UJec1Nubf", "jFSejtK3EJt", "lv4ESp6v-Do", "H06dYVrIUAk", "yr8_DEwtUn5", "nbLcqg77Vqz", "4ZO08mvRGX4", "xN-ILqJAnz", "0MIabO3QFxa", "fct_HD6OQjr", "vZqtXXBRwit", "PtPm_6wzwO_", "wpKFfY5j5Pc" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a continuous latent time series model called the Continuous Latent Process Flow (CLPF). Irregular time series observations are modeled as invertible transformations (i.e. normalizing flows) of an underlying stochastic process (the base process), where the flow transformation is itself parameter...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 4, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "nips_2021_KzYIEQ_B1BX", "4ZO08mvRGX4", "0MIabO3QFxa", "xN-ILqJAnz", "xN-ILqJAnz", "0MIabO3QFxa", "4ZO08mvRGX4", "wpKFfY5j5Pc", "vZqtXXBRwit", "PtPm_6wzwO_", "USTbk1hTOkd", "nips_2021_KzYIEQ_B1BX", "nips_2021_KzYIEQ_B1BX", "nips_2021_KzYIEQ_B1BX" ]
nips_2021_xwGeq7I4Opv
Perturbation-based Regret Analysis of Predictive Control in Linear Time Varying Systems
Yiheng Lin, Yang Hu, Guanya Shi, Haoyuan Sun, Guannan Qu, Adam Wierman
accept
The reviewers were unanimous in their appreciation of the paper and hence I recommend a clear Accept. I request that the authors improve the notation in the paper and reassess the clarity of the presentation of the proofs to improve readability. Suggestions of this form have been laid out in multiple reviews.
test
[ "2snCF2o3WvK", "J8gGLIMmSkQ", "jEVOtVu8mXC", "ad8bJpv0QYm", "ArpszuAE9o", "J8aLJyAl-s", "EfO87jxSgu2", "pbm3587HlX2", "IYMx8x93zPi", "u0SSmKOUG4B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I think a table of notation would b very helpful, as would be using distinct letters to avoid overloading. Thanks!", "This paper establishes both dynamic regret and competitive ratio bounds for online control with time-varying linear dynamics and adversarial perturbative noise, thereby extending past work for t...
[ -1, 8, -1, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "J8aLJyAl-s", "nips_2021_xwGeq7I4Opv", "ArpszuAE9o", "u0SSmKOUG4B", "J8gGLIMmSkQ", "IYMx8x93zPi", "pbm3587HlX2", "nips_2021_xwGeq7I4Opv", "nips_2021_xwGeq7I4Opv", "nips_2021_xwGeq7I4Opv" ]
nips_2021_dBE8OI8_ZOa
Dataset Distillation with Infinitely Wide Convolutional Networks
The effectiveness of machine learning algorithms arises from being able to extract useful features from large amounts of data. As model and dataset sizes increase, dataset distillation methods that compress large datasets into significantly smaller yet highly performant ones will become valuable in terms of training efficiency and useful feature extraction. To that end, we apply a novel distributed kernel-based meta-learning framework to achieve state-of-the-art results for dataset distillation using infinitely wide convolutional neural networks. For instance, using only 10 datapoints (0.02% of original dataset), we obtain over 64% test accuracy on CIFAR-10 image classification task, a dramatic improvement over the previous best test accuracy of 40%. Our state-of-the-art results extend across many other settings for MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and SVHN. Furthermore, we perform some preliminary analyses of our distilled datasets to shed light on how they differ from naturally occurring data.
accept
Reviewers had some mixed feelings about this paper. On one hand, the reviewers felt the paper's direction is important and well justified. On the other hand, some large-scale results are missing and would overall make the claims of this paper stronger. However, the main concern was raised by Reviewer AopF: I believe the paper structure is quite difficult to follow (the "Set up" section reads more like a Related Work section, and the method is never clearly specified, as the next section is the Experimental section), and significant rewriting is needed before this paper can be accepted. Thus I recommend that the authors take this into account and resubmit the paper to another venue.
train
[ "wj1d17t-mlI", "SqUKZp664wE", "qElrjOtT3-R", "69107upbRt4", "0xuWBXBSSJg", "z1Q09VF7qt0", "yVfVSJyiu9i", "duztsMdB4J", "iDdEiXQU8BF", "TvEncrJxJ0G", "yzi-0XdwROT" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear all, \n\nIn our submission, we mentioned that one of our contributions would be that we would open source the datasets we constructed (using 1000s of GPU hours). We are happy to report that the data is now available (over 760 checkpoints, spanning 38 different sets of hyperparameters, hosted through GitHub /...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_dBE8OI8_ZOa", "qElrjOtT3-R", "z1Q09VF7qt0", "nips_2021_dBE8OI8_ZOa", "duztsMdB4J", "iDdEiXQU8BF", "yzi-0XdwROT", "69107upbRt4", "TvEncrJxJ0G", "nips_2021_dBE8OI8_ZOa", "nips_2021_dBE8OI8_ZOa" ]
nips_2021_-1rrzmJCp4
SPANN: Highly-efficient Billion-scale Approximate Nearest Neighborhood Search
The in-memory algorithms for approximate nearest neighbor search (ANNS) have achieved great success for fast high-recall search, but are extremely expensive when handling very large scale databases. Thus, there is an increasing demand for hybrid ANNS solutions with small memory and inexpensive solid-state drive (SSD). In this paper, we present a simple but efficient memory-disk hybrid indexing and search system, named SPANN, that follows the inverted index methodology. It stores the centroid points of the posting lists in the memory and the large posting lists on the disk. We guarantee both disk-access efficiency (low latency) and high recall by effectively reducing the number of disk accesses and retrieving high-quality posting lists. In the index-building stage, we adopt a hierarchical balanced clustering algorithm to balance the length of posting lists and augment each posting list by adding the points in the closure of the corresponding cluster. In the search stage, we use a query-aware scheme to dynamically prune the access of unnecessary posting lists. Experimental results demonstrate that SPANN is 2X faster than the state-of-the-art ANNS solution DiskANN in reaching the same recall quality of 90% with the same memory cost on three billion-scale datasets. It can reach 90% recall@1 and recall@10 in just around one millisecond with only 32GB memory cost. Code is available at: https://github.com/microsoft/SPTAG.
accept
The paper presents an empirically efficient algorithm for disk-scale similarity search. The algorithm offers significant (2x or higher) improvements over the state of the art. The reviewers’ opinion was that the techniques used in the algorithm were relatively standard, but they were carefully engineered and put together to obtain a practical software artifact. One concern was that the experiments have been performed on only two data sets, but the authors promised further experiments in the final version of the paper.
train
[ "t0Tvjj-UYRe", "pEL_jMLv4xO", "wy5R9gKBPEO", "oGBVdx3408d", "ZULfklAaItJ", "861HCCKlUxL", "zhMe9y8w25b", "v-oMpC3c7d", "KZGnkeC0TTe", "23IC2P55eSi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors study the approximate nearest neighbor search (ANNS) problem and develop an inverted index based algorithm using both memory and disk in the searching to reduce the required amount of memory. The proposed algorithm first partitions the vectors into clusters and then stores in the memory ...
[ 6, 6, -1, 8, -1, -1, -1, -1, -1, 8 ]
[ 4, 4, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_-1rrzmJCp4", "nips_2021_-1rrzmJCp4", "v-oMpC3c7d", "nips_2021_-1rrzmJCp4", "861HCCKlUxL", "oGBVdx3408d", "23IC2P55eSi", "pEL_jMLv4xO", "t0Tvjj-UYRe", "nips_2021_-1rrzmJCp4" ]
nips_2021__bOfK2k_7R
Distilling Object Detectors with Feature Richness
In recent years, large-scale deep models have achieved great success, but the huge computational complexity and massive storage requirements make it a great challenge to deploy them in resource-limited devices. As a model compression and acceleration method, knowledge distillation effectively improves the performance of small models by transferring the dark knowledge from the teacher detector. However, most of the existing distillation-based detection methods mainly imitate features near bounding boxes, which suffers from two limitations. First, they ignore the beneficial features outside the bounding boxes. Second, these methods imitate some features which are mistakenly regarded as the background by the teacher detector. To address the above issues, we propose a novel Feature-Richness Score (FRS) method to choose important features that improve generalized detectability during distillation. The proposed method effectively retrieves the important features outside the bounding boxes and removes the detrimental features within the bounding boxes. Extensive experiments show that our method achieves excellent performance on both anchor-based and anchor-free detectors. For example, RetinaNet with ResNet-50 achieves 39.7% in mAP on the COCO2017 dataset, which even surpasses the ResNet-101 based teacher detector at 38.9% by 0.8%.
accept
The paper presents a method (Feature-Richness Score) to choose important features to distill for object detection, leveraging a feature pyramid network. There is significant prior work in the domain, but the method is different enough. The experimental evaluation is convincing, as it encompasses several detectors, backbones, and datasets. A limitation of the method is that it does not work out of the box with approaches without an FPN (e.g. DETR). Overall, the paper can be improved easily, and the authors' response convinced 3 reviewers (out of 4) to raise their scores by one point. The outcome makes this paper a fitting poster at NeurIPS.
train
[ "HZFrF8q-swu", "9KrLl4vI_1J", "ZDlcxMnXaqf", "DsNGoLJNowZ", "79HHH3NCO69", "3PcnWE32q7A", "dXQZ8sV1Wr", "pzeM99ZwTR0", "lRbYsV3mqje", "4vFcNEsuDUS", "2MY7VJTeCTU", "drZHS5EV6cq", "gB8QC0rz3o", "GKERhtFWy8k" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks authors for further clarification! Now I am confident on comparison with the baselines and other existing approaches.", " Thanks for the detailed response from authors. I tend to keep my rating of score 7.", " Thanks for your careful and valuable comments. We will explain your concerns point by point.\...
[ -1, -1, -1, 6, -1, 7, -1, -1, 6, -1, -1, -1, -1, 7 ]
[ -1, -1, -1, 5, -1, 5, -1, -1, 4, -1, -1, -1, -1, 5 ]
[ "ZDlcxMnXaqf", "drZHS5EV6cq", "pzeM99ZwTR0", "nips_2021__bOfK2k_7R", "4vFcNEsuDUS", "nips_2021__bOfK2k_7R", "2MY7VJTeCTU", "gB8QC0rz3o", "nips_2021__bOfK2k_7R", "DsNGoLJNowZ", "3PcnWE32q7A", "GKERhtFWy8k", "lRbYsV3mqje", "nips_2021__bOfK2k_7R" ]
nips_2021_wLsA3nurh9W
Analysis of one-hidden-layer neural networks via the resolvent method
Vanessa Piccolo, Dominik Schröder
accept
This paper considers the limiting singular value distribution of a random feature model, which consists of an entrywise nonlinearity applied to a product of two i.i.d. matrices. In particular, the generalization performance of a single-hidden-layer neural network where the input data and hidden layer weights are i.i.d. random can be determined through this analysis. The authors show that the resolvent method can be used to simplify and generalize the existing techniques for this problem. The reviewers all agreed that the paper contains strong theoretical results, expressed only minor concerns and suggestions, and recommended acceptance. As Reviewer o1S8 noted, adding a discussion of the advantage of the resolvent approach and clarifying the novelty of the results w.r.t. existing work (e.g., dealing with additive bias) will improve the paper. Please take into account the updated reviews when preparing the final version to accommodate the requested changes. Thank you for your submission to NeurIPS.
test
[ "_CfAw-FEkRm", "1MCFvGbypw", "GjWtGhXKhNV", "nAVQNTZUY_", "eqE6jPMBzT6", "Pvdp5V75P1", "bGkCU_1mkWH", "SXN4QkKpdF", "3pL0u18EAi", "PYuY9xhQ5W" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarifications. I will conserve my score of 8. Thus I strongly recommend accepting the paper.", " Thanks for this response---they address my concerns adequately, and (if the paper is accepted) I do encourage the authors to provide these fuller details of the argument as discussed.", " We are gr...
[ -1, -1, -1, -1, -1, -1, 8, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "Pvdp5V75P1", "GjWtGhXKhNV", "PYuY9xhQ5W", "SXN4QkKpdF", "3pL0u18EAi", "bGkCU_1mkWH", "nips_2021_wLsA3nurh9W", "nips_2021_wLsA3nurh9W", "nips_2021_wLsA3nurh9W", "nips_2021_wLsA3nurh9W" ]
nips_2021_ZQQqo8H1qjC
Grounding Spatio-Temporal Language with Transformers
Language is an interface to the outside world. In order for embodied agents to use it, language must be grounded in other, sensorimotor modalities. While there is an extended literature studying how machines can learn grounded language, the topic of how to learn spatio-temporal linguistic concepts is still largely uncharted. To make progress in this direction, we here introduce a novel spatio-temporal language grounding task where the goal is to learn the meaning of spatio-temporal descriptions of behavioral traces of an embodied agent. This is achieved by training a truth function that predicts if a description matches a given history of observations. The descriptions involve time-extended predicates in past and present tense as well as spatio-temporal references to objects in the scene. To study the role of architectural biases in this task, we train several models including multimodal Transformer architectures; the latter implement different attention computations between words and objects across space and time. We test models on two classes of generalization: 1) generalization to new sentences, 2) generalization to grammar primitives. We observe that maintaining object identity in the attention computation of our Transformers is instrumental to achieving good performance on generalization overall, and that summarizing object traces in a single token has little influence on performance. We then discuss how this opens new perspectives for language-guided autonomous embodied agents.
accept
To begin, I should clarify that during discussions, Reviewer MYUM stated that while their objection stands after reading other reviews, they see it as a corner case, and that they voted "marginally below threshold". I believe they forgot to update the rating on their review, so I'm interpreting their 4 as a 5+. Simply put, this paper introduces a new setting for testing the grounded language learning capabilities of learning algorithms and architectures, whereby spatially and temporally extended relationships between objects must be captured by agents via (synthetic) language understanding. They propose a number of architectures which aim to help with this sort of language grounding, and compare them to sensible baselines. They evaluate against several forms of generalization, both on the linguistic and the relational front. The main objection that could be levelled at this sort of environment is that the synthetic language is somewhat toy in that it is (as far as I understand) unambiguous, neatly structured, and has fairly homogeneous semantics, but I think such testbeds are important if they serve as a starting point for research into more complex topics, such as the spatially and temporally extended relationships this work caters to. It's a neat contribution to language-informed/directed sequential decision making, and given the reviewer consensus post discussion, I don't think anyone is looking to champion the case for rejecting it. I recommend acceptance.
train
[ "Pm_6mgtYZWP", "6wENG0BGJj", "aNG14NkVhy6", "46muVGhreis", "jTjomF_j-0R", "qnHQKcT4UEj", "PAxxWarH1nH", "fs9hUT3Aivm", "dBbNWGk8ojU", "OHszmL3jXr", "675AV2Lxky" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank reviewer 8BQz for their comment. We are glad that our motivations for studying the truth function appear clear now. We perfected our introduction, making these motivations more salient.\n\nWe now provide clarifications on the hard negative experiments. For both of our control experiments (hard trajectori...
[ -1, 6, -1, -1, -1, -1, -1, -1, 4, 7, 6 ]
[ -1, 2, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "46muVGhreis", "nips_2021_ZQQqo8H1qjC", "PAxxWarH1nH", "qnHQKcT4UEj", "dBbNWGk8ojU", "675AV2Lxky", "6wENG0BGJj", "OHszmL3jXr", "nips_2021_ZQQqo8H1qjC", "nips_2021_ZQQqo8H1qjC", "nips_2021_ZQQqo8H1qjC" ]
nips_2021_CxefshFHEqh
Learning where to learn: Gradient sparsity in meta and continual learning
Finding neural network weights that generalize well from small datasets is difficult. A promising approach is to learn a weight initialization such that a small number of weight changes results in low generalization error. We show that this form of meta-learning can be improved by letting the learning algorithm decide which weights to change, i.e., by learning where to learn. We find that patterned sparsity emerges from this process, with the pattern of sparsity varying on a problem-by-problem basis. This selective sparsity results in better generalization and less interference in a range of few-shot and continual learning problems. Moreover, we find that sparse learning also emerges in a more expressive model where learning rates are meta-learned. Our results shed light on an ongoing debate on whether meta-learning can discover adaptable features and suggest that learning by sparse gradient descent is a powerful inductive bias for meta-learning systems.
accept
This paper proposes an extension to MAML that also meta-learns weight update masks, and analyses the resulting learning outcomes under a variety of settings. Both conventional few-shot learning and continual learning benchmarks are considered. Although some work has been done in this direction, such as MT-Net, reviewers are generally positive after the rebuttal phase and find this to be a worthwhile contribution. Overall, I recommend acceptance.
train
[ "kDYmneqWV9", "FLHK1CXV_Sf", "RLqZWcoMU8w", "8Oojcc-h_Al", "eCZSX7tqOU5", "GltOf3NNyly", "eQaekB_qbLy", "bJ5lsUSR00", "jy-drDM1yDG", "oTPi8vDPz5", "A6D_8qLDe-M", "zCXUdm-C60P", "3cye7bDr2k_", "lfc3MfoIYfh", "TzbUt1XpYww", "dfT9ffvcIS9" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We would like to share one last update on the exponential learning rate model. We reran from scratch our experiments with this model and tracked the time-evolution of the learning rates as meta-learning proceeded, cf. figure linked below on an anonymous image hosting online service (corresponding to 1-shot miniIm...
[ -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 1 ]
[ "FLHK1CXV_Sf", "RLqZWcoMU8w", "3cye7bDr2k_", "nips_2021_CxefshFHEqh", "nips_2021_CxefshFHEqh", "nips_2021_CxefshFHEqh", "bJ5lsUSR00", "oTPi8vDPz5", "lfc3MfoIYfh", "GltOf3NNyly", "TzbUt1XpYww", "dfT9ffvcIS9", "8Oojcc-h_Al", "nips_2021_CxefshFHEqh", "nips_2021_CxefshFHEqh", "nips_2021_Cx...
nips_2021_l3vp7IDY6PZ
Domain Invariant Representation Learning with Domain Density Transformations
Domain generalization refers to the problem where we aim to train a model on data from a set of source domains so that the model can generalize to unseen target domains. Naively training a model on the aggregate set of data (pooled from all source domains) has been shown to perform suboptimally, since the information learned by that model might be domain-specific and generalize imperfectly to target domains. To tackle this problem, a predominant domain generalization approach is to learn some domain-invariant information for the prediction task, aiming at a good generalization across domains. In this paper, we propose a theoretically grounded method to learn a domain-invariant representation by enforcing the representation network to be invariant under all transformation functions among domains. We next introduce the use of generative adversarial networks to learn such domain transformations in a possible implementation of our method in practice. We demonstrate the effectiveness of our method on several widely used datasets for the domain generalization problem, on all of which we achieve competitive results with state-of-the-art models.
accept
This paper proposes an approach to align both the marginal and the conditional distributions, with a theoretical analysis. The authors have done a good job of addressing many of the reviewers' concerns. After the discussion period, all three reviewers are inclined to accept the paper (note that one increased their score from 4 to 6 without editing the original review).
train
[ "RaQVUUq852g", "wigsTMYupD3", "VIjaPcifHtA", "DDJz49eu-vv", "num1Q-IXFI_", "dxVGTZwGcLs", "lfOHMCyADWs", "V_3AQRchF-s", "GoZ8QJWzD0", "pqQcP3QCppl", "MhhEcyqXssw", "htmVnHN6D2n", "33RvSnw23q", "AabOmY6JNpy", "dJsSw9ow2IE", "IFma-GhuxAH", "B4uPmTcL_TN", "QguTKoH_QS3", "6JUsDKjpbQ"...
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", ...
[ " Dear reviewer,\n\nThank you for your reconsideration. \n\nIf it is not too much trouble to ask, can you also edit the score in the original review to avoid any confusion for the AC?\n\nThank you very much.\n\nBest regards,\n\nAuthors.", " The authors have addressed a lot of my concerns. They provided more basel...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "wigsTMYupD3", "CidLf7Tacz", "IFma-GhuxAH", "nips_2021_l3vp7IDY6PZ", "V_3AQRchF-s", "IFma-GhuxAH", "nips_2021_l3vp7IDY6PZ", "pqQcP3QCppl", "htmVnHN6D2n", "MhhEcyqXssw", "htmVnHN6D2n", "33RvSnw23q", "QguTKoH_QS3", "lfOHMCyADWs", "IFma-GhuxAH", "B4uPmTcL_TN", "CidLf7Tacz", "lfOHMCyAD...
nips_2021_InYbKA26YG2
PlayVirtual: Augmenting Cycle-Consistent Virtual Trajectories for Reinforcement Learning
Learning good feature representations is important for deep reinforcement learning (RL). However, with limited experience, RL often suffers from data inefficiency for training. For un-experienced or less-experienced trajectories (i.e., state-action sequences), the lack of data limits their use for better feature learning. In this work, we propose a novel method, dubbed PlayVirtual, which augments cycle-consistent virtual trajectories to enhance the data efficiency for RL feature representation learning. Specifically, PlayVirtual predicts future states in the latent space based on the current state and action by a dynamics model and then predicts the previous states by a backward dynamics model, which forms a trajectory cycle. Based on this, we augment the actions to generate a large number of virtual state-action trajectories. Being free of ground-truth state supervision, we enforce a trajectory to meet the cycle consistency constraint, which can significantly enhance the data efficiency. We validate the effectiveness of our designs on the Atari and DeepMind Control Suite benchmarks. Our method achieves state-of-the-art performance on both benchmarks.
accept
After the author rebuttal and discussion, the reviewers have unanimously agreed to accept the paper. All three reviewers appreciated the improved clarification of the method, the follow-up experiments, and the overall empirical value of the presented experiments, but still expressed some reservations along the lines of "The paper would clearly be much stronger if it provided more insight as to why and in what way this cycle consistency loss improves the learned representations". For this reason, I recommend accepting this paper as a poster.
train
[ "zpmqdy9iMT", "kKpmz4ZMXi", "kJ9WZhzBv4b", "NZ6vF9s8Qa", "A6hBDJKCemX", "IuqJY0R1gcC", "fbliDx5MLvA", "cDanT-Gdvf5", "yE-M9E0oIUt", "7jjehOqRAnq", "X4lQZKqgp68", "YqXrcv2CoLg", "yxCyYI6wUNm", "3BoilIsuz39", "wSntNrnx9oZ", "SEbe9UkcfEi", "QfAN7VIywl", "hZ44w-yVhTb", "3D5rmvJHEIc",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_r...
[ " Thank you! We will try our best to improve our text based on your comments.", " Thanks so much for the clarification. These extra details will help the readers understand the representations learned due to the additional cycle-consistency loss.", " Thank you for your reply!\n\nThe states in the dataset are ob...
[ -1, -1, -1, -1, -1, 6, -1, 7, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, 5, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "kKpmz4ZMXi", "kJ9WZhzBv4b", "NZ6vF9s8Qa", "fbliDx5MLvA", "IuqJY0R1gcC", "nips_2021_InYbKA26YG2", "yE-M9E0oIUt", "nips_2021_InYbKA26YG2", "SEbe9UkcfEi", "nips_2021_InYbKA26YG2", "YqXrcv2CoLg", "nips_2021_InYbKA26YG2", "3BoilIsuz39", "hZ44w-yVhTb", "QfAN7VIywl", "cDanT-Gdvf5", "YqXrcv...
nips_2021_4-Py8BiJwHI
Efficient Equivariant Network
Lingshen He, Yuxuan Chen, zhengyang shen, Yiming Dong, Yisen Wang, Zhouchen Lin
accept
The authors have written a very convincing rebuttal that leans me toward acceptance of this paper.
test
[ "C0pWOAfqzb0", "T8H-IkGN53", "lBxQTpDaUt", "DMnNHSEu7Vv", "ISSuE_48AUJ", "S5Tj2uuAs8a", "e3LUJxdw8cU", "_q5sviIblkR", "ph1TOLkC5ZI", "UJtV_sni5a", "dsrqZwt-CSe", "kozGE1lUeRu", "7nil985bMC_" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, thanks for your nice feedback and updating your score.\n\nBest regards, The Authors", "This paper introduces a functional abstraction for group-convolutional networks employing first- and second-order features. Based on this functional abstraction, a new functional form for second-order group-con...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "lBxQTpDaUt", "nips_2021_4-Py8BiJwHI", "T8H-IkGN53", "T8H-IkGN53", "T8H-IkGN53", "_q5sviIblkR", "T8H-IkGN53", "UJtV_sni5a", "7nil985bMC_", "ph1TOLkC5ZI", "kozGE1lUeRu", "nips_2021_4-Py8BiJwHI", "nips_2021_4-Py8BiJwHI" ]
nips_2021_GPYHMC-MXl
Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation
Model-agnostic meta-reinforcement learning requires estimating the Hessian matrix of value functions. This is challenging from an implementation perspective, as repeatedly differentiating policy gradient estimates may lead to biased Hessian estimates. In this work, we provide a unifying framework for estimating higher-order derivatives of value functions, based on off-policy evaluation. Our framework interprets a number of prior approaches as special cases and elucidates the bias and variance trade-off of Hessian estimates. This framework also opens the door to a new family of estimates, which can be easily implemented with auto-differentiation libraries, and lead to performance gains in practice.
accept
The expert reviewers for the most part appreciated the paper and were mostly positive. One issue that arose in discussion is that the intent and contribution of the paper are not stated sufficiently clearly, leading to some potential misunderstandings; the authors should very carefully use the feedback of all reviewers, both positive and negative, to more clearly frame the idea of bridging meta-learning gradients and OPE. In addition, given the significant limitations of sequential IS (including as part of DR), the authors should offer a clear discussion of more efficient methods for OPE and gradient estimation, such as those pointed out by the reviewers, and of what the implications of these might be in the authors' setting; this is an important discussion even if fully studying these implications in depth may be beyond the scope of the paper. The authors should similarly address the other points raised in the reviews and implement fixes suggested in their response.
train
[ "K4L1PokIb0h", "IXK7zd28WLl", "__bd9eF7e-b", "J_O9XaqX9id", "-GWBejWiSYc", "xl12Q9ptEST", "GyJOvC8yIKO", "t_gny7btH6W", "dpjHFg4-IV1" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi all reviewers, \n\nThank you once again for your efforts in providing feedback to our paper. \n\nWe would like to ask whether your questions & concerns have been fully addressed, or if there are additional questions that we may answer. We appreciate your further feedback.\n\nMany thanks for your time.\n\nSince...
[ -1, 6, -1, -1, -1, -1, 6, 4, 7 ]
[ -1, 1, -1, -1, -1, -1, 3, 5, 3 ]
[ "nips_2021_GPYHMC-MXl", "nips_2021_GPYHMC-MXl", "IXK7zd28WLl", "dpjHFg4-IV1", "t_gny7btH6W", "GyJOvC8yIKO", "nips_2021_GPYHMC-MXl", "nips_2021_GPYHMC-MXl", "nips_2021_GPYHMC-MXl" ]
nips_2021_yTJtgA1Gh2
Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation
Kenneth Borup, Lars Andersen
accept
This paper provides an in-depth analysis of self-distillation in the kernel regression setting, and studies the effect of using a weighted combination of ground-truth labels and predictions made by the model to define the next set of target values. It provides a closed-form solution for the optimal choice of weighting parameter at each step, and shows how to efficiently estimate this weighting parameter for deep learning, significantly reducing the computational requirements compared to a grid search. All reviewers find this paper interesting, and rate it in the accept zone. Some of the initial scores were lower, but after the authors' reply, reviewers found the responses convincing and increased their ratings. In concordance with the reviewers, I find distillation an important tool with increasing popularity, and providing a solid understanding of this technique is of great interest to the community.
train
[ "MDkas_8BcYG", "VsTqx6C-X4q", "YGb4UbiQC0f", "yb266WirAJV", "JMApXJFS2lF", "WpDNX9yWFJC", "oK_cTA5MmSs", "B11UN3Zn57Q" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the theory of self-distillation in the kernel ridge regression setting. The authors extend the theoretical results in Mobahi et al. (2020) to further incorporate the ground-truth labels in the distillation objective and highlight that the ground-truth labels serve to dampen the sparsification an...
[ 6, -1, -1, -1, -1, -1, 6, 7 ]
[ 3, -1, -1, -1, -1, -1, 2, 2 ]
[ "nips_2021_yTJtgA1Gh2", "YGb4UbiQC0f", "yb266WirAJV", "MDkas_8BcYG", "oK_cTA5MmSs", "B11UN3Zn57Q", "nips_2021_yTJtgA1Gh2", "nips_2021_yTJtgA1Gh2" ]
nips_2021_BvJkwMhyInm
Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition
We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression. Our algorithm hinges on the idea of compressing each convolutional (or fully-connected) layer by slicing its channels into multiple groups and decomposing each group via low-rank decomposition. At the core of our algorithm is the derivation of layer-wise error bounds from the Eckart–Young–Mirsky theorem. We then leverage these bounds to frame the compression problem as an optimization problem where we wish to minimize the maximum compression error across layers and propose an efficient algorithm towards a solution. Our experiments indicate that our method outperforms existing low-rank compression approaches across a wide range of networks and data sets. We believe that our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks.
accept
The paper suggests a network compression scheme using low-rank approximation of linear layers (either convolutional or dense). So far not much new here, except that this paper deals with the combinatorially hard problem of optimizing the matrix decomposition (compression) parameters in all layers simultaneously, so as to minimally sacrifice accuracy. They suggest an EM-like strategy for solving this optimization problem. There has been a considerable amount of discussion between the reviewers and the authors, including an exchange of code snippets, which is rare. After reading through this exchange, I became more inclined to accept the paper. I also think that the optimization problem discussed by the paper does not get enough attention in the network compression literature, and the approach the authors suggest is very reasonable. Therefore, I choose to recommend acceptance in spite of the average “borderline/reject” score.
train
[ "Lrrm9RfKFnq", "ohcxqbSjJU2", "81WuKLacPF3", "OiZC8gJLxju", "ILuVUiGpxR", "whcc2YSdYeq", "3ueyiGbqTh", "28Upw1tqCj", "8CBFjcENix", "lKymgqqJ2-p", "HyI71whCN-S", "OoyilS_pzKo", "2iOrA0HnHs", "iSA3Y9pnGf", "LtvKUFkx6o0", "Dk81bBRiraE", "DLNmhQhhFJa", "duyf8QGVybh", "szlYfo4rqTE", ...
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The reviewer identified three issues and concerns, all of which have been addressed very thoroughly as we feel. \n\n**First,** the reviewer requested additional ImageNet experiments on an efficient architecture, such as MobileNetV2. Since the initial review, we have actually run these experiments for multiple dif...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "OiZC8gJLxju", "OiZC8gJLxju", "OiZC8gJLxju", "nips_2021_BvJkwMhyInm", "szlYfo4rqTE", "Hwo05vXmZhN", "rW-og9BJAX", "rW-og9BJAX", "duyf8QGVybh", "DLNmhQhhFJa", "duyf8QGVybh", "DLNmhQhhFJa", "nips_2021_BvJkwMhyInm", "rW-og9BJAX", "Hwo05vXmZhN", "Hwo05vXmZhN", "szlYfo4rqTE", "szlYfo4rq...
nips_2021_Bq_RoftLEeN
Equilibrium and non-Equilibrium regimes in the learning of Restricted Boltzmann Machines
Aurélien Decelle, Cyril Furtlehner, Beatriz Seoane
accept
This experimental paper has been well received by the reviewers, who are overall fairly confident that the paper deserves to be accepted. They also unanimously point out the quality of writing and presentation. The reviewers have done a great job, and I trust the authors to implement the promised revisions before publication. The main weakness appears to be a lack of originality. But given the timeliness of the models studied here and the overall quality of presentation, this paper complements well the existing literature pointing at the non-equilibrium properties of energy-based generative models. Moreover, the reviewers acknowledge mostly convincing responses to their comments. In particular, I think it is important that, as promised by the authors, they implement the numerical experiments on more datasets, especially given the purely experimental nature of the paper. Second, a reviewer commented: "Can authors provide a reference protocol for training RBM in applications based on results of this paper? This would be helpful for readers." I completely back up that point. I think that this would add to the paper. Last but not least: it is of crucial importance that the authors extensively discuss and compare their results to the reference https://arxiv.org/abs/1903.12370, and explain in detail how their contribution complements it and further confirms its initial findings in a different context.
train
[ "WMNaHHx1Uau", "4Nb91NEwwJq", "7i8M6k1rXk-", "Uw6sE2x7zCT", "v32MocppHgS", "1txw6S1RAN2", "A2Bgeqmgcn1", "8rkgYmcKz8m", "dh1ID847ktM", "lxtPeqpLEGa", "oINaNf_wZ7", "pRztgs4gSj5" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ " I thank the authors for addressing my concerns and for taking my suggestions into account.\nRegarding the computational cost, I was indeed referring to the evaluation and not the Rdm-k method which is as expensive as CD-k.\nI apologize for any confusion this may have caused.\n\nWhile I maintain my score, I would ...
[ -1, 7, -1, -1, -1, 6, 6, -1, -1, -1, -1, -1 ]
[ -1, 5, -1, -1, -1, 3, 5, -1, -1, -1, -1, -1 ]
[ "dh1ID847ktM", "nips_2021_Bq_RoftLEeN", "8rkgYmcKz8m", "v32MocppHgS", "pRztgs4gSj5", "nips_2021_Bq_RoftLEeN", "nips_2021_Bq_RoftLEeN", "oINaNf_wZ7", "4Nb91NEwwJq", "A2Bgeqmgcn1", "A2Bgeqmgcn1", "1txw6S1RAN2" ]
nips_2021_cMv0gvg88a
Imitation with Neural Density Models
We propose a new framework for Imitation Learning (IL) via density estimation of the expert's occupancy measure followed by Maximum Occupancy Entropy Reinforcement Learning (RL) using the density as a reward. Our approach maximizes a non-adversarial model-free RL objective that provably lower bounds reverse Kullback–Leibler divergence between occupancy measures of the expert and imitator. We present a practical IL algorithm, Neural Density Imitation (NDI), which obtains state-of-the-art demonstration efficiency on benchmark control tasks.
accept
This paper proposes a new framework for imitation learning via density estimation of the expert’s occupancy measure, followed by Maximum Occupancy Entropy Reinforcement Learning. The intuitive idea is to encourage the imitator to visit high-density state-action pairs under the expert’s occupancy measure while maximally exploring the state-action space. In their initial reviews, the reviewers find the paper well-written. The proposed method is based on an in-depth analysis of the problem, and it has strong performance in comparison with a wide range of baselines. On the other hand, they also think that the claim on data efficiency is not well supported and that the paper lacks some necessary discussions (evolving reward and stability of learning, use of non-parametric kernel density, advantages over MMG-GAIL, etc.). During discussions, two reviewers stood by their votes for accept. Two other reviewers felt that their concerns were not satisfactorily addressed by the authors’ responses. One of them even lowered their score.
train
[ "uEd1rVyMrlZ", "XZkb8M77LF", "tCSwgl9jiFv", "FLs6yB8hrl", "1Vdbi5_JbkV", "nxnKoFIASH", "f2DzEhJT9Kq", "IEs8wzeJBa" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents an imitation learning approach based on density estimation of the state-action pairs. The underlying idea is to take expert demonstrations and find a distribution over the inherent state occupancy and then follow-up with a reinforcement learning step. As a result, the learner visits the high de...
[ 7, 5, -1, -1, -1, -1, 7, 5 ]
[ 2, 3, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_cMv0gvg88a", "nips_2021_cMv0gvg88a", "IEs8wzeJBa", "XZkb8M77LF", "uEd1rVyMrlZ", "f2DzEhJT9Kq", "nips_2021_cMv0gvg88a", "nips_2021_cMv0gvg88a" ]
nips_2021_TlE6Ar1sRsR
Accurate Point Cloud Registration with Robust Optimal Transport
This work investigates the use of robust optimal transport (OT) for shape matching. Specifically, we show that recent OT solvers improve both optimization-based and deep learning methods for point cloud registration, boosting accuracy at an affordable computational cost. This manuscript starts with a practical overview of modern OT theory. We then provide solutions to the main difficulties in using this framework for shape matching. Finally, we showcase the performance of transport-enhanced registration models on a wide range of challenging tasks: rigid registration for partial shapes; scene flow estimation on the Kitti dataset; and nonparametric registration of lung vascular trees between inspiration and expiration. Our OT-based methods achieve state-of-the-art results on Kitti and for the challenging lung registration task, both in terms of accuracy and scalability. We also release PVT1010, a new public dataset of 1,010 pairs of lung vascular trees with densely sampled points. This dataset provides a challenging use case for point cloud registration algorithms with highly complex shapes and deformations. Our work demonstrates that robust OT enables fast pre-alignment and fine-tuning for a wide range of registration models, thereby providing a new key method for the computer vision toolbox. Our code and dataset are available online at: https://github.com/uncbiag/robot.
accept
This paper was a borderline case, but despite some weaknesses we are recommending acceptance for NeurIPS. Here, the positive aspects are mostly practical/empirical; the authors show how unbalanced OT can be incorporated into realistic pipelines for rigid/nonrigid registration. But the work is mostly engineering, and the methodological/algorithmic contribution is smaller. The AC/SAC also had some concern about precisely what objective function this algorithm is optimizing. It is not articulated clearly in the paper and appears to be a bilevel problem with optimal transport as the "inner" problem, but then the gradients may not be correct. Empirically this appears to be OK in the results, but it'd be preferable to give a clearer mathematical story in this work. Two additional related papers need to be acknowledged/discussed in the final version of this paper: * Feydy et al. "Optimal Transport for Diffeomorphic Registration." MICCAI 2017. -- This paper contains many related ideas and incorporates optimal transport losses in a similar fashion. * Mukherjee et al. "Outlier-Robust Optimal Transport." ICML 2021. -- This paper proposes a similar unbalanced transport model. It also uses the same "ROBOT" acronym to refer to their model. Please make sure the camera-ready acknowledges and discusses these two works, and also take the suggestions in the individual reviews seriously.
train
[ "1X3nM6XDpLh", "G91s2JI7ND", "jTNHdgAPKXC", "mqV3OehHqod", "cMXc-RLB3sq", "VKY4Zkw-50", "w-LEt_widmD", "F3dWg7q0rWe", "3yLsk9XdGb-", "zk3tBKrpLuD", "HIjSf1gq7Pt" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Authors,\nThank you for your effort; I would like to acknowledge that I have appreciated your replies.\n\nAcross the reviews, I see there is a general positive feeling about this work, mainly due to:\n\n- Experiments: the work shows convincing results, and it improves state of the art in non-trivial settings...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "G91s2JI7ND", "HIjSf1gq7Pt", "zk3tBKrpLuD", "cMXc-RLB3sq", "3yLsk9XdGb-", "w-LEt_widmD", "F3dWg7q0rWe", "nips_2021_TlE6Ar1sRsR", "nips_2021_TlE6Ar1sRsR", "nips_2021_TlE6Ar1sRsR", "nips_2021_TlE6Ar1sRsR" ]
nips_2021_rq_UD6IiBpX
Simple steps are all you need: Frank-Wolfe and generalized self-concordant functions
Alejandro Carderera, Mathieu Besançon, Sebastian Pokutta
accept
Overall, all reviewers, and I as well, liked the paper, which very elegantly simplifies a recent result of interest regarding the convergence of Frank-Wolfe for self-concordant functions, which previously required a more involved approach.
train
[ "NpNpQEu3Jz", "9LbzzjZWGL", "fLUJGHo_o8", "CWRQ5fHkwiV", "qdpe_rl4zr0", "OPXOH8NSZCw", "5z9Dc05W3E", "_7JiYZMB0qz", "HbxRZTrDaId", "oGWjG0lrlJN", "91mhKaKfRwM", "-7Ln0shGXs8", "CYNM1SgoWs_", "QM0Jqx6vpnF" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I don't particularly care too strongly about the font, I just wasn't sure if it was allowed as I've never seen it before.", " I was asking for a comparison in terms of the problem parameters. Know results in this direction are all $O(1/\\varepsilon)$, unless there are other problem ...
[ -1, -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, -1, -1, -1, 2, -1, 4, -1, -1, -1, -1, 3, 5 ]
[ "-7Ln0shGXs8", "CWRQ5fHkwiV", "5z9Dc05W3E", "qdpe_rl4zr0", "91mhKaKfRwM", "nips_2021_rq_UD6IiBpX", "HbxRZTrDaId", "nips_2021_rq_UD6IiBpX", "OPXOH8NSZCw", "_7JiYZMB0qz", "QM0Jqx6vpnF", "CYNM1SgoWs_", "nips_2021_rq_UD6IiBpX", "nips_2021_rq_UD6IiBpX" ]
nips_2021_FChSjfcJZVW
Automatic Data Augmentation for Generalization in Reinforcement Learning
Deep reinforcement learning (RL) agents often fail to generalize beyond their training environments. To alleviate this problem, recent work has proposed the use of data augmentation. However, different tasks tend to benefit from different types of augmentations and selecting the right one typically requires expert knowledge. In this paper, we introduce three approaches for automatically finding an effective augmentation for any RL task. These are combined with two novel regularization terms for the policy and value function, required to make the use of data augmentation theoretically sound for actor-critic algorithms. Our method achieves a new state-of-the-art on the Procgen benchmark and outperforms popular RL algorithms on DeepMind Control tasks with distractors. In addition, our agent learns policies and representations which are more robust to changes in the environment that are irrelevant for solving the task, such as the background.
accept
The paper presents a solid contribution to improving generalization in reinforcement learning through data augmentation. Reviewers appreciated the theoretical motivation provided for the method and the addition of UCB to select transformations. I encourage the authors to make the revisions promised in their response and to address the issues raised by the reviewers.
train
[ "e5CYWxMCN8r", "06tP7L8WSL", "i9R6LZDsAv4", "Hpr4TD2sgHm", "qZqRP7-1K-K", "z3IJ9AjHqo", "bZ5YBQF7jjH", "BKaCL38TxRx", "DQOQxjadttB", "5adg6jUT03W", "FmPRZIRZXzz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for engaging in the discussion and for their thoughtful comments. We will do our best to take this feedback into account.", "This paper presents a method for improving generalization in reinforcement learning using data augmentation. It identifies a key theoretical limitation of previous w...
[ -1, 7, -1, 7, -1, -1, -1, -1, -1, 5, 7 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "i9R6LZDsAv4", "nips_2021_FChSjfcJZVW", "z3IJ9AjHqo", "nips_2021_FChSjfcJZVW", "bZ5YBQF7jjH", "06tP7L8WSL", "Hpr4TD2sgHm", "FmPRZIRZXzz", "5adg6jUT03W", "nips_2021_FChSjfcJZVW", "nips_2021_FChSjfcJZVW" ]
nips_2021_0-0Wk0t6A_Z
Blending Anti-Aliasing into Vision Transformer
The transformer architectures, based on the self-attention mechanism and a convolution-free design, have recently demonstrated superior performance and booming applications in computer vision. However, the discontinuous patch-wise tokenization process implicitly introduces jagged artifacts into attention maps, giving rise to the traditional problem of aliasing for vision transformers. The aliasing effect occurs when discrete patterns are used to produce high-frequency or continuous information, resulting in indistinguishable distortions. Recent research has found that modern convolutional networks still suffer from this phenomenon. In this work, we analyze the uncharted problem of aliasing in vision transformers and explore incorporating anti-aliasing properties. Specifically, we propose a plug-and-play Aliasing-Reduction Module (ARM) to alleviate the aforementioned issue. We investigate the effectiveness and generalization of the proposed method across multiple tasks and various vision transformer families. This lightweight design consistently attains a clear boost over several well-known structures. Furthermore, our module also improves the data efficiency and robustness of vision transformers.
accept
This paper initially received mixed reviews. The authors successfully convinced the reviewers with the rebuttal. The reviewers discussed extensively the strengths/weaknesses of this work and in the end all were in favor of accepting this paper. The area chairs agree with the reviewers' recommendation to accept this paper.
train
[ "TPGWEPjz-sj", "qE8eh0XM0lE", "Bp5WICUE7C6", "LbYgRBo-xGW", "wjVEUyA1at", "NMuYQBGBxVT", "bH3x7EHsrEM", "gBHdwe6CepU", "PSQT60NrA9R", "7B_GxHmMDk", "Yr9AIT3tisg", "fUyo9uRmQ-G", "ZLhhRIj8wTm", "D0q7OKEGd4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a plug-and-play anti-aliasing reduction method to solve the aliasing problem in the vision transformer. It analyzes the effectiveness and generalization ability in multiple tasks and various vision transformer methods. Experimental results demonstrate the effectiveness of the proposed method. ...
[ 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "nips_2021_0-0Wk0t6A_Z", "gBHdwe6CepU", "nips_2021_0-0Wk0t6A_Z", "wjVEUyA1at", "NMuYQBGBxVT", "TPGWEPjz-sj", "D0q7OKEGd4", "Bp5WICUE7C6", "D0q7OKEGd4", "Bp5WICUE7C6", "TPGWEPjz-sj", "ZLhhRIj8wTm", "nips_2021_0-0Wk0t6A_Z", "nips_2021_0-0Wk0t6A_Z" ]
nips_2021_8Yrcy55iHE
A Trainable Spectral-Spatial Sparse Coding Model for Hyperspectral Image Restoration
Hyperspectral imaging offers new perspectives for diverse applications, ranging from environmental monitoring using airborne or satellite remote sensing to precision farming, food safety, planetary exploration, and astrophysics. Unfortunately, the spectral diversity of information comes at the expense of various sources of degradation, and the lack of accurate ground-truth "clean" hyperspectral signals acquired on the spot makes restoration tasks challenging. In particular, training deep neural networks for restoration is difficult, in contrast to traditional RGB imaging problems where deep models tend to shine. In this paper, we advocate instead for a hybrid approach based on sparse coding principles that retains the interpretability of classical techniques encoding domain knowledge with handcrafted image priors, while allowing model parameters to be trained end-to-end without massive amounts of data. We show on various denoising benchmarks that our method is computationally efficient and significantly outperforms the state of the art.
accept
This work presents a combined sparse-coding and deep learning approach to achieve practical and efficient de-noising of hyperspectral images. Following a discussion with the reviewers, the authors were able to clarify a number of key points relating to compute time and the relationship to competing methods. More importantly, the authors fixed an error identified by one reviewer and have indicated new results that will be included instead of those currently in the paper. With these corrections, the reviewers have updated their scores and I therefore recommend this work be accepted to NeurIPS.
train
[ "DqJHn87Z4qN", "gwo-pbsQK-", "imnzwV-5Yr", "To1Jy7NlBgg", "vcMQiKOMkJ1", "u2HNKyihCsq", "Otg8Vm65Zmw", "tM7hHMjYvuJ", "Kmy2CIXP-O", "BxSY8ThXe0", "EJpe1YtWWzg", "1tNn50vDYyl", "3vKx8SaLf2n", "WlBj5OLEFT8", "log_ueDEKV4" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " As promised to reviewer XbQS, we have conducted an experiment on CAVE, and made\na comparison with NGMeet. For other baselines, we refer to the following\nsurvey https://arxiv.org/pdf/2011.03462.pdf who also used the 32 512x512x31 images\nfrom the CAVE dataset for the evaluation.\n\nSince evaluation on all image...
[ -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "Otg8Vm65Zmw", "nips_2021_8Yrcy55iHE", "nips_2021_8Yrcy55iHE", "tM7hHMjYvuJ", "Kmy2CIXP-O", "WlBj5OLEFT8", "BxSY8ThXe0", "EJpe1YtWWzg", "3vKx8SaLf2n", "nips_2021_8Yrcy55iHE", "imnzwV-5Yr", "log_ueDEKV4", "gwo-pbsQK-", "nips_2021_8Yrcy55iHE", "nips_2021_8Yrcy55iHE" ]
nips_2021_ejAu7ugNj_M
Posterior Collapse and Latent Variable Non-identifiability
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
accept
The authors focus on a particular notion of posterior collapse in VAEs, namely $p(z|x) = p(z), \forall x$, as well as a notion of injectivity of the decoder, namely that $p(\cdot|z) \neq p(\cdot|z')$ for $z \neq z'$. They show that in the *exact* sense, these two notions are equivalent. Moreover, they provide an architecture that ensures injectivity by using Brenier maps (i.e., derivatives of convex functions) and provide some empirical data showing that this architecture is better at avoiding posterior collapse. The theory in the paper is quite simple and straightforward. The empirical results are fairly small-scale, but suggest this architecture may have promise for avoiding posterior collapse.
train
[ "ZjeuFkHmh1", "ai0-oO2UTd", "qkKqJRc6uWC", "gSa9rKH9YUw", "CgaxsyTIRGz", "iYKWgjpJhZt", "HoqQc_YeG3", "RSXPGX0HMDh", "nEqPGzv0uVL", "y5pts9HWi93" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "The authors propose a parametrization of the VAE decoder which ensures that it is injective in the latent space, for a fixed observation. They show that such an injective decoder implies \"latent identifiability\", which in turn is equivalent to he VAE avoiding posterior collapse. The work provided by the authors...
[ 6, -1, 7, 6, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_ejAu7ugNj_M", "iYKWgjpJhZt", "nips_2021_ejAu7ugNj_M", "nips_2021_ejAu7ugNj_M", "RSXPGX0HMDh", "HoqQc_YeG3", "nEqPGzv0uVL", "gSa9rKH9YUw", "ZjeuFkHmh1", "qkKqJRc6uWC" ]
nips_2021_4XOrn_Y-dqp
The Benefits of Implicit Regularization from SGD in Least Squares Problems
Stochastic gradient descent (SGD) exhibits strong algorithmic regularization effects in practice, which has been hypothesized to play an important role in the generalization of modern machine learning approaches. In this work, we seek to understand these issues in the simpler setting of linear regression (including both underparameterized and overparameterized regimes), where our goal is to make sharp instance-based comparisons of the implicit regularization afforded by (unregularized) average SGD with the explicit regularization of ridge regression. For a broad class of least squares problem instances (that are natural in high-dimensional settings), we show: (1) for every problem instance and for every ridge parameter, (unregularized) SGD, when provided with \emph{logarithmically} more samples than that provided to the ridge algorithm, generalizes no worse than the ridge solution (provided SGD uses a tuned constant stepsize); (2) conversely, there exist instances (in this wide problem class) where optimally-tuned ridge regression requires \emph{quadratically} more samples than SGD in order to have the same generalization performance. Taken together, our results show that, up to the logarithmic factors, the generalization performance of SGD is always no worse than that of ridge regression in a wide range of overparameterized problems, and, in fact, could be much better for some problem instances. More generally, our results show how algorithmic regularization has important consequences even in simpler (overparameterized) convex settings.
accept
The paper considers the problem of understanding whether SGD can be seen as an implicitly regularized empirical risk minimizer for the problem of linear regression. The authors compare the excess risk guarantees of the SGD solution to those obtained by solving ridge regression exactly. The authors show that the SGD solution is always almost as good as the one obtained by ridge regression. On the other hand, there exists a problem setting where the excess risk of ridge regression is substantially worse than that of the SGD solution. This shows that SGD cannot be seen as implicitly regularized ridge regression. This paper generated a large number of discussions between the reviewers and also between reviewers and authors. While for some points there isn't total agreement, given the reviews and the discussion, it seems clear to me that the paper should be accepted: first, because despite some of the possible shortcomings mentioned, the paper is still interesting; and second, because the subtler discussion points are exactly why we need to accept the paper, so that these issues are more widely discussed.
test
[ "3PIq1LDRagf", "UuYxEutgYl", "a6bXkrVjtm", "rc5vrGGvzNi", "y3w3JurYly", "FBTwlMK6eec", "C6KJL2ICzO2", "rZPGOtRtaWL", "wpUUb4yfevj", "goMsB0zhLGD", "94EXOinWMfj", "42ov0hYcPz5", "LB0pe5foUJ_", "YzWQb0xvfl", "f1YlUSYABYq", "Ze7PvLdj-8O", "8ph3ppy4-e", "jrC3xy9KiY3", "DEwmD1d0wQo", ...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "...
[ " Thanks for the clarification.\n\n\nWe would like to clarify that the example where $\\\\| w^\\*\\\\|_2 \\eqsim 1$ and $\\sigma^2 \\eqsim 1$ is introduced to explain the calculation of $k^*$ and $\\lambda$. It is possible that the SNR will go to zero for certain $w^\\*$ (e.g., random $w^\\*$). We also want to emph...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "UuYxEutgYl", "a6bXkrVjtm", "rc5vrGGvzNi", "FBTwlMK6eec", "C6KJL2ICzO2", "goMsB0zhLGD", "8dTsWNy2_GZ", "f1YlUSYABYq", "8ph3ppy4-e", "94EXOinWMfj", "42ov0hYcPz5", "LB0pe5foUJ_", "YzWQb0xvfl", "rZPGOtRtaWL", "wpUUb4yfevj", "jrC3xy9KiY3", "DEwmD1d0wQo", "E8nCCQyA1oT", "zcArnWe6IUz",...
nips_2021_09-zkOYoVof
Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks
Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
accept
This learning-theory paper used a stability approach to study the generalization properties of Model-Agnostic Meta-Learning (MAML). The rate of $O(1/mn)$ for strongly convex objectives in the homogeneous setting is the first known result for meta-learning and greatly improves on existing results. For the heterogeneous setting, generalization was proved using the total variation distance between the data distributions of the unseen task and the training tasks. As far as I can see from reading through the proof techniques, the work is indeed solid and non-trivial. There were various concerns about the strong-convexity and smoothness assumptions for the stability analysis, which were well addressed and explained in the rebuttal. A discussion on the limitations of the stability analysis should be included in the final version. I would also particularly encourage the authors to include discussions of other concerns raised by the reviewers, such as the generalization bound related to K and the appropriateness of using the TV metric for deriving generalization on unseen tasks in meta-learning.
train
[ "sNwlQvdX8LL", "XylJUVgjoF7", "euyo3c2tUI", "nQkcY8z5bc9", "9VsiGt2m1iD", "BooJ1BegXIK", "dl455s5hBIL", "DeViFyKkhz3", "UG9mF0WKqst", "-0trc938P8S", "BULNeih0Cmv", "mSPcyq_KFHM", "TabU5uXxHII", "hbwwVes3EHG", "0wEdmSlRvEa", "umPCSR85Rv", "43xpv0lQlE", "MBNrNQDAzCC", "MwSwYK3Aekn"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_re...
[ "In this paper, the authors provide a generalization and uniform-stability-type analysis for MAML training. In specific, the authors first present a generalization error bound (based on the uniform stability bound) of $O(1/mn)$ for strongly convex objective functions when the new task at the test time belong to one...
[ 6, -1, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nips_2021_09-zkOYoVof", "euyo3c2tUI", "hbwwVes3EHG", "nips_2021_09-zkOYoVof", "DeViFyKkhz3", "0wEdmSlRvEa", "TabU5uXxHII", "-0trc938P8S", "nips_2021_09-zkOYoVof", "BULNeih0Cmv", "MwSwYK3Aekn", "MBNrNQDAzCC", "43xpv0lQlE", "sNwlQvdX8LL", "nQkcY8z5bc9", "UG9mF0WKqst", "nips_2021_09-zk...
nips_2021_NXGnwTLlWiR
Factored Policy Gradients: Leveraging Structure for Efficient Learning in MOMDPs
Policy gradient methods can solve complex tasks but often fail when the dimensionality of the action space or objective multiplicity grows very large. This occurs, in part, because the variance of score-based gradient estimators scales quadratically. In this paper, we address this problem through a factor baseline which exploits independence structure encoded in a novel action-target influence network. Factored policy gradients (FPGs), which follow, provide a common framework for analysing key state-of-the-art algorithms, are shown to generalise traditional policy gradients, and yield a principled way of incorporating prior knowledge of a problem domain's generative processes. We provide an analysis of the proposed estimator and identify the conditions under which variance is reduced. The algorithmic aspects of FPGs are discussed, including optimal policy factorisation, as characterised by minimum biclique coverings, and the implications for the bias-variance trade-off of incorrectly specifying the network. Finally, we demonstrate the performance advantages of our algorithm on large-scale bandit and traffic intersection problems, providing a novel contribution to the latter in the form of a spatial approximation.
accept
This paper introduces a novel causal baseline to reduce the variance in policy gradient methods. All reviewers like the paper and I recommend acceptance.
val
[ "5hZqpmYGQXu", "H5Iv5J3Xvn", "jjwZghXlA1G", "J2FTYuo9hKY", "Fy-YXAvhFXz", "UCyzi_LIRRB", "LjIROznO2wv", "EOT-uRqjr8T" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " I appreciate the response from the authors - particularly that they are willing to fix the positioning problem, which was one of my major concerns. I am increasing my score based on the response as well as discussions with other reviewers. ", "This paper studies multi objective MDPs with a factorized action sp...
[ -1, 6, 7, -1, -1, -1, -1, 7 ]
[ -1, 3, 2, -1, -1, -1, -1, 3 ]
[ "Fy-YXAvhFXz", "nips_2021_NXGnwTLlWiR", "nips_2021_NXGnwTLlWiR", "UCyzi_LIRRB", "H5Iv5J3Xvn", "EOT-uRqjr8T", "jjwZghXlA1G", "nips_2021_NXGnwTLlWiR" ]
nips_2021_3zP6RrQtNa
MarioNette: Self-Supervised Sprite Learning
Artists and video game designers often construct 2D animations using libraries of sprites---textured patches of objects and characters. We propose a deep learning approach that decomposes sprite-based video animations into a disentangled representation of recurring graphic elements in a self-supervised manner. By jointly learning a dictionary of possibly transparent patches and training a network that places them onto a canvas, we deconstruct sprite-based content into a sparse, consistent, and explicit representation that can be easily used in downstream tasks, like editing or analysis. Our framework offers a promising approach for discovering recurring visual patterns in image collections without supervision.
accept
The submission proposes a self-supervised approach to represent images as a composition of elements from a dictionary of patches onto a canvas. Reviewers felt that the paper tackles an interesting problem, is well-executed and provides convincing results for the video game animation domain. The main concern among reviewers is generalizability to more complex domains. The authors respond by pointing out that sprite-based animation and game content is already challenging enough for current approaches. They also point to Section 3.4 of the submission, which shows preliminary results on natural images and videos, noting that a richer image formation model would be required to obtain photorealistic results. Reviewer vWXs was concerned that the proposed approach might be relying too much on task specificities to recover good sprites, which was addressed by the authors by clearing up confusion on how the data is scaled during preprocessing. Reviewer vWXs remains concerned that the translation-only design choice makes the approach brittle to variations in scale, pose, lighting, etc. between the training and evaluation settings, but notes that the approach can recover from that weakness through a better "transform predictor". Given that overall reviewers agree that the submission advances the state-of-the-art with a new and interesting approach, I recommend acceptance.
train
[ "2d09yIa8L8", "ouC5e3tIgSr", "WDef1Jq2_Bz", "TYgAv-Wqgcb", "YEA2yFD1CVg", "GlWqu03CV2M", "MDPj9nrY4rT", "WLW-YtsTljN", "i3UaH49tl51", "VPqBMC-HMCW", "ngNUnR6Daur", "U3_YcyqpeKH", "8fsTk_Z4rv3" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a method to disentangle sprite-based video animations into recurring visual elements in a self-supervised manner. Specifically, it decompose a frame into grids and try to reconstruct the frame by drawing elements from a dictionary of discovered sprites for each grid. By training with this recon...
[ 7, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 5, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "nips_2021_3zP6RrQtNa", "VPqBMC-HMCW", "ngNUnR6Daur", "nips_2021_3zP6RrQtNa", "GlWqu03CV2M", "MDPj9nrY4rT", "WLW-YtsTljN", "TYgAv-Wqgcb", "2d09yIa8L8", "U3_YcyqpeKH", "8fsTk_Z4rv3", "nips_2021_3zP6RrQtNa", "nips_2021_3zP6RrQtNa" ]
nips_2021_trNDfee72NQ
RLlib Flow: Distributed Reinforcement Learning is a Dataflow Problem
Eric Liang, Zhanghao Wu, Michael Luo, Sven Mika, Joseph E. Gonzalez, Ion Stoica
accept
The reviewers and the AC appreciate this paper's contribution: a nice advance in coding machinery to improve processing for parallel streams of RL data. The authors are encouraged to carefully address the multiple suggestions for further strengthening the paper in their camera-ready version.
train
[ "cuJGMFSYWDn", "cOa8KCURQeA", "PGoQ5RudM_f", "yMBpam0-WA2", "QW1y08OwuX", "g3SPPRBmXHS", "W4RC_roNnDG", "C0DnIfTLSsl", "FDLMUnQm9Nn", "LpeNOPQWBOv", "yteqfei1X5", "OUmKqOwc2K" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank you again for your review. As the deadline getting close, it is appreciated if you could tell us whether our response addresses your concerns mentioned above and whether it is sufficient to increase the score. Thanks!", " We would like to thank you again for your review. As the deadline g...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "QW1y08OwuX", "g3SPPRBmXHS", "C0DnIfTLSsl", "W4RC_roNnDG", "OUmKqOwc2K", "yteqfei1X5", "LpeNOPQWBOv", "FDLMUnQm9Nn", "nips_2021_trNDfee72NQ", "nips_2021_trNDfee72NQ", "nips_2021_trNDfee72NQ", "nips_2021_trNDfee72NQ" ]
nips_2021_VjC4uY3_3I
Improve Agents without Retraining: Parallel Tree Search with Off-Policy Correction
Tree Search (TS) is crucial to some of the most influential successes in reinforcement learning. Here, we tackle two major challenges with TS that limit its usability: \textit{distribution shift} and \textit{scalability}. We first discover and analyze a counter-intuitive phenomenon: action selection through TS and a pre-trained value function often leads to lower performance compared to the original pre-trained agent, even when having access to the exact state and reward in future steps. We show this is due to a distribution shift to areas where value estimates are highly inaccurate and analyze this effect using Extreme Value theory. To overcome this problem, we introduce a novel off-policy correction term that accounts for the mismatch between the pre-trained value and its corresponding TS policy by penalizing under-sampled trajectories. We prove that our correction eliminates the above mismatch and bound the probability of sub-optimal action selection. Our correction significantly improves pre-trained Rainbow agents without any further training, often more than doubling their scores on Atari games. Next, we address the scalability issue given by the computational complexity of exhaustive TS, which scales exponentially with the tree depth. We introduce Batch-BFS: a GPU breadth-first search that advances all nodes at each depth of the tree simultaneously. Batch-BFS reduces runtime by two orders of magnitude and, beyond inference, also enables training with TS of depths that were not feasible before. We train DQN agents from scratch using TS and show improvement on several Atari games compared to both the original DQN and the more advanced Rainbow. We will share the code upon publication.
accept
This is a solid paper and set of contributions. There are certainly areas where it could be improved; to me these are primarily around the structure of the paper and the writing (not that these are poor, but there are two essentially distinct contributions that some reviewers get tripped up on). z7iw points out that related ideas can be found in the literature and that this negatively impacts novelty, and I agree that this overlap exists, but would argue that whatever minor hit novelty takes is made up for by how this work can unify understanding of where these challenges arise. There are several concrete suggestions (or areas for clarification) that the reviewers have brought up, and the authors are encouraged to include them in the final version of the paper.
train
[ "L3l1Kp4etxN", "e1G3wI72U8-", "qKCmPl2mmBQ", "7Qz4WCuhI_", "rIhsbLeW3v7", "yF1Y-DtSFcv", "YI7Mcj-F6m0", "uycT9xbeVrD", "MrSvwS5_aVM", "o_Mxyh8H2s", "3YNWDmVxoUW", "vgjYfVg1Vme", "my9iuUAUvyR" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewers for reading and considering our clarifications and hope we have answered all major points.", " The answers were very clear and helped. \nI still maintain my choice to accept this paper, I hope the author will release an open source implementation.", " Thanks for your responses!\n\nI jus...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 2, 3, 5 ]
[ "yF1Y-DtSFcv", "YI7Mcj-F6m0", "uycT9xbeVrD", "nips_2021_VjC4uY3_3I", "o_Mxyh8H2s", "nips_2021_VjC4uY3_3I", "my9iuUAUvyR", "vgjYfVg1Vme", "3YNWDmVxoUW", "7Qz4WCuhI_", "nips_2021_VjC4uY3_3I", "nips_2021_VjC4uY3_3I", "nips_2021_VjC4uY3_3I" ]
nips_2021_e2gqGkFjDHg
Redesigning the Transformer Architecture with Insights from Multi-particle Dynamical Systems
Subhabrata Dutta, Tanya Gautam, Soumen Chakrabarti, Tanmoy Chakraborty
accept
This paper was universally well-liked by the reviewers. The reviewers found the theoretical developments to be elegant, with significant practical applications yielding a convincing new model in the transformer family. Several reviewers commented that the exposition might be improved by including more intuitive explanations.
train
[ "3GoAorFO8k3", "AH-q6-83gGM", "MNXPemHr_j", "-awQzFI0Iy", "bE4dW7OySKa", "mkGibSFulG6", "T3myAhF4HFt", "C6hRHsCPd7H", "R0AO_oiHwRb", "7EwvthBRjiq", "7gwb69kkc7l", "k3xElQKWiiu", "25A5WQJLyWq", "Z7vPdIfyHUp" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'm still not convinced by the necessity of the dynamic system viewpoint, although I'm ok with a story with the ODE viewpoint. The approximation has nothing to do with the ODE.\n\nThe experiment relating to the Lazyformer is good; it demonstrated the benefit of the paper.\n\nAt this point due to the hardness to r...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "mkGibSFulG6", "nips_2021_e2gqGkFjDHg", "-awQzFI0Iy", "bE4dW7OySKa", "AH-q6-83gGM", "Z7vPdIfyHUp", "7EwvthBRjiq", "k3xElQKWiiu", "Z7vPdIfyHUp", "25A5WQJLyWq", "AH-q6-83gGM", "nips_2021_e2gqGkFjDHg", "nips_2021_e2gqGkFjDHg", "nips_2021_e2gqGkFjDHg" ]
nips_2021_OdklztJBBYH
Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. A range of defense methods have been proposed to train adversarially robust DNNs, among which adversarial training has demonstrated promising results. However, despite preliminary understandings developed for adversarial training, it is still not clear, from the architectural perspective, what configurations can lead to more robust DNNs. In this paper, we address this gap via a comprehensive investigation on the impact of network width and depth on the robustness of adversarially trained DNNs. Specifically, we make the following key observations: 1) more parameters (higher model capacity) do not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness. We also provide a theoretical analysis explaining why such a network configuration can help robustness. These architectural insights can help design adversarially robust DNNs.
accept
This paper studies the impact of network width and depth on the robustness of a model through adversarial training. Different from previous works that looked into the impact by varying the overall width and depth of a network, the authors perform stage-level experiments in which they modify the width and depth of different stages within a model. The findings, which indicate that decreasing the depth and width at the last stage can lead to higher robustness while decreasing depth and width at earlier stages of the network hurts robustness, are useful to the community. However, the upper bound on the Lipschitz constant proposed in this work does not seem to explain the empirical results well. We encourage the authors to discuss the consistency between the theoretical and empirical results more if the paper is finally accepted.
train
[ "Evvr2SBbwCE", "Vv2P493e2gB", "543Oyt__AAG", "0CAkbEWVOqb", "N13fpR5lWpi", "xhhkKfSvMYA", "vSoB3YHKQ1", "prun-RT4Oti", "OQdyVDQT9kJ", "R58G-5NWug9", "Ur-yu77745H", "-D_zTf18TE", "XaTKoJ8X_LT", "IM8FTqawFPh", "Is1nI7oUFoM" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the positive feedback and suggestion. We will make sure the explanations are added to the revision.", " Thanks for your response. I think most of my concerns have been addressed. I suggest the authors to add these explanation in the paper for better understanding. ", " Thanks for your...
[ -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "Vv2P493e2gB", "XaTKoJ8X_LT", "0CAkbEWVOqb", "N13fpR5lWpi", "-D_zTf18TE", "prun-RT4Oti", "nips_2021_OdklztJBBYH", "OQdyVDQT9kJ", "R58G-5NWug9", "Ur-yu77745H", "vSoB3YHKQ1", "IM8FTqawFPh", "Is1nI7oUFoM", "nips_2021_OdklztJBBYH", "nips_2021_OdklztJBBYH" ]
nips_2021_sxjpM-kvVv_
Center Smoothing: Certified Robustness for Networks with Structured Outputs
The study of provable adversarial robustness has mostly been limited to classification tasks and models with one-dimensional real-valued outputs. We extend the scope of certifiable robustness to problems with more general and structured outputs like sets, images, language, etc. We model the output space as a metric space under a distance/similarity function, such as intersection-over-union, perceptual similarity, total variation distance, etc. Such models are used in many machine learning problems like image segmentation, object detection, generative models, image/audio-to-text systems, etc. Based on a robustness technique called randomized smoothing, our center smoothing procedure can produce models with the guarantee that the change in the output, as measured by the distance metric, remains small for any norm-bounded adversarial perturbation of the input. We apply our method to create certifiably robust models with disparate output spaces -- from sets to images -- and show that it yields meaningful certificates without significantly degrading the performance of the base model.
accept
Thank you for your submission to NeurIPS. The reviewers and I are all in agreement that the proposed work presents a nice (if incremental, at least from a technical perspective) extension to randomized smoothing, applying to more general output spaces than previous approaches. The results are technically sound, even if they by no means represent a foundational breakthrough, and the experiments illustrate the potential usefulness of the approach. And the authors did a great job during the response period discussing the issues raised by the reviewers and offering further potential benefits of the approach. The majority of the discussion centered around the question of whether such a result is substantial enough (from a philosophical standpoint) to warrant publication at NeurIPS. And in this setting, I want to make two points. First, I think it's entirely appropriate for the reviewer to bring up the point in this manner. Paper selection for NeurIPS _is_ ultimately about making subjective (rather than just factual) evaluations of papers, and it's entirely appropriate for the reviewer to highlight their concerns about the fundamental novelty and perceived impact of the work. Indeed, it's far preferable to bring it up in an honest and frank manner (and engage fully with the reviewers afterward in discussion), than to try to couch this fundamental concern in the veil of minor technical issues with the paper (which is too often the standard in reviews). If there are questions about a paper's significance, they can and should be brought up in this manner. And second, I must say that I agree to some extent with what this reviewer is saying: there _is_ an extent to which the paper seems to be extending randomized smoothing to "one more domain". And the other three reviewers, while being positive, are only minorly so, and weren't substantially swayed by the author response. Despite this, however, I am going to recommend the paper be accepted. 
While I'm sympathetic to the points that the negative reviewer makes, ultimately I think that rejecting an otherwise-universally-recommended-to-be-accepted paper on the grounds of perceived significance be an incorrect decision, given that it's such a subjective quality. My own reading of the paper is that it presents a practically-valuable extension to randomized smoothing for problems beyond classification, using technically sound approaches. Although the methodology may not be the most fundamentally groundbreaking, it covers a topic of likely interest to a wide variety of practitioners who may be considering using randomized smoothing, and hence has a notable value for the field. Given all this, I believe the paper does meet the bar for NeurIPS despite the ultimate persisting disagreement between reviewers.
train
[ "38NBqVA_TWp", "6fxaAt1GPCe", "VWJN-V0GNKJ", "_21JF4LzJ42", "C0q9QcwUTu-", "OO6xwax9lCF", "9loWdO_NLNV", "2wjuOFvEGj", "U0YRKX_jU2l", "fNe-2Ki1vd3", "n7lLuLEZJM", "RRGE_WRoVeE", "qObKWOcH9bV", "fHKuL5yomsN", "_qrq-f2c7Mh", "m91ut5W_aey", "4mcULjJR2zg", "cXiDFLyi-if", "CJmmP7ZlLBZ...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", ...
[ " Thank you for your response! Below we address the concerns raised:\n\n1. Attacks developed by previous works, for problems in our setting, work for almost all the points in the dataset and can make the model perform poorly on all of them. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text by Carlini a...
[ -1, -1, 6, -1, -1, 6, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 5, -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "6fxaAt1GPCe", "m91ut5W_aey", "nips_2021_sxjpM-kvVv_", "CJmmP7ZlLBZ", "9loWdO_NLNV", "nips_2021_sxjpM-kvVv_", "4mcULjJR2zg", "4mcULjJR2zg", "nips_2021_sxjpM-kvVv_", "n7lLuLEZJM", "RRGE_WRoVeE", "qObKWOcH9bV", "fHKuL5yomsN", "_qrq-f2c7Mh", "cXiDFLyi-if", "Iaax878RZmE", "OO6xwax9lCF", ...
nips_2021_TrgTdDW4ta
Breaking the Linear Iteration Cost Barrier for Some Well-known Conditional Gradient Methods Using MaxIP Data-structures
Conditional gradient methods (CGM) are widely used in modern machine learning. CGM's overall running time usually consists of two parts: the number of iterations and the cost of each iteration. Most efforts focus on reducing the number of iterations as a means to reduce the overall running time. In this work, we focus on improving the per-iteration cost of CGM. The bottleneck step in most CGM is maximum inner product search (MaxIP), which requires a linear scan over the parameters. In practice, approximate MaxIP data-structures are found to be helpful heuristics. However, theoretically, nothing is known about whether approximate MaxIP data-structures can be soundly combined with CGM. In this work, we answer this question positively by providing a formal framework to combine the locality sensitive hashing type approximate MaxIP data-structures with CGM algorithms. As a result, we show the first algorithm, where the cost per iteration is sublinear in the number of parameters, for many fundamental optimization algorithms, e.g., Frank-Wolfe, Herding algorithm, and policy gradient.
accept
Overall, the majority of the reviewers thought that the combination of MaxIP data structures with conditional gradient methods for continuous optimization, even though some of the main derivations are somewhat straightforward and not particularly difficult, is interesting enough to justify publication in NeurIPS, and may encourage further research along these lines. I also agree with this view.
train
[ "vwPIUE6YoQK", "ums-klu545", "Xt33vOwmLA", "m0zWP2Mo0b", "GTStmAJq0D", "cCjbL36SLU-", "XuFaVXCaOru", "9IGB9f8sC2q" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your encouragement and suggestions, which will help us improve the paper! We have carefully thought through all your great questions and will add corresponding discussions to answer them in the updated paper. We provide details below:\n\n\n**1. The role of $\\rho$.**\n\nThe $\\rho$ is determined by the...
[ -1, -1, -1, -1, 8, 7, 6, 4 ]
[ -1, -1, -1, -1, 4, 3, 2, 4 ]
[ "XuFaVXCaOru", "9IGB9f8sC2q", "GTStmAJq0D", "cCjbL36SLU-", "nips_2021_TrgTdDW4ta", "nips_2021_TrgTdDW4ta", "nips_2021_TrgTdDW4ta", "nips_2021_TrgTdDW4ta" ]
nips_2021_rJwDMui8DI
Neural Regression, Representational Similarity, Model Zoology & Neural Taskonomy at Scale in Rodent Visual Cortex
How well do deep neural networks fare as models of mouse visual cortex? A majority of research to date suggests results far more mixed than those produced in the modeling of primate visual cortex. Here, we perform a large-scale benchmarking of dozens of deep neural network models in mouse visual cortex with both representational similarity analysis and neural regression. Using the Allen Brain Observatory's 2-photon calcium-imaging dataset of activity in over 6,000 reliable rodent visual cortical neurons recorded in response to natural scenes, we replicate previous findings and resolve previous discrepancies, ultimately demonstrating that modern neural networks can in fact be used to explain activity in the mouse visual cortex to a more reasonable degree than previously suggested. Using our benchmark as an atlas, we offer preliminary answers to overarching questions about levels of analysis (e.g. do models that better predict the representations of individual neurons also predict representational similarity across neural populations?); questions about the properties of models that best predict the visual system overall (e.g. is convolution or category-supervision necessary to better predict neural activity?); and questions about the mapping between biological and artificial representations (e.g. does the information processing hierarchy in deep nets match the anatomical hierarchy of mouse visual cortex?). Along the way, we catalogue a number of models (including vision transformers, MLP-Mixers, normalization free networks, Taskonomy encoders and self-supervised models) outside the traditional circuit of convolutional object recognition. 
Taken together, our results provide a reference point for future ventures in the deep neural network modeling of mouse visual cortex, hinting at novel combinations of mapping method, architecture, and task to more fully characterize the computational motifs of visual representation in a species so central to neuroscience, but with a perceptual physiology and ecology markedly different from the ones we study in primates.
accept
This paper presents a large-scale / systematic study fitting many modern deep nets to mouse calcium data from the Allen institute. This paper received two accepts, one borderline accept and one borderline reject and was discussed on the forum. There was a bit of a discussion regarding the actual contributions of this paper. There was some agreement between the reviewers that the authors oversold the claim that they introduced a novel regression method, and the authors agreed to remove or significantly downplay this claim. There was also some discussion about the lack of clear hypotheses being tested or significant results contributing to our understanding of the visual cortex. The study is an impressive feat from the engineering perspective but at the end of the day, from the dozens of models and results presented, the fits are relatively low (correlation coefficients r<0.2 which means r^2<0.04). This is to be expected given how noisy calcium data are and given the small number of stimuli available (~100). However, given these scores it is possible that we are essentially fitting noise, and indeed the models are all performing very similarly, so there are no really salient results. With that said, the positive reviewers also emphasized that the present study seems to be at least partially addressing a controversy in the field (because a previous study had found that random models did about as well as trained ones). Here at least the authors report the accuracy of trained models that significantly outperform untrained ones. The code is also made freely available and positive reviewers see that as a "launching point" for additional work on mouse (as opposed to primate cortex). All in all, this really is a borderline submission. Given the issues highlighted above, this paper could be accepted if space permits.
val
[ "Rdk1oKnaTm", "xSPFij5RmsQ", "yQrOb4Jg6-a", "B9O79CO6kjw", "ptd7w7LyD3N", "a-s4DZ9PAjk", "AY_X9H9FvyB", "8GKhDYr8Ncv", "HUNCjavprRa", "sQGy3dzKU1U", "5Y2Hf6549Ff", "fDutEoX966S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ " I would like to apologize for the late response and thank the authors for the thorough response.\n\n- [Lack of Guiding Hypotheses] I fully understand the reasoning behind proposing a large scale data-driven analysis. It allows for answering many questions on the brain. My review is pushing the authors on the ac...
[ -1, 6, -1, 7, 6, -1, 8, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, 4, -1, 4, -1, -1, -1, -1, -1 ]
[ "5Y2Hf6549Ff", "nips_2021_rJwDMui8DI", "fDutEoX966S", "nips_2021_rJwDMui8DI", "nips_2021_rJwDMui8DI", "nips_2021_rJwDMui8DI", "nips_2021_rJwDMui8DI", "HUNCjavprRa", "AY_X9H9FvyB", "ptd7w7LyD3N", "B9O79CO6kjw", "xSPFij5RmsQ" ]
nips_2021_pMvBiSLGTeU
A Topological Perspective on Causal Inference
This paper presents a topological learning-theoretic perspective on causal inference by introducing a series of topologies defined on general spaces of structural causal models (SCMs). As an illustration of the framework we prove a topological causal hierarchy theorem, showing that substantive assumption-free causal inference is possible only in a meager set of SCMs. Thanks to a known correspondence between open sets in the weak topology and statistically verifiable hypotheses, our results show that inductive assumptions sufficient to license valid causal inferences are statistically unverifiable in principle. Similar to no-free-lunch theorems for statistical inference, the present results clarify the inevitability of substantial assumptions for causal inference. An additional benefit of our topological approach is that it easily accommodates SCMs with infinitely many variables. We finally suggest that our framework may be helpful for the positive project of exploring and assessing alternative causal-inductive assumptions.
accept
This paper proposes a novel topological approach to the causal hierarchy. All of the reviewers found the approach interesting, although some valid concerns were raised regarding clarity and presentation. In particular, a serious concern regarding comparisons with related work was raised and sorted out during the discussion phase. In the end, there was a consensus recommendation to accept this paper which I concur with. We expect the authors will take the reviewer feedback into account, and in particular add a reference and detailed comparison with the paper [1]. [1] S. Bongers, P. Forré, J. Peters, and J. M. Mooij. Foundations of structural causal models with cycles and latent variables. arXiv.org preprint, arXiv:1611.06221v5 [stat.ME], 2021. URL https://arxiv.org/abs/1611.06221
train
[ "arSmWwtqFF5", "CzoedWWwlm", "FgzSicVtWnM", "oxY-fnYq2qC", "jEAM2K6GMBU", "cT05ENVl6Xc", "T38_o9R4qI", "-Wql1uCY_D", "pXF_dGfMziD", "OsqqGIkWV7E", "xTwC8usqpca", "YuzJJl71BQ", "nHMUYwpqF1B", "MHhaVgpz_-x", "l_TYXdhkVnw" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am happy with the response and will make a positive recommendation. ", "The authors propose an alternative analysis of how causal inference at different levels (observational, interventional, counterfactual) can interrelate by proposing a topology on the space of structural causal models. The authors' results...
[ -1, 7, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8 ]
[ -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "OsqqGIkWV7E", "nips_2021_pMvBiSLGTeU", "pXF_dGfMziD", "cT05ENVl6Xc", "nips_2021_pMvBiSLGTeU", "xTwC8usqpca", "nips_2021_pMvBiSLGTeU", "nHMUYwpqF1B", "CzoedWWwlm", "MHhaVgpz_-x", "jEAM2K6GMBU", "l_TYXdhkVnw", "nips_2021_pMvBiSLGTeU", "nips_2021_pMvBiSLGTeU", "nips_2021_pMvBiSLGTeU" ]
nips_2021_MvTnc_c4xYj
Parameter Inference with Bifurcation Diagrams
Estimation of parameters in differential equation models can be achieved by applying learning algorithms to quantitative time-series data. However, sometimes it is only possible to measure qualitative changes of a system in response to a controlled condition. In dynamical systems theory, such change points are known as bifurcations and lie on a function of the controlled condition called the bifurcation diagram. In this work, we propose a gradient-based approach for inferring the parameters of differential equations that produce a user-specified bifurcation diagram. The cost function contains an error term that is minimal when the model bifurcations match the specified targets and a bifurcation measure which has gradients that push optimisers towards bifurcating parameter regimes. The gradients can be computed without the need to differentiate through the operations of the solver that was used to compute the diagram. We demonstrate parameter inference with minimal models which explore the space of saddle-node and pitchfork diagrams and the genetic toggle switch from synthetic biology. Furthermore, the cost landscape allows us to organise models in terms of topological and geometric equivalence.
accept
All reviewers agree that the paper proposes an interesting approach to the problem of estimating parameters in differential equations. Although some reviewers raised technical concerns in their initial reviews, they were satisfied by the authors' responses addressing those. Although there are some points that should be modified from the current form, including the terms used in the paper (such as semi-supervised learning), I think we can expect the authors to modify the paper in the camera-ready by reflecting the discussion. Based on these, I recommend acceptance (poster) for this paper.
train
[ "9xEBl_2QW5E", "J-Zgo7-vkoM", "wd5Xog_VF93", "K735zQLusJ", "oQWOZenlc1E", "cVm03xbrfGV", "2MM-LDoL1SK", "T7FkoHOixcZ", "f6FTuErAGkC", "bLejGnUOpNw", "lEfQN--C_H1" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I believe that what the authors are suggesting will improve the presentation of their work and that the paper can be accepted. However, as more details of their suggested changes are not available to me for anonymity preserving reasons, I leave the decision at the discretion of the Chairs.", " Thank you for you...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "2MM-LDoL1SK", "T7FkoHOixcZ", "oQWOZenlc1E", "nips_2021_MvTnc_c4xYj", "cVm03xbrfGV", "f6FTuErAGkC", "bLejGnUOpNw", "lEfQN--C_H1", "K735zQLusJ", "nips_2021_MvTnc_c4xYj", "nips_2021_MvTnc_c4xYj" ]
nips_2021_A3TwMRCqWUn
Scalable Thompson Sampling using Sparse Gaussian Process Models
Thompson Sampling (TS) from Gaussian Process (GP) models is a powerful tool for the optimization of black-box functions. Although TS enjoys strong theoretical guarantees and convincing empirical performance, it incurs a large computational overhead that scales polynomially with the optimization budget. Recently, scalable TS methods based on sparse GP models have been proposed to increase the scope of TS, enabling its application to problems that are sufficiently multi-modal, noisy or combinatorial to require more than a few hundred evaluations to be solved. However, the approximation error introduced by sparse GPs invalidates all existing regret bounds. In this work, we perform a theoretical and empirical analysis of scalable TS. We provide theoretical guarantees and show that the drastic reduction in computational complexity of scalable TS can be enjoyed without loss in the regret performance over the standard TS. These conceptual claims are validated for practical implementations of scalable TS on synthetic benchmarks and as part of a real-world high-throughput molecular design task.
accept
I am ultimately recommending acceptance for this paper on the basis of the reviewer consensus that the theoretical contributions made in this paper are strong. Namely, this is the first paper I am aware of to (1) present a regret bound for Bayesian optimization with sparse GP models using any acquisition function at all, and (2) moreover, the theoretical results are specifically for a recent scalable approximation of Thompson sampling using sparse GPs, noting that Thompson sampling is one of those rare GP operations not made more computationally tractable by simply using sparse GPs alone. However, I want to echo Reviewer eTmV's concerns about the choice of \alpha_{t}=1 in the experiments. Keep in mind, the actual acquisition function that you've demonstrated convergence for is *not* Thompson sampling as we would all understand it, or even an approximation thereof: it's a modification to Thompson sampling in which the (co-)variance envelope from which samples are drawn shrinks over time. This is fine, and indeed there is obvious precedent for such a modification for theoretical purposes, because, as you point out, people who use UCB don't actually typically anneal the trade-off parameter. That said, Srinivas et al. absolutely did evaluate their mechanism with the annealed trade-off parameter: they did not evaluate a constant \beta_{t} but rather scaled it down by a factor of five, leaving a schedule for \beta_{t} that still annealed over time. I don't agree that setting \alpha_{t}=1 is at all analogous to their experimental evaluation. While I completely understand that in practice, people will simply just use Thompson sampling (which has no parameter \alpha_{t}), there's frankly just no good justification for not at least presenting some results using an \alpha_{t} annealing schedule within a constant factor of the one suggested by your theory. You should include these results, if not in the main body of the paper then in the supplementary materials.
train
[ "uz1sqWi6YGJ", "Dvxv8fu1yjS", "VnZunH2C8Ls", "Pi--wuQH8N", "0z6SYBM9ug5", "-0zqB5P3_E", "iEGWqfMwjU7", "2tacer_RkOn", "dzpWQpuAZJw", "BjwpLuw-Wgw" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response! My score remains unchanged.", " Thank you for the predominantly positive review, stressing the novelty and importance of our theoretical work and the clarity of our exposition. We now address your comments about our experimental study.\n\n*“The empirical study is rather scarce.”*\n\...
[ -1, -1, -1, -1, -1, -1, 4, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 5, 1, 2, 2 ]
[ "Dvxv8fu1yjS", "BjwpLuw-Wgw", "dzpWQpuAZJw", "iEGWqfMwjU7", "2tacer_RkOn", "nips_2021_A3TwMRCqWUn", "nips_2021_A3TwMRCqWUn", "nips_2021_A3TwMRCqWUn", "nips_2021_A3TwMRCqWUn", "nips_2021_A3TwMRCqWUn" ]
nips_2021_wGmOLwb8ClT
Robust Counterfactual Explanations on Graph Neural Networks
Massive deployment of Graph Neural Networks (GNNs) in high-stake applications generates a strong demand for explanations that are robust to noise and align well with human intuition. Most existing methods generate explanations by identifying a subgraph of an input graph that has a strong correlation with the prediction. These explanations are not robust to noise because independently optimizing the correlation for a single input can easily overfit noise. Moreover, they are not counterfactual because removing an identified subgraph from an input graph does not necessarily change the prediction result. In this paper, we propose a novel method to generate robust counterfactual explanations on GNNs by explicitly modelling the common decision logic of GNNs on similar input graphs. Our explanations are naturally robust to noise because they are produced from the common decision boundaries of a GNN that govern the predictions of many similar input graphs. The explanations are also counterfactual because removing the set of edges identified by an explanation from the input graph changes the prediction significantly. Exhaustive experiments on many public datasets demonstrate the superior performance of our method.
accept
This paper presents a new method to produce robust counterfactual explanations for the predictions of GNNs. However, reviewers raised several concerns about the method, the experiments, and the writing. Therefore, the paper cannot be accepted in the current form.
train
[ "u1Fj4rgkvvl", "qQlTI8u624W", "9OprUpwOipA", "0jjYNGzhgzB", "8YlDGi0Bbrt", "96rgtwiM8Or", "tEXHKsQw0c1", "PEofb80aTe9", "avis1Em2VYO", "6ZB0hoQSY-G" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the reply. Below is our response.\n \n**Q9: Re Q7.4: In my opinion, the most severe problem with this paper is the relatively handwavy discussion around the concept of robustness. I would have expected a clear definition of robustness and a clear definition of how a method can be define...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "qQlTI8u624W", "0jjYNGzhgzB", "avis1Em2VYO", "avis1Em2VYO", "avis1Em2VYO", "PEofb80aTe9", "6ZB0hoQSY-G", "nips_2021_wGmOLwb8ClT", "nips_2021_wGmOLwb8ClT", "nips_2021_wGmOLwb8ClT" ]
nips_2021_aedFIIRRfXr
Similarity and Matching of Neural Network Representations
We employ a toolset --- dubbed Dr. Frankenstein --- to analyse the similarity of representations in deep neural networks. With this toolset we aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer. We demonstrate that the inner representations emerging in deep convolutional neural networks with the same architecture but different initialisations can be matched with a surprisingly high degree of accuracy even with a single, affine stitching layer. We choose the stitching layer from several possible classes of linear transformations and investigate their performance and properties. The task of matching representations is closely related to notions of similarity. Using this toolset we also provide a novel viewpoint on the current line of research regarding similarity indices of neural network representations: the perspective of the performance on a task.
accept
This paper examines performance when an affine layer is used to “stitch” together layers from two neural networks, so that the hidden representations of one network are used as inputs to an intermediate layer of the other. Reviewers agreed that study of performance achievable via matching with either least squares or task loss was an important contribution. However, several reviewers expressed concern regarding the framing of the work, and in particular, the lack of clarity regarding the relationship between representational similarity methods and the proposed framework. At the urging of Reviewer j6kV, the authors proposed to change the introduction to describe the relationship between representational and functional similarity and to motivate the work from this perspective. Reviewers also raised several more technical questions. The authors provided a summary of the revisions they plan to make, and after reviewing this summary, all reviewers have recommended acceptance. The AC agrees that the paper, with the proposed revisions, provides a valuable perspective on representational and functional properties of neural networks.
train
[ "xXpSoTEF3lw", "DIr1393wX7P", "gIFx6pEFglA", "Jr_TTl1D_8z", "HEGyAKMJ0S6", "m-hOxbGMGsv", "mAWoJY8lagN", "4yyLzTNDgve", "WO3HLP19Dxm", "Iz7C3_Rbu7T", "ayUPSibDAh_", "QB8QhYIf1Bh", "OY-SrJAYmA9", "UZT2aSdjKvd", "vdorQ_8-aJl" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Most of previous concerns can be addressed by the responses from authors. Therefore, I would like to increase the score from 5 to 6. In the future version, it will be great to add more explanations on the terminology used and also the relations to CKA methods. ", "**EDIT: I have increased my score from 4 to 6 b...
[ -1, 6, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, 4, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "ayUPSibDAh_", "nips_2021_aedFIIRRfXr", "Iz7C3_Rbu7T", "nips_2021_aedFIIRRfXr", "nips_2021_aedFIIRRfXr", "mAWoJY8lagN", "nips_2021_aedFIIRRfXr", "OY-SrJAYmA9", "UZT2aSdjKvd", "DIr1393wX7P", "vdorQ_8-aJl", "HEGyAKMJ0S6", "Jr_TTl1D_8z", "HEGyAKMJ0S6", "nips_2021_aedFIIRRfXr" ]
nips_2021_FHQBDiMwvK
DOCTOR: A Simple Method for Detecting Misclassification Errors
Deep neural networks (DNNs) have been shown to perform very well on large-scale object recognition problems, leading to widespread use in real-world applications, including situations where DNNs are implemented as “black boxes”. A promising approach to secure their use is to accept decisions that are likely to be correct while discarding the others. In this work, we propose DOCTOR, a simple method that aims to identify whether the prediction of a DNN classifier should (or should not) be trusted so that, consequently, it would be possible to accept it or to reject it. Two scenarios are investigated: Totally Black Box (TBB), where only the soft-predictions are available, and Partially Black Box (PBB), where gradient-propagation to perform input pre-processing is allowed. Empirically, we show that DOCTOR outperforms all state-of-the-art methods on various well-known image and sentiment analysis datasets. In particular, we observe a reduction of up to 4% of the false rejection rate (FRR) in the PBB scenario. DOCTOR can be applied to any pre-trained model, it does not require prior information about the underlying dataset, and it is as simple as the simplest available methods in the literature.
accept
This submission suggests a new method for detecting misclassification errors in deep classifiers, and the reviewers seem to be in agreement that the submission has sufficient novelty and potential impact for an acceptance. Given the detailed new experimental comparisons with other methods, I suggest that the paper is accepted for a spotlight presentation at NeurIPS.
train
[ "0CNaRJn7-1e", "Jx8Kt7dFkti", "y2iFGuzcvz", "lniNY_4L14", "QP4Ff2iqXF", "OY9_8ZRqJjj", "3VUDXnc1hn", "fU68ysX9TVu", "iT2FaeOmBDm" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " We greatly appreciate your careful reading of our rebuttal and comments which helped us to improve the quality of our paper. Thank you for updating consequently your score. Best regards, the authors.\n", " We greatly appreciate your careful reading of our rebuttal and your strong support to our work and comm...
[ -1, -1, 7, -1, 6, -1, -1, -1, 6 ]
[ -1, -1, 3, -1, 3, -1, -1, -1, 4 ]
[ "QP4Ff2iqXF", "lniNY_4L14", "nips_2021_FHQBDiMwvK", "3VUDXnc1hn", "nips_2021_FHQBDiMwvK", "QP4Ff2iqXF", "y2iFGuzcvz", "iT2FaeOmBDm", "nips_2021_FHQBDiMwvK" ]
nips_2021_iLn-bhP-kKH
Contrastive Laplacian Eigenmaps
Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. In this paper, we extend the celebrated Laplacian Eigenmaps with contrastive learning, and call them COntrastive Laplacian EigenmapS (COLES). Starting from a GAN-inspired contrastive formulation, we show that the Jensen-Shannon divergence underlying many contrastive graph embedding models fails under disjoint positive and negative distributions, which may naturally emerge during sampling in the contrastive setting. In contrast, we demonstrate analytically that COLES essentially minimizes a surrogate of Wasserstein distance, which is known to cope well under disjoint distributions. Moreover, we show that the loss of COLES belongs to the family of so-called block-contrastive losses, previously shown to be superior compared to pair-wise losses typically used by contrastive methods. We show on popular benchmarks/backbones that COLES offers favourable accuracy/scalability compared to DeepWalk, GCN, Graph2Gauss, DGI and GRACE baselines.
accept
This paper had very split reviews and required some thought when coming to a recommendation. The authors provided very thorough responses to each reviewer's critiques. While the two positive reviewers responded saying they were satisfied with the author responses, neither of the negative reviewers responded to the author feedback. One negative reviewer's only objection was a lack of comparison against You et al. 2020. The authors provided such a comparison in the response, showing good results for their method relative to the baseline. The other reviewer had more detailed critiques, but from my estimation of the authors' response, those critiques were well rebutted. Unfortunately I do not know if the reviewer themself thought the critiques were answered well. I am inclined to recommend acceptance for this paper, on the basis of the very thorough responses which I believe answered most of the reviewer critiques, and the failure of the reviewers themselves to answer the authors.
test
[ "7ozJAkll6sA", "V5Kgz9zTxMz", "F4MWmTsYPZh", "HJ_fo2TIAQq", "Lb1GvPaRHY6", "-w7zHEnCezR", "nbRAU35qBHx", "u4ERY-55LqK", "bP5_4sUXEpw", "iYFf1-dtFy", "K1CuXTHZQLM", "b3nKLZ4PM-a" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " \nWe thank the reviewer for the valuable comments and pointing to us a very interesting related work. Below we answer all questions.\n\n## 1. More detail about F in Eq. (6).\n\nKindly note that we have given examples of $\\mathbf{F}$ in **lines 112--114** but we will of course expand on these. By-and-large, $\\ma...
[ -1, 6, -1, -1, -1, -1, 7, -1, -1, -1, 6, 5 ]
[ -1, 3, -1, -1, -1, -1, 4, -1, -1, -1, 5, 5 ]
[ "b3nKLZ4PM-a", "nips_2021_iLn-bhP-kKH", "bP5_4sUXEpw", "V5Kgz9zTxMz", "b3nKLZ4PM-a", "nips_2021_iLn-bhP-kKH", "nips_2021_iLn-bhP-kKH", "K1CuXTHZQLM", "V5Kgz9zTxMz", "nbRAU35qBHx", "nips_2021_iLn-bhP-kKH", "nips_2021_iLn-bhP-kKH" ]
nips_2021_ntAkYRaIfox
Machine learning structure preserving brackets for forecasting irreversible processes
Forecasting of time-series data requires imposition of inductive biases to obtain predictive extrapolation, and recent works have imposed Hamiltonian/Lagrangian form to preserve structure for systems with \emph{reversible} dynamics. In this work we present a novel parameterization of dissipative brackets from metriplectic dynamical systems appropriate for learning \emph{irreversible} dynamics with unknown a priori model form. The process learns generalized Casimirs for energy and entropy guaranteed to be conserved and nondecreasing, respectively. Furthermore, for the case of added thermal noise, we guarantee exact preservation of a fluctuation-dissipation theorem, ensuring thermodynamic consistency. We provide benchmarks for dissipative systems demonstrating learned dynamics are more robust and generalize better than either "black-box" or penalty-based approaches.
accept
This is a very good paper and a clear accept. All reviewers agree that there is a novel methodological contribution that extends neural ODE approaches to irreversible dynamical systems. The parametrization of brackets cleverly preserve structures of the dynamical system, leading to exact preservation of thermodynamical consistency and better generalization. The numerical experiments are good, but can be further improved. The paper is written nicely and the main ideas are explained well.
train
[ "Wi1jL1Mj0JG", "EwTb5wPX1uR", "vz-1a4pCag", "DxwrKOhVdb-", "2XZbi8QvHAG", "5fdKkiqJceh", "Vwlk_PKgX_y", "zeh4KQ1dsgS", "2_atNlgqNJv", "VEmx82TZT1", "d4eExylMV5j" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper introduces a new framework for learning neural ODEs obeying metriplectic dynamics. These differ from previous work on Hamiltonian dynamics in that it includes both conservative and dissipative terms guaranteeing that system entropy is non-decreasing. Dissapative dynamics are extremely common in real-worl...
[ 6, -1, -1, -1, -1, 7, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "nips_2021_ntAkYRaIfox", "2XZbi8QvHAG", "zeh4KQ1dsgS", "vz-1a4pCag", "Vwlk_PKgX_y", "nips_2021_ntAkYRaIfox", "2_atNlgqNJv", "5fdKkiqJceh", "Wi1jL1Mj0JG", "d4eExylMV5j", "nips_2021_ntAkYRaIfox" ]
nips_2021_XGSQfOVxVp4
On the Variance of the Fisher Information for Deep Learning
In the realm of deep learning, the Fisher information matrix (FIM) gives novel insights and useful tools to characterize the loss landscape, perform second-order optimization, and build geometric learning theories. The exact FIM is either unavailable in closed form or too expensive to compute. In practice, it is almost always estimated based on empirical samples. We investigate two such estimators based on two equivalent representations of the FIM --- both unbiased and consistent. Their estimation quality is naturally gauged by their variance given in closed form. We analyze how the parametric structure of a deep neural network can affect the variance. The meaning of this variance measure and its upper bounds are then discussed in the context of deep learning.
accept
The reviewers agreed that this is a solid theoretical paper that makes a real contribution. The paper is well-written relative to the complexity of what it's presenting.
train
[ "-63G53bl5-z", "Q0udI5TVCIq", "PCoQKWiKLhc", "BDgQNiIVCU9", "6nWCvW1rbp", "OwCxx5aNxbe", "caNfASmyG_N", "8m2kj30Eawx" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Update: I thank the authors for their response, and I will keep the score as is. It would be nice to have a simple empirical analysis in the final supplemental material.\n\n\nThe paper discusses the covariance of two FIM estimators, which is useful in deep learning where the exact FIM over the full dataset can be ...
[ 7, 8, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, 2 ]
[ "nips_2021_XGSQfOVxVp4", "nips_2021_XGSQfOVxVp4", "6nWCvW1rbp", "8m2kj30Eawx", "Q0udI5TVCIq", "-63G53bl5-z", "nips_2021_XGSQfOVxVp4", "nips_2021_XGSQfOVxVp4" ]
nips_2021_mIki_kyHpLb
A$^2$-Net: Learning Attribute-Aware Hash Codes for Large-Scale Fine-Grained Image Retrieval
Xiu-Shen Wei, Yang Shen, Xuhao Sun, Han-Jia Ye, Jian Yang
accept
Thanks for your submission to NeurIPS. The reviewers were all in agreement that this is a solid paper that deserves to be accepted. Overall, the reviewers praised the importance of the problem, the proposed solution, and the empirical results. After the rebuttal and discussion, all advocated for accepting the paper. Despite being overall very positive about the paper, the reviewers did note a few weaknesses; please keep these in mind when preparing a final version of the manuscript.
train
[ "yHMdGcdodTd", "f00TAcb3T5l", "VpyamVNUN9Z", "QkhmFLNKMkX", "SXMUcS-dKbr", "nN60QRDVV86", "eG9aRxEjIip", "kHAymi_Y0Bi" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents an attribute-aware hashing network for generating attribute-aware hash codes. The proposed method includes two keys components: (1) visual feature extraction with attention, (2) learning attribute-specific features with autoencoder. \n\nThe authors conducted experiments on many datasets, and it...
[ 6, 7, -1, -1, -1, -1, 9, 7 ]
[ 4, 3, -1, -1, -1, -1, 5, 5 ]
[ "nips_2021_mIki_kyHpLb", "nips_2021_mIki_kyHpLb", "yHMdGcdodTd", "kHAymi_Y0Bi", "f00TAcb3T5l", "eG9aRxEjIip", "nips_2021_mIki_kyHpLb", "nips_2021_mIki_kyHpLb" ]
nips_2021_ui4xChWcA4R
Shape Registration in the Time of Transformers
Giovanni Trappolini, Luca Cosmo, Luca Moschella, Riccardo Marin, Simone Melzi, Emanuele Rodolà
accept
All reviewers agree that the idea of using a transformer-type architecture for (non-rigid) 3D point cloud registration is interesting and novel. While reviewers have raised questions or pointed out concerns about certain issues (e.g., being derivative), I am of the opinion that the authors sufficiently addressed those points in their answers (despite little or no serious discussion happening). I do, however, encourage the authors to follow the reviewer's advice on improving the writing style in some places and including all required clarifications as promised in the rebuttal.
train
[ "shcf3Lutjxk", "roXuH-jNRr", "-_hSPjZmrac", "AJSf4_XtM7a", "YxZGtZ98l1", "BA3LXsFbeKc", "rodpPsM13rg", "YETvlApWCE" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a Transformer-based point cloud registration scheme. Given two point clouds, the target and an initial latent feature go through the encoder of a Transformer-like network, and the resulting latent vector is combined with the source and goes through the decoder to yield a transformed point cloud...
[ 7, -1, -1, -1, -1, 6, 7, 6 ]
[ 4, -1, -1, -1, -1, 1, 4, 4 ]
[ "nips_2021_ui4xChWcA4R", "shcf3Lutjxk", "YETvlApWCE", "rodpPsM13rg", "BA3LXsFbeKc", "nips_2021_ui4xChWcA4R", "nips_2021_ui4xChWcA4R", "nips_2021_ui4xChWcA4R" ]
nips_2021_c1p817YZAx6
Brick-by-Brick: Combinatorial Construction with Deep Reinforcement Learning
Discovering a solution in a combinatorial space is prevalent in many real-world problems but it is also challenging due to diverse complex constraints and the vast number of possible combinations. To address such a problem, we introduce a novel formulation, combinatorial construction, which requires a building agent to assemble unit primitives (i.e., LEGO bricks) sequentially -- every connection between two bricks must follow a fixed rule, while no bricks mutually overlap. To construct a target object, we provide incomplete knowledge about the desired target (i.e., 2D images) instead of exact and explicit volumetric information to the agent. This problem requires a comprehensive understanding of partial information and long-term planning to append a brick sequentially, which leads us to employ reinforcement learning. The approach has to consider a variable-sized action space where a large number of invalid actions, which would cause overlap between bricks, exist. To resolve these issues, our model, dubbed Brick-by-Brick, adopts an action validity prediction network that efficiently filters invalid actions for an actor-critic network. We demonstrate that the proposed method successfully learns to construct an unseen object conditioned on a single image or multiple views of a target object.
accept
The paper addresses the problem of constructing a 3D object with LEGO bricks, in an action space of placement locations for a new block, purely from visual observation at test time, trained with reinforcement learning to match the volume of the target object. The main positives of this paper are that this is an interesting setup/problem that has not been addressed before as far as we can tell (LEGO construction has been studied, but not purely from visual input trained with RL); the authors propose an interesting solution combining a graph neural network with an action validity network, and succeed at building basic shapes. The main drawback is that while they do succeed in constructing 3D shapes (like an airplane), the quality of the constructed objects is not very high and the objects are not that complex (e.g., they do not contain many blocks). However, as this has not been done before, it is hard to know what to expect, i.e., how difficult this problem really is. Because of this, and because the setup is novel and this topic is quite uncommon yet interesting, important, and deserving of further study, I think the paper should be accepted. For the authors I primarily recommend pushing the performance: getting larger, more diverse 3D shapes constructed at a higher level of quality. A few things were also unclear in the discussion; I recommend the authors explain them with better writing.
train
[ "jQ8_9Dq1JlM", "vTLojlV1_s0", "GhWQAbJuhf", "VPo6n6QbtQq", "To_JN-S5MfF", "l7-ukxI29ps", "s4rzRp_EmDZ", "_zROlgCH9z6", "veOBT0VgLMV", "LpEUll1qwlN", "I2EPwH67jOt", "u1iAH8tP7jW", "bGE4nNlZ_Wq", "e7JvrRkFKF6", "gpQQoEc4IMS", "gYbahc0JlDW", "D84-Dtg4eh", "hh9ZWMzN15L" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear AC,\n\nWe did not use voxel logging, because voxel logging would mask out only a tiny fraction of invalid actions. Precisely speaking, voxel logging suffers from a difficulty in handling many invalid actions including the actions that assemble a brick in the position where the voxels are empty. As mentioned ...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "LpEUll1qwlN", "nips_2021_c1p817YZAx6", "nips_2021_c1p817YZAx6", "veOBT0VgLMV", "LpEUll1qwlN", "I2EPwH67jOt", "_zROlgCH9z6", "u1iAH8tP7jW", "LpEUll1qwlN", "u1iAH8tP7jW", "nips_2021_c1p817YZAx6", "hh9ZWMzN15L", "GhWQAbJuhf", "D84-Dtg4eh", "gYbahc0JlDW", "nips_2021_c1p817YZAx6", "nips_...
nips_2021_N51zJ7F3mw
Dissecting the Diffusion Process in Linear Graph Convolutional Networks
Graph Convolutional Networks (GCNs) have attracted increasing attention in recent years. A typical GCN layer consists of a linear feature propagation step and a nonlinear transformation step. Recent works show that a linear GCN can achieve comparable performance to the original non-linear GCN while being much more computationally efficient. In this paper, we dissect the feature propagation steps of linear GCNs from a perspective of continuous graph diffusion, and analyze why linear GCNs fail to benefit from more propagation steps. Following that, we propose Decoupled Graph Convolution (DGC), which decouples the terminal time and the feature propagation steps, making it more flexible and capable of exploiting a very large number of feature propagation steps. Experiments demonstrate that our proposed DGC improves linear GCNs by a large margin and makes them competitive with many modern variants of non-linear GCNs.
accept
This is half an understanding paper and half a methodology paper, studying why existing linear GCNs cannot benefit from more feature propagation steps, and proposing *decoupled graph convolution* to make it possible for linear GCNs to benefit from a very large number of feature propagation steps. The writing is clear, the motivation is strong, the idea is novel, and the results are significant. Thus, I think it should be accepted for publication.
train
[ "3LxViv6t_mM", "LeCitDScTJQ", "27qGmicf8qD", "agD8qqUA-Ju", "bnbT9WIWwz7", "lAg2YiIK3iH", "s_v2EaJYrmi", "1I1i4FRcny", "f6U5jNcjSy6", "EdriHSH_8Oa", "hdbPhWUlvdt", "dWEBRjj93KD" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for appreciating our explanations. We will add the discussions in the revision as you suggested. Also, thanks for your interesting paper list and we will have a look.", "Along the line of linear GCNs, this paper dissects the feature propagation steps via continuous graph diffusion. With theoretical analy...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "27qGmicf8qD", "nips_2021_N51zJ7F3mw", "agD8qqUA-Ju", "bnbT9WIWwz7", "lAg2YiIK3iH", "LeCitDScTJQ", "dWEBRjj93KD", "hdbPhWUlvdt", "EdriHSH_8Oa", "nips_2021_N51zJ7F3mw", "nips_2021_N51zJ7F3mw", "nips_2021_N51zJ7F3mw" ]
nips_2021_gnAIV-EKw2
Dynamic Grained Encoder for Vision Transformers
Transformers, the de-facto standard for language modeling, have been recently applied for vision tasks. This paper introduces sparse queries for vision transformers to exploit the intrinsic spatial redundancy of natural images and save computational costs. Specifically, we propose a Dynamic Grained Encoder for vision transformers, which can adaptively assign a suitable number of queries to each spatial region. Thus it achieves a fine-grained representation in discriminative regions while keeping high efficiency. Besides, the dynamic grained encoder is compatible with most vision transformer frameworks. Without bells and whistles, our encoder allows the state-of-the-art vision transformers to reduce computational complexity by 40%-60% while maintaining comparable performance on image classification. Extensive experiments on object detection and segmentation further demonstrate the generalizability of our approach. Code is available at https://github.com/StevenGrove/vtpack.
accept
The paper introduces a dynamic vision transformer model that reduces spatial redundancy of image features. Three reviewers recommend acceptance, highlighting that the paper is well-written, the idea is interesting, and the experiments are rigorous, demonstrating the effectiveness of the method in different tasks (image classification, object detection, and semantic segmentation). One reviewer considers that the paper does not pass the acceptance threshold, due to unclear actual runtime gains when images have low-resolution, a concern shared by other reviewers as well. In the rebuttal, the authors demonstrated that a significant advantage in throughput on GPUs can be obtained through their implementation of an optimized CUDA kernel for batched sparse matrix multiplication. The AC finds this response convincing, and agrees with the majority that the paper passes the acceptance bar of NeurIPS. The authors are encouraged to add the discussion in the rebuttal to the final version of the paper.
train
[ "ma6N7qYuBRj", "sdygffwBTHY", "vOHza8jgt_S", "eiZIDvZHH4R", "w_0M6DfC6V", "ohTk8mX6H2z", "Ib5hbC9_qyq", "C9svdYV73PQ", "XpEqk3JMTex", "48KzN9tOwPN", "tyedgCYZD80", "ZBwWYzX51mI", "4_aaEqA8hv" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We greatly appreciate the reviewers' valuable comments. The efficiency of our DGE modules on GPUs mainly relies on the throughput of sparse matrix multiplication, which is dependent on hardware architecture and code optimization. For example, when using the cuSPARSELt library on the latest Nvidia Tesla A100 GPU, ...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 2, 5, 5 ]
[ "w_0M6DfC6V", "nips_2021_gnAIV-EKw2", "XpEqk3JMTex", "nips_2021_gnAIV-EKw2", "C9svdYV73PQ", "nips_2021_gnAIV-EKw2", "4_aaEqA8hv", "eiZIDvZHH4R", "ZBwWYzX51mI", "tyedgCYZD80", "nips_2021_gnAIV-EKw2", "nips_2021_gnAIV-EKw2", "nips_2021_gnAIV-EKw2" ]
nips_2021_pZ5X_svdPQ
Understanding Negative Samples in Instance Discriminative Self-supervised Representation Learning
Instance discriminative self-supervised representation learning has attracted attention thanks to its unsupervised nature and informative feature representations for downstream tasks. In practice, it commonly uses a larger number of negative samples than the number of supervised classes. However, there is an inconsistency in the existing analysis; theoretically, a large number of negative samples degrade classification performance on a downstream supervised task, while empirically, they improve the performance. We provide a novel framework to analyze this empirical result regarding negative samples using the coupon collector's problem. Our bound can implicitly incorporate the supervised loss of the downstream task in the self-supervised loss by increasing the number of negative samples. We confirm that our proposed analysis holds on real-world benchmark datasets.
accept
This paper aims to understand a discrepancy between the theory literature (namely, Arora et al., 2019) and recent empirical results in self-supervised learning. Theory suggests that adding more negatives to contrastive learning should degrade downstream classification performance while empirical results have shown the contrary. To rectify this discrepancy, the authors expand prior analysis to introduce a new bound that is tighter in settings with large numbers of negatives. Reviewers were conflicted about this paper. They all agreed that the problem is interesting and impactful, that the paper was clearly written, and that the contribution helps to resolve the conflict between theory and practice. There were also a number of technical concerns which were largely resolved through the discussion. However, several reviewers were concerned regarding the applicability of the theory to transfer settings in which the pre-training and downstream datasets differ since the current work only focuses on the case where these two datasets match. While I am sympathetic to this concern, I disagree that this addition is necessary for the paper to have impact or be a worthwhile contribution. Expanding the analysis to transfer settings is non-trivial and there is substantial value in addressing the case which is indeed de facto practice for many research papers on contrastive learning. I would encourage the authors, however, to consider the transfer setting for future work.
train
[ "mj12B3feNyr", "uQhTcj6VN3h", "bWxncVN58Xc", "D6XIEoEORv", "1_DdnwcQh4b", "VmYvQgsqrUi", "osnMvt4hiMd", "_CH9vggHQ1K", "6nsZieDiETC", "QDEnpr0ETdD", "fcNnTSkM97y", "FAuEu2-l660", "A3IX-0a2hB", "LWfITX-GIEq", "Ba0FDsip8hF", "ZQOXzAqol6J" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author" ]
[ " Dear Reviewer QhKs,\n\nIf the reviewer still recognizes the raised points as concerns, we would like to clarify them since this NeurIPS discussion is the rolling style and we believe that we have already addressed all concerns. \n\nWe provide additional experimental results to address one of the concerns, C1, fro...
[ -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1 ]
[ -1, -1, 1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1 ]
[ "fcNnTSkM97y", "nips_2021_pZ5X_svdPQ", "nips_2021_pZ5X_svdPQ", "osnMvt4hiMd", "A3IX-0a2hB", "nips_2021_pZ5X_svdPQ", "LWfITX-GIEq", "bWxncVN58Xc", "bWxncVN58Xc", "VmYvQgsqrUi", "bWxncVN58Xc", "A3IX-0a2hB", "nips_2021_pZ5X_svdPQ", "VmYvQgsqrUi", "A3IX-0a2hB", "A3IX-0a2hB" ]
nips_2021_DKRcikndMGC
On UMAP's True Loss Function
Sebastian Damrich, Fred A. Hamprecht
accept
Visualization with UMAP has become standard practice in multiple fields in recent years, originally presented as being derived from topological construction as an alternative to the popular tSNE visualization. However, over the past few years several studies have investigated its relation with tSNE (and other visualizations) and raised questions regarding the matching (or rather discrepancy) between its theoretical derivation and the implementation used in practice. This work provides an interesting in depth analysis on this topic, constructively providing a closed-form formulation of the loss function that is implemented by the UMAP algorithm (which is different from the loss it was purportedly supposed to optimize). In doing so, it provides concise insights into understanding of previous observations, and more generally the behavior of UMAP. After discussion, the reviewers unanimously agree the paper should be accepted. Most concerns that have been raised by initial reviews have been addressed in the responses (authors: please follow up on these in revision of the manuscript itself). Therefore, I recommend accepting the paper and I am positively certain it will be of great interest to a significant portion of the NeurIPS community.
test
[ "v0d-eb9yIg0", "KLg8yD0QMMv", "MbuNt3s32Xd", "XuKleW_5iYb", "oSPfcXoun3Z", "hBx4PKQfOgh", "6E50gpBx52X", "dFkr1PdpNt1", "zDIzEn4cAi5", "BrXPq-xTEc", "LHpz7KCE5Y", "-2bfWT7kDvm" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We now understand your suggestion better, thank you for clarifying! Consequently, we include three additional metrics that measure the embedding quality in the table below: \n1. the purported UMAP loss, \n2. the effective UMAP loss as predicted by our paper,\n3. the KL divergence. \n\nFor all three, we use the h...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "MbuNt3s32Xd", "BrXPq-xTEc", "KLg8yD0QMMv", "nips_2021_DKRcikndMGC", "hBx4PKQfOgh", "XuKleW_5iYb", "-2bfWT7kDvm", "LHpz7KCE5Y", "nips_2021_DKRcikndMGC", "nips_2021_DKRcikndMGC", "nips_2021_DKRcikndMGC", "nips_2021_DKRcikndMGC" ]
nips_2021_cD2Ls4qXTc
Fast Pure Exploration via Frank-Wolfe
We study the problem of active pure exploration with fixed confidence in generic stochastic bandit environments. The goal of the learner is to answer a query about the environment with a given level of certainty while minimizing her sampling budget. For this problem, instance-specific lower bounds on the expected sample complexity reveal the optimal proportions of arm draws an Oracle algorithm would apply. These proportions solve an optimization problem whose tractability strongly depends on the structural properties of the environment, but may be instrumental in the design of efficient learning algorithms. We devise Frank-Wolfe-based Sampling (FWS), a simple algorithm whose sample complexity matches the lower bounds for a wide class of pure exploration problems. The algorithm is computationally efficient as, to learn and track the optimal proportion of arm draws, it relies on a single iteration of the Frank-Wolfe algorithm applied to the lower-bound optimization problem. We apply FWS to various pure exploration tasks, including best arm identification in unstructured, thresholded, linear, and Lipschitz bandits. Despite its simplicity, FWS is competitive compared to state-of-the-art algorithms.
accept
The paper makes a definitive contribution to the active exploration MAB literature. The core idea is not completely new, but the modifications and variations proposed in the paper are meaningful advances relative to state-of-the-art best arm identification algorithms. The paper is well written. The reviewers are all in favor of accepting; the dialogue with the authors was informative and honest, with several issues still left open but for well-justified reasons (e.g., computational complexity considerations). I would make this a clear accept but not a top paper because the contribution is relatively technical in nature relative to antecedent work, and because of some of the documented computational limitations. Thanks to the reviewers and authors for well-done work and the fruitful exchange.
train
[ "8T7NAPj0QXT", "lczenfd3Ap2", "l3fk8yTXE0p", "7ypboFf-45", "fUvSBN1uN5d", "1lrszMxpom", "nn5uLiTemz4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper studies pure exploration in generic stochastic bandit problems under fixed confidence budgets. At the core of this approach is relying on a single iteration of frank-wolf (FW) to solve a non-smooth problem. The single iteration of FW reduces the computational burden, but is challenged by the non-smoothn...
[ 7, -1, 8, -1, -1, -1, 7 ]
[ 4, -1, 4, -1, -1, -1, 4 ]
[ "nips_2021_cD2Ls4qXTc", "fUvSBN1uN5d", "nips_2021_cD2Ls4qXTc", "nn5uLiTemz4", "8T7NAPj0QXT", "l3fk8yTXE0p", "nips_2021_cD2Ls4qXTc" ]
nips_2021_UQsbDkuGM0N
iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder
Shifeng Zhang, Ning Kang, Tom Ryder, Zhenguo Li
accept
The main concerns by the reviewers were adequately addressed during the rebuttal and subsequent discussion between the authors and reviewers. The reviewers unanimously vote for accepting this paper. The metareviewer sincerely thanks the authors and reviewers for engaging into fruitful discussions.
val
[ "zKiSf7Ay2W", "l4i3oEAT-y9", "k7cAHstGwkI", "Re7bzIswO4V", "AZhyFLs4gH7", "R0tNXTktJ3s", "S6WABRfSmn8", "fZBStTjT7ls", "_zkzQFLTxuI", "lQp4yHAs6gi", "5LJhkRvSUuv", "rq7WzmO8AH", "Zs_ahmdTPIu", "fmgh4LQTqgI", "AIE3V-g9mdw", "Zqp9VmzvTwe", "MywKJa6hZOR", "F81JtsUpTIX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "o...
[ "This paper proposes iFlow - a method to improve lossless compression using flow-based generative models. Specifically, the paper proposes two novel methods, the Modular Scale Transform (MST) and Uniform Base Conversion Systems (UBCS). MST is a method of obtaining a numerically invertible scale transform, to be use...
[ 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 9, 6 ]
[ 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2 ]
[ "nips_2021_UQsbDkuGM0N", "AIE3V-g9mdw", "nips_2021_UQsbDkuGM0N", "rq7WzmO8AH", "5LJhkRvSUuv", "S6WABRfSmn8", "fZBStTjT7ls", "lQp4yHAs6gi", "Zs_ahmdTPIu", "fmgh4LQTqgI", "MywKJa6hZOR", "Zqp9VmzvTwe", "F81JtsUpTIX", "k7cAHstGwkI", "zKiSf7Ay2W", "nips_2021_UQsbDkuGM0N", "nips_2021_UQsbD...
nips_2021_SQxuiYf2TT
History Aware Multimodal Transformer for Vision-and-Language Navigation
Vision-and-language navigation (VLN) aims to build autonomous visual agents that follow instructions and navigate in real scenes. To remember previously visited locations and actions taken, most approaches to VLN implement memory using recurrent states. Instead, we introduce a History Aware Multimodal Transformer (HAMT) to incorporate a long-horizon history into multimodal decision making. HAMT efficiently encodes all the past panoramic observations via a hierarchical vision transformer (ViT), which first encodes individual images with ViT, then models spatial relation between images in a panoramic observation and finally takes into account temporal relation between panoramas in the history. It then jointly combines text, history and current observation to predict the next action. We first train HAMT end-to-end using several proxy tasks including single step action prediction and spatial relation prediction, and then use reinforcement learning to further improve the navigation policy. HAMT achieves new state of the art on a broad range of VLN tasks, including VLN with fine-grained instructions (R2R, RxR), high-level instructions (R2R-Last, REVERIE), dialogs (CVDN) as well as long-horizon VLN (R4R, R2R-Back). We demonstrate HAMT to be particularly effective for navigation tasks with longer trajectories.
accept
This paper presents an effective history-aware multimodal transformer for VLN tasks that captures long-horizon history and spatial relationships well. The method efficiently encodes all past panoramic observations using a hierarchical vision transformer, with promising performance on a variety of datasets, and achieves SOTA on R2R. The main concerns raised by reviewers are the unclear writing and the relatively small gain from the new history-aware scheme compared to pretraining models. The authors later provided extensive experiments demonstrating the generalization capability of their modules and large improvements on more downstream tasks and more diverse datasets. These experiments validate the contribution of the proposed transformer well. Considering the positive comments from three reviewers and that the responses resolve most of the issues raised by reviewer 7aBY, the AC recommends accepting this paper.
train
[ "FMpXkzI7kM", "FCxA0z7Jh8X", "EVwmxFHSoX", "wJrs1hQI7vG", "WX4cxiOXDZL", "BsHybHPErzf", "4uyRLmcI9js", "fT5IId7F0h", "CZnMdw4M0E", "zMSXnWpEWww", "gytHFAGtmn4" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for additional comments. \n\nTo disentangle the contributions of pretraining and history encoding, we carried out additional experiments (see Q2 for more details in our original rebuttal). In a nutshell, when using the same pretraining, we observe SPL performance of 52.3 for the model *with...
[ -1, 4, 7, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, 5, 5, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "FCxA0z7Jh8X", "nips_2021_SQxuiYf2TT", "nips_2021_SQxuiYf2TT", "FCxA0z7Jh8X", "nips_2021_SQxuiYf2TT", "wJrs1hQI7vG", "EVwmxFHSoX", "gytHFAGtmn4", "zMSXnWpEWww", "nips_2021_SQxuiYf2TT", "nips_2021_SQxuiYf2TT" ]
nips_2021_EUlAerrk47Y
Meta Two-Sample Testing: Learning Kernels for Testing with Limited Data
Modern kernel-based two-sample tests have shown great success in distinguishing complex, high-dimensional distributions by learning appropriate kernels (or, as a special case, classifiers). Previous work, however, has assumed that many samples are observed from both of the distributions being distinguished. In realistic scenarios with very limited numbers of data samples, it can be challenging to identify a kernel powerful enough to distinguish complex distributions. We address this issue by introducing the problem of meta two-sample testing (M2ST), which aims to exploit (abundant) auxiliary data on related tasks to find an algorithm that can quickly identify a powerful test on new target tasks. We propose two specific algorithms for this task: a generic scheme which improves over baselines, and a more tailored approach which performs even better. We provide both theoretical justification and empirical evidence that our proposed meta-testing schemes outperform learning kernel-based tests directly from scarce observations, and identify when such schemes will be successful.
accept
The main positive aspect of this paper is its inherent novelty: it is a new and unique way of improving two-sample testing by utilizing many related tasks. The corresponding theory also helps to demonstrate that the method is sound. Through thorough discussion with the reviewers, some additional experiments have been provided, and I ask that you please include these in the paper. The main lingering issue amongst the reviewers is the real-world motivation, or lack thereof. Some of the reviewers struggled to find a real-world scenario in which this kind of data would be present. During internal discussions, one reviewer did mention detecting adversarial examples, or changing distributions in streaming data as possible applications, which I thought worth mentioning here in case it is helpful.
train
[ "ZvOLgvJgraK", "lmqa4ALwFMD", "t4Ur0o85nrf", "yr_kVZkRsoD", "OY1xvsFAUWo", "BfSjlSznQjs", "Vc3-y71FQ_", "sICqD3GEAIE", "op3sV6XxW3", "ssrEnEOT-nq", "tOuB1lK2JJw", "03oNC52zkRX", "_2PE74kWrW", "LVbpY-IFevS", "UYaFwpuAjwM", "gy_dT7drg-", "4ufCxOStIq", "PmS-iGqV99I", "zffT0eLUEM", ...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_re...
[ " Dear Reviewer Ye4V,\n\nMany thanks for your kind support!\n\nWe will do as much as we can to dissect the origin of the advantages of the meta approaches in the revised version. We will also include our discussions here, which will improve the quality of our paper further. \n\nIf you have more suggestions, please ...
[ -1, -1, 6, -1, 8, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "yr_kVZkRsoD", "tOuB1lK2JJw", "nips_2021_EUlAerrk47Y", "BfSjlSznQjs", "nips_2021_EUlAerrk47Y", "_2PE74kWrW", "onvWbWSFYCF", "y7uZCqgwF6X", "gy_dT7drg-", "nips_2021_EUlAerrk47Y", "4ufCxOStIq", "_2PE74kWrW", "V2cCa-alZJu", "PmS-iGqV99I", "zffT0eLUEM", "onvWbWSFYCF", "y7uZCqgwF6X", "U...
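As background for the kernel two-sample tests this record discusses, the unbiased MMD statistic such tests build on can be sketched in a few lines. This is a generic illustration with an RBF kernel on scalar data, not the paper's meta-learning scheme; the bandwidth `gamma` and the toy distributions are arbitrary choices:

```python
import math
import random

def rbf(a, b, gamma=0.5):
    # Gaussian (RBF) kernel on scalars
    return math.exp(-gamma * (a - b) ** 2)

def mmd2_unbiased(xs, ys, gamma=0.5):
    # Unbiased U-statistic estimate of the squared Maximum Mean Discrepancy
    # between samples xs ~ P and ys ~ Q.
    m, n = len(xs), len(ys)
    xx = sum(rbf(xs[i], xs[j], gamma)
             for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    yy = sum(rbf(ys[i], ys[j], gamma)
             for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    xy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (m * n)
    return xx + yy - 2.0 * xy

rng = random.Random(0)
xs = [rng.gauss(0.0, 1.0) for _ in range(50)]
ys = [rng.gauss(3.0, 1.0) for _ in range(50)]
zs = [rng.gauss(0.0, 1.0) for _ in range(50)]
mmd_diff = mmd2_unbiased(xs, ys)  # clearly separated distributions
mmd_same = mmd2_unbiased(xs, zs)  # same distribution, statistic near zero
```

With only 50 samples per side the statistic already separates the two cases cleanly here; the paper's point is that with far fewer samples, picking a good kernel (here the fixed `gamma`) becomes the hard part.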
nips_2021_k-ghaB9VZBw
Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets
Language models can generate harmful and biased outputs and exhibit undesirable behavior according to a given cultural context. We propose a Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets, an iterative process to significantly change model behavior by crafting and fine-tuning on a dataset that reflects a predetermined set of target values. We evaluate our process using three metrics: quantitative metrics with human evaluations that score output adherence to a target value, toxicity scoring on outputs; and qualitative metrics analyzing the most common word associated with a given social category. Through each iteration, we add additional training dataset examples based on observed shortcomings from evaluations. PALMS performs significantly better on all metrics compared to baseline and control models for a broad range of GPT-3 language model sizes without compromising capability integrity. We find that the effectiveness of PALMS increases with model size. We show that significantly adjusting language model behavior is feasible with a small, hand-curated dataset.
accept
This paper demonstrates that large language models (i.e., GPT-3) can be relatively cheaply fine-tuned so as to generate text continuations consistent with some specific values/toxicity-related goal. Reviewers had some concerns about how broadly these results would apply, but nonetheless all found the results to be significant and timely.
train
[ "lNsCuod0-rr", "IXexlOzH6Zn", "14-7VvcFLyp", "SFrdY9NI4oc", "8vvFbz7zxZh", "qgN1qW2f2ov", "qfOHA_OHuOL", "C7KXvGidX23", "_VXWbbnAuzq", "YjxfM5brne9", "E3mFpUpXhKM", "eu-yJreRP5J", "WjSO621VD0C", "gCfPhXt9eJ5", "SzBvaFpjsmb", "XJ0ciNcBCPx" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "off...
[ " Thank you reviewers LP2G and gwYe for your in-depth discussion and feedback! We know you both have put a lot of time into this and we appreciate your hard work making this submission better. You both raise excellent points and we are happy to temper our claims and add your points on downstream applications to the...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "14-7VvcFLyp", "nips_2021_k-ghaB9VZBw", "8vvFbz7zxZh", "C7KXvGidX23", "SFrdY9NI4oc", "E3mFpUpXhKM", "_VXWbbnAuzq", "_VXWbbnAuzq", "YjxfM5brne9", "IXexlOzH6Zn", "WjSO621VD0C", "gCfPhXt9eJ5", "gCfPhXt9eJ5", "IXexlOzH6Zn", "nips_2021_k-ghaB9VZBw", "nips_2021_k-ghaB9VZBw" ]
nips_2021_3YYmDQpT0p
The Lazy Online Subgradient Algorithm is Universal on Strongly Convex Domains
Daron Anderson, Douglas Leith
accept
The paper has received mixed reviews; while some reviewers found the results not super significant (essentially simplifying existing results in the literature) and the problem setting restrictive, others found the realization that simple plain lazy OGD is “universal” compelling, and the proof interesting and overall elegant. All considered---and while I agree with the criticism---I think that there is a fair chance that the paper’s observations will draw some attention and promote further investigation, and I support its acceptance. One remaining concern the authors should carefully consider is that the differential geometry background and, more importantly, the motivation and justification for appealing to such techniques are lacking. I trust the authors to improve this part of their exposition for their final version.
train
[ "zwkg_SA1b6", "Iy256MRzup", "Et-fUOccVOj", "AzEF6dRoT9F", "uQksybYGXtU", "WZVheFnYAAZ", "BpuSQWThtPc", "ciNricthRzW" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "They authors show that for linear optimization on strongly convex domains the lazy subgradient method achieves a best-of-both-worlds behavior of a expected regret bound of O(\\sqrt{N}) and O(\\log(N)) for adversarial and stochastic adversaries, respectively. The best known prior bounds were O(\\sqrt{log(N)}) adver...
[ 6, -1, -1, -1, -1, 4, 6, 8 ]
[ 4, -1, -1, -1, -1, 2, 4, 2 ]
[ "nips_2021_3YYmDQpT0p", "zwkg_SA1b6", "ciNricthRzW", "BpuSQWThtPc", "WZVheFnYAAZ", "nips_2021_3YYmDQpT0p", "nips_2021_3YYmDQpT0p", "nips_2021_3YYmDQpT0p" ]
nips_2021_z-X_PpwaroO
Computer-Aided Design as Language
Computer-Aided Design (CAD) applications are used in manufacturing to model everything from coffee mugs to sports cars. These programs are complex and require years of training and experience to master. A component of all CAD models particularly difficult to make are the highly structured 2D sketches that lie at the heart of every 3D construction. In this work, we propose a machine learning model capable of automatically generating such sketches. Through this, we pave the way for developing intelligent tools that would help engineers create better designs with less effort. The core of our method is a combination of a general-purpose language modeling technique alongside an off-the-shelf data serialization protocol. Additionally, we explore several extensions allowing us to gain finer control over the generation process. We show that our approach has enough flexibility to accommodate the complexity of the domain and performs well for both unconditional synthesis and image-to-sketch translation.
accept
This paper presents a new approach to generating constrained CAD sketches. The key challenge in this problem is generating constraints that relate all the different strokes in a sketch. At a high level, the main idea in this paper is to treat this as a language generation problem, since the sketches with their constraints can be represented in a language. The key new idea for this paper is that instead of learning directly in the language of CAD sketches, they first encode the strings to a very compact and dense representation, which allows the network to learn more efficiently. Versions of this idea have been used in other settings but the idea is new in this context. Additionally, the paper introduces a new dataset for this problem which should be of significant value to the community.
train
[ "TnxGHuIHn1X", "c6XFDBDkXjJ", "wg04XQIrkS", "jjzN-Oo8Zj8", "tQzTnmRYhEW", "vLYnwJGmI94", "uKpYGQnF_SD", "XBu2vl91RrO", "F506RDdcgnl", "XrUnWTm2f6Z", "lbFvsVPWLT", "aMXWMr6Ygmb" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your feedback! We will include the discussion above into our final version.\n\nWe would like to point out that our paper does contain quantitative (at least 3 different metrics and a new benchmark dataset) and qualitative evaluation useful for future improvements. In our rebuttal, we explained why i...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "wg04XQIrkS", "jjzN-Oo8Zj8", "tQzTnmRYhEW", "XBu2vl91RrO", "lbFvsVPWLT", "aMXWMr6Ygmb", "XrUnWTm2f6Z", "F506RDdcgnl", "nips_2021_z-X_PpwaroO", "nips_2021_z-X_PpwaroO", "nips_2021_z-X_PpwaroO", "nips_2021_z-X_PpwaroO" ]
nips_2021_D-ti-5lgbG
COHESIV: Contrastive Object and Hand Embedding Segmentation In Video
In this paper we learn to segment hands and hand-held objects from motion. Our system takes a single RGB image and hand location as input to segment the hand and hand-held object. For learning, we generate responsibility maps that show how well a hand's motion explains other pixels' motion in video. We use these responsibility maps as pseudo-labels to train a weakly-supervised neural network using an attention-based similarity loss and contrastive loss. Our system outperforms alternate methods, achieving good performance on the 100DOH, EPIC-KITCHENS, and HO3D datasets.
accept
The reviewer ratings of this paper were significantly lower prior to the author response. Reviewers were concerned about the quality of the writing. Additional experiments, clarifications and even some simplifications of the proposed method were provided during the author response. As a result of these actions all reviewers increased their scores after reading the author response. After accounting for the author response and discussions with reviewers, all reviewers recommend accepting this paper. It will be important for the authors to address the points and issues raised during the review and author discussion in the final version of this paper. The AC recommends accepting this paper.
train
[ "NY5JAZTxGnu", "rammwVi5JBw", "OkSMUDhX2De", "sCryNLjoA1h", "3A5f5bIR-rC", "7G1yTwumTDM", "1YGwyvayrS", "0a9ZYSTWU3", "x478NQ0XpIU", "0eWtN3IVlcL", "AdxNcvL1pu0", "yxO3-xXsThg", "rLuvkLG3Kpg" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ " Thank you for engaging in the discussion, updating your review and for your further helpful and thoughtful feedback. \n\nWe apologize for the incomplete reference. We will be sure to correct it in the updated version. The correct reference for [21] is below: Vladimir Iglovikov, et al. \"TernausNet: U-Net with VGG...
[ -1, 6, -1, -1, -1, 6, 7, -1, -1, -1, -1, -1, -1 ]
[ -1, 2, -1, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ "3A5f5bIR-rC", "nips_2021_D-ti-5lgbG", "sCryNLjoA1h", "rLuvkLG3Kpg", "0eWtN3IVlcL", "nips_2021_D-ti-5lgbG", "nips_2021_D-ti-5lgbG", "yxO3-xXsThg", "7G1yTwumTDM", "x478NQ0XpIU", "nips_2021_D-ti-5lgbG", "1YGwyvayrS", "rammwVi5JBw" ]
nips_2021_SBiKnJW9fy
ByPE-VAE: Bayesian Pseudocoresets Exemplar VAE
Recent studies show that advanced priors play a major role in deep generative models. Exemplar VAE, as a variant of VAE with an exemplar-based prior, has achieved impressive results. However, by the nature of its design, an exemplar-based model usually requires vast amounts of data to participate in training, which leads to high computational complexity. To address this issue, we propose Bayesian Pseudocoresets Exemplar VAE (ByPE-VAE), a new variant of VAE with a prior based on a Bayesian pseudocoreset. The proposed prior is conditioned on a small-scale pseudocoreset rather than the whole dataset, reducing the computational cost and avoiding overfitting. Simultaneously, we obtain the optimal pseudocoreset via a stochastic optimization algorithm during VAE training, aiming to minimize the Kullback-Leibler divergence between the prior based on the pseudocoreset and that based on the whole dataset. Experimental results show that ByPE-VAE can achieve competitive improvements over the state-of-the-art VAEs in the tasks of density estimation, representation learning, and generative data augmentation. Particularly, on a basic VAE architecture, ByPE-VAE is up to 3 times faster than Exemplar VAE while nearly matching its performance. Code is available at \url{https://github.com/Aiqz/ByPE-VAE}.
accept
The paper proposes a new way of learning a prior in the Variational Auto-Encoder framework. The paper is a follow-up on the VampPrior VAE and the Exemplar VAE. In a nutshell, the idea is to use a Bayesian pseudocoreset to learn pseudoinputs (in the context of the VampPrior). Overall, the reviewers appreciate the paper because: - It is easy to follow. - All concepts are clearly explained. - The idea of using the Bayesian pseudocoreset is interesting. The main concern is about the experimental section. The reviewers would prefer to see a deeper comparison of pseudoinputs learned by backpropagation (as in the VampPrior) versus by the proposed approach, and of how learned pseudoinputs compare to real data. The rebuttal contains many new results and addresses many concerns raised by the reviewers. Therefore, I tend to accept the paper.
train
[ "cJ6-SvhCeyX", "fqfCwMXlfvT", "FgfjoWOacEH", "M5-GocmTohh", "r56qF-MVp3E", "VNByoi0Kjb", "YpfK07wfmiN", "LlyBfjIMfwQ", "hEw72dVH9BS", "wtTiiAoSEkk" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thank you again for approval of our work and your time spending.", " Thank you again for your approval of our work and insightful comments. We will revise our work in detail in the revised edition, especially the experimental section.", "The paper proposes a Bayesian pseudocoreset based exemplar VAE algorithm...
[ -1, -1, 6, -1, 6, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, 3, -1, -1, -1, -1, 4 ]
[ "M5-GocmTohh", "VNByoi0Kjb", "nips_2021_SBiKnJW9fy", "hEw72dVH9BS", "nips_2021_SBiKnJW9fy", "YpfK07wfmiN", "r56qF-MVp3E", "wtTiiAoSEkk", "FgfjoWOacEH", "nips_2021_SBiKnJW9fy" ]
nips_2021_a62JHQKHVv
Recovery Analysis for Plug-and-Play Priors using the Restricted Eigenvalue Condition
The plug-and-play priors (PnP) and regularization by denoising (RED) methods have become widely used for solving inverse problems by leveraging pre-trained deep denoisers as image priors. While the empirical imaging performance and the theoretical convergence properties of these algorithms have been widely investigated, their recovery properties have not previously been theoretically analyzed. We address this gap by showing how to establish theoretical recovery guarantees for PnP/RED by assuming that the solution of these methods lies near the fixed-points of a deep neural network. We also present numerical results comparing the recovery performance of PnP/RED in compressive sensing against that of recent compressive sensing algorithms based on generative models. Our numerical results suggest that PnP with a pre-trained artifact removal network provides significantly better results compared to the existing state-of-the-art methods.
accept
This paper provides the first recovery guarantees for Plug-And-Play (PnP) and Regularization by Denoising (RED) methods for compressed sensing. While it uses similar analysis as in the literature for compressed sensing from generative priors, this extension is significant because it shows those tools work for a class of methods that are computationally cheaper than generative models. This work could be improved by a study of the conditions under which the Set-Restricted Eigenvalue Condition holds in the present context. Despite this weakness, the work has the potential to inspire additional theoretical interest in PnP/RED for solving inverse problems. In the camera ready, please clarify where the residual boundedness and the nonexpansiveness of D must hold, as discussed with the reviewers.
train
[ "XBj6nG1sUL", "U5UO-3lTd-", "k9kVMy5NjEu", "WTE16D3RYP", "GhvbTGdDZjc", "qO47tQh-w3v", "kYAzRiftsqD", "W7U-KOXBvL4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response.\n\nBased on the other reviewer concerns and the author response, I continue to remain positive about this paper. It seems that my original concerns about the strength of the assumptions can be addressed, and I strongly recommend the authors include this in the final version.\n\nI don'...
[ -1, -1, 7, -1, -1, -1, 7, 7 ]
[ -1, -1, 4, -1, -1, -1, 4, 4 ]
[ "qO47tQh-w3v", "GhvbTGdDZjc", "nips_2021_a62JHQKHVv", "k9kVMy5NjEu", "W7U-KOXBvL4", "kYAzRiftsqD", "nips_2021_a62JHQKHVv", "nips_2021_a62JHQKHVv" ]
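The fixed-point view of PnP analyzed in this record's paper can be illustrated with a minimal PnP-ISTA loop. Here the "denoiser" is just soft-thresholding (the proximal map of an l1 penalty), so the iteration reduces to classical ISTA on a tiny compressive-sensing-style problem; everything below is an illustrative sketch, not the paper's deep-denoiser setting, and the matrix, step size, and threshold are made-up values:

```python
def soft_threshold(x, tau):
    # A toy "denoiser": the proximal operator of tau * ||.||_1.
    return [max(abs(v) - tau, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

def pnp_ista(A, y, gamma=0.5, tau=0.01, iters=500):
    # Iterate x <- D(x - gamma * A^T (A x - y)) until (near) a fixed point.
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i]
             for i in range(len(y))]
        grad = [sum(A[i][j] * r[i] for i in range(len(y))) for j in range(n)]
        x = soft_threshold([x[j] - gamma * grad[j] for j in range(n)], tau)
    return x

A = [[1.0, 0.3],
     [0.2, 1.0]]
x_true = [1.0, 0.0]  # sparse signal to recover
y = [sum(a * b for a, b in zip(row, x_true)) for row in A]
x_hat = pnp_ista(A, y)
```

With a nonexpansive denoiser and a small enough step size the iteration converges to a fixed point; the paper's recovery analysis is about how close such a fixed point is to the true signal.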
nips_2021_CtaDl9L0bIQ
Group Equivariant Subsampling
Subsampling is used in convolutional neural networks (CNNs) in the form of pooling or strided convolutions, to reduce the spatial dimensions of feature maps and to allow the receptive fields to grow exponentially with depth. However, it is known that such subsampling operations are not translation equivariant, unlike convolutions that are translation equivariant. Here, we first introduce translation equivariant subsampling/upsampling layers that can be used to construct exact translation equivariant CNNs. We then generalise these layers beyond translations to general groups, thus proposing group equivariant subsampling/upsampling. We use these layers to construct group equivariant autoencoders (GAEs) that allow us to learn low-dimensional equivariant representations. We empirically verify on images that the representations are indeed equivariant to input translations and rotations, and thus generalise well to unseen positions and orientations. We further use GAEs in models that learn object-centric representations on multi-object datasets, and show improved data efficiency and decomposition compared to non-equivariant baselines.
accept
This paper introduces a novel method for subsampling along the orbit of a group action which maintains the equivariance property. After the rebuttal, all the reviewers agreed that this work is interesting and recommended acceptance. I therefore suggest accepting this paper, but the authors should incorporate the relevant suggestions of each reviewer.
train
[ "tqpngQVeF_a", "ARwKZtymUXa", "u5lPovf6nUu", "7FXi2rv7Et", "zvl6cuKW6eB", "mDUocFmYYB", "W6E3tXQ0KF6", "40navT0aHgP", "scXNfc4T8X", "FVR7aReGstc", "A1rnUvAtI9v", "IshKZSb--nA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces a group-theoretic framework for equivariant subsampling. Based on a similar approach to ref [3] in the case of translations, the idea is to subsample according to intrinsic landmarks and keeping track of the shifts involved. The framework introduced can handle this procedure for general symme...
[ 7, -1, -1, 8, -1, 6, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, -1, 5, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_CtaDl9L0bIQ", "A1rnUvAtI9v", "40navT0aHgP", "nips_2021_CtaDl9L0bIQ", "scXNfc4T8X", "nips_2021_CtaDl9L0bIQ", "FVR7aReGstc", "IshKZSb--nA", "7FXi2rv7Et", "mDUocFmYYB", "tqpngQVeF_a", "nips_2021_CtaDl9L0bIQ" ]
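The core difficulty this record's paper addresses (plain strided subsampling is not translation equivariant) and the flavor of fix (choose the subsampling phase from the signal itself) can be illustrated in 1-D. This sketch is in the spirit of input-dependent polyphase sampling, not the authors' group-theoretic construction; the energy-based selection rule is an assumption made for the demo:

```python
def equivariant_subsample(signal):
    # Split a 1-D signal into its two polyphase components (even / odd
    # indices) and keep the one with larger energy.  The selection rule
    # depends only on the signal, not on an absolute grid position.
    phases = [signal[0::2], signal[1::2]]
    energies = [sum(v * v for v in p) for p in phases]
    return phases[energies.index(max(energies))]

def roll(seq, k):
    # circular shift by k positions
    k %= len(seq)
    return seq[k:] + seq[:k]

sig = [0.0, 3.0, 1.0, 0.5, 4.0, 0.2, 2.5, 0.1]
out = equivariant_subsample(sig)
out_shifted = equivariant_subsample(roll(sig, 1))
```

Here `out_shifted` contains exactly the same sample values as `out` (one is a circular shift of the other), whereas naive `sig[0::2]` applied to the original and the shifted signal would retain two disjoint sets of samples.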
nips_2021_RpEANv3iv8
Data Sharing and Compression for Cooperative Networked Control
Sharing forecasts of network timeseries data, such as cellular or electricity load patterns, can improve independent control applications ranging from traffic scheduling to power generation. Typically, forecasts are designed without knowledge of a downstream controller's task objective, and thus simply optimize for mean prediction error. However, such task-agnostic representations are often too large to stream over a communication network and do not emphasize salient temporal features for cooperative control. This paper presents a solution to learn succinct, highly-compressed forecasts that are co-designed with a modular controller's task objective. Our simulations with real cellular, Internet-of-Things (IoT), and electricity load data show we can improve a model predictive controller's performance by at least 25% while transmitting 80% less data than the competing method. Further, we present theoretical compression results for a networked variant of the classical linear quadratic regulator (LQR) control problem.
accept
This paper proposes an architecture and gradient-based training for an encoder/decoder scheme to reduce communication of an exogenous signal to a remote optimization-based controller. The proposed method takes the control objective into account when training the compression scheme, so that the encoder/decoder is optimized for the task at hand. This paper introduces a (to some extent) novel problem, presents theoretical results for the LQR control special case and demonstrates improved performance on a set of benchmark tasks. However, the reviewers also find that the proposed method is a mix and patch of different techniques (though the proposed combination seems novel) and the numerical demonstrations could be strengthened by including additional baselines. This paper tackles a problem that might become increasingly important in the future. The reviewers therefore advocate the acceptance of the work, but assume that the criticized details will be addressed in the revision, including - a discussion of the stochastic problem variation - clarification of technical contributions and differences to related work (citing (Singh, Pal, 2017) is not the crucial point here, rather that the discussion is _specific_ (as drafted in the author's response))
train
[ "tCKig2KbIdc", "B4hT_bR5vG3", "jV3IPzJoPB8", "sA-btUU3quX", "z3r7dgkMS1g", "VCUZvoTvvZc", "v3eOzgkiT5v", "a9jlwFwcAD", "w0yUZglfNxC", "YwUUah4L8jZ", "pD4uk6_wHd4", "0nOoiQF7Yns", "Bz3dkpRCCAb", "qOz5f1EoDh", "ga-L_BBJqPh" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for increasing our score and the valuable comments. We will definitely add a discussion of the stochastic case in the paper. ", "The paper discusses an approach for deterministic optimal control under estimates of covariate states. Overall, I found the paper a good read and easy to follow. However, I...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "jV3IPzJoPB8", "nips_2021_RpEANv3iv8", "sA-btUU3quX", "w0yUZglfNxC", "VCUZvoTvvZc", "v3eOzgkiT5v", "a9jlwFwcAD", "0nOoiQF7Yns", "B4hT_bR5vG3", "ga-L_BBJqPh", "qOz5f1EoDh", "Bz3dkpRCCAb", "nips_2021_RpEANv3iv8", "nips_2021_RpEANv3iv8", "nips_2021_RpEANv3iv8" ]
nips_2021_Ai73e_POVd
Hyperbolic Procrustes Analysis Using Riemannian Geometry
Label-free alignment between datasets collected at different times, locations, or by different instruments is a fundamental scientific task. Hyperbolic spaces have recently provided a fruitful foundation for the development of informative representations of hierarchical data. Here, we take a purely geometric approach for label-free alignment of hierarchical datasets and introduce hyperbolic Procrustes analysis (HPA). HPA consists of new implementations of the three prototypical Procrustes analysis components: translation, scaling, and rotation, based on the Riemannian geometry of the Lorentz model of hyperbolic space. We analyze the proposed components, highlighting their useful properties for alignment. The efficacy of HPA, its theoretical properties, stability and computational efficiency are demonstrated in simulations. In addition, we showcase its performance on three batch correction tasks involving gene expression and mass cytometry data. Specifically, we demonstrate high-quality unsupervised batch effect removal from data acquired at different sites and with different technologies that outperforms recent methods for label-free alignment in hyperbolic spaces.
accept
This was a very borderline paper. The original submission appears to be technically sound and novel, but reading the paper without difficulty requires a fair amount of prior expertise in PA. It is dense to the point that it might impact the significance of the work in the general NeurIPS community. However, the reviewers greatly appreciated the enthusiastic interaction with the authors during the rebuttal, and have generally agreed to accept the paper under the assumption that the authors make significant improvements to clarity in the camera ready. As just one example, the reviewers strongly recommend including a high-level introduction of PA in the final submission. Please read through all of the reviewer feedback carefully and follow through on promised adjustments.
val
[ "6pFCI71uQlw", "WWNRLSyl7os", "4whzHzB_JHO", "RdqXL0-EglP", "gasnSvJVuQ", "RP1CvzI3tLE", "0l7EoIy1gAM", "4wHJuGiNyQI", "x9QQ2mQ5Sm", "xDqOxISbz7E", "YGjDZIB6WFG" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for the thoughtful comments and efforts towards improving our paper.\nAs suggested, we will include some background on PA in our revision.", " Dear authors,\n\nthank you very much for your reply and the clarification about my main concern (better KNN performance but worse MMD...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "WWNRLSyl7os", "0l7EoIy1gAM", "YGjDZIB6WFG", "x9QQ2mQ5Sm", "4whzHzB_JHO", "xDqOxISbz7E", "4wHJuGiNyQI", "nips_2021_Ai73e_POVd", "nips_2021_Ai73e_POVd", "nips_2021_Ai73e_POVd", "nips_2021_Ai73e_POVd" ]
nips_2021_AFiH_CNnVhS
No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data
A central challenge in training classification models in the real-world federated system is learning with non-IID data. To cope with this, most of the existing works involve enforcing regularization in local optimization or improving the model aggregation scheme at the server. Other works also share public datasets or synthesized samples to supplement the training of under-represented classes or introduce a certain level of personalization. Though effective, they lack a deep understanding of how the data heterogeneity affects each layer of a deep classification model. In this paper, we bridge this gap by performing an experimental analysis of the representations learned by different layers. Our observations are surprising: (1) there exists a greater bias in the classifier than other layers, and (2) the classification performance can be significantly improved by post-calibrating the classifier after federated training. Motivated by the above findings, we propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model. Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10. We hope that our simple yet effective method can shed some light on the future research of federated learning with non-IID data.
accept
The reviewers appreciate the simple yet effective strategy in this paper that handles heterogeneous data in federated learning. The experimental finding that the last layer exhibits the most dissimilarity could inspire future research. Great reproducibility and good writing also make the paper stand out. Therefore, I recommend acceptance. In addition, please incorporate the new results in the final version.
train
[ "fUYGMTE6eT0", "x2shPDHSaH", "N8Fisanz1aH", "UBSA1F-Q6MC", "F9ZhJ0gpjO", "M17wdYpD8Sx", "70ywncsgU6y", "Svlf9pXGJo", "duxlfXgIo8b", "k6fLZ125w88", "vMKSLNE1MMW", "lN97tsAyq_V", "RTLWMcm3FEo", "iredwdfH8Q" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks very much for your further reply and clarification on your concerns. We are glad to see our initial rebuttal has addressed your concern about the experimental baselines. **It seems that the “remaining outstanding issues” you pointed out are all related to the Gaussian assumption we made in the \"Virtual Re...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3, 4, 5 ]
[ "F9ZhJ0gpjO", "F9ZhJ0gpjO", "iredwdfH8Q", "iredwdfH8Q", "k6fLZ125w88", "Svlf9pXGJo", "nips_2021_AFiH_CNnVhS", "duxlfXgIo8b", "70ywncsgU6y", "RTLWMcm3FEo", "lN97tsAyq_V", "nips_2021_AFiH_CNnVhS", "nips_2021_AFiH_CNnVhS", "nips_2021_AFiH_CNnVhS" ]
nips_2021_5-Of1DTlq
Preconditioned Gradient Descent for Over-Parameterized Nonconvex Matrix Factorization
Jialun Zhang, Salar Fattahi, Richard Zhang
accept
The paper proposes regularized scaled gradient descent for low-rank matrix factorization. Particularly, it considers the scenarios where the model parameter r is an overestimation of the true rank r*. This is quite a common scenario. Though the scaling (and its regularized variant) is well-known (see papers from 2013 onwards based on numerical analysis or Riemannian metric selection), a deeper convergence analysis, which this paper presents, was missing. Though the experiments are adequate for showing the good performance, some comparisons on real data would have been nice. Nevertheless, the reviewers agree that the paper has useful insights for the NeurIPS community.
train
[ "5enCXLnngL", "2Pbce-6rVNJ", "1Q4ITn2DDI2", "HJwggAd90L1", "KZC9B5vcmI6", "IAfnuuft-tM", "2RH5T0wJ3sc", "uh66iV1rEUv", "Z--avWeEYXq", "FWahQu0GxmD", "jUgzVSZBoQ-", "lSlWfCTPXAT", "Y-5kJgAvaMT", "GAQOA5gxNP", "-K-z25a0I5P", "MlvQ7FyWPTq", "C-AXONgWEQ" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the problem of (noisy) nonconvex matrix factorization. The authors proposed an algorithm PrecGD that accommodates to over-parameterization and ill-conditioned ground truth. Strengths:\n1. The paper proposes a novel algorithm that performs well in numerical experiments, accompanied with solid th...
[ 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_5-Of1DTlq", "1Q4ITn2DDI2", "HJwggAd90L1", "GAQOA5gxNP", "nips_2021_5-Of1DTlq", "2RH5T0wJ3sc", "uh66iV1rEUv", "Z--avWeEYXq", "FWahQu0GxmD", "Y-5kJgAvaMT", "KZC9B5vcmI6", "C-AXONgWEQ", "KZC9B5vcmI6", "5enCXLnngL", "MlvQ7FyWPTq", "nips_2021_5-Of1DTlq", "nips_2021_5-Of1DTlq" ]
nips_2021_EIfV-XAggKo
Improving Contrastive Learning on Imbalanced Data via Open-World Sampling
Contrastive learning approaches have achieved great success in learning visual representations with few labels of the target classes. That implies a tantalizing possibility of scaling them up beyond a curated “seed” benchmark, to incorporating more unlabeled images from the internet-scale external sources to enhance its performance. However, in practice, a larger amount of unlabeled data will require more computing resources due to the bigger model size and longer training needed. Moreover, open-world unlabeled data usually follows an implicit long-tail class or attribute distribution, many of which also do not belong to the target classes. Blindly leveraging all unlabeled data hence can lead to the data imbalance as well as distraction issues. This motivates us to seek a principled approach to strategically select unlabeled data from an external source, in order to learn generalizable, balanced and diverse representations for relevant classes. In this work, we present an open-world unlabeled data sampling framework called Model-Aware K-center (MAK), which follows three simple principles: (1) tailness, which encourages sampling of examples from tail classes, by sorting the empirical contrastive loss expectation (ECLE) of samples over random data augmentations; (2) proximity, which rejects the out-of-distribution outliers that may distract training; and (3) diversity, which ensures diversity in the set of sampled examples. Empirically, using ImageNet-100-LT (without labels) as the seed dataset and two “noisy” external data sources, we demonstrate that MAK can consistently improve both the overall representation quality and the class balancedness of the learned features, as evaluated via linear classifier evaluation on full-shot and few-shot settings. The code is available at: https://github.com/VITA-Group/MAK.
accept
The paper worked on contrastive learning on class-imbalanced data and proposed *model-aware K-center* for the purpose. It makes use of an additional unlabeled dataset which is bigger than the targeted unlabeled dataset for contrastive learning in terms of the number of instances and the number of classes. According to three principles, namely *tailness*, *proximity*, and *diversity*, certain unlabeled data are sampled from the pool dataset to re-balance the targeted dataset. The writing is clear, the motivation is strong, the idea is novel, and the results are significant. Thus, it should be accepted for publication. It is quite interesting that the proposed method can satisfy the tailness and the proximity at the same time as shown in Figure 1, because the former sounds like accepting long-tail **classes** and the latter sounds like rejecting long-tail **instances**. Since contrastive learning should be regarded as **pre-training**, it may not be very natural if the label space for the downstream tasks is fixed at the time of contrastive learning (it is unsupervised and there is no label at all). Moreover, since the targeted dataset is smaller, is it possible that a class exists in both datasets but **no data has been drawn from this class in the targeted dataset**, and then all the data of this class in the bigger pool dataset become out-of-distribution data under the proposed method (i.e., some data should be accepted but will be rejected since this class is missing in the targeted dataset)? Perhaps I have some misunderstanding because I didn't carefully go through the full paper by myself, but I believe clarifying my questions (not concerns, just questions) is very helpful and can maximize the impact of your work.
BTW, the following paper should be related to your work, which can sample in-distribution data and reject out-of-distribution data though it considered to enlarge but not re-balance/enrich the targeted dataset (i.e., quantity vs quality): Yixing Xu, Yunhe Wang, Hanting Chen, Kai Han, Chunjing Xu, Dacheng Tao, and Chang Xu. Positive-Unlabeled Compression on the Cloud. NeurIPS 2019.
val
[ "PZncClewtHp", "sZYPIScHjdW", "z5Mgehd1jQ5", "q7xhHQIakVB", "PfKKpVqCxaT", "NDgdSe4J0J", "vEuehBNiNOf", "gwHtWXJBVsd" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer very much for the insightful comments and suggestions\n\n**The novelty concern**: Our novelty involves three-folds: Firstly, we provide a new formulation/setup for addressing imbalancedness of contrastive learning using sampling strategy from the free unlabeled data. The general topic of con...
[ -1, 6, -1, -1, -1, 7, 7, 7 ]
[ -1, 3, -1, -1, -1, 5, 5, 3 ]
[ "sZYPIScHjdW", "nips_2021_EIfV-XAggKo", "gwHtWXJBVsd", "NDgdSe4J0J", "vEuehBNiNOf", "nips_2021_EIfV-XAggKo", "nips_2021_EIfV-XAggKo", "nips_2021_EIfV-XAggKo" ]
nips_2021_bzpkxS_JVsI
Searching for Efficient Transformers for Language Modeling
Large Transformer models have been central to recent advances in natural language processing. The training and inference costs of these models, however, have grown rapidly and become prohibitively expensive. Here we aim to reduce the costs of Transformers by searching for a more efficient variant. Compared to previous approaches, our search is performed at a lower level, over the primitives that define a Transformer TensorFlow program. We identify an architecture, named Primer, that has a smaller training cost than the original Transformer and other variants for auto-regressive language modeling. Primer’s improvements can be mostly attributed to two simple modifications: squaring ReLU activations and adding a depthwise convolution layer after each Q, K, and V projection in self-attention. Experiments show Primer’s gains over Transformer increase as compute scale grows and follow a power law with respect to quality at optimal model sizes. We also verify empirically that Primer can be dropped into different codebases to significantly speed up training without additional tuning. For example, at a 500M parameter size, Primer improves the original T5 architecture on C4 auto-regressive language modeling, reducing the training cost by 4X. Furthermore, the reduced training cost means Primer needs much less compute to reach a target one-shot performance. For instance, in a 1.9B parameter configuration similar to GPT-3 XL, Primer uses 1/3 of the training compute to achieve the same one-shot performance as Transformer. We open source our models and several comparisons in T5 to help with reproducibility.
accept
The submission makes two main contributions: defining neural architecture search using lower level primitives than previous work, and using this search to introduce a Transformer variant they call Primer. Results show that Primer can outperform vanilla Transformers on various language modeling tasks. While three of the reviewers support acceptance, Reviewer dssT raises several concerns, arguing that a new search space is not enough of a contribution. The reviewer also would like to see a search space that does not include the squared ReLU and Separable conv plus multi-head attention that are used in the final solution. Other reviewers appreciated the larger, low level search space. I think that the authors convincingly rebut the need to exclude modules from the search space, as for example, excluding squared ReLU would require excluding multiplication. There are also concerns that it is only evaluated on language modeling, but the authors are up front about this limitation, and I agree with most of the reviewers that this is an important and general enough task to focus on. Overall, the consensus is that this is a good paper that should be accepted.
train
[ "keiG8sSqp3O", "Obr-IvyKYg", "jv3NrRVHKaP", "6-ZJ1JYMZZ3", "y8kysT9udY", "yXinJV8btVL", "JEbqYnsEdfc", "HMW0ArYymwl", "SzGrBWuyovX", "Druy5qy215_", "D0NyHBi0TQ" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As noted in our first response, we have conducted additional experiments that mimic the one-shot GPT-3 setup (https://arxiv.org/pdf/2005.14165.pdf). GPT-3 was not open sourced, and so this is not an exact replication, but we have done our best to reproduce the results using a proprietary pretraining dataset and t...
[ -1, -1, -1, 5, -1, -1, -1, -1, 9, 7, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, 4, 4, 4 ]
[ "D0NyHBi0TQ", "6-ZJ1JYMZZ3", "SzGrBWuyovX", "nips_2021_bzpkxS_JVsI", "D0NyHBi0TQ", "Druy5qy215_", "SzGrBWuyovX", "6-ZJ1JYMZZ3", "nips_2021_bzpkxS_JVsI", "nips_2021_bzpkxS_JVsI", "nips_2021_bzpkxS_JVsI" ]
nips_2021_7S3RMGVS5vO
Scaling Ensemble Distribution Distillation to Many Classes with Proxy Targets
Max Ryabinin, Andrey Malinin, Mark Gales
accept
The reviewers are all for accepting the work. It's easy to follow and has quite extensive experiments, especially with the help of the MIMO baseline in the rebuttal (please cite and add the additional experiments into the paper). The loss function involves several moving parts, as well as pipelines like distilling from a trained model with much longer training than the baselines they compare to. So the idea isn't necessarily easy to implement. With that said, I think the work's ideas and its empirical evaluation will be of interest to the community.
train
[ "Y4Q15Lszwkv", "p7QJQh8azQ9", "6Ifxmfo3m2C", "Dl7uWbxPuSf", "KTjYj6zEN_D", "cBdeDJI5Nbr", "yob2T7ZBjDG", "SKdKSCTH1tx", "Bqx82vK-f_", "2aAD0XInQo", "tel0ee26mMu", "7eW538J1dS", "g0tAPVSLic", "geZ4lL5unf", "5nTAUeQROU", "qEQnZopnE0C", "gxbcNFS0pNf" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers uncertainty estimation for neural networks, specifically predictive uncertainty and knowledge uncertainty. For these problems, deep ensembles are considered state-of-the-art, but are expensive at training time and, more importantly, inference time. To address the latter problem, this paper exp...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_7S3RMGVS5vO", "KTjYj6zEN_D", "SKdKSCTH1tx", "KTjYj6zEN_D", "7eW538J1dS", "qEQnZopnE0C", "geZ4lL5unf", "5nTAUeQROU", "nips_2021_7S3RMGVS5vO", "nips_2021_7S3RMGVS5vO", "g0tAPVSLic", "nips_2021_7S3RMGVS5vO", "2aAD0XInQo", "gxbcNFS0pNf", "Y4Q15Lszwkv", "nips_2021_7S3RMGVS5vO", ...
nips_2021_rrf6XgIS_Ek
Multi-Person 3D Motion Prediction with Multi-Range Transformers
We propose a novel framework for multi-person 3D motion trajectory prediction. Our key observation is that a human's actions and behaviors may highly depend on the other persons around. Thus, instead of predicting each human pose trajectory in isolation, we introduce a Multi-Range Transformers model which consists of a local-range encoder for individual motion and a global-range encoder for social interactions. The Transformer decoder then performs prediction for each person by taking a corresponding pose as a query which attends to both local and global-range encoder features. Our model not only outperforms state-of-the-art methods on long-term 3D motion prediction, but also generates diverse social interactions. More interestingly, our model can even predict 15-person motion simultaneously by automatically dividing the persons into different interaction groups. Project page with code is available at https://jiashunwang.github.io/MRT/.
accept
The final scores for this paper are just below the usual threshold for acceptance. While this paper explores an interesting approach to modelling multiple 3D agent motions, the experimental work was assessed by half of the reviewers as needing more work in a number of ways to fully evaluate the method. The number of users used for the human evaluation was seen as being too small and the specifics of the metrics used to evaluate the method were put into question with respect to evaluating motions of longer duration. With another revision and re-submission I am confident the authors will have a successful submission, but at this time the AC recommends rejection.
train
[ "Azk1-kjHt4N", "RFMUFXrojso", "ZTtud74fA2T", "PhpSJ7DX4S", "a0JgLJ89Gs", "y4atIMh5Wbr", "4xHZ8M5bUjT", "PrNyB6Zq1hn", "9FG5VkMTM92", "P5RYV-nK9NJ", "AXTRCVq3MTI", "c4A84U0KE2D", "NzB27zg1HH9", "0yyV0fJSom", "Xm5GogtGnZ" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for the comments. We will add the quantitative and qualitative results in our rebuttal to the revised paper. Besides that, we still have the following points to clarify:\n \nFor using MPJPE, there are previous studies that used this metric for sequences longer than 1 second, *e.g.* Cao et a...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "PhpSJ7DX4S", "nips_2021_rrf6XgIS_Ek", "9FG5VkMTM92", "0yyV0fJSom", "P5RYV-nK9NJ", "PrNyB6Zq1hn", "0yyV0fJSom", "Xm5GogtGnZ", "RFMUFXrojso", "NzB27zg1HH9", "nips_2021_rrf6XgIS_Ek", "0yyV0fJSom", "nips_2021_rrf6XgIS_Ek", "nips_2021_rrf6XgIS_Ek", "nips_2021_rrf6XgIS_Ek" ]
nips_2021_J28lNO4p3ki
STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning
Prashant Khanduri, PRANAY SHARMA, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, Pramod Varshney
accept
The presented algorithm appears to achieve better communication complexities in theory and has reasonable experimental performance. Reviewers are all in favor of accepting the paper.
train
[ "VqLnm3S6lRO", "F4sYxZcfV9O", "6eYUbW-QGMy", "wQIBf4oClQ", "jBbxC89voa", "Os23vFbnQF", "n4BWittid54", "iFrYk29YNAy", "mG902piamRT", "v2owHt3MDk3", "H8X0gjX0ZAu", "B1mXVis6Nf", "3pFBgRGdE1V" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors presented a momentum extension of the classic FedAvg method for non-convex federated learning, along with a unified framework for convergence analysis. The core result shows that the proposed method can achieve near-tight sample and communication complexities under mild conditions, whilst also revealin...
[ 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_J28lNO4p3ki", "wQIBf4oClQ", "nips_2021_J28lNO4p3ki", "n4BWittid54", "Os23vFbnQF", "H8X0gjX0ZAu", "iFrYk29YNAy", "mG902piamRT", "6eYUbW-QGMy", "nips_2021_J28lNO4p3ki", "VqLnm3S6lRO", "3pFBgRGdE1V", "nips_2021_J28lNO4p3ki" ]
nips_2021_SjxC07jABZ4
Bubblewrap: Online tiling and real-time flow prediction on neural manifolds
While most classic studies of function in experimental neuroscience have focused on the coding properties of individual neurons, recent developments in recording technologies have resulted in an increasing emphasis on the dynamics of neural populations. This has given rise to a wide variety of models for analyzing population activity in relation to experimental variables, but direct testing of many neural population hypotheses requires intervening in the system based on current neural state, necessitating models capable of inferring neural state online. Existing approaches, primarily based on dynamical systems, require strong parametric assumptions that are easily violated in the noise-dominated regime and do not scale well to the thousands of data channels in modern experiments. To address this problem, we propose a method that combines fast, stable dimensionality reduction with a soft tiling of the resulting neural manifold, allowing dynamics to be approximated as a probability flow between tiles. This method can be fit efficiently using online expectation maximization, scales to tens of thousands of tiles, and outperforms existing methods when dynamics are noise-dominated or feature multi-modal transition probabilities. The resulting model can be trained at kiloHertz data rates, produces accurate approximations of neural dynamics within minutes, and generates predictions on submillisecond time scales. It retains predictive performance throughout many time steps into the future and is fast enough to serve as a component of closed-loop causal experiments.
accept
After discussion, the reviewers all converged on accept or weak accept ratings for this paper, once the stated changes are implemented. The methodological novelty is somewhat limited, but the problem is important, the approach is sensible, and the results are good.
train
[ "jxkgorValQ-", "Wwz9Dq6q4BT", "0ew9-LUWh5a", "RVSTh137er", "idJvfnVHDX_", "M02XuAcyX-h", "GogwvljDC3l", "Gao3U_tg5Dn", "WyAnClPTl4o", "bdvclVQiadS", "ZKZFk7eatlF" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents a method to efficiently characterize high-dimensional neural dynamics using a soft \"tiling\" of a lower-dimensionality manifold. Specifically, the method is a combination of two algorithms, (i) \"Stable Streaming SVD\", a dimensionality reduction method that aims to find the top-k-dimensional ...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_SjxC07jABZ4", "nips_2021_SjxC07jABZ4", "RVSTh137er", "bdvclVQiadS", "M02XuAcyX-h", "WyAnClPTl4o", "ZKZFk7eatlF", "nips_2021_SjxC07jABZ4", "Wwz9Dq6q4BT", "jxkgorValQ-", "nips_2021_SjxC07jABZ4" ]
nips_2021_uw4mcO8nz3n
The Semi-Random Satisfaction of Voting Axioms
Lirong Xia
accept
This paper gives several interesting results on the smoothed satisfaction of voting axioms. I, and all of the reviewers, have a positive view of the paper. I recommend acceptance.
train
[ "8mxb2SV-ed", "AXQ1OKNQL0n", "Ifb-mCwXVfk", "AA-782wR0av", "chVFarnuXw9", "J0vRCCSVjmS", "0Z2FPR9WatF", "ze16SWxbvnA", "wdq5lcnHGbl", "0YHWA_WPw_t", "gfGhFI_84Em" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response! I have no further questions and I will keep my score.", " I thank the authors for their response, and I’m excited to see the journal version of the paper!", " The authors' responses are satisfactory, I keep my score unchanged.", "The authors study the likelihood that two voting axi...
[ -1, -1, -1, 7, -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, 3, 3, 4 ]
[ "0Z2FPR9WatF", "chVFarnuXw9", "ze16SWxbvnA", "nips_2021_uw4mcO8nz3n", "0YHWA_WPw_t", "AA-782wR0av", "gfGhFI_84Em", "wdq5lcnHGbl", "nips_2021_uw4mcO8nz3n", "nips_2021_uw4mcO8nz3n", "nips_2021_uw4mcO8nz3n" ]