| paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29-10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
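Each record below fills the twelve columns above, with the list-valued cells pretty-printed across multiple lines. Two conventions are visible in the data: `review_ratings` and `review_confidences` use `-1` as a sentinel for posts that are author responses or unscored follow-up comments rather than scored reviews, and an entry in `review_reply_tos` equal to the `paper_id` marks a top-level review posted directly on the paper. The sketch below is illustrative only, not part of the dataset release: it assumes the records are serialized as JSON Lines with the field names above, and the path `reviews.jsonl` is a hypothetical stand-in for wherever the rows are stored.

```python
# Minimal sketch for working with records of the schema above.
# Assumptions (not from the dataset itself): JSON Lines serialization and
# the hypothetical file name "reviews.jsonl".
import json
from typing import Iterator, List, Optional


def load_records(path: str = "reviews.jsonl") -> Iterator[dict]:
    """Yield one paper record (a dict with the twelve fields above) per line."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)


def mean_rating(record: dict) -> Optional[float]:
    """Average the scored reviews, skipping the -1 sentinel used for
    author responses and unscored follow-up comments."""
    scores = [r for r in record["review_ratings"] if r != -1]
    return sum(scores) / len(scores) if scores else None


def top_level_reviews(record: dict) -> List[str]:
    """IDs of posts made directly on the paper (reply_to equals paper_id)."""
    return [
        rid
        for rid, parent in zip(record["review_ids"], record["review_reply_tos"])
        if parent == record["paper_id"]
    ]


if __name__ == "__main__":
    for rec in load_records():
        print(rec["paper_id"], rec["paper_acceptance"], rec["label"],
              mean_rating(rec), len(top_level_reviews(rec)))
```

As a sanity check against the first record below (`nips_2021_HbTzvugzOp`): its `review_ratings` are `[-1, 5, -1, -1, -1, -1, -1, -1, 9, 7, 7, 7]`, so `mean_rating` averages the five scored entries, (5 + 9 + 7 + 7 + 7) / 5 = 7.0, and `top_level_reviews` returns the five IDs whose `review_reply_tos` entry is `nips_2021_HbTzvugzOp`.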
nips_2021_HbTzvugzOp | Chebyshev-Cantelli PAC-Bayes-Bennett Inequality for the Weighted Majority Vote | We present a new second-order oracle bound for the expected risk of a weighted majority vote. The bound is based on a novel parametric form of the Chebyshev-Cantelli inequality (a.k.a. one-sided Chebyshev’s), which is amenable to efficient minimization. The new form resolves the optimization challenge faced by prior oracle bounds based on the Chebyshev-Cantelli inequality, the C-bounds [Germain et al., 2015], and, at the same time, it improves on the oracle bound based on second order Markov’s inequality introduced by Masegosa et al. [2020]. We also derive a new concentration of measure inequality, which we name PAC-Bayes-Bennett, since it combines PAC-Bayesian bounding with Bennett’s inequality. We use it for empirical estimation of the oracle bound. The PAC-Bayes-Bennett inequality improves on the PAC-Bayes-Bernstein inequality of Seldin et al. [2012]. We provide an empirical evaluation demonstrating that the new bounds can improve on the work of Masegosa et al. [2020]. Both the parametric form of the Chebyshev-Cantelli inequality and the PAC-Bayes-Bennett inequality may be of independent interest for the study of concentration of measure in other domains.
| accept | The referees are in agreement that this submission provides novel techniques for analyzing majority-based learners. It is very much within the conference scope and of sufficient interest and novelty. All of the referee objections have been addressed during the discussion phase. | train | [
"lsUlfRjAdpW",
"vSTewFcpWzE",
"AExohCByOC4",
"hfdV--bOIgO",
"FFHX4bNTtao",
"AZJsW4rbnn",
"LjH7i2qoHr8",
"vK71Q178K2p",
"nBydXFyIRVQ",
"V8rTCtoTXof",
"tatrylWgkj",
"QbQFJ3vz_x"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for revising the score. Below we provide further explanations to the additional comments.\n\nFactor-two gap:\n\nWe assume there was a typo in the comment and it referred to $\\min(1, Var[Z]/\\alpha^2) \\leq 2 Var[Z]/(\\alpha^2+Var[Z])$, which means that Chebyshev truncated at 1 is at most tw... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
9,
7,
7,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
3,
4
] | [
"AExohCByOC4",
"nips_2021_HbTzvugzOp",
"hfdV--bOIgO",
"vSTewFcpWzE",
"tatrylWgkj",
"QbQFJ3vz_x",
"nBydXFyIRVQ",
"V8rTCtoTXof",
"nips_2021_HbTzvugzOp",
"nips_2021_HbTzvugzOp",
"nips_2021_HbTzvugzOp",
"nips_2021_HbTzvugzOp"
] |
nips_2021_59mdmZJV6IG | A Multi-Implicit Neural Representation for Fonts | Fonts are ubiquitous across documents and come in a variety of styles. They are either represented in a native vector format or rasterized to produce fixed resolution images. In the first case, the non-standard representation prevents benefiting from latest network architectures for neural representations; while, in the latter case, the rasterized representation, when encoded via networks, results in loss of data fidelity, as font-specific discontinuities like edges and corners are difficult to represent using neural networks. Based on the observation that complex fonts can be represented by a superposition of a set of simpler occupancy functions, we introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features (e.g., edges and corners). However, while multi-implicits locally preserve font features, obtaining supervision in the form of ground truth multi-channel signals is a problem in itself. Instead, we propose how to train such a representation with only local supervision, while the proposed neural architecture directly finds globally consistent multi-implicits for font families. We extensively evaluate the proposed representation for various tasks including reconstruction, interpolation, and synthesis to demonstrate clear advantages with existing alternatives. Additionally, the representation naturally enables glyph completion, wherein a single characteristic font is used to synthesize a whole font family in the target style.
| accept | Meta-review of "A Multi-Implicit Neural Representation for Fonts"
The paper proposes representing vector fonts with multi-implicit functions. The authors were able to reconstruct vector fonts (at arbitrarily high resolution) while keeping specific font discontinuities such as edges and corners, and also showcased their method by synthesizing an entire font given only one character.
Reviewers were initially mixed. While reviewers generally agree that the proposed method works (and, based on the results, seems to work really well), the concern raised (especially by iAuP) is whether multi-implicit functions are indeed needed. DnKK raised a similar concern about whether a standard SDF could attain similar results or not. The authors have responded to these concerns, and even produced some newer results in their response (they have also written a detailed general response with baseline comparisons). Reviewers were generally satisfied with the response, new results, and improvements made, and increased their scores; even reviewer iAuP was persuaded to improve their initial assessment.
In reviewer discussions, most reviewers were in favour of accepting the work. Reviewer iAuP strongly suggested that the authors clarify, in the camera-ready version, the many issues discussed in a detailed thread. For instance, as they noted: *Given the author's additional experiments, it turns out that 'single implicits' w/ fourier features can reconstruct fonts as well as 'multi implicits', better in low resolution, worse in high resolution, but roughly about the same.* So I would like to see the text clarified to the best of the authors' abilities (as they have pledged in the discussion thread), and the claims made accurate and clear, so as not to mislead readers.
Given the generally high quality of the work, and fantastic results, which will be useful to the NeurIPS community and ML practitioners, I recommend accepting this work as a poster.
| train | [
"tQDgOR3ALRm",
"edVhIt9dMN",
"fx3GneIpxNF",
"jeyThVbZ0q5",
"ETh08rP0mlg",
"BCvprWza9lW",
"V59bjnF3h04",
"cxOpnFBaz6O",
"ty9SguHtJuG",
"ExtHPgD-PK_",
"dlvOijfntGY",
"URw0br6MVPV",
"a6f_CSKcsD6",
"ei8mKSzH_5",
"W3nPTDB8GgH",
"uAmpn6--g2L",
"8CWt9vdytT9",
"OHmK8a0fsO",
"iMFhv8xW0uW"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_re... | [
" Thank you, we appreciate the comments and, given a chance, we will add the suggested discussion to the paper.",
" Thanks for providing the reconstruction results. I really do appreciate it. I guess 64x64 training resolution might be the culprit that single implicit function w/ fourier features performed a bit l... | [
-1,
-1,
-1,
-1,
4,
7,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"edVhIt9dMN",
"fx3GneIpxNF",
"jeyThVbZ0q5",
"V59bjnF3h04",
"nips_2021_59mdmZJV6IG",
"nips_2021_59mdmZJV6IG",
"ty9SguHtJuG",
"ExtHPgD-PK_",
"ei8mKSzH_5",
"W3nPTDB8GgH",
"a6f_CSKcsD6",
"nips_2021_59mdmZJV6IG",
"8CWt9vdytT9",
"ETh08rP0mlg",
"BCvprWza9lW",
"nips_2021_59mdmZJV6IG",
"URw0b... |
nips_2021_zvTBIFQ43Sd | OctField: Hierarchical Implicit Functions for 3D Modeling | Recent advances in localized implicit functions have enabled neural implicit representation to be scalable to large scenes. However, the regular subdivision of 3D space employed by these approaches fails to take into account the sparsity of the surface occupancy and the varying granularities of geometric details. As a result, its memory footprint grows cubically with the input volume, leading to a prohibitive computational cost even at a moderately dense decomposition. In this work, we present a learnable hierarchical implicit representation for 3D surfaces, coded OctField, that allows high-precision encoding of intricate surfaces with low memory and computational budget. The key to our approach is an adaptive decomposition of 3D scenes that only distributes local implicit functions around the surface of interest. We achieve this goal by introducing a hierarchical octree structure to adaptively subdivide the 3D space according to the surface occupancy and the richness of part geometry. As octree is discrete and non-differentiable, we further propose a novel hierarchical network that models the subdivision of octree cells as a probabilistic process and recursively encodes and decodes both octree structure and surface geometry in a differentiable manner. We demonstrate the value of OctField for a range of shape modeling and reconstruction tasks, showing superiority over alternative approaches.
| accept | Post rebuttal, the paper was the subject of extensive discussion, both between the authors and reviewers and among the reviewers themselves. The reviewers were generally in agreement on many facts about the paper, but had good-faith disagreements about where to draw the boundaries for novelty and contribution. The AC examined the reviews, rebuttals, and discussion and is inclined to agree with the reviewers advocating for acceptance. The AC is persuaded by the overall perspective of reviewers 2BPG and Nodw that the paper contributes valuable insights to the field (presented in committee discussions). Moreover, if the reviewers are satisfied that their clarity concerns can be addressed in a revision, then the AC is as well.
The AC would make the following requests:
- The authors should carefully read the comments of the reviewers and be sure each comment is addressed in the final version of the paper. The rebuttal and a promise of revision were persuasive in this case.
- The authors should add a stronger discussion of the social impact in the paper. | train | [
"SCeCVqT7G9Z",
"7US7JhUGxqd",
"1DdAzq7FkP",
"msrOKHoKpkz",
"sxh9WjIQtK-",
"PSI4QhkJeek",
"kxmsdICkee",
"yJCPz3VpDVt",
"dmKUm-FtCrW",
"kYw_DH0UnK",
"3Cnc7fYS0dl",
"DG5KJbt1K1Q",
"53fJU0A8lRF",
"6jytSdrrFLx"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1) We respectfully argue that extending voxel to implicit representation in our setting is a non-trivial task, which requires novel contributions in order to be accomplished. Since voxel is a crude approximation of the underlying shape, **OGN only needs to predict the octree itself** (i.e. the occupancy of the oc... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
"7US7JhUGxqd",
"PSI4QhkJeek",
"sxh9WjIQtK-",
"nips_2021_zvTBIFQ43Sd",
"dmKUm-FtCrW",
"kxmsdICkee",
"3Cnc7fYS0dl",
"53fJU0A8lRF",
"msrOKHoKpkz",
"6jytSdrrFLx",
"DG5KJbt1K1Q",
"nips_2021_zvTBIFQ43Sd",
"nips_2021_zvTBIFQ43Sd",
"nips_2021_zvTBIFQ43Sd"
] |
nips_2021_iNqrOCPRmYQ | The Inductive Bias of Quantum Kernels | It has been hypothesized that quantum computers may lend themselves well to applications in machine learning. In the present work, we analyze function classes defined via quantum kernels. Quantum computers offer the possibility to efficiently compute inner products of exponentially large density operators that are classically hard to compute. However, having an exponentially large feature space renders the problem of generalization hard. Furthermore, being able to evaluate inner products in high dimensional spaces efficiently by itself does not guarantee a quantum advantage, as already classically tractable kernels can correspond to high- or infinite-dimensional reproducing kernel Hilbert spaces (RKHS). We analyze the spectral properties of quantum kernels and find that we can expect an advantage if their RKHS is low dimensional and contains functions that are hard to compute classically. If the target function is known to lie in this class, this implies a quantum advantage, as the quantum computer can encode this inductive bias, whereas there is no classically efficient way to constrain the function class in the same way. However, we show that finding suitable quantum kernels is not easy because the kernel evaluation might require exponentially many measurements. In conclusion, our message is a somewhat sobering one: we conjecture that quantum machine learning models can offer speed-ups only if we manage to encode knowledge about the problem at hand into quantum circuits, while encoding the same bias into a classical model would be hard. These situations may plausibly occur when learning on data generated by a quantum process, however, they appear to be harder to come by for classical datasets.
| accept | There is a consensus among the reviewers that this is a very strong theoretical analysis of the learnability of functions using quantum kernels. The results reveal poor learnability of quantum kernels, similar to the “barren plateau” phenomenon in quantum neural networks, and provide useful guidance towards the design of effective kernels for specific problems. In summary, the paper is helpful for understanding the power of quantum kernels, and I recommend accepting this submission. | train | [
"mEPfaKJtpWo",
"54n9B2sJ1U",
"EdWr9HWT3Fb",
"_9GD7AQSgrz",
"1w6C2bvAlrJ",
"nNO1Rct1XV",
"wf8Tuz1A9YU",
"eaZxEbUvPvx",
"x0av1KjHoAs",
"ZVCnzZHqbF",
"zaWK3GfVQpD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper gives a general investigation of quantum kernel learning, where quantum computers are used to compute the kernel functions used in standard kernel learning problems. Several results are given characterizing the general behavior of quantum kernels, which together give useful guidance towards the design of... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
2,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_iNqrOCPRmYQ",
"wf8Tuz1A9YU",
"nips_2021_iNqrOCPRmYQ",
"nNO1Rct1XV",
"zaWK3GfVQpD",
"EdWr9HWT3Fb",
"mEPfaKJtpWo",
"ZVCnzZHqbF",
"nips_2021_iNqrOCPRmYQ",
"nips_2021_iNqrOCPRmYQ",
"nips_2021_iNqrOCPRmYQ"
] |
nips_2021_dFRbxGpNWw5 | An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks | Shashank Rajput, Kartik Sreenivasan, Dimitris Papailiopoulos, Amin Karbasi | accept | Solid progress in a well-studied line of research | train | [
"TBYzMKbSP5G",
"PMx-eXG-FfV",
"zSVfeZNc6PV",
"pd1c0unPjkr",
"5vpRJX_BaL",
"gYIpj8lT3O0",
"Wr70OBjF41N",
"GjCIw7GdqLs",
"gsGH925oNj",
"mM4NZ-jToC",
"FANaOlYFunU",
"6wH1xJoR-JJ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for the response. I agree that this is a good paper and am keeping my score.",
" Thank you for the quick response and the suggestions. Yes, we plan to add these two discussions to the paper, if the space permits.",
" Thank you very much for the clarification.\n\n- Our network can be easi... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"gYIpj8lT3O0",
"zSVfeZNc6PV",
"GjCIw7GdqLs",
"nips_2021_dFRbxGpNWw5",
"pd1c0unPjkr",
"FANaOlYFunU",
"nips_2021_dFRbxGpNWw5",
"6wH1xJoR-JJ",
"mM4NZ-jToC",
"nips_2021_dFRbxGpNWw5",
"nips_2021_dFRbxGpNWw5",
"nips_2021_dFRbxGpNWw5"
] |
nips_2021_XpSAvlvnMa | Pretraining Representations for Data-Efficient Reinforcement Learning | Data efficiency is a key challenge for deep reinforcement learning. We address this problem by using unlabeled data to pretrain an encoder which is then finetuned on a small amount of task-specific data. To encourage learning representations which capture diverse aspects of the underlying MDP, we employ a combination of latent dynamics modelling and unsupervised goal-conditioned RL. When limited to 100k steps of interaction on Atari games (equivalent to two hours of human experience), our approach significantly surpasses prior work combining offline representation pretraining with task-specific finetuning, and compares favourably with other pretraining methods that require orders of magnitude more data. Our approach shows particular promise when combined with larger models as well as more diverse, task-aligned observational data -- approaching human-level performance and data-efficiency on Atari in our best setting.
| accept | Taking Atari 100k as the evaluation setting, the paper demonstrates the effectiveness of pretraining an agent torso/encoder on a collection of auxiliary self-supervised losses, greatly improving the data efficiency and final performance of the agent when training on a small amount of experience on the downstream tasks (in conjunction with careful tuning of the fine-tuning learning step).
While the proposed unsupervised losses are not novel (a combination of latent dynamics modelling, goal-conditioned RL, and inverse dynamics modelling), the most impactful contribution of the submission is the thorough evaluation of the method: its extensive, carefully designed experiments, ablations, and empirical analyses, as well as the solid choice of baselines. Although limited to Atari 100k, these experiments are well described and discussed, forming a very solid foundation for eventual follow-up work investigating whether these results generalize to other domains or training regimes.
I believe the paper will be of great interest to the broader community, as it sheds light on viable training regimes for agents in real-world applications, where access to real environment experience is scarce. | train | [
"pa5fJOPpnfg",
"YJJy1l8_ioB",
"HAjEZaf4XfX",
"C4Zfub_BUnw",
"FBHlUajWlSZ",
"nuMJBJRXCTt",
"yBoGTzDQj_5",
"LMjbLXXQkiw",
"POjamxjGkR"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response and acknowledge that most of my concerns have been addressed. While I still think running experiments on a different domain would strengthen the paper, I believe the empirical study is thorough enough in its current form to be a valuable contribution at NeurIPS. I agree with... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"HAjEZaf4XfX",
"POjamxjGkR",
"LMjbLXXQkiw",
"yBoGTzDQj_5",
"nuMJBJRXCTt",
"nips_2021_XpSAvlvnMa",
"nips_2021_XpSAvlvnMa",
"nips_2021_XpSAvlvnMa",
"nips_2021_XpSAvlvnMa"
] |
nips_2021_qLpJ0VWRuWk | Universal Approximation Using Well-Conditioned Normalizing Flows | Normalizing flows are a widely used class of latent-variable generative models with a tractable likelihood. Affine-coupling models [Dinh et al., 2014, 2016] are a particularly common type of normalizing flows, for which the Jacobian of the latent-to-observable-variable transformation is triangular, allowing the likelihood to be computed in linear time. Despite the widespread usage of affine couplings, the special structure of the architecture makes understanding their representational power challenging. The question of universal approximation was only recently resolved by three parallel papers [Huang et al., 2020, Zhang et al., 2020, Koehler et al., 2020] – who showed reasonably regular distributions can be approximated arbitrarily well using affine couplings – albeit with networks with a nearly-singular Jacobian. As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows? In this paper, we show that any log-concave distribution can be approximated using well-conditioned affine-coupling flows. In terms of proof techniques, we uncover and leverage deep connections between affine coupling architectures, underdamped Langevin dynamics (a stochastic differential equation often used to sample from Gibbs measures) and Hénon maps (a structured dynamical system that appears in the study of symplectic diffeomorphisms). In terms of informing practice, we approximate a padded version of the input distribution with iid Gaussians – a strategy which Koehler et al. [2020] empirically observed to result in better-conditioned flows, but had hitherto no theoretical grounding. Our proof can thus be seen as providing theoretical evidence for the benefits of Gaussian padding when training normalizing flows.
| accept | The paper introduces a universal approximation proof for normalizing flows on log-concave distributions when the Jacobians are not ill-conditioned.
While the scope of the paper remains limited (log-concave distributions), the submission gives a clear exposition of its techniques and could provide a strong foundation for future theoretical studies of flow-based models.
I'm recommending the submission for acceptance as a poster. | train | [
"wUlaHJoYxz_",
"56eOcCRYur4",
"YQ8JYvQIHiH",
"o944Xb4rFEG",
"W8kAqLxdF1X",
"r0KvtR-Tma9",
"XpZ9Et6sa3",
"6mgvM_Drogz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply. \n\nAlthough the scope of the theoretical result is still limited to log-concave distributions, the problem the authors seek to tackle seems innately non-trivial. The authors have addressed most of my concerns and answered my questions. I believe the general direction and theoretical tec... | [
-1,
6,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
2,
3
] | [
"W8kAqLxdF1X",
"nips_2021_qLpJ0VWRuWk",
"6mgvM_Drogz",
"nips_2021_qLpJ0VWRuWk",
"56eOcCRYur4",
"XpZ9Et6sa3",
"nips_2021_qLpJ0VWRuWk",
"nips_2021_qLpJ0VWRuWk"
] |
nips_2021_goEdyJ_nVQI | On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs) | It is generally recognized that finite learning rate (LR), in contrast to infinitesimal LR, is important for good generalization in real-life deep nets. Most attempted explanations propose approximating finite-LR SGD with Itô Stochastic Differential Equations (SDEs), but formal justification for this approximation (e.g., Li et al., 2019) only applies to SGD with tiny LR. Experimental verification of the approximation appears computationally infeasible. The current paper clarifies the picture with the following contributions: (a) An efficient simulation algorithm SVAG that provably converges to the conventionally used Itô SDE approximation. (b) A theoretically motivated testable necessary condition for the SDE approximation and its most famous implication, the linear scaling rule (Goyal et al., 2017), to hold. (c) Experiments using this simulation to demonstrate that the previously proposed SDE approximation can meaningfully capture the training and generalization properties of common deep nets.
| accept | The paper proposes a practical algorithm called SVAG, which is shown to converge in the weak sense to the SDE approximation of SGD, even for moderately large step sizes.
The authors demonstrate its performance via experiments on neural networks, and show that SVAG can diagnose whether SGD and its SDE approximation have similar behaviors. The authors further investigate the conditions under which the SDE approximation fails.
The motivation of this paper is clear, and very timely. All the assumptions and the results are presented in a clear way.
All reviewers agree that this paper is well-written and its results are interesting. The authors should address the reviewers' suggestions in the final version of their work. Specifically, the authors should add a discussion of the limitations of the experiments, amend the statement about SVAG converging for small values of l, clarify the bound in Li et al., 2019, and add more intuition about the importance of scale invariance. | test | [
"-mgoGX9ZQZD",
"-_5z2BoEElF",
"tkCNq72I36c",
"XzYVXvdMTXK",
"bIAKfqbYeVy",
"hezDM0UhUVJ",
"6XLSXL5qibR",
"u6UKhhIOKrJ",
"21IRnS7Lafw",
"xTBCxS4YUS",
"nULCGP9ceuN",
"AaAuGLbRbNi",
"gRPyX9yZXO",
"yFdepQcUnGc",
"DzBWm63-nqB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"SDE has been used as an approximation to help understand the implicit bias of SGD. This paper studies the validity of such approximations, which hasn't been formalized in prior work for realistic choices of learning rates. The contributions are three-folds:\n - Proposing \"Stochastic Variance Amplified Gradient\"... | [
6,
-1,
-1,
-1,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_goEdyJ_nVQI",
"tkCNq72I36c",
"u6UKhhIOKrJ",
"gRPyX9yZXO",
"nips_2021_goEdyJ_nVQI",
"nips_2021_goEdyJ_nVQI",
"21IRnS7Lafw",
"nULCGP9ceuN",
"xTBCxS4YUS",
"yFdepQcUnGc",
"-mgoGX9ZQZD",
"DzBWm63-nqB",
"bIAKfqbYeVy",
"hezDM0UhUVJ",
"nips_2021_goEdyJ_nVQI"
] |
nips_2021_5rm0b_fsNZ | Proportional Participatory Budgeting with Additive Utilities | We study voting rules for participatory budgeting, where a group of voters collectively decides which projects should be funded using a common budget. We allow the projects to have arbitrary costs, and the voters to have arbitrary additive valuations over the projects. We formulate two axioms that guarantee proportional representation to groups of voters with common interests. To the best of our knowledge, all known rules for participatory budgeting do not satisfy either of the two axioms; in addition we show that the most prominent proportional rule for committee elections, Proportional Approval Voting, cannot be adapted to arbitrary costs nor to additive valuations so that it would satisfy our axioms of proportionality. We construct a simple and attractive voting rule that satisfies one of our axioms (for arbitrary costs and arbitrary additive valuations), and that can be evaluated in polynomial time. We prove that our other stronger axiom is also satisfiable, though by a computationally more expensive and less natural voting rule.
| accept | This is probably the strongest paper in my batch (as AC). | train | [
"vHRopevXDJ6",
"J9Wti67rSQp",
"mpV09PShr9",
"4NEJsDR4PGa",
"GgMJMEkZPFN",
"pwNEu4-iEMF"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the proportional participatory budgeting model, where the projects have arbitrary costs and the voters have additive utilities. In this classic model of PB, the paper studies fairness concept and explore whether existing rules satisfy certain fairness guarantee. Most of the results in this paper... | [
6,
-1,
-1,
-1,
6,
8
] | [
4,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_5rm0b_fsNZ",
"pwNEu4-iEMF",
"vHRopevXDJ6",
"GgMJMEkZPFN",
"nips_2021_5rm0b_fsNZ",
"nips_2021_5rm0b_fsNZ"
] |
nips_2021_H6y7EAf7s4P | Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect | The “cold posterior effect” (CPE) in Bayesian deep learning describes the disturbing observation that the predictive performance of Bayesian neural networks can be significantly improved if the Bayes posterior is artificially sharpened using a temperature parameter T <1. The CPE is problematic in theory and practice and since the effect was identified many researchers have proposed hypotheses to explain the phenomenon. However, despite this intensive research effort the effect remains poorly understood. In this work we provide novel and nuanced evidence relevant to existing explanations for the cold posterior effect, disentangling three hypotheses: 1. The dataset curation hypothesis of Aitchison (2020): we show empirically that the CPE does not arise in a real curated data set but can be produced in a controlled experiment with varying curation strength. 2. The data augmentation hypothesis of Izmailov et al. (2021) and Fortuin et al. (2021): we show empirically that data augmentation is sufficient but not necessary for the CPE to be present. 3. The bad prior hypothesis of Wenzel et al. (2020): we use a simple experiment evaluating the relative importance of the prior and the likelihood, strongly linking the CPE to the prior. Our results demonstrate how the CPE can arise in isolation from synthetic curation, data augmentation, and bad priors. Cold posteriors observed “in the wild” are therefore unlikely to arise from a single simple cause; as a result, we do not expect a simple “fix” for cold posteriors.
| accept | Another nice contribution to the ongoing community effort of understanding Bayesian deep learning.
All reviewers liked the paper.
Accept.
| train | [
"d7B0E2XmXRl",
"Qa_Y2N2tEW",
"YE9loYTHEc5",
"ys73kOo_CBb",
"IE5cX1HSLCB",
"q2GXkMZNB69",
"OIdMgSBHMRV",
"bHq9ZZnYQiz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" That you for taking the time to write the response. I will keep my score.",
" Thanks for your comments. I will keep my original score.",
" We would like to thank the reviewer for their positive feedback and the valuable thoughts on our results. We will address the reviewer’s questions one by one.\n\n**1) “Fig... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"ys73kOo_CBb",
"IE5cX1HSLCB",
"OIdMgSBHMRV",
"q2GXkMZNB69",
"bHq9ZZnYQiz",
"nips_2021_H6y7EAf7s4P",
"nips_2021_H6y7EAf7s4P",
"nips_2021_H6y7EAf7s4P"
] |
nips_2021_WL7pr00_fnJ | Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? | There have been long-standing controversies and inconsistencies over the experiment setup and criteria for identifying the "winning ticket" in literature. To reconcile such, we revisit the definition of lottery ticket hypothesis, with comprehensive and more rigorous conditions. Under our new definition, we show concrete evidence to clarify whether the winning ticket exists across the major DNN architectures and/or applications. Through extensive experiments, we perform quantitative analysis on the correlations between winning tickets and various experimental factors, and empirically study the patterns of our observations. We find that the key training hyperparameters, such as learning rate and training epochs, as well as the architecture characteristics such as capacities and residual connections, are all highly correlated with whether and when the winning tickets can be identified. Based on our analysis, we summarize a guideline for parameter settings in regards of specific architecture characteristics, which we hope to catalyze the research progress on the topic of lottery ticket hypothesis. Our codes are publicly available at: https://github.com/boone891214/sanity-check-LTH.
| accept | This paper investigates the impact of hyperparameters on lottery ticket performance and uses these analyses to provide guidance as to how to best select these hyperparameters. All 3 reviewers praised the paper for its comprehensive and compelling experiments as well as its clarity. There was some concern regarding the stringent definitions used in this work, but clarifications in the discussion were sufficient to resolve these concerns so long as the authors add additional discussion to the manuscript. Given the rigor of this work and its relevance to the lottery ticket community, I recommend this paper is accepted. | val | [
"B9IcmTIXbfL",
"A9Iku5nBmwK",
"zKxLJLBK0A",
"LNQISzjYcj",
"Ep_wu70w6yd",
"Iw9tjHANKgK",
"T91ZOlVuOuK",
"o5mQJJG8F1O",
"uT1zRWIarTB"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We want to thank you for your appreciation of our work and for raising the score! Your comments are very constructive, e.g., the discussion of the Jackpot winning tickets, the decoupled learning rates for LTH training, and the accuracy drop metric, etc. We will complete the experiments of decoupled learning rate... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"zKxLJLBK0A",
"nips_2021_WL7pr00_fnJ",
"LNQISzjYcj",
"A9Iku5nBmwK",
"uT1zRWIarTB",
"A9Iku5nBmwK",
"o5mQJJG8F1O",
"nips_2021_WL7pr00_fnJ",
"nips_2021_WL7pr00_fnJ"
] |
nips_2021_35wwc2nc1a4 | Collaborative Causal Discovery with Atomic Interventions | Raghavendra Addanki, Shiva Kasiviswanathan | accept | All reviewers are favorable after author responses and discussion.
The paper considers collaboratively learning different MAGs which share a clustering property. The authors show an efficient way to learn all the MAGs with the smallest number of atomic interventions per entity; the number of interventions scales with the degree of the MAG and logarithmically in the number of MAGs.
The collaborative setting is novel, and the interventional complexity under the clustering assumption is very interesting. This significantly expands the scope of learning through interventions to causal models that may be related but different (apart from being just intervened versions of each other).
Note: Reviewer 8Ccq indicated that she/he would raise the score to 7 but did not follow through after the discussion period. | train | [
"NydJptaSLLH",
"43MUbar4Kj7",
"KwAxp2LE5yQ",
"zeEQ3na_Wkk",
"0eOAxVxNshm",
"arYv25QdKCR",
"hFeAajfSDTn",
"osk2jCWvjNP",
"f5RJh-csItz",
"mpkkUQSGLb7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their clarifications about all the issues raised. I appreciate the author's comments about the multi-environment setting. There seems some potential to extend the framework to a multi-environment setup as given in https://proceedings.neurips.cc/paper/2017/file/62889e73828c756... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"KwAxp2LE5yQ",
"osk2jCWvjNP",
"mpkkUQSGLb7",
"f5RJh-csItz",
"hFeAajfSDTn",
"nips_2021_35wwc2nc1a4",
"nips_2021_35wwc2nc1a4",
"nips_2021_35wwc2nc1a4",
"nips_2021_35wwc2nc1a4",
"nips_2021_35wwc2nc1a4"
] |
nips_2021_P9_gOq5w7Eb | Towards optimally abstaining from prediction with OOD test examples | A common challenge across all areas of machine learning is that training data is not distributed like test data, due to natural shifts or adversarial examples; such examples are referred to as out-of-distribution (OOD) test examples. We consider a model where one may abstain from predicting, at a fixed cost. In particular, our transductive abstention algorithm takes labeled training examples and unlabeled test examples as input, and provides predictions with optimal prediction loss guarantees. The loss bounds match standard generalization bounds when test examples are i.i.d. from the training distribution, but add an additional term that is the cost of abstaining times the statistical distance between the train and test distribution (or the fraction of adversarial examples). For linear regression, we give a polynomial-time algorithm based on Celis-Dennis-Tapia optimization algorithms. For binary classification, we show how to efficiently implement it using a proper agnostic learner (i.e., an Empirical Risk Minimizer) for the class of interest. Our work builds on recent work of Goldwasser, Kalais, and Montasser (2020) who gave error and abstention guarantees for transductive binary classification.
| accept | This paper considers a setting in which one may have different train/test distributions, but the predictor can choose to abstain at a cost of \alpha. All reviewers believed that the results were interesting, non-trivial, and clearly-stated, and all four recommended acceptance. What criticisms there were were adequately addressed in the author response.
Please be sure to make any changes that have been promised to the reviewers, and seriously consider attempting to address any other concerns that they might have raised, in particular UTdk's request for concrete examples. | val | [
"n5o91NbJxu2",
"7Eec2J_xWes",
"LVX9L9IaNK8",
"e4065oGdi2v",
"Qlu1T_FqKdl",
"tmf39a-nZka",
"mUFzjQnWdpv",
"Kw8N2MAvsZ",
"aS5dAgp3jD0"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers prediction with the abstention option, with a reduced loss for abstention (Chow's model), in a setting where the test distribution can be different from the training distribution. The setting is very similar to GKKM20 except that the loss here is different (GKKM separately bounds standard risk ... | [
6,
-1,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2021_P9_gOq5w7Eb",
"e4065oGdi2v",
"aS5dAgp3jD0",
"Kw8N2MAvsZ",
"n5o91NbJxu2",
"mUFzjQnWdpv",
"nips_2021_P9_gOq5w7Eb",
"nips_2021_P9_gOq5w7Eb",
"nips_2021_P9_gOq5w7Eb"
] |
nips_2021_z-l1kpDXs88 | TokenLearner: Adaptive Space-Time Tokenization for Videos | In this paper, we introduce a novel visual representation learning which relies on a handful of adaptively learned tokens, and which is applicable to both image and video understanding tasks. Instead of relying on hand-designed splitting strategies to obtain visual tokens and processing a large number of densely sampled patches for attention, our approach learns to mine important tokens in visual data. This results in efficiently and effectively finding a few important visual tokens and enables modeling of pairwise attention between such tokens, over a longer temporal horizon for videos, or the spatial content in image frames. Our experiments demonstrate strong performance on several challenging benchmarks for video recognition tasks. Importantly, due to our tokens being adaptive, we accomplish competitive results at significantly reduced computational cost. We establish new state-of-the-arts on multiple video datasets, including Kinetics-400, Kinetics-600, Charades, and AViD.
| accept | After the rebuttal period, all reviewers rated this paper as past the threshold for acceptance.
The authors did a good job of addressing questions of novelty during the rebuttal, and the additional experiments were well received by reviewers. The authors have promised to clean and polish some parts of the manuscript flagged by reviewers; please do this.
The AC recommends acceptance | train | [
"Cy9quUyR8DM",
"yWP0PqQmJz3",
"TheZRhD4b5Z",
"0azTcpiiWm",
"tLsnKsSJGFT",
"3YVa-W9MKW0",
"sTW566vIapA",
"Ok2g7iS63q"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces a dynamic tokenizer that constructs the tokens using a set of convolutional layers, each selecting regions of interest. This is then combine with a space-time like attention mechanism. The idea is intersecting but important experiments are missing. On novelty:\nThe idea of generating tokens d... | [
6,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2021_z-l1kpDXs88",
"Ok2g7iS63q",
"Cy9quUyR8DM",
"sTW566vIapA",
"3YVa-W9MKW0",
"nips_2021_z-l1kpDXs88",
"nips_2021_z-l1kpDXs88",
"nips_2021_z-l1kpDXs88"
] |
nips_2021_Q2R6noQ3tn5 | Learning in Multi-Stage Decentralized Matching Markets | Matching markets are often organized in a multi-stage and decentralized manner. Moreover, participants in real-world matching markets often have uncertain preferences. This article develops a framework for learning optimal strategies in such settings, based on a nonparametric statistical approach and variational analysis. We propose an efficient algorithm, built upon concepts of "lower uncertainty bound" and "calibrated decentralized matching," for maximizing the participants' expected payoff. We show that there exists a welfare-versus-fairness trade-off that is characterized by the uncertainty level of acceptance. Participants will strategically act in favor of a low uncertainty level to reduce competition and increase expected payoff. We prove that participants can be better off with multi-stage matching compared to single-stage matching. We demonstrate aspects of the theoretical predictions through simulations and an experiment using real data from college admissions.
| accept | Reviewers were supportive of the present submission, appreciating its approach to a general problem faced in modern two-sided markets. Reviewers supported the model at a high level, and appreciated the cleanliness of the results proved about that model. Still, there were concerns raised by some reviewers about the definition of and exposition surrounding fairness (auj5, GLYK) and its application to particular settings (e.g., can preferences of the arms change, and so on; see reviews). Some responses to the reviewers' questions were not satisfactory; for example, when asked about the complexity of their results, the authors responded with:
-- "We adopted the use of NP-completeness by following the literature; see, e.g., Section 2 of Milgrom (2017, Discovering prices. Columbia University Press). However, for clarity, we will change the writing in lines (155, 181) from NP-completeness to computational intractable."
That citation is a 200+ page general book, and referring to Section 2 of it does not answer the reviewer's question. Other examples abound. Still, the rebuttal did address many of the reviewers' concerns. Should this paper be accepted, this AC would very much appreciate the authors spending substantial time with the reviewers' comments (especially auj5, zGHc, and GLYK) before finalizing their work. | train | [
"DP_nf3XRtax",
"iHuoVK6n3Hh",
"SwqyMH0uYDT",
"mTYGymXZjHZ",
"vq8kHZFMIc",
"jBx--H9QfYv",
"0Kwv30mMvRc",
"pXsKAGlrRp",
"Pqt_i9_p-n",
"ExllLQeXmjQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We want to thank the reviewer for the positive feedback and thank the reviewer for updating the score.",
" I would like to thank the authors for the great effort they put in their response. Conditional on implementing the suggested revisions, I found the response to most of my comments convincing enough. As a r... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iHuoVK6n3Hh",
"0Kwv30mMvRc",
"nips_2021_Q2R6noQ3tn5",
"ExllLQeXmjQ",
"pXsKAGlrRp",
"Pqt_i9_p-n",
"SwqyMH0uYDT",
"nips_2021_Q2R6noQ3tn5",
"nips_2021_Q2R6noQ3tn5",
"nips_2021_Q2R6noQ3tn5"
] |
nips_2021__6j_jQiYB2c | Non-asymptotic convergence bounds for Wasserstein approximation using point clouds | Quentin Mérigot, Filippo Santambrogio, Clément SARRAZIN | accept | The reviewers recognize the theoretical interest of the authors' contribution, bounding in particular the error done after one step of gradient descent to minimize Wasserstein distance between a given density and a uniform discrete N-point distribution. They however raise valid concerns, particularly on the complexity needed to compute such gradients, on how it would rely on obtention of samples from the density being approximated. These are important aspects that are not treated in the paper. The reviewers also point to the lack of a discussion contrasting the guarantee they obtain to the one that results from taking as an approximating point cloud an N-sample from the target density.
It therefore appears that the paper contains a very promising set of results, but is at this stage too preliminary and not ready yet for publication at NeurIPS. | train | [
"jsWNyWWBa2",
"AYYuzwA3qf_",
"CTwtmki0sSH",
"j0LZJSGPIKd",
"FXcmDZuqkko",
"6sS7myIogU",
"o4hx9ke57G6",
"dZUthAWixD5",
"JbstFZxjMPY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies approximation of continuous measures in Wasserstein distance using point clouds, which define empirical distributions over their support. Optimizing over the locations of the points in these clouds results in a nonconvex optimization problem. Despite this, the authors develop a Polyak-Lojasiewic... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
4,
2,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021__6j_jQiYB2c",
"nips_2021__6j_jQiYB2c",
"6sS7myIogU",
"AYYuzwA3qf_",
"JbstFZxjMPY",
"dZUthAWixD5",
"jsWNyWWBa2",
"nips_2021__6j_jQiYB2c",
"nips_2021__6j_jQiYB2c"
] |
nips_2021_1dq2MVDXot- | Understanding Interlocking Dynamics of Cooperative Rationalization | Selective rationalization explains the prediction of complex neural networks by finding a small subset of the input that is sufficient to predict the neural model output. The selection mechanism is commonly integrated into the model itself by specifying a two-component cascaded system consisting of a rationale generator, which makes a binary selection of the input features (which is the rationale), and a predictor, which predicts the output based only on the selected features. The components are trained jointly to optimize prediction performance. In this paper, we reveal a major problem with such cooperative rationalization paradigm --- model interlocking. Interlocking arises when the predictor overfits to the features selected by the generator thus reinforcing the generator's selection even if the selected rationales are sub-optimal. The fundamental cause of the interlocking problem is that the rationalization objective to be minimized is concave with respect to the generator’s selection policy. We propose a new rationalization framework, called A2R, which introduces a third component into the architecture, a predictor driven by soft attention as opposed to selection. The generator now realizes both soft and hard attention over the features and these are fed into the two different predictors. While the generator still seeks to support the original predictor performance, it also minimizes a gap between the two predictors. As we will show theoretically, since the attention-based predictor exhibits a better convexity property, A2R can overcome the concavity barrier. Our experiments on two synthetic benchmarks and two real datasets demonstrate that A2R can significantly alleviate the interlock problem and find explanations that better align with human judgments.
| accept | This paper addresses the problem of selective rationalization, revealing that the usual cooperative rationalization paradigm suffers from model interlocking — this arises when the predictor overfits to the features selected by the generator thus reinforcing the generator's selection even if the selected rationales are sub-optimal. To sidestep this, the paper proposes a new rationalization framework (A2R), which introduces a third component into the architecture, a predictor driven by soft attention as opposed to selection. Experiments on two synthetic benchmarks and two real datasets demonstrate that A2R can significantly alleviate the interlock problem.
This is a solid paper that proposes a simple strategy to solve an important practical problem with rationalizers. While reviewers pointed out some weaknesses (lack of clarity and need for better motivation, no sensitivity analysis for the lambda coefficient), some of these concerns have been successfully alleviated in the rebuttal, with new results for dependency on lambda. Therefore I recommend acceptance. I urge the authors to take into account the reviewers’ comments when preparing the final version. | test | [
"k5wyviQNujz",
"qWGUzW87BHh",
"yb13AxPtKO_",
"c92DnbsCZlA",
"1XoHYIfDWQ",
"UXEflkrAn4-",
"n0KrtO3S2hN",
"J4sNlo99tS",
"YQmhrSVipAD",
"YLd5qzf3Qxg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper analysis the problem of interlocking in cooperative rationalization, provide both theoretical and practical analysis for this problem, and derive a potential way to solve it:\n- The paper first shows theoretically, that, when formulating selective rationalization as a two player cooperative game, where a... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"nips_2021_1dq2MVDXot-",
"n0KrtO3S2hN",
"UXEflkrAn4-",
"YLd5qzf3Qxg",
"k5wyviQNujz",
"YQmhrSVipAD",
"J4sNlo99tS",
"nips_2021_1dq2MVDXot-",
"nips_2021_1dq2MVDXot-",
"nips_2021_1dq2MVDXot-"
] |
nips_2021_MqCzSKCQ1QB | Adversarial Robustness without Adversarial Training: A Teacher-Guided Curriculum Learning Approach | Current SOTA adversarially robust models are mostly based on adversarial training (AT) and differ only by some regularizers either at inner maximization or outer minimization steps. Being repetitive in nature during the inner maximization step, they take a huge time to train. We propose a non-iterative method that enforces the following ideas during training. Attribution maps are more aligned to the actual object in the image for adversarially robust models compared to naturally trained models. Also, the allowed set of pixels to perturb an image (that changes model decision) should be restricted to the object pixels only, which reduces the attack strength by limiting the attack space. Our method achieves significant performance gains with a little extra effort (10-20%) over existing AT models and outperforms all other methods in terms of adversarial as well as natural accuracy. We have performed extensive experimentation with CIFAR-10, CIFAR-100, and TinyImageNet datasets and reported results against many popular strong adversarial attacks to prove the effectiveness of our method.
| accept | This work achieves adversarial robustness without adversarial training by adding a "refinement" phase to a network trained with an alignment loss. The idea of the refinement is to force the model to be robust to perturbations outside of the detected item, and it is done via curriculum learning, using a threshold on saliencies as a mask. Avoiding iterative training allows the computational cost of SOTA robustness to be lowered significantly.
I think the discussion with reviewer Wty9 captures the situation well: while the paper can be considered an empirically better way to force a NN to focus on the actually important part of an image, the empirical evidence is convincing and an ablation study is performed. I'd love a discussion of whether or not the robustness differs from that obtained with iterative methods.
Also, adding results on ImageNet would improve the paper, and mentioning the low variance in the legends of the tables rather than in the text would be good.
| train | [
"IOat-YFADWJ",
"gJxZnsZIlsX",
"b_5fsftV-2Y",
"uJ2CMPvS6D",
"TIvYMYxgE7j",
"272Jfpcub_",
"RkrDdhnpTi",
"PHgElHiFm9K",
"nRMkBi8aE7",
"uzptOqUDIT4",
"ujLFw0w5V5X",
"F8_cbxqplzX",
"yemXQFLvTDT",
"0VzwCJiIlyg",
"XlRoKgwHFie",
"c-GtUwyJ4Kf",
"W4E_LdZd0Gw",
"tafXEg52Ufq",
"oBAB2BY6rW2",... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Thank you for sharing your thoughts.\n\n> R1-1: ...how important it is for the \"suitable teacher\" to be adversarially trained. Results in Tables 3 and 4 still all use adversarially trained teachers. If the claim is to achieve very fast robustness, then It would be interesting to see what the robustness is under... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"gJxZnsZIlsX",
"RkrDdhnpTi",
"PHgElHiFm9K",
"nips_2021_MqCzSKCQ1QB",
"XlRoKgwHFie",
"c-GtUwyJ4Kf",
"W4E_LdZd0Gw",
"ujLFw0w5V5X",
"nips_2021_MqCzSKCQ1QB",
"nips_2021_MqCzSKCQ1QB",
"nRMkBi8aE7",
"nRMkBi8aE7",
"oBAB2BY6rW2",
"nips_2021_MqCzSKCQ1QB",
"8CgSS-HeSrZ",
"oBAB2BY6rW2",
"tafXEg... |
nips_2021_a4WgjcLeZIn | Tactical Optimism and Pessimism for Deep Reinforcement Learning | In recent years, deep off-policy actor-critic algorithms have become a dominant approach to reinforcement learning for continuous control. One of the primary drivers of this improved performance is the use of pessimistic value updates to address function approximation errors, which previously led to disappointing performance. However, a direct consequence of pessimism is reduced exploration, running counter to theoretical support for the efficacy of optimism in the face of uncertainty. So which approach is best? In this work, we show that the most effective degree of optimism can vary both across tasks and over the course of learning. Inspired by this insight, we introduce a novel deep actor-critic framework, Tactical Optimistic and Pessimistic (TOP) estimation, which switches between optimistic and pessimistic value learning online. This is achieved by formulating the selection as a multi-arm bandit problem. We show in a series of continuous control tasks that TOP outperforms existing methods which rely on a fixed degree of optimism, setting a new state of the art in challenging pixel-based environments. Since our changes are simple to implement, we believe these insights can easily be incorporated into a multitude of off-policy algorithms.
| accept | This is a solid piece of work, and the reviewers agreed in the end. The paper is well written, it unifies two views on exploration, the experiments are well done, and the authors did a good job both (a) responding to the reviews and (b) improving on the previous submission. There were some concerns about the work being incremental, but consensus emerged that the work exhibits a high degree of correctness (which is critical, and not as common as it should be) and that the paper provides useful insights and foundations for future work. Another reviewer was generally favourable after discussion but did not update their score to reflect that. Given the relevance and difficulty of efficient exploration, this work will make a welcome addition to the NeurIPS program.
There were some important missing details and notational issues flagged by one reviewer. I am confident these can be addressed easily based on the author response. | train | [
"5H-wGor-wan",
"DW75I9wnBgU",
"WaqQ2mCdHY",
"geU228T1aH5",
"x1JMnkKkPfJ",
"nt83OTmE_v-",
"OCqsVIcuTa",
"IBk1Eqc-Jlp",
"o2-P0xEmzAA",
"LWAnKWjcXWe",
"sH-nekw9OO",
"R4BIYwHX6CM",
"LFIWxrQ8Hx"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your insightful comments and questions. \n\nRe: notation of the random return. Yes, absolutely—this is a good point. We initially erred on the side of convention, but agree that the proper description is preferable. \n\nRe: multiple value estimates. Thank you, this is also an interesting p... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
7,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"DW75I9wnBgU",
"OCqsVIcuTa",
"nips_2021_a4WgjcLeZIn",
"LWAnKWjcXWe",
"nt83OTmE_v-",
"o2-P0xEmzAA",
"LFIWxrQ8Hx",
"R4BIYwHX6CM",
"sH-nekw9OO",
"WaqQ2mCdHY",
"nips_2021_a4WgjcLeZIn",
"nips_2021_a4WgjcLeZIn",
"nips_2021_a4WgjcLeZIn"
] |
nips_2021_9RFGrW9z9te | Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning | Siyuan Zhang, Nan Jiang | accept | The manuscript examines the question of how to improve policy selection in the off-line RL setting. Typically offline policy selection is approached via off-policy evaluation (OPE), aimed at estimating the expected return of candidate policies. OPE is itself a difficult problem that typical requires hyperparameter tuning and selection itself. The paper develops moves closer to a hyperparameter-free method and demonstrates the effectiveness of the algorithm in the context of standardized offline datasets (e.g. RLUnplugged for Atari).
The algorithm for policy selection is built using insights from the recently published Batch Value-Function Tournament (BVFT) approach to estimating the best value function from among a set of candidates.
They make comparisons to well-developed OPE-style methods such as fitted Q-evaluation and show clear advantages in data efficiency and policy selection.
The manuscript examines applying the approach to a wide range of settings (from Atari to continuous control) and to a range of policies produced by a variety of algorithms. The ideas, theory, and experiments are well motivated by the text. Taken together, the manuscript provides a promising look at a fundamental and open problem in RL.
| train | [
"oEQCr2fhAyp",
"VJDdpqBHVuJ",
"BAj-hik2L8M",
"iVtA87DQdIg",
"UYbD_zOQo_",
"iaFAGGnkD4U",
"a8-XSi3L1kE",
"qhUSUMdbgB4",
"Z91X6jYf_83",
"6KHd5W0wakx",
"7bAvo_8jRP2",
"qAJwx-FO0lQ",
"n2zlS2uhY_x",
"Q1G2GmGZzaC",
"_UXzyqKb4q",
"rg_ssWzXwdt",
"p8AZZa6KHM",
"u56fokVxZA"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_rev... | [
" Thank you for all the discussion and clarification. I am assuming some of this can be included in the camera ready copy, and have increased my score to 8.",
"Batch Value Function Tournament (BVFT) is an existing approach to offline policy selection that uses piecewise linear value function approximations and se... | [
-1,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
9
] | [
-1,
4,
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3
] | [
"iaFAGGnkD4U",
"nips_2021_9RFGrW9z9te",
"rg_ssWzXwdt",
"nips_2021_9RFGrW9z9te",
"qAJwx-FO0lQ",
"a8-XSi3L1kE",
"qhUSUMdbgB4",
"Z91X6jYf_83",
"6KHd5W0wakx",
"7bAvo_8jRP2",
"n2zlS2uhY_x",
"iVtA87DQdIg",
"_UXzyqKb4q",
"u56fokVxZA",
"VJDdpqBHVuJ",
"p8AZZa6KHM",
"nips_2021_9RFGrW9z9te",
... |
nips_2021_4fLr7H5D_eT | FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout | Federated Learning (FL) has been gaining significant traction across different ML tasks, ranging from vision to keyboard predictions. In large-scale deployments, client heterogeneity is a fact and constitutes a primary problem for fairness, training performance and accuracy. Although significant efforts have been made to tackle statistical data heterogeneity, the diversity in the processing capabilities and network bandwidth of clients, termed system heterogeneity, has remained largely unexplored. Current solutions either disregard a large portion of available devices or set a uniform limit on the model's capacity, restricted by the least capable participants. In this work, we introduce Ordered Dropout, a mechanism that achieves an ordered, nested representation of knowledge in Neural Networks and enables the extraction of lower-footprint submodels without the need for retraining. We further show that for linear maps our Ordered Dropout is equivalent to SVD. We employ this technique, along with a self-distillation methodology, in the realm of FL in a framework called FjORD. FjORD alleviates the problem of client system heterogeneity by tailoring the model width to the client's capabilities. Extensive evaluation on both CNNs and RNNs across diverse modalities shows that FjORD consistently leads to significant performance gains over state-of-the-art baselines while maintaining its nested structure.
| accept | This paper develops new techniques for tackling system heterogeneity (i.e., diversity in the processing capabilities and network bandwidth of clients) in federated learning. The paper highlights the practical issue of system heterogeneity in federated learning that is often ignored in many papers. All the reviewers agree that the paper has interesting ideas and solves an important problem, and are in favor of accepting the paper. Along with data heterogeneity, systematic study of system heterogeneity can potentially have significant practical value and can be of interest to a wider audience. I recommend acceptance of the paper. I suggest the authors address the concerns of the reviewers in the final revision. | train | [
"5m7jVmR04wL",
"F0u8IzFOD4",
"j_1f-jG6c-n",
"ZLsoaMJXlKf",
"cK7umYvwCR",
"ZYPhhUya3Am",
"VxzmvJsuPcH",
"3rzc1p15_NZ",
"ntc2rVK5GO",
"zpwoo2DAxT",
"vT5W_gEJGxz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper offers a dropout approach to account for different client resources. The idea is simply through pruning the network where clients fit smaller networks. The papers also offers an approach called ordered dropout where end layers are dropped rather than random neurons. The paper idea is interesting. Belo... | [
7,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_4fLr7H5D_eT",
"zpwoo2DAxT",
"ZLsoaMJXlKf",
"VxzmvJsuPcH",
"nips_2021_4fLr7H5D_eT",
"cK7umYvwCR",
"3rzc1p15_NZ",
"cK7umYvwCR",
"vT5W_gEJGxz",
"5m7jVmR04wL",
"nips_2021_4fLr7H5D_eT"
] |
nips_2021_yMf3SLah5-y | Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings | Ming Yin, Yu-Xiang Wang | accept | The paper studies the uniform OPE problem in the offline RL setting, with both upper and lower bounds. Most of the reviewers believe the paper is well written. There is one major concern about the epsilon range. The current results give the bound for eps<1/sqrt{S}. The paper would be much stronger if results for eps>1/sqrt{S} were also provided. | val | [
"_0X1FGHolCs",
"mdHqVTPH6pz",
"wVLmt_CoGps",
"7aoBDdYho_5",
"S_5DC0mkJ5E",
"dalZjZCvrIT",
"rWZbhCOyYGS",
"zz_29eWfjih",
"NliJsz1H3M5",
"J7eBA1FqWJi",
"3UvFTACWPFp",
"C7uCfs3d82",
"0k6fY9dzwXG"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hello reviewer rLZC, since we didn’t hear any response from you, as the last message we want to mention our results are tight in the minimax sense. Our Thm 3.1 is tight since it has quadratic dependence on $S$ (Note for Lower bound, the **larger** lower bound means **tighter** result). Our local result 4.1/4.2 is... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5,
5
] | [
"mdHqVTPH6pz",
"wVLmt_CoGps",
"7aoBDdYho_5",
"NliJsz1H3M5",
"zz_29eWfjih",
"C7uCfs3d82",
"J7eBA1FqWJi",
"3UvFTACWPFp",
"0k6fY9dzwXG",
"nips_2021_yMf3SLah5-y",
"nips_2021_yMf3SLah5-y",
"nips_2021_yMf3SLah5-y",
"nips_2021_yMf3SLah5-y"
] |
nips_2021_VeZQA9KdjMK | MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data | Time series forecasting is widely used in business intelligence, e.g., to forecast stock market prices and sales, and to help the analysis of data trends. Most time series of interest are macroscopic time series that are aggregated from microscopic data. However, instead of directly modeling the macroscopic time series, little prior work has studied the forecasting of macroscopic time series by leveraging data on the microscopic level. In this paper, we assume that the microscopic time series follow some unknown mixture probabilistic distributions. We theoretically show that as we identify the ground truth latent mixture components, the estimation of time series from each component could be improved because of lower variance, thus benefitting the estimation of macroscopic time series as well. Inspired by the power of Seq2seq and its variants on the modeling of time series data, we propose Mixture of Seq2seq (MixSeq), an end2end mixture model to cluster microscopic time series, where all the components come from a family of Seq2seq models parameterized by different parameters. Extensive experiments on both synthetic and real-world data show the superiority of our approach.
| accept | The paper proposes an approach for forecasting a single "macroscopic" time series which is the sum of several "microscopic" time series by forecasting the microscopic time series using a mixture of transformer-based seq2seq models and finally combining the forecasts.
The reviewers agree that the paper proposes a pragmatic solution to a relevant problem, and demonstrates clear performance improvements in doing so. While the building blocks (mixture models trained with variational inference, ConvTrans time series model) are well known, the combination is novel and well-chosen to address the task at hand. The reviewers highlight the convincing empirical evaluation against sensible baselines, where significant performance gains were demonstrated. Initial reviewer concerns around novelty and clarity were alleviated during the discussion period, leading me to recommend acceptance. | test | [
"uXETVixLKrX",
"bvhKjIPXCzF",
"m99QbbLZQ_",
"_NWRe9prGs",
"PDWay9LP0cX",
"O5SlUjbTMXf",
"wChpqjBVyYm",
"axYFsJzgkrt",
"gDA1m5DNyeg",
"ALSC1dYX_H"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
" Thanks for the reviewer's feedback again. For the concerns about language mistakes, please see some of our revisions as follows.\n\n**L1**. ***Time*** (~~Times~~) series forecasting is widely used in business intelligence...\n\n**L27**. ... all of them study the modeling of time series without considering the con... | [
-1,
7,
-1,
-1,
5,
-1,
6,
-1,
-1,
-1
] | [
-1,
3,
-1,
-1,
3,
-1,
3,
-1,
-1,
-1
] | [
"O5SlUjbTMXf",
"nips_2021_VeZQA9KdjMK",
"bvhKjIPXCzF",
"O5SlUjbTMXf",
"nips_2021_VeZQA9KdjMK",
"PDWay9LP0cX",
"nips_2021_VeZQA9KdjMK",
"PDWay9LP0cX",
"wChpqjBVyYm",
"bvhKjIPXCzF"
] |
nips_2021_frgb7FsKWs3 | Pareto Domain Adaptation | Domain adaptation (DA) attempts to transfer the knowledge from a labeled source domain to an unlabeled target domain that follows a different distribution from the source. To achieve this, DA methods include a source classification objective to extract the source knowledge and a domain alignment objective to diminish the domain shift, ensuring knowledge transfer. Typically, previous DA methods adopt some weight hyper-parameters to linearly combine the training objectives to form an overall objective. However, the gradient directions of these objectives may conflict with each other due to domain shift. Under such circumstances, the linear optimization scheme might decrease the overall objective value at the expense of damaging one of the training objectives, leading to restricted solutions. In this paper, we rethink the optimization scheme for DA from a gradient-based perspective. We propose a Pareto Domain Adaptation (ParetoDA) approach to control the overall optimization direction, aiming to cooperatively optimize all training objectives. Specifically, to reach a desirable solution on the target domain, we design a surrogate loss mimicking target classification. To improve target-prediction accuracy to support the mimicking, we propose a target-prediction refining mechanism which exploits domain labels via Bayes’ theorem. On the other hand, since prior knowledge of weighting schemes for objectives is often unavailable to guide optimization to approach the optimal solution on the target domain, we propose a dynamic preference mechanism to dynamically guide our cooperative optimization by the gradient of the surrogate loss on a held-out unlabeled target dataset. Our theoretical analyses show that the held-out data can guide but will not be over-fitted by the optimization. Extensive experiments on image classification and semantic segmentation benchmarks demonstrate the effectiveness of ParetoDA.
| accept | The paper proposes a novel way to solve the domain adaptation problem by formulating it as a multi-objective optimization problem instead of a sum of objective values. This allows for solving the DA problem with fewer hyperparameters, which are hard to select in practice, and can have an important impact for applications of DA. Despite this original idea, the paper initially received borderline reviews due to some missing explanations and experiments. The authors did a very good job at answering the reviewers' concerns with new experiments and even theoretical results. The reviewers agreed after the replies that the paper is interesting and deserves to be published.
Please take into account all reviewers' comments (missing references and clarifications) and insert the novel results (both numerical and theoretical) in the main paper and supplementary material for the final version, because the consensus among reviewers is that this will make the paper much better and more interesting for the community. | train | [
"R3pa0WWp6hO",
"hjQPGIZdqvJ",
"SwVGjhDLFQ",
"tSyb7dmSvYX",
"aeBgnMBYnwn",
"wnijtdFmVM_",
"HBbD-pQVCT4",
"khRRihSDAW3",
"90covsKbUut",
"T7Pkhyq-nCz",
"dA4ShiI8Ou",
"ovo2YT0JcA7",
"DNaz0Mzp5AE",
"bbCfs63s0kY",
"_RfyYUxaRVX",
"tLOP1vckB6s",
"LIGZXQzaOVt",
"HdyX2vT6AkF"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"While doing domain adaptation, we usually have a source classification loss and a domain alignment loss. However, these losses might compete with each other and we might reach a situation where one loss can’t be reduced without degrading the other one. We call that the Pareto front. For DA, we don’t know which los... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_frgb7FsKWs3",
"R3pa0WWp6hO",
"R3pa0WWp6hO",
"R3pa0WWp6hO",
"khRRihSDAW3",
"90covsKbUut",
"nips_2021_frgb7FsKWs3",
"DNaz0Mzp5AE",
"dA4ShiI8Ou",
"nips_2021_frgb7FsKWs3",
"ovo2YT0JcA7",
"_RfyYUxaRVX",
"HBbD-pQVCT4",
"HBbD-pQVCT4",
"T7Pkhyq-nCz",
"T7Pkhyq-nCz",
"HdyX2vT6AkF",
... |
nips_2021_Z_J5bCb4Rra | Divergence Frontiers for Generative Models: Sample Complexity, Quantization Effects, and Frontier Integrals | The spectacular success of deep generative models calls for quantitative tools to measure their statistical performance. Divergence frontiers have recently been proposed as an evaluation framework for generative models, due to their ability to measure the quality-diversity trade-off inherent to deep generative modeling. We establish non-asymptotic bounds on the sample complexity of divergence frontiers. We also introduce frontier integrals which provide summary statistics of divergence frontiers. We show how smoothed estimators such as Good-Turing or Krichevsky-Trofimov can overcome the missing mass problem and lead to faster rates of convergence. We illustrate the theoretical results with numerical examples from natural language processing and computer vision.
| accept | The submission proposes a new integral summary, the frontier integral, for generative model evaluation, extending recent work on divergence frontiers. All reviewers expressed concern about high dimensions. However, the theoretical contributions are of interest to the community, the authors have addressed most concerns in the reviews, and the high-dimension issue is a limitation of most related work and not just this paper. Overall, good paper. Accept! | train | [
"l_nI0sndmgd",
"zXrmC0Ehnn",
"ZDK_yqjt87-",
"rqsszX57FZg",
"RC05GvyLHD_",
"nQIshrhwCD0",
"COVhzRuZleO",
"P-XK2DXrZyk",
"4qn7K5OiHPr",
"13ORKitEq54",
"chtwDjxddgE",
"rY3NHigQflT",
"wXo1vZVLCts",
"ovKBCHJ6Opm",
"q9sxCnNb0zZ",
"RdIILF1zX3E"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We are very happy to hear that you appreciate our theoretical contribution. We acknowledge your concerns and we have one more experiment to address it. \n\n**The pretrained feature extractors are completely independent of the goal: comparing two distributions.**\n\nWe agree with the reviewer that using a feature ... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"zXrmC0Ehnn",
"ZDK_yqjt87-",
"chtwDjxddgE",
"RC05GvyLHD_",
"ovKBCHJ6Opm",
"nips_2021_Z_J5bCb4Rra",
"4qn7K5OiHPr",
"nips_2021_Z_J5bCb4Rra",
"q9sxCnNb0zZ",
"chtwDjxddgE",
"wXo1vZVLCts",
"nips_2021_Z_J5bCb4Rra",
"RdIILF1zX3E",
"nQIshrhwCD0",
"P-XK2DXrZyk",
"nips_2021_Z_J5bCb4Rra"
] |
nips_2021_djbC2A4uTHP | Consistency Regularization for Variational Auto-Encoders | Variational Auto-Encoders (VAEs) are a powerful approach to unsupervised learning. They enable scalable approximate posterior inference in latent-variable models using variational inference. A VAE posits a variational family parameterized by a deep neural network---called an encoder---that takes data as input. This encoder is shared across all the observations, which amortizes the cost of inference. However, the encoder of a VAE has the undesirable property that it maps a given observation and a semantics-preserving transformation of it to different latent representations. This "inconsistency" of the encoder lowers the quality of the learned representations, especially for downstream tasks, and also negatively affects generalization. In this paper, we propose a regularization method to enforce consistency in VAEs. The idea is to minimize the Kullback-Leibler (KL) divergence between the variational distribution when conditioning on the observation and the variational distribution when conditioning on a random semantics-preserving transformation of this observation. This regularization is applicable to any VAE. In our experiments, we applied it to four different VAE variants on several benchmark datasets and found that it not only improves the quality of the learned representations but also leads to better generalization. In particular, when applied to the Nouveau VAE (NVAE), our regularization method yields state-of-the-art performance on MNIST, CIFAR-10, and CELEBA. We also applied our method to 3D data and found it learns representations of superior quality as measured by accuracy on a downstream classification task. Finally, we show our method can even outperform the triplet loss, an advanced and popular contrastive learning-based method for representation learning.
| accept | This paper proposes an approach to improving the representations of generalization of VAEs using semantic-preserving transformations and a regularizer that enforces a congruence between representations of the original and transformed inputs. This is shown to provide notable performance improvements for a variety of VAE models, with these gains, critically, larger than those achieved using simple data augmentation.
Overall, the reviewers were quite positive about the work, praising its simplicity, clarity, and empirical results. Some issues were raised about the fact that most aspects of the paper are relatively straightforward applications of previous techniques, with the novelty predominantly originating only from their use in a VAE context, and insufficient discussion and comparison to other representation learning approaches that utilize semantic-preserving transformations, such as contrastive learning approaches. However, all reviewers felt that the paper still deserved to be accepted in spite of these issues (albeit in some cases only marginally).
I believe that this assessment by the reviewers is quite fair and that, though not the most spectacular of papers, it is good solid work that will be of interest to the community and likely see some practical usage. I thus recommend accepting it to the conference.
That said, I do think it is important that the authors make sure to make updates and improvements to the work, as per the reviewer's suggestions, for the camera-ready version. In particular, I think the suggested additional discussion of related work is very important to add. I would also strongly encourage the authors to include some non-VAE self-supervised baselines: though I mostly buy the argument that VAEs are important in their own right and do not think beating these baselines should be a condition for the work to be acceptable, I think it is still important to quantify how the approach compares to natural alternatives practitioners might consider (e.g. to see if there are aspects that this VAE-based approach is relatively good and bad at and how far off they are to SOTA approaches in the area). Specifically, given much of the utility of the work is empirical, I feel that it is important that it properly investigates how relatively well VAE based approaches can perform in the target setting where semantically equivalent inputs can be generated. | train | [
"X4maM9UDD9O",
"_MLLFh7p4z",
"g73gFFE0VDr",
"OQnGBkZ61QJ",
"X16f1UsnKms",
"VNuhLFzRvl",
"A3YlihqOQ0u",
"Tw9mjGgHdvE",
"FPfscL38lqO",
"iAKPOSDT5AS",
"MjI_TLxSWzZ",
"uNoDuygBuut",
"fZOGka6vh-",
"w6C5lauAIta",
"TLFXHAIeeut",
"c4HGATW8ySw"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a novel method of using data augmentation when training Variational Autoencoders. Given a family of semantics-preserving transformations, the idea is to enforce the invariance of latent code w.r.t. these transformations by adding a KL penalty term. This penalty improves the quality of learned la... | [
7,
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
-1,
-1,
-1,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_djbC2A4uTHP",
"g73gFFE0VDr",
"X16f1UsnKms",
"FPfscL38lqO",
"A3YlihqOQ0u",
"nips_2021_djbC2A4uTHP",
"w6C5lauAIta",
"nips_2021_djbC2A4uTHP",
"iAKPOSDT5AS",
"MjI_TLxSWzZ",
"fZOGka6vh-",
"X4maM9UDD9O",
"Tw9mjGgHdvE",
"VNuhLFzRvl",
"c4HGATW8ySw",
"nips_2021_djbC2A4uTHP"
] |
nips_2021_PPzV1H4atM4 | Score-based Generative Neural Networks for Large-Scale Optimal Transport | We consider the fundamental problem of sampling the optimal transport coupling between given source and target distributions. In certain cases, the optimal transport plan takes the form of a one-to-one mapping from the source support to the target support, but learning or even approximating such a map is computationally challenging for large and high-dimensional datasets due to the high cost of linear programming routines and an intrinsic curse of dimensionality. We study instead the Sinkhorn problem, a regularized form of optimal transport whose solutions are couplings between the source and the target distribution. We introduce a novel framework for learning the Sinkhorn coupling between two distributions in the form of a score-based generative model. Conditioned on source data, our procedure iterates Langevin Dynamics to sample target data according to the regularized optimal coupling. Key to this approach is a neural network parametrization of the Sinkhorn problem, and we prove convergence of gradient descent with respect to network parameters in this formulation. We demonstrate its empirical success on a variety of large scale optimal transport tasks.
| accept | This paper proposes a method to sample from high dimensional regularized optimal transport maps using Langevin diffusion on neural network parameterized potentials. The reviewers compliment the authors on being some of the first to tackle this problem at a truly large scale. The problem is important and the proposed solution seems technically sound and practically promising.
After the first round of reviews some concerns surfaced about the correctness of some of the presented results. In addition to general concerns about experimental validation from multiple reviewers, reviewer zJ3u pointed out some irregularities in the reported BW-UVP values which turned out to be caused by a scaling error in the regularization used in some of the experiments. The authors responded by providing updated results, in addition to acknowledging and fixing the error. The reviewers are satisfied with the author response and are now unanimous in their recommendation to accept the paper.
Authors, please carefully revise your paper for the camera ready version, and make sure to incorporate all new results and all changes that were promised in the discussion with the reviewers. | train | [
"AFRX-o2PPr",
"1yE2Oq-iaRp",
"H2WLUubtv_G",
"JVekULsX2Yc",
"hjVstcXHwuU",
"P8240Z9-K24",
"Qx9t5E3yJ0w",
"fy2y5YKFrX2",
"48LHr2IeirR",
"qDpx_1IBGX",
"pNymGPS0re",
"OOxObFvtwUm",
"rIHFayPn_gZ",
"l8jb5cZDW3x",
"gO4B8xSi1xp",
"uno29rDsPOz",
"8Ojx2fpdNyz",
"UjknwS-dKU6",
"HU3Jag3B1m"
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_re... | [
"The authors consider a regularised optimal transport problem. They \n- obtained a dual objective in case of a general regularisation based on f-divergence,\n- used \na) the large-scale stochastic dual approach, introduced by Seguy et al., to solve the optimal transport problem in their generalised setting,\nb) La... | [
6,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_PPzV1H4atM4",
"UjknwS-dKU6",
"nips_2021_PPzV1H4atM4",
"nips_2021_PPzV1H4atM4",
"P8240Z9-K24",
"48LHr2IeirR",
"nips_2021_PPzV1H4atM4",
"nips_2021_PPzV1H4atM4",
"pNymGPS0re",
"nips_2021_PPzV1H4atM4",
"OOxObFvtwUm",
"rIHFayPn_gZ",
"l8jb5cZDW3x",
"gO4B8xSi1xp",
"JVekULsX2Yc",
"H... |
nips_2021_T6m9bNI7C__ | Interactive Label Cleaning with Example-based Explanations | We tackle sequential learning under label noise in applications where a human supervisor can be queried to relabel suspicious examples. Existing approaches are flawed, in that they only relabel incoming examples that look "suspicious" to the model. As a consequence, those mislabeled examples that elude (or don't undergo) this cleaning step end up tainting the training data and the model with no further chance of being cleaned. We propose CINCER, a novel approach that cleans both new and past data by identifying \emph{pairs of mutually incompatible examples}. Whenever it detects a suspicious example, CINCER identifies a counter-example in the training set that - according to the model - is maximally incompatible with the suspicious example, and asks the annotator to relabel either or both examples, resolving this possible inconsistency. The counter-examples are chosen to be maximally incompatible, so as to serve as \emph{explanations} of the model's suspicion, and highly influential, so as to convey as much information as possible if relabeled. CINCER achieves this by leveraging an efficient and robust approximation of influence functions based on the Fisher information matrix (FIM). Our extensive empirical evaluation shows that clarifying the reasons behind the model's suspicions by cleaning the counter-examples helps in acquiring substantially better data and models, especially when paired with our FIM approximation.
| accept | Four knowledgeable referees have thoroughly reviewed this paper and all recommended that it be accepted. I agree with their recommendation. | train | [
"UGjIoyA5v4l",
"orGfV5XXq7i",
"tov52TLfxGQ",
"i3D2gaDTxZs",
"pLE7XYEFj7A",
"QIlHvfrIO1_",
"TRdsY_0TLJi",
"fItaKTmGTKS",
"yP_I8kO7pl4",
"NJnwRsuTHFV"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper tackles the problem of label cleaning, where labels for incoming data points might be flawed. The authors propose an interactive learning approach, where the system builds counter examples for data points, which consist of pairs of data points which the model finds incompatible. The human expert is then ... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2021_T6m9bNI7C__",
"yP_I8kO7pl4",
"NJnwRsuTHFV",
"NJnwRsuTHFV",
"UGjIoyA5v4l",
"yP_I8kO7pl4",
"fItaKTmGTKS",
"nips_2021_T6m9bNI7C__",
"nips_2021_T6m9bNI7C__",
"nips_2021_T6m9bNI7C__"
] |
nips_2021_Aa5oPXc_1IV | Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias | The generalization mystery of overparametrized deep nets has motivated efforts to understand how gradient descent (GD) converges to low-loss solutions that generalize well. Real-life neural networks are initialized from small random values and trained with cross-entropy loss for classification (unlike the "lazy" or "NTK" regime of training where analysis was more successful), and a recent sequence of results (Lyu and Li, 2020; Chizat and Bach, 2020; Ji and Telgarsky, 2020) provide theoretical evidence that GD may converge to the "max-margin" solution with zero loss, which presumably generalizes well. However, the global optimality of margin is proved only in some settings where neural nets are infinitely or exponentially wide. The current paper is able to establish this global optimality for two-layer Leaky ReLU nets trained with gradient flow on linearly separable and symmetric data, regardless of the width. The analysis also gives some theoretical justification for recent empirical findings (Kalimeris et al., 2019) on the so-called simplicity bias of GD towards linear or other "simple" classes of solutions, especially early in training. On the pessimistic side, the paper suggests that such results are fragile. A simple data manipulation can make gradient flow converge to a linear classifier with suboptimal margin.
| accept | This paper refines the implicit bias story for two-layer networks, analyzing settings where it is guaranteed to prefer certain simple linear solutions. While the reviewers had some concerns, overall reviews and discussion were positive. I urge the authors to carefully address reviewer comments during their final revisions. | train | [
"di_quutxNc6",
"243NZZXM7uN",
"rk-8aR-dAGn",
"kXNJnre0TS",
"Q66V9eRuBd",
"FcNSOGb9q8t",
"c4JnmpMQGHv",
"R1wZTjPjyYl"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the implicit bias for finite two layers Leaky ReLU networks. For linearly separable symmetric data, the authors show that the global max-margin direction corresponds to a linear classifier. In addition, under additional regularity conditions and for a sufficiently small initialization scale, the ... | [
6,
6,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_Aa5oPXc_1IV",
"nips_2021_Aa5oPXc_1IV",
"R1wZTjPjyYl",
"di_quutxNc6",
"243NZZXM7uN",
"c4JnmpMQGHv",
"nips_2021_Aa5oPXc_1IV",
"nips_2021_Aa5oPXc_1IV"
] |
nips_2021_GitDcBlcg78 | Glance-and-Gaze Vision Transformer | Recently, a series of vision Transformers has emerged, showing superior performance with a more compact model size than conventional convolutional neural networks, thanks to the strong ability of Transformers to model long-range dependencies. However, the advantages of vision Transformers also come at a price: Self-attention, the core part of Transformer, has a complexity quadratic in the input sequence length. This leads to a dramatic increase in computation and memory cost as the sequence length grows, thus introducing difficulties when applying Transformers to the vision tasks that require dense predictions based on high-resolution feature maps. In this paper, we propose a new vision Transformer, named Glance-and-Gaze Transformer (GG-Transformer), to address the aforementioned issues. It is motivated by the Glance and Gaze behavior of human beings when recognizing objects in natural scenes, with the ability to efficiently model both long-range dependencies and local context. In GG-Transformer, the Glance and Gaze behavior is realized by two parallel branches: The Glance branch is achieved by performing self-attention on the adaptively-dilated partitions of the input, which leads to a linear complexity while still enjoying a global receptive field; The Gaze branch is implemented by a simple depth-wise convolutional layer, which supplies local image context to the features obtained by the Glance mechanism. We empirically demonstrate our method achieves consistently superior performance over previous state-of-the-art Transformers on various vision tasks and benchmarks.
| accept | The paper introduces an efficient transformer architecture with a “glance” branch to model long-range dependencies and a “gaze” branch to account for local context. Three reviewers recommend acceptance, highlighting that the idea is novel, the paper is well-written, and the results are solid. One reviewer, despite appreciating the intuitive appeal of the glance and gaze idea, recommends rejection primarily because of the discrepancy between the Swin baseline numbers used for comparison and the actual numbers in the Swin paper referenced in the manuscript. However, the AC agrees with the other three reviewers and, based on the authors' response, finds the comparison with Swin transformers fair, as the same setting was used to compare both models. The paper is technically sound, has an interesting idea, and the results are promising. The authors should carefully proofread the paper and add the discussion from the rebuttal to the final version. | test | [
"16UeFEzxHjs",
"cvWu2iFG8K",
"pRe_gIOiH_b",
"8hxivGR3Evd",
"EElmMLw2jEF",
"ugboCrcqwNy",
"ENFwfxaZj8X",
"BdkzclEzfD1",
"j9jBT04uql",
"qCekkxbOZQm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Hi,\n\nAfter reading the rebuttal and the others reviews, I believe the paper should be accepted. I think the authors have addressed my concerns in their response, and therefore I think the paper has sufficient merits to be accepted. Furthermore, I have read the other reviews, and although I share some of the con... | [
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
3,
7
] | [
-1,
4,
5,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"qCekkxbOZQm",
"nips_2021_GitDcBlcg78",
"nips_2021_GitDcBlcg78",
"EElmMLw2jEF",
"pRe_gIOiH_b",
"qCekkxbOZQm",
"j9jBT04uql",
"cvWu2iFG8K",
"nips_2021_GitDcBlcg78",
"nips_2021_GitDcBlcg78"
] |
nips_2021_0hggPuM2b2 | Stochastic $L^\natural$-convex Function Minimization | Haixiang Zhang, Zeyu Zheng, Javad Lavaei | accept | From the SAC: after looking at the paper, reviews, rebuttal, and discussion, and questions about reviewer sXUy, I am going to concur with the AC's recommendation of accept as a poster. However, this was a borderline paper compared to other papers, so I strongly urge you to take these reviewer comments into account. By your own words, "We agree that the lower bound established in this work is a little counter-intuitive, when being compared to Ito (2019)"; therefore, it behooves you to make sure in the next version of this paper that this fact is acknowledged in the paper, and that you strive in your edits to make it as non-counter-intuitive as possible. Also, please ensure that all questions and concerns raised by the reviewer are addressed in the next version of the paper. | train | [
"M55UsK108Dn",
"C92XqXGi3op",
"AwDZGBKSYaj",
"LiDDoishhAG",
"5oZSwcYJ54B",
"cF3KIc18o7H",
"8Nbrjk28jMi",
"eKi68zuhvas",
"XL7xFUAAH7",
"l2DkmbWf92",
"hD-ocrj1No9"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper focuses on the problem of minimizing $f(x) = \\mathbb{E}_{\\xi}[F(x,\\xi_x)]$ where $f:[N]^d\\to\\mathbb{R}$ is an $L^\\natural$ function. This class of functions can be interpreted as a discrete analog of convex functions. The goal is to design polynomial time algorithms that guarantee a solution whose... | [
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
5,
6
] | [
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_0hggPuM2b2",
"eKi68zuhvas",
"LiDDoishhAG",
"8Nbrjk28jMi",
"nips_2021_0hggPuM2b2",
"hD-ocrj1No9",
"l2DkmbWf92",
"M55UsK108Dn",
"5oZSwcYJ54B",
"nips_2021_0hggPuM2b2",
"nips_2021_0hggPuM2b2"
] |
nips_2021_MT0pTKLyzkT | Self-Supervised GANs with Label Augmentation | Recently, transformation-based self-supervised learning has been applied to generative adversarial networks (GANs) to mitigate catastrophic forgetting in the discriminator by introducing a stationary learning environment. However, the separate self-supervised tasks in existing self-supervised GANs cause a goal inconsistent with generative modeling due to the fact that their self-supervised classifiers are agnostic to the generator distribution. To address this problem, we propose a novel self-supervised GAN that unifies the GAN task with the self-supervised task by augmenting the GAN labels (real or fake) via self-supervision of data transformation. Specifically, the original discriminator and self-supervised classifier are unified into a label-augmented discriminator that predicts the augmented labels to be aware of both the generator distribution and the data distribution under every transformation, and then provide the discrepancy between them to optimize the generator. Theoretically, we prove that the optimal generator could converge to replicate the real data distribution. Empirically, we show that the proposed method significantly outperforms previous self-supervised and data augmentation GANs on both generative modeling and representation learning across benchmark datasets.
| accept | This was a borderline paper with a good amount of discussion both between the reviewers and reviewers and authors. I'll try to summarize some of the pros and cons of this work:
* novel yet simple extension to SSGAN. Simplicity is good, but some reviewers have argued that the contribution is incremental.
* the theoretical contribution is not in the main body of the paper (this is easily fixed)
* the link with catastrophic forgetting should be made more obvious in the paper itself.
* the results are on small resolution sets only (some reviewers argued that access to resources is not an excuse, but I respectfully disagree).
All in all, I am inclined to accept this work based on the fact that it's a simple, principled, theoretically grounded change that seems to work well in comparable empirical settings on lower-resolution image sets. And I think self-supervised GANs are an interesting practically useful line of research, where novel insights may be useful to many researchers.
| train | [
"DHDZSE-Q4E",
"oxDHueal1LX",
"HTrJz54-cjN",
"SJ8tmspIJHr",
"rvUer-naItR",
"fG75gWmcU5X",
"w1IHQ64xniW",
"dkHLg_xHw5",
"0x_bCDWSOQv",
"lHjnpF1ZGb",
"RLvpYmfoc70",
"wuEY7rxhPkn",
"dyRtQ2dYAT",
"nETXKGyThof",
"ppD0YrrHh8H",
"Yiob8nzbSJJ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer NbmS,\n\n \n\nWe would like to thank you for providing valuable feedback on this paper. We are encouraged that we have addressed most of your concerns around quality. We hereby respond to your rest concerns by further elaborating our motivation, contributions, and how our experiments verify them.\n\... | [
-1,
-1,
5,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"SJ8tmspIJHr",
"fG75gWmcU5X",
"nips_2021_MT0pTKLyzkT",
"dyRtQ2dYAT",
"fG75gWmcU5X",
"lHjnpF1ZGb",
"nips_2021_MT0pTKLyzkT",
"w1IHQ64xniW",
"wuEY7rxhPkn",
"RLvpYmfoc70",
"ppD0YrrHh8H",
"Yiob8nzbSJJ",
"HTrJz54-cjN",
"w1IHQ64xniW",
"nips_2021_MT0pTKLyzkT",
"nips_2021_MT0pTKLyzkT"
] |
nips_2021_Ecuu521mPpG | Shape As Points: A Differentiable Poisson Solver | In recent years, neural implicit representations gained popularity in 3D reconstruction due to their expressiveness and flexibility. However, the implicit nature of neural implicit representations results in slow inference times and requires careful initialization. In this paper, we revisit the classic yet ubiquitous point cloud representation and introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR) which allows for a GPU-accelerated fast solution of the indicator function given an oriented point cloud. The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field, enabling end-to-end optimization of surface reconstruction metrics such as Chamfer distance. This duality between points and meshes hence allows us to represent shapes as oriented point clouds, which are explicit, lightweight and expressive. Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude. Compared to other explicit representations such as points, patches, and meshes, SAP produces topology-agnostic, watertight manifold surfaces. We demonstrate the effectiveness of SAP on the task of surface reconstruction from unoriented point clouds and learning-based reconstruction.
| accept | Congratulations, the paper is accepted to NeurIPS 2021!
Please incorporate additional experiments, edits and corrections as discussed in the rebuttal. | train | [
"J5syVm6nGfm",
"pyuQi_nPKva",
"Mo43dnSacNa",
"gc3eJ9CdFSp",
"iOkOCEFyuMq",
"a0blzDCMYby",
"adadqqpwbrH",
"k0a1iJKiKm",
"UNsDT38ZYQI",
"8Vsp1Igricg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work introduces a differentiable points (+normals) to mesh layer based on a differentiable formulation of Poisson surface reconstruction.\nFirst, the proposed method is validated in point cloud optimisation tasks where the objective function is expressed w.r.t surface meshes.\nThen, the method is proposed as ... | [
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
9,
8
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_Ecuu521mPpG",
"adadqqpwbrH",
"nips_2021_Ecuu521mPpG",
"a0blzDCMYby",
"Mo43dnSacNa",
"8Vsp1Igricg",
"J5syVm6nGfm",
"UNsDT38ZYQI",
"nips_2021_Ecuu521mPpG",
"nips_2021_Ecuu521mPpG"
] |
nips_2021_4bzanicqvy8 | Outcome-Driven Reinforcement Learning via Variational Inference | While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors.
| accept | The paper proposes a new variational inference approach for goal-based reinforcement learning. The authors' goal is to more directly tackle reinforcement learning problems where the goal is to reach a certain outcome, without resorting to manual reward shaping. Similarly to some previous work, they define an inference objective which is maximized using techniques from variational optimization. Different from such works, in their inference formulation they condition on the goal being reached, rather than using the reward function as a pseudo-likelihood.
The reviewers consider the problem studied (goal-conditioned RL) important and very relevant for the machine learning community.
The resulting approach was considered novel by the reviewers, especially the extension of a framework similar to VICE [3] to an off-policy setting.
The reviewers also considered the paper technically sound and sufficiently well supported by experiments and ablations (that show the method outperforms e.g. SAC, another method in this space).
Also, the reviewers consider the paper well-written.
| test | [
"sF2eqd8ePIh",
"LSR7JFOaF2",
"mQCZRNf5Kup",
"aFi7Ptz4cEn",
"S4Z4iqXvZpq",
"FkQopW0UwpV",
"ZStBxLCaeZa",
"44hshvFtMeL",
"dv3_fRlPtaS",
"E8xeZ5GmXm8",
"F40XF3-dhFA",
"KuAtE4zmfLX",
"BYD-eeVRLv",
"e-VE5i5lwjC",
"6Fwpyd175_",
"RV-hlSQhIbb"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for engaging with our response, for your helpful suggestions, and for revising your score. It's much appreciated.",
"This paper combines the recent work on viewing RL as Probabilistic Inference and the goal-conditioned RL setting. The proposed approach assigns rewards based on the likelihood of reachi... | [
-1,
7,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"mQCZRNf5Kup",
"nips_2021_4bzanicqvy8",
"aFi7Ptz4cEn",
"S4Z4iqXvZpq",
"BYD-eeVRLv",
"44hshvFtMeL",
"nips_2021_4bzanicqvy8",
"dv3_fRlPtaS",
"E8xeZ5GmXm8",
"KuAtE4zmfLX",
"RV-hlSQhIbb",
"ZStBxLCaeZa",
"LSR7JFOaF2",
"6Fwpyd175_",
"nips_2021_4bzanicqvy8",
"nips_2021_4bzanicqvy8"
] |
nips_2021_98zhe-xzviq | Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks | Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks, i.e., an imperceptible perturbation to the input can mislead DNNs trained on clean images into making erroneous predictions. To tackle this, adversarial training is currently the most effective defense method, by augmenting the training set with adversarial samples generated on the fly. \textbf{Interestingly, we discover for the first time that there exist subnetworks with inborn robustness, matching or surpassing the robust accuracy of the adversarially trained networks with comparable model sizes, within randomly initialized networks without any model training}, indicating that adversarial training on model weights is not indispensable towards adversarial robustness. We name such subnetworks Robust Scratch Tickets (RSTs), which are also by nature efficient. Distinct from the popular lottery ticket hypothesis, neither the original dense networks nor the identified RSTs need to be trained. To validate and understand this fascinating finding, we further conduct extensive experiments to study the existence and properties of RSTs under different models, datasets, sparsity patterns, and attacks, drawing insights regarding the relationship between DNNs’ robustness and their initialization/overparameterization. Furthermore, we identify the poor adversarial transferability between RSTs of different sparsity ratios drawn from the same randomly initialized dense network, and propose a Random RST Switch (R2S) technique, which randomly switches between different RSTs, as a novel defense method built on top of RSTs. We believe our findings about RSTs have opened up a new perspective to study model robustness and extend the lottery ticket hypothesis.
| accept | This paper investigates whether subnetworks with high adversarial robustness can be found by learning a mask over the weights in a randomly initialized large network. Using this approach, the authors demonstrate that robust scratch tickets (RSTs) can be found for a number of datasets and show that multiple RSTs can be ensembled to further increase robustness. All reviewers found the work to be interesting, original, and of high quality, especially after a robust discussion with the authors, and I think this paper will be useful for the broader community. I would strongly encourage the authors to include all the experiments performed for the rebuttal in the final paper. I recommend acceptance. | train | [
"w88FWr239Bq",
"nbCHdihMmeO",
"xG2W8-hwAMz",
"ebUDCZgAe8i",
"aDywbJLjd4X",
"KvVQ6LnzKKj",
"MiWDTEq1TV8",
"wBnjXrC52h",
"5y64exsjrs",
"SgpNkZyjs-t"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for recognizing the potential impacts of our work and for your constructive comments which would help improve our paper! We will follow your and other reviewers’ suggestions to include the discussions about adaptive attacks and the comparison with other methods in the final version.",
" Thanks a lot f... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
4,
4
] | [
"xG2W8-hwAMz",
"ebUDCZgAe8i",
"KvVQ6LnzKKj",
"MiWDTEq1TV8",
"nips_2021_98zhe-xzviq",
"SgpNkZyjs-t",
"aDywbJLjd4X",
"5y64exsjrs",
"nips_2021_98zhe-xzviq",
"nips_2021_98zhe-xzviq"
] |
nips_2021_N1i6BJzouX4 | Rectifying the Shortcut Learning of Background for Few-Shot Learning | The category gap between training and evaluation has been characterised as one of the main obstacles to the success of Few-Shot Learning (FSL). In this paper, we for the first time empirically identify image background, common in realistic images, as shortcut knowledge that is helpful for in-class classification but ungeneralizable beyond training categories in FSL. A novel framework, COSOC, is designed to tackle this problem by extracting foreground objects in images at both training and evaluation without any extra supervision. Extensive experiments carried out on inductive FSL tasks demonstrate the effectiveness of our approaches.
| accept | The authors provide a simple but effective technique for reducing bias towards background pixels in a few-shot setting. They provided extensive data and explanations in their rebuttal. Though two of the reviewers remained unconvinced, the other two reviewers were very supportive. However, I believe that the simplicity of the method and its clear effectiveness, along with the extensive empirical studies, make it a good candidate to be accepted to NeurIPS 2021. The authors should (as they have themselves promised) revise the paper to incorporate the good comments from the reviewers. | train | [
"inD37tvSfa-",
"VSC4MIZY0FR",
"kNGKYAZvvq0",
"IQEYTGSxKzC",
"USVxAj4cfqQ",
"IYiWOdRf6mA",
"DYdzlWef1_M",
"aRt5rJp1d7W",
"umQoLVkIy7J",
"rVyfTrGcY3",
"ETst40BysUt",
"7McajN4spv",
"D1bvOZ0Y5dM",
"5r02hcYuWpK",
"sJCoZda50N",
"mPvaZN8N-LD",
"NBWheW7BE9s",
"ELeD55fhx5v",
"kb_i7figIjz"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
... | [
" 1. The reason why it seems our method under 1-shot setting improves more has been explained in the rebuttal. **Here we want to emphasize that, due to the essential difference between 1-shot and multi-shot problem (whether need to aggregate class information for metric-based comparison), the inconsistency between ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"VSC4MIZY0FR",
"rVyfTrGcY3",
"IQEYTGSxKzC",
"rVyfTrGcY3",
"DYdzlWef1_M",
"kb_i7figIjz",
"ETst40BysUt",
"5r02hcYuWpK",
"nips_2021_N1i6BJzouX4",
"kb_i7figIjz",
"aRt5rJp1d7W",
"mPvaZN8N-LD",
"sJCoZda50N",
"umQoLVkIy7J",
"ELeD55fhx5v",
"NBWheW7BE9s",
"ibjuXY7m5pG",
"4g0o55GCU56",
"ni... |
nips_2021_guHXB1dcD3l | SEAL: Self-supervised Embodied Active Learning using Exploration and 3D Consistency | In this paper, we explore how we can build upon the data and models of Internet images and use them to adapt to robot vision without requiring any extra labels. We present a framework called Self-supervised Embodied Active Learning (SEAL). It utilizes perception models trained on internet images to learn an active exploration policy. The observations gathered by this exploration policy are labelled using 3D consistency and used to improve the perception model. We build and utilize 3D semantic maps to learn both action and perception in a completely self-supervised manner. The semantic map is used to compute an intrinsic motivation reward for training the exploration policy and for labelling the agent observations using spatio-temporal 3D consistency and label propagation. We demonstrate that the SEAL framework can be used to close the action-perception loop: it improves object detection and instance segmentation performance of a pretrained perception model by just moving around in training environments and the improved perception model can be used to improve Object Goal Navigation.
| accept | This paper proposes a method which, like active vision, combines perception and control in a closed loop to improve perception (and also an agent policy). It leverages data from pre-trained models, which are bootstrapped, and proposes two different settings, generalization vs. specialization, the latter of which allows training on the test environment.
The paper received 4 expert reviews and was initially on the fence. The reviewers mainly raised issues about the lack of novelty and requested additional experiments.
The authors provided a response, which was highly appreciated by most reviewers and solved several issues, in particular through a fruitful discussion on novelty. Additional experiments also helped the paper. Some reviewers raised their scores above borderline, and 3 out of the 4 reviewers were in favor of acceptance.
The AC followed the discussion and, while not finding the paper particularly well written, concurs that it has merits and recommends acceptance.
| train | [
"bAeF3q_4HF",
"lzfxq5zbUnq",
"Xbbb302Vhpm",
"pS6vZsitSkj",
"DRAOIPy4rdp",
"GaWNonxzwc1",
"kV3uZteK6Z8",
"68g3B2QAGJ",
"zwMUlmyHB2k",
"bfU1q6usTf4",
"ici8rUrh07",
"1ebV5mYmeA5",
"Dk8-Qik9fFv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes an approach to fine-tune and transfer an object detection and segmentation approach (Mask-RCNN) to a set of 3D textured scene models by label propagation and exploration. The object detector is pretrained on an annotated large-scale Internet image dataset (MS Coco). A \"perception\" component m... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_guHXB1dcD3l",
"GaWNonxzwc1",
"nips_2021_guHXB1dcD3l",
"ici8rUrh07",
"bfU1q6usTf4",
"zwMUlmyHB2k",
"1ebV5mYmeA5",
"nips_2021_guHXB1dcD3l",
"Xbbb302Vhpm",
"bAeF3q_4HF",
"Dk8-Qik9fFv",
"68g3B2QAGJ",
"nips_2021_guHXB1dcD3l"
] |
nips_2021_VsUQQkpEXgr | Sifting through the noise: Universal first-order methods for stochastic variational inequalities | Kimon Antonakopoulos, Thomas Pethick, Ali Kavis, Panayotis Mertikopoulos, Volkan Cevher | accept | This paper proposes universal adaptive stochastic gradient-based methods for variational inequalities. It unifies several dual averaging algorithms, uses AdaGrad-type stepsize to achieve universality and adaptive results, and shows asymptotic convergence of the last iterate. I agree with the reviewers that the technical contributions are interesting and novel, and I am happy to recommend acceptance. | test | [
"7APGk5xghXe",
"zxZG3BoKX_S",
"JziBy5Lak-y",
"7GW5ZYF7oUX",
"1OLjZNsP6rN",
"Ngf1e743ly9",
"7P1zMDIxTl0",
"f6cHmc5OI2",
"RewEsZazHsN",
"x_Np6iIewL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper gives an algorithmic framework of generalized extragradient method for solving variational inequalities induced by a cocoercive operator A. The framework casts dual averaging, dual extrapolation as special cases, and interpolates between the convergence rate of O(1/sqrt{T}) and O(1/T) for different noise... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_VsUQQkpEXgr",
"7P1zMDIxTl0",
"nips_2021_VsUQQkpEXgr",
"Ngf1e743ly9",
"JziBy5Lak-y",
"RewEsZazHsN",
"x_Np6iIewL",
"7APGk5xghXe",
"nips_2021_VsUQQkpEXgr",
"nips_2021_VsUQQkpEXgr"
] |
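The record above concerns universal adaptive first-order methods for stochastic variational inequalities. As a rough orientation to the AdaGrad-type step-size idea the meta-review mentions, here is a minimal extragradient sketch; the function name, the unconstrained Euclidean setting, and the exact step-size rule are our own assumptions, not the paper's algorithm.

```python
import numpy as np

def adaptive_extragradient(A, x0, n_steps=5000):
    """Extragradient with an AdaGrad-style step size: eta shrinks with the
    accumulated squared norms of the operator values, so no Lipschitz
    constant or noise level needs to be known in advance."""
    x = np.asarray(x0, dtype=float)
    s = 0.0
    for _ in range(n_steps):
        g = A(x)
        s += float(g @ g)
        eta = 1.0 / np.sqrt(1.0 + s)
        x_half = x - eta * g        # extrapolation (leading) step
        x = x - eta * A(x_half)     # update with the operator at the midpoint
    return x

# Toy usage on the bilinear saddle operator A(x, y) = (y, -x):
print(adaptive_extragradient(lambda z: np.array([z[1], -z[0]]), [1.0, 1.0]))
```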
nips_2021_xdk17QJpf5q | Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning | Jingfeng Wu, Vladimir braverman, Lin Yang | accept | This paper looks at an interesting variant of Reinforcement/Online learning, where users arrive with different preferences over several criteria. The problem is therefore linked to multi-objective optimization. Preferences could have been modelled and tackled differently than by linear aggregation, but one must start somewhere.
The content of the paper, as well as its writing, justify its acceptance. | val | [
"TYHlO2gDwXu",
"BwrT4K85Xw",
"AbOrYPV0pN5",
"yDjWQy3alJy",
"4Hyph_DJl_",
"EB2xe97XFj",
"BihGRSmy_bt",
"ZcR6FYxIG72",
"QE8AcbfxVv5",
"PUUGzjlX4I"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper investigates multi-objective reinforcement learning (MORL) in the online setting and the so-called preference-free setting, where the preference vector is given by an adversary. It proposes a model-based algorithms for each setting under tabular episodic MDPs, respectively. Both algorithms are shown to ... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"nips_2021_xdk17QJpf5q",
"TYHlO2gDwXu",
"PUUGzjlX4I",
"QE8AcbfxVv5",
"ZcR6FYxIG72",
"BihGRSmy_bt",
"nips_2021_xdk17QJpf5q",
"nips_2021_xdk17QJpf5q",
"nips_2021_xdk17QJpf5q",
"nips_2021_xdk17QJpf5q"
] |
nips_2021_SbGpYmQHlS8 | Exact Privacy Guarantees for Markov Chain Implementations of the Exponential Mechanism with Artificial Atoms | Jeremy Seeman, Matthew Reimherr, Aleksandra Slavković | accept | The reviewers were overall positive about the paper, modulo the following concerns, which came up during the discussion period.
1. Comparison to prior work, especially to Ganesh and Talwar. In particular, the reviewers would be more satisfied if -- even just empirically -- one shows that the algorithm outperforms some existing algorithms on some measure (say, runtime), and
2. Since the assumptions in the paper differ from those in prior work, it would be important to contrast them, in particular with respect to the curse of dimensionality. | train | [
"m2ZN7A35Bz0",
"w4TcgxKsHCK",
"Tb4gmVmdUxf",
"y7aoe0JW0_K",
"9_VzoOnfx7k",
"wEVPlnSGk0E",
"yJnqIGLx4Eo",
"hfjpDM7Krr",
"pxLX_-iNQ2D",
"2wwjBSw8Nc",
"d4cao-7W954"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks authors for addressing many of my concerns. I still have some concerns regarding the applicability of the method, but I will raise my score to weak accept for the novel theoretical contributions.",
"This paper explores the use of finite runtime samplers applied on exponential mechanism (EM). The paper pr... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
4,
2,
-1,
-1,
-1,
-1,
-1,
2,
2,
4
] | [
"y7aoe0JW0_K",
"nips_2021_SbGpYmQHlS8",
"nips_2021_SbGpYmQHlS8",
"w4TcgxKsHCK",
"d4cao-7W954",
"2wwjBSw8Nc",
"Tb4gmVmdUxf",
"pxLX_-iNQ2D",
"nips_2021_SbGpYmQHlS8",
"nips_2021_SbGpYmQHlS8",
"nips_2021_SbGpYmQHlS8"
] |
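The paper above concerns Markov-chain implementations of the exponential mechanism with exact privacy guarantees; that machinery is not reproduced here. For orientation only, this is the textbook finite-output exponential mechanism that such samplers approximate in the continuous case; the function name and defaults are placeholders.

```python
import numpy as np

def exponential_mechanism(scores, eps, sensitivity, rng=np.random.default_rng(0)):
    """Sample an index o with probability proportional to
    exp(eps * scores[o] / (2 * sensitivity))."""
    logits = eps * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()            # shift for numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

print(exponential_mechanism([3.0, 1.0, 0.0], eps=1.0, sensitivity=1.0))
```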
nips_2021_grfI7Rnv5P | The Emergence of Objectness: Learning Zero-shot Segmentation from Videos | Humans can easily detect and segment moving objects simply by observing how they move, even without knowledge of object semantics. Inspired by this, we develop a zero-shot unsupervised approach for learning object segmentations. The model comprises two visual pathways: an appearance pathway that segments individual RGB images into coherent object regions, and a motion pathway that predicts the flow vector for each region between consecutive video frames. The two pathways jointly reconstruct a new representation called segment flow. This decoupled representation of appearance and motion is trained in a self-supervised manner to reconstruct one frame from another. When pretrained on an unlabeled video corpus, the model can be useful for a variety of applications, including 1) primary object segmentation from a single image in a zero-shot fashion; 2) moving object segmentation from a video with unsupervised test-time adaptation; 3) image semantic segmentation by supervised fine-tuning on a labeled image dataset. We demonstrate encouraging experimental results on all of these tasks using pretrained models.
| accept | Reviewers did not reach a consensus for this paper. Most of them like the idea in the paper and consider that the experimental section is comprehensive, but some of the reviewers are not fully convinced by any of the experiments.
There was also considerable discussion over whether the method depends strongly on photographer bias. I am not too concerned about that in training: one can imagine a curriculum of learning where a method like the proposed one is first exposed to videos with photographic bias, and progressively moves to more complex data. Arguably the proposed model may not scale all the way, also since it has simplifying assumptions like a constant-velocity motion model, but it may be able to serve as a good bootstrapping mechanism. There was some criticism in discussion that the model trains on more data than competing methods, but since it is unsupervised, I would not hold that against the method.
I join the reviewers in enjoying the simplicity of the approach and understand that the experiments can be improved, but I think that they are decent already. In any case, I encourage the authors to look carefully into the reviews and improve the experiments following the reviewers' advice, namely trying to understand how good the segments are in multi-object data. I would be curious whether the model still learns on more complex video data, such as ImageNet-Video or larger action recognition datasets. | train | [
"l2jAnu5BuLy",
"M_s9ZAAgrpr",
"BaVjA5sAmH",
"ktTyBMvIlD_",
"q69YnpnUys2",
"lwQDxtrNsvL",
"AxFz8oDKiPu",
"H3yBfZcGpS5",
"V6PR0jFmlXx",
"twaf4MeYXK",
"921d0AO-GY0",
"NEgwjZj6c1",
"BoB9RGWCf9w",
"baqgX1qnapr",
"Eidls4kbQqm",
"TjewF7jqFRx",
"MD3rCrXRQhC",
"taCvxdR86QD",
"J1jAvir_fr_"... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" Dear reviewer,\n\nWe feel that we are unable to provide our results on the synthetic optical flow datasets. Our paper studies unsupervised learning from videos instead of supervised transfer learning from synthetic datasets. The two problems are distinct. Moreover, the FlyingThings dataset is heavily engineered f... | [
-1,
-1,
-1,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"H3yBfZcGpS5",
"H3yBfZcGpS5",
"lwQDxtrNsvL",
"nips_2021_grfI7Rnv5P",
"nips_2021_grfI7Rnv5P",
"q69YnpnUys2",
"J1jAvir_fr_",
"921d0AO-GY0",
"czYvghcptpe",
"q69YnpnUys2",
"NEgwjZj6c1",
"TjewF7jqFRx",
"baqgX1qnapr",
"TjewF7jqFRx",
"q69YnpnUys2",
"ktTyBMvIlD_",
"czYvghcptpe",
"J1jAvir_f... |
nips_2021_rG2ponW2Si | Direct Multi-view Multi-person 3D Pose Estimation | We present Multi-view Pose transformer (MvP) for estimating multi-person 3D poses from multi-view images. Instead of estimating 3D joint locations from costly volumetric representation or reconstructing the per-person 3D pose from multiple detected 2D poses as in previous methods, MvP directly regresses the multi-person 3D poses in a clean and efficient way, without relying on intermediate tasks. Specifically, MvP represents skeleton joints as learnable query embeddings and lets them progressively attend to and reason over the multi-view information from the input images to directly regress the actual 3D joint locations. To improve the accuracy of such a simple pipeline, MvP presents a hierarchical scheme to concisely represent query embeddings of multi-person skeleton joints and introduces an input-dependent query adaptation approach. Further, MvP designs a novel geometrically guided attention mechanism, called projective attention, to more precisely fuse the cross-view information for each joint. MvP also introduces a RayConv operation to integrate the view-dependent camera geometry into the feature representations for augmenting the projective attention. We show experimentally that our MvP model outperforms the state-of-the-art methods on several benchmarks while being much more efficient. Notably, it achieves 92.3% AP25 on the challenging Panoptic dataset, improving upon the previous best approach [35] by 9.8%. MvP is general and also extendable to recovering human meshes represented by the SMPL model, thus useful for modeling multi-person body shapes. Code and models are available at https://github.com/sail-sg/mvp.
| accept | Reviewers agreed this is an interesting paper and assigned scores ranging from 5 to 8. The rebuttal successfully clarified some of the key reviewers’ initial concerns and the ACs reached consensus that this paper can be accepted for publication. Authors are highly encouraged to address the key comments reported by reviewers in the final camera-ready version. | train | [
"6Cqag6Xlmjq",
"DeJDMgMJoD",
"C4UlM080daE",
"ZO5s0R3vRBv",
"2jwqFiCjjdw",
"fzE1aVEh4qI",
"Cs_bDSoFZwW",
"6PULSHUA1oT",
"60Aqq3v5Vt",
"0VQFmdN7KF",
"KMKEnZdgzo",
"-DvYS1XAzD1",
"gfrNM2I4mYe",
"6nasSKOxI2j",
"_zOQJBgRk7_",
"AyjiSl0U-xr",
"hfUeAWJeWH",
"hA6GfV1gtGA",
"twEtZVviH81",
... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
"This work proposes Multi-View Pose transformer (MVP), a novel transformer based technique for multi-person 3D pose estimation via direct regression. There are multiple technical innovations at the core of MVP (1) a nontrivial means to embed joints as learned query features appropriate for a transformer (2) a compu... | [
7,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
8,
7
] | [
4,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
4,
5
] | [
"nips_2021_rG2ponW2Si",
"hfUeAWJeWH",
"2jwqFiCjjdw",
"Cs_bDSoFZwW",
"nips_2021_rG2ponW2Si",
"0VQFmdN7KF",
"AyjiSl0U-xr",
"KMKEnZdgzo",
"-DvYS1XAzD1",
"_zOQJBgRk7_",
"C4UlM080daE",
"-8gRH5pZiy0",
"SxQ43eU7kc",
"twEtZVviH81",
"6Cqag6Xlmjq",
"hA6GfV1gtGA",
"nips_2021_rG2ponW2Si",
"nip... |
nips_2021_y_OmkmCH9w | MST: Masked Self-Supervised Transformer for Visual Representation | Transformer has been widely used for self-supervised pre-training in Natural Language Processing (NLP) and achieved great success. However, it has not been fully explored in visual self-supervised learning. Meanwhile, previous methods only consider high-level features and learn representations from a global perspective, which may fail to transfer to the downstream dense prediction tasks focusing on local features. In this paper, we present a novel Masked Self-supervised Transformer approach named MST, which can explicitly capture the local context of an image while preserving the global semantic information. Specifically, inspired by the Masked Language Modeling (MLM) in NLP, we propose a masked token strategy based on the multi-head self-attention map, which dynamically masks some tokens of local patches without damaging the crucial structure for self-supervised learning. More importantly, the masked tokens together with the remaining tokens are further recovered by a global image decoder, which preserves the spatial information of the image and is more friendly to the downstream dense prediction tasks. The experiments on multiple datasets demonstrate the effectiveness and generality of the proposed method. For instance, MST achieves a Top-1 accuracy of 76.9% with DeiT-S using only 300-epoch pre-training under linear evaluation, which outperforms supervised methods with the same number of epochs by 0.4% and its comparable variant DINO by 1.0%. For dense prediction tasks, MST also achieves 42.7% mAP on MS COCO object detection and 74.04% mIoU on Cityscapes segmentation with only 100-epoch pre-training.
| accept |
MST proposes to combine the task of masked language modeling with instance discrimination.
While adding non-negligible complexity to training, MST demonstrates gains of the order of 2% on various standard SSL tasks (ImageNet linear evaluation, MS-COCO, Cityscapes). Additionally, the reviewers all agreed that the ablation experiments performed during the rebuttal period clarified the importance of the contributions. Overall, it is unclear whether the performance gain fully justifies the extra complexity of the training pipeline.
Given the positive feedback from the reviewers and the good empirical results, I am in favor of acceptance.
| train | [
"Z5c1Q2JWioY",
"Y5U1YCYfLJu",
"f450tvX63N5",
"w-79_Nv86-S",
"UsNhFYXQWmm",
"xegT8CEBjNC",
"8D0R1AKpfW",
"FYQ5xYO8W2J",
"pz6HhpxzW8G",
"2zqf4Rer47b",
"P5EVt8Ub6cz",
"GDQfrbr-Hd",
"2JpVnWnOn3F",
"_j0UPqgqkwa",
"9lH6Q0HX5bj",
"FMSVXb3MYux",
"WBm_zTOJMU",
"h_TSoNhgZD",
"twnMn7tPxgA",... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"... | [
" We conduct the experiment by using pure MLM with DeiT-S under 100 epochs, the result is about 40% with the same experimental configuration. Then we further adjust its learning rate and other hyperparameters, the best result is only 61%, which is far lower than that of the DINO by 10.6% (71.6% in Table 6) and also... | [
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"WBm_zTOJMU",
"nips_2021_y_OmkmCH9w",
"w-79_Nv86-S",
"Y5U1YCYfLJu",
"8D0R1AKpfW",
"nips_2021_y_OmkmCH9w",
"P5EVt8Ub6cz",
"pz6HhpxzW8G",
"2zqf4Rer47b",
"GDQfrbr-Hd",
"2JpVnWnOn3F",
"_j0UPqgqkwa",
"twnMn7tPxgA",
"37zgYXwbqrM",
"FMSVXb3MYux",
"WBm_zTOJMU",
"37zgYXwbqrM",
"Y5U1YCYfLJu"... |
nips_2021_m72s2rDrm3G | Exploiting Opponents Under Utility Constraints in Sequential Games | Recently, game-playing agents based on AI techniques have demonstrated super-human performance in several sequential games, such as chess, Go, and poker. Surprisingly, the multi-agent learning techniques that enabled these achievements do not take into account the actual behavior of the human player, potentially leading to a significant gap in performance. In this paper, we address the problem of designing artificial agents that learn how to effectively exploit unknown human opponents while playing repeatedly against them in an online fashion. We study the case in which the agent's strategy during each repetition of the game is subject to constraints ensuring that the human's expected utility is within some lower and upper thresholds. Our framework encompasses several real-world problems, such as human engagement in repeated game playing and human education by means of serious games. As a first result, we formalize a set of linear inequalities encoding the conditions that the agent's strategy must satisfy at each iteration in order not to violate the given bounds for the human's expected utility. Then, we use such a formulation in an upper confidence bound algorithm, and we prove that the resulting procedure suffers from sublinear regret and guarantees that the constraints are satisfied with high probability at each iteration. Finally, we empirically evaluate the convergence of our algorithm on standard testbeds of sequential games.
| accept | The paper addresses the problem of human engagement in repeated games, developing a novel algorithm that guarantees that payoffs to the human can be kept within particular bounds over time with high probability. All reviewers agree that the problem is interesting and relevant. The topic is particularly important as the number of applications of human-computer interaction rises.
After the response there was some discussion. There are a few problems with the paper in its current form: (a) the problem is not well-motivated, (b) the contribution is mainly theoretical and not practically tied to human engagement since the "human strategy" is fixed, (c) the relationship between human engagement and utilities being within a certain range is not firmly backed by real evidence, and (d) the experimental results are not well-matched to the main problem being motivated by the paper (i.e. no evidence of actual increased human engagement).
The reviewers appreciated the thorough responses and were satisfied by the clarification of technical points. Three of the four reviewers agreed post-response that (a) is easily fixed in a final copy. This is also true for (c) by adding (and perhaps discussing) the references mentioned in the responses more prominently. (b) and (d) are valid outstanding criticisms, which make the impact unclear at this point. But, a proper theoretical investigation on a fixed policy does act as an important stepping stone toward these eventual goals. Taking this into account, combined with the novelty and interest this paper could generate, the positives outweigh the shortcomings. Still, it is important to take all the critical feedback strongly into consideration when revising the paper.
| train | [
"nb4mHN2gNSM",
"4FBjsCdN87U",
"stYkMXx-vhd",
"PC9313wwy9K",
"i0KmuF6Jztz",
"rAT9mSd1NTs",
"XUHAYoR1yEO",
"PAUw_K39-K"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes EXO-UCB that achieves sublinear regret in 2-player games while ensuring an opponent's utility remains in a specified interval. The motivation lies in applications such as serious games where human users need to be engaged while participating in some task with an AI and may drop off if their uti... | [
6,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"nips_2021_m72s2rDrm3G",
"PAUw_K39-K",
"nb4mHN2gNSM",
"XUHAYoR1yEO",
"rAT9mSd1NTs",
"nips_2021_m72s2rDrm3G",
"nips_2021_m72s2rDrm3G",
"nips_2021_m72s2rDrm3G"
] |
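The abstract above encodes the utility constraints as linear inequalities over the agent's strategy. Below is a minimal one-shot (normal-form) sketch of that constrained optimization step; the sequential-game structure and the UCB exploration from the paper are omitted, and all names are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def constrained_strategy(agent_u, human_u, lo, hi):
    """Best agent mixed strategy x maximizing agent_u . x subject to
    lo <= human_u . x <= hi and x being a probability vector."""
    human_u = np.asarray(human_u, dtype=float)
    n = len(agent_u)
    res = linprog(
        c=-np.asarray(agent_u, dtype=float),   # linprog minimizes, so negate
        A_ub=np.vstack([human_u, -human_u]),   # human_u.x <= hi and human_u.x >= lo
        b_ub=[hi, -lo],
        A_eq=np.ones((1, n)), b_eq=[1.0],
        bounds=[(0.0, 1.0)] * n,
    )
    return res.x

print(constrained_strategy([1.0, 0.0], [0.2, 0.8], lo=0.5, hi=0.7))  # -> [0.5, 0.5]
```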
nips_2021_9SD2Rb3NiWu | A Compositional Atlas of Tractable Circuit Operations for Probabilistic Inference | Circuit representations are becoming the lingua franca to express and reason about tractable generative and discriminative models. In this paper, we show how complex inference scenarios for these models that commonly arise in machine learning---from computing the expectations of decision tree ensembles to information-theoretic divergences of sum-product networks---can be represented in terms of tractable modular operations over circuits. Specifically, we characterize the tractability of simple transformations---sums, products, quotients, powers, logarithms, and exponentials---in terms of sufficient structural constraints of the circuits they operate on, and present novel hardness results for the cases in which these properties are not satisfied. Building on these operations, we derive a unified framework for reasoning about tractable models that generalizes several results in the literature and opens up novel tractable inference scenarios.
| accept | Very interesting, solid work with potential impact. Please take all comments into consideration. | test | [
"tJgFug5TlY",
"Q9NQG89ileV",
"oRXw4D_hq5",
"WcVgBcaNLpV",
"LllhzcKmnd_",
"c8eh9Zs1nl2",
"futu4DFPbwV",
"1qOVseEsR9w"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a general framework to trace the tractability of complex queries involving circuit representations in a unified manner over model classes and query classes, and to automatically distill tractable inference functions. Using the framework, the paper unifies and generalizes existing inference al... | [
7,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
2,
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"nips_2021_9SD2Rb3NiWu",
"1qOVseEsR9w",
"tJgFug5TlY",
"futu4DFPbwV",
"c8eh9Zs1nl2",
"nips_2021_9SD2Rb3NiWu",
"nips_2021_9SD2Rb3NiWu",
"nips_2021_9SD2Rb3NiWu"
] |
nips_2021_FyaSaEbNm1W | Demystifying and Generalizing BinaryConnect | BinaryConnect (BC) and its many variations have become the de facto standard for neural network quantization. However, our understanding of the inner workings of BC is still quite limited. We attempt to close this gap in four different aspects: (a) we show that existing quantization algorithms, including post-training quantization, are surprisingly similar to each other; (b) we argue for proximal maps as a natural family of quantizers that is both easy to design and analyze; (c) we refine the observation that BC is a special case of dual averaging, which itself is a special case of the generalized conditional gradient algorithm; (d) consequently, we propose ProxConnect (PC) as a generalization of BC and we prove its convergence properties by exploiting the established connections. We conduct experiments on CIFAR-10 and ImageNet, and verify that PC achieves competitive performance.
| accept | After the discussion, all reviewers recommend acceptance (7). For example, one reviewer emphasized "the submission made a non-trivial contribution upon existing works (Li et al., Bai et al.), in the sense that it formally shows the convergence for BinaryConnect for non-convex problems." I find this alone sufficiently novel. Also, the unification and extension of previously suggested algorithms are quite interesting, and the empirical results are not bad for a theoretical paper. The only concern arising from the reviews seems to be that the paper is dense, and I hope the authors can improve readability, if possible. | val | [
"r-e_eobTnWR",
"7-EcbYufoVx",
"9v-6FWjHldr",
"72h519LMWuL",
"Vb0JpLwPwr2",
"4aFuZZkDoLD",
"ZbINtMgci2Z",
"5t9Y5UmIlO1"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" We thank the reviewer for their active participation in the review process and appreciate their dedication to spending more time on relevant literature.",
" We thank the reviewer for their active participation in the review process and we are more than happy to include the requested discussion in the revised ve... | [
-1,
-1,
7,
7,
-1,
-1,
-1,
7
] | [
-1,
-1,
3,
4,
-1,
-1,
-1,
3
] | [
"9v-6FWjHldr",
"72h519LMWuL",
"nips_2021_FyaSaEbNm1W",
"nips_2021_FyaSaEbNm1W",
"9v-6FWjHldr",
"5t9Y5UmIlO1",
"72h519LMWuL",
"nips_2021_FyaSaEbNm1W"
] |
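Since the abstract above views BinaryConnect as dual averaging with a proximal-map quantizer, it may help to recall the vanilla BinaryConnect update it generalizes. This is a sketch of standard BC, not of the paper's ProxConnect variant; grad_fn and the learning rate are placeholders.

```python
import numpy as np

def binaryconnect_step(w_real, grad_fn, lr=0.01):
    """One BinaryConnect step: evaluate gradients at the binarized weights,
    but accumulate the update in the latent real-valued weights."""
    w_bin = np.sign(w_real)                     # quantizer: project onto {-1, +1}
    g = grad_fn(w_bin)                          # gradient at the quantized point
    return np.clip(w_real - lr * g, -1.0, 1.0)  # clip latent weights as in BC
```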
nips_2021_kwOjbvNyM-K | CARMS: Categorical-Antithetic-REINFORCE Multi-Sample Gradient Estimator | Accurately backpropagating the gradient through categorical variables is a challenging task that arises in various domains, such as training discrete latent variable models. To this end, we propose CARMS, an unbiased estimator for categorical random variables based on multiple mutually negatively correlated (jointly antithetic) samples. CARMS combines REINFORCE with copula based sampling to avoid duplicate samples and reduce its variance, while keeping the estimator unbiased using importance sampling. It generalizes both the ARMS antithetic estimator for binary variables, which is CARMS for two categories, as well as LOORF/VarGrad, the leave-one-out REINFORCE estimator, which is CARMS with independent samples. We evaluate CARMS on several benchmark datasets on a generative modeling task, as well as a structured output prediction task, and find it to outperform competing methods including a strong self-control baseline. The code is publicly available.
| accept | The submission proposes a low-variance gradient estimator for discrete random variables, extending recent work from binary to categorical variables. Whilst the reviewers raised concerns around the marginal improvement over existing methods, all agreed that the submission is clearly written and is a theoretically grounded, novel extension of ARMS. Overall, this is a good contribution to NeurIPS and should be accepted. | train | [
"ZN5AGJ0QvWQ",
"mk_8QiPPAL3",
"vuBm75KcdhN",
"umfcoLiCVV5",
"ZHkBJtif3WK",
"Tml3VeYUal4",
"W96cqeEhKsG",
"0VhhvrY_3Ok",
"UQXVk1tL9md",
"4JaF6Bs5-EW",
"QSTd1OcGt-N",
"fnrGm4CNME",
"KnRNyDzBsOn"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely appreciate your taking the time to read our response and providing additional feedback! We will add these details to the paper to strengthen the clarity. ",
" Thanks for a really detailed response to my review. I particularly appreciate the time taken to help me understand the requirements of the c... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"mk_8QiPPAL3",
"W96cqeEhKsG",
"Tml3VeYUal4",
"fnrGm4CNME",
"nips_2021_kwOjbvNyM-K",
"4JaF6Bs5-EW",
"KnRNyDzBsOn",
"fnrGm4CNME",
"ZHkBJtif3WK",
"QSTd1OcGt-N",
"nips_2021_kwOjbvNyM-K",
"nips_2021_kwOjbvNyM-K",
"nips_2021_kwOjbvNyM-K"
] |
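The abstract above notes that CARMS reduces to LOORF/VarGrad when the samples are independent. That special case is small enough to sketch (function names are ours); the copula-based antithetic sampling and importance weights that define CARMS proper are not shown.

```python
import numpy as np

def loorf_gradient(logits, f, k=8, rng=np.random.default_rng(0)):
    """Leave-one-out REINFORCE (LOORF/VarGrad) gradient of
    E_{z ~ Cat(softmax(logits))}[f(z)] w.r.t. the logits, from k i.i.d. samples."""
    p = np.exp(logits - np.max(logits))
    p /= p.sum()
    z = rng.choice(len(logits), size=k, p=p)       # k independent samples
    fz = np.array([f(zi) for zi in z], dtype=float)
    grad = np.zeros_like(p)
    for i in range(k):
        baseline = (fz.sum() - fz[i]) / (k - 1)    # leave-one-out mean as baseline
        score = -p.copy()                          # d log p(z_i) / d logits
        score[z[i]] += 1.0
        grad += (fz[i] - baseline) * score
    return grad / k

print(loorf_gradient(np.zeros(3), lambda z: float(z == 2)))
```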
nips_2021_6p2jG0FJ5j | Learning to Learn Dense Gaussian Processes for Few-Shot Learning | Gaussian processes with deep neural networks have proven to be strong learners for few-shot learning, since they combine the strengths of deep learning and kernels while capturing uncertainty well. However, it remains an open problem to leverage the shared knowledge provided by related tasks. In this paper, we propose to learn Gaussian processes with dense inducing variables by meta-learning for few-shot learning. In contrast to sparse Gaussian processes, we define a set of dense inducing variables to be of a much larger size than the support set in each task, which collects prior knowledge from experienced tasks. The dense inducing variables specify a shared Gaussian process prior over prediction functions of all tasks, which are learned in a variational inference framework and offer a strong inductive bias for learning new tasks. To achieve task-specific prediction functions, we propose to adapt the inducing variables to each task by efficient gradient descent. We conduct extensive experiments on common benchmark datasets for a variety of few-shot learning tasks. Our dense Gaussian processes present significant improvements over vanilla Gaussian processes and achieve comparable or even better performance than state-of-the-art methods.
| accept | The reviewers all agree that the idea of using a dense set of inducing points to share information across tasks in the few-shot setting is novel and interesting. The discussions yielded a number of clarifications as well as new experiments on few-shot regression and uncertainty quantification. Please be sure to add these to the final draft. | val | [
"KjoGCh81lA",
"yFrw-orxjaw",
"KDAZEY_2G8t",
"pMv5qBT8n7g",
"dxHt8POU72",
"kp0BRcxhIki",
"p-T6tcvpxdD",
"CdK6FXmxwc",
"oN-eV0eLn7f",
"FpOU9qlgxpH",
"0w-9k1wkdy",
"BWryXkFbiW9",
"iXR_2fCNahN",
"QJl3rbZ70yI",
"idkWBBTF9X2",
"jkXK2s-zfiX",
"E_KPKVpDGIs",
"E5-2fTXf-0R",
"hv9p8YDPQq0"
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"... | [
" Thank you for the response! I would keep my score.",
" I appreciate the author's response which resolved most of my concerns. I keep my score intact.",
" Thanks for your further comment. \n\nThe temperature here is not the same temperature adopted in calibrating the uncertainty quantification, which does not ... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"iXR_2fCNahN",
"idkWBBTF9X2",
"dxHt8POU72",
"p-T6tcvpxdD",
"CdK6FXmxwc",
"nips_2021_6p2jG0FJ5j",
"CdK6FXmxwc",
"FpOU9qlgxpH",
"nips_2021_6p2jG0FJ5j",
"QJl3rbZ70yI",
"BWryXkFbiW9",
"jkXK2s-zfiX",
"E_KPKVpDGIs",
"kp0BRcxhIki",
"E5-2fTXf-0R",
"hv9p8YDPQq0",
"nips_2021_6p2jG0FJ5j",
"ni... |
nips_2021_x5hh6N9bUUb | Stochastic Solutions for Linear Inverse Problems using the Prior Implicit in a Denoiser | Deep neural networks have provided state-of-the-art solutions for problems such as image denoising, which implicitly rely on a prior probability model of natural images. Two recent lines of work – Denoising Score Matching and Plug-and-Play – propose methodologies for drawing samples from this implicit prior and using it to solve inverse problems, respectively. Here, we develop a parsimonious and robust generalization of these ideas. We rely on a classic statistical result that shows the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density. We use this to derive a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind denoising. A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any deterministic linear inverse problem, with no additional training, thus extending the power of supervised learning for denoising to a much broader set of problems. The algorithm relies on minimal assumptions and exhibits robust convergence over a wide range of parameter choices. To demonstrate the generality of our method, we use it to obtain state-of-the-art levels of unsupervised performance for deblurring, super-resolution, and compressive sensing.
| accept | This paper provides an original way to solve linear inverse problems when given a denoiser. The method uses forward calls to the denoiser in a way that leverages the signal prior implicit in it. The paper establishes strong quantitative and qualitative recovery performance. In the camera-ready version of the paper, the authors should make the modifications discussed with the reviewers. In particular, the authors should clarify the relationship between the proposed method and Langevin Dynamics. The authors say the method draws "high probability samples"; the authors should clarify what these terms mean and whether or not it is the same as directly sampling from the relevant distribution. | train | [
"FpBuYOPRkz",
"7Vl3s_wNeB",
"irYnZMsskDW",
"AitVFp3zrKG",
"xyp-ANTLYOS",
"uj8jzlCvETf",
"GsvscCRK5ua",
"M-RhYT0L4uD",
"EyH6tcVcEm",
"k30PEmmBmL",
"3B06cJO3i2Z",
"q07hpzwJ_id"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors proposed a stochastic coarse-to-fine gradient ascent method for drawing high-probability samples from the implicit prior to denoise the noisy image and generalizes it to more broad linear inverse problems. The idea is quite interesting, i.e., building a plug-and-play denoiser to inverse problems.\nHow... | [
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
5,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_x5hh6N9bUUb",
"EyH6tcVcEm",
"xyp-ANTLYOS",
"nips_2021_x5hh6N9bUUb",
"uj8jzlCvETf",
"k30PEmmBmL",
"q07hpzwJ_id",
"AitVFp3zrKG",
"FpBuYOPRkz",
"3B06cJO3i2Z",
"nips_2021_x5hh6N9bUUb",
"nips_2021_x5hh6N9bUUb"
] |
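A heavily simplified sketch of the coarse-to-fine procedure the abstract above describes, built on the classical identity it cites for additive Gaussian noise, x_hat(y) = y + sigma^2 * grad_y log p(y); the initialization, the schedule constants (h, beta), and the stand-in denoiser are our assumptions, not the paper's exact settings.

```python
import numpy as np

def ascend_implicit_prior(denoiser, shape, h=0.2, beta=0.5, sigma_stop=0.01,
                          rng=np.random.default_rng(0)):
    """Coarse-to-fine stochastic ascent driven by a blind denoiser's residual,
    which is proportional to the score of the noisy density."""
    y = rng.normal(0.5, 1.0, size=shape)                # placeholder initialization
    while True:
        d = denoiser(y) - y                             # residual ~ sigma^2 * score
        sigma = float(np.sqrt(np.mean(d ** 2)))         # effective remaining noise level
        if sigma <= sigma_stop:
            return y
        gamma = sigma * np.sqrt((1 - beta * h) ** 2 - (1 - h) ** 2)
        y = y + h * d + gamma * rng.normal(size=shape)  # partial step plus fresh noise

# Stand-in "denoiser" that shrinks toward zero; a trained blind CNN goes here.
x = ascend_implicit_prior(lambda y: 0.9 * y, shape=(8, 8))
```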
nips_2021_hPgy4_gpbmU | Towards Stable and Robust AdderNets | Adder neural network (AdderNet) replaces the massive multiplications in the original convolutions with cheap additions while achieving comparable performance, thus yielding a series of energy-efficient neural networks. Compared with convolutional neural networks (CNNs), the training of AdderNets is much more sophisticated, including several techniques for adjusting gradients and batch normalization. In addition, the variances of both weights and activations in the resulting adder networks are very large, which limits their performance and their potential for application to other tasks. To enhance the stability and robustness of AdderNets, we first thoroughly analyze the variance estimation of weight parameters and output features of an arbitrary adder layer. Then, we develop a weight normalization scheme for adaptively optimizing the weight distribution of AdderNets during the training procedure, which can reduce the perturbation on running mean and variance in batch normalization layers. Meanwhile, the proposed weight normalization can also be utilized to enhance the adversarial robustness of resulting networks. Experiments conducted on several benchmarks demonstrate the superiority of the proposed approach for generating AdderNets with higher performance.
| accept | This paper focuses on enhancing the stability of AdderNets for performance improvement in downstream tasks. The proposal is adaptive weight normalization (AWN), which optimizes adder weight distributions adaptively. The philosophy behind it sounds quite interesting to me, namely that batch normalization layers can automatically eliminate perturbations. This philosophy leads to a simple yet effective inference algorithm design that I have not seen before.
The clarity and novelty are clearly above the bar of NeurIPS. While the reviewers had some concerns about the significance, the authors did a particularly good job in their rebuttal. Thus, all of us have agreed to accept this paper for publication! Please include the additional experimental results and merge the reviewers' comments in the next version. | train | [
"jCw6wntE5n1",
"f76SiIdPmBE",
"0XOBFS3VFfi",
"pHlZXGNQXO",
"vS7chG27FAi",
"FSNdlDPF2Wc",
"oME806LDA3c",
"3FYVSLe222R"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your constructive comments and support.\n\n**The learning rate of trainable parameters in AWN is rescaled by a hyper-parameter 1e-5. How is this parameter determined? Are there more ablation studies for the impact of this hyper-parameter?**\n\nAWN is quite sensitive to this hyper-parameter since it dir... | [
-1,
-1,
-1,
-1,
6,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"3FYVSLe222R",
"oME806LDA3c",
"FSNdlDPF2Wc",
"vS7chG27FAi",
"nips_2021_hPgy4_gpbmU",
"nips_2021_hPgy4_gpbmU",
"nips_2021_hPgy4_gpbmU",
"nips_2021_hPgy4_gpbmU"
] |
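For readers unfamiliar with the adder operation the abstract above builds on: an adder layer swaps the dot product for a negative L1 distance, so that only additions and subtractions remain. This is a dense-layer sketch of that known operation; the paper's weight-normalization scheme itself is not reproduced.

```python
import numpy as np

def adder_layer(x, w):
    """Adder counterpart of a dense layer: output[b, j] = -sum_i |x[b, i] - w[i, j]|.
    x: (batch, d_in), w: (d_in, d_out) -> (batch, d_out)."""
    return -np.abs(x[:, :, None] - w[None, :, :]).sum(axis=1)

x = np.ones((2, 3))
w = np.zeros((3, 4))
print(adder_layer(x, w))   # every entry is -3: three |1 - 0| terms per output
```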
nips_2021_nYz2_BbZnYk | Representing Long-Range Context for Graph Neural Networks with Global Attention | Graph neural networks are powerful architectures for structured datasets. However, current methods struggle to represent long-range dependencies. Scaling the depth or width of GNNs is insufficient to broaden receptive fields as larger GNNs encounter optimization instabilities such as vanishing gradients and representation oversmoothing, while pooling-based approaches have yet to become as universally useful as in computer vision. In this work, we propose the use of Transformer-based self-attention to learn long-range pairwise relationships, with a novel “readout” mechanism to obtain a global graph embedding. Inspired by recent computer vision results that find position-invariant attention performant in learning long-range relationships, our method, which we call GraphTrans, applies a permutation-invariant Transformer module after a standard GNN module. This simple architecture leads to state-of-the-art results on several graph classification tasks, outperforming methods that explicitly encode graph structure. Our results suggest that purely-learning-based approaches without graph structure may be suitable for learning high-level, long-range relationships on graphs. Code for GraphTrans is available at https://github.com/ucbrise/graphtrans.
| accept | The majority of the reviewers recommend accepting this paper (3 of 4).
The only reviewer not recommending acceptance did not properly engage in discussion and the authors responded to their concerns.
The AC recommends acceptance.
| train | [
"BXXVSw-jDl",
"KNOZ7fBWDd5",
"PmEcukoVXj_",
"PmebS7JCHRs",
"7usF2S3uoZG",
"icTNr20yltN",
"tJT4mq3Z9xg",
"s8t7Mbl-Jtr",
"VpeQJDWtdXT",
"pjcVFVLNe7a",
"SjFuOJsw1I",
"0Uiz5o6Jvd2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Current GNN methods fail at utilizing long-range dependencies in graphs due to over smoothing. However, in some cases, long-range information might be useful. Inspired by the success of transformers in computer vision tasks, this paper tackles the challenge of learning long-range dependencies in GNNs by simply aug... | [
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"nips_2021_nYz2_BbZnYk",
"BXXVSw-jDl",
"nips_2021_nYz2_BbZnYk",
"BXXVSw-jDl",
"0Uiz5o6Jvd2",
"0Uiz5o6Jvd2",
"BXXVSw-jDl",
"PmEcukoVXj_",
"nips_2021_nYz2_BbZnYk",
"SjFuOJsw1I",
"nips_2021_nYz2_BbZnYk",
"nips_2021_nYz2_BbZnYk"
] |
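The abstract above describes a GNN module followed by a permutation-invariant Transformer whose special <CLS> token serves as the graph readout. A minimal PyTorch sketch of that wiring is below; the gnn argument is any node-embedding module (a placeholder), and the sizes and training details are our assumptions.

```python
import torch
import torch.nn as nn

class GNNThenTransformer(nn.Module):
    """Per-node GNN features -> Transformer without positional encodings
    (hence permutation-invariant) -> <CLS> token as the graph embedding."""
    def __init__(self, gnn, d_model=128, nhead=4, nlayers=2, n_classes=10):
        super().__init__()
        self.gnn = gnn                                   # any node-embedding GNN
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x, edge_index):
        h = self.gnn(x, edge_index)                      # (n_nodes, d_model)
        seq = torch.cat([self.cls, h.unsqueeze(0)], 1)   # prepend CLS token
        out = self.encoder(seq)                          # global pairwise attention
        return self.head(out[:, 0])                      # read out from CLS

model = GNNThenTransformer(gnn=lambda x, e: x)           # identity stand-in GNN
logits = model(torch.randn(5, 128), edge_index=None)
```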
nips_2021_AVvcLO2UYGA | Beyond Bandit Feedback in Online Multiclass Classification | Dirk van der Hoeven, Federico Fusco, Nicolò Cesa-Bianchi | accept | This paper studies a new problem setting of online multiclass linear classification with graph-structured feedback. The reviewers agree that it is of interest to the online learning research community. In summary, the paper:
- proposes the Gappletron algorithm that can establish regret guarantees against a wide range of surrogate loss functions;
- proves high probability regret bounds that are new
- gives a new \sqrt{T} regret lower bound
- gives improved experimental results against prior art
The reviewers also pointed out two important future directions: (1) studying the regret's fundamental dependence on the graph independence number \rho (e.g., by showing lower bounds in terms of \rho); and (2) determining whether the results can be extended to the commonly-used hinge losses. The reviewers would also like the authors to incorporate their rebuttal on the comparison with prior art into the final version. | train | [
"bPZJSCF5mJi",
"mrwLT0RsHLZ",
"Tm9Qb0DjfUf",
"u_9UUNGe1vE",
"HP0jSdPsnHG",
"NOlQjY1zYvD",
"X5Jxcc3hZmU",
"sP2EU1upewO",
"cHGDx3vo5ei",
"7eCH3ZPUDVc"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the problem of active online multiclass classification. As opposed to previous works, this paper introduces the additional complexity of gathering data via a feedback graph, as opposed to past works that consider either bandit feedback or the full information case. The authors present an algorit... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
2,
1
] | [
"nips_2021_AVvcLO2UYGA",
"HP0jSdPsnHG",
"nips_2021_AVvcLO2UYGA",
"NOlQjY1zYvD",
"bPZJSCF5mJi",
"Tm9Qb0DjfUf",
"cHGDx3vo5ei",
"nips_2021_AVvcLO2UYGA",
"nips_2021_AVvcLO2UYGA",
"nips_2021_AVvcLO2UYGA"
] |
nips_2021_0xs40KGnsq3 | Learning Student-Friendly Teacher Networks for Knowledge Distillation | We propose a novel knowledge distillation approach to facilitate the transfer of dark knowledge from a teacher to a student. Contrary to most of the existing methods that rely on effective training of student models given pretrained teachers, we aim to learn the teacher models that are friendly to students and, consequently, more appropriate for knowledge transfer. In other words, at the time of optimizing a teacher model, the proposed algorithm learns the student branches jointly to obtain student-friendly representations. Since the main goal of our approach lies in training teacher models and the subsequent knowledge distillation procedure is straightforward, most of the existing knowledge distillation methods can adopt this technique to improve the performance of diverse student models in terms of accuracy and convergence speed. The proposed algorithm demonstrates outstanding accuracy in several well-known knowledge distillation techniques with various combinations of teacher and student models even in the case that their architectures are heterogeneous and there is no prior knowledge about student models at the time of training teacher networks
| accept | Reviewers are in broad agreement, finding the paper below the bar for NeurIPS. Reviewers found the problem itself interesting, and the experiments reasonably thorough. Criticisms centered around issues of novelty, efficiency, and additional components that should be added to the experiments to make them more convincing. None of these individually seemed like deal-breakers, but collectively put the paper somewhat below the threshold. | train | [
"DS6KWpFQUiL",
"SrdMvCqRBq9",
"SIQG_aVvjK",
"3xskuNSC5F3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a simple method to facilitate knowledge transfer from a teacher to a student. Unlike traditional knowledge distillation methodology that aims to effectively train student networks given pre-trained teachers, the goal of the work is training teachers which are friendly to knowledge transfer to s... | [
4,
5,
4,
5
] | [
4,
4,
4,
4
] | [
"nips_2021_0xs40KGnsq3",
"nips_2021_0xs40KGnsq3",
"nips_2021_0xs40KGnsq3",
"nips_2021_0xs40KGnsq3"
] |
nips_2021_x4t0fxWPNdi | Implicit Transformer Network for Screen Content Image Continuous Super-Resolution | Nowadays, there is an explosive growth of screen contents due to the wide application of screen sharing, remote cooperation, and online education. To match the limited terminal bandwidth, high-resolution (HR) screen contents may be downsampled and compressed. At the receiver side, the super-resolution (SR) of low-resolution (LR) screen content images (SCIs) is highly demanded by the HR display or by the users to zoom in for detail observation. However, image SR methods mostly designed for natural images do not generalize well for SCIs due to the very different image characteristics as well as the requirement of SCI browsing at arbitrary scales. To this end, we propose a novel Implicit Transformer Super-Resolution Network (ITSRN) for SCI SR. For high-quality continuous SR at arbitrary ratios, pixel values at query coordinates are inferred from image features at key coordinates by the proposed implicit transformer, and an implicit position encoding scheme is proposed to aggregate similar neighboring pixel values to the query one. We construct benchmark SCI1K and SCI1K-compression datasets with LR and HR SCI pairs. Extensive experiments show that the proposed ITSRN significantly outperforms several competitive continuous and discrete SR methods for both compressed and uncompressed SCIs.
| accept | The paper proposes a new model for image super-resolution, which builds on an implicit transformer, to address the resolution enhancement problem in the specific settings of screen-content images. The performance of the model has been extensively studied by the authors and improved through the active discussion with the reviewers. The results are quite convincing for the specific SR task under consideration, which has not been commonly studied so far. | train | [
"Jocy8S8wgEa",
"9itPjm3PPB",
"_XkdhAjUSJM",
"qVEStyDxcio",
"TemeUGTAAmu",
"_eDBk_z52qU",
"mU_tCp25pFX",
"jtqn7sqUwp1",
"T3LvlS_ZyF-",
"1ThMRN78_S3",
"oM1iQbXDDAz",
"-MvT-h3KA0",
"oVBePVh1hWC",
"FGAxiFaBTz0",
"cuohhlswE4p"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a transformer-based image super-resolution method for screen content images which have many thin edges. The transformer is designed to learn the mapping from coordinates to rgb values. It additionally has a scale token representing the magnification factor. Due to targeting a new application, th... | [
7,
-1,
-1,
5,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_x4t0fxWPNdi",
"1ThMRN78_S3",
"T3LvlS_ZyF-",
"nips_2021_x4t0fxWPNdi",
"-MvT-h3KA0",
"oVBePVh1hWC",
"nips_2021_x4t0fxWPNdi",
"oM1iQbXDDAz",
"cuohhlswE4p",
"Jocy8S8wgEa",
"FGAxiFaBTz0",
"qVEStyDxcio",
"mU_tCp25pFX",
"nips_2021_x4t0fxWPNdi",
"nips_2021_x4t0fxWPNdi"
] |
nips_2021_WAO1STUPWPP | Channel Permutations for N:M Sparsity | We introduce channel permutations as a method to maximize the accuracy of N:M sparse networks. N:M sparsity requires N out of M consecutive elements to be zero and has been shown to maintain accuracy for many models and tasks with a simple prune and fine-tune workflow. By permuting weight matrices along their channel dimension and adjusting the surrounding layers appropriately, we demonstrate accuracy recovery for even small, parameter-efficient networks, without affecting inference run-time. We also present both a quality metric to simplify judging permutations and efficient methods to search for high-quality permutations, including two optimizations to escape local minima. Finally, we share an ablation study to show the importance of each part of our search algorithm, experimental results showing correlation between our quality metric and final network accuracy, improved sparse network accuracy using our techniques with insignificant overhead to training time, and the transformation of unstructured to structured sparse workloads. Code to use these techniques when generating a 2:4 sparse network is available at https://github.com/NVIDIA/apex/tree/master/apex/contrib/sparsity.
| accept | This paper suggests an interesting permutation-based method which closes the (sometimes small, but significant) performance gap in 2:4 fine-grained sparsity. The reviewers were all positive, and it seems most of their clarity-related concerns were addressed in the authors' response and the following discussion. I ask the authors to address all these issues in the camera-ready version to improve the readability of the paper and publish the code, as promised. | train | [
"dXQv7hRr1G",
"WH_Bo2e7Rnh",
"PuLDfOBvHn",
"l2JXrmYqPcS",
"B2Yw8CzW5fc",
"F2SKxOCo9sD",
"8vLis2GS4JZ",
"dUpZ1sISGOZ",
"sfvo3oEOWY2",
"vBcAD0r4hlU",
"yX_wIWjsnJ",
"7bjEOBVJa0T",
"-O3Ofq-Gal3",
"5IDvtzoTvyo",
"S98y70YCTfk",
"ATDR-T8rYLS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks - this response helped to clear up some concerns. I've slightly bumped up my original rating to reflect this --- and am looking forward to the final version.",
"Recent nVidia chips support hardware acceleration of structured sparsity patterns in weight matrices — specifically 2:4 sparsity where consecut... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"yX_wIWjsnJ",
"nips_2021_WAO1STUPWPP",
"B2Yw8CzW5fc",
"-O3Ofq-Gal3",
"F2SKxOCo9sD",
"8vLis2GS4JZ",
"dUpZ1sISGOZ",
"vBcAD0r4hlU",
"ATDR-T8rYLS",
"S98y70YCTfk",
"WH_Bo2e7Rnh",
"ATDR-T8rYLS",
"5IDvtzoTvyo",
"nips_2021_WAO1STUPWPP",
"nips_2021_WAO1STUPWPP",
"nips_2021_WAO1STUPWPP"
] |
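The quality metric in the abstract above has a very compact core: how much weight magnitude survives 2:4 pruning after a given channel permutation. A sketch of that metric plus a naive random search follows; the paper's efficient search and its escapes from local minima are not shown, and the names are ours.

```python
import numpy as np

def magnitude_kept_2to4(w, perm):
    """Total |weight| kept when, within every group of 4 consecutive permuted
    input channels, only the 2 largest-magnitude entries per row survive."""
    wp = np.abs(w[:, perm])                      # permute input channels
    groups = wp.reshape(wp.shape[0], -1, 4)      # rows x groups x 4
    return np.sort(groups, axis=-1)[..., 2:].sum()

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))
base = magnitude_kept_2to4(w, np.arange(16))     # identity permutation
best = max(magnitude_kept_2to4(w, rng.permutation(16)) for _ in range(200))
print(base, best)                                # random search often beats identity
```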
nips_2021_1fr3bOX2t69 | Curriculum Learning for Vision-and-Language Navigation | Vision-and-Language Navigation (VLN) is a task where an agent navigates in an embodied indoor environment under human instructions. Previous works ignore the distribution of sample difficulty and we argue that this potentially degrades their agent performance. To tackle this issue, we propose a novel curriculum-based training paradigm for VLN tasks that can balance human prior knowledge and agent learning progress about training samples. We develop the principle of curriculum design and re-arrange the benchmark Room-to-Room (R2R) dataset to make it suitable for curriculum training. Experiments show that our method is model-agnostic and can significantly improve the performance, the generalizability, and the training efficiency of current state-of-the-art navigation agents without increasing model complexity.
| accept | This work introduces a simple and intuitive curriculum approach to VLN based on the compositional nature of instructions, therefore focusing first on single-room paths before multi-room ones, and so on. The approach is then tested on multiple baselines for both R2R and RxR and shows consistent improvements. The results are based on validation performance (not unseen test) and focus on older models, which are substantially weaker than the current SotA on these domains. This raises a natural question about the applicability of this approach to more contemporary models. | train | [
"9SKzTtBLx9",
"SGa2yooKoil",
"8ZTPvzQlG4D",
"Osle2UYERSN",
"NSIejKiSVI",
"ijO4j0LHGrQ",
"9AZmGKJqdpv",
"x_Z3U7HjADZ",
"3A57bpeVrU",
"KcPAmFiN6RX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper introduces a new curriculum learning scheme for vision-and-language navigation (more particularly, the R2R dataset), where the navigation agent learns from easier to harder paths based on the number of traversed rooms. The curriculum learning scheme improves the performance of multiple baseline models o... | [
6,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_1fr3bOX2t69",
"nips_2021_1fr3bOX2t69",
"9AZmGKJqdpv",
"nips_2021_1fr3bOX2t69",
"x_Z3U7HjADZ",
"KcPAmFiN6RX",
"SGa2yooKoil",
"Osle2UYERSN",
"9SKzTtBLx9",
"nips_2021_1fr3bOX2t69"
] |
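As a generic illustration of the easy-to-hard schedule described above (single-room before multi-room), here is one common curriculum arrangement; the paper's actual self-paced schedule may differ, and the names are placeholders.

```python
def curriculum_rounds(samples, difficulty, n_rounds=3):
    """Sort samples by a difficulty key (e.g., rooms traversed) and emit
    cumulative easy-to-hard training rounds."""
    ordered = sorted(samples, key=difficulty)
    step = -(-len(ordered) // n_rounds)              # ceiling division
    return [ordered[: step * (i + 1)] for i in range(n_rounds)]

rounds = curriculum_rounds(range(10), difficulty=lambda s: s % 4)
print([len(r) for r in rounds])                      # [4, 8, 10]
```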
nips_2021_HEVfOwxrmQh | Better Algorithms for Individually Fair $k$-Clustering | Maryam Negahbani, Deeparnab Chakrabarty | accept | The authors consider the problem of $\ell_p$-objective (e.g., k-median, k-means) centre-based clustering with fairness constraints. The notion of fairness is that no point v should be at a distance of more than r(v) from its respective centre (where there are some conditions on r(v) to ensure feasibility). This notion of fairness was introduced in prior work and the authors' main contribution is algorithmic. The algorithmic techniques, while standard, do improve on the (very recent) state of the art; thus the paper could be of interest to the NeurIPS community. There was a lot of discussion about the notion of fairness as well. It is suggested that the authors critique the notion (from all angles) more effectively, given how new this notion is, in order for the paper to have the most influence. | train | [
"ZYt9YpGyMS",
"UVwabPQ4tBz",
"lWRh_nrcRV",
"pxL-92rzjne",
"zElOeA7_HJ",
"FZvIqcdKhvZ",
"He4QiC_yRa",
"r9O1UHzatEt",
"x1UaNiAANtb",
"WxbYkT82WSc",
"pynxkC2cLMr",
"35bWBJnopn_",
"c7dErcNx5TP"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for their comment. We also agree that adding a discussion comparing with other relevant fairness notions is prudent for the motivation discussion. We will add this in the next version of our paper.",
" I thank the authors for the clarifications for my concerns. I read the oth... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"UVwabPQ4tBz",
"r9O1UHzatEt",
"nips_2021_HEVfOwxrmQh",
"zElOeA7_HJ",
"FZvIqcdKhvZ",
"He4QiC_yRa",
"lWRh_nrcRV",
"pynxkC2cLMr",
"35bWBJnopn_",
"c7dErcNx5TP",
"nips_2021_HEVfOwxrmQh",
"nips_2021_HEVfOwxrmQh",
"nips_2021_HEVfOwxrmQh"
] |
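The fairness notion discussed in the record above (every point v within distance r(v) of its centre) is easy to audit for a candidate solution. A sketch of that check follows; function and argument names are ours.

```python
def max_fairness_violation(points, centers, radii, dist):
    """Largest ratio d(v, nearest centre) / r(v); a value <= alpha means every
    point sits within alpha * r(v) of some chosen centre."""
    return max(min(dist(v, c) for c in centers) / rv
               for v, rv in zip(points, radii))

d = lambda a, b: abs(a - b)
print(max_fairness_violation([0.0, 1.0, 2.0], [0.5, 2.0], [1.0, 1.0, 1.0], d))  # 0.5
```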
nips_2021_pvjfA4wogD6 | Video Instance Segmentation using Inter-Frame Communication Transformers | We propose a novel end-to-end solution for video instance segmentation (VIS) based on transformers. Recently, the per-clip pipeline shows superior performance over per-frame methods by leveraging richer information from multiple frames. However, previous per-clip models require heavy computation and memory usage to achieve frame-to-frame communications, limiting practicality. In this work, we propose Inter-frame Communication Transformers (IFC), which significantly reduces the overhead for information-passing between frames by efficiently encoding the context within the input clip. Specifically, we propose to utilize concise memory tokens as a means of conveying information as well as summarizing each frame scene. The features of each frame are enriched and correlated with other frames through exchange of information between the precisely encoded memory tokens. We validate our method on the latest benchmark sets and achieved state-of-the-art performance (AP 42.6 on YouTube-VIS 2019 val set using the offline inference) while having a considerably fast runtime (89.4 FPS). Our method can also be applied to near-online inference for processing a video in real-time with only a small delay. The code is available at https://github.com/sukjunhwang/IFC
| accept | None of the reviewers recommended accepting this paper.
After reading the author response and other reviews one of the reviewers also reduced their score.
One of the common critiques of the work was around the degree of novelty provided by the work.
One of the initially more positive reviewers did recognize that the memory-token based inter-frame attention is technically new and felt the experiments here were quite extensive. But this reviewer also felt that the main performance improvement in this method were coming from the other elements of the method.
The AC recommends that this paper be rejected.
| train | [
"AZgws3wYVEA",
"ZMErSbcgMdm",
"eZBr9an5Sr",
"5W_2C2l4IA",
"MJHd-pcRqQo",
"3tyTruJvMKv",
"p8BtDzxekj5",
"Ut318yi9o7K",
"MU1xvsRHUz",
"C7H43_ZVsN8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes inter-frame communication transformers(IFC) for video instance segmentation. Compared to a full spatial-temporal transformer with complexity THW x THW, the proposed IFC module reduces it to THW + T(HW)^2 while preserving the temporal and spatial attentions. The proposed video instance segmentat... | [
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"nips_2021_pvjfA4wogD6",
"nips_2021_pvjfA4wogD6",
"nips_2021_pvjfA4wogD6",
"C7H43_ZVsN8",
"ZMErSbcgMdm",
"MU1xvsRHUz",
"AZgws3wYVEA",
"nips_2021_pvjfA4wogD6",
"nips_2021_pvjfA4wogD6",
"nips_2021_pvjfA4wogD6"
] |
nips_2021_4G2dEuRZ7eO | Progressive Coordinate Transforms for Monocular 3D Object Detection | Recognizing and localizing objects in the 3D space is a crucial ability for an AI agent to perceive its surrounding environment. While significant progress has been achieved with expensive LiDAR point clouds, 3D object detection remains a great challenge given only a monocular image. While there exist different alternatives for tackling this problem, it is found that they are either equipped with heavy networks to fuse RGB and depth information or empirically ineffective at processing millions of pseudo-LiDAR points. Through in-depth examination, we realize that these limitations are rooted in inaccurate object localization. In this paper, we propose a novel and lightweight approach, dubbed {\em Progressive Coordinate Transforms} (PCT), to facilitate learning coordinate representations. Specifically, a localization boosting mechanism with confidence-aware loss is introduced to progressively refine the localization prediction. In addition, semantic image representation is also exploited to compensate for the usage of patch proposals. Despite being lightweight and simple, our strategy allows us to establish a new state-of-the-art among the monocular 3D detectors on the competitive KITTI benchmark. At the same time, our proposed PCT shows great generalization to most coordinate-based 3D detection frameworks.
| accept | At the time of rebuttal, the reviewers had widely varying opinions on the paper. Major concerns included general clarity as well as a set of fairly concrete experimental issues. The authors responded to the reviews with detailed new experiments to address the reviewer concerns. Reviewer vnDO was persuaded by these experiments and raised their score to an acceptance. Reviewer rVMM did not participate in post-rebuttal discussion. The AC examined rVMM's requests and the authors' responses. The AC cannot speak on behalf of rVMM; however, in the AC's view, most of the reviewer's concerns are ostensibly addressed directly with reasonable experiments. On balance, given that the consensus of the active reviewers is acceptance, and the rebuttal seems likely to address the remaining reviewer's concerns, the AC is strongly inclined to recommend acceptance.
The AC would suggest that the authors:
- Incorporate, to the best of their ability, the results presented in the rebuttal. If they do not fit, the authors should report them in the supplement.
- Take Reviewer vnDO's comments on figures and clarity seriously, and update them and their captions.
- As noted by the reviewers, the authors should extend their discussion of social impacts.
Both the promises and the new results were instrumental in the paper's acceptance, and the authors will maximize the impact of their paper by making these changes.
"MQUrUo0NvYW",
"1rp_TMkCIJt",
"1y4uQmpeqlB",
"JftFhAmjOxn",
"QR_n-DonKHF",
"wCAZNPqawn3",
"IuXmDHkWWz-",
"A0gr4KQ8KRP",
"vK7VwDh-2xz",
"OZdNJeAnc_",
"rq2z4TM8Qd"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer rVMM,\n\nThanks again for your valuable comments and suggestions. As the discussion phase is nearing its end, we wondered if you might still have any concerns that we could address. We believe our responses on *contribution, test results, waymo results and lightweight model* addressed all your quest... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"rq2z4TM8Qd",
"nips_2021_4G2dEuRZ7eO",
"1rp_TMkCIJt",
"rq2z4TM8Qd",
"1rp_TMkCIJt",
"OZdNJeAnc_",
"rq2z4TM8Qd",
"OZdNJeAnc_",
"1rp_TMkCIJt",
"nips_2021_4G2dEuRZ7eO",
"nips_2021_4G2dEuRZ7eO"
] |
nips_2021_X2Cxixkcpx | Structured Reordering for Modeling Latent Alignments in Sequence Transduction | Despite success in many domains, neural models struggle in settings where train and test examples are drawn from different distributions. In particular, in contrast to humans, conventional sequence-to-sequence (seq2seq) models fail to generalize systematically, i.e., interpret sentences representing novel combinations of concepts (e.g., text segments) seen in training. Traditional grammar formalisms excel in such settings by implicitly encoding alignments between input and output segments, but are hard to scale and maintain. Instead of engineering a grammar, we directly model segment-to-segment alignments as discrete structured latent variables within a neural seq2seq model. To efficiently explore the large space of alignments, we introduce a reorder-first align-later framework whose central component is a neural reordering module producing separable permutations. We present an efficient dynamic programming algorithm performing exact marginal inference of separable permutations, and, thus, enabling end-to-end differentiable training of our model. The resulting seq2seq model exhibits better systematic generalization than standard models on synthetic problems and NLP tasks (i.e., semantic parsing and machine translation).
| accept | This paper presents uses the class of separable permutations, which in contrast to the unrestricted class of permutations, can be reasoned about in polynomial time using dynamic programming algorithms to introduce explicit reordering and alignment variables in seq2seq models. The reviewers remarked on the technical clarity, interestingly novel approach, and thorough experiments. This technique improves interpretability and, most importantly, provides a demonstrated bias for compositional generalization, both of which are important concerns in sequence transduction modelling. This represents a serious and successful attempt to address these issues by changing the underlying assumptions of the model, rather than relying on data augmentation. | train | [
"R1_YPxifyh",
"MPAfKF5WDi2",
"ph27FJJn4ZI",
"x8_CqDguCVf",
"IXXdjZZzbHI",
"n4QO1DMkQ97",
"WMpS5UOm2Lc",
"t9jLiaRRFIJ",
"W0sMAt4jzb4",
"Hkh4ZBkyrn3"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The order of the input and output (including their associated representations) can matter for the performance and interpretability of a sequence to sequence model. Based on monotonic alignment seq2seq model (e.g. SSNT), the paper proposes to learn a separable permutation over the input representations so that the ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2021_X2Cxixkcpx",
"ph27FJJn4ZI",
"IXXdjZZzbHI",
"t9jLiaRRFIJ",
"R1_YPxifyh",
"Hkh4ZBkyrn3",
"W0sMAt4jzb4",
"nips_2021_X2Cxixkcpx",
"nips_2021_X2Cxixkcpx",
"nips_2021_X2Cxixkcpx"
] |
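The record above restricts the reordering module to separable permutations. As a hedged illustration of that class (ours; it is not the paper's marginal-inference dynamic program), the sketch below builds every separable permutation of length n from singletons via direct and skew sums, recovering the large Schröder numbers.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def separable(n):
    """All separable permutations of {0..n-1}: built from singletons by direct
    sums (right block shifted above the left) and skew sums (left block shifted
    above the right)."""
    if n == 1:
        return frozenset({(0,)})
    out = set()
    for k in range(1, n):
        for left in separable(k):
            for right in separable(n - k):
                out.add(left + tuple(v + k for v in right))          # direct sum
                out.add(tuple(v + (n - k) for v in left) + right)    # skew sum
    return frozenset(out)

for n in range(1, 7):
    print(n, len(separable(n)))   # 1 2 6 22 90 394 -- the large Schroeder numbers
```

Every permutation of length at most 3 is separable; the first excluded patterns, 2413 and 3142, appear at length 4, which is why the count is 22 rather than 24.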
nips_2021_6ZdqOpE_UVF | A universal probabilistic spike count model reveals ongoing modulation of neural variability | Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method builds on sparse Gaussian processes and can model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates, using scalable variational inference to jointly infer the covariate-to-SCD mappings and latent trajectories in a data efficient way. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
| accept | This paper presents a universal spike-count model for flexibly describing both over-dispersion and under-dispersion in neural spiking data. The model is presented with an efficient variational fitting method, and can be applied to data without known stimuli, as is often the case for overdispersion metrics. The reviewers recognized the novelty of the model, and the topic fits nicely within the probabilistic modeling and neuroscience focus of NeurIPS. Moreover, spike-count models are the subject of ongoing discussion in the computational neuroscience community (heavily represented at NeurIPS), and this work presents a new and interesting addition to the literature. I therefore recommend this work be accepted at NeurIPS.
"IgsVIWJ3rZw",
"okg2MtC1ey",
"ThPNUd5LDBY",
"krxurHvpzPD",
"eAFOTzZRz4h",
"DDxSNf32_6R",
"yBlHOjyQ7Ud",
"gWjs5b9fJzr",
"YODdA_iMqr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their detailed response. They have answered all of my questions.",
"In this paper authors provide a method for modelling spike counts as a function of input covariates and latent covariates. A Markovin prior is assumed over latent parameters, and the effect of latent parameters and input... | [
-1,
6,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
2,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"krxurHvpzPD",
"nips_2021_6ZdqOpE_UVF",
"okg2MtC1ey",
"YODdA_iMqr",
"gWjs5b9fJzr",
"yBlHOjyQ7Ud",
"nips_2021_6ZdqOpE_UVF",
"nips_2021_6ZdqOpE_UVF",
"nips_2021_6ZdqOpE_UVF"
] |
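A small self-contained illustration of the over-/under-dispersion vocabulary in the record above (a toy of ours, unrelated to the paper's GP-based model): the Fano factor, variance over mean of the spike counts, equals 1 for Poisson counts, exceeds 1 for over-dispersed negative-binomial counts, and falls below 1 for under-dispersed binomial counts.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = 5.0

def fano(x):
    return x.var() / x.mean()

poisson = rng.poisson(mean, 100_000)
negbin = rng.negative_binomial(n=2, p=2 / (2 + mean), size=100_000)   # same mean, extra variance
binom = rng.binomial(n=10, p=mean / 10, size=100_000)                 # same mean, less variance

for name, x in [("Poisson", poisson), ("neg. binomial", negbin), ("binomial", binom)]:
    print(f"{name:14s} Fano factor = {fano(x):.2f}")   # ~1.0, ~3.5, ~0.5
```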
nips_2021_b8Kl8mcK6tb | Bellman Eluder Dimension: New Rich Classes of RL Problems, and Sample-Efficient Algorithms | Finding the minimal structural assumptions that empower sample-efficient learning is one of the most important research directions in Reinforcement Learning (RL). This paper advances our understanding of this fundamental question by introducing a new complexity measure—Bellman Eluder (BE) dimension. We show that the family of RL problems of low BE dimension is remarkably rich, which subsumes a vast majority of existing tractable RL problems including but not limited to tabular MDPs, linear MDPs, reactive POMDPs, low Bellman rank problems as well as low Eluder dimension problems. This paper further designs a new optimization-based algorithm—GOLF, and reanalyzes a hypothesis elimination-based algorithm—OLIVE (proposed in Jiang et al. (2017)). We prove that both algorithms learn the near-optimal policies of low BE dimension problems in a number of samples that is polynomial in all relevant parameters, but independent of the size of state-action space.
| accept | The paper introduces a new structural assumption for provably efficient online reinforcement learning. The authors provided a characterization of the new Bellman Eluder Dimension, showing that it generalizes a large part of existing structural assumptions while preserving efficient learning. Overall, the paper is novel, well written and technically solid. I think it should be accepted. | train | [
"VKFN1qW2vk",
"LFq1BveFnLL",
"R5lD2adOzs",
"atfO8dUgFHA",
"PdAMs_gf3oP",
"0DQm0ei-Vb",
"tLbj5gWB4qV",
"lKAxT2eZnfg",
"OENE8-6AyR",
"Kl-4NJKREOu"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read other reviews and the author feedback and would like to keep my review unchanged. I continue to support acceptance for this paper! ",
" The authors addressed all my questions. I vote for acceptance.",
" Thank you for your valuable time and suggestions. Please see our responses below:\n\n\n${\\bf Q... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"0DQm0ei-Vb",
"R5lD2adOzs",
"Kl-4NJKREOu",
"tLbj5gWB4qV",
"lKAxT2eZnfg",
"OENE8-6AyR",
"nips_2021_b8Kl8mcK6tb",
"nips_2021_b8Kl8mcK6tb",
"nips_2021_b8Kl8mcK6tb",
"nips_2021_b8Kl8mcK6tb"
] |
nips_2021_MTMyxzrIKsM | Detecting Anomalous Event Sequences with Temporal Point Processes | Automatically detecting anomalies in event data can provide substantial value in domains such as healthcare, DevOps, and information security. In this paper, we frame the problem of detecting anomalous continuous-time event sequences as out-of-distribution (OOD) detection for temporal point processes (TPPs). First, we show how this problem can be approached using goodness-of-fit (GoF) tests. We then demonstrate the limitations of popular GoF statistics for TPPs and propose a new test that addresses these shortcomings. The proposed method can be combined with various TPP models, such as neural TPPs, and is easy to implement. In our experiments, we show that the proposed statistic excels at both traditional GoF testing, as well as at detecting anomalies in simulated and real-world data.
| accept | The authors formulate anomaly detection in continuous-time event sequences as an out-of-distribution detection problem for temporal point processes. This allows them to apply a number of common goodness-of-fit measures to detect the out-of-distribution data (e.g., KS tests on arrival or inter-event times), to propose a further statistic (specifically, 3S, or "sum of squared spacings"), and to test for typicality of the statistic values under the training dataset. Overall, reviewers found the work clear and novel and recommend acceptance. Please see the reviews for more detailed suggestions to improve the presentation.
| train | [
"70tDh3fOrk1",
"F_vkgdjMkp",
"LLeIy8V94lu",
"_rvQiTIzsLO",
"7yPUE8ZazZq",
"NavgXV-Sy5o",
"zR1LbWZtMHg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I feel that you have addressed my comments well, especially on motivation and clarity, which were my primary concerns.\nAlso, I admit that contributions on empirical evaluation should be noted, where the proposed approach worked well even on short intervals.\nI raised my score to a 7.",
"This paper addresses an... | [
-1,
7,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
3,
2
] | [
"LLeIy8V94lu",
"nips_2021_MTMyxzrIKsM",
"F_vkgdjMkp",
"NavgXV-Sy5o",
"zR1LbWZtMHg",
"nips_2021_MTMyxzrIKsM",
"nips_2021_MTMyxzrIKsM"
] |
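To make the statistics named in the record above concrete, here is a hedged toy (our simulation and variable names, not the paper's code). Under a unit-rate Poisson process on [0, T], reached for general TPPs via time-rescaling with the model's compensator, arrival times are distributed like sorted Uniform(0, T) draws, so a KS statistic and the sum-of-squared-spacings ("3S") statistic both flag a clustered out-of-distribution sequence.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T = 100.0

def gof_statistics(arrivals, T):
    u = np.sort(arrivals) / T                      # should be ~Uniform(0,1) under the model
    ks = stats.kstest(u, "uniform").statistic
    spacings = np.diff(np.concatenate([[0.0], u, [1.0]]))
    return ks, (spacings ** 2).sum()               # 3S: expected value ~ 2/(n+2) under the null

n = rng.poisson(T)
in_dist = rng.uniform(0, T, n)                     # unit-rate homogeneous process
ood = rng.uniform(0, T / 2, n)                     # same count, but events clustered early

for name, seq in [("in-distribution", in_dist), ("clustered OOD", ood)]:
    ks, s3 = gof_statistics(seq, T)
    print(f"{name:16s} KS = {ks:.3f}   3S = {s3:.4f}  (null ~ {2 / (n + 2):.4f})")
```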
nips_2021_E8BxwYR8op | HNPE: Leveraging Global Parameters for Neural Posterior Estimation | Inferring the parameters of a stochastic model based on experimental observations is central to the scientific method. A particularly challenging setting is when the model is strongly indeterminate, i.e. when distinct sets of parameters yield identical observations. This arises in many practical situations, such as when inferring the distance and power of a radio source (is the source close and weak or far and strong?) or when estimating the amplifier gain and underlying brain activity of an electrophysiological experiment. In this work, we present hierarchical neural posterior estimation (HNPE), a novel method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters. Our method extends recent developments in simulation-based inference (SBI) based on normalizing flows to Bayesian hierarchical models. We validate quantitatively our proposal on a motivating example amenable to analytical solutions and then apply it to invert a well known non-linear model from computational neuroscience, using both simulated and real EEG data.
| accept | Dear authors,
congratulations on your paper being accepted at NeurIPS. As you know, the reviewers gave extensive comments and feedback on the submission, and we urge you to ensure that this feedback is incorporated into the final version of the article. In particular, reviewers felt that the name of the method, 'h-flow', is misleading, as the paper does not present a new flow, but rather a new SBI method (which will likely, but not necessarily, be applied to flows). We would ask you to take this advice seriously.
With best regards and congratulations, your AC | train | [
"ZRSFGOnF5G-",
"bsNUpuE9VUa",
"2ZXNB2KHj5D",
"3AyBEaIU6a",
"bm12xJWpyP",
"3EZFtfh4itv",
"gHeuKfX61Zx",
"7m5pCnBfTb",
"PD7vOhtq6Pk",
"e-jwv2D7Nz7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
" Thank you for the response! I have increased my score based on your extended results covering a more comparable baseline for the NMM test case with matched network capacity and observation aggregation. I appreciate your inclusion of a test case involving real data, and it is indeed good to see such practical appl... | [
-1,
7,
6,
6,
6,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
3,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"e-jwv2D7Nz7",
"nips_2021_E8BxwYR8op",
"nips_2021_E8BxwYR8op",
"nips_2021_E8BxwYR8op",
"nips_2021_E8BxwYR8op",
"bm12xJWpyP",
"3AyBEaIU6a",
"nips_2021_E8BxwYR8op",
"2ZXNB2KHj5D",
"bsNUpuE9VUa"
] |
nips_2021_th788unrdTj | Alignment Attention by Matching Key and Query Distributions | The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains. Most such models use multi-head self-attention which is appealing for the ability to attend to information from different perspectives. This paper introduces alignment attention that explicitly encourages self-attention to match the distributions of the key and query within each head. The resulting alignment attention networks can be optimized as an unsupervised regularization in the existing attention framework. It is simple to convert any models with self-attention, including pre-trained ones, to the proposed alignment attention. On a variety of language understanding tasks, we show the effectiveness of our method in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks. We further demonstrate the general applicability of our approach on graph attention and visual question answering, showing the great potential of incorporating our alignment method into various attention-related tasks.
| accept | I recommend acceptance of the paper for the following reasons. As pointed out by reviewer E8gR, the premise of aligning key and query distributions might not be as important as it sounds at first for the purposes of improved learning. In addition, the analysis and ablation studies could be more thorough and convincing. Yet, experimentally the module works well. So despite potential limitations on the "intuition" and "analysis" of the module, there is value in knowing that such a module has empirical validation, especially for its potential impact on future analyses. There is also agreement from reviewers about this point and general support from two reviewers for the paper.
For these reasons, I recommend to accept the paper. | train | [
"daOmyawlCgF",
"gTfj5p3CieF",
"BvrxmxStHKi",
"5nERe5FANFE",
"WbDxO2aJTRA",
"yb97z79_y0I",
"WpMqUpLg-v",
"Qkxsfa1N7bx",
"PaKks4dkMd_",
"Xq6smoy_ehX",
"Ln751qD3nuA"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" We appreciate your providing additional feedback. We hope our response below could help convince you to reconsider the significance of our contributions.\n\n1. We'd like to emphasize that adversarial is only one of the methods for aligning distributions. We have presented the optimal transport (OT) based alignmen... | [
-1,
-1,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
5
] | [
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
5
] | [
"gTfj5p3CieF",
"Qkxsfa1N7bx",
"Ln751qD3nuA",
"yb97z79_y0I",
"nips_2021_th788unrdTj",
"Xq6smoy_ehX",
"nips_2021_th788unrdTj",
"Ln751qD3nuA",
"WpMqUpLg-v",
"WbDxO2aJTRA",
"nips_2021_th788unrdTj"
] |
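For intuition on the alignment idea in the record above: the sketch below computes a simple RBF-kernel MMD between the query and key sets of one attention head, a generic distribution-matching penalty one could add to the task loss. This is only a stand-in of ours; the paper itself aligns the distributions with adversarial/optimal-transport style objectives, and all shapes and weight names here are invented for illustration.

```python
import numpy as np

def mmd2(x, y, gamma=0.1):
    """Squared RBF-kernel maximum mean discrepancy between two empirical distributions."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
h = rng.normal(size=(32, 16))                 # token representations for one attention head
Wq, Wk = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))
queries, keys = h @ Wq, h @ Wk

# An alignment term like this would be added (with a weight) to the task loss,
# pushing the head's query and key distributions toward each other during training.
print(mmd2(queries, keys))
```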
nips_2021_jI97GGA0H_ | Settling the Variance of Multi-Agent Policy Gradients | Policy gradient (PG) methods are popular reinforcement learning (RL) methods where a baseline is often applied to reduce the variance of gradient estimates. In multi-agent RL (MARL), although the PG theorem can be naturally extended, the effectiveness of multi-agent PG (MAPG) methods degrades as the variance of gradient estimates increases rapidly with the number of agents. In this paper, we offer a rigorous analysis of MAPG methods by, firstly, quantifying the contributions of the number of agents and agents' explorations to the variance of MAPG estimators. Based on this analysis, we derive the optimal baseline (OB) that achieves the minimal variance. In comparison to the OB, we measure the excess variance of existing MARL algorithms such as vanilla MAPG and COMA. Considering using deep neural networks, we also propose a surrogate version of OB, which can be seamlessly plugged into any existing PG methods in MARL. On benchmarks of Multi-Agent MuJoCo and StarCraft challenges, our OB technique effectively stabilises training and improves the performance of multi-agent PPO and COMA algorithms by a significant margin.
| accept | The paper studies how to reduce the variance of stochastic gradient estimates in policy gradient approaches to Multi-agent Reinforcement Learning (MARL). Focusing on the popular Centralized Training Decentralized Execution setting, the authors provide a theoretical analysis that quantifies excess variance and connects it to the number of agents and agents' local advantages. They build on this analysis by empirically showing how these insights affect COMA, and by deriving a new optimal baseline (OB) which is empirically validated in StarCraft and multi-agent MuJoCo environments.
Reviewers were positive about the importance of the problem addressed in this work. They largely agreed that the question of policy gradient variance has not been widely studied in the MARL community, but has important implications and potential for impact. The paper was considered clear and well written.
At the same time, several concerns were raised. In particular, the assumptions of the analysis are idealised, and findings do not always translate into corresponding empirical gains.
During the rebuttal and discussion phase, reviewers were generally satisfied with the clarifications provided by the authors and recommend acceptance. Unfortunately, the most negative reviewer did not engage beyond the initial review. Therefore, the AC examined the concerns raised in the initial review and judged these as sufficiently addressed by the authors.
The paper is assessed as meeting the bar for acceptance, based on the important theoretical insights and high potential for impact on the MARL community. The authors are strongly encouraged to take on board all reviewer comments to further improve clarity and potential for impact in the camera ready version. | train | [
"4bt7p7-Gga0",
"JKMW6trHO2h",
"dkyvB1GsKcy",
"N1V0VjE0lGW",
"VH8if_dd5TT",
"4xH6CMe6nxw",
"Pbn5xc3VKE7",
"QpTaNzsFqPY",
"Y4apMeuBq-d",
"Cd0gGlhEViC",
"g-vB2EKFRzh",
"HmyNFmhwjW",
"gMSrj3h1cGQ",
"XKdfGSCC7Ep",
"dWnPEDhbJKV"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper analyzes the variance of gradient estimates in policy-gradient-based multi-agent RL, specifically for standard multi-agent policy gradient (including both centralized and decentralized training) and for the COMA gradient. The paper then derives an optimal baseline (OB) as well as a tractable surrogate f... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2021_jI97GGA0H_",
"HmyNFmhwjW",
"N1V0VjE0lGW",
"Cd0gGlhEViC",
"4xH6CMe6nxw",
"Y4apMeuBq-d",
"QpTaNzsFqPY",
"g-vB2EKFRzh",
"4bt7p7-Gga0",
"dWnPEDhbJKV",
"XKdfGSCC7Ep",
"gMSrj3h1cGQ",
"nips_2021_jI97GGA0H_",
"nips_2021_jI97GGA0H_",
"nips_2021_jI97GGA0H_"
] |
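A single-agent toy (ours, far simpler than the multi-agent analysis in the record above) showing why baselines matter: subtracting any constant baseline from the reward leaves the policy-gradient estimate unbiased but changes its variance; the paper's contribution is deriving the exactly variance-minimizing choice in the MARL setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-step, two-action bandit with a softmax policy; grad log pi(a) = onehot(a) - pi.
theta = np.array([0.3, -0.2])
pi = np.exp(theta) / np.exp(theta).sum()
r = np.array([1.0, 0.2])                      # deterministic per-action rewards

def pg_samples(baseline, n=200_000):
    a = rng.choice(2, size=n, p=pi)
    score = np.eye(2)[a] - pi                 # per-sample score function
    return score * (r[a] - baseline)[:, None]

for baseline, name in [(0.0, "no baseline"), (pi @ r, "expected-value baseline")]:
    g = pg_samples(baseline)
    print(f"{name:24s} mean grad = {g.mean(0).round(3)}   total variance = {g.var(0).sum():.3f}")
```

Both settings print the same mean gradient (unbiasedness), but the value baseline yields a visibly smaller total variance.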
nips_2021_28NikxkK6kJ | For high-dimensional hierarchical models, consider exchangeability of effects across covariates instead of across datasets | Hierarchical Bayesian methods enable information sharing across regression problems on multiple groups of data. While standard practice is to model regression parameters (effects) as (1) exchangeable across the groups and (2) correlated to differing degrees across covariates, we show that this approach exhibits poor statistical performance when the number of covariates exceeds the number of groups. For instance, in statistical genetics, we might regress dozens of traits (defining groups) for thousands of individuals (responses) on up to millions of genetic variants (covariates). When an analyst has more covariates than groups, we argue that it is often preferable to instead model effects as (1) exchangeable across covariates and (2) correlated to differing degrees across groups. To this end, we propose a hierarchical model expressing our alternative perspective. We devise an empirical Bayes estimator for learning the degree of correlation between groups. We develop theory that demonstrates that our method outperforms the classic approach when the number of covariates dominates the number of groups, and corroborate this result empirically on several high-dimensional multiple regression and classification problems.
| accept | The reviewers were initially concerned that this paper was not well-suited for NeurIPS, but better suited to a statistics journal. However, during the discussion phase the authors were able to convince the reviewers that they could make the necessary changes to make the paper right for NeurIPS. Therefore I move to accept. The authors should make sure to make the serious changes suggested by the reviewers, and then this paper will be a nice contribution to the conference.
"zaUdSX8ZN1",
"pGIt9K0utRu",
"VBBqU3AOpaA",
"9NwGpURGRq",
"Kns76PF__l6",
"xLkWZcXlQY3",
"TgvhTV2Qof",
"gxNfCCoTFmK",
"GQvu9xe8b59",
"EQtQkRhoiY",
"RRRl36Pf9_t",
"EjN4qyxespl",
"upBFCqIUxZg",
"o9ys1d2ql5b"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for clarifying my concerns. I will keep the rating as is since there is some amount of rewriting to be done as indicated in your response.",
"The authors of this paper study exchangeable prior distribution for regression coefficients in high dimensional regression problems, where the number of covariates... | [
-1,
7,
6,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
3,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"EjN4qyxespl",
"nips_2021_28NikxkK6kJ",
"nips_2021_28NikxkK6kJ",
"Kns76PF__l6",
"EQtQkRhoiY",
"gxNfCCoTFmK",
"nips_2021_28NikxkK6kJ",
"RRRl36Pf9_t",
"pGIt9K0utRu",
"VBBqU3AOpaA",
"TgvhTV2Qof",
"o9ys1d2ql5b",
"nips_2021_28NikxkK6kJ",
"nips_2021_28NikxkK6kJ"
] |
nips_2021_6Ddt0bvKoeh | Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations | Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan | accept | Solid progress in a well-studied line of research | train | [
"_x8TjqV7Za1",
"5h3p5wMWu4H",
"S5BJaDecKpy",
"rNYJ4PSy3e4",
"MpUhFATISHy",
"eJ47D4Xfl9y",
"kZ8hAzb_0vJ",
"CDL_UHBtxP9",
"LjnSAralB2h",
"nlQWRsgtDw1",
"4WxRAukXu70",
"EIBDC85vrij"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. After reading the other reviews and the authors' responses, I keep my (high) score.",
" Thanks for the response! I totally agree that even fully understanding the Gaussian case is a challenging and important problem. In any case, I remain convinced that this paper would be a great ad... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
4
] | [
"eJ47D4Xfl9y",
"CDL_UHBtxP9",
"MpUhFATISHy",
"kZ8hAzb_0vJ",
"EIBDC85vrij",
"4WxRAukXu70",
"LjnSAralB2h",
"nlQWRsgtDw1",
"nips_2021_6Ddt0bvKoeh",
"nips_2021_6Ddt0bvKoeh",
"nips_2021_6Ddt0bvKoeh",
"nips_2021_6Ddt0bvKoeh"
] |
nips_2021_kcI3T5qe1jr | Controllable and Compositional Generation with Latent-Space Energy-Based Models | Controllable generation is one of the key requirements for successful adoption of deep generative models in real-world applications, but it still remains as a great challenge. In particular, the compositional ability to generate novel concept combinations is out of reach for most current models. In this work, we use energy-based models (EBMs) to handle compositional generation over a set of attributes. To make them scalable to high-resolution image generation, we introduce an EBM in the latent space of a pre-trained generative model such as StyleGAN. We propose a novel EBM formulation representing the joint distribution of data and attributes together, and we show how sampling from it is formulated as solving an ordinary differential equation (ODE). Given a pre-trained generator, all we need for controllable generation is to train an attribute classifier. Sampling with ODEs is done efficiently in the latent space and is robust to hyperparameters. Thus, our method is simple, fast to train, and efficient to sample. Experimental results show that our method outperforms the state-of-the-art in both conditional sampling and sequential editing. In compositional generation, our method excels at zero-shot generation of unseen attribute combinations. Also, by composing energy functions with logical operators, this work is the first to achieve such compositionality in generating photo-realistic images of resolution 1024x1024.
| accept | The paper proposes an approach for controllable image generation based on applying an energy-based model in the latent space of a pre-trained unconditional generative model. The reviews are mixed: three reviewers are in favor of acceptance, while one recommends rejection. After reading the reviews, the rebuttal, the discussion, and the paper itself, below are some key points.
Pros:
1) The method is sensible and simple
2) The method has been evaluated thoroughly and shown to work well compared to the relevant baselines
3) Quite thorough analysis experiments evaluating different design decisions
Cons:
1) Limited novelty - the work mainly combines existing methods and is related to many prior papers
2) High-res experimental results only on one dataset
Overall, the proposed method is not overwhelmingly new, but it is simple and effective. The paper is well written and easy to understand, and the experimental evaluation is fairly thorough. I thus recommend acceptance at this point, but I urge the authors to take the reviewers' comments into account and adjust the final version of the paper accordingly. | train | [
"ZuK8AiyrzPE",
"nQjfQilGl2o",
"8JZMXaA3Zi",
"Lexn-Ktryxt",
"eutSrZrLX8m",
"Deb91fwPCZ",
"Dmk51RnG5Gp",
"3sCIAPViZaR",
"-s4-f-qECKl",
"k9sKLoPg39L",
"eT0F3wwXCXJ",
"1as8yowPPPW",
"yoVS-SuTWl2",
"5wx8RN2EHO1"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We do appreciate the detailed discussion of your main concern around the positioning of this paper as being an energy-based model (EBM) (i.e. a generative model). We understand your concern much better now. We agree with you that in our framework the challenging part of the high-quality generation in data space i... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
2,
4
] | [
"8JZMXaA3Zi",
"nips_2021_kcI3T5qe1jr",
"eutSrZrLX8m",
"1as8yowPPPW",
"k9sKLoPg39L",
"Dmk51RnG5Gp",
"eT0F3wwXCXJ",
"nips_2021_kcI3T5qe1jr",
"5wx8RN2EHO1",
"nQjfQilGl2o",
"3sCIAPViZaR",
"yoVS-SuTWl2",
"nips_2021_kcI3T5qe1jr",
"nips_2021_kcI3T5qe1jr"
] |
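A hedged sketch of sampling in latent space for the record above. The paper formulates sampling as solving an ODE; here plain unadjusted Langevin dynamics on the same kind of joint latent energy (a Gaussian prior plus a toy linear attribute classifier, both invented for this example) illustrates the mechanism, with the frozen generator's decoding step omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([3.0, -1.0])          # stand-in latent-space attribute classifier, p(y=1|z) = sigmoid(w.z)

def grad_energy(z):
    # E(z) = ||z||^2 / 2 - log p(y=1|z): Gaussian latent prior plus attribute constraint
    s = 1.0 / (1.0 + np.exp(-(z @ w)))
    return z - (1.0 - s)[:, None] * w

z = rng.normal(size=(1000, 2))     # initialize from the latent prior
eta = 1e-2
for _ in range(500):               # unadjusted Langevin dynamics on the joint latent energy
    z = z - eta * grad_energy(z) + np.sqrt(2 * eta) * rng.normal(size=z.shape)

# In the full pipeline these z would be decoded by the frozen generator; here we
# just check that sampling steered the latents toward the desired attribute.
print(f"fraction with attribute: {(z @ w > 0).mean():.2f}")
```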
nips_2021_AD4jtD8wE1w | Reverse-Complement Equivariant Networks for DNA Sequences | Vincent Mallet, Jean-Philippe Vert | accept | This paper addresses a very interesting problem motivated by genomics data analysis. All three reviewers unanimously suggest accepting this paper, and the meta-reviewer agrees. Thus acceptance is recommended. | train | [
"v9XFNhm0t-",
"TFnJjs-N3Lq",
"wmk0yIoPaXr",
"GFAa0DP-xMQ",
"GBixxfTX1vB",
"L7UBQvD9V5Q",
"WaG8fhjnck0",
"yAl-rT39ZSJ",
"dE5rt-apOA4"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors note that neural network models operating on DNA sequences desire shift equivariance and reverse complement (RC) equivariance. Existing proposed techniques have been somewhat ad hoc. The authors derive RC equivariant models from first principles, and expose a class of techniques that haven’t yet been s... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"nips_2021_AD4jtD8wE1w",
"wmk0yIoPaXr",
"GFAa0DP-xMQ",
"v9XFNhm0t-",
"dE5rt-apOA4",
"dE5rt-apOA4",
"yAl-rT39ZSJ",
"nips_2021_AD4jtD8wE1w",
"nips_2021_AD4jtD8wE1w"
] |
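To illustrate the symmetry in the record above, the sketch below numerically checks reverse-complement (RC) equivariance of a 1D convolution whose filters are tied into RC pairs, a standard building block in this literature. The construction, alphabet ordering, and sizes are our assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
A, L, k, m = 4, 12, 3, 2            # alphabet (A,C,G,T), sequence length, kernel width, filter pairs

def rc(x):
    """Reverse complement of a one-hot sequence (A, L): flip positions and channels.
    With channel order A,C,G,T, the complement A<->T, C<->G is a channel reversal."""
    return x[::-1, ::-1]

def conv(x, W):
    """'Valid' cross-correlation; x:(A,L), W:(O,A,k) -> (O, L-k+1)."""
    P = x.shape[1] - W.shape[2] + 1
    return np.stack([np.tensordot(W, x[:, p:p + W.shape[2]], axes=([1, 2], [0, 1]))
                     for p in range(P)], axis=1)

W = rng.normal(size=(m, A, k))
W_full = np.concatenate([W, W[:, ::-1, ::-1]], axis=0)   # tie each filter to its RC partner

x = np.eye(A)[rng.integers(0, A, size=L)].T              # random one-hot DNA sequence
y, y_rc = conv(x, W_full), conv(rc(x), W_full)

# Equivariance: RC-ing the input flips output positions and swaps each filter with its partner.
expected = np.concatenate([y[m:], y[:m]], axis=0)[:, ::-1]
print(np.allclose(y_rc, expected))                       # True
```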
nips_2021_vYZmTEDFoqP | Provably Efficient Reinforcement Learning with Linear Function Approximation under Adaptivity Constraints | Tianhao Wang, Dongruo Zhou, Quanquan Gu | accept | The paper studies reinforcement learning in linear MDPs when the agent is limited in how frequently it can change its policy (e.g., the rarely-switching setting). This setting has been well-developed in simpler problems like online learning and linear bandits, and the theoretical results are what one would expect. The main concern is that the technical novelty is fairly low here: the paper seems to combine known techniques for linear MDPs with techniques from the bandit literature on limited adaptivity. As such, we recommend rejecting the paper. | train | [
"zxOAQW2o5v",
"Mam1HCgzpN",
"DNqyZXF3sQ6",
"WQJrTTKpXNN",
"RNvoNE2gSkF",
"4KQtGhc8LN",
"7bjqWav1j56",
"leZsF61Q4S"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers RL with limited number of batches. They show that they can achieve the same regret as full adaptivity with lower number of batches. The result seems well explored. The first result reduces the number of batches to achieve the same regret as with T batches. The second result reduces the number... | [
6,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
2,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"nips_2021_vYZmTEDFoqP",
"7bjqWav1j56",
"leZsF61Q4S",
"zxOAQW2o5v",
"4KQtGhc8LN",
"nips_2021_vYZmTEDFoqP",
"nips_2021_vYZmTEDFoqP",
"nips_2021_vYZmTEDFoqP"
] |
nips_2021_FGAi8TP3ShV | Nonsmooth Implicit Differentiation for Machine-Learning and Optimization | In view of training increasingly complex learning architectures, we establish a nonsmooth implicit function theorem with an operational calculus. Our result applies to most practical problems (i.e., definable problems) provided that a nonsmooth form of the classical invertibility condition is fulfilled. This approach allows for formal subdifferentiation: for instance, replacing derivatives by Clarke Jacobians in the usual differentiation formulas is fully justified for a wide class of nonsmooth problems. Moreover this calculus is entirely compatible with algorithmic differentiation (e.g., backpropagation). We provide several applications such as training deep equilibrium networks, training neural nets with conic optimization layers, or hyperparameter-tuning for nonsmooth Lasso-type models. To show the sharpness of our assumptions, we present numerical experiments showcasing the extremely pathological gradient dynamics one can encounter when applying implicit algorithmic differentiation without any hypothesis.
| accept | This paper provides a justification of implicit differentiation in nonsmooth settings with conservative Jacobians. The proposed method is also compatible with standard AD. The authors also establish the connection of conservative Jacobians with the Clarke Jacobian. The authors finally apply the results to multiple machine learning problems, e.g., deep equilibrium networks, optimization layers, and bi-level optimization. The paper is well written and easy to follow. I believe this paper will be of interest to a wide range of the NeurIPS community.
There are still several minor issues that need to be addressed, especially the discussion of related work in the literature (Reviewers gMrk, 2ovw) and more comprehensive empirical experiments (Reviewer eaLv).
"gpatP9QpcxU",
"GKTUPVwSPj",
"OqvGSi49rpS",
"vUbKsizn5K1",
"uDqGJMAO_He",
"NVuYaiRd46h",
"4QeFZl5AUgE",
"eN2m19bDsn",
"C0uwrlTyoaG",
"FvVHQ6Cujky"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I have no major issues accepting this paper. I will keep my score as it is then.",
" Thanks again for the report. As mentioned in our response, we will be sure to add the suggested references and expose their connection to our work.",
"This paper surveys and summarizes the theoretical... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
1,
3
] | [
"uDqGJMAO_He",
"OqvGSi49rpS",
"nips_2021_FGAi8TP3ShV",
"OqvGSi49rpS",
"FvVHQ6Cujky",
"C0uwrlTyoaG",
"eN2m19bDsn",
"nips_2021_FGAi8TP3ShV",
"nips_2021_FGAi8TP3ShV",
"nips_2021_FGAi8TP3ShV"
] |
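A toy instance (ours) of the nonsmooth implicit differentiation the record above is about: a ReLU fixed-point layer, in the spirit of deep equilibrium networks, differentiated through its equilibrium with the implicit function theorem, using the indicator selection from the ReLU subdifferential and checked against finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(size=(n, n))
W *= 0.5 / np.linalg.norm(W, 2)                  # spectral norm 0.5, so the map below contracts
b = rng.normal(size=n)

def fixed_point(b, iters=200):
    z = np.zeros(n)
    for _ in range(iters):
        z = np.maximum(W @ z + b, 0.0)           # z* = relu(W z* + b): a toy equilibrium layer
    return z

z = fixed_point(b)
D = np.diag((W @ z + b > 0).astype(float))       # one valid selection from the ReLU subdifferential
J = np.linalg.solve(np.eye(n) - D @ W, D)        # implicit function theorem: dz*/db = (I - D W)^{-1} D

i, eps = 0, 1e-6                                 # finite-difference check of one column
fd = (fixed_point(b + eps * np.eye(n)[i]) - fixed_point(b - eps * np.eye(n)[i])) / (2 * eps)
print(np.allclose(J[:, i], fd, atol=1e-4))       # True away from the kinks
```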
nips_2021_HipwnJKnp3 | Heuristic-Guided Reinforcement Learning | We provide a framework to accelerate reinforcement learning (RL) algorithms by heuristics that are constructed by domain knowledge or offline data. Tabula rasa RL algorithms require environment interactions or computation that scales with the horizon of the sequential decision-making task. Using our framework, we show how heuristic-guided RL induces a much shorter horizon sub-problem that provably solves the original task. Our framework can be viewed as a horizon-based regularization for controlling bias and variance in RL under a finite interaction budget. In theory, we characterize the properties of a good heuristic and the resulting impact on RL acceleration. In particular, we introduce the novel concept of an improvable heuristic that can allow any RL agent to conservatively extrapolate beyond its prior knowledge. In practice, we instantiate our framework to accelerate several state-of-the-art algorithms in simulated robotic control tasks and procedurally generated games. Our framework complements the rich literature on warm-starting RL using expert demonstrations or exploratory data-sets, and creates a unified channel to inject prior knowledge into RL.
| accept | I thank the authors for their submission and active participation in the discussions. The majority of reviewers find this paper valuable; in particular, they emphasized the novelty [bqji], a good theoretical framework [RNky] with new findings on how admissibility and consistency help RL [58wi], a comprehensive discussion of related literature [RNky,vrFt], and that the paper is well structured and well written [RNky,vrFt]. On the negative side, reviewer bqji voiced concerns about limited experimental validation. I believe the authors have addressed these concerns well in their rebuttal. I thus side with reviewers RNky, vrFt and 58wi, and recommend acceptance. I encourage the authors to further improve their paper based on the reviewer feedback.
| train | [
"vc0QKMu-YJM",
"3NLflt94Okx",
"qtANw67wiOw",
"545sC2SDQlG",
"1JieHMxAwQY",
"i08avc8YWGS",
"0bTlILrEXO",
"lXp3blN5Zhl",
"v4P-J55EcWk",
"5eQJ2JpH6rk",
"w-WixN4ov8e",
"LLwGIMNBBw0",
"10oVcAYGhB",
"YnYYMVq6uTQ",
"hXw5qDBydf",
"aZjJk_dwjWM",
"dbUYj67Y3RA",
"KzvbrkGhefm",
"-nLPJy0xhi2"... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_rev... | [
" We are glad our response has helped clear the ambiguities, and will transfer over the explanations to the final paper version. Regarding your and **Reviewer 58wi**'s points about the term \"heuristic\", indeed we've realized as a result of the discussions here that our intended interpretation of this term is far ... | [
-1,
-1,
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3
] | [
"545sC2SDQlG",
"545sC2SDQlG",
"nips_2021_HipwnJKnp3",
"aZjJk_dwjWM",
"i08avc8YWGS",
"lXp3blN5Zhl",
"nips_2021_HipwnJKnp3",
"v4P-J55EcWk",
"5eQJ2JpH6rk",
"10oVcAYGhB",
"LLwGIMNBBw0",
"hXw5qDBydf",
"ZyGJFmfxxhI",
"4n4vLP-BIu2",
"-nLPJy0xhi2",
"qtANw67wiOw",
"-nLPJy0xhi2",
"nips_2021_... |
nips_2021_KmPSe18DGLs | Statistical Undecidability in Linear, Non-Gaussian Causal Models in the Presence of Latent Confounders | Konstantin Genin | accept | A nice summary of this paper from one of the reviews:
"The paper shows that for linear models with additive non-Gaussian noise components (LiNGAM) in the presence of latent confounders the inference of the direction of causal relations converges in the limit in probability to the correct orientation, but no longer satisfies the criterion for ‘statistical decidability’, this in contrast to unconfounded LiNGAMs. This result could have implications for the interpretation of causal models discovered under the LiNGAM framework."
While reviewer opinion was initially mixed, the discussion with the authors led to an emerging consensus that the paper is novel, interesting, and a worthwhile addition to the NeurIPS proceedings.
"Hi9S3M3qL4W",
"GCG-WIr1yJ",
"ibVlMO3h7jV",
"p0WjZ9_Z0wc",
"pGqV4Xz5jqK",
"TtAo2UuQ7gq",
"q-qz5Hbg_uE",
"Rzi9WBPwflX",
"iXL1vHT_9se",
"4uYmc0o9zY4",
"Iu-DD7Zrwj",
"-C-g24pL-u8",
"X4k6nDxlsWJ",
"C2Rwq3_li4t",
"tSFITmXIjJE",
"YbwwwvjnVhw",
"ZWAQ_CO6XBP",
"Sr1Skke6suh",
"ItnS_zXqNIV... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"a... | [
" I very much appreciate it! I learned a lot during this process.",
"This paper focuses on the problem of causal discovery in additive linear models with non-Gaussian errors (LINGAM) and latent variables. It asks: are the underlying orientations statistically decidable (controlled error for every n) for any algor... | [
-1,
7,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
-1,
-1,
-1,
-1,
3
] | [
"p0WjZ9_Z0wc",
"nips_2021_KmPSe18DGLs",
"tSFITmXIjJE",
"pGqV4Xz5jqK",
"q-qz5Hbg_uE",
"nips_2021_KmPSe18DGLs",
"4uYmc0o9zY4",
"iXL1vHT_9se",
"Iu-DD7Zrwj",
"ZWAQ_CO6XBP",
"-C-g24pL-u8",
"X4k6nDxlsWJ",
"YbwwwvjnVhw",
"nips_2021_KmPSe18DGLs",
"C2Rwq3_li4t",
"TtAo2UuQ7gq",
"GCG-WIr1yJ",
... |
nips_2021_PwVruv8s3_Q | A novel notion of barycenter for probability distributions based on optimal weak mass transport | We introduce weak barycenters of a family of probability distributions, based on the recently developed notion of optimal weak transport of mass by Gozlan et al. (2017) and Backhoff-Veraguas et al. (2020). We provide a theoretical analysis of this object and discuss its interpretation in the light of convex ordering between probability measures. In particular, we show that, rather than averaging the input distributions in a geometric way (as the Wasserstein barycenter based on classic optimal transport does) weak barycenters extract common geometric information shared by all the input distributions, encoded as a latent random variable that underlies all of them. We also provide an iterative algorithm to compute a weak barycenter for a finite family of input distributions, and a stochastic algorithm that computes them for arbitrary populations of laws. The latter approach is particularly well suited for the streaming setting, i.e., when distributions are observed sequentially. The notion of weak barycenter and our approaches to compute it are illustrated on synthetic examples, validated on 2D real-world data and compared to standard Wasserstein barycenters.
| accept | All the reviewers insisted on the quality of the exposition of the paper, which is able to import advanced notions from OT to the ML community. This being said, the paper only partially motivates (from both a theoretical and numerical perspective) the relevance of weak OT for ML. For this reason, and after discussing with the reviewers, I believe that the paper is borderline for acceptance. I nevertheless proposed acceptance. I strongly urge the authors to take into account some of the suggestions of the comments of the reviewers in the final version. | train | [
"L61v_WKqBOp",
"qN9RUcXsAu8",
"bXkMTjrMP9P",
"2gcvXO5I1gD",
"LvE4iSJh9PW",
"VYZHoyc6TWg",
"t9czKnbP8H4",
"0NWM580me_2",
"KPj1gRJdvIL",
"fkf2X0WvsS",
"GJI6wzm4jMO",
"BDNb81-xRy1",
"eiLLm9hBQTp",
"7z7ukseSYiU",
"eyhTdbTG-lR"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We agree that comparing the value of the computed weak barycenter against the approximated optimal value (using the plug-in estimator) is beneficial, in particular towards supporting a wider use of the proposed algorithm. We thank the reviewer for suggesting this practice. In the revised version of the paper, we ... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"t9czKnbP8H4",
"KPj1gRJdvIL",
"LvE4iSJh9PW",
"nips_2021_PwVruv8s3_Q",
"BDNb81-xRy1",
"eiLLm9hBQTp",
"fkf2X0WvsS",
"nips_2021_PwVruv8s3_Q",
"GJI6wzm4jMO",
"7z7ukseSYiU",
"0NWM580me_2",
"2gcvXO5I1gD",
"eyhTdbTG-lR",
"nips_2021_PwVruv8s3_Q",
"nips_2021_PwVruv8s3_Q"
] |
nips_2021_T2yRQao67x | Temporal-attentive Covariance Pooling Networks for Video Recognition | For video recognition task, a global representation summarizing the whole contents of the video snippets plays an important role for the final performance. However, existing video architectures usually generate it by using a simple, global average pooling (GAP) method, which has limited ability to capture complex dynamics of videos. For image recognition task, there exist evidences showing that covariance pooling has stronger representation ability than GAP. Unfortunately, such plain covariance pooling used in image recognition is an orderless representative, which cannot model spatio-temporal structure inherent in videos. Therefore, this paper proposes a Temporal-attentive Covariance Pooling (TCP), inserted at the end of deep architectures, to produce powerful video representations. Specifically, our TCP first develops a temporal attention module to adaptively calibrate spatio-temporal features for the succeeding covariance pooling, approximatively producing attentive covariance representations. Then, a temporal covariance pooling performs temporal pooling of the attentive covariance representations to characterize both intra-frame correlations and inter-frame cross-correlations of the calibrated features. As such, the proposed TCP can capture complex temporal dynamics. Finally, a fast matrix power normalization is introduced to exploit geometry of covariance representations. Note that our TCP is model-agnostic and can be flexibly integrated into any video architectures, resulting in TCPNet for effective video recognition. The extensive experiments on six benchmarks (e.g., Kinetics, Something-Something V1 and Charades) using various video architectures show our TCPNet is clearly superior to its counterparts, while having strong generalization ability. The source code is publicly available.
| accept | All three reviewers provided a rating of "6: Marginally above the acceptance threshold" for this submission. Reviewers qjEJ and SHTG initially raised concerns about the technical novelty of the proposed temporal-attentive covariance pooling (TCP). However, the authors provided a response that convinced both reviewers that there are significant differences between the TCP and related mechanisms introduced in prior works. Reviewer qjEJ pointed out the somewhat unintuitive design of TSA but concurred that the approach is empirically effective. Finally, Reviewer zp2B requested a few additional studies (such as the comparison to local feature aggregation methods and the ablation on k), which were presented in the response by the authors. The ACs agree with the recommendation of accepting the paper.
"HYFyBakbXbM",
"p92WiQ_FTp",
"PH3b2REXYW",
"3bIHxwo84Xw",
"36Ug6pHww2q",
"jwifQmgo5rh",
"KS8q4ov4tZU",
"F22slGz166",
"SmZY0JW2M6l",
"aiSDomIwKtk",
"9QNeDglHe7Y",
"48DUbD_TFe1"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" We are delighted to know that our responses addressed the reviewer's concern about the differences between TCP and existing works, as well as the concern about comparisons with more state-of-the-art methods. We sincerely thank the reviewer for acknowledgement of our contributions and upgrading the overall rating ... | [
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"PH3b2REXYW",
"nips_2021_T2yRQao67x",
"9QNeDglHe7Y",
"jwifQmgo5rh",
"nips_2021_T2yRQao67x",
"SmZY0JW2M6l",
"F22slGz166",
"aiSDomIwKtk",
"36Ug6pHww2q",
"48DUbD_TFe1",
"p92WiQ_FTp",
"nips_2021_T2yRQao67x"
] |
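For the record above, here is plain covariance pooling with matrix power normalization on a single frame's features (a slow reference sketch of ours; the paper's TCP additionally applies temporal attention and temporal covariance pooling across frames, and its "fast matrix power normalization" avoids the explicit eigendecomposition used here).

```python
import numpy as np

rng = np.random.default_rng(0)
HW, C = 49, 64                                   # spatial positions and channels of one frame
X = rng.normal(size=(HW, C))

# Second-order (covariance) pooling in place of global average pooling
Xc = X - X.mean(axis=0, keepdims=True)
cov = Xc.T @ Xc / (HW - 1)

# Matrix power normalization: here the matrix square root via eigendecomposition,
# the slow reference computation for what fast iterative schemes approximate.
w, U = np.linalg.eigh(cov + 1e-5 * np.eye(C))
cov_half = (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

feat = cov_half[np.triu_indices(C)]              # final descriptor: upper triangle, C(C+1)/2 dims
print(feat.shape)                                # (2080,)
```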
nips_2021_sn0wj3Dci2J | Revisiting Smoothed Online Learning | Lijun Zhang, Wei Jiang, Shiyin Lu, Tianbao Yang | accept | The paper considers the analysis of competitive ratio/dynamic regret for problems with hitting cost and switching cost. They provide concrete improvements in terms of nailing down the constants for the ratios in the case of polyhedral functions and quadratic growth functions, achieving this with a relatively simple algorithm and a clean analysis. The reviewers have appreciated the clarity and simplicity of the paper and the approach, the concreteness of the improvements achieved, as well as the right attribution and discussion of previous work. Nevertheless, the paper does not provide any novel algorithmic insights and analyses a well-known algorithm in various settings with improvements in guarantees. Overall, while the paper is close to the borderline post-discussion, the reviewers unanimously agreed that the paper is slightly above the borderline due to the concrete improvements for well-known and well-studied problems, and I am thereby proposing an accept. | train | [
"Efue5aI7vzt",
"04m5NQA4Rj",
"x3smPmE1DfN",
"EIsshHFac7H",
"SsVKxxbfXH",
"W_qLGuWSsxt",
"iqgatzVdv90",
"zFMZvRCOaO",
"iNTejGqIv_",
"eIjKd1cjQB",
"2UOsibUYRP",
"4zxacpyJF70",
"NKGxwKfW6b",
"lgTd3s_iH9U",
"eQgB6t64EGF",
"dBlJvZerS_V"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer MZJx,\n\nThank you very much for your kind reply! We will revise our paper according to the suggestion.\n\nBest\\\nAuthors",
" After reading authors' response and other reviews, I will keep my score. I suggest incorporating the connection between two parts of contribution to the revision.",
" De... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"04m5NQA4Rj",
"lgTd3s_iH9U",
"EIsshHFac7H",
"iqgatzVdv90",
"nips_2021_sn0wj3Dci2J",
"NKGxwKfW6b",
"2UOsibUYRP",
"eIjKd1cjQB",
"nips_2021_sn0wj3Dci2J",
"4zxacpyJF70",
"SsVKxxbfXH",
"iNTejGqIv_",
"dBlJvZerS_V",
"eQgB6t64EGF",
"nips_2021_sn0wj3Dci2J",
"nips_2021_sn0wj3Dci2J"
] |
nips_2021_zHj5fx11jQC | Marginalised Gaussian Processes with Nested Sampling | Gaussian Process models are a rich distribution over functions with inductive biases controlled by a kernel function. Learning occurs through optimisation of the kernel hyperparameters using the marginal likelihood as the objective. This work proposes nested sampling as a means of marginalising kernel hyperparameters, because it is a technique that is well-suited to exploring complex, multi-modal distributions. We benchmark against Hamiltonian Monte Carlo on time-series and two-dimensional regression tasks, finding that a principled approach to quantifying hyperparameter uncertainty substantially improves the quality of prediction intervals.
| accept | The paper proposes the use of nested sampling (NS) for inference in Gaussian process (GP) models with Gaussian likelihoods and shows the benefits of this fully Bayesian approach over competing methods such as HMC and type-II marginal likelihood hyper-parameter estimation. Although NS has been studied previously and the evaluation of the proposed method is mainly focused on the spectral mixture kernel and low-dimensional settings, all the reviewers recommend acceptance and I agree, as the paper further strengthens the machinery to carry out inference in GP models. I note and appreciate the authors’ commitment to release the corresponding code. | train | [
"bS0t_WQ7ji",
"w39opsetHVm",
"gO4uFCPVOwB",
"otZ6WTGvOaO",
"72jbfYte5tG",
"lyqSZb4WSXl",
"Tl-Kc9dH08-",
"dTrJJqDAMXJ",
"8eD3v1VL4Wm",
"aLYT1fpeSkk",
"znLeJ4qdXDC",
"_sX2v5REA2O"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your message, we agree that the reconstruction has imperfections, and with more training data we expect the reconstruction would be much sharper - we will update the text to explain this in greater detail. We mainly wish to demonstrate that in situations of high epistemic uncertainty, the fully Baye... | [
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
5,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"otZ6WTGvOaO",
"lyqSZb4WSXl",
"nips_2021_zHj5fx11jQC",
"Tl-Kc9dH08-",
"nips_2021_zHj5fx11jQC",
"_sX2v5REA2O",
"gO4uFCPVOwB",
"72jbfYte5tG",
"znLeJ4qdXDC",
"nips_2021_zHj5fx11jQC",
"nips_2021_zHj5fx11jQC",
"nips_2021_zHj5fx11jQC"
] |
nips_2021_EnmG3G5SYR | Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning | Actor-critic methods are widely used in offline reinforcement learning practice, but are not so well-understood theoretically. We propose a new offline actor-critic algorithm that naturally incorporates the pessimism principle, leading to several key advantages compared to the state of the art. The algorithm can operate when the Bellman evaluation operator is closed with respect to the action value function of the actor's policies; this is a more general setting than the low-rank MDP model. Despite the added generality, the procedure is computationally tractable as it involves the solution of a sequence of second-order programs. We prove an upper bound on the suboptimality gap of the policy returned by the procedure that depends on the data coverage of any arbitrary, possibly data dependent comparator policy. The achievable guarantee is complemented with a minimax lower bound that is matching up to logarithmic factors.
| accept | This paper proposes an actor-critic style algorithm for solving the offline RL problem. The key contribution is in showing that pessimism can be naturally incorporated in this framework, designing an efficient algorithm with provable performance guarantees. At the same time, the reviewers also point out limitations/points of discussion, such as whether the goal of finding the policy with the highest minimum value function is the best choice (or a better choice than alternative choices) in the offline RL setting, and the assumptions on the function space and the data generation process; having a discussion around these and other points from the reviews would improve the quality of the paper.
Overall, since all reviewers agree that this paper makes a good theoretical contribution to the offline RL problem, I am recommending acceptance. | train | [
"16s55lPPmO",
"XQXx_Disoh_",
"8PowgJPQuAa",
"1CnAaeYWof0",
"Nlkz2oiwph5",
"DSFIbg_ThxQ",
"Uoc8wkfcSqy",
"g9Jbo0UGupA",
"oFL0-mGNEkA"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Good work -- I enjoyed reading the paper! ",
" We thank the reviewer for connecting this work with the broader deep RL literature.\n\nA goal we had with this work was to see whether existing theoretical analyses with provable guarantees with function approximation (which are very recent and mostly for value-bas... | [
-1,
-1,
-1,
-1,
-1,
6,
8,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
1
] | [
"8PowgJPQuAa",
"oFL0-mGNEkA",
"g9Jbo0UGupA",
"Uoc8wkfcSqy",
"DSFIbg_ThxQ",
"nips_2021_EnmG3G5SYR",
"nips_2021_EnmG3G5SYR",
"nips_2021_EnmG3G5SYR",
"nips_2021_EnmG3G5SYR"
] |
nips_2021__MQBBpJzoZd | Bayesian Bellman Operators | We introduce a novel perspective on Bayesian reinforcement learning (RL); whereas existing approaches infer a posterior over the transition distribution or Q-function, we characterise the uncertainty in the Bellman operator. Our Bayesian Bellman operator (BBO) framework is motivated by the insight that when bootstrapping is introduced, model-free approaches actually infer a posterior over Bellman operators, not value functions. In this paper, we use BBO to provide a rigorous theoretical analysis of model-free Bayesian RL to better understand its relationship to established frequentist RL methodologies. We prove that Bayesian solutions are consistent with frequentist RL solutions, even when approximate inference is used, and derive conditions for which convergence properties hold. Empirically, we demonstrate that algorithms derived from the BBO framework have sophisticated deep exploration properties that enable them to solve continuous control tasks at which state-of-the-art regularised actor-critic algorithms fail catastrophically.
| accept | All of the reviewers agree that the paper presents a novel approach to model-free Bayesian RL that distinguishes itself from previous work with a clear theoretical construction and empirical evidence to support the approach. Given the potential importance and interest to the community, I recommend acceptance as a spotlight. | test | [
"n-yRk_Jj6X",
"0ypOdpbooXq",
"_IqRxP-sdL",
"p7nEQJlyzn_",
"N54FgXjil3m",
"kKdyOymo4yM",
"Er16-KzzGll",
"HZQQVBXQLlL",
"VNZUijB1roE",
"UEV31SeHhNt",
"R39YL5XNLLI",
"S_97AScUg-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors introduce a Bayesian perspective on the application of Bellman operators to estimated Q-functions. The posterior distribution over the operator application is described, a Bernstein-von Mises-style frequentist analysis is undertaken, a variety of approximate approaches are proposed, and the methods are... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2021__MQBBpJzoZd",
"HZQQVBXQLlL",
"Er16-KzzGll",
"N54FgXjil3m",
"VNZUijB1roE",
"S_97AScUg-",
"R39YL5XNLLI",
"UEV31SeHhNt",
"n-yRk_Jj6X",
"nips_2021__MQBBpJzoZd",
"nips_2021__MQBBpJzoZd",
"nips_2021__MQBBpJzoZd"
] |
nips_2021_4azYdmhHCG | Uncertainty Calibration for Ensemble-Based Debiasing Methods | Ensemble-based debiasing methods have been shown effective in mitigating the reliance of classifiers on specific dataset bias, by exploiting the output of a bias-only model to adjust the learning target. In this paper, we focus on the bias-only model in these ensemble-based methods, which plays an important role but has not gained much attention in the existing literature. Theoretically, we prove that the debiasing performance can be damaged by inaccurate uncertainty estimations of the bias-only model. Empirically, we show that existing bias-only models fall short in producing accurate uncertainty estimations. Motivated by these findings, we propose to conduct calibration on the bias-only model, thus achieving a three-stage ensemble-based debiasing framework, including bias modeling, model calibrating, and debiasing. Experimental results on NLI and fact verification tasks show that our proposed three-stage debiasing framework consistently outperforms the traditional two-stage one in out-of-distribution accuracy.
| accept | This paper studies the uncertainty calibration properties of the bias-only model in ensemble-based de-biasing methods. The authors argue that the accurate estimation of uncertainty in the bias-only model has been overlooked in prior research, but can have an important influence on the overall de-biasing process. The method proposed in the paper applies calibration methods to the output of the bias-only model. Thus, from a novelty perspective, the method itself is a novel combination of existing approaches, while at a conceptual level the paper is addressing a previously overlooked issue and thus has stronger novelty at this level. The reviewers judged the approach and the theoretical development to be technically correct. The experiments were judged to provide adequate support for claims. The writing was judged to be clear. The reviewers indicated that all of their questions had been adequately addressed following the author response. As a result, the paper is recommended for acceptance. The authors should be sure to incorporate the discussed clarifications and updates in the final manuscript. | train | [
"u4_5cW1q_Rt",
"ePiJtn6-Std",
"Kukex9DH3u",
"Spd0wS0jjxr",
"rc9yI6QAEne",
"3oaFq7BGLN8",
"t5kuskL0WpV",
"Fh-Nk03fTs"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper shows an increasing connection between the uncertainty estimation of the biased only models and proposed MoCaD, a three-stage EBD approach considering model calibrating. Originality: High. The combination of uncertainty calibration and EBD is a very novel idea and the impressive results shown in this p... | [
8,
6,
7,
-1,
-1,
-1,
-1,
6
] | [
3,
3,
4,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_4azYdmhHCG",
"nips_2021_4azYdmhHCG",
"nips_2021_4azYdmhHCG",
"Fh-Nk03fTs",
"u4_5cW1q_Rt",
"Kukex9DH3u",
"ePiJtn6-Std",
"nips_2021_4azYdmhHCG"
] |
nips_2021_10anajdGZm | Provably Faster Algorithms for Bilevel Optimization | Junjie Yang, Kaiyi Ji, Yingbin Liang | accept | Reviewers unanimously agree that this is a paper of high originality and soundness of the results. Some reviewers have raised concerns over inflated claims in light of experimental validation (5eE2, VGzP), as well as clarity of the proofs and discussion of the limitations (VGzP). After a strong rebuttal and engaging in discussion with the authors, the reviewers have found that the authors have addressed their remarks, and some reviewers increased their score. | train | [
"BO6q_wiCsLw",
"PvTDP_pX9I5",
"_fZ4nXw61qo",
"VT37BfvLm7",
"Q38hUxQbwcI",
"8LNjDxvXFE",
"Mxg0PCKwMkt",
"KxdSUTqOXbS",
"7Q0BLMMaKus",
"ZHbnKpZY3nJ",
"K2R7bOGAWqk"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We thank the reviewer very much for further reviewing our response and raising the score!",
" We thank the reviewer very much for the further comments and raising the score!\n\nQ: I think it is important to highlight these numerical details: the double and triple loop, the choice of the \"outer gradient step\",... | [
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
"8LNjDxvXFE",
"VT37BfvLm7",
"nips_2021_10anajdGZm",
"ZHbnKpZY3nJ",
"nips_2021_10anajdGZm",
"Q38hUxQbwcI",
"Q38hUxQbwcI",
"Q38hUxQbwcI",
"K2R7bOGAWqk",
"_fZ4nXw61qo",
"nips_2021_10anajdGZm"
] |
nips_2021_Ic9vRN3VpZ | Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks for Link Prediction | Graph Neural Networks (GNNs) have been widely applied to various fields for learning over graph-structured data. They have shown significant improvements over traditional heuristic methods in various tasks such as node classification and graph classification. However, since GNNs heavily rely on smoothed node features rather than graph structure, they often show poorer performance than simple heuristic methods in link prediction, where the structural information, e.g., overlapped neighborhoods, degrees, and shortest paths, is crucial. To address this limitation, we propose Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs) that learn useful structural features from an adjacency matrix and estimate overlapped neighborhoods for link prediction. Our Neo-GNNs generalize neighborhood overlap-based heuristic methods and handle overlapped multi-hop neighborhoods. Our extensive experiments on Open Graph Benchmark datasets (OGB) demonstrate that Neo-GNNs consistently achieve state-of-the-art performance in link prediction.
| accept | The paper proposes a novel GNN architecture for link prediction based on neighbourhood overlap.
The authors provided a detailed rebuttal including additional experimental results requested by the reviewers. There has been an extensive follow-up discussion. While doubts remain about the clarity of the presentation, the AC believes this can be improved in the final version according to the discussion with the reviewers and their recommendations. We recommend accepting the paper. | train | [
"yYeA1VPFpvW",
"0ZV7ZdZgP1",
"y892iTR0WQd",
"IhEm89YEenc",
"mVuK4EGbVC_",
"Yrz3aNv7A7T",
"puolRy9rNWa",
"ZdC3LnsPype",
"rrq60isp-qn",
"qj1tYjWF5CR",
"ivviVBG89qH",
"wLtVqxUJIg",
"1m4E2N-fD0L",
"AJn5P0aIFF3",
"nlwl08nWaQ2",
"Sjd8MXu4Zse"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This work added a model into GNN to improve link prediction. The connection is a weighted combination of the model's output and GNN's output in the scoring function (scoring a predicted link). The paper claims that this model computes the number of overlapped neighbors. Experiments show the combination improves t... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_Ic9vRN3VpZ",
"y892iTR0WQd",
"IhEm89YEenc",
"mVuK4EGbVC_",
"ivviVBG89qH",
"qj1tYjWF5CR",
"wLtVqxUJIg",
"qj1tYjWF5CR",
"nips_2021_Ic9vRN3VpZ",
"AJn5P0aIFF3",
"nlwl08nWaQ2",
"1m4E2N-fD0L",
"Sjd8MXu4Zse",
"rrq60isp-qn",
"yYeA1VPFpvW",
"nips_2021_Ic9vRN3VpZ"
] |
nips_2021_Vc2SXubOUuf | Self-Supervised Multi-Object Tracking with Cross-input Consistency | In this paper, we propose a self-supervised learning procedure for training a robust multi-object tracking (MOT) model given only unlabeled video. While several self-supervisory learning signals have been proposed in prior work on single-object tracking, such as color propagation and cycle-consistency, these signals are not effective for training RNN models, which are needed to achieve accurate MOT: they yield degenerate models that, for instance, always match new detections to tracks with the closest initial detections. We propose a novel self-supervisory signal that we call cross-input consistency: we construct two distinct inputs for the same sequence of video, by hiding different information about the sequence in each input. We then compute tracks in that sequence by applying an RNN model independently on each input, and train the model to produce consistent tracks across the two inputs. We evaluate our unsupervised method on MOT17 and KITTI --- remarkably, we find that, despite training only on unlabeled video, our unsupervised approach outperforms four supervised methods published in the last 1--2 years, including Tracktor++, FAMNet, GSM, and mmMOT.
| accept | The paper initially received mixed reviews, two for (6 & 7) and two against (4 & 5). The reviewers appreciated the motivation and novel self-supervised learning method proposed for MOT, as well as the superior results against previous unsupervised methods, as well as many supervised methods. The main concerns from the reviewers were:
1. missing discussion about choice/effect of hyperparameters (n).
2. "occlusion-based hiding" is proposed (and a lot of space is dedicated to describing it), but performs worse than the proposed "visual-spatial hiding".
3. some claims not supported
4. still requires a trained detector to work, which can be considered as not fully unsupervised MOT.
5. missing ablation studies and baselines.
6. use more recent comparison methods for MOT, Chained-Tracker and QDTrack (published after NeurIPS deadline).
The authors provided a response and attempted to address these points through further explanations and promises to rewrite/expand parts of the paper. Some ablation studies were already presented in the supplemental material. After the discussion, one positive reviewer raised their score from 6 to 7, while the other positive reviewer maintained 7. One negative reviewer was concerned that a large number of changes would be required to address the review comments, and thus maintained a rating of 4. The other negative reviewer did not participate in the discussion. The AC checked the response to this reviewer's comments and found it satisfactory. In the end, the AC has a positive impression of the paper and thus recommends acceptance, under the condition that the presentation is improved.
The authors should revise the paper according to the reviews and discussion, in particular: reduce the discussion on occlusion-based hiding, increase discussion on visual-spatial hiding, include the ablation studies from the supplemental, add baselines (tracking w/o training), expand discussion about parameter n. | train | [
"HvDtiv93O_",
"mj01YzEWtE",
"XiXWJQ5LaKi",
"YFn2B93V2bS",
"YOV8SP18NI",
"KFuWyQCZbeQ",
"D2h8sqk8bVc",
"BzeBzUTXWl",
"DqZ9QdZCJiM",
"B8upm6d0QPV"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After reading all the comments and responses, I find several crucial concerns have not been addressed. \n\nThe authors agree with most of my concerns, such as \n1) missing discussion about choice/effect of hyperparameters (n); \n2) section 4.1 is included in the method section but not used in the final method; \n... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"YOV8SP18NI",
"nips_2021_Vc2SXubOUuf",
"D2h8sqk8bVc",
"B8upm6d0QPV",
"DqZ9QdZCJiM",
"BzeBzUTXWl",
"mj01YzEWtE",
"nips_2021_Vc2SXubOUuf",
"nips_2021_Vc2SXubOUuf",
"nips_2021_Vc2SXubOUuf"
] |
nips_2021_SlXwiSeyE1 | Tree in Tree: from Decision Trees to Decision Graphs | Decision trees have been widely used as classifiers in many machine learning applications thanks to their lightweight and interpretable decision process. This paper introduces Tree in Tree decision graph (TnT), a framework that extends the conventional decision tree to a more generic and powerful directed acyclic graph. TnT constructs decision graphs by recursively growing decision trees inside the internal or leaf nodes instead of greedy training. The time complexity of TnT is linear in the number of nodes in the graph; therefore, it can construct decision graphs on large datasets. Compared to decision trees, we show that TnT achieves better classification performance with reduced model size, both as a stand-alone classifier and as a base-estimator in bagging/AdaBoost ensembles. Our proposed model is a novel, more efficient and accurate alternative to the widely-used decision trees.
| accept | The paper presents a way to grow decision graphs by using decision trees as the splitting criterion of another tree. Reviewers praised the simplicity of the idea and its effectiveness on classification tasks.
During the reviewing and rebuttal phase, some criticisms regarding missing references and more crucially missing experiments with suitable comparisons were raised.
The authors provided additional convincing results in the rebuttal and managed to convince one reviewer to upgrade their weak rejection score to a weak acceptance.
The paper is accepted, subject to including the missing comparisons to the literature, and the additional baselines and experiments discussed in the rebuttal. | train | [
"7WSAnXM5ouK",
"--4K4voOPMF",
"uhvzO5sBjLJ",
"8KmemN8qjU",
"dhscZXm9X8S",
"YeQ7mtevU-",
"4T7WejKKvj",
"nmp2sjzzA2s",
"_JjjfBTiZmP",
"cd6pF76Vaag",
"_M0ePTnWGo"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response. They have addressed my concerns satisfactorily. \n",
" ## Additional experiments on all datasets\nWe thank the reviewer for the feedback. We agree that the advantage of TnT over standard trees looks obvious at a fixed budget. We will include a new experiment in the final ... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"4T7WejKKvj",
"8KmemN8qjU",
"nips_2021_SlXwiSeyE1",
"YeQ7mtevU-",
"cd6pF76Vaag",
"uhvzO5sBjLJ",
"_M0ePTnWGo",
"_JjjfBTiZmP",
"nips_2021_SlXwiSeyE1",
"nips_2021_SlXwiSeyE1",
"nips_2021_SlXwiSeyE1"
] |
nips_2021_hrkY-fe8nJV | Test-time Collective Prediction | An increasingly common setting in machine learning involves multiple parties, each with their own data, who want to jointly make predictions on future test points. Agents wish to benefit from the collective expertise of the full set of agents to make better predictions than they would individually, but may not be willing to release labeled data or model parameters. In this work, we explore a decentralized mechanism to make collective predictions at test time, that is inspired by the literature in social science on human consensus-making. Building on a query model to facilitate information exchange among agents, our approach leverages each agent’s pre-trained model without relying on external validation, model retraining, or data pooling. A theoretical analysis shows that our approach recovers inverse mean-squared-error (MSE) weighting in the large-sample limit which is known to be the optimal way to combine independent, unbiased estimators. Empirically, we demonstrate that our scheme effectively combines models with differing quality across the input space: the proposed consensus prediction achieves significant gains over classical model averaging, and even outperforms weighted averaging schemes that have access to additional validation data. Finally, we propose a decentralized Jackknife procedure as a tool to evaluate the sensitivity of the collective predictions with respect to a single agent's opinion.
| accept | Strengths:
- Novel approach and application of the DeGroot consensus model
- Thorough theoretical analysis
- Effective empirical demonstration
- As stated by one reviewer: “well-written, well-executed”
Weaknesses (mild):
- Motivation for decentralized collective prediction could be improved
- The way in which privacy and data sharing properties of the approach are described
Summary:
Reviewers were unanimous in their view that this is a strong and complete submission. This was evident both in the original reviews as well as in correspondence with the authors. Authors are encouraged to discuss in the final version issues raised by the reviewers, including (i) how privacy and data sharing are considered, (ii) experimenting with classification (in addition to regression), and (iii) possible future relations to crypto and differential privacy.
| train | [
"gRvnbATsdPH",
"FDahmc9qOBP",
"Xp6fHA-Ke42",
"x_2pyOcsRDF",
"4zUMR6nVJB1",
"C81ClccEEql",
"o1tcXxmFF42",
"yDc96LfjVF",
"OrIRd0Lm9PR",
"2sGJKrr71Hg",
"QeGvbh59FeP"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. My ratings remain unchanged.\nI believe that including the results for classification tasks would help in a better understanding of this method.",
"This paper considers collective prediction at test time, and proposes a novel decentralized mechanism by leveraging each agent's pretrai... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"yDc96LfjVF",
"nips_2021_hrkY-fe8nJV",
"o1tcXxmFF42",
"C81ClccEEql",
"OrIRd0Lm9PR",
"2sGJKrr71Hg",
"FDahmc9qOBP",
"QeGvbh59FeP",
"nips_2021_hrkY-fe8nJV",
"nips_2021_hrkY-fe8nJV",
"nips_2021_hrkY-fe8nJV"
] |
nips_2021_gB4hvxTzQLQ | A Continuous Mapping For Augmentation Design | Keyu Tian, Chen Lin, Ser Nam Lim, Wanli Ouyang, Puneet Dokania, Philip Torr | accept | This paper is concerned with augmentation design, i.e. finding a set of parameters for generating random image variants via color adjustment / filtering / warping to aid generalization in models based on images. The basic idea is to reduce augmentations to a finite-dimensional continuous space, then learn a distribution over the parameters. State of the art methods [5] use bi-level optimization to learn this distribution. Instead, this paper proposes a Bayesian posterior where the likelihood of any set of hyperparameters can be computed in terms of scores on a held-out validation set. That construction, if defined exactly, would still involve a double-loop. However, essentially by "warm-starting" the inner-optimization, this is avoided and a single-loop sampling algorithm is obtained.
Reviewers were generally positive about this paper, and found it clear. The major concerns were that the experimental setup could be improved, discussion of related work could be more thorough, there could be more discussion of when the method might fail, and that it isn't entirely clear what the benefits of the Bayesian approach are. (The paper argues that it has the benefit of avoiding a double-loop optimization, but it appears that the idea of warm-starting the inner optimization could be applied to other methods as well, so it isn't entirely clear what is providing the benefit here.)
I found the central construction difficult to understand. (Reviews simply referred to it as a Bayesian model, but I wanted to understand what this model was, so I read the paper in detail.) After extensive discussion with the authors I am satisfied that there is a coherent model being defined. However, I strongly suggest that the notation of the paper be reconsidered to first explicitly state the model in the form of the equations (i), (ii), and (iii) that were given in the discussion below. | train | [
"z7jTSzv4CmW",
"7KN8byN6eUd",
"WIrmWc8bK_H",
"lJwlj9enwIx",
"0Tk9G2VsP0k",
"QdEhmbfQd8",
"wgIJUx66EV",
"n0CGfuMB_W",
"QXxGS3nl5GC",
"QQkPQHPWlBF",
"Hyp5i9mEBJ1",
"3vnLm-EBUaK",
"fcLRFbn9w89"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for your detailed review. This is a gentle reminder that the deadline for preliminary reviewing process is approaching soon. We are looking forward to your soonest reply.",
" Thanks again for your detailed review. This is a gentle reminder that the deadline for preliminary reviewing process is appr... | [
-1,
-1,
-1,
8,
-1,
7,
-1,
-1,
-1,
-1,
-1,
4,
7
] | [
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"lJwlj9enwIx",
"fcLRFbn9w89",
"3vnLm-EBUaK",
"nips_2021_gB4hvxTzQLQ",
"3vnLm-EBUaK",
"nips_2021_gB4hvxTzQLQ",
"QXxGS3nl5GC",
"lJwlj9enwIx",
"QQkPQHPWlBF",
"QdEhmbfQd8",
"fcLRFbn9w89",
"nips_2021_gB4hvxTzQLQ",
"nips_2021_gB4hvxTzQLQ"
] |
nips_2021_gbEreV4H2Jv | Neural Routing by Memory | Recent Convolutional Neural Networks (CNNs) have achieved significant success by stacking multiple convolutional blocks, named procedures in this paper, to extract semantic features. However, they use the same procedure sequence for all inputs, regardless of the intermediate features. This paper proffers a simple yet effective idea of constructing parallel procedures and assigning similar intermediate features to the same specialized procedures in a divide-and-conquer fashion. It relieves each procedure's learning difficulty and thus leads to superior performance. Specifically, we propose a routing-by-memory mechanism for existing CNN architectures. In each stage of the network, we introduce parallel Procedural Units (PUs). A PU consists of a memory head and a procedure. The memory head maintains a summary of a type of features. For an intermediate feature, we search its closest memory and forward it to the corresponding procedure in both training and testing. In this way, different procedures are tailored to different features and therefore tackle them better. Networks with the proposed mechanism can be trained efficiently using a four-step training strategy. Experimental results show that our method improves VGGNet, ResNet, and EfficientNet's accuracies on Tiny ImageNet, ImageNet, and CIFAR-100 benchmarks with a negligible extra computational cost.
| reject | This paper has received 4 expert reviews, with a single reviewer providing a slightly favorable rating (but a less informative review),
and 3 extremely critical and informative reviews. The reviewers raised the following points:
= Unconvincing evaluations, as comparisons are made only against the authors' own baselines, i.e. without the contributions of the paper, and no comparison with a competing method appears in the main paper. One comparison was discussed in the discussion phase, after receiving the authors' response, but it was found to be unconvincing. In summary, the accuracy results are quite close and only one method is reproduced by running the official code.
= The advantages of the method are unclear with respect to the state of the art.
= Experiments are not controlled.
The authors' response has been found to be hand-wavy in many respects, reiterating claims of the paper instead of attempting to answer the issues raised by the reviewers.
The AC judges that the paper is not yet ready. | train | [
"D2MdtMb3iXh",
"2s-PgCivNi",
"ktS3kGexxYl",
"EFmv8SKge9",
"gat5bKsKi5",
"3ZciFwG2kM",
"Va-N_MlkAT-",
"bO5g3D6mWMF",
"kD3LLUuKWzl",
"KQwpcCrQ72A",
"J-5tqMnAah"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes a nonparametric method for routing in convolutional network image classifiers, where routing is the selection of a subset of modules to execute on a given input. Routing by memory associates each module with a feature, the \"memory\", that is scored against the input feature to select the module... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
4
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_gbEreV4H2Jv",
"bO5g3D6mWMF",
"Va-N_MlkAT-",
"gat5bKsKi5",
"J-5tqMnAah",
"KQwpcCrQ72A",
"kD3LLUuKWzl",
"D2MdtMb3iXh",
"nips_2021_gbEreV4H2Jv",
"nips_2021_gbEreV4H2Jv",
"nips_2021_gbEreV4H2Jv"
] |
nips_2021_af_hng9tuNj | GeoMol: Torsional Geometric Generation of Molecular 3D Conformer Ensembles | Prediction of a molecule's 3D conformer ensemble from the molecular graph holds a key role in areas of cheminformatics and drug discovery. Existing generative models have several drawbacks including lack of modeling important molecular geometry elements (e.g. torsion angles), separate optimization stages prone to error accumulation, and the need for structure fine-tuning based on approximate classical force-fields or computationally expensive methods such as metadynamics with approximate quantum mechanics calculations at each geometry. We propose GeoMol -- an end-to-end, non-autoregressive and SE(3)-invariant machine learning approach to generate distributions of low-energy molecular 3D conformers. Leveraging the power of message passing neural networks (MPNNs) to capture local and global graph information, we predict local atomic 3D structures and torsion angles, avoiding unnecessary over-parameterization of the geometric degrees of freedom (e.g. one angle per non-terminal bond). Such local predictions suffice both for the training loss computation, as well as for the full deterministic conformer assembly (at test time). We devise a non-adversarial optimal transport based loss function to promote diverse conformer generation. GeoMol predominantly outperforms popular open-source, commercial, or state-of-the-art machine learning (ML) models, while achieving significant speed-ups. We expect such differentiable 3D structure generators to significantly impact molecular modeling and related applications.
| accept | This paper introduces an end-to-end learning method called GeoMol for molecular conformation generation. The algorithm is novel and provides a fast and clean way of generating molecular conformations. It achieves competitive performance on the GEOM-DRUGS and GEOM-QM9 tasks. All the reviewers are excited about the work.
| train | [
"ClTUUddz6fB",
"r0H7Z4LkXCi",
"eAJgJloxaVa",
"13CGB1XxW8",
"0e5TppymMwz",
"QGQsrfO1ASn",
"NrYLSDsnWj",
"GhyWozZcxHl"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author"
] | [
"This paper studies how to generate low-energy 3D molecular conformations based on 2D molecular graphs. \nTo this end, they propose the GEOMOL, an end-to-end trainable model that models molecular conformations in a SE(3)-invariant fashion, which first predicts the 3D local structures and then assembles them into f... | [
7,
-1,
7,
-1,
-1,
7,
-1,
-1
] | [
5,
-1,
5,
-1,
-1,
4,
-1,
-1
] | [
"nips_2021_af_hng9tuNj",
"13CGB1XxW8",
"nips_2021_af_hng9tuNj",
"0e5TppymMwz",
"eAJgJloxaVa",
"nips_2021_af_hng9tuNj",
"ClTUUddz6fB",
"QGQsrfO1ASn"
] |
nips_2021_eNB4WXnNczJ | CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression | Zhize Li, Peter Richtarik | accept | The review committee agreed that the paper is an interesting contribution and hence should be accepted for presentation at NeurIPS. However, there were some concerns regarding the practical relevance of the work. The authors mentioned in their response that they have run experiments on "logistic regression tasks". I strongly encourage the authors to include those simulations in the final version of the paper.
| train | [
"3zYJTrWGyBZ",
"4TMuQnUzNwH",
"9uP4odV37xq",
"LYblYpL3HJe",
"Q39GCeIqfRZ",
"5MwYHxppaqV",
"-EXWw0dDLPf",
"OXCIJM_5GHG",
"P5_TWQ2k4JD",
"4ZZGT_v3q8k",
"TDxZfLwqUvK",
"KPW6JgbPZnC",
"uOHV-m78k9",
"wHAyCWtxdhj",
"JTzG6LIy4oY",
"J7BmsZk7Mvs",
"BHMTLIW2md1",
"fmTBHAabs5g",
"mLg-dtQyH0... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the rebuttal. I keep my score unchanged.",
"The paper propose a compressed and accelerated gradient method called CANITA for distributed optimization. \nIt improves the the state-of-the-art result which is achieved by DIANA in smooth and convex problems.\nThe main contribution is the (near) optimal a... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"BHMTLIW2md1",
"nips_2021_eNB4WXnNczJ",
"nips_2021_eNB4WXnNczJ",
"fmTBHAabs5g",
"mLg-dtQyH0X",
"JucAPL7wQm",
"4TMuQnUzNwH",
"4TMuQnUzNwH",
"4TMuQnUzNwH",
"mLg-dtQyH0X",
"4TMuQnUzNwH",
"JucAPL7wQm",
"mLg-dtQyH0X",
"fmTBHAabs5g",
"mLg-dtQyH0X",
"JucAPL7wQm",
"JucAPL7wQm",
"nips_2021_... |
nips_2021_A_Aeb-XLozL | Drop-DTW: Aligning Common Signal Between Sequences While Dropping Outliers | In this work, we consider the problem of sequence-to-sequence alignment for signals containing outliers. Assuming the absence of outliers, the standard Dynamic Time Warping (DTW) algorithm efficiently computes the optimal alignment between two (generally) variable-length sequences. While DTW is robust to temporal shifts and dilations of the signal, it fails to align sequences in a meaningful way in the presence of outliers that can be arbitrarily interspersed in the sequences. To address this problem, we introduce Drop-DTW, a novel algorithm that aligns the common signal between the sequences while automatically dropping the outlier elements from the matching. The entire procedure is implemented as a single dynamic program that is efficient and fully differentiable. In our experiments, we show that Drop-DTW is a robust similarity measure for sequence retrieval and demonstrate its effectiveness as a training loss on diverse applications. With Drop-DTW, we address temporal step localization on instructional videos, representation learning from noisy videos, and cross-modal representation learning for audio-visual retrieval and localization. In all applications, we take a weakly- or unsupervised approach and demonstrate state-of-the-art results under these settings.
| accept | There has been extensive discussion between the reviewers and authors in the post-rebuttal period to tease apart the novel contributions of this paper and accurately position it wrt prior work.
The authors have committed to repositioning the paper, clarifying the contributions, and thoroughly discussing close prior works in their final version. | train | [
"6nE_6X4wBhT",
"nIrHxikN-1r",
"7YFn7CxE98",
"lp8Nz2sb_hK",
"uwJkj8zStX",
"ggZAX4xgGa",
"3J6BTMawI_o",
"jU89r1jTtT",
"vMQZHaSGyPY",
"1OfCIE29R8D",
"IBPfLUTTXvB",
"E3k3r5BfAcm",
"xRqIZTuYP8g",
"JzxqG7rnlI3",
"JmSyuU8HpY7",
"kYnQRaUIACo",
"KH-7ZHteLP2",
"1q4_12fUj0o",
"OmtOSU9Rlsb",... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
... | [
" We thank Reviewer VyHZ for the very productive dialog, acknowledging our method \"has fundamental merits and the new experimental results provided in the discussion are highly valuable and illustrative\", the novelty of our method, and \"would go along with the acceptance decision\". This extensive dialog made ou... | [
-1,
6,
5,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
4,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_A_Aeb-XLozL",
"nips_2021_A_Aeb-XLozL",
"nips_2021_A_Aeb-XLozL",
"nips_2021_A_Aeb-XLozL",
"1q4_12fUj0o",
"nips_2021_A_Aeb-XLozL",
"jU89r1jTtT",
"1OfCIE29R8D",
"JmSyuU8HpY7",
"IBPfLUTTXvB",
"E3k3r5BfAcm",
"xRqIZTuYP8g",
"JzxqG7rnlI3",
"JmSyuU8HpY7",
"kYnQRaUIACo",
"1q4_12fUj0o... |
nips_2021_qxKh67NNJ2I | Safe Reinforcement Learning with Natural Language Constraints | While safe reinforcement learning (RL) holds great promise for many practical applications like robotics or autonomous cars, current approaches require specifying constraints in mathematical form. Such specifications demand domain expertise, limiting the adoption of safe RL. In this paper, we propose learning to interpret natural language constraints for safe RL. To this end, we first introduce HAZARDWORLD, a new multi-task benchmark that requires an agent to optimize reward while not violating constraints specified in free-form text. We then develop an agent with a modular architecture that can interpret and adhere to such textual constraints while learning new tasks. Our model consists of (1) a constraint interpreter that encodes textual constraints into spatial and temporal representations of forbidden states, and (2) a policy network that uses these representations to produce a policy achieving minimal constraint violations during training. Across different domains in HAZARDWORLD, we show that our method achieves higher rewards (up to 11x) and fewer constraint violations (by 1.8x) compared to existing approaches. However, in terms of absolute performance, HAZARDWORLD still poses significant challenges for agents to learn efficiently, motivating the need for future work.
| accept | This paper presents a framework wherein constraints (particularly safety constraints) on a reinforcement learning agent can be specified using natural language. The paper is well written, clear, and was well-received by the reviewers, all of whom recommend acceptance. The AC finds the idea of natural language constraints particularly compelling, and strongly supports acceptance.
One additional related work is "Preventing undesirable behavior of intelligent machines", which proposes an "interface" that the users of ML/RL algorithms can use to define safety constraints. It seems like the method proposed here would plug in directly as such an "interface", making this paper a valuable contribution to Seldonian methods (perhaps beyond RL) as well. | train | [
"piG3D2Jxu5D",
"gpGHKRyZ3hi",
"-EUEtcrDoNp",
"nDT5zVsGwik",
"bViunztXYIh",
"X1rbRy9G1E",
"jnPri4AyHY6",
"wAfyZkFLepe"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking the time to respond to my questions. I've read other reviews and the author's rebuttal. I still think that this paper is well-motivated and has a clear contribution. I maintain my original score.",
" We thank the reviewer for the helpful and insightful feedback. We provide answers to indivi... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"gpGHKRyZ3hi",
"jnPri4AyHY6",
"wAfyZkFLepe",
"X1rbRy9G1E",
"nips_2021_qxKh67NNJ2I",
"nips_2021_qxKh67NNJ2I",
"nips_2021_qxKh67NNJ2I",
"nips_2021_qxKh67NNJ2I"
] |
nips_2021_9fHO2sjdZq8 | Compositional Modeling of Nonlinear Dynamical Systems with ODE-based Random Features | Effectively modeling phenomena present in highly nonlinear dynamical systems whilst also accurately quantifying uncertainty is a challenging task, which often requires problem-specific techniques. We present a novel, domain-agnostic approach to tackling this problem, using compositions of physics-informed random features, derived from ordinary differential equations. The architecture of our model leverages recent advances in approximate inference for deep Gaussian processes, such as layer-wise weight-space approximations which allow us to incorporate random Fourier features, and stochastic variational inference for approximate Bayesian inference. We provide evidence that our model is capable of capturing highly nonlinear behaviour in real-world multivariate time series data. In addition, we find that our approach achieves comparable performance to a number of other probabilistic models on benchmark regression tasks.
| accept | There was initially some spread in the reviewer scores and perception of this paper, but the reviewer consensus did lean towards acceptance in the end. The additions proposed in the rebuttal are quite extensive but written out in sufficient detail that the reviewers also believe the paper can be accepted. Please carefully pay attention to the reviewer comments and make sure to include the promised changes in the camera-ready version. In terms of additional experiments, including the results you presented during the rebuttal/discussion phase is sufficient.
| test | [
"9obiCCJwLEg",
"eBMvFevRRrs",
"oyI7cMQ_hIb",
"eurdf_wTvyG",
"_2sp9c9EQcZ",
"3ypz_D86_eT",
"Tud3KS4gGGP",
"z2KVc0h4tD",
"h1aIB4eZc9",
"9lOxfc8N5E",
"NRV_ybxRWQQ",
"y9VzDuUkq6E",
"fwW17Uoiyh1"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We thank the reviewer for their additional feedback and are glad that the points made in A1 were particularly useful for the reviewer; we will certainly include the suggested paragraph expanding upon these points in the revised version of our paper.",
"This manuscript describes Deep Latent Force Models (DLFM), ... | [
-1,
7,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"eurdf_wTvyG",
"nips_2021_9fHO2sjdZq8",
"nips_2021_9fHO2sjdZq8",
"9lOxfc8N5E",
"NRV_ybxRWQQ",
"nips_2021_9fHO2sjdZq8",
"z2KVc0h4tD",
"eBMvFevRRrs",
"3ypz_D86_eT",
"oyI7cMQ_hIb",
"y9VzDuUkq6E",
"fwW17Uoiyh1",
"nips_2021_9fHO2sjdZq8"
] |
nips_2021_LNXTIrMqyGz | Implicit Semantic Response Alignment for Partial Domain Adaptation | Partial Domain Adaptation (PDA) addresses the unsupervised domain adaptation problem where the target label space is a subset of the source label space. Most state-of-the-art PDA methods tackle the inconsistent label space by assigning weights to classes or individual samples, in an attempt to discard the source data that belongs to the irrelevant classes. However, we believe samples from those extra categories would still contain valuable information to promote positive transfer. In this paper, we propose the Implicit Semantic Response Alignment to explore the intrinsic relationships among different categories by applying a weighted schema on the feature level. Specifically, we design a class2vec module to extract the implicit semantic topics from the visual features. With an attention layer, we calculate the semantic response according to each implicit semantic topic. Then semantic responses of source and target data are aligned to retain the relevant information contained in multiple categories by weighting the features, instead of samples. Experiments on several cross-domain benchmark datasets demonstrate the effectiveness of our method over the state-of-the-art PDA methods. Moreover, we elaborate in-depth analyses to further explore implicit semantic alignment.
| accept | This paper focuses on the partial domain adaptation problem. The proposal is an implicit semantic response alignment method where the source domain has more categories than the target domain. The philosophy behind it sounds quite interesting to me, namely, utilizing the extra source categories to boost the adaptation performance. This philosophy leads to a novel algorithm design I have never seen.
The clarity and novelty are clearly above the bar of NeurIPS. While the reviewers had some concerns about the significance, especially from Reviewers SLrJ and BdsE, the authors did a particularly good job in their rebuttal. Thus, most of us have agreed to accept this paper for publication! Please include the additional experimental results in the next version. | train | [
"kNWqaOp1a_V",
"5Pi1uu--Tbx",
"mAr-04xiH5",
"WFq3JuFcC7I",
"nMGWWuPC6sc",
"niK8Q8j8i5r",
"3f4Vl-R7ve",
"WQA61P4gimE"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a new partial domain adaptation (PDA) method. Different from previous methods that ignore the irrelevant source classes, this work argues that source data coming from extra classes also contributes to the adaptation. Motivated by this, the proposed method first identifies implicit semantic, t... | [
6,
-1,
-1,
-1,
-1,
8,
8,
5
] | [
4,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"nips_2021_LNXTIrMqyGz",
"3f4Vl-R7ve",
"niK8Q8j8i5r",
"kNWqaOp1a_V",
"WQA61P4gimE",
"nips_2021_LNXTIrMqyGz",
"nips_2021_LNXTIrMqyGz",
"nips_2021_LNXTIrMqyGz"
] |
nips_2021_XP9SZpjZkq | ToAlign: Task-Oriented Alignment for Unsupervised Domain Adaptation | Unsupervised domain adaptive classification intends to improve the classification performance on unlabeled target domain. To alleviate the adverse effect of domain shift, many approaches align the source and target domains in the feature space. However, a feature is usually taken as a whole for alignment without explicitly making domain alignment proactively serve the classification task, leading to sub-optimal solution. In this paper, we propose an effective Task-oriented Alignment (ToAlign) for unsupervised domain adaptation (UDA). We study what features should be aligned across domains and propose to make the domain alignment proactively serve classification by performing feature decomposition and alignment under the guidance of the prior knowledge induced from the classification task itself. Particularly, we explicitly decompose a feature in the source domain into a task-related/discriminative feature that should be aligned, and a task-irrelevant feature that should be avoided/ignored, based on the classification meta-knowledge. Extensive experimental results on various benchmarks (e.g., Office-Home, Visda-2017, and DomainNet) under different domain adaptation settings demonstrate the effectiveness of ToAlign which helps achieve the state-of-the-art performance. The code is publicly available at https://github.com/microsoft/UDA.
| accept | This paper uses Grad-CAM to reweight the latent features in order to align the source and target domains only with respect to task-oriented features. The method is simple and effective. The idea is clearly motivated in the context of relevant literature and the experimental results clearly demonstrate the effectiveness of the method. All reviewers, including myself, find the paper an interesting and solid contribution that is worthy of publication.
"bu3w2WM1884",
"4cu-Vt0ZoC3",
"fgYOCE1s1H7",
"j77GvWpj3bo",
"tAygiRRLFor",
"U51q4shF67",
"pKcQwBrhBCr",
"UHPFHtKaVhm",
"ICiWHSngTHX",
"oQCxR3CNw7",
"Zhjys77YPQA",
"zlvzxguPIw",
"ucPlohO6vG",
"kMgUJ4ycWl8",
"zmzfx5VgQ8",
"f0O_A3dMOoC",
"7TdWWjcOjkZ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nThank you for your positive feedback! \n\nWe hope our response could help address your concern. If you have any further comment, we are really glad to help address it here, considering that the deadline of rolling discussion is coming. ",
" Thank your for the positive comment on our previous response. \n\n1. ... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"4cu-Vt0ZoC3",
"fgYOCE1s1H7",
"Zhjys77YPQA",
"nips_2021_XP9SZpjZkq",
"U51q4shF67",
"UHPFHtKaVhm",
"ICiWHSngTHX",
"oQCxR3CNw7",
"j77GvWpj3bo",
"j77GvWpj3bo",
"7TdWWjcOjkZ",
"f0O_A3dMOoC",
"zmzfx5VgQ8",
"nips_2021_XP9SZpjZkq",
"nips_2021_XP9SZpjZkq",
"nips_2021_XP9SZpjZkq",
"nips_2021_... |
nips_2021_iU88qpcgh2X | Prior-independent Dynamic Auctions for a Value-maximizing Buyer | Yuan Deng, Hanrui Zhang | accept | The reviews are generally positive, although not enthusiastic. There are some pros and cons. My reading of the main issues in the reviews and the discussion is as follows:
[+] The explore-then-exploit mechanism is quite technical. This is somewhat unusual: many explore-then-exploit algorithms or mechanisms in the literature are often "the easy part" of the paper, followed either by a more complicated approach based on "adaptive exploration" (and $\sqrt{T}$ regret), or a lower bound which rules it out.
[+] As the authors suggest, $T^{2/3}$ regret appears optimal among all explore-then-exploit mechanisms. Such a lower bound seems within reach, via a standard lower-bounding technique for bandits (*), and would be a natural complement to the main result. I encourage the authors to try to prove it.
[-] Long-term incentives are reduced to short-term incentives in a simple, brute-force way: a $O(\log T)$-sized "padding" is inserted between exploration and exploitation, to make the entire exploitation worth very little to the buyer.
[-] This brute-force approach is feasible because the seller is assumed to be infinitely patient. This approach would probably fail if the seller had a larger but comparable time-discount factor, which is arguably closer to reality: e.g., an advertiser's time scale is a week or even a month, whereas the platform's time scale is a year.
[+/-] The novelty lies in the design of the single-shot mechanism for exploration (which needs to be incentive compatible in a very strong sense: large deviation should cause a large loss in buyer's utility), and perhaps also the single-shot mechanism for exploitation (which needs to be robust to uncertainty in the value distribution $D$). However, one reviewer suggests that both single-shot mechanisms are similar to those in Balseiro et al (2021).
[-] Some of the other simplifications are not that well-justified: that the ROI constraint is per-round rather than aggregate, and that the target ROI is public. Probably not a big flaw, though.
(*) It may be possible to "reduce" directly to the respective lower bound for bandits. Else, it would be a slightly more involved problem-specific argument. A similar argument (for explore-then-exploit mechanisms in a technically different auction design setting) can be found in (Moshe Babaioff, Yogeshwer Sharma, Aleksandrs Slivkins: Characterizing Truthful Multi-armed Bandit Mechanisms, EC 2009, SICOMP 2014) and (Nikhil R. Devanur, Sham M. Kakade: The price of truthfulness for pay-per-click auctions. EC 2009).
| train | [
"EDRDci4jfB",
"43FzCB4xzp",
"BSC4vLISgKe",
"IzIM3Hx3dXh",
"kakqEK9cabC",
"dSv_fC6WTR5",
"5m75bPAwtW",
"G6b4MFhYe7M",
"xjsbDF4xZ-k"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response and clarifications. I have read other reviews and responses, and my opinions about the paper have not changed.",
"This paper studies the problem of a seller who interacts repeatedly with a buyer. In each round an item is up for sale, and the buyer draws a value from an unknown (to the ... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"dSv_fC6WTR5",
"nips_2021_iU88qpcgh2X",
"xjsbDF4xZ-k",
"G6b4MFhYe7M",
"43FzCB4xzp",
"5m75bPAwtW",
"nips_2021_iU88qpcgh2X",
"nips_2021_iU88qpcgh2X",
"nips_2021_iU88qpcgh2X"
] |
nips_2021_vIDBSGl3vzl | Safe Reinforcement Learning by Imagining the Near Future | Safe reinforcement learning is a promising path toward applying reinforcement learning algorithms to real-world problems, where suboptimal behaviors may lead to actual negative consequences. In this work, we focus on the setting where unsafe states can be avoided by planning ahead a short time into the future. In this setting, a model-based agent with a sufficiently accurate model can avoid unsafe states. We devise a model-based algorithm that heavily penalizes unsafe trajectories, and derive guarantees that our algorithm can avoid unsafe states under certain assumptions. Experiments demonstrate that our algorithm can achieve competitive rewards with fewer safety violations in several continuous control tasks.
| accept | This paper presents a model-based policy optimization method that reduces the frequency that unsafe states are visited. All three reviewers recommend acceptance, one strongly. The primary concerns of the reviewers centered around whether the proposed method would extend to interesting problem settings. After the discussion, the reviewers were all convinced that the covered settings are of interest, and the extension to even more settings (stochastic transitions) may be feasible as an avenue for future work. The AC recommends that the authors work to improve the clarity of these points in the paper to ensure that future readers do not run into the same points of confusion that tripped up the reviewers. | test | [
"Bv20fBjSqMp",
"vzdsuqqZ_2a",
"AX6UlsfOlWK",
"4f8HaFls8Ve",
"0eNaPcOl3Yl",
"qMiAK4WBItM",
"NA8QXJ30b0D",
"zIeG1OEqHw",
"dlBmwtK_H40",
"3vu1id0U75u",
"RQW13iMWF9-",
"MCmmZNtbSAr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper deals with one manifestation of safe exploration. Starting from a given safety function that classifies states as safe or unsafe, a modification of MBPO is used to change the Q-function in such a way that unsafe states should not be visited. The method is tested on two benchmarks. Strengths:\\\nThe topi... | [
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_vIDBSGl3vzl",
"qMiAK4WBItM",
"zIeG1OEqHw",
"NA8QXJ30b0D",
"nips_2021_vIDBSGl3vzl",
"dlBmwtK_H40",
"RQW13iMWF9-",
"3vu1id0U75u",
"0eNaPcOl3Yl",
"Bv20fBjSqMp",
"MCmmZNtbSAr",
"nips_2021_vIDBSGl3vzl"
] |
nips_2021_5t5FPwzE6mq | Contrastive Active Inference | Active inference is a unifying theory for perception and action resting upon the idea that the brain maintains an internal model of the world by minimizing free energy. From a behavioral perspective, active inference agents can be seen as self-evidencing beings that act to fulfill their optimistic predictions, namely preferred outcomes or goals. In contrast, reinforcement learning requires human-designed rewards to accomplish any desired outcome. Although active inference could provide a more natural self-supervised objective for control, its applicability has been limited because of the shortcomings in scaling the approach to complex environments. In this work, we propose a contrastive objective for active inference that strongly reduces the computational burden in learning the agent's generative model and planning future actions. Our method performs notably better than likelihood-based active inference in image-based tasks, while also being computationally cheaper and easier to train. We compare to reinforcement learning agents that have access to human-designed reward functions, showing that our approach closely matches their performance. Finally, we also show that contrastive methods perform significantly better in the case of distractors in the environment and that our method is able to generalize goals to variations in the background.
| accept | The paper proposes a method that bypasses the need for reconstruction in MBRL using a contrastive objective. The proposed method outperforms Dreamer on the domains studied in the paper. All reviewers unanimously vote to accept the paper, which I agree with. I would encourage the authors to scale up their method and present results on at least a few Atari environments in the camera-ready version.
| train | [
"pv_VZzLwHyH",
"KHPsJCqYqGg",
"-FcHVXl0L_s",
"y7pkMMn4Ws_",
"PN9cRh_tQtS",
"2wsCGtTK5ai",
"eTHXbh5d7-s",
"PpTBS2kC2wg",
"xBxXue_vZOa"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I'd like to thank the authors for addressing my comments. I've increased my score to reflect my satisfaction with their response.",
"This work proposes an active inference implementation that scales in high dimensional observations and continuous actions spaces. To achieve this the authors propose to replace th... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"PN9cRh_tQtS",
"nips_2021_5t5FPwzE6mq",
"eTHXbh5d7-s",
"xBxXue_vZOa",
"KHPsJCqYqGg",
"PpTBS2kC2wg",
"nips_2021_5t5FPwzE6mq",
"nips_2021_5t5FPwzE6mq",
"nips_2021_5t5FPwzE6mq"
] |
nips_2021_PxMfDdPnTfV | Overparameterization Improves Robustness to Covariate Shift in High Dimensions | A significant obstacle in the development of robust machine learning models is \emph{covariate shift}, a form of distribution shift that occurs when the input distributions of the training and test sets differ while the conditional label distributions remain the same. Despite the prevalence of covariate shift in real-world applications, a theoretical understanding in the context of modern machine learning has remained lacking. In this work, we examine the exact high-dimensional asymptotics of random feature regression under covariate shift and present a precise characterization of the limiting test error, bias, and variance in this setting. Our results motivate a natural partial order over covariate shifts that provides a sufficient condition for determining when the shift will harm (or even help) test performance. We find that overparameterized models exhibit enhanced robustness to covariate shift, providing one of the first theoretical explanations for this ubiquitous empirical phenomenon. Additionally, our analysis reveals an exact linear relationship between the in-distribution and out-of-distribution generalization performance, offering an explanation for this surprising recent observation.
| accept | The paper studies the robustness of overparametrized models to covariate shift between train and test data. The authors study this in the sandbox of random feature regression, by computing the high-dimensional asymptotic limit of the test error in the presence of a shift in the covariance matrix. In this model, they provide analytic solutions, and they corroborate the theory with empirical evidence.
The reviewers concluded the paper makes a valuable contribution and should be accepted, but noted that the paper could use improvements to the presentation. In particular, Theorem 2 is rather hard to parse/interpret, and some of the proofs are essentially long calculations that are difficult to follow. This was partially addressed in the discussions, and some of these techniques are standard in the literature. We strongly suggest that the authors address these concerns for the camera-ready version. | train | [
"tz1ZHhukyQy",
"MlUPQMQIOw9",
"TVOfWW_7Bdh",
"PxZTmS-gCX",
"WP35sKNxMBC",
"8fmBk1-_vDB",
"ROIl5zHcIsF",
"ofg6clDBx2b",
"SK5yjPyaqI",
"_tm_nWDoU-b",
"hXO2OYjV75I",
"1Qo7dAh5n4N",
"J0R5Vy00Ai"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the test error, as well as the bias and variance, of random feature regression in the asymptotic regime, in a setting with covariate shift — i.e., when the covariance matrix of the features is different at testing time than at training time. The paper defines a partial ordering over covariance sh... | [
6,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_PxMfDdPnTfV",
"ofg6clDBx2b",
"nips_2021_PxMfDdPnTfV",
"SK5yjPyaqI",
"ROIl5zHcIsF",
"nips_2021_PxMfDdPnTfV",
"1Qo7dAh5n4N",
"TVOfWW_7Bdh",
"J0R5Vy00Ai",
"tz1ZHhukyQy",
"8fmBk1-_vDB",
"nips_2021_PxMfDdPnTfV",
"nips_2021_PxMfDdPnTfV"
] |
nips_2021_ii5mGEbRo93 | Logarithmic Regret in Feature-based Dynamic Pricing | Jianyu Xu, Yu-Xiang Wang | accept | All the reviewers agree on the importance of the contribution. It is also my view that obtaining logarithmic regret is an important open problem in contextual pricing without any assumption of small noise. I am aware of many papers that tried to establish similar results but failed.
The only concern shared by the reviewers is that the regret bound is inversely proportional to noise, but this was adequately addressed in the rebuttal by saying that larger noise leads to more exploration. The authors are encouraged to make this more explicit in the revision.
The review team is confident that this should be one of the top papers in NeurIPS and enthusiastically recommends acceptance. | train | [
"hgDDsGDiN2W",
"TLazO4Qhza6",
"cEagUPq9-iK",
"q7IsHGHOTx6",
"fGxF3gcdN6J",
"_gHlzsySW0s",
"chqhcBBEhfp",
"9MNekEqK0wK",
"_uor85tfPz",
"-QrysLDxjdr",
"Ngrrx7XqWUH",
"GF-XUSuN1G",
"nFoz0a1Dc8N"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you for your carefully reviewing our rebuttal and kindly updating the score!",
"This paper considers the feature-based dynamic pricing setting. This setting is a sequential setting in which in each round a learner is to predict the price of a product based on a feature vector. Then, if the customer's p... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
9
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"cEagUPq9-iK",
"nips_2021_ii5mGEbRo93",
"q7IsHGHOTx6",
"fGxF3gcdN6J",
"_gHlzsySW0s",
"TLazO4Qhza6",
"nFoz0a1Dc8N",
"GF-XUSuN1G",
"Ngrrx7XqWUH",
"nips_2021_ii5mGEbRo93",
"nips_2021_ii5mGEbRo93",
"nips_2021_ii5mGEbRo93",
"nips_2021_ii5mGEbRo93"
] |