| paper_id (string, 19–21 chars) | paper_title (string, 8–170 chars) | paper_abstract (string, 8–5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29–10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2021_Owggnutk6lE | Alias-Free Generative Adversarial Networks | We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally applicable, small architectural changes that guarantee that unwanted information cannot leak into the hierarchical synthesis process. The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales. Our results pave the way for generative models better suited for video and animation.
| accept | The paper identifies the "texture sticking" problem in image synthesis GANs, argues that this is caused by aliasing in the convolutions, proposes architectural modifications to fix the aliasing, and demonstrates strong improvements in equivariance / smooth transitions. All reviewers agreed that the results were impressive. Separately and equally importantly, all reviewers agreed that the problem was clearly motivated and multiple reviewers (gQig, 1gcw) also agreed that the analysis was insightful. Papers which succeed on both of these fronts are rare; I recommend acceptance. | train | [
"NVC8VX0TVz",
"VCBWO0NeOLB",
"78NvSfGO8W",
"ZjJdQnG0heH",
"oiK7s69Utb6",
"htKAHwukVwA",
"m2u5mcZ7exv",
"y3Qp4fTZlir"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Starting from the observation that current generative models in fact do not synthesize images in a natural hierarchical manner (which they call “texture sticking”), this paper defines the problem systematically, analyze the root of it (unintentional information leaking and aliasing), and proposes a set of simple a... | [
10,
-1,
-1,
-1,
-1,
6,
9,
8
] | [
5,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"nips_2021_Owggnutk6lE",
"m2u5mcZ7exv",
"y3Qp4fTZlir",
"NVC8VX0TVz",
"htKAHwukVwA",
"nips_2021_Owggnutk6lE",
"nips_2021_Owggnutk6lE",
"nips_2021_Owggnutk6lE"
] |
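The record above attributes "texture sticking" to aliasing introduced by careless signal processing, notably pointwise nonlinearities. Below is a minimal, hypothetical sketch of the generic remedy (apply the nonlinearity in an oversampled domain, then low-pass before resampling); the function name and the simple bilinear/antialiased filters are assumptions for illustration only, whereas the paper designs careful windowed-sinc filters.

```python
import torch.nn.functional as F

def antialiased_leaky_relu(x, factor=2):
    """Apply a pointwise nonlinearity in an oversampled domain.

    x: feature map of shape (batch, channels, H, W). Pointwise nonlinearities
    create high frequencies that alias at the original sampling rate; the
    remedy sketched here is upsample -> nonlinearity -> antialiased downsample.
    """
    h, w = x.shape[-2:]
    x = F.interpolate(x, scale_factor=factor, mode="bilinear", align_corners=False)
    x = F.leaky_relu(x, negative_slope=0.2)
    # `antialias=True` applies a low-pass filter before resampling
    # (available for bilinear/bicubic modes in recent PyTorch versions).
    return F.interpolate(x, size=(h, w), mode="bilinear", antialias=True,
                         align_corners=False)
```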
nips_2021_ZqEUs3sTRU0 | Noise2Score: Tweedie’s Approach to Self-Supervised Image Denoising without Clean Images | Recently, there has been extensive research interest in training deep networks to denoise images without clean references. However, representative approaches such as Noise2Noise, Noise2Void, Stein's unbiased risk estimator (SURE), etc. seem to differ from one another, and it is difficult to find a coherent mathematical structure. To address this, here we present a novel approach, called Noise2Score, which reveals a missing link that unites these seemingly different approaches. Specifically, we show that image denoising problems without clean images can be addressed by finding the mode of the posterior distribution and that Tweedie's formula offers an explicit solution through the score function (i.e., the gradient of the log-likelihood). Our method then uses the recent finding that the score function can be stably estimated from the noisy images using an amortized residual denoising autoencoder, a method closely related to Noise2Noise or Noise2Void. Our Noise2Score approach is so universal that the same network training can be used to remove noise from images corrupted by any exponential family distribution and noise parameters. Using extensive experiments with Gaussian, Poisson, and Gamma noise, we show that Noise2Score significantly outperforms state-of-the-art self-supervised denoising methods on benchmark datasets such as (C)BSD68, Set12, and Kodak.
| accept | This paper proposes a new method, Noise2Score, for denoising images without clean references at training time. The proposed approach has two steps. First, a denoising autoencoder (DAE) is used to learn the score function in a noise-agnostic fashion. Second, the denoised image is computed through the estimated score function by exploiting Tweedie’s formula (which can be used with any exponential family noise). Empirical evaluation shows the benefits of the proposed approach (compared to a large number of relevant baselines).
All reviewers found the proposed method interesting and novel. Furthermore, its connection with the recent literature in the subject (e.g. Noise2X) is a nice feature that will inform future research on the topic.
In the original review, several clarifications were requested. The authors provided a detailed response which clarified many of the raised concerns. After the rebuttal, all reviewers recommend accepting the paper. Reviewers kPzN and 1non updated their scores (to 6 and 7, respectively).
In sum, the work provides an interesting contribution and the current empirical evaluation is sufficient for accepting the paper. The AC encourages the authors to incorporate the detailed clarifications provided to the reviewers.
| train | [
"vcSAOinSolp",
"uAIP0WqOpZ",
"KvstebMbvVR",
"7ifCGir5_Rr",
"Vscru5nAZy_",
"bcQ31HTeXX4",
"qBvgSvmC47D",
"NGDXf3uJ_tI",
"w3OctjPbrTj",
"vtGDGatgoRD",
"8zRuZ_g_uX",
"ECuwfuRQHm6"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper proposes a new approach based on Tweedie's formula to perform self-supervised image denoising. The proposed approach has two steps: (1) learn the score function or gradient of density of the noisy images, and (2) use Tweedie's formula to compute a denoised image using the learned score function. The prop... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1
] | [
5,
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1
] | [
"nips_2021_ZqEUs3sTRU0",
"vcSAOinSolp",
"Vscru5nAZy_",
"nips_2021_ZqEUs3sTRU0",
"vtGDGatgoRD",
"qBvgSvmC47D",
"ECuwfuRQHm6",
"nips_2021_ZqEUs3sTRU0",
"nips_2021_ZqEUs3sTRU0",
"7ifCGir5_Rr",
"vcSAOinSolp",
"NGDXf3uJ_tI"
] |
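The Gaussian case of Tweedie's formula, central to the Noise2Score record above, is compact enough to state in code. This is a hypothetical sketch: `score_fn` stands in for the paper's amortized residual DAE, and only the standard Gaussian-specific form E[x | y] = y + sigma^2 * grad_y log p(y) is shown; the Poisson and Gamma cases have analogous closed forms not reproduced here.

```python
import torch

def tweedie_gaussian_denoise(y, score_fn, sigma):
    """Posterior-mean denoising via Tweedie's formula for Gaussian noise.

    For y = x + n with n ~ N(0, sigma^2 I), Tweedie's formula gives
    E[x | y] = y + sigma^2 * score(y), where score(y) = grad_y log p(y)
    is the score of the *noisy* image distribution, estimated by a network.
    """
    with torch.no_grad():
        return y + (sigma ** 2) * score_fn(y)
```

Note the appeal highlighted by the reviewers: the score network is trained once, noise-agnostically, and the noise model enters only through this closed-form post-processing step.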
nips_2021_pbAmqUUHsQ | Continuous Mean-Covariance Bandits | Existing risk-aware multi-armed bandit models typically focus on risk measures of individual options such as variance. As a result, they cannot be directly applied to important real-world online decision making problems with correlated options. In this paper, we propose a novel Continuous Mean-Covariance Bandit (CMCB) model to explicitly take into account option correlation. Specifically, in CMCB, there is a learner who sequentially chooses weight vectors on given options and observes random feedback according to the decisions. The agent's objective is to achieve the best trade-off between reward and risk, measured with option covariance. To capture different reward observation scenarios in practice, we consider three feedback settings, i.e., full-information, semi-bandit and full-bandit feedback. We propose novel algorithms with optimal regrets (within logarithmic factors), and provide matching lower bounds to validate their optimality. The experimental results also demonstrate the superiority of our algorithms. To the best of our knowledge, this is the first work that considers option correlation in risk-aware bandits and explicitly quantifies how arbitrary covariance structures impact the learning performance. The novel analytical techniques we developed, for exploiting the estimated covariance to build concentration bounds and for bounding the risk of selected actions based on sampling strategy properties, can likely find applications in other bandit analyses and be of independent interest.
| accept | The paper considers the problem of regret minimization in bandits with continuous arms. The key idea is to use the correlation between arms. It shows matching upper and lower regret bounds for the full-information and semi-bandit settings, and an upper bound for the full-bandit setting.
Three expert reviewers considered the strengths and weaknesses of the paper. In the author rebuttal, the authors additionally provided evidence on a non-synthetic dataset, in response to two reviewers' comments. The authors engaged with reviewer questions, and the reviewers appreciated this engagement, with two scores increasing. A brief discussion post rebuttal confirmed that all reviewers thought that the paper is novel and provides interesting results. Therefore it is a great pleasure for me to recommend that the paper be accepted for publication at NeurIPS. | test | [
"AHFjSZTG1kw",
"PvoJAM7rHXk",
"2nzthPqQRS",
"iyFPbWPUdn8",
"u3Z0_7Qn_dI",
"1jJvB7JOnK2",
"sxqGypwkBy",
"r-IY8nBzMo8",
"SRSK3T2bK86",
"_EzMrd4P3QK"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
" Thank you very much for your time and effort in reviewing our paper!",
" Thank you very much for your increased score and effort in reviewing our paper!",
"The paper presents a novel set of methodologies to deal with the minimization of a regret where the goal is to maximize the revenue and, at the same time,... | [
-1,
-1,
7,
7,
-1,
6,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
3,
-1,
3,
-1,
-1,
-1,
-1
] | [
"2nzthPqQRS",
"iyFPbWPUdn8",
"nips_2021_pbAmqUUHsQ",
"nips_2021_pbAmqUUHsQ",
"sxqGypwkBy",
"nips_2021_pbAmqUUHsQ",
"SRSK3T2bK86",
"2nzthPqQRS",
"1jJvB7JOnK2",
"iyFPbWPUdn8"
] |
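For a fixed weight vector, the reward-risk trade-off described in the CMCB record above reduces to a classical mean-variance objective. The sketch below is illustrative only: the name `rho` for the risk-aversion coefficient and the exact quadratic form are assumptions about the paper's parameterization, made to show what "risk measured with option covariance" means concretely.

```python
import numpy as np

def mean_covariance_objective(w, mu, Sigma, rho):
    """Reward-risk trade-off over correlated options.

    w:     weight vector over the options, chosen by the learner.
    mu:    vector of expected rewards of the options.
    Sigma: covariance matrix of the options' rewards (the risk measure;
           off-diagonal entries capture the option correlation).
    rho:   risk-aversion coefficient trading reward against risk.
    """
    return float(w @ mu - rho * (w @ Sigma @ w))
```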
nips_2021_lk1ORT35tbi | Dynamic Visual Reasoning by Learning Differentiable Physics Models from Video and Language | In this work, we propose a unified framework, called Visual Reasoning with Differentiable Physics (VRDP), that can jointly learn visual concepts and infer physics models of objects and their interactions from videos and language. This is achieved by seamlessly integrating three components: a visual perception module, a concept learner, and a differentiable physics engine. The visual perception module parses each video frame into object-centric trajectories and represents them as latent scene representations. The concept learner grounds visual concepts (e.g., color, shape, and material) from these object-centric representations based on the language, thus providing prior knowledge for the physics engine. The differentiable physics model, implemented as an impulse-based differentiable rigid-body simulator, performs differentiable physical simulation based on the grounded concepts to infer physical properties, such as mass, restitution, and velocity, by fitting the simulated trajectories into the video observations. Consequently, these learned concepts and physical models can explain what we have seen and imagine what is about to happen in future and counterfactual scenarios. Integrating differentiable physics into the dynamic reasoning framework offers several appealing benefits. More accurate dynamics prediction in learned physics models enables state-of-the-art performance on both synthetic and real-world benchmarks while still maintaining high transparency and interpretability; most notably, VRDP improves the accuracy of predictive and counterfactual questions by 4.5% and 11.5% compared to its best counterpart. VRDP is also highly data-efficient: physical parameters can be optimized from very few videos, and even a single video can be sufficient. Finally, with all physical parameters inferred, VRDP can quickly learn new concepts from a few examples.
| accept | The paper proposes a new task and method for dynamic visual question answering based on neuro-symbolic reasoning and the usage of a differentiable physics engine. It has received reviews from four experts, who appreciated the new formulation with the differentiable engine (and its effect on counterfactual tasks), the modular and interpretable design, and a well written and presented paper.
Minor issues were raised about the model-based nature of the method (the advantage of having a physics engine is also a shortcoming, since the method requires a model of the physical processes) and about evaluation and comparisons.
The authors' responses were convincing; in particular, some requested experiments (e.g., regarding pre-training on objects) were appreciated by the reviewers, and a consensus for acceptance emerged.
The AC recommends acceptance. | train | [
"NjS4NKOKC49",
"HT5ERv6wg_i",
"lo9i7x-CTRv",
"6TndY0DEam",
"m8Eo9H3mGFh",
"A_BElTlmfc",
"bnii6K3eXz",
"BkInx3iF1SJ",
"Y8zC2z_uBy",
"Po416Dux5jW",
"35lCArnWE2",
"hKVmsb_TFg",
"tjl0jvgbEhS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely appreciate all reviewers’ and ACs’ time and efforts in reviewing our paper. We truly thank you all for the insightful and constructive suggestions, which helped further improve our paper. We genuinely appreciate the positive 6-7-6-7 evaluation from reviewers BwFB, etLB, Dj3E, and Qwxn.\n\nHere is a s... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"nips_2021_lk1ORT35tbi",
"Y8zC2z_uBy",
"nips_2021_lk1ORT35tbi",
"A_BElTlmfc",
"lo9i7x-CTRv",
"lo9i7x-CTRv",
"hKVmsb_TFg",
"nips_2021_lk1ORT35tbi",
"tjl0jvgbEhS",
"35lCArnWE2",
"nips_2021_lk1ORT35tbi",
"nips_2021_lk1ORT35tbi",
"nips_2021_lk1ORT35tbi"
] |
nips_2021_yAIYc7YjGbd | Solving Soft Clustering Ensemble via $k$-Sparse Discrete Wasserstein Barycenter | Clustering ensemble is one of the most important problems in ensemble learning. Though it has been extensively studied in the past decades, the existing methods often suffer from issues like high computational complexity and the difficulty of understanding the consensus. In this paper, we study the more general soft clustering ensemble problem where each individual solution is a soft clustering. We connect it to the well-known discrete Wasserstein barycenter problem in geometry. Based on some novel geometric insights in high dimensions, we propose sampling-based algorithms with provable quality guarantees. We also provide a systematic analysis of the consensus of our model. Finally, we conduct experiments to evaluate our proposed algorithms.
| accept | The majority of reviewers are in favor of accepting this paper. The reviewers in general liked the connection made between the soft clustering ensemble problem and the discrete Wasserstein barycenter. The hardness result and convergence analysis rounded out the paper. However, there was concern about the lack of direct practical implications of the result due to the exponential dependence on the number of centers.
| val | [
"luAem9JD5oL",
"bfMv8SOlxYh",
"D1xPtDkXQmg",
"bmDPyTD5leW",
"IhvtVsr9Tcg",
"0WFAjheEdIg",
"_5Np9rtuPAN",
"iXGYNggwWTY",
"K_jJP4109Wd",
"eAmPyHofze6"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the authors for their answers.",
" Thank you for your response. \n\nI think the paper is in general very clear. My main concerns about Theorem 2 and Algorithm 1 still remain. I appreciate your explanation but I still think the contribution of the paper is limited because Theorem 2 is pretty straightfor... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"bmDPyTD5leW",
"D1xPtDkXQmg",
"eAmPyHofze6",
"iXGYNggwWTY",
"K_jJP4109Wd",
"_5Np9rtuPAN",
"nips_2021_yAIYc7YjGbd",
"nips_2021_yAIYc7YjGbd",
"nips_2021_yAIYc7YjGbd",
"nips_2021_yAIYc7YjGbd"
] |
nips_2021_15HPeY8MGQ | Bayesian Adaptation for Covariate Shift | When faced with distribution shift at test time, deep neural networks often make inaccurate predictions with unreliable uncertainty estimates. While improving the robustness of neural networks is one promising approach to mitigate this issue, an appealing alternative to robustifying networks against all possible test-time shifts is to instead directly adapt them to unlabeled inputs from the particular distribution shift we encounter at test time. However, this poses a challenging question: in the standard Bayesian model for supervised learning, unlabeled inputs are conditionally independent of model parameters when the labels are unobserved, so what can unlabeled data tell us about the model parameters at test-time? In this paper, we derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters, and show how approximate inference in this model can be instantiated with a simple regularized entropy minimization procedure at test-time. We evaluate our method on a variety of distribution shifts for image classification, including image corruptions, natural distribution shifts, and domain adaptation settings, and show that our method improves both accuracy and uncertainty estimation.
| accept | This paper proposes a Bayesian approach to covariate shift using unlabelled test data. There was extensive discussion of the paper, and a number of concerns, many of which moved towards a resolution throughout the response period. Key issues included:
1. Use the standard ImageNet-C test set, instead of the custom one in the submission.
2. Hyperparameter tuning
3. Show results for ensembles of baselines
4. Missing comparisons of DA baselines
5. Concerns about small-scale datasets
6. mCE computation inconsistencies
While it is concurrent work, and does not factor into the evaluation of this submission, it could also be useful to the readership to discuss the recent paper (https://arxiv.org/abs/2106.11905), which shows there are in fact risks of Bayesian model averaging under covariate shift, and explains why deep ensembles don't suffer from these risks.
Overall, this is nice work, and I am supportive of this paper. Please do carefully incorporate all reviewer comments in finalizing the camera-ready, especially point 1, where it is crucial to show an evaluation using the standard test-set for ImageNet-C. | train | [
"9fV9muKUMIi",
"WgJAY8vB73T",
"O519U7nTtOV",
"ehtFl5Zeq_",
"q9G35OlGcR",
"gXiLS58LqlT",
"en-_ePSD6qq",
"9mSMVj5cDXu",
"2fDE-isHsnZ",
"d5b_1yYFvzi",
"MFY7n7_N3b5",
"RJ5o1GIHu9f",
"e6bg4t3N4YM",
"_VGHfBMQSqa",
"csji86hrh8F",
"Ey1XmjA_XDc",
"ODw0XzY-Xi",
"52EdQRCdvR9",
"IMC84GQwaf",... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"au... | [
" Dear authors,\n\nthanks a lot, this sounds great. Please see my post-rebuttal comments which I added to my original review. I decided to increase my score to (7).",
"The paper extends test-time entropy minimization through a novel Bayesian criterion for adaptation. It allows to improve performance during adapt... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"O519U7nTtOV",
"nips_2021_15HPeY8MGQ",
"ehtFl5Zeq_",
"vpTFU2WLrRd",
"en-_ePSD6qq",
"9mSMVj5cDXu",
"2fDE-isHsnZ",
"MFY7n7_N3b5",
"e6bg4t3N4YM",
"NkT3rahEjKj",
"RJ5o1GIHu9f",
"_VGHfBMQSqa",
"csji86hrh8F",
"XlnHo3wPf8O",
"XlnHo3wPf8O",
"PrUlVSkOd8E",
"CIqGUyNOWBm",
"S2vXUmbWpxI",
"C... |
nips_2021_xRJ_Xqmb6d | Perturb-and-max-product: Sampling and learning in discrete energy-based models | Perturb-and-MAP offers an elegant approach to approximately sample from an energy-based model (EBM) by computing the maximum-a-posteriori (MAP) configuration of a perturbed version of the model. Sampling in turn enables learning. However, this line of research has been hindered by the general intractability of the MAP computation. Very few works venture outside tractable models, and when they do, they use linear programming approaches, which, as we will show, have several limitations. In this work we present perturb-and-max-product (PMP), a parallel and scalable mechanism for sampling and learning in discrete EBMs. Models can be arbitrary as long as they are built using tractable factors. We show that (a) for Ising models, PMP is orders of magnitude faster than Gibbs and Gibbs-with-Gradients (GWG) at learning and generating samples of similar or better quality; (b) PMP is able to learn and sample from RBMs; (c) in a large, entangled graphical model in which Gibbs and GWG fail to mix, PMP succeeds.
| accept | The authors present an interesting framework in which they combine the "perturb-and-MAP" strategy for generating samples from a Gibbs distribution with the perspective that incorrect inference (such as substituting the simple and efficient max-product method) can be compensated by learning the model parameters using a matching approximation, giving a mechanism for (approximately) generating new samples from the training data distribution.
The paper is well-written and clear. Novelty is somewhat limited, since the work is a straightforward combination of several ideas in the literature. Experimental validation is reasonable / sufficient, although several reviewers requested some extensions, such as including off-the-shelf or limited-budget LP solvers, and expanding on the authors' observations about the performance of such methods. The authors' response clarified and addressed some of the reviewers' concerns. Reviewers felt the impact of the work was borderline for NeurIPS, and suggested that some of the authors' claims be toned down or moderated by a clear discussion of the method's cons, but were generally in favor of acceptance if possible.
(I also should add that there are *many* methods for generating samples from discrete distributions beyond Gibbs-like methods, including tempering methods and "discontinuous" Hamiltonian Monte Carlo for Gibbs distributions, and on the "learning to sample" side, discrete (normalizing) flow models, to name just a few.)
| train | [
"gNvIZOCR9Eo",
"8AfBXyiMzXn",
"jN4m5nZ9W7c",
"xudvT7lFbZ",
"uqB1L3Kt-Ac",
"lg8erecS5wd",
"iHuIoZASTwv",
"3GBbwg0qQTu",
"vn1f0M-w3Pr",
"JxfXjEV4rqI"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your clarifications and promised amendments to the text.\n\nMy opinion of the paper and score remains the same.",
"The manuscript is clear, well written, and presents some interesting results in the use \nof the Belief Propagation (BP) algorithm to train discrete energy-based models. \n\nThe autho... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
3,
3
] | [
"uqB1L3Kt-Ac",
"nips_2021_xRJ_Xqmb6d",
"lg8erecS5wd",
"nips_2021_xRJ_Xqmb6d",
"JxfXjEV4rqI",
"8AfBXyiMzXn",
"xudvT7lFbZ",
"vn1f0M-w3Pr",
"nips_2021_xRJ_Xqmb6d",
"nips_2021_xRJ_Xqmb6d"
] |
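Perturb-and-MAP, as summarized in the record above, draws samples by adding Gumbel noise to the model's potentials and solving the perturbed MAP problem. The fully factorized case below is exact (the classic Gumbel-max trick); with pairwise factors, PMP substitutes parallel max-product for the intractable MAP solve, which is the approximation the paper pairs with matched learning. The function name is illustrative.

```python
import numpy as np

def perturb_and_map_unary(theta, rng):
    """Exact Gumbel-max sampling for a factorized discrete model.

    theta: (n_vars, n_states) array of unary log-potentials. Adding i.i.d.
    Gumbel noise to every entry and taking a per-variable argmax yields an
    exact sample from p(x) proportional to exp(sum_i theta[i, x_i]).
    """
    gumbel = rng.gumbel(size=theta.shape)
    return np.argmax(theta + gumbel, axis=1)

# Usage: sample = perturb_and_map_unary(theta, np.random.default_rng(0))
```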
nips_2021_G_WdNNLj4wU | Towards Unifying Behavioral and Response Diversity for Open-ended Learning in Zero-sum Games | Measuring and promoting policy diversity is critical for solving games with strong non-transitive dynamics where strategic cycles exist, and there is no consistent winner (e.g., Rock-Paper-Scissors). With that in mind, maintaining a pool of diverse policies via open-ended learning is an attractive solution, which can generate auto-curricula to avoid being exploited. However, in conventional open-ended learning algorithms, there are no widely accepted definitions for diversity, making it hard to construct and evaluate the diverse policies. In this work, we summarize previous concepts of diversity and work towards offering a unified measure of diversity in multi-agent open-ended learning to include all elements in Markov games, based on both Behavioral Diversity (BD) and Response Diversity (RD). At the trajectory distribution level, we re-define BD in the state-action space as the discrepancies of occupancy measures. For the reward dynamics, we propose RD to characterize diversity through the responses of policies when encountering different opponents. We also show that many current diversity measures fall in one of the categories of BD or RD but not both. With this unified diversity measure, we design the corresponding diversity-promoting objective and population effectivity when seeking the best responses in open-ended learning. We validate our methods in both relatively simple games like matrix game, non-transitive mixture model, and the complex *Google Research Football* environment. The population found by our methods reveals the lowest exploitability, highest population effectivity in matrix game and non-transitive mixture model, as well as the largest goal difference when interacting with opponents of various levels in *Google Research Football*.
| accept | This paper explores different methods for measuring population diversity and how they can be applied in open-ended learning. All reviewers agreed that it provides a valuable contribution. One important point that emerged from the discussion with reviewer QZCS was that the paper's title is a bit of an overclaim. The authors graciously agreed to qualify it by changing to "Towards unifying...". | train | [
"YDSMmiSvqG7",
"A63f4Faby5e",
"doTkzcCvYuT",
"_jbtcDoQCG0",
"nvv6ptp9UGa",
"QPcLVy9AFhN",
"WJYwXrWLEHZ",
"-s-wtf036Wy",
"GGvMC6UaAI8",
"fgK0fI-R-0k",
"8nDnnVeXS5q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the detailed response. I am very happy with the feedback provided by you and the conversations with the rest of the reviewers, so I will wholeheartedly recommend acceptance.\n\nLooking forward to seeing the improved version at the conference!",
" Thanks for the response addressing some o... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
8,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"WJYwXrWLEHZ",
"-s-wtf036Wy",
"nips_2021_G_WdNNLj4wU",
"QPcLVy9AFhN",
"8nDnnVeXS5q",
"doTkzcCvYuT",
"fgK0fI-R-0k",
"GGvMC6UaAI8",
"nips_2021_G_WdNNLj4wU",
"nips_2021_G_WdNNLj4wU",
"nips_2021_G_WdNNLj4wU"
] |
nips_2021_52weXyh2yh | Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples | We study the problem of training certifiably robust models against adversarial examples. Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models. However, many studies have shown that Interval Bound Propagation (IBP) training uses much looser bounds but outperforms other models that use tighter bounds. We identify another key factor that influences the performance of certifiable training: *smoothness of the loss landscape*. We find significant differences in the loss landscapes across many linear relaxation-based methods, and that the current state-of-the-art method often has a landscape with favorable optimization properties. Moreover, to test the claim, we design a new certifiable training method with the desired properties. With the tightness and the smoothness, the proposed method achieves a decent performance under a wide range of perturbations, while others with only one of the two factors can perform well only for a specific range of perturbations. Our code is available at https://github.com/sungyoon-lee/LossLandscapeMatters.
| accept | The paper is tackling an important question in certified robust training: why tighter bounds may not lead to better models. The authors point out that linear relaxation-based methods may lead to less smooth loss landscapes when upper and lower bounds are not properly chosen, and propose a method to improve linear relaxation-based methods by improving smoothness.
Although the paper is interesting and could be valuable to the certified defense community, there are many presentation problems. Even for expert reviewers who have been publishing in the same area, the paper is very hard to follow. Furthermore, the proposed method only achieves marginal or even no improvement over the state of the art, so whether smoothness is a critical factor in certified training remains unclear. We thereby decide to reject the paper this time. We do think the paper contains some interesting ideas and would like to encourage the authors to improve the paper and resubmit to another top conference. | train | [
"LuN9b3WIGcc",
"kD157u-556M",
"PiUSKbh7u73",
"20tIdqomw5j",
"5k7BnhIVEx1",
"J92ZcmumpG",
"iYQenQRthpx",
"rgrPsCmjLGy",
"Azlx3i9awBo"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the problem of training certifiably robust models against adversarial examples. Besides the tightness of the upper bound, the paper identifies smoothness of the loss landscape as another key factor that influences the performance of certifiable training. Based on the theoretical analysis, the au... | [
5,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"nips_2021_52weXyh2yh",
"PiUSKbh7u73",
"iYQenQRthpx",
"rgrPsCmjLGy",
"LuN9b3WIGcc",
"Azlx3i9awBo",
"nips_2021_52weXyh2yh",
"nips_2021_52weXyh2yh",
"nips_2021_52weXyh2yh"
] |
nips_2021_7PkfLkyLMRM | Mitigating Covariate Shift in Imitation Learning via Offline Data With Partial Coverage | This paper studies offline Imitation Learning (IL) where an agent learns to imitate an expert demonstrator without additional online environment interactions. Instead, the learner is presented with a static offline dataset of state-action-next state triples from a potentially less proficient behavior policy. We introduce Model-based IL from Offline data (MILO): an algorithmic framework that utilizes the static dataset to solve the offline IL problem efficiently both in theory and in practice. In theory, even if the behavior policy is highly sub-optimal compared to the expert, we show that as long as the data from the behavior policy provides sufficient coverage on the expert state-action traces (and with no necessity for a global coverage over the entire state-action space), MILO can provably combat the covariate shift issue in IL. Complementing our theory results, we also demonstrate that a practical implementation of our approach mitigates covariate shift on benchmark MuJoCo continuous control tasks. We demonstrate that with behavior policies whose performances are less than half of that of the expert, MILO still successfully imitates with an extremely low number of expert state-action pairs while traditional offline IL methods such as behavior cloning (BC) fail completely. Source code is provided at https://github.com/jdchang1/milo.
| accept | The reviewers for the most part agree that the paper is well written, tackles an interesting problem, and that the authors have engaged constructively during the reviewer feedback period and addressed the main concerns and questions. Please address the minor points raised by the reviewers in the final version. | test | [
"HE3NUH6gJ8K",
"fBytB2p4Gq4",
"uNb4H_IlD33",
"h2oACvQXj7Q",
"0L6Emc9WfNO",
"qm0CatKJzZY",
"1B7-Fxt6FUM",
"scWv_NxeOmZ",
"i02sCElA-vV",
"hKe_CUOVv43",
"RJi1zONM4t5",
"_1S3DNXmW_c"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This work proposes a new method for offline imitation learning. The setup is slightly different than usual as it supposes that, additionally to the expert dataset we want to imitate, we have access to a large dataset of non-expert interactions.\nThe method is the following: \n\n1) learn a model of the transition k... | [
6,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_7PkfLkyLMRM",
"nips_2021_7PkfLkyLMRM",
"_1S3DNXmW_c",
"nips_2021_7PkfLkyLMRM",
"i02sCElA-vV",
"h2oACvQXj7Q",
"HE3NUH6gJ8K",
"HE3NUH6gJ8K",
"h2oACvQXj7Q",
"fBytB2p4Gq4",
"_1S3DNXmW_c",
"nips_2021_7PkfLkyLMRM"
] |
nips_2021_K_Mnsw5VoOW | Global Filter Networks for Image Classification | Recent advances in self-attention and pure multi-layer perceptrons (MLP) models for vision have shown great potential in achieving promising performance with fewer inductive biases. These models are generally based on learning interaction among spatial locations from raw data. The complexity of self-attention and MLP grows quadratically as the image size increases, which makes these models hard to scale up when high-resolution features are required. In this paper, we present the Global Filter Network (GFNet), a conceptually simple yet computationally efficient architecture, that learns long-term spatial dependencies in the frequency domain with log-linear complexity. Our architecture replaces the self-attention layer in vision transformers with three key operations: a 2D discrete Fourier transform, an element-wise multiplication between frequency-domain features and learnable global filters, and a 2D inverse Fourier transform. We exhibit favorable accuracy/complexity trade-offs of our models on both ImageNet and downstream tasks. Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability and robustness. Code is available at https://github.com/raoyongming/GFNet
| accept | This work proposes replacing self-attention with a global filter layer and applies this method to image classification problems. The resulting method replaces the O(n^2) attention operation with one of O(n log n) time complexity while achieving favorable performance on computation-time vs. accuracy trade-off curves. The reviewers commented positively on the motivation, clarity of exposition, and the implementation. Although the reviewers noted some concerns about novelty with respect to previous architectures, overall the reviewers were favorable to the acceptance of this work. Despite these minor concerns, this paper is accepted to the conference.
| train | [
"GoueFlGK-8r",
"hM4HfFANJyj",
"WhwEmP3iu2A",
"aCGfQtqzlpk",
"4b_8ksbga2",
"fn4eE2WRHxO",
"Ls5gddPpORD",
"Ulnn2Icra31",
"lCHtq9Ego5-",
"t9YjSF4_1-J",
"EX1UAV8q7KR",
"-h7FXAtRZ33",
"7fl8sJj1Lhn",
"MK4jBU-w6ya",
"yNGLb4sRA1o"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the rebuttal and decided to retain my score.",
" Thanks for upgrading the score and providing valuable feedback. We would like to further discuss your concerns on the complexity/accuracy trade-off of our GFNet. Recently, we continue to explore the possibility of using our simple building block to de... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
5
] | [
"Ls5gddPpORD",
"WhwEmP3iu2A",
"t9YjSF4_1-J",
"nips_2021_K_Mnsw5VoOW",
"EX1UAV8q7KR",
"lCHtq9Ego5-",
"MK4jBU-w6ya",
"-h7FXAtRZ33",
"yNGLb4sRA1o",
"aCGfQtqzlpk",
"7fl8sJj1Lhn",
"nips_2021_K_Mnsw5VoOW",
"nips_2021_K_Mnsw5VoOW",
"nips_2021_K_Mnsw5VoOW",
"nips_2021_K_Mnsw5VoOW"
] |
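The three operations named in the GFNet abstract above (2D FFT, element-wise product with learnable global filters, inverse 2D FFT) fit in a few lines. The module below is a sketch rather than the official implementation: the use of a real FFT, the initialization scale, and the channels-last layout are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class GlobalFilter(nn.Module):
    """Token mixing in the frequency domain, replacing self-attention."""

    def __init__(self, h, w, dim):
        super().__init__()
        # One learnable complex filter per frequency bin and channel,
        # stored as (real, imag) pairs in the last dimension.
        self.filter = nn.Parameter(torch.randn(h, w // 2 + 1, dim, 2) * 0.02)

    def forward(self, x):  # x: (batch, h, w, dim), real-valued features
        freq = torch.fft.rfft2(x, dim=(1, 2), norm="ortho")
        freq = freq * torch.view_as_complex(self.filter)
        return torch.fft.irfft2(freq, s=x.shape[1:3], dim=(1, 2), norm="ortho")
```

The log-linear complexity claimed in the abstract comes entirely from the FFT pair; the mixing itself is a pointwise product over frequency bins.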
nips_2021_b4YiFnQH3gN | Catastrophic Data Leakage in Vertical Federated Learning | Recent studies show that private training data can be leaked through the gradients sharing mechanism deployed in distributed machine learning systems, such as federated learning (FL). Increasing batch size to complicate data recovery is often viewed as a promising defense strategy against data leakage. In this paper, we revisit this defense premise and propose an advanced data leakage attack with theoretical justification to efficiently recover batch data from the shared aggregated gradients. We name our proposed method as catastrophic data leakage in vertical federated learning (CAFE). Compared to existing data leakage attacks, our extensive experimental results on vertical FL settings demonstrate the effectiveness of CAFE to perform large-batch data leakage attacks with improved data recovery quality. We also propose a practical countermeasure to mitigate CAFE. Our results suggest that private data participating in standard FL, especially the vertical case, has a high risk of being leaked from the training gradients. Our analysis implies unprecedented and practical data leakage risks in those learning settings. The code of our work is available at https://github.com/DeRafael/CAFE.
| accept | The reviewers were not strongly positive about the paper. The reviewers had a recurring concern over the distinction between Vertical Federated Learning (VFL) and Horizontal Federated Learning, probably the more common one. I would recommend that the authors update the title and make a clear distinction in the introduction itself, to keep the expectations of the reader focused on VFL. Also, reviewer kAc7 had some concerns about the vagueness of the definitions of concepts in the paper, which needs to be addressed. | train | [
"LlKSVwM2do",
"-z8xOPCjMNs",
"fDearskltIh",
"KlSwGBGluk",
"qUt9ddAw_cb",
"AMrxmc1D62F",
"_8U50CcpZm",
"95J4j7NTHLe",
"8uU0OYRuCr9",
"wDS8AJYKsSs",
"j9GeIv7BThj",
"47LS43ound"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies recovering sensitive training data from vertical federated learning (VFL) process, where same set of training data is shared among the clients, but each has distinct set of features. The authors leveraged the data indices (or batch indices) to reliably recover internal representations, which in ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_b4YiFnQH3gN",
"8uU0OYRuCr9",
"KlSwGBGluk",
"nips_2021_b4YiFnQH3gN",
"nips_2021_b4YiFnQH3gN",
"LlKSVwM2do",
"AMrxmc1D62F",
"47LS43ound",
"j9GeIv7BThj",
"nips_2021_b4YiFnQH3gN",
"nips_2021_b4YiFnQH3gN",
"nips_2021_b4YiFnQH3gN"
] |
nips_2021_ospGnpuf6L | Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee | The growing literature of Federated Learning (FL) has recently inspired Federated Reinforcement Learning (FRL) to encourage multiple agents to federatively build a better decision-making policy without sharing raw trajectories. Despite its promising applications, existing works on FRL fail to I) provide theoretical analysis on its convergence, and II) account for random system failures and adversarial attacks. Towards this end, we propose the first FRL framework the convergence of which is guaranteed and tolerant to less than half of the participating agents being random system failures or adversarial attackers. We prove that the sample efficiency of the proposed framework is guaranteed to improve with the number of agents and is able to account for such potential failures or attacks. All theoretical results are empirically verified on various RL benchmark tasks.
| accept | The reviewers in general agree that the paper considers an interesting new setting and that this is one of the very first works to consider federated RL with theoretical guarantees, which makes them lean towards acceptance. But reviewers still have some concerns about the novelty of the proposed approach and whether or not the assumptions are realistic. | train | [
"dMtyaSFYPcU",
"dEKnGA-R3q",
"bOh3knx4VaV",
"xgu023kzcpt",
"cGt_C36r6R",
"x_OKRTtLMK",
"Tf0741YnNBC",
"RXGNNJpN6e",
"g6Ngwy7H2Bm",
"J6D-MtyHne3",
"JIfFiwlUYDd",
"CZjSq6UifRB",
"Af3OZfJITV"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you and we will explore the possibilities in future works.",
" Thank you for the further response to my question. Overall, your answers address some of my concerns and I have adjusted my score.\n\nThough I think in many real-world scenarios, the agents still often act in environments where the MPD can be ... | [
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"dEKnGA-R3q",
"x_OKRTtLMK",
"nips_2021_ospGnpuf6L",
"J6D-MtyHne3",
"nips_2021_ospGnpuf6L",
"Tf0741YnNBC",
"g6Ngwy7H2Bm",
"Af3OZfJITV",
"bOh3knx4VaV",
"CZjSq6UifRB",
"cGt_C36r6R",
"nips_2021_ospGnpuf6L",
"nips_2021_ospGnpuf6L"
] |
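The record above describes tolerating fewer than half of the participating agents being failed or adversarial. A common way to obtain such tolerance in federated updates is a robust aggregator; the coordinate-wise median below is one textbook choice, shown only as a stand-in for whatever aggregation rule the paper actually proves guarantees for.

```python
import numpy as np

def robust_aggregate(gradients):
    """Coordinate-wise median of per-agent gradient estimates.

    gradients: list of equally-shaped arrays, one per agent. Unlike the
    mean, the per-coordinate median is unaffected by fewer than half of
    the entries being arbitrarily corrupted.
    """
    return np.median(np.stack(gradients, axis=0), axis=0)
```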
nips_2021_bqGK5PyI6-N | Compacter: Efficient Low-Rank Hypercomplex Adapter Layers | Adapting large-scale pretrained language models to downstream tasks via fine-tuning is the standard method for achieving state-of-the-art performance on NLP benchmarks. However, fine-tuning all weights of models with millions or billions of parameters is sample-inefficient, unstable in low-resource settings, and wasteful as it requires storing a separate copy of the model for each task. Recent work has developed parameter-efficient fine-tuning methods, but these approaches either still require a relatively large number of parameters or underperform standard fine-tuning. In this work, we propose Compacter, a method for fine-tuning large-scale language models with a better trade-off between task performance and the number of trainable parameters than prior work. Compacter accomplishes this by building on top of ideas from adapters, low-rank optimization, and parameterized hypercomplex multiplication layers. Specifically, Compacter inserts task-specific weight matrices into a pretrained model's weights, which are computed efficiently as a sum of Kronecker products between shared "slow" weights and "fast" rank-one matrices defined per Compacter layer. By only training 0.047% of a pretrained model's parameters, Compacter performs on par with standard fine-tuning on GLUE and outperforms standard fine-tuning on SuperGLUE and low-resource settings. Our code is publicly available at https://github.com/rabeehk/compacter.
| accept | This paper addresses efficient fine-tuning of pretrained models and investigates applications in NLP domains. The idea is to insert a parameter-compact layer called Compacter into the pretrained models. By further parameterizing the Compacter layers with low-rank representations, it achieves a better trade-off between model performance, trained parameters, and memory footprint compared with existing methods.
The idea is quite simple and the techniques are well explored in different scenarios by existing works. Thus most reviewers consider the technical contribution to be incremental. However, it demonstrates very good results on NLP pretrained models. Some reviewers were not satisfied with the limited experiments provided in the original submission. In the rebuttal, the authors were able to provide more results on more complex data and models, and the results are consistent with those provided in the paper. The reviewers are generally satisfied with the rebuttal. Considering the technical limitations and the support from the reviewers, I recommend a weak accept and encourage the authors to incorporate the new results and other comments from the reviewers into the revision. | test | [
"7QHLte5bIMk",
"0fCmsl4BHOC",
"rPMyrzBvlAL",
"hpGiDsBeuWT",
"isY8b9ETBP0",
"nwnHxJFYJFt",
"HSZr4NjGqxE",
"wgmRQgNMR9",
"756HCb2N80g",
"cuAiS1j6on8",
"Ckj-6zK9P-i",
"oz96xvNQB7h"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Authors, \n\nThanks for your rebuttal. After reading the rebuttal and other reviews, I am inclined to retain the current score. Compared with [24], novelty and originality are still my major concerns. The improvements are not considered as significant. ",
" We now included the results on Superglue for all ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"wgmRQgNMR9",
"rPMyrzBvlAL",
"nwnHxJFYJFt",
"756HCb2N80g",
"cuAiS1j6on8",
"nips_2021_bqGK5PyI6-N",
"oz96xvNQB7h",
"Ckj-6zK9P-i",
"nips_2021_bqGK5PyI6-N",
"nips_2021_bqGK5PyI6-N",
"nips_2021_bqGK5PyI6-N",
"nips_2021_bqGK5PyI6-N"
] |
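The Compacter abstract above describes weights "computed efficiently as a sum of Kronecker products between shared 'slow' weights and 'fast' rank-one matrices". The sketch below illustrates the parameter count arithmetic; the factor shapes and names are assumptions made for clarity, and the real layer sizes its factors so that the result matches the adapter's down/up projections.

```python
import torch

def compacter_weight(A, s, t):
    """Sum-of-Kronecker-products weight with rank-one fast factors.

    A: (n, p, p) shared 'slow' matrices, reused across layers.
    s: (n, q) and t: (n, r) per-layer 'fast' vectors; B_i = outer(s_i, t_i)
    is rank one. The result W = sum_i kron(A_i, B_i) has shape (p*q, p*r)
    while storing only O(n*p^2 + n*(q + r)) parameters instead of p*q*p*r.
    """
    W = torch.zeros(A.shape[1] * s.shape[1], A.shape[2] * t.shape[1])
    for i in range(A.shape[0]):
        B = torch.outer(s[i], t[i])               # rank-one fast matrix
        W = W + torch.kron(A[i].contiguous(), B)  # Kronecker product
    return W
```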
nips_2021_tCYjE8Pf2Zg | Distilling Image Classifiers in Object Detectors | Knowledge distillation constitutes a simple yet effective way to improve the performance of a compact student network by exploiting the knowledge of a more powerful teacher. Nevertheless, the knowledge distillation literature remains limited to the scenario where the student and the teacher tackle the same task. Here, we investigate the problem of transferring knowledge not only across architectures but also across tasks. To this end, we study the case of object detection and, instead of following the standard detector-to-detector distillation approach, introduce a classifier-to-detector knowledge transfer framework. In particular, we propose strategies to exploit the classification teacher to improve both the detector's recognition accuracy and localization performance. Our experiments on several detectors with different backbones demonstrate the effectiveness of our approach, allowing us to outperform the state-of-the-art detector-to-detector distillation methods.
| accept | Two of the reviewers agreed that the paper is novel and can result in cheaper, higher-performing object detectors. The third reviewer misunderstood the term "spatial transformer" and was confused by its use, not responding to the authors' clarification. Given the strong experimental result and interesting approach, I support the acceptance of this paper. | test | [
"-xm3iDRhGCs",
"HdunJ5IelpK",
"EKnFsjuq9L",
"4-AYiqT3zoa",
"HJmTSB2xLI4",
"7y2RmutYMii",
"RRTaV0vfY4D",
"Pf34r8y7mpj",
"XFAdbdQs_-",
"tHsYM1osvN",
"wMtNrRywJC0",
"C6_qTEUkIBd"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" 1. ‘+loc ‘ is **NOT** ’between detectors (student and teacher detectors)‘ but proposed by us between our classification teacher and the detection student. As mentioned before and stated at lines 174-181 of the paper, ‘+loc’ is **indeed** our contribution and a special case of ‘KD_loc’.\n\n As stated in answer... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"HdunJ5IelpK",
"EKnFsjuq9L",
"4-AYiqT3zoa",
"Pf34r8y7mpj",
"nips_2021_tCYjE8Pf2Zg",
"RRTaV0vfY4D",
"tHsYM1osvN",
"wMtNrRywJC0",
"C6_qTEUkIBd",
"HJmTSB2xLI4",
"nips_2021_tCYjE8Pf2Zg",
"nips_2021_tCYjE8Pf2Zg"
] |
nips_2021_68B1ezcffDc | Subgroup Generalization and Fairness of Graph Neural Networks | Despite enormous successful applications of graph neural networks (GNNs), theoretical understanding of their generalization ability, especially for node-level tasks where data are not independent and identically-distributed (IID), has been sparse. The theoretical investigation of the generalization performance is beneficial for understanding fundamental issues (such as fairness) of GNN models and designing better learning methods. In this paper, we present a novel PAC-Bayesian analysis for GNNs under a non-IID semi-supervised learning setup. Moreover, we analyze the generalization performances on different subgroups of unlabeled nodes, which allows us to further study an accuracy-(dis)parity-style (un)fairness of GNNs from a theoretical perspective. Under reasonable assumptions, we demonstrate that the distance between a test subgroup and the training set can be a key factor affecting the GNN performance on that subgroup, which calls special attention to the training node selection for fair learning. Experiments across multiple GNN models and datasets support our theoretical results.
| accept | The review scores are stable towards "clear accept". Some concerns raised by a reviewer were properly addressed during the discussion period, resulting in a unanimous consensus on the paper. Also, it makes an interesting connection with "fairness" concerning accuracy disparity, thereby opening up interesting problems along this direction in the community. | train | [
"MciXVvtyhJb",
"CvRnjwcboGE",
"A8Ms5kPwM5T",
"gdvNyABwRjP",
"_307uyUrTNh",
"M0JBq40rDBR",
"U9gxnJUARd_",
"Fn0waKE6tr",
"sV5lSYrp7q",
"llklhzLQGSE",
"HIvzbVnWzNa",
"OTEMbX9iDmV"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification on aggregated feature in the paper. I agree that aggregated feature distance and geodesic distance can help understand the fairness problem. I also notice the subgroup performance difference is consistent across these three measures. The caveat of final hidden representation is the po... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"Fn0waKE6tr",
"nips_2021_68B1ezcffDc",
"U9gxnJUARd_",
"_307uyUrTNh",
"M0JBq40rDBR",
"OTEMbX9iDmV",
"CvRnjwcboGE",
"HIvzbVnWzNa",
"llklhzLQGSE",
"nips_2021_68B1ezcffDc",
"nips_2021_68B1ezcffDc",
"nips_2021_68B1ezcffDc"
] |
nips_2021_vIRFiA658rh | Scaling Neural Tangent Kernels via Sketching and Random Features | The Neural Tangent Kernel (NTK) characterizes the behavior of infinitely-wide neural networks trained under least squares loss by gradient descent. Recent works also report that NTK regression can outperform finitely-wide neural networks trained on small-scale datasets. However, the computational complexity of kernel methods has limited its use in large-scale learning tasks. To accelerate learning with NTK, we design a near input-sparsity time approximation algorithm for NTK, by sketching the polynomial expansions of arc-cosine kernels: our sketch for the convolutional counterpart of NTK (CNTK) can transform any image using a linear runtime in the number of pixels. Furthermore, we prove a spectral approximation guarantee for the NTK matrix, by combining random features (based on leverage score sampling) of the arc-cosine kernels with a sketching algorithm. We benchmark our methods on various large-scale regression and classification tasks and show that a linear regressor trained on our CNTK features matches the accuracy of exact CNTK on CIFAR-10 dataset while achieving 150x speedup.
| accept | This paper considers sketching and random feature methods to accelerate learning with the Neural Tangent Kernel (NTK). NTK based methods are promising linearized approximations of non-convex neural network models, however, they suffer from the high computational complexity of kernel matrix operations. Scaling NTK methods via randomized approximations is a promising direction towards understanding deep neural networks and matching their performance using simpler architectures such as linear kernel machines.
The reviewers all agreed that the paper contains interesting theoretical and experimental results, and expressed minor concerns and suggestions. One reviewer pointed out that a discussion of the regime in which NTK is not fully descriptive would be useful. Another reviewer commented that the complexity notions are hard to digest. Overall, the paper received generally positive reviews; while some of them pointed out certain minor problems and gave above-borderline scores, all recommended accepting the paper. Please take into account the updated reviews when preparing the final version to accommodate the requested changes. | test | [
"VU24YpD9G_",
"Zko9giI0Q0Q",
"pnUrhTeB3_j",
"NKnqSbMEtr",
"il7SStnNVS",
"WJjN7V_eC0o",
"-yH7bAhfNPp",
"zgj66AqESXj",
"MqiNWKcuTaT",
"5ASDAGCzOds"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors' feadback.\nI won't change my review score as it is good enough in terms of these minor issues.\n",
" I would like to thank the authors for their detailed response to my review.\n\nWhile the response does clarify some points, it does not do so in a manner that appreciably changes my persp... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
5
] | [
"NKnqSbMEtr",
"pnUrhTeB3_j",
"zgj66AqESXj",
"5ASDAGCzOds",
"MqiNWKcuTaT",
"-yH7bAhfNPp",
"nips_2021_vIRFiA658rh",
"nips_2021_vIRFiA658rh",
"nips_2021_vIRFiA658rh",
"nips_2021_vIRFiA658rh"
] |
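The record above builds its NTK approximation on random features for arc-cosine kernels. The order-1 (ReLU) case has a standard one-layer feature map, shown below as a sketch; the paper composes and sketches such maps across layers and adds leverage-score sampling, none of which is reproduced here, and the function name is illustrative.

```python
import numpy as np

def arccos1_features(X, m, rng):
    """Random features for the order-1 arc-cosine (ReLU) kernel.

    k1(x, y) = 2 * E_{w ~ N(0, I)}[relu(w @ x) * relu(w @ y)], so with
    W ~ N(0, I) the map phi(x) = sqrt(2 / m) * relu(W @ x) satisfies
    phi(x) @ phi(y) -> k1(x, y) as the feature count m grows.
    """
    W = rng.standard_normal((m, X.shape[1]))
    return np.sqrt(2.0 / m) * np.maximum(X @ W.T, 0.0)
```

Training a linear model on such features replaces an n-by-n kernel solve with an n-by-m least-squares problem, which is the source of the reported 150x speedup over exact CNTK regression.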
nips_2021_qQAtFdyDr- | BatchQuant: Quantized-for-all Architecture Search with Robust Quantizer | Haoping Bai, Meng Cao, Ping Huang, Jiulong Shan | accept | This paper combines the ideas of mixed-precision quantization and single-shot NAS, and proposes a single-shot weight-sharing quantization method that requires no training during search. 3 out of 4 reviewers acknowledge that the idea is solid and simple. Concerns remain about the novelty. This paper is recommended for acceptance given its effectiveness and satisfying results. | train | [
"SgZMCCYmLMO",
"PM6bbcQ-15y",
"sMbRbEL-Z8P",
"fWAGv7bCRtc",
"I3-GvViVhVE",
"gQ_Xc3hP8PH",
"9rgxWscE9pg",
"DYA8fI4zZ9R",
"ARSvH-qvGbq",
"aURKgEj5o0m",
"HahZcm8M-lR"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a combination of neural architecture search and per-batch quantization statistics calibration in order to quantize neural networks to low bit-width using mixed precision. The proposed NAS method is applied to ImageNet and pareto-optimal networks in the accuracy vs complexity space are found. O... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
4
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
4
] | [
"nips_2021_qQAtFdyDr-",
"ARSvH-qvGbq",
"SgZMCCYmLMO",
"HahZcm8M-lR",
"aURKgEj5o0m",
"ARSvH-qvGbq",
"DYA8fI4zZ9R",
"nips_2021_qQAtFdyDr-",
"nips_2021_qQAtFdyDr-",
"nips_2021_qQAtFdyDr-",
"nips_2021_qQAtFdyDr-"
] |
nips_2021_aohkNJxjYJX | Long Short-Term Transformer for Online Action Detection | We present Long Short-term TRansformer (LSTR), a temporal modeling algorithm for online action detection, which employs a long- and short-term memory mechanism to model prolonged sequence data. It consists of an LSTR encoder that dynamically leverages coarse-scale historical information from an extended temporal window (e.g., 2048 frames spanning up to 8 minutes), together with an LSTR decoder that focuses on a short time window (e.g., 32 frames spanning 8 seconds) to model the fine-scale characteristics of the data. Compared to prior work, LSTR provides an effective and efficient method to model long videos with fewer heuristics, which is validated by extensive empirical analysis. LSTR achieves state-of-the-art performance on three standard online action detection benchmarks, THUMOS'14, TVSeries, and HACS Segment.
| accept | The LSTR model is original and well explained and explored in the paper. It was well received by all reviewers, and the rebuttal has addressed most of the concerns raised.
One outstanding issue is that the model is not applied to several of the current datasets and tasks of interest (e.g. AVA, EPIC-Kitchens) and this will somewhat limit interest in it. The authors are encouraged to address this in the final version of the paper. | val | [
"_32WKVXIAVw",
"VoCkccYvAVY",
"E23GgfYGHta",
"v2Trbt0r49y",
"jSAysqm9rEL",
"RbYRz0zOAs",
"a3cTt7Pf-Z",
"JUwuZTGZjfk",
"jCFuWhPI__q",
"cHokRLUM7E",
"xe27SeNazQY",
"Rw4Z6tuD12",
"lIeAezarMs7"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you authors for a detailed response. I have read the reviews from other reviewers too. My concerns have been adequately addressed by the rebuttal. I am going to keep my previous rating and recommend an acceptance for this paper.",
" Considering the limited time of rebuttal and the changes required for app... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
5
] | [
"RbYRz0zOAs",
"E23GgfYGHta",
"JUwuZTGZjfk",
"jCFuWhPI__q",
"a3cTt7Pf-Z",
"lIeAezarMs7",
"Rw4Z6tuD12",
"xe27SeNazQY",
"cHokRLUM7E",
"nips_2021_aohkNJxjYJX",
"nips_2021_aohkNJxjYJX",
"nips_2021_aohkNJxjYJX",
"nips_2021_aohkNJxjYJX"
] |
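The long/short two-window memory described in the abstract above can be sketched with plain FIFO buffers; the attention layers that read these windows are omitted, and all names and the default window sizes (taken from the abstract) are illustrative assumptions.

```python
from collections import deque

class TwoScaleMemory:
    """Bookkeeping sketch for an LSTR-style streaming model: a long FIFO of
    coarse historical frame features plus a short FIFO of recent ones."""

    def __init__(self, long_len=2048, short_len=32):
        self.long = deque(maxlen=long_len)    # ~8 minutes of history
        self.short = deque(maxlen=short_len)  # ~8 seconds of recent context

    def push(self, feature):
        """Append the newest frame feature to both windows."""
        self.long.append(feature)
        self.short.append(feature)

    def windows(self):
        """Return (long, short) windows for the encoder/decoder to attend to."""
        return list(self.long), list(self.short)
```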
nips_2021_ZEhDWKLTvt7 | Near Optimal Policy Optimization via REPS | Since its introduction a decade ago, relative entropy policy search (REPS) has demonstrated successful policy learning on a number of simulated and real-world robotic domains, not to mention providing algorithmic components used by many recently proposed reinforcement learning (RL) algorithms. While REPS is commonly known in the community, there exist no guarantees on its performance when using stochastic and gradient-based solvers. In this paper we aim to fill this gap by providing guarantees and convergence rates for the sub-optimality of a policy learned using first-order optimization methods applied to the REPS objective. We first consider the setting in which we are given access to exact gradients and demonstrate how near-optimality of the objective translates to near-optimality of the policy. We then consider the practical setting of stochastic gradients, and introduce a technique that uses generative access to the underlying Markov decision process to compute parameter updates that maintain favorable convergence to the optimal regularized policy.
| accept | This is a challenging paper, as the reviewers have a wide range of opinions about it.
We had a lot of discussions among the reviewers: there are more than 40 messages in total in the forum (including authors' responses), which is more than any other paper in my batch (most messages are private). Even though one of the reviewers changed their stance from the negative side to the positive side, the majority kept their initial viewpoints. In order to help with the final decision, I invited an expert emergency reviewer who provided a positive evaluation of this work. At the end of the day, we have 3 positive reviewers and 2 negative ones.
Several issues have been brought up. The core issues can be summarized into the following two categories. I explain the viewpoint of reviewers and provide my commentary:
1) Whether the results are novel and significant or not.
Some reviewers believe that the results are not novel or significant, yet others believe that they are.
In my viewpoint, the results are novel because they theoretically analyze one of the important RL algorithms (REPS) for the first time.
The provided convergence rates may or may not be optimal, as we do not know what the lower bound is. This is the first work with such an analysis, and I believe it is an important one.
The negative reviewers' complaints regarding the novelty and significance are along these lines:
Concern: The results look suboptimal because the rates are not fast (perhaps compared to other policy search algorithms).
My take: Given that this is the first result for REPS, and we do not have a lower bound for the rate, this is not a real concern.
Concern: The results are not novel because similar papers analyzed similar algorithms. Are we going to analyze all algorithms using similar tools and claim novelty?
My take: I would argue that REPS is not just any algorithm. It is one of the most important RL algorithms of the past two decades, and its analysis is worthwhile.
Concern: The technical tools used in the analysis are standard; each component of the result (lemmas, etc.) has been proved elsewhere; the proofs are not creative applications of prior tools.
My take: I did not check the proofs myself, so I do not have a strong opinion about their content and technical novelty and creativity, but I do not think this is the most significant evaluation criterion for this type of paper.
Even if the proof techniques might be well-established, some of the results (Lemmas 4, 5, and 9) appear to be novel.
How difficult is it to prove them? I do not know, but I do not think it matters much. They are not trivial corollaries of already known results.
Since NeurIPS is not a math venue, the novelty and creativity of the proof techniques are not the main consideration anyway, in my opinion.
I believe some of the reviewers have a different standard of novelty and significance from mine in evaluating this paper, and that might be the main reason for the difference in opinion.
2) The exposition should be improved.
The fact that some of the reviewers got confused about what is novel here and what is not, and the authors had to correct them in the rebuttal multiple times (a frustration that was expressed in their rebuttal too) should be a signal to the authors that maybe their paper is not written very well.
All reviewers spent a lot of time on this paper. Many of them are experts in areas closely related to the topic of this paper. Their expertise is far closer to this paper's topic than that of a typical RL researcher at NeurIPS. The reviews show that they tried hard to understand the paper, yet they occasionally misunderstood some important aspects of the paper, for example, the relationship between the results of this paper and the currently existing results in the literature.
This issue was not raised only by the negative reviewers. The expert emergency reviewer also believed that the writing sometimes becomes a bit dense with technical details. I shared this sentiment after reading the paper myself.
Although I realize that writing a paper that can be understood by everyone is challenging, especially a theoretical paper, I would recommend the authors to seriously consider the feedback provided by the reviewers to improve the exposition of their work.
**Evaluation:** Overall, given the novelty and significance of the paper, according to what I believe should be the novelty and significance criteria for a venue such as NeurIPS, and that we have enough positive and enthusiastic support from three reviewers, I recommend the acceptance of this paper.
I encourage the authors to revise the exposition of their paper and make it more accessible to a broader range of NeurIPS audience. | train | [
"kugeuXARtU9",
"lT4aJ8JLlNX",
"S3I6FjDOutg",
"GIppIdaf75z",
"Idc9792aCH",
"lBZy0VK_gKO",
"s1Gu_YCNY8I",
"VRYQYsz3gu5",
"L9hXc80HMVC",
"aEfIHdRlC4m",
"4JGnfFLJowg",
"v34-bNvR4aC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper studies the performance of the classic Relative Entropy Policy Search (REPS) algorithm of Peters et al. (2010) in the context of tabular reinforcement learning, focusing on the effect of optimization errors on the quality of the policy extracted from the solution. REPS is based on formulating the policy ... | [
7,
-1,
4,
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
4
] | [
4,
-1,
4,
3,
-1,
-1,
5,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_ZEhDWKLTvt7",
"S3I6FjDOutg",
"nips_2021_ZEhDWKLTvt7",
"nips_2021_ZEhDWKLTvt7",
"lBZy0VK_gKO",
"aEfIHdRlC4m",
"nips_2021_ZEhDWKLTvt7",
"v34-bNvR4aC",
"s1Gu_YCNY8I",
"GIppIdaf75z",
"S3I6FjDOutg",
"nips_2021_ZEhDWKLTvt7"
] |
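As background for the REPS record above, the relative-entropy-regularized policy improvement it builds on has a compact closed form. This is a generic textbook statement with an illustrative temperature $\eta$ and advantage estimate $A^{\pi}$, not the paper's exact notation.

```latex
% Generic KL-constrained (REPS-style) policy improvement and its
% closed-form exponentiated solution; \eta and A^{\pi} are illustrative.
\max_{\pi}\ \mathbb{E}_{s,a\sim\pi}\!\left[A^{\pi}(s,a)\right]
\quad \text{s.t.}\quad
\mathrm{KL}\!\left(\pi \,\Vert\, \pi_{\mathrm{old}}\right)\le \epsilon,
\qquad
\pi_{\mathrm{new}}(a\mid s)\ \propto\ \pi_{\mathrm{old}}(a\mid s)\,
\exp\!\bigl(A^{\pi}(s,a)/\eta\bigr).
```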
nips_2021_x2rdRAx3QF | Self-Consistent Models and Values | Learned models of the environment provide reinforcement learning (RL) agents with flexible ways of making predictions about the environment. Models enable planning, i.e. using more computation to improve value functions or policies, without requiring additional environment interactions. In this work, we investigate a way of augmenting model-based RL, by additionally encouraging a learned model and value function to be jointly \emph{self-consistent}. This lies in contrast to classic planning methods like Dyna, which only update the value function to be consistent with the model. We propose a number of possible self-consistency updates, study them empirically in both the tabular and function approximation settings, and find that with appropriate choices self-consistency can be useful both for policy evaluation and control.
| accept | The paper proposes a self-consistency approach to model-based RL. The self-consistency refers to jointly optimizing the value function and the model such that a variant of the Bellman residual is minimized.
The reviewers are all positive about this work. They believe that it is original work and (mostly) clearly written. Most of their concerns were also addressed during the discussion phase, and there is no concern that prevents them from accepting this work. Therefore, I would recommend the *acceptance* of this paper.
There are some minor issues that could still be improved, though. Please refer to the reviews for more detail. I list a few of them:
- Improving the writing, especially in the Experiments section, in which some details are missing.
- Explaining and studying the effect of the number of unrolled steps in the computation of L_SC.
- Some of the reviewers believe that the paper isn't very reproducible. Please try to improve it in this regard, by providing more detail or releasing (some parts of the) source code.
- Better explanation of the relation between value equivalence and self-consistency. | train | [
"5EG68YAypvW",
"HH4KDC7xC7w",
"xFrAQephl2u",
"6HyYm_Oa19s",
"XFXA7BKvxu1",
"L95KxwkEwL3",
"TJPhRK9r6ij",
"HefeLSctRwA",
"Qyapmo_9nj",
"g03OTRctwr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I have read the other reviews and your responses. You have done an excellent job of responding, and other reviewers have articulated additional value in the paper that I did not see, so I am raising my score to a 7.",
"This paper presents the idea of self-consistent reinforcement learning. By enforcing \"cons... | [
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"HefeLSctRwA",
"nips_2021_x2rdRAx3QF",
"nips_2021_x2rdRAx3QF",
"XFXA7BKvxu1",
"g03OTRctwr",
"xFrAQephl2u",
"Qyapmo_9nj",
"HH4KDC7xC7w",
"nips_2021_x2rdRAx3QF",
"nips_2021_x2rdRAx3QF"
] |
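A one-step sketch of the self-consistency idea above (the meta-review's L_SC), assuming a callable value function and model. The joint gradient through both is the point; the single-step unroll fixed here is exactly the knob the meta-review asks to study.

```python
def self_consistency_loss(v, model, s, gamma=0.99):
    """One-step self-consistency term: penalize disagreement between the
    value of s and the Bellman backup computed *through the learned model*.
    Unlike Dyna-style updates, gradients flow into both v and model.
    All names and the single-step unroll are illustrative assumptions."""
    r_hat, s_next_hat = model(s)             # model-predicted reward, next state
    target = r_hat + gamma * v(s_next_hat)   # backup through the model
    return (v(s) - target) ** 2
```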
nips_2021_Tbq5fYViJzm | Learning on Random Balls is Sufficient for Estimating (Some) Graph Parameters | Theoretical analyses for graph learning methods often assume a complete observation of the input graph. Such an assumption might not be useful for handling any-size graphs due to the scalability issues in practice. In this work, we develop a theoretical framework for graph classification problems in the partial observation setting (i.e., subgraph samplings). Equipped with insights from graph limit theory, we propose a new graph classification model that works on a randomly sampled subgraph and a novel topology to characterize the representability of the model. Our theoretical framework contributes a theoretical validation of mini-batch learning on graphs and leads to new learning-theoretic results on generalization bounds as well as size-generalizability without assumptions on the input.
| accept | This paper considers estimation of a real number associated with a large graph. Some examples of quantities that one might wish to estimate are the density of triangles (or, relatedly, the clustering coefficient). Rather than observing the entire graph, it is assumed that algorithms have access to a specific random sampling procedure whereby random nodes are selected and then their neighborhoods are randomly explored. A variation on the Benjamini-Schramm topology is introduced and the set of estimable graph properties is characterized in terms of continuity in this topology. A number of other results are presented regarding generalization across multiple graph sizes, etc.
The proposed sampling/observation model is reasonable and placed within the context of related observation models. While the work is entirely theoretical, its results clearly demarcate what is possible or impossible with regard to the specific sampling model. Thus, there are natural implications for what can be learned by graph neural networks. The work seems likely to spur further investigation with other sampling models and to make tighter connections to finite-size graph neural networks. | train | [
"9yxh7s4PP-h",
"IUps2MDm_Mm",
"NiiTZtEzA6B",
"37EaHK-yaBY",
"6qKGgZ-cq1t",
"-rF7_MbealP",
"goMYG6k73kV",
"ThurTV5ELsX",
"47TN9ceOIX8",
"IXljYIrxjc",
"VyYqn0av6tf",
"MERRkPSnaYw"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I acknowledge that I have read the rebuttal. When I did so, I increased my rating (16 Aug).",
" We truly appreciate you for spending your time to interact with us. \n\n*Reply to (1)*: Now we see that you have a different opinion than ours on this point: we believe having applications in point-cloud application ... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"goMYG6k73kV",
"NiiTZtEzA6B",
"-rF7_MbealP",
"nips_2021_Tbq5fYViJzm",
"nips_2021_Tbq5fYViJzm",
"VyYqn0av6tf",
"37EaHK-yaBY",
"MERRkPSnaYw",
"IXljYIrxjc",
"nips_2021_Tbq5fYViJzm",
"nips_2021_Tbq5fYViJzm",
"nips_2021_Tbq5fYViJzm"
] |
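The sampling model in the record above (pick a random node, explore its neighborhood) is easy to make concrete. A minimal sketch, assuming an adjacency-dict graph and a user-supplied local statistic; all names and defaults are illustrative.

```python
import random
from collections import deque

def sample_ball(adj, root, radius):
    """BFS ball of given radius around root; adj maps node -> set of neighbors."""
    seen, frontier = {root}, deque([(root, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == radius:
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

def estimate_param(adj, stat, radius=2, n_samples=100):
    """Monte-Carlo estimate of a 'local' graph parameter from random balls;
    `stat(adj, ball)` maps a sampled ball to a number (e.g., local clustering)."""
    nodes = list(adj)
    vals = [stat(adj, sample_ball(adj, random.choice(nodes), radius))
            for _ in range(n_samples)]
    return sum(vals) / len(vals)
```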
nips_2021_dxaINwQdXh1 | Risk-Averse Bayes-Adaptive Reinforcement Learning | In this work, we address risk-averse Bayes-adaptive reinforcement learning. We pose the problem of optimising the conditional value at risk (CVaR) of the total return in Bayes-adaptive Markov decision processes (MDPs). We show that a policy optimising CVaR in this setting is risk-averse to both the epistemic uncertainty due to the prior distribution over MDPs, and the aleatoric uncertainty due to the inherent stochasticity of MDPs. We reformulate the problem as a two-player stochastic game and propose an approximate algorithm based on Monte Carlo tree search and Bayesian optimisation. Our experiments demonstrate that our approach significantly outperforms baseline approaches for this problem.
| accept | This paper has been carefully discussed by the reviewers; the consensus is that while the paper applies CVaR to the fully Bayesian setting and its motivation is appreciated (especially that CVaR optimization in Bayesian MDPs is motivated by addressing the two sources of uncertainty, the parametric (or epistemic) uncertainty and the internal (or aleatoric) uncertainty, which is good), the contribution of extending CVaR MDPs to the Bayes MDP setting is quite incremental. Similar to the standard MDP case, the authors managed to show a dual representation result for Bayes MDPs and are thus able to modify algorithms such as MCTS to solve CVaR Bayes MDPs. Reviewers also mentioned concerns about the scalability of the algorithm and the lack of more realistic experiments (beyond the small-scale experiments reported). For example, some reviewers suggested that additional experiments, such as experimentally decoupling the effect of epistemic and aleatoric uncertainty on risk, could have been included, as well as an analysis of different priors (e.g., varying entropy, or an incorrect prior w.r.t. the true MDP distribution).
Overall, the paper studies an interesting topic, but its current form is below the acceptance threshold.
| train | [
"wtHW6ntPKAL",
"LC07hmUvFUO",
"eAoaKhSGuq0",
"V7QTfdbFuhw",
"R3liaPV3_k5",
"xEbjnNf3KK",
"weuHZWV78Iy",
"SBJpwcmEuO",
"TwrwMeY-OXl",
"gLpL2k8NQ0r"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper discusses parametric (epistmic) versus internal (aleatory) uncertainty in the context of Bayesian reinforcement learning. It extends previous methods for CVaR optimisation in MDPs to the BAMDP setting.\n\nThe response clarified some minor issues, but my main points remain unchanged. The paper discusses... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
3
] | [
"nips_2021_dxaINwQdXh1",
"wtHW6ntPKAL",
"gLpL2k8NQ0r",
"TwrwMeY-OXl",
"SBJpwcmEuO",
"weuHZWV78Iy",
"nips_2021_dxaINwQdXh1",
"nips_2021_dxaINwQdXh1",
"nips_2021_dxaINwQdXh1",
"nips_2021_dxaINwQdXh1"
] |
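The risk measure in the abstract above is standard and worth making concrete. A minimal empirical sketch, where the returns are assumed to be Monte-Carlo samples that mix posterior MDP draws (epistemic) with within-MDP stochasticity (aleatoric), so the lower tail reflects both.

```python
import numpy as np

def empirical_cvar(returns, alpha=0.1):
    """CVaR_alpha of a return distribution: the mean of the worst alpha
    fraction of samples (a simple order-statistic estimator; the rounding
    convention is an illustrative choice)."""
    r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(r))))
    return r[:k].mean()
```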
nips_2021_IuH1bVRgvHH | Iterative Connecting Probability Estimation for Networks | Estimating the probabilities of connections between vertices in a random network using an observed adjacency matrix is an important task for network data analysis. Many existing estimation methods are based on certain assumptions on network structure, which limit their applicability in practice. Without making strong assumptions, we develop an iterative connecting probability estimation method based on neighborhood averaging. Starting at a random initial point or an existing estimate, our method iteratively updates the pairwise vertex distances, the sets of similar vertices, and connecting probabilities to improve the precision of the estimate. We propose a two-stage neighborhood selection procedure to achieve the trade-off between smoothness of the estimate and the ability to discover local structure. The tuning parameters can be selected by cross-validation. We establish desirable theoretical properties for our method, and further justify its superior performance by comparing with existing methods in simulation and real data analysis.
| accept | The paper presents a simple, iterative, and provably effective method for graphon estimation, which improves on previous "single shot" estimation techniques. The reviewers were in consensus that the idea is very natural and carefully studied, and makes a significant contribution to the graphon literature. | train | [
"xjUr5vHZVlu",
"FOal4gNmJL",
"YvYAUI3CgPz",
"ocd4GqK5DpH",
"NTa2Pc6n65",
"oDM9j7u_GR9",
"vRD-efItsN_",
"KYOBQOOIjpB",
"QjOrNI8Avwh"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose an iterative approach to estimate the probability of forming an edge between any two vertices in a graph, which they denote as the *connecting probability*. With only a single realization of the graph, it is a difficult task to estimate the connecting probabilities. The proposed iterative conne... | [
8,
-1,
7,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
3,
2
] | [
"nips_2021_IuH1bVRgvHH",
"NTa2Pc6n65",
"nips_2021_IuH1bVRgvHH",
"YvYAUI3CgPz",
"xjUr5vHZVlu",
"QjOrNI8Avwh",
"KYOBQOOIjpB",
"nips_2021_IuH1bVRgvHH",
"nips_2021_IuH1bVRgvHH"
] |
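A minimal sketch of the iterate in the abstract above (pairwise distances, then similar vertices, then neighborhood averaging). The Euclidean row distance, the fixed neighborhood size k, and the symmetrization step are illustrative simplifications of the paper's two-stage selection and cross-validated tuning.

```python
import numpy as np

def iterative_estimate(A, n_iters=10, k=10):
    """Iterative neighborhood-averaging sketch on an adjacency matrix A:
    recompute vertex distances from the current estimate P, pick each
    vertex's k most similar peers, and re-average the observed adjacency
    rows over them. n_iters and k stand in for cross-validated choices."""
    n = A.shape[0]
    P = A.astype(float).copy()                                  # crude initial estimate
    for _ in range(n_iters):
        D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)  # pairwise row distances
        P_new = np.empty_like(P)
        for i in range(n):
            nbrs = np.argsort(D[i])[:k]                         # k most similar vertices
            P_new[i] = A[nbrs].mean(axis=0)                     # average their adjacency rows
        P = (P_new + P_new.T) / 2                               # keep the estimate symmetric
    return P
```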
nips_2021_PEGc7x_QL2 | Learning to Adapt via Latent Domains for Adaptive Semantic Segmentation | Domain adaptive semantic segmentation aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain. To narrow down the domain gap and ease the adaptation difficulty, some recent methods translate source images to target-like images (latent domains), which are used as a supplement or substitute for the original source data. Nevertheless, these methods neglect to explicitly model how knowledge transfers across different domains. Alternatively, in this work we break through the standard “source-target” one-pair adaptation framework and construct multiple adaptation pairs (e.g. “source-latent” and “latent-target”). The purpose is to use the meta-knowledge (how to adapt) learned from one pair as guidance to assist the adaptation of another pair under a meta-learning framework. Furthermore, we extend our method to the more practical setting of open compound domain adaptation (a.k.a. multiple-target domain adaptation), where the target is a compound of multiple domains without domain labels. In this setting, we embed an additional “latent-latent” pair to reduce the domain gap between the source and the different latent domains, allowing the model to adapt well to multiple target domains simultaneously. When evaluated on standard benchmarks, our method is superior to the state-of-the-art methods in both the single-target and multiple-target domain adaptation settings.
| accept | After the discussion period this paper received mixed scores with one recommendation for rejection, one for acceptance, one leaning reject and one leaning accept. There was disagreement about the novelty of the proposed approach. The key novelty claimed by the authors is using a translated source image as a domain unto itself and performing meta learning to learn an adaptable model from source $\rightarrow$ latent such that further adaptation from latent $\rightarrow$ target should be more effective. All reviewers agreed that the image translation portion (i.e., generation of latent domain) was based on prior work. The debate then centered around whether the introduction of a meta learning objective using this translated image was sufficient contribution as well as whether that idea proved effective empirically. After considering all reviewer comments, discussion, author rebuttal and examining the paper, it seems that although prior work has used meta learning for domain generalization the specific use in this paper for the latent domain together with the ablation study in Table 3 showing the value of using the meta objective instead of just adapting directly from L$\rightarrow$T implies that this design decision is useful and novel. It does however seem that the study into the open compound target together with using existing clustering from [21] is less of a contribution and I would encourage the authors to revise the text to clarify their key contribution. Further, as promised please move the ResNet experiments to the main paper. | train | [
"S3u3Cm2HEZ",
"cPrT5UvTKXI",
"syZxLGTwuwb",
"ZQC2wihvfYO",
"zuk9gcLt3HW",
"bjhCTp-mqdj",
"Ur4t-h0muW6",
"vJm3U6_hOkV",
"3SYEYCjicji",
"W_3yUq1xDx",
"D3vwuJgd37r",
"O_EoUjscB__",
"3-YUl6SxXbk",
"YHeicpZiTcl",
"ZSvjbiS_ew"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your valuable suggestions!\n\n(1) The source only model using ResNet achieves 28.4% w.r.t mIoU on the open compound DA setting. Our MTDA outperforms the source-only model by 9.5% w.r.t mIoU.\n\n(2) When we apply our STDA framework on another UDA scenario of SYNTHIA to Cityscapes, we also outperform ... | [
-1,
-1,
-1,
5,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
-1,
-1,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"cPrT5UvTKXI",
"ZQC2wihvfYO",
"ZQC2wihvfYO",
"nips_2021_PEGc7x_QL2",
"nips_2021_PEGc7x_QL2",
"Ur4t-h0muW6",
"vJm3U6_hOkV",
"3SYEYCjicji",
"W_3yUq1xDx",
"zuk9gcLt3HW",
"ZSvjbiS_ew",
"YHeicpZiTcl",
"ZQC2wihvfYO",
"nips_2021_PEGc7x_QL2",
"nips_2021_PEGc7x_QL2"
] |
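The "learn how to adapt on one pair, apply it to another" idea in the abstract above is MAML-like. A first-order sketch, where theta is a dict of parameter arrays and grad_sl / grad_lt are assumed gradient oracles for the source-to-latent and latent-to-target objectives; step sizes and the first-order approximation are illustrative assumptions.

```python
def meta_adapt_step(theta, grad_sl, grad_lt, alpha=0.01, beta=0.001):
    """One first-order meta-update sketch: take an inner adaptation step on
    the source->latent pair, evaluate the adapted parameters on the
    latent->target pair, and fold that gradient back into theta."""
    g_inner = grad_sl(theta)
    theta_adapted = {k: v - alpha * g_inner[k] for k, v in theta.items()}
    g_outer = grad_lt(theta_adapted)          # "did learning-to-adapt help?"
    return {k: v - beta * g_outer[k] for k, v in theta.items()}
```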
nips_2021_ympqhd5gE9 | Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection | Detecting out-of-distribution (OOD) samples is vital for developing machine learning based models for critical safety systems. Common approaches for OOD detection assume access to some OOD samples during training which may not be available in a real-life scenario. Instead, we utilize the {\em predictive normalized maximum likelihood} (pNML) learner, in which no assumptions are made on the tested input. We derive an explicit expression of the pNML and its generalization error, denoted as the regret, for a single layer neural network (NN). We show that this learner generalizes well when (i) the test vector resides in a subspace spanned by the eigenvectors associated with the large eigenvalues of the empirical correlation matrix of the training data, or (ii) the test sample is far from the decision boundary. Furthermore, we describe how to efficiently apply the derived pNML regret to any pretrained deep NN, by employing the explicit pNML for the last layer, followed by the softmax function. Applying the derived regret to deep NN requires neither additional tunable parameters nor extra data. We extensively evaluate our approach on 74 OOD detection benchmarks using DenseNet-100, ResNet-34, and WideResNet-40 models trained with CIFAR-100, CIFAR-10, SVHN, and ImageNet-30 showing a significant improvement of up to 15.6% over recent leading methods.
| accept | The paper proposes to use predictive normalized maximum likelihood (pNML) for detecting out-of-distribution inputs.
Overall, the reviewers found it to be a well-written paper. The idea of pNML for OOD detection is novel and the empirical results show consistent improvements over baselines. The authors did a good job of addressing major reviewer concerns. During the discussion phase, the consensus leaned towards acceptance. I recommend acceptance and encourage the authors to address the remaining comments in the camera-ready version.
Other suggestions to improve the final version:
- I'd encourage the authors to evaluate the technique on harder OOD pairs (e.g. https://arxiv.org/abs/2007.05566 define near-OOD pairs such as CIFAR-100 vs CIFAR-10) as it would be interesting to see how the method performs on more difficult benchmarks.
- Section 4: The idea of weighting Eigen directions with large Eigen values seems related to variants of Mahalanobis distance such as marginal Mahalanobis distance https://arxiv.org/abs/2003.00402 and Relative Mahalanobis distance https://arxiv.org/abs/2106.09022 It would be interesting to add a discussion and potentially compare pNML to these methods. | train | [
"8rVKlQ97qok",
"Xz7gqqOozxv",
"Lvcf7WiDi5k",
"EXJ8Ea-AaE",
"uty6XHrJqGS",
"0AIMdr43GmM",
"cbj6OJYSTF6",
"O-CqVPv7g-W",
"W5I6wyR_HJx",
"mCKK9CnlmDT",
"09rxVFcVTH",
"w8CezmNVa6C",
"caP5zn2s4a"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Nice to see the results on larger images. I will keep my score.",
"This paper studies out-of-distribution detection with single-layer neural networks. There are theoretical and experimental results - for theoretical contributions the authors derive the analytical form of the NML regret, and for experimental con... | [
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"w8CezmNVa6C",
"nips_2021_ympqhd5gE9",
"mCKK9CnlmDT",
"cbj6OJYSTF6",
"nips_2021_ympqhd5gE9",
"uty6XHrJqGS",
"caP5zn2s4a",
"w8CezmNVa6C",
"09rxVFcVTH",
"Xz7gqqOozxv",
"nips_2021_ympqhd5gE9",
"nips_2021_ympqhd5gE9",
"nips_2021_ympqhd5gE9"
] |
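The subspace intuition in the abstract above (large regret when a test vector leaves the span of the large-eigenvalue directions of the training data) can be probed in a few lines. This is a sketch of that intuition via the quadratic form x^T C^{-1} x, not the paper's exact pNML regret expression; all names are illustrative.

```python
import numpy as np

def pnml_regret_proxy(x, X_train, eps=1e-6):
    """Score a test feature x against the empirical correlation matrix of
    the training features: directions with small eigenvalues are heavily
    up-weighted, so x leaving the dominant subspace yields a large score
    (flagged as OOD)."""
    C = X_train.T @ X_train / len(X_train)         # empirical correlation matrix
    lam, V = np.linalg.eigh(C)                     # eigen-decomposition
    coords = V.T @ x                               # x in the eigenbasis
    return float(np.sum(coords**2 / (lam + eps)))  # ~ x^T C^{-1} x
```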
nips_2021_OkFPq7ZtsQ | Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation | Multiple object tracking and segmentation requires detecting, tracking, and segmenting objects belonging to a set of given classes. Most approaches only exploit the temporal dimension to address the association problem, while relying on single-frame predictions for the segmentation mask itself. We propose Prototypical Cross-Attention Network (PCAN), capable of leveraging rich spatio-temporal information for online multiple object tracking and segmentation. PCAN first distills a space-time memory into a set of prototypes and then employs cross-attention to retrieve rich information from the past frames. To segment each object, PCAN adopts a prototypical appearance module to learn a set of contrastive foreground and background prototypes, which are then propagated over time. Extensive experiments demonstrate that PCAN outperforms current video instance tracking and segmentation competition winners on both Youtube-VIS and BDD100K datasets, and shows its efficacy for both one-stage and two-stage segmentation frameworks. Code and video resources are available at http://vis.xyz/pub/pcan.
| accept | All four reviewers recommend acceptance of this paper. The final ratings are: 7, 7, 6, 7.
Overall, the reviewers appreciated the interesting take on reducing the computational cost of memory-based mask propagation methods by means of matching to a small set of prototypes. The cost-accuracy tradeoff is acknowledged as very strong. The ablations are informative and convincing.
However, several reviewers comment that the presentation and description of the method should be improved to do full justice to the approach. As noted by Reviewer TiXB, the ablations on the number of prototypes are highly relevant and should be added to the paper if at all possible.
| train | [
"Gji1k4z8j4E",
"G_xS83VQEeU",
"iZLhj9boicg",
"J9G-SBVfBsg",
"OtMSlXwfyR",
"LfAImboR7MV",
"Bg0HgWJ4b7l",
"OB3h642jU3o",
"9f0C2hFA2G",
"1PKVtgdVc4d",
"JV3vMHaUXx",
"0CF-v5mrFKz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" The rebuttal has addressed my concerns, and all the reviewers agree on the good performance of the proposed method and its contribution to the community, so I keep my initial rating. Hope the authors can make the suggested edits so the technical details can be conveyed accurately.",
" Thanks for the feedback, m... | [
-1,
-1,
7,
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
4
] | [
"1PKVtgdVc4d",
"JV3vMHaUXx",
"nips_2021_OkFPq7ZtsQ",
"OB3h642jU3o",
"9f0C2hFA2G",
"nips_2021_OkFPq7ZtsQ",
"nips_2021_OkFPq7ZtsQ",
"iZLhj9boicg",
"LfAImboR7MV",
"0CF-v5mrFKz",
"Bg0HgWJ4b7l",
"nips_2021_OkFPq7ZtsQ"
] |
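A sketch of the two mechanics in the abstract above: distilling a space-time memory into a handful of prototypes, then reading them back with cross-attention. Plain k-means-style EM and dot-product softmax attention are assumed stand-ins for the paper's modules; all names are illustrative.

```python
import numpy as np

def distill_prototypes(memory, n_proto=8, n_iters=5, rng=np.random):
    """Compress an (N x C) memory into n_proto prototype vectors with a
    few k-means-style EM steps (an assumed simplification)."""
    protos = memory[rng.choice(len(memory), n_proto, replace=False)].astype(float)
    for _ in range(n_iters):
        d = ((memory[:, None] - protos[None]) ** 2).sum(-1)   # N x n_proto
        assign = d.argmin(1)
        for j in range(n_proto):
            if (assign == j).any():
                protos[j] = memory[assign == j].mean(0)
    return protos

def cross_attention_read(query, protos, tau=1.0):
    """Softmax cross-attention from (Q x C) queries onto the prototypes."""
    logits = query @ protos.T / tau
    w = np.exp(logits - logits.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ protos                                         # weighted readout
```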
nips_2021_Am_qvhPRQTq | Algorithmic Instabilities of Accelerated Gradient Descent | We study the algorithmic stability of Nesterov's accelerated gradient method. For convex quadratic objectives, Chen et al. (2018) proved that the uniform stability of the method grows quadratically with the number of optimization steps, and conjectured that the same is true for the general convex and smooth case. We disprove this conjecture and show, for two notions of algorithmic stability (including uniform stability), that the stability of Nesterov's accelerated method in fact deteriorates exponentially fast with the number of gradient steps. This stands in sharp contrast to the bounds in the quadratic case, but also to known results for non-accelerated gradient methods where stability typically grows linearly with the number of steps.
| accept | The paper studies uniform stability of Nesterov's accelerated gradient method for smooth convex optimization and proves that, unlike in the quadratic case and contrary to a conjecture by [Chen et al., 2018], the error can accumulate exponentially fast (as opposed to quadratically fast, which happens for quadratics). The crux of the approach is in introducing clever constructions of one-dimensional adversarial examples. Overall, the exposition of the paper is clear and the paper appears technically sound (as far as the reviewers could verify). The paper adds a solid contribution to the understanding of the stability of accelerated/momentum methods. | val | [
"5BDpsIsY8YY",
"_RUbynJk8k1",
"1oKTRhEyGEF",
"U4cHRmhjaR",
"A5QtRqpRmmz",
"5SBeIqP4SX0",
"NhfWefFH9fX",
"h3vOzHRK5hk",
"0L3zj3d5uFe",
"TzJV1ULsMu",
"gGAiNWM_XrJ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your answers. I wish to keep my rating at 9, since my concerns were minors, and I believe the authors can address them in the final version as stated by them. I also looked the other reviews, and I don't share most of their concerns (novelty, full-batch, NAG easy convergence proof, etc).",
" I thank ... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
9,
5,
7
] | [
-1,
-1,
-1,
1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"h3vOzHRK5hk",
"A5QtRqpRmmz",
"5SBeIqP4SX0",
"nips_2021_Am_qvhPRQTq",
"gGAiNWM_XrJ",
"U4cHRmhjaR",
"TzJV1ULsMu",
"0L3zj3d5uFe",
"nips_2021_Am_qvhPRQTq",
"nips_2021_Am_qvhPRQTq",
"nips_2021_Am_qvhPRQTq"
] |
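A small probe of the notion studied above: run Nesterov's method twice from slightly different inputs and track the gap between trajectories. Note two hedges: the paper's uniform-stability notion perturbs one *training sample* rather than the iterate, and the exponential blow-up requires carefully constructed objectives, so the benign log-cosh example here will not reproduce the worst case.

```python
import numpy as np

def nag(grad, x0, eta, T):
    """Nesterov's accelerated gradient with the standard convex-case
    momentum schedule t/(t+3) (one common illustrative choice)."""
    x, y = x0.copy(), x0.copy()
    traj = [x.copy()]
    for t in range(1, T + 1):
        x_new = y - eta * grad(y)
        y = x_new + (t / (t + 3.0)) * (x_new - x)
        x = x_new
        traj.append(x.copy())
    return np.array(traj)

grad = lambda x: np.tanh(x)                     # gradient of smooth convex log-cosh
a = nag(grad, np.array([1.0]), 0.5, 50)
b = nag(grad, np.array([1.0 + 1e-6]), 0.5, 50)
gap = np.abs(a - b).ravel()                     # per-step distance between the runs
```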
nips_2021_bDHBNVtB9XA | Learning Optimal Predictive Checklists | Checklists are simple decision aids that are often used to promote safety and reliability in clinical applications. In this paper, we present a method to learn checklists for clinical decision support. We represent predictive checklists as discrete linear classifiers with binary features and unit weights. We then learn globally optimal predictive checklists from data by solving an integer programming problem. Our method allows users to customize checklists to obey complex constraints, including constraints to enforce group fairness and to binarize real-valued features at training time. In addition, it pairs models with an optimality gap that can inform model development and determine the feasibility of learning sufficiently accurate checklists on a given dataset. We pair our method with specialized techniques that speed up its ability to train a predictive checklist that performs well and has a small optimality gap. We benchmark the performance of our method on seven clinical classification problems, and demonstrate its practical benefits by training a short-form checklist for PTSD screening. Our results show that our method can fit simple predictive checklists that perform well and that can easily be customized to obey a rich class of custom constraints.
| accept | This work proposes an IP-based solution to learning predictive clinical checklists. Overall, the paper was well received, but reviewers make a number of recommendations to help improve the paper that should be incorporated prior to presentation:
-including error bars/confidence intervals
-including an ablation of the sub-modularity heuristic and the path algorithm
-compare to the strongest baselines possible (thorough baseline hyperparameter tuning)
-framing of fairness results
I encourage authors to carefully consider reviewer feedback when working on their revision.
| val | [
"S6tArN-q5H",
"3maopZObYV",
"_cVg1j4gqvH",
"WSJ_FtrkjJ",
"swZ3RmKZcf",
"vfzko6T9x2-",
"yRUpbUNhdpk",
"nqeZu-gg-Hu",
"H7-SyO2OYGF",
"XH99FWcUORY",
"0ajS3ZqFM_s",
"MJGQ4YNkrAs",
"tmJrxYNqpIl",
"zVmb2THXdnL"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your valuable feedback, and for re-evaluating the score. We will emphasize and clarify the use of the optimality gap (in addition to our response to Reviewer hq2w), as well as remove mentions of procedural checklists from the introduction.",
"Method to train classifiers based on integer programing... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
9
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"_cVg1j4gqvH",
"nips_2021_bDHBNVtB9XA",
"nqeZu-gg-Hu",
"vfzko6T9x2-",
"yRUpbUNhdpk",
"H7-SyO2OYGF",
"zVmb2THXdnL",
"3maopZObYV",
"tmJrxYNqpIl",
"MJGQ4YNkrAs",
"nips_2021_bDHBNVtB9XA",
"nips_2021_bDHBNVtB9XA",
"nips_2021_bDHBNVtB9XA",
"nips_2021_bDHBNVtB9XA"
] |
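The model class in the abstract above is tiny and worth writing out: a checklist is a unit-weight linear classifier over binary features. A complete sketch follows; the item indices and the threshold would come out of the paper's integer program, so the ones used here are hypothetical.

```python
import numpy as np

def checklist_predict(X, items, threshold):
    """Predict positive iff at least `threshold` of the selected binary
    features (column indices in `items`) are checked."""
    return (X[:, items].sum(axis=1) >= threshold).astype(int)

# e.g. "flag if at least 2 of these 3 symptoms are present" (hypothetical)
X = np.array([[1, 0, 1, 0],
              [0, 0, 1, 0]])
y_hat = checklist_predict(X, items=[0, 2, 3], threshold=2)   # -> [1, 0]
```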
nips_2021_1Rxp-demAH0 | Finite Sample Analysis of Average-Reward TD Learning and $Q$-Learning | Sheng Zhang, Zhe Zhang, Siva Theja Maguluri | accept | This paper considers RL in the average-reward setting. In particular, it provides sample complexity guarantees for (1) TD(lambda) with linear function approximation for policy evaluation; (2) Q-learning in the tabular setting for policy optimization. Reviewers believe there is a clear disconnection between the two parts of the paper (since the algorithm/setting/proof techniques are all different). Furthermore, reviewers are concerned with the assumptions and technical contribution of the second part. Although reviewers agree the first part does have some interesting technical contributions, these contributions alone may not be sufficient for NeurIPS. | val | [
"6hsa7NW4ZZT",
"DGxoNzZePPY",
"7Ae8VMUfvOO",
"Tqb9aRnadBy",
"FgogQFv93Sy",
"0o0m7X8LYVL"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the valuable feedback. Please find our responses below:\n\n**(Main Review)[Comment 1.]** \"For the first part, I think the TD($\\lambda$) policy evaluation with linear function approximation is not that hard even if it's under the average-reward setting. Essentially it is centered aroun... | [
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
4,
3,
3
] | [
"FgogQFv93Sy",
"0o0m7X8LYVL",
"Tqb9aRnadBy",
"nips_2021_1Rxp-demAH0",
"nips_2021_1Rxp-demAH0",
"nips_2021_1Rxp-demAH0"
] |
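For context on the second part of the paper above, one common tabular average-reward ("differential") Q-learning step looks as follows. The paper analyzes a specific variant, so treat the step sizes and the mu-update rule here as illustrative assumptions.

```python
def avg_reward_q_update(Q, mu, s, a, r, s_next, alpha=0.1, beta=0.01):
    """One differential Q-learning step: the TD error subtracts a running
    reward-rate estimate mu instead of applying a discount factor.
    Q is a dict of dicts (state -> action -> value)."""
    td = r - mu + max(Q[s_next].values()) - Q[s][a]
    Q[s][a] += alpha * td
    mu += beta * td          # track the long-run average reward
    return Q, mu
```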
nips_2021_Zfk2NOSWoYg | Generalization Bounds for Graph Embedding Using Negative Sampling: Linear vs Hyperbolic | Graph embedding, which represents real-world entities in a mathematical space, has enabled numerous applications such as analyzing natural languages, social networks, biochemical networks, and knowledge bases. It has been experimentally shown that graph embedding in hyperbolic space can represent hierarchical tree-like data more effectively than embedding in linear space, owing to hyperbolic space's exponential growth property. However, since the theoretical comparison has been limited to ideal noiseless settings, the potential for the hyperbolic space's property to worsen the generalization error for practical data has not been analyzed. In this paper, we provide a generalization error bound applicable for graph embedding both in linear and hyperbolic spaces under various negative sampling settings that appear in graph embedding. Our bound states that error is polynomial and exponential with respect to the embedding space's radius in linear and hyperbolic spaces, respectively, which implies that hyperbolic space's exponential growth property worsens the error. Using our bound, we clarify the data size condition on which graph embedding in hyperbolic space can represent a tree better than in Euclidean space by discussing the bias-variance trade-off. Our bound also shows that imbalanced data distribution, which often appears in graph embedding, can worsen the error.
| accept | The paper considers the generalization error incurred when embedding graphs in linear versus hyperbolic space, in particular for embeddings obtained via negative sampling, and presents results where linear/hyperbolic space incurs polynomial/exponential error growth, respectively, as well as results on edge heterogeneity. One initially negative reviewer raised their score in the discussion phase, but still makes the good suggestion to incorporate the comments to improve readability. | train | [
"BggGS7Z_25",
"NJkB-wBeZ9N",
"kB6K1N7zNP5",
"JyuaQ9vrUMY",
"pDG-4IYs4qv",
"QUOaLreogYb",
"QmnV0GCAEfS",
"buxZgPHgKyJ",
"wcJS8L2_cvB",
"g7BIYi1FQnj",
"6Lp7qWbc8nr",
"HXDdS0MSywU",
"5pTfbJaaCgD",
"GKaTpb5p0fe"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your comments thinking highly of our work. We will further improve the readability of the final version by reflecting on your feedback.",
" We appreciate your comments confirming that our reply has addressed your concerns. We will further improve the readability of the final version by reflecting ... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"JyuaQ9vrUMY",
"QUOaLreogYb",
"QmnV0GCAEfS",
"buxZgPHgKyJ",
"nips_2021_Zfk2NOSWoYg",
"g7BIYi1FQnj",
"wcJS8L2_cvB",
"HXDdS0MSywU",
"GKaTpb5p0fe",
"pDG-4IYs4qv",
"5pTfbJaaCgD",
"nips_2021_Zfk2NOSWoYg",
"nips_2021_Zfk2NOSWoYg",
"nips_2021_Zfk2NOSWoYg"
] |
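The two geometries compared above differ only in the distance plugged into the negative-sampling loss. A minimal sketch, assuming embeddings lie in the open unit ball for the Poincare case and using a sigmoid-of-distance link (one common illustrative choice, not the paper's exact loss).

```python
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    """Distance in the Poincare ball model of hyperbolic space
    (requires ||u||, ||v|| < 1)."""
    uu, vv = (u * u).sum(), (v * v).sum()
    d2 = ((u - v) ** 2).sum()
    return np.arccosh(1.0 + 2.0 * d2 / ((1.0 - uu) * (1.0 - vv) + eps))

def neg_sampling_loss(zu, zv, z_negs, dist):
    """Negative-sampling objective: pull an observed edge (u, v) together,
    push sampled non-edges apart. `dist` is either the Euclidean norm
    (linear space) or poincare_dist (hyperbolic space)."""
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    loss = -np.log(sig(-dist(zu, zv)))            # observed edge
    for zn in z_negs:
        loss -= np.log(sig(dist(zu, zn)))         # sampled negatives
    return loss
```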
nips_2021_aExAsh1UHZo | Gradient Starvation: A Learning Proclivity in Neural Networks | We identify and formalize a fundamental gradient descent phenomenon resulting in a learning proclivity in over-parameterized neural networks. Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task, despite the presence of other predictive features that fail to be discovered. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks. Using tools from Dynamical Systems theory, we identify simple properties of learning dynamics during gradient descent that lead to this imbalance, and prove that such a situation can be expected given certain statistical structure in training data. Based on our proposed formalism, we develop guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation. We illustrate our findings with simple and real-world out-of-distribution (OOD) generalization experiments.
| accept | The paper studies a phenomenon, referred to as "gradient starvation" (e.g., https://arxiv.org/abs/1809.06848), in which only a subset of features relevant for the task is captured during training, despite the presence of other predictive features. The reviewers found the explicit study of this phenomenon interesting and of practical relevance, and generally appreciated the numerical results provided in the paper. However, whereas some of the reviewers felt that the focus on the NTK regime is fair, others raised several significant concerns regarding the technical contributions of the work, in part (but not only) due to the by now well-established discrepancy between NTK and neural network models. | test | [
"zi_9lVZtugX",
"9J0JhHklrm_",
"nf1c8lXPkSi",
"xx9t5ZjgS6a",
"VmMDDPsV7nQ",
"EqlmHCPfKN",
"b3BuBX9yy6U",
"9V3f2CUn5qP",
"ub19V9G4i5w",
"rHBDrEM4Sjj",
"UFDa2YNqxB_",
"2EjVpIOqD3E",
"0LcWIJHIWNq",
"ld4gjo54df",
"2-bZA8DPkpb"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers,\n\nThanks to your constructive comments, we believe our paper is considerably improved, but we recognize that it is still on the boundary of acceptance. There have been some misunderstandings that we have done our best to make clear. As we reach the end of the discussion period, we would like to e... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
3
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"nips_2021_aExAsh1UHZo",
"nips_2021_aExAsh1UHZo",
"b3BuBX9yy6U",
"ub19V9G4i5w",
"rHBDrEM4Sjj",
"0LcWIJHIWNq",
"2-bZA8DPkpb",
"ld4gjo54df",
"9J0JhHklrm_",
"2EjVpIOqD3E",
"nips_2021_aExAsh1UHZo",
"nips_2021_aExAsh1UHZo",
"nips_2021_aExAsh1UHZo",
"nips_2021_aExAsh1UHZo",
"nips_2021_aExAsh1U... |
nips_2021_wgeK563QgSw | Offline Reinforcement Learning as One Big Sequence Modeling Problem | Reinforcement learning (RL) is typically viewed as the problem of estimating single-step policies (for model-free RL) or single-step models (for model-based RL), leveraging the Markov property to factorize the problem in time. However, we can also view RL as a sequence modeling problem: predict a sequence of actions that leads to a sequence of high rewards. Viewed in this way, it is tempting to consider whether powerful, high-capacity sequence prediction models that work well in other supervised learning domains, such as natural-language processing, can also provide simple and effective solutions to the RL problem. To this end, we explore how RL can be reframed as "one big sequence modeling" problem, using state-of-the-art Transformer architectures to model distributions over sequences of states, actions, and rewards. Addressing RL as a sequence modeling problem significantly simplifies a range of design decisions: we no longer require separate behavior policy constraints, as is common in prior work on offline model-free RL, and we no longer require ensembles or other epistemic uncertainty estimators, as is common in prior work on model-based RL. All of these roles are filled by the same Transformer sequence model. In our experiments, we demonstrate the flexibility of this approach across imitation learning, goal-conditioned RL, and offline RL.
| accept | The reviews were thorough and there was good interaction with the authors.
I'd like to suggest that the authors may have come off as overly aggressive in their pushing for additional reviewer interaction, especially when the interaction was already quite good. This aggressiveness may backfire sometime in the future.
That said, the paper does seem to make a reasonable contribution and the delta between what is promised and the original submission is not overly large. | train | [
"gFYeAP5cZql",
"4bf_iBy0ZzN",
"Zu7J3XInQxj",
"JtNfpJRRVK",
"-DIJuDjK6xk",
"BpMvWxvVKu",
"Q-vZtLZse7n",
"uVWcIWvUDg",
"fzSlq6-GpB",
"yMVL3dxFZA",
"ugUXCsqttaq",
"532_SAWWMGm",
"ClJQsaZRk9E",
"zgg-lWVW7un",
"6oBJoUFOj-K"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" To summarize the main points:\n1. We have run the requested PlaNet baseline.\n2. We have provided 1000 more attention maps. \n3. We have benchmarked a continuous variant of the model and an alternative (quantile regression-based) discretization approach.\n4. We have clarified that we do not perform reward conditi... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
5
] | [
"uVWcIWvUDg",
"nips_2021_wgeK563QgSw",
"JtNfpJRRVK",
"-DIJuDjK6xk",
"BpMvWxvVKu",
"ClJQsaZRk9E",
"4bf_iBy0ZzN",
"6oBJoUFOj-K",
"532_SAWWMGm",
"nips_2021_wgeK563QgSw",
"6oBJoUFOj-K",
"yMVL3dxFZA",
"4bf_iBy0ZzN",
"nips_2021_wgeK563QgSw",
"nips_2021_wgeK563QgSw"
] |
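The "one big sequence" view above starts with turning transitions into tokens. A minimal uniform-discretization sketch, with per-dimension bounds assumed known; the bin count, the `bounds` dict layout, and the function names are illustrative assumptions.

```python
import numpy as np

def discretize(x, low, high, n_bins=100):
    """Map each real value to one of n_bins uniform-width integer tokens
    over [low, high]."""
    x = np.clip(x, low, high)
    return ((x - low) / (high - low) * (n_bins - 1)).astype(int)

def tokenize_step(s, a, r, bounds, n_bins=100):
    """Flatten one (state, action, reward) step into a token sequence so a
    standard autoregressive Transformer can model trajectories end to end.
    bounds maps "s"/"a"/"r" to (low, high) pairs."""
    toks = [discretize(s, *bounds["s"], n_bins),
            discretize(a, *bounds["a"], n_bins),
            discretize(np.array([r]), *bounds["r"], n_bins)]
    return np.concatenate(toks)
```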
nips_2021_GF20UbpcqKT | Optimality and Stability in Federated Learning: A Game-theoretic Approach | Federated learning is a distributed learning paradigm where multiple agents, each only with access to local data, jointly learn a global model. There has recently been an explosion of research aiming not only to improve the accuracy rates of federated learning, but also provide certain guarantees around social good properties such as total error. One branch of this research has taken a game-theoretic approach, and in particular, prior work has viewed federated learning as a hedonic game, where error-minimizing players arrange themselves into federating coalitions. This past work proves the existence of stable coalition partitions, but leaves open a wide range of questions, including how far from optimal these stable solutions are. In this work, we motivate and define a notion of optimality given by the average error rates among federating agents (players). First, we provide and prove the correctness of an efficient algorithm to calculate an optimal (error minimizing) arrangement of players. Next, we analyze the relationship between the stability and optimality of an arrangement. First, we show that for some regions of parameter space, all stable arrangements are optimal (Price of Anarchy equal to 1). However, we show this is not true for all settings: there exist examples of stable arrangements with higher cost than optimal (Price of Anarchy greater than 1). Finally, we give the first constant-factor bound on the performance gap between stability and optimality, proving that the total error of the worst stable solution can be no higher than 9 times the total error of an optimal solution (Price of Anarchy bound of 9).
| accept | The rebuttals helped the Reviewers to understand better the original contributions provided in the paper. However, I have two major suggestions for the authors when producing the camera-ready version of the paper:
1. clarify better the assumptions, even those made implicitly, and clarifying them in the formal statements;
2. re-organize the first three sections to clarify which are the original contributions and which are related works, in particular it would be nice to have one section entirely devoted to preliminaries and model definition. | train | [
"20zBp42nVq8",
"Dtk7di3uQss",
"lKq-xx7-PgZ",
"jZbqbYDePA",
"mP5TVlNXiNp",
"MeK8sY9n13Q",
"hWIpK8XyYyP",
"c88t1BBpFOA",
"_W3uRXZ_u0a",
"uComn0tdRi0",
"f3uBBNTZWlJ",
"VLO3HL8xep",
"zA0eIPYDmz5"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks. When there is no stable solution, I think the danger still exists. E.g., the system could oscillate between states with high costs. I think your second point provides a stronger argument, and I'm convinced overall.",
" Thanks for the clarification. I understand that analyze the federated learning model ... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
4
] | [
"_W3uRXZ_u0a",
"hWIpK8XyYyP",
"nips_2021_GF20UbpcqKT",
"nips_2021_GF20UbpcqKT",
"zA0eIPYDmz5",
"VLO3HL8xep",
"lKq-xx7-PgZ",
"f3uBBNTZWlJ",
"uComn0tdRi0",
"nips_2021_GF20UbpcqKT",
"nips_2021_GF20UbpcqKT",
"nips_2021_GF20UbpcqKT",
"nips_2021_GF20UbpcqKT"
] |
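On toy instances, the stability-versus-optimality gap in the abstract above can be brute-forced over all coalition partitions. In the sketch below, `err(i, C)` and the stability predicate `is_stable` are assumed user-supplied oracles matching whatever error model is at hand; everything else is standard enumeration.

```python
def partitions(players):
    """All set partitions of a small player list (Bell-number many)."""
    players = list(players)
    if not players:
        yield []
        return
    first, rest = players[0], players[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield part + [[first]]

def total_error(part, err):
    """Sum of every player's error under the arrangement `part`."""
    return sum(err(i, tuple(C)) for C in part for i in C)

def price_of_anarchy(players, err, is_stable):
    """Worst stable arrangement's total error over the optimum (brute force)."""
    parts = list(partitions(players))
    costs = [total_error(p, err) for p in parts]
    opt = min(costs)
    stable_costs = [c for p, c in zip(parts, costs) if is_stable(p, err)]
    return max(stable_costs) / opt if stable_costs else float("inf")
```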
nips_2021_lLP77dROaJ | Understanding Deflation Process in Over-parametrized Tensor Decomposition | In this paper we study the training dynamics for gradient flow on over-parametrized tensor decomposition problems. Empirically, such a training process often first fits larger components and then discovers smaller components, which is similar to a tensor deflation process that is commonly used in tensor decomposition algorithms. We prove that for orthogonally decomposable tensors, a slightly modified version of gradient flow would follow a tensor deflation process and recover all the tensor components. Our proof suggests that for orthogonal tensors, gradient flow dynamics works similarly to greedy low-rank learning in the matrix setting, which is a first step towards understanding the implicit regularization effect of over-parametrized models for low-rank tensors.
| accept | All the reviewers reached the consensus that the paper makes a valuable, interesting contribution and should be accepted to the conference. At the beginning of the discussion period, some concerns were raised about the practical utility of the results, but after discussion among reviewers, the reviewers all agreed that the contribution is a nice technical addition to the growing literature on implicit regularization in factorization methods. | train | [
"C05z6o5lAOC",
"O8aGOlxauW",
"_oQynVFOpMt",
"Nah8sen36Bw",
"FhHzQzDGcki",
"FNPv8hc24d",
"NIevlC5A4G8",
"U7uwDN-Fhnm",
"Ve67T4vuFaW",
"LMmJUOCGXsj",
"LIgLG950onG",
"Q_PS7-70rfo",
"5ZHXXcXAhM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper analyzes gradient descent on the orthogonal tensor decomposition problem. More specifically, they consider a ground 4th order tensor with r orthonormal factors and an overparametrized estimate with m>r factors, and analyze the performance of (a variant of) gradient descent on the problem.\n\nThe paper es... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2021_lLP77dROaJ",
"LMmJUOCGXsj",
"U7uwDN-Fhnm",
"FhHzQzDGcki",
"FNPv8hc24d",
"NIevlC5A4G8",
"5ZHXXcXAhM",
"Q_PS7-70rfo",
"C05z6o5lAOC",
"LIgLG950onG",
"nips_2021_lLP77dROaJ",
"nips_2021_lLP77dROaJ",
"nips_2021_lLP77dROaJ"
] |
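The dynamics described above are easy to reproduce qualitatively: run gradient descent on an over-parameterized 4th-order decomposition from a tiny initialization and watch the larger ground-truth components get fit first. Plain GD here stands in for the paper's (slightly modified) gradient flow, and all hyperparameters are illustrative.

```python
import numpy as np

def fit_tensor(T, m, lr=0.05, steps=2000, seed=0):
    """Gradient descent on || T - sum_i a_i^{(x4)} ||^2 with m components
    (over-parameterized when m exceeds the rank), initialized small so the
    deflation-like ordering of the fitting process is visible."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    A = 0.01 * rng.standard_normal((m, d))                       # tiny init
    for _ in range(steps):
        R = T - np.einsum("ia,ib,ic,id->abcd", A, A, A, A)       # residual
        G = -8.0 * np.einsum("abcd,ib,ic,id->ia", R, A, A, A)    # gradient
        A -= lr * G
    return A   # inspect np.linalg.norm(A, axis=1) over time for deflation
```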
nips_2021_YBanVDVEbVe | Privately Learning Subspaces | Private data analysis suffers a costly curse of dimensionality. However, the data often has an underlying low-dimensional structure. For example, when optimizing via gradient descent, the gradients often lie in or near a low-dimensional subspace. If that low-dimensional structure can be identified, then we can avoid paying (in terms of privacy or accuracy) for the high ambient dimension. We present differentially private algorithms that take input data sampled from a low-dimensional linear subspace (possibly with a small amount of error) and output that subspace (or an approximation to it). These algorithms can serve as a pre-processing step for other procedures.
| accept | The paper studies the problem of differentially private subspace learning. Under an assumption on the low-dimensional nature of the data, the authors present algorithms that enjoy a sample complexity scaling with the dimension of the subspace and not with the ambient dimension, thereby improving on existing results. Overall, a good paper. | train | [
"jnKUc_YHWc0",
"B-zYlGY-SGI",
"An7ZhwqPrK",
"_fDYcXzext",
"Hg_-VdN5BPG",
"NEzZKId_Fb",
"SlZ-b9RBw_o",
"1FpUpIezEwS",
"-ksU4YqiTi",
"DmUO5dPNomf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the problem of learning a low-dimensional subspace from high-dimensional data in a differentially private manner. The emphasis is on getting sample complexity that only depends on the dimension of the subspace but not the dimension of the original space. Such a guarantee is not achieved by previ... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_YBanVDVEbVe",
"Hg_-VdN5BPG",
"nips_2021_YBanVDVEbVe",
"nips_2021_YBanVDVEbVe",
"An7ZhwqPrK",
"jnKUc_YHWc0",
"DmUO5dPNomf",
"-ksU4YqiTi",
"nips_2021_YBanVDVEbVe",
"nips_2021_YBanVDVEbVe"
] |
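To see why the ambient dimension normally hurts, here is a generic Gaussian-mechanism baseline for private subspace recovery. This illustrates the problem setup only; it is *not* the paper's dimension-independent algorithm, and sigma would have to be calibrated to the sensitivity of the second-moment matrix for a formal DP guarantee.

```python
import numpy as np

def noisy_subspace(X, k, sigma, seed=0):
    """Perturb the empirical second-moment matrix with symmetric Gaussian
    noise and return its top-k eigenvectors (columns). The d x d noise is
    exactly where ambient-dimension dependence creeps in."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    C = X.T @ X / len(X)
    E = rng.standard_normal((d, d)) * sigma
    C_noisy = C + (E + E.T) / 2              # keep the matrix symmetric
    _, V = np.linalg.eigh(C_noisy)
    return V[:, -k:]
```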
nips_2021_Xa9Ba6NsJ6 | On the Value of Interaction and Function Approximation in Imitation Learning | Nived Rajaraman, Yanjun Han, Lin Yang, Jingbo Liu, Jiantao Jiao, Kannan Ramchandran | accept | This is a good paper that studies the statistical properties of imitation learning under various regimes, e.g., an interactive setting and several non-interactive settings. The reviewers found the paper technically sound and the results significant.
The shortcomings of the paper are:
- Not citing and comparing with a recent paper by Swamy et al., ICML, 2021. Some of the results of these two papers are comparable.
I believe this omission in the initial submission is acceptable, given the recency of the paper and that its arXiv version was uploaded on 4 March 2021, relatively close to the NeurIPS deadline. That being said, I encourage the authors to provide a detailed comparison with that work.
- The exposition can be improved, for example, in how the Introduction is presented.
Please consult the reviews for concrete suggestions.
Overall, I believe this paper should be accepted. | train | [
"ZxFxJU5wpPO",
"R2aLoRDf13R",
"kV1xSjew7-F",
"Af07l0DflfM",
"qE9lDwclY-U",
"PqO_Ret8_eC",
"aiFvBo2GKLH",
"W9yFkACYyGt",
"ff-nKEh6V-c",
"vF02vGnnRCF",
"6ONE0KUu69d"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" -",
"The paper examines statistical guarantees of imitation learning under various regimes -- interactive setting, non-interactive setting, and non-interactive setting with known dynamics. It further looks at the linear setting and derives bounds for non-interactive settings with/without known dynamics. The mai... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"qE9lDwclY-U",
"nips_2021_Xa9Ba6NsJ6",
"aiFvBo2GKLH",
"W9yFkACYyGt",
"vF02vGnnRCF",
"ff-nKEh6V-c",
"R2aLoRDf13R",
"6ONE0KUu69d",
"nips_2021_Xa9Ba6NsJ6",
"nips_2021_Xa9Ba6NsJ6",
"nips_2021_Xa9Ba6NsJ6"
] |
nips_2021_ZjGr1tMVbjw | Shapeshifter: a Parameter-efficient Transformer using Factorized Reshaped Matrices | Language models employ a very large number of trainable parameters. Despite being highly overparameterized, these networks often achieve good out-of-sample test performance on the original task and easily fine-tune to related tasks. Recent observations involving, for example, intrinsic dimension of the objective landscape and the lottery ticket hypothesis, indicate that often training actively involves only a small fraction of the parameter space. Thus, a question remains how large a parameter space needs to be in the first place –- the evidence from recent work on model compression, parameter sharing, factorized representations, and knowledge distillation increasingly shows that models can be made much smaller and still perform well. Here, we focus on factorized representations of matrices that underpin dense, embedding, and self-attention layers. We use low-rank factorized representation of a reshaped and rearranged original matrix to achieve space efficient and expressive linear layers. We prove that stacking such low-rank layers increases their expressiveness, providing theoretical understanding for their effectiveness in deep networks. In Transformer models, our approach leads to more than ten-fold reduction in the number of total trainable parameters, including embedding, attention, and feed-forward layers, with little degradation in on-task performance. The approach operates out-of-the-box, replacing each parameter matrix with its compact equivalent while maintaining the architecture of the network.
| accept | The paper attempts to improve parameter efficiency of transformer models for sequence modelling. In this regard, the authors propose a novel replacement of all matrices in the transformer by sums of Kronecker products (a type of low-rank factorized representation) and some reshaping tricks. Further, the expressivity of such Kronecker-based linear layers is analysed. Empirical results on translation look promising (better model-size vs performance trade-off as compared to other compression approaches for transformers). We thank the reviewers and authors for engaging in an active discussion, which resulted in clearing a lot of the concerns (e.g. speed/flops), and a lot of constructive feedback was provided to improve the paper. The authors provided new empirical results as part of the discussion; please include them in the final version of the paper as they add great value and understanding to the model as a whole.
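For readers who want to see the mechanics, here is a minimal sketch of applying a sum of Kronecker products to a vector without materializing the full matrix; the factor shapes below are illustrative assumptions, and the paper's additional reshaping and rearranging of the original matrix is omitted:

```python
import numpy as np

def kron_matvec(factors, x):
    # Apply (sum_i kron(A_i, B_i)) @ x without forming the full matrix,
    # via the identity kron(A, B) @ vec(X) = vec(B @ X @ A.T).
    A0, B0 = factors[0]
    m, n = B0.shape[1], A0.shape[1]       # input dims of the two factors
    X = x.reshape((m, n), order="F")      # un-vectorize (column-major)
    Y = sum(B @ X @ A.T for A, B in factors)
    return Y.reshape(-1, order="F")       # re-vectorize the output

# Sanity check against the dense construction on random factors.
rng = np.random.default_rng(0)
factors = [(rng.normal(size=(3, 4)), rng.normal(size=(5, 6))) for _ in range(2)]
x = rng.normal(size=4 * 6)
dense = sum(np.kron(A, B) for A, B in factors)
assert np.allclose(dense @ x, kron_matvec(factors, x))
```

This also hints at why such layers can be slower in wall-clock time despite having far fewer parameters: one large matvec is replaced by several smaller matrix products plus reshapes.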
- Please remove "fast" from abstract in the final version as the proposed method is slower than baseline in both training and inference. | train | [
"slrJqRzBve4",
"F5H5HQIVFIv",
"y1xkp_ctQi",
"zfBCnHCCu2d",
"XinOhTyKUK",
"tXde1BmsCrY",
"fZ9LcYFhiWS",
"JpHXfs6VxPU",
"VQbMjWngxT3",
"jg20pgCmKdz",
"Pw4JxlX4VHp",
"--pXjr_9FBS",
"emXhKB3joRA",
"ju9jKzdbh5j",
"y5mRJDZ6ekq",
"B_nw6dDb77N"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the timing experiments. I stand by my score.",
" Thank you for the response, and for answering my questions about the architecture. I stand by my score.",
"Shapeshifter reparameterizes dense kernels into a product of two lower-rank matrices. Shapeshifter introduces reshapes and transposes to make t... | [
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"jg20pgCmKdz",
"JpHXfs6VxPU",
"nips_2021_ZjGr1tMVbjw",
"VQbMjWngxT3",
"jg20pgCmKdz",
"nips_2021_ZjGr1tMVbjw",
"tXde1BmsCrY",
"B_nw6dDb77N",
"y1xkp_ctQi",
"y5mRJDZ6ekq",
"ju9jKzdbh5j",
"emXhKB3joRA",
"nips_2021_ZjGr1tMVbjw",
"nips_2021_ZjGr1tMVbjw",
"nips_2021_ZjGr1tMVbjw",
"nips_2021_Z... |
nips_2021_zyD5AiyLuzG | The Adaptive Doubly Robust Estimator and a Paradox Concerning Logging Policy | The doubly robust (DR) estimator, which consists of two nuisance parameters, the conditional mean outcome and the logging policy (the probability of choosing an action), is crucial in causal inference. This paper proposes a DR estimator for dependent samples obtained from adaptive experiments. To obtain an asymptotically normal semiparametric estimator from dependent samples without non-Donsker nuisance estimators, we propose adaptive-fitting as a variant of sample-splitting. We also report an empirical paradox that our proposed DR estimator tends to show better performances compared to other estimators utilizing the true logging policy. While a similar phenomenon is known for estimators with i.i.d. samples, traditional explanations based on asymptotic efficiency cannot elucidate our case with dependent samples. We confirm this hypothesis through simulation studies.
| accept | The paper studies off-policy estimation using data collected by a changing (adaptive) and unknown logging policy. This is an important and under-explored setting, since adaptive experimentation is increasingly common in many application domains (e.g., online platforms). The paper gives an adaptive doubly-robust estimator, establishes semiparametric efficiency of the estimator, and also provides some nice simulation studies that showcase how fitting propensities may be beneficial even when the true ones are known.
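For context, the classical doubly robust (AIPW) score that the adaptive estimator builds on has the form (generic notation, not the paper's):

$$\hat{\theta}(a) = \frac{1}{T}\sum_{t=1}^{T}\Big[\hat{\mu}_t(a, X_t) + \frac{\mathbb{1}\{A_t = a\}}{\hat{e}_t(a \mid X_t)}\big(Y_t - \hat{\mu}_t(a, X_t)\big)\Big],$$

which remains consistent if either the outcome model $\hat{\mu}_t$ or the logging-policy estimate $\hat{e}_t$ is correct; the paper's contribution lies in how these nuisances are fitted (adaptive-fitting, a variant of sample-splitting) so that asymptotic normality survives the dependence between samples.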
Overall, all of the reviewers were quite positive about the paper, and so we are recommending acceptance. Please incorporate comments from the discussion section into the final version, as this will clear up some confusions and improve the manuscript. | train | [
"erERNNk7BY",
"HJDE814pzPs",
"70hLW4PATKy",
"mup5z2JydvU",
"21mJlPvqFmY",
"CrkB3i9ggS",
"FNg5_KXQGFy",
"my14l4aiF0N",
"5Go_0FvDY1u",
"0TeXksLN9Bb"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"In the setting studied in this paper, the analyst has access to data consisting of treatment assignments $A_t \\\\in \\\\mathcal{A} = \\\\{1,...,K\\\\}$, covariates $X_t$ and responses $Y_t = Y_t(A_t)$, where $(Y_t(a))_{a \\\\in \\\\mathcal{A}}$ are potential outcomes. The data are collected at time steps $t=1,...... | [
6,
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1
] | [
3,
-1,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1
] | [
"nips_2021_zyD5AiyLuzG",
"21mJlPvqFmY",
"FNg5_KXQGFy",
"nips_2021_zyD5AiyLuzG",
"0TeXksLN9Bb",
"nips_2021_zyD5AiyLuzG",
"5Go_0FvDY1u",
"erERNNk7BY",
"CrkB3i9ggS",
"mup5z2JydvU"
] |
nips_2021_BGS3o8SpjI3 | Regularized Softmax Deep Multi-Agent Q-Learning | Ling Pan, Tabish Rashid, Bei Peng, Longbo Huang, Shimon Whiteson | accept | The paper addresses an important, well-known problem (Q-value overestimation), but in a context that is more complex than the traditional setup (multi-agent RL). In this context the authors claim that the problem is less studied, which the reviewers generally don't contest (and I personally agree as well). This paper studies the problem in this context in a clear way, and also proposes an algorithm to mitigate it.
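As background for the overestimation discussion, here is a minimal sketch of the Boltzmann softmax operator that this line of work substitutes for the hard max in the Bellman target; the parameterization is generic, and the paper's regularized multi-agent update is more involved than this:

```python
import numpy as np

def softmax_operator(q_values, beta):
    # Softmax-weighted backup: interpolates between the mean (beta = 0)
    # and the max (beta -> inf), damping the overestimation bias that a
    # hard max accumulates from noisy Q-estimates.
    z = beta * (q_values - q_values.max())   # shift for numerical stability
    w = np.exp(z)
    w /= w.sum()
    return float(w @ q_values)

print(softmax_operator(np.array([1.0, 2.0, 3.0]), beta=5.0))
```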
Two of the reviewers are voting to accept the paper, whereas the third is recommending a reject. I still recommend acceptance, because that's the majority vote but also because I am not convinced that the third reviewer (QGZD) has a full understanding of the paper. The authors have addressed the criticisms of QGZD and I tend to agree with them. Unfortunately QGZD hasn't answered the rebuttal. | val | [
"0KbcYRVuFsY",
"XPjky4SxcsP",
"m5yky4EBo4",
"myLLfRfEHv6",
"hoiBrOWprl",
"zDUfnMRCGkT"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed evaluation of our paper and thoughtful reviews, and the comments are greatly appreciated!\n\n*Q1: The paper shows that RES-QMIX is quite insensitive to $\\beta$. Could authors give more intuition on how this happens? Are there any experiments on other tasks besides Figure 7?*\n\n- Low ... | [
-1,
-1,
-1,
7,
4,
7
] | [
-1,
-1,
-1,
3,
4,
4
] | [
"zDUfnMRCGkT",
"myLLfRfEHv6",
"hoiBrOWprl",
"nips_2021_BGS3o8SpjI3",
"nips_2021_BGS3o8SpjI3",
"nips_2021_BGS3o8SpjI3"
] |
nips_2021_A_TVp2HtxPS | Physics-Aware Downsampling with Deep Learning for Scalable Flood Modeling | Background. Floods are the most common natural disaster in the world, affecting the lives of hundreds of millions. Flood forecasting is therefore a vitally important endeavor, typically achieved using physical water flow simulations, which rely on accurate terrain elevation maps. However, such simulations, based on solving partial differential equations, are computationally prohibitive on a large scale. This scalability issue is commonly alleviated using a coarse grid representation of the elevation map, though this representation may distort crucial terrain details, leading to significant inaccuracies in the simulation. Contributions. We train a deep neural network to perform physics-informed downsampling of the terrain map: we optimize the coarse grid representation of the terrain maps, so that the flood prediction will match the fine grid solution. For the learning process to succeed, we configure a dataset specifically for this task. We demonstrate that with this method, it is possible to achieve a significant reduction in computational cost, while maintaining an accurate solution. A reference implementation accompanies the paper as well as documentation and code for dataset reproduction.
| accept | The paper proposes a method for physics-informed downsampling of the terrain map for scalable flood modeling. The application problem is extremely important and challenging. The proposed solution, while not totally novel, is reasonable. The experiment would be more convincing if thorough evaluation can be conducted with strong baselines in the fields. We hope that the authors can seriously consider improving the evaluation in the final version of the paper. | train | [
"wCnbmJfSz5q",
"jg9q5XoA2_5",
"y6PLOwdLbPB",
"id6DTBXF-b",
"7tpReLJoiSw",
"zUyadP7X__e",
"tlvCpuqaYPD",
"3WRWjlVucp",
"hubk26hU38j"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a deep learning-based method for coarse-grain discretization in solving the PDEs of flood modeling. A ResNet is used to downsample the input terrain, and the coarse-grain output is then fed into a numerical solver. A fine-grain numerical solver generates the ground truth for end-to-end training ... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_A_TVp2HtxPS",
"zUyadP7X__e",
"nips_2021_A_TVp2HtxPS",
"hubk26hU38j",
"y6PLOwdLbPB",
"wCnbmJfSz5q",
"3WRWjlVucp",
"nips_2021_A_TVp2HtxPS",
"nips_2021_A_TVp2HtxPS"
] |
nips_2021_UUds0Jr_XWk | Systematic Generalization with Edge Transformers | Recent research suggests that systematic generalization in natural language understanding remains a challenge for state-of-the-art neural models such as Transformers and Graph Neural Networks. To tackle this challenge, we propose Edge Transformer, a new model that combines inspiration from Transformers and rule-based symbolic AI. The first key idea in Edge Transformers is to associate vector states with every edge, that is, with every pair of input nodes---as opposed to just every node, as it is done in the Transformer model. The second major innovation is a triangular attention mechanism that updates edge representations in a way that is inspired by unification from logic programming. We evaluate Edge Transformer on compositional generalization benchmarks in relational reasoning, semantic parsing, and dependency parsing. In all three settings, the Edge Transformer outperforms Relation-aware, Universal and classical Transformer baselines.
| accept | This paper proposes 'Edge Transformer', a Transformer architecture that assigns an embedding to the edge between each pair of tokens. A 'triangular attention' is used to update the edge embeddings: for edge a-b, it computes a weighted combination of the products of edge embeddings a-n * n-b over each intermediary node n (hence the N^3 cost). The authors test this method on systematic generalization datasets such as CLUTRR (link prediction) and on a subtask derived from CFQ (dependency label prediction).
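A naive numpy sketch of that triangular update follows; the query/key parameterization is an illustrative assumption, while the a-n * n-b composition over intermediaries and the resulting N^3 cost follow the description above:

```python
import numpy as np

def triangular_update(E, Wq, Wk):
    # E: (N, N, d) edge embeddings. The new state of edge (a, b) is an
    # attention-weighted combination, over intermediaries n, of the
    # elementwise products E[a, n] * E[n, b] -- O(N^3) pairs in total.
    N, _, d = E.shape
    out = np.zeros_like(E)
    for a in range(N):
        for b in range(N):
            q = E[a, b] @ Wq
            keys = E[a] @ Wk                      # (N, d): one key per n
            att = np.exp(q @ keys.T / np.sqrt(d))
            att /= att.sum()
            vals = E[a] * E[:, b]                 # row n is E[a, n] * E[n, b]
            out[a, b] = att @ vals
    return out

E = np.random.default_rng(0).normal(size=(5, 5, 8))
out = triangular_update(E, np.eye(8), np.eye(8))
```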
--
The paper was generally found clear, although some concerns about the experimental setup need to be addressed in the revision. A reviewer found improvements over baselines for systematic generalization to be "impressive". There has been an extensive back-and-forth between reviewers and authors on the validity of the baselines for the task considered. I thank both reviewers and authors for their fruitful exchange. The authors provided new positive results for the baselines suggested by reviewers. Initial concerns centered on the synthetic nature of the task considered; the authors improved on this point by providing new results on the full semantic parsing task of CFQ, testing on each MCD split: the Edge Transformer outperforms carefully tuned previous (non-pre-trained) transformers for this task. These results must still be taken with a grain of salt, given that pre-trained models achieve better results. However, they strengthen the paper considerably. One suggestion I can think of in this respect is whether the authors could start from a pre-trained T5 model, add parameters for their edge attention, and fine-tune everything on the downstream CFQ task. This could give a better idea of whether this model holds promise in the general setting, i.e., whether gains can be obtained without retraining the whole model from scratch. Even if the potential impact of the novel attention mechanism extends to a large set of tasks, major concerns remain around the computational complexity of this approach (N^3), which makes it hard to apply in general.
Overall, the reader might be left to wonder whether this method is applicable or beneficial when it comes to large-scale pre-training of Transformer architectures. Nevertheless, provided that the authors integrate all the reviewers' valuable feedback and the new experimental results into the final version of the paper, the reviewers and I agree that this paper can make an interesting addition to the conference. | train | [
"sV4p97G_W0H",
"v2d9tUp6SX",
"u7Xz9naZnMY",
"p75e4SBvrYf",
"gpFJjKLqJSA",
"4QL7LxWijQ6",
"QDA2iaBcEma",
"xifU6Vn-_R2",
"DtTjIssrlUw",
"MKMARsYB4Y",
"vgo9EwiUpVm",
"wGCLR5qlXqv",
"SzRcOKY4doW",
"jU-SL2VRQLA",
"FPDeMlFr1W",
"1tkR1pIQwA5",
"pKdoUoTlh19",
"ZzzMDOEmbQC",
"Z1zvcpeFFa",... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"... | [
" Dear Reviewer V441,\n\nWe hope that you are doing well. In your review, you wrote:\n\n> For dependency parsing tasks, it would be better to test the method on more benchmark datasets\n\nSince submission, we have performed experiments on a new task, CFQ semantic parsing, described in more detail [here](https://ope... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"3299S39BqEk",
"nips_2021_UUds0Jr_XWk",
"p75e4SBvrYf",
"QDA2iaBcEma",
"4QL7LxWijQ6",
"xifU6Vn-_R2",
"nips_2021_UUds0Jr_XWk",
"1tkR1pIQwA5",
"MKMARsYB4Y",
"vgo9EwiUpVm",
"wGCLR5qlXqv",
"SzRcOKY4doW",
"jU-SL2VRQLA",
"rZxpncLLEpJ",
"nips_2021_UUds0Jr_XWk",
"pKdoUoTlh19",
"FPDeMlFr1W",
... |
nips_2021_ZEoMBPtvqey | TransformerFusion: Monocular RGB Scene Reconstruction using Transformers | We introduce TransformerFusion, a transformer-based 3D scene reconstruction approach. From an input monocular RGB video, the video frames are processed by a transformer network that fuses the observations into a volumetric feature grid representing the scene; this feature grid is then decoded into an implicit 3D scene representation. Key to our approach is the transformer architecture that enables the network to learn to attend to the most relevant image frames for each 3D location in the scene, supervised only by the scene reconstruction task. Features are fused in a coarse-to-fine fashion, storing fine-level features only where needed, requiring lower memory storage and enabling fusion at interactive rates. The feature grid is then decoded to a higher-resolution scene reconstruction, using an MLP-based surface occupancy prediction from interpolated coarse-to-fine 3D features. Our approach results in an accurate surface reconstruction, outperforming state-of-the-art multi-view stereo depth estimation methods, fully-convolutional 3D reconstruction approaches, and approaches using LSTM- or GRU-based recurrent networks for video sequence fusion.
| accept | This paper has rather positive reviews (7,6,6,5). In general, the reviewers appreciated the quality of the writing and experiments, as well as the (moderate) novelty the paper offers. Reviewer AGwq in particular was convinced by the rebuttal, especially on novelty vs the recent NeuralRecon paper, and increased their score from 5 to 6. Overall, the AC agrees with the reviewers and recommends acceptance.
| train | [
"uHiPLgD7B0_",
"oZPaxXgtw-U",
"dtT9_4XLLo",
"jFpINqjUBLq",
"bwKKPdJkIFk",
"ZdVFTpvX64Y",
"wFnt5HBFdHc",
"CgbHYeohy6w",
"lLJowSLJdlU",
"D5-W0m_RfUa",
"IHb5jaVJKrB",
"gg9kgJdMe62"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a method to reconstruct dense surface geometry of a static scene from a monocular RGB video. \n\nThe method combines implicit functions and transformers for achieving the final reconstruction. Specifically, images in the video are encoded by a 2D image encoder. After that, for each voxel in a 3... | [
6,
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_ZEoMBPtvqey",
"ZdVFTpvX64Y",
"nips_2021_ZEoMBPtvqey",
"nips_2021_ZEoMBPtvqey",
"wFnt5HBFdHc",
"D5-W0m_RfUa",
"jFpINqjUBLq",
"uHiPLgD7B0_",
"dtT9_4XLLo",
"gg9kgJdMe62",
"nips_2021_ZEoMBPtvqey",
"nips_2021_ZEoMBPtvqey"
] |
nips_2021_AklttWFnxS9 | Maximum Likelihood Training of Score-Based Diffusion Models | Yang Song, Conor Durkan, Iain Murray, Stefano Ermon | accept | This paper shows how training continuous-time denoising diffusion models (a.k.a. score-based models) can be converted to maximum-likelihood training with a proper weighting of the training objective. Although this connection is known to some degree from classical literature (see below), the paper does an excellent job of showing how following this connection for training yields "SOTA" likelihood results with modern score-based generative models and proper importance sampling. Given the unanimously positive reviews and the potential impact of this work, I am very happy to recommend this paper for acceptance.
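For reference, the weighted score-matching objective in question has the form (up to constants; see the paper for the precise statement)

$$\mathcal{L}(\theta) = \frac{1}{2}\int_0^T \lambda(t)\, \mathbb{E}_{p_t(x)}\Big[\big\| s_\theta(x, t) - \nabla_x \log p_t(x) \big\|_2^2\Big]\, dt,$$

and the paper's key observation is that the likelihood weighting $\lambda(t) = g(t)^2$, with $g$ the diffusion coefficient of the forward SDE, turns this objective into an upper bound on the model's negative log-likelihood.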
That said, I'd like to encourage the authors to make the following changes for the camera-ready version:
- I appreciate coming forward with the updated numbers after discovering the numerical problems. Please make sure that the claims in this submission are adjusted after updating the numbers in the final camera-ready version.
- The connection between the KL divergence and score matching objective was also discussed in classical papers such as [1]. Please add a small section discussing these earlier works.
[1] Lyu, Interpretation and Generalization of Score Matching. | train | [
"7eZhJ1VidaA",
"Cf2MehUfw2p",
"aKr_QoriHq_",
"xsBXtnMDft",
"H0kpwA6bWhy",
"7HOTAaQZ1u",
"G5jM-qe4F-",
"F0nD0LecZk0",
"Tl4mvr7xMUz",
"Ay5uTsOEVBY",
"z2i6lMX-A0S",
"8aN0mRUcFsl",
"9hazwWz2OM9"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the insightful question. As opposed to VP/subVP SDEs (eq. (32) of [1]), VE SDEs in [1] (eq. (30)) are only defined on the time horizon $(0, T]$, which doesn't contain $t=0$. From our understanding, $x_{0^+}$ is different from $x_0$ and is defined as a random variable from $\\mathcal{N}(x_0, \\sigma_... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"Cf2MehUfw2p",
"F0nD0LecZk0",
"Ay5uTsOEVBY",
"nips_2021_AklttWFnxS9",
"7HOTAaQZ1u",
"xsBXtnMDft",
"nips_2021_AklttWFnxS9",
"9hazwWz2OM9",
"8aN0mRUcFsl",
"z2i6lMX-A0S",
"nips_2021_AklttWFnxS9",
"nips_2021_AklttWFnxS9",
"nips_2021_AklttWFnxS9"
] |
nips_2021_sMIMAXqiqj3 | Global Convergence of Gradient Descent for Asymmetric Low-Rank Matrix Factorization | Tian Ye, Simon S. Du | accept | We thank the authors for this submission. Overall, the paper presents the first proof that shows randomly initialized gradient descent converges to a global minimum of the asymmetric low-rank factorization problem with a polynomial rate.
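To make the analyzed procedure concrete, here is a minimal simulation of the setting; the small random initialization and fixed step size match the setup qualitatively, but the constants are arbitrary:

```python
import numpy as np

def gd_factorize(M, r, eta=0.01, steps=20000, seed=0):
    # Randomly initialized gradient descent on f(U, V) = ||U V^T - M||_F^2 / 2.
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 1e-3 * rng.standard_normal((m, r))
    V = 1e-3 * rng.standard_normal((n, r))
    for _ in range(steps):
        R = U @ V.T - M                            # residual
        U, V = U - eta * R @ V, V - eta * R.T @ U  # simultaneous update
    return U, V

# Rank-2 target: the residual is driven to (numerical) zero.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
U, V = gd_factorize(M, r=2)
print(np.linalg.norm(U @ V.T - M))
```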
The paper motivates the approach well. The authors have provided extensive responses to the concerns raised (matrix sensing connection, convergence of $U_t, V_t$, better presentation of results, etc.) and the AC + reviewers really thank them for their effort. Overall, the new results obtained during the rebuttal definitely improve the quality of the paper. We all believe that the inclusion of these results during the rebuttal period does not heavily change the message of this paper.
There was discussion and consensus that this work is interesting. Keeping in mind the issues/concerns raised by the reviewers, the main conclusion of the further discussion was that this paper deserves publication, given the fixes promised by the authors during the discussion period. | train | [
"dwAOmcOLEM",
"jPHaU-McwV",
"5MWEFl0xNbK",
"mCYMTb2eWP",
"wRtG_5CYtzb",
"6IgVcsPcyr1",
"zxx9G7-OWFk",
"1F5Tv6OQWFS",
"9ucAgWKSgPV",
"iic2dWIv6PD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the problem of decomposing an m x n matrix of rank r into the product of an m x r matrix and an r x n matrix. While various methods exits for solving this problem, the goal of the paper is to provide theoretical guarantees for gradient descent with random initialization and fixed learning rate. ... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
8,
9
] | [
2,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_sMIMAXqiqj3",
"1F5Tv6OQWFS",
"nips_2021_sMIMAXqiqj3",
"6IgVcsPcyr1",
"iic2dWIv6PD",
"5MWEFl0xNbK",
"9ucAgWKSgPV",
"dwAOmcOLEM",
"nips_2021_sMIMAXqiqj3",
"nips_2021_sMIMAXqiqj3"
] |
nips_2021_G5l8qucT8A | Adaptive Data Augmentation on Temporal Graphs | Temporal Graph Networks (TGNs) are powerful at modeling temporal graph data owing to their increased complexity. Higher complexity carries with it a higher risk of overfitting, which makes TGNs capture random noise instead of essential semantic information. To address this issue, our idea is to transform the temporal graphs using data augmentation (DA) with adaptive magnitudes, so as to effectively augment the input features and preserve the essential semantic information. Based on this idea, we present the MeTA (Memory Tower Augmentation) module: a multi-level module that processes the augmented graphs of different magnitudes on separate levels, and performs message passing across levels to provide adaptively augmented inputs for every prediction. MeTA can be flexibly applied to the training of popular TGNs to improve their effectiveness without increasing their time complexity. To complement MeTA, we propose three DA strategies to realistically model noise by modifying both the temporal and topological features. Empirical results on standard datasets show that MeTA yields significant gains for the popular TGN models on edge prediction and node classification in an efficient manner.
| accept | After discussions between the reviewers and authors, a consensus has been reached amongst 3 of the reviewers that this paper is clearly worthy of acceptance. The reviewer on the side of rejection has not engaged in discussions with the authors or in internal discussions on the paper. That review also lacks detail in the objections raised to the paper.
I want to thank the authors for their detailed responses and for listening to and incorporating the reviewers' feedback, and to thank those reviewers who engaged in discussions with the authors.
"SFM2rv5A_D",
"HYtJo3i4oVa",
"3ypiATQDLy0",
"ElkM3cIOQAZ",
"1LG23yUdaEB",
"D8f6_agkHsx",
"B7YsrTQXEKL",
"49G_1ZLIc-1",
"LdkJ24VckTi",
"ebZD-aRuIgl",
"yTX0ViAppv8",
"Bj3fc7HnWa7",
"ToN-MRvJRNf",
"T6naqLRNWEH"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces Memory Tower Augmentation (MeTA) --- an adaptive data augmentation approach for temporal graph networks (TGNs). The proposal aims to tackle overfitting and improve learning in TGNs. MeTA applies a hierarchical message-passing scheme to update TGNs' memory states. In addition, MeTA considers th... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"nips_2021_G5l8qucT8A",
"ElkM3cIOQAZ",
"nips_2021_G5l8qucT8A",
"LdkJ24VckTi",
"D8f6_agkHsx",
"yTX0ViAppv8",
"ebZD-aRuIgl",
"ToN-MRvJRNf",
"3ypiATQDLy0",
"T6naqLRNWEH",
"SFM2rv5A_D",
"yTX0ViAppv8",
"nips_2021_G5l8qucT8A",
"nips_2021_G5l8qucT8A"
] |
nips_2021_lI2To0NGe3Q | Regularized Frank-Wolfe for Dense CRFs: Generalizing Mean Field and Beyond | We introduce regularized Frank-Wolfe, a general and effective algorithm for inference and learning of dense conditional random fields (CRFs). The algorithm optimizes a nonconvex continuous relaxation of the CRF inference problem using vanilla Frank-Wolfe with approximate updates, which are equivalent to minimizing a regularized energy function. Our proposed method is a generalization of existing algorithms such as mean field or concave-convex procedure. This perspective not only offers a unified analysis of these algorithms, but also allows an easy way of exploring different variants that potentially yield better performance. We illustrate this in our empirical results on standard semantic segmentation datasets, where several instantiations of our regularized Frank-Wolfe outperform mean field inference, both as a standalone component and as an end-to-end trainable layer in a neural network. We also show that dense CRFs, coupled with our new algorithms, produce significant improvements over strong CNN baselines.
| accept | The paper is well written and the experiments are convincing. The authors spent a lot of effort addressing reviewer comments and all reviewers increased their score to 7. I would therefore like to accept the paper.
At first, generalized Frank-Wolfe seems like an odd choice for solving (6). Indeed, (6) can be solved by PG or MD, which have a better convergence rate than generalized FW. Moreover, since adding L2 or entropic regularization leads to Euclidean or KL projections instead of usual LMOs, the proposed algorithm is quite similar to PG and MD. I would therefore encourage the authors to further justify their choice. One justification is the connection with parallel mean field, provided that it doesn't hold for MD. In my understanding, another justification would be that FW-like step sizes require no tuning, which is important for backpropagability. In contrast, PG or MD would require either knowledge of the Lipschitz constant or a line search.
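To make this concrete: with entropic regularization over a simplex constraint (as for mean-field marginals), the regularized linear minimization step has a closed-form softmax solution, which is what keeps the updates cheap and backpropagable (generic notation, not the paper's):

$$s_t = \arg\min_{s \in \Delta}\; \langle \nabla f(x_t), s \rangle + \lambda\, \mathrm{KL}(s \,\|\, u) \;\Longrightarrow\; s_t \propto u \odot \exp\!\big(-\nabla f(x_t)/\lambda\big), \qquad x_{t+1} = (1-\gamma_t)\, x_t + \gamma_t\, s_t.$$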
A few minor comments on the technical side:
* The rates are not state of the art. The authors mention a $1/\sqrt{t}$ rate for the "adaptive" step-size, but a $O(1/t)$ rate is known for that class of step-sizes; see for instance http://proceedings.mlr.press/v54/locatello17a/locatello17a.pdf and/or https://arxiv.org/pdf/1806.05123.pdf
* The analysis of the generalized FW algorithm with a strongly convex regularizer is very similar (although not fully identical) to the analysis with strongly convex set. I encourage the authors to mention and/or compare against this line of work, see for instance https://arxiv.org/pdf/2011.03351.pdf
* The adaptive step-size still requires knowing constants that can be difficult to compute in practice, or crude upper bounds on them. This should be clarified.
* Generalized FW algorithms are also proposed in F. Bach, "Duality between subgradient and conditional gradient methods," SIAM Journal on Optimization, 2015.
"Z1Sv0mq11Gl",
"1GRQBOPxi5I",
"-f-ScwN7Pkq",
"Z4KAK5ndfcx",
"L-q6JOS_Zw",
"KTaEnU-p_UQ",
"-XB7PNY1z8t",
"W6vjMl_iM7Y",
"OA9PaZ9wz2-",
"GuSCmZu2FVl"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your feedback.",
" We are happy that our responses are appreciated. We thank the reviewer again for the feedback and for having increased the rating!",
"Semantic segmentation is a widely-studied problem in computer vision, and over the years a range of techniques have been popular. This paper re-vi... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"L-q6JOS_Zw",
"Z4KAK5ndfcx",
"nips_2021_lI2To0NGe3Q",
"-XB7PNY1z8t",
"OA9PaZ9wz2-",
"-f-ScwN7Pkq",
"-f-ScwN7Pkq",
"GuSCmZu2FVl",
"nips_2021_lI2To0NGe3Q",
"nips_2021_lI2To0NGe3Q"
] |
nips_2021_um7zVEeyVH1 | Terra: Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs | Imperative programming allows users to implement their deep neural networks (DNNs) easily and has become an essential part of recent deep learning (DL) frameworks. Recently, several systems have been proposed to combine the usability of imperative programming with the optimized performance of symbolic graph execution. Such systems convert imperative Python DL programs to optimized symbolic graphs and execute them. However, they cannot fully support the usability of imperative programming. For example, if an imperative DL program contains a Python feature with no corresponding symbolic representation (e.g., third-party library calls or unsupported dynamic control flows) they fail to execute the program. To overcome this limitation, we propose Terra, an imperative-symbolic co-execution system that can handle any imperative DL programs while achieving the optimized performance of symbolic graph execution. To achieve this, Terra builds a symbolic graph by decoupling DL operations from Python features. Then, Terra conducts the imperative execution to support all Python features, while delegating the decoupled operations to the symbolic execution. We evaluated Terra’s performance improvement and coverage with ten imperative DL programs for several DNN architectures. The results show that Terra can speed up the execution of all ten imperative DL programs, whereas AutoGraph, one of the state-of-the-art systems, fails to execute five of them.
| accept | The reviewers appreciated the novel techniques presented in the paper for co-executing the symbolic and imperative representations of a model program. During discussions, one major concern that came up was regarding experimental evaluation and comparisons to other related approaches. The additional experiments of applying XLA to Terra, and comparison with LazyTensor emulation helped clarify some of the advantages of Terra's approach and the corresponding benefits. It would be great if authors can incorporate the detailed feedback from reviews, the additional experiments and results, as well as discussions about limitations and clarifications from the author response in the final version of the paper. | train | [
"SoTQCxtZvco",
"0WfSPyhXuO",
"6Si6jYOiQoJ",
"E2gt0pfPewD",
"BvRRul_tUUa",
"Vk13yKXJZEn",
"T_N4yCB3IlX",
"LdgiUf8qWFL",
"NcZi86lEPH",
"BucTIz_g1hR",
"GSPVBSE5sKs",
"rlTXKFGBJ89",
"UoYigIZHVVa",
"w5F3YsnEw94",
"LMkGHHj2E84",
"jzLQRm740Xw"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Does our response address your concerns? If there is any further information you need, please let us know.",
" We appreciate your positive feedback on our work! If this paper gets accepted, we will reflect the reviewers' comments.\n\nYes. The current implementation of Terra breaks the XLA graph when there is an... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
5,
6
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"rlTXKFGBJ89",
"E2gt0pfPewD",
"nips_2021_um7zVEeyVH1",
"BvRRul_tUUa",
"Vk13yKXJZEn",
"T_N4yCB3IlX",
"NcZi86lEPH",
"NcZi86lEPH",
"GSPVBSE5sKs",
"jzLQRm740Xw",
"6Si6jYOiQoJ",
"LMkGHHj2E84",
"w5F3YsnEw94",
"nips_2021_um7zVEeyVH1",
"nips_2021_um7zVEeyVH1",
"nips_2021_um7zVEeyVH1"
] |
nips_2021_3GpcwM1slH8 | Uniform Sampling over Episode Difficulty | Episodic training is a core ingredient of few-shot learning to train models on tasks with limited labelled data. Despite its success, episodic training remains largely understudied, prompting us to ask the question: what is the best way to sample episodes? In this paper, we first propose a method to approximate episode sampling distributions based on their difficulty. Building on this method, we perform an extensive analysis and find that sampling uniformly over episode difficulty outperforms other sampling schemes, including curriculum and easy-/hard-mining. As the proposed sampling method is algorithm agnostic, we can leverage these insights to improve few-shot learning accuracies across many episodic training algorithms. We demonstrate the efficacy of our method across popular few-shot learning datasets, algorithms, network architectures, and protocols.
| accept | The submission investigates the question of how to sample episodes for few-shot learning. By introducing a measure of episode difficulty based on the few-shot learner's NLL on the episode, the authors experiment with importance sampling to simulate different episode difficulty distributions. They observe that without any intervention, episode difficulty is normally distributed, and that a uniform distribution during meta-training yields better results.
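A small sketch of that importance-weighting step; the Gaussian proposal fit mirrors the observation that difficulties are roughly normally distributed, and the function name is ours:

```python
import numpy as np
from scipy.stats import norm

def uniform_over_difficulty_weights(difficulties):
    # Importance weights w(d) = target(d) / proposal(d) that make the
    # observed (roughly Gaussian) episode difficulties behave as if
    # sampled uniformly over their empirical support.
    mu, sigma = difficulties.mean(), difficulties.std()
    lo, hi = difficulties.min(), difficulties.max()
    target = 1.0 / (hi - lo)                      # uniform density
    proposal = norm.pdf(difficulties, mu, sigma)  # fitted Gaussian density
    w = target / proposal
    return w / w.mean()                           # self-normalize

w = uniform_over_difficulty_weights(np.random.default_rng(0).normal(0.5, 0.1, 1000))
```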
Reviewers found the paper well-written and easy to read, and found the proposed approach original and easy to implement. Even though the improvement margins are not substantial, reviewers thought that the ideas and experiments have a broad enough appeal to the few-shot learning community to make the submission a valuable contribution. I therefore recommend acceptance. | train | [
"mvURwsxlQBm",
"sXIttIYCDN",
"lsEd3I-Pac",
"MIWBMirfDjm",
"ej5RtVsBC4G",
"kqx485Xiu6k",
"EzYYLLAp3eO",
"IfiqavQIV3r",
"2viG99BWOy9",
"KE62sBU12sg",
"-eRd5NksZin",
"po6dINRDh9B",
"uxQTTlpZXia"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper investigates the effect of episode difficulty and sampling schemes on the performance of few-shot classification algorithms. It defines difficulty based on the negative log predictive likelihood of the few-shot learner and proposes to use importance sampling to mimic various sampling strategies. Experim... | [
8,
-1,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_3GpcwM1slH8",
"-eRd5NksZin",
"nips_2021_3GpcwM1slH8",
"nips_2021_3GpcwM1slH8",
"po6dINRDh9B",
"2viG99BWOy9",
"KE62sBU12sg",
"nips_2021_3GpcwM1slH8",
"lsEd3I-Pac",
"uxQTTlpZXia",
"mvURwsxlQBm",
"MIWBMirfDjm",
"nips_2021_3GpcwM1slH8"
] |
nips_2021_sLVJXf-BkIt | Scalable Intervention Target Estimation in Linear Models | This paper considers the problem of estimating the unknown intervention targets in a causal directed acyclic graph from observational and interventional data. The focus is on soft interventions in linear structural equation models (SEMs). Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets even for linear SEMs. This severely limits their scalability and sample complexity. This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets. The pivotal idea is to estimate the intervention sites from the difference between the precision matrices associated with the observational and interventional datasets. It involves repeatedly estimating such sites in different subsets of variables. The proposed algorithm can be used to also update a given observational Markov equivalence class into the interventional Markov equivalence class. Consistency, Markov equivalency, and sample complexity are established analytically. Finally, simulation results on both real and synthetic data demonstrate the gains of the proposed approach for scalable causal structure recovery. Implementation of the algorithm and the code to reproduce the simulation results are available at \url{https://github.com/bvarici/intervention-estimation}.
| accept | The paper proposes an algorithm for estimating the unknown intervention targets in a causal linear structural equation model (SEM) with Gaussian noise. The paper considers observational and interventional data generated under soft interventions.
This paper was viewed as a borderline paper.
The authors learn the set of intervened nodes (and their non-intervened parents) by building on the work of directly learning the difference directed acyclic graph between SEMs by Ghoshal and Honorio (2019). The main steps of the proposed algorithm are essentially the same as Ghoshal and Honorio (2019)'s algorithm based on estimating the difference of undirected graphical models (inverse covariance matrices). In addition to that, the authors use non-trivial insights from the problem at hand.
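To illustrate the pivotal quantity at the population level (the actual algorithm instead estimates such supports repeatedly over subsets of variables, from finite samples):

```python
import numpy as np

def affected_nodes(Sigma_obs, Sigma_int, tol=1e-8):
    # Nonzero rows of the difference of precision matrices mark the
    # intervened nodes together with some of their non-intervened
    # parents, which the algorithm then disambiguates.
    Delta = np.linalg.inv(Sigma_int) - np.linalg.inv(Sigma_obs)
    return np.flatnonzero(np.abs(Delta).max(axis=1) > tol)
```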
If accepted, I recommend to include the following clarifications and modifications:
- I will give the benefit of the doubt regarding experiments. Please try a relatively larger number of samples, so that the F1 score is closer to 100%, thus validating the high-probability exact recovery statement.
- Please assume boundedness of the product $M_\Sigma M_{\Gamma,\Gamma^T}$, where $M_\Sigma = \max( \| \Sigma^{(1)} \|_{1,\infty} , \| \Sigma^{(2)} \|_{1,\infty} )$ and $M_{\Gamma,\Gamma^T} = \max( \| \Gamma_{S,S}^{-1} \|_{1,\infty} , \| (\Gamma_{S,S}^T)^{-1} \|_{1,\infty} )$, where $S$ is the support of the difference of precision matrices $(\Sigma^{(2)})^{-1} - (\Sigma^{(1)})^{-1}$ and $\Gamma = \Sigma^{(2)} \otimes \Sigma^{(1)}$. I agree with the authors that this change is possible.
- Please check for typos (e.g., Line 250 and Algorithm 2 use $\Sigma_1, \Sigma_2$ instead of $\Sigma^{(1)}, \Sigma^{(2)}$).
- Please take into account the comments from the reviewers, e.g., regarding additional literature comparison, exponential complexity.
| train | [
"J8soRRxo-MQ",
"zY3H8NT3vBd",
"EDLjto7jOGx",
"pAO5d8tUAsO",
"_4L4K44MWy",
"5NMk4dPolxR",
"nItAqghmdIO",
"4cRxdnRL2vx",
"SSaH3E2eisW",
"brDIV33eqOi",
"ub87m18SKq",
"vVYBQ38WIA"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" 1) Our relaxed assumption has absolutely no impact on the analysis provided. The relaxation only affects the choices of covariance matrices $\\Sigma^{(1)},\\Sigma^{(2)}$. This relaxation was made to answer the AC’s earlier question about the choices of covariance matrices and the validity of the assumptions. \n ... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
2,
-1,
-1,
-1,
-1,
5
] | [
"zY3H8NT3vBd",
"EDLjto7jOGx",
"pAO5d8tUAsO",
"vVYBQ38WIA",
"SSaH3E2eisW",
"nips_2021_sLVJXf-BkIt",
"nips_2021_sLVJXf-BkIt",
"brDIV33eqOi",
"5NMk4dPolxR",
"nItAqghmdIO",
"vVYBQ38WIA",
"nips_2021_sLVJXf-BkIt"
] |
nips_2021_4pciaBbRL4B | Play to Grade: Testing Coding Games as Classifying Markov Decision Process | Contemporary coding education often presents students with the task of developing programs that have user interaction and complex dynamic systems, such as mouse based games. While pedagogically compelling, there are no contemporary autonomous methods for providing feedback. Notably, interactive programs are impossible to grade by traditional unit tests. In this paper we formalize the challenge of providing feedback to interactive programs as a task of classifying Markov Decision Processes (MDPs). Each student's program fully specifies an MDP where the agent needs to operate and decide, under reasonable generalization, if the dynamics and reward model of the input MDP should be categorized as correct or broken. We demonstrate that by designing a cooperative objective between an agent and an autoregressive model, we can use the agent to sample differential trajectories from the input MDP that allows a classifier to determine membership: Play to Grade. Our method enables an automatic feedback system for interactive code assignments. We release a dataset of 711,274 anonymized student submissions to a single assignment with hand-coded bug labels to support future research.
| accept | The paper studies the problem of automatically identifying whether the student program for an interactive coding task is correct or not. The proposed method is based on a novel idea of mapping a student program to an MDP and then developing an RL agent that can play in this MDP with the objective of classifying it as correct or incorrect. Experiments are performed on a synthetic task as well as on a programming task from code.org. The reviewers acknowledged the importance of the studied problem and the potential impact in the domain of coding education. The reviewers raised several concerns in their initial reviews. The reviewers appreciated the authors' responses, which helped in answering most of their questions.
One specific concern that came up is regarding releasing the dataset. The paper lists one of the four contributions as "We will release a dataset of over 700k submissions to support further research." In the checklist, it is mentioned that the data is currently proprietary, making it sound more like a plan without a clear timeline. Given that the reviewers considered this dataset contribution in the reviewing process, a timeline is important. It is expected that the authors already have a plan in place and would open-source the dataset during the conference timeline (e.g., camera-ready deadline) if the paper is accepted.
Overall, the reviewers have a positive assessment of the paper. The reviewers have provided detailed feedback in their reviews, and we strongly encourage the authors to incorporate this feedback when preparing a revised version of the paper.
| train | [
"_Ppf_a62jVc",
"mER-xf6qaiA",
"-divRtDt6eh",
"7TBpFqpMPNw",
"pgXocNY9aKp",
"q8E9YJYlG57",
"h9h_GxtzO3G",
"WBr6QwbmrLc",
"VKfZXHXlEMa",
"Xkr0tzjaMWF",
"SPDxOa2BcP",
"994qsEOGlCC",
"iy20gIFkvM"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We received some communication from Code.org\nThe release of dataset isn't going to be an issue. \n\n---------- Forwarded message ---------\nFrom: Baker Franke <baker@code.org>\nDate: Fri, Sep 17, 2021, 8:27 PM\nSubject: All clear\nTo: Authors\n\nHey there,\n\nHadi + Lawyer + COO + head of product see no need for... | [
-1,
-1,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"mER-xf6qaiA",
"-divRtDt6eh",
"7TBpFqpMPNw",
"nips_2021_4pciaBbRL4B",
"nips_2021_4pciaBbRL4B",
"SPDxOa2BcP",
"nips_2021_4pciaBbRL4B",
"pgXocNY9aKp",
"Xkr0tzjaMWF",
"h9h_GxtzO3G",
"pgXocNY9aKp",
"iy20gIFkvM",
"nips_2021_4pciaBbRL4B"
] |
nips_2021_u7oKU1iXTa9 | Distributional Reinforcement Learning for Multi-Dimensional Reward Functions | A growing trend for value-based reinforcement learning (RL) algorithms is to capture more information than scalar value functions in the value network. One of the most well-known methods in this branch is distributional RL, which models the return distribution instead of a scalar value. In another line of work, hybrid reward architectures (HRA) in RL have been studied to model source-specific value functions for each source of reward, which has also been shown to be beneficial for performance. To fully inherit the benefits of distributional RL and hybrid reward architectures, we introduce Multi-Dimensional Distributional DQN (MD3QN), which extends distributional RL to model the joint return distribution from multiple reward sources. As a by-product of joint distribution modeling, MD3QN can capture not only the randomness in returns for each source of reward, but also the rich reward correlation between the randomness of different sources. We prove the convergence of the joint distributional Bellman operator and build our empirical algorithm by minimizing the Maximum Mean Discrepancy between the joint return distribution and its Bellman target. In experiments, our method accurately models the joint return distribution in environments with richly correlated reward functions, and outperforms previous RL methods utilizing multi-dimensional reward functions in the control setting.
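As context for the training objective mentioned in the abstract, here is a minimal (biased, V-statistic) estimate of the squared MMD between two sample sets under an RBF kernel; the kernel choice and bandwidth are illustrative:

```python
import numpy as np

def mmd2(X, Y, bandwidth=1.0):
    # Squared MMD between X (n, d) and Y (m, d):
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * bandwidth ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

In MD3QN this distance is taken between samples of the model's joint return distribution and samples of the corresponding Bellman target.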
| accept | All reviewers were in favor of acceptance, and from reading the paper myself I find it to be a solid contribution and potentially impactful. I definitely want to second ZZ5A's callout for how this work could be combined with successor features as an interesting direction for future work (thinking of Barreto et al.'s transfer learning work in particular). As reviewers noted, there is work in the literature that overlaps and thus reduces novelty. One that I did not see mentioned by reviewers or authors is "Distributional Multivariate Policy Evaluation and Exploration with the Bellman GAN" by Freirich et al., and in an ideal world the final version of this paper would provide a comparison with this method, but it certainly should be discussed due to the overlap in the problems being solved. Another improvement that I would really appreciate in a final version, although I understand that computational costs can make this difficult, would be an evaluation on the rest of (or a reasonable subset of) the other Atari games. My reasoning here is that the current experiments tell me how well this method works in the type of problem it was intended for, but I also want to know how well it works out of that scope or more generally.
"-yrJ3KYUdTY",
"8CxWhz0jsQv",
"TpR4rhmIe1i",
"Xu2atlgYGHy",
"pClkiXhhoP",
"gxa1gqI9_W",
"m6gfmvZGvPU",
"vmkk5SJvGf-",
"TJlw2KvpWUQ",
"6RSMK_kfN4",
"bA9wzpmEbLF",
"y1zdQOLb71C",
"ukdOqEHTSew",
"JJzqTODu5m6",
"iGUacEYNUPS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We are glad to see that our rebuttal addressed your questions. We will make further revisions on analyzing Atari experiments as well as comparing the MMDQN baseline in the next version of the manuscript. We believe that our paper will be strengthened after that. Thank you again for your valuable and generous sugg... | [
-1,
6,
-1,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
3,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"TpR4rhmIe1i",
"nips_2021_u7oKU1iXTa9",
"bA9wzpmEbLF",
"gxa1gqI9_W",
"nips_2021_u7oKU1iXTa9",
"JJzqTODu5m6",
"TJlw2KvpWUQ",
"nips_2021_u7oKU1iXTa9",
"y1zdQOLb71C",
"nips_2021_u7oKU1iXTa9",
"8CxWhz0jsQv",
"vmkk5SJvGf-",
"iGUacEYNUPS",
"pClkiXhhoP",
"nips_2021_u7oKU1iXTa9"
] |
nips_2021_OUH25e12YyH | Differentiable Unsupervised Feature Selection based on a Gated Laplacian | Scientific observations may consist of a large number of variables (features). Selecting a subset of meaningful features is often crucial for identifying patterns hidden in the ambient space. In this paper, we present a method for unsupervised feature selection, and we demonstrate its advantage in clustering, a common unsupervised task. We propose a differentiable loss that combines a graph Laplacian-based score that favors low-frequency features with a gating mechanism for removing nuisance features. Our method improves upon the naive graph Laplacian score by replacing it with a gated variant computed on a subset of low-frequency features. We identify this subset by learning the parameters of continuously relaxed Bernoulli variables, which gate the entire feature space. We mathematically motivate the proposed approach and demonstrate that it is crucial to compute the graph Laplacian on the gated inputs rather than on the full feature set in the high noise regime. Using several real-world examples, we demonstrate the efficacy and advantage of the proposed approach over leading baselines.
| accept | The reviewers found that the proposed work presents a sufficiently novel and interesting analysis and approach to solve an important problem, together with compelling experiments.
The rebuttal provides clear responses to the comments and questions of the reviewers.
The authors are strongly encouraged to take into account the comments of the reviewers and the elements they themselves contributed to this discussion when preparing the final version of the manuscript. | val | [
"6kCBXRcOVAz",
"Dn6oZGQtSw",
"V8yfWgdIAKF",
"vDbq7y2_gy",
"MwNoo1_HE_",
"ocMz3PT66wY",
"CUcdm-DQnyD",
"hBNQHllLG50",
"lTdGI65bccR",
"YsvV4swzSi",
"Jjhs9ua-SWS",
"vaSUJUGu4kn",
"E_5m0bRgjbE",
"YdS6vlkRwLJ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Since the discussion period is over, below, we provide a brief response to the AC. \n \nWe truly appreciate the time the AC invested in reading our response and the related material. Thanks for the valuable comments; we will add a discussion on the role that stochasticity plays in the revised version of the manu... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"Dn6oZGQtSw",
"MwNoo1_HE_",
"hBNQHllLG50",
"nips_2021_OUH25e12YyH",
"ocMz3PT66wY",
"nips_2021_OUH25e12YyH",
"lTdGI65bccR",
"vDbq7y2_gy",
"YdS6vlkRwLJ",
"E_5m0bRgjbE",
"vaSUJUGu4kn",
"nips_2021_OUH25e12YyH",
"nips_2021_OUH25e12YyH",
"nips_2021_OUH25e12YyH"
] |
nips_2021_vnHjsF7NSMw | Smooth Bilevel Programming for Sparse Regularization | Iteratively reweighted least squares (IRLS) is a popular approach to solve sparsity-enforcing regression problems in machine learning. State of the art approaches are more efficient but typically rely on specific coordinate pruning schemes. In this work, we show how a surprisingly simple re-parametrization of IRLS, coupled with a bilevel resolution (instead of an alternating scheme), is able to achieve top performances on a wide range of sparsity (such as Lasso, group Lasso and trace norm regularizations), regularization strength (including hard constraints), and design matrices (ranging from correlated designs to differential operators). Similarly to IRLS, our method only involves linear systems resolutions, but in sharp contrast, corresponds to the minimization of a smooth function. Despite being non-convex, we show that there are no spurious minima and that saddle points are "ridable", so that there always exists a descent direction. We thus advocate for the use of a BFGS quasi-Newton solver, which makes our approach simple, robust and efficient. We perform a numerical benchmark of the convergence speed of our algorithm against state of the art solvers for Lasso, group Lasso, trace norm and linearly constrained problems. These results highlight the versatility of our approach, removing the need to use different solvers depending on the specificity of the ML problem under study.
| accept | All reviewers found the reformulation interesting and appreciated the thorough experimental comparison. Remaining concerns include the (heavy) restriction to the least-square loss and the limited theoretical convergence guarantee. The authors' response acknowledged these issues and explained possible alleviations/extensions. Overall, there seems to be enough merit in the authors' approach and hopefully it could be further extended in the future.
Please incorporate the review, response and promised changes in the final revision. I would also suggest the authors refrain from using the term "bi-level programming," since the main theorem is simply a dualization of the loss, resulting in a standard min-max problem. | train | [
"ok_Wvt5undP",
"LoGQLRTYpBr",
"OwO18iyfSN",
"6hGFov_6wj",
"2G-R7IED7Jc",
"bumW-mjJnAu",
"MNFxUcnFPmX",
"RY28rVM6R9Y",
"60bv9QAEPCX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for clarifications. \nI think the paper is good and should be accepted.",
"The authors study regularized regression with a sparsity inducing constraint. (In the appendix also the extension to the low-rankness inducing case is considered.) The authors propose a new non-convex formulation, whi... | [
-1,
7,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
-1,
2,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"60bv9QAEPCX",
"nips_2021_vnHjsF7NSMw",
"LoGQLRTYpBr",
"nips_2021_vnHjsF7NSMw",
"nips_2021_vnHjsF7NSMw",
"nips_2021_vnHjsF7NSMw",
"nips_2021_vnHjsF7NSMw",
"nips_2021_vnHjsF7NSMw",
"nips_2021_vnHjsF7NSMw"
] |
nips_2021__kwj6V53ZqB | Grounding Representation Similarity Through Statistical Testing | To understand neural network behavior, recent works quantitatively compare different networks' learned representations using canonical correlation analysis (CCA), centered kernel alignment (CKA), and other dissimilarity measures. Unfortunately, these widely used measures often disagree on fundamental observations, such as whether deep networks differing only in random initialization learn similar representations. These disagreements raise the question: which, if any, of these dissimilarity measures should we believe? We provide a framework to ground this question through a concrete test: measures should have \emph{sensitivity} to changes that affect functional behavior, and \emph{specificity} against changes that do not. We quantify this through a variety of functional behaviors including probing accuracy and robustness to distribution shift, and examine changes such as varying random initialization and deleting principal components. We find that current metrics exhibit different weaknesses, note that a classical baseline performs surprisingly well, and highlight settings where all metrics appear to fail, thus providing a challenge set for further improvement.
| accept | This paper evaluates several representational similarity measures on a wide variety of benchmark tasks, and concludes that orthogonal Procrustes performs slightly better on average than other methods. All reviewers agreed that the empirical evaluation was novel and valuable, particularly with the additional results for vision models that the authors provided during the rebuttal period. Two reviewers noted that the suggestion that orthogonal Procrustes is superior to other methods was not entirely convincing given that different representational similarity methods seemed to perform best for different tasks. Nonetheless, all reviewers recommended acceptance.
The AC has some technical comments that may assist the authors in improving the camera-ready paper:
1. The definition of CCA in Eq. 2 isn’t sufficient to specify the value of PWCCA, since different scalings of the weights will lead to the same canonical correlations but different values of $\alpha_i$. If the components are assumed to have unit norm, then it does not seem like PWCCA is invariant to left orthogonal transformations as claimed.
2. The statement at the beginning of the appendix that the representations are normalized seems important for interpreting and reproducing the results in this work, and although L104-105 says that orthogonal Procrustes distance is not normalized between 0 and 1, because the representations are normalized, the Procrustes distance is normalized between 0 and 2.
3. The discussion suggests that orthogonal Procrustes gives better correlations than linear CKA for some tasks because, for diagonal matrices, orthogonal Procrustes reduces to a function of the squared distances between singular values whereas linear CKA reduces to the sum of the squared distances between the squared singular values. This seems possible, but this argument is not entirely convincing, because the difference in weighting of singular values is not the only difference between the methods: If all singular values are 1 and representations are equal-sized, orthogonal Procrustes distance reduces to the mean CCA correlation distance (up to a factor of 2), whereas linear CKA reduces to the mean squared CCA correlation distance. The AC suggests that the authors try manipulating (squaring/taking square roots of) the singular values of real representations before computing orthogonal Procrustes/CKA to ensure that the difference in efficacy between the methods can be explained by differences in their treatment of the singular values alone.
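The suggested probe is straightforward to run; a sketch with the standard formulas, assuming the representations are already centered as described in the appendix:

```python
import numpy as np

def procrustes_dist2(X, Y):
    # Squared orthogonal Procrustes distance:
    # ||X||_F^2 + ||Y||_F^2 - 2 * nuclear_norm(X^T Y).
    nuc = np.linalg.svd(X.T @ Y, compute_uv=False).sum()
    return (X ** 2).sum() + (Y ** 2).sum() - 2.0 * nuc

def linear_cka(X, Y):
    # Linear CKA: ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    return np.linalg.norm(X.T @ Y) ** 2 / (
        np.linalg.norm(X.T @ X) * np.linalg.norm(Y.T @ Y))

def rescale_singular_values(X, power):
    # Replace each singular value s by s**power before comparing, to
    # isolate the metrics' differing sensitivity to the spectrum.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * s ** power) @ Vt
```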
Finally, the biggest strength of this paper is its thorough empirical evaluation of the correlation between representational similarity and various functional properties of networks. Releasing the code for performing this evaluation would likely increase the paper's impact. | train | [
"2M7GY-4USC4",
"9DL_z1Ce0Qx",
"1iV_dB3m6wT",
"B3fc5QGtlXf",
"_IpTrDNdMGf",
"4qRraRfvb6",
"eGrghs6Epk3",
"WJSKH7_cZ6",
"1_LDNbA2x1R",
"mXxCF2CxQiX",
"L05yvFYSJU1",
"EkCT5jtmfpx",
"AfCbdK4mXU",
"3JI9zHVeNp",
"BR33deqzUu",
"ZfJ6NbDTGnB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The work is motivated by a desire to better understand representational similarity metrics and ensure they obey certain desirable properties. Specifically, representational similarity measures should be sensitive changes that affect functional behavior, and specific (invariant) to changes that do not affect functi... | [
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
4,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021__kwj6V53ZqB",
"EkCT5jtmfpx",
"AfCbdK4mXU",
"4qRraRfvb6",
"nips_2021__kwj6V53ZqB",
"eGrghs6Epk3",
"WJSKH7_cZ6",
"1_LDNbA2x1R",
"_IpTrDNdMGf",
"L05yvFYSJU1",
"EkCT5jtmfpx",
"2M7GY-4USC4",
"ZfJ6NbDTGnB",
"BR33deqzUu",
"nips_2021__kwj6V53ZqB",
"nips_2021__kwj6V53ZqB"
] |
nips_2021_jh1lAmTMOJp | A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning | We present an end-to-end, model-based deep reinforcement learning agent which dynamically attends to relevant parts of its state during planning. The agent uses a bottleneck mechanism over a set-based representation to force the number of entities to which the agent attends at each planning step to be small. In experiments, we investigate the bottleneck mechanism with several sets of customized environments featuring different challenges. We consistently observe that the design allows the planning agents to generalize their learned task-solving abilities in compatible unseen environments by attending to the relevant objects, leading to better out-of-distribution generalization performance.
| accept | The reviewers agreed that the paper contains compelling ideas and should be accepted. Some initial concerns about the generality of the claims were well addressed by the authors. | test | [
"LA5iVWOen3V",
"C5s3pNZQiA9",
"Orb0td1tzyb",
"vQERn7P03Rl",
"QUp8U-kf3R",
"a4119gaJrXs",
"e53mKFBRMK",
"hVsvG5PFouZ",
"Pgh3Gbl6rFb",
"meobBkFA4jO",
"kJOFWyg_ue0",
"fsWnBARuTc3",
"8rKLI247ezB",
"FoEpx3GJxu6",
"ZZrsF12jjHf",
"cTFq82e4Yll",
"inIody-LYnE",
"qA1SXRfyOTi",
"CMmTnOMAGFc... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"... | [
" Please do not hesitate to reply with further questions. Thank you again for your appreciation!",
" Thank you for your reply and your hard work; I have updated my review.",
"The authors take inspiration from theories of human consciousness to construct a model-based architecture for RL, with the goal of genera... | [
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
3,
-1,
5,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"C5s3pNZQiA9",
"fsWnBARuTc3",
"nips_2021_jh1lAmTMOJp",
"a4119gaJrXs",
"nips_2021_jh1lAmTMOJp",
"8rKLI247ezB",
"hVsvG5PFouZ",
"FoEpx3GJxu6",
"nips_2021_jh1lAmTMOJp",
"kJOFWyg_ue0",
"ZZrsF12jjHf",
"Orb0td1tzyb",
"QUp8U-kf3R",
"cTFq82e4Yll",
"inIody-LYnE",
"qA1SXRfyOTi",
"CMmTnOMAGFc",
... |
nips_2021_IoEnnwAP7aP | Reward-Free Model-Based Reinforcement Learning with Linear Function Approximation | Weitong ZHANG, Dongruo Zhou, Quanquan Gu | accept | This paper proposes two algorithms (corresponding to Bernstein and Hoeffding bonuses) for reward-free exploration in linear mixture MDPs and analyzes their sample complexity. The authors also provide a lower bound for this setting. The proposed algorithms involve a computationally difficult subproblem, which is replaced by an LP relaxation in the implementation.
Overall, the results and techniques are not very surprising, leaving most reviewers ambivalent. However, given that the paper is executed well and fills a gap in the literature, I recommend acceptance.
| test | [
"uv-rTyEh33m",
"FWjKtb8BCP",
"DcYNESgNmCD",
"0TYLzDQ5Hv",
"_jgleJuOE7m",
"Ux_PIWU6Re",
"N7rlv4ete4",
"m37vq1_l-a8",
"BPnWMLCs4gR",
"u0k-xfW37pm",
"RSzqJIpXh7t",
"o1ObKl6fWEC",
"i_d9PKqhMK",
"sxrdSOvMU2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply. After the discussion and re-reading, I have updated the score.",
"This paper studies the problem of reward-free exploration in linear mixture MDPs. In the reward-free setting, the agent interacts in two stages where it must explore sufficiently without a reward function and then an arb... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"o1ObKl6fWEC",
"nips_2021_IoEnnwAP7aP",
"o1ObKl6fWEC",
"nips_2021_IoEnnwAP7aP",
"BPnWMLCs4gR",
"o1ObKl6fWEC",
"RSzqJIpXh7t",
"sxrdSOvMU2",
"0TYLzDQ5Hv",
"sxrdSOvMU2",
"i_d9PKqhMK",
"FWjKtb8BCP",
"nips_2021_IoEnnwAP7aP",
"nips_2021_IoEnnwAP7aP"
] |
nips_2021_4YlE2huxEsl | Beltrami Flow and Neural Diffusion on Graphs | We propose a novel class of graph neural networks based on the discretized Beltrami flow, a non-Euclidean diffusion PDE. In our model, node features are supplemented with positional encodings derived from the graph topology and jointly evolved by the Beltrami flow, simultaneously producing continuous feature learning and topology evolution. The resulting model generalizes many popular graph neural networks and achieves state-of-the-art results on several benchmarks.
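Schematically, one explicit-Euler step of such a joint evolution of node states might look like the numpy sketch below; the attention function, step size, and adjacency masking are placeholder assumptions, not the paper's exact discretization of the Beltrami flow:

```python
import numpy as np

def diffusion_step(Z, adj, tau=0.1):
    """One explicit-Euler step of dZ/dt = (A(Z) - I) Z on a graph.

    Z   : (n, d) joint states, e.g. positional encodings concatenated with features.
    adj : (n, n) binary adjacency (with self-loops) masking the attention.
    """
    # Placeholder attention: softmax of scaled dot products, masked to edges.
    scores = Z @ Z.T / np.sqrt(Z.shape[1])
    scores = np.where(adj > 0, scores, -np.inf)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)   # row-stochastic attention weights
    return Z + tau * (A @ Z - Z)           # move each node toward its attended mean

rng = np.random.default_rng(0)
n, d = 8, 4
adj = (rng.random((n, n)) < 0.4).astype(float)
adj = np.maximum(adj, adj.T); np.fill_diagonal(adj, 1)
Z = rng.standard_normal((n, d))
for _ in range(5):
    Z = diffusion_step(Z, adj)
```

Because positions evolve alongside features, the attention weights (and hence the effective topology) change from step to step, which is the sense in which feature learning and topology evolution happen simultaneously.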
| accept | The paper proposes a novel class of GNNs based on non-Euclidean diffusion PDEs, i.e., Beltrami flows on 2-manifolds of position and feature embeddings, and can be seen to generalize various existing GNN architectures. The paper is well written and relevant to the NeurIPS community. All reviewers and the AC support acceptance for its contributions, especially due to the novel and interesting ideas that provide a new perspective for graph learning methods as well as the promising empirical results. The authors' rebuttal, including additional experiments, further increased the confidence of reviewers and resolved concerns related to the empirical analysis. When preparing the camera ready version, please incorporate the overall feedback of reviewers into the new revision (e.g., motivation, theoretical discussion of Thm 1 and limitation, ablation). | train | [
"2tBC2jNI9oK",
"xi87pAv-d3a",
"nzFEt_fpvs",
"gGbiY7Brgg1",
"eKuVL_mApqy",
"3kod7mW_ENf",
"Als57RQPadj",
"VNUTow_K2MR",
"tCBAXwDxRn",
"ZOYVmm-qoMe",
"L1rU27Ek1Nw",
"q7dDTjfJa2",
"DMhUbyNBi38",
"aOHLN5rtAzr",
"64SJ77kRAmm",
"uvV3TCX9qLQ",
"rjLTrgWLyd",
"jcSHEvtRIRk"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
" Thank you - as suggested by the Reviewer, we will reword the statements about equation (5) and related matters according to our comments. ",
"The paper proposes a new graph neural network based on a continuous time flow. Each node is given a pre-processed position from an off-line node embedding method, then bo... | [
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"gGbiY7Brgg1",
"nips_2021_4YlE2huxEsl",
"xi87pAv-d3a",
"Als57RQPadj",
"nips_2021_4YlE2huxEsl",
"ZOYVmm-qoMe",
"VNUTow_K2MR",
"L1rU27Ek1Nw",
"nips_2021_4YlE2huxEsl",
"uvV3TCX9qLQ",
"DMhUbyNBi38",
"aOHLN5rtAzr",
"64SJ77kRAmm",
"rjLTrgWLyd",
"xi87pAv-d3a",
"tCBAXwDxRn",
"eKuVL_mApqy",
... |
nips_2021_F6gvhOgTM-4 | Think Big, Teach Small: Do Language Models Distil Occam’s Razor? | Large language models have recently shown a remarkable ability for few-shot learning, including patterns of algorithmic nature. However, it is still an open question to determine what kind of patterns these models can capture and how many examples they need in their prompts. We frame this question as a teaching problem with strong priors, and study whether language models can identify simple algorithmic concepts from small witness sets. In particular, we explore how several GPT architectures, program induction systems and humans perform in terms of the complexity of the concept and the number of additional examples, and how much their behaviour differs. This first joint analysis of language models and machine teaching can address key questions for artificial intelligence and machine learning, such as whether some strong priors, and Occam’s razor in particular, can be distilled from data, making learning from a few examples possible.
| accept | This paper seeks to provide more insight into the few-shot learning capabilities of language models by pitting them against humans on a set of algorithmic tasks under the framework of machine teaching. Specifically, the goal of the paper is to identify whether language models learn to prefer simple explanations for algorithmic rules. Reviewers appreciated the thorough experimental design, in particular the inclusion of a high-quality human study. There were some concerns raised about clarity, claims, and fit for NeurIPS, but I think these can be improved for the camera-ready version based on the author's rebuttal. | train | [
"bG41YorDeYp",
"cEEIpIGLa6e",
"n2vgn0lS8Th",
"BwpYNQ_MVgo",
"s26wl3Z2OI",
"-mTcCQkCbPh",
"Pa-K_uQI_9",
"JczOlOZKlaH",
"GueeJ4zCDZ",
"1t2RyotX1x_",
"Z74_D6zW_-",
"_4UJ47x_kba",
"kQMaC5rBQgr",
"J9ev0oFaOXs",
"UDNdVtHwXc9",
"EoDG-1Rdejx",
"Poz-CD74X4O",
"4CF0KWxWFar",
"M4lEKTwMAGc",... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_revi... | [
" I increased my score.",
"This paper proposes to empirically investigate to what extent large language models like GPT-3 learn (or \"distill\") a simplicity prior, i.e., a prior that favors concepts with shorter description length. The authors investigate this by randomly choosing 8 concepts/programs in the P3 l... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"n2vgn0lS8Th",
"nips_2021_F6gvhOgTM-4",
"nips_2021_F6gvhOgTM-4",
"-mTcCQkCbPh",
"GueeJ4zCDZ",
"Z74_D6zW_-",
"1t2RyotX1x_",
"nips_2021_F6gvhOgTM-4",
"4CF0KWxWFar",
"UDNdVtHwXc9",
"IhIBkM1JVVp",
"nips_2021_F6gvhOgTM-4",
"J9ev0oFaOXs",
"EoDG-1Rdejx",
"Poz-CD74X4O",
"M4lEKTwMAGc",
"bvHpN... |
nips_2021_52XXcK8jY0J | Disentangling Identifiable Features from Noisy Data with Structured Nonlinear ICA | We introduce a new general identifiable framework for principled disentanglement referred to as Structured Nonlinear Independent Component Analysis (SNICA). Our contribution is to extend the identifiability theory of deep generative models for a very broad class of structured models. While previous works have shown identifiability for specific classes of time-series models, our theorems extend this to more general temporal structures as well as to models with more complex structures such as spatial dependencies. In particular, we establish the major result that identifiability for this framework holds even in the presence of noise of unknown distribution. Finally, as an example of our framework's flexibility, we introduce the first nonlinear ICA model for time-series that combines the following very useful properties: it accounts for both nonstationarity and autocorrelation in a fully unsupervised setting; performs dimensionality reduction; models hidden states; and enables principled estimation and inference by variational maximum-likelihood.
| accept | The paper extends the literature on identifiable nonlinear ICA for time series in ways that all reviewers found satisfactory. There is a clear consensus that the work is novel and interesting, and many of the reviewers were convinced by the authors' rebuttal to increase their score even further. Therefore I recommend acceptance. | train | [
"JRdC9LL78GY",
"Mah30q9XuGW",
"bjjQICi4grR",
"oi1HwZ3r9pb",
"qQYt5iJ33uA",
"-PhgHdas9Z",
"Wpq4etukzaL",
"bJpA1-CajI4",
"WQFb4a255R7",
"NRYnu9f-GzY",
"-75iO_ZbJu",
"BV1TmYueAap"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
" My apologies for the unfaithful summary of the scope of the results. Indeed the identifiability theorems are more general, and I did not mean to exclude autoregressive models when I mentioned Markov switching models. As you state in the paper \"While the above framework has great generality, any practical applica... | [
-1,
8,
7,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1
] | [
"BV1TmYueAap",
"nips_2021_52XXcK8jY0J",
"nips_2021_52XXcK8jY0J",
"NRYnu9f-GzY",
"nips_2021_52XXcK8jY0J",
"-75iO_ZbJu",
"nips_2021_52XXcK8jY0J",
"WQFb4a255R7",
"Wpq4etukzaL",
"bjjQICi4grR",
"qQYt5iJ33uA",
"Mah30q9XuGW"
] |
nips_2021_0yMGEUQKd2D | Conditionally Parameterized, Discretization-Aware Neural Networks for Mesh-Based Modeling of Physical Systems | Simulations of complex physical systems are typically realized by discretizing partial differential equations (PDEs) on unstructured meshes. While neural networks have recently been explored for surrogate and reduced order modeling of PDE solutions, they often ignore interactions or hierarchical relations between input features, and process them as concatenated mixtures. We generalize the idea of conditional parameterization -- using trainable functions of input parameters to generate the weights of a neural network, and extend them in a flexible way to encode critical information. Inspired by discretized numerical methods, choices of the parameters include physical quantities and mesh topology features. The functional relation between the modeled features and the parameters is built into the network architecture. The method is implemented on different networks and applied to frontier scientific machine learning tasks including the discovery of unmodeled physics, super-resolution of coarse fields, and the simulation of unsteady flows with chemical reactions. The results show that the conditionally-parameterized networks provide superior performance compared to their traditional counterparts. The CP-GNet - an architecture that can be trained on very few data snapshots - is proposed as the first deep learning model capable of standalone prediction of reacting flows on irregular meshes.
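The core idea of conditional parameterization—trainable functions of input parameters generating a layer's weights—can be sketched with a single generated linear layer; all shapes and the choice of conditioning quantities below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Trainable generator: maps conditioning parameters c (e.g. local physical
# quantities or mesh topology features) to the weights of a small linear layer.
d_cond, d_in, d_out = 3, 8, 8
G = rng.standard_normal((d_cond, d_in * d_out)) * 0.1  # generator weights (trainable)

def conditional_linear(x, c):
    """y = W(c) x, where W is produced from the conditioning parameters c."""
    W = (c @ G).reshape(d_out, d_in)  # weights are a function of c, not constants
    return W @ x

x = rng.standard_normal(d_in)   # node feature vector
c = rng.standard_normal(d_cond) # e.g. cell volume, face normal, time step
y = conditional_linear(x, c)
```

In a full model the generator would be a small trainable network rather than a fixed matrix, so the functional relation between features and parameters is built into the architecture rather than concatenated into the input.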
| accept | 3 of 4 ratings were "accept". A key concern shared by several reviewers was about comprehensive experimental baselines, and the authors responded by performing the requested experiments, which supported their results. Because the one reviewer who gave the lowest rating (a 5) suggested this experiment, the authors performed it with favorable results, and the reviewer did not respond, I am inclined to treat that rating as a 6.
"6rriuv7mmB",
"RbgSTu7QZIF",
"L_k3EhQHhk1",
"MO694EgJWGn",
"P9PTEb8OMvW",
"wRTwjfPm6tA",
"WoUvDuDP9oO",
"KUv6bN5ffkm",
"r9Hm9xhW70I",
"4119ty40lyg",
"Y4BhaakaqZU",
"pJpmFK0SWWO",
"r4TTdvez6qf"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your updated comments! We would like to followup and update you with the comparison we performed against the MeshGraphNets in response to your comments. The comparison is conducted with two tests. The first test is the reacting flow case in Sec 4.3, where we implemented the MeshGraphNets in our pipelin... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"MO694EgJWGn",
"wRTwjfPm6tA",
"pJpmFK0SWWO",
"WoUvDuDP9oO",
"nips_2021_0yMGEUQKd2D",
"4119ty40lyg",
"r4TTdvez6qf",
"Y4BhaakaqZU",
"pJpmFK0SWWO",
"P9PTEb8OMvW",
"nips_2021_0yMGEUQKd2D",
"nips_2021_0yMGEUQKd2D",
"nips_2021_0yMGEUQKd2D"
] |
nips_2021_P85jauwfNCV | USCO-Solver: Solving Undetermined Stochastic Combinatorial Optimization Problems | Real-world decision-making systems are often subject to uncertainties that have to be resolved through observational data. Therefore, we are frequently confronted with combinatorial optimization problems of which the objective function is unknown and thus has to be debunked using empirical evidence. In contrast to the common practice that relies on a learning-and-optimization strategy, we consider the regression between combinatorial spaces, aiming to infer high-quality optimization solutions from samples of input-solution pairs -- without the need to learn the objective function. Our main deliverable is a universal solver that is able to handle abstract undetermined stochastic combinatorial optimization problems. For learning foundations, we present learning-error analysis under the PAC-Bayesian framework using a new margin-based analysis. In empirical studies, we demonstrate our design using proof-of-concept experiments, and compare it with other methods that are potentially applicable. Overall, we obtain highly encouraging experimental results for several classic combinatorial problems on both synthetic and real-world datasets.
| accept | The paper considers learning to solve the stochastic combinatorial optimization problem, where the underlying distribution over configuration space is unknown---a problem class which the authors refer to as an *undetermined* stochastic combinatorial problem.
Assuming that a deterministic (approximation) oracle and the corresponding input-solution pairs are given as training data, the authors propose to use combinatorial kernels (with random configurations) to train a large-margin machine. There was initial disagreement among the reviewers about the applicability of the proposed algorithms beyond the combinatorial optimization problems mentioned in the paper; however, all reviewers find the proposed approach interesting and the theoretical analysis of training and generalization bounds (which is applied to the approximation ratio of the learned solution) insightful.
The authors are encouraged to address the concerns raised in the reviews; in particular, to clarify the missing references, algorithmic details, computation complexity, and applicability to realistic domains.
| train | [
"UgL_nUOpbKF",
"UwN7XwTNZZ_",
"Hk9IBPl2H4O",
"EkOzjgJwjDy",
"5Q-6ARbVVr",
"bSk_En1jYdF",
"NE36zGMcvD",
"ZfMUlBjKFQ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their constructive comments and questions.\n\n$\\textbf{On experiments.}$ It is true that it remains unknown whether or not USCO-Solver is effective for other combinatorial optimization problems (as mentioned in Section 5), and we believe that a comprehensive experimental study is a crit... | [
-1,
-1,
-1,
-1,
5,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"ZfMUlBjKFQ",
"NE36zGMcvD",
"bSk_En1jYdF",
"5Q-6ARbVVr",
"nips_2021_P85jauwfNCV",
"nips_2021_P85jauwfNCV",
"nips_2021_P85jauwfNCV",
"nips_2021_P85jauwfNCV"
] |
nips_2021_6vaActvpcp3 | Adaptive Conformal Inference Under Distribution Shift | We develop methods for forming prediction sets in an online setting where the data generating distribution is allowed to vary over time in an unknown fashion. Our framework builds on ideas from conformal inference to provide a general wrapper that can be combined with any black box method that produces point predictions of the unseen label or estimated quantiles of its distribution. While previous conformal inference methods rely on the assumption that the data are exchangeable, our adaptive approach provably achieves the desired coverage frequency over long-time intervals irrespective of the true data generating process. We accomplish this by modelling the distribution shift as a learning problem in a single parameter whose optimal value is varying over time and must be continuously re-estimated. We test our method, adaptive conformal inference, on two real world datasets and find that its predictions are robust to visible and significant distribution shifts.
| accept | The paper proposes an adaptive conformal inference (ACI) that relaxes the classical exchangeability assumption by leveraging the online learning framework, and proposes a simple algorithm that achieves the target coverage rate as the number of steps approaches infinity. There is a consensus among the expert reviewers, with which I also concur, that the main idea of combining conformal inference with online learning is novel and interesting, and that the paper provides a fundamental contribution to the field of conformal inference. Hence, I recommend an acceptance.
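To make the single-parameter online idea concrete, here is a minimal Python sketch of a miscoverage-tracking update of this flavor; the drifting process, stand-in point predictor, calibration scores, and step size gamma are illustrative assumptions rather than the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_target, gamma = 0.1, 0.005        # desired miscoverage and step size
alpha_t, miss = alpha_target, []
for t in range(5000):
    mu = 0.002 * t                      # slowly drifting data-generating process
    y = mu + rng.standard_normal()      # observed outcome
    y_hat = mu - 0.5                    # a (biased) black-box point prediction
    scores = np.abs(rng.standard_normal(200))    # stand-in calibration scores
    width = np.quantile(scores, 1.0 - alpha_t)   # conformal-style interval radius
    err = float(abs(y - y_hat) > width)          # 1 if the interval missed y
    # Online correction: miscoverage above target widens future intervals.
    alpha_t = float(np.clip(alpha_t + gamma * (alpha_target - err), 1e-3, 1 - 1e-3))
    miss.append(err)
print("long-run miscoverage:", np.mean(miss))    # tracks alpha_target despite bias
```

The telescoping structure of the update is what yields target coverage over long intervals irrespective of the underlying process, which is the property the reviewers found fundamental.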
One of the reviewers raised a concern regarding the experimental results, which I consider a minor point. Nevertheless, improving the clarity of this paper, e.g., by making the paper accessible to a broader audience, and providing a thorough discussion of the related work, as suggested by the reviewers, will further increase the impact of this work.
The authors provided a rebuttal which was also positively acknowledged by the reviewers. | train | [
"6xx7g5NdBOW",
"s8nUcF8_Df",
"PLMb1lpFSx",
"e3I6IzOCx48",
"X506O86UzmV",
"YnhXYKXPjo",
"eAxuktwbuP7",
"djyzS9LK1sK",
"HzvOlqb4QsI",
"fpuNZ7xlUC",
"dSmIQxvNiK",
"CVs8rB0vFH",
"e_fY4FngRCF",
"9c_HGe2qNW",
"sWmLG5ebdWu",
"GPggXmmVP1j",
"lzuVcIjkkOX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_revi... | [
" Dear authors, thank you for the response. I look forward to the results of your current investigations in future work.",
" I thank the authors for their detailed response. I did not realized the model in Section 5 was fit in an online fashion and that resolves my comment. Their clarifications on this point and ... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
8,
8
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4,
4
] | [
"djyzS9LK1sK",
"fpuNZ7xlUC",
"nips_2021_6vaActvpcp3",
"CVs8rB0vFH",
"eAxuktwbuP7",
"HzvOlqb4QsI",
"lzuVcIjkkOX",
"GPggXmmVP1j",
"sWmLG5ebdWu",
"PLMb1lpFSx",
"9c_HGe2qNW",
"e_fY4FngRCF",
"nips_2021_6vaActvpcp3",
"nips_2021_6vaActvpcp3",
"nips_2021_6vaActvpcp3",
"nips_2021_6vaActvpcp3",
... |
nips_2021_gRwh5HkdaTm | Periodic Activation Functions Induce Stationarity | Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that `know what they do not know' by introducing inductive biases in the function space. We show that periodic activation functions in Bayesian neural networks establish a connection between the prior on the network weights and translation-invariant, stationary Gaussian process priors. Furthermore, we show that this link goes beyond sinusoidal (Fourier) activations by also covering triangular wave and periodic ReLU activation functions. In a series of experiments, we show that periodic activation functions obtain comparable performance for in-domain data and capture sensitivity to perturbed inputs in deep neural networks for out-of-domain detection.
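As a quick numerical illustration of the claimed link, the infinite-width covariance of random cosine features depends only on x - x' (the classical random Fourier feature identity); the Monte Carlo check below is a sketch under standard-normal weights and uniform phases, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_mc = 3, 200_000
W = rng.standard_normal((n_mc, d))    # first-layer weights ~ N(0, I)
b = rng.uniform(0, 2 * np.pi, n_mc)   # phases ~ Uniform(0, 2*pi)

def kernel_mc(x, xp):
    # Infinite-width covariance of cos(Wx + b) features, estimated by Monte Carlo:
    # 2 * E[cos(w.x + b) cos(w.xp + b)] = E[cos(w.(x - xp))] = exp(-||x - xp||^2 / 2).
    return 2 * np.mean(np.cos(W @ x + b) * np.cos(W @ xp + b))

x = np.array([0.3, -0.1, 0.5])
xp = np.array([0.1, 0.2, 0.0])
shift = np.array([1.0, -2.0, 0.7])
print(kernel_mc(x, xp))                  # approx exp(-||x - xp||^2 / 2)
print(kernel_mc(x + shift, xp + shift))  # same value: translation invariance
print(np.exp(-np.sum((x - xp) ** 2) / 2))
```

The kernel's dependence on x - x' alone is exactly the stationarity property that the paper extends beyond sinusoids to triangular-wave and periodic ReLU activations.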
| accept | The paper provides an interesting link between periodic activation functions, which have shown excellent empirical performance in fitting low-dimensional functions in models like SIREN, and stationarity in the infinite-width limit, connecting these networks with the Gaussian Process literature. All reviewers agreed that the contribution is interesting and novel, and many of the reviewers raised their scores after the author rebuttal. So long as the authors include the promised results in the camera-ready version, all reviewers were happy to recommend acceptance. I agree with their assessment. | train | [
"ainRt_AI24H",
"Cqo3pY2ri6Q",
"_tsm7Vek9DK",
"CTZt_WidEJp",
"A2bpL2CivFm",
"DqLhdRHDPW",
"1-SJtT49H1Q",
"pbeg_E4vfe4",
"3FUteKo1437"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Update: I have read the author feedback and am quite satisfied with it and thus raised my score.\n\n******\n\nThis paper relates to Bayesian Neural Networks (BNN). It proposes to use periodic activation functions in order to deal with out-of-domain (OOD) samples and uncertainty. Series of experiments hint that wit... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"nips_2021_gRwh5HkdaTm",
"_tsm7Vek9DK",
"3FUteKo1437",
"ainRt_AI24H",
"pbeg_E4vfe4",
"1-SJtT49H1Q",
"nips_2021_gRwh5HkdaTm",
"nips_2021_gRwh5HkdaTm",
"nips_2021_gRwh5HkdaTm"
] |
nips_2021_ZfIO21FYv4 | Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation | Autonomous driving relies on a huge volume of real-world data to be labeled to high precision. Alternative solutions seek to exploit driving simulators that can generate large amounts of labeled data with a plethora of content variations. However, the domain gap between the synthetic and real data remains, raising the following important question: What is the best way to utilize a self-driving simulator for perception tasks? In this work, we build on top of recent advances in domain-adaptation theory, and from this perspective, propose ways to minimize the reality gap. We primarily focus on the use of labels in the synthetic domain alone. Our approach introduces both a principled way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator. Our method is easy to implement in practice as it is agnostic of the network architecture and the choice of the simulator. We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data (cameras, lidar) using an open-source simulator (CARLA), and evaluate the entire framework on a real-world dataset (nuScenes). Last but not least, we show what types of variations (e.g. weather conditions, number of assets, map design and color diversity) matter to perception networks when trained with driving simulators, and which ones can be compensated for with our domain adaptation technique.
| accept | This paper studies the problem of sim2real domain adaptation, focusing on the bird's-eye-view vehicle segmentation task and on strategies for sampling data from a self-driving car simulator. The authors trained existing models and sampled data in a simulator-agnostic way in an effort to reduce the sim2real gap, compared their approach to standard domain adaptation techniques and demonstrated that domain adversarial training outperformed simple style transfer approaches.
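For reference, domain adversarial training of the kind compared here is commonly implemented with a gradient reversal layer; the PyTorch sketch below is a generic illustration, and the feature extractor output, domain head, and lambda = 1.0 are placeholder assumptions rather than the authors' architecture:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) gradients on the backward
    pass, so the feature extractor is trained to *fool* the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # one gradient per forward input

features = torch.randn(16, 32, requires_grad=True)  # stand-in extractor output
domain_head = torch.nn.Linear(32, 2)                # sim-vs-real classifier
logits = domain_head(GradReverse.apply(features, 1.0))
labels = torch.randint(0, 2, (16,))                 # 0 = synthetic, 1 = real
loss = torch.nn.functional.cross_entropy(logits, labels)
loss.backward()                                     # features.grad is reversed
```

Minimizing the domain classifier's loss while reversing its gradient into the backbone pushes the learned representations to be indistinguishable across the synthetic and real domains.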
Reviewers have praised the clarity of the paper and thoroughness of the experiments, and noted that this research was “very relevant to demonstrating which types of variation (ie. map design, and number of vehicles) actually matter to self-driving car simulators”.
Reviewers’ claims about “novelty” are easy to make and to dismiss, and I note that the authors provide an in-depth mathematical formalism and analysis of the simulation-reality gap.
Reviewers also had concerns about missing evaluation and comparison of SOTA models on nuScenes (TDjR and z4FW) - the authors have addressed these by running additional experiments comparing their method with “HorizonLiDAR3D" from the 3D detection track and the domain adaptation track in Waymo Open Dataset Challenge at CVPR 2020, training the models on CARLA; moreover reviewer z4FW praised the “focus on what parts of the simulator are actually relevant to transfer between datasets”. Reviewers gzio and LRhN had additional small concerns about photorealism, focus on bird eye view, heuristics and spatial priors, and the authors committed to answer them.
Reviewers TDjR and z4FW did not respond to the authors' rebuttal, but I believe that, had they responded, they would have increased their scores. I am therefore willing to challenge these two reviewers and promote this paper to acceptance.
| train | [
"F1HCfmybI3w",
"1lZ0l3Bf0eh",
"f315L96wJ7A",
"a59iAydJpow",
"M12X8iOAXTd",
"lkFUE5BUtDB",
"lSRoJPPb7dh",
"Spyoufc4Jao",
"fkV0m6VH0v",
"uOWKTSbZhd",
"MW5MBAXf8ir",
"JbWT1d-5BDd",
"8CCZvs_NKjG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a method for both generating data in simulation and training with the generated data for the bird's-eye-view (BEV) vehicle segmentation task. It takes inspiration from recent theoretical advances in domain adaptation (DA) and frames learning from simulation as DA. It also points out special prop... | [
7,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
3,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_ZfIO21FYv4",
"uOWKTSbZhd",
"nips_2021_ZfIO21FYv4",
"fkV0m6VH0v",
"nips_2021_ZfIO21FYv4",
"JbWT1d-5BDd",
"MW5MBAXf8ir",
"nips_2021_ZfIO21FYv4",
"M12X8iOAXTd",
"F1HCfmybI3w",
"8CCZvs_NKjG",
"f315L96wJ7A",
"nips_2021_ZfIO21FYv4"
] |
nips_2021_Sh_MDcDUD5e | KS-GNN: Keywords Search over Incomplete Graphs via Graphs Neural Network | Keyword search is a fundamental task to retrieve information that is the most relevant to the query keywords. Keyword search over graphs aims to find subtrees or subgraphs containing all query keywords ranked according to some criteria. Existing studies all assume that the graphs have complete information. However, real-world graphs may contain some missing information (such as edges or keywords), thus making the problem much more challenging. To solve the problem of keyword search over incomplete graphs, we propose a novel model named KS-GNN based on the graph neural network and the auto-encoder. By considering the latent relationships and the frequency of different keywords, the proposed KS-GNN aims to alleviate the effect of missing information and is able to learn low-dimensional representative node embeddings that preserve both graph structure and keyword features. Our model can effectively answer keyword search queries with linear time complexity over incomplete graphs. The experiments on four real-world datasets show that our model consistently achieves better performance than state-of-the-art baseline methods in graphs having missing information.
| accept | This paper addresses the novel and technically interesting problem of answering queries on graphs with missing information such as edges and nodes.
The proposed problem is well motivated, and the proposed method based on autoencoders and GNNs, although a combination of known techniques, sounds reasonable and constitutes a solid technical contribution; the experimental evaluations are also fairly thorough across a variety of settings.
Overall, this paper is expected to be of interest to the NeurIPS community and is worth accepting.
"QXIb4Hkqwz4",
"TJGs2qrHlD",
"EoDhwlgDkRq",
"jLd2VBRRVDN",
"pfb1ysnVOoD",
"hqUpSJ7czD",
"X1suAoTYADA",
"fEtPMShwMH1"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the author's replies for my review concerns as well as others. The authors generally addressed my questions and provided a satisfactory response. I would stick to my current score evaluation considering the innovation, significance and clarity. ",
" Thanks for replying to my questions. I believe man... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"jLd2VBRRVDN",
"pfb1ysnVOoD",
"fEtPMShwMH1",
"hqUpSJ7czD",
"X1suAoTYADA",
"nips_2021_Sh_MDcDUD5e",
"nips_2021_Sh_MDcDUD5e",
"nips_2021_Sh_MDcDUD5e"
] |
nips_2021_ZKbZ4mebI9l | Reconstruction for Powerful Graph Representations | Graph neural networks (GNNs) have limited expressive power, failing to represent many graph classes correctly. While more expressive graph representation learning (GRL) alternatives can distinguish some of these classes, they are significantly harder to implement, may not scale well, and have not been shown to outperform well-tuned GNNs in real-world tasks. Thus, devising simple, scalable, and expressive GRL architectures that also achieve real-world improvements remains an open challenge. In this work, we show the extent to which graph reconstruction---reconstructing a graph from its subgraphs---can mitigate the theoretical and practical problems currently faced by GRL architectures. First, we leverage graph reconstruction to build two new classes of expressive graph representations. Secondly, we show how graph reconstruction boosts the expressive power of any GNN architecture while being a (provably) powerful inductive bias for invariances to vertex removals. Empirically, we show how reconstruction can boost GNN's expressive power---while maintaining its invariance to permutations of the vertices---by solving seven graph property tasks not solvable by the original GNN. Further, we demonstrate how it boosts state-of-the-art GNN's performance across nine real-world benchmark datasets.
| accept | The reviewers agreed that this work provides a valuable contribution to GNN research by expanding on the known connections between graph reconstruction theory and graph representation learning. While sometimes falling behind more powerful architectures, the proposed method is shown to consistently improve upon vanilla GNNs.
The reviewers originally expressed some concerns about the rigorousness of the mathematical statements. However, these were addressed during the rebuttal phase.
I thus recommend the (revised) paper for publication. | train | [
"G8o_205vu7w",
"oIJwLIkCp1M",
"kolZy4MaEsK",
"-rzBzDh3RQM",
"JP35Q56sgtF",
"DfqaXnpgVs6",
"0Uv_Or-jj6K",
"f7rgV8wZr45",
"GSBgjA6uN90",
"X6DuM0XHkbW",
"6ZTBJ7bHy1g",
"DSXZY3iZrI",
"NoC-Xnvhlrf",
"M1mjQiEqKQp",
"1m_YuhCMZ5Q",
"aGCjXUe2yuS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed several kinds of GNN models that integrate the representations of subgraphs to obtain graph representations.\n\nFirst, the k-reconstruction NN is defined using the representation of a subgraph of size k for the input graph G. Under the reconstruction conjecture, the k-reconstruction NN is prove... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"nips_2021_ZKbZ4mebI9l",
"nips_2021_ZKbZ4mebI9l",
"-rzBzDh3RQM",
"JP35Q56sgtF",
"GSBgjA6uN90",
"0Uv_Or-jj6K",
"DSXZY3iZrI",
"6ZTBJ7bHy1g",
"NoC-Xnvhlrf",
"1m_YuhCMZ5Q",
"aGCjXUe2yuS",
"oIJwLIkCp1M",
"G8o_205vu7w",
"1m_YuhCMZ5Q",
"nips_2021_ZKbZ4mebI9l",
"nips_2021_ZKbZ4mebI9l"
] |
nips_2021_WBuLBaoEKNK | Revealing and Protecting Labels in Distributed Training | Distributed learning paradigms such as federated learning often involve transmission of model updates, or gradients, over a network, thereby avoiding transmission of private data. However, it is possible for sensitive information about the training data to be revealed from such gradients. Prior works have demonstrated that labels can be revealed analytically from the last layer of certain models (e.g., ResNet), or they can be reconstructed jointly with model inputs by using Gradients Matching [Zhu et al.] with additional knowledge about the current state of the model. In this work, we propose a method to discover the set of labels of training samples from only the gradient of the last layer and the id to label mapping. Our method is applicable to a wide variety of model architectures across multiple domains. We demonstrate the effectiveness of our method for model training in two domains - image classification, and automatic speech recognition. Furthermore, we show that existing reconstruction techniques improve their efficacy when used in conjunction with our method. Conversely, we demonstrate that gradient quantization and sparsification can significantly reduce the success of the attack.
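For intuition, the classical single-example version of this observation can be sketched in a few lines of numpy; the paper's recovery of label *sets* from batch gradients is more general than this illustration, and the network sizes and label below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
C, d = 10, 32
W, b = rng.standard_normal((C, d)) * 0.1, np.zeros(C)
h = np.abs(rng.standard_normal(d))   # last hidden activations (post-ReLU, >= 0)
y = 7                                # ground-truth label the attacker wants

logits = W @ h + b
p = np.exp(logits - logits.max()); p /= p.sum()
grad_b = p.copy(); grad_b[y] -= 1.0  # d(cross-entropy)/d(bias) = p - onehot(y)
grad_W = np.outer(grad_b, h)         # d(cross-entropy)/dW = (p - onehot(y)) h^T

# Attack: only the true class has a negative bias gradient, since p_c is in (0, 1).
print("recovered from grad_b:", int(np.argmin(grad_b)))
# With nonnegative activations, row sums of grad_W leak the same information.
print("recovered from grad_W:", int(np.argmin(grad_W.sum(axis=1))))
```

Quantizing or sparsifying the transmitted gradient perturbs exactly these signs and magnitudes, which is consistent with the paper's finding that such compression reduces the attack's success.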
| accept | This paper gives an interesting way to reveal label information from the transmitted gradients in distributed training. Empirically, the paper also shows that techniques for communication reduction mitigate the privacy leakage of label information. Overall, this paper gives new insights into what information can be leaked from the model updates in federated learning. The authors should incorporate the reviewers' suggestions in the next revision. | train | [
"ytw71-sG5z0",
"GnRxr4KLOMc",
"MgsAQZKzWeO",
"xRO0j_KH4nj",
"blOrAcuUqi-",
"Ll94VjXhUSL",
"fuNlTboJYSV",
"l0z_xcqRU3l",
"aCpcdNYQLDg",
"yt14rHhPm9u",
"s2rmsGxiiv"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We will add text in the relevant sections in the paper to appropriately reflect the responses to 1,2. Thank you!",
" Thanks for responding to all my questions. Additionally the responses to 1, 2 need to be reflected somewhere in paper - perhaps on discussions, conclusions.",
" I thank the authors for the resp... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
1,
4
] | [
"GnRxr4KLOMc",
"xRO0j_KH4nj",
"blOrAcuUqi-",
"aCpcdNYQLDg",
"yt14rHhPm9u",
"s2rmsGxiiv",
"l0z_xcqRU3l",
"nips_2021_WBuLBaoEKNK",
"nips_2021_WBuLBaoEKNK",
"nips_2021_WBuLBaoEKNK",
"nips_2021_WBuLBaoEKNK"
] |
nips_2021_XIuDe2A0jDL | Solving Graph-based Public Goods Games with Tree Search and Imitation Learning | Public goods games represent insightful settings for studying incentives for individual agents to make contributions that, while costly for each of them, benefit the wider society. In this work, we adopt the perspective of a central planner with a global view of a network of self-interested agents and the goal of maximizing some desired property in the context of a best-shot public goods game. Existing algorithms for this known NP-complete problem find solutions that are sub-optimal and cannot optimize for criteria other than social welfare. In order to efficiently solve public goods games, our proposed method directly exploits the correspondence between equilibria and the Maximal Independent Set (mIS) structural property of graphs. In particular, we define a Markov Decision Process which incrementally generates an mIS, and adopt a planning method to search for equilibria, outperforming existing methods. Furthermore, we devise a graph imitation learning technique that uses demonstrations of the search to obtain a graph neural network parametrized policy which quickly generalizes to unseen game instances. Our evaluation results show that this policy is able to reach 99.5\% of the performance of the planning method while being three orders of magnitude faster to evaluate on the largest graphs tested. The methods presented in this work can be applied to a large class of public goods games of potentially high societal impact and more broadly to other graph combinatorial optimization problems.
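To illustrate the structural property being exploited, a maximal independent set can be generated incrementally by a simple greedy pass, where the node ordering plays the role of the action sequence the planner searches over; the toy graph below is illustrative:

```python
def greedy_mis(adj, order):
    """Incrementally build a maximal independent set (mIS) of a graph.

    adj   : dict mapping node -> set of neighbours
    order : sequence in which nodes are considered (the MDP's action sequence)
    """
    chosen, blocked = set(), set()
    for v in order:
        if v not in blocked:      # independent of everything chosen so far
            chosen.add(v)
            blocked.add(v)
            blocked |= adj[v]     # neighbours can no longer be added
    return chosen                 # maximal: every node is chosen or adjacent to one

# Toy 5-cycle; in a best-shot game, the mIS members are the contributors.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(greedy_mis(adj, order=[0, 1, 2, 3, 4]))  # {0, 2}
```

Different orderings yield different maximal independent sets, and hence different equilibria, which is why searching over (and imitating) good orderings lets the planner optimize criteria beyond social welfare.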
| accept | The paper tackles graph-based public goods games and proposes efficient search-based and learning-based algorithms for finding desirable equilibria. Based on all the reviews, responses, and discussions, here is the overall evaluation:
(+) It is a solid paper with a clearly defined problem (public good game on a graph) and analysis.
(+) It proposes tree-search-based (case-specific) and imitation learning-based (generalizable to new cases) algorithms for computing desirable pure-strategy Nash equilibria, and these algorithms show superior performance in the experiments. The latter leverages a graph neural network and is a new example of learning-powered combinatorial optimization with very good performance.
(-) The description of the algorithm, figure, and experimental results lack some details. The authors agree to provide additional results and details in their responses, which partially addresses the reviewers’ concerns.
(-) The team is not fully sure whether it can lead to a significant impact on society as it is hard to apply this algorithm to guide individuals in practice in a concrete application. Since the analysis of equilibrium is an important step towards a better understanding of public good games, the team still thinks the work is of value to the community.
Overall, the reviewer team has a positive view of the paper. We suggest the authors make changes they promised in the responses to improve the paper. | train | [
"Yp6WkKtPcJv",
"AQpkJuwmsYO",
"n58PFc2Xku",
"h5fblcCdA8T",
"GwHjg4L2iY",
"I9H8-V-WU5D",
"txIBtASxrAI"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The problem studied by the authors is finding pure Nash equilibria in public goods games on graphs that maximize certain functions (e.g., social welfare, fairness). The authors search over maximal independent sets on the graph, a connection that was proven equivalent to PSNE by previous authors. Past approaches in... | [
6,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
4,
2
] | [
"nips_2021_XIuDe2A0jDL",
"GwHjg4L2iY",
"Yp6WkKtPcJv",
"txIBtASxrAI",
"I9H8-V-WU5D",
"nips_2021_XIuDe2A0jDL",
"nips_2021_XIuDe2A0jDL"
] |
nips_2021_Q_64PF6XNut | Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence | Areas under ROC (AUROC) and precision-recall curves (AUPRC) are common metrics for evaluating classification performance for imbalanced problems. Compared with AUROC, AUPRC is a more appropriate metric for highly imbalanced datasets. While stochastic optimization of AUROC has been studied extensively, principled stochastic optimization of AUPRC has been rarely explored. In this work, we propose a principled technical method to optimize AUPRC for deep learning. Our approach is based on maximizing the averaged precision (AP), which is an unbiased point estimator of AUPRC. We cast the objective into a sum of dependent compositional functions with inner functions dependent on random variables of the outer level. We propose efficient adaptive and non-adaptive stochastic algorithms named SOAP with provable convergence guarantee under mild conditions by leveraging recent advances in stochastic compositional optimization. Extensive experimental results on image and graph datasets demonstrate that our proposed method outperforms prior methods on imbalanced problems in terms of AUPRC. To the best of our knowledge, our work represents the first attempt to optimize AUPRC with provable convergence. The SOAP has been implemented in the libAUC library at https://libauc.org/.
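For reference, the averaged precision objective being maximized already exhibits the nested, compositional structure the paper exploits: an outer average over positives of ratios whose numerator and denominator are inner sums over the whole data. The numpy sketch below (synthetic imbalanced data, no tie handling) is illustrative rather than the paper's surrogate estimator:

```python
import numpy as np

def average_precision(scores, labels):
    """AP = mean over positives i of (#positives ranked at or above i) / (rank of i)."""
    order = np.argsort(-scores)            # sort by decreasing score
    labels = labels[order]
    ranks = np.arange(1, len(labels) + 1)  # 1-based ranks
    pos_cum = np.cumsum(labels)            # positives seen up to each rank
    prec_at_pos = pos_cum[labels == 1] / ranks[labels == 1]
    return prec_at_pos.mean()              # outer average over positives

rng = np.random.default_rng(0)
labels = (rng.random(1000) < 0.05).astype(float)   # heavily imbalanced
scores = labels + 0.8 * rng.standard_normal(1000)  # noisy but informative scores
print(average_precision(scores, labels))
```

Replacing the hard rank indicators with smooth surrogates turns this into the sum of dependent compositional functions that the SOAP algorithms optimize with provable convergence.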
| accept | The paper proposes a stochastic optimization technique for maximizing the AUPRC metric, a popular metric in the evaluation of models on class-imbalanced problems. The paper makes compelling theoretical contributions, which are then supported by strong experimental results. All the reviewers lean towards accepting the paper.
Some questions were raised about how closely the proposed surrogate approximations align with the true metric, and how the method fares with different choices of surrogates, to which the authors have provided a satisfactory response with additional experimental results. We strongly encourage the authors to include the additional results in the final paper (perhaps in the appendix if there isn't enough space in the main text). | train | [
"2a3LU0uzngj",
"8pMBTk7Ph6Q",
"JR4TdZFMSyZ",
"ITnrmmKTWmj",
"wDP3PSqXTED",
"AEUrdn_yV8j",
"l3yVgnrPgu0",
"RisQ1AzITKT",
"Sb-fBO_Bx0m",
"6jhYfQ8qNmE",
"NifZufNsZM_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper studies classification in an imbalanced data setting, where the goal is to obtain large area under the precision-recall curve (AUPRC). This paper proposes a stochastic optimization method procedure for directly optimizing the AUPRC. Their procedure is based on optimizing a continuous approximation to th... | [
7,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_Q_64PF6XNut",
"nips_2021_Q_64PF6XNut",
"ITnrmmKTWmj",
"6jhYfQ8qNmE",
"nips_2021_Q_64PF6XNut",
"nips_2021_Q_64PF6XNut",
"wDP3PSqXTED",
"nips_2021_Q_64PF6XNut",
"2a3LU0uzngj",
"wDP3PSqXTED",
"8pMBTk7Ph6Q"
] |
nips_2021_CzVPfeqPOBu | Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization | Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs. Some recent work started to study the pre-training of GNNs. However, none of them provide theoretical insights into the design of their frameworks, or clear requirements and guarantees towards their transferability. In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs. Firstly, we propose a novel view towards the essential graph information and advocate the capturing of it as the goal of transferable GNN training, which motivates the design of EGI (Ego-Graph Information maximization) to analytically achieve this goal. Secondly, when node features are structure-relevant, we conduct an analysis of EGI transferability regarding the difference between the local graph Laplacians of the source and target graphs. We conduct controlled synthetic experiments to directly justify our theoretical conclusions. Comprehensive experiments on two real-world network datasets show consistent results in the analyzed setting of direct-transferring, while those on large-scale knowledge graphs show promising results in the more practical setting of transferring with fine-tuning.
| accept | This paper studies, in the context of transfer learning, the extent to which trained graph neural networks can transfer to new data. Concretely, the authors derive criteria to assess the transferability of a GNN based on ego-graph information (EGI) maximization. In initial reviews, most reviewers marked the paper above the acceptance threshold, and following rebuttal and discussion we have reached a unanimous decision to recommend its acceptance.
Some remarks have been raised (especially by one reviewer) regarding the clarity and quality of presentation, but in discussion a consensus has been reached that the revisions required to address these concerns can viably be regarded as only minor revisions that should not prevent the paper from being accepted, especially given the author responses indicating that they will revise the manuscript appropriately.
Therefore, I recommend this paper be accepted and reiterate to the authors to carefully go over the reviewer comments and follow up on the indicated or planned revisions from their responses. | train | [
"RTh3jRlzM4h",
"OhACuj6p5jL",
"w1YU1DG4IaA",
"EV6VL1IAFbn",
"ZVPae0xQuWU",
"nvZJ6IWVHgj",
"425wnIlQ5zS",
"1h8fn4GvzG",
"mV0OTrgbbO",
"WPENlv43p3I",
"oezABBsYczU",
"X4A5OmFj7kh"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 7fvg,\n\nWe are grateful for your reconsideration. Thanks to your questions, we are more aware of the proper improvements to be made on the technical clarity, and will do our best to achieve them in the revision.",
"The paper establishes a theoretically grounded and practically useful framework fo... | [
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
-1,
2,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"w1YU1DG4IaA",
"nips_2021_CzVPfeqPOBu",
"1h8fn4GvzG",
"nvZJ6IWVHgj",
"nips_2021_CzVPfeqPOBu",
"425wnIlQ5zS",
"ZVPae0xQuWU",
"OhACuj6p5jL",
"oezABBsYczU",
"X4A5OmFj7kh",
"nips_2021_CzVPfeqPOBu",
"nips_2021_CzVPfeqPOBu"
] |
nips_2021_-Z7FuZGUzv | You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership | Xuxi Chen, Tianlong Chen, Zhenyu Zhang, Zhangyang Wang | accept | We thank the authors for this submission. Overall, the paper is about a topology-based ownership verification mechanism that can prevent lottery-ticket theft under various verification schemes and attacks.
The paper well-motivates the approach. The authors have provided extensive responses to the concerns raised and the AC + reviewers really thank them for their effort. Overall, the new results obtained during the rebuttal definitely improve the quality of the paper. We all believe that the inclusion of these results during the rebuttal period is something that does not heavily change the message of this paper.
There was discussion and consensus that this work is interesting. Taking into account the issues and concerns raised by the reviewers, their main conclusion during further discussion was that this paper deserves publication, given the fixes the authors promised during the discussion period.
"0f2YfMUeX01",
"fU5mZnXgEzY",
"dTkQI3alf5T",
"MfR1kcfAOJy",
"HuAyOA-0nn7",
"0EByiMmrRv5",
"X4GJjPNMUT2",
"fPazMedCyKR",
"wfthNSbf5GX",
"wpSVocmm6Pf",
"Lv82IwYx1UT",
"mXJekS4nXN_",
"lmU-V3ghEty",
"jPWTxwpcukT"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Three different approaches to embed ownership information in the topological structure of sparse neural networks (i.e. lottery tickets) and their resilience against specific attacks are presented. Strengths:\n+ Ownership verification of lottery tickets is a novel and relevant problem.\n+ Three proposals to embed ... | [
7,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
2,
-1,
-1,
3,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_-Z7FuZGUzv",
"wpSVocmm6Pf",
"Lv82IwYx1UT",
"nips_2021_-Z7FuZGUzv",
"wfthNSbf5GX",
"nips_2021_-Z7FuZGUzv",
"mXJekS4nXN_",
"jPWTxwpcukT",
"MfR1kcfAOJy",
"lmU-V3ghEty",
"0f2YfMUeX01",
"0EByiMmrRv5",
"nips_2021_-Z7FuZGUzv",
"nips_2021_-Z7FuZGUzv"
] |
nips_2021_Kug2s3rHiG3 | Complexity Lower Bounds for Nonconvex-Strongly-Concave Min-Max Optimization | Haochuan Li, Yi Tian, Jingzhao Zhang, Ali Jadbabaie | accept | The paper provides a complexity lower bound for finding first-order stationary points of smooth nonconvex-strongly-concave min-max problems. The reviewers were unanimous in their appreciation of the construction of the lower bound and the result of the paper. The main concern with the paper is the presence of concurrent work, which decreases the novelty of the paper's result. The reviewers have nevertheless found the paper solid and the improvements/differences somewhat nicely articulated. I strongly suggest that the authors clearly highlight the differences between this paper and the existing works in the final version. | train | [
"8RXhIDvT6SV",
"cElAsQgkQgI",
"VrEHkyS0TDI",
"u_IrHQjfZf",
"Ba_pJ8GKzn",
"AA7ySRAi-l8",
"2PnxYcQlK9a",
"HGCQ6yehUE",
"SKJlaMmHH7m",
"wAxoX0a-ycC",
"cCNdEHz6p0v",
"upGu8JpvRpT"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper establishes lower bounds for finding approximate stationary points of min_x max_y f(x;y), where f is nonconvex in x and strongly concave in y. With deterministic gradients, the lower bound matches the convergence rate of the accelerated \"Minimax-PPA\" algorithm from [Lin et al. '20b], settling the orac... | [
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_Kug2s3rHiG3",
"wAxoX0a-ycC",
"AA7ySRAi-l8",
"HGCQ6yehUE",
"nips_2021_Kug2s3rHiG3",
"SKJlaMmHH7m",
"upGu8JpvRpT",
"cCNdEHz6p0v",
"Ba_pJ8GKzn",
"8RXhIDvT6SV",
"nips_2021_Kug2s3rHiG3",
"nips_2021_Kug2s3rHiG3"
] |
nips_2021_vPVTsuJtGky | Early-stopped neural networks are consistent | This work studies the behavior of shallow ReLU networks trained with the logistic loss via gradient descent on binary classification data where the underlying data distribution is general, and the (optimal) Bayes risk is not necessarily zero. In this setting, it is shown that gradient descent with early stopping achieves population risk arbitrarily close to optimal in terms of not just logistic and misclassification losses, but also in terms of calibration, meaning the sigmoid mapping of its outputs approximates the true underlying conditional distribution arbitrarily finely. Moreover, the necessary iteration, sample, and architectural complexities of this analysis all scale naturally with a certain complexity measure of the true conditional model. Lastly, while it is not shown that early stopping is necessary, it is shown that any classifier satisfying a basic local interpolation property is inconsistent.
| accept | The paper is very well-written, clear, and the reviewers found the main theorem a strong contribution.
Please take into consideration the reviewers' comments and suggestions in the final version.
"RzETmGp64TM",
"tH2up_Jm16v",
"9MCRXqgPwyW",
"2fDorh2Uei",
"GiKvHsklm7F",
"lQBJ1Hgw8G",
"BM-3DtF-U2X",
"6hHQ2zVeXmp",
"G_sIS2nDn_E",
"NGCm4i-KAHb",
"yY2tJtoKNce",
"bQdBwLQrV6E",
"O6n3_1-NHsC",
"f_VfkVYAFXG",
"mv-Rm2gUTR7"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for going through our response, and for your support of our work. We will gladly answer any further questions.",
" Thank you for going through our response, and for your support of our work. We will gladly answer any further questions.",
"This paper studies the training of single layer neural netwo... | [
-1,
-1,
7,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"GiKvHsklm7F",
"2fDorh2Uei",
"nips_2021_vPVTsuJtGky",
"BM-3DtF-U2X",
"yY2tJtoKNce",
"nips_2021_vPVTsuJtGky",
"9MCRXqgPwyW",
"mv-Rm2gUTR7",
"f_VfkVYAFXG",
"nips_2021_vPVTsuJtGky",
"lQBJ1Hgw8G",
"O6n3_1-NHsC",
"nips_2021_vPVTsuJtGky",
"nips_2021_vPVTsuJtGky",
"nips_2021_vPVTsuJtGky"
] |
nips_2021_bSgieZ8-be | NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM | Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained Transformer networks. However, these models often contain hundreds of millions or even billions of parameters, bringing challenges to online deployment due to latency constraints. Recently, hardware manufacturers have introduced dedicated hardware for NxM sparsity to provide the flexibility of unstructured pruning with the runtime efficiency of structured approaches. NxM sparsity permits arbitrarily selecting M parameters to retain from a contiguous group of N in the dense representation. However, due to the extremely high complexity of pre-trained models, the standard sparse fine-tuning techniques often fail to generalize well on downstream tasks, which have limited data resources. To address such an issue in a principled manner, we introduce a new learning framework, called NxMTransformer, to induce NxM semi-structured sparsity on pretrained language models for natural language understanding to obtain better performance. In particular, we propose to formulate the NxM sparsity as a constrained optimization problem and use Alternating Direction Method of Multipliers (ADMM) to optimize the downstream tasks while taking the underlying hardware constraints into consideration. ADMM decomposes the NxM sparsification problem into two sub-problems that can be solved sequentially, generating sparsified Transformer networks that achieve high accuracy while being able to effectively execute on newly released hardware. We apply our approach to a wide range of NLP tasks, and our proposed method is able to achieve 1.7 points higher accuracy in GLUE score than current best practices. Moreover, we perform detailed analysis on our approach and shed light on how ADMM affects fine-tuning accuracy for downstream tasks. Finally, we illustrate how NxMTransformer achieves additional performance improvement with knowledge distillation based methods.
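For concreteness, the hardware constraint and the corresponding Euclidean projection (the kind of sub-problem an ADMM-style scheme alternates with) can be sketched as follows; the group sizes n = 4, m = 2 mirror common sparse-tensor-core configurations, and the function name is illustrative:

```python
import numpy as np

def project_nxm(W, n=4, m=2):
    """Project a weight matrix onto the NxM-sparse set: in every contiguous group
    of n weights along the input dimension, keep the m largest magnitudes.
    Magnitude selection is the Euclidean projection onto this constraint set."""
    out_dim, in_dim = W.shape
    assert in_dim % n == 0
    groups = W.reshape(out_dim, in_dim // n, n)
    # Indices of the (n - m) smallest-magnitude entries in each group.
    drop = np.argsort(np.abs(groups), axis=-1)[..., : n - m]
    mask = np.ones_like(groups)
    np.put_along_axis(mask, drop, 0.0, axis=-1)
    return (groups * mask).reshape(out_dim, in_dim)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
print(project_nxm(W, n=4, m=2))  # exactly 2 nonzeros in every group of 4
```

Applying this projection once at the end of dense fine-tuning is the naive baseline; ADMM instead alternates between the (relaxed) task loss and this projection so the network gradually adapts to the constraint.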
| accept | This paper uses ADMM to induce structured weight sparsity during neural net finetuning.
It specifically targets a very local sparsity pattern (M nonzeros in every contiguous block of N weights, following the paper's convention) which is supported in recent NVIDIA hardware and libraries with near-linear speedups. This is a different constraint scenario than most structured sparsity approaches, and ADMM does seem like a good fit.
As such, reviewers find the contribution valuable but also highlight some weaknesses. The experimental methodology seems susceptible to noise and multiple runs would help; however, simply getting comparable performance seems sufficient when sparsity is the goal. Reviewers, and possibly all readers, are left wanting speed comparisons; the authors should place more emphasis on the library and hardware support for NxM sparsity to clarify that this is established independently of this work. The writing and presentation should be polished as well.
"-KVkFqbNBZg",
"zzEovhLpE-h",
"3CKA4gZCybq",
"RZqZVjR465i",
"_vi9qkAjcC4",
"jdApoCIXGW0",
"5GgY8hgVbG",
"L-JniIzm32",
"zBC61l5vPzF",
"RYhjsQBrduS",
"7shoQUqTpVo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper introduces a new method called NxMTransformer to induce NxM semi-structured sparsity on pertained language models, for recent hardware dedicated for efficient NxM sparsity (e.g. newly released Sparse Tensor Core). NxMTransformer fromulates the NxM sparsity as a constrained optimization problem and use A... | [
6,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_bSgieZ8-be",
"nips_2021_bSgieZ8-be",
"zBC61l5vPzF",
"nips_2021_bSgieZ8-be",
"L-JniIzm32",
"-KVkFqbNBZg",
"RYhjsQBrduS",
"RZqZVjR465i",
"zzEovhLpE-h",
"7shoQUqTpVo",
"nips_2021_bSgieZ8-be"
] |
nips_2021_Mx-iNoxLU4t | Reliable Decisions with Threshold Calibration | Decision makers rely on probabilistic forecasts to predict the loss of different decision rules before deployment. When the forecasted probabilities match the true frequencies, predicted losses will be accurate. Although perfect forecasts are typically impossible, probabilities can be calibrated to match the true frequencies on average. However, we find that this \textit{average} notion of calibration, which is typically used in practice, does not necessarily guarantee accurate decision loss prediction. Specifically in the regression setting, the loss of threshold decisions, which are decisions based on whether the forecasted outcome falls above or below a cutoff, might not be predicted accurately. We propose a stronger notion of calibration called threshold calibration, which is exactly the condition required to ensure that decision loss is predicted accurately for threshold decisions. We provide an efficient algorithm which takes an uncalibrated forecaster as input and provably outputs a threshold-calibrated forecaster. Our procedure allows downstream decision makers to confidently estimate the loss of any threshold decision under any threshold loss function. Empirically, threshold calibration improves decision loss prediction without compromising on the quality of the decisions in two real-world settings: hospital scheduling decisions and resource allocation decisions.
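To make the distinction concrete, the snippet below verifies the weaker *average* notion of calibration that the abstract argues is insufficient: the empirical coverage of each predicted alpha-quantile should match alpha. It is a toy check assuming a Gaussian outcome, with illustrative names; threshold calibration additionally requires coverage to hold conditionally on which side of a cutoff the forecast falls, which this check does not test.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=10_000)

def predicted_quantile(alpha, loc=2.0, scale=1.0):
    # A toy forecaster that happens to predict the true distribution.
    return norm.ppf(alpha, loc=loc, scale=scale)

for alpha in (0.1, 0.5, 0.9):
    coverage = np.mean(y <= predicted_quantile(alpha))
    print(f"alpha={alpha:.1f}  empirical coverage={coverage:.3f}")
```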
| accept | There is general agreement among the reviewers on the novelty and significance of the work. However, it is still clear based on the reviews that the presentation of the submitted paper could be further improved. That said, given the satisfactory work the authors have done during the discussion period, I am happy to suggest the acceptance of this paper and strongly encourage the authors to improve their paper based on the discussion with the reviewers. The major points to revisit are: i) polishing of the notation (as suggested by vtF8); ii) extension of the related work section (as suggested by reviewer APiD); iii) clarifications on the scalability of the proposed algorithm and on the implementation of baselines (as suggested by reviewer VsY9). | train | [
"gfdqCahtnte",
"tGTjoVOkZT",
"tLECEZT852Q",
"ZTpS74oy8CX",
"KN84-0vSI6",
"dk-rdVdIFvS",
"hEiWwJh-zJT",
"bAo_HNL5vYk",
"XnOa9NFawN",
"KWM2oTbYJ4T"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the problem of calibrating the conditional CDFs from a regression model. \n\nThe authors propose a new definition of threshold calibration, which requires the conditional CDFs to have calibrated quantiles when these CDFs have a $\\alpha$-quantile that are all on the same side regarding a give... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
4,
9,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"nips_2021_Mx-iNoxLU4t",
"ZTpS74oy8CX",
"hEiWwJh-zJT",
"bAo_HNL5vYk",
"KWM2oTbYJ4T",
"gfdqCahtnte",
"XnOa9NFawN",
"nips_2021_Mx-iNoxLU4t",
"nips_2021_Mx-iNoxLU4t",
"nips_2021_Mx-iNoxLU4t"
] |
nips_2021_gbcsmD3Iznu | End-to-End Weak Supervision | Aggregating multiple sources of weak supervision (WS) can ease the data-labeling bottleneck prevalent in many machine learning applications, by replacing the tedious manual collection of ground truth labels. Current state-of-the-art approaches that do not use any labeled training data, however, require two separate modeling steps: learning a probabilistic latent variable model based on the WS sources -- making assumptions that rarely hold in practice -- followed by downstream model training. Importantly, the first step of modeling does not consider the performance of the downstream model. To address these caveats we propose an end-to-end approach for directly learning the downstream model by maximizing its agreement with probabilistic labels generated by reparameterizing previous probabilistic posteriors with a neural network. Our results show improved performance over prior work in terms of end model performance on downstream test sets, as well as in terms of improved robustness to dependencies among weak supervision sources.
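A minimal sketch of the end-to-end idea follows; the architectures, sizes, and symmetric agreement loss are illustrative assumptions, not the paper's exact design. A label network maps the vector of weak-source votes to a probabilistic label while the downstream model maps raw features to a prediction, and the two are trained jointly to agree.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EndToEndWS(nn.Module):
    """Jointly trained label model (votes -> soft label) and downstream
    end model (features -> prediction)."""

    def __init__(self, num_sources, num_features, hidden=32):
        super().__init__()
        self.label_net = nn.Sequential(
            nn.Linear(num_sources, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.end_model = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def agreement_loss(self, votes, features):
        y_soft = torch.sigmoid(self.label_net(votes))
        y_pred = torch.sigmoid(self.end_model(features))
        # Detaching each side's target is one common way to stabilize a
        # bidirectional agreement objective (an assumption here, not
        # necessarily the paper's choice).
        return (F.binary_cross_entropy(y_pred, y_soft.detach())
                + F.binary_cross_entropy(y_soft, y_pred.detach()))
```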
| accept | This paper proposes a deep learning, end-to-end approach to learning from multiple noisy labeling functions. The proposed method outperforms existing techniques that use probabilistic graphical models for label aggregation. One of the major criticisms is that the paper does not analyze (theoretically or empirically) why exactly the proposed technique works well. The reviewers believe that they have suggested different empirical investigations, but the authors did not engage with these ideas during the rebuttal period. The reviewers were willing to raise their rating if additional empirical investigation into the inner workings of the technique had been conducted. Other than the lack of analysis, the reviewers do not present a strong criticism of the technique and are happy with the empirical results. I do resonate with the authors that deep learning techniques can be challenging to analyze, and my overall reasoning is in agreement with reviewer #zED9 that the novelty of the proposed method, along with the improved empirical performance, can substitute for the lack of theoretical or empirical justification to some extent. Overall, I believe the publication of this paper can help the research community make progress on finding better algorithms for aggregating multiple noisy labeling sources, and better understanding of this algorithm can be left for future work. Hence, I recommend acceptance as a poster.
| train | [
"4ZJKTIbg6p",
"pDPxUmh6pS",
"dteY_jpp5lq",
"1uaP8yEoozI",
"8saLFfWLjve",
"QDjbhRRsml",
"hu0BE5SW276",
"priyn1TWMK",
"inpIc3VCSYm",
"2YWY-xwc5ED"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the responses to my concerns regarding the paper. After going through the rebuttal and reading the other reviews, I decided to keep my original score.",
"The authors propose an \"end to end\" approach to weakly supervised learning. Instead of obtaining ground truth labels by modeling WS sources vi... | [
-1,
5,
6,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"priyn1TWMK",
"nips_2021_gbcsmD3Iznu",
"nips_2021_gbcsmD3Iznu",
"hu0BE5SW276",
"pDPxUmh6pS",
"2YWY-xwc5ED",
"dteY_jpp5lq",
"inpIc3VCSYm",
"nips_2021_gbcsmD3Iznu",
"nips_2021_gbcsmD3Iznu"
] |
nips_2021_tqi_45ApQzF | Shift Invariance Can Reduce Adversarial Robustness | Shift invariance is a critical property of CNNs that improves performance on classification. However, we show that invariance to circular shifts can also lead to greater sensitivity to adversarial attacks. We first characterize the margin between classes when a shift-invariant {\em linear} classifier is used. We show that the margin can only depend on the DC component of the signals. Then, using results about infinitely wide networks, we show that in some simple cases, fully connected and shift-invariant neural networks produce linear decision boundaries. Using this, we prove that shift invariance in neural networks produces adversarial examples for the simple case of two classes, each consisting of a single image with a black or white dot on a gray background. This is more than a curiosity; we show empirically that with real datasets and realistic architectures, shift invariance reduces adversarial robustness. Finally, we describe initial experiments using synthetic data to probe the source of this connection.
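The core claim about the DC component is easy to verify numerically for the linear case: averaging a linear classifier's score over all circular shifts of the input collapses to a quantity that depends only on the means (DC components) of the weights and the signal. A self-contained check, with the dimension chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
w = rng.normal(size=d)   # linear classifier weights
x = rng.normal(size=d)   # input signal

# Enforce circular-shift invariance by averaging over all d shifts.
invariant_score = np.mean([w @ np.roll(x, s) for s in range(d)])

# The same number obtained from the DC components alone.
dc_score = d * np.mean(w) * np.mean(x)
assert np.allclose(invariant_score, dc_score)
```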
| accept | This paper provides theoretical and empirical evidence that the robustness of a (linear) classifier is reduced when one tries to ensure shift invariance of the input. The results are not too surprising, since classifying all shifted instances is indeed conceptually more difficult when the feature dimension is fixed. The analysis is done for two-layer networks. One of the reviewers raised a legitimate concern about whether the notion of robustness studied here is directly associated with adversarial robustness. The majority of the reviewers nevertheless think the paper is above the standard for acceptance.
"hVJy5FpPyX",
"3_w9GWBVv9l",
"CKi31bQ277f",
"c3n3zFpq5Gb",
"uSNdjwRkJrC",
"AfJN20BpqMX",
"SyT1U66X9a0",
"3Y97NUY2uqC",
"Gfe-aEtk6ME",
"AUmhRxOaSu4",
"XUbETZdQXvP",
"OwaRMvkZZJV",
"nct6R_sgN0M"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed response. I realize the main confusion comes from this difference between working with CNN's which aims to be shift-invariant versus the CNN's you provided in the experiments which are provably shift-invariant. Then the line of thought \"shifted versions of adversarial examples will al... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"SyT1U66X9a0",
"uSNdjwRkJrC",
"c3n3zFpq5Gb",
"AfJN20BpqMX",
"nct6R_sgN0M",
"OwaRMvkZZJV",
"AUmhRxOaSu4",
"nips_2021_tqi_45ApQzF",
"XUbETZdQXvP",
"nips_2021_tqi_45ApQzF",
"nips_2021_tqi_45ApQzF",
"nips_2021_tqi_45ApQzF",
"nips_2021_tqi_45ApQzF"
] |
nips_2021_C5jDWzrZak | Wisdom of the Crowd Voting: Truthful Aggregation of Voter Information and Preferences | We consider two-alternative elections where voters' preferences depend on a state variable that is not directly observable. Each voter receives a private signal that is correlated with the state variable. As a special case, our model captures the common scenario where voters can be categorized into three types: those who always prefer one alternative, those who always prefer the other, and contingent voters whose preferences depend on the state. In this setting, even if every voter is a contingent voter, agents voting according to their private information need not result in the adoption of the universally preferred alternative, because the signals can be systematically biased. We present a mechanism that elicits and aggregates the private signals from the voters, and outputs the alternative that is favored by the majority. In particular, voters truthfully reporting their signals forms a strong Bayes Nash equilibrium (where no coalition of voters can deviate and receive a better outcome).
| accept | There is not much substantive disagreement among the reviewers; the disagreement is mostly about how to value these results, in particular given the restriction to two alternatives. (Such a restriction is common in social choice due to challenges that start to appear with three or more alternatives.) The paper is left in a borderline position, but I'm inclined to give it the benefit of the doubt and recommend it for poster (in particular I think social choice results with two alternatives can be interesting and valuable). The authors should take seriously though some of the reviewers' comments to improve the presentation of the paper. | train | [
"mSC8GN6WQG",
"Kh7JXMS2TTm",
"JXVG9tzAcb",
"aqeSSrioDEL",
"oDGUhisvjAo",
"rXX1ycpYbE6",
"yumnnnanWt",
"Pd0HheLIFfg",
"F3gm9CjMlhF",
"OG8Z5MOAce1",
"P06eO92jya3"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We noticed that there is an update in your review with an extra \"post author response\" section. We appreciate that you read our response and share with us what you think. Here, we would like to further clarify a few things you mentioned.\n\n> Two alternatives - there is certainly plenty of work that looks at se... | [
-1,
-1,
-1,
5,
6,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
4,
3
] | [
"yumnnnanWt",
"rXX1ycpYbE6",
"Pd0HheLIFfg",
"nips_2021_C5jDWzrZak",
"nips_2021_C5jDWzrZak",
"P06eO92jya3",
"aqeSSrioDEL",
"OG8Z5MOAce1",
"oDGUhisvjAo",
"nips_2021_C5jDWzrZak",
"nips_2021_C5jDWzrZak"
] |
nips_2021_5UZ-AcwFDKJ | Replay-Guided Adversarial Environment Design | Minqi Jiang, Michael Dennis, Jack Parker-Holder, Jakob Foerster, Edward Grefenstette, Tim Rocktäschel | accept | The paper proposes a unified approach to environment design in RL.
The reviewers evaluated the paper positively and recommended acceptance.
AC. | train | [
"_MBAg8g98Ag",
"vWserIb_8R",
"mZUIL5ZkO1",
"MGuGWh7Uwpv",
"YMm5DdxBzfL",
"fYMAWXlXQtT",
"K3NfJCCuyIo",
"edgT1IH9_Hz",
"gSAh7ixYp-1",
"sZ8Bt3CfscK",
"vmSlCqzmvE",
"8EYb9XLw7_4",
"oXBMinAIyLQ",
"AQ1p8I9kgXX",
"DACRReMRw9",
"chCuJ9CCEj6",
"jXRr6zzgl-B"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad that our additional experiments and clarifications addressed the reviewer's concerns, and thank the reviewer for their score increase. We will provide the extended curves with a longer time horizon in the camera-ready.",
" I appreciate the authors' efforts in including the additional experiments and... | [
-1,
-1,
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"vWserIb_8R",
"vmSlCqzmvE",
"nips_2021_5UZ-AcwFDKJ",
"fYMAWXlXQtT",
"nips_2021_5UZ-AcwFDKJ",
"AQ1p8I9kgXX",
"DACRReMRw9",
"mZUIL5ZkO1",
"YMm5DdxBzfL",
"jXRr6zzgl-B",
"mZUIL5ZkO1",
"chCuJ9CCEj6",
"DACRReMRw9",
"YMm5DdxBzfL",
"nips_2021_5UZ-AcwFDKJ",
"nips_2021_5UZ-AcwFDKJ",
"nips_2021... |
nips_2021_3X65eaS4PtP | There Is No Turning Back: A Self-Supervised Approach for Reversibility-Aware Reinforcement Learning | We propose to learn to distinguish reversible from irreversible actions for better informed decision-making in Reinforcement Learning (RL). From theoretical considerations, we show that approximate reversibility can be learned through a simple surrogate task: ranking randomly sampled trajectory events in chronological order. Intuitively, pairs of events that are always observed in the same order are likely to be separated by an irreversible sequence of actions. Conveniently, learning the temporal order of events can be done in a fully self-supervised way, which we use to estimate the reversibility of actions from experience, without any priors. We propose two different strategies that incorporate reversibility in RL agents, one strategy for exploration (RAE) and one strategy for control (RAC). We demonstrate the potential of reversibility-aware agents in several environments, including the challenging Sokoban game. In synthetic tasks, we show that we can learn control policies that never fail and reduce to zero the side-effects of interactions, even without access to the reward function.
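The self-supervised surrogate task is simple to set up: sample pairs of events from a trajectory and label each pair by its chronological order, then fit a binary classifier on the pairs. A minimal data-generation sketch, where the array shapes and helper name are illustrative assumptions:

```python
import numpy as np

def make_order_pairs(trajectory, num_pairs, rng):
    """Sample pairs of observations from one trajectory, labeled 1 when
    the first sampled event chronologically precedes the second."""
    T = len(trajectory)
    i = rng.integers(0, T, size=num_pairs)
    j = rng.integers(0, T, size=num_pairs)
    keep = i != j
    i, j = i[keep], j[keep]
    pairs = np.stack([trajectory[i], trajectory[j]], axis=1)
    labels = (i < j).astype(np.float32)
    return pairs, labels

rng = np.random.default_rng(0)
traj = rng.normal(size=(100, 4))                 # 100 steps, 4-dim observations
pairs, labels = make_order_pairs(traj, 256, rng)
```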
| accept | After reading each other's reviews and the authors' feedback, the reviewers discussed the merits and flaws of the paper.
The reviewers did not reach a consensus about the acceptance of this paper.
In particular, the main concern is about the quadratic complexity of the proposed approach, which has not been properly addressed by the authors' answers. Nonetheless, the majority of the reviewers think that the good performance on the Sokoban experiment shows that the algorithm can be effective in practice. Overall I think that the proposed approach is interesting and I am in favor of its acceptance.
I want to congratulate the authors and invite them to modify their paper following the reviewers' suggestions. | train | [
"Mzdgqv-rHW3",
"HKvDe7lTtl",
"TQ8wNhVD0B",
"QMMt297VJ0V",
"rIJbb2nQpn",
"rfPImsmmmGb",
"09X0xy2Yci",
"b-JHf76ZzPM"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper defines several reversibility measures in MDPs and provides corresponding results for estimating those measures. Then, a supervised learning approach is proposed to be incorporated into an RL agent for avoiding irreversible actions. The idea of inspecting reversibility is itself novel and I appreciate t... | [
5,
-1,
-1,
-1,
-1,
9,
6,
8
] | [
3,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2021_3X65eaS4PtP",
"b-JHf76ZzPM",
"09X0xy2Yci",
"Mzdgqv-rHW3",
"rfPImsmmmGb",
"nips_2021_3X65eaS4PtP",
"nips_2021_3X65eaS4PtP",
"nips_2021_3X65eaS4PtP"
] |
nips_2021_lEkPb2Rhm7 | Learning to Execute: Efficient Learning of Universal Plan-Conditioned Policies in Robotics | Applications of Reinforcement Learning (RL) in robotics are often limited by high data demand. On the other hand, approximate models are readily available in many robotics scenarios, making model-based approaches like planning a data-efficient alternative. Still, the performance of these methods suffers if the model is imprecise or wrong. In this sense, the respective strengths and weaknesses of RL and model-based planners are complementary. In the present work, we investigate how both approaches can be integrated into one framework that combines their strengths. We introduce Learning to Execute (L2E), which leverages information contained in approximate plans to learn universal policies that are conditioned on plans. In our robotic manipulation experiments, L2E exhibits increased performance when compared to pure RL, pure planning, or baseline methods combining learning and planning.
| accept | The paper combines learning and planning to increase data efficiency, motivated by robotics application scenarios. In more detail, the authors leverage the fact that in such scenarios a coarse model is often available that allows for making a coarse, but probably suboptimal, plan. The authors propose using this plan to provide shaping rewards to an RL agent that is then free to further optimize the policy. For this, they build on "plan based final volume preserving reward shaping" introduced by Schubert et al. The main contributions of the authors are:
- Allowing policies that generalize over instances by conditioning on the plan.
- Using this formulation to introduce plan-replay strategies.
Initially, the reviews were mixed, but after the discussion phase all reviewers recommend acceptance. The main reasons are:
- Strong and original technical ideas
- Mostly well-written
- The experimental evaluation was initially somewhat limited, but during the discussion phase the authors strengthened it by adding another environment and clarifying the baseline methods used.
- The method's limitations could be discussed more explicitly.
I'd like to add that I think the authors could more explicitly discuss what is novel in their paper compared to the earlier Schubert paper. E.g., the FV-RS is directly introduced in a 'universal' formulation (3), making the transition to universal policies and reward shaping functions smoother but also making this part of their contribution less explicit.
"-BUjEU1-bDb",
"xHTpCiEQ_z",
"yTFoLo0OfLG",
"uKSx1kGI5r",
"3--5H7-QAvL",
"GiYAyBnyYBx",
"OtbVw2s7aM",
"PBLKt-I_wtb",
"E7clDlA5m1_",
"BkBYZBNOARE",
"_zWhO07wgx7",
"8dw7bxSwFyQ",
"9MpiJSC1JcW",
"ECqaT8BIMJc",
"s0qJmEF86Wy",
"SZZcWQ3bJd6",
"sc8zw8DoLJb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for the detailed response. The additional experiments are helpful. ",
" Thank you very much for your clear responses.\nI noticed some misunderstandings especially about (5).\nI also welcome many clarifications you made, and appreciate your effort to conduct an additional experiment.\n\nBased on the updat... | [
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"SZZcWQ3bJd6",
"s0qJmEF86Wy",
"nips_2021_lEkPb2Rhm7",
"8dw7bxSwFyQ",
"nips_2021_lEkPb2Rhm7",
"3--5H7-QAvL",
"_zWhO07wgx7",
"SZZcWQ3bJd6",
"nips_2021_lEkPb2Rhm7",
"nips_2021_lEkPb2Rhm7",
"ECqaT8BIMJc",
"nips_2021_lEkPb2Rhm7",
"3--5H7-QAvL",
"BkBYZBNOARE",
"yTFoLo0OfLG",
"sc8zw8DoLJb",
... |
nips_2021_SGZn06ZXcG | Self-Diagnosing GAN: Diagnosing Underrepresented Samples in Generative Adversarial Networks | Despite remarkable performance in producing realistic samples, Generative Adversarial Networks (GANs) often produce low-quality samples near low-density regions of the data manifold, e.g., samples of minor groups. Many techniques have been developed to improve the quality of generated samples, either by post-processing generated samples or by pre-processing the empirical data distribution, but at the cost of reduced diversity. To promote diversity in sample generation without degrading the overall quality, we propose a simple yet effective method to diagnose and emphasize underrepresented samples during training of a GAN. The main idea is to use the statistics of the discrepancy between the data distribution and the model distribution at each data instance. Based on the observation that the underrepresented samples have a high average discrepancy or high variability in discrepancy, we propose a method to emphasize those samples during training of a GAN. Our experimental results demonstrate that the proposed method improves GAN performance on various datasets, and it is especially effective in improving the quality and diversity of sample generation for minor groups.
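The diagnosis step described in the abstract reduces to bookkeeping per-sample statistics over training: samples with a high average discrepancy or high variability in discrepancy are flagged. The sketch below illustrates this with assumed names, array layout, and a top-fraction cutoff:

```python
import numpy as np

def diagnose_underrepresented(disc_history, top_frac=0.1):
    """disc_history: (T, N) array of a per-sample discrepancy score
    (e.g., a discriminator-based statistic) recorded at T checkpoints
    for N real samples; returns indices flagged for emphasis."""
    mean, var = disc_history.mean(axis=0), disc_history.var(axis=0)
    k = max(1, int(top_frac * disc_history.shape[1]))
    flagged = set(np.argsort(-mean)[:k]) | set(np.argsort(-var)[:k])
    return sorted(flagged)

history = np.random.default_rng(0).normal(size=(20, 1000))  # T=20, N=1000
print(len(diagnose_underrepresented(history)))
```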
| accept | The paper addresses the problem of mode-dropping in underrepresented areas of the data manifold in the context of GANs. The main result relies on a reweighting scheme for the real data, which is novel according to the reviewers.
Two reviewers think the paper is clear and well written while the third one believes the clarity could be improved.
One recurring comment is that the experimental section could be stronger by using more recent models and larger datasets.
In general, the reviewers tend to agree that this is an incremental progress, based on other papers, but all agree that it is important enough to be published.
Ethics: The reviewers only highlight general possible ethics issues which are common to most GANs (and generative models). The authors did acknowledge some of these issues in the paper, and their answers to the reviewers, along with the planned edits, are satisfactory.
Given all that, and the general consensus for acceptance, I recommend this paper to be published as a poster. | train | [
"2Dsp4UVFEU",
"Qvewhx2mQkh",
"fVmLzOelL1",
"3MM1Ad1jYwl",
"zjaioEm_L7Q",
"SN9mdP0azzX",
"SJf_IXChbmu",
"LJ7fnaAbab",
"OhNlAwGLziQ",
"D81nlqMlMRk"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a scoring to measure to what extend an image in a dataset is characterized by an under-represented visual appearance. GANs fail to accurately model these under-represented modes of the data distribution or drop them completely. Here, the score is based on the discriminator predictions on real im... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_SGZn06ZXcG",
"LJ7fnaAbab",
"3MM1Ad1jYwl",
"nips_2021_SGZn06ZXcG",
"D81nlqMlMRk",
"2Dsp4UVFEU",
"OhNlAwGLziQ",
"nips_2021_SGZn06ZXcG",
"nips_2021_SGZn06ZXcG",
"nips_2021_SGZn06ZXcG"
] |
nips_2021_kVHxBqPcn_ | Online Multi-Armed Bandits with Adaptive Inference | During online decision making in Multi-Armed Bandits (MAB), one needs to conduct inference on the true mean reward of each arm based on data collected so far at each step. However, since the arms are adaptively selected--thereby yielding non-iid data--conducting inference accurately is not straightforward. In particular, sample averaging, which is used in the family of UCB and Thompson sampling (TS) algorithms, does not provide a good choice as it suffers from bias and a lack of good statistical properties (e.g. asymptotic normality). Our thesis in this paper is that more sophisticated inference schemes that take into account the adaptive nature of the sequentially collected data can unlock further performance gains, even though both UCB and TS type algorithms are optimal in the worst case. In particular, we propose a variant of TS-style algorithms--which we call doubly adaptive TS--that leverages recent advances in causal inference and adaptively reweights the terms of a doubly robust estimator on the true mean reward of each arm. Through 20 synthetic domain experiments and a semi-synthetic experiment based on data from an A/B test of a web service, we demonstrate that using an adaptive inferential scheme (while still retaining the exploration efficacy of TS) provides clear benefits in online decision making: the proposed DATS algorithm has superior empirical performance to existing baselines (UCB and TS) in terms of regret and sample complexity in identifying the best arm. In addition, we also provide a finite-time regret bound of doubly adaptive TS that matches (up to log factors) those of UCB and TS algorithms, thereby establishing that its improved practical benefits do not come at the expense of worst-case suboptimality.
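The doubly robust building block that the paper adaptively reweights can be written compactly. The sketch below uses illustrative names and a toy two-armed experiment, and deliberately omits the adaptive weighting that is the paper's contribution:

```python
import numpy as np

def doubly_robust_arm_mean(rewards, pulled_arms, probs, arm, model_mean):
    """probs[t] is the probability the algorithm assigned to `arm` at
    round t; model_mean is any plug-in estimate of the arm's mean."""
    hit = (pulled_arms == arm).astype(float)
    return float(np.mean(model_mean + hit / probs * (rewards - model_mean)))

rng = np.random.default_rng(0)
T = 5_000
probs = np.full(T, 0.5)                          # uniform two-armed policy
pulled = rng.integers(0, 2, size=T)
rewards = rng.normal(loc=np.where(pulled == 0, 1.0, 0.0))
print(doubly_robust_arm_mean(rewards, pulled, probs, arm=0, model_mean=0.8))
# ~1.0, the true mean of arm 0, even though the plug-in guess is off
```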
| accept | Three of the four reviewers recommended accepting the paper and I am happy to accept it. I encourage the authors to include in the final version the additional material that they mentioned in the rebuttal. | train | [
"H5SQ73-rdN0",
"skLYUbDy3H",
"DzGNNLlvtj",
"LSKrdTtnY2",
"WbM9k9jxejW",
"xbEJ5csst4",
"t3a4h1vWZNC",
"ih6sKmuxFd",
"IbEF9NnqhAc",
"ECJ6MH6hpXQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper investigates stochastic MAB algorithms, UCB and Thompson sampling, where the usual choice for estimating the means of arms, importance weighting, is replaced with a \"doubly adaptive\" estimator. This estimator, proposed by Leudtke and Van Der Laan 2016, is self-normalized in that the weight of each samp... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"nips_2021_kVHxBqPcn_",
"DzGNNLlvtj",
"t3a4h1vWZNC",
"ECJ6MH6hpXQ",
"IbEF9NnqhAc",
"H5SQ73-rdN0",
"ih6sKmuxFd",
"nips_2021_kVHxBqPcn_",
"nips_2021_kVHxBqPcn_",
"nips_2021_kVHxBqPcn_"
] |
nips_2021_oyHWvdvkZDv | Efficient Truncated Linear Regression with Unknown Noise Variance | Constantinos Daskalakis, Patroklos Stefanou, Rui Yao, Emmanouil Zampetakis | accept | This paper studies high-dimensional linear regression in a truncated setting, where only examples with labels in a given set are observed.
Learning from truncated samples is a classical topic in statistics that has received renewed interest in the last few years. Prior work had given an efficient algorithm for this problem under the assumption that the additive observation noise has known variance. This paper gives an efficient algorithm for the broader, more realistic setting in which the variance of the noise is *unknown*. Handling unknown variance turns out to require non-trivial new ideas. After extensive discussion internally and with the authors, the reviewers agreed that this contribution merits acceptance to NeurIPS.
"pLe82DOwpfd",
"T9uUzhPWjqT",
"D77Edl3z0Q2",
"Umlf6QV6XfY",
"Mcofi_GiZZ8",
"D2VlaFPwA_z",
"Icn-yFJGQ6Q",
"8cul2wEA4FP",
"q9OkK4TElTK",
"DdhhkxLoYOP",
"bGkgPG7fjF-",
"qEHFEGmRQ21"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"# Summary\n\nThe paper studies truncated linear regression when the additive error follows a zero-mean normal distribution with unknown variance. \nThis setting generalizes the work of Daskalakis-Gouleakis-Tzamos-Zampetakis-2019, who assumed the knowledge of the variance of error.\nThis work provides two statistic... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1
] | [
"nips_2021_oyHWvdvkZDv",
"Umlf6QV6XfY",
"nips_2021_oyHWvdvkZDv",
"Icn-yFJGQ6Q",
"Icn-yFJGQ6Q",
"8cul2wEA4FP",
"D2VlaFPwA_z",
"qEHFEGmRQ21",
"nips_2021_oyHWvdvkZDv",
"D77Edl3z0Q2",
"q9OkK4TElTK",
"pLe82DOwpfd"
] |
nips_2021_C0GmZH2RnVR | Breaking the Dilemma of Medical Image-to-image Translation | Supervised Pix2Pix and unsupervised Cycle-consistency are two modes that dominate the field of medical image-to-image translation. However, neither mode is ideal. The Pix2Pix mode has excellent performance, but it requires paired and well pixel-wise aligned images, which may not always be achievable due to respiratory motion or anatomy changes between the times at which paired images are acquired. The Cycle-consistency mode is less stringent with training data and works well on unpaired or misaligned images, but its performance may not be optimal. In order to break the dilemma of the existing modes, we propose a new unsupervised mode called RegGAN for medical image-to-image translation. It is based on the theory of "loss-correction". In RegGAN, the misaligned target images are considered as noisy labels and the generator is trained with an additional registration network to fit the misaligned noise distribution adaptively. The goal is to search for the common optimal solution to both image-to-image translation and registration tasks. We incorporated RegGAN into a few state-of-the-art image-to-image translation methods and demonstrated that RegGAN could be easily combined with these methods to improve their performance. For example, a simple CycleGAN in our mode surpasses the latest NICEGAN while using fewer network parameters. Based on our results, RegGAN outperformed both Pix2Pix on aligned data and Cycle-consistency on misaligned or unpaired data. RegGAN is insensitive to noise, which makes it a better choice for a wide range of scenarios, especially for medical image-to-image translation tasks in which well pixel-wise aligned data are not available. Code and dataset are available at https://github.com/Kid-Liet/Reg-GAN.
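The correction idea can be sketched as a registration-corrected reconstruction loss. In the PyTorch sketch below, the registration network's interface, the displacement-field parameterization, and the smoothness weighting are all assumptions for illustration rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def base_grid(img):
    """Identity sampling grid in grid_sample's normalized coordinates."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    return torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, h, w, 2)

def correction_loss(fake, target, reg_net, smooth_weight=10.0):
    """L1 reconstruction measured after warping the translated image
    toward the (possibly misaligned) target with a predicted
    displacement field, plus a smoothness penalty on the field."""
    flow = reg_net(torch.cat([fake, target], dim=1))    # (B, 2, H, W)
    grid = base_grid(fake) + flow.permute(0, 2, 3, 1)
    warped = F.grid_sample(fake, grid, align_corners=True)
    recon = F.l1_loss(warped, target)
    smooth = flow.diff(dim=-1).abs().mean() + flow.diff(dim=-2).abs().mean()
    return recon + smooth_weight * smooth
```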
| accept | This paper proposes a novel medical image-to-image generative model, called RegGAN, to combine an adversarially trained image generator with a registration module, which enforces the alignment between the generated image and noisy ground truth image. The paper illustrates the better performance of the proposed method on all major metrics for both paired and unpaired data and that RegGAN is more resistant to label noise and more stable in the training process. All reviewers agree that the results are convincing and that the work may bring insights to the image translation community. | train | [
"cBYnhDDvguM",
"3VmRd9t2KQN",
"T2H039vVom",
"Rnaki7aGWQd",
"NnQkzRVQGFm",
"8bbK_f_yYGQ",
"5L9PcELB-f5",
"OzOnF1hA1e",
"3B1j9QPdP9z",
"eI5r-lNkEzo",
"cR-T3HRIt6",
"qz_CJQYCURO"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your clarification. It partly addressed my concern. So I raised my score to 6. ",
"This paper considers the setting where paired images are available but some of them are misaligned. To address this issue, it proposes to add a registration network after the translation network as the \"correction los... | [
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"Rnaki7aGWQd",
"nips_2021_C0GmZH2RnVR",
"nips_2021_C0GmZH2RnVR",
"NnQkzRVQGFm",
"5L9PcELB-f5",
"OzOnF1hA1e",
"3VmRd9t2KQN",
"T2H039vVom",
"qz_CJQYCURO",
"cR-T3HRIt6",
"nips_2021_C0GmZH2RnVR",
"nips_2021_C0GmZH2RnVR"
] |
nips_2021_LGvlCcMgWqb | Temporally Abstract Partial Models | Humans and animals have the ability to reason and make predictions about different courses of action at many time scales. In reinforcement learning, option models (Sutton, Precup \& Singh, 1999; Precup, 2000) provide the framework for this kind of temporally abstract prediction and reasoning. Natural intelligent agents are also able to focus their attention on courses of action that are relevant or feasible in a given situation, sometimes termed affordable actions. In this paper, we define a notion of affordances for options, and develop temporally abstract partial option models, that take into account the fact that an option might be affordable only in certain situations. We analyze the trade-offs between estimation and approximation error in planning and learning when using such models, and identify some interesting special cases. Additionally, we empirically demonstrate the ability to learn both affordances and partial option models online resulting in improved sample efficiency and planning time in the Taxi domain.
| accept | This paper introduces an option model based on the notions of intent and affordances of options, and provides a theoretical analysis of the framework. While the empirical results are limited, it was felt that the conceptual and theoretical contributions of this paper are exciting, and could be a useful contribution to the field.
There were several points of confusion raised by the reviewers, and these points should be clarified in the final version. | train | [
"dA8OFceFDml",
"_2fsXhJCR3",
"Wat16qafoE",
"s_KZj0-Reca",
"iKIO4TdC7M",
"0qJqSNspo2K",
"UrtRRDCBHu0",
"uK9xl-xjKLn",
"XJFI-5jtln7",
"MsKbQaimoqw",
"xSau-wQvh2Q",
"RDyIGbBNMzb",
"wdp2-FOk9b"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a formalism for semi-Markov decision processes that defines \"intention\" (desirable outcomes of behavior) and \"affordances\" (behaviors that achieve an intention). The main idea is that the agent may have many options, but may have a more limited set of intentions, and affordances can constra... | [
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_LGvlCcMgWqb",
"nips_2021_LGvlCcMgWqb",
"0qJqSNspo2K",
"MsKbQaimoqw",
"UrtRRDCBHu0",
"_2fsXhJCR3",
"XJFI-5jtln7",
"nips_2021_LGvlCcMgWqb",
"wdp2-FOk9b",
"RDyIGbBNMzb",
"dA8OFceFDml",
"nips_2021_LGvlCcMgWqb",
"nips_2021_LGvlCcMgWqb"
] |
nips_2021_I3yGrFoH8DF | TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification | Transformers have recently gained increasing attention in computer vision. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification and dense predictions, and the generalizability of Transformers is unknown. In this work, we further investigate the possibility of applying Transformers for image matching and metric learning given pairs of images. We find that the Vision Transformer (ViT) and the vanilla Transformer with decoders are not adequate for image matching due to their lack of image-to-image attention. Thus, we further design two naive solutions, i.e. query-gallery concatenation in ViT, and query-gallery cross-attention in the vanilla Transformer. The latter improves the performance, but it is still limited. This implies that the attention mechanism in Transformers is primarily designed for global feature aggregation, which is not naturally suitable for image matching. Accordingly, we propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity computation. Additionally, global max pooling and a multilayer perceptron (MLP) head are applied to decode the matching result. This way, the simplified decoder is computationally more efficient, while at the same time more effective for image matching. The proposed method, called TransMatcher, achieves state-of-the-art performance in generalizable person re-identification, with up to 6.1% and 5.7% performance gains in Rank-1 and mAP, respectively, on several popular datasets. Code is available at https://github.com/ShengcaiLiao/QAConv.
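A PyTorch sketch of the simplified decoder follows; the feature dimension, number of spatial locations, pooling axis, and MLP sizes are illustrative assumptions. The point is the structure: raw query-key similarities without softmax weighting or value aggregation, global max pooling, then an MLP head.

```python
import torch
import torch.nn as nn

class SimplifiedMatchingDecoder(nn.Module):
    """Plain query-key similarity, global max pooling over gallery
    locations, and an MLP head that decodes a matching score."""

    def __init__(self, dim=256, num_locations=24 * 8, hidden=512):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(num_locations, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, query_feats, gallery_feats):
        # Both inputs: (B, N, dim) flattened feature maps of an image pair.
        q = self.q_proj(query_feats)
        k = self.k_proj(gallery_feats)
        sim = q @ k.transpose(1, 2)          # (B, N_q, N_g) raw similarities
        pooled = sim.max(dim=2).values       # best gallery response per query location
        return self.mlp(pooled).squeeze(-1)  # one matching score per pair

decoder = SimplifiedMatchingDecoder()
score = decoder(torch.randn(4, 24 * 8, 256), torch.randn(4, 24 * 8, 256))
print(score.shape)   # torch.Size([4])
```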
| accept | This paper addresses the interesting problem of using transformer-based models for image matching, and shows that such a mechanism can successfully be used for person re-identification. The reviewers have recognised the simplicity of the proposed approach and the good, simple baselines described alongside it. The experimental results show improvement over recent work on domain-generalized person re-id. Despite the limited theoretical novelty, the tackled topic is timely and the proposed model is simple. I suggest the acceptance of this work.
"dWElXnwiH1",
"6638JBPdhb8",
"bxUoFoipoNc",
"_cXmpmQY6_E",
"0xH3niinGF",
"ImJUsxe7fQ0",
"rD0pom_ApdT",
"2iKvYoqGC4B"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper investigates how to apply the transformer architecture to solve the image match problem of generalised person Re-ID. The features of query/gallery images are first extracted with a ResNet model, then a transformer encoder without positional embedding is applied to query/gallery features, respectively. A... | [
6,
6,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
5,
-1,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_I3yGrFoH8DF",
"nips_2021_I3yGrFoH8DF",
"dWElXnwiH1",
"2iKvYoqGC4B",
"rD0pom_ApdT",
"6638JBPdhb8",
"nips_2021_I3yGrFoH8DF",
"nips_2021_I3yGrFoH8DF"
] |
nips_2021_XzH3QMBKIJ | Multi-Objective SPIBB: Seldonian Offline Policy Improvement with Safety Constraints in Finite MDPs | We study the problem of Safe Policy Improvement (SPI) under constraints in the offline Reinforcement Learning (RL) setting. We consider the scenario where: (i) we have a dataset collected under a known baseline policy, (ii) multiple reward signals are received from the environment inducing as many objectives to optimize. We present an SPI formulation for this RL setting that takes into account the preferences of the algorithm’s user for handling the trade-offs for different reward signals while ensuring that the new policy performs at least as well as the baseline policy along each individual objective. We build on traditional SPI algorithms and propose a novel method based on Safe Policy Iteration with Baseline Bootstrapping (SPIBB, Laroche et al., 2019) that provides high probability guarantees on the performance of the agent in the true environment. We show the effectiveness of our method on a synthetic grid-world safety task as well as in a real-world critical care context to learn a policy for the administration of IV fluids and vasopressors to treat sepsis.
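The single-objective SPIBB building block that the paper extends can be sketched per state in a few lines (names and the n_min cutoff are illustrative): actions observed too rarely are "bootstrapped" to the baseline probability, and the remaining mass goes to the best estimated action.

```python
import numpy as np

def spibb_greedy_step(q, pi_baseline, counts, n_min=10):
    """One-state SPIBB policy-improvement step: rarely observed actions
    keep their baseline probability; the leftover probability mass is
    assigned to the highest-Q sufficiently observed action."""
    pi = np.where(counts < n_min, pi_baseline, 0.0)
    allowed = np.flatnonzero(counts >= n_min)
    if allowed.size:
        pi[allowed[np.argmax(q[allowed])]] += 1.0 - pi.sum()
    return pi

q = np.array([1.0, 0.2, 0.5])
pi_b = np.array([0.3, 0.3, 0.4])
counts = np.array([50, 3, 20])
print(spibb_greedy_step(q, pi_b, counts))   # mass moves to action 0
```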
| accept | The paper proposes an SPIBB algorithm for offline constrained RL. This is an interesting problem to address. Most of the reviewers think the theoretical analysis in the paper is rather weak, but the experiments complement the theory well. Hence the overall recommendation is a weak acceptance.
"wdXOonZO5TD",
"5ItDdJG0SXt",
"i2l3YlCj8O6",
"mmjznRc1FB2",
"-ozvV2eJV8f",
"EJZppYovfnp",
"cSGHd2uR9RS",
"tq1kUgHHaQs",
"bI7g-Ug44im",
"9kZ85CQbN4y",
"BHX6dMKXqpf",
"VyQUZBi1Gz",
"f83u6hYeD8f",
"jbJfmLEbgB",
"0-4b-VtPkWr",
"iUiZtYMEqhG",
"85kUaj11xYA",
"HFc565KmMoQ",
"iFyEQtfQ4zR... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
... | [
"This paper presents the Safe Policy Iteration with Baseline Bootstrapping (SPIBB) in the offline RL setting with multiple objectives, i.e. multiple rewards. The authors apply the SPIBB set-up to handle the trade-offs for different rewards standing for different user preferences using the multi-objective framework ... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_XzH3QMBKIJ",
"iFyEQtfQ4zR",
"nZe5YV7mx5c",
"EJZppYovfnp",
"bI7g-Ug44im",
"9kZ85CQbN4y",
"nips_2021_XzH3QMBKIJ",
"nips_2021_XzH3QMBKIJ",
"BHX6dMKXqpf",
"f83u6hYeD8f",
"jbJfmLEbgB",
"nips_2021_XzH3QMBKIJ",
"HFc565KmMoQ",
"iUiZtYMEqhG",
"nips_2021_XzH3QMBKIJ",
"85kUaj11xYA",
... |
nips_2021_tjdHCnPqoo | Is Automated Topic Model Evaluation Broken? The Incoherence of Coherence | Topic model evaluation, like evaluation of other unsupervised methods, can be contentious. However, the field has coalesced around automated estimates of topic coherence, which rely on the frequency of word co-occurrences in a reference corpus. Contemporary neural topic models surpass classical ones according to these metrics. At the same time, topic model evaluation suffers from a validation gap: automated coherence, developed for classical models, has not been validated using human experimentation for neural models. In addition, a meta-analysis of topic modeling literature reveals a substantial standardization gap in automated topic modeling benchmarks. To address the validation gap, we compare automated coherence with the two most widely accepted human judgment tasks: topic rating and word intrusion. To address the standardization gap, we systematically evaluate a dominant classical model and two state-of-the-art neural models on two commonly used datasets. Automated evaluations declare a winning model when corresponding human evaluations do not, calling into question the validity of fully automatic evaluations independent of human judgments.
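For reference, the automated coherence under scrutiny is typically NPMI estimated from word co-occurrences in a reference corpus. Below is a minimal sketch using a boolean document-occurrence estimator and a common -1 convention for unseen pairs; it is not any specific toolkit's implementation.

```python
import itertools
import math

def npmi_coherence(topic_words, doc_sets, eps=1e-12):
    """Average normalized PMI over all word pairs of one topic, with
    probabilities estimated from document co-occurrence counts.
    `doc_sets` is a list of sets of tokens, one per reference document."""
    n_docs = len(doc_sets)
    def p(*words):
        return sum(all(w in d for w in words) for d in doc_sets) / n_docs
    scores = []
    for w1, w2 in itertools.combinations(topic_words, 2):
        p1, p2, p12 = p(w1), p(w2), p(w1, w2)
        if p12 == 0:
            scores.append(-1.0)                # convention for unseen pairs
            continue
        pmi = math.log(p12 / (p1 * p2 + eps))
        scores.append(pmi / -math.log(p12))    # normalize to [-1, 1]
    return sum(scores) / len(scores)

docs = [{"apple", "fruit"}, {"apple", "pie"}, {"fruit", "juice"}]
print(npmi_coherence(["apple", "fruit", "pie"], docs))
```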
| accept | All reviews recommended acceptance. The work was seen as somewhat novel and well motivated for the important task of topic model evaluation, and the experiments were considered sound. Thus the work seems clearly valuable to the community. Some questions were raised about the experiments (analysis of why the Wikipedia corpus is best correlated, alternatives to NPMI, the use of only three-point scales in human evaluations, considering the expertise of the user), but overall the work seems clearly acceptable.
"-IvKGf37we2",
"SJnWISTDPns",
"xU-_FX24mDH",
"Cs7pFdKDj-F",
"i9sXur4XUx",
"X1s3OYoqsF",
"luk8tXUb53r",
"2A9jcu-xBZ",
"lwQ9wvhOBJ7",
"n9Zud7Mu6eE",
"qQW1z8jDxT",
"TzY3oDdP2p"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Topic model evaluation in terms of the quality of learned topics is an important problem in topic modelling, the validation of which has a profound impact on the model development and its application in cross disciplines. This paper reassesses the automatic topic model evaluation metrics, e.g., topic coherence sco... | [
7,
-1,
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_tjdHCnPqoo",
"nips_2021_tjdHCnPqoo",
"SJnWISTDPns",
"qQW1z8jDxT",
"nips_2021_tjdHCnPqoo",
"lwQ9wvhOBJ7",
"nips_2021_tjdHCnPqoo",
"-IvKGf37we2",
"i9sXur4XUx",
"luk8tXUb53r",
"TzY3oDdP2p",
"nips_2021_tjdHCnPqoo"
] |
nips_2021_m4k66oJFK9P | INDIGO: GNN-Based Inductive Knowledge Graph Completion Using Pair-Wise Encoding | The aim of knowledge graph (KG) completion is to extend an incomplete KG with missing triples. Popular approaches based on graph embeddings typically work by first representing the KG in a vector space, and then applying a predefined scoring function to the resulting vectors to complete the KG. These approaches work well in transductive settings, where predicted triples involve only constants seen during training; however, they are not applicable in inductive settings, where the KG on which the model was trained is extended with new constants or merged with other KGs. The use of Graph Neural Networks (GNNs) has recently been proposed as a way to overcome these limitations; however, existing approaches do not fully exploit the capabilities of GNNs and still rely on heuristics and ad-hoc scoring functions. In this paper, we propose a novel approach, where the KG is fully encoded into a GNN in a transparent way, and where the predicted triples can be read out directly from the last layer of the GNN without the need for additional components or scoring functions. Our experiments show that our model outperforms state-of-the-art approaches on inductive KG completion benchmarks.
| accept | This paper describes a new knowledge graph completion algorithm that can work on novel entities. The evaluation is thorough and extensive. Several points were brought up by the reviewers, and most were handled well in the response.
One reviewer thought there should be an additional analysis with a different negative sampling method. The authors respond: "Our goal in this paper, however, was to develop a system that performs well when taking into account all relevant metrics rather than a particular one, and which does not exploit unnecessary biases. We believe that our results show that this objective has been achieved." I agree with the authors on this point -- an additional model is not needed.
| train | [
"U5Z1mnt9lUE",
"Ful-7xW2Mfd",
"AcfIxTq6k-",
"4HIbK6rhMk",
"0AjxhX7soYE",
"cVyWQQMVA2",
"MgqiKv-Mpxi",
"dCdnRx0d4v",
"in0MFjoPg7Q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for their detailed response.\n\nI still have a concern regarding the novelty. But it may not be a serious weakness since the proposed method seems to work well.",
" We thank the reviewer for their comment.\n\nUpon examination of the baselines’ source code, it is clear that they are strongly bi... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"4HIbK6rhMk",
"AcfIxTq6k-",
"cVyWQQMVA2",
"in0MFjoPg7Q",
"dCdnRx0d4v",
"MgqiKv-Mpxi",
"nips_2021_m4k66oJFK9P",
"nips_2021_m4k66oJFK9P",
"nips_2021_m4k66oJFK9P"
] |
nips_2021_pR3dPOHrbfy | Do Input Gradients Highlight Discriminative Features? | Post-hoc gradient-based interpretability methods [Simonyan et al., 2013, Smilkov et al., 2017] that provide instance-specific explanations of model predictions are often based on assumption (A): magnitude of input gradients -- gradients of logits with respect to input -- noisily highlight discriminative task-relevant features. In this work, we test the validity of assumption (A) using a three-pronged approach: 1. We develop an evaluation framework, DiffROAR, to test assumption (A) on four image classification benchmarks. Our results suggest that (i) input gradients of standard models (i.e., trained on original data) may grossly violate (A), whereas (ii) input gradients of adversarially robust models satisfy (A). 2. We then introduce BlockMNIST, an MNIST-based semi-real dataset, that by design encodes a priori knowledge of discriminative features. Our analysis on BlockMNIST leverages this information to validate as well as characterize differences between input gradient attributions of standard and robust models. 3. Finally, we theoretically prove that our empirical findings hold on a simplified version of the BlockMNIST dataset. Specifically, we prove that input gradients of standard one-hidden-layer MLPs trained on this dataset do not highlight instance-specific signal coordinates, thus grossly violating assumption (A). Our findings motivate the need to formalize and test common assumptions in interpretability in a falsifiable manner [Leavitt and Morcos, 2020]. We believe that the DiffROAR evaluation framework and BlockMNIST-based datasets can serve as sanity checks to audit instance-specific interpretability methods; code and data available at https://github.com/harshays/inputgradients.
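Assumption (A) concerns a very simple attribution, reproduced below in PyTorch for clarity; the toy model is an arbitrary placeholder.

```python
import torch

def input_gradient_magnitude(model, x, target_class):
    """Magnitude of the gradient of the target logit w.r.t. the input
    -- the attribution whose assumption (A) the paper tests."""
    x = x.clone().requires_grad_(True)
    logit = model(x)[:, target_class].sum()
    grad, = torch.autograd.grad(logit, x)
    return grad.abs()

model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(28 * 28, 10))
x = torch.randn(4, 1, 28, 28)
attribution = input_gradient_magnitude(model, x, target_class=3)
print(attribution.shape)   # same shape as the input batch
```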
| accept | Thank you for your submission to NeurIPS. The reviewers and I are in agreement that the paper presents interesting new insights into the problem of understanding the significance of input gradients as they concern robust and nonrobust models. The addition of the BlockMNIST data set seems particularly helpful to better understanding and furthering research in the area. I'm happy to recommend that the paper be accepted.
Since there was a fair amount of response to reviewer feedback, I'd mainly highlight that many of the comments seemed to center around further (discussion-based, not quantitative) comparison to previous works (such as [34] and Ilyas et al., 2019, mentioned by the reviewer). Overall, I think that relating the results here to the phenomena observed in previous studies, at least to the extent possible without substantial revisions, would most improve the current paper.
"yiRo87Jiosq",
"eEzP3t8Av0C",
"HmmDVbMC-jr",
"6iYcK1m6gas",
"vgH-Fn551uO",
"vCbYdnG5aa",
"fohnp_Jhi38",
"znIzbKNztxi",
"Yjk8neXF7ca",
"r3AikjeZHOF",
"mcptqPvZdM",
"VSjg6VY5Cua",
"z7SB66zI4Wn",
"-B0B4IwxOv",
"AEMQsG_ZG9w",
"F5aDFK1rxEU",
"ZLGfbJgMakY",
"bVQsiu1Rt5o"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors test the current assumption for gradient-based explanation methods, i.e., a high magnitude of vanilla gradients highlight more discriminative task-relevant features and vice-versa. The authors theoretically and empirically test the above assumption using DiffROAR, a new evaluation framework, and BlockM... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"nips_2021_pR3dPOHrbfy",
"6iYcK1m6gas",
"nips_2021_pR3dPOHrbfy",
"F5aDFK1rxEU",
"mcptqPvZdM",
"bVQsiu1Rt5o",
"HmmDVbMC-jr",
"Yjk8neXF7ca",
"r3AikjeZHOF",
"mcptqPvZdM",
"-B0B4IwxOv",
"AEMQsG_ZG9w",
"bVQsiu1Rt5o",
"yiRo87Jiosq",
"ZLGfbJgMakY",
"HmmDVbMC-jr",
"nips_2021_pR3dPOHrbfy",
... |
nips_2021_pTe-8qCdDqy | Improving Conditional Coverage via Orthogonal Quantile Regression | We develop a method to generate prediction intervals that have a user-specified coverage level across all regions of feature-space, a property called conditional coverage. A typical approach to this task is to estimate the conditional quantiles with quantile regression---it is well-known that this leads to correct coverage in the large-sample limit, although it may not be accurate in finite samples. We find in experiments that traditional quantile regression can have poor conditional coverage. To remedy this, we modify the loss function to promote independence between the size of the intervals and the indicator of a miscoverage event. For the true conditional quantiles, these two quantities are independent (orthogonal), so the modified loss function continues to be valid. Moreover, we empirically show that the modified loss function leads to improved conditional coverage, as evaluated by several metrics. We also introduce two new metrics that check conditional coverage by looking at the strength of the dependence between the interval size and the indicator of miscoverage.
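The modification can be sketched as a pinball loss plus a dependence penalty between interval width and the miscoverage indicator; using Pearson correlation as the dependence measure and the weighting below are illustrative assumptions. Note that gradients flow only through the width term, since the indicator is piecewise constant.

```python
import torch

def pinball(pred, y, alpha):
    diff = y - pred
    return torch.maximum(alpha * diff, (alpha - 1.0) * diff).mean()

def orthogonal_qr_loss(lo, hi, y, alpha_lo=0.05, alpha_hi=0.95, lam=1.0):
    """Pinball losses for both interval endpoints plus a penalty on the
    empirical correlation between interval width and the miscoverage
    indicator (zero at the true conditional quantiles)."""
    base = pinball(lo, y, alpha_lo) + pinball(hi, y, alpha_hi)
    width = hi - lo
    miss = ((y < lo) | (y > hi)).float()
    wc, mc = width - width.mean(), miss - miss.mean()
    corr = (wc * mc).mean() / (wc.std() * mc.std() + 1e-8)
    return base + lam * corr.abs()
```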
| accept | This paper proposes a new type of regularization for quantile regression such that the accuracy of finite sample conditional coverage is improved. The proposed regularization is built on an interesting notion about independence between the size of the intervals and the indicator of a miscoverage event. All reviewers agreed that learning accurate conditional quantiles and prediction intervals (PIs) is an important problem in the uncertainty quantification community. The authors are encouraged to consider updating the paper based on the reviewers' comments and suggestions. | train | [
"wEHQRRG1M6e",
"vY8DC-zobwT",
"LXfyU777WyT",
"6xapqb2E6u",
"ZBRlILfnVvP",
"KjrVCSj2vTE"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a novel method to construct prediction intervals with pre-specified conditional coverage probability. Starting point of the paper is the observation that the length $|\\hat{C}(X)|$ and conditional coverage indicator $1\\{Y \\in \\hat{C}(X)\\}$ are independent whenever the prediction interval $\... | [
7,
-1,
-1,
-1,
5,
7
] | [
4,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_pTe-8qCdDqy",
"ZBRlILfnVvP",
"wEHQRRG1M6e",
"KjrVCSj2vTE",
"nips_2021_pTe-8qCdDqy",
"nips_2021_pTe-8qCdDqy"
] |
nips_2021_ye-NP0VZtLC | Minimizing Polarization and Disagreement in Social Networks via Link Recommendation | Liwang Zhu, Qi Bao, Zhongzhi Zhang | accept | Generally, all reviews for this paper were positive -- they appreciated the algorithmic contribution: showing that a greedy approach to link recommendation for minimizing polarization plus disagreement gives a bounded approximation ratio, despite the fact that the polarization-plus-disagreement objective is not submodular. There were significant concerns about the practical impact/relevance of the work. Nevertheless, I am recommending acceptance. | train | [
"zmUcdDsxr69",
"_puSe-q5Ael",
"W3ntZzDyNy",
"eDuWYX3V3qr",
"Nn4fL23oOlC",
"Srw8PzVFhK",
"XD7RuYl3q72",
"kFfMd6_tYV",
"F5nsuITerDS",
"6V8T7u9Lie",
"lbKxj0dctz",
"bTPge3Zrhl"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We have carefully considered the suggestion of all reviewers and we will add more experimental results in terms of other distribution of initial opinions $s$. We sincerely thank you again for your support and constructive comments on our paper.",
" We sincerely thank you for providing very helpful comments to ... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"Nn4fL23oOlC",
"W3ntZzDyNy",
"XD7RuYl3q72",
"nips_2021_ye-NP0VZtLC",
"Srw8PzVFhK",
"lbKxj0dctz",
"bTPge3Zrhl",
"6V8T7u9Lie",
"eDuWYX3V3qr",
"nips_2021_ye-NP0VZtLC",
"nips_2021_ye-NP0VZtLC",
"nips_2021_ye-NP0VZtLC"
] |
nips_2021_-7EhrbfbK31 | Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power of Geometric Transformations | When compared to image classification models, black-box adversarial attacks against video classification models have been largely understudied. This is likely because, with video, the temporal dimension poses significant additional challenges in gradient estimation. Query-efficient black-box attacks rely on effectively estimated gradients towards maximizing the probability of misclassifying the target video. In this work, we demonstrate that such effective gradients can be searched for by parameterizing the temporal structure of the search space with geometric transformations. Specifically, we design a novel iterative algorithm, GEOmetric TRAnsformed Perturbations (GEO-TRAP), for attacking video classification models. GEO-TRAP employs standard geometric transformation operations to reduce the search space for effective gradients into searching for a small group of parameters that define these operations. This group of parameters describes the geometric progression of gradients, resulting in a reduced and structured search space. Our algorithm inherently leads to successful perturbations with surprisingly few queries. For example, adversarial examples generated from GEO-TRAP have better attack success rates with ~73.55% fewer queries compared to the state-of-the-art method for video adversarial attacks on the widely used Jester dataset. Overall, our algorithm exposes vulnerabilities of diverse video classification models and achieves new state-of-the-art results under black-box settings on two large datasets.
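The search-space reduction can be illustrated with per-frame affine transforms of a single seed perturbation, so that a video-sized perturbation is parameterized by a handful of transform parameters. The PyTorch sketch below is an assumption-laden illustration of the idea, not GEO-TRAP's exact parameterization:

```python
import torch
import torch.nn.functional as F

def geometric_perturbation(seed, thetas):
    """Propagate a single-frame seed perturbation across T frames via
    per-frame affine transforms, so the search runs over the small
    parameter tensor `thetas` of shape (T, 2, 3) instead of the full
    T x C x H x W perturbation."""
    t = thetas.shape[0]
    frames = seed.expand(t, *seed.shape[1:])            # (T, C, H, W)
    grid = F.affine_grid(thetas, list(frames.shape), align_corners=False)
    return F.grid_sample(frames, grid, align_corners=False)

seed = torch.randn(1, 3, 64, 64)
identity = torch.eye(2, 3).repeat(16, 1, 1)             # 16 frames
delta = geometric_perturbation(seed, identity)
print(delta.shape)   # torch.Size([16, 3, 64, 64])
```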
| accept | The authors propose a black-box adversarial attack method for video classifiers. The challenge is to reduce the search space (T x C x H x W) for attack efficiency, measured in the number of queries. The idea is to employ geometric transformations to parameterize and reduce the search space. The experiments demonstrate a significant improvement, i.e., higher success rates at fewer queries, compared to the baseline methods. I suggest the authors incorporate the suggestions raised by the reviewers in the camera-ready version. Please include a) a plot of query budget versus success rate in addition to Tables 2 & 3; b) the ablation study with results on translation only and on dilation only. | train | [
"vMMFgUeUOH",
"muHa2YWTJb6",
"gcRJFvUBA4o",
"tYZq_SGi2B",
"N6-1icV2K2",
"1bJxLHCSViV",
"x_Yf561j0a-",
"ouVmpfnR3XG",
"UsdkhAdOQXT",
"H4c3cIfPU8f",
"zj20VC8KrRC",
"wuct_DMew8s",
"udGrLqEjx6S",
"nZEvTnNVJ96",
"0cWo5I72U-",
"rrRF7KSvu9F",
"RDOgp63El-f"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" The response seems reasonable but I'm not totally convinced by the comparison (about the settings and fairness). I only raise the score to 6.",
"This paper studies adversarial attacks against black-box video classification models. By adopting geometric transformation, the proposed GEO-TRAP reduced the search sp... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"gcRJFvUBA4o",
"nips_2021_-7EhrbfbK31",
"tYZq_SGi2B",
"N6-1icV2K2",
"1bJxLHCSViV",
"x_Yf561j0a-",
"0cWo5I72U-",
"H4c3cIfPU8f",
"0cWo5I72U-",
"nips_2021_-7EhrbfbK31",
"muHa2YWTJb6",
"rrRF7KSvu9F",
"H4c3cIfPU8f",
"RDOgp63El-f",
"nips_2021_-7EhrbfbK31",
"nips_2021_-7EhrbfbK31",
"nips_20... |
nips_2021_dfyjet3BMKA | Optimal Rates for Random Order Online Optimization | We study online convex optimization in the random order model, recently proposed by Garber et al. (2020), where the loss functions may be chosen by an adversary, but are then presented to the online algorithm in a uniformly random order. Focusing on the scenario where the cumulative loss function is (strongly) convex, yet individual loss functions are smooth but might be non-convex, we give algorithms that achieve the optimal bounds and significantly outperform the results of Garber et al. (2020), completely removing the dimension dependence and improving the scaling with respect to the strong convexity parameter. Our analysis relies on novel connections between algorithmic stability and generalization for sampling without replacement, analogous to those studied in the with-replacement i.i.d. setting, as well as on a refined average stability analysis of stochastic gradient descent.
| accept | The paper eventually received a uniformly positive evaluation, with two of the reviewers rating it within the top 50% of accepted papers.
There is a clear and significant contribution to random-order Online Convex Optimization, with the results improving upon those previously known from Garber et al. (2020) and attaining nearly optimal rates in the general setting.
| train | [
"ymm_l3YP3t9",
"toUNCswqdY",
"pb74SdpLjb4",
"0QZ-H87rlVv",
"eAOb7Ar8xVv",
"deUO5GEg1Ov",
"kNr5oVW3oC0",
"W5bFD1Msdr",
"kJg78_PAADf",
"iInSsHq5KNo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Problem description: The paper studies the problem of online convex optimization in the random order model. This problem was recently proposed by Garber et al. (ICML 2020). It is a generalization of online convex optimization, where at each step the learner incurs a loss function $f_t$ and the goal is to minimize ... | [
8,
-1,
-1,
7,
6,
-1,
-1,
-1,
-1,
8
] | [
3,
-1,
-1,
2,
3,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_dfyjet3BMKA",
"W5bFD1Msdr",
"kJg78_PAADf",
"nips_2021_dfyjet3BMKA",
"nips_2021_dfyjet3BMKA",
"eAOb7Ar8xVv",
"iInSsHq5KNo",
"ymm_l3YP3t9",
"0QZ-H87rlVv",
"nips_2021_dfyjet3BMKA"
] |
nips_2021_YSYXmOzlrou | Discrete-Valued Neural Communication | Deep learning has advanced from fully connected architectures to structured models organized into components, e.g., the transformer composed of positional elements, modular architectures divided into slots, and graph neural nets made up of nodes. The nature of structured models is that communication among the components has a bottleneck, typically achieved by restricted connectivity and attention. In this work, we further tighten the bottleneck via discreteness of the representations transmitted between components. We hypothesize that this constraint serves as a useful form of inductive bias. Our hypothesis is motivated by past empirical work showing the benefits of discretization in non-structured architectures as well as our own theoretical results showing that discretization increases noise robustness and reduces the underlying dimensionality of the model. Building on an existing technique for discretization from the VQ-VAE, we consider multi-headed discretization with shared codebooks as the output of each architectural component. One motivating intuition is human language in which communication occurs through multiple discrete symbols. This form of communication is hypothesized to facilitate transmission of information between functional components of the brain by providing a common interlingua, just as it does for human-to-human communication. Our experiments show that discrete-valued neural communication (DVNC) substantially improves systematic generalization in a variety of architectures—transformers, modular architectures, and graph neural networks. We also show that the DVNC is robust to the choice of hyperparameters, making the method useful in practice.
| accept | This paper discusses neural network architectures that exchange discrete messages, trained using a strategy similar to the successful VQ-VAE. The paper includes both theoretical results about such architectures and encouraging empirical results.
The largest concerns have to do with the discussion of the scope, as the setup discussed here has large overlap with existing but different literature. (Reviewers point out examples from reinforcement learning or RNN hidden-state discretization.) Overall, some comparison to those strategies seems possible and would improve the work, but, at the same time, the work here seems like a good step. At the very least, the connections to these other applications of discrete variables should be discussed as clearly as possible and in a positive way. (Connections here are likely to be productive.)
The paper could be strengthened with results on more realistic applications, possibly ones where the discrete messages are readily interpretable.
The authors have been very active in the discussion and have provided many clarifications as well as promises to improve the presentation, including agreeing to a change of title that seems likely to improve clarity. | train | [
"VQhRmh7pGL-",
"CqDb-eBihm",
"Lq94y8qXzhU",
"l7Ce2ehWFHN",
"XbCoeawxhxp",
"EgEtYjRr1W",
"AzXrjTr7TcC",
"LMHqnt9byA",
"Ya0G_xy4v9X",
"_autrcx9iU6",
"SSAd4XMO_S0",
"vSSazOHmAxS",
"tqN4Ats3Tjv",
"zyCjFCDWeNP",
"g4qwZDXR_nN",
"tsBia7fB-bX",
"26zyWLf-6o"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer, we completely understand it is a busy season but , as the clock is really counting down, we will extremely appreciate if you can let us know if you have any further concerns regarding our work and replies. Thank you very much for your help! ",
" Dear. Reviewer, \n\nDoes the updated title and adj... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
2
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
2,
4
] | [
"26zyWLf-6o",
"Lq94y8qXzhU",
"26zyWLf-6o",
"VQhRmh7pGL-",
"26zyWLf-6o",
"tqN4Ats3Tjv",
"LMHqnt9byA",
"Ya0G_xy4v9X",
"26zyWLf-6o",
"26zyWLf-6o",
"nips_2021_YSYXmOzlrou",
"g4qwZDXR_nN",
"zyCjFCDWeNP",
"SSAd4XMO_S0",
"tsBia7fB-bX",
"nips_2021_YSYXmOzlrou",
"nips_2021_YSYXmOzlrou"
] |
nips_2021_pZCYG7gjkKz | Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method | Transformers are expensive to train due to the quadratic time and space complexity in the self-attention mechanism. On the other hand, although kernel machines suffer from the same computation bottleneck in pairwise dot products, several approximation schemes have been successfully incorporated to considerably reduce their computational cost without sacrificing too much accuracy. In this work, we leverage the computation methods for kernel machines to alleviate the high computational cost and introduce Skyformer, which replaces the softmax structure with a Gaussian kernel to stabilize the model training and adapts the Nyström method to a non-positive semidefinite matrix to accelerate the computation. We further conduct theoretical analysis by showing that the matrix approximation error of our proposed method is small in the spectral norm. Experiments on the Long Range Arena benchmark show that the proposed method is sufficient in getting comparable or even better performance than the full self-attention while requiring fewer computation resources.
| accept | The reviewers agree that this is a solid paper re-introducing the Nyström method for Transformers to address the quadratic space and time complexity of the regular attention module. It thus leverages the fruitful area of research on using kernel approaches to improve regular Transformers. The novelty is limited, given that the Nyström method was previously used in that context, but the empirical results are strong, and furthermore the authors explain in detail the approximation guarantees of their method. What is missing is an equally detailed analysis of the computational time of the presented method. The Nyström algorithm is in general expensive, but the authors manage to bypass some of its complexities, as shown in the experimental section. A more detailed discussion of the Nyström method in theory and practice would strengthen the paper. | train | [
"kYTwuHkY4un",
"fRYZ8lvH1hS",
"k9HhvjSnQ1",
"GtVGRQtohrZ",
"7Ue4UsT8dKp",
"u-eOdglCinc",
"CM9AKjaWDC",
"_D7VWwzR0R7",
"BJYl3Rj8a_O",
"Gfcr8g13m1O",
"G8NaOKS5h52",
"RarYeaX-ZTk",
"CeRwWfWuYI3",
"WohPGo87byn",
"J27Mhb1iY11",
"ZDzmbAOnrzv"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a method (Skyformer) to apply Nystrom approximation to the attention matrix. It embeds the attention matrix inside a larger PSD matrix (unlike Nystromformer), allowing Nystrom method to work well. Theoretical and empirical validation shows that the method is competitive with other forms of effic... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
9,
6,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
3,
5
] | [
"nips_2021_pZCYG7gjkKz",
"G8NaOKS5h52",
"G8NaOKS5h52",
"_D7VWwzR0R7",
"ZDzmbAOnrzv",
"J27Mhb1iY11",
"WohPGo87byn",
"CeRwWfWuYI3",
"kYTwuHkY4un",
"RarYeaX-ZTk",
"nips_2021_pZCYG7gjkKz",
"nips_2021_pZCYG7gjkKz",
"nips_2021_pZCYG7gjkKz",
"nips_2021_pZCYG7gjkKz",
"nips_2021_pZCYG7gjkKz",
"... |
nips_2021_LKUfuWxajHc | TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification | Multiple instance learning (MIL) is a powerful tool to solve the weakly supervised classification in whole slide image (WSI) based pathology diagnosis. However, the current MIL methods are usually based on the independent and identical distribution hypothesis, and thus neglect the correlation among different instances. To address this problem, we proposed a new framework, called correlated MIL, and provided a proof for convergence. Based on this framework, we devised a Transformer based MIL (TransMIL), which explored both morphological and spatial information. The proposed TransMIL can effectively deal with unbalanced/balanced and binary/multiple classification with great visualization and interpretability. We conducted various experiments for three different computational pathology problems and achieved better performance and faster convergence compared with state-of-the-art methods. The test AUC for the binary tumor classification can be up to 93.09% over the CAMELYON16 dataset. And the AUC over the cancer subtypes classification can be up to 96.03% and 98.82% over the TCGA-NSCLC dataset and the TCGA-RCC dataset, respectively. Implementation is available at: https://github.com/szc19990412/TransMIL.
| accept | The paper presents an interesting (if rather straightforward) application of transformers to Multiple Instance Learning for the classification of whole slide images. The authors successfully responded to the reviewers' criticisms. All reviewers recommend acceptance.
The authors are encouraged to include the additional experiments and explanations in the final version. | train | [
"kBZVZrvCYxy",
"fK3MZwn6JYO",
"HunsE6IfMFU",
"fCX4W4FmqUy",
"9ClZRFNnAI6",
"YWKP2GB__I",
"MMYfWJvZumK",
"YLxei3XVDyY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"\tThe authors propose a Transformer-based MIL method for WSI classification. \nThe method is designed to avoid the IID assumption inherent to standard MIL inference methods (e.g. mean, max pooling). Secondly, they propose a positional encoding module for patches in WSIs based on CNNs to incorporate spatial inform... | [
7,
-1,
6,
-1,
6,
-1,
-1,
-1
] | [
4,
-1,
4,
-1,
3,
-1,
-1,
-1
] | [
"nips_2021_LKUfuWxajHc",
"YWKP2GB__I",
"nips_2021_LKUfuWxajHc",
"YLxei3XVDyY",
"nips_2021_LKUfuWxajHc",
"kBZVZrvCYxy",
"HunsE6IfMFU",
"9ClZRFNnAI6"
] |
nips_2021_NlB8_hXkbby | Multi-view Contrastive Graph Clustering | With the explosive growth of information technology, multi-view graph data have become increasingly prevalent and valuable. Most existing multi-view clustering techniques focus on either the scenario of multiple graphs or that of multi-view attributes. In this paper, we propose a generic framework to cluster multi-view attributed graph data. Specifically, inspired by the success of contrastive learning, we propose the multi-view contrastive graph clustering (MCGC) method to learn a consensus graph, since the original graph could be noisy or incomplete and is not directly applicable. Our method consists of two key steps: we first filter out the undesirable high-frequency noise while preserving the graph geometric features via graph filtering and obtain a smooth representation of nodes; we then learn a consensus graph regularized by a graph contrastive loss. Results on several benchmark datasets show the superiority of our method with respect to state-of-the-art approaches. In particular, our simple approach outperforms existing deep learning-based methods.
| accept | This paper focuses on multi-view graph representation learning, proposing a graph clustering model and demonstrating its usefulness against a number of baselines. The reviewers felt the model was well motivated with clear contributions, though there were some concerns about presentation, namely how the main contributions are presented. However, the reviewers were satisfied with the promised changes; I encourage the authors to make them, as this (along with proofreading) will make the paper's impact generally clearer. Finally, the reviewers were generally excited to see the strong performance of a shallow model, and I agree that this is an important result in the context of the currently dominant "bigger is better" models. Therefore, I recommend acceptance. | train | [
"j-J5FAG0uga",
"AV_59Kq0Ivx",
"QubGqjehNfh",
"obBwSeq23U",
"31mueYfTTl1",
"MMBPqW4lDGA",
"txN4nDK5tnU",
"CXGUVxRXCiC",
"dkA2vJ61KrG",
"Zo_DGAroUyU",
"pMZVqAaRNj_",
"2bFKxUtDcTl"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a multi-view clustering method. They first use graph filter to generate a new node representation for input, and then do self-expression in a linear combination of various views. Afterwards, they propose a graph contrastive regularizer, which mimics the contrastive learning given the nearest ne... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
5,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
"nips_2021_NlB8_hXkbby",
"MMBPqW4lDGA",
"nips_2021_NlB8_hXkbby",
"dkA2vJ61KrG",
"Zo_DGAroUyU",
"j-J5FAG0uga",
"pMZVqAaRNj_",
"2bFKxUtDcTl",
"QubGqjehNfh",
"nips_2021_NlB8_hXkbby",
"nips_2021_NlB8_hXkbby",
"nips_2021_NlB8_hXkbby"
] |
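For reference, below is a minimal sketch of how rows shaped like the ones above could be loaded and queried with the Hugging Face `datasets` library. This assumes the dump corresponds to a hosted dataset; the repository path `user/nips-2021-peer-reviews` is a hypothetical placeholder, not the actual name of this dataset, and the `-1` sentinel in the rating/confidence columns marks unrated entries (author replies and follow-up comments), as seen in the rows above.

```python
# Minimal sketch: loading and querying rows shaped like the ones above with the
# Hugging Face `datasets` library. The repository path is a hypothetical
# placeholder -- substitute the actual location of this dataset.
from datasets import load_dataset

ds = load_dataset("user/nips-2021-peer-reviews", split="train")  # hypothetical path

# Keep accepted papers and pair each review rating with its confidence, skipping
# the -1 sentinel used for unrated entries (author replies, follow-up comments).
accepted = ds.filter(lambda row: row["paper_acceptance"] == "accept")
for row in accepted.select(range(3)):
    scored = [
        (rating, conf)
        for rating, conf in zip(row["review_ratings"], row["review_confidences"])
        if rating != -1
    ]
    print(row["paper_title"], scored)
```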