| paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29-10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
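Each row stores its reviews as parallel lists: `review_writers[i]`, `review_contents[i]`, and `review_ratings[i]` all describe the same comment, and author or public replies carry a placeholder rating of -1 while only `official_reviewer` entries carry real scores. The sketch below illustrates this alignment on a hypothetical record containing a subset of the first row's values; the `reviewer_scores` helper is illustrative, not part of the dataset.

```python
# Illustrative record mirroring the parallel-list schema above.
# Field values are a hypothetical subset of the first row; author
# replies are marked with a rating of -1 and carry no real score.
record = {
    "paper_id": "iclr_2021_lSijhyKKsct",
    "review_writers": ["official_reviewer", "official_reviewer",
                       "official_reviewer", "author", "official_reviewer"],
    "review_ratings": [7, 4, 7, -1, 3],
}

def reviewer_scores(rec):
    """Return only the ratings attached to official reviewer comments."""
    return [r for w, r in zip(rec["review_writers"], rec["review_ratings"])
            if w == "official_reviewer" and r != -1]

print(reviewer_scores(record))  # [7, 4, 7, 3]
```

The same index-wise pairing applies to `review_ids`, `review_contents`, `review_confidences`, and `review_reply_tos`, so any per-review analysis should zip these lists rather than filter them independently.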
iclr_2021_lSijhyKKsct | Reinforcement Learning with Latent Flow | Temporal information is essential to learning effective policies with Reinforcement Learning (RL). However, current state-of-the-art RL algorithms either assume that such information is given as part of the state space or, when learning from pixels, use the simple heuristic of frame-stacking to implicitly capture tempo... | withdrawn-rejected-submissions | This paper provides a simple approach to incorporating temporal information in RL algorithms. The AC agrees with the authors that simplicity is a virtue. However, as reviewers point out, the approach is not conclusively better experimentally (given that environments might be hand-chosen). Even R3 believes some reported improvements i... | train | [
"Qn9qxBV4wn1",
"Lr3bBkjePkg",
"tIvuQp_bsKL",
"ADti1NX0s2L",
"w5Ij_DjMrNa",
"qsQpFx7qnus",
"YykaFcyN5X3",
"e0pJqTMFIEs",
"HH5MiCmgMhD",
"t0FClChyvKw"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"- Summary:\n - This paper presents Flare, an RL method that replaces frame stacking (early fusion) with latent vector stacking (late fusion) and then further improves upon this by adding latent flow vectors (the difference between adjacent latent vectors)\n - The method is demonstrated on DM control using ... | [
7,
4,
7,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2021_lSijhyKKsct",
"iclr_2021_lSijhyKKsct",
"iclr_2021_lSijhyKKsct",
"Qn9qxBV4wn1",
"tIvuQp_bsKL",
"Lr3bBkjePkg",
"t0FClChyvKw",
"t0FClChyvKw",
"iclr_2021_lSijhyKKsct",
"iclr_2021_lSijhyKKsct"
] |
iclr_2021_zI38PZQHWKj | Feature-Robust Optimal Transport for High-Dimensional Data | Optimal transport is a machine learning problem with applications including distribution comparison, feature selection, and generative adversarial networks. In this paper, we propose feature-robust optimal transport (FROT) for high-dimensional data, which solves high-dimensional OT problems using feature selection to a... | withdrawn-rejected-submissions | Motivated by (1) the problem of scaling up optimal transport to high-dimensional problems and (2) being able to tolerate noisy features, this paper introduces a new optimization problem that they call feature-robust optimal transport where they find a transport plan with discriminative features. They show that the min-... | train | [
"85VCwtROxIG",
"eDnPxbSAV7n",
"P9uzEKblYX",
"-H_jma7KSCe",
"LB9KCWXwzQh",
"Rtj5ykMxdXi",
"WddFG3GiRK",
"WoDWNNRyTm-"
] | [
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The proposed framework FROT - feature-robust optimal transport - seeks to select feature groups to both speed up OT computation for high-dimensional data and make it more robust to noise. The exposition is generally clear. My main concerns are limited novelty and lack of extensive experiments.\n\nThe paper draws t... | [
4,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_zI38PZQHWKj",
"WoDWNNRyTm-",
"WddFG3GiRK",
"LB9KCWXwzQh",
"iclr_2021_zI38PZQHWKj",
"85VCwtROxIG",
"iclr_2021_zI38PZQHWKj",
"iclr_2021_zI38PZQHWKj"
] |
iclr_2021_T6RYeudzf1 | TextSETTR: Label-Free Text Style Extraction and Tunable Targeted Restyling | We present a novel approach to the challenging problem of label-free text style transfer. Unlike previous approaches that use parallel or non-parallel labeled data, our technique removes the need for labels entirely, relying instead on the implicit connection in style between adjacent sentences in unlabeled text. We sh... | withdrawn-rejected-submissions | This paper proposes a new method for label-free text style transfer. The method employs the pre-trained language model T5 and makes an assumption that two adjacent sentences in a document have the same style. Experimental results show satisfying results compared with supervised methods.
Pros. • The paper is generally ... | train | [
"W3ayvoSkoq7",
"WbgBspw8Io",
"PwjyIbLbw2u",
"tdXeiE4_vEw",
"nhu3oGSbGbE",
"Ed-xc7b4znd",
"zZf5SiQvqVq",
"_yMPaXV0UdZ",
"CRuZ4_C-DtW",
"S3l_DW0jEw6",
"UmAsk5M1yes",
"-JFyPegpsl"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all the reviewers for their time and thoughtful feedback.\n\nThe reviewers with higher confidence (R3, R4) pointed out several core strengths of the paper:\n\nR3 writes that the unsupervised setup we describe is **“important and instructive”**, the model architecture is **“well demonstrated”**, and the wr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
2,
4
] | [
"iclr_2021_T6RYeudzf1",
"S3l_DW0jEw6",
"-JFyPegpsl",
"CRuZ4_C-DtW",
"iclr_2021_T6RYeudzf1",
"iclr_2021_T6RYeudzf1",
"iclr_2021_T6RYeudzf1",
"UmAsk5M1yes",
"iclr_2021_T6RYeudzf1",
"iclr_2021_T6RYeudzf1",
"iclr_2021_T6RYeudzf1",
"iclr_2021_T6RYeudzf1"
] |
iclr_2021_ohz3OEhVcs | Graph Autoencoders with Deconvolutional Networks | Recent studies have indicated that Graph Convolutional Networks (GCNs) act as a low pass filter in spectral domain and encode smoothed node representations. In this paper, we consider their opposite, namely Graph Deconvolutional Networks (GDNs) that reconstruct graph signals from smoothed node representations. We moti... | withdrawn-rejected-submissions | The covered topic is timely and of potential impact for many application domains, such as drug design. The paper is well written and presentation is clear. The proposed approach seems to have some degree of originality. Experimental results seem to be generally good, and in the rebuttal the authors have provided furthe... | train | [
"1FvCXiLffh",
"QomLfEftgyu",
"vWzNcE_s6o-",
"th_PKd23Fpl",
"wwAqI5A-BND",
"vqWAqLNTJtO",
"xbKc8LXtkJt",
"ZY0ItKOZ4n",
"zhtAN7JLQ2",
"Cbc2Kg9t4B-"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Graph Autoencoders with Deconvolutional Networks\n\nThe paper proposes a graph deconvolutional network to reconstruct the original graph signal from smoothed node representations obtained by graph convolutional networks.\n\nThe proposed deconvolution incorporates a denoising component based on graph wavelet transf... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2021_ohz3OEhVcs",
"iclr_2021_ohz3OEhVcs",
"zhtAN7JLQ2",
"1FvCXiLffh",
"iclr_2021_ohz3OEhVcs",
"QomLfEftgyu",
"Cbc2Kg9t4B-",
"zhtAN7JLQ2",
"iclr_2021_ohz3OEhVcs",
"iclr_2021_ohz3OEhVcs"
] |
iclr_2021_1ibNKMp8SKc | On Disentangled Representations Learned From Correlated Data | Despite impressive progress in the last decade, it still remains an open challenge to build models that generalize well across multiple tasks and datasets. One path to achieve this is to learn meaningful and compact representations, in which different semantic aspects of data are structurally disentangled. The focus of... | withdrawn-rejected-submissions | This submission considers the problem of learning disentangled representations from data in which there are correlations between underlying factors of variation (FoVs). Much of the work on learning disentangled representations has considered simulated datasets in which the FoVs are conditionally independent. The author... | train | [
"jHGT-d52rs",
"5Z3qk4rmnzr",
"4I5kBHnAsjT",
"dydsAruk_7d",
"kv6Lnti9dt2",
"90YnXVFfndP",
"d9lIw9Dn2n5",
"rJRXnZ94dK5",
"wbFbbIpkWN8",
"Xmk0UmvI7if",
"nteY3iilFno",
"QHmODdInJA"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your prompt reply. We appreciate your time and feedback very much and wanted to clarify that we did not remove any experiments or change any conclusions but just moved some of the results to the appendix in light of your suggestions. If this is a problem, we are happy to move the intervening expe... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
3
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"4I5kBHnAsjT",
"iclr_2021_1ibNKMp8SKc",
"kv6Lnti9dt2",
"Xmk0UmvI7if",
"90YnXVFfndP",
"d9lIw9Dn2n5",
"wbFbbIpkWN8",
"nteY3iilFno",
"5Z3qk4rmnzr",
"QHmODdInJA",
"iclr_2021_1ibNKMp8SKc",
"iclr_2021_1ibNKMp8SKc"
] |
iclr_2021_Aj4_e50nB8 | Contextual Knowledge Distillation for Transformer Compression | A computationally expensive and memory intensive neural network lies behind the recent success of language representation learning. Knowledge distillation, a major technique for deploying such a vast language model in resource-scarce environments, transfers the knowledge on individual word representations learned witho... | withdrawn-rejected-submissions | This paper proposes a new method to perform knowledge distillation (KD) for transformer compression, where two types of contextual knowledge, namely, word relations and layer-transforming relations, are considered for KD. Both pair-wise and triple-wise relations are modeled.
This paper receives two weak reject and tw... | train | [
"8sRk6ZASFgd",
"OvpIiuwPE_4",
"K45Eaibu1PX",
"tO1K-vUsxOk",
"dlAVFcUSUDR",
"_pN8zhG4Vcn",
"5f5y3373G8q",
"Mfl6MIJZisG",
"H4S2K5kAxLD",
"Rm4ZeqBUnS",
"hqYsCDjWGus",
"5Y74M9yb6uC",
"LYDl09CSw66",
"86SEJjNXNdv",
"4NDWW2Y4Cbs",
"x5HeTbUJw6j",
"6wQcoxMIKq",
"bBNnGeh6bz6"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We sincerely thank you for mentioning that our approach is really interesting. We would like to clear up some misunderstandings to change your mind a little more positively.\n\n**Q1: However, in comparison to some other distillation methods, e.g. the TinyBERT, this method is already not a strong baseline, and it i... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"OvpIiuwPE_4",
"hqYsCDjWGus",
"tO1K-vUsxOk",
"Rm4ZeqBUnS",
"_pN8zhG4Vcn",
"LYDl09CSw66",
"iclr_2021_Aj4_e50nB8",
"iclr_2021_Aj4_e50nB8",
"iclr_2021_Aj4_e50nB8",
"x5HeTbUJw6j",
"6wQcoxMIKq",
"6wQcoxMIKq",
"5f5y3373G8q",
"5f5y3373G8q",
"bBNnGeh6bz6",
"iclr_2021_Aj4_e50nB8",
"iclr_2021_... |
iclr_2021_hTUPgfEobsm | ADIS-GAN: Affine Disentangled GAN | This paper proposes Affine Disentangled GAN (ADIS-GAN), which is a Generative Adversarial Network that can explicitly disentangle affine transformations in a self-supervised and rigorous manner. The objective is inspired by InfoGAN, where an additional affine regularizer acts as the inductive bias. The affine regular... | withdrawn-rejected-submissions | Most of the reviewers had serious problems with clarity to start out.
The authors have addressed some, but not all of these problems.
More importantly, there were issues of significance and experimental evaluation.
I concur with R4 on the experimental evaluation.
I think if you're going to explicitly specialize tow... | train | [
"Ql1nkM67HfK",
"3Ls9hiZt6R",
"-FsGaNfyddX",
"m6526NRkU2s",
"MBlHWV98toi",
"V-MeUVO9kr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method that can explicitly learn the affine transformations via disentangled representations in a self-supervised manner. The proposed ADIS-GAN learns to extract affine parameters by adding an affine regularizer on the top of InfoGAN. It seems to me that the main contribution of the paper is ... | [
5,
-1,
-1,
-1,
4,
3
] | [
2,
-1,
-1,
-1,
2,
2
] | [
"iclr_2021_hTUPgfEobsm",
"Ql1nkM67HfK",
"MBlHWV98toi",
"V-MeUVO9kr",
"iclr_2021_hTUPgfEobsm",
"iclr_2021_hTUPgfEobsm"
] |
iclr_2021_TiGF63rxr8Q | Efficient Reinforcement Learning in Resource Allocation Problems Through Permutation Invariant Multi-task Learning | One of the main challenges in real-world reinforcement learning is to learn successfully from limited training samples. We show that in certain settings, the available data can be dramatically increased through a form of multi-task learning, by exploiting an invariance property in the tasks. We provide a theoretical pe... | withdrawn-rejected-submissions | This paper tackles a problem of resource allocation using reinforcement learning. An important invariant - permutation invariant - is identified as an important characteristic of this problem. Then it is shown that taking advantage of such an invariant should dramatically improve the sample efficiency.
On behalf of t... | train | [
"I5K8trbUGo",
"QUvn_OEDenW",
"FT262egzEcd",
"ZpbB7LPiNzy",
"nT5Q06zS9FR",
"jf4lhi1ePSl",
"N2bkqX1L0Mq",
"nkXlOA8d2Y-"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an approach to reducing the sample complexity in multi-task reinforcement learning using permutation invariant policies. The main premise of the paper is that certain families of tasks exhibit approximate forms of symmetry, i.e., applying a permutation to the state/action variables would make a... | [
5,
-1,
-1,
-1,
-1,
7,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
4,
2,
2
] | [
"iclr_2021_TiGF63rxr8Q",
"jf4lhi1ePSl",
"N2bkqX1L0Mq",
"nkXlOA8d2Y-",
"I5K8trbUGo",
"iclr_2021_TiGF63rxr8Q",
"iclr_2021_TiGF63rxr8Q",
"iclr_2021_TiGF63rxr8Q"
] |
iclr_2021_zgGmAx9ZcY | Learning the Connections in Direct Feedback Alignment | Feedback alignment was proposed to address the biological implausibility of the backpropagation algorithm which requires the transportation of the weight transpose during the backwards pass. The idea was later built upon with the proposal of direct feedback alignment (DFA), which propagates the error directly from the ... | withdrawn-rejected-submissions | This paper investigates an improvement to the direct feedback alignment (DFA) algorithm where the "backward weights" are learned instead of being fixed random matrices. The proposed approach essentially applies the technique of DFA to Kolen-Pollack learning. While reviewers found the paper reasonably clear and thought ... | train | [
"j5kXqLsN9qi",
"bXF6GWOnu-B",
"NJ9nn1QssQj",
"Lwdlu1ouAPP",
"g-3t_iMjiRm",
"-Ya2TVvHwse"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"## Second Review\n\nI thank the authors for taking the time to answer my queries. However, the current manuscript fails to point out the key difference between the Akrout (2019) Kolen-Pollack method and DKP (the proposed method). Combining FA with DKP does not add sufficient novelty. As pointed out by other reviewers, the paper should highlig... | [
5,
-1,
-1,
-1,
5,
6
] | [
4,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_zgGmAx9ZcY",
"g-3t_iMjiRm",
"j5kXqLsN9qi",
"-Ya2TVvHwse",
"iclr_2021_zgGmAx9ZcY",
"iclr_2021_zgGmAx9ZcY"
] |
iclr_2021_iox4AjpZ15 | Invertible Manifold Learning for Dimension Reduction | It is widely believed that a dimension reduction (DR) process drops information inevitably in most practical scenarios. Thus, most methods try to preserve some essential information of data after DR, as well as manifold based DR methods. However, they usually fail to yield satisfying results, especially in high-dimensi... | withdrawn-rejected-submissions |
While reviewers find the ideas in the paper interesting, they also raise several major concerns.
In particular, R1 and R4 find the claims of "invertible" and "lossless" to be potentially misleading.
The bijective property is achieved in the first stage (L-1 layers) through a sequence of one-to-one mappings, as is done i... | train | [
"acT56_kmkLb",
"8NC6dUTk17",
"BYCWqRjQhFx",
"azhR2PttwW",
"8v8xLdAAPXy",
"AR6Cfr47Lfc",
"37Li6x3ks3r",
"L6vVDD6BFmq",
"WzS_JjxfhEo",
"wKJF_tVWJz0",
"oZK4_SUEJ4i",
"hH7IVQy7WhH",
"nPoq9mR7Vel",
"hAQawEuBOQR",
"7aeVd6RaL25"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We sincerely thank the reviewers for their detailed comments and compliments. We have submitted the first revision to address the defects pointed out by the reviewers. The revised parts of the paper are marked in $\\textcolor{brown}{brown}$. Specifically, we identify the problem to be addressed and the main contributions in thi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"iclr_2021_iox4AjpZ15",
"hAQawEuBOQR",
"hH7IVQy7WhH",
"7aeVd6RaL25",
"wKJF_tVWJz0",
"oZK4_SUEJ4i",
"WzS_JjxfhEo",
"BYCWqRjQhFx",
"AR6Cfr47Lfc",
"oZK4_SUEJ4i",
"nPoq9mR7Vel",
"iclr_2021_iox4AjpZ15",
"iclr_2021_iox4AjpZ15",
"iclr_2021_iox4AjpZ15",
"iclr_2021_iox4AjpZ15"
] |
iclr_2021_3F0Qm7TzNDM | Variance Based Sample Weighting for Supervised Deep Learning | In the context of supervised learning of a function by a Neural Network (NN), we claim and empirically justify that a NN yields better results when the distribution of the data set focuses on regions where the function to learn is steeper. We first translate this assumption into a mathematically workable form using Taylor e... | withdrawn-rejected-submissions | # Paper Summary
This paper proposes "variance based sample weighting" (VBSW). The key observation is that, in areas where the labeling function is changing rapidly, more samples may be required to achieve a good fit. In Section 3, they justify this intuition by showing that areas in which the label gradient is higher ... | train | [
"FD6HRhuxTJP",
"EW-R1U9lP7U",
"5DlIMkjJWkr",
"C4DyYL6ro0b",
"gxQNCc5iXZC",
"sl080_YGmnd",
"xy5bvekYDO1",
"We_vEnGLyJE",
"EpWm4yBroiW",
"Z3FWTPHXzSr",
"0lXmejARo_N"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"A method for computing sample learning weights based on variance is proposed. The method is model independent and a simple k-NN based estimator for the weights is derived. The authors justify their work by appealing to a novel generalisation bound. Overall the idea is interesting but the exposition needs to be sig... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"iclr_2021_3F0Qm7TzNDM",
"iclr_2021_3F0Qm7TzNDM",
"iclr_2021_3F0Qm7TzNDM",
"0lXmejARo_N",
"FD6HRhuxTJP",
"EpWm4yBroiW",
"EpWm4yBroiW",
"Z3FWTPHXzSr",
"iclr_2021_3F0Qm7TzNDM",
"iclr_2021_3F0Qm7TzNDM",
"iclr_2021_3F0Qm7TzNDM"
] |
iclr_2021_zYmnBGOZtH | An information-theoretic framework for learning models of instance-independent label noise | Given a dataset D with label noise, how do we learn its underlying noise model? If we assume that the label noise is instance-independent, then the noise model can be represented by a noise transition matrix QD. Recent work has shown that even without further information about any instances with correct labels, or furt... | withdrawn-rejected-submissions | This paper studies the following model: The input to our classifier is the instance X which determines the label Z and we observe a noisy version of this label Y. The key assumption is that the label noise is independent of the instance, and the goal is to learn the channel from Z to Y. The main motivation is that gene... | train | [
"dwG5d1LF4ou",
"JdumcL8LyIU",
"GLVKHj8Or3P",
"oUTuenzMGyr",
"uAKxy5cgqlx",
"E6qZ1r5S3Pi",
"8BNi0KA44rN",
"BXxYhXDcxX",
"G-Y_qMNV_yc",
"iPWYRu28xPO",
"NeHxuNQSmH",
"ix9VFpi9tmB",
"jKQyMo3SdjK",
"Cf0WieLsHhF"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers the problem of estimating instance-independent label noise. More formally, it is assumed that the true labels for any data point are modified based on a noise transition matrix, and the goal is to estimate this noise transition matrix. The paper proposed an information-theoretic approach for th... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_zYmnBGOZtH",
"jKQyMo3SdjK",
"jKQyMo3SdjK",
"Cf0WieLsHhF",
"Cf0WieLsHhF",
"Cf0WieLsHhF",
"dwG5d1LF4ou",
"Cf0WieLsHhF",
"dwG5d1LF4ou",
"jKQyMo3SdjK",
"dwG5d1LF4ou",
"Cf0WieLsHhF",
"iclr_2021_zYmnBGOZtH",
"iclr_2021_zYmnBGOZtH"
] |
iclr_2021_IkYEJ5Cps5H | Succinct Network Channel and Spatial Pruning via Discrete Variable QCQP | Reducing the heavy computational cost of large convolutional neural networks is crucial when deploying the networks to resource-constrained environments. In this context, recent works propose channel pruning via greedy channel selection to achieve practical acceleration and memory footprint reduction. We first show... | withdrawn-rejected-submissions | This paper proposed a new optimization framework for pruning CNNs considering coupling between channels in the neighboring layers. Two reviewers suggested acceptance and two did rejection. The main concerns of the negative reviewers are (a) limited novelty, (b) limited performance metrics and (c) limited baselines. The... | val | [
"tFCLoJiWefe",
"mxEswI8Ggxi",
"xUQ0HCMBnH",
"l5MFnFkR24F",
"hZp9wy2ct_Q",
"_Ts7BVEm5Z",
"JunV4nkZ8ga",
"_EFcrSJCMlp",
"g2cLyBv_lV7",
"hpDNAcfGwwh"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper mainly improves the idea of \"PRUNING FILTERS FOR EFFICIENT CONVNETS\" by encouraging the pruning with a {0-1} optimization instead of a greedy manner. Experiments validate the effectiveness of the proposed method. \n\nPros: \n+ Writing is good, and the technical details seem sound and clear.\n+ The mot... | [
5,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"iclr_2021_IkYEJ5Cps5H",
"iclr_2021_IkYEJ5Cps5H",
"tFCLoJiWefe",
"xUQ0HCMBnH",
"iclr_2021_IkYEJ5Cps5H",
"g2cLyBv_lV7",
"mxEswI8Ggxi",
"hpDNAcfGwwh",
"iclr_2021_IkYEJ5Cps5H",
"iclr_2021_IkYEJ5Cps5H"
] |
iclr_2021_Bpw_O132lWT | Dynamic of Stochastic Gradient Descent with State-dependent Noise | Stochastic gradient descent (SGD) and its variants are mainstream methods to train deep neural networks. Since neural networks are non-convex, more and more works study the dynamic behavior of SGD and its impact to generalization, especially the escaping efficiency from local minima. However, these works make the over-... | withdrawn-rejected-submissions | The average review rating is 5.5 which means it’s somewhat borderline. One of the reviewers planned to increase the score but apparently didn’t do so formally. A subset of the main pros and cons the reviewers pointed out are:
Pros:
“Some empirical support is provided for the theory.”
“ It is particularly interesting... | train | [
"uB0smUihduB",
"FL6HlaYlCU4",
"juO-JoFhad",
"B3FmtfikWvp",
"XFvOdsAWR0R",
"NF2JyUng6Py",
"R6VTScHsT1",
"79ZK_mPsaLM",
"qevyo7b-1p",
"0KhY94zUnkY",
"XaiXgknkrs5"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for raising the score. We’re glad to see that our response has addressed your concerns. We will continue to polish the paper as you suggested.",
"Thank you very much for the valuable comments. Please check the proof of Theorem 2 in Appendix 7.1 in the updated version. \nHere are our responses to your t... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"B3FmtfikWvp",
"XaiXgknkrs5",
"iclr_2021_Bpw_O132lWT",
"XFvOdsAWR0R",
"juO-JoFhad",
"qevyo7b-1p",
"0KhY94zUnkY",
"iclr_2021_Bpw_O132lWT",
"iclr_2021_Bpw_O132lWT",
"iclr_2021_Bpw_O132lWT",
"iclr_2021_Bpw_O132lWT"
] |
iclr_2021_yT7-k6Q6gda | Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization | The early phase of training has been shown to be important in two ways for deep neural networks. First, the degree of regularization in this phase significantly impacts the final generalization. Second, it is accompanied by a rapid change in the local loss curvature influenced by regularization choices. Connecting thes... | withdrawn-rejected-submissions | This is a tricky one, hence my low confidence rating.
The reviewers seem to agree that the paper is well written, easy to follow, and that it tests a relevant hypothesis that is of interest to the community. There was some disagreement as to whether the experiments are comprehensive, complete and/or conclusive enough,... | test | [
"izFImoH1aw3",
"p4w7XJw71W2",
"ZbmJKjUUdwU",
"eJxqRvkW50",
"h3GE-ctlM0I",
"d9zTlrl2XAo",
"6BMVP42e-uW",
"fdje3cFH7L6",
"-hfVtrDbO7D",
"KMR5HqBmzZ",
"CGv5sm7XpmV",
"WFS3b3Vg6IY",
"SQ-4S0NAgzl",
"IbaNJu_HM-",
"cCpCqfW5QQh"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"After the rebuttal, the authors added a section on large-batch training, which shows that the catastrophic Fisher explosion also occurs in large batch training. This makes the paper more convincing. However, the main concern that this paper lacks theoretical contribution still exists. Therefore, I keep the weak ac... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_yT7-k6Q6gda",
"iclr_2021_yT7-k6Q6gda",
"h3GE-ctlM0I",
"iclr_2021_yT7-k6Q6gda",
"WFS3b3Vg6IY",
"IbaNJu_HM-",
"izFImoH1aw3",
"izFImoH1aw3",
"IbaNJu_HM-",
"cCpCqfW5QQh",
"p4w7XJw71W2",
"p4w7XJw71W2",
"p4w7XJw71W2",
"iclr_2021_yT7-k6Q6gda",
"iclr_2021_yT7-k6Q6gda"
] |
iclr_2021_akgiLNAkC7P | Inverse Constrained Reinforcement Learning | Standard reinforcement learning (RL) algorithms train agents to maximize given reward functions. However, many real-world applications of RL require agents to also satisfy certain constraints which may, for example, be motivated by safety concerns. Constrained RL algorithms approach this problem by training agents to m... | withdrawn-rejected-submissions | This paper introduces ICRL, where the RL agent is supposed to maximize the reward under unknown constraints, which should be inferred from the expert demonstration. Reviewers generally agreed that this is an interesting work, and potentially make RL to be applied to more general settings. However, they also would like ... | val | [
"TceFVSQQX7n",
"lMS_w2j7bt1",
"sDNl_33q7lO",
"ajs_rdGlIzW",
"ZQszwUKxaJA",
"Cx7sjrc7bo",
"nwuNA5C2cuO",
"i9LO3ZNqwFv",
"T4k5fh4lfgB",
"sI3VgMeMXL1",
"5ahcrFjyPuE",
"I5twSL251x",
"0t1H4GYMU39",
"OhR3UBnhlMz",
"GD818iJHCAR",
"MDTl8ktIZPY",
"tq4x4P8zIH",
"AKn5Q6mK0mS",
"yOdO1eS_Iqi"... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
"#### Summary\n\nThe submission focuses on a variant of inverse reinforcement learning, where the learner knows the task reward but is unaware of hard constraints that need to be respected while completing the task. The authors provide an algorithm to recover these constraints from expert demonstrations. The propos... | [
7,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_akgiLNAkC7P",
"ZQszwUKxaJA",
"i9LO3ZNqwFv",
"nwuNA5C2cuO",
"OhR3UBnhlMz",
"iclr_2021_akgiLNAkC7P",
"GD818iJHCAR",
"5ahcrFjyPuE",
"sI3VgMeMXL1",
"0t1H4GYMU39",
"TceFVSQQX7n",
"iclr_2021_akgiLNAkC7P",
"MDTl8ktIZPY",
"iclr_2021_akgiLNAkC7P",
"Cx7sjrc7bo",
"I5twSL251x",
"I5twS... |
iclr_2021_MY3WGKsXct_ | Membership Attacks on Conditional Generative Models Using Image Difficulty | Membership inference attacks (MIA) try to detect if data samples were used to train a Neural Network model. As training data is very valuable in machine learning, MIA can be used to detect the use of unauthorized data. Unlike the traditional MIA approaches, addressing classification models, we address conditional image... | withdrawn-rejected-submissions | The work focuses on detecting whether a certain data sample was used to train a deep network-based conditional image synthesis model. The key idea is not to rely on just reconstruction error but normalizing it via a proposed difficulty score. The reviewers found the problem statement important and the paper easy to fol... | train | [
"EhVXWmPoK2m",
"KUZ6gTcvi4-",
"DtE754P_yE",
"Vn4wegPuLvr",
"ni2xOmlH2Ru",
"oZyVIsePddV",
"yqTx9Oa3Tfm",
"mur9FirnXVq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper aims to solve the problem of membership attacks, i.e., detecting whether data samples were used to train a neural network for conditional image generation. The paper first proposes a simple but effective approach using the reconstruction error. To address the issue that reconstruction error alone is less effe... | [
6,
5,
6,
-1,
-1,
-1,
-1,
6
] | [
3,
3,
3,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_MY3WGKsXct_",
"iclr_2021_MY3WGKsXct_",
"iclr_2021_MY3WGKsXct_",
"KUZ6gTcvi4-",
"DtE754P_yE",
"EhVXWmPoK2m",
"mur9FirnXVq",
"iclr_2021_MY3WGKsXct_"
] |
iclr_2021_M9hdyCNlWaf | Sparse Uncertainty Representation in Deep Learning with Inducing Weights | Bayesian neural networks and deep ensembles represent two modern paradigms of uncertainty quantification in deep learning. Yet these approaches struggle to scale mainly due to memory inefficiency issues, since they require parameter storage several times higher than their deterministic counterparts. To address this, we... | withdrawn-rejected-submissions | The scores for this paper have been borderline, however the decision has been greatly facilitated by the participation of the authors and reviewers to the discussion and, more importantly, by active private discussion among reviewers and AC. Specifically, from the private discussion it seems that the reviewers find int... | train | [
"tHp6Xlovbwi",
"rsttyd1OYq-",
"PWc8NzrHF4y",
"CrBfXBEgL9J",
"KQIhPB1DX3s",
"jZxhie4nLwS",
"xtU1GbEgiSG",
"6eCa3b3TeVP",
"eKnzGij1VA",
"KUPFW1L8T_E",
"dRgmcm5bYDU",
"6DN6Vm01f4n",
"BGU4-dbMcK",
"Wa7YRCPg5Lw",
"VOfpbJfbFS",
"OOXG9b2YMMl",
"VSPZWoFSN3u",
"Nhhw4yLjlV",
"74BwCxlFDhE",... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_r... | [
"\n\n# summary\n\nThis paper proposed a method on uncertainty estimation in deep neural\nnetworks. Compared with BNN and deep ensemble, the proposed approach in this\nwork has a storage advantage. Furthermore, this work provides a better\ntrade-off between accuracy and calibration.\n\n\n# pros\n\n1. The approach i... | [
6,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_M9hdyCNlWaf",
"CrBfXBEgL9J",
"eKnzGij1VA",
"74BwCxlFDhE",
"iclr_2021_M9hdyCNlWaf",
"tHp6Xlovbwi",
"iclr_2021_M9hdyCNlWaf",
"iclr_2021_M9hdyCNlWaf",
"Nhhw4yLjlV",
"iclr_2021_M9hdyCNlWaf",
"6DN6Vm01f4n",
"BGU4-dbMcK",
"Wa7YRCPg5Lw",
"VSPZWoFSN3u",
"iclr_2021_M9hdyCNlWaf",
"KUP... |
iclr_2021_2NHl-ETnHxk | Adversarial Privacy Preservation in MRI Scans of the Brain | De-identification of magnetic resonance imagery (MRI) is intrinsically difficult since, even with all metadata removed, a person's face can easily be rendered and matched against a database. Existing de-identification methods tackle this task by obfuscating or removing parts of the face, but they either fail to reliabl... | withdrawn-rejected-submissions | The paper received diverging review feedback. While reviewers found merits in the work, they also raise serious concerns over experimental validation, comparison with the existing methods, and practicality of the proposed method. It appears that the paper can benefit from better writing and more experimental validation... | train | [
"zLVV3M6P187",
"kVtwtIK942O",
"K4giYmPAfVo",
"o6s-TVjvsq",
"4-oxGgX13nz",
"582VeaFOo0j",
"ib9rQ4kHQU5",
"SJ-h5UNB5k",
"PyQk3l51nnz",
"a95_PE9aJG0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors proposed a conditional GAN-based de-identification method for MRI scans. This method aimed to prevent possible personal information from leaking from the face surface of MRI data. The study is useful in certain clinical situations. However, the applied methods lack enough novelty in the aspect of deep ... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
3,
6,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"iclr_2021_2NHl-ETnHxk",
"ib9rQ4kHQU5",
"zLVV3M6P187",
"SJ-h5UNB5k",
"PyQk3l51nnz",
"a95_PE9aJG0",
"iclr_2021_2NHl-ETnHxk",
"iclr_2021_2NHl-ETnHxk",
"iclr_2021_2NHl-ETnHxk",
"iclr_2021_2NHl-ETnHxk"
] |
iclr_2021_j0p8ASp9Br | Real-time Uncertainty Decomposition for Online Learning Control | Safety-critical decisions based on machine learning models require a clear understanding of the involved uncertainties to avoid hazardous or risky situations. While aleatoric uncertainty can be explicitly modeled given a parametric description, epistemic uncertainty rather describes the presence or absence of training ... | withdrawn-rejected-submissions | This paper aims to do efficient epistemic uncertainty quantification for model-based learning for control. It does so by augmenting the dataset with synthetic data around the true data points, and trying to classify whether a point is close to the training set or not. I agree with many of the criticisms that R3 and R5 ... | train | [
"4tHorJFHdk",
"Z7JHMAPSIa2",
"14lqhWmT-dB",
"jJzSk4wQxy",
"-5RcjYZliEc",
"8_lWvYPulvL",
"X_jTWuEpMkS",
"snfy07Fole",
"QojqGqVQ85X",
"GWdh4M-2w7G",
"P6tt2yGZ1jM"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary\n--------\nThe authors consider the problem of efficient modeling of epistemic uncertainty, separated from aleatoric uncertainty, for neural networks. They propose a novel methodology, involving automatically constructing a epistemic uncertainty support data set used to extend a given NN with an epistemic... | [
7,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_j0p8ASp9Br",
"iclr_2021_j0p8ASp9Br",
"jJzSk4wQxy",
"8_lWvYPulvL",
"snfy07Fole",
"Z7JHMAPSIa2",
"4tHorJFHdk",
"GWdh4M-2w7G",
"P6tt2yGZ1jM",
"iclr_2021_j0p8ASp9Br",
"iclr_2021_j0p8ASp9Br"
] |
iclr_2021_k9GoaycDeio | Improving Local Effectiveness for Global Robustness Training | Despite its increasing popularity, deep neural networks are easily fooled. To alleviate this deficiency, researchers are actively developing new training strategies, which encourage models that are robust to small input perturbations. Several successful robust training methods have been proposed. However, many of them ... | withdrawn-rejected-submissions | This paper presents a new method of employing some existing techniques to improve robustness, which was verified through experiments. According to the reviewers’ comments and the authors’ responses to these comments, the reviewers generally appreciate the authors’ effort in properly improving and clarifying the propose... | train | [
"Paz1tUqajyI",
"qgryMzV3pYy",
"Nsqb6OhW11-",
"OjxFIYoBBog",
"-Mp8T8JYQ3",
"XORhgbLuz3_",
"JuoU9qzUFl2",
"74SssrnzRnE",
"WyLrJRFjTfY",
"fZjF5Wtz7QP"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new adversarial training scheme, LEAP, to obtain models robust against $\\ell_\\infty$-bounded adversarial examples. The loss used as the objective minimized during training involves both local and global (wrt the input space) properties of the network. Experiments suggest improved performance... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"iclr_2021_k9GoaycDeio",
"Nsqb6OhW11-",
"fZjF5Wtz7QP",
"iclr_2021_k9GoaycDeio",
"Paz1tUqajyI",
"74SssrnzRnE",
"WyLrJRFjTfY",
"iclr_2021_k9GoaycDeio",
"iclr_2021_k9GoaycDeio",
"iclr_2021_k9GoaycDeio"
] |
iclr_2021_E8fmaZwzEj | Defective Convolutional Networks | Robustness of convolutional neural networks (CNNs) has gained in importance on account of adversarial examples, i.e., inputs added as well-designed perturbations that are imperceptible to humans but can cause the model to predict incorrectly. Recent research suggests that the noise in adversarial examples breaks the te... | withdrawn-rejected-submissions | The paper presents a method to make CNN focus more on structure rather than texture by constraining a random set of neurons per feature map to have constant activation.
The paper has limited novelty and unclear analysis of the experimental results, for instance plots of accuracy vs strength of adversarial perturbation... | test | [
"xkQofUF3anO",
"b4MEUmh8USS",
"uXtStVWhNMJ",
"0Bnpy5mE2o4",
"4cIYLlQaRdn",
"S7RlqsBy8SA",
"dSRf8h4gZ6x",
"Vi7C9X4Tfp",
"ke1bOPvOgD1"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nThank you very much for your timely update. We clarify the mentioned points as below. If there’re further assessments, please let us know, then we can appropriately take the new feedback into account. \n\n---------------------------------------------\n***Q1***: Regarding the index operations.\n\n***A1***: Yes, w... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"b4MEUmh8USS",
"4cIYLlQaRdn",
"iclr_2021_E8fmaZwzEj",
"uXtStVWhNMJ",
"ke1bOPvOgD1",
"Vi7C9X4Tfp",
"0Bnpy5mE2o4",
"iclr_2021_E8fmaZwzEj",
"iclr_2021_E8fmaZwzEj"
] |
iclr_2021_0DALDI-xyW4 | A new accelerated gradient method inspired by continuous-time perspective | Nesterov's accelerated methods are widely used in problems with machine learning background including deep learning. To give more insight about the acceleration phenomenon, an ordinary differential equation was obtained from Nesterov's accelerated method by taking step sizes approaching zero, and the relationship betwee... | withdrawn-rejected-submissions | The paper studies a high-order discretization of the ODE corresponding to Nesterov's accelerated method, as introduced by Su-Boyd-Candes. The main claim of the paper is that the more complex discretization scheme leads to a method that is more stable and faster. However, the theoretical claims do not seem sufficiently ... | train | [
"YYfAT8rM6p",
"hDrncLJDQD6",
"FobG1AxT5Zt",
"ok5RTb1wqdb",
"3hobohOiPVZ",
"tPOcrGlU3rz",
"_N_XoU2_Acw",
"EHKqJUpQdgM",
"A4aaBLSGOk5",
"jehPDhF5czK",
"vo_NltmAVzz",
"ZgmraklFuO",
"LpNT5j08DKm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"UPDATE: After reading through all other reviews and responses by the authors, I share the concern that the theoretical justification of the paper is lacking as the connection between the truncation error and the improved algorithm performance is not rigorously proven. Therefore, I have reduced my score.\n\n\nSumma... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2021_0DALDI-xyW4",
"jehPDhF5czK",
"ok5RTb1wqdb",
"_N_XoU2_Acw",
"A4aaBLSGOk5",
"iclr_2021_0DALDI-xyW4",
"vo_NltmAVzz",
"LpNT5j08DKm",
"ZgmraklFuO",
"YYfAT8rM6p",
"iclr_2021_0DALDI-xyW4",
"iclr_2021_0DALDI-xyW4",
"iclr_2021_0DALDI-xyW4"
] |
iclr_2021_cbdp6RLk2r7 | Addressing the Topological Defects of Disentanglement | A core challenge in Machine Learning is to disentangle natural factors of variation in data (e.g. object shape vs pose). A popular approach to disentanglement consists in learning to map each of these factors to distinct subspaces of a model's latent representation. However, this approach has shown limited empirical s... | withdrawn-rejected-submissions | This is a borderline case (quite comparable to the other borderline case in my batch). The paper has received careful reviews and based on my weighting of the different arguments I arrive at an average score between 5.75 and 6.. The authors present some worthwhile ideas related to disentanglement that deserves more att... | train | [
"VPMXquybssI",
"EUDs1-oanjh",
"zJP3HDipisl",
"bImjtaN30-8",
"ldfSrvN6wkQ",
"KXrSHYjOigy",
"VmO-Zm7ypeS",
"F3Nb82_JweX",
"OEFMBvLOYax",
"JFX45HKEhFK",
"tGV9ioKlW0N",
"5sI9IoC59mU",
"3NeMPey5HS3",
"3crA7weUMIl",
"u303Z8E9R1c",
"O7lPIOJSc3y",
"MpCSLT84Yk",
"HN42rkn06HW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the notion of disentanglement in a group representation theoretic setting. Disentangling is sometimes conceptualized as mapping distinct factors (e.g. position / orientation) to distinct subspaces. It is shown theoretically that such a naive notion of disentangling is impossible for topological ... | [
5,
3,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_cbdp6RLk2r7",
"iclr_2021_cbdp6RLk2r7",
"iclr_2021_cbdp6RLk2r7",
"KXrSHYjOigy",
"bImjtaN30-8",
"VmO-Zm7ypeS",
"u303Z8E9R1c",
"VPMXquybssI",
"MpCSLT84Yk",
"tGV9ioKlW0N",
"zJP3HDipisl",
"EUDs1-oanjh",
"OEFMBvLOYax",
"iclr_2021_cbdp6RLk2r7",
"F3Nb82_JweX",
"HN42rkn06HW",
"iclr... |
iclr_2021_5wmNjjvGOXh | Selfish Sparse RNN Training | Sparse neural networks have been widely applied to reduce the necessary resource requirements to train and deploy over-parameterized deep neural networks. For inference acceleration, methods that induce sparsity from a pre-trained dense network (dense-to-sparse) work effectively. Recently, dynamic sparse training (DST)... | withdrawn-rejected-submissions | The authors introduce an approach to train sparse RNNs with a fixed parameter count. During training, they allow RNN layers to have a non-uniform redistribution across cell weights for a better regularization.They also introduce a variant of the averaged stochastic gradient optimizer, which improves the performance of ... | train | [
"p_6bnmFnyjc",
"K7ubHPkIt0A",
"I71wOUmlFQ",
"eoyhJkp7ieq",
"kuIRFXYb0-",
"bFyrcQRjUVb",
"-nMGFcluav",
"x4_uParF5zR",
"lhMqduuW5XY",
"-hg1W59XWZ"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments. We believe that there is some confusion regarding our method, which we clarify in details below:\n- **One-time SVD + fine-tuning usually works very well for most RNN training applications in the industry.** We want to re-emphasize that the central goal of our paper is to develop a meth... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
3
] | [
"-nMGFcluav",
"iclr_2021_5wmNjjvGOXh",
"lhMqduuW5XY",
"x4_uParF5zR",
"-hg1W59XWZ",
"lhMqduuW5XY",
"iclr_2021_5wmNjjvGOXh",
"iclr_2021_5wmNjjvGOXh",
"iclr_2021_5wmNjjvGOXh",
"iclr_2021_5wmNjjvGOXh"
] |
iclr_2021_QM4_h99pjCE | Decentralized Deterministic Multi-Agent Reinforcement Learning | Recent work in multi-agent reinforcement learning (MARL) by [Zhang, ICML 2018] provided the first decentralized actor-critic algorithm to offer convergence guarantees. In that work, policies are stochastic and are defined on finite action spaces. We extend those results to develop a provably-convergent decentralized ac... | withdrawn-rejected-submissions | The paper offers a direction for multi-agent RL that builds on results for actor-critic methods [Zhang, ICML 2018], extending that work to address deterministic policies. The authors establish convergence under a number of assumptions. Both on-policy setting and off-policy settings are treated. The reviewers point o... | val | [
"WDe-YuqWFm",
"YsSEepb4_jB",
"NUUKscl4GVx",
"brVEQGDS2p4",
"AY9AWOesNT",
"OnQz4SQLXmR",
"Hgtp3BjN6RK",
"imV41rIck7",
"CvrYhdPjzdQ",
"gzk0qib8dVN",
"hxfuHrv3hG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper offers a comprehensive theoretical treatment of deterministic policy gradients in a multi-agent setting, working out several key results:\n* existence and explicit formulas for the multi-agent deterministic policy gradient in off and on-policy settings;\n* convergence of stochastic policy gradients to d... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"iclr_2021_QM4_h99pjCE",
"iclr_2021_QM4_h99pjCE",
"AY9AWOesNT",
"WDe-YuqWFm",
"CvrYhdPjzdQ",
"gzk0qib8dVN",
"hxfuHrv3hG",
"YsSEepb4_jB",
"iclr_2021_QM4_h99pjCE",
"iclr_2021_QM4_h99pjCE",
"iclr_2021_QM4_h99pjCE"
] |
iclr_2021_vnlqCDH1b6n | Learning disentangled representations with the Wasserstein Autoencoder | Disentangled representation learning has undoubtedly benefited from objective function surgery. However, a delicate balancing act of tuning is still required in order to trade off reconstruction fidelity versus disentanglement. Building on previous successes of penalizing the total correlation in the latent variables, ... | withdrawn-rejected-submissions | There were both positive and negative assessments of this paper by the reviewers: It was deemed a well written paper that explores cleanly rederiving the TC-VAE in the Wasserstein Autoencoder Framework and that has experiments comparing to competing approaches. However, there are two strong concerns with this paper: Fi... | train | [
"cMUUFS3VCxV",
"dE9UKEj3ppu",
"4MVlzpj0TRR",
"j2UgpFC_-j5",
"7RZ46SAoRDl",
"WYHfwVnLTbc",
"TH2oUW9qVbA",
"u00b_ORE7U"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for their comments and will amend the names choices in the figures and tables to make them more consistent. \n\nWhile static disentanglement learning has been an active centre of interest, disentanglement with dynamic or sequential data has been relatively under studied. Even less so in the ... | [
-1,
-1,
-1,
-1,
8,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"7RZ46SAoRDl",
"WYHfwVnLTbc",
"TH2oUW9qVbA",
"u00b_ORE7U",
"iclr_2021_vnlqCDH1b6n",
"iclr_2021_vnlqCDH1b6n",
"iclr_2021_vnlqCDH1b6n",
"iclr_2021_vnlqCDH1b6n"
] |
iclr_2021_RrIqhkFEpec | Isometric Autoencoders | High dimensional data is often assumed to be concentrated on or near a low-dimensional manifold. Autoencoders (AEs) are a popular technique to learn representations of such data by pushing it through a neural network with a low dimension bottleneck while minimizing a reconstruction error. Using high capacity AE often lea... | withdrawn-rejected-submissions | The paper introduces a new formulation for learning low-dimensional manifold representations via autoencoder mappings that are (locally) isometric by design. The key technical ingredient is the use of a particular (theoretically motivated) weight-tied architecture coupled with isometry-promoting loss terms that can be ... | train | [
"9l_1C3NBCX2",
"P2guORp4MO",
"UsW8QemGWfl",
"Gql1JraYP6m",
"UeMB6fTTg7F",
"6JVblQOb58z",
"I8i1y6kgG4F",
"3jVtnIGgDzm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Update: I appreciate the authors addressing my concerns. I have increased my score accordingly.\n\nOriginal Review:\n\nThis paper describes a new type of regularization for the parameters of an autoencoder - one that forces the decoder to be an isometry. The authors present conditions that need to be satisfied by ... | [
7,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"iclr_2021_RrIqhkFEpec",
"6JVblQOb58z",
"I8i1y6kgG4F",
"3jVtnIGgDzm",
"9l_1C3NBCX2",
"iclr_2021_RrIqhkFEpec",
"iclr_2021_RrIqhkFEpec",
"iclr_2021_RrIqhkFEpec"
] |
iclr_2021_sAX7Z7uIJ_Y | Calibrated Adversarial Refinement for Stochastic Semantic Segmentation | Ambiguities in images or unsystematic annotation can lead to multiple valid solutions in semantic segmentation. To learn a distribution over predictions, recent work has explored the use of probabilistic networks. However, these do not necessarily capture the empirical distribution accurately. In this work, we aim to l... | withdrawn-rejected-submissions | This paper addresses stochastic semantic segmentation with a two-step approach: a standard segmentation network learned with cross-entropy serves as a guide to calibrate a second refinement network to generate diverse predictions while their expectation matches the calibration model.
The reviewers acknowledge the pap... | test | [
"RJXYo2XiIME",
"qB6WaDHpv6V",
"7XjARDysAwq",
"7fU3Aljoe7J",
"z_sExquR8qu",
"tdbbQ1U3KN",
"xxvjgySyC3-",
"37vcGAJcgDU",
"vkkLw-4B_IL",
"1Ij3TA5LfVD",
"Yk7hhV5gG5X",
"7ifT79vCG",
"f-FoZz3JTts",
"5--1a4Jti2z",
"XtO7NsY2YdT",
"dSrqiN4qZsZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"-----------------------------------------------------------------------------------------------------------------------------------------------------------------\nPOST REBUTTAL\n-----------------------------------------------------------------------------------------------------------------------------------------... | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1
] | [
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1
] | [
"iclr_2021_sAX7Z7uIJ_Y",
"iclr_2021_sAX7Z7uIJ_Y",
"iclr_2021_sAX7Z7uIJ_Y",
"RJXYo2XiIME",
"XtO7NsY2YdT",
"qB6WaDHpv6V",
"iclr_2021_sAX7Z7uIJ_Y",
"dSrqiN4qZsZ",
"qB6WaDHpv6V",
"RJXYo2XiIME",
"XtO7NsY2YdT",
"Yk7hhV5gG5X",
"7XjARDysAwq",
"vkkLw-4B_IL",
"iclr_2021_sAX7Z7uIJ_Y",
"7ifT79vCG"... |
iclr_2021_qHXkE-8c1sQ | Information distance for neural network functions | We provide a practical distance measure in the space of functions parameterized by neural networks. It is based on the classical information distance, and we propose to replace the uncomputable Kolmogorov complexity with information measured by codelength of prequential coding. We also provide a method for directly est... | withdrawn-rejected-submissions | All the reviewers agree that the paper studies an important and interesting problem. However the reviewers felt the paper is still in preliminary stages, with incorrect derivations, missing comparisons/references, and writing. While the authors updated the paper during the discussion stage addressing some of the conce... | train | [
"Kf8I1McNAI",
"YABwDP7OLQD",
"tKm1GWSXzc7",
"ZSlpyN6Suks",
"DPxwEYqbeqv",
"5M73IvY628",
"8GYc2Jzt-2Q",
"ELw9KMbmaNj",
"4ulJh4EJf3K",
"eSqDDe41bJx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
"1, Summary of contribution:\nThe paper proposes a new distance measure between a pair of neural networks, which is invariant to the reparameterization. Specifically, it introduces prequential coding to approximate the Kolmogorov complexity between two neural networks, and \nThe paper also conducts empirical studie... | [
4,
6,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_qHXkE-8c1sQ",
"iclr_2021_qHXkE-8c1sQ",
"iclr_2021_qHXkE-8c1sQ",
"eSqDDe41bJx",
"YABwDP7OLQD",
"8GYc2Jzt-2Q",
"4ulJh4EJf3K",
"tKm1GWSXzc7",
"Kf8I1McNAI",
"iclr_2021_qHXkE-8c1sQ"
] |
iclr_2021_WrNjg9tCLUt | BAFFLE: TOWARDS RESOLVING FEDERATED LEARNING’S DILEMMA - THWARTING BACKDOOR AND INFERENCE ATTACKS | Recently, federated learning (FL) has been subject to both security and privacy attacks posing a dilemmatic challenge on the underlying algorithmic designs: On the one hand, FL is shown to be vulnerable to backdoor attacks that stealthily manipulate the global model output using malicious model updates, and on the othe... | withdrawn-rejected-submissions | The paper makes an attempt towards Byzantine-resilient federated learning, in the presence of backdoor attacks.
The method presented combines a clustering step with a poison elimination step, and seems to be effective against a range of current attacks.
Both steps are a bit ad hoc in nature, and do not come with ... | val | [
"thLfbnMBCly",
"cuUXHXybN3",
"O-F0KUY--Q",
"NFvkKNFPI8H",
"Hc169pIfSjv",
"UH2S9aogI7B",
"LFLoIFP7Bl7",
"p9hJwpgtD6d",
"TzIr_FGgAB8"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In the paper, the authors proposed a novel privacy-preserving defense approach BAFFLE for federated learning which could simultaneously impede backdoor and inference attacks. To impede backdoor attacks, the Model Filtering layer (i.e., by dynamic clustering) and Poison Elimination layer (i.e., by noising and clipp... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2021_WrNjg9tCLUt",
"p9hJwpgtD6d",
"p9hJwpgtD6d",
"LFLoIFP7Bl7",
"thLfbnMBCly",
"TzIr_FGgAB8",
"iclr_2021_WrNjg9tCLUt",
"iclr_2021_WrNjg9tCLUt",
"iclr_2021_WrNjg9tCLUt"
] |
iclr_2021_a9nIWs-Orh | Deepening Hidden Representations from Pre-trained Language Models | Transformer-based pre-trained language models have proven to be effective for learning contextualized language representation. However, current approaches only take advantage of the output of the encoder's final layer when fine-tuning the downstream tasks. We argue that only taking single layer's output restricts the p... | withdrawn-rejected-submissions | This paper proposes a new mechanism, called HIRE, to improve the down-stream performance of a pre-trained Transformer on NLP tasks. Different from directly using the last layer of transformer, the proposed model allows the system to dynamically decide which intermediate layers to use based on the input through some s... | val | [
"srwFlBgZ_G3",
"ny0CxcGB5p",
"ik-lwhK9Or1",
"9GbtJBCdRl",
"u9xVbSSSM3",
"aSCI3QZE6LR",
"UHtN1YWqVb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper presents a new mechanism, called HIRE, to extract more information from the intermediate layers of pre-trained models, which will be further fused with the last layer of pre-trained models. The main contribution of this work is the newly proposed dynamic feature extractor HIRE and the fusion network. Ex... | [
5,
-1,
6,
-1,
-1,
-1,
4
] | [
4,
-1,
4,
-1,
-1,
-1,
5
] | [
"iclr_2021_a9nIWs-Orh",
"u9xVbSSSM3",
"iclr_2021_a9nIWs-Orh",
"ik-lwhK9Or1",
"srwFlBgZ_G3",
"UHtN1YWqVb",
"iclr_2021_a9nIWs-Orh"
] |
iclr_2021_wZ4yWvQ_g2y | Task-Agnostic and Adaptive-Size BERT Compression | While pre-trained language models such as BERT and RoBERTa have achieved impressive results on various natural language processing tasks, they have huge numbers of parameters and suffer from huge computational and memory costs, which make them difficult for real-world deployment. Hence, model compression should be perf... | withdrawn-rejected-submissions | Compressing BERT is a practically important research direction. Our main concern on this submission is on its practical value. Comparing with MobileBERT in the literature, NAS-BERT does not show advantages on any aspect: latency, prediction performance, or model size (less important), while being much more costly to bu... | train | [
"mMhsFT_vAAw",
"sW0RBsZJwAJ",
"dEmgbf1omy",
"vfI0Uj0ZgfL",
"jTbAn1eo8fa",
"UFUuFRphoY",
"mIO2WZIaXF3",
"kkBhgG8GFsn",
"_n7HSBTMfR",
"H9qQrNhWefv",
"1ALXJuPFZJ8",
"OUrjJ-SAFJg",
"hh3O5Rj3d--",
"FE5Cbb5aWoL",
"nhZqyL8xXv6",
"uezBS89xMhk"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThis paper proposes to search architectures of BERT model under various memory and latency contraints. The search algorithm is conducted by pretraining a big supernet that contains the all the sub-network structures, where the optimal models for different requirements are selected from it. Once an archit... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"iclr_2021_wZ4yWvQ_g2y",
"vfI0Uj0ZgfL",
"iclr_2021_wZ4yWvQ_g2y",
"_n7HSBTMfR",
"hh3O5Rj3d--",
"nhZqyL8xXv6",
"dEmgbf1omy",
"1ALXJuPFZJ8",
"mIO2WZIaXF3",
"kkBhgG8GFsn",
"mMhsFT_vAAw",
"UFUuFRphoY",
"FE5Cbb5aWoL",
"uezBS89xMhk",
"iclr_2021_wZ4yWvQ_g2y",
"iclr_2021_wZ4yWvQ_g2y"
] |
iclr_2021_MpStQoD73Mj | Differentiable Weighted Finite-State Transducers | We introduce a framework for automatic differentiation with weighted finite-state transducers (WFSTs) allowing them to be used dynamically at training time. Through the separation of graphs from operations on graphs, this framework enables the exploration of new structured loss functions which in turn eases the encodin... | withdrawn-rejected-submissions | This paper introduces a framework for automatic differentiation with weighted finite-state transducers (WFSTs), which would allow user-specified graphs in structured output prediction tasks and easy plug-and-play of graphs through the composition operation (demonstrated with variants of CTC). The authors demonstrated t... | val | [
"PiS35rIRzcT",
"r7QnbW4R9xr",
"gQI2NXhzYyC",
"QOlj23Vo47n",
"ni5fep2sO2k",
"KiBIiGinhL",
"YoJJ5jbi48W",
"6fzq56cCKSJ",
"CDeGCVgEh_H"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments. Our detailed responses are below the original question or comment in italics.\n\n*Q1: The paper is a good read, but the central contribution isn't really detailed.*\n\nPlease see the top-level comment under “The goal of this work” as well as the list of contributions in the introductio... | [
-1,
-1,
-1,
-1,
-1,
6,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
5
] | [
"KiBIiGinhL",
"iclr_2021_MpStQoD73Mj",
"CDeGCVgEh_H",
"6fzq56cCKSJ",
"YoJJ5jbi48W",
"iclr_2021_MpStQoD73Mj",
"iclr_2021_MpStQoD73Mj",
"iclr_2021_MpStQoD73Mj",
"iclr_2021_MpStQoD73Mj"
] |
iclr_2021_9MdLwggYa02 | ROMUL: Scale Adaptative Population Based Training | In most pragmatic settings, data augmentation and regularization are essential, and require hyperparameter search.
Population based training (PBT) is an effective tool for efficiently finding them as well as schedules over hyperparameters.
In this paper, we compare existing PBT algorithms and contribute a n... | withdrawn-rejected-submissions | This submission proposes a variant of population based training (PBT) for hyperparameter selection/evolution, aimed at addressing drawbacks of existing variants (e.g. the coupling of the choice of checkpoint with the choice of hyperparameters). Reviewers generally agreed that the paper is interesting and covers an impo... | test | [
"ms9ZpSTvUz",
"8KmuGtG_zT7",
"7Jc63UcuWay",
"XO8a9tf_Mtr",
"Qn0Jrxqgae",
"PxU_UC-3O3H",
"KvktTAWX9zV",
"roOcGPNDSyG",
"VgUkTf0kTTT",
"2QSZC-kXVHt",
"dcE-J8OQYiq",
"K4g1caSfj_r"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"#### Summary\nThe paper provides a new variant of PBT which utilizes ideas from differential evolution and cross-over. The original PBT and even initiator PBT do not perform crossover on the hyper-parameters, and insufficient cross-over may cause PBT to perform greedy in the initial phases which ends up with a sub... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_9MdLwggYa02",
"XO8a9tf_Mtr",
"PxU_UC-3O3H",
"dcE-J8OQYiq",
"iclr_2021_9MdLwggYa02",
"KvktTAWX9zV",
"2QSZC-kXVHt",
"K4g1caSfj_r",
"ms9ZpSTvUz",
"iclr_2021_9MdLwggYa02",
"iclr_2021_9MdLwggYa02",
"iclr_2021_9MdLwggYa02"
] |
iclr_2021_GA87kjyd-f | A Unified Paths Perspective for Pruning at Initialization | A number of recent approaches have been proposed for pruning neural network parameters at initialization with the goal of reducing the size and computational burden of models while minimally affecting their training dynamics and generalization performance. While each of these approaches have some amount of well-founded... | withdrawn-rejected-submissions | The paper proposes a very interesting decomposition of the neural tangent kernel, which promises
to decouple effects of the parameters and data. The authors illustrate the effects of this decomposition
by considering pruning strategies for initialization.
While the approach looks promising, the current paper is somewha... | train | [
"1rvh_1nZVZ",
"U1gGJXTDxld",
"OfGX-9yFmF",
"R57a0PQAm-U",
"_pB_RUIFWA",
"nECTCYjYGHf",
"O6xUOuTOPV",
"VFeps_4XO-p",
"Ph1mdHAIBw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Summary\nThis paper is twofold: 1) the authors propose a way of computing a NTK by decomposing a piecewise linear network into paths; 2) the authors propose to use this decomposition to detect the least useful weights at initialization.\n\n### Details\nFor convenience, I will call the theoretical part \"Path-N... | [
4,
-1,
-1,
-1,
-1,
-1,
4,
6,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"iclr_2021_GA87kjyd-f",
"iclr_2021_GA87kjyd-f",
"O6xUOuTOPV",
"1rvh_1nZVZ",
"VFeps_4XO-p",
"Ph1mdHAIBw",
"iclr_2021_GA87kjyd-f",
"iclr_2021_GA87kjyd-f",
"iclr_2021_GA87kjyd-f"
] |
iclr_2021_INhwJdJtxn6 | Coverage as a Principle for Discovering Transferable Behavior in Reinforcement Learning | Designing agents that acquire knowledge autonomously and use it to solve new tasks efficiently is an important challenge in reinforcement learning. Unsupervised learning provides a useful paradigm for autonomous acquisition of task-agnostic knowledge. In supervised settings, representations discovered through unsupervi... | withdrawn-rejected-submissions | The paper studies the unsupervised RL problem, where the agent is allowed to interact with the environment for a certain amount of time without any extrinsic reward. The main idea is that the initial unsupervised training phase can be used to learn a set of "skills" that could help both in exploration and zero-shot tra... | train | [
"odDKQp_erRV",
"vXEXSrFJBZ",
"2E1e9QZd_Hn",
"8V8BfsFLNr8",
"h5BzDKLPLWr",
"WhnyK4mdnSt",
"gnoDJH1mtxb",
"GGRk5V49yjc",
"toM4XuGDvGb",
"iQJ9qulvKEK",
"gTX8xwrr4-V",
"GXepgS3_scL"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies a pre-training approach to reinforcement learning. The objective is, first to pre-train a model considering that, without reward, interaction with an environment is cheap, and second, to fine-tune a policy given a particular reward function. \n\nAs a first contribution, the paper proposes two str... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2021_INhwJdJtxn6",
"iclr_2021_INhwJdJtxn6",
"iQJ9qulvKEK",
"h5BzDKLPLWr",
"gTX8xwrr4-V",
"gnoDJH1mtxb",
"odDKQp_erRV",
"toM4XuGDvGb",
"GXepgS3_scL",
"iclr_2021_INhwJdJtxn6",
"iclr_2021_INhwJdJtxn6",
"iclr_2021_INhwJdJtxn6"
] |
iclr_2021_CaCHjsqCBJV | Differentiable Optimization of Generalized Nondecomposable Functions using Linear Programs | We propose a framework which makes it feasible to directly train deep neural networks with respect to popular families of task-specific non-decomposable performance measures such as AUC, multi-class AUC, F-measure and others, as well as models such as non-negative matrix factorization. A common feature of the optimi... | withdrawn-rejected-submissions | This paper shows that various discrete loss functions can be formulated as an LP. It proposes to relax the constraint Ax = b, x >= 0 using a soft constraint and following Mangasarian, proposes to solve the relaxed problem using Newton's method. Backpropagation through these iterations is further proposed. The main moti...
"qzWgcjJWU35",
"LubSKLlcxY0",
"lBki2l0sp4l",
"p1-7fbTExz1",
"PQjSW9VhIe9",
"mqNPW_X2yrf",
"-FfLBzV5THR",
"N_WnQoLS5sA",
"v34STkfxic0",
"sYMmZtYtOh",
"Qtrs2DSgilf",
"PJ4cSQMHOM"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper addresses the classical topic of directly optimizing non-decomposable loss functions. Since these metrics can be computed via linear programs, it is sufficient to compute gradient through the LP solver. To that end, the authors propose to use a particular method for solving linear programs. In experimen... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
4
] | [
"iclr_2021_CaCHjsqCBJV",
"mqNPW_X2yrf",
"N_WnQoLS5sA",
"Qtrs2DSgilf",
"sYMmZtYtOh",
"-FfLBzV5THR",
"qzWgcjJWU35",
"PJ4cSQMHOM",
"p1-7fbTExz1",
"iclr_2021_CaCHjsqCBJV",
"iclr_2021_CaCHjsqCBJV",
"iclr_2021_CaCHjsqCBJV"
] |
iclr_2021_padYzanQNbg | Neural SDEs Made Easy: SDEs are Infinite-Dimensional GANs | Several authors have introduced \emph{Neural Stochastic Differential Equations} (Neural SDEs), often involving complex theory with various limitations. Here, we aim to introduce a generic, user friendly approach to neural SDEs. Our central contribution is the observation that an SDE is a map from Wiener measure (Browni... | withdrawn-rejected-submissions | The reviewers agree that this paper has some interesting ideas. However, they believe it needs more work before it is ready for publication, especially so with regards to presentation (SDEs as GANs) and the experiments (backpropagating through the solver rather than using the adjoint dynamics). These would significantl... | train | [
"KaVSJBEV4XO",
"cKBi4OlgSmI",
"dc8wxbAe0p3",
"Da-5oSfYaP",
"ruz4ygiMiQb",
"vihJpnD3UdQ",
"SPqdVSm7Nt3",
"vNCeMXdU8VT",
"dvytzm9yUXa",
"pDG-mvrBr5",
"y4SH9rTutbB",
"xI5-niXKquS",
"d3rhc6wkFXD",
"cFbPW_fKraQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We completely appreciate the concern; it would have been ideal to be able to apply adjoints in the experiments.\n\nWe decided to go with this presentation because we felt that all of the content of the paper was still of sufficient interest. Moreover, the theory still carries through: for example this could be res... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"cKBi4OlgSmI",
"pDG-mvrBr5",
"iclr_2021_padYzanQNbg",
"ruz4ygiMiQb",
"vihJpnD3UdQ",
"SPqdVSm7Nt3",
"dvytzm9yUXa",
"xI5-niXKquS",
"d3rhc6wkFXD",
"dc8wxbAe0p3",
"cFbPW_fKraQ",
"iclr_2021_padYzanQNbg",
"iclr_2021_padYzanQNbg",
"iclr_2021_padYzanQNbg"
] |
iclr_2021_4P35MfnBQIY | Consistency and Monotonicity Regularization for Neural Knowledge Tracing | Knowledge Tracing (KT), tracking a human's knowledge acquisition, is a central component in online learning and AI in Education. In this paper, we present a simple, yet effective strategy to improve the generalization ability of KT models: we propose three types of novel data augmentation, coined replacement, insertion... | withdrawn-rejected-submissions | The paper proposes new techniques for improving the generalization ability of deep learning models for Knowledge Tracing (KT). Instead of designing more sophisticated models, the paper investigates simple data augmentation techniques that can be applied to train existing models. In particular, three different augmentat... | train | [
"FNVcCivffD",
"eggqOHQwDN",
"w7PNMU49AjB",
"f3Ql6C1gWAp",
"pBABId6UteV",
"vXOnfDZmV1y",
"zBxxXWg74r",
"bEeRBlIatR",
"SaGU8Q_w0eM",
"GWYZe3gM6V",
"OmKqaOQHhcz",
"W-W5LkyQZQN"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Much appreciated !",
"Dear reviewers,\n\nMany thanks again for your constructive feedback to improve our manuscript. We have carefully incorporated your comments into this revision, as summarized in what follows:\n\n* For R4, data analysis that shows the monotonicity nature of student interaction datasets, by ob... | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"zBxxXWg74r",
"iclr_2021_4P35MfnBQIY",
"W-W5LkyQZQN",
"iclr_2021_4P35MfnBQIY",
"vXOnfDZmV1y",
"f3Ql6C1gWAp",
"GWYZe3gM6V",
"OmKqaOQHhcz",
"iclr_2021_4P35MfnBQIY",
"iclr_2021_4P35MfnBQIY",
"iclr_2021_4P35MfnBQIY",
"iclr_2021_4P35MfnBQIY"
] |
iclr_2021_9Y7_c5ZAd5i | A Sharp Analysis of Model-based Reinforcement Learning with Self-Play | Model-based algorithms---algorithms that explore the environment through building and utilizing an estimated model---are widely used in reinforcement learning practice and theoretically shown to achieve optimal sample efficiency for single-agent reinforcement learning in Markov Decision Processes (MDPs). However, for m... | withdrawn-rejected-submissions | The reviewers, AC, and PCs participated in a very thorough discussion. AC ultimately felt that the work was unfinished, and in particular that details in the proofs still needed work before publication.
| train | [
"NIY6vKLT76a",
"yrLTFRF_ZF",
"Gu416BSu5Ei",
"AMM3ZkupHt",
"7W8S3n9khee",
"4qxMo6ZE0ST",
"poG9VVOtiuZ",
"Y2ABbwcZmc7",
"4aGnwobnY3S",
"M9DMNCdKgrG",
"2cBYAIiwPj8",
"rVMW_pFElt2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies learning in stochastic games, which are extensions of Markov decision processes (MDPs) from the single-agent setup to the multi-agent one. Here the objective of each learner is to optimize her own reward function. Similarly to the case of MDPs, here one can devise learning algorithms with contro... | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"iclr_2021_9Y7_c5ZAd5i",
"iclr_2021_9Y7_c5ZAd5i",
"7W8S3n9khee",
"poG9VVOtiuZ",
"iclr_2021_9Y7_c5ZAd5i",
"poG9VVOtiuZ",
"yrLTFRF_ZF",
"2cBYAIiwPj8",
"NIY6vKLT76a",
"rVMW_pFElt2",
"iclr_2021_9Y7_c5ZAd5i",
"iclr_2021_9Y7_c5ZAd5i"
] |
iclr_2021_cjk5mri_aOm | Environment Predictive Coding for Embodied Agents | We introduce environment predictive coding, a self-supervised approach to learn environment-level representations for embodied agents. In contrast to prior work on self-supervised learning for images, we aim to jointly encode a series of images gathered by an agent as it moves about in 3D environments. We learn these ... | withdrawn-rejected-submissions | The paper proposes a self-supervised method to predict the gist features of image frames during navigation of an agent supervised by depth and egomotion. The features are retargeted to train navigation policies and outperform previous methods or other pretraining schemes. The idea is related to self-supervised by featu... | test | [
"1W80Cdop2Ii",
"vWNhTVVmD9J",
"c91T_8ueTW2",
"MC2pzP68k8r",
"zvqy8W8tUQ8",
"7cv2KlAC1G",
"TLsslcwNly8",
"gV-Q4_f-Ajb",
"Uw_CLO6eap2",
"2otjyMGOtI6",
"R78LqaSJCp"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"**Summary**\n\nThe paper proposes a self-supervised approach for learning environment-level representations for embodied agents. The idea is that agents collect images and their corresponding poses during a walk-through phase. The images are clustered into multiple \"zones\". The zones are divided into seen and un... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_cjk5mri_aOm",
"iclr_2021_cjk5mri_aOm",
"Uw_CLO6eap2",
"1W80Cdop2Ii",
"2otjyMGOtI6",
"TLsslcwNly8",
"vWNhTVVmD9J",
"iclr_2021_cjk5mri_aOm",
"R78LqaSJCp",
"iclr_2021_cjk5mri_aOm",
"iclr_2021_cjk5mri_aOm"
] |
iclr_2021_e-ZdxsIwweR | Robust Constrained Reinforcement Learning for Continuous Control with Model Misspecification | Many real-world physical control systems are required to satisfy constraints upon deployment. Furthermore, real-world systems are often subject to effects such as non-stationarity, wear-and-tear, uncalibrated sensors and so on. Such effects effectively perturb the system dynamics and can cause a policy trained successf... | withdrawn-rejected-submissions | # Quality:
I personally feel that the comment from Reviewer1 regarding "real-world" is a minor but valid point. Even after the rebuttal, the abstract seems to suggest that the proposed algorithm is effective to solve real-world challenges. Maybe further rephrasing or explicitly stating that the experiments are in simul... | val | [
"bdLtDeAidIB",
"Bs7lO3v9Fon",
"qH9DWra3GOU",
"7mUhW_1lV-u",
"WxNTZUQDN6A",
"grS-v0JLNBZ",
"fp02T1j_YoL",
"EmeLBjX02SO",
"BwQFPAWXdYh",
"r5owAiSWfvq",
"4P9LkndnRQ_",
"W3WrdQZbIeJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This manuscript studies the problem of robust and constrained reinforcement learning and proposes two new objectives for incorporating constraints and robustness to misspecified models into RL training. The advantage of these objectives is that their associated Bellman operators are contractive, which enables the ... | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"iclr_2021_e-ZdxsIwweR",
"iclr_2021_e-ZdxsIwweR",
"r5owAiSWfvq",
"WxNTZUQDN6A",
"grS-v0JLNBZ",
"4P9LkndnRQ_",
"Bs7lO3v9Fon",
"bdLtDeAidIB",
"W3WrdQZbIeJ",
"iclr_2021_e-ZdxsIwweR",
"iclr_2021_e-ZdxsIwweR",
"iclr_2021_e-ZdxsIwweR"
] |
iclr_2021_0z1HScLBEpb | UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning | This paper focuses on cooperative value-based multi-agent reinforcement learning (MARL) in the paradigm of centralized training with decentralized execution (CTDE). Current state-of-the-art value-based MARL methods leverage CTDE to learn a centralized joint-action value function as a monotonic mixing of each agent's ut... | withdrawn-rejected-submissions | This paper adapts the ideas around universal successor features for decentralised multi-agent environments, with a particular emphasis on deriving better exploration from them. Like most of the reviewers, I think this is indeed a promising research direction. Given the complexity of the endeavour however, it may take a... | train | [
"iTFRcZ7clFW",
"k2Zo4YqKuSR",
"kPAK6FEnfJw",
"yLQCS9FldfZ",
"fdJTj3pKETt",
"fxocXdYYGDx",
"GUJAqbUqGgO",
"pG3lw-6UfEd",
"IOjA-wrjNGs",
"wFrR475F4hR",
"JrBLqM7ibeo",
"_Ooq9-nlF19",
"TvlhCKunyWp",
"c-AuGb8aMh8",
"zCJSsZWA7NV"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"##########################################################################\nSummary:\n\nThis paper studies the problem setting of cooperative multi-agent RL (coop-MARL) under centralized training with decentralized execution (CTDE). Additionally, it assumes that the reward function is known and is represented as a... | [
6,
-1,
3,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
5,
-1,
5,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_0z1HScLBEpb",
"yLQCS9FldfZ",
"iclr_2021_0z1HScLBEpb",
"IOjA-wrjNGs",
"iclr_2021_0z1HScLBEpb",
"pG3lw-6UfEd",
"JrBLqM7ibeo",
"TvlhCKunyWp",
"wFrR475F4hR",
"kPAK6FEnfJw",
"_Ooq9-nlF19",
"iTFRcZ7clFW",
"zCJSsZWA7NV",
"fdJTj3pKETt",
"iclr_2021_0z1HScLBEpb"
] |
iclr_2021_GtiDFD1pxpz | Intelligent Matrix Exponentiation | We present a novel machine learning architecture that uses a single high-dimensional nonlinearity consisting of the exponential of a single input-dependent matrix. The mathematical simplicity of this architecture allows a detailed analysis of its behaviour, providing robustness guarantees via Lipschitz bounds. Despite ... | withdrawn-rejected-submissions | This paper was reviewed by 4 reviewers who scored the paper below acceptance threshold even after the rebuttal. Reviewer 4 is concerned about motivation, Reviewer 2 rightly points out that there exist numerous works that use some form of spectral layers in a deep setting on challenging datasets - something lacking in t... | train | [
"VoEy1ZiIxO",
"bpPA07m1Fs",
"iCmjTiYUTzu",
"nUXN9HX-t83",
"jG1fvWmcQHf",
"WwjpfMWY5Rl",
"VbUsZz3AAMj",
"DskWMNswhV",
"yOjRFPnwN9G",
"PngaFlimuWr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper explores matrix exponentiation as an alternative non-linearity in a neural network. The key idea is to compute an affine transform of the inputs, thereby generating an nxn feature map, followed by applying a matrix exponential to this feature map, that is subsequently used for classification. The paper... | [
4,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_GtiDFD1pxpz",
"iclr_2021_GtiDFD1pxpz",
"iclr_2021_GtiDFD1pxpz",
"iCmjTiYUTzu",
"WwjpfMWY5Rl",
"VoEy1ZiIxO",
"DskWMNswhV",
"bpPA07m1Fs",
"PngaFlimuWr",
"iclr_2021_GtiDFD1pxpz"
] |
iclr_2021_6s480DdlRQQ | Dynamic Backdoor Attacks Against Deep Neural Networks | Current Deep Neural Network (DNN) backdooring attacks rely on adding static triggers (with fixed patterns and locations) on model inputs that are prone to detection. In this paper, we propose the first class of dynamic backdooring techniques: Random Backdoor, Backdoor Generating Network (BaN), and conditional Backdoor... | withdrawn-rejected-submissions | The authors present a new set of trigger based backdoor attacks that use dynamic patterns that make detection harder. These attacks seem to be stronger with regards to state of the art attacks.
Some weaknesses are the need for full whitebox access of the model. Several key references are missing, and the comparison wi... | train | [
"J0YLVnMZLi",
"iC0Neeb3g8",
"aAOMeca2wsH",
"oKaP3gc2-Uu",
"JsDIx5K9ma-",
"aCt7hbBUnp",
"7vMovKOmTZV",
"Z6IK5WYI4kF"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThis paper proposed a class of methods for dynamic backdoor attack. The main idea is to generate different backdoor patterns and locations in backdoor attack. The threat model is the attacker has full access to the training data and the model training procedure. Both single-target and multi-target class-... | [
5,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_6s480DdlRQQ",
"aAOMeca2wsH",
"oKaP3gc2-Uu",
"J0YLVnMZLi",
"7vMovKOmTZV",
"Z6IK5WYI4kF",
"iclr_2021_6s480DdlRQQ",
"iclr_2021_6s480DdlRQQ"
] |
iclr_2021_Ua5yGJhfgAg | Safe Reinforcement Learning with Natural Language Constraints | In this paper, we tackle the problem of learning control policies for tasks when provided with constraints in natural language. In contrast to instruction following, language here is used not to specify goals, but rather to describe situations that an agent must avoid during its exploration of the environment. Specify... | withdrawn-rejected-submissions | The goal of the paper is to learn policies that can solve a given task while adhering to certain constraints specified via natural language. The paper closely builds upon prior work on constrained RL and passes the representation of natural language constraints by pre-training an interpreter. Experiments are done in a ... | train | [
"jWh6mpz0MWs",
"PGMVZgS545Q",
"5K_qXAPaDXz",
"80zeJ7_xLMw",
"TdzdgtjOIqj",
"n5LaM_-G2Lw",
"cTN8RY6gKrc",
"2c7GEdYFRT",
"CtduxQpsQOX",
"iiBJuJ6Agc4",
"G5WWFI1wD4",
"-rNCFe-v8am",
"BUT7F7ykDvy"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents an experiment on safe reinforcement learning on a 2D grid-world where the safety constraints are specified in natural language instead of being specified formally. The justification of this system is to allow non-experts to train agents using safe-RL.\nAccording to the authors: \"The key challenge lies in... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_Ua5yGJhfgAg",
"iclr_2021_Ua5yGJhfgAg",
"CtduxQpsQOX",
"iclr_2021_Ua5yGJhfgAg",
"80zeJ7_xLMw",
"cTN8RY6gKrc",
"-rNCFe-v8am",
"jWh6mpz0MWs",
"iiBJuJ6Agc4",
"BUT7F7ykDvy",
"PGMVZgS545Q",
"iclr_2021_Ua5yGJhfgAg",
"iclr_2021_Ua5yGJhfgAg"
] |
iclr_2021_eNSpdJeR_J | Deep Learning with Data Privacy via Residual Perturbation | Protecting data privacy in deep learning (DL) is at its urgency. Several celebrated privacy notions have been established and used for privacy-preserving DL. However, many of the existing mechanisms achieve data privacy at the cost of significant utility degradation. In this paper, we propose a stochastic differential ... | withdrawn-rejected-submissions | This paper proposes techniques for differentially private training of ResNets inspired by SDEs. The idea has some promise but the paper does not give convincing evidence, either theoretical or empirical that it outperforms existing tehniques. Unfortunately, comparisons with existing techniques are presented in a mislea... | train | [
"tmVzgjbFJhL",
"_BnuvZoFJ3",
"O1BaXuQmQXk",
"vaAMne6loVP",
"vlZnTqoQORT",
"mSkoeA811Y2",
"waKnmaPJa6I",
"2U27eLD-Onv"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper in question tackles the well-known problem of differentially private (DP) deep learning: already for moderate privacy guarantees, the model performance suffers greatly.\n\nThe paper proposes a particular SDE based method for obtaining privacy for ResNets. DP and stochastic differential equations have b... | [
4,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
3,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2021_eNSpdJeR_J",
"tmVzgjbFJhL",
"waKnmaPJa6I",
"mSkoeA811Y2",
"2U27eLD-Onv",
"iclr_2021_eNSpdJeR_J",
"iclr_2021_eNSpdJeR_J",
"iclr_2021_eNSpdJeR_J"
] |
iclr_2021_6FtFPKw8aLj | Systematic Analysis of Cluster Similarity Indices: How to Validate Validation Measures | There are many cluster similarity indices used to evaluate clustering algorithms, and choosing the best one for a particular task remains an open problem. We demonstrate that this problem is crucial: there are many disagreements among the indices, these disagreements do affect which algorithms are chosen in application... | withdrawn-rejected-submissions | The paper goes over a long list of proposed clustering similarity indices and attempts
to provide a taxonomy of those by their different approaches and the extent by which they
satisfy a list of "desired properties" proposed by the authors.
This is very much in the spirit of earlier work on clustering similarities by [M... | test | [
"IqYyFmZA2d",
"aRDehoIHwVH",
"4-rL86eD8VZ",
"keu1XtNTyh2",
"1osFqL5FY2s",
"T2CIui6Cauj",
"MwOt7sGo_RS",
"em4-sh9D85_",
"tgjr9O2v6M",
"wCtxN1eKEd2",
"JVLm1M9Uo0h",
"q7Tk3VzWQw_",
"lh-_ahisZ55"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"##########################################################################\nSummary:\nThis paper aims to answer a very important and difficult question, i.e., given a clustering application what are the desirable qualities (i.e., similarity indices) to have. This work argues that there are so many clustering simil... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2021_6FtFPKw8aLj",
"iclr_2021_6FtFPKw8aLj",
"em4-sh9D85_",
"tgjr9O2v6M",
"T2CIui6Cauj",
"q7Tk3VzWQw_",
"wCtxN1eKEd2",
"JVLm1M9Uo0h",
"IqYyFmZA2d",
"lh-_ahisZ55",
"iclr_2021_6FtFPKw8aLj",
"iclr_2021_6FtFPKw8aLj",
"iclr_2021_6FtFPKw8aLj"
] |
iclr_2021_A993YzEUKB7 | Extrapolatable Relational Reasoning With Comparators in Low-Dimensional Manifolds | While modern deep neural architectures generalise well when test data is sampled from the same distribution as training data, they fail badly for cases when the test data distribution differs from the training distribution even along a few dimensions. This lack of out-of-distribution generalisation is increasingly mani... | withdrawn-rejected-submissions | This paper is attempting to improve the OOD generalization performance of neural networks on relational reasoning tasks. This is an important failure point of general neural network architectures and an important research topic. The results of the paper show impressive improvements on a set of subjects.
* The paper is im... | train | [
"wVQRxIkH6h",
"jQT2trbUiwc",
"U_FgnPcqDs9",
"jbSvUP0Dq3C",
"JmR0qq0FsRj",
"fFQIiG4uouO",
"RoV1npPJvM7",
"-NPssFxJRB",
"VIxzF413Ybh",
"eivJj68qKQ7",
"B7GXl9Asl-0"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper showcases three studies on relational reasoning where objects need to be compared on a certain attribute (like size). The experiments show large improvements in generalization performance over previous work. The authors attempt to formulate a general method from their experiments.\n\nMy biggest criticis... | [
6,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"iclr_2021_A993YzEUKB7",
"iclr_2021_A993YzEUKB7",
"iclr_2021_A993YzEUKB7",
"wVQRxIkH6h",
"B7GXl9Asl-0",
"VIxzF413Ybh",
"eivJj68qKQ7",
"jQT2trbUiwc",
"iclr_2021_A993YzEUKB7",
"iclr_2021_A993YzEUKB7",
"iclr_2021_A993YzEUKB7"
] |
iclr_2021_daLIpc7vQ2q | Improved Contrastive Divergence Training of Energy Based Models | We propose several different techniques to improve contrastive divergence training of energy-based models (EBMs). We first show that a gradient term neglected in the popular contrastive divergence formulation is both tractable to estimate and is important to avoid training instabilities in previous models. We further h... | withdrawn-rejected-submissions | This paper introduces a bag of techniques to improve contrastive divergence training of energy-based models (EBMs), particularly a KL divergence term, data augmentation, multi-scale energy functions, and reservoir sampling. The overall paper is well written and clearly presented.
In response to the major concerns fro... | train | [
"CQAPCZKs1UP",
"50wM10V69ke",
"Y3qeXv8Hbp9",
"IeyNFnFPUJ6",
"rcdCNm9k2c",
"wo4SX8FTBd",
"YPlENifpRS7",
"ABZgmXf5R-V",
"6F_2Nz4Hkgi",
"YUPBCwIdKBq",
"lz7WFS-aydq"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes several techniques to improve contrastive divergence training of energy-based models (EBMs). \nFirst, the paper proposes to estimate a gradient term, which is neglected in the standard contrastive divergence training method, and shows that this correction avoids training instabilities in previou... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"iclr_2021_daLIpc7vQ2q",
"iclr_2021_daLIpc7vQ2q",
"rcdCNm9k2c",
"6F_2Nz4Hkgi",
"6F_2Nz4Hkgi",
"YUPBCwIdKBq",
"lz7WFS-aydq",
"CQAPCZKs1UP",
"iclr_2021_daLIpc7vQ2q",
"iclr_2021_daLIpc7vQ2q",
"iclr_2021_daLIpc7vQ2q"
] |
iclr_2021_M4qXqdw3xC | Boundary Effects in CNNs: Feature or Bug? | Recent studies have shown that the addition of zero padding drives convolutional neural networks (CNNs) to encode a significant amount of absolute position information in their internal representations, while a lack of padding precludes position encoding. Additionally, various studies have used image patches on backgro... | withdrawn-rejected-submissions | This paper explores the effects of padding in convnets used for various visual recognition tasks (classification, segmentation). This is an important and relevant design choice that is often overlooked, as noted by reviewers. However, I share the concerns of AR2 & AR4 with the evaluation. The design of the ResNet varia... | train | [
"pdQSp8W2shP",
"Q_SdEAzBrg0",
"BvN-VoyocB3",
"KTjYX3rPJB",
"WVfX-2AMKfy",
"4fmkBFH6c1",
"e3fe13sLfYG",
"-Zsq-n-TLiW",
"cA1VhrGXk_f",
"4oHofFP8MJk",
"z3uh5hcbiCN",
"DoypBAg2P9w"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies the effect of padding on the Convolutional Neural Network. The authors try to answer the following questions: 1) what type of padding provides the most position information, 2) does the background value affect model accuracy when processing a patch on a canvas, 3) which part of the image suffer... | [
7,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_M4qXqdw3xC",
"iclr_2021_M4qXqdw3xC",
"iclr_2021_M4qXqdw3xC",
"DoypBAg2P9w",
"BvN-VoyocB3",
"pdQSp8W2shP",
"Q_SdEAzBrg0",
"BvN-VoyocB3",
"pdQSp8W2shP",
"DoypBAg2P9w",
"Q_SdEAzBrg0",
"iclr_2021_M4qXqdw3xC"
] |
iclr_2021_hcCao_UYd6O | Adversarial Feature Desensitization | Deep neural networks can now perform many tasks that were once thought to be only feasible for humans. While reaching impressive performance under standard settings, such networks are known to be susceptible to adversarial attacks -- slight but carefully constructed perturbations of the inputs which drastically decre... | withdrawn-rejected-submissions | This paper proposes Adversarial Feature Desensitization (AFD) as a defense against adversarial examples. Specifically, following the spirit of GAN and Adversarial Domain Adaptation, an adversarial discriminator is introduced to distinguish clean and perturbed inputs at the representational level.
This paper receives ... | train | [
"15s1UaZWMwZ",
"opPihG0DRE",
"Mx5LWnqoB0g",
"a3irBttzTrT",
"T1UdX_wxXuR",
"gzin1KIZA8V",
"_fo4PIapVIv",
"1oi1FDz7HoE"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n\nThis paper proposes Adversarial Feature Desensitization (AFD) as a defense against adversarial examples. AFD employs a min-max adversarial learning framework where the classifier learns to encode features of both clean and adversarial images as the same distribution, thereby desensitizing adversarial f... | [
6,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
5,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"iclr_2021_hcCao_UYd6O",
"gzin1KIZA8V",
"15s1UaZWMwZ",
"_fo4PIapVIv",
"1oi1FDz7HoE",
"iclr_2021_hcCao_UYd6O",
"iclr_2021_hcCao_UYd6O",
"iclr_2021_hcCao_UYd6O"
] |
iclr_2021_ZvvxYyjfvZc | Correcting Momentum in Temporal Difference Learning | A common optimization tool used in deep reinforcement learning is momentum, which consists in accumulating and discounting past gradients, reapplying them at each iteration. We argue that, unlike in supervised learning, momentum in Temporal Difference (TD) learning accumulates gradients that become doubly stale: not on... | withdrawn-rejected-submissions | This paper studies the role of momentum in temporal difference (TD) learning algorithms, and how this can be systematically exploited to accelerate the TD type algorithms. More specifically, the authors point out that the momentum term could be quite biased, and propose a scheme to remedy this issue. However, the revie... | train | [
"AXq1l9-agl6",
"6TFTxdkcdc",
"nsdUVsilkB6",
"wKSnIBOXj7Q",
"p2LzalVVbFe",
"o5WtaKMrVh9",
"43_DMaoCNo",
"eVKxvjaEzmH",
"YnJO9jJ-E8",
"oqa_KHZ3qKR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary:\n\nThis paper extends the idea of momentum, commonly used in optimization literature, to Temporal Difference (TD) learning which is a widely used algorithm for policy evaluation in Reinforcement learning literature. The main challenge in this work is to account for the 'optimization bias' introduced by us... | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_ZvvxYyjfvZc",
"iclr_2021_ZvvxYyjfvZc",
"iclr_2021_ZvvxYyjfvZc",
"o5WtaKMrVh9",
"oqa_KHZ3qKR",
"nsdUVsilkB6",
"6TFTxdkcdc",
"AXq1l9-agl6",
"iclr_2021_ZvvxYyjfvZc",
"iclr_2021_ZvvxYyjfvZc"
] |
iclr_2021__bF8aOMNIdu | Robust Temporal Ensembling | Successful training of deep neural networks with noisy labels is an essential capability as most real-world datasets contain some amount of mislabeled data. Left unmitigated, label noise can sharply degrade typical supervised learning approaches. In this paper, we present robust temporal ensembling (RTE), a simple su... | withdrawn-rejected-submissions | Reading the paper and the reviews themselves, I found myself conflicted about this work:
- Multiple reviewers commented that this is a rather incremental piece of work, given that it's a rather straightforward combination of existing losses/models.
- On the other hand, there is admittedly value in (1) realizing that t... | test | [
"gOGZY_8TAL",
"LUfCV0GSAl1",
"BDnGTQXFMcb",
"k_5wNcarBhE",
"mniZZ14GJOp",
"3JrxNqlvFHy",
"kTHf7IuEIN",
"xtvBxANnUBB",
"y2oZ7J94cY4",
"uOxW4Kj5Td5",
"4TwqNmTmOIk"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I appreciate the author's response. However, some of my major concerns still remain. In particular, it is not clear to me how much performance improvement comes from increased model size and better augmentation. Furthermore, there lacks experiments on real-world noisy datasets. Therefore, I'll keep my original sco... | [
-1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6] | [-1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3] | ["mniZZ14GJOp", "iclr_2021__bF8aOMNIdu", "k_5wNcarBhE", "xtvBxANnUBB", "y2oZ7J94cY4", "uOxW4Kj5Td5", "4TwqNmTmOIk", "iclr_2021__bF8aOMNIdu", "iclr_2021__bF8aOMNIdu", "iclr_2021__bF8aOMNIdu", "iclr_2021__bF8aOMNIdu"] |
iclr_2021_3InxcRQsYLf | VideoGen: Generative Modeling of Videos using VQ-VAE and Transformers | We present VideoGen: a conceptually simple architecture for scaling likelihood based generative modeling to natural videos. VideoGen uses VQ-VAE that learns learns downsampled discrete latent representations of a video by employing 3D convolutions and axial self-attention. A simple GPT-like architecture is then used to... | withdrawn-rejected-submissions | The paper focuses on the problem of high quality video generation. It approaches the problem by extending VQ-VAE to videos, where a GPT is used to model the low dimensional representation of the VAE. As agreed upon by the authors and the reviewers, the proposed method is simple and produces interesting results.
Based... | train | [
"1GswAMpjGpL", "2DMMoBwIsHq", "WcgbIrkJqPx", "_6VV0gISswD", "ViBImdEcais", "CJK3arr0Cr", "PGtqzp1n48i", "0fKLH55RRIC", "u9-Ub99mCB0", "1ColeZ9fok0", "l1u1J_91-ts"] | ["official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer"] | [
"After rebuttal: \nAuthors' responses do not address any of my concerns, and I completely agree with other reviewers regarding lack of clarity, evaluation, and novelty. The current form of the paper is not ready to be published. I decrease my score to reject. \n--------------------------------------\nSummary:\nThis... | [
4, -1, -1, -1, -1, -1, -1, 4, -1, 4, 4] | [4, -1, -1, -1, -1, -1, -1, 5, -1, 4, 5] | ["iclr_2021_3InxcRQsYLf", "0fKLH55RRIC", "_6VV0gISswD", "ViBImdEcais", "PGtqzp1n48i", "u9-Ub99mCB0", "iclr_2021_3InxcRQsYLf", "iclr_2021_3InxcRQsYLf", "iclr_2021_3InxcRQsYLf", "iclr_2021_3InxcRQsYLf", "iclr_2021_3InxcRQsYLf"] |
iclr_2021_p3_z68kKrus | For interpolating kernel machines, minimizing the norm of the ERM solution minimizes stability | We study the average CV Leave One Out stability of kernel ridge-less regression and derive corresponding risk bounds. We show that the interpolating solution with minimum norm minimizes a bound on CV Leave One Out stability, which in turn is controlled by the condition number of the empirical kernel matrix. The latter ... | withdrawn-rejected-submissions | The paper investigates the average stability of kernel minimal norm interpolating predictors. The main result
establishes an upper bound on a particular notion of average stability for which it is well-known that it
can be used to bound the generalization error. This upper bound holds for all interpolating predictors... | train | [
"uWYilxm14zS", "_V2zLGGUUBp", "zqyMhIcffGj", "UKEnrXcu-U", "cRFyFrUYCRT", "wIB-S8PCl7A", "8JRynY5teY", "b9o7dw4o0l", "lwC2271iu2", "vX0hAEBSwq0", "EDt09JmYuKL"] | ["official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"In this paper, they provide the risk bounds of kernel ridge-less regression (the regularization $\\lambda-\\rightarrow 0$) based on the CV_{loo} stability. They show that the interpolating solution with minimum norm is the minimal bound of CV_{loo} stability, and can be controlled by the condition number of the ... | [
6, -1, -1, 8, -1, -1, -1, -1, -1, 6, 8] | [3, -1, -1, 4, -1, -1, -1, -1, -1, 4, 5] | ["iclr_2021_p3_z68kKrus", "iclr_2021_p3_z68kKrus", "cRFyFrUYCRT", "iclr_2021_p3_z68kKrus", "8JRynY5teY", "vX0hAEBSwq0", "UKEnrXcu-U", "EDt09JmYuKL", "uWYilxm14zS", "iclr_2021_p3_z68kKrus", "iclr_2021_p3_z68kKrus"] |
iclr_2021_yBJihVXahXc | Generalizing Graph Convolutional Networks via Heat Kernel | Graph convolutional networks (GCNs) have emerged as a powerful framework for mining and learning with graphs. A recent study shows that GCNs can be simplified as a linear model by removing nonlinearities and weight matrices across all consecutive layers, resulting the simple graph convolution (SGC) model. In this paper... | withdrawn-rejected-submissions | This paper has been evaluated by four reviewers who overall hesitated between borderline reject/accept. In general, as Rev. 4 points out, this paper appears to cope with over-oscillation rather than over-smoothing aspect of GCN modeling (something worth clarifying). Rev. 3 also rightly points out that the connection be... | train | [
"rv5EwIWWhbo", "JZVlMesxb1p", "NmZfjTIVr0", "m9KA9adzip1", "kf94dAj68Ob", "w6uTpMBlGRv", "tE9fYEufWSi", "RBLU7p8zpR", "rJoVdZ4Y4xn"] | ["official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"This submission introduced a new graph convolutional operator based on heat diffusion, named heat kernel GCN (HKGCN). First, continuous-time heat diffusion on graphs is reviewed, where the solution is given by the heat equation (6). Then, the authors showed that classical GCN can be approximated in the same formul... | [
5, 6, -1, -1, -1, -1, -1, 5, 6] | [4, 5, -1, -1, -1, -1, -1, 5, 4] | ["iclr_2021_yBJihVXahXc", "iclr_2021_yBJihVXahXc", "JZVlMesxb1p", "rv5EwIWWhbo", "w6uTpMBlGRv", "RBLU7p8zpR", "rJoVdZ4Y4xn", "iclr_2021_yBJihVXahXc", "iclr_2021_yBJihVXahXc"] |
iclr_2021_Hr-cI3LMKb8 | Leveraging affinity cycle consistency to isolate factors of variation in learned representations | Identifying the dominant factors of variation across a dataset is a central goal of representation learning. Generative approaches lead to descriptions that are rich enough to recreate the data, but often only a partial description is needed to complete downstream tasks or to gain insights about the dataset. In this w... | withdrawn-rejected-submissions | This paper proposes to employ affinity cycle consistency(ACC) for extracting active (or shared) factors of variation across groups. Experiments shows how ACC works in various scenarios.
Pros:
- The problem is important and relevant.
- The paper is well written.
- The proposed method is simple and effective.
Cons:
- T... | val | [
"THL-OaEF_kZ", "Fsns8fr8kd", "1aAMOlelu85", "UYAM05-XeD", "iymodef4ZVy", "R0wsaF1dV7b", "wkTJxDFf4xj", "NhQrXwDlGj", "rM6dVnHw5em", "0k5-qk4Wz54", "gtxoHhtx0a-", "lLAI4msmLGO", "NmLlHwc88Hh"] | ["official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"This paper uses affinity cycle consistency to isolate factors of variation with only weak supervision on set membership. It has extensive experiments with both synthetic and real data.\n \nThe strength is that the problem setting is reasonable and important. The algorithm is sounding, and the evaluation is valid. ... | [
6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4] | [3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4] | ["iclr_2021_Hr-cI3LMKb8", "1aAMOlelu85", "NhQrXwDlGj", "gtxoHhtx0a-", "NmLlHwc88Hh", "wkTJxDFf4xj", "lLAI4msmLGO", "THL-OaEF_kZ", "iclr_2021_Hr-cI3LMKb8", "iclr_2021_Hr-cI3LMKb8", "iclr_2021_Hr-cI3LMKb8", "iclr_2021_Hr-cI3LMKb8", "iclr_2021_Hr-cI3LMKb8"] |
iclr_2021_ctgsGEmWjDY | On The Adversarial Robustness of 3D Point Cloud Classification | 3D point clouds play pivotal roles in various safety-critical fields, such as autonomous driving, which desires the corresponding deep neural networks to be robust to adversarial perturbations. Though a few defenses against adversarial point cloud classification have been proposed, it remains unknown whether they can p... | withdrawn-rejected-submissions | The authors develop novel adaptive adversarial attacks for 3D Point Cloud Classification tasks. They show that many existing defenses are broken by develop a novel pooling operation, DeepSym, and demonstrate that using this they can achieve significant improvements in adversarial robustness of 3D Point Cloud Classifica... | train | [
"LJvZn87FViD", "ZqRCdVAiUY", "3rdKwQEL-z", "G8CxlqH_sfh", "V_C2B3FmFJ8", "0ObfJfF8PmS", "RjmvWbAA9Vf", "o75h-hnEvpG", "GFH13mPgX1v", "bSOsnVpzRY", "z9lLp-iizFI", "jQGA6JymptY", "ZoHnbQGidTk", "CxwKSjpfype", "SrvWHuk5pdA"] | ["official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"This paper studies adversarial robustness of point cloud classification models. In particular, this paper analyzes the effects of pooling layers and conducts extensive ablation studies. In addition, this paper proposes a DeepSym operation, which is built on top of both the sorting-based pooling and the parameteriz... | [
6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5] | [4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4] | ["iclr_2021_ctgsGEmWjDY", "ZoHnbQGidTk", "LJvZn87FViD", "CxwKSjpfype", "SrvWHuk5pdA", "iclr_2021_ctgsGEmWjDY", "GFH13mPgX1v", "ZoHnbQGidTk", "SrvWHuk5pdA", "ZoHnbQGidTk", "LJvZn87FViD", "LJvZn87FViD", "iclr_2021_ctgsGEmWjDY", "iclr_2021_ctgsGEmWjDY", "iclr_2021_ctgsGEmWjDY"] |
iclr_2021_SOVSJZ9PTO7 | JAKET: Joint Pre-training of Knowledge Graph and Language Understanding | Knowledge graphs (KGs) contain rich information about world knowledge, entities, and relations. Thus, they can be great supplements to existing pre-trained language models. However, it remains a challenge to efficiently integrate information from KG into language modeling. And the understanding of a knowledge graph req... | withdrawn-rejected-submissions | Four knowledgeable referees reviewed this paper; one reviewer (weakly) supports accept and other three indicate reject. Even with the rebuttal, all reviewers (including positive reviewer) have concerns on unconvincing experimental results (due to missing baselines for instance). I basically agree on negative reviews th... | train | [
"LvIG7D9u75L", "zCZcG6DT9-S", "c1qNlfvgadK", "K23eeH5o90D", "-n7E7zMibuV", "zrnrFh_V4zx", "qdmtuOw36lQ", "mrQ_6seJDJP", "dknl2hx5_CD", "f15ClNZIVdq", "qcir1OVe4kw"] | ["official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"This paper presents an approach to jointly pre-train language models and representations for knowledge graphs. In particular, natural language texts (English Wikipedia) are used to train context representations, while knowledge graphs (Wikidata) train entity representations (and both depend on each other). Experim... | [
5, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6] | [5, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4] | ["iclr_2021_SOVSJZ9PTO7", "c1qNlfvgadK", "K23eeH5o90D", "dknl2hx5_CD", "f15ClNZIVdq", "qcir1OVe4kw", "LvIG7D9u75L", "iclr_2021_SOVSJZ9PTO7", "iclr_2021_SOVSJZ9PTO7", "iclr_2021_SOVSJZ9PTO7", "iclr_2021_SOVSJZ9PTO7"] |
iclr_2021_AZ4vmLoJft | (Updated submission 11/20/2020) MISIM: A Novel Code Similarity System | Semantic code similarity systems are integral to a range of applications from code recommendation to automated software defect correction. Yet, these systems still lack the maturity in accuracy for general and reliable wide-scale usage. To help address this, we present Machine Inferred Code Similarity (MISIM), a novel ... | withdrawn-rejected-submissions | This paper studies the problem of computing a similarity measure between two pieces of code. The main contributions are a configurable alternative (CASS) to abstract syntax trees (ASTs) for representing code and a model for embedding these structures within a Siamese net-like architecture. While parts of the ICLR commu... | train | [
"KMCMCpVU6Sg", "xGEL_3ijLvL", "YVzI-wR8hfB", "ZWMcNYHKqV2", "pMu7kM31bP7", "S2kSTQ_a2-g", "SEQVK4eRnY0", "mgUrXB7EZEH", "dMdnYu_VWuj"] | ["author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"Response to AnonReviewer3:\n\nThank you for your review!\n\n>>> The author emphasizes code similarity is now a first-order problem that must be solved in the introduction part, but the lack of relevant statics and examples makes this argument weak.\n\nWhile we believe this was fairly well substantiated, we have ta... | [
-1, -1, -1, -1, -1, 4, 5, 7, 5] | [-1, -1, -1, -1, -1, 5, 5, 3, 4] | ["dMdnYu_VWuj", "S2kSTQ_a2-g", "SEQVK4eRnY0", "mgUrXB7EZEH", "iclr_2021_AZ4vmLoJft", "iclr_2021_AZ4vmLoJft", "iclr_2021_AZ4vmLoJft", "iclr_2021_AZ4vmLoJft", "iclr_2021_AZ4vmLoJft"] |
iclr_2021_SPhswbiXpJQ | Deep Data Flow Analysis | Compiler architects increasingly look to machine learning when building heuristics for compiler optimization. The promise of automatic heuristic design, freeing the compiler engineer from the complex interactions of program, architecture, and other optimizations, is alluring. However, most machine learning methods cann... | withdrawn-rejected-submissions |
After reading the paper, reviews and authors’ feedback. The meta-reviewer agrees with the reviewers that the paper presented a very interesting idea and empirical studies. R3 rightfully pointed out the need to clarify relation to related works, as well as the scalability issue.
Notably, because the analysis does not... | train | [
"g9FkwxBxhFH", "vPdF5-zvhG", "Gsf4LF36-A8", "5NJs2nmMvTN", "WiwxUxBaZcX", "FjOq0A-WYX_", "HMarpmv-kw", "yf7IFdNwoJ", "sxtfn8SF4T", "i2gP2a01xnU"] | ["official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"# Changes after author response\nThanks for addressing the concerns in the review with the new revision. I am revising the score to 7 from 5 based on the reply and the revisions to the paper.\n\n---\n# Summary\nThis paper describes a directed graph representation for programs and a graph neural network architectur... | [
7, -1, -1, -1, -1, -1, 7, 6, 4, 7] | [3, -1, -1, -1, -1, -1, 3, 4, 4, 3] | ["iclr_2021_SPhswbiXpJQ", "HMarpmv-kw", "yf7IFdNwoJ", "sxtfn8SF4T", "i2gP2a01xnU", "g9FkwxBxhFH", "iclr_2021_SPhswbiXpJQ", "iclr_2021_SPhswbiXpJQ", "iclr_2021_SPhswbiXpJQ", "iclr_2021_SPhswbiXpJQ"] |
iclr_2021_y_pDlU_FLS | Reverse engineering learned optimizers reveals known and novel mechanisms | Learned optimizers are algorithms that can themselves be trained to solve optimization problems. In contrast to baseline optimizers (such as momentum or Adam) that use simple update rules derived from intuitive principles, learned optimizers use flexible, high-dimensional, nonlinear parameterizations. Although this can... | withdrawn-rejected-submissions | A line of work since 2016 has investigated learning NN-based optimisers, which produce optimisation updates by processing loss/gradient info with neural networks. This paper tries to understand the learned dynamics of these NN-based optimisers by linear approximation to the learned non-linear dynamics. Visualisation of... | train | [
"55KSiUaxBaM", "rTNZmraVLuu", "amBWKQyBbLK", "U6ap_E7xPv", "fIhZAhAjXjy", "QB7g136RKAY", "fBJFxuHh9qW", "-bpJ736gb2t", "90RWtp6E0eq", "CladsghKdBi", "JBEJfqe93R", "vLDZIxo1b3s", "m1fRUrbNYH", "uaIHU6hnu-L"] | ["official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"Hello authors,\n\nThank you for your submission. I very much enjoyed reading it. I found the writing to be clear and only found one grammatical error (detailed below). As with any black-box system like a learned optimizer, there is naturally a lot of interest in what, actually, the optimizer itself is learning,... | [
8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5] | [4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3] | ["iclr_2021_y_pDlU_FLS", "CladsghKdBi", "-bpJ736gb2t", "90RWtp6E0eq", "fBJFxuHh9qW", "iclr_2021_y_pDlU_FLS", "55KSiUaxBaM", "m1fRUrbNYH", "vLDZIxo1b3s", "JBEJfqe93R", "uaIHU6hnu-L", "iclr_2021_y_pDlU_FLS", "iclr_2021_y_pDlU_FLS", "iclr_2021_y_pDlU_FLS"] |
iclr_2021_Kkw3shxszSd | Improving Generalizability of Protein Sequence Models via Data Augmentations | While protein sequence data is an emerging application domain for machine learning methods, small modifications to protein sequences can result in difficult-to-predict changes to the protein's function. Consequently, protein machine learning models typically do not use randomized data augmentation procedures analogous ... | withdrawn-rejected-submissions | This paper tests out some straightforward data augmentation strategies on the protein inputs to the transformer used in the TAPE paper. Overall, there is insufficient intellectual merit to warrant publication at ICLR. As a side-note, the quality of the manuscript in terms of scholarliness of presentation was overall l... | test | [
"U-EVXO1i0-U", "5wetgrdMFQJ", "3Ex01RP86lD", "irKMpMHCjCP", "8RU5EEavjeK", "ZLEeDqLevEm", "N8M55Q0VGEE", "FYnduAhX-LO", "CI2-HMf-Ze", "LYR3uG85w1g"] | ["official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"Summary:\nThe paper explores the impact of different types of data augmentations for protein sequence data, and does a thorough benchmark analysis on them. The authors used a pre-trained transformer model, fine tuned the model on augmented data using two approaches, namely, contrastive learning and masked token pr... | [
9, 4, -1, -1, -1, -1, -1, -1, 6, 3] | [4, 4, -1, -1, -1, -1, -1, -1, 3, 4] | ["iclr_2021_Kkw3shxszSd", "iclr_2021_Kkw3shxszSd", "LYR3uG85w1g", "5wetgrdMFQJ", "CI2-HMf-Ze", "5wetgrdMFQJ", "LYR3uG85w1g", "U-EVXO1i0-U", "iclr_2021_Kkw3shxszSd", "iclr_2021_Kkw3shxszSd"] |
iclr_2021_Oq79NOiZB1H | On the Importance of Sampling in Training GCNs: Convergence Analysis and Variance Reduction | Graph Convolutional Networks (GCNs) have achieved impressive empirical advancement across a wide variety of graph-related applications. Despite their great success, training GCNs on large graphs suffers from computational and memory issues. A potential path to circumvent these obstacles is sampling-based methods, where... | withdrawn-rejected-submissions | The paper provides variance reduction techniques for GCN training. When training a GCN it is common to sample nodes as in SGD, but also subsample the nodes’ neighbors, due to computational reasons. The entire mechanism introduces both bias and variance to the gradient estimation. The authors decompose the gradient esti... | train | [
"B9CLovo9Mm", "66xpO4c_A_q", "7crPOhyY9WI", "MrLyD4plpNJ", "GhawI3Bd7A5", "yrsz18VbhPD", "Zt38K-VKxZK", "7kF49ARNm8V", "P6Rebw8IyLN", "Mtqp-HbQYf-", "Hdq09Ra3jhW", "6gteQiaJymJ", "NakK4-wffNP"] | ["author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"\nThanks for your careful and valuable comments. We address your concerns as follows.\n\n**1. Question: This paper is an another application of SPDIER on GCN and Assumption 3 is too strong.**\n\nPlease refer to the general comment for clarifications about this work and SPIDER (Appendix A in revised manuscript)\n\n... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 7, 7] | [-1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3] | ["Hdq09Ra3jhW", "7crPOhyY9WI", "Mtqp-HbQYf-", "6gteQiaJymJ", "NakK4-wffNP", "iclr_2021_Oq79NOiZB1H", "7kF49ARNm8V", "iclr_2021_Oq79NOiZB1H", "iclr_2021_Oq79NOiZB1H", "iclr_2021_Oq79NOiZB1H", "iclr_2021_Oq79NOiZB1H", "iclr_2021_Oq79NOiZB1H", "iclr_2021_Oq79NOiZB1H"] |
iclr_2021_LNtTXJ9XXr | Adversarial Masking: Towards Understanding Robustness Trade-off for Generalization | Adversarial training is a commonly used technique to improve model robustness against adversarial examples. Despite its success as a defense mechanism, adversarial training often fails to generalize well to unperturbed test data. While previous work assumes it is caused by the discrepancy between robust and non-robust ... | withdrawn-rejected-submissions | The reviews were a bit mixed, with some concerns on the incremental nature of this work, which the AC concurs (after independently going through both the submission and Xie et al 2020). In a nutshell, the main contribution on the authors' side appears to be a simple linear interpolation of two masks so that it is possi... | train | [
"BWhOnU82YNU", "iFHcrJDk6C", "cm3qgaVHmpX", "VHteYphQT6b", "ZUp9GnU0j8", "3-esqSjWqu", "jcJ6mr0o4y9", "guaF-g7jp6U", "aerGpXHHZMN", "SGZFZJZlSpo", "KuEnOsFHYdP", "lCBthU0XoNo"] | ["official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer"] | [
"## Overview \n\nThe paper focuses on the generalization issue with adversarial training that various work has recently demonstrated. The paper studies the role of batch normalization (BN) in adversarial robustness and generalizability. The authors single out the rescaling operator in BN to significantly impact the... | [
6, -1, 5, -1, -1, -1, -1, -1, -1, -1, 7, 7] | [4, -1, 5, -1, -1, -1, -1, -1, -1, -1, 2, 4] | ["iclr_2021_LNtTXJ9XXr", "cm3qgaVHmpX", "iclr_2021_LNtTXJ9XXr", "SGZFZJZlSpo", "BWhOnU82YNU", "cm3qgaVHmpX", "cm3qgaVHmpX", "KuEnOsFHYdP", "lCBthU0XoNo", "iclr_2021_LNtTXJ9XXr", "iclr_2021_LNtTXJ9XXr", "iclr_2021_LNtTXJ9XXr"] |
iclr_2021_sMEpviTLi1h | Provably Faster Algorithms for Bilevel Optimization and Applications to Meta-Learning | Bilevel optimization has arisen as a powerful tool for many machine learning problems such as meta-learning, hyperparameter optimization, and reinforcement learning. In this paper, we investigate the nonconvex-strongly-convex bilevel optimization problem. For deterministic bilevel optimization, we provide a comprehensi... | withdrawn-rejected-submissions | The authors propose two algorithms and their theoretical analysis for solving bilevel optimization problems where the inner objective is assumed to be strongly convex. The authors have greatly improved the paper to answer reviewer comments and three out of four reviewers have increased their scores. That said, given th... | val | [
"XPpAeMHHvz", "MHccZX65Da", "kJb11BVVFQ", "S4NLdOgtjGh", "71OZaQwGcLv", "8deGcsVw8vF", "mB73Bnq8Xx", "QEl_GY2gPe", "dJe63UHHKv", "Hy34h2839EL"] | ["official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"The paper presents two algorithms - one for the deterministic and one for stochastic bilevel optimization. The paper claims the methods are lower cost in computational complexity for various terms and easy to implement. A finite-time convergence proof is provided for the algorithms. Empirical results are presente... | [
6, 7, -1, -1, -1, -1, -1, -1, 3, 5] | [2, 3, -1, -1, -1, -1, -1, -1, 2, 4] | ["iclr_2021_sMEpviTLi1h", "iclr_2021_sMEpviTLi1h", "mB73Bnq8Xx", "dJe63UHHKv", "Hy34h2839EL", "XPpAeMHHvz", "MHccZX65Da", "iclr_2021_sMEpviTLi1h", "iclr_2021_sMEpviTLi1h", "iclr_2021_sMEpviTLi1h"] |
iclr_2021_eyDDGPt5R1S | Learning Deep Latent Variable Models via Amortized Langevin Dynamics | How can we perform posterior inference for deep latent variable models in an efficient and flexible manner? Markov chain Monte Carlo (MCMC) methods, such as Langevin dynamics, provide sample approximations of such posteriors with an asymptotic convergence guarantee. However, it is difficult to apply these methods to la... | withdrawn-rejected-submissions | The paper had three borderline reviews. While the idea of posterior sampling of a neural network is potentially useful and Langevin dynamics are a way to attempt to address that, the reviewers did not appear convinced by the experiments and what the MCMC sampling was doing wasn't really front and center there. | train | [
"VHJ82EqatF3", "NBNm51hOWn6", "JTNs2plEdA", "mpi4a3PKMvX", "gKmxNEqfDRz", "1HRsjoECN1", "ROkIgQXqgH9", "GSBN7B3EeIv", "UYSw4Qe5u3i", "TH7g5nIDXcY"] | ["official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer"] | [
"# Summarize what the paper claims to contribute\nThe papers introduce an amortisation for inference by Langevin dynamics (LD). Rather than making each particle track the posterior for a given data point as in normal LD, this new method couples the posterior samples of multiple data points by a dynamic recognition ... | [
6, -1, -1, -1, -1, -1, -1, -1, 5, 6] | [4, -1, -1, -1, -1, -1, -1, -1, 3, 3] | ["iclr_2021_eyDDGPt5R1S", "mpi4a3PKMvX", "TH7g5nIDXcY", "gKmxNEqfDRz", "VHJ82EqatF3", "UYSw4Qe5u3i", "iclr_2021_eyDDGPt5R1S", "iclr_2021_eyDDGPt5R1S", "iclr_2021_eyDDGPt5R1S", "iclr_2021_eyDDGPt5R1S"] |
iclr_2021_Zu3iPlzCe9J | On the Power of Abstention and Data-Driven Decision Making for Adversarial Robustness | We formally define a feature-space attack where the adversary can perturb datapoints by arbitrary amounts but in restricted directions. By restricting the attack to a small random subspace, our model provides a clean abstraction for non-Lipschitz networks which map small input movements to large feature movements. We p... | withdrawn-rejected-submissions | The paper considers the problem of abstention in robust classification. A number of issues were identified in the formal framework and the writing was also not up to scratch. The authors should take into regard the very many constructive suggestions made by the reviewers in preparing a revision. | train | [
"sDaqzNKpqFd", "TLg9eELBaXd", "witTADvltPG", "1L2ksYh4EhW", "J-6EmZcmEdC", "BAAH5cQHye", "DSP57KsNDtj", "f5-lxSCZ2_D", "tYmLncn2hcP", "BXeMIZLi4aA", "gx3-IiAKppc", "N-zOMKKhmd1", "Nn4VfbeoKPr"] | ["official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"Summary:\n\nThis paper studies, through a provable approach, whether abstaining (i.e., refusing to answer) can be beneficial for achieving small adversarial/robust error in settings where the input is potentially adversarially perturbed. The paper proves a separation between the power of models with and without ab... | [
4, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4] | [4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4] | ["iclr_2021_Zu3iPlzCe9J", "iclr_2021_Zu3iPlzCe9J", "iclr_2021_Zu3iPlzCe9J", "gx3-IiAKppc", "N-zOMKKhmd1", "TLg9eELBaXd", "sDaqzNKpqFd", "sDaqzNKpqFd", "sDaqzNKpqFd", "gx3-IiAKppc", "Nn4VfbeoKPr", "iclr_2021_Zu3iPlzCe9J", "iclr_2021_Zu3iPlzCe9J"] |
iclr_2021_yrDEUYauOMd | Attainability and Optimality: The Equalized-Odds Fairness Revisited | Fairness of machine learning algorithms has been of increasing interest. In order to suppress or eliminate discrimination in prediction, various notions as well as approaches to impose fairness have been proposed. However, in different scenarios, whether or not the chosen notion of fairness can always be attained, even... | withdrawn-rejected-submissions | The paper study under which condition a classifier can respect the condition of equalized odds. The reviewers find the paper interesting but they also raise some important concerns about it.
First, multiple reviewers pointed out that the results are not particularly novel or surprising and, even after discussing the r... | train | [
"oLq0Qn3RqyX", "RGSFgUazBc", "8GiR8mUiaa2", "kyamZn7qBu1", "Xa4tZGZczh9", "j8AQLMdfTIx", "n3DP1V5RrnC", "oAjXhGK461C", "jujUslS96Hs", "AcNSJBncVa2"] | ["official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"The paper studies the attainability of the equalized-odd fairness criteria introduced by Hardt et al'16 in classification and regression tasks. In particular, the paper claims that under certain conditions EQ is not even attainable. They proved the claim for the regression task but I could not find exactly where t... | [
6, -1, -1, -1, -1, -1, 6, 5, 5, 5] | [3, -1, -1, -1, -1, -1, 3, 4, 4, 4] | ["iclr_2021_yrDEUYauOMd", "n3DP1V5RrnC", "oAjXhGK461C", "oLq0Qn3RqyX", "jujUslS96Hs", "AcNSJBncVa2", "iclr_2021_yrDEUYauOMd", "iclr_2021_yrDEUYauOMd", "iclr_2021_yrDEUYauOMd", "iclr_2021_yrDEUYauOMd"] |
iclr_2021_GPuvhWrEdUn | MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery | To address the issue that deep neural networks (DNNs) are vulnerable to model inversion attacks, we design an objective function to adjust the separability of the hidden data representations as a way to control the trade-off between data utility and vulnerability to inversion attacks. Our method is motivated by the the... | withdrawn-rejected-submissions | In this paper, the authors change the loss function of NNs to reduce the separability of the different classes in one of the hidden layers. The rationale for this assumption that the trained network will be more robust against white-box model inversion attack. The reviewers all concur that the paper had some merit, but... | train | [
"ttLRPm_iWhM", "c0AdqW6FMAW", "HuFHnf6gOa6", "AInuMO9CPWi", "0EL8H8igZwk", "G23HDR-WDl", "VDVmDOwCepl", "f35BR0ilctd", "tDYA--FkFzC", "-ZChUP42Heb"] | ["official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author"] | [
"Post response update\n\nThanks for the response. However, my major concern is still that the technical contribution of this paper is limited.\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------... | [
5, 5, -1, -1, 5, -1, -1, -1, -1, -1] | [4, 3, -1, -1, 3, -1, -1, -1, -1, -1] | ["iclr_2021_GPuvhWrEdUn", "iclr_2021_GPuvhWrEdUn", "G23HDR-WDl", "0EL8H8igZwk", "iclr_2021_GPuvhWrEdUn", "AInuMO9CPWi", "iclr_2021_GPuvhWrEdUn", "ttLRPm_iWhM", "-ZChUP42Heb", "c0AdqW6FMAW"] |
iclr_2021_b7ZRqEFXdQ | Improving Sequence Generative Adversarial Networks with Feature Statistics Alignment | Generative Adversarial Networks (GAN) are facing great challenges in synthesizing sequences of discrete elements, such as mode dropping and unstable training. The binary classifier in the discriminator may limit the capacity of learning signals and thus hinder the advance of adversarial training. To address such issues... | withdrawn-rejected-submissions | The work introduces a method that uses the Feature Statistics Alignment paradigm to improve sequence generation with GANs. The contribution is interesting and novel (although marginally), clarity is also good.
However the reviewers raised several concerns calling for more comprehensive and thorough evaluation. Experime... | train | [
"CnsHXulT1e", "dF_eK3Sowmp", "Pqu7TPSxfFN", "4kmtzJfYcW", "YZ_lwfy5JwS", "Olp2qNFDDCq", "GmJZxT7V0pD", "tXxPiFfsuB_", "2E-Y6UFOVI", "VHkLDQqF-h2"] | ["official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"**Main Claim:**\n\nIn this work, the authors propose to use the Feature Statistics Alignment paradigm to enrich the learning signal from the discriminator in a sentence generation GAN. The proposed model can generate sentences with better likelihood and BLEU on one synthetic and two real datasets.\n\n**Contributio... | [
4, 5, -1, -1, -1, -1, -1, -1, 6, 6] | [3, 3, -1, -1, -1, -1, -1, -1, 3, 3] | ["iclr_2021_b7ZRqEFXdQ", "iclr_2021_b7ZRqEFXdQ", "iclr_2021_b7ZRqEFXdQ", "CnsHXulT1e", "2E-Y6UFOVI", "VHkLDQqF-h2", "dF_eK3Sowmp", "dF_eK3Sowmp", "iclr_2021_b7ZRqEFXdQ", "iclr_2021_b7ZRqEFXdQ"] |
iclr_2021_uys9OcmXNtU | MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention | Recent advances in neural forecasting have produced major improvements in accuracy for probabilistic demand prediction. In this work, we propose novel improvements to the current state of the art by incorporating changes inspired by recent advances in Transformer architectures for Natural Language Processing. We develo... | withdrawn-rejected-submissions | The paper proposes and uses a fairly involved attention based architecture to perform time series forecasting. The idea of transformers is raised, but, given how sequence embedding is often convolutional, and position encoding input is provided to the model (albeit implictly in the form of features having to do with qu... | train | [
"RRCxNpya2qY",
"jDZKUIgJFKo",
"jCQW6o7StG",
"mnF62UtW_p_",
"Ov3QlQExPI5",
"NmeMAtkG9ky",
"SVmjg32wO6",
"2x2w1v91bh",
"h9EezZWBsTa"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"\n\nThis paper aims at improving accuracy of multi horizon univariate time series forecasting. The authors propose an encoder-decoder attention-based architecture for multi-horizon quantile forecasting. The model encodes a distinct representation of the past for each requested horizon.\n\n#### Strong points\n\n+ T... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"iclr_2021_uys9OcmXNtU",
"iclr_2021_uys9OcmXNtU",
"RRCxNpya2qY",
"RRCxNpya2qY",
"jDZKUIgJFKo",
"jDZKUIgJFKo",
"h9EezZWBsTa",
"iclr_2021_uys9OcmXNtU",
"iclr_2021_uys9OcmXNtU"
] |
iclr_2021_PiKUvDj5jyN | Relational Learning with Variational Bayes | In psychology, relational learning refers to the ability to recognize and respond to
relationship among objects irrespective of the nature of those objects. Relational
learning has long been recognized as a hallmark of human cognition and a key
question in artificial intelligence research. In this wor... | withdrawn-rejected-submissions | This paper presents a variational learning framework for inferring relations between data points. The authors further introduce novel regularizers to avoid unfavorable solutions to their relational learning problem. Qualitative results are provided on rotated versions of MNIST. Additional qualitative results on the Yal... | train | [
"ecjK1s4mz5",
"MMv4YOZU07q",
"CMx0-gtmBDl",
"8fWV9s4-A1",
"bODrNsXfvv",
"-D35znJWTOu",
"wwNW82b6cy",
"u0o5HNDpcp4",
"e2-CWAO_US_",
"bSUYlhfpARF",
"O_e_O4QkAR",
"XqrK4m4QH2",
"KGxkJFRBnly",
"toVGkDkVg2",
"QUBz0a6u9Ll",
"qLsnAIwUMG3",
"LoQ2kz1g9ut",
"D3WlqYNcRsM",
"toYh0ChcIP0",
... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
... | [
"If RPDA does not rely on the relation to be learned (which seems like cheating already), can the author try other data augmentation methods (e.g., adding Gaussian noise) in RPDA on the same MNIST data set and report the result?",
"Dear AnonReviewer4:\n\nWe appreciate you sharing your concern at the last minute. ... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"MMv4YOZU07q",
"CMx0-gtmBDl",
"e2-CWAO_US_",
"iclr_2021_PiKUvDj5jyN",
"8fWV9s4-A1",
"r-VlSAt_wWy",
"XqrK4m4QH2",
"8fWV9s4-A1",
"r-VlSAt_wWy",
"toVGkDkVg2",
"KGxkJFRBnly",
"iclr_2021_PiKUvDj5jyN",
"yEUr92xZp8I",
"03HpbPfF-y",
"iclr_2021_PiKUvDj5jyN",
"8fWV9s4-A1",
"r-VlSAt_wWy",
"ic... |
iclr_2021_30SS5VjvhrZ | Bayesian Neural Networks with Variance Propagation for Uncertainty Evaluation | Uncertainty evaluation is a core technique when deep neural networks (DNNs) are used in real-world problems. In practical applications, we often encounter unexpected samples that have not been seen in the training process. Not only achieving high prediction accuracy but also detecting uncertain data is significant for s... | withdrawn-rejected-submissions | This paper proposes an approach to estimating uncertainty in deep neural network models that avoids the need to make multiple forward passes through a network or through multiple individual models in a posterior ensemble. In terms of strengths, this is an important and timely topic that is of significant interest. The ... | train | [
"X5O47q1dD2J",
"5Nmi2XGD8Cn",
"IwFF2S3JM5E",
"QkEIl-4Rkw7",
"fYMIconZMDJ",
"WWzaO2n2Nv0",
"LrY9PW6PiEF",
"oFoQZtDr0AS",
"65ofW10Yc5T"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your constructive feedback and comments.\n\n- The chief issue with this work is the 'generality' of the work. (omitted) I find this assertion unfair to other works which attempt to estimate various types of uncertainties (e.g. epistemic) in a principled manner.\n\nWe provided the formula of... | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"WWzaO2n2Nv0",
"LrY9PW6PiEF",
"oFoQZtDr0AS",
"65ofW10Yc5T",
"iclr_2021_30SS5VjvhrZ",
"iclr_2021_30SS5VjvhrZ",
"iclr_2021_30SS5VjvhrZ",
"iclr_2021_30SS5VjvhrZ",
"iclr_2021_30SS5VjvhrZ"
] |
iclr_2021_ghKbryXRRAB | Tracking the progress of Language Models by extracting their underlying Knowledge Graphs | The state of the art of language models, previously dominated by pre-trained word embeddings, is now being pushed forward by large pre-trained contextual representations. This success has driven growing interest to understand what these models encode inside their inner workings. Despite this, understanding their semant... | withdrawn-rejected-submissions | This work addresses the problem of understanding how pre-trained language models are encoding semantic information, such as WordNet structure. This is evaluated by recreating the structure of WordNet from embeddings. The study also shows evidence about the limitations of current pre-trained language models, demonstrati... | train | [
"Qvv_EGHRyxv",
"1LgJChLE-v",
"qK5NgrcnZdh",
"SZBn80U-Ouq",
"s0uV3QiXG7",
"puMyYXKiQV",
"0rFm6fgxSOq",
"yARipHFQZzx"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"* We used pre-trained models provided by the original papers. We updated the paper to make this fact clearer. We also enhanced discussion and analysis in section 7 and appendix E regarding the low impact of pre-training corpus sizes. \n* May we ask why you changed your Rating from 7 to 6? We would greatly apprecia... | [
-1,
-1,
-1,
-1,
6,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
2,
4,
4
] | [
"s0uV3QiXG7",
"puMyYXKiQV",
"0rFm6fgxSOq",
"yARipHFQZzx",
"iclr_2021_ghKbryXRRAB",
"iclr_2021_ghKbryXRRAB",
"iclr_2021_ghKbryXRRAB",
"iclr_2021_ghKbryXRRAB"
] |
iclr_2021_HWX5j6Bv_ih | Cross-Node Federated Graph Neural Network for Spatio-Temporal Data Modeling | Vast amount of data generated from networks of sensors, wearables, and the Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data due to the need for edge computation and licensing (data access) issues. While federated lea... | withdrawn-rejected-submissions | This paper received mixed reviews: two positives (6, 6) and two negatives (5, 3). However, the positive reviewers have very low confidence, do not show strong supports for this paper. The reviewers raised various concerns about this paper, and there still exist remaining critical issues although the authors made substa... | train | [
"fkeEex4yT6a",
"UyVNxZX_81Q",
"BcUoq-X4U5",
"dCauG-TKVF",
"BCj5qcXT19I",
"3UtLcgCedNv",
"Z1DsY0ZIutY",
"ywQmip3PAF",
"yGFv3fgl_Lg",
"U9PebyNWQqp",
"Saqmoa4vEoq",
"0n3ZzRoj7HO",
"L3G-0Pjkxk_",
"XqISMPcgCP5"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank you for your comprehensive and helpful review and suggestions. We have addressed your concerns in our replies and have updated the submission draft according to your suggestions. We are looking forward to having an in-depth and fruitful discussion so that we can clarify any further confusion or concerns."... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
5,
1
] | [
"L3G-0Pjkxk_",
"Saqmoa4vEoq",
"UyVNxZX_81Q",
"3UtLcgCedNv",
"Z1DsY0ZIutY",
"BCj5qcXT19I",
"L3G-0Pjkxk_",
"0n3ZzRoj7HO",
"XqISMPcgCP5",
"iclr_2021_HWX5j6Bv_ih",
"iclr_2021_HWX5j6Bv_ih",
"iclr_2021_HWX5j6Bv_ih",
"iclr_2021_HWX5j6Bv_ih",
"iclr_2021_HWX5j6Bv_ih"
] |
iclr_2021_7UyqgFhPqAd | Connection- and Node-Sparse Deep Learning: Statistical Guarantees | Neural networks are becoming increasingly popular in applications, but a comprehensive mathematical understanding of their potentials and limitations is still missing. In this paper, we study the prediction accuracies of neural networks from a statistical point of view. In particular, we establish statistical predictio... | withdrawn-rejected-submissions | The paper considers the problem of using sparse coding to create better generalization in neural networks. The new generalization bound of the neural network only depends on the l1 norm of the weight, instead of the original \ell_2 version as in previous papers.
While this direction is promising, the major concern ab... | test | [
"mo4yEDWeTYn",
"CV7o8VYpqh8",
"hr5zaCgrmeL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"# Summary\nThis paper studies the problem of estimating a vector valued regression function by neural networks. They provide a bound on the in-sample prediction error for a neural network estimator under two types of regularizations; one that induces connection sparsity and another that induces node sparsity. The ... | [
5,
4,
6
] | [
3,
4,
3
] | [
"iclr_2021_7UyqgFhPqAd",
"iclr_2021_7UyqgFhPqAd",
"iclr_2021_7UyqgFhPqAd"
] |
iclr_2021_NLuOUSp9zZd | DO-GAN: A Double Oracle Framework for Generative Adversarial Networks | In this paper, we propose a new approach to train Generative Adversarial Networks (GAN) where we deploy a double-oracle framework using the generator and discriminator oracles. GAN is essentially a two-player zero-sum game between the generator and the discriminator. Training GANs is challenging as a pure Nash equilibr... | withdrawn-rejected-submissions | This paper uses the double oracle method from game theory and applies it to GANs.
This idea is interesting and Double Oracle actually seems like a good fit to train GANs. This could lead to interesting results in the future.
Reviewers disagree on the clarity of the paper, probably because the game theory vocabulary i... | train | [
"A-YNwAhdiZB",
"FsrIKF9kNpT",
"3eBcSrOwDw",
"lnJAtCzg1YJ",
"EanRgATSqEl",
"ixth1NQK-o",
"8a3Qyi2ti38",
"j2sJk3TTgk",
"3LmLoyInbku"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new training framework for GAN, inspired by the double oracle (DO) algorithm in game theory. The authors design many mechanisms to make it possible to employ DO in GAN training. The motivation is clear, and the experimental results support the claim. \n\nFor the proposed training framework, ... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
4,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_NLuOUSp9zZd",
"8a3Qyi2ti38",
"j2sJk3TTgk",
"A-YNwAhdiZB",
"3LmLoyInbku",
"iclr_2021_NLuOUSp9zZd",
"iclr_2021_NLuOUSp9zZd",
"iclr_2021_NLuOUSp9zZd",
"iclr_2021_NLuOUSp9zZd"
] |
iclr_2021_kq4SNxgQI4v | Efficient Neural Machine Translation with Prior Word Alignment | Prior word alignment has been shown indeed helpful for a better translation if such prior is good enough and can be acquired in a convenient way at the same time. Traditionally, word alignment can be learned through statistical machine translation (SMT) models. In this paper, we propose a novel method that infuses prio... | withdrawn-rejected-submissions | Two reviewers suggested rejection, and the other reviewer also rated it below the threshold. | train | [
"hNjyPWos7Bl",
"9gX-2zg1iX",
"izbNmMi8VCr",
"pwITMXvXP0N",
"E7sRtuQvNBL",
"X795aLMb-c",
"5R_j-mLAX4x"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes to integrate word alignment obtained from SMT into an NMT system. This is an exciting topic not only because it can help interpretability, but also because the same mechanism could be used e.g. for imposing a specific terminology in translations, something that was relatively easy to do with SM... | [
4,
5,
-1,
-1,
-1,
-1,
3
] | [
4,
4,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2021_kq4SNxgQI4v",
"iclr_2021_kq4SNxgQI4v",
"iclr_2021_kq4SNxgQI4v",
"hNjyPWos7Bl",
"9gX-2zg1iX",
"5R_j-mLAX4x",
"iclr_2021_kq4SNxgQI4v"
] |
iclr_2021_H-SPvQtMwm | Synthesizer: Rethinking Self-Attention for Transformer Models | The dot product self-attention is known to be central and indispensable to state-of-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism on the performance of Transformer models. Via extensive experiments, w... | withdrawn-rejected-submissions | The paper seeks to answer the question on the necessity of the self-attention matrix in Transformers and whether it is possible to synthesize it by alternate means other than pairwise attention.
The reviewers appreciated the main general idea and the wide range of experiments conducted.
However, there are some conce... | train | [
"MkRRdIOi0YR",
"KpLm0eoGDG",
"j83AT2FCEWV",
"kVLcSeGLc0f",
"bXAvGivDVp7",
"XB4TCqM-9g",
"Ro8HwpCmFCR",
"RbF80QRFVNK",
"xwitXcVkWoZ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThis paper proposes replacing/combining Transformer self-attention with synthetic attention weights that do not rely on pairwise dependencies between token positions. Synthetic attention relies on either the input at the given position (dense synthesizer) or is altogether randomly initialized (random syn... | [
4,
7,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_H-SPvQtMwm",
"iclr_2021_H-SPvQtMwm",
"iclr_2021_H-SPvQtMwm",
"KpLm0eoGDG",
"MkRRdIOi0YR",
"RbF80QRFVNK",
"xwitXcVkWoZ",
"iclr_2021_H-SPvQtMwm",
"iclr_2021_H-SPvQtMwm"
] |
iclr_2021_0F_OC_oROWb | RSO: A Gradient Free Sampling Based Approach For Training Deep Neural Networks | We propose RSO (random search optimization), a gradient free, sampling based approach for training deep neural networks. To this end, RSO adds a perturbation to a weight in a deep neural network and tests if it reduces the loss on a mini-batch. If this reduces the loss, the weight is updated, otherwise the existing wei... | withdrawn-rejected-submissions | The paper proposes a variant derivative-free optimization algorithm, that belongs to the family of Evolution Strategies (ES) and zero-order optimization algorithms, to train deep neural networks. The proposed Random Search Optimization (RSO) perturbs the weights via additive Gaussian noise and updates the weights only ... | train | [
"JYuw83ZSc1L",
"zEHO0tN7p8A",
"uwLAvLpzw5O",
"qNLE8SSHKdZ",
"S-IetRX90JU",
"A1_UaE2NkIe",
"FWpcaJ7C_ai",
"FPdMJzP2bei",
"NOLh7XxvuHA",
"fd3MZaa30FR"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, rather than training a DNN using SGD, the proposed idea is to perturb the weights of the network and accept the perturbation if it improves the performance. This naive idea seems to perform almost as well as SGD, and \"an order of magnitude faster\" (see below).\n\nWhile I commend the authors for br... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3
] | [
4,
2,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"iclr_2021_0F_OC_oROWb",
"iclr_2021_0F_OC_oROWb",
"qNLE8SSHKdZ",
"FWpcaJ7C_ai",
"JYuw83ZSc1L",
"fd3MZaa30FR",
"NOLh7XxvuHA",
"zEHO0tN7p8A",
"iclr_2021_0F_OC_oROWb",
"iclr_2021_0F_OC_oROWb"
] |
iclr_2021_tEFhwX8s1GN | Training By Vanilla SGD with Larger Learning Rates | The stochastic gradient descent (SGD) method, first proposed in 1950's, has been the foundation for deep-neural-network (DNN) training with numerous enhancements including adding a momentum or adaptively selecting learning rates, or using both strategies and more. A common view for SGD is that the learning rate should... | withdrawn-rejected-submissions | The paper primary theoretical contribution claim is to establish the constant size SGD converges linear to the optimal solution in non-convex settings. This is shown in the interpolation regime for over-parametrized situations when starting from points nearby to the optimum. The paper's empirical claim is to use relati... | train | [
"5z01jhbUF-L",
"rgjVGmZ8jX",
"wsVvomSMgTT",
"4MyY5A40aid",
"qBMAvRP1vtN",
"4Bzdx08lpBK",
"FJhwVdZHFb",
"eB6a-dDVRhy"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the smooth finite-sum problem under suitable conditions in the non-convex case. They show the necessary condition for the minimizer $x^*$ being a point of attraction, and Theorem 1 provides a sufficient condition for the strong minimizer $x^*$ to be a point of strong attraction with high probabi... | [
5,
4,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
5,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_tEFhwX8s1GN",
"iclr_2021_tEFhwX8s1GN",
"rgjVGmZ8jX",
"FJhwVdZHFb",
"eB6a-dDVRhy",
"5z01jhbUF-L",
"iclr_2021_tEFhwX8s1GN",
"iclr_2021_tEFhwX8s1GN"
] |
iclr_2021_C_p3TDhOXW_ | Prior Preference Learning From Experts: Designing A Reward with Active Inference | Active inference may be defined as Bayesian modeling of a brain with a biologically plausible model of the agent. Its primary idea relies on the free energy principle and the prior preference of the agent. An agent will choose an action that leads to its prior preference for a future observation. In this paper, we clai... | withdrawn-rejected-submissions | The meta-reviewer agrees with the reviewers that this is a marginal case. Conditioned on the quality of content and comparisons to other works:
Constrained Reinforcement Learning With Learned Constraints (https://openreview.net/forum?id=akgiLNAkC7P)
Parrot: Data-Driven Behavioral Priors for Reinforcement Learning (http... | train | [
"R5qTmNEdop2",
"Yac-IvFf3X8",
"j_-u4yDLFH",
"nfFtFwO74U",
"TGyh86qMlM4",
"steoACxckTp",
"p77kgSp4gQ4",
"YdQYgnux_Pe",
"OJrxu0jIyry",
"buhWIYJh3o9"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The work in this paper draws connections between the active inference literature and reinforcement learning frameworks. The paper proposes a connection between these two methods more formally so that you can convert the active inference learning problem into a reinforcement learning problem. The paper also shows s... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"iclr_2021_C_p3TDhOXW_",
"j_-u4yDLFH",
"steoACxckTp",
"R5qTmNEdop2",
"buhWIYJh3o9",
"nfFtFwO74U",
"iclr_2021_C_p3TDhOXW_",
"OJrxu0jIyry",
"iclr_2021_C_p3TDhOXW_",
"iclr_2021_C_p3TDhOXW_"
] |
iclr_2021_k2Om84I9JuX | Descending through a Crowded Valley — Benchmarking Deep Learning Optimizers | Choosing the optimizer is considered to be among the most crucial design decisions in deep learning, and it is not an easy one. The growing literature now lists hundreds of optimization methods. In the absence of clear theoretical guidance and conclusive empirical evidence, the decision is often made based on anecdotes... | withdrawn-rejected-submissions | Contributions of this type are very important for the community. There is a great deal of confusion among practitioners about how to pick optimizers. Perhaps worse, there is confusion among optimization researchers about how to demonstrate the effectiveness of their novel algorithms on deep learning tasks. I applaud th... | train | [
"uwB9TctXzfq",
"4c3EviB6hxx",
"AxesDY_IXwf",
"lXdnkvD5p2t",
"tSOF5So-CyR",
"zSUepUS-64W",
"EMBy9LUYYAf",
"WG5IBU04WgD",
"uMuv40CgbXP",
"mU1SESwFqEB",
"6LvY2aN-l_C",
"BweQ-71Ryp5",
"kTBmKuI7PWS",
"ZDB6mDT4qDr",
"nS7tOsHRq0",
"8i_oW1Bbddm",
"4bWpeD_6ltQ",
"TTKV3qe6ok7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"au... | [
"Summary:\nThis paper benchmarks popular optimizers for training neural networks. The experiments consider all possible combinations of 3 different tuning budgets, and 4 different fixed learning rate schedules on 8 deep learning workloads for 14 optimizers. The paper highlights two main observations: 1) there is no... | [
4,
4,
-1,
-1,
-1,
-1,
-1,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
5,
5,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_k2Om84I9JuX",
"iclr_2021_k2Om84I9JuX",
"lXdnkvD5p2t",
"EMBy9LUYYAf",
"uMuv40CgbXP",
"BweQ-71Ryp5",
"nS7tOsHRq0",
"iclr_2021_k2Om84I9JuX",
"mU1SESwFqEB",
"iclr_2021_k2Om84I9JuX",
"4bWpeD_6ltQ",
"nS7tOsHRq0",
"ZDB6mDT4qDr",
"WG5IBU04WgD",
"uwB9TctXzfq",
"TTKV3qe6ok7",
"4c3Ev... |
iclr_2021_C1VUD8RZ5wq | A Closer Look at Codistillation for Distributed Training | Codistillation has been proposed as a mechanism to share knowledge among concurrently trained models by encouraging them to represent the same function through an auxiliary loss. This contrasts with the more commonly used fully-synchronous data-parallel stochastic gradient descent methods, where different model replica... | withdrawn-rejected-submissions | This paper proposes a novel and interesting approach called co-distillation for distributed training. The main idea is to add a regularizer in order to encourage local models to be consistent with the global objective. Although the idea is a promising alternative to local-update SGD, the approach is mostly empirical. T... | test | [
"1xZNxZ0qetv",
"lsfBSka53Gw",
"VpVOBN-uOyA",
"yQg_UAUD43F",
"n1bWnnvgJcK",
"8WiffmQ2FZR",
"XE5bS7UFulN",
"q_LZMR_Bh4D",
"QNLL_vnPbF",
"5FA1GQw4eWX",
"Xj_3qgj7U9l",
"R2eZTmIvZRw",
"7p2DLOIyjGB",
"mpV9BmCL4VR",
"6eZxEis6jfg",
"LavfBLGLy2",
"1z_AssDWqsQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This work analyzes the effect of co-distillation for distributed training under moderate batch sizes. Using distillation-like techniques to improve synchronous SGD training is an interesting direction. And the paper carefully analyzed this setting while using the same amount of compute, which is not done by prior ... | [
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
4,
4,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_C1VUD8RZ5wq",
"iclr_2021_C1VUD8RZ5wq",
"iclr_2021_C1VUD8RZ5wq",
"lsfBSka53Gw",
"1xZNxZ0qetv",
"1z_AssDWqsQ",
"VpVOBN-uOyA",
"lsfBSka53Gw",
"1xZNxZ0qetv",
"1z_AssDWqsQ",
"VpVOBN-uOyA",
"1z_AssDWqsQ",
"1z_AssDWqsQ",
"1xZNxZ0qetv",
"lsfBSka53Gw",
"VpVOBN-uOyA",
"iclr_2021_C1V... |
iclr_2021_pHgB1ASMgMW | Rethinking Uncertainty in Deep Learning: Whether and How it Improves Robustness | Deep neural networks (DNNs) are known to be prone to adversarial attacks, for which many remedies are proposed. While adversarial training (AT) is regarded as the most robust defense, it suffers from poor performance both on clean examples and under other types of attacks, e.g. attacks with larger perturbations. Meanwh... | withdrawn-rejected-submissions | The reviews were a bit mixed, with some concerns on the novelty and experimental evaluation. While the authors' efforts during rebuttable were appreciated, the overall sentiment is that this work, in its current form, cannot be accepted to ICLR yet. Please consider revising your work based on the excellent reviews. Som... | val | [
"bkGALOX1Z6b",
"obbpodQfdb1",
"e6jFM6b_bh9",
"JCcoMXmU-lx",
"mUZjPfP1CTZ",
"Ud6MgY110OO",
"AIVcKmHFM5y",
"_JGvhAMgyZF",
"bitzfW4zgM_",
"FmeoZcVHs5H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"1) First, this work studies how entropy maximization and label smoothing combined with adversarial training can improve adversarial robustness. Although these two techniques have been shown to prevent model from being over-confident, I still think it is an over-claim that \"rethinking uncertainty in deep learning\... | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"iclr_2021_pHgB1ASMgMW",
"iclr_2021_pHgB1ASMgMW",
"iclr_2021_pHgB1ASMgMW",
"iclr_2021_pHgB1ASMgMW",
"FmeoZcVHs5H",
"bitzfW4zgM_",
"bkGALOX1Z6b",
"obbpodQfdb1",
"iclr_2021_pHgB1ASMgMW",
"iclr_2021_pHgB1ASMgMW"
] |
iclr_2021_R5M7Mxl1xZ | Minimal Geometry-Distortion Constraint for Unsupervised Image-to-Image Translation | Unsupervised image-to-image (I2I) translation, which aims to learn a domain mapping function without paired data, is very challenging because the function is highly under-constrained. Despite the significant progress in constraining the mapping function, current methods suffer from the \textit{geometry distortion} prob... | withdrawn-rejected-submissions | This paper deals with unsupervised image-to-image translation and proposes a geometric constraint for better structural similarity between the source and the target. Experiments are done using multiple GAN frameworks and demonstrate reduction in distortions in the generated images.
The reviewers appreciated the contri... | train | [
"IY53FWSytXo",
"vqsuwnqpYpL",
"GHi9-sJS-c2",
"BGRdetcL-E9",
"UX4LaFBHEIg",
"Jna54xtVYZ",
"heJJ5ykYtt",
"wma2g7U2wnP",
"Yn8E4I1PgO8",
"BqD4S0eshV4",
"KsHyc2im1wY",
"mpBaMPTkMbg",
"PiIhMBibGPO"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have uploaded a revision of our paper, and the modified part is typed in blue.\n\n$\\textbf{We summarize the modifications here}$:\n1.\tWe add an illustration in $\\textbf{Figure 1}$ to explain how the randomness of color transformation in the translation process harms to the geometry preservation.\n2.\tWe add ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
3
] | [
"iclr_2021_R5M7Mxl1xZ",
"GHi9-sJS-c2",
"mpBaMPTkMbg",
"wma2g7U2wnP",
"BqD4S0eshV4",
"KsHyc2im1wY",
"PiIhMBibGPO",
"UX4LaFBHEIg",
"Jna54xtVYZ",
"iclr_2021_R5M7Mxl1xZ",
"iclr_2021_R5M7Mxl1xZ",
"iclr_2021_R5M7Mxl1xZ",
"iclr_2021_R5M7Mxl1xZ"
] |
iclr_2021_gHsr-v8Tz6l | Variational Invariant Learning for Bayesian Domain Generalization | Domain generalization addresses the out-of-distribution problem, which is challenging due to the domain shift and the uncertainty caused by the inaccessibility to data from the target domains. In this paper, we propose variational invariant learning, a probabilistic inference framework that jointly models domain invari... | withdrawn-rejected-submissions | Although the proposed method shows sota results, it is a simple combination of two existing methods, a bit of Bayesian + domain generalization. It seems that the total improvement by the proposed method is just the sum of improvements by Bayesian and by domain generalization. No synergy between Bayesian and domain ge... | train | [
"nkJGRo9IV4L",
"2jSoMBfnQ8S",
"Mkvk3Cx3E8-",
"ii4OdxSPr4",
"LCahpY941o",
"bCeFSKfymgw",
"Nf-0ZWZlFN",
"2UG1dHNNbX5"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank AnonReviewer3 for acknowledging our methodology, definitions and ablation study are clear and well presented.\n\nWe agree our claim on being the first Bayesian approach to domain generalization is not precise enough. We have softened in Section 1: ''We adopt Bayesian neural networks to domain generalizati... | [
-1,
-1,
-1,
-1,
8,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"LCahpY941o",
"bCeFSKfymgw",
"Nf-0ZWZlFN",
"2UG1dHNNbX5",
"iclr_2021_gHsr-v8Tz6l",
"iclr_2021_gHsr-v8Tz6l",
"iclr_2021_gHsr-v8Tz6l",
"iclr_2021_gHsr-v8Tz6l"
] |
iclr_2021_tij5dHg5Hk | Run Away From your Teacher: a New Self-Supervised Approach Solving the Puzzle of BYOL | Recently, a newly proposed self-supervised framework Bootstrap Your Own Latent (BYOL) seriously challenges the necessity of negative samples in contrastive-based learning frameworks. BYOL works like a charm despite the fact that it discards the negative samples completely and there is no measure to prevent collapse in ... | withdrawn-rejected-submissions | Most of the reviewers and AC found many claims of this submission unsubstantiated. | train | [
"lVkhGm1JujR",
"73rFsHqP6Bu",
"xeNO-cW0aAP",
"-2IiV-3N9s3",
"8DkYJ2f1tcR",
"Q2qedEYua-Q",
"M5nKi6k4eZ",
"p40Znp-nO1_",
"Hbfpvs6x3b9",
"bLQo-ZBk1eq",
"JGO_OAVdEQU"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank R2 for the valuable questions and advice. We would like to respond to your concerns one by one in this post. \n\n## Concern 1\nThanks for pointing out that our paper’s title oversells the contributions we made in our work. We sincerely apologize for that. While besides the title itself, we a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"p40Znp-nO1_",
"JGO_OAVdEQU",
"JGO_OAVdEQU",
"JGO_OAVdEQU",
"Hbfpvs6x3b9",
"Hbfpvs6x3b9",
"bLQo-ZBk1eq",
"iclr_2021_tij5dHg5Hk",
"iclr_2021_tij5dHg5Hk",
"iclr_2021_tij5dHg5Hk",
"iclr_2021_tij5dHg5Hk"
] |
iclr_2021_RpprvYz0xTM | A Flexible Framework for Discovering Novel Categories with Contrastive Learning | This paper studies the problem of novel category discovery on single- and multi-modal data with labels from different but relevant categories. We present a generic, end-to-end framework to jointly learn a reliable representation and assign clusters to unlabelled data. To avoid over-fitting the learnt embedding to label... | withdrawn-rejected-submissions | The reviewers unanimously raised concerns on the lack of insights on why the proposed method works better than Han et al., 2020, and why WTA brings significant gains only to the proposed method and not to Han et al. I think the paper is promising but providing these insights are critical to making the work convincing t... | train | [
"NibAhlR98J",
"lgwIKzg-vqU",
"kNeeAGDm-ys",
"A2E3UpbjbRQ",
"aMq2gt0phg1",
"JmOXZ58_oBU",
"vM8L1n22_2e",
"2IacAzghf1w",
"i2gbRFfnaGw",
"S0-ABPDq4N6",
"b_gyPp3pEN-",
"OBS_Bgpx7Bt",
"g4f0Tm2a9_V",
"MLzDiUbpPv5",
"iKUvUPirvSs",
"_qNGyMFOpJf",
"I7hcbZiUdkI"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"SUMMARY\n\nThe paper proposes a multi-task loss for semi-supervised representation learning and category discovery (i.e. clustering unlabeled examples). At a high level, this paper adds a contrastive loss term to the approach from Han et al. 2020. The loss has five components: (i) an InfoNCE loss where positive pa... | [
5,
4,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
3,
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_RpprvYz0xTM",
"iclr_2021_RpprvYz0xTM",
"iclr_2021_RpprvYz0xTM",
"iclr_2021_RpprvYz0xTM",
"JmOXZ58_oBU",
"vM8L1n22_2e",
"2IacAzghf1w",
"g4f0Tm2a9_V",
"I7hcbZiUdkI",
"NibAhlR98J",
"A2E3UpbjbRQ",
"_qNGyMFOpJf",
"lgwIKzg-vqU",
"kNeeAGDm-ys",
"i2gbRFfnaGw",
"S0-ABPDq4N6",
"iclr... |
iclr_2021_bhKQ7P7gyLA | Manifold Regularization for Locally Stable Deep Neural Networks | We apply concepts from manifold regularization to develop new regularization techniques for training locally stable deep neural networks. Our regularizers encourage functions which are smooth not only in their predictions but also their decision boundaries. Empirically, our networks exhibit stability in a diverse set o... | withdrawn-rejected-submissions | The paper received borderline and negative reviews but has raised many questions and discussions, showing that the paper has some merit. Many concerns were however raised on various aspects of the paper such as mathematical rigor, clarity, and motivation of manifold regularization that is too disconnected from the robu... | train | [
"2NJyNihpyVH",
"hQDGMDUpCls",
"LwtHWUyqp2Q",
"ARi2JjIlTpn",
"cUo-LUDMNom",
"VCiwoX7GNeS",
"KkiSpmNdRWN",
"X4eJ4fNRtvC",
"uNcv-Q2VO1",
"xRg8QpM0Z-",
"X3Y3bzMM_mg",
"qfAXMnmoiRo",
"nAM58uuddH",
"z8uIaLvFuvV",
"pLz-6cYIUSp",
"ITxh47ZNnOQ",
"BpqKOzxOSKS",
"EVQZRgqIhW8",
"YJAkbZfknW2"... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
"This work introduces manifold regularization as an approach for learning stable deep nets, towards the goal of adversarial robustness. Several regularizers are proposed: intrinsic, sparse Laplacian and Hamming regularizers. As the proposed method relies only on adding these regularization terms to the loss, it is ... | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_bhKQ7P7gyLA",
"iclr_2021_bhKQ7P7gyLA",
"iclr_2021_bhKQ7P7gyLA",
"X3Y3bzMM_mg",
"VCiwoX7GNeS",
"ARi2JjIlTpn",
"X4eJ4fNRtvC",
"BpqKOzxOSKS",
"xRg8QpM0Z-",
"pLz-6cYIUSp",
"z8uIaLvFuvV",
"nAM58uuddH",
"QaqxXG8a8tn",
"qfAXMnmoiRo",
"iclr_2021_bhKQ7P7gyLA",
"2NJyNihpyVH",
"ITxh4... |
iclr_2021_IqZpoAAt2oQ | Function Contrastive Learning of Transferable Representations | Few-shot-learning seeks to find models that are capable of fast-adaptation to novel tasks which are not encountered during training. Unlike typical few-shot learning algorithms, we propose a contrastive learning method which is not trained to solve a set of tasks, but rather attempts to find a good representation of th... | withdrawn-rejected-submissions | While the authors provided extensive responses to the reviewers and most of the reviewers did a good job of accounting for the author responses the final ratings for this paper was unanimously 5s -- all marginally below acceptance. The paper's positioning, writing were identified as key issues that remained to be addre... | train | [
"zkHX1k7kKC1",
"q3BfNybccin",
"lufJOKVRvG",
"8TbppbSfR8S",
"2bbFha85O5n",
"q9g2QrcXhMP",
"2jx2P2WJA42",
"YgGWuYxsIJx",
"IG7Mc_jTuHW",
"jD1ozarP0WT",
"N7rMjkg-Rdz",
"BkJVsm9rf8m",
"xFkSaUgfwaA",
"adsHccPeA7E",
"y2u6uKmiGRc",
"Q_5Q-r7q2Jc",
"xNGvmJyNMoB",
"VUCTVAbvmP_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"As far as I can tell, this paper proposes to use contrastive learning to solve few-shot learning -- not just few-shot learning, but meta \"task-wise prediction\" as well (predict some properties of the task itself). The method learns representation of each task, by forcing random instantiations of the same tasks t... | [
5,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
3,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_IqZpoAAt2oQ",
"iclr_2021_IqZpoAAt2oQ",
"iclr_2021_IqZpoAAt2oQ",
"xFkSaUgfwaA",
"BkJVsm9rf8m",
"zkHX1k7kKC1",
"lufJOKVRvG",
"q3BfNybccin",
"VUCTVAbvmP_",
"iclr_2021_IqZpoAAt2oQ",
"VUCTVAbvmP_",
"zkHX1k7kKC1",
"BkJVsm9rf8m",
"y2u6uKmiGRc",
"lufJOKVRvG",
"xNGvmJyNMoB",
"q3BfN... |
iclr_2021_zUMD--Fb9Bt | A Unified Framework for Convolution-based Graph Neural Networks | Graph Convolutional Networks (GCNs) have attracted a lot of research interest in the machine learning community in recent years. Although many variants have been proposed, we still lack a systematic view of different GCN models and deep understanding of the relations among them. In this paper, we take a step forward to... | withdrawn-rejected-submissions | Four reviewers have reviewed and discussed this submission. After rebuttal, two reviewers felt the paper is below acceptance threshold. Firstly, Rev. 1 and Rev. 2 were somewhat disappointed in the lack of analysis regarding non-linearities despite authors suggested this was resolved in the revised manuscript, e.g. Rev.... | train | [
"l9TmkZxM89_",
"_oBYcrE87EB",
"Men_ZUaDys",
"CN2C4trPnma",
"vZb6um3VQrr",
"-HQl7ebJiVV",
"JZLs4NjS_h-",
"_PEBG1paYUI",
"_HW44yPA2vn",
"qYadojTqAda",
"Lu1Ayx-zhFT",
"V_aMIeRjCg",
"TI7mf0Uooc"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper unifies several variants of the graph convolutional networks (GCNs) into a regularized quadratic optimization framework. Basically, the function to be optimized considers both to preserve node information and to perform graph Laplacian regularization, whose optimal solution gives a convolutional layer.\... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2021_zUMD--Fb9Bt",
"iclr_2021_zUMD--Fb9Bt",
"-HQl7ebJiVV",
"V_aMIeRjCg",
"TI7mf0Uooc",
"Lu1Ayx-zhFT",
"V_aMIeRjCg",
"l9TmkZxM89_",
"TI7mf0Uooc",
"TI7mf0Uooc",
"_oBYcrE87EB",
"iclr_2021_zUMD--Fb9Bt",
"iclr_2021_zUMD--Fb9Bt"
] |
iclr_2021_cKnKJcTPRcV | HyperSAGE: Generalizing Inductive Representation Learning on Hypergraphs | Graphs are the most ubiquitous form of structured data representation used in machine learning. They model, however, only pairwise relations between nodes and are not designed for encoding the higher-order relations found in many real-world datasets. To model such complex relations, hypergraphs have proven to be a natu... | withdrawn-rejected-submissions | The paper proposes a learning framework for Hypergraphs. The proposed method can be viewed as generalisation of GraphSAGE to hyper graphs. Though the paper emphasises that there is significant differences between Hypergraphs and Graphs and hence new methods are required. However, the proposed methods are not significan... | train | [
"_nCAd1Bpgov",
"wN_9YoHbjIV",
"2Sf31weEubQ",
"YFCtAtLuBRv",
"h9SKwxu0Fte",
"4TMqrgOMBAA",
"Fm7VHpDET3",
"3HLKk-GTXW",
"VsrOjoyx3Iz",
"4g4jDDFvJke",
"lgKRXBb45Th",
"PPk6-7r4s1G"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"*Q- In addition, if I understand correctly, the proof does not show that p_1 is equal to p_2: \"we first assume $p_1=p_2$\" under eq. (10), while the case $p_1\\neq p_2$ is not dealt with. Finally, an additional condition seems to be assumed in the proof while not mentioned in the main document (see sentence under... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"wN_9YoHbjIV",
"4g4jDDFvJke",
"YFCtAtLuBRv",
"VsrOjoyx3Iz",
"4TMqrgOMBAA",
"lgKRXBb45Th",
"PPk6-7r4s1G",
"Fm7VHpDET3",
"iclr_2021_cKnKJcTPRcV",
"iclr_2021_cKnKJcTPRcV",
"iclr_2021_cKnKJcTPRcV",
"iclr_2021_cKnKJcTPRcV"
] |
iclr_2021_-Qaj4_O3cO | DCT-SNN: Using DCT to Distribute Spatial Information over Time for Learning Low-Latency Spiking Neural Networks | Spiking Neural Networks (SNNs) offer a promising alternative to traditional deep learning frameworks, since they provide higher computational efficiency due to event-driven information processing. SNNs distribute the analog values of pixel intensities into binary spikes over time. However, the most widely used i... | withdrawn-rejected-submissions | This paper provides a method of encoding inputs to a spiking neural network (SNN) using the discrete cosine transform (DCT). The goal is to create a more energy and time efficient means of doing inference with SNNs. The authors provide a description of the method, then show accuracy results on a variety of standard ben... | val | [
"iW1T0TVkQf",
"7it6GVZ6gCe",
"IA_Tkhcv5XS",
"PfSkUN-nxOM",
"wABm5UlB8_R",
"LBA3HxbyRMg",
"efExI4_-VXE",
"I0ix6Mo-a04",
"JYgDG_ZtF63",
"_UMH2sJ1MKq",
"bnnwAXJXHsZ",
"Yl8hs9S8GTa",
"cqJI5SgWDzP",
"zJsmFGiFfJX",
"CLqXVB6UBF8",
"i1_swEZ2d4",
"BYWHVRSyAVq",
"1x83o6BEW5y",
"P74y3IQaQ9L... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official... | [
"We are grateful to the reviewer for updating the rating, and thank them for pointing us to an additional reference. Conceptually, the works mentioned use the first layer of the SNN as the spike encoder taking the analog pixels as input, and expose the activation map generated after the first convolution to the IF/... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"PfSkUN-nxOM",
"iclr_2021_-Qaj4_O3cO",
"LBA3HxbyRMg",
"bnnwAXJXHsZ",
"iclr_2021_-Qaj4_O3cO",
"efExI4_-VXE",
"I0ix6Mo-a04",
"cqJI5SgWDzP",
"e2Eu0Sbdc0",
"i1_swEZ2d4",
"wABm5UlB8_R",
"CLqXVB6UBF8",
"DkON7Q4s4_o",
"jbLUg-MjuOS",
"JYgDG_ZtF63",
"BYWHVRSyAVq",
"bnnwAXJXHsZ",
"P74y3IQaQ9... |
iclr_2021_mRNkPVHyIVX | Exploiting Safe Spots in Neural Networks for Preemptive Robustness and Out-of-Distribution Detection | Recent advances on adversarial defense mainly focus on improving the classifier’s robustness against adversarially perturbed inputs. In this paper, we turn our attention from classifiers to inputs and explore if there exist safe spots in the vicinity of natural images that are robust to adversarial attacks. In this reg... | withdrawn-rejected-submissions | The reviewers recognized that the proposed method is interesting and seems to be useful in some cases, and the authors provided sufficient empirical results to support their claim. In addition, some comments have already been clarified. However, some reviewers still concerned that the proposed defence method will be de... | train | [
"zlVunJT1_Nj",
"QJB9URcAEkw",
"LGNC4ItiU5J",
"RRG9jovfApP",
"0CRXLScNzZN",
"ihQDpACXTTo",
"WhwT2UBje69",
"eeOS5docO4-",
"SNHEa3cPtJS",
"njwSABZ0weV",
"AdbtY9xuKUr",
"k0iesDvp8A9",
"83Ae9CMs8nU",
"y-m9P5UyccC",
"8WEdaOhOIHW",
"sYfteT-YK2J",
"mB3r-tQYlYh",
"LKwbqAS2DdX",
"Jxc_u-7b8... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary: \nThis paper aims to improve adversarial robustness of the classifiers in a different perspective than the existing works. Usually, the networks are trained using adversarial examples to improve robustness (adversarial training). This work extend this line of thought and make an input robust to adversaria... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"iclr_2021_mRNkPVHyIVX",
"iclr_2021_mRNkPVHyIVX",
"RRG9jovfApP",
"ihQDpACXTTo",
"eeOS5docO4-",
"QJB9URcAEkw",
"0CRXLScNzZN",
"QJB9URcAEkw",
"njwSABZ0weV",
"AdbtY9xuKUr",
"83Ae9CMs8nU",
"iclr_2021_mRNkPVHyIVX",
"QJB9URcAEkw",
"mB3r-tQYlYh",
"iclr_2021_mRNkPVHyIVX",
"Jxc_u-7b8j",
"zlVu... |
iclr_2021_Nj8EIrSu5O | Divide-and-Conquer Monte Carlo Tree Search | Standard planners for sequential decision making (including Monte Carlo planning, tree search, dynamic programming, etc.) are constrained by an implicit sequential planning assumption: The order in which a plan is constructed is the same in which it is executed. We consider alternatives to this assumption for the... | withdrawn-rejected-submissions | This paper proposes an MCTS approach to goal-conditioned planning, where the search generates high-level sequences of subgoals for low-level policies. This top-level planner is basically a search-based implementation of SSST for potential gain in computational requirements with the help from the advanced search techniq... | train | [
"wputW7cxDk0",
"GrfDJG61jzJ",
"hzHoxQ42Beo",
"lzpcj1jsyEW",
"bejDDNLAuh",
"DoqenLoDnNH",
"Oin8mBqxgQ4",
"sVfBF_qLzx1",
"Osie_CiWgPK",
"F4GfPAT2k-a",
"kdn5e1W6wZ",
"nDT2RLPb7h8",
"E0a6xLrNSgb"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary: The paper tackles the problem of multi-task navigation, where the agent sees a different goal-directed (navigation) task at the start of each episode. The paper is reasonably well-written and easy to understand although the text might benefit from an illustrated example. The primary contributions of the p... | [
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_Nj8EIrSu5O",
"iclr_2021_Nj8EIrSu5O",
"lzpcj1jsyEW",
"bejDDNLAuh",
"nDT2RLPb7h8",
"GrfDJG61jzJ",
"sVfBF_qLzx1",
"wputW7cxDk0",
"F4GfPAT2k-a",
"E0a6xLrNSgb",
"iclr_2021_Nj8EIrSu5O",
"iclr_2021_Nj8EIrSu5O",
"iclr_2021_Nj8EIrSu5O"
] |
iclr_2021_KsN9p5qJN3 | Energy-based Out-of-distribution Detection for Multi-label Classification | Out-of-distribution (OOD) detection is essential to prevent anomalous inputs from causing a model to fail during deployment. Improved methods for OOD detection in multi-class classification have emerged, while OOD detection methods for multi-label classification remain underexplored and use rudimentary techniques. We p... | withdrawn-rejected-submissions | The paper aims to do out-of-distribution (OOD) detection in multi-label classification. However, the challenges of extending energy-based OOD methods in multiclass to multi-label setting is not big. This paper just defines the label-wise free energy. The key challenging issues in MLC is the label dependency. The pape... | train | [
"PvaAWb6ZFKl",
"plK9NhLgtY3",
"nkAh_8bzxj",
"cHccVslskFT",
"ftelJrlZe3T",
"LKg-l95bTDr",
"lyCsjO2FJPT",
"OgJ_8dXohjq",
"Ir96jTCaFL9",
"iSrn012Vm7u",
"mk0vLqT-w1G",
"rEa0pUIn4wR",
"FDatSD9S3N0",
"wqUMaHy89P6",
"3bLBcp0nXV"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the response. We are glad to hear that **the reviewer agrees on the significance of our proposed SumEnergy**, which aggregates information over the labels and is better than methods that only consider the maximum scores. We have addressed the concerns regarding math clarity. \n\n1. [notation] As per sug... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
3,
4,
3
] | [
"ftelJrlZe3T",
"iclr_2021_KsN9p5qJN3",
"FDatSD9S3N0",
"wqUMaHy89P6",
"cHccVslskFT",
"wqUMaHy89P6",
"mk0vLqT-w1G",
"Ir96jTCaFL9",
"iclr_2021_KsN9p5qJN3",
"OgJ_8dXohjq",
"OgJ_8dXohjq",
"3bLBcp0nXV",
"iclr_2021_KsN9p5qJN3",
"iclr_2021_KsN9p5qJN3",
"iclr_2021_KsN9p5qJN3"
] |