| paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2021_m08OHhXxl-5 | Privacy Preserving Recalibration under Domain Shift | Classifiers deployed in high-stakes applications must output calibrated confidence scores, i.e. their predicted probabilities should reflect empirical frequencies. Typically this is achieved with recalibration algorithms that adjust probability estimates based on the real-world data; however, existing algorithms are no... | withdrawn-rejected-submissions | This work considers the problem of calibrating a multi-class classifier while preserving differential privacy. It proposes a method Accuracy Temperature Scaling, that aims to achieve consistency rather than calibration. The method is particularly easy to implement under the constraint of DP. The paper then evaluates t... | train | [
"5qkJNqWJnK",
"drwXt4KMgv",
"ytCmxCax5Kh",
"UcLKzMZB0P",
"rGEvlab4Rq",
"211VMxAg_6v",
"dkbicLSuEuq",
"MBsIQ8ykjII",
"TgGh-KYSNi7"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We are grateful for the detailed review and thank the reviewer for their constructive comments. \n\n*“I don't see how the algorithm addresses the domain shift problem.”*\n\nWe address the domain shift point in the general response above. \n\n*“According to Tables 3 and 4 in the appendix, the Acc-T works well compa... | [
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"211VMxAg_6v",
"dkbicLSuEuq",
"MBsIQ8ykjII",
"TgGh-KYSNi7",
"iclr_2021_m08OHhXxl-5",
"iclr_2021_m08OHhXxl-5",
"iclr_2021_m08OHhXxl-5",
"iclr_2021_m08OHhXxl-5",
"iclr_2021_m08OHhXxl-5"
] |
iclr_2021_Oc-Aedbjq0 | Model Compression via Hyper-Structure Network | In this paper, we propose a novel channel pruning method to solve the problem of compression and acceleration of Convolutional Neural Networks (CNNs). Previous channel pruning methods usually ignore the relationships between channels and layers. Many of them parameterize each channel independently by using gates or sim... | withdrawn-rejected-submissions | This paper proposes a channel pruning method to compress and accelerate pre-trained CNNs.
The reviewers suggest further analysis of the experimental results to help explain the gains in performance, as well as point out some errors in the formulation. The paper is also found to be similar to the meta-pruning method. The authors ... | train | [
"wgwuqqkJ_nZ",
"4KEN_nqU9BN",
"98l1e0Dxmg",
"qQZOnSJbte_",
"JQTnyLIDBPW",
"V1wxfEp22oo",
"Gkg5k0fRqZF",
"SHxFtsKOFZ0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"## Summary\nThis paper proposes hyper-structure network for model compression (channel pruning). The idea is to have a hyper-network generate the *architecture* of the network to be pruned. To do so, the proposed approach use Gumbel softmax together with STE to get around the non-differentiability issue of such de... | [
6,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
5,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
"iclr_2021_Oc-Aedbjq0",
"Gkg5k0fRqZF",
"V1wxfEp22oo",
"wgwuqqkJ_nZ",
"SHxFtsKOFZ0",
"iclr_2021_Oc-Aedbjq0",
"iclr_2021_Oc-Aedbjq0",
"iclr_2021_Oc-Aedbjq0"
] |
iclr_2021_oBmpWzJTCa4 | Meta-Active Learning in Probabilistically-Safe Optimization | Learning to control a safety-critical system with latent dynamics (e.g. for deep brain stimulation) requires judiciously taking calculated risks to gain information. We present a probabilistically-safe, meta-active learning approach to efficiently learn system dynamics and optimal configurations. The key to our approa... | withdrawn-rejected-submissions | This paper was quite contentious. While reviewers appreciated the detailed response by the authors, and there is consensus that the paper addresses a relevant problem and contains interesting ideas, in the end there remain several concerns. The paper provides a complex combination of techniques from active learning, ... | train | [
"DPf2Wj2qtou",
"a9KOhsCa4A9",
"nJcA4aMsBQO",
"CQl8tnESLU6",
"IuEtOAetbBT",
"OfUm684HKEg",
"PQgxSoLHtI5",
"L9OC09CBJdV",
"wmSoys-ikPP",
"jtaUS0D2VDQ",
"YOCt0uTgnZq",
"qPNAPk6BvmQ",
"4dFYjG56mH0",
"Bg6ny050FyY",
"41QB_D34RE",
"CwbnpiFFA0o",
"TRFgf94PeS_"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewer for the reviewer's insightful comments and helpful feedback in improving our paper. Thank you!",
"We would like to thank the reviewer for the reviewer's insightful comments and helpful feedback in improving our paper. Thank you!",
"We hope that our responses to the reviewer’... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"jtaUS0D2VDQ",
"IuEtOAetbBT",
"TRFgf94PeS_",
"iclr_2021_oBmpWzJTCa4",
"YOCt0uTgnZq",
"PQgxSoLHtI5",
"L9OC09CBJdV",
"CwbnpiFFA0o",
"iclr_2021_oBmpWzJTCa4",
"4dFYjG56mH0",
"qPNAPk6BvmQ",
"Bg6ny050FyY",
"wmSoys-ikPP",
"CQl8tnESLU6",
"TRFgf94PeS_",
"iclr_2021_oBmpWzJTCa4",
"iclr_2021_oBm... |
iclr_2021_DGIXvEAJVd | Learning Chess Blindfolded | Transformer language models have made tremendous strides in natural language understanding. However, the complexity of natural language makes it challenging to ascertain how accurately these models are tracking the world state underlying the text. Motivated by this issue, we consider the task of language modeling for t... | withdrawn-rejected-submissions | I thank the authors for their submission and very active participation in the author response period. World state tracking is an important problem that encompasses existing problems like coreference resolution. I agree with R2 and R3 that proposing a novel environment in which we can investigate to what extent Transfor... | test | [
"BMMGNmlo1ge",
"LQmJrTEqC5a",
"QmDeyu3SOIP",
"H6RiRjk3IyA",
"a_wNUuuNIz5",
"cLH-0kgzCF",
"o_tBTgw7vE3",
"_AD5WkeYFtW",
"-EaJH0Qn9dR",
"1Gvj7GQWLUA",
"p6aoZ7TolTe",
"-BVhGmQwBEa",
"-zsQm3NJ2tD",
"dKBUxLgokNR",
"1F2i8m95k5K",
"Q1nqxtbXL8",
"n_C9O0bQdV"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their appreciation of our effort and the increase in score. \nOur claims about the frameworks were based on the intended use case. While it may be possible to use frameworks like TextWorld to just predict observations, this is not what the original work or the follow-up work has done. Mor... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"QmDeyu3SOIP",
"iclr_2021_DGIXvEAJVd",
"dKBUxLgokNR",
"1Gvj7GQWLUA",
"1F2i8m95k5K",
"_AD5WkeYFtW",
"iclr_2021_DGIXvEAJVd",
"-BVhGmQwBEa",
"iclr_2021_DGIXvEAJVd",
"p6aoZ7TolTe",
"Q1nqxtbXL8",
"-zsQm3NJ2tD",
"-EaJH0Qn9dR",
"LQmJrTEqC5a",
"n_C9O0bQdV",
"iclr_2021_DGIXvEAJVd",
"iclr_2021... |
iclr_2021_igkmo23BgzB | End-to-end Quantized Training via Log-Barrier Extensions | Quantization of neural network parameters and activations has emerged as a successful approach to reducing the model size and inference time on hardware that supports native low-precision arithmetic. Fully quantized training would facilitate further computational speed-ups as well as enable model training on embedde... | withdrawn-rejected-submissions | This paper introduced a log-barrier based regularization method to reduce the dynamic range of data types in neural networks. As pointed out by the reviewers, there are many technical issues. The authors agree with the reviewers in the rebuttal, though claimed that they are fixed in the revised version of the paper.
E... | train | [
"pn5QGsQNRn_",
"AMp0fFQSB3Q",
"S-eySQap-mz",
"xmohWjorKSV",
"GD4ku9yfOa7",
"QoA9FRHz9Js",
"TbVd1EXIwS7",
"1WBqk7FccDI",
"a0S-2hRcDUb",
"vJG3eLE8lR"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors introduce a log-barrier extension loss term enforcing soft constraints on the range of values to enable fully end-to-end quantization-aware training. \n\nStrengths of the paper: \n\n- The paper addresses an important topic, because there are increasing concerns in performing fully end-to-end low precis... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
3
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"iclr_2021_igkmo23BgzB",
"1WBqk7FccDI",
"iclr_2021_igkmo23BgzB",
"a0S-2hRcDUb",
"1WBqk7FccDI",
"pn5QGsQNRn_",
"vJG3eLE8lR",
"iclr_2021_igkmo23BgzB",
"iclr_2021_igkmo23BgzB",
"iclr_2021_igkmo23BgzB"
] |
iclr_2021_zM6fevLxIhI | Variational Structured Attention Networks for Dense Pixel-Wise Prediction | State-of-the-art performances in dense pixel-wise prediction tasks are obtained with specifically designed convolutional networks. These models often benefit from attention mechanisms that allow better learning of deep representations. Recent works showed the importance of estimating both spatial- and channel-wise atte... | withdrawn-rejected-submissions | Reviewers were concerned with the novelty, although they appreciated the SOTA results in extensive experiments. | train | [
"S01pt94K-nR",
"MhcOaTdIYt",
"OKq3y9U6OHP",
"ER9L4uhk1Y",
"4MEuz4fNJ6",
"lU8q6XsyYqG",
"k3R53120e8L",
"GN46_RX6O81",
"_3gW7h79Mg3"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"##########################################################################\n\nSummary:\n\nThis paper proposes the VarIational STructured Attention networks (VISTA-Net), which improves pervious SOTA models for dense pixel-wise prediction tasks. The proposed VISTA-Net is featured by two aspects: 1) A new structured ... | [
5,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_zM6fevLxIhI",
"k3R53120e8L",
"GN46_RX6O81",
"S01pt94K-nR",
"_3gW7h79Mg3",
"iclr_2021_zM6fevLxIhI",
"iclr_2021_zM6fevLxIhI",
"iclr_2021_zM6fevLxIhI",
"iclr_2021_zM6fevLxIhI"
] |
iclr_2021_CLYe1Yke1r | Box-To-Box Transformation for Modeling Joint Hierarchies | Learning representations of entities and relations in knowledge graphs is an active area of research, with much emphasis placed on choosing the appropriate geometry to capture tree-like structures. Box embeddings (Vilnis et al., 2018; Li et al., 2019; Dasgupta et al., 2020), which represent concepts as n-dimensional hy... | withdrawn-rejected-submissions | The paper is concerned with modeling multi-relational data with joint hierarchical structure. For this purpose, the authors extend box embeddings to multi-relational settings, supporting the modeling of cross-hierarchy edges and generalizing from a subset of the transitive reduction. The reviewers highlight that the pa... | train | [
"VqGcXOwnSVN",
"So3IRn5mj75",
"PxreRPmmH3e",
"q_lAfHcn7T2",
"jy3Hd8t7cLp",
"zMwA_nkQYwQ",
"N7Th-ffpu6B",
"hF9X6gJOTvU",
"WKVTL2Mec8a",
"uXj6EFkcKFj",
"ERtahUzE4F-",
"uUKgZ9J5mCP",
"G0mfqMT-s9y",
"eRP16290P4b",
"Khrm-tvfVEL",
"QCMSzfv8AM1",
"nqstfm0ADkb"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"As requested, we have also created an evaluation involving 3 hierarchical relations:\nIsA (hypernym + instance_hypernym)\nPartOf (part_meronym + substance_meronym)\nmember_meronym\nWe include compositional edges between member_meronym and IsA in the same manner as for PartOf and IsA. Preliminary experiments on a r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"WKVTL2Mec8a",
"PxreRPmmH3e",
"q_lAfHcn7T2",
"jy3Hd8t7cLp",
"zMwA_nkQYwQ",
"N7Th-ffpu6B",
"hF9X6gJOTvU",
"ERtahUzE4F-",
"uXj6EFkcKFj",
"Khrm-tvfVEL",
"eRP16290P4b",
"QCMSzfv8AM1",
"nqstfm0ADkb",
"iclr_2021_CLYe1Yke1r",
"iclr_2021_CLYe1Yke1r",
"iclr_2021_CLYe1Yke1r",
"iclr_2021_CLYe1Y... |
iclr_2021_s9788-pPB2 | LLBoost: Last Layer Perturbation to Boost Pre-trained Neural Networks | While deep networks have produced state-of-the-art results in several domains from image classification to machine translation, hyper-parameter selection remains a significant computational bottleneck. In order to produce the best possible model, practitioners often search across random seeds or use ensemble metho... | withdrawn-rejected-submissions | Though the method suggested in this paper is interesting, theoretically motivated, and resulted in some practical improvement, the reviewers ultimately had low scores. The reasons for this are:
1) The improvements obtained by this method were rather small, especially on the standard datasets (CIFAR, Imagenet).
2) In th... | train | [
"rJSvYrPqqah",
"Xz7QRwEV7Pg",
"eTz8VZtRE1k",
"i_YXyE-lNDf",
"iC3eDLtA6aN",
"oawRCqAv73r",
"vQ8z7x7-UNY",
"KJK22acmIIV",
"_UKWrgNl8BG",
"Ee8-H-wk1Et",
"4Q2ivUcn6X2",
"xhk4-wg9ni",
"MKj0UNxTPvl"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"################################################################\n\nSummary:\n\nThis paper provided an efficient algorithm (LLBoost) to boost the validation accuracy without spending too much time tuning hyperparameter. The algorithm is theoretically and empirically guaranteed.\n\n#################################... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"iclr_2021_s9788-pPB2",
"4Q2ivUcn6X2",
"_UKWrgNl8BG",
"oawRCqAv73r",
"4Q2ivUcn6X2",
"vQ8z7x7-UNY",
"KJK22acmIIV",
"Ee8-H-wk1Et",
"rJSvYrPqqah",
"MKj0UNxTPvl",
"xhk4-wg9ni",
"iclr_2021_s9788-pPB2",
"iclr_2021_s9788-pPB2"
] |
iclr_2021_piek7LGx7j | Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling | Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors. This approach introduces a trade-off between disentangled representation learning and reconstruction quality since the model... | withdrawn-rejected-submissions | This paper presents a two-step approach to achieve disentangled representation and good reconstruction at the same time in deep generative models: the first step focuses on good disentanglement (e.g., with beta-TCVAE) while possibly sacrificing reconstruction, while the second step focuses on high-qualit... | train | [
"ElX0tIp5hfk",
"19vUcuWuq9W",
"REiV71lXri-",
"Mmxa4WvcakP",
"PwDvQFcWxcV",
"Oi5f6z72x2",
"gzvPHutLmDu",
"QvO-_hL5H7T",
"3Zk-EY68Jxh",
"OJm-XVqmPvd",
"ljrocPyoNI0",
"9Z6glwbD7oN",
"gPK11oyHL4m",
"RwQGb0qjWNY",
"Z9iei1L9RsW",
"jj05VXLJEUn",
"d0_-Lqc62-L"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for the feedback and for increasing your score. We will add the quantitative results for the ablation study in the final version and will update the notation for the models in the appendix.",
"=======================================================================================\n\nSummary :... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"19vUcuWuq9W",
"iclr_2021_piek7LGx7j",
"Mmxa4WvcakP",
"PwDvQFcWxcV",
"gzvPHutLmDu",
"iclr_2021_piek7LGx7j",
"RwQGb0qjWNY",
"gPK11oyHL4m",
"ljrocPyoNI0",
"d0_-Lqc62-L",
"9Z6glwbD7oN",
"jj05VXLJEUn",
"Z9iei1L9RsW",
"19vUcuWuq9W",
"iclr_2021_piek7LGx7j",
"iclr_2021_piek7LGx7j",
"iclr_20... |
iclr_2021_pavee2r1N01 | Provable Robustness by Geometric Regularization of ReLU Networks | Recent work has demonstrated that neural networks are vulnerable to small, adversarial perturbations of their input. In this paper, we propose an efficient regularization scheme inspired by convex geometry and barrier methods to improve the robustness of feedforward ReLU networks. Since such networks are piecewise line... | withdrawn-rejected-submissions | The paper presents a novel regularization scheme for showing provable robustness guarantees for the class of ReLU networks.
The reviews of this paper were mixed but leaning more to reject. Even though geometric approaches to adversarial robustness are appealing there are too many issues left open (see below for detaile... | train | [
"ajroBFYfKnM",
"u1kPsFk8cD3",
"j1pL8i4pow",
"lPW8XmC19C",
"ucwPVnQNGth",
"pIanZZ7J0Ay",
"MIKpdxPvX1",
"hsUFWTeEg5k",
"uHeTfzB52VD"
] | [
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for their constructive comments. We have uploaded a new draft which clarifies questions and addresses issues brought up by all reviewers. \n\n- We have re-written all derivations in the appendix according to R4’s comments. \n- We have included a new section (4.2) clarifying an advantage of u... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2021_pavee2r1N01",
"hsUFWTeEg5k",
"MIKpdxPvX1",
"uHeTfzB52VD",
"pIanZZ7J0Ay",
"iclr_2021_pavee2r1N01",
"iclr_2021_pavee2r1N01",
"iclr_2021_pavee2r1N01",
"iclr_2021_pavee2r1N01"
] |
iclr_2021_v-9E8egy_i | Gated Relational Graph Attention Networks | Relational Graph Neural Networks (GNN) are a class of GNN that are capable of handling multi-relational graphs. Like all GNNs, they suffer from a drop in performance when training deeper networks, which may be caused by vanishing gradients, over-parameterization, and oversmoothing. Previous works have investigated meth... | withdrawn-rejected-submissions | This paper proposes a GNN architecture for multi-relational data to better address long-range dependencies in graphs. The proposed GR-GAT model is a variant of graph attention networks (GAT) with, among other modifications, vector-based edge type embeddings and GRU-type updates. Results are presented on AIFB, AM, and o... | train | [
"MQC1SF8RmJp",
"JEBS70p279y",
"El5akkWU2Ma",
"SHkdF09iHH5",
"ttRbDwJH5X",
"v7T89pn2GdK",
"7R-1Pl01niw",
"lkLR6SBHf_B",
"y4nQV0crk8t",
"18e-vYxXJBh",
"kMfy43gu1w",
"sk-2xxBdxDM",
"QVQqjQzxphp",
"RTLVgcUQL3i"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"The authors propose Gated Relational Graph Attention Nets (GR-GAT), a set of modifications to the GAT architecture in order to make them stronger under long-range relational reasoning, evaluating on various hand-crafted benchmarks as well as a real-world dataset previously explored in the area.\n\nThe GR-GAT tackl... | [
4,
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_v-9E8egy_i",
"iclr_2021_v-9E8egy_i",
"iclr_2021_v-9E8egy_i",
"iclr_2021_v-9E8egy_i",
"QVQqjQzxphp",
"18e-vYxXJBh",
"lkLR6SBHf_B",
"MQC1SF8RmJp",
"RTLVgcUQL3i",
"El5akkWU2Ma",
"MQC1SF8RmJp",
"JEBS70p279y",
"JEBS70p279y",
"iclr_2021_v-9E8egy_i"
] |
iclr_2021_LvJ8hLSusrv | Gradient-based tuning of Hamiltonian Monte Carlo hyperparameters | Hamiltonian Monte Carlo (HMC) is one of the most successful sampling methods in machine learning. However, its performance is significantly affected by the choice of hyperparameter values, which require careful tuning. Existing approaches for automating this task either optimise a proxy for mixing speed or consider the... | withdrawn-rejected-submissions | This paper proposes a tuning strategy for Hamiltonian Monte Carlo (HMC). The proposed algorithm optimizes a modified variational objective over the T step distribution of an HMC chain. The proposed scheme is evaluated experimentally.
All of the reviewers agreed that this is an important problem and that the proposed m... | train | [
"OdoXuwwPFBN",
"aWMQXN4Uhmb",
"tVP_6Ym9b3",
"5StFlE4KNFN",
"Is6gT1EKfAU",
"1DATVv1VmxS",
"ETWCGqkBGbc",
"RyIvJ8WSP-X",
"MdEYCRQ0wNb",
"Q-LXaPQ1Wjy",
"L9nl3U8Eh1",
"UgKA9qNWfJw",
"5fGzsvKcotc",
"OXuCXTBvEqC"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\n### Summary:\n\nThis paper proposes a variational inference based framework to tune some of the hyper-parameters of HMC algorithms automatically. The authors drop the entropy term from regular ELBO formulation, which facilitates a gradient-based approach. However, dropping this term requires extra care for which... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"iclr_2021_LvJ8hLSusrv",
"1DATVv1VmxS",
"5StFlE4KNFN",
"RyIvJ8WSP-X",
"L9nl3U8Eh1",
"MdEYCRQ0wNb",
"UgKA9qNWfJw",
"UgKA9qNWfJw",
"5fGzsvKcotc",
"OdoXuwwPFBN",
"OXuCXTBvEqC",
"iclr_2021_LvJ8hLSusrv",
"iclr_2021_LvJ8hLSusrv",
"iclr_2021_LvJ8hLSusrv"
] |
iclr_2021_--rcOeCKRh | CROSS-SUPERVISED OBJECT DETECTION | After learning a new object category from image-level annotations (with no object bounding boxes), humans are remarkably good at precisely localizing those objects. However, building good object localizers (i.e., detectors) currently requires expensive instance-level annotations. While some work has been done on learni... | withdrawn-rejected-submissions | After the rebuttal phase, all scores are borderline (6) or negative (4). Among the most confident reviewers (confidence 5), one gives 6 and one gives 4. The reviewer with confidence 4 gives overall score 6 but states they cannot support the paper. There were several concerns about the novelty of the task and method, th... | test | [
"y3sAjjDew_3",
"6aIxHHNHztI",
"uVowNsXpHK",
"iPMUAH96CRX",
"xnJowpRfADS",
"B4utTXV7d_",
"xSEsCSMMuWx",
"WQ0KpdtZ6_7",
"9s2u0o1CLTo",
"VIhQw76tTY",
"joZJrrjMxV2",
"sfdV4h2xpzl"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Paper summary:\nThe paper proposes a new task cross-supervised object detection, which trains object detectors on the combination of base class images with instance-level annotations and novel class image with only image-level annotations. A network with a recognition head which is trained by image-level annotatio... | [
4,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2021_--rcOeCKRh",
"iclr_2021_--rcOeCKRh",
"iPMUAH96CRX",
"B4utTXV7d_",
"6aIxHHNHztI",
"y3sAjjDew_3",
"sfdV4h2xpzl",
"6aIxHHNHztI",
"joZJrrjMxV2",
"y3sAjjDew_3",
"iclr_2021_--rcOeCKRh",
"iclr_2021_--rcOeCKRh"
] |
iclr_2021_mDAZVlBeXWx | Towards Robust and Efficient Contrastive Textual Representation Learning | There has been growing interest in representation learning for text data, based on theoretical arguments and empirical evidence. One important direction involves leveraging contrastive learning to improve learned representations. We propose an application of contrastive learning for intermediate textual feature pairs, ... | withdrawn-rejected-submissions | This work brings improvements to a contrastive learning method for text data by
combining a Wasserstein objective with a "memory bank" strategy
for getting (and updating) hard negative samples.
The approach leads to small but consistent improvements across a variety of representation learning tasks in both supervised and ... | train | [
"n1y1miqDIXU",
"n3cMW_cQWQ",
"yYA7Z0_cg4g",
"6r3AxdqDAFU",
"fpnBB1VXNo",
"UYOux0zj8Su",
"-Q1s-tWMCDH",
"8m58lHxobZD"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"-----------------------------------------\nSummary\n-----------------------------------------\nThis paper proposes an approach to improve (supervised and unsupervised) representation learning for text using constrastive learning. The proposed approach augments standard contrastive learning with: (1) Spectral-norm ... | [
6,
-1,
-1,
-1,
-1,
6,
3,
5
] | [
3,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2021_mDAZVlBeXWx",
"UYOux0zj8Su",
"n1y1miqDIXU",
"-Q1s-tWMCDH",
"8m58lHxobZD",
"iclr_2021_mDAZVlBeXWx",
"iclr_2021_mDAZVlBeXWx",
"iclr_2021_mDAZVlBeXWx"
] |
iclr_2021_SXoheAR0Gz | Fast Partial Fourier Transform | Given a time-series vector, how can we efficiently compute a specified part of Fourier coefficients? Fast Fourier transform (FFT) is a widely used algorithm that computes the discrete Fourier transform in many machine learning applications. Despite the pervasive use, FFT algorithms do not provide a fine-tuning option f... | withdrawn-rejected-submissions | The major complaint about this paper was the lack of a proper comparison to previous work, both theoretically and empirically. Also, a study of the tradeoff between the accuracy and running time would significantly help this paper. Ultimately these were the main reasons for deciding to not accept the paper. The reviewe... | train | [
"5c_lkFdvl7",
"2VfJNaDC8j9",
"Qb1Vl4OB3MZ",
"oScY2eGTjpO",
"bvPOePUfhpt",
"4DW4erLdhoN",
"APlRKoYyXAm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper presents a fast approximate algorithm for partial discrete Fourier transform. Given the input signal $a_0...a_N$ in the time domain the algorithm approximately computes the first $O(M)$ frequencies. The running time is $O(r (N + M \\log M))$, where $r$ a parameter that controls the accuracy of the output... | [
5,
-1,
5,
-1,
-1,
-1,
6
] | [
4,
-1,
3,
-1,
-1,
-1,
5
] | [
"iclr_2021_SXoheAR0Gz",
"iclr_2021_SXoheAR0Gz",
"iclr_2021_SXoheAR0Gz",
"5c_lkFdvl7",
"Qb1Vl4OB3MZ",
"APlRKoYyXAm",
"iclr_2021_SXoheAR0Gz"
] |
iclr_2021_ZHkbzSR56jA | BASGD: Buffered Asynchronous SGD for Byzantine Learning | Distributed learning has become a hot research topic due to its wide application in cluster-based large-scale learning, federated learning, edge computing and so on. Most traditional distributed learning methods typically assume no failure or attack on workers. However, many unexpected cases, such as communication fail... | withdrawn-rejected-submissions | The authors present BASGD, an asynchronous version of SGD that attempts to be robust against Byzantine failures/attacks.
The paper is overall well written and clearly presents the results. Some novelty is present, as there has been limited work on asynchronous algorithms for Byzantine ML.
However, there have been s... | train | [
"xMO55oHz00h",
"-fOfnXQ_GqG",
"hBCSpk9x7Yf",
"jRqqVjq5Zy",
"fRRSuFQw6LA",
"ERLonPuP3x",
"jIbgoyIgpRm",
"RQIv7x3VQJs",
"iyedbwBj2uy",
"knYuonHrviz",
"SQQQJ1Mw1__",
"mtXHg6BIRlo",
"X2--xhRyL4v",
"-iwnXc6c6GC",
"l7PAhFUPDa6"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a practical asynchronous stochastic gradient descent for Byzantine distributed learning where some of transmitted gradients are likely to be replaced by arbitrary vectors. Specifically, the server temporarily stores gradients on multiple (namely $B$) buffers and performs a proper robust aggrega... | [
5,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_ZHkbzSR56jA",
"iclr_2021_ZHkbzSR56jA",
"xMO55oHz00h",
"fRRSuFQw6LA",
"ERLonPuP3x",
"iyedbwBj2uy",
"l7PAhFUPDa6",
"l7PAhFUPDa6",
"knYuonHrviz",
"SQQQJ1Mw1__",
"mtXHg6BIRlo",
"-fOfnXQ_GqG",
"-iwnXc6c6GC",
"iclr_2021_ZHkbzSR56jA",
"iclr_2021_ZHkbzSR56jA"
] |
iclr_2021_qRdED5QjM9e | Distributionally Robust Learning for Unsupervised Domain Adaptation | We propose a distributionally robust learning (DRL) method for unsupervised domain adaptation (UDA) that scales to modern computer-vision benchmarks. DRL can be naturally formulated as a competitive two-player game between a predictor and an adversary that is allowed to corrupt the labels, subject to certain const... | withdrawn-rejected-submissions | The paper proposes a rather complex algorithm for unsupervised domain adaptation.
While the paper provides a detailed explanation, some motivation, and some experimental results,
it does not provide any theoretical guarantees for its performance. More concerning, since domain adaptation
can only succeed when there is a cl... | train | [
"A6kx0eNqBN",
"6bzyLsOU2Tm",
"UBmewLf8ajQ",
"k_9Sa_lqeyW",
"-R3HZVynElR",
"-nMDSrTbZtl",
"XlE85-w7eVX"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your detailed comments! Regarding the gradient approximation and the missing office results, please kindly refer to our general response. We now address your additional concerns:\n\n**Regarding using a discriminator as a density ratio estimation:** We added references for discriminative learning for two... | [
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"-nMDSrTbZtl",
"iclr_2021_qRdED5QjM9e",
"XlE85-w7eVX",
"-R3HZVynElR",
"iclr_2021_qRdED5QjM9e",
"iclr_2021_qRdED5QjM9e",
"iclr_2021_qRdED5QjM9e"
] |
iclr_2021_XdprrZhBk8 | On the Predictability of Pruning Across Scales | We show that the error of iteratively-pruned networks empirically follows a scaling law with interpretable coefficients that depend on the architecture and task. We functionally approximate the error of the pruned networks, showing that it is predictable in terms of an invariant tying width, depth, and pruning level, s... | withdrawn-rejected-submissions | The paper provides a functional approximation of the error of ResNets and VGGs pruned with IMP and SynFlow on CIFAR-10 and ImageNet, showing that it is predictable in terms of an invariant tying width, depth, and pruning level. In particular, it formulates the test error as a function of the density of the network aft... | train | [
"yIozBfW8RpX",
"o0P84-9zZlf",
"F-Lkjqv5p4W",
"HBK3GzDMdoE",
"kh-7iq_pQ4M",
"0FUnsMqPtdz",
"8nES6KEw8G",
"-7c83fwkLx",
"JIANHEB56t",
"m5gYYoD8u1w",
"Y45Wb7CuWvt"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies how to estimate the performance of pruned networks using regression models. The authors first empirically observe that there exist three distinct regions of sparsity: (1) In the low-sparsity regime, pruning does not decrease the accuracy (2) In the mid-sparsity regime, a linear relationship betw... | [
6, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6] | [3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3] | ["iclr_2021_XdprrZhBk8", "8nES6KEw8G", "HBK3GzDMdoE", "0FUnsMqPtdz", "Y45Wb7CuWvt", "m5gYYoD8u1w", "JIANHEB56t", "yIozBfW8RpX", "iclr_2021_XdprrZhBk8", "iclr_2021_XdprrZhBk8", "iclr_2021_XdprrZhBk8"] |
iclr_2021_ErrNJYcVRmS | F^2ed-Learning: Good Fences Make Good Neighbors | In this paper, we present F^2ed-Learning, the first federated learning protocol simultaneously defending against both semi-honest server and Byzantine malicious clients. Using a robust mean estimator called FilterL2, F^2ed-Learning is the first FL protocol with dimension-free estimation error against Byzantine maliciou... | withdrawn-rejected-submissions | The paper considers federated learning in the presence of malicious clients and a semi-honest centralized server. The authors provide a novel secure aggregation technique (i.e. split the clients into shards, and securely aggregate each shard’s updates, and the estimating things based on the updates from different shard... | train | [
"CgKziB55nPO", "pEgQ0yYGvry", "7ZsiAIt2L-4", "NJz4ZoEIBY", "wnH0Ea-vbmJ", "G2lRsoLQoio", "A82ZtX33xiB", "547CphNrSKO", "XV35azcR7iV", "DBZq23grups", "fUZtAFyd9TU", "6Gpop9HYpcz", "MinywGTnBR", "2sAiksoZdAY", "kWx_MuFhX1N", "LUaESQbWFzN", "6D-2VWx8Jl_", "8OACy9WaNG", "Oh7DHiMG2TB"... | ["official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_re... | [
"**Paper summary**\n\nThe paper claims to be the first paper that simultaneously handles Byzantine threats while ensuring privacy in a federated learning setup. One of their main claims is that this is the first algorithm that provides dimension independent robustness guarantees against byzantine threats (I have so... | [
6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 6] | [3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3] | ["iclr_2021_ErrNJYcVRmS", "iclr_2021_ErrNJYcVRmS", "NJz4ZoEIBY", "A82ZtX33xiB", "547CphNrSKO", "XV35azcR7iV", "DBZq23grups", "8OACy9WaNG", "8OACy9WaNG", "2sAiksoZdAY", "6Gpop9HYpcz", "kWx_MuFhX1N", "iclr_2021_ErrNJYcVRmS", "pEgQ0yYGvry", "LUaESQbWFzN", "MinywGTnBR", "Oh7DHiMG2TB",
"... |
iclr_2021_KIS8jqLp4fQ | On Dynamic Noise Influence in Differential Private Learning | Protecting privacy in learning while maintaining the model performance has become increasingly critical in many applications that involve sensitive data. Private Gradient Descent (PGD) is a commonly used private learning framework, which adds noise according to the Differential Privacy protocol.Recent studies show that... | withdrawn-rejected-submissions | This work proposes algorithms for solving ERM with continuous losses satisfying the PL condition. The first algorithm achieves that by using a chainging noise variance and thus the paper frames the contribution in terms of the advantages of non-constant noise rate.
The problem is a well-studied one and the result is a... | train | [
"06hVqjtcuLn", "iL215gtD54-", "jO4HmWaUWL", "4D2h78AmwLZ", "AE9ooYBCbRr", "lWT9croVDn7", "oiz7c_nnV0g", "C1crV0zzvL8", "ugLHN65xHL", "XfNyPaOPhGN", "GfApTkH2fZv", "IHhdCQJj_c", "wgxBUPwpRW", "QK-zazNCyof", "7wSYL8znBAB", "Kim9mEkpMB", "PYfWHpkd-VF"] | ["official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"Summary: The paper investigates the idea that a dynamic privacy schedule of decreasing noise can help private gradient descent. The main contribution is a theoretical analysis of a dynamic privacy schedule which helps reduce the utility upper bound of private gradient descent (with or without momentum).\n\nStrengt... | [
5, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4] | [4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3] | ["iclr_2021_KIS8jqLp4fQ", "iclr_2021_KIS8jqLp4fQ", "C1crV0zzvL8", "iclr_2021_KIS8jqLp4fQ", "C1crV0zzvL8", "oiz7c_nnV0g", "QK-zazNCyof", "ugLHN65xHL", "XfNyPaOPhGN", "GfApTkH2fZv", "wgxBUPwpRW", "Kim9mEkpMB", "PYfWHpkd-VF", "06hVqjtcuLn", "4D2h78AmwLZ", "iclr_2021_KIS8jqLp4fQ",
"iclr_2... |
iclr_2021_1OCwJdJSnSA | Disentangled cyclic reconstruction for domain adaptation | The domain adaptation problem involves learning a unique classification or regression model capable of performing on both a source and a target domain. Although the labels for the source data are available during training, the labels in the target domain are unknown. An effective way to tackle this problem lies in extr... | withdrawn-rejected-submissions | This paper investigates the problem of unsupervised domain adaptation and proposes a framework based on a specific type of disentangled representations learning. The paper is well written and the proposed method seems plausible. However, according to Reviewers #3 and #4, the proposed framework does not seems to be suff... | val | [
"tM0tNkNWY2", "KvSYJN9oZP", "z6-uSVSLZQ", "pPV1fYP_Ff", "0NK3u-cZKNj", "7--ncpdOR-Q", "pLwIV8DFpQV", "_NFHdsDONeF", "ikd2lIq2iY7", "-3pXJjQneXg", "F1w1o5M4rk", "Y5PxOafEBMT", "qL24KMBMDaF", "hk1ZuxGiZ9W", "5Z38T3YAu6B", "iRDdZMQRk0t"] | ["official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"This paper studies the domain adaptation problem by addressing the challenge of splitting task-specific and task-orthogonal information in the target domain using the proposed disentangled cyclic reconstruction method. The authors further develop a variant for the unsupervised domain adaption (UDA) task. The autho... | [
5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6] | [3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3] | ["iclr_2021_1OCwJdJSnSA", "_NFHdsDONeF", "7--ncpdOR-Q", "F1w1o5M4rk", "iRDdZMQRk0t", "iRDdZMQRk0t", "tM0tNkNWY2", "tM0tNkNWY2", "5Z38T3YAu6B", "5Z38T3YAu6B", "5Z38T3YAu6B", "tM0tNkNWY2", "iRDdZMQRk0t", "iclr_2021_1OCwJdJSnSA", "iclr_2021_1OCwJdJSnSA", "iclr_2021_1OCwJdJSnSA"] |
iclr_2021_IZQm8mMRVqW | Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization | The Heavy Ball Method, proposed by Polyak over five decades ago, is a first-order method for optimizing continuous functions. While its stochastic counterpart has proven extremely popular in training deep networks, there are almost no known functions where deterministic Heavy Ball is provably faster than the simple and... | withdrawn-rejected-submissions | This paper considers the analysis of momentum for nonconvex problems. While this is a worthy direction, as some reviewers pointed out, the examples considered are rather specialized and one can argue they are mostly "convex-like". Therefore it is not clear that these results are generalizable and whether the analysis o... | train | [
"wEgtRwYmyv7", "zXTuQr9RSYM", "5Ty3b2vwgXE", "zMLlVvTf_8", "c6Fafg0rmJB", "JDRN__TVRj0", "U73oEMpFqv5", "kxBwE9pXUy"] | ["author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"We thank the reviewer for the positive review and the feedback.\n\nWe first want to be clear that even if there are an infinite number of samples, the optimization landscape is still non-convex, since there are two isolated global minimum $\\pm w_*$, and at least a local maximum, and a saddle point as Chen et al. ... | [
-1, -1, -1, -1, 6, 7, 4, 6] | [-1, -1, -1, -1, 3, 3, 4, 4] | ["c6Fafg0rmJB", "kxBwE9pXUy", "U73oEMpFqv5", "JDRN__TVRj0", "iclr_2021_IZQm8mMRVqW", "iclr_2021_IZQm8mMRVqW", "iclr_2021_IZQm8mMRVqW", "iclr_2021_IZQm8mMRVqW"] |
iclr_2021_CNA6ZrpNDar | On the Decision Boundaries of Neural Networks. A Tropical Geometry Perspective | This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piecewise linear non-linearity activations. We use tropical geometry, a new development in the area of algebraic geometry, to characterize the decision boundaries of a simple network of the form (Affine, Re... | withdrawn-rejected-submissions | This paper studies the decision boundaries of shallow ReLU network using the formalism of tropical geometry. Its main takeaway is to provide a new interpretation of the lottery ticket hypothesis in terms of network pruning strategies that preserve certain geometric structure.
Reviewers were appreciative of the clarit... | val | [
"bDSkERqLueW", "slCEFDKqZtc", "ItfFp20oDqK", "_RjDqw7-MXt", "RiizgYGL0il", "WBBO31zK4zZ", "Cxse4YqCBkj", "7Cg7wvc5zEo", "1FjLPpFmBl_"] | ["official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"Building upon the observation of Zhang (2018) that a class of deep neural networks have decision boundaries that correspond to tropical rational maps, the paper proposes new methods for pruning networks and constructing adversarial attacks that are based on the idea of minimizing or maximizing the changes to the d... | [
6, 6, -1, -1, -1, -1, -1, 6, 7] | [3, 4, -1, -1, -1, -1, -1, 1, 3] | ["iclr_2021_CNA6ZrpNDar", "iclr_2021_CNA6ZrpNDar", "bDSkERqLueW", "1FjLPpFmBl_", "slCEFDKqZtc", "7Cg7wvc5zEo", "_RjDqw7-MXt", "iclr_2021_CNA6ZrpNDar", "iclr_2021_CNA6ZrpNDar"] |
iclr_2021_c77KhoLYSwF | Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks | Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference. A recent survey of industry practitioners found that data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks. However, we find that the impressive performanc... | withdrawn-rejected-submissions | This paper generated significant discussion and division amongst the reviewers. On the positive side, some reviewers enjoyed both contributions, feeling the further empirical investigation of existing attacks to be interesting, and the creation of a benchmark to be very useful. On the negative side, no new positive res... | train | [
"9DWRvIxVPQu", "5mEm895KQWN", "669cJqUZGme", "JqmjT8CD8T1", "tcuKmz331EY", "5wuJCCHCvE", "9VhGi03_y6M", "oApdA7_MN4x", "OvP4cdmNanw"] | ["official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"The authors study a number of existing data poisoning attacks, ablating different design choices of these attacks and evaluating them on a common benchmark. They find that many of these attacks are quite brittle to changes in their original experimental evaluation and fail to generalize to more realistic/challengi... | [
7, -1, -1, -1, -1, -1, 8, 5, 4] | [4, -1, -1, -1, -1, -1, 4, 4, 5] | ["iclr_2021_c77KhoLYSwF", "OvP4cdmNanw", "9VhGi03_y6M", "9DWRvIxVPQu", "oApdA7_MN4x", "OvP4cdmNanw", "iclr_2021_c77KhoLYSwF", "iclr_2021_c77KhoLYSwF", "iclr_2021_c77KhoLYSwF"] |
iclr_2021_awMgJJ9H-0q | Generative Learning With Euler Particle Transport | We propose an Euler particle transport (EPT) approach for generative learning. The proposed approach is motivated by the problem of finding the optimal transport map from a reference distribution to a target distribution characterized by the Monge-Ampere equation. Interpreting the infinitesimal linearization of the M... | withdrawn-rejected-submissions | The paper proposes a discretization of Wasserstein gradient flow with an euler scheme, and propose a way to estimate each step of the euler scheme using ratio estimators from samples regularized with gradient penalties. Statistical bounds are given to bound the estimated flows from the wasserstein flow.
Reviewers ha... | train | [
"VnTeGP_skk4", "DqIkxccaOv0", "4xxrHZTDVHD", "KPJnNfwer_", "8MC1oRwt5VN", "HMMmLZs_a4E", "ndZ_wxMypj", "t34UR0IuvG3", "UUD9j9cqbK_", "6OwqvjwEuvd", "gvHEJXVFkQM", "D_8YGn6WZH"] | ["official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer"] | [
"Pros:\n* The exploration to use a flow/transport map for generative modeling is inspiring and worth encouraging.\n* The writing roughly follows a clear logic flow.\n* The proofs seem valid to me.\n\nCons:\n* Method.\n - My major concern is on whether the method indeed implements the optimal transport map. As I un... | [
5, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 6] | [4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4] | ["iclr_2021_awMgJJ9H-0q", "4xxrHZTDVHD", "ndZ_wxMypj", "HMMmLZs_a4E", "iclr_2021_awMgJJ9H-0q", "6OwqvjwEuvd", "t34UR0IuvG3", "UUD9j9cqbK_", "VnTeGP_skk4", "8MC1oRwt5VN", "D_8YGn6WZH", "iclr_2021_awMgJJ9H-0q"] |
iclr_2021_bMCfFepJXM | BRAC+: Going Deeper with Behavior Regularized Offline Reinforcement Learning | Online interactions with the environment to collect data samples for training a Reinforcement Learning agent is not always feasible due to economic and safety concerns. The goal of Offline Reinforcement Learning (RL) is to address this problem by learning effective policies using previously collected datasets. Standard... | withdrawn-rejected-submissions | The paper presents an interesting perspective on improving offline RL within BRAC framework.
Given the improvements over BRAC, the paper is well organized and easy to understand.
The overall results pique interest in comparison with more recent Offline/Batch RL papers: BRAC, BEAR, CQL.
The results in this paper bri... | train | [
"kpdhgwiUK_", "pzfgB4iHJXQ", "PlOkG6YgXKC", "3st-qc2uqnk", "r-WgmWRz9nQ", "djdLzjd74DV", "MhGJSHj5yaG", "QLhY2lWHLu", "JPuTH5ltp0f", "GZOnKl_5MYg", "Eu3CFfKyxZe"] | ["author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"Dear reviewers,\n\nWe really appreciate your effort on providing valuable feedbacks for our paper. We just uploaded the final version of our rebuttal revision. Thanks everyone!",
"The paper proposes a number of improvements to the standard behavior-regularized offline RL paradigm introduced in BRAC (Wu 2019). Sp... | [
-1, 5, -1, 7, -1, -1, -1, -1, -1, 7, 5] | [-1, 4, -1, 4, -1, -1, -1, -1, -1, 4, 4] | ["iclr_2021_bMCfFepJXM", "iclr_2021_bMCfFepJXM", "r-WgmWRz9nQ", "iclr_2021_bMCfFepJXM", "pzfgB4iHJXQ", "Eu3CFfKyxZe", "3st-qc2uqnk", "GZOnKl_5MYg", "iclr_2021_bMCfFepJXM", "iclr_2021_bMCfFepJXM", "iclr_2021_bMCfFepJXM"] |
iclr_2021_tyd9yxioXgO | Compositional Video Synthesis with Action Graphs | Videos of actions are complex signals, containing rich compositional structure. Current video generation models are limited in their ability to generate such videos. To address this challenge, we introduce a generative model (AG2Vid) that can be conditioned on an Action Graph, a structure that naturally represents the ... | withdrawn-rejected-submissions | The paper focuses on the task of conditional video synthesis starting from a single image. The authors propose an *Action Graph* to model the configuration of objects, their interactions, and actions. They show promising results on two benchmark datasets (one synthetic and another realistic).
Based on the reviewers' ... | train | [
"TBsQHrJEGcw", "ElqC_vCY5iw", "gMrZuVZGyfp", "yKMvH7ipXHg", "uFMKYSbHgDW", "msOosII84J9", "k4pqYo1Sdg", "-7hQ0hrPGYR", "ENi1PfOPg69", "e3JmR7Z3hLD", "0KS4fwOThqi"] | ["official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"This paper proposes a generative method (AG2Vid) that generates video conditioned by the first frame, first layout and an action graph. An action graph is defined such that nodes represent objects in the scene and edges represent actions. To capture the temporal dynamics, each pairwise connection is enriched with ... | [
7, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6] | [3, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4] | ["iclr_2021_tyd9yxioXgO", "yKMvH7ipXHg", "0KS4fwOThqi", "uFMKYSbHgDW", "TBsQHrJEGcw", "e3JmR7Z3hLD", "ENi1PfOPg69", "iclr_2021_tyd9yxioXgO", "iclr_2021_tyd9yxioXgO", "iclr_2021_tyd9yxioXgO", "iclr_2021_tyd9yxioXgO"] |
iclr_2021__CrmWaJ2uvP | Recurrent Neural Network Architecture based on Dynamic Systems Theory for Data Driven Modelling of Complex Physical Systems | While dynamic systems can be modelled as sequence-to-sequence tasks by deep learning using different network architectures like DNN, CNN, RNNs or neural ODEs, the resulting models often provide poor understanding of the underlying system properties. We propose a new recurrent network architecture, the Dynamic Recurre... | withdrawn-rejected-submissions | This paper proposes a new RNN architecture called Dynamic RNN which is based on dynamic system identification.
Reviewers questioned the expressivity of the proposed model, practical application/impact of the proposed model, and interpretability of the proposed model. Even though the authors attempted to convince the r... | train | [
"wa3KldwentX", "tOQznsiorVD", "enqWtUBmsaA", "nAhkzCgMdjD", "lzkFfPeh1-C", "3oSIzHJI2HE", "Fk8xgggCKGE", "tGHf_ON5HBh", "I3rNKMigZLP"] | ["author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"Dear reviewers, thank you all for reading our paper and taking the time to write such helpful and constructive feedback. While extending the interpretation for our model during this rebuttal phase, we detected a mistake in the implementation of one variant of the network, namely the one that is trained for one sa... | [
-1, -1, -1, -1, -1, 3, 6, 4, 3] | [-1, -1, -1, -1, -1, 5, 4, 5, 3] | ["iclr_2021__CrmWaJ2uvP", "Fk8xgggCKGE", "3oSIzHJI2HE", "tGHf_ON5HBh", "I3rNKMigZLP", "iclr_2021__CrmWaJ2uvP", "iclr_2021__CrmWaJ2uvP", "iclr_2021__CrmWaJ2uvP", "iclr_2021__CrmWaJ2uvP"] |
iclr_2021_PEcNk5Bad7z | Learning Irreducible Representations of Noncommutative Lie Groups | Recent work has constructed neural networks that are equivariant to continuous symmetry groups such as 2D and 3D rotations. This is accomplished using explicit group representations to derive the equivariant kernels and nonlinearities. We present two contributions motivated by frontier applications of equivariance beyo... | withdrawn-rejected-submissions | This work investigates an algorithm to learn representations of Lie groups. It first learns a representation of the Lie algebra by enforcing the Jacobi identity using known structure coefficients. Then obtains the group representation via matrix exponentiation.
The paper also proposes a Poincaré-equivariant neural netw... | train | [
"n0v_aWO7Zy", "ye8156zthDX", "GQPOMjRhMo6", "thwrOScmVZ2", "UZxhUTvoeUz", "2MkO_IgSytp", "8lMNXKjPPr"] | ["author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"Thank you for your review and pointing out these typos, and for asking how the structure constants (the entries of $A$) are obtained. It is simple to compute the structure constants. One approach was demonstrated by (Rao and Ruderman 1999) which we cite in section 4.1.\nWe have added an additional comment to make ... | [
-1, -1, -1, -1, 4, 5, 5] | [-1, -1, -1, -1, 3, 2, 4] | ["UZxhUTvoeUz", "2MkO_IgSytp", "UZxhUTvoeUz", "8lMNXKjPPr", "iclr_2021_PEcNk5Bad7z", "iclr_2021_PEcNk5Bad7z", "iclr_2021_PEcNk5Bad7z"] |
iclr_2021_Ao2-JgYxuQf | Active Tuning | We introduce Active Tuning, a novel paradigm for optimizing the internal dynamics of recurrent neural networks (RNNs) on the fly. In contrast to the conventional sequence-to-sequence mapping scheme, Active Tuning decouples the RNN's recurrent neural activities from the input stream, using the unfolding temporal gradien... | withdrawn-rejected-submissions | This paper introduces a method to estimate dynamics parameters in recurrent structured models during the learning process. All three reviewers agreed that the idea is interesting and the proposed method could be potentially useful. However, two of the three reviewers have a serious concern about the lack of comparison ... | train | [
"NEYZ57JXPTw", "Fhwy3LHaFZQ", "u5a-1EUfxFJ", "zqwd9ItiONM", "bSSV8bTbu-n", "Urp6dKu-zXu", "fhawELqBPqq", "7t7g6Xk4Og_"] | ["author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"To complete the comparison with TCN and missing data values, we have just uploaded another paper revision.\n\nThis revision includes now TCN results for all three problem domains considered. While TCN partially outperforms the respective RNNs when trained on similar denoising levels, Active Tuning applied to a noi... | [
-1, -1, -1, -1, -1, 8, 3, 5] | [-1, -1, -1, -1, -1, 3, 4, 3] | ["iclr_2021_Ao2-JgYxuQf", "Urp6dKu-zXu", "fhawELqBPqq", "7t7g6Xk4Og_", "iclr_2021_Ao2-JgYxuQf", "iclr_2021_Ao2-JgYxuQf", "iclr_2021_Ao2-JgYxuQf", "iclr_2021_Ao2-JgYxuQf"] |
iclr_2021_pVwU-8cdjQQ | Unsupervised Video Decomposition using Spatio-temporal Iterative Inference | Unsupervised multi-object scene decomposition is a fast-emerging problem in representation learning. Despite significant progress in static scenes, such models are unable to leverage important dynamic cues present in video. We propose a novel spatio-temporal iterative inference framework that is powerful enough to join... | withdrawn-rejected-submissions | The reviewers appreciate the spatio-temporal formulation of amortised iterative inference.
However, the paper does not clearly state what is the end goal: if the end goal is video object segmentation, it should compared against other unsupervised object segmentation methods. If the goal is representation learning, it s... | test | [
"jTZtCJoBra", "a7qILVg9CW", "1pQijHm_OQ", "ZUj8olvShUV", "XYy7R8mW9Lz", "gSpsQPmJ0b4", "gWT46lZUis8", "j-GumM7z1-p", "SO-v-SZXxI", "lzuarhYynF1", "qRd1qEtUQZ-"] | ["official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer"] | [
"In this paper, the authors propose to better explicitly utilize the sequential information in the video to improve the performance of unsupervised scene decomposition in video. Concretely, 2D LSTM is used to combine the advantages of iterative inference and temporal information. By appropriately using the inferred... | [
6, 4, 6, -1, -1, -1, -1, -1, -1, -1, 7] | [3, 3, 4, -1, -1, -1, -1, -1, -1, -1, 4] | ["iclr_2021_pVwU-8cdjQQ", "iclr_2021_pVwU-8cdjQQ", "iclr_2021_pVwU-8cdjQQ", "XYy7R8mW9Lz", "gSpsQPmJ0b4", "a7qILVg9CW", "jTZtCJoBra", "qRd1qEtUQZ-", "1pQijHm_OQ", "iclr_2021_pVwU-8cdjQQ", "iclr_2021_pVwU-8cdjQQ"] |
iclr_2021_4_57x7xhymn | Action Concept Grounding Network for Semantically-Consistent Video Generation | Recent works in self-supervised video prediction have mainly focused on passive forecasting and low-level action-conditional prediction, which sidesteps the problem of semantic learning. We introduce the task of semantic action-conditional video prediction, which can be regarded as an inverse problem of action recognit... | withdrawn-rejected-submissions | This paper presents work on semantic action-conditioned video prediction. The reviewers appreciated the interesting task and use of capsule networks to address it. Concerns were raised over generalization ability of the proposed approach, points on clarity, scalability, and handling of uncertainty/diversity by the me... | train | [
"Bdc-VJZv6--", "zqPDfckiTvX", "oMgIis-AO2i", "uY_jZdhWFFS", "OzFEX-YFFOr", "J6IbaW6IKS9", "Wkq677uvw_G", "Yipu3YHcBVq", "qmf4PCKQs5X", "Y4yx6RAY67R"] | ["official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer"] | [
"Summary\n\n\nPros\n+ New interesting task for video synthesis\n+ New capsule based neural network to learn relationships\n+ Outperforms baselines in pixel based metrics and user study\n\nComments / Questions:\n- Stopping criteria:\nI may have missed it, but I cannot find what is the stopping criteria of the video ... | [
5, 5, -1, -1, -1, -1, -1, -1, -1, 5] | [5, 4, -1, -1, -1, -1, -1, -1, -1, 5] | ["iclr_2021_4_57x7xhymn", "iclr_2021_4_57x7xhymn", "Y4yx6RAY67R", "iclr_2021_4_57x7xhymn", "oMgIis-AO2i", "qmf4PCKQs5X", "zqPDfckiTvX", "Bdc-VJZv6--", "iclr_2021_4_57x7xhymn", "iclr_2021_4_57x7xhymn"] |
iclr_2021_iQxS0S9ir1a | Distributional Generalization: A New Kind of Generalization | We introduce a new notion of generalization--- Distributional Generalization--- which roughly states that outputs of a classifier at train and test time are close as distributions, as opposed to close in just their average error. For example, if we mislabel 30% of dogs as cats in the train set of CIFAR-10, then a ResNe... | withdrawn-rejected-submissions | The paper introduces a notion of distributional generalization, which aims at characterizing aspects of underlying distribution that are learned by a trained predictor. Authors make several interesting conjectures and support them with empirical evidence. Reviewers agreed on the novelty of the ideas; however, the work ... | train | [
"9P_EBzS9-6o", "1H3WjRabGE", "XC-ZsMPHeh1", "TenhOrFCNe", "aSFKTkEcnAf", "rgZ5zTaH6F7", "_623CXN_77", "vN3RJYOK_G", "oVm6p6iuAA3", "9YWrlrqZGX9"] | ["author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"We have updated the PDF with the following additions, following reviewer comments:\n\n- Added Appendix C.7 and Figure 15, to illustrate the concrete quantitative prediction of Conjecture 1. We show experimentally that the TV-distance between the joint distributions $(L(x), f(x))$ and $(L(x), y)$ is at most $\\epsi... | [
-1, -1, -1, -1, -1, -1, 7, 4, 6, 5] | [-1, -1, -1, -1, -1, -1, 3, 3, 3, 3] | ["rgZ5zTaH6F7", "9YWrlrqZGX9", "_623CXN_77", "vN3RJYOK_G", "oVm6p6iuAA3", "iclr_2021_iQxS0S9ir1a", "iclr_2021_iQxS0S9ir1a", "iclr_2021_iQxS0S9ir1a", "iclr_2021_iQxS0S9ir1a", "iclr_2021_iQxS0S9ir1a"] |
iclr_2021_dOiHyqVaFkg | Unsupervised Progressive Learning and the STAM Architecture | We first pose the Unsupervised Progressive Learning (UPL) problem: an online representation learning problem in which the learner observes a non-stationary and unlabeled data stream, and identifies a growing number of features that persist over time even though the data is not stored or replayed. To solve the UPL prob... | withdrawn-rejected-submissions | After reading the reviews, rebuttal, and looking through the paper I do feel that UPL setting is one that we need to consider. However is not clear to me that proposed approach matches the conditions described by the authors. In particular the scalability constraints seem important. I do feel that for the UPL setting m... | train | [
"vPcDeQFQkuM", "RQcmHfQLw1", "m3N_7Ae2OzI", "jlWLX_ZF4ql", "fj6ul2x7rQz", "tS8DS50belD", "ouJ1rdNtUeg", "HkLUa67gVEF", "T8BWROQ0q-", "vH3iNWnPl1"] | ["author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"Thank you for the constructive comments. Please find our responses to your individual comments below. \n\n* The approach has some of the shortcomings of nearest neighbors: efficiency especially at classification/test time (as the number of classes goes up to many 1000s, and required number of centroids per class g... | [
-1, -1, -1, -1, -1, 5, 6, 7, 2, 5] | [-1, -1, -1, -1, -1, 3, 3, 3, 5, 3] | ["ouJ1rdNtUeg", "tS8DS50belD", "HkLUa67gVEF", "T8BWROQ0q-", "vH3iNWnPl1", "iclr_2021_dOiHyqVaFkg", "iclr_2021_dOiHyqVaFkg", "iclr_2021_dOiHyqVaFkg", "iclr_2021_dOiHyqVaFkg", "iclr_2021_dOiHyqVaFkg"] |
iclr_2021_ZVBtN6B_6i7 | Not All Memories are Created Equal: Learning to Expire | Attention mechanisms have shown promising results in sequence modeling tasks that require long-term memory. Recent work has investigated mechanisms to reduce the computational cost of preserving and storing the memories. However, not all content in the past is equally important to remember. We propose Expire-Span, a me... | withdrawn-rejected-submissions | The paper studies the problem of identifying what information to forget in attention mechanisms, with the goal of enabling attention mechanisms to deal with longer contexts. This is a simple yet intuitive extension: self-attention is augmented with an expiration value prediction. Experiments were carried out on NLP a... | train | [
"BnBOom1Xd_U", "VVvapY4GkJa", "Zq9Rx3To9T", "Me6xfcZrr91", "qW2UlpaisbI", "xE2cHh7b-dN", "uPX8CM4JVas", "7jX2QcJU19"] | ["official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer"] | [
"Modified score: thank you authors for your thorough response. Given the new information and baselines, I think this is a promising paper that passes the acceptance threshold.\n\nOverall Quality: The authors present a method to improve the efficiency of transformer models when computing attention over previous time... | [
6, -1, -1, -1, -1, -1, 5, 6] | [3, -1, -1, -1, -1, -1, 4, 4] | ["iclr_2021_ZVBtN6B_6i7", "Zq9Rx3To9T", "iclr_2021_ZVBtN6B_6i7", "uPX8CM4JVas", "BnBOom1Xd_U", "7jX2QcJU19", "iclr_2021_ZVBtN6B_6i7", "iclr_2021_ZVBtN6B_6i7"] |
iclr_2021_FUtMxDTJ_h | Symmetry Control Neural Networks | This paper continues the quest for designing the optimal physics bias for neural networks predicting the dynamics of systems when the underlying dynamics shall be inferred from the data directly. The description of physical systems is greatly simplified when the underlying symmetries of the system are taken into accoun... | withdrawn-rejected-submissions | This paper proposes to learn symmetries of a physical system jointly with its Hamiltonian from data by learning a canonical transformation that render some of the coordinates constant.
The Hamiltonian dynamics and "canonical" transformation are softly enforced via loss terms.
A few experiments are performed demonstrati... | train | [
"8FwPW5RgGnE", "dGsCpxMIWVY", "2JDqRFYmHB", "5QWS9upIeAA", "Q9sbzokEQ0n", "AI1x7Bom6mD", "lONSq3u-fb", "mfQO2UF7f0Q", "_Egkrswxzn", "S4dxT47QqT"] | ["official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [
"The paper proposes a set of loss functions that can make neural networks learn Hamiltonian dynamics with conserved quantities.\nThe research builds on Hamiltonian networks and adds the search for conserved quantities as an auxiliary task. The loss functions are a straight forward translation of equations describi... | [
5, -1, -1, -1, -1, -1, -1, 5, 5, 4] | [3, -1, -1, -1, -1, -1, -1, 4, 5, 4] | ["iclr_2021_FUtMxDTJ_h", "iclr_2021_FUtMxDTJ_h", "8FwPW5RgGnE", "mfQO2UF7f0Q", "S4dxT47QqT", "_Egkrswxzn", "_Egkrswxzn", "iclr_2021_FUtMxDTJ_h", "iclr_2021_FUtMxDTJ_h", "iclr_2021_FUtMxDTJ_h"] |
iclr_2021_QzKDLiosEd | Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel | We examine the magnetic flux emanating from a graphics processing unit’s (GPU’s) power cable, as acquired by a cheap $3 induction sensor, and find that this signal betrays the detailed topology and hyperparameters of a black-box neural network model. The attack acquires the magnetic signal for one query with... | withdrawn-rejected-submissions |
The paper presents a side-channel attack in a scenario where the attacker is able to place a induction sensor near the power cable of the victim's GPU. The authors train a neural network to analyse the magnetic flux measured by the sensor to recover the structure (layer type and layer parameters) of the target neural ... | train | [
"fAkPLJj4IaF",
"x0SBDu3Vmz",
"4-ErtPThv50",
"8yhQicIxYpA",
"N6iWssJfTK",
"pwwMms61ljU",
"FXwB6FyE6-j",
"oxZzFxZotk",
"dXOnrsnQXzC",
"1qW-LdPdpoF",
"d4PHsvQYH5",
"0qo9EdJlbaf",
"LcwmFL9WSxv"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"# Summary:\nThe paper presents a method for capturing the shape (type of layers) and their respective parameters of a neural network through the magnetic field induced as the GPU drains power. In particular, the GPU is snooped using an off-the-shelf magnetic induction sensor which is placed along the power cable o... | [
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_QzKDLiosEd",
"iclr_2021_QzKDLiosEd",
"iclr_2021_QzKDLiosEd",
"oxZzFxZotk",
"1qW-LdPdpoF",
"FXwB6FyE6-j",
"d4PHsvQYH5",
"dXOnrsnQXzC",
"x0SBDu3Vmz",
"LcwmFL9WSxv",
"0qo9EdJlbaf",
"fAkPLJj4IaF",
"iclr_2021_QzKDLiosEd"
] |
iclr_2021_88_MfcJoJlS | Guided Exploration with Proximal Policy Optimization using a Single Demonstration | Solving sparse reward tasks through exploration is one of the major challenges in deep reinforcement learning, especially in three-dimensional, partially-observable environments. Critically, the algorithm proposed in this article uses a single human demonstration to solve hard-exploration problems. We train an agent o... | withdrawn-rejected-submissions | There was quite a bit of internal discussion on this paper. To summarize:
- The idea is very neat and interesting and likely to work
- The paper is likely to inspire future work
- There are still serious doubts about the experimental evaluation that is not entirely up to par with current standards
- The reviewers we... | train | [
"DpW_sqMMTJL",
"xZCqmyNjsX5",
"avpfKUtwXrU",
"4z9D1Oku5fi",
"OVlGkQUGPBA",
"HFTVcaE4vCG",
"BuarVHb8YRb",
"n3GpgOTxq1"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the changes, clarifications, and comments.\n\n>Thank you\n\nI'll not yet update my rating because you didn't address my comment about MuJoCo environments. The ones that are built into OpenAI Gym fulfill your criteria (random init and partial observability are given; sparse reward is trivial to get from ... | [
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"xZCqmyNjsX5",
"OVlGkQUGPBA",
"HFTVcaE4vCG",
"BuarVHb8YRb",
"n3GpgOTxq1",
"iclr_2021_88_MfcJoJlS",
"iclr_2021_88_MfcJoJlS",
"iclr_2021_88_MfcJoJlS"
] |
iclr_2021_65MxtdJwEnl | Neural CDEs for Long Time Series via the Log-ODE Method | Neural Controlled Differential Equations (Neural CDEs) are the continuous-time analogue of an RNN, just as Neural ODEs are analogous to ResNets. However just like RNNs, training Neural CDEs can be difficult for long time series. Here, we propose to apply a technique drawn from stochastic analysis, namely the log-ODE me... | withdrawn-rejected-submissions | This paper presents a method for improving the learning of neural controlled differential equation (CDE) models. Neural CDE models provide a number of advantages over neural ODE models in terms of their ability to incorporate continuous-time observations. The primary strength of this paper is that it proposes a mathema... | train | [
"Gqp3DXjacpS",
"b1e5JLCcY-d",
"_TyR-_W1cMj",
"8WfgwtPHPWl",
"w79Veac2Xyw",
"k9PoZaaEATy",
"DGkhcbA7NVY",
"Iq9esF5FE4",
"oSoKyXkkRTq",
"0SUi0vCy5Hh"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Summarizing the paper claims\n------------------------------------------\nThe paper introduces an approach that allows training Neural Controlled Differential Equations (CDEs) for long time series. In contrast to the Neural ODE that is determined by its initial condition, Neural CDE produces a trajectory dependent... | [
7,
5,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1
] | [
4,
5,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_65MxtdJwEnl",
"iclr_2021_65MxtdJwEnl",
"8WfgwtPHPWl",
"Iq9esF5FE4",
"iclr_2021_65MxtdJwEnl",
"oSoKyXkkRTq",
"0SUi0vCy5Hh",
"b1e5JLCcY-d",
"w79Veac2Xyw",
"Gqp3DXjacpS"
] |
iclr_2021_Py4VjN6V2JX | Contrastive Self-Supervised Learning of Global-Local Audio-Visual Representations | Contrastive self-supervised learning has delivered impressive results in many audio-visual recognition tasks. However, existing approaches optimize for learning either global representations useful for high-level understanding tasks such as classification, or local representations useful for tasks such as audio-visual ... | withdrawn-rejected-submissions | The paper received mixed reviews. While AnonReviewer1 and AnonReviewer2 liked the idea of jointly learning global-local representations, the other reviewers were concerned about the technical novelty. Reviewers also raised various questions about the experiments and ablation studies. AC found that the rebuttal well add... | train | [
"URIYF9QPf7c",
"mZUG7-jeYXa",
"FuthHkpN7O_",
"PNMWyDbhI8",
"wWoVGAWy04s",
"j4B0Jr6-Lm",
"zGR1IKfWVdl",
"7joq93k5CD4"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: This paper proposes an audio-visual self-supervised learning approach based on two cross-modal contrastive losses that learn audio-visual representations that can generalize to both the tasks which require global semantic information and localized spatio-temporal information. Extensive experiments on 4 tas... | [
5,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
4,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_Py4VjN6V2JX",
"zGR1IKfWVdl",
"URIYF9QPf7c",
"j4B0Jr6-Lm",
"7joq93k5CD4",
"iclr_2021_Py4VjN6V2JX",
"iclr_2021_Py4VjN6V2JX",
"iclr_2021_Py4VjN6V2JX"
] |
iclr_2021_LFjnKhTNNQD | Prepare for the Worst: Generalizing across Domain Shifts with Adversarial Batch Normalization | Adversarial training is the industry standard for producing models that are robust to small adversarial perturbations. However, machine learning practitioners need models that are robust to other kinds of changes that occur naturally, such as changes in the style or illumination of input images. Such changes in input ... | withdrawn-rejected-submissions | This work proposes a novel, interesting and simple technique to improve the model robustness to distribution shift. The proposed method is called Adversarial Batch Normalization (AdvBN), which is based on adversarial perturbation of BN statistics. The authors provide extensive experiments to show the effectiveness of AdvBN.... | train | [
"WgtjiBvLRfi",
"M5WNIwUqrn",
"V1VXCJEVdLR",
"1bZ_BOEhZ3G",
"Byemp1qoIlH",
"k2P3Co7IhtT",
"DcgmcNqKxLp",
"WYAd8AIfao",
"tc_i8p11ZVY",
"_LBuh7YBF7Y",
"gYtLf08NHBQ",
"APHnce683l4",
"IQGat5RcxZA",
"aqe8CZjV0Gd",
"NbWVm8zxbfD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an algorithm for generalization to unseen domains. The algorithm performs adversarial training on the batch normalization coefficients. The authors provide experimental results showing the benefits of the proposed algorithm and also provide an ablation study.\n\nStrength:\nI think the algorithm ... | [
5,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2021_LFjnKhTNNQD",
"iclr_2021_LFjnKhTNNQD",
"iclr_2021_LFjnKhTNNQD",
"Byemp1qoIlH",
"k2P3Co7IhtT",
"DcgmcNqKxLp",
"APHnce683l4",
"WgtjiBvLRfi",
"V1VXCJEVdLR",
"M5WNIwUqrn",
"aqe8CZjV0Gd",
"NbWVm8zxbfD",
"iclr_2021_LFjnKhTNNQD",
"iclr_2021_LFjnKhTNNQD",
"iclr_2021_LFjnKhTNNQD"
] |
iclr_2021_1flmvXGGJaa | NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search | The most significant barrier to the advancement of Neural Architecture Search (NAS) is its demand for large computational resources, which hinders scientifically sound empirical evaluations. As a remedy, several tabular NAS benchmarks were proposed to simulate runs of NAS methods in seconds. However, all existing tabu... | withdrawn-rejected-submissions | The contributions of this paper lie in two areas: a new benchmarking dataset and a new way to generate benchmarking datasets. Overall, the reviewers are split in their assessment based on which particular area they are focusing on. Reviewers who focus more on evaluating this work as a new benchmarking dataset, correctl... | train | [
"KU2iPND2aQR",
"u0GKAe1JC6",
"-C1eURlb-Bp",
"Z1q5tQSd_io",
"5yri8lVH1p3",
"kPw3Od9dRf3",
"2Ka7hx7yht4",
"-KB5WJwzRwM",
"PQ_lfraAPIU",
"mPltkkL03C",
"Ww9BP-3qnUV",
"KtN3tBkfWzC",
"ktsUouOQ-7M",
"GslmZCpvF1X",
"1t6f41hFdDx",
"OuBBv42d5dk",
"Nq19gopAR4J",
"UyPamTuGmc1",
"sUeEXUK_fsB... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"##########################################################################\n\nSummary:\n\nThis work fills an important gap in the NAS benchmarks. The previous benchmarks only contain small search spaces due to the expensive cost of evaluating neural architectures. In this search space, random search often become... | [
7,
3,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
5,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_1flmvXGGJaa",
"iclr_2021_1flmvXGGJaa",
"u0GKAe1JC6",
"Tp84KeLQByA",
"2Ka7hx7yht4",
"iclr_2021_1flmvXGGJaa",
"-KB5WJwzRwM",
"PQ_lfraAPIU",
"Ww9BP-3qnUV",
"iclr_2021_1flmvXGGJaa",
"KtN3tBkfWzC",
"kPw3Od9dRf3",
"GslmZCpvF1X",
"1t6f41hFdDx",
"Tp84KeLQByA",
"u0GKAe1JC6",
"UyPam... |
iclr_2021_EKb4Z0aSNf | CLOPS: Continual Learning of Physiological Signals | Deep learning algorithms are known to experience destructive interference when instances violate the assumption of being independent and identically distributed (i.i.d). This violation, however, is ubiquitous in clinical settings where data are streamed temporally and from a multitude of physiological sensors. To overc... | withdrawn-rejected-submissions | Reviewers could not reach consensus here and important concerns from one reviewer on empirical results could not be convincingly addressed. The authors have provided a comprehensive response to the reviews, yet failed to convince them.
| train | [
"DKBbtKSwthi",
"Vqg_phL9tYC",
"GpPLN95--Xg",
"gYF1JCKv3Fq",
"e_AnZD2nL2m",
"6hUIq71uoW",
"D6u5PEAbQBw",
"RgZyFKeI_7j",
"OZp8SGGrlX",
"l5jbM2NkiO",
"g5zIh9_4HB_",
"URWOrG-oAU0",
"xnhTErR--7",
"YpLe9a-zcws",
"M2-p8cWciVL",
"8h1vP7FyXS3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"**Update**\nI think the manuscript has been further improved now and I improve my score to 7. Also since I think results on public datasets in new domains other than images may be helpful for the wider research community. Keep in mind, as written below, that other reviewers more familiar with this research field m... | [
7,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4
] | [
2,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_EKb4Z0aSNf",
"iclr_2021_EKb4Z0aSNf",
"iclr_2021_EKb4Z0aSNf",
"YpLe9a-zcws",
"6hUIq71uoW",
"RgZyFKeI_7j",
"URWOrG-oAU0",
"OZp8SGGrlX",
"l5jbM2NkiO",
"g5zIh9_4HB_",
"Vqg_phL9tYC",
"DKBbtKSwthi",
"M2-p8cWciVL",
"8h1vP7FyXS3",
"iclr_2021_EKb4Z0aSNf",
"iclr_2021_EKb4Z0aSNf"
] |
iclr_2021_8pz6GXZ3YT | Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks | The lottery ticket hypothesis (LTH) states that learning on a properly pruned network (the winning ticket) has improved test accuracy over the originally unpruned network. Although LTH has been justified empirically in a broad range of applications involving deep neural networks (DNNs), like computer vision and natural lan... | withdrawn-rejected-submissions | Even though the authors revised the problem formulation, the paper seems not ready for publication. The assumptions are still too strong (the learning algorithm assumes knowledge of the sparsity mask). The proof technique also heavily relies on Zhong et al'17 without properly highlighting the difference.
"Hpy-7uD2wie",
"bgkkKTsZ2NI",
"3mC91OR7Fwo",
"ketc03CzgiJ",
"bYRXEbN4d6",
"XreR7IYFtdV",
"MkKdXRC39UP",
"R_pfprBx0vY",
"m9Co9WI8M3B",
"NSL_kx3OqUY",
"ZL44Uy68Y_O",
"UltV_gRXf_e",
"BdXyOqYEV8d",
"cxCBwJUE9zz",
"OaK82EPOM-R",
"1XAeiamIlJe"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n\nThis paper investigates the theoretical evidence behind improved generalization of the winning lottery tickets. More precisely, the authors characterize the testing error of a pruned network that is then trained using AGD. Under relatively reasonable assumptions, they manage to show an improved generalizati... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"iclr_2021_8pz6GXZ3YT",
"3mC91OR7Fwo",
"ZL44Uy68Y_O",
"bYRXEbN4d6",
"R_pfprBx0vY",
"iclr_2021_8pz6GXZ3YT",
"OaK82EPOM-R",
"m9Co9WI8M3B",
"Hpy-7uD2wie",
"cxCBwJUE9zz",
"UltV_gRXf_e",
"1XAeiamIlJe",
"MkKdXRC39UP",
"iclr_2021_8pz6GXZ3YT",
"iclr_2021_8pz6GXZ3YT",
"iclr_2021_8pz6GXZ3YT"
] |
iclr_2021_jHykXSIk3ch | A spherical analysis of Adam with Batch Normalization | Batch Normalization (BN) is a prominent deep learning technique. In spite of its apparent simplicity, its implications over optimization are yet to be fully understood. While previous studies mostly focus on the interaction between BN and stochastic gradient descent (SGD), we develop a geometric perspective whi... | withdrawn-rejected-submissions | Three reviewers recommend rejecting or weak reject. The studied problem is interesting, but as one reviewer pointed out, it is not that clear how this work changes our theoretical understanding of those methods or what they imply for applications. Overall, I feel this work is on the borderline (probably it deserves hig... | train | [
"H-WjODdMW6V",
"VUY1aq0JKP",
"KBnjF0Dos7b",
"TcfrUe5JgO",
"1LhLY2Txkc1",
"ejILp5uQkK",
"VFhRjIVeXIi"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Post-response update: I thank the authors for their response. I updated my score, but still think the paper needs improvement to be of interest to the ICLR community.\n\n---\n\n\nThe submission analyzes the behavior of gradient descent and adaptive variants for scale-invariant models, including batch-normalization... | [
5,
-1,
-1,
-1,
-1,
5,
4
] | [
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_jHykXSIk3ch",
"ejILp5uQkK",
"VFhRjIVeXIi",
"H-WjODdMW6V",
"iclr_2021_jHykXSIk3ch",
"iclr_2021_jHykXSIk3ch",
"iclr_2021_jHykXSIk3ch"
] |
iclr_2021_vOchfRdvPy7 | To be Robust or to be Fair: Towards Fairness in Adversarial Training | Adversarial training algorithms have been proven reliable in improving machine learning models' robustness against adversarial examples. However, we find that adversarial training algorithms tend to introduce severe disparity of accuracy and robustness between different groups of data. For instance, PGD adversarial... | withdrawn-rejected-submissions | This paper examines adversarially trained robust models, and finds that accuracy disparity is higher than for standard models. The authors introduce a method they call Fair Robust Learning using Lagrange multipliers to minimize overall robust error while constraining the accuracy discrepancy between classes.
In disc... | val | [
"owo6L2ysU0W",
"lehMp4-SA1k",
"t5pc4-QefQP",
"1yz66cT_t7",
"ZTbEUeN3LYG",
"8jwtMTlsQS2",
"2yFhGVFHxoS",
"b-scwlgo_FA",
"56yZZN9M1k",
"OCc9mi_gyWh",
"bBTBGCvxPCs",
"uC7Z2VuuZXj",
"x2LAEA0pJA9",
"7tSA4ZCYbJW",
"mQrkN9K0pE2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors study adversarially trained classifiers and observe that the accuracy discrepancy between classes is larger than that of standard models. They then propose a theoretical model where this phenomenon provably arises. Finally, they propose a method to reduce the (standard and robust) accuracy discrepancy ... | [
5,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_vOchfRdvPy7",
"iclr_2021_vOchfRdvPy7",
"ZTbEUeN3LYG",
"iclr_2021_vOchfRdvPy7",
"8jwtMTlsQS2",
"2yFhGVFHxoS",
"1yz66cT_t7",
"56yZZN9M1k",
"OCc9mi_gyWh",
"lehMp4-SA1k",
"uC7Z2VuuZXj",
"mQrkN9K0pE2",
"7tSA4ZCYbJW",
"owo6L2ysU0W",
"iclr_2021_vOchfRdvPy7"
] |
iclr_2021_-NEXDKk8gZ | Improved Denoising Diffusion Probabilistic Models | We explore denoising diffusion probabilistic models, a class of generative models which have recently been shown to produce excellent samples in the image and audio domains. While these models produce excellent samples, it has yet to be shown that they can achieve competitive log-likelihoods. We show that, with several... | withdrawn-rejected-submissions | This paper raised a number of questions and concerns among Reviewers that led to below-average scores (unfortunately, Reviewers did not provide further feedback on the rebuttal). After discussion between the Program Chairs, calibrating decisions across all submissions and, given the drawbacks mentioned below, it is... | test | [
"QAzeYp9jac1",
"d-Y-o6tZS7I",
"2smUrvHu9zk",
"B9RT6KPjhk9",
"JvEbSlcd-W",
"6Hq69CeN4b",
"RWWvPZfKbv",
"kn3xx0MTdv3",
"IdSqd5RX6eC"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"First, we would like to thank all reviewers for taking the time to provide us with helpful feedback! The reviews have helped us understand various changes and clarifications we should make to our paper, and will help make this paper stronger.\n\nWe’ve updated the paper to include the following:\n\n1. Results on CI... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
3
] | [
"iclr_2021_-NEXDKk8gZ",
"kn3xx0MTdv3",
"RWWvPZfKbv",
"6Hq69CeN4b",
"IdSqd5RX6eC",
"iclr_2021_-NEXDKk8gZ",
"iclr_2021_-NEXDKk8gZ",
"iclr_2021_-NEXDKk8gZ",
"iclr_2021_-NEXDKk8gZ"
] |
iclr_2021_yxafu6ZtUux | AN ONLINE SEQUENTIAL TEST FOR QUALITATIVE TREATMENT EFFECTS | Tech companies (e.g., Google or Facebook) often use randomized online experiments and/or A/B testing primarily based on the average treatment effects to compare their new product with an old one. However, it is also critically important to detect qualitative treatment effects such that the new one may significantly out... | withdrawn-rejected-submissions | The paper proposes a new framework for online hypothesis testing aimed at detecting causal effects (of treatments on outcomes) within subgroups in online settings where treatments are randomized. Such settings occur in online advertising where different versions of the same website may be presented to a set of otherwi... | train | [
"xL55g60uoiw",
"8j7dgYR4Mqt",
"xO8ODHjDHjl",
"u1-7ZVS8csh",
"0cJ1oihF2t1",
"fJZDfhxUN2",
"01t8mv2r25n",
"Z3lhs-56g-t"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"=== Contributions ===\n\nThis paper proposes a new framework for A/B testing in the frame of randomized online experiments. This new framework enables testing whether qualitative treatment effects for some specific segment(s) of the tested population can be detected or not. \n\nThe approach relies on a scalable al... | [
7,
-1,
-1,
-1,
-1,
6,
3,
4
] | [
3,
-1,
-1,
-1,
-1,
2,
4,
1
] | [
"iclr_2021_yxafu6ZtUux",
"Z3lhs-56g-t",
"fJZDfhxUN2",
"xL55g60uoiw",
"01t8mv2r25n",
"iclr_2021_yxafu6ZtUux",
"iclr_2021_yxafu6ZtUux",
"iclr_2021_yxafu6ZtUux"
] |
iclr_2021_KxUlUb26-P3 | PABI: A Unified PAC-Bayesian Informativeness Measure for Incidental Supervision Signals | Real-world applications often require making use of {\em a range of incidental supervision signals}. However, we currently lack a principled way to measure the benefit an incidental training dataset can bring, and the common practice of using indirect, weaker signals is through exhaustive experiments with various model... | withdrawn-rejected-submissions | This paper first makes the observation that incidental supervisory data can be used to define a new prior from which to calculate a PAC-Bayes generalization guarantee. This observation can be applied to any setting where there is unsupervised or semi-supervised pre-training followed by fine-tuning on labeled data. Th... | train | [
"lrCxMJ9PX-C",
"IE9tzOCSfY6",
"8ihedZrZ2Sz",
"uGx6k2k9bz9",
"B0vN_sH79X0",
"DZoTH7_cZmS",
"TTbGJ4tF-NA",
"VDlx1bn_f19",
"US2sAO8sPo",
"jXc0Lty_TDr",
"bcKVNgWh8XP",
"yJwMcFou6b",
"Gn3_Bi66Lan",
"8nM23j0OJjT",
"MPyUxej_rG6"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The use of PAC-Bayes theory for NLP tasks is rare. Although I know little about NLP, the paper's proposal to leverage PAC-Bayes for evaluating the benefit of various incidental supervision signals seems promising. However, even if the empirical results are good, the connection between PAC-Bayes and the proposed ... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"iclr_2021_KxUlUb26-P3",
"TTbGJ4tF-NA",
"iclr_2021_KxUlUb26-P3",
"Gn3_Bi66Lan",
"MPyUxej_rG6",
"lrCxMJ9PX-C",
"VDlx1bn_f19",
"US2sAO8sPo",
"jXc0Lty_TDr",
"bcKVNgWh8XP",
"yJwMcFou6b",
"8nM23j0OJjT",
"iclr_2021_KxUlUb26-P3",
"iclr_2021_KxUlUb26-P3",
"iclr_2021_KxUlUb26-P3"
] |
iclr_2021_8IbZUle6ieH | Graph Neural Network Acceleration via Matrix Dimension Reduction | Graph Neural Networks (GNNs) have become the de facto method for machine learning on graph data (e.g., social networks, protein structures, code ASTs), but they require significant time and resources to train. One alternative method is Graph Neural Tangent Kernel (GNTK), a kernel method that corresponds to infinitely wi... | withdrawn-rejected-submissions | While the author response clarified some concerns, it could not convince the reviewers that the current version of the paper should be accepted for publication at ICLR.
| train | [
"GYhwJl3tXno",
"stUQ4WUMBPD",
"xipkqnn64lQ",
"M5kRupPjq7o",
"17X8_y1KmRu",
"2CeiAxPm8sg",
"DWXiV9hE1Q",
"BMm7X7-Uj1",
"dfeYFqzgS5s",
"UC0VmMn2ARp"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"\n#### **Post Rebuttal**\nI'm afraid the authors failed to answer my main question regarding the results and the applicability of their proposed approximation.\n\nTherefore, I decide to keep my score.\n\n---\n#### **Contribution Claims** \nThe paper vouches for the use of GNTK as a strong method for graph learning... | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
2,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_8IbZUle6ieH",
"iclr_2021_8IbZUle6ieH",
"M5kRupPjq7o",
"dfeYFqzgS5s",
"GYhwJl3tXno",
"UC0VmMn2ARp",
"UC0VmMn2ARp",
"stUQ4WUMBPD",
"GYhwJl3tXno",
"iclr_2021_8IbZUle6ieH"
] |
iclr_2021_UV9kN3S4uTZ | Dynamic Relational Inference in Multi-Agent Trajectories | Unsupervised learning of interactions from multi-agent trajectories has broad applications in physics, vision, and robotics. However, existing neural relational inference works are limited to static relations. We consider a more general setting of dynamic relational inference where interactions change over time. We pro... | withdrawn-rejected-submissions | This paper presents a method for relational inference in multi-agent/multi-object trajectory prediction tasks. Different from the neural relational inference (NRI) model [1], the presented method is able to model time-varying relations. Experimental results on physics simulations and sports games (basketball) show bene... | train | [
"X8YgDOhQcef",
"PYQu7TKUrvd",
"vVoOwTqGlMn",
"lFIJQGogQj"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces DYnamic multi-Agent Relational Inference (DYRI), which is an extension of Neural Relational Inference (NRI) for dynamic relations. This extension shows improved performance on the real-world basketball trajectory dataset.\n\nThe article is fairly well structured, apart from the literature re... | [
2,
4,
5,
4
] | [
5,
4,
5,
3
] | [
"iclr_2021_UV9kN3S4uTZ",
"iclr_2021_UV9kN3S4uTZ",
"iclr_2021_UV9kN3S4uTZ",
"iclr_2021_UV9kN3S4uTZ"
] |
iclr_2021_rSwTMomgCz | Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices | The goal of meta-reinforcement learning (meta-RL) is to build agents that can quickly learn new tasks by leveraging prior experience on related tasks. Learning a new task often requires both exploring to gather task-relevant information and exploiting this information to solve the task. In principle, optimal exploratio... | withdrawn-rejected-submissions | The authors clarified many of R1 and R4's concerns, but there were important remaining concerns regarding the presentation.
On the bright side, the approach is novel and the experimental results are solid.
However, the main point raised by R1 is the mismatch between the narrative and the theory and the actual algori... | train | [
"W1zPqLaRse",
"92MQhBo9Y1I",
"5LjV_jAp0w",
"Z-qxGWZitY",
"z9Ngo-Yl3AW",
"fxq5UUO0EG",
"YdextVKYlCq",
"7vq0glYzvL5",
"_I2Ow6tFeBf",
"ytP_RAUubI2",
"5asDoVx0bzR",
"aSG2_JaDVAx",
"XTz5T9hX9V5",
"2ViZxEWPsw9",
"mJlMuaXfxux",
"469EI6Rp6cN"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank R4 for their response.\n\n**Clarifying the chicken-and-egg problem.** To clarify, the chicken-and-egg problem occurs in end-to-end approaches since (i) exploitation can only be learned efficiently when exploitation is already good and (ii) the learning signal for exploration (i.e., the expected exploitati... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
4
] | [
"92MQhBo9Y1I",
"7vq0glYzvL5",
"XTz5T9hX9V5",
"2ViZxEWPsw9",
"mJlMuaXfxux",
"469EI6Rp6cN",
"iclr_2021_rSwTMomgCz",
"2ViZxEWPsw9",
"mJlMuaXfxux",
"XTz5T9hX9V5",
"aSG2_JaDVAx",
"469EI6Rp6cN",
"iclr_2021_rSwTMomgCz",
"iclr_2021_rSwTMomgCz",
"iclr_2021_rSwTMomgCz",
"iclr_2021_rSwTMomgCz"
] |
iclr_2021_1wtC_X12XXC | Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain | The backpropagation of error algorithm (backprop) has been instrumental in the recent success of deep learning. However, a key question remains as to whether backprop can be formulated in a manner suitable for implementation in neural circuitry. The primary challenge is to ensure that any candidate formulation uses onl... | withdrawn-rejected-submissions | The authors propose an algorithm to perform backprop in a feed-forward neural network without the need to backpropagate errors. They hence claim that this algorithm is a biologically plausible variant of Backprop.
After a forward-propagation phase, the method introduces a relaxation phase and they remark that at the e... | train | [
"sBDKCvUC9oz",
"dETC7IPWfC3",
"WEhMmtYL-La",
"uFapsngW-rX",
"i7VIO16ZEYT",
"7_h3-VWKKDq",
"9mc6EI20nfO",
"l66BkndAsPJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"## Second Review\n\nThanks for taking all my comments seriously. After carefully reading other reviews and quick modifications introduced by the authors, I believe this work is richer and has shown some potential towards building a scalable and robust alternative to BP. Thanks for including angles as suggested, this f... | [
7,
4,
-1,
-1,
-1,
-1,
8,
4
] | [
4,
4,
-1,
-1,
-1,
-1,
2,
4
] | [
"iclr_2021_1wtC_X12XXC",
"iclr_2021_1wtC_X12XXC",
"dETC7IPWfC3",
"sBDKCvUC9oz",
"9mc6EI20nfO",
"l66BkndAsPJ",
"iclr_2021_1wtC_X12XXC",
"iclr_2021_1wtC_X12XXC"
] |
iclr_2021_4NNQ3l2hbN0 | Search Data Structure Learning | In our modern world, an enormous amount of data surrounds us, and we are rarely interested in more than a handful of data points at once. It is like searching for needles in a haystack, and in many cases, there is no better algorithm than a random search, which might not be viable. Previously proposed algorithms for ef... | withdrawn-rejected-submissions | This paper addresses an interesting problem and all reviewers agree. Most reviewers found the paper difficult to understand and it was hard to see the novel contributions. The paper will need a significant revision before publication. | train | [
"_NaBhYj9qy",
"sdI5n-45XLQ",
"1KJvIAicNpw",
"7HBe9LdJcpR",
"7csm5drILL",
"CrhwJMm0wJy",
"w2qzaLJ1EOD",
"cRftJDEmlT"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper is generally well written, and the authors target an important and challenging problem of learning how to search. Search is a complex problem, and people have designed various data structures to handle different kinds of search problems. The authors generalize these searching data structures by search data... | [
4,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
3,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_4NNQ3l2hbN0",
"iclr_2021_4NNQ3l2hbN0",
"cRftJDEmlT",
"sdI5n-45XLQ",
"CrhwJMm0wJy",
"w2qzaLJ1EOD",
"iclr_2021_4NNQ3l2hbN0",
"iclr_2021_4NNQ3l2hbN0"
] |
iclr_2021_ASAJvUPWaDI | A Near-Optimal Recipe for Debiasing Trained Machine Learning Models | We present an efficient and scalable algorithm for debiasing trained models, including deep neural networks (DNNs), which we prove to be near-optimal by bounding its excess Bayes risk. Unlike previous black-box reduction methods to cost-sensitive classification rules, the proposed algorithm operates on models that hav... | withdrawn-rejected-submissions | All reviewers feel this paper addresses and important topic, and has many merits. However, it is difficult to recommend publication at this time. The primary concern is that the paper has its theoretical optimality as an important contribution, but the reviewers and myself (in a non-public thread) were unable to verify... | train | [
"RUD0_h25-1Y",
"5yCCOP8ppKm",
"lSE430cyNC8",
"xcNyMypCGeo",
"b-AKlRfAlo",
"oSyf15eKNM9",
"xVTSwyA5uEh",
"mSL6D8jWdFu",
"0TGsBF_Ar2S",
"CJfpSi4InBd",
"LadiTNwuO_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Evaluation and improvement of fairness of machine learning algorithms is a very important issue. To this end, the authors of this paper propose a post-processing algorithm to enforce a narrowly defined notion of fairness. Unfortunately, I have serious concerns about the validity of the results and conc... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_ASAJvUPWaDI",
"iclr_2021_ASAJvUPWaDI",
"xVTSwyA5uEh",
"b-AKlRfAlo",
"oSyf15eKNM9",
"xVTSwyA5uEh",
"RUD0_h25-1Y",
"CJfpSi4InBd",
"LadiTNwuO_",
"iclr_2021_ASAJvUPWaDI",
"iclr_2021_ASAJvUPWaDI"
] |
iclr_2021_-DRft_lKDqo | Generalized Universal Approximation for Certified Networks | To certify safety and robustness of neural networks, researchers have successfully applied abstract interpretation, primarily using interval bound propagation. To understand the power of interval bounds, we present the abstract universal approximation (AUA) theorem, a generalization of the recent result by Baader et al... | withdrawn-rejected-submissions | The authors extend prior work showing that networks trained to be certifiably robust using interval bound propagation are universal approximators. They extend prior results applicable to ReLU networks to a much broader class of networks with general activation functions.
The paper makes an interesting contribution to... | train | [
"FsMsLUhPFIN",
"NCyzrcF-nbg",
"YJ_JUGXGUwD",
"39T79UW9e8A",
"yB7MWRo-1WC",
"-uymd7yKbv",
"PL3kJCEiRY",
"Ku-avEH4XM",
"f1cpuFCm1O"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work studies the task of universally approximating continuous functions by (certain classes of) neural networks. The paper shows that for a continuous function $f$ there exists a neural network $N$ such that for any box $B$ the range of outputs of $N$ is close (in a parameter $\\delta$) to the range of output... | [
5,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
2,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"iclr_2021_-DRft_lKDqo",
"PL3kJCEiRY",
"f1cpuFCm1O",
"Ku-avEH4XM",
"iclr_2021_-DRft_lKDqo",
"FsMsLUhPFIN",
"iclr_2021_-DRft_lKDqo",
"iclr_2021_-DRft_lKDqo",
"iclr_2021_-DRft_lKDqo"
] |
iclr_2021_gSJTgko59MC | Predicting the Outputs of Finite Networks Trained with Noisy Gradients | A recent line of works studied wide deep neural networks (DNNs) by approximating them as Gaussian Processes (GPs). A DNN trained with gradient flow was shown to map to a GP governed by the Neural Tangent Kernel (NTK), whereas earlier works showed that a DNN with an i.i.d. prior over its parameters maps to the so-called... | withdrawn-rejected-submissions | This paper develops an interesting new angle on the behavior of large-width neural networks by elucidating the connection between the NNGP and noisy gradient descent and by examining finite-width corrections through an Edgeworth expansion. While these contributions are important, the paper would better serve the commun... | test | [
"55PU86fd6UA",
"k4f7uyNrCc",
"qV-EWm3Kpvl",
"AYaHrGPMszT",
"QF48x9J976Q",
"Q7YVwBluOJi",
"YpOGMYDEdyf",
"HF-nfq4VJe0",
"iXiXxST117a",
"ASKuNigKNz",
"jSUZUzWLka",
"80j4_zdUzeO",
"bisK0AMRkG0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper shows a correspondence between deep neural networks (DNN) trained with noisy gradients and NNGP. It provides a general analytical form for the finite width correction (FWC) for NNSP expanding around NNGP. Finally, it argues that this FWC can be used to explain why finite width CNNs can improve the perfo... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5
] | [
"iclr_2021_gSJTgko59MC",
"qV-EWm3Kpvl",
"jSUZUzWLka",
"QF48x9J976Q",
"55PU86fd6UA",
"YpOGMYDEdyf",
"HF-nfq4VJe0",
"bisK0AMRkG0",
"iclr_2021_gSJTgko59MC",
"80j4_zdUzeO",
"iclr_2021_gSJTgko59MC",
"iclr_2021_gSJTgko59MC",
"iclr_2021_gSJTgko59MC"
] |
iclr_2021_OJiM1R3jAtZ | AWAC: Accelerating Online Reinforcement Learning with Offline Datasets | Reinforcement learning provides an appealing formalism for learning control policies from experience. However, the classic active formulation of reinforcement learning necessitates a lengthy active exploration process for each behavior, making it difficult to apply in real-world settings. If we can instead allow reinfo... | withdrawn-rejected-submissions | This paper proposed a new method improving online reinforcement learning using offline datasets. Three reviewers suggested (borderline) acceptance and two did rejection. The main concerns of reviewers are (a) limited/incremental novelty (from all reviewers) and (b) limited experiments (from three reviewers). AC also ag... | train | [
"4O8z5BnhKvg",
"qWSBCb9_l3N",
"CDHR2i7XQSE",
"mJSaN2il8s",
"7Kjz6tZyoJf",
"_OL-Qv_kxKb",
"VrXSuscAMf8",
"C0VdDDINR3h",
"sFrRpRazN6u",
"CwYUixa-YIT",
"Z5-8xB6_Jt",
"9g91BHjQgCj",
"qFckh0XAqP",
"Vurck0O34F0",
"keh7ZefetV",
"qhmsSRqSvt",
"k1eo8zEGrn"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I have read all the reviews and the author's responses. Even if the proposed method presents only a slight modification with respect to AWR, the authors have made a clear case of its benefits, so I will keep my score of 6: Marginally above acceptance threshold",
"There is a little more time left in the rebut... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3,
5
] | [
"mJSaN2il8s",
"k1eo8zEGrn",
"keh7ZefetV",
"Vurck0O34F0",
"k1eo8zEGrn",
"keh7ZefetV",
"keh7ZefetV",
"qFckh0XAqP",
"Vurck0O34F0",
"qhmsSRqSvt",
"k1eo8zEGrn",
"iclr_2021_OJiM1R3jAtZ",
"iclr_2021_OJiM1R3jAtZ",
"iclr_2021_OJiM1R3jAtZ",
"iclr_2021_OJiM1R3jAtZ",
"iclr_2021_OJiM1R3jAtZ",
"ic... |
iclr_2021_k9EHBqXDEOX | Asynchronous Advantage Actor Critic: Non-asymptotic Analysis and Linear Speedup | Asynchronous and parallel implementation of standard reinforcement learning (RL) algorithms is a key enabler of the tremendous success of modern RL.
Among many asynchronous RL algorithms, arguably the most popular and effective one is the asynchronous advantage actor-critic (A3C) algorithm.
Although A3C ... | withdrawn-rejected-submissions | Although the reviewers found the paper well-written that analyzes a relatively popular algorithm (TD(0) version of A3C), there are concerns regarding the novelty of the convergence results given those for A2C, the comparison of the results with those for A2C, and the sufficiency of the experiments. Although the authors... | val | [
"mbcU4eABwPk",
"UKpp6JVtZ6x",
"j2z7j6BMvAf",
"QA-mYWBJvqM",
"njhSl1zXqNq",
"V6o6EpI98Er",
"hjlzbveTWb",
"-ivt84HV2KR"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studied the two time scale A3C in discounted MDP based on recent development in the finite sample analysis of A2C. The sample complexity result in this paper matches previous result in two time-scale A2C in terms of the dependence of \\epsilon, and this paper further shows the benefit of \"linear speed ... | [
5,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"iclr_2021_k9EHBqXDEOX",
"mbcU4eABwPk",
"iclr_2021_k9EHBqXDEOX",
"hjlzbveTWb",
"iclr_2021_k9EHBqXDEOX",
"-ivt84HV2KR",
"iclr_2021_k9EHBqXDEOX",
"iclr_2021_k9EHBqXDEOX"
] |
iclr_2021_pAJ3svHLDV | R-MONet: Region-Based Unsupervised Scene Decomposition and Representation via Consistency of Object Representations | Decomposing a complex scene into multiple objects is a natural instinct of an intelligent vision system. Recently, the interest in unsupervised scene representation learning emerged and many previous works tackle this by decomposing scenes into object representations either in the form of segmentation masks or position... | withdrawn-rejected-submissions | The paper has good contributions to a challenging problem, leveraging a Faster-RCNN framework with a novel self-supervised learning loss. However reviewer 4 and other chairs (in calibration) considered that the paper does not meet the bar for acceptance. The other reviewers did not champion the paper either, hence i am... | val | [
"C0TtIhJRI_W",
"OVkRfWm_SpV",
"iMz6wEqaIsK",
"00dgdGNjzbX",
"Ae2KYHzN6La",
"5JQPBXXDJxV",
"F9BrRbntt7",
"wNNXumKXEP_",
"zcOv87ey0_w",
"nvJc5UDf9k",
"mEKh4exv72w",
"xDpUaKI6VZ1",
"bCfdJK8sC2a"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper presents a variation of the MONet model where an additional Region Proposal Network generates bounding boxes for various objects in the scene. An additional loss is introduced during training to make the segmentations produced by the MONet segmenter consistent with the proposed bounding boxes. Results a... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_pAJ3svHLDV",
"iclr_2021_pAJ3svHLDV",
"iclr_2021_pAJ3svHLDV",
"Ae2KYHzN6La",
"zcOv87ey0_w",
"bCfdJK8sC2a",
"C0TtIhJRI_W",
"OVkRfWm_SpV",
"OVkRfWm_SpV",
"bCfdJK8sC2a",
"C0TtIhJRI_W",
"OVkRfWm_SpV",
"iclr_2021_pAJ3svHLDV"
] |
iclr_2021_SP5RHi-rdlJ | Sparse Binary Neural Networks | Quantized neural networks are gaining popularity thanks to their ability to solve complex tasks with comparable accuracy as full-precision Deep Neural Networks (DNNs), while also reducing computational power and storage requirements and increasing the processing speed. These properties make them an attractive alternati... | withdrawn-rejected-submissions | This paper compresses neural networks via so called Sparse Binary Neural Network designs. All reviewers agree that the paper has limited novelty. Experiments are only performed on small datasets with simple neural networks. However, even with toy experiments, results are very weak. There is no comparison with the SOTA.... | val | [
"CnWXsHIN6on",
"sEVJlDGlzHq",
"pIG3TyRLW7J",
"ZyPslh53_m",
"gt5JZ-y9Wyr",
"3kj5Ior2fwC",
"TOnzM8A-qOU",
"8e2NFFNoj1j",
"AwdH4drr0Rs"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary of the paper:\nThis paper presents a sparse neural network with binary weights. The binary weights are represented using 0s and 1s whereas the effective connections are encoded using the run-length encoder for efficient implementations of neural networks on IoT devices. To quantize the value of weights, a... | [
5,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"iclr_2021_SP5RHi-rdlJ",
"iclr_2021_SP5RHi-rdlJ",
"TOnzM8A-qOU",
"CnWXsHIN6on",
"8e2NFFNoj1j",
"AwdH4drr0Rs",
"iclr_2021_SP5RHi-rdlJ",
"iclr_2021_SP5RHi-rdlJ",
"iclr_2021_SP5RHi-rdlJ"
] |
iclr_2021_ePh9bvqIgKL | Discovering Parametric Activation Functions | Recent studies have shown that the choice of activation function can significantly affect the performance of deep learning networks. However, the benefits of novel activation functions have been inconsistent and task-dependent, and therefore the rectified linear unit (ReLU) is still the most commonly used. This paper p... | withdrawn-rejected-submissions | The paper proposes the idea of searching for parameterized activation functions, in contrast to the previous handcrafted or learnable ones. It may be a counterpart of neural architecture search.
Pros:
1. The idea is very interesting.
2. The paper is well written.
3. The experiments show improvements over baseline activation... | test | [
"AdFxyMONYR8",
"sBpYMLUYTDc",
"XKCE5Yusx3-",
"F8cbWfSocz5",
"oDi_Xe20X2L",
"v9qz1iYRykA",
"V8l2umihLvJ",
"7iSQaTmIjNJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to search for activation functions with regularized evolution, an evolutionary algorithm proposed by Real et al. Various mutations are proposed that allow to investigate a larger search space than prior work. In particular, a mutation is added which adds trainable parameters to the activation f... | [
6,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"iclr_2021_ePh9bvqIgKL",
"XKCE5Yusx3-",
"AdFxyMONYR8",
"7iSQaTmIjNJ",
"V8l2umihLvJ",
"V8l2umihLvJ",
"iclr_2021_ePh9bvqIgKL",
"iclr_2021_ePh9bvqIgKL"
] |
iclr_2021_6fb4mex_pUT | An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder | Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks. However, such performance is only guaranteed if the input to a trained model is similar to the training samples, i.e., the input follows the probability distribution of the training s... | withdrawn-rejected-submissions | **Problem significance** This paper proposes an attack mechanism in the latent space of a neural network f(x), which produces out-of-distribution examples. The AC agrees reviewers on the significance of the OOD detection problem, particularly addressing the vulnerability aspect is relevant and of great interest to the ... | train | [
"XYwWwg2DSwl",
"Op4ckaT5qbw",
"PKdYxOFhx5e",
"ENEWBdokWeT",
"9uguJEap0Tj",
"Ke4KG5JbcT-",
"IgXCywsUwzI",
"FzDxAcfpUB0",
"D2IfnB8ULX",
"dwAEpLYPNns",
"Fq50gK0xPIX",
"ti35JeXSjOs",
"F3P-erSjvV_",
"1dYBnmZVNoJ",
"gy7sLWyoExA",
"OreH3IBYTd5",
"lSwwVK3GQdv",
"WgGo3dKOID0",
"9pVjBpICPY... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"... | [
"\n**UPDATE**\n\nI acknowledge that I have read the author responses as well as the other reviews. Overall, I appreciate the clarifications and added experiments given by the authors. \n\nMy concerns about the low novelty of the presented algorithm and findings remain, however, as I find the OOD attack to be only a... | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2021_6fb4mex_pUT",
"iclr_2021_6fb4mex_pUT",
"9uguJEap0Tj",
"9uguJEap0Tj",
"iclr_2021_6fb4mex_pUT",
"IgXCywsUwzI",
"iclr_2021_6fb4mex_pUT",
"IgXCywsUwzI",
"0kNWLJFBmiu",
"ENEWBdokWeT",
"lSwwVK3GQdv",
"H6SX-eMVM20",
"1dYBnmZVNoJ",
"gy7sLWyoExA",
"23c0h8YSjP7",
"9uguJEap0Tj",
"icl... |
iclr_2021_J7bUsLCb0zf | Compute- and Memory-Efficient Reinforcement Learning with Latent Experience Replay | Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample-efficiency by reusing experiences from the past, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. However, such ... | withdrawn-rejected-submissions |
This paper presents an approach to improve compute and memory efficiency by freezing layers and storing latent features. The approach is simple and provides efficiency gains. However, there are concerns as well. One big concern is that the experiments are not on realistic settings, for example real-world images, and the current C...
"tUccLitY8jG",
"yzzcbIs1TlC",
"hYNUkea6xfG",
"7GGn5tWaZ0M",
"UmaG-J1iBBx",
"6IhuBIDqJAB",
"mpkTRMZE6o",
"AfqjFjq6g_X",
"wvW0ZW-4IMG",
"reSiiImfcIO",
"p2k9zx3hSMW"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"- Summary\n - This paper presents a method for compute- and memory-efficient reinforcement learning where the visual encoder is frozen partway into training. After freezing latent vectors are stored in the replay buffer instead of images (and any existing images are replaced by them). This leads to both bette... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_J7bUsLCb0zf",
"iclr_2021_J7bUsLCb0zf",
"iclr_2021_J7bUsLCb0zf",
"iclr_2021_J7bUsLCb0zf",
"reSiiImfcIO",
"yzzcbIs1TlC",
"tUccLitY8jG",
"iclr_2021_J7bUsLCb0zf",
"p2k9zx3hSMW",
"iclr_2021_J7bUsLCb0zf",
"iclr_2021_J7bUsLCb0zf"
] |
iclr_2021_NgZKCRKaY3J | Mitigating bias in calibration error estimation | Building reliable machine learning systems requires that we correctly understand their level of confidence. Calibration focuses on measuring the degree of accuracy in a model's confidence and most research in calibration focuses on techniques to improve an empirical estimate of calibration error, ECEBIN. Using simulati... | withdrawn-rejected-submissions | The paper proposes an adjustment to the ECE metric to make it less biased in the small sample case by including the assumption that the confidence output by a classifier is monotonic with the true correctness probability. The main idea is to successively make finer bins until a non-monotonicity is observed. The paper... | val | [
"sITqER2ztS",
"HL2xM1I9eM",
"BOg9MBKfTg",
"tlZ6fw6f4xB",
"78UvR2bEKKK",
"NQZ-z7PJnQy",
"bO822pk14If",
"rulSH5GcA7X",
"fIGiF6M_7wJ",
"W0KQUogc-LO",
"8yOcfI71aGk",
"A4vQv60SJLM",
"jfsgwvnrSf4",
"nYRVJUa6U4Z",
"v9wZYiGDuel",
"etmZ5i_8aql",
"2A4u4aWrZ0B"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"### Summary\n\nThis paper highlights a major flaw with the commonly-used $ECE_\\text{BIN}$ calibration metric, namely that it is biased for perfectly-calibrated models. Through a large number of empirical experiments (including through simulation), the authors show that a newly proposed metric, $ECE_\\text{SWEEP}$... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_NgZKCRKaY3J",
"iclr_2021_NgZKCRKaY3J",
"iclr_2021_NgZKCRKaY3J",
"etmZ5i_8aql",
"sITqER2ztS",
"2A4u4aWrZ0B",
"v9wZYiGDuel",
"HL2xM1I9eM",
"HL2xM1I9eM",
"bO822pk14If",
"jfsgwvnrSf4",
"iclr_2021_NgZKCRKaY3J",
"iclr_2021_NgZKCRKaY3J",
"etmZ5i_8aql",
"HL2xM1I9eM",
"iclr_2021_NgZK... |
iclr_2021_nPVlVsBTiJ | Adversarial Boot Camp: label free certified robustness in one epoch | Machine learning models are vulnerable to adversarial attacks. One approach to addressing this vulnerability is certification, which focuses on models that are guaranteed to be robust for a given perturbation size. A drawback of recent certified models is that they are stochastic: they require multiple computationall... | withdrawn-rejected-submissions | Although the connection between randomized smoothing and PDE revealed in this paper is an interesting direction to explore, the method proposed unfortunately is not certified. The method could work as a good empirical defense since the smoothed classifier could be learned more efficiently. | train | [
"6I11oU9l_gU",
"I6wz0mnCmJ",
"PYL8Z03dmmy",
"kas_iz80FKJ",
"xYMg204ey0J"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to use a deterministic classifier to replace the sampling\nprocess in randomized smoothing based certifiably robust models. The goal of\ntraining a deterministic robust classifier to avoid the high cost of randomized\nsmoothing is a right direction to look at. \n\nThe reason that we can get ce... | [
3,
-1,
4,
3,
7
] | [
4,
-1,
4,
4,
3
] | [
"iclr_2021_nPVlVsBTiJ",
"iclr_2021_nPVlVsBTiJ",
"iclr_2021_nPVlVsBTiJ",
"iclr_2021_nPVlVsBTiJ",
"iclr_2021_nPVlVsBTiJ"
] |
iclr_2021_qClL9hRDSMZ | Implicit bias of gradient descent for mean squared error regression with wide neural networks | We investigate gradient descent training of wide neural networks and the corresponding implicit bias in function space. For 1D regression, we show that the solution of training a width-n shallow ReLU network is within n^{-1/2} of the function which fits the training data and whose difference from initialization has smalle... | withdrawn-rejected-submissions | This paper analyzes the implicit bias of gradient descent of infinite width 2-layer neural networks with ReLU activation. It is shown that the dynamics of gradient descent to optimize the 2-layer NN converges to the optimization dynamics on the random feature model in the infinite width limit. Then, it is shown that th... | test | [
"BTubA1in0AQ",
"YdvEh5mmCs",
"Ta-slOHDsxO",
"qv7PgHfBqMz",
"8hxQZj1bXfc",
"liYDAx8Viyh",
"pp0W0w2lFn",
"rm4vJ6t5GMC",
"B_-wiTP4nu4",
"5qWCUMo40e",
"KjBCh_CWwXY",
"NlYk1F1VMC7",
"otiurR2r5CO",
"eFHL46mBgv"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Update after the rebuttal: I would like to thank the authors for the detailed reply and for addressing raised issues in the submission. I appreciate the authors' rationale, but the \"standard\" structure of papers makes it is easier to follow. The same for a conclusion, for the authors it may be reiterating the sa... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5
] | [
1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2021_qClL9hRDSMZ",
"iclr_2021_qClL9hRDSMZ",
"iclr_2021_qClL9hRDSMZ",
"otiurR2r5CO",
"YdvEh5mmCs",
"rm4vJ6t5GMC",
"BTubA1in0AQ",
"KjBCh_CWwXY",
"5qWCUMo40e",
"NlYk1F1VMC7",
"eFHL46mBgv",
"iclr_2021_qClL9hRDSMZ",
"iclr_2021_qClL9hRDSMZ",
"iclr_2021_qClL9hRDSMZ"
] |
iclr_2021_412_KkkGjJ4 | Weakly Supervised Scene Graph Grounding | Recent researches have achieved substantial advances in learning structured representations from images. However, current methods rely heavily on the annotated mapping between the nodes of scene graphs and object bounding boxes inside images. Here, we explore the problem of learning the mapping between scene graph nod... | withdrawn-rejected-submissions | This paper presents work on scene graph grounding under weak supervision. The reviewers appreciated the consideration of this task and formulation of a solution for it. However, concerns were raised over the importance of this weakly-supervised grounding task, how it addresses challenges in previous methods, the empi... | val | [
"H6N0GOpP2z-",
"8yluW74i4gB",
"LztEfoix3jL",
"767kUZdtu5s",
"P83sdrFKbnw",
"4wPdxiv2KV",
"lb5LUZd4Rq-",
"HCEFU9T2JQ7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a new problem called weakly supervised scene graph grounding, which is potentially very useful but barely studied in the literature. Given a dataset of images labeled with scene graphs, but without bounding box annotation, the task is to learn to align each node of a given scene graph to the ... | [
7,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
5,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"iclr_2021_412_KkkGjJ4",
"H6N0GOpP2z-",
"lb5LUZd4Rq-",
"HCEFU9T2JQ7",
"4wPdxiv2KV",
"iclr_2021_412_KkkGjJ4",
"iclr_2021_412_KkkGjJ4",
"iclr_2021_412_KkkGjJ4"
] |
iclr_2021_3FK30d5BZdu | Hidden Incentives for Auto-Induced Distributional Shift | Decisions made by machine learning systems have increasing influence on the world, yet it is common for machine learning algorithms to assume that no such influence exists. An example is the use of the i.i.d. assumption in content recommendation. In fact, the (choice of) content displayed can change users' perception... | withdrawn-rejected-submissions | This is a thought-provoking paper which describes a significant problem that plausibly occurs in deployed ML/RL models.
The paper is clearly written, describing claims using examples and developing small unit-tests to probe models.
However, as the reviews and discussion show, the exposition should be substantially re-... | train | [
"iYOm4QNqXp",
"TFWRtpcj9yC",
"XZZ7E_5fDev",
"_9dqtrog6YI",
"VvzhxxbQYnP",
"emk09REdiTa",
"zZ8MIbrh32",
"qup22-NInly",
"CH4ZVjUoMb",
"QcaNyJO1Vpm",
"vkrRGVGS3kC",
"pO1eDVcOvNV",
"Sfe6YCnh59o",
"7FJm4dPubPE",
"En6yBXE1hlE",
"B1fTcgZZKnB",
"VSiqxlsfdo",
"ApMHIsuYOvH",
"YtLfDWvwp9",
... | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"As we understand, your main criticism was a lack of clarity. Thank you for listing specific questions and concerns! \nWe hope that we've managed to make things clear to you, and will be sure to address these concerns in our rebuttal.\nPlease let us know if you have remaining concerns or questions we can address b... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
3
] | [
"_9dqtrog6YI",
"qup22-NInly",
"7FJm4dPubPE",
"YtLfDWvwp9",
"qup22-NInly",
"vkrRGVGS3kC",
"qup22-NInly",
"CH4ZVjUoMb",
"ApMHIsuYOvH",
"ApMHIsuYOvH",
"bEHJMg1WsOl",
"eTOYhiPBY2Q",
"eTOYhiPBY2Q",
"eTOYhiPBY2Q",
"B1fTcgZZKnB",
"iclr_2021_3FK30d5BZdu",
"iclr_2021_3FK30d5BZdu",
"iclr_202... |
iclr_2021_avHr-H-1kEa | Temperature check: theory and practice for training models with softmax-cross-entropy losses | The softmax function combined with a cross-entropy loss is a principled approach to modeling probability distributions that has become ubiquitous in deep learning. The softmax function is defined by a lone hyperparameter, the temperature, that is commonly set to one or regarded as a way to tune model confidence after t... | withdrawn-rejected-submissions | Three reviewers are mildly positive, while one is negative. The substantive comments of the reviewers are consistent with each other; it is merely their evaluations that differ.
One contribution of the paper is that it shows how using temperature tuning can yield similar accuracy to using batch normalization; this is... | train | [
"ipdYYw1vtR",
"J4nYfWNUufZ",
"5VJcXymXSJd",
"x0sTO5rdKyp",
"g6XiO572Qm",
"XNal0fzLkz",
"dABF_E-d40",
"iVEGK3_5Ei5",
"Y-7PmMyHcNL",
"ywxV1QJLAZ",
"h78x_edIqLr",
"FzuMiGio-Ro",
"O_TeZ-fVvqP",
"SBf0u7pl_dy",
"w-ho4x2QZQn"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies how temperature scaling affects training dynamics of neural networks (with softmax layer and cross-entropy loss). The theoretical analysis shows that neural networks trained with smaller inverse temperatures (beta) exit the linear regime faster, which implies better performance. Experiments on i... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"iclr_2021_avHr-H-1kEa",
"O_TeZ-fVvqP",
"SBf0u7pl_dy",
"XNal0fzLkz",
"w-ho4x2QZQn",
"dABF_E-d40",
"iVEGK3_5Ei5",
"h78x_edIqLr",
"O_TeZ-fVvqP",
"SBf0u7pl_dy",
"ipdYYw1vtR",
"w-ho4x2QZQn",
"iclr_2021_avHr-H-1kEa",
"iclr_2021_avHr-H-1kEa",
"iclr_2021_avHr-H-1kEa"
] |
iclr_2021_E9W0QPxtZ_u | not-so-big-GAN: Generating High-Fidelity Images on Small Compute with Wavelet-based Super-Resolution | State-of-the-art models for high-resolution image generation, such as BigGAN and VQVAE-2, require an incredible amount of compute resources and/or time (512 TPU-v3 cores) to train, putting them out of reach for the larger research community. On the other hand, GAN-based image super-resolution models, such as ESRGAN, ca... | withdrawn-rejected-submissions | The reviewers brought up significant concerns that were not resolved by the authors' responses. The concerns are too significant for the paper to be accepted at this time. | test | [
"rVG0ALVHmb1",
"7gLqdeeDMub",
"CP27_Vs7--",
"wsF53x2FeBd",
"vQ5dXa7ZuiO",
"JeStZAqlEsY",
"WYdzxYS1_Tt"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"**Summary**\n\nThis paper proposes a cost-effective two-step training GAN framework (NSB-GAN). NSB-GAN contains a learned sampler in the wavelet domain, and a decoder to super-resolve images from the wavelet domain to the RGB space. Compared with the baseline BigGAN model their method reduces the cost of training ... | [
5,
-1,
-1,
-1,
-1,
6,
2
] | [
3,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_E9W0QPxtZ_u",
"iclr_2021_E9W0QPxtZ_u",
"rVG0ALVHmb1",
"JeStZAqlEsY",
"WYdzxYS1_Tt",
"iclr_2021_E9W0QPxtZ_u",
"iclr_2021_E9W0QPxtZ_u"
] |
iclr_2021_XG1Drw7VbLJ | Defining Benchmarks for Continual Few-Shot Learning | In recent years there has been substantial progress in few-shot learning, where a model is trained on a small labeled dataset related to a specific task, and in continual learning, where a model has to retain knowledge acquired on a sequence of datasets. Both of these fields are different abstractions of the same real ... | withdrawn-rejected-submissions | The reviewers appreciate the steps taken to combine continual learning with few-shot learning, this is an interesting intersection with many potential applications. However, the reviewers generally outlined a number of concerns with the benchmark and paper in its current form. They largely feel that this benchmark does... | val | [
"lZegbEne9Qo",
"us_5wv38RZc",
"L180LK-ekAX",
"HHiq_pLDY7B",
"rJlJPIzONJ2",
"CzKqEi5bO39",
"FRzCq0rqL7v",
"dPyk9VhHK1i",
"e7f96JGjpah",
"u4viPbfkYTC"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
">> We do not restrict people to not use pretrained representation. Furthermore, setting C represents a situation where the model is learning a class of superclasses. For example, a learner starts learning only cars but then motorcycles and boats are added to the class. The model should be able to learn the general... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"FRzCq0rqL7v",
"FRzCq0rqL7v",
"dPyk9VhHK1i",
"dPyk9VhHK1i",
"e7f96JGjpah",
"u4viPbfkYTC",
"iclr_2021_XG1Drw7VbLJ",
"iclr_2021_XG1Drw7VbLJ",
"iclr_2021_XG1Drw7VbLJ",
"iclr_2021_XG1Drw7VbLJ"
] |
iclr_2021_7hMenh--8g | Uncertainty Weighted Offline Reinforcement Learning | Offline Reinforcement Learning promises to learn effective policies from previously-collected, static datasets without the need for exploration. However, existing Q-learning and actor-critic based off-policy RL algorithms fail when bootstrapping from out-of-distribution (OOD) actions or states. We hypothesize that a ke... | withdrawn-rejected-submissions | Being able to give confidence intervals or have a robust measure of uncertainty is very important for offline RL methods. In this work, they proposed a dropout based method to have a measure of uncertainty. The authors provide an significant empirical improvements over other baselines. Nevertheless, as it stands right ... | train | [
"7G90ThkiIIS",
"P0omdO9CwgU",
"W-CQMsFXKj",
"xjfQPcLWMHp",
"QM07lZaVmA_",
"BbLP_a2HJ09",
"hlIGvsh23mE",
"rLauLKNZ5x8",
"TBpi9-yjLrm",
"jjyp3lgBMTP",
"wDEUSe5RkRi",
"vGJ0TQspD2A",
"mrnxCQOf_Ir"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to use an uncertainty-weighted objective for offline RL with BEAR (Kumar et al.) that penalizes the MMD distance between the learned policy and the previous policy. The uncertainty weighted objective weights the policy improvement objective with the variance in the Q-function, where this varian... | [
4,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_7hMenh--8g",
"iclr_2021_7hMenh--8g",
"7G90ThkiIIS",
"iclr_2021_7hMenh--8g",
"BbLP_a2HJ09",
"jjyp3lgBMTP",
"vGJ0TQspD2A",
"mrnxCQOf_Ir",
"wDEUSe5RkRi",
"P0omdO9CwgU",
"iclr_2021_7hMenh--8g",
"iclr_2021_7hMenh--8g",
"iclr_2021_7hMenh--8g"
] |
iclr_2021_FyucNzzMba- | Forward Prediction for Physical Reasoning | Physical reasoning requires forward prediction: the ability to forecast what will happen next given some initial world state. We study the performance of state-of-the-art forward-prediction models in the complex physical-reasoning tasks of the PHYRE benchmark (Bakhtin et al., 2019). We do so by incorporating models tha... | withdrawn-rejected-submissions | This paper evaluates several methods for physical prediction on the PHYRE benchmark, finding that while object-based methods (e.g. IN, Transformer) perform better in terms of predictive accuracy, pixel-based methods (e.g. STN, Deconv) perform better in terms of downstream task performance. The justification is that it ... | train | [
"vOoGdFWcQL0",
"w70ehYfQvR",
"dgHitZNKAlQ",
"5cj6wHywYdr",
"XIuThPtnoEI",
"Ph56juS91dl",
"CXwSjANhCW7",
"9lWNN1ihzL0",
"If_2Q7NNBQk",
"x--l5UYmZH_",
"rkEB4Gku1vc"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"=== Summary\n\nThis paper investigates the performance of several state-of-the-art forward-prediction models in the complex physical-reasoning tasks of the PHYRE benchmark. The authors have provided thorough evaluations of the models by ablating on different ways of representing the state (object-based or pixel-ba... | [
5,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2021_FyucNzzMba-",
"iclr_2021_FyucNzzMba-",
"iclr_2021_FyucNzzMba-",
"iclr_2021_FyucNzzMba-",
"rkEB4Gku1vc",
"dgHitZNKAlQ",
"vOoGdFWcQL0",
"w70ehYfQvR",
"x--l5UYmZH_",
"iclr_2021_FyucNzzMba-",
"iclr_2021_FyucNzzMba-"
] |
iclr_2021_F9sPTWSKznC | DiP Benchmark Tests: Evaluation Benchmarks for Discourse Phenomena in MT | Despite increasing instances of machine translation (MT) systems including extrasentential context information, the evidence for translation quality improvement is sparse, especially for discourse phenomena. Popular metrics like BLEU are not expressive or sensitive enough to capture quality improvements or drops that a... | withdrawn-rejected-submissions | This paper introduces a dataset and a trained evaluation metric for evaluating discourse phenomena for MT. Several context-aware MT models are compared against a sentence level baseline. The paper develops metrics which evaluate the models according to their performance on four discourse phenomena: anaphora, lexical co... | train | [
"9x3kTRV0Poi",
"151dBGlVX3x",
"508uKW-Ato",
"foMM4VcmEq-",
"WquiHsD2b8T",
"EjUYcgbd-Ct",
"Cesva24Gqto",
"rTWX-vAGBS",
"JHQVpNhdXig",
"zpU6hEy5eCl",
"5covfDJb8_p",
"Dy-bXZEy3uv",
"Kt1X1xlTecf",
"7x_I6x6OCqX",
"2Nu_Yw-0EK4",
"dVzaLfSHXFU",
"VNZPADm6OYl"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you! We have now updated the abstract to say, *“Surprisingly, we find that the complex context-aware models that we test do not improve discourse-related translations consistently across languages and phenomena.”* We have also posted a comment that includes a summary of other changes made based on suggestion... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"foMM4VcmEq-",
"WquiHsD2b8T",
"iclr_2021_F9sPTWSKznC",
"Dy-bXZEy3uv",
"Kt1X1xlTecf",
"Cesva24Gqto",
"5covfDJb8_p",
"7x_I6x6OCqX",
"rTWX-vAGBS",
"5covfDJb8_p",
"2Nu_Yw-0EK4",
"dVzaLfSHXFU",
"VNZPADm6OYl",
"iclr_2021_F9sPTWSKznC",
"iclr_2021_F9sPTWSKznC",
"iclr_2021_F9sPTWSKznC",
"iclr... |
iclr_2021_asLT0W1w7Li | Efficient Exploration for Model-based Reinforcement Learning with Continuous States and Actions | Balancing exploration and exploitation is crucial in reinforcement learning (RL). In this paper, we study the model-based posterior sampling algorithm in continuous state-action spaces theoretically and empirically. First, we improve the regret bound: with the assumption that reward and transition functions can be mode... | withdrawn-rejected-submissions | This paper presents a model-based posterior sampling algorithm in continuous state-action spaces theoretically and empirically. The work is interesting and the authors provide numerical evaluations of the proposed method. But the reviewers find the contribution of the work limited.
| train | [
"tmZSR-Q_Qf",
"xZxIPDyi2X",
"UaLb_g15XwI",
"qD5KmXDUp-l",
"Hdeghsp_U-g",
"QnoMAPAwYYP",
"rJqD4DMG8jK",
"TXoHuowMlSz",
"GdvLmzMbVh",
"EUGexGS7uy",
"QGCNJ2ExA0",
"zfN5Cfr1FN",
"6wnlx6pUqJ0",
"1alHfZY87Hp"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Review\nThis paper proposes a new model-based reinforcement learning algorithm named MPC-PSRL. Theoretically, the authors provide regret analysis of the proposed algorithm. The authors also provide empirical results that MPC-PSRL outperforms other previous model-based RL algorithms, such as PETS or MBPO.\nHowever,... | [
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_asLT0W1w7Li",
"qD5KmXDUp-l",
"iclr_2021_asLT0W1w7Li",
"UaLb_g15XwI",
"iclr_2021_asLT0W1w7Li",
"tmZSR-Q_Qf",
"iclr_2021_asLT0W1w7Li",
"QGCNJ2ExA0",
"1alHfZY87Hp",
"6wnlx6pUqJ0",
"zfN5Cfr1FN",
"EUGexGS7uy",
"iclr_2021_asLT0W1w7Li",
"iclr_2021_asLT0W1w7Li"
] |
iclr_2021_Y45i-hDynr | Parameterized Pseudo-Differential Operators for Graph Convolutional Neural Networks | We present a novel graph convolutional layer that is fast, conceptually simple, and provides high accuracy with reduced overfitting. Based on pseudo-differential operators, our layer operates on graphs with relative position information available for each pair of connected nodes. We evaluate our method on a variety of ... | withdrawn-rejected-submissions | All three reviewers expressed consistent concerns on this submission in their reviews. In addition, none of them enthusiastically supported this work during discussion. It is clear this submission does not make the bar of ICLR. Thus a reject is recommended. | train | [
"24n_on_iHDQ",
"ARtPk3J8jC0",
"-HhBuJyL-rt"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new graph convolutional layer for graphs with relative position encoding. These types of graphs occur in applications such as meshes, point clouds, and super-pixel neighborhoods. The method extremely simple and generic.\n\nStrengths:\n+ The method is simple and can be easily integrated into ex... | [
4,
5,
5
] | [
3,
4,
3
] | [
"iclr_2021_Y45i-hDynr",
"iclr_2021_Y45i-hDynr",
"iclr_2021_Y45i-hDynr"
] |
iclr_2021_hzkhOUll63 | Stability analysis of SGD through the normalized loss function | We prove new generalization bounds for stochastic gradient descent for both the convex and non-convex case. Our analysis is based on the stability framework. We analyze stability with respect to the normalized version of the loss function used for training. This leads to investigating a form of angle-wise stability in... | withdrawn-rejected-submissions | The paper considers generalization analysis of SGD using stability analysis. The authors argued the use of normalized version of the loss function, and angle-wise stability. However, the reviewers pointed out that both motivation and novelty of the current work are not strong enough for it to be accepted by ICLR. | train | [
"4RR6wJ9bMcQ",
"A_09VHOaUz",
"wj9hHVXTbBv",
"UMX-k1cB6eB",
"jWA9jiN5Pxg",
"lWIp8olLvty",
"Nn8K1GlJcLr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper discusses the uniformly statbility and on average stability of SGD on convex and nonconvex optimization algorithms. With proper choice of step size, the stabilities can be theoretically bounded. The convex case extends from Hardt et al. For the nonconvex case, the result is concrete and can be applied t... | [
6,
-1,
-1,
-1,
-1,
4,
6
] | [
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_hzkhOUll63",
"lWIp8olLvty",
"4RR6wJ9bMcQ",
"Nn8K1GlJcLr",
"iclr_2021_hzkhOUll63",
"iclr_2021_hzkhOUll63",
"iclr_2021_hzkhOUll63"
] |
iclr_2021_o2ko2D_uvXJ | Group-Connected Multilayer Perceptron Networks | Despite the success of deep learning in domains such as image, voice, and graphs, there has been little progress in deep representation learning for domains without a known structure between features. For instance, a tabular dataset of different demographic and clinical factors where the feature interactions are not gi... | withdrawn-rejected-submissions | The paper proposes an MLP based approach for data without known structure (such as tabular data). At first, the data are partitioned into K blocks in a differentiable way, then the standard MLP is applied to each block. The results are then aggregated recursively to produce the final output.
Pros:
1. Handling less st... | train | [
"wqPuh0S5cMV",
"Lsl5tLSsLiU",
"LlbiWR6VYBw",
"Jm5of0zbxc3",
"qZj50Ev9IVD",
"C5UWUYybPPE",
"bVE6U8ZTIpG",
"-1j5k0S_1Q8",
"Js2Pq1udhtq",
"5HASBmctKg",
"n8orjXG9Ccy",
"Ye0_iEB85Z",
"iqjuvF--xim",
"R_n0tZNY1g8",
"i91vC_wKTNa"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"################################\nPro:\n$\\bullet$ It is an exciting idea to learn the relations in among feature dimensions, group them, and apply the operations within groups to train an MLP. The experiments on various datasets, and important ablations such as the effect of number and size of groups, types of p... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2021_o2ko2D_uvXJ",
"Jm5of0zbxc3",
"qZj50Ev9IVD",
"Ye0_iEB85Z",
"n8orjXG9Ccy",
"bVE6U8ZTIpG",
"iqjuvF--xim",
"iclr_2021_o2ko2D_uvXJ",
"wqPuh0S5cMV",
"wqPuh0S5cMV",
"i91vC_wKTNa",
"i91vC_wKTNa",
"-1j5k0S_1Q8",
"iclr_2021_o2ko2D_uvXJ",
"iclr_2021_o2ko2D_uvXJ"
] |
iclr_2021_Px7xIKHjmMS | Beyond GNNs: A Sample Efficient Architecture for Graph Problems | Despite their popularity in learning problems over graph structured data, existing Graph Neural Networks (GNNs) have inherent limitations for fundamental graph problems such as shortest paths, k-connectivity, minimum spanning tree and minimum cuts. In all these instances, it is known that one needs GNNs of high depth, ... | withdrawn-rejected-submissions | The paper presents a new GNN+ architecture and provide interesting theoretical observations about the architecture. The paper is quite promising and has several interesting insights. However, most of the reviewers believe that the paper is not ready for publication and can be significantly improved by: a) more formal a... | train | [
"4lVSNUxzPq",
"IPnXRSLQkbN",
"1ejc1mWKI2r",
"eS7r8YsuVhN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Overview: \n========\nThe paper suggests a modification to graph neural networks, which is claimed to overcome GNN expressiveness issues recently shown by Loukas (ICLR 2020).\n\nComments:\n=========\nI had multiple difficulties in following the content of the paper, some are detailed next.\n\n-- \"Unfortunately, [... | [
4,
5,
8,
5
] | [
3,
3,
3,
3
] | [
"iclr_2021_Px7xIKHjmMS",
"iclr_2021_Px7xIKHjmMS",
"iclr_2021_Px7xIKHjmMS",
"iclr_2021_Px7xIKHjmMS"
] |
iclr_2021_mfJepDyIUcQ | Safety Verification of Model Based Reinforcement Learning Controllers | Model-based reinforcement learning (RL) has emerged as a promising tool for developing controllers for real world systems (e.g., robotics, autonomous driving, etc.). However, real systems often have constraints imposed on their state space which must be satisfied to ensure the safety of the system and its environment.... | withdrawn-rejected-submissions | # Quality:
The paper makes a good job of presenting the proposed algorithm, which seems interesting and solid.
However, the paper fails to place the proposed approach in the larger context of the existing literature.
In addition, only qualitative results are presented, without any comparison.
As such, it is impossibl... | train | [
"mmCXVZuLafp",
"vNhEi-mdx-2",
"iwuqdtUn9Y4",
"gMPF8reSm7",
"QrecgNfKM_Q",
"_hk0pG6wprQ",
"Un0CaFGaLh-",
"C89iPn9jSyH",
"hPGcGO_Gj3-",
"eatsFJkAPTp",
"b1XWvt2ukTH",
"ZkqEjrYgTEv",
"h6yTDTvc86",
"4gz-1uJ3YC-",
"Wvc1fMTkjnN",
"4qWH34a5y-",
"uRqafS_kMB1",
"XJqRD3dCZcN"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"update after rebuttal: the answers were convincing, and the paper improved. \n\nSummary and Contribution\n\nThis work presents a novel approach for the safety verification of model based RL controllers. It uses a so-called reachability tube analysis to check whether unsafe states may be reached from the initial st... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"iclr_2021_mfJepDyIUcQ",
"XJqRD3dCZcN",
"4qWH34a5y-",
"Un0CaFGaLh-",
"eatsFJkAPTp",
"hPGcGO_Gj3-",
"C89iPn9jSyH",
"uRqafS_kMB1",
"uRqafS_kMB1",
"mmCXVZuLafp",
"mmCXVZuLafp",
"XJqRD3dCZcN",
"XJqRD3dCZcN",
"4qWH34a5y-",
"4qWH34a5y-",
"iclr_2021_mfJepDyIUcQ",
"iclr_2021_mfJepDyIUcQ",
... |
iclr_2021_px0-N3_KjA | D4RL: Datasets for Deep Data-Driven Reinforcement Learning | The offline reinforcement learning (RL) problem, also known as batch RL, refers to the setting where a policy must be learned from a static dataset, without additional online data collection. This setting is compelling as it potentially allows RL methods to take advantage of large, pre-collected datasets, much like how... | withdrawn-rejected-submissions | This paper proposes benchmark tasks for offline reinforcement learning. The paper has major strength and weakness, and it has resulted in very active discussion among reviewers, authors, and other participants.
The major strength includes the following:
- The proposed benchmark is already heavily used in the communit... | train | [
"iBlZrFsxYPE",
"ZfD4Mv25WkV",
"jzLbCMbTa1",
"_Sn87qXh3el",
"s_zv3rSd-1c",
"9FNdH8chjYb",
"Q5xR1H5hFH",
"wq-dAjsJFEs",
"5kcd4sIfh1H",
"k9Ih5gB-oy1",
"1bCUo-S6mf",
"mCoFbaMnYvQ",
"_6XHVdniGNt",
"L0o4qw789AQ",
"WfTQp0ju_3",
"cJZZqMTrso5",
"rW-NRwOiES",
"h2LJ6inSEHJ",
"n_k675ju96r",
... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"of... | [
"Summary:\nIn this paper a test suite of data sets and corresponding benchmarks for offline reinforcement learning is introduced.\nSeveral existing RL benchmarks are used, the results of several algorithms are presented.\nThe authors claim that the benchmarks were specifically designed for the offline setting and a... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
6
] | [
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"iclr_2021_px0-N3_KjA",
"QCYW42LOCVw",
"wq-dAjsJFEs",
"iclr_2021_px0-N3_KjA",
"1bCUo-S6mf",
"wq-dAjsJFEs",
"_6XHVdniGNt",
"iclr_2021_px0-N3_KjA",
"iclr_2021_px0-N3_KjA",
"n_k675ju96r",
"h2LJ6inSEHJ",
"L0o4qw789AQ",
"WfTQp0ju_3",
"gzbW7diZEDA",
"iclr_2021_px0-N3_KjA",
"gzbW7diZEDA",
"... |
iclr_2021_JI2TGOehNT0 | Combining Imitation and Reinforcement Learning with Free Energy Principle | Imitation Learning (IL) and Reinforcement Learning (RL) from high dimensional sensory inputs are often introduced as separate problems, but a more realistic problem setting is how to merge the techniques so that the agent can reduce exploration costs by partially imitating experts at the same time it maximizes its retu... | withdrawn-rejected-submissions | This paper proposes a new algorithm that combines imitation learning and reinforcement learning, based on an extension of the free energy principal. The expert's demonstrations are encoded as a policy prior, and a posterior policy is inferred by maximizing expected rewards. While at a high-level this is a promising dir... | train | [
"BK_EjD3dSY",
"y-9WVIi9fv",
"ZVt-8QyLs4c",
"5PvLqA2R68",
"nXqw0t6hpfY",
"yAUXXkOb9tv",
"-UN09bQeVF3",
"6aOeaNg6_Ni",
"JTk4S_Fv1EY",
"Z-8KAV9hQJS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces two different interpretations of free energy minimization as a form of behavior cloning and reinforcement learning. \n\nStrength:\nThis approach seems to have significant gains on the environments evaluated.\nThe approach appears novel to my knowledge.\n\nWeaknesses:\nI found that I was con... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
2
] | [
"iclr_2021_JI2TGOehNT0",
"ZVt-8QyLs4c",
"5PvLqA2R68",
"6aOeaNg6_Ni",
"JTk4S_Fv1EY",
"Z-8KAV9hQJS",
"BK_EjD3dSY",
"iclr_2021_JI2TGOehNT0",
"iclr_2021_JI2TGOehNT0",
"iclr_2021_JI2TGOehNT0"
] |
iclr_2021_nkap3LV7t7O | Simple and Effective VAE Training with Calibrated Decoders | Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions. However, training VAEs often requires considerable hyperparameter tuning to determine the optimal amount of information retained by the latent variable. We study the impact of calibrated decoders, which learn the ... | withdrawn-rejected-submissions | This paper proposes to (re-)examine VAEs with calibrated uncertainties for the likelihood, which is say VAEs in which the variance is learned rather than chosen as a fixed hyperparameter. The authors argue that doing so provides a reasonable means of automatically navigating the tradeoff between minimizing the distorti... | train | [
"5kwV_3Leffo",
"w3Dbj-OTPu7",
"Ua_IhOT3Hcz",
"Ko4NU2pXGWr",
"_7QcPSAB9j4",
"-pLD2RNXRr9",
"Fr_GlP_5Js7",
"FSUibfFYtlE",
"5pSJbwLl9Zj",
"thVPPqUvBbc",
"0rBmrhoScje",
"mwch9h_wqDE"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper claims two contributions: 1. Proposes a connection between beta-VAE with fixed variance Gaussian decoder and VAE with variable variance Gaussian decoder 2. Proposes to optimize the variance of a Gaussian decoder\n\nThis paper uses a simple but useful method, but I have some concerns about novelty and so... | [
5,
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2021_nkap3LV7t7O",
"iclr_2021_nkap3LV7t7O",
"5pSJbwLl9Zj",
"_7QcPSAB9j4",
"0rBmrhoScje",
"iclr_2021_nkap3LV7t7O",
"thVPPqUvBbc",
"iclr_2021_nkap3LV7t7O",
"mwch9h_wqDE",
"-pLD2RNXRr9",
"5kwV_3Leffo",
"FSUibfFYtlE"
] |
iclr_2021_Ux5zdAir9-U | GraphLog: A Benchmark for Measuring Logical Generalization in Graph Neural Networks | Relational inductive biases have a key role in building learning agents that can generalize and reason in a compositional manner. While relational learning algorithms such as graph neural networks (GNNs) show promise, we do not understand their effectiveness to adapt to new tasks. In this work, we study the logical gen... | withdrawn-rejected-submissions | The reviewers point out several important issues to be addressed, including comparing to other methods that can address the "combinatorial generalization" problems studied (one reviewer points out the crucial difference from "compositional generalization" studied before), addressing the gap between the proposed dataset... | train | [
"wh0Pa6G5PcK",
"QZ3oEXNdXgE",
"z9uv7vPVWl8",
"S6MADpY0Ca8",
"uaUgnaU7OEm",
"wxxppSbdcMP",
"pUu8rEGIaqG",
"N9Qe3ujrrp4",
"jB6PLYteZM",
"SayKprlp89V",
"swgY5mPW__9",
"ZpnGxhJw-5g",
"6sZ7pMioezd",
"a3n5hP_ZZg2",
"QvLIOScP5Q"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I liked the idea of GraphLog; it seems like an interesting and potentially very useful dataset. The \"results\" seemed little to do with the dataset (apart from being enabled by the dataset), but were comparing some models that were neither the current state-of-the-art or particularly novel. I could imagine a pape... | [
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_Ux5zdAir9-U",
"iclr_2021_Ux5zdAir9-U",
"iclr_2021_Ux5zdAir9-U",
"uaUgnaU7OEm",
"jB6PLYteZM",
"6sZ7pMioezd",
"N9Qe3ujrrp4",
"SayKprlp89V",
"QZ3oEXNdXgE",
"a3n5hP_ZZg2",
"QvLIOScP5Q",
"6sZ7pMioezd",
"wh0Pa6G5PcK",
"iclr_2021_Ux5zdAir9-U",
"iclr_2021_Ux5zdAir9-U"
] |
iclr_2021_9D_Ovq4Mgho | Network-Agnostic Knowledge Transfer for Medical Image Segmentation | Conventional transfer learning leverages weights of pre-trained networks, but mandates the need for similar neural architectures. Alternatively, knowledge distillation can transfer knowledge between heterogeneous networks but often requires access to the original training data or additional generative networks. Knowled... | withdrawn-rejected-submissions | A majority of the reviewers find the paper lacks novelty and provides an insufficient discussion of the state-of-the-art in knowledge distillation and student teacher training to warrant publication.
The approach is quite narrow to the application domain and the paper does not provide novel insights on how to chose a g... | val | [
"E--4oo3D6V2",
"UM87V1rIBAW",
"utX7IyFFyPm",
"KfIiRaQ47O",
"UD987Px9p7X",
"Lyihgg6RbzD",
"7FwoSYnPMWA",
"-6rkWfXRaw5",
"Lf6stxW5EUN",
"kyjYw8PPnAm"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer their insightful comments. Following these suggestions, we have carefully and extensively improved the paper. \n\n~~~\nReviewer's comment: Most importantly the paper seems to be lacking proper positioning within the space of knowledge distillation and student-teacher training which leads to a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"Lf6stxW5EUN",
"Lf6stxW5EUN",
"-6rkWfXRaw5",
"Lf6stxW5EUN",
"Lf6stxW5EUN",
"kyjYw8PPnAm",
"iclr_2021_9D_Ovq4Mgho",
"iclr_2021_9D_Ovq4Mgho",
"iclr_2021_9D_Ovq4Mgho",
"iclr_2021_9D_Ovq4Mgho"
] |
iclr_2021_0BaWDGvCa5p | A Provably Convergent and Practical Algorithm for Min-Max Optimization with Applications to GANs | We present a first-order algorithm for nonconvex-nonconcave min-max optimization problems such as those that arise in training GANs. Our algorithm provably converges in poly(d,L,b) steps for any loss function f:Rd×Rd→R which is b-bounded with L-Lipschitz gradient. To achieve convergence, we 1) give a novel approximati... | withdrawn-rejected-submissions | All reviewers appreciated the main idea in the paper for solving the nonconvex-nonconcave minimax problems, which is deemed an extremely hard open problem. However, as R1 also pointed out, neither the theoretical nor the experimental results seem particularly strong, given that many variations of GDA and theoretical un... | train | [
"xDxo0HWp7Tl",
"6W_aUPVuKLR",
"jH1LSBEetTn",
"wJZi_DL_h9O",
"vMGg1WBG1jX",
"0EAH9Q2-pMS",
"ePFZnGe4l3G",
"xooePqgrD6h",
"WP6VUcIJcBQ",
"L2yLtRmJtxE",
"uQnjMRgEnVN",
"TL5WmAu2YON",
"DlcquFUQr6b"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you again for your time in reviewing the paper and for engaging in a discussion. We emphasize that the unbounded function you mention is *not* a “negative example” to our main result. As we have clearly stated in our original submission, our main result (Theorem 2.3) guarantees convergence for functions w... | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"6W_aUPVuKLR",
"jH1LSBEetTn",
"vMGg1WBG1jX",
"iclr_2021_0BaWDGvCa5p",
"uQnjMRgEnVN",
"wJZi_DL_h9O",
"TL5WmAu2YON",
"DlcquFUQr6b",
"TL5WmAu2YON",
"DlcquFUQr6b",
"wJZi_DL_h9O",
"iclr_2021_0BaWDGvCa5p",
"iclr_2021_0BaWDGvCa5p"
] |
iclr_2021_tckGH8K9y6o | Symmetric Wasserstein Autoencoders | Leveraging the framework of Optimal Transport, we introduce a new family of generative autoencoders with a learnable prior, called Symmetric Wasserstein Autoencoders (SWAEs). We propose to symmetrically match the joint distributions of the observed data and the latent representation induced by the encoder and the decod... | withdrawn-rejected-submissions | In this paper, the authors proposed a new variant of the Wasserstein autoencoder (WAE), which matches the joint distribution of data and the latent codes induced by the encoder and the joint distribution induced by the decoder in the framework of optimal transport. Because of matching the distributions that are not con... | train | [
"QQyE01dU6n5",
"mVD4Xrvu6WC",
"boCHSS9UnQ",
"ewRLLzemipA",
"YIoRsuoIp9c",
"qCRa4ZX9ZWr",
"9rO9ggLVVzx",
"N6HBl1ivLwF",
"C9gaiZC5oj",
"S3LWKJ_t6XN"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"##########################################################################\n\nSummary:\n \nThis works proposes an new auto-encoder variant based on an Optimal Transport (OT) penalty. While there are many such previous works of OT and auto-encoders, this work proposes a joint OT penalty on data and latent space. A... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_tckGH8K9y6o",
"iclr_2021_tckGH8K9y6o",
"ewRLLzemipA",
"QQyE01dU6n5",
"N6HBl1ivLwF",
"C9gaiZC5oj",
"S3LWKJ_t6XN",
"iclr_2021_tckGH8K9y6o",
"iclr_2021_tckGH8K9y6o",
"iclr_2021_tckGH8K9y6o"
] |
iclr_2021_dFBRrTMjlyL | Bidirectionally Self-Normalizing Neural Networks | The problem of exploding and vanishing gradients has been a long-standing obstacle that hinders the effective training of neural networks. Despite various tricks and techniques that have been employed to alleviate the problem in practice, there still lacks satisfactory theories or provable solutions. In this paper, we ... | withdrawn-rejected-submissions | Three knowledgeable referees rate this paper ok but not good enough or borderline positive (4,4,6), and one fairly confident referee rates it borderline positive 6. The referees discussed the authors' responses and, while they considered the idea and some of the theory good, they remain concerned, in particular about t... | train | [
"6vxzD-J49xX",
"EysfBxWulyA",
"nOwQInCYuya",
"ISUDe5bVvLc",
"pTSLSBagIoe",
"b7GaFdZXG81",
"CiZpzKfXQZ",
"6YbssJTKZiT"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the appreciation of our work and the critical reading.\n\n**1. The only (minor) issue I have on the presentation side is the name “bidirectional self-normalizing neural networks”.** \nWe agree with the reviewer and have changed the name to “bidirectionally self-normalizing neural network... | [
-1,
-1,
-1,
-1,
6,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"b7GaFdZXG81",
"6YbssJTKZiT",
"pTSLSBagIoe",
"CiZpzKfXQZ",
"iclr_2021_dFBRrTMjlyL",
"iclr_2021_dFBRrTMjlyL",
"iclr_2021_dFBRrTMjlyL",
"iclr_2021_dFBRrTMjlyL"
] |
iclr_2021_Cn706AbJaKW | An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process | Mainstream machine learning conferences have seen a dramatic increase in the number of participants, along with a growing range of perspectives, in recent years. Members of the machine learning community are likely to overhear allegations ranging from randomness of acceptance decisions to institutional bias. In this ... | withdrawn-rejected-submissions | Reviewers appreciated the care and substantial effort that went into the paper, for instance:
AR3) I think it's of good value for the community to see and discuss the paper in the conference.
AR4) would be quite valuable for the senior members of the community to read and be familiar with.
The main argument for reject... | train | [
"pk0N1mUP6Mm",
"mitEwGGQart",
"b75LFjHv8d",
"pIY-QRnHfk2",
"a0NYl8J7owg",
"soTZcQJo7bR",
"U8Xb4cYv88G",
"P3360aJi8gQ",
"CCzgPzqxJJ4"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your feedback. We address your comments below.\n\n1) In the appendix, we examine the statistical assumptions of ANOVA, and we included a plot of variance as a function of reviewer score in order to justify this choice. We found that variance was similar for papers with moderate and borderline scores... | [
-1,
-1,
-1,
-1,
-1,
6,
3,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"soTZcQJo7bR",
"P3360aJi8gQ",
"U8Xb4cYv88G",
"CCzgPzqxJJ4",
"pIY-QRnHfk2",
"iclr_2021_Cn706AbJaKW",
"iclr_2021_Cn706AbJaKW",
"iclr_2021_Cn706AbJaKW",
"iclr_2021_Cn706AbJaKW"
] |
iclr_2021_GJnpCsLQThe | Gradient Descent Ascent for Min-Max Problems on Riemannian Manifolds | In the paper, we study a class of useful non-convex minimax optimization problems on Riemannian manifolds and propose a class of Riemannian gradient descent ascent algorithms to solve these minimax problems. Specifically, we propose a new
Riemannian gradient descent ascent (RGDA) algorithm for the deterministic m... | withdrawn-rejected-submissions | In this paper, the authors provide a Riemannian version of gradient descent/ascent for min-max problems on manifolds. Assuming a tractable retraction mapping for the descent/ascent step, the authors provide a complexity analysis for finding a (local) saddle point in the spirit of Lin et al. (2020).
The paper received ... | train | [
"PNgS_C8KaML",
"SguJsxVzl1x",
"Tjn8c0lr4Qp",
"27Mo65SmBY",
"y6iJm3L0_A",
"sF0jCQ7ifA9",
"OVlAShd1Gi",
"5x4nL43YEH_"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"1.Summarize what the paper claims to do/contribute. Be positive and generous.\n\nThis paper considers solving the minimax saddle point of the form \\min_{x\\in X}\\max_{y\\in Y} f(x,y), where X is a Riemannian manifold and Y is a closed convex set. The objective function f is nonconvex in x and is strongly-concave... | [
5,
-1,
-1,
-1,
-1,
7,
4,
4
] | [
4,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2021_GJnpCsLQThe",
"PNgS_C8KaML",
"OVlAShd1Gi",
"5x4nL43YEH_",
"sF0jCQ7ifA9",
"iclr_2021_GJnpCsLQThe",
"iclr_2021_GJnpCsLQThe",
"iclr_2021_GJnpCsLQThe"
] |
iclr_2021_LjFGgI-_tT0 | BayesAdapter: Being Bayesian, Inexpensively and Robustly, via Bayesian Fine-tuning | Despite their theoretical appealingness, Bayesian neural networks (BNNs) are falling far behind in terms of adoption in real-world applications compared with normal NNs, mainly due to their limited scalability in training, and low fidelity in their uncertainty estimates. In this work, we develop a new framework, named ... | withdrawn-rejected-submissions | This paper aims at improving the adoption of Bayesian NNs by providing a practical and user friendly variational inference method. The main ideas consist of two parts:
1. Warm-start the variational inference from a pre-trained deterministic NN. It takes advantage of existing deep learning library features for easy impl... | train | [
"81d-CShHFdO",
"a1bhKlPhrDu",
"Z8TrBxJQl2X",
"hAn0xqaK5cb",
"Zezt6IvxE1Z",
"wDJ9Bbj4la",
"-wSyqO658-I",
"drVduZIPgB",
"1Cjil7LRnsc",
"jCMENHJXCLE",
"rkxucHwaiLC",
"cLm5TIAu9Eh",
"Q1uDi2BWcXX",
"hh79F4aS2Ad",
"lpsZ81g1fVM",
"71PohIdAvr-",
"vGU80_i202L",
"hUiMEzKBIpd",
"cf1x6SffANo... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
"Thank you again for the immediate reply!\n\n- We kindly point out that almost all the existing variational BNNs perform *from-scratch* learning (e.g., the noisy KFAC, noisy EKFAC, and VOGN mentioned in [Wenzel et.al.]). They typically cannot demonstrate good performance without cold posterior, thus we conclude th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"a1bhKlPhrDu",
"Z8TrBxJQl2X",
"hAn0xqaK5cb",
"vGU80_i202L",
"hh79F4aS2Ad",
"iclr_2021_LjFGgI-_tT0",
"jCMENHJXCLE",
"1Cjil7LRnsc",
"cLm5TIAu9Eh",
"Q1uDi2BWcXX",
"iclr_2021_LjFGgI-_tT0",
"hUiMEzKBIpd",
"hUiMEzKBIpd",
"cf1x6SffANo",
"dtybQUEcJ1L",
"dtybQUEcJ1L",
"5quiF9Q1TYK",
"iclr_2... |
iclr_2021_tJz_QUXB7C | Generating Plannable Lifted Action Models for Visually Generated Logical Predicates | We propose FOSAE++, an unsupervised end-to-end neural system that generates a compact discrete state transition model (dynamics / action model) from raw visual observations. Our representation can be exported to Planning Domain Description Language (PDDL), allowing symbolic state-of-the-art classical planners to perfor... | withdrawn-rejected-submissions | This paper described a system for deriving PDDL (Planning Domain Description Language) operator descriptions from unlabeled visual image pairs. The goal is to construct STRIPS-like descriptions with preconditions, add lists and delete lists for operators that can explain the state transitions seem in the image pairs. ... | train | [
"IPIoGgCo6Fv",
"46YDsi-KYm",
"ZXpN5AGj_bq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work presents FOSAE++, an end-to-end system capable of producing \"lifted\" action models provided only bounding box annotations of image pairs before and after an unknown action is executed. Building on recent work in the space, the primary contribution of this work is to generate PDDL action rules. To accom... | [
6,
5,
6
] | [
3,
1,
4
] | [
"iclr_2021_tJz_QUXB7C",
"iclr_2021_tJz_QUXB7C",
"iclr_2021_tJz_QUXB7C"
] |
iclr_2021_zfO1MwBFu- | Information Theoretic Regularization for Learning Global Features by Sequential VAE | Sequential variational autoencoders (VAEs) with global latent variable z have been studied for the purpose of disentangling the global features of data, which is useful in many downstream tasks. To assist the sequential VAEs further in obtaining meaningful z, an auxiliary loss that maximizes the mutual information (MI)... | withdrawn-rejected-submissions | This paper presents a representation method for time series data in the sequential VAE, where the global feature z and local features s are better disentangled. The intuition behind learning z is to maximize the mutual information between z and input x, while minimizing the mutual information between z and s. The seco... | val | [
"8ytlnHIEGln",
"WIoGXqSMrOU",
"DSIrdcP0e_v",
"7N4Q0WucCAj",
"6rWYhPewfe6",
"6yyj-VOoDag",
"PsU_QZq1Lrs",
"tzFJPgSNiws",
"qooXFoswvho",
"cqKE9-mUNo",
"lQtpn3_q_Bt",
"feyUKsV19a",
"SrW2Am3FAxd",
"uzgipdk8bV",
"V3AU31b2VTz",
"p9ye8IZyHfo"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Update after rebuttal:\nI agree with Reviewer 5 that this paper has good ingredients, and the discussion and update of the draft clarifies the novelty and provides better review on the related work. However, the experiments presented in this paper are not very comprehensive, particularly the baselines and the abla... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_zfO1MwBFu-",
"iclr_2021_zfO1MwBFu-",
"tzFJPgSNiws",
"PsU_QZq1Lrs",
"qooXFoswvho",
"cqKE9-mUNo",
"lQtpn3_q_Bt",
"feyUKsV19a",
"SrW2Am3FAxd",
"WIoGXqSMrOU",
"WIoGXqSMrOU",
"WIoGXqSMrOU",
"WIoGXqSMrOU",
"8ytlnHIEGln",
"p9ye8IZyHfo",
"iclr_2021_zfO1MwBFu-"
] |
iclr_2021_avBunqDXFS | Memory-Efficient Semi-Supervised Continual Learning: The World is its Own Replay Buffer | Rehearsal is a critical component for class-incremental continual learning, yet it requires a substantial memory budget. Our work investigates whether we can significantly reduce this memory budget by leveraging unlabeled data from an agent's environment in a realistic and challenging continual learning paradigm. Speci... | withdrawn-rejected-submissions | This paper proposes a semi-supervised setting to reduce memory budget in replay-based continual learning.
It uses unlabeled data in the environment for replay, which requires no storage, and generates pseudo-labels by connecting unlabeled data to labeled data.
The method was validated on the proposed tasks.
Pros... | val | [
"HEP6e7aiTZ",
"L1bB59KXOOn",
"WjsoZ12Ocw4",
"pDHszy-EhB3",
"swLlXfYCdxL",
"2Qqdt8sH_es",
"YFqX81I2we9",
"r3akekB1XOV",
"2xcqersB6Nk",
"mK0QkJ4w78_",
"Uit4bXrTyef",
"YkvwTuOEznM",
"q7CkHseXkuY",
"SfYjc_XlJzq",
"z-PeszUupN7",
"3zdGcKtO1QT"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper comes up with a novel scenario where the unlabled data are available as well as labeled data in the continual learning scenario.\n\n### Overall\n- Based on my understanding, the major contribution is the proposal of a task scenario, aka, experimental setting. The novelty of DistillMatch is an incrementa... | [
4,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_avBunqDXFS",
"iclr_2021_avBunqDXFS",
"pDHszy-EhB3",
"HEP6e7aiTZ",
"2Qqdt8sH_es",
"3zdGcKtO1QT",
"r3akekB1XOV",
"L1bB59KXOOn",
"mK0QkJ4w78_",
"z-PeszUupN7",
"iclr_2021_avBunqDXFS",
"SfYjc_XlJzq",
"SfYjc_XlJzq",
"YFqX81I2we9",
"iclr_2021_avBunqDXFS",
"iclr_2021_avBunqDXFS"
] |
iclr_2021_exa2mDqPb5E | CIGMO: Learning categorical invariant deep generative models from grouped data | Images of general objects are often composed of three hidden factors: category (e.g., car or chair), shape (e.g., particular car form), and view (e.g., 3D orientation). While there have been many disentangling models that can discover either a category or shape factor separately from a view factor, such models typica... | withdrawn-rejected-submissions | Description:
The paper presents a weakly-supervised model CIGMO for disentangling category, shape and view information from images. Label information is not needed as the weak supervision is done by grouping together different views of the same object. They show that this outperforms other techniques on tasks such as i... | train | [
"zdpPhJDnY5y",
"wS350HKyXW4",
"qV72oeJRvhr",
"JrMQUnb8X-_",
"UhQSWtQqkKX",
"tmPQJN3Lhbb",
"8Pp_ob19UEx",
"KuO96g3IQbe",
"V_2efhR_ja"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"[This note has been updated to be more informative.]\n\nWe thank very much the reviewers for valuable comments and questions. We have revised the manuscript based on these and this made a significant improvement in solidity and clarity. \n\nThe major changes in the revision are as follows.\n1. One reviewer commen... | [
-1,
-1,
-1,
-1,
-1,
4,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"iclr_2021_exa2mDqPb5E",
"tmPQJN3Lhbb",
"8Pp_ob19UEx",
"KuO96g3IQbe",
"V_2efhR_ja",
"iclr_2021_exa2mDqPb5E",
"iclr_2021_exa2mDqPb5E",
"iclr_2021_exa2mDqPb5E",
"iclr_2021_exa2mDqPb5E"
] |
iclr_2021_7ehDLD1yoE0 | STRATA: Simple, Gradient-free Attacks for Models of Code | Adversarial examples are imperceptible perturbations in the input to a neural model that result in misclassification. Generating adversarial examples for source code poses an additional challenge compared to the domains of images and natural language, because source code perturbations must adhere to strict semantic gui... | withdrawn-rejected-submissions | The paper gives a gradient-free method for generating adversarial examples for the code2seq model of source code.
While the reviewers found the high-level objectives interesting, the experimental evaluation leaves quite a bit to be desired. (Please see the reviews for more details.) As a result, the paper cannot be ac... | train | [
"0BFuWT48fn9",
"gi4pTnJowd2",
"Uk0cpkSr9MH",
"D89mJMKetl_",
"4evD3OVKbDw",
"nSKdDeptArF",
"27Gn8C4Pid7",
"fCuTWq6RM7",
"Ad8avZV9O0a",
"BUl926O9E0",
"aRPNYd7A8Ys"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for taking the reviewers' comments on board and for amending the paper. I also appreciate your detailed response to all the points that were raised. I believe the edits have brought additional value to the paper.",
"We would like to thank all of the reviewers again for their helpful feedback. We have p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"4evD3OVKbDw",
"iclr_2021_7ehDLD1yoE0",
"iclr_2021_7ehDLD1yoE0",
"fCuTWq6RM7",
"Ad8avZV9O0a",
"BUl926O9E0",
"aRPNYd7A8Ys",
"iclr_2021_7ehDLD1yoE0",
"iclr_2021_7ehDLD1yoE0",
"iclr_2021_7ehDLD1yoE0",
"iclr_2021_7ehDLD1yoE0"
] |
iclr_2021_sjGBjudWib | FAST GRAPH ATTENTION NETWORKS USING EFFECTIVE RESISTANCE BASED GRAPH SPARSIFICATION | The attention mechanism has demonstrated superior performance for inference over nodes in graph neural networks (GNNs); however, it results in a high computational burden during both training and inference. We propose FastGAT, a method to make attention based GNNs lightweight by using spectral sparsification to genera... | withdrawn-rejected-submissions | Two reviewers recommend rejection, whereas two reviewers slightly lean towards acceptance. All reviewers agree that the paper tackles an important problem, and the proposed direction holds promise and is worth exploring. However, the reviewers raised concerns about the novelty of the proposed approach [R3,R4], the appl... | train | [
"lO1KmwTAbuf",
"aznoRKJ68qt",
"gkTL9mWfbn1",
"RU_gTVQGF7-",
"694FW0EyD-I",
"-b39vM-Wjf",
"PYGkgv0ny6",
"M0gSrESgSRZ",
"debmX0bcx9",
"MwX-397Na4F",
"hQheFC07LP_",
"bVzQv3qBlZo",
"cv2rztNXndZ",
"SXxlHLAfMYv",
"sJFiYytJFi",
"E1jsm0Mzqak"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"## Summary\nThis paper proposes a paradigm which speeds up the training/inference time of GATs while not compromising too much performance. The method adopts a layerwise sampling procedure. In particular. The authors propose to sample a sub-portion of edges for each layer based on their effective resistance. Such ... | [
5,
6,
4,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_sjGBjudWib",
"iclr_2021_sjGBjudWib",
"iclr_2021_sjGBjudWib",
"iclr_2021_sjGBjudWib",
"-b39vM-Wjf",
"PYGkgv0ny6",
"debmX0bcx9",
"MwX-397Na4F",
"bVzQv3qBlZo",
"hQheFC07LP_",
"RU_gTVQGF7-",
"gkTL9mWfbn1",
"aznoRKJ68qt",
"lO1KmwTAbuf",
"E1jsm0Mzqak",
"iclr_2021_sjGBjudWib"
] |
iclr_2021_m0ECRXO6QlP | Supervision Accelerates Pre-training in Contrastive Semi-Supervised Learning of Visual Representations | We investigate a strategy for improving the efficiency of contrastive learning of visual representations by leveraging a small amount of supervised information during pre-training. We propose a semi-supervised loss, SuNCEt, based on noise-contrastive estimation and neighbourhood component analysis, that aims to disting... | withdrawn-rejected-submissions | The paper proposes to speed-up self-supervised learning for semi-supervised learning by combining self-supervised pretraining and supervised fine-tuning into a single objective. The proposed supervised loss builds on Neighbourhood Components Analysis and soft nearest neighbor losses. Most reviewers are concerned about ... | train | [
"Ap8xym5AMA7",
"LXZySP3r53",
"11cQYfQI6QQ",
"JgOR0uAcgr4",
"MROvan0Z0y",
"08ZA1kZftri",
"arZ8DMjinfG",
"_7q3-QZelKJ",
"UackXCZunNf",
"kTY0UYywGsc",
"4ua1vt1w_Ok"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I appreciate the authors' detailed response. However, after reading it, I still feel that this paper is not strong enough in terms of technical novelty or performance improvement. It is not surprising to me that pre-training can accelerate performance. Therefore, I remain my original score.",
"Dear reviewers and... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
3
] | [
"MROvan0Z0y",
"iclr_2021_m0ECRXO6QlP",
"4ua1vt1w_Ok",
"UackXCZunNf",
"_7q3-QZelKJ",
"iclr_2021_m0ECRXO6QlP",
"kTY0UYywGsc",
"iclr_2021_m0ECRXO6QlP",
"iclr_2021_m0ECRXO6QlP",
"iclr_2021_m0ECRXO6QlP",
"iclr_2021_m0ECRXO6QlP"
] |
iclr_2021_H92-E4kFwbR | Composite Adversarial Training for Multiple Adversarial Perturbations and Beyond | One intriguing property of deep neural networks (DNNs) is their vulnerability to adversarial perturbations. Despite the plethora of work on defending against individual perturbation models, improving DNN robustness against the combinations of multiple perturbations is still fairly under-studied. In this paper, we propo... | withdrawn-rejected-submissions | I thank authors and reviewers for discussions. Reviewers found the paper (specially the CAT-r method proposed in the rebuttal period) interesting but there are some remaining concerns about the significance of the results and experiments. Given all, I think the paper still needs a bit of more work before being accepted... | train | [
"AVCMFSvCEIr",
"xWSQ9IJy5M1",
"eR_h1NmfbV8",
"-29JJRL0g_",
"5UU6PthcRn",
"QLYokYQG5xQ",
"YJq7zb_hMJS",
"wA3w_pMvSTs"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors propose a method for dealing with *composite* adversarial attacks, which are defined as a sequence of perturbation operators each applying some constrained perturbation to the output of the previous operator. Their method models the composed adversarial examples $x^*$ as the sum of the unperturbed exam... | [
5,
6,
5,
-1,
-1,
-1,
-1,
5
] | [
3,
3,
3,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_H92-E4kFwbR",
"iclr_2021_H92-E4kFwbR",
"iclr_2021_H92-E4kFwbR",
"eR_h1NmfbV8",
"iclr_2021_H92-E4kFwbR",
"xWSQ9IJy5M1",
"wA3w_pMvSTs",
"iclr_2021_H92-E4kFwbR"
] |
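In the rows above, the per-comment arrays (`review_ratings`, `review_confidences`) use `-1` as a sentinel for entries that are not actual reviews (e.g. author replies or meta comments), so scores must be filtered before aggregating. A minimal sketch of handling one row of this schema — the field names mirror the dataset header, but the record below is illustrative, not a real row:

```python
# Hypothetical single record following the preview's schema; values are
# illustrative stand-ins, not copied from the real dataset.
record = {
    "paper_id": "iclr_2021_example",
    "paper_acceptance": "withdrawn-rejected-submissions",
    "review_writers": ["official_reviewer", "author", "author", "official_reviewer"],
    # -1 marks non-review comments (author replies), which carry no score.
    "review_ratings": [5, -1, -1, 7],
}

def mean_reviewer_rating(rec):
    """Average rating over genuine reviews, skipping the -1 sentinel."""
    scores = [r for r in rec["review_ratings"] if r != -1]
    return sum(scores) / len(scores) if scores else None

print(mean_reviewer_rating(record))  # 6.0
```

The same filtering applies to `review_confidences`; aligning either array with `review_writers` by index recovers which score came from which participant type.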