paper_id (string, 19–21 chars) | paper_title (string, 8–170 chars) | paper_abstract (string, 8–5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29–10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2021__b8l7rVPe8z | Relevance Attack on Detectors | This paper focuses on high-transferable adversarial attacks on detectors, which are hard to attack in a black-box manner, because of their multiple-output characteristics and the diversity across architectures. To pursue a high attack transferability, one plausible way is to find a common property across detectors, whi... | withdrawn-rejected-submissions | This paper proposes a transferable adversarial attack method for object detection by using the relevance map. Four reviewers provided detailed reviews: 2 of them rated “Ok but not good enough - rejection”, 1 rated “Marginally below” and 1 rated “Marginally above”. While reviewers consider the paper well written and usi... | train | [
"BYjkq8bf2ot",
"9gvSGcW_XCr",
"qo7nn2IJay",
"oUbfVy-y-iT",
"TlSlDOWd71X",
"F-pJXaTIQb2",
"SGVPSZPND3h",
"AHqOsybtVLN",
"Adlnb8Kbl2o",
"M1non0gR8rB",
"TJmcGcoKjnY"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Program Chairs, Area Chairs, and Reviewers,\n\nFirst of all, we would like to thank you for your time, constructive critiques, and valuable suggestions. Your input contributed to a significant improvement of the paper and to our proposed method as well. We did our best to address your comments on our revised ... | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
4,
3,
4
] | [
"iclr_2021__b8l7rVPe8z",
"TJmcGcoKjnY",
"F-pJXaTIQb2",
"TlSlDOWd71X",
"qo7nn2IJay",
"iclr_2021__b8l7rVPe8z",
"Adlnb8Kbl2o",
"M1non0gR8rB",
"iclr_2021__b8l7rVPe8z",
"iclr_2021__b8l7rVPe8z",
"iclr_2021__b8l7rVPe8z"
] |
iclr_2021_RuUdMAU-XbI | Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks | One practice of employing deep neural networks is to apply the same architecture to all the input instances. However, a fixed architecture may not be representative enough for data with high diversity. To promote the model capacity, existing approaches usually employ larger convolutional kernels or deeper network struc... | withdrawn-rejected-submissions | The idea presented in the paper is interesting and has caught the attention of the reviewers. However, there seems to be only tepid support for acceptance, with a reviewer championing rejection.
There is little novelty in the approach but empirical validation shows results that consistently improve over selected baseli... | train | [
"mLzmYfChPw",
"OEZKq5tY-e",
"hjBoiudtxt",
"f89UyxcKPFV",
"d34iazgz45v",
"m92gHQteQOR",
"EldKO57oxqi",
"KRgw27ODcWl",
"PSnEh6-4_ub",
"unQg7n7EzHR",
"Ls_Z0Ge92k",
"0ZLdWwSnSdY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes a novel method, called Dynamic Graph Network (DG-Net), for optimizing the architecture of a neural network. Building on the previous work introduced by (Xie et al., 2019), the authors propose to consider the network as a complete directed acyclic graph (DAG). Then, the edge weights of the DAG ar... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_RuUdMAU-XbI",
"iclr_2021_RuUdMAU-XbI",
"d34iazgz45v",
"0ZLdWwSnSdY",
"OEZKq5tY-e",
"OEZKq5tY-e",
"Ls_Z0Ge92k",
"Ls_Z0Ge92k",
"mLzmYfChPw",
"iclr_2021_RuUdMAU-XbI",
"iclr_2021_RuUdMAU-XbI",
"iclr_2021_RuUdMAU-XbI"
] |
iclr_2021_TTLwOwNkOfx | Learning Hyperbolic Representations for Unsupervised 3D Segmentation | There exists a need for unsupervised 3D segmentation on complex volumetric data, particularly when annotation ability is limited or discovery of new categories is desired. Using the observation that much of 3D volumetric data is innately hierarchical, we propose learning effective representations of 3D patches for unsu... | withdrawn-rejected-submissions | This application paper applies hyperbolic convolutions in VAE learning to perform unsupervised 3D segmentation.
Addition of these components enables performance improvements in the unsupervised segmentation task.
Overall, the paper is borderline and the reviewers mention the limited novelty of the approach, which larg... | train | [
"SRb7P72fqKA",
"AxHgzjsAszj",
"WALQ8XJESw",
"jLw1K5gSRRR",
"_Kaszzs9X_",
"LVF7_3oS2-p",
"EzoneBSA-WJ",
"pGnEAh8XnEj",
"FjfWacR0LHE",
"r6TFhYrfZJ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the clarifications. I will keep my scores although I agree with other reviewers that there is not much novelty in terms of theory since the model is an aggregation of different existing methods into a single pipeline.\nI am not an expert in biomedical imaging so I can't judge the relevance of the app... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
3
] | [
"LVF7_3oS2-p",
"_Kaszzs9X_",
"FjfWacR0LHE",
"r6TFhYrfZJ",
"EzoneBSA-WJ",
"pGnEAh8XnEj",
"iclr_2021_TTLwOwNkOfx",
"iclr_2021_TTLwOwNkOfx",
"iclr_2021_TTLwOwNkOfx",
"iclr_2021_TTLwOwNkOfx"
] |
iclr_2021_vkxGQB9f2Vg | Fourier Stochastic Backpropagation | Backpropagating gradients through random variables is at the heart of numerous machine learning applications. In this paper, we present a general framework for deriving stochastic backpropagation rules for any distribution, discrete or continuous. Our approach exploits the link between the characteristic function and t... | withdrawn-rejected-submissions | The focus of the paper is stochastic backpropagation for both continuous and discrete random variables. By using standard results from Fourier analysis the authors rewrite the corresponding gradients in an infinite weighted sum form ((3) and (9)), extending the results of (Rezende et al. 2014) and (Fellows. et al., 201... | train | [
"tMdzpnLfcot",
"fNpjtBk8Wfr",
"baPdFfeMSQQ",
"UAT4PskhLbL",
"Up7T_KOR9of",
"suxGnZe_oaS",
"ysz2WJTd_0",
"77xFHsuW7al",
"7uPMzwYojMd",
"m1EA9JHYyjt",
"F7mt02-eCg",
"DMEVKMp8Azt"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**Update**\nThe errors in the paper were fixed, and the discussion was improved.\nIt is very rare to see a novel result as fundamental as the one presented in this paper, and I believe this puts it in the top 5% of accepted papers, so I have updated my score accordingly. \n\nI think the discussion and experimentat... | [
10,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2021_vkxGQB9f2Vg",
"baPdFfeMSQQ",
"7uPMzwYojMd",
"F7mt02-eCg",
"suxGnZe_oaS",
"7uPMzwYojMd",
"DMEVKMp8Azt",
"m1EA9JHYyjt",
"tMdzpnLfcot",
"iclr_2021_vkxGQB9f2Vg",
"iclr_2021_vkxGQB9f2Vg",
"iclr_2021_vkxGQB9f2Vg"
] |
iclr_2021_JvPsKam58LX | Robust Multi-Agent Reinforcement Learning Driven by Correlated Equilibrium | In this paper we deal with robust cooperative multi-agent reinforcement learning (CMARL). While CMARL has many potential applications, only a trained policy that is robust enough can be confidently deployed in real world. Existing works on robust MARL mainly apply vanilla adversarial training in centralized training an... | withdrawn-rejected-submissions | The paper tackles the interesting area of cooperative multi-agent learning and presents a promising method to make MAL robust to mistakes of teammates, while learning correlated equilibria. Reviewers find the presented setting and theoretical contributions limited and the experiments not extensive enough; also some tec... | train | [
"Fc6I65mlU2",
"r7iKxQGqJg_",
"nNcz6R2p52f",
"6sJyap8uo4",
"dledFT3RUD6",
"jTpXZpimtgU",
"76pAhboo6Z",
"-6PC_OJtCzY",
"b4sYwbUpRQH",
"oWZCThCWmM"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your review of our paper.\n\nHere are our responses to each point you mentioned:\n\n\"Motivation of problem formulation\"\n- The motivation of our formulation is to obtain a policy that is robust when one agent makes some but not very big mistakes. In normal MARL algorithm, the team can gua... | [
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3,
3
] | [
"jTpXZpimtgU",
"76pAhboo6Z",
"-6PC_OJtCzY",
"b4sYwbUpRQH",
"oWZCThCWmM",
"iclr_2021_JvPsKam58LX",
"iclr_2021_JvPsKam58LX",
"iclr_2021_JvPsKam58LX",
"iclr_2021_JvPsKam58LX",
"iclr_2021_JvPsKam58LX"
] |
iclr_2021_NlrFDOgRRH | Distributed Associative Memory Network with Association Reinforcing Loss | Despite recent progress in memory augmented neural network research, associative memory networks with a single external memory still show limited performance on complex relational reasoning tasks. The main reason for this problem comes from the lossy representation of a content-based addressing memory and its insuffici... | withdrawn-rejected-submissions | After carefully reading the reviews and the rebuttal, and after going over the paper itself, I'm not sure the paper is ready for ICLR. I do believe there is a lot of useful content in the current manuscript, and I urge the authors to keep working on the manuscript and resubmit it in due time.
My concerns are as follo... | train | [
"i90WIjrbUoa",
"g6Nt5hnPg34",
"oAqC7JrrUR5",
"RfaMubSXjgp",
"C6m4DBMoYl",
"Wq92zDZiJOo",
"T0jy3mLapvT",
"ULNcDwAjNEB",
"nJWw47b_QUl",
"SsZKkZ0NH_C",
"ND7JPmeyfB",
"dnW4AgXr-Te",
"pjzuR2kI7yF",
"rH2iYWNpRPM",
"1Y4V9m5xNl5",
"FjqfWUIWJNm",
"SkJYmuaeiGi",
"WlRyjzrpPvt",
"vz61GfmB8lS... | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",... | [
"We appreciate all reviewer's constructive comments and feedback. We updated our paper according to the reviewer's concerns as follows.\n\nWe,\n\n* Updates $N^{th}$ Farthest result in Table 1.\n\n* Add experimental results on Convex hull task (relational reasoning task) in Table 3.\n\n* Add visualization of DAM's m... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5,
3
] | [
"iclr_2021_NlrFDOgRRH",
"RWrfnJfg9eM",
"Y6ui8syiwWE",
"GLesDNQuH_k",
"T0jy3mLapvT",
"ULNcDwAjNEB",
"SsZKkZ0NH_C",
"ND7JPmeyfB",
"SsZKkZ0NH_C",
"ND7JPmeyfB",
"RWrfnJfg9eM",
"OX9N9PHnEb1",
"19xJPGRdq47",
"GLesDNQuH_k",
"FjqfWUIWJNm",
"SkJYmuaeiGi",
"Y6ui8syiwWE",
"vz61GfmB8lS",
"rH... |
iclr_2021_xxWl2oEvP2h | Rewriting by Generating: Learn Heuristics for Large-scale Vehicle Routing Problems | The large-scale vehicle routing problems are defined based on the classical VRPs with thousands of customers. It is of great importance to find an efficient and high-quality solution for real-world applications. However, existing algorithms for VRPs including non-learning heuristics and RL-based methods, only perform ... | withdrawn-rejected-submissions | The authors propose an RL-based approach, “Rewriting-by Generating (RBG)”, to solve large-scale capacitated vehicle routing problems (CVRPs): such problems are NP-hard in general and are ubiquitous. The RL agent consists of a "Generator" and "Rewriter". In generation, the graph is sub-divided into several regions and i... | train | [
"CeircAwxKn",
"58yOEq-LPOu",
"5ulKQWTP5h",
"YKwssbGji7",
"pbGMxHdm6p",
"lVaixZOgUz7",
"lp8jCDIE-Nw",
"78Wx6wu-6g1",
"axQ2nQFtpYC",
"vpsSK6ZLU0j",
"_aIBP_iWfJ",
"iYxNLEjJJQU"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your positive feedback, which helped us to improve our paper significantly, and we hope the following answers can be useful for addressing your concerns.\n\n$\\textbf{Question 1:} $ The reason of comparison to Ant Colony baseline and possible alternatives.\n\n$\\textbf{Response:} $ We select baselines f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
5
] | [
"axQ2nQFtpYC",
"5ulKQWTP5h",
"CeircAwxKn",
"_aIBP_iWfJ",
"vpsSK6ZLU0j",
"_aIBP_iWfJ",
"_aIBP_iWfJ",
"iYxNLEjJJQU",
"iclr_2021_xxWl2oEvP2h",
"iclr_2021_xxWl2oEvP2h",
"iclr_2021_xxWl2oEvP2h",
"iclr_2021_xxWl2oEvP2h"
] |
iclr_2021_EUUp9nWXsop | IALE: Imitating Active Learner Ensembles | Active learning (AL) prioritizes the labeling of the most informative data samples. However, the performance of AL heuristics depends on the structure of the underlying classifier model and the data. We propose an imitation learning scheme that imitates the selection of the best expert heuristic at each stage of the AL... | withdrawn-rejected-submissions | In the discussion, all reviewers acknowledge the novelty of this paper, such as learning from a wide range of AL heuristics, and the ability to transfer to tasks with an arbitrary number of classes. They also think that the additional experiments provided by the authors improve the paper's empirical validity.
Howeve... | train | [
"XpBqiK38_2U",
"OZEsYXHqrRO",
"V-Edcgq7wgB",
"D8KwCr2jGv7",
"MG3v87QtpWP",
"wl518sb9Yw",
"NWi9-MRiKbp"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"#### Paper summary:\n\nIn this work, an imitation learning (AL) approach is proposed to imitate multiple active learning algorithms, in order to take their advantages to learn a better active learning algorithm. The main idea is to treat the active learning algorithms as experts and utilize the DAGGER algorithm fo... | [
4,
5,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_EUUp9nWXsop",
"iclr_2021_EUUp9nWXsop",
"iclr_2021_EUUp9nWXsop",
"OZEsYXHqrRO",
"XpBqiK38_2U",
"NWi9-MRiKbp",
"iclr_2021_EUUp9nWXsop"
] |
iclr_2021_q-qxdClTs0d | Out-of-distribution Prediction with Invariant Risk Minimization: The Limitation and An Effective Fix | This work considers the out-of-distribution (OOD) prediction problem where (1)~the training data are from multiple domains and (2)~the test domain is unseen in the training. DNNs fail in OOD prediction because they are prone to pick up spurious correlations. Recently, Invariant Risk Minimization (IRM) is proposed to ad... | withdrawn-rejected-submissions | Loosely, while IRM aims to find a feature mapping Phi s.t. response Y given Phi(X) is independent of the environment variables E, they suggest that when E is strongly correlated with Y, then it is possible for Phi obtained via IRM to involve environment variables. They motivate this by suggesting that if there exists a... | train | [
"2J3rKotCOx",
"-CseCzHzPB",
"fdC2ogeR-CX",
"W2QtFvpvm58",
"yZc5LirNWAc",
"IMK3qmMYRGI",
"nvjwGFPxTrw",
"jNeH91hYOb",
"SrkQnnG55qg",
"OxgGyww2QJN",
"HBIbcrtJ1M6",
"_HtzMbph3ct",
"juCzhj6zczm",
"7kBz7On9zA4",
"Pz4iN87dQlg",
"f9R1Yas6jVE",
"MWwRJgaCRZI",
"LW-im5tzPuu",
"WG0F978s1V"
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_re... | [
"### Summary of Paper\n\nThis paper identifies and tries to fix a limitation of the recent work of Invariant Risk Minimization (Arjovsky et al., '19). IRM is a solution framework for the OoD prediction problem, where one has to learn a classifier based on data from multiple domains, hoping to generalize to unseen d... | [
7,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_q-qxdClTs0d",
"iclr_2021_q-qxdClTs0d",
"W2QtFvpvm58",
"yZc5LirNWAc",
"IMK3qmMYRGI",
"SrkQnnG55qg",
"iclr_2021_q-qxdClTs0d",
"LW-im5tzPuu",
"_HtzMbph3ct",
"WG0F978s1V",
"Pz4iN87dQlg",
"Pz4iN87dQlg",
"7kBz7On9zA4",
"MWwRJgaCRZI",
"f9R1Yas6jVE",
"-CseCzHzPB",
"2J3rKotCOx",
... |
iclr_2021_aI8VuzSvCPn | Adversarial Synthetic Datasets for Neural Program Synthesis | Program synthesis is the task of automatically generating a program consistent with a given specification. A natural way to specify programs is to provide examples of desired input-output behavior, and many current program synthesis approaches have achieved impressive results after training on randomly generated input-... | withdrawn-rejected-submissions | The paper uses adversarial data to improve generalization in Programming By Example (PBE). The reviews were somewhat mixed with some people finding this useful and interesting while others finding it straightforward and unsurprsing. The reviewers were not convinced of the ultimate usefulness of the approach since it is... | train | [
"uHsDtKXQNqE",
"EXDTsT3UN3d",
"TcolzivcXLW",
"WhPWkHQHKr1",
"LtqxOFgRJ7E",
"8cOhSPosJ2l",
"_vD_wmKsy70",
"kMZDtQ8uRcR",
"HUg23X48PZE",
"TNlwWwTAoZX",
"lI_vBvjnNex",
"RiaABr1vGg7",
"Lq96NAf8qAH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an approach to develop synthetic datasets aimed at training DeepCoder-alike models. They claim that current approaches to generate synthetic datasets do not help coding algorithms to obtain a good generalization. The reviewed literature and the experiments presented back such claim. The approach... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_aI8VuzSvCPn",
"iclr_2021_aI8VuzSvCPn",
"8cOhSPosJ2l",
"uHsDtKXQNqE",
"Lq96NAf8qAH",
"_vD_wmKsy70",
"TNlwWwTAoZX",
"lI_vBvjnNex",
"iclr_2021_aI8VuzSvCPn",
"EXDTsT3UN3d",
"RiaABr1vGg7",
"iclr_2021_aI8VuzSvCPn",
"iclr_2021_aI8VuzSvCPn"
] |
iclr_2021_ZglaBL5inu | Laplacian Eigenspaces, Horocycles and Neuron Models on Hyperbolic Spaces | We use hyperbolic Poisson kernel to construct the horocycle neuron model on hyperbolic spaces, which is a spectral generalization of the classical neuron model. We prove a universal approximation theorem for horocycle neurons. As a corollary, this theorem leads to a state-of-the-art result on the expressivity of neuron... | withdrawn-rejected-submissions | Reviewers generally appreciate the contributions of the paper, namely the horocycle neuron, Poisson neuron, and the universal approximation properties. However, there are concerns, especially by R4 and R5, that the presentation is confusing, lacks clarity, and should be substantially improved.
Note: Theorem 1.7 in (He... | test | [
"fLHwEvoWNVH",
"V8GxZEqsUqJ",
"uCxedy8fCxh",
"gvtbN0Vqzt8",
"hFHW8-pYRwy",
"cB-y59gaAgB",
"t26m4uMSw2-",
"cqDm6FgJXwN",
"1TSiC7W5Kg",
"dXNj-p7O3Lt",
"AHKKbtCYPmM",
"sNWhihCXMDo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper introduced a new hyperbolic neuron based on horocycles (hyperbolic counterparts of hyperplanes). The authors proved that these neurons in H^n are as useful as traditional neurons in R^n through theoretical arguments and demonstrated they can significantly improve learning in hyperbolic embeddings of tre... | [
8,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
4,
5,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_ZglaBL5inu",
"iclr_2021_ZglaBL5inu",
"iclr_2021_ZglaBL5inu",
"AHKKbtCYPmM",
"t26m4uMSw2-",
"fLHwEvoWNVH",
"dXNj-p7O3Lt",
"uCxedy8fCxh",
"V8GxZEqsUqJ",
"sNWhihCXMDo",
"uCxedy8fCxh",
"iclr_2021_ZglaBL5inu"
] |
iclr_2021_8qsqXlyn-Lp | Factoring out Prior Knowledge from Low-Dimensional Embeddings | Low-dimensional embedding techniques such as tSNE and UMAP allow visualizing high-dimensional data and therewith facilitate the discovery of interesting structure. Although they are widely used, they visualize data as is, rather than in light of the background knowledge we have about the data. What we already know, how... | withdrawn-rejected-submissions | A method is proposed for removing prior knowledge, presented as a
distance matrix, from low-dimensional embeddings, to focus them on
what is new.
The task of visualizing novely in data is interesting and good
solutions would potentially be highly useful.
The proposed method essentially subtracts a distance matrix fr... | train | [
"VlDpecnR13I",
"0S-uPbGW-Rz",
"8M_Teprs2kp",
"SqlwP1erKZD",
"bxtsB1Oba2y",
"0GvQ-fTOxBI",
"M1eZJwHVGg0",
"AyE47WgzzW",
"z1qTygAX_C",
"Ok6zsjZFAk"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I have read the author response. Overall my evaluation currently still remains at the same level.\n\n\"None of the available implementations for SLLE and ctSNE can deal with distance matrices as input.\" seems like a too brief dismissal of SLLE and ctSNE, at least the mathematics of ctSNE look to me like it can us... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"bxtsB1Oba2y",
"SqlwP1erKZD",
"M1eZJwHVGg0",
"AyE47WgzzW",
"z1qTygAX_C",
"Ok6zsjZFAk",
"iclr_2021_8qsqXlyn-Lp",
"iclr_2021_8qsqXlyn-Lp",
"iclr_2021_8qsqXlyn-Lp",
"iclr_2021_8qsqXlyn-Lp"
] |
iclr_2021_fgX9O5q0BT | On Noise Injection in Generative Adversarial Networks | Noise injection is an effective way of circumventing overfitting and enhancing generalization in machine learning, the rationale of which has been validated in deep learning as well. Recently, noise injection exhibits surprising performance when
generating high-fidelity images in Generative Adversarial Networ... | withdrawn-rejected-submissions | This paper studies the role of “noise injection” in GANs with tools from Riemannian geometry, and derives a new noise injection approach that aims to learn a fuzzy coordinate system to model non-Euclidean geometry. The new noise injection approach is shown to improve over StyleGANv2 noise injection on lower-resolution ... | train | [
"eaRSziRFwd0",
"55QT618mp1M",
"6XS5lmJ26bP",
"1Z2wPrE81s1",
"n5m_rwCJyOl",
"OjOBQq4OijV",
"gt3JqezO5k",
"tTv4ceREJc",
"GRSM9ubBojE",
"gKHxvgSzYzI"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"To summarize, this paper proposed a new noise injection method that is easy to implement and is able to replace the original noise injection method in StyleGAN 2. The approach is supported by detailed theoretical analysis and impactful performance improvement on GAN training and inversion. The results show that th... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
2,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_fgX9O5q0BT",
"iclr_2021_fgX9O5q0BT",
"iclr_2021_fgX9O5q0BT",
"iclr_2021_fgX9O5q0BT",
"55QT618mp1M",
"eaRSziRFwd0",
"GRSM9ubBojE",
"gKHxvgSzYzI",
"iclr_2021_fgX9O5q0BT",
"iclr_2021_fgX9O5q0BT"
] |
iclr_2021_PhV-qfEi3Mr | Improving the accuracy of neural networks in analog computing-in-memory systems by a generalized quantization method | Crossbar-enabled analog computing-in-memory (CACIM) systems can significantly improve the computation speed and energy efficiency of deep neural networks (DNNs). However, the transition of DNN from the digital systems to CACIM systems usually reduces its accuracy. The major issue is that the weights of DNN are stored a... | withdrawn-rejected-submissions | This work develops a weight-quantization method for deep neural networks that is suitable for a type of analog hardware system known as crossbar-enabled analog computing-in-memory (CACIM). The goal of this work is to train models on GPUs in such a way that they retain their predictive accuracy during inference when de... | train | [
"e9Q3FNkHk1t",
"bbLRFeeozEq",
"V6nQ759yD_",
"0hF4zeEsWIL",
"tsXOcqpMta3",
"PaLMG4_91w1",
"S3F2uxglNtX",
"SddXRcJ0Izk"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In order to improve robustness of analog written weights to circuit variation, quantization is used. My first question is why is that the case? The claim that quantization reduces analog noise does not seem to be correct. As far as I can tell, this setup leads to quantization noise on top of analog noise. This is ... | [
3,
5,
-1,
-1,
-1,
-1,
5,
4
] | [
5,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_PhV-qfEi3Mr",
"iclr_2021_PhV-qfEi3Mr",
"S3F2uxglNtX",
"e9Q3FNkHk1t",
"SddXRcJ0Izk",
"bbLRFeeozEq",
"iclr_2021_PhV-qfEi3Mr",
"iclr_2021_PhV-qfEi3Mr"
] |
iclr_2021_J150Q1eQfJ4 | Fully Convolutional Approach for Simulating Wave Dynamics | We investigate the performance of fully convolutional networks to predict the motion and interaction of surface waves in open and closed complex geometries. We focus on a U-Net type architecture and assess its ability to capture and extrapolate wave propagation in time as well as the reflection, interference and diffra... | withdrawn-rejected-submissions | Reviews were somewhat mixed here, but the consensus is to reject, with at least one voice (R2) urging rejection. Across reviewers, the recommendation to reject is primarily based on the level of originality with the proposed U-Net architecture and on weakness of experiments, especially in comparing to baselines.
Revie... | train | [
"vjJqhgGgrIz",
"jME2uzhDBji",
"plmhLHI7wPz",
"BkN1KevZyJ8",
"SpTcO33ccM",
"kiSDIXGAtOb",
"CQ0h1kfg1MV"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your reviews, they are being helpful in improving how our work is delivered. Here are some of our thoughts and improvements we will add to the final version:\n\n1. The objective of the paper was not to compare the results with previous networks, but to demonstrate the generalisation to geometries com... | [
-1,
-1,
-1,
5,
4,
7,
3
] | [
-1,
-1,
-1,
4,
4,
4,
4
] | [
"jME2uzhDBji",
"plmhLHI7wPz",
"kiSDIXGAtOb",
"iclr_2021_J150Q1eQfJ4",
"iclr_2021_J150Q1eQfJ4",
"iclr_2021_J150Q1eQfJ4",
"iclr_2021_J150Q1eQfJ4"
] |
iclr_2021_ucuia1JiY9 | A Probabilistic Approach to Constrained Deep Clustering | Clustering with constraints has gained significant attention in the field of semi-supervised machine learning as it can leverage partial prior information on a growing amount of unlabelled data. Following recent advances in deep generative models, we derive a novel probabilistic approach to constrained clustering that ... | withdrawn-rejected-submissions | We thank the authors for their detailed responses to reviewers, and for engaging in constructive discussions.
As explained by the reviewers, the paper is clearly written and the method is novel. However, the novelty is to combine existing ideas and techniques to define an objective function that allows to incorporat... | train | [
"A4aioYD02Th",
"ATbt_NI-s5Q",
"K3EahUkBYjB",
"CshQzNHllmz",
"mWVNX-WypS_",
"y-EuOe_2w6K",
"JzzqgbNbq66",
"6Gr838gnfe5",
"MPovIDb8AL5",
"eK9wxwMFWf",
"REWyhTZXgU"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary.\n\nThis paper extends the variational deep embedding VaDE model (a VAE-based clustering method) to integrate pairwise constraints between objects, i.e., must-link and cannot-link. The constraints are integrated a priori as a condition. That is, the prior over the cluster labels is conditioned on the const... | [
5,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5
] | [
5,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2021_ucuia1JiY9",
"K3EahUkBYjB",
"MPovIDb8AL5",
"mWVNX-WypS_",
"JzzqgbNbq66",
"iclr_2021_ucuia1JiY9",
"6Gr838gnfe5",
"y-EuOe_2w6K",
"A4aioYD02Th",
"REWyhTZXgU",
"iclr_2021_ucuia1JiY9"
] |
iclr_2021_kGvXK_1qzyy | Drift Detection in Episodic Data: Detect When Your Agent Starts Faltering | Detection of deterioration of agent performance in dynamic environments is challenging due to the non-i.i.d nature of the observed performance. We consider an episodic framework, where the objective is to detect when an agent begins to falter. We devise a hypothesis testing procedure for non-i.i.d rewards, which is opt... | withdrawn-rejected-submissions | The paper's initial evaluation was below par, but the author feedback helped clarify several crucial points after which two of the reviewers increased their scores by a point, bringing the current evaluation to borderline.
The paper addresses a relevant and challenging problem in the RL domain. However, in my opinion... | train | [
"JqCNyuojeLA",
"hryeiwwoaY7",
"rcw069COy-b",
"BqloIbXqSeX",
"JOgO2UeGt5_",
"WA0fQ4v50px",
"bfeCgkfd7d-",
"u1W1L6N8UC9",
"Hf_tJyp_6u",
"JQVeEkGuGjy"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper designed a hypothesis testing procedure for detecting changes in episode sequential data. For online operation, it also proposed a novel Bootstrap mechanism for False alarm rate control. The method is demonstrated based on a non-iid and non-Gaussian setting for reward signals.\n\nIn all the method is t... | [
5,
5,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_kGvXK_1qzyy",
"iclr_2021_kGvXK_1qzyy",
"iclr_2021_kGvXK_1qzyy",
"WA0fQ4v50px",
"iclr_2021_kGvXK_1qzyy",
"JQVeEkGuGjy",
"hryeiwwoaY7",
"rcw069COy-b",
"JqCNyuojeLA",
"JOgO2UeGt5_"
] |
iclr_2021_vCEhC7nOb6 | Inductive Bias of Gradient Descent for Exponentially Weight Normalized Smooth Homogeneous Neural Nets | We analyze the inductive bias of gradient descent for weight normalized smooth homogeneous neural nets, when trained on exponential or cross-entropy loss. Our analysis focuses on exponential weight normalization (EWN), which encourages weight updates along the radial direction. This paper shows that the gradient flow p... | withdrawn-rejected-submissions | The main concern is that the results in this paper are based on strong asymptotic assumptions. (At least) more empirical results are needed.
| train | [
"9mqqM34fbP",
"t5qt2aRWIA",
"wYpSGpeFRcQ",
"Qkelk-8Ya2",
"YNluSAMr75b",
"JzjZhbWIeUl",
"S6ySHyaJIYJ",
"ChhCQB6VsCv",
"KdTJ2iuL3Xz",
"xshPt-CPbJ",
"2qaMFOrrKuf",
"NPck5HgmCRX"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper analyzes weight normalization methods, including exponential weight normalization (EWN) and standard weight normalization (SWN), in contrast with unnormalized networks. Under a number of assumptions, the paper characterizes the asymptotic relation between weight norm and gradient norm at the node level ... | [
5,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
4
] | [
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2021_vCEhC7nOb6",
"Qkelk-8Ya2",
"iclr_2021_vCEhC7nOb6",
"S6ySHyaJIYJ",
"iclr_2021_vCEhC7nOb6",
"2qaMFOrrKuf",
"YNluSAMr75b",
"9mqqM34fbP",
"iclr_2021_vCEhC7nOb6",
"NPck5HgmCRX",
"iclr_2021_vCEhC7nOb6",
"iclr_2021_vCEhC7nOb6"
] |
iclr_2021_InGI-IMDL18 | Secure Federated Learning of User Verification Models | We consider the problem of training User Verification (UV) models in federated setup, where the conventional loss functions are not applicable due to the constraints that each user has access to the data of only one class and user embeddings cannot be shared with the server or other users. To address this problem, we ... | withdrawn-rejected-submissions | In this paper, the authors propose to adapt the recent paper by Yu et al. (ICML 2020), namely FedAwS. In that paper, the authors solved a potential failure mode in federated learning, when all the users only have access to one class in their devices. In this paper, the authors extend FedAwS to a setting in which federa... | train | [
"DwH11bJXjQv",
"uJ0ZU8BwNRx",
"_Ni7uGjhXk",
"sEHqaEqb8qc",
"nXb4gSY8F-L",
"lPLFyJesrad",
"SWPjdV5QZiw",
"Gg869m9mtCz",
"esb_ss0OE3i",
"_6eD_-5pAR0",
"3OsnLjalnpR",
"mk2LVZOWfha"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"**Summary**\nFederated learning takes advantage of the fact that private user data does not need to be transferred and shared across devices or servers. This makes FL particularly attractive for the user verification scenario, where privacy-sensitive biometric data are used to train verification models. One crucia... | [
6,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2021_InGI-IMDL18",
"iclr_2021_InGI-IMDL18",
"nXb4gSY8F-L",
"iclr_2021_InGI-IMDL18",
"lPLFyJesrad",
"SWPjdV5QZiw",
"esb_ss0OE3i",
"DwH11bJXjQv",
"sEHqaEqb8qc",
"mk2LVZOWfha",
"iclr_2021_InGI-IMDL18",
"iclr_2021_InGI-IMDL18"
] |
iclr_2021_jjKzfD9vP9 | Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing | The Mixup scheme of mixing a pair of samples to create an augmented training sample has gained much attention recently for better training of neural networks. A straightforward and widely used extension is to combine Mixup and regional dropout methods: removing random patches from a sample and replacing it with the fea... | withdrawn-rejected-submissions | Overall, this paper has been on the very borderline. All reviewers agree that the motivation and the idea of the paper are reasonable (although somewhat incremental) and make an interesting extension of mixup-type data augmentation. However, one expert reviewer raised some concerns which are unfortunately not fully res... | train | [
"2TMur92pcNI",
"MKJln3vM-as",
"r6AZLlNii-i",
"sPfhr_wP7_r",
"07_yGRIgZ5g",
"aEb1b3iBxUz",
"Y6iN8D1rPvA",
"cijOJm_JJR",
"1vCpcABnYpf",
"WLGB3LE9iQ",
"5aVELmkl_mS",
"mAE1KnjwNNY",
"tu6UQPwbsRr",
"iCP8JeBLsP",
"v6Y3NSu-0m_",
"z8lc96xrb_p",
"koItAPIRQH",
"ILKHmGL3wx"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper advances the line of research in data augmentation following Mixup paper. Cutmix is a image-specific variant of mixup that pastes a rectangular region from a donor image to a target image; however, it does this in a completely random fashion not paying attention to whether the discriminative parts of ei... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_jjKzfD9vP9",
"07_yGRIgZ5g",
"aEb1b3iBxUz",
"Y6iN8D1rPvA",
"mAE1KnjwNNY",
"1vCpcABnYpf",
"WLGB3LE9iQ",
"z8lc96xrb_p",
"koItAPIRQH",
"koItAPIRQH",
"koItAPIRQH",
"koItAPIRQH",
"koItAPIRQH",
"ILKHmGL3wx",
"2TMur92pcNI",
"2TMur92pcNI",
"iclr_2021_jjKzfD9vP9",
"iclr_2021_jjKzf... |
iclr_2021_m4baHw5LZ7M | Deep Learning Solution of the Eigenvalue Problem for Differential Operators | Solving the eigenvalue problem for differential operators is a common problem in many scientific fields. Classical numerical methods rely on intricate domain discretization, and yield non-analytic or non-smooth approximations. We introduce a novel Neural Network (NN)-based solver for the eigenvalue problem of different... | withdrawn-rejected-submissions | The paper addresses the problem of solving for the eigenpairs of a self-adjoint differential operator. This problem, of course, is classical; the main innovations here are
a) the use of a parametric form of the (pointwise) solution using a (shallow) neural network so as to avoid discretization, and
b) obtaining multiple... | train | [
"H0vvpgiZTMn",
"WLN7MNKu1Fn",
"F2jJTCGiXnr",
"_WvA0woMH9a",
"nhWllFF_JbO",
"DrHiJzYgXSm",
"_L2lwbfwnIR",
"WOZOnsEJ8XK",
"wjFOE3SPAm2"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nWe thank the reviewer for raising important issues.\n\n1. The solution of PDEs via neural networks has emerged in recent years and is still very challenging for every specific application. The idea of parameterization the solution via a neural network is a very general framework where each equation or problem we... | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
9
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
2,
2
] | [
"_L2lwbfwnIR",
"iclr_2021_m4baHw5LZ7M",
"WOZOnsEJ8XK",
"DrHiJzYgXSm",
"wjFOE3SPAm2",
"iclr_2021_m4baHw5LZ7M",
"iclr_2021_m4baHw5LZ7M",
"iclr_2021_m4baHw5LZ7M",
"iclr_2021_m4baHw5LZ7M"
] |
iclr_2021_PdauS7wZBfC | Predictive Coding Approximates Backprop along Arbitrary Computation Graphs | The backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. Recently it has been shown that backprop in multilayer-perceptrons (MLPs) can be approximated using predictive coding, a biologically-plausible process theory of cortical compu... | withdrawn-rejected-submissions | This paper extends recent work (Whittington & Bogacz, 2017, Neural computation, 29(5), 1229-1262) by showing that predictive coding (Rao & Ballard, 1999, Nature neuroscience 2(1), 79-87) as an implementation of backpropagation can be extended to arbitrary network structures. Specifically, the original paper by Whitting... | test | [
"lWegAXK_uVJ",
"qV5M0UvC5gZ",
"0wY69t2aaQ_",
"iTdHmF0DKih",
"G2ELhvNMUIp",
"JW3A19WCM38",
"zl9hqELu5Us",
"KZd1YE1ytn",
"9evPd1ch91R",
"b9oWwaQTTlJ",
"lOigDgNZdv"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Update after author responses: \nWhile the author address some of my comments, I would have still liked to see a more detailed discussion of how the algorithm compares in terms of algorithmic scaling, which I think is relevant because it is a fundamental property of the algorithm, even if it is targeted toward... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2021_PdauS7wZBfC",
"0wY69t2aaQ_",
"JW3A19WCM38",
"lOigDgNZdv",
"JW3A19WCM38",
"9evPd1ch91R",
"lWegAXK_uVJ",
"b9oWwaQTTlJ",
"iclr_2021_PdauS7wZBfC",
"iclr_2021_PdauS7wZBfC",
"iclr_2021_PdauS7wZBfC"
] |
iclr_2021_ryUprTOv7q0 | Quantum Deformed Neural Networks | We develop a new quantum neural network layer designed to run efficiently on a quantum computer but that can be simulated on a classical computer when restricted in the way it entangles input states. We first ask how a classical neural network architecture, both fully connected or convolutional, can be executed on a qu... | withdrawn-rejected-submissions | A "quantum deformed" generalization of a probabilistic binary neural network is introduced, which can be either run on a quantum computer or simulated with a classical computer. Reviewers agreed that the paper is well written, introduces some new ideas merging quantum computing with a variational Bayesian framework, a... | test | [
"-KSiC81GCII",
"ze4TIjyau9r",
"BTT6M8j1bKP",
"9sv92Z1Pmlg",
"L5wDf3D5eOU",
"vO8PieSUUea",
"zTB7Dk3tSt9",
"F-Xyr82pcix",
"AvY1h9NHpD",
"PzB74SHx7eK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"After rebuttal\nThanks for responding to the concerns in the review.\nThe main point in the author's rebuttal is that the major contribution of the paper is a novel QNN that can be efficiently simulated. However, the proposed method of simulation is only an approximation of the real run through CLT. The author pro... | [
5,
6,
4,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_ryUprTOv7q0",
"iclr_2021_ryUprTOv7q0",
"iclr_2021_ryUprTOv7q0",
"AvY1h9NHpD",
"-KSiC81GCII",
"ze4TIjyau9r",
"PzB74SHx7eK",
"BTT6M8j1bKP",
"iclr_2021_ryUprTOv7q0",
"iclr_2021_ryUprTOv7q0"
] |
iclr_2021_kic8cng35wX | Weak NAS Predictor Is All You Need | Neural Architecture Search (NAS) finds the best network architecture by exploring the architecture-to-performance manifold. It often trains and evaluates a large amount of architectures, causing tremendous computation cost. Recent predictor-based NAS approaches attempt to solve this problem with two key steps: sampling... | withdrawn-rejected-submissions | In line with recent work in the NAS literature, the authors consider a weak NAS performance strategy to filter out bad architectures and narrow down the exploration to the most promising region of the search space. The authors propose to estimate weak predictors progressively by learning a series of weak predictors tha... | train | [
"_Dw94RpI7SE",
"7zWxevh-hjn",
"R-dJ2-8VDay",
"TX_NeiEu-Jq",
"UOkWY9-2J2",
"_LIKbQt2Jg1",
"Liejlv5umSA",
"kUFXa0Os92c",
"xptymNNtxpa",
"flTv2z2U-bl"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary of contribution: The authors propose an interesting approach to address the sample-efficiency issue in Neural Architecture Search (NAS). Compared to other existing predictor based methods, the approach distinguishes itself by progressive shrinking the search space. The paper correctly identifies the sampli... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2021_kic8cng35wX",
"kUFXa0Os92c",
"_Dw94RpI7SE",
"_Dw94RpI7SE",
"xptymNNtxpa",
"kUFXa0Os92c",
"flTv2z2U-bl",
"iclr_2021_kic8cng35wX",
"iclr_2021_kic8cng35wX",
"iclr_2021_kic8cng35wX"
] |
iclr_2021_Ew0zR07CYRd | Bounded Myopic Adversaries for Deep Reinforcement Learning Agents | Adversarial attacks against deep neural networks have been widely studied. Adversarial examples for deep reinforcement learning (DeepRL) have significant security implications, due to the deployment of these algorithms in many application domains. In this work we formalize an optimal myopic adversary for deep reinforce... | withdrawn-rejected-submissions | Most reviewers are positive about this work, though they believe it is somewhat incremental, and its theoretical contributions are minor. None of the reviewers are very excited about this work. Overall, the PC believes this is a borderline paper.
Minor note: During the discussions, the paper by Xiao et al., "Character... | test | [
"PcMKzTc55-N",
"xy6jUe8HN9l",
"dHt15HdpK_y",
"a51inm8YB49",
"vEKc6wn3XHa",
"Q__9gSo_Enc",
"IryjSBRSW0k",
"eQqkiYiwUff",
"sv7YVVbP08",
"P56yssZwRmH",
"tHOPGMV6dDQ",
"VJgts3infB",
"8WTNWb-4OIy",
"DjbcZ4f8cGP",
"7B1LpEvv871",
"v9nbUYCpbbV",
"ucSNz4TxLon"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary: This paper proposes an optimal myopic adversary for deep reinforcement learning agent, in which the adversary finds a bounded perturbation of the state that minimizes the value of the action taken by the agent. The authors introduce a differentiable approximation for the optimal myopic adversarial formula... | [
5,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_Ew0zR07CYRd",
"iclr_2021_Ew0zR07CYRd",
"iclr_2021_Ew0zR07CYRd",
"eQqkiYiwUff",
"iclr_2021_Ew0zR07CYRd",
"IryjSBRSW0k",
"7B1LpEvv871",
"tHOPGMV6dDQ",
"P56yssZwRmH",
"DjbcZ4f8cGP",
"VJgts3infB",
"dHt15HdpK_y",
"dHt15HdpK_y",
"ucSNz4TxLon",
"PcMKzTc55-N",
"xy6jUe8HN9l",
"iclr... |
iclr_2021_cotg54BSX8 | Grey-box Extraction of Natural Language Models | Model extraction attacks attempt to replicate a target machine learning model from predictions obtained by querying its inference API. Most existing attacks on Deep Neural Networks achieve this by supervised training of the copy using the victim's predictions. An emerging class of attacks exploit algebraic properties o... | withdrawn-rejected-submissions | After discussion with the reviewers, it seems that a. without fine-tuning the result is close to being trivial (as noted also by two reviewers) b. with fine-tuning results are lower c. The setup of just a linear classification layer is less common (but exists) d. The cases where extraction succeeds the performance is l... | train | [
"DUpl0854on-",
"s-WHphcpdQM",
"snF8zsfo4dM",
"pREBjbTjZD2",
"R3ib6vYyKDr",
"OFa2e3LvL2P",
"kxTNL48DpAR",
"tELFg0KpEY0",
"Es9JrCKoPTw",
"CZMk-m1WvFw",
"Tl4eK89Ywur"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n\nThis paper proposes a range of algebraic model extraction attacks (different from the prevalent learning-based approaches) for transformer models trained for NLP tasks in a grey-box setting i.e., an existing, public, usually pretrained encoder, with a private classification layer. Through attacks on di... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iclr_2021_cotg54BSX8",
"snF8zsfo4dM",
"pREBjbTjZD2",
"Es9JrCKoPTw",
"Tl4eK89Ywur",
"CZMk-m1WvFw",
"DUpl0854on-",
"iclr_2021_cotg54BSX8",
"iclr_2021_cotg54BSX8",
"iclr_2021_cotg54BSX8",
"iclr_2021_cotg54BSX8"
] |
iclr_2021_awnQ2qTLSwn | Learning to Share in Multi-Agent Reinforcement Learning | In this paper, we study the problem of networked multi-agent reinforcement learning (MARL), where a number of agents are deployed as a partially connected network. Networked MARL requires all agents make decision in a decentralized manner to optimize a global objective with restricted communication between neighbors ov... | withdrawn-rejected-submissions | Although there was some initial disagreement on this paper, the majority of reviewers agree that this work is not ready for publication and can be improved in various manners. After the discussion phase there is also serious concern that the experiments need more work (statistically), to verify if they hold up. More co... | train | [
"gvJLIDDlYrL",
"2J1Lj5YxCeI",
"f9TS3KA-XLG",
"x7ZgcFBCeKb",
"hp8I06MOTMh",
"C6-usLCnHCO",
"BFC28lltF-e",
"NcVed6uL74f",
"4aEJ6A6IvBw",
"b9O_b_yBHBo",
"60wI5bcMhON",
"-JHJijUyJE4"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary\nThe paper considers the cooperative MARL setting where agents get local rewards and they are interconnected as a graph where neighbors can communicate. The paper specifically considers the communication of reward sharing, that is, an agent shares (part of) its reward to its neighbors, such that each agent... | [
8,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
3
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2021_awnQ2qTLSwn",
"iclr_2021_awnQ2qTLSwn",
"iclr_2021_awnQ2qTLSwn",
"-JHJijUyJE4",
"60wI5bcMhON",
"gvJLIDDlYrL",
"b9O_b_yBHBo",
"2J1Lj5YxCeI",
"2J1Lj5YxCeI",
"iclr_2021_awnQ2qTLSwn",
"iclr_2021_awnQ2qTLSwn",
"iclr_2021_awnQ2qTLSwn"
] |
iclr_2021_Jr8XGtK04Pw | Hippocampal representations emerge when training recurrent neural networks on a memory dependent maze navigation task | Can neural networks learn goal-directed behaviour using similar strategies to the brain, by combining the relationships between the current state of the organism and the consequences of future actions? Recent work has shown that recurrent neural networks trained on goal based tasks can develop representations resemblin... | withdrawn-rejected-submissions | This paper analyses a recurrent neural network model trained to perform a simple maze task, and reports that the network exhibits multiple hallmarks of neural selectivity reported in neurophysiological recordings from the hippocampus— in particular, they find place cells which also are tuned to task-relevant locations,... | test | [
"rNajhMVKXIu",
"pOReP3ueqT",
"B41a73EcDHW",
"uxPIryRVS2t",
"G3rSGNX-K8O",
"pZz851JhQ3z",
"m3PM2PItHmM",
"BAYI2FNNM_",
"uAqsgTFsP6u",
"p9hQ8XzEi0b",
"FlPCv756lWC",
"DxXVAWWHKeK",
"lF-NjoDyDrY",
"_Sf3NTtx0zd"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n \nThe authors trained a recurrent network to perform a sensory prediction task and this gave rise to units that resembled hippocampal place fields. Then they augmented the network with a Q-learning objective and shown that the activity in the network sweep forward in space if the agent is fixed at a dec... | [
5,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_Jr8XGtK04Pw",
"iclr_2021_Jr8XGtK04Pw",
"p9hQ8XzEi0b",
"_Sf3NTtx0zd",
"lF-NjoDyDrY",
"m3PM2PItHmM",
"G3rSGNX-K8O",
"uAqsgTFsP6u",
"pOReP3ueqT",
"rNajhMVKXIu",
"uxPIryRVS2t",
"iclr_2021_Jr8XGtK04Pw",
"iclr_2021_Jr8XGtK04Pw",
"iclr_2021_Jr8XGtK04Pw"
] |
iclr_2021_j0yLJ-MsgJ | Class Imbalance in Few-Shot Learning | Few-shot learning aims to train models on a limited number of labeled samples from a support set in order to generalize to unseen samples from a query set. In the standard setup, the support set contains an equal amount of data points for each class. This assumption overlooks many practical considerations arising from ... | withdrawn-rejected-submissions | The paper studies the effectiveness of few-shot learning techniques in settings where the training labels are imbalanced. While addressing an interesting practical problem, reviewers raised concerns about the paper's technical depth, insufficient distinction to existing techniques for coping with label imbalance, and l... | train | [
"5DBmxItPA2k",
"QQ0Mc9qIIon",
"r-6OywmyVvJ",
"QBTMfnBhMjr",
"Y2JKUzEmWUE",
"PqgQx5arvPl",
"5KU1Lco4hL",
"KfKFX3Ug36",
"ulFnf9R-MZK",
"kzvrrP8ob61",
"uQ6FQ8V3zpu"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary \n\nThis paper introduces a new benchmark for imbalanced few-shot learning where the number of samples per class is different. The authors extensively evaluate 10 SOTA few-shot methods on this benchmark and show consistent performance drop in this challenging setting. They also show that simple over-sampli... | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_j0yLJ-MsgJ",
"iclr_2021_j0yLJ-MsgJ",
"PqgQx5arvPl",
"Y2JKUzEmWUE",
"5DBmxItPA2k",
"QQ0Mc9qIIon",
"iclr_2021_j0yLJ-MsgJ",
"uQ6FQ8V3zpu",
"kzvrrP8ob61",
"iclr_2021_j0yLJ-MsgJ",
"iclr_2021_j0yLJ-MsgJ"
] |
iclr_2021_SVP44gujOBL | A Simple Approach To Define Curricula For Training Neural Networks | In practice, sequence of mini-batches generated by uniform sampling of examples from the entire data is used for training neural networks. Curriculum learning is a training strategy that sorts the training examples by their difficulty and gradually exposes them to the learner. In this work, we propose two novel curricu... | withdrawn-rejected-submissions | This paper proposed two algorithms for curriculum learning, one based on the the knowledge of a good solution (e.g. a local minima or a solution found by SGD) and another one proposed for natural image datasets based on entropy and standard deviation over pixels.
Reviewers seem to like the ideas behind the proposed a... | train | [
"SC2yW2Mfgwt",
"em87MxTYhBs",
"cRbsXenFBcL",
"MOFiq9KUA1h",
"qaD0C-_rzaP",
"BJWLD5cI4h",
"_GUKaIIgbXW",
"XgiwCWbo7ey",
"IlxEQLhReJm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The paper contains two curriculum learning algorithms, of which one assumes knowledge of the parameters found by the baseline, uniform-sampling, model to push updates in that direction, and the second orders images according to an increasing stddev/entropy of pixels. While the first approach is impractical because o... | [
3,
3,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
5,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_SVP44gujOBL",
"iclr_2021_SVP44gujOBL",
"iclr_2021_SVP44gujOBL",
"iclr_2021_SVP44gujOBL",
"cRbsXenFBcL",
"SC2yW2Mfgwt",
"MOFiq9KUA1h",
"em87MxTYhBs",
"iclr_2021_SVP44gujOBL"
] |
iclr_2021_M71R_ivbTQP | Extract Local Inference Chains of Deep Neural Nets | We study how to explain the main steps/chains of inference that a deep neural net (DNN) relies on to produce predictions in a local region of data space. This problem is related to network pruning and interpretable machine learning but the highlighted differences are: (1) fine-tuning of neurons/filters is forbidden: on... | withdrawn-rejected-submissions | Overall, this seems like a neat idea and well-done work. Main principle is to extract a very sparse net that does a good job at locally "explaining" a given example. The NeuroChains idea does this with a diffentiable sparse objective. I think this work is well-positioned and has nice properties: (1) retains a very smal... | train | [
"HjtMqmtlS9P",
"oHazaWGhygT",
"_j0_A_SbGw7",
"Djw8QcvlP0U",
"pqMilHWlQ0P",
"cLJLrjonGCp",
"H8P9kiRp_F",
"fJfAysE373X",
"-P2BhUt6lmk",
"rSVuZ4JpnHz",
"ecikc9tBn6"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"content:\nIt is about pruning for explanation. The goal of the methods presented is, given a sample x, to extract a network, which is\nan unmodified subset of the original network,\nand has similar predictions to the original network in a region around x.\n\nThe authors derive a gradient-based optimization procedu... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"iclr_2021_M71R_ivbTQP",
"iclr_2021_M71R_ivbTQP",
"HjtMqmtlS9P",
"-P2BhUt6lmk",
"HjtMqmtlS9P",
"rSVuZ4JpnHz",
"ecikc9tBn6",
"HjtMqmtlS9P",
"iclr_2021_M71R_ivbTQP",
"iclr_2021_M71R_ivbTQP",
"iclr_2021_M71R_ivbTQP"
] |
iclr_2021_uUlGTEbBRL | Rethinking Compressed Convolution Neural Network from a Statistical Perspective | Many designs have recently been proposed to improve the model efficiency of convolutional neural networks (CNNs) at a fixed resource budget, while there is a lack of theoretical analysis to justify them. This paper first formulates CNNs with high-order inputs into statistical models, which have a special "Tucker-like" ... | withdrawn-rejected-submissions | This paper presents a theoretical analysis of CNN compression using tensor methods. None of the three reviewers have a strong opinion; their scores are 5, 6, and 5.
The attempt to understand the mechanism of how tensor decomposition compresses CNNs is meaningful and interesting. However, the main contribution of this wo... | train | [
"FSbC6WekRpZ",
"dKy63SQ2fv",
"ZWuUsTd9Zjj",
"OAIoWu2l_Wv",
"AS5BsTsFNTn",
"Ev2ad82fgwF",
"GwXctHNyTDA",
"7D0mWqmT2W0",
"8TagzeiqcB",
"GJoDzWc9ysP",
"rQIDc2X5Nwm",
"dCkLeiORvvq",
"KDaHrdGBxiK",
"XHAcUBUZ7lg"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary: \nThis paper formulated higher-order CNNs into a Tucker form and provides sample complexity analysis to higher-order CNNs and compressed designs of CNNs via tensor analysis. It then theoretically analyzes the efficiency of four block designs from ResNet, MobileNetV1, and MobileNetV2. The paper also c... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_uUlGTEbBRL",
"iclr_2021_uUlGTEbBRL",
"8TagzeiqcB",
"dCkLeiORvvq",
"rQIDc2X5Nwm",
"iclr_2021_uUlGTEbBRL",
"7D0mWqmT2W0",
"KDaHrdGBxiK",
"GJoDzWc9ysP",
"XHAcUBUZ7lg",
"ZWuUsTd9Zjj",
"FSbC6WekRpZ",
"iclr_2021_uUlGTEbBRL",
"iclr_2021_uUlGTEbBRL"
] |
iclr_2021_7JSTDTZtn7- | Byzantine-Robust Learning on Heterogeneous Datasets via Resampling | In Byzantine-robust distributed optimization, a central server wants to train a machine learning model over data distributed across multiple workers. However, a fraction of these workers may deviate from the prescribed algorithm and send arbitrary messages to the server. While this problem has received significant atte... | withdrawn-rejected-submissions | This paper presents an algorithm for distributed optimization in that aims to be "Byzantine-robust", in the sense that it learns successfully when some of the workers send arbitrary messages. The goal in this work is to remain robust when each worker samples data from a different distribution.
While reviewers found t... | train | [
"iZM9xqzMCt",
"fNLGJvNrhQ",
"k2PvopC2XAD",
"bq9BnvmoP2D",
"PvrlYNWfbB2",
"NipxWUF4Jgf",
"5H8WQjxSMc",
"alf9ZPkSo0l",
"91A7OlOBouV"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have updated Figure 3 in the main text by running each experiment 5 times. Note that we made a few minor changes in the experimental setups and plot styles (not the algorithm) but not the resampling algorithm and aggregation rules. The conclusions draw from Figure 3 remain the same.",
"We thank Reviewer 2 for... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"k2PvopC2XAD",
"k2PvopC2XAD",
"bq9BnvmoP2D",
"5H8WQjxSMc",
"alf9ZPkSo0l",
"91A7OlOBouV",
"iclr_2021_7JSTDTZtn7-",
"iclr_2021_7JSTDTZtn7-",
"iclr_2021_7JSTDTZtn7-"
] |
iclr_2021_oGzm2X0aek | Offline Adaptive Policy Leaning in Real-World Sequential Recommendation Systems | The training process of RL requires many trial-and-errors that are costly in real-world applications. To avoid the cost, a promising solution is to learn the policy from an offline dataset, e.g., to learn a simulator from the dataset, and train optimal policies in the simulator. By this approach, the quality of policie... | withdrawn-rejected-submissions | This paper is rejected.
The authors focus on offline RL for the sequential recommender system problem and propose an approach that:
* builds multiple models based on splits of the offline data using domain knowledge
* splits the policy into a context extraction system and context conditioned policy (similar to Rakelle... | train | [
"iosakICWzlv",
"wphh7HMdjM6",
"7-ZrzQdtaBF",
"yEJnSsiCCk",
"zlKY8B8_DcW",
"vEBRKxfxF-R",
"pCIG0i1M8Uw",
"8mNUxCFwUxv",
"s1K09HEYOI",
"d-8GSqlCOS",
"nAot8Cua_n9"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies off-policy reinforcement learning for sequential recommendation. The basic idea is to summarize each possible environment dynamic (i.e., the state transition) into an environment context vector, and optimize policy with respect to this context vector accordingly. The proposed solution is evaluat... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"iclr_2021_oGzm2X0aek",
"iclr_2021_oGzm2X0aek",
"s1K09HEYOI",
"s1K09HEYOI",
"iosakICWzlv",
"nAot8Cua_n9",
"d-8GSqlCOS",
"iclr_2021_oGzm2X0aek",
"iclr_2021_oGzm2X0aek",
"iclr_2021_oGzm2X0aek",
"iclr_2021_oGzm2X0aek"
] |
iclr_2021__zHHAZOLTVh | A Maximum Mutual Information Framework for Multi-Agent Reinforcement Learning | In this paper, we propose a maximum mutual information (MMI) framework for multi-agent reinforcement learning (MARL) to enable multiple agents to learn coordinated behaviors by regularizing the accumulated return with the mutual information between actions. By introducing a latent variable to induce nonzero mutual info... | withdrawn-rejected-submissions | Overview:
This paper introduces a maximum mutual information method for helping to coordinate RL agents without communication.
Discussion:
Some reviewers leaned towards accept, but I found the two reviewers recommending rejecting to be more convincing.
Recommendation:
This is an important research topic and I'm glad ... | val | [
"AX7MgmhWayl",
"MZ6ApVQsV2u",
"yAokm5AOLY9",
"HVl-wdb0FH8",
"ZBBlXqIkGn1",
"GJ_E6OxbIzT",
"iQmafpOPVm",
"RavOCxthizy",
"U3lsfXmHsjx",
"4Zpzd6xPPkt",
"V9X3jZDWahP"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper can be seen as a modification of SAC, in a multiagent setup by adding the conditional entropy $H(\\pi_i|\\pi_j)$ as a second set of regularization on top of $H(\\pi_i)$. The overall idea and intuition appear to be interesting. \n\n\n1.The first question is whether the mutual information is informative e... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021__zHHAZOLTVh",
"yAokm5AOLY9",
"iQmafpOPVm",
"iclr_2021__zHHAZOLTVh",
"4Zpzd6xPPkt",
"U3lsfXmHsjx",
"V9X3jZDWahP",
"AX7MgmhWayl",
"iclr_2021__zHHAZOLTVh",
"iclr_2021__zHHAZOLTVh",
"iclr_2021__zHHAZOLTVh"
] |
iclr_2021_VMtftZqMruq | Towards Understanding Linear Value Decomposition in Cooperative Multi-Agent Q-Learning | Value decomposition is a popular and promising approach to scaling up multi-agent reinforcement learning in cooperative settings. However, the theoretical understanding of such methods is limited. In this paper, we introduce a variant of the fitted Q-iteration framework for analyzing multi-agent Q-learning with value d... | withdrawn-rejected-submissions | This paper begins to formalize a connection between value decomposition and difference rewards. Whilst we are in agreement with the authors that papers do not need to make new algorithmic contribution and purely theoretical papers that deepen our understanding of established methods can be significant contributions, al... | train | [
"rk4emRbCg9D",
"pYeAjYP2GR",
"IBDwWwJWoh",
"Zc-AXnvuaR",
"w9OUsVKtjKH",
"n5ZtgZgGSX-",
"ydhZYHI_7dn",
"cM_t7m-NO3r",
"M9W-xYFka1Z",
"48jHqaMns4F",
"1eGOCebkH_X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper is largely a theoretical undertaking and focuses on bringing new insights into the currently popular value decomposition schemes like VDN, QMIX etc. for multi-agent reinforcement learning. They find two major implications: 1) linear value decomposition leads to implicit difference based credit assignmen... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2021_VMtftZqMruq",
"iclr_2021_VMtftZqMruq",
"iclr_2021_VMtftZqMruq",
"w9OUsVKtjKH",
"pYeAjYP2GR",
"rk4emRbCg9D",
"cM_t7m-NO3r",
"48jHqaMns4F",
"1eGOCebkH_X",
"iclr_2021_VMtftZqMruq",
"iclr_2021_VMtftZqMruq"
] |
iclr_2021_pOHW7EwFbo9 | Explicit Pareto Front Optimization for Constrained Reinforcement Learning | Many real-world problems require that reinforcement learning (RL) agents learn policies that not only maximize a scalar reward, but do so while meeting constraints, such as remaining below an energy consumption threshold. Typical approaches for solving constrained RL problems rely on Lagrangian relaxation, but these su... | withdrawn-rejected-submissions | Considering reviewers' comments and comparing with similar papers recently published or submitted, this is a good paper but hasn't reached the bar of ICLR. We believe that the paper is not ready for publication yet, and strongly encourage the authors to use the reviewers' feedback to improve the work and resubmit to o... | test | [
"togqm-C6Mrq",
"qsv4ls96q-9",
"OzOc17vteRX",
"CY9x1PZxvwi",
"9RekNel4hh9",
"TWjy37cvman",
"CUSIuwqJZoe",
"EBYIN08b2mc",
"c-HyIMk-I_Z",
"fTf_BpgKJsE"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper is interested in reinforcement learning where one needs to satisfy constraints (for instance energy spent) in addition to maximizing rewards. The proposed approach proposes to extend any method able to approximate the Pareto front of optimal policies by also learning portions of the front that satisfy us... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"iclr_2021_pOHW7EwFbo9",
"OzOc17vteRX",
"TWjy37cvman",
"CUSIuwqJZoe",
"c-HyIMk-I_Z",
"togqm-C6Mrq",
"fTf_BpgKJsE",
"iclr_2021_pOHW7EwFbo9",
"iclr_2021_pOHW7EwFbo9",
"iclr_2021_pOHW7EwFbo9"
] |
iclr_2021_gYbimGJAENn | Powers of layers for image-to-image translation | We propose a simple architecture to address unpaired image-to-image translation tasks: style or class transfer, denoising, deblurring, deblocking, etc.
We start from an image autoencoder architecture with fixed weights.
For each task we learn a residual block operating in the latent space, which is itera... | withdrawn-rejected-submissions | All the reviewers shared the concerns about the novelty and the quality of the results. Comparisons with some SOTA results are missing, and the inclusion of deblurring/denosing tasks is not convincing. The authors carefully addressed these issues in the rebuttal but the reviewers didn’t change their mind afterwards. Af... | train | [
"wpAW1jD5oux",
"ImtXwT8uJx6",
"m38A8MxRDhA",
"u7NYggADg5H",
"5mAkQKsIMN",
"G-sg4zguXo6",
"4BhKUFuipDC",
"7kOHFIjWEfs"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"1) The review states that “The paper includes as motivation the idea that applying that different levels of transformation can be achieved by choosing different numbers of iterations, but the application of this is shown only for denoising (Table 1).” \nThis is not correct. “If the network is trained with a fixe... | [
-1,
-1,
-1,
-1,
3,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"5mAkQKsIMN",
"G-sg4zguXo6",
"4BhKUFuipDC",
"7kOHFIjWEfs",
"iclr_2021_gYbimGJAENn",
"iclr_2021_gYbimGJAENn",
"iclr_2021_gYbimGJAENn",
"iclr_2021_gYbimGJAENn"
] |
iclr_2021_GzHjhdpk-YH | The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation | Comparing metric measure spaces (i.e. a metric space endowed with a probability distribution) is at the heart of many machine learning problems. This includes for instance predicting properties of molecules in quantum chemistry or generating graphs with varying connectivity. The most popular distance between such metri... | withdrawn-rejected-submissions | This paper present novel formulations to address the problem of unbalanced Gromov. The Conic formulation is very interesting but stays theoretical until optimization algorithms are available. The Unbalanced Gromov is a nice extension of Gromov and comes with relatively efficient solvers. Some very limited numerical exp... | train | [
"qJFteJhbEv",
"eSAfQz1IWiu",
"8VILPF0YT3x",
"nyzBS1cuuu1",
"ubzjMspIFIv",
"vkR9n84oGoW",
"0L_oCO24wLx",
"ZFuCc8AmP5f",
"-t96efchbTA",
"vjfh6kz1OFb",
"aQTYzjP0clK",
"XGndPvBQ2ta"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces a novel unbalanced Gromov-Wasserstein type problem. The Gromov-Wasserstein distance is very useful in practice for comparing probability distributions that do not lie in the same metric spaces. It has recently found several successful applications in ML for computational chemistry, graphs comp... | [
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2021_GzHjhdpk-YH",
"0L_oCO24wLx",
"vkR9n84oGoW",
"vjfh6kz1OFb",
"iclr_2021_GzHjhdpk-YH",
"XGndPvBQ2ta",
"aQTYzjP0clK",
"qJFteJhbEv",
"iclr_2021_GzHjhdpk-YH",
"ubzjMspIFIv",
"iclr_2021_GzHjhdpk-YH",
"iclr_2021_GzHjhdpk-YH"
] |
iclr_2021_VyDYSMx1sFU | End-to-End on-device Federated Learning: A case study | With the development of computation capability in devices, companies are eager to utilize ML/DL methods to improve their service quality. However, with traditional Machine Learning approaches, companies need to build up a powerful data center to collect data and perform centralized model training, which turns out to be... | withdrawn-rejected-submissions | This paper proposes the use of federated learning to the application of steering wheel prediction for autonomous driving. While the application is new and interesting, the reviewers felt that the approach and results were mostly empirical. I suggest that the authors improve the conceptual/algorithmic contribution of th... | train | [
"kt5JZTV0Z86",
"8pP3s7gp_yX",
"Icsyeh_VTOk",
"PUTR6RM85_v"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The study evaluates federated learning (FL) in the context of steering wheel angle prediction, which is relevant for autonomous driving systems. Authors compare against two baselines a centrally-computed and locally-computed models and measure prediction error, training time and bandwidth cost. The work evaluates ... | [
6,
4,
4,
2
] | [
4,
4,
4,
5
] | [
"iclr_2021_VyDYSMx1sFU",
"iclr_2021_VyDYSMx1sFU",
"iclr_2021_VyDYSMx1sFU",
"iclr_2021_VyDYSMx1sFU"
] |
iclr_2021_ZD7Ll4pAw7C | Rethinking the Pruning Criteria for Convolutional Neural Network | Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), and various pruning criteria have been proposed to remove the redundant filters of CNNs. From our comprehensive experiments, we find some blind spots on pruning criteria: (1) Similarity: There are some strong similarities among... | withdrawn-rejected-submissions | The paper works towards analysis to understand the difference -- and primarily the lack thereof -- between different pruning methods. The central observation is that the convolutional filters in a layer are not strongly correlated and -- if the weights of the layer are taken as a matrix -- then the covariance matrix is... | train | [
"r7EQpn3YbTw",
"sQPzgkjcbaz",
"HINHxUYrpO",
"7faxaW3D0B4",
"uN6ITZ4G5O6",
"7iv4vTeGch",
"KnewOiXp5WG",
"RKOOu9PrjEH",
"ZYHEgjX4lz7",
"g08wTdmO6P",
"fT5aAdQQ6DU"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you again for your comments. \n\n \n\n**Q1: Why the authors claim it as a diagonal matrix when actually observing it as a block-diagonal matrix?**\n\n\nA1: For a convolutional filter, the fact that the covariance matrix $\\mathbf{\\Sigma_{\\text{block}}} \\in \\mathbb{R}^{(k^2\\times N_{i+1})\\times (k^2\\t... | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"HINHxUYrpO",
"iclr_2021_ZD7Ll4pAw7C",
"iclr_2021_ZD7Ll4pAw7C",
"g08wTdmO6P",
"HINHxUYrpO",
"ZYHEgjX4lz7",
"fT5aAdQQ6DU",
"iclr_2021_ZD7Ll4pAw7C",
"iclr_2021_ZD7Ll4pAw7C",
"iclr_2021_ZD7Ll4pAw7C",
"iclr_2021_ZD7Ll4pAw7C"
] |
iclr_2021_JiNvAGORcMW | Cross-State Self-Constraint for Feature Generalization in Deep Reinforcement Learning | Representation learning on visualized input is an important yet challenging task for deep reinforcement learning (RL). The feature space learned from visualized input not only dominates the agent's generalization ability in new environments but also affect the data efficiency during training. To help the RL agent learn... | withdrawn-rejected-submissions | I thank the authors for their submission and participation in the author response period. The updated experiments are appreciated. However, after discussion all reviewers unanimously agree that the paper is not ready for publication and encourage resubmission to another venue. In particular, R2 and R3 have raised conce... | train | [
"h9djPIQfLYr",
"RRBN84PvtEq",
"JFXisCVBLN",
"dQL7FLfxWtu",
"Mbzm3vjn8un",
"H0u2tcDbsuw",
"Bvtai6JHQdB",
"eWk4vzrC63d",
"3sZrlerZXa",
"zmSesZT-QEj",
"ZojNjKF1HtE",
"nEWv37XHrQi"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear Authors,\n\nthank you for the updated paper and the provided clarifications.\n\nWhile the added experimental results are a great addition, my main concern about the lack of fair comparison against other, widely available regularisation methods, was not addressed yet, so I will keep my score. \n\nI believe the... | [
-1,
-1,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"Bvtai6JHQdB",
"dQL7FLfxWtu",
"iclr_2021_JiNvAGORcMW",
"3sZrlerZXa",
"iclr_2021_JiNvAGORcMW",
"eWk4vzrC63d",
"ZojNjKF1HtE",
"Mbzm3vjn8un",
"JFXisCVBLN",
"nEWv37XHrQi",
"iclr_2021_JiNvAGORcMW",
"iclr_2021_JiNvAGORcMW"
] |
iclr_2021_Oe2XI-Aft-k | Perturbation Type Categorization for Multiple ℓp Bounded Adversarial Robustness | Despite the recent advances in adversarial training based defenses, deep neural networks are still vulnerable to adversarial attacks outside the perturbation type they are trained to be robust against. Recent works have proposed defenses to improve the robustness of a single model against the union of multiple perturba... | withdrawn-rejected-submissions | The paper proposes a model to defend against multiple lp norm attacks by classifying those attacks. The reviewers raised several concerns about the methodologies. Furthermore, it's not clear how the proposed algorithm can deal with an unseen attack (e.g., only trained on l1, l_infty attacks but encounter l2 attack in t... | train | [
"0pMLwtEUBEy",
"5pCLwowHr4O",
"UduYTjNuVJG",
"V1q5prt8RqE",
"sIcexq1_Ibq",
"f9jxum0oodT",
"jKeUfzY1hD",
"2_PeuMNwrse",
"7vdZA2irjkl",
"fR5vtp3LUh",
"Ks_H8qg-gK",
"eOZZwHJVEPX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an ensemble approach to deal with multiple perturbation types. The underlying idea is to train a robust classifier for each perturbation type (i.e., l1, l2, and l-inf) and choose a model to predict based on the decision of a perturbation classifier which is trained to distinguish perturbation t... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2021_Oe2XI-Aft-k",
"f9jxum0oodT",
"iclr_2021_Oe2XI-Aft-k",
"iclr_2021_Oe2XI-Aft-k",
"eOZZwHJVEPX",
"0pMLwtEUBEy",
"eOZZwHJVEPX",
"Ks_H8qg-gK",
"fR5vtp3LUh",
"iclr_2021_Oe2XI-Aft-k",
"iclr_2021_Oe2XI-Aft-k",
"iclr_2021_Oe2XI-Aft-k"
] |
iclr_2021_QfEssgaXpm | Reinforcement Learning for Control with Probabilistic Stability Guarantee | Reinforcement learning is promising to control dynamical systems for which the traditional control methods are hardly applicable. However, in control theory, the stability of a closed-loop system can be hardly guaranteed using the policy/controller learned solely from samples. In this paper, we will combine Lyapunov's ... | withdrawn-rejected-submissions | Although the reviewers like the general idea of the paper, there are concerns regarding the clarity of the statements, especially in stating the main assumptions, referring to related work, and how well the experiments support the results of the paper. Although the authors' long response addressed some of the issues/co... | train | [
"3aOyupweJB9",
"thMXefYeSP4",
"4V0sQCTAPXT",
"T9mttw-BB4e"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the probabilistic stability guarantee of control systems. In general, hard stability guarantee is difficult with only finite samples. The authors instead focus on developing probabilistic stability conditions. High probability bound is derived in terms of the number of trajectories and the lengt... | [
6,
6,
5,
5
] | [
3,
4,
3,
4
] | [
"iclr_2021_QfEssgaXpm",
"iclr_2021_QfEssgaXpm",
"iclr_2021_QfEssgaXpm",
"iclr_2021_QfEssgaXpm"
] |
iclr_2021_5WcLI0e3cAY | K-PLUG: KNOWLEDGE-INJECTED PRE-TRAINED LANGUAGE MODEL FOR NATURAL LANGUAGE UNDERSTANDING AND GENERATION | Existing pre-trained language models (PLMs) have demonstrated the effectiveness of self-supervised learning for a broad range of natural language processing (NLP) tasks. However, most of them are not explicitly aware of domain-specific knowledge, which is essential for downstream tasks in many domains, such as tasks in... | withdrawn-rejected-submissions | This paper proposes a new method for pre-training of language models in the e-commerce domain. It introduces five objectives for pre-training by incorporating domain knowledge into the model.
Pros • The paper is generally easy to follow. • Design of the pre-training objectives is reasonable. • Experimental results are... | train | [
"vg94ZCW27QK",
"4cCewKYu7Z",
"V1wDzPSASHV",
"bVdAfqq9tT",
"AtCRo8favcy",
"IVPp_MFV2Lf",
"bKq90EPl1pW",
"_xPa8yRPwrD",
"IaBR5e1VFq",
"p4kqwBQuUgJ",
"nqECRKnCCE5",
"YoBOodOR02H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"(For Q1) Yes, these ideas are also very interesting, but I guess the settings are already quite different from this work, so it might need actual experiments to see if they can work as well. Anyway, this looks like an interesting direction that is worth investigating.",
"Thanks for the detailed explanations and ... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
6,
5,
4
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"V1wDzPSASHV",
"AtCRo8favcy",
"4cCewKYu7Z",
"iclr_2021_5WcLI0e3cAY",
"p4kqwBQuUgJ",
"nqECRKnCCE5",
"YoBOodOR02H",
"bVdAfqq9tT",
"iclr_2021_5WcLI0e3cAY",
"iclr_2021_5WcLI0e3cAY",
"iclr_2021_5WcLI0e3cAY",
"iclr_2021_5WcLI0e3cAY"
] |
iclr_2021_Z2qyx5vC8Xn | Temporal Difference Uncertainties as a Signal for Exploration | An effective approach to exploration in reinforcement learning is to rely on an agent's uncertainty over the optimal policy, which can yield near-optimal exploration strategies in tabular settings. However, in non-tabular settings that involve function approximators, obtaining accurate uncertainty estimates is almost a... | withdrawn-rejected-submissions | The submitted paper contains interesting theoretical insights into common approaches for exploration and proposes a new way for deriving intrinsic rewards for exploration which is evaluated in several benchmark environments. While all reviewers appreciate these aspects, there are concerns about whether the paper is rea... | train | [
"-ShCtRgNyfm",
"UQ7ohw8s2kP",
"eLVCx-cNs7",
"aH_lAgKDB8Z",
"1jImYg2CKdj",
"vGh8I3KbcbX",
"LvtmVvs6ODR",
"Grj1X5hcAHg",
"0ckDMF5mIeq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes to use an intrinsic reward based on uncertainties calculated from temporal difference errors. The approach, called Temporal Difference Uncertainties (TDU), estimates the variance of td errors across multiple (bootstrapped) parameters, for a given state, action, next state and reward, where vari... | [
5,
5,
5,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
2,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_Z2qyx5vC8Xn",
"iclr_2021_Z2qyx5vC8Xn",
"iclr_2021_Z2qyx5vC8Xn",
"eLVCx-cNs7",
"0ckDMF5mIeq",
"UQ7ohw8s2kP",
"Grj1X5hcAHg",
"-ShCtRgNyfm",
"iclr_2021_Z2qyx5vC8Xn"
] |
iclr_2021_AcH9xD24Hd | Learning the Step-size Policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Algorithm | We consider the problem of how to learn a step-size policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. This is a limited computational memory quasi-Newton method widely used for deterministic unconstrained optimization but currently avoided in large-scale problems for requiring step size... | withdrawn-rejected-submissions | The paper presents a novel procedure to set the step-size for the L-BFGS algorithm using a neural network.
Overall, the reviewers found the paper interesting and the main idea well-thought. However, a baseline that was proposed by one of the reviewers seems to be basically on par with the performance of the proposed a... | train | [
"AXMyWDcep89",
"p8mpCMet0wM",
"DFR-iATAya",
"2hnrTcLJULC",
"MVAcQqG4p0X",
"V-WINISaZDB",
"YV1hoR5RwuU",
"AHmhtI6uF2Q",
"Be-ZHgVcbfw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**Summary**:\n\nThe paper presents a novel step-size adaptation for the L-BFGS algorithm inspired by the learning-to-learn idea. The step-size policy is determined by two linear layers which compare a higher dimensional mapping of curvature information which is trained to adapt the step-size.\n\n\n**Reasons for s... | [
4,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2021_AcH9xD24Hd",
"YV1hoR5RwuU",
"AHmhtI6uF2Q",
"MVAcQqG4p0X",
"AXMyWDcep89",
"Be-ZHgVcbfw",
"iclr_2021_AcH9xD24Hd",
"iclr_2021_AcH9xD24Hd",
"iclr_2021_AcH9xD24Hd"
] |
iclr_2021_xHqKw3xJQhi | VEM-GCN: Topology Optimization with Variational EM for Graph Convolutional Networks | Over-smoothing has emerged as a severe problem for node classification with graph convolutional networks (GCNs). In the view of message passing, the over-smoothing issue is caused by the observed noisy graph topology that would propagate information along inter-class edges, and consequently, over-mix the features of no... | withdrawn-rejected-submissions | The authors propose a new approach to topology optimization to address over-smoothing in GCNs. This is a borderline paper. Topology optimization is clearly important and relevant and the approach tries to optimize the topology (add/delete edges) by viewing the problem as a latent variable model and aiming to optimize t... | train | [
"BMeR3d-vMb5",
"qVMdgM9hUYh",
"QL_P3CJOFWk",
"R2l_3reh89W",
"pSpNcipX6tG",
"phdkA9oawTL",
"aAGow5UlnrH",
"OIfvPlxVx3m",
"H5_4Bxb7wh",
"GiZkWDn54Ux",
"wPuAbd0iJJE",
"y2mFAXttNp"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"A brief summary of the paper.\nThis paper proposed a novel architecture termed VEM-GCN to address the over-smoothing problem of GNNs in the node classification task. The main idea is to optimize the graph topology by removing the inter-class edges as well as adding the intra-class edges, and then the noise informa... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"iclr_2021_xHqKw3xJQhi",
"phdkA9oawTL",
"iclr_2021_xHqKw3xJQhi",
"GiZkWDn54Ux",
"y2mFAXttNp",
"wPuAbd0iJJE",
"OIfvPlxVx3m",
"BMeR3d-vMb5",
"pSpNcipX6tG",
"iclr_2021_xHqKw3xJQhi",
"iclr_2021_xHqKw3xJQhi",
"iclr_2021_xHqKw3xJQhi"
] |
iclr_2021_uQnJqzkhrmj | Ranking Cost: One-Stage Circuit Routing by Directly Optimizing Global Objective Function | Circuit routing has been a historically challenging problem in designing electronic systems such as very large-scale integration (VLSI) and printed circuit boards (PCBs). The main challenge is that connecting a large number of electronic components under specific design rules and constraints involves a very large searc... | withdrawn-rejected-submissions | The paper studies an interesting problem motivated by VLSI design. The reviewers agree that there are interesting aspects of the RC algorithm. Nevertheless, the paper could be improved by a clearer characterization/apples-to-apples comparison to baselines, particularly regarding computation cost, use of parallelism, as... | train | [
"E3oaovOtvL",
"Fi8PoCjMJfE",
"P2cPX8jcwP",
"WfMxf5ktjoS",
"_-IA2q3VnWE",
"KD-SaLrI4rg",
"LV7QCMRzUWt",
"TKU7nAEEBrX",
"rKR0_9_75sK",
"l4jwvgU4MXt",
"Uf1KqBP1c-R",
"dtkuOAcuZ4B",
"aDvyzLbYyV_",
"7W9xvz2iuUO",
"WpvUDS4UHfT",
"dvmSyMe7p9",
"EP5SlErXXGw",
"nPjOZyNq6rp"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Since reviewers pay more attention to the time complexity of our RC algorithm and the sequential A* algorithm, we also point out:\n\n1. **The sequential A* algorithm (the A*) suffers from non-optimal solution problem as stated in Section 2.2.2**. Even given the full sampling, the A* will fail in some cases. For ex... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2021_uQnJqzkhrmj",
"iclr_2021_uQnJqzkhrmj",
"LV7QCMRzUWt",
"_-IA2q3VnWE",
"TKU7nAEEBrX",
"iclr_2021_uQnJqzkhrmj",
"rKR0_9_75sK",
"l4jwvgU4MXt",
"dtkuOAcuZ4B",
"7W9xvz2iuUO",
"dvmSyMe7p9",
"Fi8PoCjMJfE",
"EP5SlErXXGw",
"nPjOZyNq6rp",
"iclr_2021_uQnJqzkhrmj",
"iclr_2021_uQnJqzkhrmj... |
iclr_2021_cP2fJWhYZe0 | Overinterpretation reveals image classification model pathologies | Image classifiers are typically scored on their test set accuracy, but high accuracy can mask a subtle type of model failure. We find that high scoring convolutional neural networks (CNNs) on popular benchmarks exhibit troubling pathologies that allow them to display high accuracy even in the absence of semantically sa... | withdrawn-rejected-submissions | The reviewers generally feel that the phenomenon discovered in this paper is relevant and could be very important when considering interpretability. However, there are still a number of remaining concerns. The reviewers are not convinced by the human study - they feel there is structure in the SIS’s such that a human t... | train | [
"6KMqOeRRQtj",
"Z8UEIjrPo4_",
"TN2bytjmz_O",
"70-nYREuCPp",
"A9ZQSBHK4bL",
"9uxmGmswj9t",
"QveGtZ35leb",
"CxoC_TeUl7X",
"valWHRyFpMk",
"ca_bzR4gGsy"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThis paper studies “overinterpretation”(provides a high-confidence decision without\nsalient supporting input features) of deep learning models. It modifies previous method Sufficient Input Subsets (SIS) (Carter et al., 2019) to scale to high-dimensional inputs. It detects overinterpretation on both CIFA... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2021_cP2fJWhYZe0",
"iclr_2021_cP2fJWhYZe0",
"6KMqOeRRQtj",
"valWHRyFpMk",
"Z8UEIjrPo4_",
"ca_bzR4gGsy",
"CxoC_TeUl7X",
"iclr_2021_cP2fJWhYZe0",
"iclr_2021_cP2fJWhYZe0",
"iclr_2021_cP2fJWhYZe0"
] |
iclr_2021_ZS-9XoX20AV | GraphSAD: Learning Graph Representations with Structure-Attribute Disentanglement | Graph Neural Networks (GNNs) learn effective node/graph representations by aggregating the attributes of neighboring nodes, which commonly derives a single representation mixing the information of graph structure and node attributes. However, these two kinds of information might be semantically inconsistent and could b... | withdrawn-rejected-submissions | In this paper, the authors propose a method to find disentangling embeddings of the structure and the attribute of the graph. Overall, this is an interesting paper and the paper is well-written and easy to follow, and the paper has some merits. However, the reviewers were still not convinced by the response, and the p... | train | [
"yLMjSc8W3Cq",
"cKMSsGpOZLH",
"xfJUX5ExWoZ",
"wrpSaABnbaP",
"JpWN1s-MfFW",
"-jnQNDMtK1p",
"ZuxFSnVaNt8",
"F4uSE2Zp_MZ",
"2KZlXB-KMXb",
"T3zcW-yfJw",
"ET0uJIPi_JQ",
"YIEFBKzXWg_"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your feedback!\n\nWe respond to the two questions as follows:\n\nQ1: The classifier-based metric cannot judge whether an element of graph embedding corresponds to structure or attribute factor. \n\nA1: As you suggested, a classifier can hardly judge whether a feature element is for graph structure or no... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4
] | [
"cKMSsGpOZLH",
"-jnQNDMtK1p",
"wrpSaABnbaP",
"JpWN1s-MfFW",
"2KZlXB-KMXb",
"T3zcW-yfJw",
"YIEFBKzXWg_",
"ET0uJIPi_JQ",
"iclr_2021_ZS-9XoX20AV",
"iclr_2021_ZS-9XoX20AV",
"iclr_2021_ZS-9XoX20AV",
"iclr_2021_ZS-9XoX20AV"
] |
iclr_2021_Utc4Yd1RD_s | Towards Defending Multiple Adversarial Perturbations via Gated Batch Normalization | There is now extensive evidence demonstrating that deep neural networks are vulnerable to adversarial examples, motivating the development of defenses against adversarial attacks. However, existing adversarial defenses typically improve model robustness against individual specific perturbation types. Some recent method... | withdrawn-rejected-submissions | This paper first examines a multi-domain separation phenomenon, where different types of adversarial noise lead to different running statistics, and then introduces Gated Batch Normalization (GBN), a building block for deep neural networks that improves robustness against multiple perturbation types. GBN consists of a ... | val | [
"_FkLFcty8L",
"wMTnPzgmhj",
"STQmmSl2B5J",
"C0Ey5uRlUq_",
"45sckTPBBlE",
"cxIhPgz3BOH",
"D-qsRnEjgUF",
"DhgcjViHV8S",
"Vi3v___WWM",
"tT3Vj-Ps4Tt"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces an algorithm for defending against multiple adversarial attacks (L1, L2, L-inf) by learning separate batch-norm statistics for each attack type. At inference time, the batch-normalized outputs corresponding to the different attack types are averaged according to the probability of attack type... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_Utc4Yd1RD_s",
"iclr_2021_Utc4Yd1RD_s",
"C0Ey5uRlUq_",
"cxIhPgz3BOH",
"Vi3v___WWM",
"tT3Vj-Ps4Tt",
"DhgcjViHV8S",
"_FkLFcty8L",
"iclr_2021_Utc4Yd1RD_s",
"iclr_2021_Utc4Yd1RD_s"
] |
iclr_2021__OGAW_hznmG | Maximum Entropy competes with Maximum Likelihood | Maximum entropy (MAXENT) method has a large number of
applications in theoretical and applied machine learning, since it
provides a convenient non-parametric tool for estimating unknown
probabilities. The method is a major contribution of statistical
physics to probabilistic inference. However,... | withdrawn-rejected-submissions | As one of the reviewers concisely summarized: This paper investigates maximum entropy (MaxEnt) inference and compares it to a Bayesian estimator and regularized maximum likelihood for finite models.
Two reviewers specifically question whether they have learned anything new after reading. This combined with various ot... | train | [
"sv_dZEsfi4E",
"8J-5-kOe1vY",
"faoYXiRvBoz",
"UxHeZc7U5t",
"IJv8efE74x",
"e_6Ggquqt04",
"uJOYR4OilJH",
"bwGgDGyRX0r",
"aVT1_QFNOEr"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nWe thank Reviewer 2 for reporting on our paper and for making very useful suggestions.\n\n1. Reviewer: I’m confused about the application scenarios of the proposed method. The authors did not show any application value, though they claimed that MAXENT has a large number of applications in applied machine learnin... | [
-1,
-1,
-1,
-1,
-1,
4,
4,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"bwGgDGyRX0r",
"aVT1_QFNOEr",
"e_6Ggquqt04",
"uJOYR4OilJH",
"iclr_2021__OGAW_hznmG",
"iclr_2021__OGAW_hznmG",
"iclr_2021__OGAW_hznmG",
"iclr_2021__OGAW_hznmG",
"iclr_2021__OGAW_hznmG"
] |
iclr_2021_5mhViEOQxaV | Controllable Pareto Multi-Task Learning | A multi-task learning (MTL) system aims at solving multiple related tasks at the same time. With a fixed model capacity, the tasks would be conflicted with each other, and the system usually has to make a trade-off among learning all of them together. Multiple models with different preferences over tasks have to be tr... | withdrawn-rejected-submissions | The paper first aims to propose a new controllable Pareto multi-task learning framework to find pareto-optimal solutions. But after the revision according the comments, the paper claims to find finite Pareto stationary solutions. But the paper still can not prove their proposed method can find the Pareto stationary sol... | train | [
"hTG-at2xLlQ",
"DAoFCfUB23P",
"ytSPZOZF8pl",
"c12mADhA6Ku",
"8TyE6s6EylL",
"LSUzhlzMDb-",
"Lyl-8LhqPv5",
"EXu0xhQNn-f",
"7zkr52_dwSE",
"Y4wDcqafBi9",
"X-t3ASNikSL",
"NiopHA5CN0I",
"JtLFtOgWYIO",
"LM09E2j_QiV",
"MZjm7AIw6dZ",
"KzBKK-evLe"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a method to controllably generate models on the trade-off front of multi-task learning problems. The key idea is to use a hypernetwork that will generate the MTL parameters on demand conditioned on the desired trade-off. The hypernetwork can be trained along with the MTL in an end-to-end manner.... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2021_5mhViEOQxaV",
"ytSPZOZF8pl",
"MZjm7AIw6dZ",
"MZjm7AIw6dZ",
"LSUzhlzMDb-",
"Lyl-8LhqPv5",
"hTG-at2xLlQ",
"hTG-at2xLlQ",
"KzBKK-evLe",
"X-t3ASNikSL",
"KzBKK-evLe",
"iclr_2021_5mhViEOQxaV",
"iclr_2021_5mhViEOQxaV",
"iclr_2021_5mhViEOQxaV",
"iclr_2021_5mhViEOQxaV",
"iclr_2021_5m... |
iclr_2021_mo3Uqtnvz_ | Multi-scale Network Architecture Search for Object Detection | Many commonly-used detection frameworks aim to handle the multi-scale object detection problem. The input image is always encoded to multi-scale features and objects grouped by scale range are assigned to the corresponding features. However, the design of multi-scale feature production is quite hand-crafted or partiall... | withdrawn-rejected-submissions | This submission got 3 rejections and 1 marginally below the threshold. In the original reviews, most of the concerns lie in the limited novelty, the inferior performance to some existing similar works and the limited scalability of the proposed method. Though authors provide some additional experiments, the reviewers st... | train | [
"c2w17Glo78Z",
"08RpyKOCTug",
"x_7JxOfGQNf",
"zjuCDXTZgn0",
"iqMTH-kxMn1",
"4HG9U01eIw",
"OdlKMup27u",
"VFp_2BEPAt",
"jVcwNQp7MCw",
"aftOPvWyd8K"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"#### Summary\n\nThis submission works on the task of architecture search for object detection. The authors focus on two components: how to produce multi-scale features and how to use multi-scale features. The authors formalized a simple search space, and applied an evolution-based search algorithm. Experiments sho... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_mo3Uqtnvz_",
"c2w17Glo78Z",
"VFp_2BEPAt",
"jVcwNQp7MCw",
"jVcwNQp7MCw",
"aftOPvWyd8K",
"iclr_2021_mo3Uqtnvz_",
"iclr_2021_mo3Uqtnvz_",
"iclr_2021_mo3Uqtnvz_",
"iclr_2021_mo3Uqtnvz_"
] |
iclr_2021_zspml_qcldq | Cross-Modal Retrieval Augmentation for Multi-Modal Classification | Recent advances in using retrieval components over external knowledge sources have shown impressive results for a variety of downstream tasks in natural language processing. Here, we explore the use of unstructured external knowledge sources of images and their corresponding captions for improving visual question answe... | withdrawn-rejected-submissions | The paper discusses the problem of how to augment cross-modal retrieval for the task of multi-modal classification -- it uses image caption pairs to improve downstream multimodal learning, and shows improvement in the task of visual question answering. However, the paper has the following weaknesses: (a) lack of novelt... | train | [
"g4BxevFMd0g",
"HWiQK20LXmN",
"ZLOdzIfHauO",
"BzzwWT6MBC",
"pHTBGZTbs1t",
"5RDxihUqx8W",
"3RkRIx0Vv60"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the detailed review.\n\nWe kindly point the reviewer to our “general response” for detailed explanation of major concerns.\n\n### Concern 1+2 - DXR performance\n\n1) **Q:** The reviewer is concerned with the performance of our retriever, with respect to other baselines.\n\n **A:** As we mention in ... | [
-1,
-1,
-1,
-1,
5,
4,
3
] | [
-1,
-1,
-1,
-1,
5,
5,
5
] | [
"pHTBGZTbs1t",
"5RDxihUqx8W",
"3RkRIx0Vv60",
"iclr_2021_zspml_qcldq",
"iclr_2021_zspml_qcldq",
"iclr_2021_zspml_qcldq",
"iclr_2021_zspml_qcldq"
] |
iclr_2021_uvEgLKYMBF9 | Variance Reduction in Hierarchical Variational Autoencoders | Variational autoencoders with deep hierarchies of stochastic layers have been known to suffer from the problem of posterior collapse, where the top layers fall back to the prior and become independent of input.
We suggest that the hierarchical VAE objective explicitly includes the variance of the function paramet... | withdrawn-rejected-submissions | This paper develops a smoothing procedure to avoid the problem of posterior collapse in VAEs. The method is interesting and novel, the experiments are well executed, and the authors answered satisfactorily to most of the reviewers' concerns. However, there is one remaining issue that would require additional discussion... | val | [
"fyaJSYIPXll",
"xnb4AmXOM9z",
"R74hsS49jqw",
"QaB9mmJDZqE",
"jlSwAl2kWBd",
"pn1jfLrJ3Qt",
"ajdjm1EqQ4g",
"dTBv9qe2Cyk",
"h4_39hUw18z",
"rThq-Qm2wXo",
"dRmke23e8a1",
"KGgcJ8jvM7n",
"yigkUkI-u1R",
"vJCIFYtKQ0s",
"QVwPp9Utaf6",
"ILKq10aHkXD",
"a9uJdhR38Jl",
"__j4TnAWECt",
"1dRlUdwNs... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a Hermite variational auto-encoder which use Ornstein Uhlenbeck Semi-group to p(z_i|z_i+1) which i denotes the latent layer number. It has clear theoretical inspiration and had solid analysis on variance reduction. \n\nPros:\nQuality: The paper's generic theoretical motivation and analysis is w... | [
6,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_uvEgLKYMBF9",
"iclr_2021_uvEgLKYMBF9",
"iclr_2021_uvEgLKYMBF9",
"jlSwAl2kWBd",
"pn1jfLrJ3Qt",
"KGgcJ8jvM7n",
"h4_39hUw18z",
"h4_39hUw18z",
"yigkUkI-u1R",
"xnb4AmXOM9z",
"fyaJSYIPXll",
"__j4TnAWECt",
"a9uJdhR38Jl",
"1dRlUdwNsmb",
"fyaJSYIPXll",
"QVwPp9Utaf6",
"xnb4AmXOM9z",... |
iclr_2021_3Wp8HM2CNdR | Whitening for Self-Supervised Representation Learning | Most of the self-supervised representation learning methods are based on the contrastive loss and the instance-discrimination task, where augmented versions of the same image instance ("positives") are contrasted with instances extracted from other images ("negatives"). For the learning to be effective, a lot of negat... | withdrawn-rejected-submissions | This paper received borderline recommendations (5, 5, 6, 7) but even the two slightly more positive reviewers were lukewarm (R1 and R2). While the reviewers acknowledged the heavy computational requirements to do an apples-to-apples comparison with existing baselines, they remain underwhelmed with the lack of experimen... | val | [
"zjeBkkuNAcT",
"1h488jkojc",
"-HIi0nLHEca",
"RGXQ-Jgs_fX",
"-Om5e0m-SK",
"ouDFyKEBdE_",
"S0au62kt5FJ",
"TvReA43uQrQ",
"svow4IP1Cl0"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed an MSE loss function with whitening (W-MSE) for self-supervised representation learning. The motivation is to reduce the demand of negative examples in contrastive representation learning. The proposed W-MSE loss function is compared with popular contrastive loss on a few benchmarks.\n\nContras... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_3Wp8HM2CNdR",
"iclr_2021_3Wp8HM2CNdR",
"TvReA43uQrQ",
"1h488jkojc",
"zjeBkkuNAcT",
"svow4IP1Cl0",
"iclr_2021_3Wp8HM2CNdR",
"iclr_2021_3Wp8HM2CNdR",
"iclr_2021_3Wp8HM2CNdR"
] |
iclr_2021_mgVbI13p96 | Multi-modal Self-Supervision from Generalized Data Transformations | In the image domain, excellent representation can be learned by inducing invariance to content-preserving transformations, such as image distortions. In this paper, we show that, for videos, the answer is more complex, and that better results can be obtained by accounting for the interplay between invariance, distincti... | withdrawn-rejected-submissions | The authors provided a comprehensive rebuttal to the reviewers' feedback that addressed most of the concerns. AnonReviewer3 raised some major concerns that were partially resolved in a revision. The paper has received a split recommendation from the reviewers but within the review and discussion periods, there was no s... | test | [
"wW7DIuEpVw9",
"LvNBQ5TXoYh",
"R1kLTAqxBHt",
"s1M78iPMRNk",
"LZT6rFbPujR",
"BcEEh1_CLD8",
"TynXXpX6NM",
"0x7kfvnNXk-",
"Ft9B61sbkYU",
"w4Ljq4LXBPx",
"0dZ_yLyrdzn",
"PaBrDD3zt3b",
"CFm_V2mLmOz",
"o97clfqMLk1",
"4WFgEweHbm",
"ItrKXKTexUZ",
"ZRTrjv5YMU3",
"tjM_2nZBh7g",
"Qi2s16X4T6F... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"## Summary\n\nThe paper introduces a general framework dubbed Generalized Data Transformations (GDT) for self supervised learning. The framework is used to perform video-audio self supervised learning and analyze what kind of transformations the representations should be invariant to or on the contrary variant to ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2021_mgVbI13p96",
"iclr_2021_mgVbI13p96",
"BcEEh1_CLD8",
"LZT6rFbPujR",
"TynXXpX6NM",
"PaBrDD3zt3b",
"iclr_2021_mgVbI13p96",
"Ft9B61sbkYU",
"CFm_V2mLmOz",
"0dZ_yLyrdzn",
"ZRTrjv5YMU3",
"CFm_V2mLmOz",
"o97clfqMLk1",
"wW7DIuEpVw9",
"Qi2s16X4T6F",
"tjM_2nZBh7g",
"iclr_2021_mgVbI13... |
iclr_2021_IeuEO1TccZn | Sufficient and Disentangled Representation Learning | We propose a novel approach to representation learning called sufficient and disentangled representation learning (SDRL). With SDRL, we seek a data representation that maps the input data to a lower-dimensional space with two properties: sufficiency and disentanglement. First, the representation is sufficient in the se... | withdrawn-rejected-submissions | This paper aims to present a new representation learning framework for supervised learning based on finding a representation such that the input is conditionally independent given the representation, the components of the representations are independent and the representation is rotation-invariant. While there were bot... | test | [
"d6_dpfXUK9",
"nDBotXtdacz",
"HOZiLitx2hy",
"PdLqyKh7KL1",
"vdmlGM5kSmy",
"y-6j1ib1WU9",
"vUZXZ64nI1H",
"B8tzfP4tVcJ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We’d like to thank you for taking the time to read our submission and for your comments. However, based on the comments we are afraid you may have misunderstood the key points of our paper and did not provide a fair assessment of our contribution. We hope our responses below may help clarify the key points of our ... | [
-1,
-1,
-1,
-1,
5,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"B8tzfP4tVcJ",
"vdmlGM5kSmy",
"y-6j1ib1WU9",
"vUZXZ64nI1H",
"iclr_2021_IeuEO1TccZn",
"iclr_2021_IeuEO1TccZn",
"iclr_2021_IeuEO1TccZn",
"iclr_2021_IeuEO1TccZn"
] |
iclr_2021_awOrpNtsCX | Shape-Tailored Deep Neural Networks Using PDEs for Segmentation | We present Shape-Tailored Deep Neural Networks (ST-DNN). ST-DNN extend convolutional networks, which aggregate data from fixed shape (square) neighbor-hoods to compute descriptors, to be defined on arbitrarily shaped regions. This is useful for segmentation applications, where it is desired to have descriptors that agg... | withdrawn-rejected-submissions | This paper proposes a new kind of CNN that convolves on deformable regions and cooperates with the Poisson equation to determine the deformable regions. Experiments on texture segmentation look promising.
Pros:
1. The paper is well written and easy to follow.
2. The idea is interesting and the reviewers liked it.
3. ... | val | [
"pwvjnc_fSYK",
"SYGZtgPuK8g",
"PS6OLbTDD9s",
"eGvhI-DbsvO",
"7n7IkJJgLZm",
"j-fwIhLvsyX",
"AK0bFQZ7TQF",
"NYkiczjB0Us",
"-W01Hp-lWBp",
"wFnqYpQ2NrH",
"jibC4ZMaRbR"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose shape-tailored deep neural networks which performs filtering according to Poisson PDE using convolutions and output robust descriptors for the task of texture segmentation. \n\nThe strengths:\n1. The relationship to Poisson PDE and its formulation as a convolution. \n2. Property of covariance i... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2021_awOrpNtsCX",
"iclr_2021_awOrpNtsCX",
"SYGZtgPuK8g",
"iclr_2021_awOrpNtsCX",
"wFnqYpQ2NrH",
"-W01Hp-lWBp",
"jibC4ZMaRbR",
"pwvjnc_fSYK",
"iclr_2021_awOrpNtsCX",
"iclr_2021_awOrpNtsCX",
"iclr_2021_awOrpNtsCX"
] |
iclr_2021_OBI5QuStBz3 | Improved Communication Lower Bounds for Distributed Optimisation | Motivated by the interest in communication-efficient methods for distributed machine learning, we consider the communication complexity of minimising a sum of d-dimensional functions ∑_{i=1}^{N} f_i(x), where each function f_i is held by one of the N different machines. Such tasks arise naturally in large-scale optimisation, w... | withdrawn-rejected-submissions | This work presents an improved lower bound on the communication complexity of distributed optimization in some settings. While reviewers agree that the paper is addressing a challenging and important question, all reviewers questioned the significance of the contributions of this work. In particular, two reviewers felt... | train | [
"9v1WhLU4jp",
"T570s9BGv8y",
"CkRw73ZRgn5",
"LUU8fum4VWx",
"sbBvBzAYYYw",
"m9ldXoeUkfJ",
"w1vrzpTRsSn"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for thorough comments on the typos and technical details. We will incorporate the corrections and do another proof-reading pass. More detailed responses to select comments can be found below:\n\n> Page 6 (Definition of $D$, between Theorem 4 and Definition 5): Do you mean \"$D = Ω(...)$\" ins... | [
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"sbBvBzAYYYw",
"sbBvBzAYYYw",
"m9ldXoeUkfJ",
"w1vrzpTRsSn",
"iclr_2021_OBI5QuStBz3",
"iclr_2021_OBI5QuStBz3",
"iclr_2021_OBI5QuStBz3"
] |
iclr_2021_kPheYCFm0Od | Variational Multi-Task Learning | Multi-task learning aims to improve the overall performance of a set of tasks by leveraging their relatedness. When training data is limited using priors is pivotal, but currently this is done in ad-hoc ways. In this paper, we develop variational multi-task learning - VMTL, a general probabilistic inference framework f... | withdrawn-rejected-submissions | This paper presents a probabilistic model for multitask learning with representation learning. The basic idea is to share information across tasks by making the prior over the model parameters of one task conditioned on a convex combination of the variational posteriors of the other tasks.
While some of the reviewers ... | train | [
"2L7uorYRkl",
"E3dF6RsPa-v",
"jjNd0oCcDrO",
"_JuvGolpnQ",
"8xZCY9oQyTq",
"pqnvGLnNEZV",
"qjEFDHS0J_3",
"I1iy6O6gyWR",
"PQ4GuZq1eJF"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"[Summary]\n\nIn this paper, the authors propose to perform multi-task learning by casting as a variational inference problem. The key of the technique lies in how task relationships are modelled, through clever use of priors. In this paper, the priors for each task are conditioned on other tasks to enable knowledg... | [
8,
5,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
4,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_kPheYCFm0Od",
"iclr_2021_kPheYCFm0Od",
"iclr_2021_kPheYCFm0Od",
"E3dF6RsPa-v",
"2L7uorYRkl",
"2L7uorYRkl",
"jjNd0oCcDrO",
"PQ4GuZq1eJF",
"iclr_2021_kPheYCFm0Od"
] |
iclr_2021_gDHCPUvKRP | Sparse Linear Networks with a Fixed Butterfly Structure: Theory and Practice | A butterfly network consists of logarithmically many layers, each with a linear number of non-zero weights (pre-specified). The fast Johnson-Lindenstrauss transform (FJLT) can be represented as a butterfly network followed by a random projection to a subset of the coordinates. Moreover, a random matrix based on FJLT wi... | withdrawn-rejected-submissions | This paper shows that linear layers can be replaced by butterfly networks. Put simple, the paper follows the idea of sketching to design new architectures that can reduce the number of trainable parameters and also gives the theoretical and empirical analysis to validate this claim. In this regard, the paper would be ... | train | [
"-hP6wXmzUoX",
"b4kjyvzuT0e",
"2pXWN0DtXj",
"_ZgaPzZMOEM",
"g3Bu2BD1gVU",
"G0jrL1LgYvv",
"to_WU1S7UJp",
"BQRr-tMzMQl",
"2MXhCez_-Sb",
"u3JSxT0Oq8R"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the additional feedback. \n\nWith respect to Fig1a, we have added a line in the new version stating that the black vertical lines denote the error bars corresponding to standard deviation, and the values above the rectangles denote the average accuracy.\n\nWith respect to the first point,... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
5,
7,
5
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
"b4kjyvzuT0e",
"g3Bu2BD1gVU",
"iclr_2021_gDHCPUvKRP",
"u3JSxT0Oq8R",
"2pXWN0DtXj",
"BQRr-tMzMQl",
"2MXhCez_-Sb",
"iclr_2021_gDHCPUvKRP",
"iclr_2021_gDHCPUvKRP",
"iclr_2021_gDHCPUvKRP"
] |
iclr_2021_UFWnZn2v0bV | LAYER SPARSITY IN NEURAL NETWORKS | Sparsity has become popular in machine learning, because it can save computational resources, facilitate interpretations, and prevent overfitting. In this paper, we discuss sparsity in the framework of neural networks. In particular, we formulate a new notion of sparsity that concerns the networks’ layers and, therefor... | withdrawn-rejected-submissions | Reviewers agree that the idea of layer wise regularization is interesting and is in line with many efforts in the optimization realm to specialize in the training procedure and the learning rate to each layer. Given the depth of some state of the art neural networks, efficiency is at stake and the idea brought up in t... | train | [
"Ee3N8a-A63K",
"zsaL5iKfYdN",
"hThwWVqqYWo",
"ZaR_Vo91iTv",
"vPcEUUCNJ6",
"6Qyi9Ziwk4t",
"zPXIE-mXOWK",
"4FG9-eZxxkV",
"ZTMPByIu3Gp"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new notion of layer sparsity for neural networks that aims at simplifying network architectures and reducing the number of parameters. A type of regularizers has been introduced to encourage this specific structure.\n\nOverall the paper is well written and the idea is interesting. It pushes t... | [
5,
5,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2021_UFWnZn2v0bV",
"iclr_2021_UFWnZn2v0bV",
"iclr_2021_UFWnZn2v0bV",
"Ee3N8a-A63K",
"ZTMPByIu3Gp",
"4FG9-eZxxkV",
"zsaL5iKfYdN",
"iclr_2021_UFWnZn2v0bV",
"iclr_2021_UFWnZn2v0bV"
] |
iclr_2021_bWqodw-mFi1 | Explicit homography estimation improves contrastive self-supervised learning | The typical contrastive self-supervised algorithm uses a similarity measure in latent space as the supervision signal by contrasting positive and negative images directly or indirectly. Although the utility of self-supervised algorithms has improved recently, there are still bottlenecks hindering their widespread use, ... | withdrawn-rejected-submissions | All four reviewers raised concerns on the limited technical novelty and insufficient experiments. They unanimously recommended a rejection. I carefully read the authors' rebuttal but did not find strong reasons to go against the reviewers' recommendations. The reviewers made excellent points to further improve the pape... | train | [
"pjGUiMqNmgw",
"lWjdMe-p2PY",
"24BS139ICaU",
"FzGlg6eOgxk",
"3u5d97sfoXh",
"CbKVRaT0EDe",
"I7PvLlneog2",
"kpoQNoJB7cC",
"XXIS0uhfp92",
"epxKWdFZyzb",
"U6Oj9TQQNkb",
"i-22EnCSm3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"**1. Summary:**\n\nThe authors propose a module that regresses the parameters of an affine transformation or homography as an additional objective in the contrastive self-supervised learning framework. The authors argue that the geometric information encoded in the proposed module can supplement the signal provide... | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_bWqodw-mFi1",
"iclr_2021_bWqodw-mFi1",
"FzGlg6eOgxk",
"kpoQNoJB7cC",
"pjGUiMqNmgw",
"lWjdMe-p2PY",
"lWjdMe-p2PY",
"lWjdMe-p2PY",
"U6Oj9TQQNkb",
"i-22EnCSm3",
"iclr_2021_bWqodw-mFi1",
"iclr_2021_bWqodw-mFi1"
] |
iclr_2021_LT0KSFnQDWF | Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting | While Graph Neural Networks (GNNs) have achieved remarkable results in a variety of applications, recent studies exposed important shortcomings in their ability to capture the structure of the underlying graph. It has been shown that the expressive power of standard GNNs is bounded by the Weisfeiler-Lehman (WL) graph i... | withdrawn-rejected-submissions | We thank the authors for their detailed responses and the revised version, which addresses several of the questions raised by the reviewers.
The paper is correct and clearly written. All reviewers agree that the idea to add structural features in the message passing of graph neural networks is sensible. While differen... | test | [
"aSz_hJF3SYo",
"TFtZY5xnTh_",
"lgBGKd1X9RV",
"k0aQxUqm_o5",
"cZ9YvVqQ9bU",
"GyRfBd6_Lo",
"fZXUzjBC70Y",
"GoD29WtaO0Y",
"B_RlsEHMQIS",
"bpiQ8rnqWa",
"SBM4ZP_AZoV",
"uEp9z7a5qCr",
"ZeysRS8k6eO"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a natural extension of Message Passing Neural Net (MPNN) by incorporating structural features. These structural features are computed as the counts from different substructures (like small lines, stars or complete graphs) induced in the original graph. These counts are combined to obtain a new... | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2021_LT0KSFnQDWF",
"iclr_2021_LT0KSFnQDWF",
"k0aQxUqm_o5",
"iclr_2021_LT0KSFnQDWF",
"TFtZY5xnTh_",
"uEp9z7a5qCr",
"ZeysRS8k6eO",
"ZeysRS8k6eO",
"aSz_hJF3SYo",
"aSz_hJF3SYo",
"k0aQxUqm_o5",
"iclr_2021_LT0KSFnQDWF",
"iclr_2021_LT0KSFnQDWF"
] |
iclr_2021_ZaYZfu8pT_N | Non-iterative Parallel Text Generation via Glancing Transformer | Although non-autoregressive models with one-iteration generation achieve remarkable inference speed-up, they still fall behind their autoregressive counterparts in prediction accuracy. The non-autoregressive models with the best accuracy currently rely on multiple decoding iterations, which largely sacrifice the infere... | withdrawn-rejected-submissions | This work raised quite a few questions, and left the reviewers somewhat divided. The authors have done their best to answer these questions, conducting additional experiments where needed.
The close relation of this work to Mask-Predict (Ghazvininejad et al. 2019) was noted by several reviewers. Although the current v... | train | [
"NRuEyxn8p5",
"EvCks-wCMTS",
"s63qxCdmCyA",
"6tR3AMfh-af",
"Fkus6oZuEHi",
"rpagn3N4s9B",
"jxlhNj8z_c1",
"qzDERpvzvm",
"WewrV5bc_Bl",
"c2g8AxLn2Nq",
"_N8E3DFF4SK",
"feTdiFsKjcF",
"Fu91yWWeXmU"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a non-autoregressive neural machine translation model that does not require multiple iterations to achieve a good translation quality. The key difference to previous models is that during training it uses decoding to estimate the number of words to randomly sample (proportionally to the error) ... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"iclr_2021_ZaYZfu8pT_N",
"iclr_2021_ZaYZfu8pT_N",
"_N8E3DFF4SK",
"s63qxCdmCyA",
"rpagn3N4s9B",
"NRuEyxn8p5",
"WewrV5bc_Bl",
"Fu91yWWeXmU",
"feTdiFsKjcF",
"qzDERpvzvm",
"iclr_2021_ZaYZfu8pT_N",
"iclr_2021_ZaYZfu8pT_N",
"iclr_2021_ZaYZfu8pT_N"
] |
iclr_2021_Kr7CrZPPPo | Learning a Non-Redundant Collection of Classifiers | Supervised learning models constructed under the i.i.d. assumption have often been shown to exploit spurious or brittle predictive signals instead of more robust ones present in the training data. Inspired by Quality-Diversity algorithms, in this work we train a collection of classifiers to learn distinct solutions to ... | withdrawn-rejected-submissions | The paper tackles a major problem of supervised ML, that of the minimisation of the risk of a set of classifiers. This problem has received attention in numerous work over the past decades, much of which spans the formal aspects of the problem. The paper tackles the problem from a “diversity” standpoint. My main concer... | val | [
"LNY5tNEiPCO",
"ocz1pQ5wMN4",
"KNbP2TwHvUp",
"wSNDhiSwY8j",
"1JVlMYkoyxO",
"I6ld9DJkZ_-",
"NOHn46jjs3H",
"bEM85KekHy",
"Zi_tTAyEuaV",
"DLjglPu3EkU",
"p3SQUyK3vI"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have uploaded a revised version of the paper, making the following changes in light of reviewer feedback:\n* Results for several additional baselines based on alternate diversity regularization terms\n* Results for the unconditional version of our proposed estimator\n* Further analysis of results, especially wr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"iclr_2021_Kr7CrZPPPo",
"bEM85KekHy",
"bEM85KekHy",
"Zi_tTAyEuaV",
"DLjglPu3EkU",
"p3SQUyK3vI",
"p3SQUyK3vI",
"iclr_2021_Kr7CrZPPPo",
"iclr_2021_Kr7CrZPPPo",
"iclr_2021_Kr7CrZPPPo",
"iclr_2021_Kr7CrZPPPo"
] |
iclr_2021_pg9c6etTWXR | Learnable Uncertainty under Laplace Approximations | Laplace approximations are classic, computationally lightweight means to construct Bayesian neural networks (BNNs). As in other approximate BNNs, one cannot necessarily expect the induced predictive uncertainty to be calibrated. Here we develop a formalism to explicitly "train" the uncertainty in a decoupled way to the... | withdrawn-rejected-submissions | The paper proposes a simple modification to Laplace approximation to improve the quality of uncertainty estimates in neural networks.
The key idea is to add “uncertainty units” which do not affect the predictions but change the Hessian of the loss landscape, thereby improving the quality of uncertainty estimates. The... | train | [
"L4N4qpUBoAe",
"6WjZqHZQRnR",
"bOZR-qQoK2l",
"i12Xhg-_GJG",
"UkNiBxyx40",
"yh9ofRbgBvN",
"6i0-i4f2TG",
"_6vO55Y4MfA",
"NQRduqLYAd-",
"Tvds2wTQOf9",
"_dplKs3vFe5",
"Hcogrs6mjh3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"POST DISCUSSION UPDATE\n------------------------------------\nI like the proposed method and I will keep my score.\n\nEND OF UPDATE\n------------------------------------\nThe paper proposes a new method to learn uncertainty under Laplace approximations. The method relies on uncertainty units that do not change the... | [
7,
4,
4,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
2,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_pg9c6etTWXR",
"iclr_2021_pg9c6etTWXR",
"iclr_2021_pg9c6etTWXR",
"iclr_2021_pg9c6etTWXR",
"yh9ofRbgBvN",
"6i0-i4f2TG",
"_6vO55Y4MfA",
"Tvds2wTQOf9",
"bOZR-qQoK2l",
"6WjZqHZQRnR",
"i12Xhg-_GJG",
"L4N4qpUBoAe"
] |
iclr_2021_BnokSKnhC7F | Maximum Reward Formulation In Reinforcement Learning | Reinforcement learning (RL) algorithms typically deal with maximizing the expected cumulative return (discounted or undiscounted, finite or infinite horizon). However, several crucial applications in the real world, such as drug discovery, do not fit within this framework because an RL agent only needs to identify stat... | withdrawn-rejected-submissions | The paper considers an alternative to the standard MDP formulation, motived by the novo drug design problem. The formulation is meant to optimize a notion of expected maximum reward along the trajectory rather than the expected sum of rewards. The formulation is presented through a variation of the Bellman equation. ... | train | [
"XT_aWlOMtE8",
"52yofTegTAH",
"OzOptwDqKYg",
"qtRymOFbvQl",
"JyGL5MYPKZI",
"iDn7uWykTjH",
"LYQrVfNjffI",
"5X37q2ZsZU",
"2M07OQgj6YV",
"Xm2nc6KlCfT",
"Iz4aMf4hQ7i",
"mM-VodUSJ-q",
"1ZrcJMjECio",
"2vj3LQfVKE",
"GAe9kKt75M8",
"2HSaoSBznl2",
"kWf6Pobr5mx"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"##########################################################################\n\nSummary:\n\nThis paper proposes a modified bellman equation for reinforcement learning that optimizes the maximum expected single step reward along a trajectory, instead of the maximum cumulative reward. This formulation is applied to th... | [
5,
5,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
3,
3,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_BnokSKnhC7F",
"iclr_2021_BnokSKnhC7F",
"qtRymOFbvQl",
"1ZrcJMjECio",
"2M07OQgj6YV",
"5X37q2ZsZU",
"iclr_2021_BnokSKnhC7F",
"Iz4aMf4hQ7i",
"Xm2nc6KlCfT",
"kWf6Pobr5mx",
"LYQrVfNjffI",
"52yofTegTAH",
"2HSaoSBznl2",
"XT_aWlOMtE8",
"2HSaoSBznl2",
"iclr_2021_BnokSKnhC7F",
"iclr... |
iclr_2021_QHUUrieaqai | LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning | While designing inductive bias in neural architectures has been widely studied, we hypothesize that transformer networks are flexible enough to learn inductive bias from suitable generic tasks. Here, we replace architecture engineering by encoding inductive bias in the form of datasets. Inspired by Peirce's view that d... | withdrawn-rejected-submissions |
The authors propose a pretraining strategy learning inductive biases in transformers for deduction, induction, and abduction. Further, the claims and results seem to indicate that such pretraining is more successful in transformers which provide a more malleable architecture for learning inductive (structural) biases... | train | [
"L43tw6GK-XQ",
"saN2-Dzen1K",
"Efmid9veCRI",
"E4EOvdKn1C5",
"p3epjGcWsJ",
"bKS-FlGlsWZ",
"4RWsqqYF6Pc",
"fhHKfcCx5PZ",
"LKTew8Whah",
"aqaR02w_S2E",
"epkQj2-7qeL",
"RzwGW3-nApe",
"KNDQbIRBvFI",
"TEDOgrNqoEo",
"gDsOAmKLSzP",
"j_ycfE6Jj6z",
"rYQY08q-AHG",
"JSfNIp_7hv3",
"1UIYk_KdHqL... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
"We thank the reviewer for your efforts and inputs. In the following are our answers to specific questions:\n\n“Does LIME help LSTMs, LSTM+attention, GPT2?”\n\nFirstly, we would like to point out that GPT2 is a pre-trained language model that uses transformer architecture (same as ours). For LSTMs, we have performe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"1UIYk_KdHqL",
"E4EOvdKn1C5",
"E4EOvdKn1C5",
"p3epjGcWsJ",
"fhHKfcCx5PZ",
"KNDQbIRBvFI",
"j_ycfE6Jj6z",
"epkQj2-7qeL",
"gDsOAmKLSzP",
"iclr_2021_QHUUrieaqai",
"rYQY08q-AHG",
"iclr_2021_QHUUrieaqai",
"TEDOgrNqoEo",
"JSfNIp_7hv3",
"aqaR02w_S2E",
"L43tw6GK-XQ",
"iclr_2021_QHUUrieaqai",
... |
iclr_2021_RcjRb9pEQ-Q | Fine-grained Synthesis of Unrestricted Adversarial Examples | We propose a novel approach for generating unrestricted adversarial examples by manipulating fine-grained aspects of image generation. Unlike existing unrestricted attacks that typically hand-craft geometric transformations, we learn stylistic and stochastic modifications leveraging state-of-the-art generative models. ... | withdrawn-rejected-submissions | The paper received two borderline accept recommendations and one accept recommendation from three reviewers with low confidence and a reject recommendation from an expert reviewer.
Although all reviewers found that the paper addresses an important and challenging problem of semantically constraining adversarial attac... | train | [
"n_u56tCpeb",
"fWy-oy7jcNx",
"3OceFa1V72Z",
"9Wd_nDm8yfi",
"FXtvIFNdKT",
"GNlnNAa8_W-",
"XGAOuBw6Kf0",
"2zHSMvTZkyE",
"0GA_HFtHTYb"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments and feedback. We respond to the questions in the following:\n\n- In our attacks we iterate until we fool the classifier, so the fooling rate is 100% for both targeted and non-targeted attacks. In general attackers choose whether they want to only change the model’s original prediction (... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
5
] | [
"GNlnNAa8_W-",
"2zHSMvTZkyE",
"XGAOuBw6Kf0",
"0GA_HFtHTYb",
"iclr_2021_RcjRb9pEQ-Q",
"iclr_2021_RcjRb9pEQ-Q",
"iclr_2021_RcjRb9pEQ-Q",
"iclr_2021_RcjRb9pEQ-Q",
"iclr_2021_RcjRb9pEQ-Q"
] |
iclr_2021_ok4MWWSeOJ1 | Unifying Regularisation Methods for Continual Learning | Continual Learning addresses the challenge of learning a number of different distributions sequentially. The goal of maintaining knowledge of earlier distributions without re-accessing them starkly conflicts with standard SGD training for artificial neural networks. An influential method to tackle this are so-called re... | withdrawn-rejected-submissions | The paper proposes a unification of three popular baseline regularizers in continual learning. The unification is realized through a claim that they all regularize (surprisingly) related objectives.
The key strengths of the paper highlighted by the reviewers were:
1. The established connection is valuable and interest... | train | [
"CfXFr3JNcGz",
"WdBTNtD5vym",
"jyJ_UNtI1U",
"_Dkw0UN3unW",
"hWSibxExe8p",
"C9ebjIv3foX",
"j7wbHd2SqBy",
"aTJ4vuREa_O",
"hj7nkPerTk6",
"ECQLIeaNOnH",
"gL_T7iDn6lt",
"7zzTjjOJfbW",
"siyvn5DAW24",
"bu7gXjPERz1",
"F4lYV-T7964",
"848h8hEEGDx",
"_R9XDR5dOW",
"MvVL9yikRCX",
"xd4U_bCFCIT... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"... | [
"******************************************************************\n********************** POST DISCUSSION UPDATE **********************\n******************************************************************\nThank you to the authors for the discussion. Given that the relationship between AF and F has now been addres... | [
5,
6,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_ok4MWWSeOJ1",
"iclr_2021_ok4MWWSeOJ1",
"iclr_2021_ok4MWWSeOJ1",
"j7wbHd2SqBy",
"hj7nkPerTk6",
"aTJ4vuREa_O",
"gL_T7iDn6lt",
"848h8hEEGDx",
"8UtOQc3ANcR",
"7zzTjjOJfbW",
"siyvn5DAW24",
"bu7gXjPERz1",
"xd4U_bCFCIT",
"F4lYV-T7964",
"_R9XDR5dOW",
"iclr_2021_ok4MWWSeOJ1",
"jyJ_... |
iclr_2021_rx19UMFbC9u | Waste not, Want not: All-Alive Pruning for Extremely Sparse Networks | Network pruning has been widely adopted for reducing computational cost and memory consumption in low-resource devices. Recent studies show that saliency-based pruning can achieve high compression ratios (e.g., 80-90% of the parameters in original networks are removed) without sacrificing much accuracy loss. Neverthele... | withdrawn-rejected-submissions | This paper introduces All-Alive Pruning, an approach which checks for and removes the connections to and from units with zero gradient ("dead" units). The method is shown to improve performance of IMP at extreme (>128x) compression ratios on MNIST, CIFAR-10, and Tiny ImageNet. All reviewers felt that the problem the au... | test | [
"NWRMC0DyVz",
"cd_cVXtOoa",
"Yyf5Ll_aoBT",
"uPATHHOaK6a",
"8sjquNlXfQu",
"DeJ19Nouwwz",
"hywksm4uyD0",
"P3f42vGHdGe"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"== Summary ==\n\nThe submission deals with eliminating neurons in a network where either a) all the input connections xor b) all the output connections have been pruned. When this is the case, the unpruned a) output or b) input connections are unused and can also be pruned: and the freed parameter budget used for ... | [
5,
-1,
-1,
-1,
-1,
5,
7,
4
] | [
3,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2021_rx19UMFbC9u",
"NWRMC0DyVz",
"P3f42vGHdGe",
"hywksm4uyD0",
"DeJ19Nouwwz",
"iclr_2021_rx19UMFbC9u",
"iclr_2021_rx19UMFbC9u",
"iclr_2021_rx19UMFbC9u"
] |
iclr_2021_KG4igOosnw8 | Discriminative Representation Loss (DRL): A More Efficient Approach than Gradient Re-Projection in Continual Learning | The use of episodic memories in continual learning has been shown to be effective in terms of alleviating catastrophic forgetting. In recent studies, several gradient-based approaches have been developed to make more efficient use of compact episodic memories, which constrain the gradients resulting from new samples wi... | withdrawn-rejected-submissions | This paper explores the connection between diversity of gradients and discriminativeness of representations. Based on the observations, authors propose Discriminative Representation Loss (DRL).
This paper resulted in a lot of discussions and specifically, R5's detailed comments helped the authors improve their paper. ... | train | [
"NgrrI9wyZyE",
"SJyvg1MQqi-",
"u0oxnEKUO6F",
"4w7Z92fYPWh",
"AC7U7mAXdjI",
"AoJ4KqfafZj",
"dKQizO-65Lm",
"b0_2claP-oA",
"flC9ZaLigha",
"zlDSeWyFQcs",
"ljZum_Bhffx",
"2sT-e7RpXq6",
"7vFWhmwliDt",
"3XJeF3JrRm",
"6vC4V_w6RfC",
"D_yo2_POhy0",
"klTRIlAzkX_",
"7E9ip7vVXM7",
"X0g8GfSn2N... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_r... | [
"Thanks a lot for R5's further clarification! In our understanding, having a representative memory for the entire distribution seems an ideal case, please also refer to [1]. In continual learning, the forward transferring can only happen when a new task is able to reuse representations that are learned from the pr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"SJyvg1MQqi-",
"u0oxnEKUO6F",
"4w7Z92fYPWh",
"dKQizO-65Lm",
"zlDSeWyFQcs",
"2sT-e7RpXq6",
"b0_2claP-oA",
"7vFWhmwliDt",
"iclr_2021_KG4igOosnw8",
"7E9ip7vVXM7",
"iclr_2021_KG4igOosnw8",
"7vFWhmwliDt",
"klTRIlAzkX_",
"ljZum_Bhffx",
"ljZum_Bhffx",
"ljZum_Bhffx",
"ljZum_Bhffx",
"flC9Za... |
iclr_2021_pULTvw9X313 | MeshMVS: Multi-view Stereo Guided Mesh Reconstruction | Deep learning based 3D shape generation methods generally utilize latent features extracted from color images to encode the objects' semantics and guide the shape generation process. These color image semantics only implicitly encode 3D information, potentially limiting the accuracy of the generated shapes. In this pap... | withdrawn-rejected-submissions | This submission is an interesting case...
The method it presents appears to work quite well, achieving state-of-the-art quantitative reconstruction results (though qualitatively, the reconstructed surfaces are locally noisy).
The method is quite complex, which different reviewers saw as either a strength or a weaknes... | train | [
"Fq4Vwx0AvKN",
"V8xxnxbG75",
"5BT5B7V7-OI",
"TnLvM8Rcq0N",
"KsGE3v5Zex8",
"XQgmDB-zFba",
"tygzj8aP2wd"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have uploaded the revised manuscript as per the rebuttal discussions. Following are the changes:\n* Rearranged ablation study experiments to more clearly show which component contributes what amount to the final score\n* Results when using rendered depths only\n* Added more related works on classical references... | [
-1,
-1,
-1,
-1,
9,
6,
4
] | [
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2021_pULTvw9X313",
"KsGE3v5Zex8",
"XQgmDB-zFba",
"tygzj8aP2wd",
"iclr_2021_pULTvw9X313",
"iclr_2021_pULTvw9X313",
"iclr_2021_pULTvw9X313"
] |
iclr_2021_bQf4aGhfmFx | Effective Regularization Through Loss-Function Metalearning | Loss-function metalearning can be used to discover novel, customized loss functions for deep neural networks, resulting in improved performance, faster training, and improved data utilization. A likely explanation is that such functions discourage overfitting, leading to effective regularization. This paper theoretica... | withdrawn-rejected-submissions | The paper presents a method for meta-learning the loss function. The analysis mainly concerns the recently proposed TaylorGLO method on the (slightly less recent) Baikal loss. There was no consensus on this paper, but no reviewer was willing to fight for acceptance either. I found the paper not self-contained, with imp... | train | [
"nl8BUeuY_Zf",
"7XjGztR6OX0",
"NfSmPoSuNIz",
"Rltqh9qVyWC",
"rrXz8h4jSF_",
"1xM0RA1pjzx",
"ztULO9MfrX3",
"_z-uLpQ_3Io",
"Zp5vIogeBJn",
"CswzCRzGD1",
"aCSPPnp4I4",
"GPDHkroU7fh",
"D_uKnsrZcz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer 5,\n\nafter the clarifications and additions of the authors I think that the paper is well rounded and tells an interesting story. Did any of their clarifications change your mind?\n\nRegarding your concerns of the paper. I disagree with the statement that \"This paper claims that it theoretically an... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
3
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"D_uKnsrZcz",
"iclr_2021_bQf4aGhfmFx",
"Rltqh9qVyWC",
"rrXz8h4jSF_",
"_z-uLpQ_3Io",
"iclr_2021_bQf4aGhfmFx",
"aCSPPnp4I4",
"7XjGztR6OX0",
"GPDHkroU7fh",
"D_uKnsrZcz",
"iclr_2021_bQf4aGhfmFx",
"iclr_2021_bQf4aGhfmFx",
"iclr_2021_bQf4aGhfmFx"
] |
iclr_2021_K398CuAKVKB | Removing Dimensional Restrictions on Complex/Hyper-complex Convolutions | It has been shown that the core reasons that complex and hypercomplex valued neural networks offer improvements over their real-valued counterparts is the fact that aspects of their algebra forces treating multi-dimensional data as a single entity (forced local relationship encoding) with an added benefit of reducing p... | withdrawn-rejected-submissions | While the updated version of this manuscript did motivate one reviewer to give the paper a marginal accept rating, all other reviewers really felt that the paper could use more work along the lines of their suggestions. The aggregate view of the reviewers is just not positive enough at this time to warrant an accept re... | train | [
"WN6PHl3swaH",
"hWL44137ZYc",
"bxCbR4ZBtPi",
"CuvwKpDMJi",
"VWIiJRh6h5z",
"bMV5Sue8ABH",
"t6jCoRPtwkE",
"oKTxWkDew_8",
"IglDyOh-F0s"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary: The purpose of this paper is to analyze the reasons why complex/hyper-complex neural networks yield performance improvements (especially pertaining to generalization). The authors argue that the underlying algebraic structure of complex/hyper-complex coordinate systems enable greater weight-sharing compar... | [
4,
6,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
2,
4,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2021_K398CuAKVKB",
"iclr_2021_K398CuAKVKB",
"VWIiJRh6h5z",
"oKTxWkDew_8",
"IglDyOh-F0s",
"WN6PHl3swaH",
"hWL44137ZYc",
"iclr_2021_K398CuAKVKB",
"iclr_2021_K398CuAKVKB"
] |
iclr_2021_trPMYEn1FCX | GENERATIVE MODEL-ENHANCED HUMAN MOTION PREDICTION | The task of predicting human motion is complicated by the natural heterogeneity and compositionality of actions, necessitating robustness to distributional shifts as far as out-of-distribution (OoD). Here we formulate a new OoD benchmark based on the Human3.6M and CMU motion capture datasets, and introduce a hy- brid f... | withdrawn-rejected-submissions | This paper proposes a method for out-of-distribution modeling and evaluation in the human motion prediction task. Paper was reviewed by four expert reviewers who identified the following pros and cons.
> Pros:
- New benchmark for testing out of distribution performance [R1]
- Compelling performance with respect to th... | train | [
"sK5JWeWRW_C",
"UaNKPz8sASK",
"humwfu3bgwS",
"XDz_nfKRCMr",
"obHLqSSHhs5",
"6KUeTkR9rOk",
"k6EK7t1UUkU",
"blQNVIlMJGk"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThis paper proposes a method and benchmark for out-of-distribution modeling and evaluation of human motion. They evaluate against state-of-the-art human motion methods, and show favorable performance against them.\n\n\nPros\n+ Generative model formulation for human motion prediction\n+ Benchmark for test... | [
5,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
4,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"iclr_2021_trPMYEn1FCX",
"6KUeTkR9rOk",
"k6EK7t1UUkU",
"blQNVIlMJGk",
"sK5JWeWRW_C",
"iclr_2021_trPMYEn1FCX",
"iclr_2021_trPMYEn1FCX",
"iclr_2021_trPMYEn1FCX"
] |
iclr_2021_QZaeLBDU03 | Learning Movement Strategies for Moving Target Defense | The field of cybersecurity has mostly been a cat-and-mouse game with the discovery of new attacks leading the way. To take away an attacker's advantage of reconnaissance, researchers have proposed proactive defense methods such as Moving Target Defense (MTD). To find good movement strategies, researchers have modeled M... | withdrawn-rejected-submissions | The paper proposes a new game-theoretic model, Bayesian Stackelberg Markov Game (BSMG), for designing defense strategies while accounting for the defender's uncertainty over attackers' types. The paper also proposes a learning approach, Bayesian Strong Stackelberg Q-learning (BSS-Q), to learn the optimal policy for BSM... | val | [
"9zQhZsTOaf",
"OIqDBeexvZ-",
"frKET0bySI2",
"cBQujkWIlQy",
"AHuLXExM5s",
"PNl6AIWTEql",
"idzaFjcnyh6",
"VaGASwqOjK",
"3sS_0VkruCr",
"lJzqJIE9I15",
"mB69N-XeDm6"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"#########################\nPAPER SUMMARY\n#########################\n\nThis paper proposes the game-theoretic model of Bayesian Stackelberg Markov Games (BSMGs), a generalization of Markov games, as a formalism for studying Moving Target Defense (MTD) systems, a type of defender-attacker game with applications to ... | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"iclr_2021_QZaeLBDU03",
"iclr_2021_QZaeLBDU03",
"iclr_2021_QZaeLBDU03",
"AHuLXExM5s",
"idzaFjcnyh6",
"lJzqJIE9I15",
"9zQhZsTOaf",
"OIqDBeexvZ-",
"mB69N-XeDm6",
"iclr_2021_QZaeLBDU03",
"iclr_2021_QZaeLBDU03"
] |
iclr_2021_4Nt1F3qf9Gn | CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients | The healthcare industry generates troves of unlabelled physiological data. This data can be exploited via contrastive learning, a self-supervised pre-training method that encourages representations of instances to be similar to one another. We propose a family of contrastive learning methods, CLOCS, that encourages rep... | withdrawn-rejected-submissions | This paper proposes a representation learning approach from cardiac signals, which adopts contrastive learning to incorporates knowledge on patient-specificity. This problem is highly motivating because of potential application to medicine and healthcare and large amounts of accumulating unlabeled physiological data. T... | train | [
"pDOXJJnP6Mf",
"Lp3p_qUXl6Y",
"PWegT2YPTP",
"lF0DNS8Icg",
"AKBLk4XVHNK",
"A8y5c7vohM",
"km-jwTBw8WO",
"8kd928a8UUN",
"nnVXxVNw08Y"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all the reviewers for the time and effort they have taken to read our manuscript and provide us with valuable feedback. \n\n**FINAL MANUSCRIPT UPLOADED**\nWe have now finalized our manuscript and have uploaded a modified version of the main manuscript and the supplementary material. Here are the main chan... | [
-1,
5,
-1,
-1,
-1,
-1,
4,
4,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"iclr_2021_4Nt1F3qf9Gn",
"iclr_2021_4Nt1F3qf9Gn",
"km-jwTBw8WO",
"8kd928a8UUN",
"nnVXxVNw08Y",
"Lp3p_qUXl6Y",
"iclr_2021_4Nt1F3qf9Gn",
"iclr_2021_4Nt1F3qf9Gn",
"iclr_2021_4Nt1F3qf9Gn"
] |
iclr_2021_rvosiWfMoMR | Automatic Music Production Using Generative Adversarial Networks | When talking about computer-based music generation, two are the main threads of research: the construction of autonomous music-making systems, and the design of computer-based environments to assist musicians. However, even though creating accompaniments for melodies is an essential part of every producer's and songwri... | withdrawn-rejected-submissions | All Reviewers point out that the paper, although having some strong points, does not meet the bar for a highly-selective machine learning conference like ICLR. Hence, my recommendation is to REJECT the paper. As a brief summary, I highlight below some pros and cons that arose during the review and meta-review processes... | train | [
"-EcJaASvot",
"MwuYQ0okIIs",
"sGqet7hSufC",
"pwtRn2VKyPb",
"tBjdNZSl8n",
"OUqoXfqliy"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"- DEMUCS is by no means perfect, and we should expect some bleed-through of the target signal (eg drums) into the separated signal (eg bass). If this happens, the task becomes significantly easier than if the system was presented with clean stems. $-> \\textbf{Our answer}$ This has been clarified with a more in-de... | [
-1,
-1,
-1,
5,
4,
2
] | [
-1,
-1,
-1,
4,
3,
5
] | [
"pwtRn2VKyPb",
"tBjdNZSl8n",
"OUqoXfqliy",
"iclr_2021_rvosiWfMoMR",
"iclr_2021_rvosiWfMoMR",
"iclr_2021_rvosiWfMoMR"
] |
iclr_2021_4SiMia0kjba | Causal Probabilistic Spatio-temporal Fusion Transformers in Two-sided Ride-Hailing Markets | Achieving accurate spatio-temporal predictions in large-scale systems is extremely valuable in many real-world applications, such as weather forecasts, retail forecasting, and urban traffic forecasting. So far, most existing methods for multi-horizon, multi-task and multi-target predictions select important predicting ... | withdrawn-rejected-submissions | The paper presents a spatial-temporal prediction framework with causal effects of predictors for better interpretability. The idea is interesting and the touch on modeling causal relations could be useful in practical applications. The paper receives mixed ratings and therefore there has been extensive discussion. We a... | train | [
"E6gZWHZNzad",
"gBbjE3A22EE",
"vXh4QTzMv9-",
"_E7aYBncwb",
"ufB9NSy4aPG",
"VGCzN4r15aw",
"NHOzh4FH32F",
"nCqPm9Qsi7",
"eOqS8ijd4n2",
"cnjsn5gtG-k",
"fXJWCXuNYC7",
"GXk-mmXPME7",
"M8MWOwEFPWS",
"VlTvPc0dGnc",
"sxbDM_LiZ2q"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose an interpretable spatio-temporal fusion transformer for predicting supply and demand in ride-haling platforms. More generally, the authors claim that their approach extends to other two-sided markets such as electric grids, retail etc by showing empirical results of their approach using data f... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"iclr_2021_4SiMia0kjba",
"iclr_2021_4SiMia0kjba",
"VGCzN4r15aw",
"VlTvPc0dGnc",
"NHOzh4FH32F",
"cnjsn5gtG-k",
"fXJWCXuNYC7",
"iclr_2021_4SiMia0kjba",
"nCqPm9Qsi7",
"nCqPm9Qsi7",
"nCqPm9Qsi7",
"E6gZWHZNzad",
"sxbDM_LiZ2q",
"iclr_2021_4SiMia0kjba",
"iclr_2021_4SiMia0kjba"
] |
iclr_2021_PghuCwnjF6y | TaskSet: A Dataset of Optimization Tasks | We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume preserving flows on a va... | withdrawn-rejected-submissions | The contributions of this paper are twofold: 1) datasets of tasks are provided, and 2) based on the datasets and hyperparameter lists on the datasets, a transfer learning approach for hyperparameter optimization (HPO) is proposed. Many reviewers positively evaluated the idea and approach discussed in this paper. Howeve... | val | [
"7EZo95SUdGa",
"HxH3Vsg-zE",
"QzOBhcv7OC8",
"SyNJGCA64L0",
"WxJE4jpsP3n",
"f_lYl-vwbB",
"nt4G4HrUQ2Q",
"fvXjgBAo_L6"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your thoughtful review.\n\n\\> Justification for the selection of tasks:\n\nMuch like any benchmark or dataset in machine learning research, we have to agree as a community on the value of a particular benchmark in driving forward progress on problems we care about. For example, image classification ... | [
-1,
-1,
-1,
-1,
7,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
4,
2,
4,
4
] | [
"nt4G4HrUQ2Q",
"WxJE4jpsP3n",
"f_lYl-vwbB",
"fvXjgBAo_L6",
"iclr_2021_PghuCwnjF6y",
"iclr_2021_PghuCwnjF6y",
"iclr_2021_PghuCwnjF6y",
"iclr_2021_PghuCwnjF6y"
] |
iclr_2021_uxYjVEXx48i | An Examination of Preference-based Reinforcement Learning for Treatment Recommendation | Treatment recommendation is a complex multi-faceted problem with many conflicting objectives, e.g., optimizing the survival rate (or expected lifetime), mitigating negative impacts, reducing financial expenses and time costs, avoiding over-treatment, etc. While this complicates the hand-engineering of a reward function... | withdrawn-rejected-submissions | Reviewers agree that this work is promising. The paper is well-grounded in the literature and different aspects of the considered methods are investigated through a variety of experiments. Unfortunately, this paper does not provide sufficient details to allow the reader to understand what has been done nor how to adequ... | train | [
"RWI9jII764",
"pjow_Cbhvfj",
"mWEJum55EQp",
"youwIir2Lea",
"GwS5RMTKiGL",
"atOGTsdDzOf",
"f9NugFT3Oup",
"Ni49i3yV8Sh",
"khmanJufZX_",
"SHytQImEmf5",
"Xqw-jbJl9cX",
"oT0-1khP2xL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"#### **Summary**\nThis paper develops a preference-based framework for Reinforcement Learning in Healthcare, focusing on treatment recommendation. The paper highlights several factors where policies trained with preferences may be better suited for use in healthcare applications and extensively evaluates these fac... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_uxYjVEXx48i",
"Ni49i3yV8Sh",
"f9NugFT3Oup",
"RWI9jII764",
"RWI9jII764",
"RWI9jII764",
"oT0-1khP2xL",
"oT0-1khP2xL",
"Xqw-jbJl9cX",
"Xqw-jbJl9cX",
"iclr_2021_uxYjVEXx48i",
"iclr_2021_uxYjVEXx48i"
] |
iclr_2021_TqQ0oOzJlai | How Important is Importance Sampling for Deep Budgeted Training? | Long iterative training processes for Deep Neural Networks (DNNs) are commonly required to achieve state-of-the-art performance in many computer vision tasks. Core-set selection and importance sampling approaches might play a key role in budgeted training regimes, i.e.~when limiting the number of training iterations. T... | withdrawn-rejected-submissions | This work investigates how importance sampling strategies can improve training with budgeted constraints., with a focus on the benefits from variety provided by data-augmentation samples.
Initial clarification issues raised by the reviewers were taken into account such as a new title, clarification of some explanation... | train | [
"x4R6oW7EsDm",
"diVA5vwFokZ",
"vGJWfHAhpjp",
"GFPiKSxkW5v",
"tnUhPFQLZu",
"WFee637hkgu",
"TWmf8pzL9w2",
"_wKNTj9KieX",
"80UySypiAjR",
"GDizkSGh1zw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I would like to thank the authors for their additional explanations.\nAs I still feel some insights and connection missing between (the very valid) points raised in the paper I'll stick to my previous score.\n\nSome remaining points include e.g.: \nRegarding data augmentation the amount of total steps performed i... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"diVA5vwFokZ",
"_wKNTj9KieX",
"iclr_2021_TqQ0oOzJlai",
"TWmf8pzL9w2",
"80UySypiAjR",
"GDizkSGh1zw",
"iclr_2021_TqQ0oOzJlai",
"iclr_2021_TqQ0oOzJlai",
"iclr_2021_TqQ0oOzJlai",
"iclr_2021_TqQ0oOzJlai"
] |
iclr_2021_8wa7HrUsElL | D3C: Reducing the Price of Anarchy in Multi-Agent Learning | Even in simple multi-agent systems, fixed incentives can lead to outcomes that are poor for the group and each individual agent. We propose a method, D3C, for online adjustment of agent incentives that reduces the loss incurred at a Nash equilibrium. Agents adjust their incentives by learning to mix their incentive wit... | withdrawn-rejected-submissions | This paper proposes a method of decentralized mechanism design to reduce the price of anarchy. Based on the detailed responses of the authors, all reviewers were satisfied by the technical contribution after the rebuttal period.
There was, however, a heavily engaged and lengthy discussion between most reviewers regar... | train | [
"EbZqAXVsDTN",
"o4d9oGl3tb4",
"pgREdlfY7ub",
"9iZZL3vBA9y",
"TJFZTnRen5U",
"39gJnMiCWi",
"4gSV5FK-eB9",
"NGn8yqlcm9t",
"EnovW4pF9Ju",
"-VM0JswXzi-",
"i0gnl7khcAs",
"T9ChW_VUW-_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper details a method by which it attempts a method to change the utility function for agents so they incorporate the utilities of other agents, resulting in more cooperating agents, so that the price of anarchy is minimized. It examines such trained agents in several problem settings, seeing them performing ... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7
] | [
2,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_8wa7HrUsElL",
"iclr_2021_8wa7HrUsElL",
"39gJnMiCWi",
"EbZqAXVsDTN",
"iclr_2021_8wa7HrUsElL",
"NGn8yqlcm9t",
"i0gnl7khcAs",
"9iZZL3vBA9y",
"o4d9oGl3tb4",
"T9ChW_VUW-_",
"iclr_2021_8wa7HrUsElL",
"iclr_2021_8wa7HrUsElL"
] |
iclr_2021_TlS3LBoDj3Z | QTRAN++: Improved Value Transformation for Cooperative Multi-Agent Reinforcement Learning | QTRAN is a multi-agent reinforcement learning (MARL) algorithm capable of learning the largest class of joint-action value functions up to date. However, despite its strong theoretical guarantee, it has shown poor empirical performance in complex environments, such as Starcraft Multi-Agent Challenge (SMAC). In this pap... | withdrawn-rejected-submissions | This paper proposes practical improvements to theoretically well founded QTRAN, which is a state-of-the-art technique of cooperative multi-agent reinforcement learning. The improvements include new designs of loss function and action-value estimator, which might be widely applicable beyond QTRAN. However, it is not o... | test | [
"l1gcpZMfCwK",
"N3N1ASRnPr6",
"oJw_iowQub",
"ouCmkmZw5oU",
"S_3bywsM8V",
"CDdepvvA7H",
"V3XuOFKOV1E",
"KqX63asf529",
"JTlz-Dee61",
"4P-0CrzELbf",
"VI03vpa08L",
"rtOdWlSSmT",
"AI_A8kgknFy"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"### Summary and claims\n\nThis work proposes a MARL (multi-agent reinforcement learning) algorithm.\nIn the MARL setting, multiple agents have to make choices based on independent information to maximize a common objective. An existing algorithm in this space is QTRAN.\nThe authors propose several modifications to... | [
6,
6,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
2,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_TlS3LBoDj3Z",
"iclr_2021_TlS3LBoDj3Z",
"iclr_2021_TlS3LBoDj3Z",
"iclr_2021_TlS3LBoDj3Z",
"CDdepvvA7H",
"V3XuOFKOV1E",
"oJw_iowQub",
"oJw_iowQub",
"l1gcpZMfCwK",
"l1gcpZMfCwK",
"AI_A8kgknFy",
"N3N1ASRnPr6",
"iclr_2021_TlS3LBoDj3Z"
] |
iclr_2021_qf6Nmm-_6Z | VECoDeR - Variational Embeddings for Community Detection and Node Representation | In this paper, we study how to simultaneously learn two highly correlated tasks of graph analysis, i.e., community detection and node representation learning. We propose an efficient generative model called VECoDeR for jointly learning Variational Embeddings for COmmunity DEtection and node Representation. VECoDeR assu... | withdrawn-rejected-submissions | The paper proposes a new method to learn representation and community structure of a network jointly. The reviewers agree that the paper contains some interesting ideas but they raise also some important concerns. For example:
- even after considering the authors' rebuttal, the paper seems not too novel. In particular... | val | [
"CmDoD71gNBL",
"wkUB0SBdtbH",
"DBO46CzAc6",
"Fhk2ztEx4uG",
"yZGNhiFFKpe",
"j_R-rd9AYCs",
"fYeglMiCiQz",
"5BDsYb5613",
"4kimRyN-XOO",
"pG9e9ykt6Cm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a generative model for community detection and node representation in a unified framework. The underlying assumption of the work is that connected nodes in a graph should have similar node embeddings and similar community assignments. Experimental evaluations show improvement both on node class... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"iclr_2021_qf6Nmm-_6Z",
"DBO46CzAc6",
"CmDoD71gNBL",
"fYeglMiCiQz",
"4kimRyN-XOO",
"pG9e9ykt6Cm",
"5BDsYb5613",
"iclr_2021_qf6Nmm-_6Z",
"iclr_2021_qf6Nmm-_6Z",
"iclr_2021_qf6Nmm-_6Z"
] |
iclr_2021_5FRJWsiLRmA | Reservoir Transformers | We demonstrate that transformers obtain impressive performance even when some of the layers are randomly initialized and never updated. Inspired by old and well-established ideas in machine learning, we explore a variety of non-linear reservoir layers interspersed with regular transformer layers, and show improvements ... | withdrawn-rejected-submissions | The reviewers were split between accept (7) and borderline reject (two 5's). All three reviewers acknowledged that the proposed approach is simple and intuitive (but this paper follows, for the most part, the concept of reservoir operation and apply it to transformers). The main criticisms were insufficient experiments... | train | [
"meerRhluas6",
"7Ic7sYcpjmC",
"js7i-QUGGlR",
"DRWr9xHmnI",
"D4mY1_dg7r4",
"yGGLsa2OW8D"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"### Summary\n\nThe paper studies how transformers train when some layers are kept fixed at the randomly initialized parameters. The authors observe that transformers can be effectively trained with a significant percentage of their layers \"frozen\". It is also argued that as a result transformers can be trained m... | [
5,
7,
5,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1
] | [
"iclr_2021_5FRJWsiLRmA",
"iclr_2021_5FRJWsiLRmA",
"iclr_2021_5FRJWsiLRmA",
"meerRhluas6",
"7Ic7sYcpjmC",
"js7i-QUGGlR"
] |
iclr_2021_tv8n52XbO4p | Learning to Generate Noise for Multi-Attack Robustness | Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations. However, the majority of existing defense methods are tailored to defend against a single category of adversarial perturbation (e.g. ℓ∞-attack). In safety-critical... | withdrawn-rejected-submissions | This is a borderline case. The paper seems solid although some of the numbers are likely incorrect because in some results tables in the appendix the error taken over all attacks is higher than for the best individual attack (which should never happen).
The main contribution of this paper is to augment a standard adve... | train | [
"uwcwhu4VVoj",
"OGfsUYi5LXR",
"Rr3F8mVTL2g",
"PRvYk0zUK9n",
"WY-Ik7ShZzf",
"G8PHbp1CjyE",
"qEUHxU_sDs",
"7LIKLOhO1C5",
"-TDmiQMmM_",
"NIwKWutlIw",
"3IkJRTYfhJC",
"bOaOgkJeC3B",
"A9_soy56AEs",
"ttLeaFPXyz-",
"kQvZBV9ak9c",
"iEPkMEVZy1q",
"pY59RlzBgMn",
"GBZarAaRqPC",
"6l4UvrmqojD"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
... | [
"Summary\n=======\nThe authors propose a number of techniques to learn models which are adversarially robust to multiple perturbations. These involve a noise generator, a loss to enforce consistency, as well as a stochastic variant of adversarial training. With these changes, they are able to produce improvements t... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
1
] | [
"iclr_2021_tv8n52XbO4p",
"G8PHbp1CjyE",
"uwcwhu4VVoj",
"bOaOgkJeC3B",
"6l4UvrmqojD",
"7LIKLOhO1C5",
"-TDmiQMmM_",
"3IkJRTYfhJC",
"NIwKWutlIw",
"ttLeaFPXyz-",
"ttLeaFPXyz-",
"gDOb_AIkk5",
"iclr_2021_tv8n52XbO4p",
"JN8SQP5aVg",
"pY59RlzBgMn",
"iclr_2021_tv8n52XbO4p",
"GBZarAaRqPC",
"... |
iclr_2021_bGPNpnZYr1 | Least Probable Disagreement Region for Active Learning | Active learning strategy to query unlabeled samples nearer the estimated decision boundary at each step has been known to be effective when the distance from the sample data to the decision boundary can be explicitly evaluated; however, in numerous cases in machine learning, especially when it involves deep learning, c... | withdrawn-rejected-submissions | While the results are promising, several concerns were raised in the reviews, leading to the reject recommendation at this time. There is an agreement among all reviewers that the paper would benefit from a revision.
Most reviewers felt that the paper lacks a rigorous and compelling theoretical justification for the ... | train | [
"r6Adm82tg9",
"UTH-gIytHJ",
"4LVTgfRLanK",
"x-RkHJyTHsJ",
"DCdigs0epY",
"C3HrZIFZgJv",
"QntYGRAhFf",
"Q0R-9hpOYGP",
"P5wG4b4XFev"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper defines a new \"distance to the decision boundary\" and empirically evaluates the corresponding active learning algorithm.\n\nUnfortunately, the motivation for this new distance to the decision boundary is lacking. It seems that there are a variety of possible quantities available and why this new one i... | [
4,
-1,
-1,
-1,
-1,
-1,
5,
7,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"iclr_2021_bGPNpnZYr1",
"QntYGRAhFf",
"Q0R-9hpOYGP",
"P5wG4b4XFev",
"r6Adm82tg9",
"UTH-gIytHJ",
"iclr_2021_bGPNpnZYr1",
"iclr_2021_bGPNpnZYr1",
"iclr_2021_bGPNpnZYr1"
] |
iclr_2021_1OP1kReyL56 | Model Selection for Cross-Lingual Transfer using a Learned Scoring Function | Transformers that are pre-trained on multilingual text corpora, such as, mBERT and XLM-RoBERTa, have achieved impressive cross-lingual transfer learning results. In the zero-shot cross-lingual transfer setting, only English training data is assumed, and the fine-tuned model is evaluated on another target language. No... | withdrawn-rejected-submissions | While the paper has merits, the experiments are lacking in important respects: I agree with Reviewer 1 that it is a serious problem that the approach is not evaluated on truly low-resource languages - since a significant pivot-to-target language bias is to be expected (as also suggested by Reviewer 2). I also agree wit... | train | [
"ennhxFJGFwm",
"3-QHRTV2DId",
"FNIKA7Khstf",
"ZOibW0Wk5Ex",
"-f83KkvUUQ1",
"GsmtLr19-XR",
"Fl3Z4dWrsTy",
"TeInpc2CW6o",
"l6I9tAxnoKi"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"## Summary\nResearch Problem: Zero-shot cross-lingual transfer, with model selection using source language dev set, has high variance with different hyperparameters or even different random seeds.\n\nThis paper proposes learning a model selection function using pivot languages dev set. The model selection function... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_1OP1kReyL56",
"iclr_2021_1OP1kReyL56",
"l6I9tAxnoKi",
"3-QHRTV2DId",
"ennhxFJGFwm",
"TeInpc2CW6o",
"iclr_2021_1OP1kReyL56",
"iclr_2021_1OP1kReyL56",
"iclr_2021_1OP1kReyL56"
] |
iclr_2021_refmbBH_ysO | SpreadsheetCoder: Formula Prediction from Semi-structured Context | Spreadsheet formula prediction has been an important program synthesis problem with many real-world applications. Previous works typically utilize input-output examples as the specification for spreadsheet formula synthesis, where each input-output pair simulates a separate row in the spreadsheet. However, such a formu... | withdrawn-rejected-submissions | This paper clearly has great ideas and reviewers appreciated that. However, the lack of experiments that can be validated by the community (only 1 experiment on the proprietary dataset) is an issue. We don't know if the reported accuracy is a respectable one (in the public domain). Having a proprietary dataset is a p... | train | [
"xuXopg9M_a",
"14-eofWkiA",
"HineHF7hYGM",
"OEva2ZLOxQV",
"pTbMmCR56-",
"e7IKkQRRdjO",
"kQNWTvhDCRb",
"E05dI3CqacF",
"txJeQj0oFG",
"ERTfya3WR7s"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for clarifying the questions and incorporating the suggestions.\n\nNice to know that using the user-provided ranges improves the accuracy. Also the inference time is low enough for the approach to be used in real applications. The accuracy is good enough and will only go further up if more sophisticated... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"e7IKkQRRdjO",
"E05dI3CqacF",
"OEva2ZLOxQV",
"14-eofWkiA",
"iclr_2021_refmbBH_ysO",
"txJeQj0oFG",
"ERTfya3WR7s",
"iclr_2021_refmbBH_ysO",
"iclr_2021_refmbBH_ysO",
"iclr_2021_refmbBH_ysO"
] |
iclr_2021_t4EWDRLHwcZ | Graph Learning via Spectral Densification | Graph learning plays important role in many data mining and machine learning tasks, such as manifold learning, data representation and analysis, dimensionality reduction, data clustering, and visualization, etc. For the first time, we present a highly-scalable spectral graph densification approach (GRASPEL) for graph l... | withdrawn-rejected-submissions | The reviewers generally like the paper, in particular the scalability of the proposed approach. The author response and revised version clarified some questions of the reviewers, however, it didn't fully mitigate their concerns. | val | [
"KLzb5uiNYQf",
"YKyB6Xkffbf",
"QTPuBCK0QVH",
"dV5cuVxttBa",
"_OGa8yPbDhA",
"K5MERY8lbx-",
"v05bak9zFFm",
"JwwRAqRnun",
"pfC-oN_uKgt",
"9G0Hz1g61a7",
"EsQP-m1_o6I"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"Update: In the revised version, the authors have addressed some of my technical concerns, therefore I slightly increased my score.\n\n\nSummary:\n\nThe paper proposes a method to learn a sparse graph from data by optimizing an objective similar to the graphical Lasso over the set of valid Laplacian matrices. The m... | [
5,
6,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_t4EWDRLHwcZ",
"iclr_2021_t4EWDRLHwcZ",
"iclr_2021_t4EWDRLHwcZ",
"iclr_2021_t4EWDRLHwcZ",
"K5MERY8lbx-",
"EsQP-m1_o6I",
"iclr_2021_t4EWDRLHwcZ",
"dV5cuVxttBa",
"YKyB6Xkffbf",
"KLzb5uiNYQf",
"QTPuBCK0QVH"
] |
iclr_2021_nQxCYIFk7Rz | Multiple Descent: Design Your Own Generalization Curve | This paper explores the generalization loss of linear regression in variably parameterized families of models, both under-parameterized and over-parameterized. We show that the generalization curve can have an arbitrary number of peaks, and moreover, locations of those peaks can be explicitly controlled. Our results hi... | withdrawn-rejected-submissions | While there was some interest in the analysis, the consensus view was that the original treatment was not sufficiently well-motivated, and the revision was too dissimilar from the original submission for it to be evaluated for publication in this year's ICLR. | val | [
"EKmfeSbj2ZJ",
"qsa7qxh0vUa",
"_4Cejwpacai",
"pj_7IqsR39g",
"bSPvaUv64Wy",
"wu720GH5WOv",
"q9L-cT-u8Ca",
"kEgeIIWu0S-",
"2mIEfz9Y52r",
"1sONZs-1wg",
"ejq5mzQAtb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Previous work has shown peaks in generalization error as the capacity of the model increases (called the double-descent phenomenon). The submitted paper proposes methods for generating data that would arbitrarily change the number and positions of peaks in a generalization-vs-capacity curve for linear regression, ... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"iclr_2021_nQxCYIFk7Rz",
"iclr_2021_nQxCYIFk7Rz",
"bSPvaUv64Wy",
"qsa7qxh0vUa",
"1sONZs-1wg",
"EKmfeSbj2ZJ",
"ejq5mzQAtb",
"iclr_2021_nQxCYIFk7Rz",
"iclr_2021_nQxCYIFk7Rz",
"iclr_2021_nQxCYIFk7Rz",
"iclr_2021_nQxCYIFk7Rz"
] |
iclr_2021_pXi-zY262sE | Ruminating Word Representations with Random Noise Masking | We introduce a training method for better word representation and performance, which we call \textbf{GraVeR} (\textbf{Gra}dual \textbf{Ve}ctor \textbf{R}umination). The method is to gradually and iteratively add random noises and bias to word embeddings after training a model, and re-train the model from scratch but in... | withdrawn-rejected-submissions | This paper proposes adding noise regularization, iteratively during training to word embeddings.
The method is evaluated on CNN-based text classification.
Overall, there is novelty in the proposed method, however there are concerns about the experiments and analysis of the proposed approach. | train | [
"X81QeUPqp2",
"MrxYRsTZXV",
"aeF6n5Nmoev"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\n##########################################################################\n\nSummary:\n\nThis paper presents a simple meta-learning method for iteratively fine-tuning\nword embeddings. The method works by adding noise and bias terms to the learned\nembeddings after each training session, and initializing the sa... | [
3,
4,
4
] | [
4,
4,
4
] | [
"iclr_2021_pXi-zY262sE",
"iclr_2021_pXi-zY262sE",
"iclr_2021_pXi-zY262sE"
] |
iclr_2021_Jq8JGA89sDa | Detecting Hallucinated Content in Conditional Neural Sequence Generation | Neural sequence models can generate highly fluent sentences but recent studies have also shown that they are also prone to hallucinate additional content not supported by the input, which can cause a lack of trust in the model.
To better assess the faithfulness of the machine outputs, we propose a new task to pre... | withdrawn-rejected-submissions |
The paper proposes the novel task of detecting hallucinated tokens in sequence generation, and a strategy to train such models using artificially generated samples. The methods show reasonable correlation with human judgements.
The expert reviewers are unanimous in their lack of enthusiasm about this work, with overa... | train | [
"gMcBPcwKWmq",
"k-oLq98Oxaq",
"6WccRTbT1OI",
"f_35E-Gx8jw",
"ouViCU-EB9l",
"Qqu0LLdtsOs",
"DBa7LUu_-CZ",
"XuwHH-ob-N2",
"whrv7HTlmer"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary: The paper addresses the problem of \"hallucinated\" content in conditional neural generation for two specific tasks: machine translation and summarization. It proposes a new task for faithfulness assessment, which classifies each token as either hallucinated or not. The classifier uses a pre-trained LM (e... | [
5,
5,
6,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_Jq8JGA89sDa",
"iclr_2021_Jq8JGA89sDa",
"iclr_2021_Jq8JGA89sDa",
"iclr_2021_Jq8JGA89sDa",
"whrv7HTlmer",
"k-oLq98Oxaq",
"6WccRTbT1OI",
"gMcBPcwKWmq",
"iclr_2021_Jq8JGA89sDa"
] |
iclr_2021_K5a_QFEUzA1 | Cross-model Back-translated Distillation for Unsupervised Machine Translation | Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train th... | withdrawn-rejected-submissions | This paper proposed an additional training objective for unsupervised neural machine translation (UNMT). They first train two UNMT models and use these models to generate pseudo parallel corpora. These parallel corpora are used to optimize the UNMT training objective. The experiments are conducted on several language ... | train | [
"3pLTvufPPm1",
"y4a5TCxuABd",
"8slankuRNKO",
"yrhIBVUpkjj",
"p7W-X0jkVas"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, two unsupervised agents are utilized at cross-model by using the dual nature of the unsupervised machine translation model, in which forward translation of agent_1 is combined with the backward translation of agent_2, more synthetic translation pairs are obtained to train a new supervised machine tr... | [
5,
-1,
7,
7,
6
] | [
5,
-1,
5,
4,
3
] | [
"iclr_2021_K5a_QFEUzA1",
"iclr_2021_K5a_QFEUzA1",
"iclr_2021_K5a_QFEUzA1",
"iclr_2021_K5a_QFEUzA1",
"iclr_2021_K5a_QFEUzA1"
] |