| paper_id (string, 19–21 chars) | paper_title (string, 8–170 chars) | paper_abstract (string, 8–5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29–10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
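Each row pairs one paper with parallel per-comment lists (one entry per thread comment). Judging from the rows below, a rating or confidence of -1 marks entries whose writer is an author or member of the public rather than an official reviewer. A minimal sketch of pulling the official review scores out of one record — field names follow the header above, and the sample values are illustrative, not taken from the dataset:

```python
# Sketch: extracting official reviews from one record of the schema above.
# Field names follow the table header; sample values are illustrative only.
record = {
    "paper_id": "iclr_2021_example",
    "review_writers": ["official_reviewer", "author", "official_reviewer"],
    "review_ratings": [6, -1, 4],
    "review_confidences": [4, -1, 3],
}

def official_reviews(rec):
    """Return (rating, confidence) pairs for entries written by official
    reviewers; -1 sentinels (author/public comments) are dropped."""
    return [
        (r, c)
        for w, r, c in zip(rec["review_writers"],
                           rec["review_ratings"],
                           rec["review_confidences"])
        if w == "official_reviewer" and r != -1
    ]

print(official_reviews(record))  # [(6, 4), (4, 3)]
```

Because the lists are parallel, zipping them keeps each rating aligned with its writer; indexing any one list on its own loses that association.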
iclr_2021_pGIHq1m7PU | Explainable Subgraph Reasoning for Forecasting on Temporal Knowledge Graphs | Modeling time-evolving knowledge graphs (KGs) has recently gained increasing interest. Here, graph representation learning has become the dominant paradigm for link prediction on temporal KGs. However, the embedding-based approaches largely operate in a black-box fashion, lacking the ability to interpret their predicti... | poster-presentations | The paper has received 4 positive reviews, all supporting the acceptance of the paper. The authors have provided a strong rebuttal and have addressed the reviewers' concerns. Please make sure to include all reviewer feedback in the camera-ready version. | train | [
"WmtHhe19-_9",
"1oAphl6vVks",
"NQWD8jzNSDd",
"YUR08MJR12Z",
"8Qu7n57qgYe",
"eoLKCOF1P1O",
"b3AYkRnUhfQ",
"HvTjl8lICZ4",
"gG2h5CFUbj",
"H2GgYN_XggW",
"owXcKojMvUX",
"nNNYdyoaPx8",
"RISr6LTok47",
"fdKT2pbaJv",
"LX7_yiX-rCi",
"QR_Lc81-s-l",
"logkY-LQ8RL",
"sQg8zdTlXo"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\n**[UPDATE, 30 Nov]: Rating raised after reading the authors rebuttal.**\n\nThe paper presents a knowledge graph embedding model that learns latent representation of a temporal knowledge graph to predict unseen events.\n\nThe problem addressed in the paper is relevant for the community, and the angle proposed by ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
1,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2021_pGIHq1m7PU",
"QR_Lc81-s-l",
"WmtHhe19-_9",
"sQg8zdTlXo",
"QR_Lc81-s-l",
"sQg8zdTlXo",
"WmtHhe19-_9",
"sQg8zdTlXo",
"iclr_2021_pGIHq1m7PU",
"gG2h5CFUbj",
"gG2h5CFUbj",
"gG2h5CFUbj",
"gG2h5CFUbj",
"WmtHhe19-_9",
"WmtHhe19-_9",
"iclr_2021_pGIHq1m7PU",
"iclr_2021_pGIHq1m7PU",
... |
iclr_2021_hsFN92eQEla | EVALUATION OF NEURAL ARCHITECTURES TRAINED WITH SQUARE LOSS VS CROSS-ENTROPY IN CLASSIFICATION TASKS | Modern neural architectures for classification tasks are trained using the cross-entropy loss, which is widely believed to be empirically superior to the square loss. In this work we provide evidence indicating that this belief may not be well-founded.
We explore several major neural architectures and a range of... | poster-presentations | The paper questions the use of cross-entropy loss for classification tasks and shows that using squared error loss can work just as well for deep neural networks. The authors conduct extensive experiments across ASR, NLP, and CV tasks. Comparing cross-entropy to squared error loss is certainly not novel, but the conclu... | train | [
"PFhXRHNCgRL",
"zlCb8clTWBo",
"jYWwmesH7mW",
"E9hWk2YFRr5",
"GWa2aKDk-pQ",
"cmtj9T-j3d_",
"UtutCgLBgXe",
"EM6koFBcsuU",
"iEfwhZDxnMi",
"FM-XMSDc2rT"
] | [
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the comments!\n\nPerhaps the easiest way to see that the two losses are not equivalent is to consider the case with an infinite number of y’s and a single fixed x. Assuming you have two outcomes 1 with probability p and -1 with probability 1-p, the minimizer of the square loss is 2*p -1, while the mini... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
3
] | [
"cmtj9T-j3d_",
"UtutCgLBgXe",
"EM6koFBcsuU",
"iEfwhZDxnMi",
"FM-XMSDc2rT",
"iclr_2021_hsFN92eQEla",
"iclr_2021_hsFN92eQEla",
"iclr_2021_hsFN92eQEla",
"iclr_2021_hsFN92eQEla",
"iclr_2021_hsFN92eQEla"
] |
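Aggregates over the rating lists have to skip the -1 sentinels used for author and public comments. A small sketch of computing the mean official-review rating (the sample list mirrors the rating column of the record above: six author replies followed by four reviews):

```python
# Sketch: mean rating over the real reviews in a ratings list, where -1 is
# a sentinel for non-review entries (author/public comments).
def mean_rating(ratings):
    real = [r for r in ratings if r != -1]
    return sum(real) / len(real) if real else None

# Rating column of the record above: six -1 sentinels, then four reviews.
print(mean_rating([-1, -1, -1, -1, -1, -1, 8, 6, 7, 7]))  # 7.0
```

Returning `None` for a list with no real reviews avoids a division-by-zero on records that contain only author or public comments.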
iclr_2021_meG3o0ttiAD | Toward Trainability of Quantum Neural Networks | Quantum Neural Networks (QNNs) have been recently proposed as generalizations of classical neural networks to achieve the quantum speed-up. Despite the potential to outperform classical models, serious bottlenecks exist for training QNNs; namely, QNNs with random structures have poor trainability due to the vanishing g... | withdrawn-rejected-submissions | This paper introduces two new quantum neural networks with specific structures: TT-QNNs and SC-QNNs. The main contribution of this work is to show a theoretical lower bound that the gradient of the two neural networks (at random initialization) with respect to certain training objectives is well lower bounded by 2^{-2 ... | train | [
"TkjGClbjpb_",
"dE11B97FNnj",
"0vsEvEI3xFr",
"SdxjBvea9uu",
"4_9pkYzKrH",
"Rluy1d94eyc",
"qwBIf9-abT"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the constructive suggestions and feedbacks. \nThe main contribution of this work is to investigate the trainability of QNNs. The encoding circuit is a plugin-in part of the main paper, with which we prove additional bounds on the norm of the gradient.\nWe would like to provide discussions... | [
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"4_9pkYzKrH",
"Rluy1d94eyc",
"4_9pkYzKrH",
"qwBIf9-abT",
"iclr_2021_meG3o0ttiAD",
"iclr_2021_meG3o0ttiAD",
"iclr_2021_meG3o0ttiAD"
] |
iclr_2021_AJY3fGPF1DC | Selecting Treatment Effects Models for Domain Adaptation Using Causal Knowledge | Selecting causal inference models for estimating individualized treatment effects (ITE) from observational data presents a unique challenge since the counterfactual outcomes are never observed. The problem is challenged further in the unsupervised domain adaptation (UDA) setting where we only have access to labeled sam... | withdrawn-rejected-submissions | This paper considers the problem of identification of causal effects under the unsupervised domain adaptation setting. The authors assume the invariance of the causal structure and use it to regularize the predictor of causal effects. The method is interesting and looks effective, although this assumption may not hold ... | test | [
"3DUjYcxDR9u",
"Rxq9-2yJFgE",
"KWSlgGBKHI3",
"w6WRBamIqQS",
"gnxwMzjPpJF",
"Z3VYXbesnmQ",
"V3fZ7h5Bdq3",
"yWm2kx4fb0f",
"IGAdDDepUSO",
"1XEDVHDy4zb",
"Es1vbvkT1u-"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank you again for your initial comments on our manuscript. \n\nPlease let us know if the revised manuscript and responses have addressed your concerns. Furthermore, if you have additional comments, we are eager to address them!\n\nThank you again!",
"We thank the reviewer for the constructive comments whi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"yWm2kx4fb0f",
"yWm2kx4fb0f",
"Rxq9-2yJFgE",
"iclr_2021_AJY3fGPF1DC",
"IGAdDDepUSO",
"1XEDVHDy4zb",
"Es1vbvkT1u-",
"iclr_2021_AJY3fGPF1DC",
"iclr_2021_AJY3fGPF1DC",
"iclr_2021_AJY3fGPF1DC",
"iclr_2021_AJY3fGPF1DC"
] |
iclr_2021__O9YLet0wvN | Closing the Generalization Gap in One-Shot Object Detection | Despite substantial progress in object detection and few-shot learning, detecting objects based on a single example - one-shot object detection - remains a challenge. A central problem is the generalization gap: Object categories used during training are detected much more reliably than novel ones. We here show that th... | withdrawn-rejected-submissions | The reviewers have ranked this paper as borderline accept. On the negative side, the main claim of the paper (the more categories for training a one-shot detector, the better) has already been observed in several works and is very intuitive. However, the paper has done significant experimental work to support this claim. ... | train | [
"uUPMDaNplpb",
"lcNNlxEL8L5",
"tDOrwEx1KIq",
"5NfG6cOvrv6",
"_mkVVmPSEvk",
"8V2ERPX1I3f",
"HSoybL_HBax",
"idQRqgxXKuG",
"jEP5B5dfWq1",
"R7CaiULZwv3",
"dB06fuqF6iR",
"TuXhVg1tGJ0",
"qbm-qNeRoFI",
"S6PrZ9US-WY",
"6iQ-QOxtggx",
"LySeQQiLUmb",
"vAS95390MnK",
"Ezpad-0d90g",
"iyNo0GaEw... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
"This paper provides a variety of studies to understand the generalization gap between known and novel classes in one-shot object detection. The studies are carried out by using siamese Faster R-CNN framework on four benchmark datasets. The most notable observation was that it was more important to increase the num... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2021__O9YLet0wvN",
"8V2ERPX1I3f",
"5NfG6cOvrv6",
"dB06fuqF6iR",
"iclr_2021__O9YLet0wvN",
"idQRqgxXKuG",
"jEP5B5dfWq1",
"TuXhVg1tGJ0",
"R7CaiULZwv3",
"vAS95390MnK",
"6iQ-QOxtggx",
"iyNo0GaEwAu",
"S6PrZ9US-WY",
"LySeQQiLUmb",
"Ezpad-0d90g",
"Jx8nC_8UViN",
"ppwj8hWq7ZV",
"uUPMDa... |
iclr_2021_WoLQsYU8aZ | PettingZoo: Gym for Multi-Agent Reinforcement Learning | OpenAI's Gym library contains a large, diverse set of environments that are useful benchmarks in reinforcement learning, under a single elegant Python API (with tools to develop new compliant environments). The introduction of this library has proven a watershed moment for the reinforcement learning community, becaus... | withdrawn-rejected-submissions | This paper presents the PettingZoo library of multi-agent environments, providing a common API and benchmark for multi-agent learning. The library has high potential for impact and is likely of interest to a wide range of people in the ICLR community. However, in its current form the paper could be significantly impr... | train | [
"VV1liR3kBC",
"x2e9zO8hsAt",
"OFTdDFxnDXx",
"QV0d3MUq00Y",
"qjFv_w3C7-n",
"HKrwtYts4hg",
"f8gTO9-QZlQ",
"Lv4DG60owkJ",
"-WZ67UQc5Td",
"iEa5FN3E1Kn",
"YR4ha4qd3jm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for time and effort on writing the library.\n\nThe paper introduces an API and library for multi-agent reinforcement learning along with simple installation of a very diverse set of environments along. Each environment has clear documentation of inputs/outputs, etc. \n\nPros:\n- The majority of the paper is... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2021_WoLQsYU8aZ",
"YR4ha4qd3jm",
"HKrwtYts4hg",
"iEa5FN3E1Kn",
"-WZ67UQc5Td",
"f8gTO9-QZlQ",
"Lv4DG60owkJ",
"VV1liR3kBC",
"iclr_2021_WoLQsYU8aZ",
"iclr_2021_WoLQsYU8aZ",
"iclr_2021_WoLQsYU8aZ"
] |
iclr_2021_Sc8cY4Jpi3s | Towards Practical Second Order Optimization for Deep Learning | Optimization in machine learning, both theoretical and applied, is presently dominated by first-order gradient methods such as stochastic gradient descent. Second-order optimization methods, that involve second derivatives and/or second order statistics of the data, are far less prevalent despite strong theoretical pro... | withdrawn-rejected-submissions | Dear authors,
I like the topic of your paper very much. Indeed, your work tries to show that 2nd-order methods can be efficiently implemented in a distributed environment and can achieve improvements in training time.
However, having worked on distributed computing for many years, I personally think that reporting... | train | [
"XacLjho8D8Z",
"3AJBLIxhz5",
"f6GMzmS5vG",
"XIVh-30yckJ",
"GLfPRVuNJH9",
"hD2uUiQLGGT",
"JdTQIqpbgGF",
"s4QSvuggi1h",
"Um8jFmyJB0V"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Main idea: The paper developed a practical second-order preconditioned method, which is based on the Shampoo algorithm, to improve the wall-clock time compared with state-of-the-art first-order methods for training deep networks. \n\n\n1.\tThis paper is heavily drawn from the shampoo algorithm, algorithm designing... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
2
] | [
"iclr_2021_Sc8cY4Jpi3s",
"iclr_2021_Sc8cY4Jpi3s",
"iclr_2021_Sc8cY4Jpi3s",
"GLfPRVuNJH9",
"hD2uUiQLGGT",
"f6GMzmS5vG",
"Um8jFmyJB0V",
"XacLjho8D8Z",
"iclr_2021_Sc8cY4Jpi3s"
] |
iclr_2021_JNP-CqSjkDb | Transforming Recurrent Neural Networks with Attention and Fixed-point Equations | Transformer has achieved state of the art performance in multiple Natural Language Processing tasks recently. Yet the Feed Forward Network(FFN) in a Transformer block is computationally expensive. In this paper, we present a framework to transform Recurrent Neural Networks(RNNs) and their variants into self-attention-s... | withdrawn-rejected-submissions | The paper proposes a new computational block called "StarSaber" which is a self-attention based block derived from RNN fixed-point approximations.
All the 4 reviewers and the authors agree that the work is not ready for publication. While the motivation of the authors is interesting, some reviewers have raised concern... | train | [
"G0JD4iWp1U",
"Hohxffg_GCG",
"1AwvMrwwrx",
"HqkD-RT1Mre"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to make use RNN cell to replace the connection between continuous layers in Transformer. Although the proposed method makes use RNN cell to replace the heavy MLP layer after self-attention. I still think there's no significant difference between vanilla Transformer. The authors also propose to ... | [
4,
3,
4,
5
] | [
4,
4,
4,
4
] | [
"iclr_2021_JNP-CqSjkDb",
"iclr_2021_JNP-CqSjkDb",
"iclr_2021_JNP-CqSjkDb",
"iclr_2021_JNP-CqSjkDb"
] |
iclr_2021_0LlujmaN0R_ | Truthful Self-Play | We present a general framework for evolutionary learning to emergent unbiased state representation without any supervision. Evolutionary frameworks such as self-play converge to bad local optima in case of multi-agent reinforcement learning in non-cooperative partially observable environments with communication due to ... | withdrawn-rejected-submissions | This paper presents a modification of self-play (in a non-cooperative setup) where agents expose their internal states to each other, and adds a "truthfulness" mechanism to ensure the agents do not hide information from each other.
The reviewers generally agree that the ideas presented, in particular the imaginary reward... | train | [
"4ySteA2O0m",
"v1WChO-RBc",
"6w60rOTu81",
"Vbk0ixUBmBg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"## Summary\n\nThe paper proposes an elegant baseline addition to policy gradient / self play to encourage truthful signaling in Comm-POSGs. It outperforms self-play on various communication domains and uses a number of interesting concepts. Cool paper - I think ICLR will enjoy reading!\n\n## Score\n\nClear accept ... | [
6,
5,
5,
4
] | [
2,
3,
4,
4
] | [
"iclr_2021_0LlujmaN0R_",
"iclr_2021_0LlujmaN0R_",
"iclr_2021_0LlujmaN0R_",
"iclr_2021_0LlujmaN0R_"
] |
iclr_2021_PQlC91XxqK5 | Segmenting Natural Language Sentences via Lexical Unit Analysis | In this work, we present Lexical Unit Analysis (LUA), a framework for general sequence segmentation tasks. Given a natural language sentence, LUA scores all the valid segmentation candidates and utilizes dynamic programming (DP) to extract the maximum scoring one. LUA enjoys a number of appealing properties such as inh... | withdrawn-rejected-submissions | This paper is concerned with sequence segmentation. The authors introduce a framework which they call 'lexical unit analysis' - a neural network is used to score spans and then dynamic programming is used to find the best scoring overall segmentation. The authors present extensive experiments on various Chinese NLP tas... | train | [
"VaKf2MrmuOF",
"N9wvWuBauz7",
"MHuFjpSZiG",
"leX94iNimVL",
"njhGY3IgR0o",
"tgf4_wXqwUX"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a method called LUA, Lexical Unit Analysis for general segmentation tasks. LUA scores all the valid segmentation of a sequence and uses Dynamic Programming to find the segmentation with the highest score. In addition, LUA can incorporate labeling of the segment as an additional component for sp... | [
7,
-1,
-1,
-1,
5,
6
] | [
5,
-1,
-1,
-1,
3,
5
] | [
"iclr_2021_PQlC91XxqK5",
"njhGY3IgR0o",
"VaKf2MrmuOF",
"tgf4_wXqwUX",
"iclr_2021_PQlC91XxqK5",
"iclr_2021_PQlC91XxqK5"
] |
iclr_2021_X6YPReSv5CX | Mixture of Step Returns in Bootstrapped DQN | The concept of utilizing multi-step returns for updating value functions has been adopted in deep reinforcement learning (DRL) for a number of years. Updating value functions with different backup lengths provides advantages in different aspects, including bias and variance of value estimates, convergence speed, and ex... | withdrawn-rejected-submissions | This paper extends Bootstrapped DQN with multi-step TD targets. The initial submission had missing details, communication problems, and results lacking rigor. The authors made a clear effort to address the reviewers' concerns.
This paper's contribution is supported primarily by the empirical results which need major work. ... | train | [
"si9-xMcrD5",
"KLgcPtJNHaN",
"eal5ZBHZaJI",
"nkDA32Jr2KB",
"Ene0dpLFyDq",
"DESZwOIgjSp",
"hCiniJ3RacT",
"Gul1_3bltyF",
"rX4QF7DkLkz",
"OtXoLIPxBzT",
"RdnB4LP12fN",
"S1QHQpDvh9",
"PbV8ckZuOpJ",
"uCoj-RFidRW",
"vrae7mx6xJg",
"QJD5Q_oE0wn",
"3OeB6Y8xTJ",
"wZJJcmWG7mK",
"4Kh5G6O6UDs"... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
"### Summary of Contributions\n\nThe paper proposes the Mixture Bootstrapped DQN (MB-DQN) algorithm, an extension of bootstrapped DQN where each outputted value estimate uses a different multi-step TD target. They provide a motivating example suggesting that with shorter backups, the slower convergence results in g... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
7,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
4
] | [
"iclr_2021_X6YPReSv5CX",
"Ene0dpLFyDq",
"nkDA32Jr2KB",
"OtXoLIPxBzT",
"DESZwOIgjSp",
"S1QHQpDvh9",
"RdnB4LP12fN",
"PbV8ckZuOpJ",
"PbV8ckZuOpJ",
"PbV8ckZuOpJ",
"S55ayvxeGy1",
"pEDMHZimg5G",
"uCoj-RFidRW",
"si9-xMcrD5",
"si9-xMcrD5",
"DkPgsit_NKM",
"DkPgsit_NKM",
"DkPgsit_NKM",
"h_... |
iclr_2021_6MaBrlQ5JM | THE EFFICACY OF L1 REGULARIZATION IN NEURAL NETWORKS | A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds. In this work, we present a new perspective towards the bias-variance tradeoff in neural networks. As an alternative to selecting the number of neurons, we theoretically show that L1 ... | withdrawn-rejected-submissions | The paper presents generalization bounds for L1-regularized networks. The reviewers thought the results were clear and sound, but that they rely on rather standard technical tools and their impact is limited.
One question is why this particular regularization is related to practical learning of neural nets ... | train | [
"IJTtZlUCE-a",
"jy1IsPL-fXt",
"B0DCVSqsjBV",
"xdMu-PNBfmn",
"B7ciORjgaLi",
"Wuap3M5xMd",
"QkKj7eV3fY"
] | [
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments. We address each concern below.\n\nOn contribution:\nWe agree that the theoretical tool is not beyond the existing statistical learning theory. However, the discovery of the tight statistical risk bound (at the rate of n^{1/2} under L1 loss) is new. The existing tight bound for neural n... | [
-1,
-1,
-1,
-1,
5,
4,
5
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"B7ciORjgaLi",
"Wuap3M5xMd",
"QkKj7eV3fY",
"iclr_2021_6MaBrlQ5JM",
"iclr_2021_6MaBrlQ5JM",
"iclr_2021_6MaBrlQ5JM",
"iclr_2021_6MaBrlQ5JM"
] |
iclr_2021_nEMiSX_ipXr | Proper Measure for Adversarial Robustness | This paper analyzes the problems of adversarial accuracy and adversarial training. We argue that standard adversarial accuracy fails to properly measure the robustness of classifiers. Its definition has a tradeoff with standard accuracy even when we neglect generalization. In order to handle the problems of the standar... | withdrawn-rejected-submissions | The paper considers new notions of adversarial accuracy and risk which are called "genuine" with an aim to fix issues with the existing definitions in the literature. A number of issues in the paper, including lack of motivation and intuition, and poor formalism were identified by the reviewers. The paper also fails to... | val | [
"4QhOWHrMLcP",
"Sg101TtiLfn",
"nE5cNdAXNmn",
"hlM8nJFPwSS",
"Zw9565oGvxM",
"VZca9AXL6y2"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n\nThe paper revisits previously known definitions of \"adversarial accuracy\" (and its complement: adversarial risk) which captures the accuracy (and risk) of learning models under adversarial perturbations of the test instances. The paper argues that the established (called standard) definition of adver... | [
3,
-1,
-1,
3,
3,
3
] | [
5,
-1,
-1,
4,
5,
4
] | [
"iclr_2021_nEMiSX_ipXr",
"hlM8nJFPwSS",
"hlM8nJFPwSS",
"iclr_2021_nEMiSX_ipXr",
"iclr_2021_nEMiSX_ipXr",
"iclr_2021_nEMiSX_ipXr"
] |
iclr_2021_jn1WDxmDe5P | Meta-k: Towards Unsupervised Prediction of Number of Clusters | Data clustering is a well-known unsupervised learning approach. Despite the recent advances in clustering using deep neural networks, determining the number of clusters without any information about the given dataset remains an existing problem. There have been classical approaches based on data statistics that require... | withdrawn-rejected-submissions | The reviewers were unanimous that this submission is not ready for publication at ICLR. Concerns were raised about clarity of the exposition, as well as lack of sufficient experiments comparing to related work. | train | [
"QhuIBTyFt14",
"90ti6wK9urs",
"2-QYQvPIsD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The reviewed paper presents a completely unsupervised framework Meta-K for predicting the number of clusters. The approach advocated in the paper comprises two main parts: autoencoder for feature extraction and multilayer perceptron (MLP) for predicting the number of clusters. Autoencoder is used if necessary to d... | [
3,
4,
4
] | [
5,
3,
4
] | [
"iclr_2021_jn1WDxmDe5P",
"iclr_2021_jn1WDxmDe5P",
"iclr_2021_jn1WDxmDe5P"
] |
iclr_2021_Gj9aQfQEHRS | Transformers satisfy | The Propositional Satisfiability Problem (SAT), and more generally, the Constraint Satisfaction Problem (CSP), are mathematical questions defined as finding an assignment to a set of objects that satisfies a series of constraints. The modern approach is trending to solve CSP through neural symbolic methods. Most recent... | withdrawn-rejected-submissions | This paper presents a new graph neural network (GNN) architecture with attention and with applications to Boolean satisfiability.
The reviewers expressed concerns over various aspects of the paper such as a need for better ablations and an analysis of the difficulty level of the SAT problems used in evaluation. No re... | train | [
"zOLQVf2RLG",
"aEk-FJxQmpZ",
"eR39jNY49a",
"3AGAZR5nJMW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: The paper \"Transformers satisfy\" presents an improved graph neural network model to solve SAT problems. Prior applications of GNNs to SAT have used convolutional GNNs instead of Graph Attention Networks (GATs), and this work suggests a modification of GATs to improve their performance on the bipartite g... | [
4,
4,
3,
4
] | [
4,
3,
5,
4
] | [
"iclr_2021_Gj9aQfQEHRS",
"iclr_2021_Gj9aQfQEHRS",
"iclr_2021_Gj9aQfQEHRS",
"iclr_2021_Gj9aQfQEHRS"
] |
iclr_2021_l3gNU1KStIC | Stochastic Inverse Reinforcement Learning | The goal of the inverse reinforcement learning (IRL) problem is to recover the reward functions from expert demonstrations. However, the IRL problem like any ill-posed inverse problem suffers the congenital defect that the policy may be optimal for many reward functions, and expert demonstrations may be optimal for man... | withdrawn-rejected-submissions | This paper describes a method called 'stochastic' inverse reinforcement learning. It is somewhat unclear how this differs from other probabilistic approaches to IRL. In particular Bayesian approaches have been used in the past to obtain distributions over reward functions. However, SIRL tries to estimate a generative m... | train | [
"wxCZwyptP2h",
"GcR71LzajNH",
"GzfXAw_M5QH",
"YqyHaCWzJ80",
"qU2O9IxcXo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary\n--------\nThe paper describes a method for inverse reinforcement learning---called stochastic IRL---that learns a distribution over reward functions. In that sense, the method is similar to Bayesian approaches, however, the learned distribution doesn't seem to approximate the posterior for any given prior... | [
2,
2,
4,
3,
3
] | [
5,
5,
2,
3,
4
] | [
"iclr_2021_l3gNU1KStIC",
"iclr_2021_l3gNU1KStIC",
"iclr_2021_l3gNU1KStIC",
"iclr_2021_l3gNU1KStIC",
"iclr_2021_l3gNU1KStIC"
] |
iclr_2021_XavM6v_q59q | GN-Transformer: Fusing AST and Source Code information in Graph Networks | As opposed to natural languages, source code understanding is influenced by grammar relations between tokens regardless of their identifier name. Considering graph representation of source code such as Abstract Syntax Tree (AST) and Control Flow Graph (CFG), can capture a token’s grammatical relationships that are not ... | withdrawn-rejected-submissions | While there are some potentially interesting aspects to this work, it doesn’t acknowledge a significant amount of relevant literature, and there are some unsupported claims. All reviewers believe the paper is not ready for acceptance. Reviewers provided some good thorough reviews and suggestions, but the authors did no... | train | [
"hk_i7RzMqx_",
"sPFHDu4AusC",
"kJOtRRZgZTL",
"3cOpxk6i2OJ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Summary\n\nThis paper focuses on the problem of training a neural model to understand source code. The authors argue that both graph information (such as the parsed abstract syntax tree) and sequence information (such as the raw program tokens) are useful for understanding code, and describe a particular metho... | [
3,
5,
5,
5
] | [
4,
3,
4,
4
] | [
"iclr_2021_XavM6v_q59q",
"iclr_2021_XavM6v_q59q",
"iclr_2021_XavM6v_q59q",
"iclr_2021_XavM6v_q59q"
] |
iclr_2021_u9ax42K7ND | Hierarchical Meta Reinforcement Learning for Multi-Task Environments | Deep reinforcement learning algorithms aim to achieve human-level intelligence by solving practical decision-making problems, which are often composed of multiple sub-tasks. Complex and subtle relationships between sub-tasks make traditional methods hard to give a promising solution. We implement a first-person shooti... | withdrawn-rejected-submissions | The reviewers agreed that the paper presents interesting ideas but the presentation of the paper needs to be improved. Also, the experiments and the related work section need to be improved. | train | [
"3N2W64g2nl3",
"ynFpwHSVCze",
"mS-YKxHrOzf",
"Uv5MdeTXmDP"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers a FPS game that can be decomposed into two sub-tasks, navigation and shooting. A hierarchical meta RL method is introduced and the updating rules for sub-policies and meta parameters are provided. Experiments focus on this specific environment and hence the hierarchical structure is also speci... | [
3,
3,
4,
3
] | [
5,
4,
4,
3
] | [
"iclr_2021_u9ax42K7ND",
"iclr_2021_u9ax42K7ND",
"iclr_2021_u9ax42K7ND",
"iclr_2021_u9ax42K7ND"
] |
iclr_2021_dJbf5SqbFrM | Continuous Transfer Learning | Transfer learning has been successfully applied across many high-impact applications. However, most existing work focuses on the static transfer learning setting, and very little is devoted to modeling the time evolving target domain, such as the online reviews for movies. To bridge this gap, in this paper, we focus on... | withdrawn-rejected-submissions | The paper proposes transfer learning where the target domain data is evolving along time. They use both labeled and unlabeled data to learn domain- and time-invariant features based on a discrepancy measure they introduce. Their proposed algorithm uses a VAE to learn such features. Reviewers have mixed responses, althou... | val | [
"bQJ9AwlGFzm",
"bggs_0odzZZ",
"PEeQ2yukSp0",
"QcBc6bf0wYO",
"pQmSQTPQsC0",
"IHE-bkafZDp",
"XdJtgahGN29"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nThe paper proposed a transfer learning setting where the target domain varies/evolves over time and the source domain is considered static. The paper uses C-divergence to measure label-dependent domain discrepancy between source/previous target domain and the current target domain and provided a theoretical boun... | [
6,
-1,
-1,
-1,
-1,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_dJbf5SqbFrM",
"XdJtgahGN29",
"IHE-bkafZDp",
"pQmSQTPQsC0",
"bQJ9AwlGFzm",
"iclr_2021_dJbf5SqbFrM",
"iclr_2021_dJbf5SqbFrM"
] |
iclr_2021_bIQF55zCpWf | Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy | Regularization plays a crucial role in machine learning models, especially for deep neural networks. The existing regularization techniques mainly rely on the i.i.d. assumption and only consider the knowledge from the current sample, without the leverage of the neighboring relationship between samples. In this work, we... | withdrawn-rejected-submissions | This paper proposes a novel way (Pani) that constructs image patch-level graphs and then linearly interpolates the patch-level features. The authors show how this can be used in Virtual Adversarial Training (PaniVAT) and Mixup/MixMatch (Pani Mixup). The method is shown to improve classification compared to standard VAT... | train | [
"CIm00k8w2nj",
"oqiWtdVN3Qi",
"9BgmR_jalNN",
"hbhirtt8Zyf",
"uSfQGGJJIov",
"7m53q1aG_y",
"F7xB3dJvVlZ",
"plCcrnD9rW3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a new regularization method via patch level interpolation. During the training, images within a batch will be used to construct an image graph. For example, for a certain image, its nearest neighbors in the feature spaces will be used. Then patches from its neighbors will be used to interpol... | [
5,
5,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
3,
-1,
-1,
-1,
-1,
2,
1
] | [
"iclr_2021_bIQF55zCpWf",
"iclr_2021_bIQF55zCpWf",
"plCcrnD9rW3",
"oqiWtdVN3Qi",
"F7xB3dJvVlZ",
"CIm00k8w2nj",
"iclr_2021_bIQF55zCpWf",
"iclr_2021_bIQF55zCpWf"
] |
iclr_2021_b6BdrqTnFs7 | Grounded Compositional Generalization with Environment Interactions | In this paper, we present a compositional generalization approach in grounded agent instruction learning. Compositional generalization is an important part of human intelligence, but current neural network models do not have such ability. This is more complicated in multi-modal problems with grounding. Our proposed app... | withdrawn-rejected-submissions | This paper proposes an approach to training language instruction following agents that aims to improve their compositional generalization., by means of an entropy regularization method to reduce redundant dependency on input.
All four expert reviewers agreed that the paper is not ready for publication in its current f... | test | [
"6KGX_TYymlX",
"h3IWJH9xs6",
"QQoH2txIv4w",
"bDCWjb-aozG",
"lfv85TZ8O9z",
"5V1VLuatqts",
"Cihjwtpqs6j",
"gS69bTanOC4",
"wlufE3AkFG",
"HbyH9KWl65U"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary\n\nThis paper tries to address a very important problem, compositional generalization in grounded agent instruction learning. It proposes to use interactions between agent and the environment to define output components, and entropy regularization to reduce redundant dependency on input. It shows significa... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2021_b6BdrqTnFs7",
"iclr_2021_b6BdrqTnFs7",
"HbyH9KWl65U",
"wlufE3AkFG",
"gS69bTanOC4",
"HbyH9KWl65U",
"6KGX_TYymlX",
"iclr_2021_b6BdrqTnFs7",
"iclr_2021_b6BdrqTnFs7",
"iclr_2021_b6BdrqTnFs7"
] |
iclr_2021_jOQbDGngsg8 | Secure Network Release with Link Privacy | Many data mining and analytical tasks rely on the abstraction of networks (graphs) to summarize relational structures among individuals (nodes). Since relational data are often sensitive, we aim to seek effective approaches to release utility-preserved yet privacy-protected structured data. In this paper, we leverage t... | withdrawn-rejected-submissions | This paper studies synthetic data generation for graphs under the constraint of edge differential privacy. There were a number of concerns/topics of discussions, which we consider separately:
1. Theoretical contributions. There are not that many theoretical contributions in this paper. I think this is OK, if the other ... | train | [
"TO2dywZk1Rh",
"J7k4yaKmvdy",
"ucYqsqef3Xx",
"Ivh90WKVuDO",
"dEOgwuKBD0W",
"D3ZLkIo7FNv",
"nEib7ps8vI",
"iYX0zx-lMeG",
"6P6I3UvOsc",
"iNmCCy1cDhl",
"YT9k3FiXnQu",
"rv8OrMoNQxU",
"Va3lS8jL9KR"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Q4: Theorem 1 proof in Appendix: please explain how this is different from Abadi et al. given that s=C.\n\nA4: s=C is obtained from our task-specific analysis under the definition of edge-DP for graph link reconstruction. [Abadi et al] proposed DPSGD for the standard DP protection for the traditional image classif... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
2,
3
] | [
"J7k4yaKmvdy",
"iNmCCy1cDhl",
"Ivh90WKVuDO",
"YT9k3FiXnQu",
"Va3lS8jL9KR",
"rv8OrMoNQxU",
"iYX0zx-lMeG",
"6P6I3UvOsc",
"iclr_2021_jOQbDGngsg8",
"iclr_2021_jOQbDGngsg8",
"iclr_2021_jOQbDGngsg8",
"iclr_2021_jOQbDGngsg8",
"iclr_2021_jOQbDGngsg8"
] |
iclr_2021_mj7WsaHYxj | FLAG: Adversarial Data Augmentation for Graph Neural Networks | Data augmentation helps neural networks generalize better, but it remains an open question how to effectively augment graph data to enhance the performance of GNNs (Graph Neural Networks). While most existing graph regularizers focus on augmenting graph topological structures by adding/removing edges, we offer a novel ... | withdrawn-rejected-submissions | This paper studies the problem of adversarial training for graph neural networks. The proposed method is build on the free training approach, and more specifically FreeLB, with some additional tricks including bias perturbation (for node-classification) and unbounded attacks. While these additions are potentially usef... | train | [
"9690QDAknnr",
"CJsOFXTopZ1",
"ZvhvbV7eBrb",
"rg0sscLspJC",
"T9ubYQwEiKE",
"ove0ZF7rBii",
"riS8rLyt077",
"MOKuHtxhXz-",
"2wH0E5h_173",
"AnBRf2GFUqv"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
"---- Summary\n\nThis paper proposes FLAG (Free Large-scale Adversarial Augmentation on Graphs), an adversarial data augmentation technique that can be applied to different GNN models in order to improve their generalization. The proposed technique consists on adding adversarial perturbations to the nodes’ features... | [
6,
5,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7
] | [
3,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2021_mj7WsaHYxj",
"iclr_2021_mj7WsaHYxj",
"CJsOFXTopZ1",
"T9ubYQwEiKE",
"iclr_2021_mj7WsaHYxj",
"2wH0E5h_173",
"9690QDAknnr",
"AnBRf2GFUqv",
"iclr_2021_mj7WsaHYxj",
"iclr_2021_mj7WsaHYxj"
] |
iclr_2021_2wjKRmraNan | Non-Inherent Feature Compatible Learning | The need of Feature Compatible Learning (FCL) arises from many large scale retrieval-based applications, where updating the entire library of embedding vectors is expensive. When an upgraded embedding model shows potential, it is desired to transform the benefit of the new model without refreshing the library. While pr... | withdrawn-rejected-submissions | This paper deals with a problem of feature compatible learning, where the features produced by new model should be compatible with old features. As pointed out by the reviewers, there are several weaknesses with this paper: (a) the novelty is not strong enough, (b) the experimental results should be better explained an... | train | [
"fZlJ887BGu",
"JkwY64n0_7Q",
"SGEVd-97RA5",
"gzSsmqKcMs5",
"V36hKpLMIRe",
"MSJvsmBYeD8",
"2-ZsLE5i4dU",
"aLqm_ntatw5"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\n**Q12: Why random walk could produce the beneficial weights?**\n\n**A12:** With random walk based refinement, each column of the feature matrix is averaged based on the sample-pairwise similarities, which is a message-passing process on a fully connected graph and such process is conducted many times (infinite ... | [
-1,
-1,
-1,
-1,
5,
5,
2,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"V36hKpLMIRe",
"MSJvsmBYeD8",
"aLqm_ntatw5",
"2-ZsLE5i4dU",
"iclr_2021_2wjKRmraNan",
"iclr_2021_2wjKRmraNan",
"iclr_2021_2wjKRmraNan",
"iclr_2021_2wjKRmraNan"
] |
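The `label` column in these records takes the values `train`, `val`, and `test`, marking the dataset split each paper belongs to. A minimal sketch of tallying that split, assuming the column has been extracted into a plain Python list (the example values below are a hypothetical stand-in, not the real column):

```python
from collections import Counter

# Sketch: tally the train/val/test split over the `label` column.
# `labels` is a hypothetical stand-in for the extracted column values.
labels = ["train", "train", "test", "val", "train", "test"]

split = Counter(labels)
print(split["train"])  # → 3
print(dict(split))
```

The same `Counter` pattern extends to any categorical column in the dump, such as `paper_acceptance`.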
iclr_2021_1qJtBS8QF9 | Graph View-Consistent Learning Network | Recent years, methods based on neural networks have made great achievements in solving large and complex graph problems. However, high efficiency of these methods depends on large training and validation sets, while the acquisition of ground-truth labels is expensive and time-consuming. In this paper, a graph view-cons... | withdrawn-rejected-submissions | The authors consider view-consistency when learning graph neural networks. However, as mentioned by the reviewers, the novelty of the proposed method is limited and the rationality of the implementation is not convincing. More deep discussions about related papers and analytic experiments are required to support this w... | train | [
"u1SA5T4DCVW",
"nBTVc3lIVPD",
"4WLpPbCjWb8",
"akzEP3pDls9",
"08kHSXFTv-F",
"H0X9-xH8ZG1",
"wXZ7TGmPGLR",
"DvMEEYMXToa",
"6SBGi0kbn94",
"TiTlgrvK1lN"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"First of all, thank you very much for your valuable comments on our paper. However, we found that there are some discrepancies in your comment with our paper, and we are here to present a statement. What we use in our paper is the consistency between the two views, instead of constructing positive and negative pai... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5,
5
] | [
"TiTlgrvK1lN",
"H0X9-xH8ZG1",
"wXZ7TGmPGLR",
"6SBGi0kbn94",
"DvMEEYMXToa",
"iclr_2021_1qJtBS8QF9",
"iclr_2021_1qJtBS8QF9",
"iclr_2021_1qJtBS8QF9",
"iclr_2021_1qJtBS8QF9",
"iclr_2021_1qJtBS8QF9"
] |
iclr_2021_ghjxvfgv9ht | Self-Pretraining for Small Datasets by Exploiting Patch Information | Deep learning tasks with small datasets are often tackled by pretraining models with large datasets on relevent tasks. Although pretraining methods mitigate the problem of overfitting, it can be difficult to find appropriate pretrained models sometimes. In this paper, we proposed a self-pretraininng method by exploit... | withdrawn-rejected-submissions | All reviewers agreed on the major shortcomings of this submission, the most important of which is that the contributions are insufficiently evaluated. There was no author response. | train | [
"6sSNKzEw4PW",
"OuwYN4KLpbC",
"KlPGIKyYp-w"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents an interesting approach for training neural networks with a small dataset. The main idea is to train the model from the early layers to the deeper layers step-by-step, with different types of inputs (i.e., patches, cropped images, or full images) sampled from the given training set. Experimenta... | [
4,
2,
4
] | [
4,
5,
5
] | [
"iclr_2021_ghjxvfgv9ht",
"iclr_2021_ghjxvfgv9ht",
"iclr_2021_ghjxvfgv9ht"
] |
iclr_2021_IohHac70h3R | On the Marginal Regret Bound Minimization of Adaptive Methods | Numerous adaptive algorithms such as AMSGrad and Radam have been proposed and applied to deep learning recently. However, these modifications do not improve the convergence rate of adaptive algorithms and whether a better algorithm exists still remains an open question. In this work, we propose a new motivation for des... | withdrawn-rejected-submissions | The paper presents a new online convex optimization algorithm that uses per-coordinate learning rates. The learning rates are changed over time using information coming from the gradients. A regret upper bound is proved and the algorithm is empirically validated on deep learning experiments.
While the analysis is in p... | val | [
"m78gsA-I9BL",
"fL-pMjIGa2q",
"2B5dHxP8Tx8",
"odQkmAxts0",
"lJQhMsiIA1h",
"repzzpz3Voo",
"PS_WIgIX68e",
"asZrvZcFVNS",
"PDJC_M4XN9R",
"2NnuTgG-LSv"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"R5: Thank you for your interest in our paper and your valuable feedback. We want to clarify the following.\n\nYou are definitely correct about the analysis part. Thank you for being such a careful reader. However, as we mention to another reviewer, the regret bound of AdaGrad also has the form of eqn (3), i.e. $O(... | [
-1,
-1,
-1,
-1,
-1,
8,
5,
4,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3,
4
] | [
"2NnuTgG-LSv",
"asZrvZcFVNS",
"PDJC_M4XN9R",
"PS_WIgIX68e",
"repzzpz3Voo",
"iclr_2021_IohHac70h3R",
"iclr_2021_IohHac70h3R",
"iclr_2021_IohHac70h3R",
"iclr_2021_IohHac70h3R",
"iclr_2021_IohHac70h3R"
] |
iclr_2021_nIqapkAyZ9_ | SVMax: A Feature Embedding Regularizer | A neural network regularizer (eg, weight decay) boosts performance by explicitly penalizing the complexity of a network. In this paper, we penalize inferior network activations -- feature embeddings -- which in turn regularize the network's weights implicitly. We propose singular value maximization (SVMax) to learn a u... | withdrawn-rejected-submissions | The paper presents a new regularizer based on singular value decomposition in embedding space to avoid model collapse. The reviewes liked the simplicity of the idea, but there were some remaining concerns regarding the experiments. Moreover, two reviewers mentionned some concerns with respect to the clarity of the pape... | train | [
"6Zy0B05d0zy",
"rfFH-4r2iFM",
"n-xEHkppC7P",
"0M94NlY2wHN",
"vEQugCSV-Wv",
"ln0Rwo7GaEZ",
"5qTnzBAW32",
"2mY7yHv9Po1"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for reviewing our paper. We appreciate that you mention many deep metric learning methods can benefit from our proposed SVMax regularizer.\n\n**Regarding experimental results.** For fair comparisons, we did not tune our hyperparameters to a particular ranking loss. Thus, the 2.3% lag (50-47.7) is expect... | [
-1,
-1,
-1,
-1,
5,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
5,
3,
5,
3
] | [
"vEQugCSV-Wv",
"ln0Rwo7GaEZ",
"5qTnzBAW32",
"2mY7yHv9Po1",
"iclr_2021_nIqapkAyZ9_",
"iclr_2021_nIqapkAyZ9_",
"iclr_2021_nIqapkAyZ9_",
"iclr_2021_nIqapkAyZ9_"
] |
iclr_2021_GHCu1utcBvX | Transferability of Compositionality | Compositional generalization is the algebraic capacity to understand and produce large amount of novel combinations from known components. It is a key element of human intelligence for out-of-distribution generalization. To equip neural networks with such ability, many algorithms have been proposed to extract compositi... | withdrawn-rejected-submissions | This work considers an apparent problem with current approaches to compositional generalisation (CG) in neural networks. The problem seems to be roughly:
1. prior work in CG aims to extract 'compositional representations' from the training distribution
2. work on CG, the training set and the test set are drawn from dif... | train | [
"7_3ehygfkjK",
"XrAE1BQ--0G",
"LwA2Gc3fsA5",
"Rc1e1wY1Ntq",
"jv1P4mt8Nc0",
"qHKKNMieYrN",
"w7yzYJh846L",
"xGHcvStyzNG",
"kJbEvyHG9U",
"2oUn545DV9i"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"*Summary*\n\nThis paper proposes an architecture that addresses transferability of compositionality. The proposed architecture consists of three components: a network that transforms the input X into a series of hidden representations {H_1, H_2, ... H_K}, a network that reconstructs the input X from this series of... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"iclr_2021_GHCu1utcBvX",
"LwA2Gc3fsA5",
"qHKKNMieYrN",
"7_3ehygfkjK",
"xGHcvStyzNG",
"kJbEvyHG9U",
"2oUn545DV9i",
"iclr_2021_GHCu1utcBvX",
"iclr_2021_GHCu1utcBvX",
"iclr_2021_GHCu1utcBvX"
] |
iclr_2021_Z4YatHL7aq | Semantically-Adaptive Upsampling for Layout-to-Image Translation | We propose the Semantically-Adaptive UpSampling (SA-UpSample), a general and highly effective upsampling method for the layout-to-image translation task. SA-UpSample has three advantages: 1) Global view. Unlike traditional upsampling methods (e.g., Nearest-neighbor) that only exploit local neighborhoods, SA-UpSample ca... | withdrawn-rejected-submissions | The paper proposes an upsampling layer design for converting layouts to images. Three reviewers rate the paper below the bar, while one reviewer rates the paper marginally above the bar. The main concern that several reviewers raise is the novelty. Particularly, R1 and R3 point out that the proposed method shares great... | train | [
"0aRleGLPEHb",
"ASEhtbaYljc",
"EuNEqGztJmZ",
"muVc4QLdZ3A"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"##########################################################################\n\nSummary:\n\nThis paper proposes a semantically-adaptive upsampling approach for layout-to-image translation. It uses the semantic label map to predict spatially-adaptive upsampling kernels for feature map upsampling. Compared with tradi... | [
4,
5,
6,
5
] | [
5,
5,
5,
4
] | [
"iclr_2021_Z4YatHL7aq",
"iclr_2021_Z4YatHL7aq",
"iclr_2021_Z4YatHL7aq",
"iclr_2021_Z4YatHL7aq"
] |
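In the records above, `review_ratings` and `review_confidences` use `-1` as a sentinel for rows that carry no score (author responses and public comments), so aggregating scores requires filtering those out first. A minimal sketch, assuming one record has been parsed into a Python dict following the header schema (the record contents below are hypothetical, trimmed examples):

```python
# Sketch: aggregate review ratings for one record of this dump.
# The record is a hypothetical, trimmed example following the header
# schema; -1 marks rows (author/public replies) with no rating.
record = {
    "paper_id": "iclr_2021_example",
    "review_writers": ["official_reviewer", "author", "official_reviewer"],
    "review_ratings": [6, -1, 4],
    "review_confidences": [4, -1, 3],
}

# Keep only entries that carry an actual rating.
rated = [(r, c) for r, c in zip(record["review_ratings"],
                                record["review_confidences"]) if r != -1]
ratings = [r for r, _ in rated]
mean_rating = sum(ratings) / len(ratings)
print(mean_rating)  # → 5.0
```

Filtering on the rating (rather than on `review_writers`) is the safer choice, since the two columns are parallel and the sentinel is what actually marks a missing score.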
iclr_2021_a7gkBG1m6e | Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing | There is considerable evidence that deep neural networks are vulnerable to adversarial perturbations applied directly to their digital inputs. However, it remains an open question whether this translates to vulnerabilities in real-world systems. Specifically, in the context of image inputs to autonomous driving systems... | withdrawn-rejected-submissions | Thank you for your submission to ICLR. Overall the reviewers and I think that this paper presents some nice contributions to the adversarial attacks literature, demonstrating a low-sample-complexity, "physically-realizable" attack in a domain of clear importance and interest in machine learning. The move to consideri... | test | [
"M_8zGEnK6Hg",
"IdfgHW8Oi-0",
"I7ZxcXvm9tv",
"KZ0usfUBx4u",
"N2uPuY7Vq_0",
"pNvQR-9SN7o",
"PiAAuZ1S01U",
"b5E6fQ2hrFh"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"To summarize, the authors propose a road-painting attack with rectangles to deceive a controller network such that the car will deviate from the correct trajectory. The simulation is done on CARLA.\n\n**The threat model**\nPainting roads with rectangles is very interesting. The closest one I saw is patching stop s... | [
6,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"iclr_2021_a7gkBG1m6e",
"pNvQR-9SN7o",
"M_8zGEnK6Hg",
"PiAAuZ1S01U",
"b5E6fQ2hrFh",
"iclr_2021_a7gkBG1m6e",
"iclr_2021_a7gkBG1m6e",
"iclr_2021_a7gkBG1m6e"
] |
iclr_2021_Fo6S5-3Dx_ | Deep Evolutionary Learning for Molecular Design | In this paper, we propose a deep evolutionary learning (DEL) process that integrates fragment-based deep generative model and multi-objective evolutionary computation for molecular design. Our approach enables (1) evolutionary operations in the latent space of the generative model, rather than the structural space, to... | withdrawn-rejected-submissions | This work combines deep generative models (variational autoencoders, FragVAE) and multi-objective evolutionary computation for molecular design. They use a multilayer perceptron as a predictor for properties. Evolutionary operations are used to explore the latent space of the generative model to produce novel competiti... | train | [
"CQPrV1nQ7SW",
"1tT-49L0H4d",
"0LI49sskAM",
"D4lSrJQFBs9",
"Y6Vnx74SQU_",
"87zkkV9EUQC",
"3y3gVpvDs1",
"XoQhLfl-nX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to optimize the continuous representation of molecules in a latent space learned by a fragment-based variational autoencoder using an evolutionary algorithm. To improve the quality of the generated molecules over time, they use new generated samples as augmented data to fine-tune the generative... | [
4,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
3,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_Fo6S5-3Dx_",
"iclr_2021_Fo6S5-3Dx_",
"1tT-49L0H4d",
"3y3gVpvDs1",
"CQPrV1nQ7SW",
"XoQhLfl-nX",
"iclr_2021_Fo6S5-3Dx_",
"iclr_2021_Fo6S5-3Dx_"
] |
iclr_2021_ToWi1RjuEr8 | Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning | In this work, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised le... | withdrawn-rejected-submissions | This paper aims to develop a simple yet efficient deep RL algorithm for off-policy RL. The proposed method uses advantages to as weight in regression, which is an extension of the known method of reward-weighted regression. The paper is in general nicely written, and it comes with a set of theoretical analyses and expe... | test | [
"bYD2nEasQ4y",
"uvAoLb6UIQN",
"FcHYKkdivH",
"YqF5hLgZJHR",
"V4-DPC1F3vn",
"wbfKhvAVD_I",
"-HXGlxS8cWd",
"maXAjmJpBWP",
"PGRoKS96dfa",
"hwG96AKnLS",
"L6rsK5gL1Zz",
"_8xPsZXPYA"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your response. \n\nAfter going over the feedback, (i) \\beta hyper-parameter ablation hasn't been shown, (ii) experimental results are reasonable but they don't offer strong conclusions in favor of AWR or why AWR is favorable despite these results, (iii) I still believe AWR has to be strengthened by ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
4,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"maXAjmJpBWP",
"FcHYKkdivH",
"V4-DPC1F3vn",
"wbfKhvAVD_I",
"PGRoKS96dfa",
"hwG96AKnLS",
"_8xPsZXPYA",
"L6rsK5gL1Zz",
"iclr_2021_ToWi1RjuEr8",
"iclr_2021_ToWi1RjuEr8",
"iclr_2021_ToWi1RjuEr8",
"iclr_2021_ToWi1RjuEr8"
] |
iclr_2021_nRJ08rN_b17 | Vision at A Glance: Interplay between Fine and Coarse Information Processing Pathways | Object recognition is often viewed as a feedforward, bottom-up process in machine learning, but in real neural systems, object recognition is a complicated process which involves the interplay between two signal pathways. One is the parvocellular pathway (P-pathway), which is slow and extracts fine features of objects;... | withdrawn-rejected-submissions | This paper explores a network that has a parvo (fine, detailed, slow)
and magno (low-res, quick) stream. The ideas are interesting and the
results intriguing, and one reviewer is in favor of acceptance.
Several reviewers criticized the clarity of the paper. and the lack of
details for, explanations of, and critical ev... | train | [
"FUrE3sdVNV",
"4xFjC012sZC",
"fSD9KoGWpg0",
"kEony2UNTPQ",
"lO552gAVDrY",
"RjVKKrG4KU-"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your careful review. \n(1)\t1) Our model’s design decisions are not arbitrary but based on biological constraints conceptually. For example, FineNet and CoarseNet mimic P-pathway and M-pathway, that two pathways interacts with an associative memory has been proposed by moshe bar (moshe bar, Nature Neur... | [
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
4,
4,
4
] | [
"kEony2UNTPQ",
"lO552gAVDrY",
"RjVKKrG4KU-",
"iclr_2021_nRJ08rN_b17",
"iclr_2021_nRJ08rN_b17",
"iclr_2021_nRJ08rN_b17"
] |
iclr_2021_pQ-AoEbNYQK | DiffAutoML: Differentiable Joint Optimization for Efficient End-to-End Automated Machine Learning | The automated machine learning (AutoML) pipeline comprises several crucial components such as automated data augmentation (DA), neural architecture search (NAS) and hyper-parameter optimization (HPO). Although many strategies have been developed for automating each component in separation, joint optimization of these c... | withdrawn-rejected-submissions | The paper was discussed by the reviewers that acknowledged the rebuttal and the authors’ responses. In particular, they appreciated the fact that some of their concerns were alleviated (e.g., going beyond the single ImageNet evaluation).
More generally, while all the reviewers thought that the problem tackled by the ... | train | [
"n5dL74fbp5X",
"Ql2O2FrIG2z",
"Ob9opb-2c7k",
"bNFhNTAGYM1",
"qMUjiwquYT",
"DJuwcDufCv",
"6sG_-GNBQK",
"oSiH4b6T1ln",
"m-aDBbZiIxU"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Hi, thanks for you answer. For now I just to clarify one detail that is still not fully clear to me in your answer. What validation set exactly was used for the results in the tables? And what is used to compute the validation loss in DiffAutoML? Are these the different sets or are they the same?",
"Thanks for y... | [
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"Ql2O2FrIG2z",
"oSiH4b6T1ln",
"DJuwcDufCv",
"6sG_-GNBQK",
"m-aDBbZiIxU",
"iclr_2021_pQ-AoEbNYQK",
"iclr_2021_pQ-AoEbNYQK",
"iclr_2021_pQ-AoEbNYQK",
"iclr_2021_pQ-AoEbNYQK"
] |
iclr_2021_dmCL033_YwO | DeeperGCN: Training Deeper GCNs with Generalized Aggregation Functions | Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs. Recent works developed frameworks to train deep GCNs. Such works show impressive results in tasks like point cloud classification and segmentation, and protein interaction prediction. In this... | withdrawn-rejected-submissions | One referee supports acceptance, whereas three referees lean towards rejection. All referees agree that the idea introduced in the paper is interesting but find that the motivation and evaluation of the proposed aggregation functions could be significantly strengthened. The rebuttal addresses R1's concerns about novelt... | val | [
"p5KaWWRWQkI",
"jkmGLpOvLP2",
"-jn_4545phk",
"ereCtzLbbgM",
"-YfoNBEuae7",
"3p9gcc9hZD6",
"ge8v88lGz-I",
"1D_4HWBdQIu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors propose new and, in particular, parameterized aggregation functions for GNNs in order to especially support the construction of deeper GNNs. The paper is fairly understandable and the \"deeper GNN\" topic has gained more attention recently. However, in my opinion, the paper is missing the theoretical j... | [
4,
5,
6,
-1,
-1,
-1,
-1,
4
] | [
3,
5,
5,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2021_dmCL033_YwO",
"iclr_2021_dmCL033_YwO",
"iclr_2021_dmCL033_YwO",
"p5KaWWRWQkI",
"jkmGLpOvLP2",
"-jn_4545phk",
"1D_4HWBdQIu",
"iclr_2021_dmCL033_YwO"
] |
iclr_2021_foNTMJHXHXC | Out-of-Distribution Generalization via Risk Extrapolation (REx) | Distributional shift is one of the major obstacles when transferring machine learning prediction systems from the lab to the real world. To tackle this problem, we assume that variation across training domains is representative of the variation we might encounter at test time, but also that shifts at test time may be m... | withdrawn-rejected-submissions | The paper is proposing Risk Extrapolation (REX) as a domain generalization algorithm. Authors extends the distributionally robust learning to affine mixture of distributions from convex mixture. Authors later uses variances instead of this extension and demonstrate various empirical and theoretical properties. The pape... | val | [
"RiTpWcDI1QY",
"33Pu1nPfpXV",
"2pgmh7zO2Tx",
"Gg93gBHV0ip",
"aPdWLn5Jfc",
"V_exJcN0tsc",
"2prkohJ3hE-",
"vSPayxqzYzT",
"Cil_NxIe1N",
"VXiIiC0NBM",
"HQ51LuwsqHW",
"S0eQv_PFlgA",
"F7Mf3gab1A",
"RJ4crRpVKl1",
"Ra9BDwHABC",
"bN4yW9qqqRk",
"qAkAGs7eq3t",
"76OUF4kNz-1",
"5kN2X-tfv3q",
... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThis paper addresses the problem of distributional shift in transfer learning from multiple training domains. The authors propose Risk Extrapolation (REx), which is a novel approach for out-of-distribution generalization when the new test domain for which we do not even have the covariate matrix. Thoroug... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
4
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"iclr_2021_foNTMJHXHXC",
"Cil_NxIe1N",
"Gg93gBHV0ip",
"aPdWLn5Jfc",
"V_exJcN0tsc",
"vSPayxqzYzT",
"RJ4crRpVKl1",
"RJ4crRpVKl1",
"76OUF4kNz-1",
"TZNTpU6k5-_",
"TZNTpU6k5-_",
"TZNTpU6k5-_",
"TZNTpU6k5-_",
"qAkAGs7eq3t",
"RiTpWcDI1QY",
"5kN2X-tfv3q",
"5kN2X-tfv3q",
"iclr_2021_foNTMJHX... |
iclr_2021_YtgKRmhAojv | One Reflection Suffice | Orthogonal weight matrices are used in many areas of deep learning. Much previous work attempt to alleviate the additional computational resources it requires to constrain weight matrices to be orthogonal. One popular approach utilizes *many* Householder reflections. The only practical drawback is that many reflections... | withdrawn-rejected-submissions | This paper proposes to use a single parametric Householder reflection to represent Orthogonal weight matrices.
It demonstrates that this is sufficient provided that we make the reflection direction a function of the input vector. It is also demonstrated under which conditions this modified transformation is invertible.... | test | [
"7uWF7BZooLp",
"RuSDanYcf0r",
"RKxR-Yo9iw",
"dJGJhR9ROMV",
"JrKGB0KjQJm",
"2FdSfElOLuU",
"ev1ztTRZlzm",
"gM8o6khS5nU",
"UEabWg5D2Xh",
"bf9Aene1KBj",
"x66N4F53v94",
"2pmy-M7cxJ0",
"ouUNBHKNWTq",
"IDYagJtsYzL",
"sYetR2BSBPs",
"fV6dRl3XiQC",
"TRbw7y0kEr"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Something came up so we didn't mange to finish the experimeint, it took a much longer than anticipated. \n\nWe think the point with a (d, L) table is very good. We plan on writing proper expermiental code for this, instead of quickyl hacking something together. That said, before doing so we'll take a step back and... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
2
] | [
"RuSDanYcf0r",
"RKxR-Yo9iw",
"JrKGB0KjQJm",
"ev1ztTRZlzm",
"dJGJhR9ROMV",
"gM8o6khS5nU",
"2FdSfElOLuU",
"iclr_2021_YtgKRmhAojv",
"2FdSfElOLuU",
"x66N4F53v94",
"2pmy-M7cxJ0",
"gM8o6khS5nU",
"TRbw7y0kEr",
"fV6dRl3XiQC",
"iclr_2021_YtgKRmhAojv",
"iclr_2021_YtgKRmhAojv",
"iclr_2021_YtgKR... |
iclr_2021_cL4wkyoxyDJ | Towards Counteracting Adversarial Perturbations to Resist Adversarial Examples | Studies show that neural networks are susceptible to adversarial attacks. This exposes a potential threat to neural network-based artificial intelligence systems. We observe that the probability of the correct result outputted by the network increases by applying small perturbations generated for class labels other tha... | withdrawn-rejected-submissions | This work is attempting to develop a new way to train models that are robust to (l_p-bounded) adversarial perturbations and to do so in a way that departs from the tools successfully used for this purpose in the past. This is a worthwhile aspiration, however, as pointed out in the comments/reviews, there are significan... | train | [
"HD5rKFwf-U_",
"_cKlY5FnPC8",
"1yKptR2rbPt",
"Pe-smn7-UBp",
"Zrwgwm_GFH6"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I wrote (Carlini & Wagner, 2017). We made a mistake in this paper by making it look like oblivious attacks matter---they do not. As we said in the follow-up paper (\"On evaluating adversarial robustness\"):\n\nAlong the same lines, there is no justification to study a “zero-knowledge” (Biggio et al.,\n2013) threat... | [
-1,
3,
2,
2,
1
] | [
-1,
5,
5,
5,
5
] | [
"Zrwgwm_GFH6",
"iclr_2021_cL4wkyoxyDJ",
"iclr_2021_cL4wkyoxyDJ",
"iclr_2021_cL4wkyoxyDJ",
"iclr_2021_cL4wkyoxyDJ"
] |
iclr_2021_Ef1nNHQHZ20 | Layer-wise Adversarial Defense: An ODE Perspective | Deep neural networks are observed to be fragile against adversarial attacks, which have dramatically limited their practical applicability. On improving model robustness, the adversarial training techniques have proven effective and gained increasing attention from research communities. Existing adversarial training ap... | withdrawn-rejected-submissions | This submission aims to improve adversarial training by making it involve also layer-wise (instead of only input-wise) perturbations. This is an interesting idea and it is accompanied by an interesting ODE-based perspective on the resulting dynamics. However, as the comments and reviews detail, the current manuscript m... | train | [
"OGi1coe8XVg",
"912ssCqJXz0",
"jHGVFM8A1KZ",
"huPKGfzAdO"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Summary\nThe paper proposes to improve the robustness of residual networks (ResNet) by adversarial training with layer-wise and gradient-based perturbations. It further offers an interpretation of the proposed perturbations as ordinary differential equations through the lens of operator-splitting theory, re... | [
5,
5,
5,
4
] | [
4,
4,
3,
4
] | [
"iclr_2021_Ef1nNHQHZ20",
"iclr_2021_Ef1nNHQHZ20",
"iclr_2021_Ef1nNHQHZ20",
"iclr_2021_Ef1nNHQHZ20"
] |
iclr_2021_GNv-TyWu3PY | Robust Learning for Congestion-Aware Routing | We consider the problem of routing users through a network with unknown congestion functions over an infinite time horizon. On each time step t, the algorithm receives a routing request and must select a valid path. For each edge e in the selected path, the algorithm incurs a cost c_e^t = f_e(x_e^t) + η_e^t, where x_e^t is the flow... | withdrawn-rejected-submissions | The paper proposes an algorithm with sublinear regret for the problem of routing users through a network with unknown congestion functions over an infinite time horizon. The reviewers generally appreciated the main contribution of this work. One of the reviewers also felt that, although it may be possible to obtain the... | test | [
"12wIRFRtgYJ",
"xDpSxgBfAJZ",
"GQ1DXW3nFRL",
"Rzgfood_KRG",
"A-bzpZsel7Y",
"-Py5rXSXLNR",
"QOsDNi_zP4q",
"qZk5xkOe1G2",
"1kpFzepg8eq",
"bSdF-75iHT-",
"3tP7y1mGe3A",
"NXy-3H3mIfF",
"7EjrAuLv8CD",
"4SwDSoprfR8",
"AzSntIGEXNx",
"ovBn_ddU4Wu"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nIn this paper the authors study the problem of routing users according to their requests through a network with unknown congestion functions over infinite time horizon. They model the problem as follows. A directed graph G(V,E) is given, where each edge is associated with a congestion function f_e that maps the... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2021_GNv-TyWu3PY",
"Rzgfood_KRG",
"-Py5rXSXLNR",
"A-bzpZsel7Y",
"GQ1DXW3nFRL",
"QOsDNi_zP4q",
"qZk5xkOe1G2",
"7EjrAuLv8CD",
"12wIRFRtgYJ",
"4SwDSoprfR8",
"AzSntIGEXNx",
"ovBn_ddU4Wu",
"ovBn_ddU4Wu",
"iclr_2021_GNv-TyWu3PY",
"iclr_2021_GNv-TyWu3PY",
"iclr_2021_GNv-TyWu3PY"
] |
iclr_2021_tY38nwwdCDa | USING OBJECT-FOCUSED IMAGES AS AN IMAGE AUGMENTATION TECHNIQUE TO IMPROVE THE ACCURACY OF IMAGE-CLASSIFICATION MODELS WHEN VERY LIMITED DATA SETS ARE AVAILABLE | Today, many of the machine learning models are extremely data hungry. On the other hand, the accuracy of the algorithms used is very often affected by the amount of the training data available, which is, unfortunately, rarely abundant. Fortunately, image augmentation is one of the very powerful techniques that can be u... | withdrawn-rejected-submissions | The paper introduces an augmentation technique that, given an image with a detected object, keeps the object and removes the background.
The reviewers expressed numerous valid concerns about the paper's novelty, the setting (assumption that there's a single object), the scalability of the approach and the experimental... | test | [
"7T0lOfy7Kmr",
"hMHlXEpe4Je",
"y2nIK3z8pBo",
"tn_cPiMCTTD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: Authors propose an augmentation technique for image classification. The augmented image is obtained by segmenting the salient object and masking the background. Therefore the technique gives one additional augmented image per each training image. The authors show an improved performance when using this au... | [
3,
2,
3,
5
] | [
4,
5,
5,
5
] | [
"iclr_2021_tY38nwwdCDa",
"iclr_2021_tY38nwwdCDa",
"iclr_2021_tY38nwwdCDa",
"iclr_2021_tY38nwwdCDa"
] |
iclr_2021_jsM6yvqiT0W | Improved Uncertainty Post-Calibration via Rank Preserving Transforms | Modern machine learning models with high accuracy often exhibit poor uncertainty calibration: the output probabilities of the model do not reflect its accuracy, and tend to be over-confident. Existing post-calibration methods such as temperature scaling recalibrate a trained model using rather simple calibrators with o... | withdrawn-rejected-submissions | # Paper Summary
This paper considers calibrating the output of a multiclass classifier in such a way that the output probabilities are approximately "correct". They observe that if such a method is able to re-order the logits, then it will change the accuracy of the classifier. Therefore, if they use a c... | train | [
"o5FvYwMtfsy",
"NWkWwxDuHRl",
"b4EqyW9E23A",
"H0NnEUt3SV",
"lvgwMvFLT9B",
"WWAMI-FvcRT",
"m5L9y03XRF9",
"lP4e7PGf4dN",
"sLeBmX8CPT5"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer"
] | [
"Overview: \n\nThis paper proposes a post-calibration technique that is meant to be more powerful than temperature scaling without introducing overfitting. The authors argue that previous attempts to generalize temperature scaling have a tendency to overfit not because of the additional parameters, but rather becau... | [
4,
5,
-1,
-1,
-1,
-1,
-1,
7,
2
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2021_jsM6yvqiT0W",
"iclr_2021_jsM6yvqiT0W",
"iclr_2021_jsM6yvqiT0W",
"sLeBmX8CPT5",
"WWAMI-FvcRT",
"sLeBmX8CPT5",
"iclr_2021_jsM6yvqiT0W",
"iclr_2021_jsM6yvqiT0W",
"iclr_2021_jsM6yvqiT0W"
] |
iclr_2021_9DQ0SdY4UIz | Effective Subspace Indexing via Interpolation on Stiefel and Grassmann manifolds | We propose a novel local Subspace Indexing Model with Interpolation (SIM-I) for low-dimensional embedding of image datasets. Our SIM-I is constructed via two steps: in the first step we build a piece-wise linear affinity-aware subspace model under a given partition of the dataset; in the second step we interpolate betw... | withdrawn-rejected-submissions | Thanks for your submission to ICLR.
This paper proposes a subspace indexing model for low-dimensional embedding. The reviewers were all generally in agreement that the paper is not ready for publication. In particular, they felt that the paper had several key weaknesses:
-Relevant literature is not discussed
-Relev... | train | [
"37LhDrXC6-I",
"h80zx0arp49",
"O5OEF2ciRI7",
"Hwv9Mc5_yxp"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThis paper proposes a subspace indexing model with interpolation (SIM-I) for dimension reduction. To capture the global nonlinearity and the local variation of data, SIM-I splits the global space into a collection of local disjoint partitions by a kd-tree style scheme and builds a subspace indexing model (... | [
4,
5,
3,
4
] | [
3,
4,
2,
5
] | [
"iclr_2021_9DQ0SdY4UIz",
"iclr_2021_9DQ0SdY4UIz",
"iclr_2021_9DQ0SdY4UIz",
"iclr_2021_9DQ0SdY4UIz"
] |
iclr_2021_bFnn6lPn3Sp | A Benchmark for Voice-Face Cross-Modal Matching and Retrieval | Cross-modal associations between a person's voice and face can be learned algorithmically, and this is a useful functionality in many audio and visual applications. The problem can be defined as two tasks: voice-face matching and retrieval. Recently, this topic has attracted much research attention, but it is still in ... | withdrawn-rejected-submissions | The reviewers pointed out several opportunities for improvements and concurred that the paper needs significant work before it is ready for publication. The authors did not provide a rebuttal. We hope the review process was useful to the authors. | train | [
"c6T5skOdMhE",
"9d1QRjRWsTv",
"0kbe8eNg2nf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The goal of this paper is to learn cross-modal associations between a person’s face and a voice. The authors use a standard three stream network trained with a triplet loss, and evaluate on the VoxCeleb-VGGFace datasets. \n\nStrengths: \n- The authors train on VoxCeleb-VGGFace2, and evaluate on VoxCeleb1, which ... | [
3,
3,
4
] | [
5,
5,
4
] | [
"iclr_2021_bFnn6lPn3Sp",
"iclr_2021_bFnn6lPn3Sp",
"iclr_2021_bFnn6lPn3Sp"
] |
iclr_2021_XMoyS8zm6GA | Slice, Dice, and Optimize: Measuring the Dimension of Neural Network Class Manifolds | Deep neural network classifiers naturally partition input space into regions belonging to different classes. The geometry of these class manifolds (CMs) is widely studied and is intimately related to model performance; for example, the margin is defined via boundaries between these CMs. We present a simple technique to... | withdrawn-rejected-submissions | This paper aims to study the dimension of the Class Manifolds (CM) which are defined as the region classified as certain classes by a neural network. The authors develop a method to measure the dimension of CM by generating random linear subspaces and compute the intersection of the linear subspace with CM. All reviewe... | val | [
"yETsvdDxtb-",
"7ujhxXE04Oo",
"muM5orNGIvT",
"7CgzmxaFy1r",
"f-xNLP6uFqZ",
"iZ-HIlTcH1",
"OeTVJDGovzb",
"Pq6dAejJCiv",
"1BaLYdaGoTO",
"rCWdHVOwZ6F",
"3rxq5yLJh-R",
"sjyyyFhb-Zq",
"QO_4C5Chard",
"WhzfvcGcq80",
"erYsUNd5EJf",
"tPDcJTs2P03",
"-60l6esPPeR",
"308Nt9wIYfS",
"GU2A1lGl86... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to understand the behavior of deep networks for classification tasks by studying the dimensionality of the \"class manifolds\", i.e., regions in the data space that are mapped to the same one-hot output. To measure such dimensionality, the paper proposes a method that is based on intersecting t... | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_XMoyS8zm6GA",
"iclr_2021_XMoyS8zm6GA",
"GU2A1lGl86H",
"erYsUNd5EJf",
"1BaLYdaGoTO",
"1BaLYdaGoTO",
"1BaLYdaGoTO",
"1BaLYdaGoTO",
"3rxq5yLJh-R",
"7ujhxXE04Oo",
"7ujhxXE04Oo",
"7ujhxXE04Oo",
"GU2A1lGl86H",
"erYsUNd5EJf",
"tPDcJTs2P03",
"yETsvdDxtb-",
"stM_DT1BmVj",
"stM_DT... |
iclr_2021_WW8VEE7gjx | Dimension reduction as an optimization problem over a set of generalized functions | We reformulate unsupervised dimension reduction problem (UDR) in the language of tempered distributions, i.e. as a problem of approximating an empirical probability density function pemp(x) by another tempered distribution q(x) whose support is in a k-dimensional subspace. Thus, our problem is reduced to the minimizati... | withdrawn-rejected-submissions | Overall, there were significant concerns about the motivation and experiments in this paper, and these were thought not to merit acceptance on their own. Because of this, the reviewers started discussing the theory to see if that would justify acceptance. The reviewers were not able to find a clear advantage over exist... | val | [
"5jfOUrYDwEr",
"BzblAuRnqFr",
"OgIiL2D70bU",
"boK0P0c-ADa"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary: In the dimension reduction problem, we are given a set of high-dimensional points and would like to embed them in a lower dimensional space so as to preserve relevant properties. This paper proposes a certain optimization framework for dimension reduction, and suggests to solve it by alternating optimizat... | [
5,
-1,
7,
4
] | [
2,
-1,
3,
4
] | [
"iclr_2021_WW8VEE7gjx",
"iclr_2021_WW8VEE7gjx",
"iclr_2021_WW8VEE7gjx",
"iclr_2021_WW8VEE7gjx"
] |
iclr_2021_EZ8aZaCt9k | No Spurious Local Minima: on the Optimization Landscapes of Wide and Deep Neural Networks | Empirical studies suggest that wide neural networks are comparably easy to optimize, but mathematical support for this observation is scarce. In this paper, we analyze the optimization landscapes of deep learning with wide networks. We prove especially that constraint and unconstraint empirical-risk minimization over s... | withdrawn-rejected-submissions | This paper studies an interesting problem: the landscape of neural networks. I agree with the authors' comment that this work improves our understanding of one aspect of neural networks, and I do find the result of this paper is of interest to some extent. Reviewer 5 pointed out the technique used in the paper is inter... | train | [
"7Z6jePkf2A",
"ZERu1NP4Ti6",
"U65hwHRvcN",
"eK8OwfpHgPI",
"qECKROy1K9A"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"It is still a mystery why neural networks can work without convexity (or, with many local minima). \nSome existing papers tried to explain it using “spurious” minima, which means that there is no non-increasing path to a global minimum.\nThis paper proved that deep neural networks have no spurious minima wi... | [
4,
5,
4,
4,
6
] | [
4,
4,
3,
4,
3
] | [
"iclr_2021_EZ8aZaCt9k",
"iclr_2021_EZ8aZaCt9k",
"iclr_2021_EZ8aZaCt9k",
"iclr_2021_EZ8aZaCt9k",
"iclr_2021_EZ8aZaCt9k"
] |
iclr_2021_RVANVvSi8MZ | Weighted Line Graph Convolutional Networks | Line graphs have been shown to be effective in improving feature learning in graph neural networks. Line graphs can encode topology information of their original graphs and provide a complementary representational perspective. In this work, we show that the encoded information in line graphs is biased. To overcome this issu... | withdrawn-rejected-submissions | The paper proposes a GNN model based on a weighted line graph (dual of the input graph), where information is simultaneously propagated on both graphs, coupling the two propagations at each step.
Overall, the reviewers were lukewarm about the paper, with criticisms raised including
- limited novelty in light of ... | train | [
"QrkQU64AyT",
"jZKxeaJ96z8",
"t8MHg4vYsud",
"yexbSYKhi7G",
"agNGW-Z1g3s",
"t0KqC1d4CnV",
"Bc0ZRGeWpfE",
"Mfs33e1HUw2",
"S6I6ov6moc",
"NHFrT1acso"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Question 1: The use of the word \"dynamics\" is confusing.\n\nAnswer: Thank you for pointing this out. We will fix this word to reduce the confusion.\n\nQuestion 2: It is not clear which datasets have node features.\n\nAnswer: Thank you for pointing this out. Social network datasets including COLLAB, IMDB, and RED... | [
-1,
-1,
-1,
-1,
-1,
5,
6,
4,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4,
4
] | [
"Mfs33e1HUw2",
"S6I6ov6moc",
"Bc0ZRGeWpfE",
"NHFrT1acso",
"t0KqC1d4CnV",
"iclr_2021_RVANVvSi8MZ",
"iclr_2021_RVANVvSi8MZ",
"iclr_2021_RVANVvSi8MZ",
"iclr_2021_RVANVvSi8MZ",
"iclr_2021_RVANVvSi8MZ"
] |
iclr_2021_IpPQmzj4T_ | Teleport Graph Convolutional Networks | We consider the limitations in message-passing graph neural networks. In message-passing operations, each node aggregates information from its neighboring nodes. To enlarge the receptive field, graph neural networks need to stack multiple message-passing graph convolution layers, which leads to the over-fitting issue a... | withdrawn-rejected-submissions | The paper seeks to increase receptive fields of GNNs by aggregating information beyond local neighborhoods with the idea of addressing oversmoothing and/or overfitting issues with message passing algorithms. The proposed method is simple and primarily makes use of node features and local structure similarities. In this... | train | [
"2cHEyWBv6l5",
"L7Gt8MXKv82",
"aSDfPljIte",
"zsQNcEJn1jy"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"A new architecture for graph neural networks, which the authors name as Teleport Graph Convolutional Networks (TGL), is proposed in this paper. Teleport graph convolution layer is proposed to address the limitations in message-passing operations of graph neural networks: 1. over-smoothing and 2. over-fitting. T... | [
5,
5,
3,
5
] | [
4,
4,
3,
5
] | [
"iclr_2021_IpPQmzj4T_",
"iclr_2021_IpPQmzj4T_",
"iclr_2021_IpPQmzj4T_",
"iclr_2021_IpPQmzj4T_"
] |
iclr_2021_RgDq8-AwvtN | Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data | While deep learning (DL) has resulted in major breakthroughs in many applications, the frameworks commonly used in DL remain fragile to seemingly innocuous changes in the data. In response, adversarial training has emerged as a principled approach for improving the robustness of DL against norm-bounded perturbations. ... | withdrawn-rejected-submissions | # Quality:
The technical contribution of the paper seems reasonable, and only minor points were highlighted by the reviewers.
# Clarity:
The paper would benefit from being more polished. During the rebuttal, the authors suggested that several reviewers misunderstood the paper. This alone should encourage th... | train | [
"oezlT96FUV8",
"geStKKzi5tv",
"s5T6SEJ8l-l",
"uN7_9dG4MVD",
"OuygYaTqZ8K",
"Cp1fg26LYWH",
"4kddaVVuOQ",
"DI1Rq8mKuRg",
"2_mwmMdreL4",
"J0Z2Afh2v36",
"grIXP5pdjtf",
"OAkXiM4G777",
"nib7RNeW-4",
"KB-KY5WijUp",
"ER1aRZ474Jg"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper extends current adversarial learning approaches beyond imperceptible L_p norm perturbations. The proposed approach can handle many models of natural variation, such as a change in brightness. The main idea behind the approach is to use unsupervised approaches such as GANs to model the natural variation. ... | [
5,
-1,
-1,
5,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
-1,
2,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_RgDq8-AwvtN",
"s5T6SEJ8l-l",
"OuygYaTqZ8K",
"iclr_2021_RgDq8-AwvtN",
"DI1Rq8mKuRg",
"iclr_2021_RgDq8-AwvtN",
"iclr_2021_RgDq8-AwvtN",
"2_mwmMdreL4",
"OAkXiM4G777",
"grIXP5pdjtf",
"ER1aRZ474Jg",
"uN7_9dG4MVD",
"oezlT96FUV8",
"4kddaVVuOQ",
"iclr_2021_RgDq8-AwvtN"
] |
iclr_2021_ccwT339SIu | Contrastive Video Textures | Existing methods for video generation struggle to generate more than a short sequence of frames. We introduce a non-parametric approach for infinite video generation based on learning to resample frames from an input video. Our work is inspired by Video Textures, a classic method relying on pixel similarity to stitch s... | withdrawn-rejected-submissions | The paper initially had mixed reviews (4,5,6). The main issues raised were:
1) limited novelty (re-using/integrating components) [R2];
2) limited generalization ability since the model needs to be retrained on every video [R2, R3];
3) limited applicability - experiments limited to certain domain of video, while result... | train | [
"wltSC9o2bbt",
"-CALA1wtOY",
"2VlAhcu1a0",
"zLpFPhQFarT",
"F2UGOUpcnlr",
"DNFQvDfn0Ai"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**NOTE.** We kindly request you to watch this updated video (https://youtu.be/jtFHqFOaQXQ) with more comparisons to baselines, clearly highlighting the advantage of our approach. \n\n**Generalization.** Our work is similar to test-time training methods such as SinGAN[1], Deep Image Prior[2] which train example-spe... | [
-1,
-1,
-1,
6,
4,
5
] | [
-1,
-1,
-1,
3,
5,
5
] | [
"F2UGOUpcnlr",
"zLpFPhQFarT",
"DNFQvDfn0Ai",
"iclr_2021_ccwT339SIu",
"iclr_2021_ccwT339SIu",
"iclr_2021_ccwT339SIu"
] |
iclr_2021__lV1OrJIgiG | Model-based Navigation in Environments with Novel Layouts Using Abstract 2-D Maps | Efficiently training agents with planning capabilities has long been one of the major challenges in decision-making. In this work, we focus on zero-shot navigation ability on a given abstract 2-D occupancy map, like human navigation by reading a paper map, by treating it as an image. To learn this ability, we need to e... | withdrawn-rejected-submissions | The paper considers the problem of 2D point-goal navigation in novel environments given access to an abstract occupancy grid map of the environment, together with knowledge of the agent's state and the goal location typical of point-goal navigation. The paper proposes learning a navigation policy in a model-based fashi... | val | [
"41ejbm5DVG2",
"N0LEd6zhUGZ",
"4I_X3x0FPHa",
"JTOZFICcRbf",
"fuh5BJ-taTB",
"Cgg72BAjPg4",
"bmwqquAeBKu",
"hVDgjgQca6",
"6Jb1uqgD4XI"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"[EDIT AFTER DISCUSSIONS] I thank the authors for their answer to my comments. I agree with the summary of the Area Chair and do not wish to modify my score.\n[/EDIT]\n\n##########################################################################\nSummary:\nThis paper addresses the problem of zero-shot naviga... | [
6,
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021__lV1OrJIgiG",
"iclr_2021__lV1OrJIgiG",
"41ejbm5DVG2",
"N0LEd6zhUGZ",
"hVDgjgQca6",
"6Jb1uqgD4XI",
"iclr_2021__lV1OrJIgiG",
"iclr_2021__lV1OrJIgiG",
"iclr_2021__lV1OrJIgiG"
] |
iclr_2021_LuyryrCs6Ez | CURI: A Benchmark for Productive Concept Learning Under Uncertainty | Humans can learn and reason under substantial uncertainty in a space of infinitely many concepts, including structured relational concepts (“a scene with objects that have the same color”) and ad-hoc categories defined through goals (“objects that could fall on one’s head”). In contrast, standard classification benchma... | withdrawn-rejected-submissions | This paper was reviewed by 3 experts in the field. The reviewers raised their concerns on lack of novelty, unconvincing experiment, and the presentation of this paper, While the paper clearly has merit, the decision is not to recommend acceptance. The authors are encouraged to consider the reviewers' comments when revi... | val | [
"YMyC4vWi-rC",
"We67RaMaK9k",
"3T1ZUQU2jA1",
"lxwrCPmcHrz",
"k2gfboXN2dV",
"F55c4p2wCFM",
"4fHkdxU8KE",
"9ZmZ8sbSlA"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary:\n\nThe following work presents a CLEVR-based compositionality benchmark. The task of the model is to verify logical statements about an image, and in order to achieve such, must learn how to map individual statements to a composition of functions over the image checking for color, placement, shape, etc. S... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
5
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_LuyryrCs6Ez",
"iclr_2021_LuyryrCs6Ez",
"iclr_2021_LuyryrCs6Ez",
"iclr_2021_LuyryrCs6Ez",
"9ZmZ8sbSlA",
"YMyC4vWi-rC",
"We67RaMaK9k",
"iclr_2021_LuyryrCs6Ez"
] |
iclr_2021_-p6rexF3qdQ | Learn Robust Features via Orthogonal Multi-Path | It is now widely known that by adversarial attacks, clean images with invisible perturbations can fool deep neural networks.
To defend against adversarial attacks, we design a block containing multiple paths to learn robust features, and the parameters of these paths are required to be orthogonal to each other.
... | withdrawn-rejected-submissions | I thank the authors and reviewers for the lively discussions. Although reviewers mentioned the work has potential to improve adversarial robustness, they agreed that the current draft needs a bit more work, especially to strengthen its experimental results and comparisons with related works.
| train | [
"k5E2KvOXV1T",
"nlMw6wZ4oM",
"4Z-DcXXaNky",
"GDpHzX7EP5g",
"CLsU0I5rHXY",
"i75nlbBR3UD",
"HZ5ppa4jg50",
"xXs_eQpCwND"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper addresses the problem of adversarial defence, by proposing to build multiple parallel orthogonal layers to replace a regular neural network layer. The layers in the OMP block are trained to be diverse and orthogonal to each other. Experiments on both white-box and black-box attacks, with or without adve... | [
5,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
3,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"iclr_2021_-p6rexF3qdQ",
"HZ5ppa4jg50",
"k5E2KvOXV1T",
"i75nlbBR3UD",
"xXs_eQpCwND",
"iclr_2021_-p6rexF3qdQ",
"iclr_2021_-p6rexF3qdQ",
"iclr_2021_-p6rexF3qdQ"
] |
iclr_2021_c1zLYtHYyQG | Learning from Demonstrations with Energy based Generative Adversarial Imitation Learning | Traditional reinforcement learning methods usually deal with tasks with explicit reward signals. However, in the vast majority of cases, the environment does not feed back a reward signal immediately. This turns out to be a bottleneck for modern reinforcement learning approaches to be applied to more realistic scenario... | withdrawn-rejected-submissions | This work proposes to use an energy-based objective combined with generative adversarial networks for imitation learning. While most reviewers find the work easy to follow, with theoretical justifications (albeit mostly following from previous work) and good coverage of experimental results, all of them raised... | train | [
"BsrU0NlkTcG",
"8y1n_WLa59H",
"SQhsm3K1Od6",
"MlJIYXtNI6",
"vS3M6wPvTA",
"EvhLQAAeQOq",
"_3rxe_So8k5",
"XAg6uWZhPL",
"uciHshy3bfe"
] | [
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Authors,\n\nIt would be very nice to discuss the following paper [1], which also uses an energy-based model for imitation learning. \n\nThe energy function plays the role of the cost function for optimal control, and it can be learned from demonstration, such as human drivers for autonomous driving. It uses MCMC to ... | [
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
5
] | [
"iclr_2021_c1zLYtHYyQG",
"uciHshy3bfe",
"EvhLQAAeQOq",
"_3rxe_So8k5",
"XAg6uWZhPL",
"iclr_2021_c1zLYtHYyQG",
"iclr_2021_c1zLYtHYyQG",
"iclr_2021_c1zLYtHYyQG",
"iclr_2021_c1zLYtHYyQG"
] |
iclr_2021_bsRjn0RH620 | Towards Understanding the Cause of Error in Few-Shot Learning | Few-Shot Learning (FSL) is a challenging task of recognizing novel classes from scarce labeled samples. Many existing researches focus on learning good representations that generalize well to new categories. However, given low-data regime, the restricting factors of performance on novel classes has not been well studie... | withdrawn-rejected-submissions | This paper proposes a contribution aiming at understanding the cause of errors in few-shot learning. The motivation is interesting but the reviewers pointed out many aspects that require more precisions and polishing in addition to the fact that the upper bound provided it rather loose. The rebuttal provided addresses ... | train | [
"pHeWVsu2Nx",
"vFLgr8EYdT_",
"4zmP9DrG-x",
"MMYlPqn4p6N",
"jt26dfy0jdW",
"qAQu4AIDnty",
"J_fJukvQcgB",
"rzOaGvLwQFW",
"fMOaASHkBev"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Many thanks to the ACs and reviewers for taking the time and effort on our work. We will further polish this work following these constructive suggestions.",
"Thank you for reading our paper carefully and making constructive comments.\n\n1. Thank you for pointing out the typos and we will fix them as suggested.\n\n2. Actually, ... | [
-1,
-1,
-1,
-1,
-1,
4,
4,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"iclr_2021_bsRjn0RH620",
"J_fJukvQcgB",
"qAQu4AIDnty",
"fMOaASHkBev",
"rzOaGvLwQFW",
"iclr_2021_bsRjn0RH620",
"iclr_2021_bsRjn0RH620",
"iclr_2021_bsRjn0RH620",
"iclr_2021_bsRjn0RH620"
] |
iclr_2021_ZlIfK1wCubc | Contrasting distinct structured views to learn sentence embeddings | We propose a self-supervised method that builds sentence embeddings from the combination of diverse explicit syntactic structures of a sentence. We assume structure is crucial to build consistent representations, as we expect sentence meaning to be a function of both syntactic and semantic aspects. In this perspective, ... | withdrawn-rejected-submissions | I thank the authors both for going the extra mile in doing further experiments for their response, and for making the effort to synthesize the main comments and concerns of the reviewers.
Overall, I'm pretty sympathetic to the idea that syntactic and semantic representations should be very helpful to learning sentence em... | train | [
"ksZ0C8m5eNT",
"6yXQH_F9Yn",
"cbeO1Xh7MJE",
"BuJXY_ctPBz"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nThank the authors for showing their effort in revising the paper.\nSome of my concerns have been solved thanks to the rebuttal.\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ... | [
5,
-1,
3,
4
] | [
4,
-1,
4,
4
] | [
"iclr_2021_ZlIfK1wCubc",
"iclr_2021_ZlIfK1wCubc",
"iclr_2021_ZlIfK1wCubc",
"iclr_2021_ZlIfK1wCubc"
] |
iclr_2021_ggNgn8Fhr5Q | Frequency Decomposition in Neural Processes | Neural Processes are a powerful tool for learning representations of function spaces purely from examples, in a way that allows them to perform predictions at test time conditioned on so-called context observations. The learned representations are finite-dimensional, while function spaces are infinite-dimensional, and ... | withdrawn-rejected-submissions | The paper analyses the behaviour of Neural Processes in the frequency domain and, in particular, how it suppresses high-frequency components of the input functions. While this is entirely intuitive, the paper adds some theoretical analysis via the Nyquist-Shannon theorem. But the analysis remains too generic and it is ... | train | [
"ZvEa9hhrWuy",
"Cpf-cfnRskR",
"CHH_LZT2Tw7",
"T8LOMUG8gRe",
"zn-C7b8f6MK",
"zGljMfTAEOT",
"FeIOM__T90y",
"oOO8Ozv6ZU",
"hJvbcQwujE",
"JO8CLaolcCM",
"-nZRGcsWHcd",
"iUVRUOjTEj",
"TRZ7tsf6dFJ",
"Szock3BMhms",
"W2kU6ViwQh9",
"xix2Dkwp85s",
"V7dLlnEeZjT",
"KikP9tDQaX8"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_rev... | [
"This paper addresses an interesting and timely problem, which is to understand how Neural Processes work to learn a representation of a function space. Offering a closer investigation into a recently introduced framework, this work will likely be of interest to the ICLR community. The work focuses on the 1-dimensi... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"iclr_2021_ggNgn8Fhr5Q",
"Szock3BMhms",
"T8LOMUG8gRe",
"zn-C7b8f6MK",
"zGljMfTAEOT",
"oOO8Ozv6ZU",
"W2kU6ViwQh9",
"hJvbcQwujE",
"TRZ7tsf6dFJ",
"-nZRGcsWHcd",
"iUVRUOjTEj",
"xix2Dkwp85s",
"ZvEa9hhrWuy",
"KikP9tDQaX8",
"V7dLlnEeZjT",
"iclr_2021_ggNgn8Fhr5Q",
"iclr_2021_ggNgn8Fhr5Q",
... |
iclr_2021_0EJjoRbFEcX | Understanding Classifiers with Generative Models | Although deep neural networks are effective on supervised learning tasks, they have been shown to be brittle. They are prone to overfitting on their training distribution and are easily fooled by small adversarial perturbations. In this paper, we leverage generative models to identify and characterize instances where c... | withdrawn-rejected-submissions | As several reviewers pointed out, the contribution is too incremental from previous work. | train | [
"AS8JeVZV5kD",
"hBxR1dbJ9Vj",
"n1fnFvv_Kom",
"DoT5dwsL-Wv",
"WXB4wds4BmE",
"SYUZ5xzOp_L",
"9aWUUo2nKJO"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**Update after rebuttal:** The author rebuttal clarified some minor issues for me, but it did nothing to address my main concern, which is that very similar methods have been proposed before. I'm therefore keeping my score the same. \n\n---------------------------------------------\nThis paper proposes a simple m... | [
4,
-1,
-1,
-1,
5,
6,
5
] | [
4,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_0EJjoRbFEcX",
"WXB4wds4BmE",
"AS8JeVZV5kD",
"9aWUUo2nKJO",
"iclr_2021_0EJjoRbFEcX",
"iclr_2021_0EJjoRbFEcX",
"iclr_2021_0EJjoRbFEcX"
] |
iclr_2021_4CxsUBDQJqv | Learning Intrinsic Symbolic Rewards in Reinforcement Learning | Learning effective policies for sparse objectives is a key challenge in Deep Reinforcement Learning (RL). A common approach is to design task-related dense rewards to improve task learnability. While such rewards are easily interpreted, they rely on heuristics and domain expertise. Alternate approaches that train neura... | withdrawn-rejected-submissions | This paper proposes an algorithm to learn symbolic intrinsic rewards via a symbolic function generator. The policy optimizes this reward function and an evolutionary algorithm selects between a set of such policies. The core idea is that learning with such a symbolic reward function is useful in sparse reward environme... | train | [
"8gfmr-hFdhq",
"VCSrGjCAuOm",
"6RUY6xhG3k",
"Rpg9BNh9wm",
"UFwPZ7HSCpn",
"ip9jPKFvUiS",
"w4ZSa9h-rQe",
"GZboqjM96Nw"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all reviewers for their very helpful feedback.\n\nOne point we wish to make clear is around interpretability. Several comments pointed out that the discovered rewards are not interpretable. We completely agree. \n\nOur goal in this paper is not to discover interpretable reward functions. Rather, we tackle... | [
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_4CxsUBDQJqv",
"w4ZSa9h-rQe",
"ip9jPKFvUiS",
"w4ZSa9h-rQe",
"GZboqjM96Nw",
"iclr_2021_4CxsUBDQJqv",
"iclr_2021_4CxsUBDQJqv",
"iclr_2021_4CxsUBDQJqv"
] |
iclr_2021_8iW8HOidj1_ | Dream and Search to Control: Latent Space Planning for Continuous Control | Learning and planning with latent space dynamics has been shown to be useful for sample efficiency in model-based reinforcement learning (MBRL) for discrete and continuous control tasks. In particular, recent work, for discrete action spaces, demonstrated the effectiveness of latent-space planning via Monte-Carlo Tree ... | withdrawn-rejected-submissions | This paper proposes an extension to the Dreamer agent in which planning (either via MCTS or rollouts) is used to select actions, rather than sampling from the policy prior. The results show small improvements over the baseline Dreamer agent.
Pros:
- Important study on incorporating decision-time planning into Dyna-bas... | train | [
"wfPvhi619Ve",
"AOIovK-s5q",
"MVR7Wa3ysML",
"L0mLLhf7Yn",
"ihiaqF3SxFz",
"fq8nPYujPA8",
"OStEZYh92h",
"qgmwgsQO1BZ",
"lSNBJ0i5Zx"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your feedback. \n\n**`Is the policy prior for search updated based on the search policy (as in MuZero)?**\n\nOur method for updating the policy network differs significantly from MuZero, which uses the result of MCTS as supervision for the policy prior. \nIn order to maximize sample efficiency, our opt... | [
-1,
-1,
-1,
-1,
-1,
5,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"fq8nPYujPA8",
"OStEZYh92h",
"lSNBJ0i5Zx",
"qgmwgsQO1BZ",
"iclr_2021_8iW8HOidj1_",
"iclr_2021_8iW8HOidj1_",
"iclr_2021_8iW8HOidj1_",
"iclr_2021_8iW8HOidj1_",
"iclr_2021_8iW8HOidj1_"
] |
iclr_2021_9az9VKjOx00 | TopoTER: Unsupervised Learning of Topology Transformation Equivariant Representations | We present the Topology Transformation Equivariant Representation (TopoTER) learning, a general paradigm of unsupervised learning of node representations of graph data for the wide applicability to Graph Convolutional Neural Networks (GCNNs). We formalize the TopoTER from an information-theoretic perspective, by maximi... | withdrawn-rejected-submissions | The paper is concerned with learning transformation equivariant node representation of graph data in an unsupervised setting. The paper extends prior work in this topic by focusing on equivariance under topology transformations (adding/removing edges) and considering an information theoretic perspective. Reviewers high... | train | [
"rOxLjEneEyt",
"2GDp9XWtWTp",
"sLDfGf4Vz2",
"bpR-5ELa5eI",
"emhdb0ivR0",
"xSW77cTnmuc",
"HTWVrVUlxQC",
"0AsbYbsOHe8",
"G0NfFyS15d-",
"AkGMoxqEFiD"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper develops a framework for unsupervised learning of graphs. The goal is to build graph representation using an encoder that is useful for downstream tasks such as graph classification. The representation is computed with an encoder $E$ applied to a graph data $(X,A)$, containing vertex data $X$ and adjace... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"iclr_2021_9az9VKjOx00",
"AkGMoxqEFiD",
"G0NfFyS15d-",
"rOxLjEneEyt",
"HTWVrVUlxQC",
"0AsbYbsOHe8",
"iclr_2021_9az9VKjOx00",
"iclr_2021_9az9VKjOx00",
"iclr_2021_9az9VKjOx00",
"iclr_2021_9az9VKjOx00"
] |
iclr_2021_H6ZWlQrPGS2 | Fast Binarized Neural Network Training with Partial Pre-training | Binarized neural networks, networks with weights and activations constrained to lie in a 2-element set, allow for more time- and resource-efficient inference than standard floating-point networks. However, binarized neural networks typically take more training to plateau in accuracy than their floating-point counterpar... | withdrawn-rejected-submissions | ## Description
The paper asks whether it is possible to accelerate training of a binarized neural network from scratch to a given target accuracy [by starting with training a full-precision network]. The main claimed contributions are: the idea to use *partially* pretrained networks, experimental evidence r...
"WA9xIOZCZGv",
"CE7Hh9gJFje",
"B0RViYtspSD",
"6g67t0IvZHn",
"_nFP5RfY4_n",
"SRdC7SfFBUS",
"7laaCCGKl5",
"TTuodXc-I16",
"F7Cb5Dqje9a",
"z8HgJhB6eSF",
"0-_gcpVePLR",
"lY3NW41KteA",
"i-V21IxK46c",
"praWbZ5i3db",
"ldgxUogXDv2",
"opdqNfRjW9A",
"h4-wXzhNMG",
"V3vuXmHW9Wb",
"XCN2gwRGFVt... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
"The concern about applicability to other binary optimizers is a reasonable point, and indeed one that our experiments do not answer. To avoid generating potentially misleading conclusions about the performance of the method on binary optimizers not fully evaluated in the work, we will add clear statements to the i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
4
] | [
"_nFP5RfY4_n",
"TTuodXc-I16",
"7laaCCGKl5",
"SRdC7SfFBUS",
"lY3NW41KteA",
"z8HgJhB6eSF",
"opdqNfRjW9A",
"h4-wXzhNMG",
"ldgxUogXDv2",
"0-_gcpVePLR",
"h4-wXzhNMG",
"XCN2gwRGFVt",
"5KaoPSoBUCv",
"V3vuXmHW9Wb",
"XV0UJQAVnL",
"h4-wXzhNMG",
"iclr_2021_H6ZWlQrPGS2",
"iclr_2021_H6ZWlQrPGS2... |
iclr_2021_sxZvLS2ZPfH | MVP-BERT: Redesigning Vocabularies for Chinese BERT and Multi-Vocab Pretraining | Despite the development of pre-trained language models (PLMs) significantly raise the performances of various Chinese natural language processing (NLP) tasks, the vocabulary for these Chinese PLMs remain to be the one provided by Google Chinese Bert, which is based on Chinese characters. Second, the masked language mod... | withdrawn-rejected-submissions | Three reviewers agreed to reject and the other reviewer also suggested it is below the threshold. | train | [
"QCKeuao6Aw",
"f9jTV2ZKPcA",
"4bvZLS7MDM",
"gaNvv0yVkv9"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This work attempt to handle the vocabulary problem in Chinese pre-training, which is indeed an unsolved problem (for comparison, byte-pair encoding has been dominant in English pre-training models). Recently, there are some work trying to combine char-based vocab and word-based vocab for Chinese pre-training, ... | [
4,
2,
5,
3
] | [
3,
5,
4,
4
] | [
"iclr_2021_sxZvLS2ZPfH",
"iclr_2021_sxZvLS2ZPfH",
"iclr_2021_sxZvLS2ZPfH",
"iclr_2021_sxZvLS2ZPfH"
] |
iclr_2021_dN_iVr6iNuU | Preventing Value Function Collapse in Ensemble Q-Learning by Maximizing Representation Diversity | The first deep RL algorithm, DQN, was limited by the overestimation bias of the learned Q-function. Subsequent algorithms proposed techniques to reduce this problem, without fully eliminating it. Recently, the Maxmin and Ensemble Q-learning algorithms used the different estimates provided by ensembles of learners to re... | withdrawn-rejected-submissions | The paper tackles the Q-value overestimation problem by proposing a regularization technique to maximize diversity in representation space, preventing ensemble "collapse", in order to improve the efficacy of techniques such as Maxmin and Ensemble Q-learning. Reviewers praised the originality of the method and the inter... | val | [
"ZMTW4hBmbi2",
"iSXqnbd6hN",
"QRoreoXnJWd",
"Y_9vXodaaPo",
"n2uWgpPY_yD",
"Xn3Klt-U0Ij"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Sorry that I missed the plots Figure 6 & 7 in the Appendix. Thanks for pointing that out.",
">It would be great if the authors could keep the algorithm fixed and then show the performance across different games.\n\n\nThank you for comments. For the main results section, we chose to show the best results for each... | [
-1,
-1,
4,
5,
5,
6
] | [
-1,
-1,
1,
4,
4,
3
] | [
"iSXqnbd6hN",
"Xn3Klt-U0Ij",
"iclr_2021_dN_iVr6iNuU",
"iclr_2021_dN_iVr6iNuU",
"iclr_2021_dN_iVr6iNuU",
"iclr_2021_dN_iVr6iNuU"
] |
iclr_2021_io-EI8C0q6A | Unsupervised Cross-lingual Representation Learning for Speech Recognition | This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latent... | withdrawn-rejected-submissions | This work mainly applies wav2vec 2.0 to multilingual speech recognition and lacks novelty.
The various pre-training and fine-tuning mix-and-match combinations are specific to the speech recognition task. As suggested by the reviewers, it is recommended to resubmit to a speech conference.
Also, the paper lacks comparisons to SOTA on one...
"-1K4cvoCT1U",
"C2rsGoRsGfg",
"eFL9hSnwxKT",
"ywW0AjXpxrx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Authors extended the XLSR model from the previous mono-lingual task, where self-supervised learning was used in representation learning, to multi-lingual task. The basic idea does make a lot of sense, where all the data, irrespective of the language, is pushed through the representation learning task. We are anyw... | [
6,
4,
6,
5
] | [
3,
5,
4,
5
] | [
"iclr_2021_io-EI8C0q6A",
"iclr_2021_io-EI8C0q6A",
"iclr_2021_io-EI8C0q6A",
"iclr_2021_io-EI8C0q6A"
] |
iclr_2021_6Lhv4x2_9pw | Bayesian neural network parameters provide insights into the earthquake rupture physics. | I present a simple but informative approach to gain insight into the Bayesian neural network (BNN) trained parameters. I used 2000 dynamic rupture simulations to train a BNN model to predict if an earthquake can break through a simple 2D fault. In each simulation, fault geometry, stress conditions, and friction paramet... | withdrawn-rejected-submissions | The paper considers an interesting application of Bayesian neural nets to the geophysics domain; however, the paper does not make a novel contribution from the machine learning perspective, and the improvements on top of the previously proposed approach by Ahamed & Daub (2019) seem to be quite modest. Overall, the pap... | train | [
"brGJyqtZdLE",
"yRllK5g6NV1",
"_GCxNlK9-b",
"acgIJLTnQSx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a Bayesian neural network for predicting if an earthquake will break a fault or not, overcoming 'small data problem' and predicting model uncertainty. The data is composed of 8 features and a binary output, and the samples are all coming from simulations. An analysis on the means and standard d... | [
6,
4,
4,
4
] | [
4,
5,
4,
3
] | [
"iclr_2021_6Lhv4x2_9pw",
"iclr_2021_6Lhv4x2_9pw",
"iclr_2021_6Lhv4x2_9pw",
"iclr_2021_6Lhv4x2_9pw"
] |
iclr_2021_NPab8GcO5Pw | On the Landscape of Sparse Linear Networks | Network pruning, or sparse network has a long history and practical significance in modern applications. Although the loss functions of neural networks may yield bad landscape due to non-convexity, we focus on linear activation which already owes benign landscape. With no unrealistic assumption, we conclude the followi... | withdrawn-rejected-submissions | The paper studies optimization landscapes arising the fitting of sparse linear networks to data. It argues that for scalar outputs, every local minimum is global, while for d >= 3 dimensional outputs, there can be spurious local minimizers. The paper also argues that similar results hold for deep networks. Counterexamp... | val | [
"n9yGBO3v817",
"cD9uyN76eE",
"6pszxx4mT2",
"fpj1WPfqewq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the loss landscapes of sparse linear networks. It proves that under squared loss, (1) spurious local minimum does not exist when the output dimension is one, or with separated first layer and orthogonal training data; and (2) for two-layer sparse linear networks, the good property in (1) does no... | [
4,
4,
7,
5
] | [
4,
4,
5,
4
] | [
"iclr_2021_NPab8GcO5Pw",
"iclr_2021_NPab8GcO5Pw",
"iclr_2021_NPab8GcO5Pw",
"iclr_2021_NPab8GcO5Pw"
] |
iclr_2021_yuXQOhKRjBr | Towards Powerful Graph Neural Networks: Diversity Matters | Graph neural networks (GNNs) offer us an effective framework for graph representation learning via layer-wise neighborhood aggregation. Their success is attributed to their expressive power at learning representation of nodes and graphs. To achieve GNNs with high expressive power, existing methods mainly resort to comp... | withdrawn-rejected-submissions | The reviewers, including me, agreed that considering sampling diversity is interesting and reasonable when designing GNNs. However, the proposed method is too heuristic and empirical. Without the authors' feedback, I tend to reject this work. | train | [
"ZUXzuNxuC_i",
"jtAVvwjGNl",
"eB32BJqEXl",
"RH2YuZE6e5-",
"YJNi6_sPi2t",
"Ef0H5XSp-re"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**Review Summary**\n\nI have a question about the correctness of this paper because I do not think the statement and proof of the main theorem are mathematically rigorous nor correct. Regarding empirical evaluation, I think this paper correctly evaluates the proposed diverse sampling enhances the performance of ba... | [
3,
-1,
4,
4,
4,
4
] | [
3,
-1,
5,
4,
4,
5
] | [
"iclr_2021_yuXQOhKRjBr",
"ZUXzuNxuC_i",
"iclr_2021_yuXQOhKRjBr",
"iclr_2021_yuXQOhKRjBr",
"iclr_2021_yuXQOhKRjBr",
"iclr_2021_yuXQOhKRjBr"
] |
iclr_2021_V6WHleb2nV | Data Transfer Approaches to Improve Seq-to-Seq Retrosynthesis | Retrosynthesis is a problem to infer reactant compounds to synthesize a given
product compound through chemical reactions. Recent studies on retrosynthesis
focus on proposing more sophisticated prediction models, but the dataset to feed
the models also plays an essential role in achieving the best gen... | withdrawn-rejected-submissions | While the authors thought that the paper had some strong experimental comparisons, there were serious concerns with novelty and paper claims. For a stronger ML paper the authors would need to either: (a) design a new training methodology beyond pre-training that is better suited for leveraging multiple datasets for Ret... | train | [
"bTMoOJMy8cp",
"xi4yr3esJ8K",
"hisof0IqJln",
"BqLOEkw_01w",
"2tjlrJeLtvM"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Summary of the paper\nThis paper proposes to improve retrosynthesis models with pre-training and self-training techniques. For pre-training, the model is trained on the USPTO reaction dataset and fine-tuned on USPTO-50K dataset. For self-training, the model is trained on artificial reaction instances generated... | [
4,
-1,
4,
4,
4
] | [
5,
-1,
4,
4,
5
] | [
"iclr_2021_V6WHleb2nV",
"iclr_2021_V6WHleb2nV",
"iclr_2021_V6WHleb2nV",
"iclr_2021_V6WHleb2nV",
"iclr_2021_V6WHleb2nV"
] |
iclr_2021_Bw7VC-DJUM | Learning Spatiotemporal Features via Video and Text Pair Discrimination | Current video representations heavily rely on learning from manually annotated video datasets which are time-consuming and expensive to acquire. We observe videos are naturally accompanied by abundant text information such as YouTube titles and Instagram captions. In this paper, we leverage this visual-textual connecti... | withdrawn-rejected-submissions | The paper presents an approach for weakly supervised pre-training for videos using textual information provided with web videos on Youtube and Instagram.
## Strengths
* The work shows strong results with a relatively small dataset and computational resources compared to other work in the area of self/weakly supervised lear...
"45RjaN9deCh",
"yny35_RYxRM",
"hDgOwps8m2a",
"JWDTY69xtbq",
"v4abCIA4hKa",
"2xjYrv3qllc",
"bflqN3UBA0A",
"SHSm2pF5Ij9"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"*Summary:*\nThe paper proposes an approach to learn a video feature backbone in an unsupervised manner through the use of video titles (text modality) associated with user generated content from Youtube or Instagram. The key idea is to use a contrastive loss that increases the similarity score between a positive p... | [
6,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
3,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"iclr_2021_Bw7VC-DJUM",
"2xjYrv3qllc",
"SHSm2pF5Ij9",
"45RjaN9deCh",
"bflqN3UBA0A",
"iclr_2021_Bw7VC-DJUM",
"iclr_2021_Bw7VC-DJUM",
"iclr_2021_Bw7VC-DJUM"
] |
iclr_2021_IPGZ6S3LDdw | Fast MNAS: Uncertainty-aware Neural Architecture Search with Lifelong Learning | Sampling-based neural architecture search (NAS) always guarantees better convergence yet suffers from huge computational resources compared with gradient-based approaches, due to the rollout bottleneck -- exhaustive training for each sampled generation on proxy tasks. This work provides a general pipeline to accelerate... | withdrawn-rejected-submissions | This paper presents a compelling mechanism for reducing the neural architecture search process based on accumulated experience that the reviewers found compelling with significant improvements in performance. This is an intriguing idea. However, there were concerns about clarity that need to be addressed, and more co... | train | [
"MMRUrYA5VXQ",
"gSUhsTi1Nzs",
"73BYh9jUnE",
"6O0QPa0XRbN",
"ohASsq4cpnb",
"dZCoTtZSKTN",
"DpAJQETfm_Z",
"vav5GvMpHu"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**Q1. This paper is generally well-written and well-motivated, except for some unclear sentences.**\n\n**A1.** Thank you for pointing out these problems, which may lead to an unclear understanding. We will improve these issues in subsequent versions.\n\n**Q2. Architecture knowledge is not well described.**\n\n**A... | [
-1,
-1,
-1,
-1,
5,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
5,
3,
3,
4
] | [
"ohASsq4cpnb",
"dZCoTtZSKTN",
"DpAJQETfm_Z",
"vav5GvMpHu",
"iclr_2021_IPGZ6S3LDdw",
"iclr_2021_IPGZ6S3LDdw",
"iclr_2021_IPGZ6S3LDdw",
"iclr_2021_IPGZ6S3LDdw"
] |
iclr_2021_om1guSP_ray | Graph Pooling by Edge Cut | Graph neural networks (GNNs) are very efficient at solving several tasks in graphs such as node classification or graph classification. They come from an adaptation of convolutional neural networks on images to graph structured data. These models are very effective at finding patterns in images that can discriminate im... | withdrawn-rejected-submissions | This paper proposes a graph pooling mechanism based on adaptive edge scores that are then fed into a min-cut procedure.
Reviewers acknowledged that this is an important topic of study, but all agreed that the current manuscript does not provide enough evidence about the significance and novelty of their proposed appro... | train | [
"zg4YDOBqtkj",
"K58H8p9iIec",
"M5RLQ1eMDJV",
"vlqTqWqjcaq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a novel pooling layer for graph neural networks. Pooling in GNNs amounts to merging nodes that are very similar through the layers. Specifically, the paper proposes to merge nodes whose edges have a high score according to the edge cuts. The edge score in practice is computed in each layer using... | [
4,
3,
3,
5
] | [
4,
5,
5,
3
] | [
"iclr_2021_om1guSP_ray",
"iclr_2021_om1guSP_ray",
"iclr_2021_om1guSP_ray",
"iclr_2021_om1guSP_ray"
] |
iclr_2021_s4D2nnwCcM | Uncertainty-Based Adaptive Learning for Reading Comprehension | Recent years have witnessed a surge of successful applications of machine reading comprehension. Of central importance to the tasks is the availability of massive amount of labeled data, which facilitates the training of large-scale neural networks. However, in many real-world problems, annotated data are expensive to ... | withdrawn-rejected-submissions | All reviewers agree that the current approach is very similar to traditional uncertainty-based active learning, and that the empirical results are inconclusive, so at this point the paper is not ready for publication. | train | [
"03NqELnhr22",
"IZtHGOaU7Ol",
"Js0sc5bv4nI",
"0fA2D8InCrI",
"7Hio8UttMJn",
"1Cf_kh3bmbT",
"YJscYpR5Psu",
"8jnKSTiiv9l"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**Update after author response:** I appreciate the authors' efforts to address my concerns. Thanks for the correction on QBC, I appreciate it. I still believe that the paper needs to accompany a more comprehensive evaluation and qualitative insights to highlight the effectiveness of the proposed method. For instan... | [
4,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
4,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"iclr_2021_s4D2nnwCcM",
"03NqELnhr22",
"1Cf_kh3bmbT",
"YJscYpR5Psu",
"8jnKSTiiv9l",
"iclr_2021_s4D2nnwCcM",
"iclr_2021_s4D2nnwCcM",
"iclr_2021_s4D2nnwCcM"
] |
iclr_2021_lXW6Sk1075v | FORK: A FORward-looKing Actor for Model-Free Reinforcement Learning | In this paper, we propose a new type of Actor, named forward-looking Actor or FORK for short, for Actor-Critic algorithms. FORK can be easily integrated into a model-free Actor-Critic algorithm. Our experiments on six Box2D and MuJoCo environments with continuous state and action spaces demonstrate significant performa... | withdrawn-rejected-submissions | All reviewers agreed that the novelty of the method was not at the level expected for publication, and also raised a number of technical concerns regarding the approach. There was no response from the authors on these issues, hence the reviewer consensus is that the paper is not ready for publication at this time. | val | [
"E1WWYvF7wvh",
"LCIhBQg_UNS",
"lpb-7rl2ND2",
"PqCWz-yrPLv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes to combine ideas from model-based RL into model-free off-policy policy gradient algorithms (like SAC and TD3). Specifically, the paper proposes to learn auxiliary models of environment rewards and dynamics and use a two-step rollout from these models during the computation of the policy gradient... | [
5,
3,
5,
3
] | [
4,
5,
4,
5
] | [
"iclr_2021_lXW6Sk1075v",
"iclr_2021_lXW6Sk1075v",
"iclr_2021_lXW6Sk1075v",
"iclr_2021_lXW6Sk1075v"
] |
iclr_2021_Im43P9kuaeP | Certified Watermarks for Neural Networks | Watermarking is a commonly used strategy to protect creators' rights to digital images, videos and audio. Recently, watermarking methods have been extended to deep learning models -- in principle, the watermark should be preserved when an adversary tries to copy the model. However, in practice, watermarks can often be ... | withdrawn-rejected-submissions | While it’s commonly acknowledged that the paper is well written, the reviews are a bit split: R3 and R1 are mildly positive/negative, respectively, R2 and R4 both voted for reject. R2 asked many questions regarding experiments, which were addressed in the details in the rebuttal. R4 raised 6 questions regarding the bou... | train | [
"AGJegn66wO5",
"68iPRAFqiWq",
"Xi_xdG434Th",
"BhGnmeM-AKe",
"Zr5llzueht8",
"ZTbGo_zUySM",
"IwWwCTYSSv",
"0wb-8nvpfyo",
"G7AjdHNlcy5",
"KsTer-5yEo5",
"bod1fQWJThd"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors have created a well written paper for a new watermarking method that addresses an important challenge in security intellectual property rights for deep learning models. They claim their method has resistance to l2 attacks within a certifiable bound, and show experimental results that the method is also... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_Im43P9kuaeP",
"BhGnmeM-AKe",
"IwWwCTYSSv",
"bod1fQWJThd",
"AGJegn66wO5",
"G7AjdHNlcy5",
"G7AjdHNlcy5",
"KsTer-5yEo5",
"iclr_2021_Im43P9kuaeP",
"iclr_2021_Im43P9kuaeP",
"iclr_2021_Im43P9kuaeP"
] |
iclr_2021_fgpXAu8puGj | NAHAS: Neural Architecture and Hardware Accelerator Search | Neural architectures and hardware accelerators have been two driving forces for the rapid progress in deep learning.
Although previous works have optimized either neural architectures given fixed hardware, or hardware given fixed neural architectures, none has considered optimizing them jointly. In this paper, we... | withdrawn-rejected-submissions | This paper considers the problem of searching over the joint space of hardware and neural architectures to trade-off accuracy and latency.
Reviewers raised some valid questions about the following aspects:
1. Low technical novelty
2. Prior work on hardware and neural architecture co-design, and closely related work a... | train | [
"8fxhY9sKzv",
"_V8B21k1DkO",
"zcIcN0z4HuE",
"Sn5Ow78BSsH",
"qht9SH31hml"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for their feedback. We would like to address some common questions:\n\n**Q1.** NAHAS novelty compared to related work?\n\n**A1.** To our best knowledge, NAHAS is the first work on co-optimizing neural architectures and hardware accelerators based on industry-standard accelerators and large-s... | [
-1,
4,
5,
5,
6
] | [
-1,
4,
2,
3,
4
] | [
"iclr_2021_fgpXAu8puGj",
"iclr_2021_fgpXAu8puGj",
"iclr_2021_fgpXAu8puGj",
"iclr_2021_fgpXAu8puGj",
"iclr_2021_fgpXAu8puGj"
] |
iclr_2021_R7aFOrR0b2 | Dataset Curation Beyond Accuracy | Neural networks are known to be data-hungry, and collecting large labeled datasets is often a crucial step in deep learning deployment. Researchers have studied dataset aspects such as distributional shift and labeling cost, primarily using downstream prediction accuracy for evaluation. In sensitive real-world applicat... | withdrawn-rejected-submissions | The authors empirically analyse the properties of datasets which lead to poor calibration. In particular, they show that high class imbalance, high degree of label noise, and small dataset size are all likely to lead to poor overall calibration or poor per-class calibration. While there are some interesting insights in... | train | [
"TTtiwm_TVRC",
"aVVxVfb3gvn",
"LY3yLjXs7-",
"X9QI5g_7BqY",
"Bf5rDzJjmlq"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nWe thank the reviewers for their thoughtful comments and ideas to improve the paper. We address major questions from reviewers (R1, R2, R3, R4) below.\n\n\n\n# R1. \n\n----------------\n\nHow do these experiments relate to accuracy? \n\n- There is no simple relationship between calibration and accuracy. It has p... | [
-1,
4,
4,
4,
6
] | [
-1,
4,
3,
4,
3
] | [
"iclr_2021_R7aFOrR0b2",
"iclr_2021_R7aFOrR0b2",
"iclr_2021_R7aFOrR0b2",
"iclr_2021_R7aFOrR0b2",
"iclr_2021_R7aFOrR0b2"
] |
iclr_2021_LMslR3CTzE_ | Neural Subgraph Matching | Subgraph matching is the problem of determining the presence and location(s) of a given query graph in a large target graph.
Despite being an NP-complete problem, the subgraph matching problem is crucial in domains ranging from network science and database systems to biochemistry and cognitive science.
Ho... | withdrawn-rejected-submissions | This paper proposes an interesting approach for learning to decide whether a query graph is isomorphic to a subgraph within the target graph. The approach has a number of interesting aspects from the machine learning perspective, e.g. the anchored graphs and the order embeddings. Empirical results show promise in abl... | train | [
"_JHNIR8THDM",
"zRwLTyr4WJK",
"4-1aYm9yPvr",
"drPH-wNqTG",
"G9pvxwDGNA2",
"NIOQ8ec3T82",
"7s-j4HgZQfb",
"oKt4h_7Ncs_"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a sub-graph isomorphism method using neural networks and embeddings that are supposed to preserve important topological structure of subgraphs. Sub-graph isomorphism is NP-complete and has fascinated researchers with approximate solutions using heuristics and pre-processing (graph indexes) of m... | [
5,
-1,
-1,
-1,
-1,
5,
3,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
"iclr_2021_LMslR3CTzE_",
"_JHNIR8THDM",
"oKt4h_7Ncs_",
"7s-j4HgZQfb",
"NIOQ8ec3T82",
"iclr_2021_LMslR3CTzE_",
"iclr_2021_LMslR3CTzE_",
"iclr_2021_LMslR3CTzE_"
] |
iclr_2021_xQnvyc6r3LL | Finding Patient Zero: Learning Contagion Source with Graph Neural Networks | Locating the source of an epidemic, or patient zero (P0), can provide critical insights into the infection's transmission course and allow efficient resource allocation.
Existing methods use graph-theoretic centrality measures and expensive message-passing algorithms, requiring knowledge of the underlying dynam... | withdrawn-rejected-submissions | The paper introduces a GNN approach to solve the problem of source detection in an epidemic. While the paper contains some interesting new ideas, the reviewers raised some important concerns about the paper and so the paper should not be accepted in the current form. In particular,
- the paper does not motivate the M... | train | [
"9PFUt76mOH0",
"nJYcF9CZdjP",
"8me4farh9X",
"yTBDeoAyfBg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the problem of source detection in an epidemics when one observes the underlying graph and a snapshot of the population at a given time i.e. who is infected or not infected. For a SIR (or SEIR) model, the authors propose to use GNN for this task. The learning procedure is then the following: giv... | [
3,
5,
7,
3
] | [
3,
4,
4,
5
] | [
"iclr_2021_xQnvyc6r3LL",
"iclr_2021_xQnvyc6r3LL",
"iclr_2021_xQnvyc6r3LL",
"iclr_2021_xQnvyc6r3LL"
] |
iclr_2021_dcktlmtcM7 | Neural Time-Dependent Partial Differential Equation | Partial differential equations (PDEs) play a crucial role in studying a vast number of problems in science and engineering. Numerically solving nonlinear and/or high-dimensional PDEs is frequently a challenging task. Inspired by the traditional finite difference and finite elements methods and emerging advancements in ... | withdrawn-rejected-submissions | The paper introduces an approach for learning the dynamics of PDEs. It makes use of bi-directional LSTMs trained to regress future values from past observations, up to a given horizon. Experiments are performed on data generated from numerical solvers on two examples, inviscid Burgers and a Navier-Stokes system. While ... | train | [
"WqoR7svzhhO",
"VYd6D2wXxrb",
"eu6FwLoJ1S9",
"IxGPi0XUzUi"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThe paper proposes to use LSTM to learn partial differential equations. Experiments include Wave equation, Heat equation, Burgers' equation, and Navier-Stokes equation. They compare with PINN on Allen-Cahn and Burgers equation.\n\nThe paper is nicely written and easy to understand. But I have several con... | [
3,
5,
4,
5
] | [
4,
5,
4,
5
] | [
"iclr_2021_dcktlmtcM7",
"iclr_2021_dcktlmtcM7",
"iclr_2021_dcktlmtcM7",
"iclr_2021_dcktlmtcM7"
] |
iclr_2021_iG_Cg6ONjX | A General Computational Framework to Measure the Expressiveness of Complex Networks using a Tight Upper Bound of Linear Regions | The expressiveness of deep neural network (DNN) is a perspective to understand the surprising performance of DNN. The number of linear regions, i.e. pieces that a piece-wise-linear function represented by a DNN, is generally used to measure the expressiveness. And the upper bound of regions number partitioned by a rect... | withdrawn-rejected-submissions | This paper studies the number of linear regions of a multi-layer ReLU network and gives a new upper bound. Reviewers concern about the writing and the results are incremental compared with previous results. | train | [
"Dm1r3iUkxc",
"zlX8MOJIgxR",
"axwq1hdqw2V",
"Tz9i2hFUJa2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: This paper extends on the framework of matrix computation in Hinz & Van d Geer (2019) to give a tight upper bound for linear regions. In particular, the paper shows improvement over the bounds derived in Serra et al. 2018 and extends the bounds for more complex networks with skip and residual connections.... | [
4,
4,
3,
4
] | [
3,
2,
4,
2
] | [
"iclr_2021_iG_Cg6ONjX",
"iclr_2021_iG_Cg6ONjX",
"iclr_2021_iG_Cg6ONjX",
"iclr_2021_iG_Cg6ONjX"
] |
iclr_2021_GbCkSfstOIA | Semi-Supervised Learning via Clustering Representation Space | We proposed a novel loss function that combines supervised learning with clustering in deep neural networks. Taking advantage of the data distribution and the existence of some labeled data, we construct a meaningful latent space. Our loss function consists of three parts, the quality of the clustering result, the mar... | withdrawn-rejected-submissions | All reviewers agree that the writing is not precise. It does not help to find any novelty in the ideas, and the limited and too quickly described experiences are not convincing enough to forgive this problem. The authors chose not to oppose or comment on the detailed arguments provided by the reviewers. I agree with th... | val | [
"PG0xY51egep",
"6dXNH1GXgGz",
"T9Izk3xoR-f",
"ikdGRvylMJy"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper attempts to address the semi-supervised learning topic by proposing a method based on an aggregated loss considering both cross-entry and Davies-Bouldin Index. Cross-entropy is used to ensure the maximum margin between classes and Davies-Bouldin Index is applied to the labeled data and to the whole data... | [
4,
2,
4,
4
] | [
5,
5,
4,
5
] | [
"iclr_2021_GbCkSfstOIA",
"iclr_2021_GbCkSfstOIA",
"iclr_2021_GbCkSfstOIA",
"iclr_2021_GbCkSfstOIA"
] |
iclr_2021_pXmtZdDW16 | Embedding a random graph via GNN: mean-field inference theory and RL applications to NP-Hard multi-robot/machine scheduling | We develop a theory for embedding a random graph using graph neural networks (GNN) and illustrate its capability to solve NP-hard scheduling problems. We apply the theory to address the challenge of developing a near-optimal learning algorithm to solve the NP-hard problem of scheduling multiple robots/machines with tim... | withdrawn-rejected-submissions | This work develops an approach to embed random graphs (some even with dependent edges, hence going beyond classical models such as Erdos-Renyi G(n,p)) using GNNs, and uses these to develop approximation algorithms for solving NP-hard scheduling problems, which typically involve some notion of minimizing weighted comple... | val | [
"dDM5DooCVh",
"80kRcg1xsVG",
"rJ-TDWcwgfr",
"cM1r1X83_Sd",
"crDNmvvnIld",
"A6cclveZHZF",
"7XKT3CPME9r",
"VuXHPNJb9j1"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"#### Summary\nThe paper considers the problem of multi-robot reward collection (MRRC) in which a number of robots are supposed to perform a number of tasks, in a centralized setting. Finding the optimal scheduling (assigning robots to tasks) under reasonable assignment constraints poses an NP-hard problem. The pap... | [
5,
6,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
4,
-1,
-1,
-1,
-1,
3,
2
] | [
"iclr_2021_pXmtZdDW16",
"iclr_2021_pXmtZdDW16",
"80kRcg1xsVG",
"dDM5DooCVh",
"VuXHPNJb9j1",
"7XKT3CPME9r",
"iclr_2021_pXmtZdDW16",
"iclr_2021_pXmtZdDW16"
] |
iclr_2021_WlT94P_zuHF | Transformer-QL: A Step Towards Making Transformer Network Quadratically Large | Transformer networks have shown outstanding performance on many natural language processing tasks. However the context length (the number of previous tokens on which the output states depend) of a Transformer network grows at best linearly with the memory and computational power used. This limitation prevents a transfo... | withdrawn-rejected-submissions | This paper introduces Transformer-QL, a new variant of transformer networks that can process long sequences more efficiently. This is an important research problem, which has been widely studied recently. Unfortunately, this paper does not compare to such previous works (eg. see "Efficient transformers: A survey"), the... | train | [
"-4DW1m-CoGz",
"hbKcZiXjhad",
"GGMOZ9g1qog",
"Tkjgb9P_xsx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose Transformer-QL to capture the contextual information in multiple temporal scales - finer scales to capture recent past information and coarser scales to capture distance past information. The results show significant improvement in the perplexity score over Transformer-XL and Compressive Transf... | [
4,
5,
5,
7
] | [
4,
3,
4,
4
] | [
"iclr_2021_WlT94P_zuHF",
"iclr_2021_WlT94P_zuHF",
"iclr_2021_WlT94P_zuHF",
"iclr_2021_WlT94P_zuHF"
] |
iclr_2021_C5th0zC9NPQ | Sensory Resilience based on Synesthesia | Situated cognition depends on accessing environmental state through sensors. Engineering and cost constraints usually lead to limited “pathways” where, for example, a vision sub-system only includes a camera and the software to deal with it. This traditional and rational design style entails any hardware defect on the ... | withdrawn-rejected-submissions | All three reviewers recommend rejection, based on multiple (mostly shared) concerns. While the authors address the concerns in their rebuttal, the unanimously negative scores remain. I don't see basis to accept the paper. | train | [
"F1MBR_6I6uE",
"NY2LQSJ_jIY",
"SsSxZP1GF9Y",
"ZHQpAqaqukB",
"alY5qxYMKRL",
"5dbVz7t-5vw"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the summary. We confirm that is what we have attempted with this paper. One minor point: The agent is not restricted to ML tasks. The SP is implemented as such in the frame of this paper, just because auto-encoders allow us approximating the theta function in section 3.2. The agent itself---reduced t... | [
-1,
-1,
-1,
3,
2,
5
] | [
-1,
-1,
-1,
4,
5,
4
] | [
"ZHQpAqaqukB",
"alY5qxYMKRL",
"5dbVz7t-5vw",
"iclr_2021_C5th0zC9NPQ",
"iclr_2021_C5th0zC9NPQ",
"iclr_2021_C5th0zC9NPQ"
] |
iclr_2021_mOO-LfEVZK | Manifold-aware Training: Increase Adversarial Robustness with Feature Clustering | The problem of defending against adversarial attacks has attracted increasing attention in recent years. While various types of defense methods (e.g., adversarial training, detection and rejection, and recovery) were proven empirically to bring robustness to the network, their weakness was shown by later works. Inspire... | withdrawn-rejected-submissions | Two reviewers expressed clear concerns about the paper but the authors did not provide any response. | train | [
"oxM2g9iZ9IY",
"DiDrtOmSKyF",
"tD4cYRakmsO",
"zzQzZ9CO3wY",
"VdxOoQuC6FI"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Results: To defend against adversarial attacks, this work experimentally analyzes the feature distribution of traditionally- trained CNNs for gaining more knowledge about adversarial examples. Two properties, i.e., the non-clustering property and confusing-distance property, of the feature distribution are identif... | [
4,
5,
7,
1,
5
] | [
3,
5,
3,
5,
3
] | [
"iclr_2021_mOO-LfEVZK",
"iclr_2021_mOO-LfEVZK",
"iclr_2021_mOO-LfEVZK",
"iclr_2021_mOO-LfEVZK",
"iclr_2021_mOO-LfEVZK"
] |
iclr_2021_d9Emve8gG5E | OFFER PERSONALIZATION USING TEMPORAL CONVOLUTION NETWORK AND OPTIMIZATION | Lately, personalized marketing has become important for retail/e-retail firms due to significant rise in online shopping and market competition. Increase in online shopping and high market competition has led to an increase in promotional expenditure for online retailers, and hence, rolling out optimal offers has becom... | withdrawn-rejected-submissions | Reviews are somewhat mixed, but all are below the acceptance threshold. Reviewers praise the overall application and the presentation (though there is some variance in response to this aspect), but have concerns about lack of certain comparisons and technical novelty. | train | [
"mNZUxZC8Ue",
"FlEQFBbgxM3",
"pdrFMuLKyHg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper uses TCN structure and try to understand consumer purchase pattern for personalized marketing strategies. The corresponding optimization optimizes the purchase probability. However, there are several concerns.\n\n1, The motivation is not clear. The contribution of the proposed model doesn't support pers... | [
4,
5,
3
] | [
2,
4,
4
] | [
"iclr_2021_d9Emve8gG5E",
"iclr_2021_d9Emve8gG5E",
"iclr_2021_d9Emve8gG5E"
] |
iclr_2021_Bx05YH2W8bE | DyHCN: Dynamic Hypergraph Convolutional Networks | Hypergraph Convolutional Network (HCN) has become a default choice for capturing high-order relations among nodes, \emph{i.e., } encoding the structure of a hypergraph. However, existing HCN models ignore the dynamic evolution of hypergraphs in the real-world scenarios, \emph{i.e., } nodes and hyperedges in a hypergrap... | withdrawn-rejected-submissions | The paper builds upon hypergraph convolutional networks (HCN), extending them to time-varying hypergraphs in dynamical settings. However, as some of the reviewers pointed out, it would be useful to explore other system variations to better justify the choices in this particular approach; perhaps an evaluation on a wid... | train | [
"A3L80ymjEpo",
"eQRFryeEzQ",
"Rual_mHv1Z6",
"gOEKZxneyGa",
"nDAw5qLe4l",
"sqJNEFeUxT",
"Wu2tq8LPIRI",
"jtLx_XFoQQ",
"R7CZtgRy6Uw",
"YAOrTmNzKD"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your insightful comments.\n\nFollowing your comments, we have updated the manuscript with more recent references, more explanations on the experiment, and added an NYC-Taxi dataset for the taxi demand prediction task.\n\nQ1. An only discrete-time dynamic hypergraph is considered.\n\nA1. In the introd... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"jtLx_XFoQQ",
"R7CZtgRy6Uw",
"gOEKZxneyGa",
"YAOrTmNzKD",
"Wu2tq8LPIRI",
"nDAw5qLe4l",
"iclr_2021_Bx05YH2W8bE",
"iclr_2021_Bx05YH2W8bE",
"iclr_2021_Bx05YH2W8bE",
"iclr_2021_Bx05YH2W8bE"
] |
iclr_2021_-yo2vfTt_Cg | Adaptive norms for deep learning with regularized Newton methods | We investigate the use of regularized Newton methods with adaptive norms for optimizing neural networks. This approach can be seen as a second-order counterpart of adaptive gradient methods, which we here show to be interpretable as first-order trust region methods with ellipsoidal constraints. In particular, we prove ... | withdrawn-rejected-submissions | The paper considers adaptive stochastic optimization methods and shows that they can be re-interpreted as first order trust region methods with an ellipsoidal trust region, they consider a related second order method, and they show convergence properties and empirical results.
The results are of interest, but the sign... | test | [
"PMu5QYC63-1",
"cKhutvcKMzg",
"ccUr5Sy4WVt",
"CZel7TGzDC",
"uovdEGVecOr",
"gm5br35wFIU",
"jYiVkgDo6L",
"Z7_WLIzVt8N",
"0CkA4tDkUXb"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks to the authors for their comments.\n\nMy main concern is still lack of good baseline to compare against existing work, specifically because of changes in critical hyper-parameter: batch size when comparing various methods. Justification in the footnote 12, pg. 26. does not explain the discrepancy of why f... | [
-1,
-1,
-1,
-1,
-1,
5,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
4
] | [
"CZel7TGzDC",
"Z7_WLIzVt8N",
"jYiVkgDo6L",
"0CkA4tDkUXb",
"gm5br35wFIU",
"iclr_2021_-yo2vfTt_Cg",
"iclr_2021_-yo2vfTt_Cg",
"iclr_2021_-yo2vfTt_Cg",
"iclr_2021_-yo2vfTt_Cg"
] |
iclr_2021_IrofNLZuWF | Stochastic Optimization with Non-stationary Noise: The Power of Moment Estimation | We investigate stochastic optimization under weaker assumptions on the distribution of noise than those used in usual analysis. Our assumptions are motivated by empirical observations in training neural networks. In particular, standard results on optimal convergence rates for stochastic optimization assume either the... | withdrawn-rejected-submissions | The paper studies the problem of stochastic optimization where the gradient noise process is non-stationary. While this is an important problem in the community, the reviewers find that the assumptions are poorly justified. While the authors provided extensive feedback, the reviewers did not change their initial assess... | test | [
"bEBhBghH5Iy",
"B5GbGlTwalu",
"_JxTI0ZFTCF",
"nFntLz3BNvs",
"Rz095QoQv9",
"qkgC9OhnQiH",
"26A-3ywsfcD",
"gUPJRool5_w",
"zc_cg3_8Fhb",
"cE4TJ_s6-ZK",
"jatMVG1wDX",
"XF0-hHagePh",
"tJ_96LNCHHo"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"## Summary\n\nThe authors provide a new analysis of SGD and versions of RMSprop, taking into account possible non-stationarity of the gradient noise. In particular, the authors propose. \n(i) the convergence analysis of SGD with stepsizes dependent on the second moment of stochastic gradients and a \"norm\" versio... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"iclr_2021_IrofNLZuWF",
"qkgC9OhnQiH",
"nFntLz3BNvs",
"Rz095QoQv9",
"bEBhBghH5Iy",
"jatMVG1wDX",
"gUPJRool5_w",
"XF0-hHagePh",
"tJ_96LNCHHo",
"iclr_2021_IrofNLZuWF",
"iclr_2021_IrofNLZuWF",
"iclr_2021_IrofNLZuWF",
"iclr_2021_IrofNLZuWF"
] |
iclr_2021_yZkF6xqhfQ | Do Transformers Understand Polynomial Simplification? | Recently researchers have demonstrated that Transformers can be trained to learn symbolic tasks such as solving integration and differential equations in an end-to-end fashion. In these setups, for an input symbolic expression, the Transformer predicts the final solution in a single step. Since such tasks may consist o... | withdrawn-rejected-submissions | While the reviewers find the experiments in the paper somewhat interesting, they find that the paper does not sufficiently address whether the limitations shown for models in this paper translate to larger models and other, more realistic, tasks, or an artifact of the setup considered in the paper. Overall the takeawa... | train | [
"Zx8WRLxWsX",
"oCR9tikJgzM",
"V8xJWLScLcR",
"F1RxdoOPHn",
"IJb4K62msO",
"e9ijauFNdTx",
"qBGebI1oP0Y",
"TSJb-Pr1y4Y",
"L4KunL5ql5r",
"vyl0h9u1k5",
"KTJ4xp9etKU",
"8YIQrMyWtTx",
"dWBgvQvwmr8",
"ANkct9HsvT",
"MMUFF9E2xmm"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the capability of the transformer architecture to\nperform rewriting to normal form in a simplified polynomial setting.\n\nIt is a continuation of the research by Piotrowski et al (PUBK) in the\narea of using neural nets to do symbolic rewriting, followed later by\nLample&Charton (LC).\n\nSeveral... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_yZkF6xqhfQ",
"IJb4K62msO",
"ANkct9HsvT",
"e9ijauFNdTx",
"ANkct9HsvT",
"qBGebI1oP0Y",
"TSJb-Pr1y4Y",
"8YIQrMyWtTx",
"dWBgvQvwmr8",
"Zx8WRLxWsX",
"iclr_2021_yZkF6xqhfQ",
"MMUFF9E2xmm",
"iclr_2021_yZkF6xqhfQ",
"iclr_2021_yZkF6xqhfQ",
"iclr_2021_yZkF6xqhfQ"
] |
iclr_2021_5g5x0eVdRg | DHOG: Deep Hierarchical Object Grouping | Unsupervised learning of categorical representations using data augmentations appears to be a promising approach and has proven useful for finding suitable representations for downstream tasks. However current state-of-the-art methods require preprocessing (e.g. Sobel edge detection) to work. We introduce a mutual info... | withdrawn-rejected-submissions | During the discussion phase, although the reviewers acknowledge superior empirical performance of the proposed method, they shared the two major concerns:
1. Lack of theoretical or empirical justification/proof for the key statement: "the current methods do not effectively maximize the MI objective because greedy SGD t... | val | [
"twnaLNPV8dt",
"HOSsCHhuWnK",
"WSrGccEVcgZ",
"Gu_WCuWOGD9",
"MeAc6TJH8DC",
"hoEHThJA2mQ",
"voY5_wIG_ko",
"XMorvtGrF-X",
"yNb-ihZOfJj"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors study the problem of unsupervised representation learning from data augmentations. Specifically, the authors claim that existing methods are prone to getting stuck at local minima owing to easy-to-learn local representations that optimise the commonly used MI objectives, and then propose... | [
4,
-1,
-1,
-1,
-1,
-1,
6,
3,
4
] | [
2,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_5g5x0eVdRg",
"iclr_2021_5g5x0eVdRg",
"twnaLNPV8dt",
"voY5_wIG_ko",
"XMorvtGrF-X",
"yNb-ihZOfJj",
"iclr_2021_5g5x0eVdRg",
"iclr_2021_5g5x0eVdRg",
"iclr_2021_5g5x0eVdRg"
] |
iclr_2021_O358nrve1W | Neurally Guided Genetic Programming for Turing Complete Programming by Example | The ability to synthesise source code from input/output examples allows nonexperts to generate programs, and experts to abstract away a wide range of simple programming tasks. Current research in this area has explored neural synthesis, SMT solvers, and genetic programming; each of these approaches is limited, however,... | withdrawn-rejected-submissions | There are some interesting ideas in this paper, but I agree with reviewers that without a comparison to existing work, it is hard to place this work in its proper context. The authors make several arguments in dismissing the need for side-by-side comparisons, but I do not find these arguments convincing.
* First, the ... | train | [
"N8lENsq2mKL",
"9GEldn8XHap",
"ac4R_KMnZzx",
"SHpTrc48Ex3",
"nKaiWwVozz2",
"objH7qR4dVP",
"FF-xHUMXyk4",
"tfHYyEJBtKh"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Time taken for a genetic search varies between a few minutes, in the case of success, to a little over 4 hours in the worst case. We were not optimising particularly for clock-time in this research, so a range of optimizations are likely to be possible here -- in particular we note that genetic search procedures a... | [
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"9GEldn8XHap",
"SHpTrc48Ex3",
"tfHYyEJBtKh",
"objH7qR4dVP",
"FF-xHUMXyk4",
"iclr_2021_O358nrve1W",
"iclr_2021_O358nrve1W",
"iclr_2021_O358nrve1W"
] |
iclr_2021_PBfaUXYZzU | Class-Weighted Evaluation Metrics for Imbalanced Data Classification | Class distribution skews in imbalanced datasets may lead to models with prediction bias towards majority classes, making fair assessment of classifiers a challenging task. Balanced Accuracy is a popular metric used to evaluate a classifier’s prediction performance under such scenarios. However, this metric falls short ... | withdrawn-rejected-submissions | This paper proposes a weighted balanced accuracy metric to evaluate the performance of imbalanced multiclass classification. The metric is based on a one-versus-all decomposition from multi-class to binary, and then aggregating the metrics on the binary classification sub-problems in a weighted manner. The authors hope... | train | [
"7d7oK9mzld8",
"__GtS1g236w",
"rOfDp6aSwh7",
"GQEzKZb5dGL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a simple addition to the Balanced Accuracy approach - which the authors refer to as ‘importance’. However, there is nothing in the formulation of this concept which requires that this is an importance and could in fact be any form of weighting. The paper evaluates the new metric - but only again... | [
6,
3,
3,
4
] | [
4,
5,
4,
4
] | [
"iclr_2021_PBfaUXYZzU",
"iclr_2021_PBfaUXYZzU",
"iclr_2021_PBfaUXYZzU",
"iclr_2021_PBfaUXYZzU"
] |
iclr_2021_Lvb2BKqL49a | Regularized Mutual Information Neural Estimation | With the variational lower bound of mutual information (MI), the estimation of MI can be understood as an optimization task via stochastic gradient descent. In this work, we start by showing how Mutual Information Neural Estimator (MINE) searches for the optimal function T that maximizes the Donsker-Varadhan representa... | withdrawn-rejected-submissions | This paper is a study in optimizing the Donsker-Varadhan lower bound on mutual information focusing on a "drift" problem. The bound is a difference between terms which appears to have an extra degree of freedom where the two terms increase or decrease together. They propose a fix for this problem. The authors state t... | train | [
"9gqOLUXduKo",
"AunbqRjBJpZ",
"IvERaFx2Zb",
"RT9-Il4loBW",
"T2HxFrrwzd",
"Hv5sCJ0smv8",
"MumJMF4g3MV",
"TXx3rgMeQcL",
"Y1gfqVSXPL_",
"qdhcOU5Y9C"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"### EDIT:\n\n I thank the authors for their detailed response. I also appreciate the effort that's been put into refining the draft. Unfortunately, I'm still not very happy with the motivation of attacking the drifting phenomenon on MINE. The main reason for removing the drifting effect is for moving average of hi... | [
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"iclr_2021_Lvb2BKqL49a",
"iclr_2021_Lvb2BKqL49a",
"AunbqRjBJpZ",
"9gqOLUXduKo",
"9gqOLUXduKo",
"qdhcOU5Y9C",
"Y1gfqVSXPL_",
"AunbqRjBJpZ",
"iclr_2021_Lvb2BKqL49a",
"iclr_2021_Lvb2BKqL49a"
] |
iclr_2021_ol_xwLR2uWD | Reviving Autoencoder Pretraining | The pressing need for pretraining algorithms has been diminished by numerous advances in terms of regularization, architectures, and optimizers. Despite this trend, we re-visit the classic idea of unsupervised autoencoder pretraining and propose a modified variant that relies on a full reverse pass trained in conjuncti... | withdrawn-rejected-submissions | This work presents a practical unsupervised pretraining strategy that does not require layer-wise training stages. Clearly this is an area that has lot of potential and the work seems to head in the right direction.
However, despite a very positive review, I share the same concerns raised by the remaining 3 reviewers.... | train | [
"M2J_DwsBc92",
"mHSjWDP76Xn",
"u1AeekdNfJQ",
"EpUdgq3vOsq",
"Ze-q16teP9V",
"oOH66sSh61U",
"ezgJ9KR-A9W",
"innsq3xI-kC",
"SNUhhiKXQXf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an auto-encoder pre-training approach for regularising the neural network parameters, which can be used in many different existing neural models. The proposed approach is build based on the unsupervised auto-encoder pre-training and the orthogonality constraints. A number of classical applicati... | [
5,
4,
-1,
-1,
-1,
-1,
-1,
3,
9
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_ol_xwLR2uWD",
"iclr_2021_ol_xwLR2uWD",
"mHSjWDP76Xn",
"innsq3xI-kC",
"SNUhhiKXQXf",
"M2J_DwsBc92",
"iclr_2021_ol_xwLR2uWD",
"iclr_2021_ol_xwLR2uWD",
"iclr_2021_ol_xwLR2uWD"
] |
iclr_2021_pbXQtKXwLS | Guiding Neural Network Initialization via Marginal Likelihood Maximization | We propose a simple, data-driven approach to help guide hyperparameter selection for neural network initialization. We leverage the relationship between neural network and Gaussian process models having corresponding activation and covariance functions to infer the hyperparameter values desirable for model initializati... | withdrawn-rejected-submissions | Addressing the initialization issue in DNNs is an important topic, and the proposed approach is found by the reviewers to be interesting. However, the reviewers feel that to clearly promote this research beyond the 'proof of concept' phase, deeper investigation in multi-layer architectures would be required. This would... | val | [
"WtwaY2lQ26h",
"GFbKYCsLQ5K",
"c64ID2O5xWE",
"VK_ARMusLr",
"eI2MU-_LAZm",
"j_qjzxlx3BO",
"isII6JgK9U",
"IDFEClCpi6f",
"X07ykCltA5z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"### Summary\nThe authors propose a rule for neural network (NN) initialization, which takes into account input data. \n\nThey suppose that weights and biases of a NN are randomly drawn resp. from $\\mathcal{N}(0, \\sigma_w^2 / N)$ and $\\mathcal{N}(0, \\sigma_b^2)$, where $N$ is the number of inputs. Then, they ar... | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2021_pbXQtKXwLS",
"iclr_2021_pbXQtKXwLS",
"iclr_2021_pbXQtKXwLS",
"isII6JgK9U",
"X07ykCltA5z",
"c64ID2O5xWE",
"GFbKYCsLQ5K",
"WtwaY2lQ26h",
"iclr_2021_pbXQtKXwLS"
] |
iclr_2021_J_pvI6ap5Mn | Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization | Graph neural networks (GNNs) have been shown with superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs. Some recent work started to study the pre-training of GNNs. However, none of them provide theoretical insights into the design of their frameworks, or clear r... | withdrawn-rejected-submissions | The paper presents a novel framework from transfer learning over GNNs. Experiments ought to better substantiate how structural differences/similarities are measured, as well as relying on prior art to measure transferability success. A plan for incorporating (structural) features would also strengthen the present work. | val | [
"cd83F0ZekwN",
"qG05RqeoeCq",
"rKrjg2twMP",
"_f2cpuuQXqL",
"dq8XVrcrCL",
"Q2wds1JtVh",
"RlSkyU8wl3S",
"GpOZpfY-GF",
"snCtKNlg-gx"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose ego-graph information maximization (EGI) to build more transferable GNN. They further theoretically study the transferability of EGI. \n\n\nThe article is fairly well structured, apart from the literature review, which is somewhat missing significant related works, such as Ron Levie's papers, w... | [
4,
6,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_J_pvI6ap5Mn",
"iclr_2021_J_pvI6ap5Mn",
"cd83F0ZekwN",
"cd83F0ZekwN",
"qG05RqeoeCq",
"snCtKNlg-gx",
"GpOZpfY-GF",
"iclr_2021_J_pvI6ap5Mn",
"iclr_2021_J_pvI6ap5Mn"
] |