paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2020_Syx9ET4YPB | Do Image Classifiers Generalize Across Time? | We study the robustness of image classifiers to temporal perturbations derived from videos. As part of this study, we construct ImageNet-Vid-Robust and YTBB-Robust, containing a total of 57,897 images grouped into 3,139 sets of perceptually similar images. Our datasets were derived from ImageNet-Vid and Youtube-BB respectively and thoroughly re-annotated by human experts for image similarity. We evaluate a diverse array of classifiers pre-trained on ImageNet and show a median classification accuracy drop of 16 and 10 percent on our two datasets. Additionally, we evaluate three detection models and show that natural perturbations induce both classification and localization errors, leading to a median drop in detection mAP of 14 points. Our analysis demonstrates that perturbations occurring naturally in videos pose a substantial and realistic challenge to deploying convolutional neural networks in environments that require both reliable and low-latency predictions. | reject | This paper proposes to evaluate the robustness of CNN models on similar video frames. The authors construct two carefully labeled video databases. Based on extensive experiments, they conclude that state-of-the-art classification and detection models are not robust when tested on very similar video frames. While Reviewer #1 is overall positive about this work, Reviewers #2 and #3 rated weak reject with various concerns. Reviewer #2 is concerned about limited contribution, since the results are in line with intuition. Reviewer #3 appreciates the value of the databases, but is concerned that the defined metrics make the contribution look larger than it is. The authors and Reviewer #3 had an in-depth discussion on the metric, and Reviewer #3 was not convinced. Given the concerns raised by the reviewers, the ACs agree that this paper cannot be accepted in its current state. | train | [
"r1lxp3pjjr",
"Hke6nQt5ir",
"SkxbfbuVjS",
"B1l7zeOVir",
"S1lLhkuVjH",
"rkxcWM73FH",
"Skx8zfYntr",
"S1eNC80O5B"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > “Adversarial papers do not \"reports the maximum error made in a \\delta ball around each image\": there is no exhaustive search, where one would feed the network with all possible images at \\delta distance. Each adversarial method returns one and only one image, which is estimated to be the worst case error i... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"Hke6nQt5ir",
"SkxbfbuVjS",
"rkxcWM73FH",
"Skx8zfYntr",
"S1eNC80O5B",
"iclr_2020_Syx9ET4YPB",
"iclr_2020_Syx9ET4YPB",
"iclr_2020_Syx9ET4YPB"
] |
iclr_2020_SJgn464tPB | Stabilizing Off-Policy Reinforcement Learning with Conservative Policy Gradients | In recent years, advances in deep learning have enabled the application of reinforcement learning algorithms in complex domains. However, they lack the theoretical guarantees which are present in the tabular setting and suffer from many stability and reproducibility problems \citep{henderson2018deep}. In this work, we suggest a simple approach for improving stability and providing probabilistic performance guarantees in off-policy actor-critic deep reinforcement learning regimes. Experiments on continuous action spaces, in the MuJoCo control suite, show that our proposed method reduces the variance of the process and improves the overall performance. | reject | This paper proposes a new algorithmic approach to reduce variance in off-policy policy-gradient updates.
Multiple reviewers were concerned with both the soundness of the proposed approach and the cost of using rollouts. In particular, the interaction between the target policy and the behavior policy, and how they are swapped, was unclear, and the algorithms in the paper do not match the code provided.
The results show an apparent reduction in variance across runs compared with TD3: clear improvements in two domains, minor improvements and/or an increase in variance in others. In some domains there was a decrease in mean performance. The reviewers wanted comparisons with other baseline methods (even in terms of variance across runs).
It is difficult to evaluate the results in this paper, as the performance is averaged over only 5 runs, and runs which result in "failure" are discarded from the analysis. The authors explain this was done in the original TD3 code, and one can sympathise with following common practices in the literature. However, the consensus of the reviewers and the AC was that this choice was not well defended, obscures a key difficulty of the learning problem, and makes algorithms look considerably stronger than they actually are. This is particularly confounding in a paper about improving the robustness of learning algorithms. This is not acceptable empirical practice, and we strongly encourage the authors to discontinue it.
The reviewers gave nice suggestions including changing the pitch of the paper, and including results in noisy tasks. To reduce the burden of doing more scientific experiments, we suggest the authors start with small or even designed problems to carefully study robustness of learning and the potential improvements due to their algorithm. After this is done in a statistically significant way, it would be natural to move to more demonstration style scaled up results. | train | [
"B1xKNGqaFH",
"S1xsOUhhYH",
"BJxj6-AGor",
"BylDdIxzsH",
"SkgW3ikGsr",
"B1ey8nZ1cB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"\n[Summary]\nThis paper proposes an approach called conservative policy gradients to stabilize the training of deep policy gradient methods. At fixed intervals, the current policy and a separate target policy are evaluated with a number of rollouts. The target policy is then updated to match the current policy onl... | [
3,
3,
-1,
-1,
-1,
1
] | [
4,
3,
-1,
-1,
-1,
3
] | [
"iclr_2020_SJgn464tPB",
"iclr_2020_SJgn464tPB",
"S1xsOUhhYH",
"B1xKNGqaFH",
"B1ey8nZ1cB",
"iclr_2020_SJgn464tPB"
] |
iclr_2020_H1gpET4YDB | Blockwise Self-Attention for Long Document Understanding | We present BlockBERT, a lightweight and efficient BERT model that is designed to better model long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix to reduce both memory consumption and training time, which also enables attention heads to capture either short- or long-range contextual information. We conduct experiments on several benchmark question answering datasets with various paragraph lengths. Results show that BlockBERT uses 18.7-36.1% less memory and reduces training time by 12.0-25.1%, while having comparable and sometimes better prediction accuracy, compared to an advanced BERT-based model, RoBERTa. | reject | This paper proposes blockwise masked attention mechanisms to sparsify Transformer architectures, the main motivation being reducing memory usage with long sequence inputs. The resulting model is called BlockBERT. The paper falls within a recent trend of papers compressing/sparsifying/distilling Transformer architectures, a very relevant area of research given the daunting resources needed to train these models.
While the proposed contribution is very simple and interesting, it also looks like a rather small increment over prior work, namely the Sparse Transformer and Adaptive Span Transformer, among others. The experiments are rather limited and the memory/time reduction is not overwhelming (18.7-36.1% less memory, 12.0-25.1% less time), while final accuracy is sometimes sacrificed by a few points. No comparison is provided to other adaptively sparse attention Transformer architectures (Correia et al., EMNLP 2019, or Sukhbaatar et al., ACL 2019), which should also provide memory reductions due to the sparsity of the gradients, which require fewer activations to be cached. I suggest addressing these concerns in an eventual resubmission of the paper. | test | [
"rJx4LsnKsr",
"rkeazEaFor",
"B1g2WKhKjS",
"r1gHbOnFir",
"SJlkRGil9B",
"HJgqUsBatS",
"SkgHIb4G9B"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We appreciate your feedback and will keep polishing our work. Thank you very much.\n\nWe believe the reviewer has misunderstood the scope of the paper’s contribution. This paper addresses an issue with *self-attention*, which is used (via transformers) in many other applications other than BERT, such as machine tr... | [
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
1,
3,
4
] | [
"SJlkRGil9B",
"iclr_2020_H1gpET4YDB",
"SkgHIb4G9B",
"HJgqUsBatS",
"iclr_2020_H1gpET4YDB",
"iclr_2020_H1gpET4YDB",
"iclr_2020_H1gpET4YDB"
] |
iclr_2020_SJgCEpVtvr | DYNAMIC SELF-TRAINING FRAMEWORK FOR GRAPH CONVOLUTIONAL NETWORKS | Graph neural networks (GNNs) such as GCN, GAT, and MoNet have achieved state-of-the-art results on semi-supervised learning on graphs. However, when the number of labeled nodes is very small, the performance of GNNs degrades dramatically. Self-training has proved effective for resolving this issue; however, the performance of self-trained GCN is still inferior to that of G2G and DGI in many settings. Moreover, the additional model complexity makes it more difficult to tune the hyper-parameters and do model selection. We argue that the power of self-training is still not fully explored for the node classification task. In this paper, we propose a unified end-to-end self-training framework called \emph{Dynamic Self-training}, which generalizes and simplifies prior work. A simple instantiation of the framework based on GCN is provided, and empirical results show that our framework outperforms all previous methods, including GNNs, embedding-based methods, and self-trained GCNs, by a noticeable margin. Moreover, compared with standard self-training, hyper-parameter tuning for our framework is easier. | reject | The paper develops a self-training framework for graph convolutional networks, addressing the setting of partially labeled graphs with a limited number of labeled nodes. The reviewers found the paper interesting. One reviewer notes the ability to better exploit available information and raised questions about computational cost. Another reviewer felt the difference from previous work was limited, but that the good results speak for themselves. The final reviewer raised concerns about novelty and the limited improvement in results. The authors provided detailed responses to these queries, providing additional results.
The paper has improved over the course of the review, but due to a large number of stronger papers, was not accepted at this time. | train | [
"rJe6somhiH",
"SyxZv7N2oB",
"rke6baXnsH",
"ByxRt-tpFr",
"r1lIXkR6YH",
"H1eZGztV9B"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the constructive comments.\n\n1. \"As the self-training is going on, are there different computational costs or are they about the same?\"\nThe computational costs is only slightly increased. The reason is that the computational cost of the original GCN model is dominated by previous laye... | [
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
5,
3,
1
] | [
"H1eZGztV9B",
"r1lIXkR6YH",
"ByxRt-tpFr",
"iclr_2020_SJgCEpVtvr",
"iclr_2020_SJgCEpVtvr",
"iclr_2020_SJgCEpVtvr"
] |
iclr_2020_rJe04p4YDB | Semi-supervised Learning by Coaching | Recent semi-supervised learning (SSL) methods often have a teacher to train a student in order to propagate labels from labeled data to unlabeled data. We argue that a weakness of these methods is that the teacher does not learn from the student’s mistakes during the course of student’s learning. To address this weakness, we introduce Coaching, a framework where a teacher generates pseudo labels for unlabeled data, from which a student will learn and the student’s performance on labeled data will be used as reward to train the teacher using policy gradient.
Our experiments show that Coaching significantly improves over state-of-the-art SSL baselines. For instance, on CIFAR-10, with only 4,000 labeled examples, a WideResNet-28-2 trained by Coaching achieves 96.11% accuracy, which is better than the 94.9% achieved by the same architecture trained with 45,000 labeled examples. On ImageNet with 10% labeled examples, Coaching trains a ResNet-50 to 72.94% top-1 accuracy, comfortably outperforming the existing state-of-the-art by more than 4%. Coaching also scales successfully to the high data regime with full ImageNet. Specifically, with an additional 9 million unlabeled images from OpenImages, Coaching trains a ResNet-50 to 82.34% top-1 accuracy, setting a new state-of-the-art for the architecture on ImageNet without using extra labeled data. | reject | The authors propose a new method of semi-supervised learning and provide empirical results. Reviewers found the presentation of the method confusing and poorly motivated. Despite the rebuttal, reviewers still did not find clarity on how or why the method works as well as it does. | train | [
"HJe2x9RTFr",
"HkeoS1G2tr",
"rJllPrxcsB",
"r1gYGQg9iH",
"S1lX9YcJsB",
"r1lQdj51ir",
"BkeL-j5ksS",
"HJxQgs5kiS",
"ByljDc5koB",
"SJg8ZTnHcB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies the teacher-student models in semi-supervised learning. Unlike previous methods in which only the student will learn from the teacher, this paper proposes a method to let the teacher learn from the student by reinforcement learning. Experimental results demonstrate the proposal’s performance.\n\... | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_rJe04p4YDB",
"iclr_2020_rJe04p4YDB",
"HJe2x9RTFr",
"SJg8ZTnHcB",
"iclr_2020_rJe04p4YDB",
"HkeoS1G2tr",
"HJxQgs5kiS",
"HJe2x9RTFr",
"SJg8ZTnHcB",
"iclr_2020_rJe04p4YDB"
] |
iclr_2020_SJlJSaEFwS | Robust Cross-lingual Embeddings from Parallel Sentences | Recent advances in cross-lingual word embeddings have primarily relied on mapping-based methods, which project pretrained word embeddings from different languages into a shared space through a linear transformation. However, these approaches assume word embedding spaces are isomorphic between different languages, which has been shown not to hold in practice (Søgaard et al., 2018), and fundamentally limits their performance. This motivates investigating joint learning methods which can overcome this impediment, by simultaneously learning embeddings across languages via a cross-lingual term in the training objective. Given the abundance of parallel data available (Tiedemann, 2012), we propose a bilingual extension of the CBOW method which leverages sentence-aligned corpora to obtain robust cross-lingual word and sentence representations. Our approach significantly improves cross-lingual sentence retrieval performance over all other approaches, as well as convincingly outscores mapping methods while maintaining parity with jointly trained methods on word-translation. It also achieves parity with a deep RNN method on a zero-shot cross-lingual document classification task, requiring far fewer computational resources for training and inference. As an additional advantage, our bilingual method also improves the quality of monolingual word vectors despite training on much smaller datasets. We make our code and models publicly available.
| reject | The authors propose a new approach to learning cross-lingual embeddings from parallel data. For an overview of this literature, see [0]. Reviews are mixed, and some objections seem unresolved. The authors also ignore a new line of research in which pretrained language models are used to align vocabularies across languages, e.g., [1-2]. The paper would also benefit from a discussion of massively parallel resources such as JW300 and WikiMatrix. Finally, it feels odd not to compare to distilled representations from NMT architectures, e.g., [3].
[0] http://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1419
[1] https://www.aclweb.org/anthology/N19-1162.pdf
[2] https://www.aclweb.org/anthology/K19-1004.pdf
[3] https://arxiv.org/abs/1901.07291 | train | [
"r1xkQQ6iYr",
"BygJyY52FS",
"H1enz_F3sB",
"ryl8DUK2iB",
"SklRxBtnjH",
"SJxyNE0FiS",
"H1gf_MJZjS",
"BJeROFsS5H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
"\n***Update***\n\nI thank the reviewers for answering my questions, and I have read the reviews from the other reviewers. I am borderline on this paper, but still learn towards rejection. I feel that it is a bit incremental and still a little misleading/over-stated. For instance, xnli isn't mentioned but mldoc is... | [
3,
3,
-1,
-1,
-1,
-1,
-1,
8
] | [
5,
5,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_SJlJSaEFwS",
"iclr_2020_SJlJSaEFwS",
"BygJyY52FS",
"r1xkQQ6iYr",
"BJeROFsS5H",
"H1gf_MJZjS",
"iclr_2020_SJlJSaEFwS",
"iclr_2020_SJlJSaEFwS"
] |
iclr_2020_BkllBpEKDH | Continuous Adaptation in Multi-agent Competitive Environments | In a multi-agent competitive environment, we would expect that an agent who can quickly adapt to environmental changes has a higher probability of surviving and beating other agents. In this paper, to discuss whether adaptation capability can help a learning agent improve its competitiveness in a multi-agent environment, we construct a simplified baseball game scenario to develop and evaluate the adaptation capability of learning agents. Our baseball game scenario is modeled as a two-player zero-sum stochastic game with only a final reward. We propose a modified Deep CFR algorithm to learn a strategy that approximates the Nash equilibrium strategy. We also form several teams, with different teams adopting different playing strategies, trying to analyze (1) whether an adaptation mechanism can help in increasing the winning percentage and (2) what kind of initial strategies can help a team to get a higher winning percentage. The experimental results show that the learned Nash-equilibrium strategy is very similar to real-life baseball game strategy. Besides, with the proposed strategy adaptation mechanism, the winning percentage can be increased for the team with a Nash-equilibrium initial strategy. Nevertheless, based on the same adaptation mechanism, those teams with deterministic initial strategies actually become less competitive. | reject | This paper studies whether adopting strategy adaptation mechanisms helps players improve their performance in zero-sum stochastic games (in this case, baseball). Moreover, they study two questions in particular: (1) whether adaptation techniques are helpful when faced with a small number of iterations, and (2) what the effect of different initial strategies is when both teams adopt the same adaptation technique.
Reviewers expressed concerns regarding the fact that the authors' adaptation techniques improve upon the initial strategies, which seems to indicate that the initial strategies were not Nash (despite the use of CFR). In the absence of a theory of why this happens in the current setup (and of whether the initial strategies are indeed Nash and why they improve), stronger empirical evidence from more rigorous experiments seems necessary before acceptance of this paper can be recommended. | train | [
"r1eMQEs3FB",
"BJeNAA4dsB",
"Hyg4hAEdjH",
"SyeNzR4ujB",
"H1xRlT4_jH",
"BkxzF3NCtr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"UPDATE: Thank you for the detailed response. I think the changes you have made to the paper have improved it, but there remains significant work to complete before the paper reaches its full potential. For example, Figure 3 is a useful additional insight but it provides no quantification of the variation between r... | [
1,
-1,
-1,
-1,
-1,
1
] | [
5,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_BkllBpEKDH",
"BkxzF3NCtr",
"BkxzF3NCtr",
"r1eMQEs3FB",
"r1eMQEs3FB",
"iclr_2020_BkllBpEKDH"
] |
iclr_2020_BkleBaVFwB | Scalable Generative Models for Graphs with Graph Attention Mechanism | Graphs are ubiquitous real-world data structures, and generative models that approximate distributions over graphs and derive new samples from them are of significant importance. Among the known challenges in graph generation tasks, scalable handling of large graphs and datasets is one of the most important for practical applications. Recently, an increasing number of graph generative models have been proposed and have demonstrated impressive results. However, scalability is still an unresolved problem due to the complex generation process or difficulty in training parallelization.
In this paper, we first define scalability from three different perspectives: number of nodes, data, and node/edge labels. Then, we propose GRAM, a generative model for graphs that is scalable in all three contexts, especially in training. We aim to achieve scalability by employing a novel graph attention mechanism, formulating the likelihood of graphs in a simple and general manner. Also, we apply two techniques to reduce computational complexity. Furthermore, we construct a unified and non-domain-specific evaluation metric for node/edge-labeled graph generation tasks by combining a graph kernel and Maximum Mean Discrepancy. Our experiments on synthetic and real-world graphs demonstrated the scalability of our models and their superior performance compared with baseline methods. | reject | The paper proposes an efficient way of generating graphs. Although the paper claims to propose a simplified mechanism, the reviewers find the generation task to be rather complex, and the use of certain modules seems ad hoc. Furthermore, the results on the new metric are at times inconsistent with prior metrics. The paper can be improved by addressing these concerns. | train | [
"rklrTtQRFB",
"rkx60eXnjS",
"Bygqngm3iS",
"BklZPlQ2jH",
"BJxNQeX3oS",
"ryxBhdSUor",
"rylsDuSLjH",
"BJgZTPB8jS",
"H1gHLDHLor",
"S1lCVvH8sr",
"ByeCx3AAFH",
"SkxBHx7gcr",
"ByeZGmulKS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public"
] | [
"# Response to rebuttal\n\nI would like to thank their authors for their rebuttal.\n\nAfter reading the other reviews, the author response and the revised manuscript, I have decided to keep my score of weak reject for the time being.\n\nIn short, while I appreciate the effort the authors put in partly addressing so... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
-1
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1
] | [
"iclr_2020_BkleBaVFwB",
"ryxBhdSUor",
"rylsDuSLjH",
"BJgZTPB8jS",
"H1gHLDHLor",
"ByeZGmulKS",
"rklrTtQRFB",
"ByeCx3AAFH",
"S1lCVvH8sr",
"SkxBHx7gcr",
"iclr_2020_BkleBaVFwB",
"iclr_2020_BkleBaVFwB",
"iclr_2020_BkleBaVFwB"
] |
iclr_2020_r1l-HTNtDB | S2VG: Soft Stochastic Value Gradient method | Model-based reinforcement learning (MBRL) has shown its advantages in sample-efficiency over model-free reinforcement learning (MFRL). Despite the impressive results it achieves, it still faces a trade-off between the ease of data generation and model bias. In this paper, we propose a simple and elegant model-based reinforcement learning algorithm called soft stochastic value gradient method (S2VG). S2VG combines the merits of the maximum-entropy reinforcement learning and MBRL, and exploits both real and imaginary data. In particular, we embed the model in the policy training and learn Q and V functions from the real (or imaginary) data set. Such embedding enables us to compute an analytic policy gradient through the back-propagation rather than the likelihood-ratio estimation, which can reduce the variance of the gradient estimation. We name our algorithm Soft Stochastic Value Gradient method to indicate its connection with the well-known stochastic value gradient method in \citep{heess2015Learning}. | reject | The authors consider improvements to model-based reinforcement learning to improve sample efficiency and computational speed. They propose a method which they claim is simple and elegant and embeds the model in the policy learning step, this allows them to compute analytic gradients through the model which can have lower variance than likelihood ratio gradients. They evaluate their method on Mujoco with limited data.
All of the reviewers found the presentation confusing and below the bar for an acceptable submission. Although the authors tried to explain the algorithm better to the reviewers, the reviewers did not find the presentation sufficiently improved. I agree that the paper has substantial room for improvement around clarity. Reviewers also asked that the experiments be run for more time steps. I agree that this would be an important addition, as many model-based reinforcement learning approaches perform worse asymptotically than model-free approaches, and it would be interesting to see how the proposed approach does. A reviewer pointed out that equation 2 is missing a term, and indeed I believe that is true. The authors' response is not correct; they likely refer to an equation in SVG where the state is integrated out. Finally, the method does not compare to state-of-the-art model-based approaches, claiming that they use ensembles or uncertainty to improve performance. The authors would need to show that adding either of these to their approach attains similar performance to state-of-the-art approaches.
At this time, this paper is below the bar for acceptance. | train | [
"HkehnBTnKH",
"r1xKZ73jsB",
"rJgsal3siS",
"SklYSAqjoH",
"B1g5oiHaFB",
"BJlMle0pFB"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"#rebuttal response\nThanks for the explanations for the difference between S2VG and SVG, new results on two more MuJoCo environments. However, I still think that the motivation of the paper is not clear: the authors may have a good algorithm. But the value of adding entropy on SVG or training the policy on imagina... | [
1,
-1,
-1,
-1,
1,
3
] | [
4,
-1,
-1,
-1,
5,
3
] | [
"iclr_2020_r1l-HTNtDB",
"HkehnBTnKH",
"B1g5oiHaFB",
"BJlMle0pFB",
"iclr_2020_r1l-HTNtDB",
"iclr_2020_r1l-HTNtDB"
] |
iclr_2020_HyezBa4tPB | Dirichlet Wrapper to Quantify Classification Uncertainty in Black-Box Systems | Nowadays, machine learning models are becoming a utility in many sectors. AI companies deliver pre-trained encapsulated models as application programming interfaces (APIs) that developers can combine with third party components, their models, and proprietary data, to create complex data products. This complexity and the lack of control and knowledge of the internals of these external components might cause unavoidable effects, such as lack of transparency, difficulty in auditability, and the emergence of uncontrolled potential risks. These issues are especially critical when practitioners use these components as black-boxes in new datasets. In order to provide actionable insights in this type of scenarios, in this work we propose the use of a wrapping deep learning model to enrich the output of a classification black-box with a measure of uncertainty. Given a black-box classifier, we propose a probabilistic neural network that works in parallel to the black-box and uses a Dirichlet layer as the fusion layer with the black-box. This Dirichlet layer yields a distribution on top of the multinomial output parameters of the classifier and enables the estimation of aleatoric uncertainty for any data sample.
Based on the resulting uncertainty measure, we advocate for a rejection system that selects the more confident predictions, discarding the more uncertain ones, leading to an improvement in the trustworthiness of the resulting system. We showcase the proposed technique and methodology in two practical scenarios, one for NLP and another for computer vision, where a simulated API is applied to different domains. Results demonstrate the effectiveness of the uncertainty computed by the wrapper and its high correlation with wrong predictions and misclassifications. | reject | This paper proposes adding a Dirichlet distribution as a wrapper on top of a black-box classifier in order to better capture uncertainty in the predictions. This paper received four reviews in total, with scores (1,1,1,6). The reviewer who gave the weak accept found the paper well written, easy to follow, and intuitive. The other reviewers, however, were primarily concerned about the empirical evaluation of the method. They found the baselines too weak and weren't convinced that the method would work well in practice. The reviewers also cited a lack of comparison to existing literature for their scores. One reviewer noted that while the method addresses aleatoric uncertainty, it doesn't provide any mechanism for epistemic uncertainty, which would be necessary for the applications motivating the work.
The authors did not provide a response and thus there was no discussion. | test | [
"BJgRMcbStS",
"rkekhHcaYB",
"SkgWFpePqH",
"Byev-9Zwcr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors propose to use a dirichlet prior over the multinomial distribution outputted by blackbox DL models, to quantify uncertainty in predictions. The main contribution is to learn the parameters of the prior and use it as a wrapper over the black box, to adjudicate whether to retain or reject ... | [
6,
1,
1,
1
] | [
3,
3,
5,
4
] | [
"iclr_2020_HyezBa4tPB",
"iclr_2020_HyezBa4tPB",
"iclr_2020_HyezBa4tPB",
"iclr_2020_HyezBa4tPB"
] |
iclr_2020_ryxmrpNtvH | Deeper Insights into Weight Sharing in Neural Architecture Search | With the success of deep neural networks, Neural Architecture Search (NAS) as a way of automatic model design has attracted wide attention. As training every child model from scratch is very time-consuming, recent works leverage weight sharing to speed up the model evaluation procedure. These approaches greatly reduce computation by maintaining a single copy of weights on the super-net and sharing the weights among all child models. However, weight sharing has no theoretical guarantee, and its impact has not been well studied before. In this paper, we conduct comprehensive experiments to reveal the impact of weight sharing: (1) The best-performing models from different runs, or even from consecutive epochs within the same run, have significant variance; (2) Even with high variance, we can extract valuable information from training the super-net with shared weights; (3) Interference between child models is a main factor inducing high variance; (4) Properly reducing the degree of weight sharing can effectively reduce variance and improve performance. | reject | This paper provides a series of empirical evaluations on a small neural architecture search space with 64 architectures. The experiments are interesting, but limited in scope and limited to 64 architectures trained on CIFAR-10. It is unclear whether lessons learned on this search space would transfer to larger search spaces. One upside is that code is available, making the work reproducible.
All reviewers read the rebuttal and participated in the private discussion of reviewers and AC, but none of them changed their mind. All gave a weak rejection score.
I agree with this assessment and therefore recommend rejection. | train | [
"SJlzZfOtKH",
"HkgF5kWNiS",
"Bylr_1WEjB",
"ryxLrkbNjH",
"Byx_AfDsdr",
"ryxxwa_CYH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"First of all, I have to state that this is not my area of expertise. So, my review here is an educated guess. \n\nThe paper is an empirical study that looks into the effect of weight sharing in neural network architecture search. The basic idea is that by sharing weights among multiple candidate architectures, the... | [
3,
-1,
-1,
-1,
3,
3
] | [
1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2020_ryxmrpNtvH",
"Byx_AfDsdr",
"SJlzZfOtKH",
"ryxxwa_CYH",
"iclr_2020_ryxmrpNtvH",
"iclr_2020_ryxmrpNtvH"
] |
iclr_2020_rkeNr6EKwB | Small-GAN: Speeding up GAN Training using Core-Sets | BigGAN suggests that Generative Adversarial Networks (GANs) benefit disproportionately from large minibatch sizes. This finding is interesting but also discouraging -- large batch sizes are slow and expensive to emulate on conventional hardware. Thus, it would be nice if there were some trick by which we could generate batches that were effectively big though small in practice. In this work, we propose such a trick, inspired by the use of Coreset-selection in active learning. When training a GAN, we draw a large batch of samples from the prior and then compress that batch using Coreset-selection. To create effectively large batches of real images, we create a cached dataset of Inception activations of each training image, randomly project them down to a smaller dimension, and then use Coreset-selection on those projected embeddings at training time. We conduct experiments showing that this technique substantially reduces training time and memory usage for modern GAN variants, that it reduces the fraction of dropped modes in a synthetic dataset, and that it helps us use GANs to reach a new state of the art in anomaly detection. | reject | The paper proposes to use greedy core set sampling to improve GAN training with large batches. Although the problem is clear and the solution works, reviewers have raised several concerns. One concern is that the technical novelty is limited; another (in the first version) that even a simpler version of gradient accumulation can solve the main task (rather than computing core-sets). In the end, some discussion was done, with quite a few additions and experiments done by the authors. The final concern that seemingly was not addressed: the gradient accumulation seems to give the same numbers as large batches, thus you can 'mimic' large batch sizes with smaller ones and gradient accumulation, making the main claim of the paper questionable.
The achievement of SOTA is good, but it is not clear whether it is due to the proposed technique, or rather smart tuning of a larger set of hyperparameters. Thus, I would agree with the concern of Reviewer 1. | train | [
"BJxRLkB5sS",
"r1eHGDmcsB",
"SyezPMXqoS",
"rJgcWGQ5iB",
"rJlmgvPwoB",
"rkehABwDjr",
"BylBLSwPoH",
"SyganVwwiS",
"rJlFubv3FS",
"rkloqqAycS",
"HJlnHqQhFH",
"HJxpqWI19H"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Hello, \n\nWe believe we have addressed your concerns and clarified some points you raised. Do you have an updated impression of our paper? Should that not be the case, please do not hesitate to get in touch with us. Thanks for your consideration and time. We appreciate it!",
"We would like to thank each of the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3,
-1
] | [
"rJlFubv3FS",
"iclr_2020_rkeNr6EKwB",
"HJlnHqQhFH",
"rkloqqAycS",
"rJlFubv3FS",
"rJlFubv3FS",
"rJlFubv3FS",
"rJlFubv3FS",
"iclr_2020_rkeNr6EKwB",
"iclr_2020_rkeNr6EKwB",
"iclr_2020_rkeNr6EKwB",
"iclr_2020_rkeNr6EKwB"
] |
iclr_2020_SJgBra4YDS | Manifold Modeling in Embedded Space: A Perspective for Interpreting "Deep Image Prior" | Deep image prior (DIP), which utilizes a deep convolutional network (ConvNet) structure itself as an image prior, has attracted huge attentions in computer vision community. It empirically shows the effectiveness of ConvNet structure for various image restoration applications. However, why the DIP works so well is still unknown, and why convolution operation is essential for image reconstruction or enhancement is not very clear. In this study, we tackle these questions. The proposed approach is dividing the convolution into ``delay-embedding'' and ``transformation (\ie encoder-decoder)'', and proposing a simple, but essential, image/tensor modeling method which is closely related to dynamical systems and self-similarity. The proposed method named as manifold modeling in embedded space (MMES) is implemented by using a novel denoising-auto-encoder in combination with multi-way delay-embedding transform. In spite of its simplicity, the image/tensor completion and super-resolution results of MMES are quite similar even competitive to DIP in our extensive experiments, and these results would help us for reinterpreting/characterizing the DIP from a perspective of ``low-dimensional patch-manifold prior''. | reject | The paper proposes a combination of a delay embedding as well as an autoencoder to perform representation learning. The proposed algorithm shows competitive performance with deep image prior, which is a convnet structure. The paper claims that the new approach is interpretable and provides explainable insight into image priors.
The discussion period was used constructively, with the authors addressing reviewer comments, and the reviewers acknowledging this and updating their scores.
Overall, the proposed architecture is good, but the structure and presentation of the paper are still not up to the standards of ICLR. The current presentation seems to over-claim interpretability, without sufficient theoretical or empirical evidence.
| train | [
"Hkllj_e0tH",
"r1xJqwuRKB",
"BJgtcNFcsB",
"SJxMVv5YsB",
"B1lGa0-YiH",
"S1g1383viS",
"B1gsCXcMiB",
"BJxJwLqziB",
"SygN0B5GsS",
"rJgc9ScMiS",
"SJlA0E9MsB",
"BJlYwV5MsS",
"BJxxDX5GoS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"\nThis paper proposed a low-dimensional patch-manifold prior perspective for reinterpreting the deep image prior. I think this is very interesting work that could have a lot of impact in vision and beyond, since effective construct the problem in reconstruction tasks are highly relevant to a number of tasks. I was... | [
6,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
-1,
-1,
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_SJgBra4YDS",
"iclr_2020_SJgBra4YDS",
"iclr_2020_SJgBra4YDS",
"S1g1383viS",
"S1g1383viS",
"iclr_2020_SJgBra4YDS",
"r1xJqwuRKB",
"Hkllj_e0tH",
"Hkllj_e0tH",
"Hkllj_e0tH",
"Hkllj_e0tH",
"r1xJqwuRKB",
"r1xJqwuRKB"
] |
iclr_2020_HJeLBpEFPB | Unsupervised Universal Self-Attention Network for Graph Classification | Existing graph embedding models often have weaknesses in exploiting graph structure similarities, potential dependencies among nodes and global network properties. To this end, we present U2GAN, a novel unsupervised model leveraging on the strength of the recently introduced universal self-attention network (Dehghani et al., 2019), to learn low-dimensional embeddings of graphs which can be used for graph classification. In particular, given an input graph, U2GAN first applies a self-attention computation, which is then followed by a recurrent transition to iteratively memorize its attention on vector representations of each node and its neighbors across each iteration. Thus, U2GAN can address the weaknesses in the existing models in order to produce plausible node embeddings whose sum is the final embedding of the whole graph. Experimental results show that our unsupervised U2GAN produces new state-of-the-art performances on a range of well-known benchmark datasets for the graph classification task. It even outperforms supervised methods in most of benchmark cases. | reject | All three reviewers are consistently negative on this paper. Thus a reject is recommended. | train | [
"r1xOur17iS",
"rkgHD6AfoB",
"rkx44w1XoB",
"S1gYNy1QiS",
"Hyxp_0n1jH",
"HyxQAQLatr",
"BkltXbVRKB",
"Hkl1LHjCYS",
"BJlxmiYO5B",
"SkxGatT7cS",
"H1eBhX99KS",
"r1eTcsGcFS",
"SkeBb529_r",
"rygwXZ6c_H",
"SJgjubaqOS",
"B1eMNak8dr",
"rkx_gVw5uS",
"HJgAQyv9_r",
"Bke5Dp18_B",
"HkxZJjgHOH"... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"public",
"public",
"author",
"public"
] | [
"Thank you for your comments.\n\nFor some clarifications:\n\n1.\tGAT borrows the standard attention from [1] in using a single-layer feedforward neural network parametrized by a weight vector and then applying the non-linear function followed by the softmax function to compute importance weights of neighbors for a ... | [
-1,
-1,
-1,
-1,
-1,
3,
1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"BkltXbVRKB",
"Hyxp_0n1jH",
"HyxQAQLatr",
"Hkl1LHjCYS",
"iclr_2020_HJeLBpEFPB",
"iclr_2020_HJeLBpEFPB",
"iclr_2020_HJeLBpEFPB",
"iclr_2020_HJeLBpEFPB",
"SkxGatT7cS",
"iclr_2020_HJeLBpEFPB",
"r1eTcsGcFS",
"iclr_2020_HJeLBpEFPB",
"rkx_gVw5uS",
"HJgAQyv9_r",
"HJgAQyv9_r",
"HkxZJjgHOH",
... |
iclr_2020_SkxUrTVKDH | Split LBI for Deep Learning: Structural Sparsity via Differential Inclusion Paths | Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error. However, compressive networks are desired in many real world applications and direct training of small networks may be trapped in local optima. In this paper, instead of pruning or distilling over-parameterized models to compressive ones, we propose a new approach based on \emph{differential inclusions of inverse scale spaces}, that generates a family of models from simple to complex ones by coupling gradient descent and mirror descent to explore model structural sparsity. It has a simple discretization, called the Split Linearized Bregman Iteration (SplitLBI), whose global convergence analysis in deep learning is established that from any initializations, algorithmic iterations converge to a critical point of empirical risks. Experimental evidence shows that SplitLBI may achieve state-of-the-art performance in large scale training on ImageNet-2012 dataset etc., while with \emph{early stopping} it unveils effective subnet architecture with comparable test accuracies to dense models after retraining instead of pruning well-trained ones. | reject | This paper investigates an existing method for fitting sparse neural networks, and provides a novel proof of global convergence. Overall, this seems like a sensible, if marginal, contribution. However, there were serious red flags regarding the care with which the scholarship was done, which make me deem the current submission unsuitable for publication. In particular, two points raised by R4, which were not addressed even after the rebuttal:
1) "One important issue with the paper is that it blurs the distinction between prior work and the new contribution. For example, the subsection on Split Linearized Bregman Iteration in the "Methodology" section does not contain anything new compared to [1], and this is not clear enough to the reader."
2) "The newly-written conclusion is still incorrect, stating again that Split LBI achieves SOTA performance on ImageNet."
I believe that R3's high score is due to not noticing these unsupported or misleading claims.
| train | [
"BJxzGS6vjB",
"HJePlf46YH",
"BJxS7wJZcr",
"Byl5vk7hjH",
"HylxmyQ3iS",
"HkgDsAGhjB",
"r1e6Z0GhiS",
"HyxVHTzhiH",
"Syxd02MhiS",
"S1ejdhGniH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"Summary\n=======\nThis paper aims to train sparse neural networks efficiently, by jointly optimizing the weights and sparsity structure of the network. It applies the Split Linear Bregman Iteration (Split LBI) method from [1] in a large-scale setting, to train deep neural networks.\n\nThe approach works by conside... | [
3,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_SkxUrTVKDH",
"iclr_2020_SkxUrTVKDH",
"iclr_2020_SkxUrTVKDH",
"BJxzGS6vjB",
"BJxzGS6vjB",
"BJxzGS6vjB",
"BJxzGS6vjB",
"HJePlf46YH",
"BJxS7wJZcr",
"iclr_2020_SkxUrTVKDH"
] |
iclr_2020_SylwBpNKDr | Boosting Network: Learn by Growing Filters and Layers via SplitLBI | Network structures are important to learning good representations of many tasks in computer vision and machine learning communities. These structures are either manually designed, or searched by Neural Architecture Search (NAS) in previous works, which however requires either expert-level efforts, or prohibitive computational cost. In practice, it is desirable to efficiently and simultaneously learn both the structures and parameters of a network from arbitrary classes with budgeted computational cost. We identify it as a new learning paradigm -- Boosting Network, where one starts from simple models, delving into complex trained models progressively.
In this paper, by virtue of an iterative sparse regularization path -- Split Linearized Bregman Iteration (SplitLBI), we propose a simple yet effective boosting network method that can simultaneously grow and train a network by progressively adding both convolutional filters and layers. Extensive experiments with VGG and ResNets validate the effectiveness of our proposed algorithms. | reject | This paper considers how to learn the structure of a deep network by beginning with a simple network and then progressively adding layers and filters as needed. The paper received three reviews by experts working in this area. R1 recommends Weak Reject due to concerns about novelty, degree of contribution, clarity of technical exposition, and experiments. R2 recommends Weak Accept and has some specific suggestions and questions. R3 recommends Weak Reject, also citing concerns with experiments and writing. The authors submitted a response that addressed many of these comments, but R1 and R3 continue to have concerns about contribution and the experiments, while R2 maintains their Weak Accept rating. Given the split decision, the AC also read the paper. While we believe the paper has significant merit, we agree with R1 and R3 on the need for additional experimentation, and believe another round of peer review would help clarify the writing and contribution. We hope the reviewer comments will help authors prepare a revision for a future venue.
"S1x5dCfciS",
"H1enMAfcoS",
"rkeT26zcoB",
"rklaQTf5oS",
"rkxUaz-AFH",
"HygAtPPZ5S",
"HklWUCF79H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all the reviewers for their insightful and constructive comments. We have substantially revised the paper as suggested by the reviewers, and summarize the major changes as follows:\n\n1. In Introduction, we explicitly highlight our contributions, and explain that (1) we for the first time, define the task... | [
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
4,
1,
4
] | [
"iclr_2020_SylwBpNKDr",
"rkxUaz-AFH",
"HygAtPPZ5S",
"HklWUCF79H",
"iclr_2020_SylwBpNKDr",
"iclr_2020_SylwBpNKDr",
"iclr_2020_SylwBpNKDr"
] |
iclr_2020_BkePHaVKwS | Learning Surrogate Losses | The minimization of loss functions is the heart and soul of Machine Learning. In this paper, we propose an off-the-shelf optimization approach that can seamlessly minimize virtually any non-differentiable and non-decomposable loss function (e.g. Miss-classification Rate, AUC, F1, Jaccard Index, Mathew Correlation Coefficient, etc.). Our strategy learns smooth relaxation versions of the true losses by approximating them through a surrogate neural network. The proposed loss networks are set-wise models which are invariant to the order of mini-batch instances. Ultimately, the surrogate losses are learned jointly with the prediction model via bilevel optimization. Empirical results on multiple datasets with diverse real-life loss functions compared with state-of-the-art baselines demonstrate the efficiency of learning surrogate losses. | reject | Unfortunately, this was a borderline paper that generated disagreement among the reviewers. After a high-level round of additional deliberation it was decided that this paper does not yet meet the standard for acceptance. The paper proposes a potentially interesting approach to learning surrogates for non-differentiable and non-decomposable loss functions. However, the work is a bit shallow technically, as any supporting theoretical justification is supplied by pointing to other work. The paper would be stronger with a more serious and comprehensive analysis. The reviewers criticized the lack of clarity in the technical exposition, which the authors attempted to mitigate in the rebuttal/revision process. The paper would benefit from additional clarity and systematic presentation of complete details to allow reproduction.
"H1lyzGcKor",
"SyxCgwKyoH",
"H1xmxEZ3oS",
"HkgIEkppFH",
"SyebTfpcjS",
"BJlGODResr",
"SkxCY8HPoS",
"HyxMYexDjr",
"HJgRehU7jB",
"r1lK5Xg7ir",
"S1lSkpfzjr",
"S1xPxXH3tr",
"rJlDGZYRKH"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks a lot for the nice suggestions, our comments are as follows.\n\nPart 1 - Non-decomposability Experiment\n\nIt is possible to systematically structure your suggested experiments as an ablation of the capacity of $h(x)$ where $x = \\frac{1}{N} \\sum\\limits_{i=1}^{N} g(y_i, \\hat y_i)$\n\nCase 1: SL-Deep, whe... | [
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"HyxMYexDjr",
"rJlDGZYRKH",
"iclr_2020_BkePHaVKwS",
"iclr_2020_BkePHaVKwS",
"H1lyzGcKor",
"S1xPxXH3tr",
"rJlDGZYRKH",
"HJgRehU7jB",
"r1lK5Xg7ir",
"S1lSkpfzjr",
"HkgIEkppFH",
"iclr_2020_BkePHaVKwS",
"iclr_2020_BkePHaVKwS"
] |
iclr_2020_B1g_BT4FvS | Samples Are Useful? Not Always: denoising policy gradient updates using variance explained | Policy gradient algorithms in reinforcement learning optimize the policy directly and rely on efficiently sampling an environment. However, while most sampling procedures are based solely on sampling the agent's policy, other measures directly accessible through these algorithms could be used to improve sampling before each policy update. Following this line of thoughts, we propose the use of SAUNA, a method where transitions are rejected from the gradient updates if they do not meet a particular criterion, and kept otherwise. This criterion, the fraction of variance explained Vex, is a measure of the discrepancy between a model and actual samples. In this work, Vex is used to evaluate the impact each transition will have on learning: this criterion refines sampling and improves the policy gradient algorithm. In this paper: (a) We introduce and explore Vex, the criterion used for denoising policy gradient updates. (b) We conduct experiments across a variety of benchmark environments, including standard continuous control problems. Our results show better performance with SAUNA. (c) We investigate why Vex provides a reliable assessment for the selection of samples that will positively impact learning. (d) We show how this criterion can work as a dynamic tool to adjust the ratio between exploration and exploitation. | reject | The authors aim to improve policy gradient methods by denoising the gradient estimate. They propose to filter the transitions used to form the gradient update based on a variance explained criterion. They evaluate their method in combination with PPO and A2C, and demonstrate improvements over the baseline methods.
Initially, reviewers were concerned about the motivation and explanation of the method. The authors revised the paper by clarifying the motivation and providing a justification based on the options framework. Furthermore, the authors included additional experiments investigating the impact of their approach on the gradient estimator, showing that with their filtering, the gradient estimator had larger magnitude.
Reviewers found the justification via the options framework to be a stretch, and I agree. The authors should explain how the options framework leads to dropping gradient terms. At the moment, the paper describes an algorithm using the options framework; however, they don't connect the policy gradients of that algorithm to their method. Furthermore, the authors should more clearly verify the claims about reducing noise in the gradient estimate. While the additional experiments on the norm are nice, the authors should go further. For example, if the claim is that the variance of the gradient estimator is reduced, then that should be verified. Finally, there are many approaches for reducing the variance of the policy gradient (Grathwohl et al. 2018, Wu et al 2018, Liu et al. 2018) and no comparisons are made to these approaches.
Given the remaining issues, I recommend rejection for this paper at this time; however, I encourage the authors to address these issues and submit to a future venue.
| train | [
"BylQuBY6tB",
"r1giq6Bk9H",
"Skgbaav2sS",
"SJxTQq82sB",
"Sylx6tUnoH",
"r1xrEt83jr",
"HkgfXyihYS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a novel way to denoise the policy gradient by filtering the samples to add by a criterion \"variance explained\". The variance explained basically measures how well the learn value function could predict the average return, and the filter will keep samples with a high or low variance explained ... | [
3,
6,
-1,
-1,
-1,
-1,
6
] | [
4,
1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_B1g_BT4FvS",
"iclr_2020_B1g_BT4FvS",
"iclr_2020_B1g_BT4FvS",
"r1giq6Bk9H",
"BylQuBY6tB",
"HkgfXyihYS",
"iclr_2020_B1g_BT4FvS"
] |
iclr_2020_B1lOraEFPB | Transition Based Dependency Parser for Amharic Language Using Deep Learning | Researches shows that attempts done to apply existing dependency parser on morphological rich languages including Amharic shows a poor performance. In this study, a dependency parser for Amharic language is implemented using arc-eager transition system and LSTM network. The study introduced another way of building labeled dependency structure by using a separate network model to predict dependency relation. This helps the number of classes to decrease from 2n+2 into n, where n is the number of relationship types in the language and increases the number of examples for each class in the data set. Evaluation of the parser model results 91.54 and 81.4 unlabeled and labeled attachment score respectively. The major challenge in this study was the decrease of the accuracy of labeled attachment score. This is mainly due to the size and quality of the tree-bank available for Amharic language. Improving the tree-bank by increasing the size and by adding morphological information can make the performance of parser better. | reject | The paper builds a transition-based dependency parser for Amharic, first predicting transitions and then dependency labels. The model is poorly motivated, and poorly described. The experiments have serious problems with their train/test splits and lack of baseline. The reviewers all convincingly argue for reject. The authors have not responded. | train | [
"rye8rHBmsH",
"H1xIpYAstB",
"H1ek-2mRKr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper describes a dependency parser for Amharic text trained on the Yimam et al. 2018 treebank.\n\nThe proposed method is an unlabelled arc-eager transition-based dependency parser followed by a dependency label classifier. Both models are based on LSTM architectures. Unfortunately there is no clear descriptio... | [
1,
1,
1
] | [
4,
3,
4
] | [
"iclr_2020_B1lOraEFPB",
"iclr_2020_B1lOraEFPB",
"iclr_2020_B1lOraEFPB"
] |
iclr_2020_rketraEtPr | Learning Time-Aware Assistance Functions for Numerical Fluid Solvers | Improving the accuracy of numerical methods remains a central challenge in many disciplines and is especially important for nonlinear simulation problems. A representative example of such problems is fluid flow, which has been thoroughly studied to arrive at efficient simulations of complex flow phenomena. This paper presents a data-driven approach that learns to improve the accuracy of numerical solvers. The proposed method utilizes an advanced numerical scheme with a fine simulation resolution to acquire reference data. We, then, employ a neural network that infers a correction to move a coarse thus quickly obtainable result closer to the reference data. We provide insights into the targeted learning problem with different learning approaches: fully supervised learning methods with a naive and an optimized data acquisition as well as an unsupervised learning method with a differentiable Navier-Stokes solver. While our approach is very general and applicable to arbitrary partial differential equation models, we specifically highlight gains in accuracy for fluid flow simulations. | reject | This paper provides a data-driven approach that learns to improve the accuracy of numerical solvers. It solves an important problem and provides some promising direction. However, the presented paper is not novel in terms of ML methodology. The presentation can be significantly improved for ML audience (e.g., it would be preferred to explicitly state the problem setting in the beginning of Section 3). | train | [
"BJlSqVu_oH",
"Skeb_EOdoS",
"rygj74O_oS",
"H1efFRwPFr",
"H1ldAtK6KB",
"rJgClbLZqr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your time to review our paper and your positive assessment.",
"Thank you very much for taking the time to review our paper. We would like to clarify a set of points below.\n\nOur second approach, which we call unsupervised, could be adapted to a variety of PDE problems. The key advantage ... | [
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
1,
1,
1
] | [
"H1efFRwPFr",
"H1ldAtK6KB",
"rJgClbLZqr",
"iclr_2020_rketraEtPr",
"iclr_2020_rketraEtPr",
"iclr_2020_rketraEtPr"
] |
iclr_2020_S1gqraNKwB | Contextual Inverse Reinforcement Learning | We consider the Inverse Reinforcement Learning problem in Contextual Markov Decision Processes. In this setting, the reward, which is unknown to the agent, is a function of a static parameter referred to as the context. There is also an “expert” who knows this mapping and acts according to the optimal policy for each context. The goal of the agent is to learn the expert’s mapping by observing demonstrations. We define an optimization problem for finding this mapping and show that when it is linear, the problem is convex. We present and analyze the sample complexity of three algorithms for solving this problem: the mirrored descent algorithm, evolution strategies, and the ellipsoid method. We also extend the first two methods to work with general reward functions, e.g., deep neural networks, but without the theoretical guarantees. Finally, we compare the different techniques empirically in driving simulation and a medical treatment regime. | reject | The authors introduce a framework for inverse reinforcement learning tasks whose reward functions are dependent on context variables and provide a solution by formulating it as a convex optimization problem. Overall, the authors agreed that the method appears to be sound. However, after discussion there were lingering concerns about (1) in what situations this framework is useful or advantageous, (2) how it compares to existing, modern IRL algorithms that take context into account, and (3) if the theoretical and experimental results were truly useful in evaluating the algorithm. Given that these issues were not able to be fully resolved, I recommend that this paper be rejected at this time. | train | [
"r1euwGVP9B",
"HJlQiXlTYH",
"SylCBgDhiH",
"Syl9b9U3sS",
"S1lpU5rniH",
"rkltCxr3oB",
"HJxCDNIiiH",
"SyeEosNioH",
"HJetPmQosH",
"Syx0YTs9sS",
"rkxGb2j5oS",
"ryxKl5q_sS",
"HJeE5wFVsS",
"HJgeNsYNiH",
"SJxs2Qt4ir",
"BklJpy7Jqr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper introduces a formulation for the contextual inverse reinforcement learning (COIRL) problem and proposed three algorithms for solving the proposed problem. Theoretical analysis of scalability and sample complexity are conducted for cases where both the feature function and the context-to-reward mapping f... | [
6,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_S1gqraNKwB",
"iclr_2020_S1gqraNKwB",
"Syl9b9U3sS",
"S1lpU5rniH",
"rkltCxr3oB",
"HJxCDNIiiH",
"Syx0YTs9sS",
"HJetPmQosH",
"HJeE5wFVsS",
"rkxGb2j5oS",
"SJxs2Qt4ir",
"iclr_2020_S1gqraNKwB",
"r1euwGVP9B",
"BklJpy7Jqr",
"HJlQiXlTYH",
"iclr_2020_S1gqraNKwB"
] |
iclr_2020_rJxcBpNKPr | OvA-INN: Continual Learning with Invertible Neural Networks | In the field of Continual Learning, the objective is to learn several tasks one after the other without access to the data from previous tasks. Several solutions have been proposed to tackle this problem but they usually assume that the user knows which of the tasks to perform at test time on a particular sample, or rely on small samples from previous data and most of them suffer of a substantial drop in accuracy when updated with batches of only one class at a time. In this article, we propose a new method, OvA-INN, which is able to learn one class at a time and without storing any of the previous data. To achieve this, for each class, we train a specific Invertible Neural Network to output the zero vector for its class. At test time, we can predict the class of a sample by identifying which network outputs the vector with the smallest norm. With this method, we show that we can take advantage of pretrained models by stacking an invertible network on top of a features extractor. This way, we are able to outperform state-of-the-art approaches that rely on features learning for the Continual Learning of MNIST and CIFAR-100 datasets. In our experiments, we are reaching 72% accuracy on CIFAR-100 after training our model one class at a time. | reject | This paper is borderline but in the end below the standards for ICLR. Firstly, this paper could use significant polishing. The text has significant grammar and style issues: incorrect words, phrases and tenses; incomplete sentences; entire sections of the paper containing only lists, etc. The paper is in need of significant editing.
This of course is not enough to merit rejection, but there are concerns about the contribution of the new method, experiment details, and the topic of study. The results are reported from either a single run or an unknown number of runs of the learning system, which is not acceptable even if we suspect the variance is low. The proposed approach relies on pre-training a feature extractor, which in many ways side-steps the forgetting/interference problem rather than addressing what we really need: new algorithms that process the training data in ways that mitigate interference by learning representations. In general the reviewers found it very difficult to assess the fairness of the comparisons due to differences between how different methods make use of stored data and pre-training. The reviewers highlighted the similarity between the proposed approach and recent work on generative modeling / out-of-distribution (OOD) detection, which suggests that the proposed approach has limited utility (as detailed by R1) and that OOD baselines were missing. Finally, the CL problem formulation explored here, where task identifiers are available during training and data is i.i.d., is of limited utility. It's hard to imagine how approaches that learn individual networks for each task could scale to more realistic problem formulations.
All reviewers agreed the paper's experiments were borderline and the paper has substantial issues. There are too many revisions to be done. | train | [
"H1eIrA5k5r",
"H1guUry3ir",
"SyxSzWQojH",
"BJeYhFTqjS",
"HJeaVcaqoS",
"H1eQwt69jS",
"Skx6Uq7TKB",
"SkeS0zb6Kr"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"[update after rebuttal]\n\nI thank the authors for their detailed reply, answers to my questions, and updates to the paper. (I especially appreciate the substantial effort to address all points and improve the manuscript even given the strongly negative rating. I am not personally in favour of \"extremifying\" the... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_rJxcBpNKPr",
"SyxSzWQojH",
"HJeaVcaqoS",
"Skx6Uq7TKB",
"SkeS0zb6Kr",
"H1eIrA5k5r",
"iclr_2020_rJxcBpNKPr",
"iclr_2020_rJxcBpNKPr"
] |
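The OvA-INN decision rule summarized in the abstract above (each class gets its own invertible network trained to map its samples near the zero vector; at test time, pick the class whose network output has the smallest norm) can be illustrated with a minimal sketch. This is not the authors' implementation; the toy "networks" below are hypothetical stand-ins for trained invertible models.

```python
import math

def predict_class(x, class_networks):
    """OvA-INN-style decision rule (sketch): each class has its own
    invertible network trained to map its samples near the zero vector;
    predict the class whose network output has the smallest L2 norm."""
    best_class, best_norm = None, math.inf
    for label, net in class_networks.items():
        out = net(x)  # net maps a feature vector to an output vector
        norm = math.sqrt(sum(v * v for v in out))
        if norm < best_norm:
            best_class, best_norm = label, norm
    return best_class

# Toy stand-ins for trained invertible networks (hypothetical):
nets = {
    "cat": lambda x: [v - 1.0 for v in x],  # maps points near [1, 1] to ~0
    "dog": lambda x: [v + 1.0 for v in x],  # maps points near [-1, -1] to ~0
}
print(predict_class([0.9, 1.1], nets))    # -> cat
print(predict_class([-1.2, -0.8], nets))  # -> dog
```

In the full method, each `net` would be an invertible neural network stacked on a shared pretrained feature extractor, trained one class at a time.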
iclr_2020_SkxcSpEKPS | Generative Adversarial Networks For Data Scarcity Industrial Positron Images With Attention | In the industrial field, the positron annihilation is not affected by complex environment, and the gamma-ray photon penetration is strong, so the nondestructive detection of industrial parts can be realized. Due to the poor image quality caused by gamma-ray photon scattering, attenuation and short sampling time in positron process, we propose the idea of combining deep learning to generate positron images with good quality and clear details by adversarial nets. The structure of the paper is as follows: firstly, we encode to get the hidden vectors of medical CT images based on transfer Learning, and use PCA to extract positron image features. Secondly, we construct a positron image memory based on attention mechanism as a whole input to the adversarial nets which uses medical hidden variables as a query. Finally, we train the whole model jointly and update the input parameters until convergence. Experiments have proved the possibility of generating rare positron images for industrial non-destructive testing using countermeasure networks, and good imaging results have been achieved. | reject | The paper studies Positron Emission Tomography (PET) in medical imaging. The paper focuses on the challenges created by gamma-ray photon scattering, that results in poor image quality. To tackle this problem and enhance the image quality, the paper suggests using generative adversarial networks. Unfortunately due to poor writing and severe language issues, none of the three reviewers were able to properly assess the paper [see the reviews for multiple examples of this]. In addition, in places, some important implementation details were missing.
The authors chose not to respond to the reviewers' concerns. In its current form, the submission cannot be well understood by people interested in reading the paper, so it needs to be improved and resubmitted. | train | [
"H1xdnWUVtB",
"BkxnKw5pFr",
"SkgMSQgJ9S"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The topic of the paper is a GAN framework to enhance PET images in industrial inspection, as far as I understand by transfer learning from a medical PET database. Unfortunately, I am unable assess the paper due to serious language problems. The text is incoherent and not understandable, it is impossible to deciphe... | [
1,
1,
1
] | [
4,
5,
1
] | [
"iclr_2020_SkxcSpEKPS",
"iclr_2020_SkxcSpEKPS",
"iclr_2020_SkxcSpEKPS"
] |
iclr_2020_B1liraVYwr | LocalGAN: Modeling Local Distributions for Adversarial Response Generation | This paper presents a new methodology for modeling the local semantic distribution of responses to a given query in the human-conversation corpus, and on this basis, explores a specified adversarial learning mechanism for training Neural Response Generation (NRG) models to build conversational agents. The proposed mechanism aims to address the training instability problem and improve the quality of generated results of Generative Adversarial Nets (GAN) in their utilizations in the response generation scenario. Our investigation begins with the thorough discussions upon the objective function brought by general GAN architectures to NRG models, and the training instability problem is proved to be ascribed to the special local distributions of conversational corpora. Consequently, an energy function is employed to estimate the status of a local area restricted by the query and its responses in the semantic space, and the mathematical approximation of this energy-based distribution is finally found. Building on this foundation, a local distribution oriented objective is proposed and combined with the original objective, working as a hybrid loss for the adversarial training of response generation models, named as LocalGAN. Our experimental results demonstrate that the reasonable local distribution modeling of the query-response corpus is of great importance to adversarial NRG, and our proposed LocalGAN is promising for improving both the training stability and the quality of generated results.
| reject | This paper tackles neural response generation with Generative Adversarial Nets (GANs), and to address the training instability problem with GANs, it proposes a local distribution oriented objective. The new objective is combined with the original objective and used as a hybrid loss for the adversarial training of response generation models, named LocalGAN. The authors responded with concerns about Reviewer 3's comments, and I agree with the authors' explanation, so I am disregarding Review 3 and am relying on my read-through of the latest version of the paper. The other reviewers think the paper has good contributions; however, they are not convinced about the clarity of the presentation and made many suggestions (even after the responses from the authors). I suggest a reject, as the paper should include a clear presentation of the approach and technical formulation (as also suggested by the reviewers). | val | [
"S1gRbInpFB",
"H1l71JsIsS",
"Syg4B09Lir",
"H1ef6C1gjr",
"Hyejy1ocjr",
"SJenD09qoH",
"BJxwaeiUir",
"rygcIksUoB",
"r1gblS6iFr",
"SJx4M15v9r",
"B1lQ3DqCtH",
"B1eqfuSaFH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"POST-REBUTTAL FEEDBACK\n\nI share the same concerns as that of reviewer 2 in the response to the rebuttal. Hence, my score remains unchanged.\n\n\nSUMMARY OF REVIEW\n\nThis paper motivates the need to \"contextualize\" responses based on the query to bring about stable training in NRG and consequently proposes loc... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
-1,
-1
] | [
"iclr_2020_B1liraVYwr",
"r1gblS6iFr",
"S1gRbInpFB",
"iclr_2020_B1liraVYwr",
"iclr_2020_B1liraVYwr",
"S1gRbInpFB",
"iclr_2020_B1liraVYwr",
"SJx4M15v9r",
"iclr_2020_B1liraVYwr",
"iclr_2020_B1liraVYwr",
"B1eqfuSaFH",
"iclr_2020_B1liraVYwr"
] |
iclr_2020_HyxnH64KwS | The problem with DDPG: understanding failures in deterministic environments with sparse rewards | In environments with continuous state and action spaces, state-of-the-art actor-critic reinforcement learning algorithms can solve very complex problems, yet can also fail in environments that seem trivial, but the reason for such failures is still poorly understood. In this paper, we contribute a formal explanation of these failures in the particular case of sparse reward and deterministic environments. First, using a very elementary control problem, we illustrate that the learning process can get
stuck in a fixed point corresponding to a poor solution. Then, generalizing from the studied example, we provide a detailed analysis of the underlying mechanisms, which results in a new understanding of one of the convergence regimes of these algorithms. The resulting perspective casts a new light on already existing solutions to the issues we have highlighted, and suggests other potential approaches. | reject | This paper provides an extensive investigation of the robustness of the Deep Deterministic Policy Gradient (DDPG) algorithm.
Papers providing extensive and qualitative empirical studies, illustrative benchmark domains, identification of problems with existing methods, and new insights can be immensely valuable, and this paper is certainly in this direction, if not quite there yet.
The vast majority of this paper investigates one deep learning algorithm in a designed domain. There is some theory, but it is relegated to the appendix. There are a few issues with this approach: (1) there is no concrete evidence that this is a general issue beyond the provided example (more on that below). (2) Even in the designed domain the problem is extremely rare. (3) The study and perhaps even the issue is only shown for one particular architecture (with a whole host of unspecified meta-parameter details). Why not just use SAC, since it works? DDPG has other issues, so why is it of interest to study and fix this particular architecture? The motivation that it is the first and most popular algorithm is not well developed enough to be convincing. (4) There is really no reasoning to suggest that the particular 1D domain is representative or interesting in general.
The authors included Mujoco results to address #1. But the error bars overlap, and it is completely unclear if the baseline was tuned at all; this is very problematic, as the domains were variants created by the authors. If DDPG was not tuned for the variant, then the plots are not representative. In general, there are basically no implementation details (how parameters were tested, how experiments were conducted) or general methodological details given in the paper. Given the evidence provided in this paper, it is difficult to claim this is a general and important issue.
I encourage the authors to look at John Langford's hard exploration tasks, and to broaden their view of this work toward general learning mechanisms. | train | [
"Syxwmy2nFB",
"SklCsc1Pir",
"S1gHF5ywiB",
"rkgaRq1DiH",
"HyxLVc1DiH",
"rkgSYPB5Yr",
"B1lHCnA3tB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Overview: This paper describes a shortfall with the DDPG algorithm on a continuous state action space with sparse rewards. To first prove the existence of this shortfall, the authors demonstrate its theoretical possibility by reviewing the behavior of DDPG actor critic equations and the “two-regimes” proofs in the... | [
6,
-1,
-1,
-1,
-1,
3,
3
] | [
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_HyxnH64KwS",
"Syxwmy2nFB",
"B1lHCnA3tB",
"rkgSYPB5Yr",
"iclr_2020_HyxnH64KwS",
"iclr_2020_HyxnH64KwS",
"iclr_2020_HyxnH64KwS"
] |
iclr_2020_SyepHTNFDS | Graph Residual Flow for Molecular Graph Generation | Statistical generative models for molecular graphs attract attention from many researchers from the fields of bio- and chemo-informatics. Among these models, invertible flow-based approaches are not fully explored yet. In this paper, we propose a powerful invertible flow for molecular graphs, called Graph Residual Flow (GRF). The GRF is based on residual flows, which are known for more flexible and complex non-linear mappings than traditional coupling flows. We theoretically derive non-trivial conditions such that GRF is invertible, and present a way of keeping the entire flow invertible throughout training and sampling. Experimental results show that a generative model based on the proposed GRF achieves comparable generation performance with a much smaller number of trainable parameters compared to the existing flow-based model. | reject | The authors propose a graph residual flow model for molecular generation. Conceptual novelty is limited since it is a simple extension, and there isn't much improvement over the state of the art. | val | [
"HJl9F8xKiH",
"ByxLjllKoB",
"rJxVmxxtjB",
"HygVaGssYH",
"rkxxhCD0YH",
"Byg02cl9cS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their comments and questions. In particular, thank you for precious suggestions on our writing. We revised our paper following your suggestions.\nExcept for these points, we address them in order.\n\n> Other than the above, GRF is a straightforward application of iResNet in molecule gener... | [
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"HygVaGssYH",
"rkxxhCD0YH",
"Byg02cl9cS",
"iclr_2020_SyepHTNFDS",
"iclr_2020_SyepHTNFDS",
"iclr_2020_SyepHTNFDS"
] |
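The invertibility condition the GRF abstract alludes to is the standard one for residual flows: a block y = x + g(x) can be inverted by fixed-point iteration whenever g is contractive (Lipschitz constant below 1). A minimal sketch of that inversion, under that assumption and not the paper's actual code:

```python
import math

def invert_residual(y, g, iters=50):
    """Invert a residual block y = x + g(x) by fixed-point iteration.
    Converges whenever g is contractive (Lipschitz constant < 1), the
    kind of condition residual flows such as GRF must enforce."""
    x = y
    for _ in range(iters):
        x = y - g(x)
    return x

g = lambda x: 0.5 * math.tanh(x)   # Lipschitz constant 0.5 < 1
x_true = 0.7
y = x_true + g(x_true)             # forward pass of the block
x_rec = invert_residual(y, g)
print(abs(x_rec - x_true) < 1e-6)  # -> True
```

The error shrinks geometrically with the Lipschitz constant, so a few dozen iterations recover x to machine precision; GRF's derived conditions play the role of guaranteeing this contractiveness for graph-structured blocks.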
iclr_2020_HkxAS6VFDB | Prune or quantize? Strategy for Pareto-optimally low-cost and accurate CNN | Pruning and quantization are typical approaches to reduce the computational cost of CNN inference. Although the idea to combine them seems natural, it is unexpectedly difficult to figure out the resultant effect of the combination without measuring the performance on the specific hardware a user is going to use. This is because the benefits of pruning and quantization strongly depend on the hardware architecture where the model is executed. For example, a CPU-like architecture without any parallelization may fully exploit the reduction of computations by unstructured pruning for speeding up, but a GPU-like massively parallel architecture would not. Besides, there have been emerging proposals of novel hardware architectures, such as ones supporting variable bit precision quantization. From an engineering viewpoint, optimization for each hardware architecture is useful and important in practice, but this is quite a brute-force approach. Therefore, in this paper, we first propose a hardware-agnostic metric to measure the computational cost. Using this metric, we demonstrate that Pareto-optimal performance, where the best accuracy is obtained at a given computational cost, is achieved when a slim model with a smaller number of parameters is quantized moderately rather than when a fat model with a huge number of parameters is quantized to extremely low bit precision such as binary or ternary. Furthermore, we empirically find a possible quantitative relation between the proposed metric and the signal-to-noise ratio during SGD training, by which the information obtained during SGD training provides the optimal policy of quantization and pruning. We show the Pareto frontier is improved by 4 times in the post-training quantization scenario based on these findings. These findings not only help improve the Pareto frontier for accuracy vs. 
computational cost, but also give us some new insights into deep neural networks. | reject | The authors propose a hardware-agnostic metric called effective signal norm (ESN) to measure the computational cost of convolutional neural networks. They then demonstrate that models with fewer parameters achieve far better accuracy after quantization. The main novelty is the metric ESN. However, ESN is based on ideal hardware, and thus not suitable for existing hardware. Assumptions made in the paper are hard to prove. Experimental results are not convincing, and related pruning methods are not compared. Finally, the paper is not written clearly, and the structure and some arguments are confusing. | val | [
"rylzNrlaYB",
"SJx81WQnor",
"rkl1rymnjB",
"H1ePR0f3iS",
"SJgEtPI9jS",
"H1eH_IIqiB",
"ryl--I85sB",
"rygKkH8qir",
"H1lMKQL5oB",
"rylP-pcvor",
"Hyge3qqDsr",
"Syx2cK9wiS",
"rylETCDaFS",
"S1lnx65qqS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new metric to evaluate both the amount of pruning and quantization. This metric is agnostic to the hardware architecture and is simply obtained by computing the Frobenius norm of some point-wise transformation of the quantized weights. They first show empirically that this Evaluation metric is... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_HkxAS6VFDB",
"SJgEtPI9jS",
"H1ePR0f3iS",
"rylETCDaFS",
"rylP-pcvor",
"ryl--I85sB",
"rygKkH8qir",
"H1lMKQL5oB",
"rylzNrlaYB",
"iclr_2020_HkxAS6VFDB",
"Syx2cK9wiS",
"S1lnx65qqS",
"iclr_2020_HkxAS6VFDB",
"iclr_2020_HkxAS6VFDB"
] |
iclr_2020_H1eArT4tPH | The Effect of Residual Architecture on the Per-Layer Gradient of Deep Networks | A critical part of the training process of neural networks takes place in the very first gradient steps post initialization. In this work, we study the connection of the network's architecture and initialization parameters to the statistical properties of the gradient in random fully connected ReLU networks, through the study of the Jacobian. We compare three types of architectures: vanilla networks, ResNets and DenseNets. The latter two, as we show, preserve the variance of the gradient norm through arbitrary depths when initialized properly, which prevents exploding or decaying gradients at deeper layers. In addition, we show that the statistics of the per-layer gradient norm are a function of the architecture and the layer's size, but surprisingly not the layer's depth.
This depth invariant result is surprising in light of the literature results that state that the norm of the layer's activations grows exponentially with the specific layer's depth. Experimental support is given in order to validate our theoretical results and to reintroduce concatenated ReLU blocks, which, as we show, present better initialization properties than ReLU blocks in the case of fully connected networks. | reject | This paper studies the statistics of activation norms and Jacobian norms for randomly-initialized ReLU networks in the presence (and absence) of various types of residual connections. Whereas the variance of the gradient norm grows with depth for vanilla networks, it can be depth-independent for residual networks when using the proper initialization.
Reviewers were positive about the setup, but also pointed out important shortcomings in the current manuscript, especially related to the lack of significance of the measured gradient norm statistics with regard to generalisation, and to some technical aspects of the derivations. For these reasons, the AC believes this paper will strongly benefit from an extra iteration. | train | [
"BylIU-_KFH",
"SkxqfsRwsB",
"rke0H5CPsS",
"B1gPTYAwjB",
"HklnutAvoB",
"BJl_j5w8tr",
"Hyxhc4lAFr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the mean and variance of the gradient norm at each layer for vanilla feedforward, ResNet and DenseNet, respectively, at the initialization step, which is related with Hanin & Ronick 2018 studying the mean and variance of forward activations. They show that ResNet and DenseNet preserve the varianc... | [
1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2020_H1eArT4tPH",
"iclr_2020_H1eArT4tPH",
"BJl_j5w8tr",
"BylIU-_KFH",
"Hyxhc4lAFr",
"iclr_2020_H1eArT4tPH",
"iclr_2020_H1eArT4tPH"
] |
iclr_2020_SyxJU64twr | Model Ensemble-Based Intrinsic Reward for Sparse Reward Reinforcement Learning | In this paper, a new intrinsic reward generation method for sparse-reward reinforcement learning is proposed based on an ensemble of dynamics models. In the proposed method, the mixture of multiple dynamics models is used to approximate the true unknown transition probability, and the intrinsic reward is designed as the minimum of the surprise seen from each dynamics model to the mixture of the dynamics models. In order to show the effectiveness of the proposed intrinsic reward generation method, a working algorithm is constructed by combining the proposed intrinsic reward generation method with the proximal policy optimization (PPO) algorithm. Numerical results show that for representative locomotion tasks, the proposed model-ensemble-based intrinsic reward generation method outperforms the previous methods based on a single dynamics model. | reject | This paper considers the challenge of sparse reward reinforcement learning through intrinsic reward generation based on the deviation in predictions of an ensemble of dynamics models. This is combined with PPO and evaluated in some Mujoco domains.
The main issue here was with the way the sparse rewards were provided in the experiments, which was artificial and could lead to a number of problems with the reward structure and partial observability. The work was also considered incremental in its novelty. These concerns were not adequately rebutted, and so as it stands this paper should be rejected. | train | [
"ryefwM-CtS",
"ryx6mr0oiS",
"rkxYWGzjsS",
"SJepVJVitS",
"SkeFOuPqoS",
"H1eb2uwqor",
"rJeHrdwqiB",
"rJeUWy3iYH"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"\nThis paper proposes an auxiliary reward for model-based reinforcement learning. The proposed method uses ensembles to build a better model of environment dynamics and suggests some rules to optimize the new ensemble-based dynamics and to estimate the intrinsic reward.\n\nI am torn on this paper. I like the deriv... | [
3,
-1,
-1,
3,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
4,
-1,
-1,
-1,
4
] | [
"iclr_2020_SyxJU64twr",
"rkxYWGzjsS",
"SkeFOuPqoS",
"iclr_2020_SyxJU64twr",
"rJeUWy3iYH",
"SJepVJVitS",
"ryefwM-CtS",
"iclr_2020_SyxJU64twr"
] |
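The reward construction described in the abstract above (the mixture of dynamics models approximates the true transition, and the intrinsic reward is the minimum per-model surprise relative to the mixture) can be sketched roughly as follows. The paper defines surprise probabilistically; this sketch substitutes a simplified squared-distance surprise, and all model functions here are hypothetical toys.

```python
def intrinsic_reward(state, action, models):
    """Ensemble-based intrinsic reward (sketch): the mixture (here, the
    mean) of the models' predictions approximates the true dynamics, and
    the reward is the MINIMUM per-model surprise relative to the mixture.
    Surprise is simplified to a squared distance; the paper uses a
    probabilistic surprise instead."""
    preds = [m(state, action) for m in models]
    dim = len(preds[0])
    mix = [sum(p[i] for p in preds) / len(preds) for i in range(dim)]
    return min(sum((p[i] - mix[i]) ** 2 for i in range(dim)) for p in preds)

# Hypothetical one-step dynamics models over a 1-D state:
agree = [lambda s, a: [s[0] + a], lambda s, a: [s[0] + a]]
disagree = [lambda s, a: [s[0] + a], lambda s, a: [s[0] - a]]
print(intrinsic_reward([0.0], 1.0, agree))     # -> 0.0 (no exploration bonus)
print(intrinsic_reward([0.0], 1.0, disagree))  # -> 1.0 (models disagree)
```

In the paper's setup this quantity would be added to the sparse extrinsic reward before a PPO update, so the agent is pushed toward transitions where the ensemble still disagrees.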
iclr_2020_BylJUTEKvB | Cross-Iteration Batch Normalization | A well-known issue of Batch Normalization is its significantly reduced effectiveness in the case of small mini-batch sizes. When a mini-batch contains few examples, the statistics upon which the normalization is defined cannot be reliably estimated from it during a training iteration. To address this problem, we present Cross-Iteration Batch Normalization (CBN), in which examples from multiple recent iterations are jointly utilized to enhance estimation quality. A challenge of computing statistics over multiple iterations is that the network activations from different iterations are not comparable to each other due to changes in network weights. We thus compensate for the network weight changes via a proposed technique based on Taylor polynomials, so that the statistics can be accurately estimated and batch normalization can be effectively applied. On object detection and image classification with small mini-batch sizes, CBN is found to outperform the original batch normalization and a direct calculation of statistics over previous iterations without the proposed compensation technique. | reject | This paper proposes cross-iteration batch normalization, which is a strategy for maintaining statistics across iterations to improve the applicability of batch normalization on small batches of data.
The reviewers pointed out some strong points but also some weak points about the paper. The paper was judged to be novel, theoretically sound, and well-written.
However, there were some doubts regarding the relevance and significance of the work. Reviewers commented on being unconvinced by the utility of the approach, it being unclear when the proposed method is beneficial, and the relative small magnitude of the empirical improvement.
On balance, the paper seems decent but not completely convincing. This means that, given the current high competitiveness and selectivity of ICLR, I unfortunately cannot recommend the manuscript for acceptance. | train | [
"SklY_7OnsB",
"S1lsh6uhsB",
"BJg9S4Y3YH",
"S1gVEwwniS",
"SJeSrSLhoB",
"BJlAMFljir",
"HJe0LYgioB",
"HyeCXcxssB",
"B1liu9xior",
"SJx3HnoiFr",
"rygzIvETtS",
"ryg6_YDZ9r",
"SJlKGSD7Yr",
"ryxiQjM7YB",
"BJxa88XMtS",
"r1xkG3e1FS",
"rkgnCj7ndS",
"SJgXbaEj_S"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"Thanks a lot for your response. After reading our response, we hope R#1 give a second thought about the paper.\n\n1. As the performance variance on COCO and ImageNet is so low, most of the previous works, such as previous normalization methods [1,2,3], image recognition methods on ImageNet [4,5,6], object detectio... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"SJeSrSLhoB",
"S1gVEwwniS",
"iclr_2020_BylJUTEKvB",
"HJe0LYgioB",
"HyeCXcxssB",
"rygzIvETtS",
"BJg9S4Y3YH",
"SJx3HnoiFr",
"iclr_2020_BylJUTEKvB",
"iclr_2020_BylJUTEKvB",
"iclr_2020_BylJUTEKvB",
"iclr_2020_BylJUTEKvB",
"ryxiQjM7YB",
"BJxa88XMtS",
"r1xkG3e1FS",
"rkgnCj7ndS",
"SJgXbaEj_... |
iclr_2020_HkeeITEYDr | Robust Reinforcement Learning with Wasserstein Constraint | Robust Reinforcement Learning aims to find the optimal policy with some degree of robustness to environmental dynamics. Existing learning algorithms usually enable the robustness through disturbing the current state or simulated environmental parameters in a heuristic way, which lacks quantified robustness to the system dynamics (i.e. transition probability). To overcome this issue, we leverage Wasserstein distance to measure the disturbance to the reference transition probability. With Wasserstein distance, we are able to connect transition probability disturbance to the state disturbance, and reduce an infinite-dimensional optimization problem to a finite-dimensional risk-aware problem. Through the derived risk-aware optimal Bellman equation, we first show the existence of optimal robust policies, provide a sensitivity analysis for the perturbations, and then design a novel robust learning algorithm, the Wasserstein Robust Advantage Actor-Critic algorithm (WRA2C). The effectiveness of the proposed algorithm is verified in the Cart-Pole environment. | reject | This paper studies the robust reinforcement learning problem in which the constraint on model uncertainty is captured by the Wasserstein distance. The reviewers expressed concerns regarding novelty with respect to prior work, the presentation of the results, and unconvincing experiments. In its current form the paper is not ready for acceptance to ICLR-2020. | test | [
"HJgYlcYJqS",
"Hkg37YNhiH",
"SJlFgtE3iS",
"H1eunOEnsr",
"rkxqmGa3KH",
"B1lN7ZnAtH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nThis work aims to produce reinforcement learning methods that are ‘distributionally robust’. They approach this by assuming the transition function may vary between elements of the domain distribution and extend some recent results (Blanchet & Murthy, 2019) to give some theoretical results (contraction and optim... | [
3,
-1,
-1,
-1,
3,
3
] | [
1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_HkeeITEYDr",
"rkxqmGa3KH",
"B1lN7ZnAtH",
"HJgYlcYJqS",
"iclr_2020_HkeeITEYDr",
"iclr_2020_HkeeITEYDr"
] |
iclr_2020_SJxeI6EYwS | Simple and Effective Stochastic Neural Networks | Stochastic neural networks (SNNs) are currently topical, with several paradigms being actively investigated including dropout, Bayesian neural networks, variational information bottleneck (VIB) and noise regularized learning. These neural network variants impact several major considerations, including generalization, network compression, and robustness against adversarial attack and label noise. However, many existing networks are complicated and expensive to train, and/or only address one or two of these practical considerations. In this paper we propose a simple and effective stochastic neural network (SE-SNN) architecture for discriminative learning by directly modeling activation uncertainty and encouraging high activation variability. Compared to existing SNNs, our SE-SNN is simpler to implement and faster to train, and produces state of the art results on network compression by pruning, adversarial defense and learning with label noise. | reject | This paper proposes to use stacked layers of Gaussian latent variables with a maxent objective function as a regulariser. I agree with the reviewers that there is very little novelty and the experiments are not very convincing. | val | [
"BkeJQaCz9B",
"SkeD9je3jB",
"SkeJwme_jS",
"HJeRf7lOsH",
"HJgsg7xnYH",
"rkgLYY6cKH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a simple stochastic neural network, which makes each neuron output Gaussian random variables. The model is trained with reparameterization trick. The authors advocates the adoptation of a non-informative prior, and shows that learning with the prior equals with an entropy-maximization regulariz... | [
6,
-1,
-1,
-1,
3,
3
] | [
1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2020_SJxeI6EYwS",
"HJgsg7xnYH",
"rkgLYY6cKH",
"BkeJQaCz9B",
"iclr_2020_SJxeI6EYwS",
"iclr_2020_SJxeI6EYwS"
] |
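The SE-SNN ingredients named in the abstract, directly modeling activation uncertainty (a Gaussian per unit, sampled with the reparameterization trick) plus a regularizer encouraging high activation variability, can be sketched as follows. The names and the exact regularizer form are illustrative assumptions, not the paper's code.

```python
import math
import random

def stochastic_layer(mu, log_sigma, rng=random):
    """SE-SNN-style stochastic activations (sketch): each unit outputs a
    Gaussian sample via the reparameterization trick, and a regularizer
    proportional to sum(log sigma), i.e. the diagonal-Gaussian entropy up
    to constants, rewards high activation variability when maximized."""
    sample = [m + math.exp(ls) * rng.gauss(0.0, 1.0)
              for m, ls in zip(mu, log_sigma)]
    entropy_reg = sum(log_sigma)  # to be MAXIMIZED alongside the task loss
    return sample, entropy_reg

random.seed(0)
out, reg = stochastic_layer([0.0, 1.0], [math.log(0.1)] * 2)
print(len(out), round(reg, 3))  # -> 2 -4.605
```

In a trained network, `mu` and `log_sigma` would be produced per-unit by the preceding layer; writing the sample as mu + sigma * eps keeps the draw differentiable with respect to both parameters, which is what "simple to implement and faster to train" refers to relative to heavier Bayesian alternatives.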
iclr_2020_HylZIT4Yvr | Structural Language Models for Any-Code Generation | We address the problem of Any-Code Generation (AnyGen) - generating code without any restriction on the vocabulary or structure. The state-of-the-art in this problem is the sequence-to-sequence (seq2seq) approach, which treats code as a sequence and does not leverage any structural information. We introduce a new approach to AnyGen that leverages the strict syntax of programming languages to model a code snippet as tree structural language modeling (SLM). SLM estimates the probability of the program's abstract syntax tree (AST) by decomposing it into a product of conditional probabilities over its nodes. We present a neural model that computes these conditional probabilities by considering all AST paths leading to a target node. Unlike previous structural techniques that have severely restricted the kinds of expressions that can be generated, our approach can generate arbitrary expressions in any programming language. Our model significantly outperforms both seq2seq and a variety of existing structured approaches in generating Java and C# code. We make our code, datasets, and models available online. | reject | This paper proposes a new method for code generation based on structured language models.
After viewing the paper, reviews, and author response my assessment is that I basically agree with Reviewer 4. (Now, after revision) This work seems to be (1) a bit incremental over other works such as Brockschmidt et al. (2019), and (2) a bit of a niche topic for ICLR. At the same time it has (3) good engineering effort resulting in good scores, and (4) relatively detailed conceptual comparison with other work in the area. Also, (5) the title of "Structural Language Models for Code Generation" is clearly over-claiming the contribution of the work -- as cited in the paper there are many language models, unconditional or conditional, that have been used in code generation in the past. In order to be accurate, the title would need to be modified to something that more accurately describes the (somewhat limited) contribution of the work.
In general, I found this paper borderline. ICLR, as you know is quite competitive so while this is a reasonably good contribution, I'm not sure whether it checks the box of either high quality or high general interest to warrant acceptance. Because of this, I'm not recommending it for acceptance at this time, but definitely encourage the authors to continue to polish for submission to a different venue (perhaps a domain conference that would be more focused on the underlying task of code generation?) | train | [
"HkearesHjH",
"B1lioR9HjS",
"HJguQ0cBoH",
"BJxYC6qSiB",
"S1x9VacSsH",
"HyeGhnqBoS",
"rJeyiVOptH",
"BJe4jZK0FH",
"BJx4Rjq85B"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewers for all their excellent suggestions! \nWe updated our submission to address the following comments raised by the reviewers:\n\n(1) Following the suggestions of AnonReviewer4, and to emphasize that our \"any code generation\" is limited to the \"code generation given the surroun... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"iclr_2020_HylZIT4Yvr",
"rJeyiVOptH",
"BJxYC6qSiB",
"BJe4jZK0FH",
"HyeGhnqBoS",
"BJx4Rjq85B",
"iclr_2020_HylZIT4Yvr",
"iclr_2020_HylZIT4Yvr",
"iclr_2020_HylZIT4Yvr"
] |
iclr_2020_HklZUpEtvr | OPTIMAL TRANSPORT, CYCLEGAN, AND PENALIZED LS FOR UNSUPERVISED LEARNING IN INVERSE PROBLEMS | The penalized least squares (PLS) is a classic approach to inverse problems, where a regularization term is added to stabilize the solution. Optimal transport (OT) is another mathematical framework for computer vision tasks by providing means to transport one measure to another at minimal cost. Cycle-consistent generative adversarial network (cycleGAN) is a recent extension of GAN to learn target distributions with less mode collapsing behavior. Although similar in that no supervised training is required, the algorithms look different, so the mathematical relationship between these approaches is not clear. In this article, we provide an important advance to unveil the missing link. Specifically, we reveal that a cycleGAN architecture can be derived as a dual formulation of the optimal transport problem, if the PLS with a deep learning penalty is used as a transport cost between the two probability measures from measurements and unknown images. This suggests that cycleGAN can be considered as stochastic generalization of classical PLS approaches.
Our derivation is so general that various types of cycleGAN architecture can be easily derived by merely changing the transport cost. As proofs of concept, this paper provides novel cycleGAN architecture for unsupervised learning in accelerated MRI and deconvolution microscopy problems, which confirm the efficacy and the flexibility of the theory. | reject | This paper provides a novel approach for addressing ill-posed inverse problems based on a formulation as a regularized estimation problem and showing that this can be optimized using the CycleGAN framework. While the paper contains interesting ideas and has been substantially improved from its original form, the paper still does not meet the quality bar of ICLR due to a critical gap between the presented theory and applications. The paper will benefit from a revision and resubmission to another venue. | train | [
"rkeyuPZsjB",
"H1gxwBwqoH",
"Hke_nXjKiS",
"Bkx-QAKtiH",
"rJlaPSHYoS",
"rkl8Y5nroB",
"H1xg8jTriB",
"rJeAtdnBor",
"BkxqnlTHir",
"r1eLbLXPYH",
"HkxdojD6tB",
"S1xuhkHFqS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"[Q] Overall, I greatly appreciate the improvements that the authors have made to their paper. After reading the revised work and the comments in the discussion, my assessment of the contributions have been clarified to the following:\n\n==> Thanks for your understanding of our contributions.\n\n[Q] What is the mot... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
4
] | [
"H1gxwBwqoH",
"H1xg8jTriB",
"Bkx-QAKtiH",
"rJlaPSHYoS",
"H1xg8jTriB",
"HkxdojD6tB",
"S1xuhkHFqS",
"iclr_2020_HklZUpEtvr",
"r1eLbLXPYH",
"iclr_2020_HklZUpEtvr",
"iclr_2020_HklZUpEtvr",
"iclr_2020_HklZUpEtvr"
] |
iclr_2020_B1gzLaNYvr | TSInsight: A local-global attribution framework for interpretability in time-series data | With the rise in employment of deep learning methods in safety-critical scenarios, interpretability is more essential than ever before. Although many different directions regarding interpretability have been explored for visual modalities, time-series data has been neglected with only a handful of methods tested due to their poor intelligibility. We approach the problem of interpretability in a novel way by proposing TSInsight where we attach an auto-encoder with a sparsity-inducing norm on its output to the classifier and fine-tune it based on the gradients from the classifier and a reconstruction penalty. The auto-encoder learns to preserve features that are important for the prediction by the classifier and suppresses the ones that are irrelevant i.e. serves as a feature attribution method to boost interpretability. In other words, we ask the network to only reconstruct parts which are useful for the classifier i.e. are correlated or causal for the prediction. In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations. We evaluated TSInsight along with other commonly used attribution methods on a range of different time-series datasets to validate its efficacy. Furthermore, we analyzed the set of properties that TSInsight achieves out of the box including adversarial robustness and output space contraction. The obtained results advocate that TSInsight can be an effective tool for the interpretability of deep time-series models. | reject | Main content:
Blind review #2 summarizes it well:
The aim of this work is to improve interpretability in time series prediction. To do so, they propose to use a relatively post-hoc procedure which learns a sparse representation informed by gradients of the prediction objective under a trained model. In particular, given a trained next-step classifier, they propose to train a sparse autoencoder with a combined objective of reconstruction and classification performance (while keeping the classifier fixed), so as to expose which features are useful for time series prediction. Sparsity, and sparse auto-encoders, have been widely used for the end of interpretability. In this sense, the crux of the approach is very well motivated by the literature.
--
Discussion:
All reviews had difficulties understanding the significance and novelty, which appears to have in large part arisen from the original submission not having sufficiently contextualized the motivation and strengths of the approach (especially for readers not already specialized in this exact subarea).
--
Recommendation and justification:
The reviews are uniformly low, probably due to the above factors, and while the authors' revisions during the rebuttal period have addressed some of the objections, there are so many strong submissions that it would be difficult to justify overriding the very low reviewer scores. | train | [
"HkgYPG-3YH",
"r1xe7HI3oB",
"r1gsCFajoH",
"SyxBHBPssH",
"rkgPcVUoir",
"HyxTaX8jsS",
"ryg2UQIssr",
"SJg4jGUisr",
"SJl2U01TFH",
"H1lLPJJ0KS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors proposed an algorithm for identifying important inputs for the time-series data as an explanation of the model's output.\nGiven a fixed model, the authors proposed to put an auto-encoder to the input of the model, so that the input data is first transformed through the auto-encoder, and ... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
1
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_B1gzLaNYvr",
"SJg4jGUisr",
"SyxBHBPssH",
"HyxTaX8jsS",
"iclr_2020_B1gzLaNYvr",
"HkgYPG-3YH",
"SJl2U01TFH",
"H1lLPJJ0KS",
"iclr_2020_B1gzLaNYvr",
"iclr_2020_B1gzLaNYvr"
] |
iclr_2020_B1em8TVtPr | Discourse-Based Evaluation of Language Understanding | New models for natural language understanding have made unusual progress recently, leading to claims of universal text representations. However, current benchmarks are predominantly targeting semantic phenomena; we make the case that discourse and pragmatics need to take center stage in the evaluation of natural language understanding.
We introduce DiscEval, a new benchmark for the evaluation of natural language understanding, that unites 11 discourse-focused evaluation datasets.
DiscEval can be used as supplementary training data in a multi-task learning setup, and is publicly available, alongside the code for gathering and preprocessing the datasets.
Using our evaluation suite, we show that natural language inference, a widely used pretraining task, does not result in genuinely universal representations, which opens a new challenge for multi-task learning. | reject | This paper proposes a new benchmark to evaluate natural language processing models on discourse-related tasks based on existing datasets that are not available in other benchmarks (SentEval/GLUE/SuperGLUE). The authors also provide a set of baselines based on BERT, ELMo, and others; and estimates of human performance for some tasks.
I think this has the potential to be a valuable resource to the research community, but I am not sure that it is the best fit for a conference such as ICLR. R3 also raises a valid concern regarding the performance of fine-tuned BERT being comparable to human estimates on more than half of the tasks (3 out of 5), which slightly weakens the main motivation of having this new benchmark.
My main suggestion to the authors is to have a very solid motivation for the new benchmark, including the reason for inclusion of each of the tasks. I believe that this is important to encourage the community to adopt it. For something like this, it would be nice (although not necessary) to have a clean website for submission as well. I believe that someone who proposes a new benchmark needs to do their best to make it easy for other people to use it.
Due to the above issues and space constraint, I recommend to reject the paper. | val | [
"HyeP_cSoiS",
"B1lDCxfsir",
"Hye7TgzssB",
"B1goxbzssH",
"HJeVslGojH",
"H1xiUtOFFS",
"SyxE6vl0tS",
"SkeKZ0WjqH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"- \"Additionally, some tasks seem to have quite subjective label definitions.\"\n>> I see. The points you make seem reasonable, and I agree that there are probably significant technical challenges in obtaining human annotations that might go beyond the scope of this work (but I do hope you keep working on them! I ... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"B1lDCxfsir",
"SyxE6vl0tS",
"SkeKZ0WjqH",
"H1xiUtOFFS",
"iclr_2020_B1em8TVtPr",
"iclr_2020_B1em8TVtPr",
"iclr_2020_B1em8TVtPr",
"iclr_2020_B1em8TVtPr"
] |
iclr_2020_rkgQL6VFwr | Learning Generative Image Object Manipulations from Language Instructions | The use of adequate feature representations is essential for achieving high performance in high-level human cognitive tasks in computational modeling. Recent developments in deep convolutional and recurrent neural networks architectures enable learning powerful feature representations from both images and natural language text. Besides, other types of networks such as Relational Networks (RN) can learn relations between objects and Generative Adversarial Networks (GAN) have shown to generate realistic images. In this paper, we combine these four techniques to acquire a shared feature representation of the relation between objects in an input image and an object manipulation action description in the form of human language encodings to generate an image that shows the resulting end-effect the action would have on a computer-generated scene. The system is trained and evaluated on a simulated dataset and experimentally used on real-world photos. | reject | The submission proposes to train a model to modify objects in an image using language (the modified image is the effect of an action). The model combines CNN, RNN, Relation Nets and GAN and is trained and evaluated on synthetic data, with some examples of results on real images.
The paper received relatively low scores (1 reject and 2 weak rejects). The authors did not provide any responses to the reviews and did not revise their submission. Thus there was no reviewer discussion and the scores remained unchanged.
The reviewers all agreed that the submission addressed an interesting task, but there was no special insight into how the components were put together, and the experimental evaluation was limited. Comparisons against additional baselines (AE, VAE), as well as ablation studies or examinations of how the components can be varied, are needed.
The paper is currently too weak to be accepted at ICLR. The authors are encouraged to improve their evaluation and resubmit to an appropriate venue. | train | [
"BygWQQcCYr",
"ByxOfQOJcr",
"Bkg6NF0Wqr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a model that takes an image and a sentence as input, where the sentence is an instruction to manipulate objects in the scene, and outputs another image which shows the scene after manipulation. The model is an integration of CNN, RNN, Relation Nets, and GAN. The results are mostly on synthetic ... | [
3,
3,
1
] | [
5,
3,
3
] | [
"iclr_2020_rkgQL6VFwr",
"iclr_2020_rkgQL6VFwr",
"iclr_2020_rkgQL6VFwr"
] |
iclr_2020_Skl4LTEtDS | Growing Action Spaces | In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress. In this work, we use a curriculum of progressively growing action spaces to accelerate learning. We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space. Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task. We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces. | reject | This paper presents a novel approach to learning in problems which have large action spaces with natural hierarchies. The proposed approach involves learning from a curriculum of increasingly larger action spaces to accelerate learning. The method is demonstrated on both small continuous action domains, as well as a Starcraft domain.
While this is indeed an interesting paper, there were two major concerns expressed by the reviewers. The first concerns the choice of baselines for comparison, and the second involves improving the discussion and intuition for why the hierarchical approach to growing action spaces will not lead to the agent missing viable solutions. The reviewers felt that neither of these were adequately addressed in the rebuttal, and as such it is to be rejected in its current form. | train | [
"rygbpM6toS",
"HJliDeatsB",
"HkxFYkatiH",
"Syg2YyWTYH",
"HyxjNYwTKr",
"Bke5PXoRFr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your positive comments as well as careful feedback, which have led us to run some additional experiments with relevant results, shown in Appendix A.1 of the revised paper.\n\nOff-action-space: We agree that the wording of this section of the discussion is somewhat too strong as it stands, and have re... | [
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
4,
3,
5
] | [
"Syg2YyWTYH",
"HyxjNYwTKr",
"Bke5PXoRFr",
"iclr_2020_Skl4LTEtDS",
"iclr_2020_Skl4LTEtDS",
"iclr_2020_Skl4LTEtDS"
] |
iclr_2020_rkerLaVtDr | A General Upper Bound for Unsupervised Domain Adaptation | In this work, we present a novel upper bound of target error to address the problem for unsupervised domain adaptation. Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks. Furthermore, Ben-David et al. (2010) provide an upper bound for target error when transferring the knowledge, which can be summarized as minimizing the source error and distance between marginal distributions simultaneously. However, common methods based on the theory usually ignore the joint error such that samples from different classes might be mixed together when matching marginal distribution. And in such case, no matter how we minimize the marginal discrepancy, the target error is not bounded due to an increasing joint error. To address this problem, we propose a general upper bound taking joint error into account, such that the undesirable case can be properly penalized. In addition, we utilize constrained hypothesis space to further formalize a tighter bound as well as a novel cross margin discrepancy to measure the dissimilarity between hypotheses which alleviates instability during adversarial learning. Extensive empirical evidence shows that our proposal outperforms related approaches in image classification error rates on standard domain adaptation benchmarks. | reject | Given two distributions, source and target, the paper presents an upper bound on the target risk of a classifier in terms of its source risk and other terms comparing the risk under the source/target input distribution and target/source labeling function. In the end, the bound is shown to be minimized by the true labeling function for the source, and at this minimum, the value of the bound is shown to also control the "joint error", i.e., the best achievable risk on both target and source by a single classifier.
The point of the analysis is to go beyond the target risk bound presented by Ben-David et al. 2010 that is in terms of the discrepancy between the source and target and the performance of the source labeling function on the target or vice versa, whichever is smaller. Apparently, concrete domain adaptation methods "based on" the Ben-David et al. bound do not end up controlling the joint error. After various heuristic arguments, the authors develop an algorithm for unsupervised domain adaptation based on their bound in terms of a two-player game.
Only one reviewer ended up engaging with the authors in a nontrivial way. This review also argued for (weak) acceptance. Another reviewer mostly raised minor issues about grammar/style and got confused by the derivation of the "general" bound, which I've checked is ok. The third reviewer raised some issues around the realizability assumption and also asked for better understanding as to what aspects of the new proposal are responsible for the improved performance, e.g., via an ablation study.
I'm sympathetic to reviewer 1, even though I wish they had engaged with the rebuttal. I don't believe the revision included any ablation study. I think this would improve the paper. I don't think the issues raised by reviewer 3 rise to the level of rejection, especially since their main technical concern is due to their own confusion. Reviewer 2 argues for weak acceptance. However, if there was support for this paper, it wasn't enough for reviewers to engage with each other, despite my encouragement, which was disappointing. | train | [
"B1gtgtdcor",
"HJgpp9u5jB",
"B1eyOJuqoB",
"SJedMXwcsB",
"SylA7NW9ir",
"r1lloWWqiS",
"S1ljl8bmsB",
"SJeHhrb7oH",
"HJgdZBWmjr",
"Skl_0N-msr",
"S1xBSNsZKH",
"HyeTcvDaKr",
"H1eCGeC6tS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThanks for the reply.\n\nThe transformation you referred to is simply owing to triangle inequality and we've already mentioned it in the text.\n(We didn't drop any minus terms.)\nNow we add another line to make the derivation more readable.\n\nAs for the suggestion in point 4, we are on it (It's done now).\n\nPl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"SJedMXwcsB",
"SylA7NW9ir",
"r1lloWWqiS",
"HJgdZBWmjr",
"S1ljl8bmsB",
"SJeHhrb7oH",
"SJeHhrb7oH",
"H1eCGeC6tS",
"HyeTcvDaKr",
"S1xBSNsZKH",
"iclr_2020_rkerLaVtDr",
"iclr_2020_rkerLaVtDr",
"iclr_2020_rkerLaVtDr"
] |
iclr_2020_HkxIIaVKPB | Unsupervised-Learning of time-varying features | We present an architecture based on the conditional Variational Autoencoder to learn a representation
of transformations in time-sequence data. The model is constructed in a way that allows identifying sub-spaces of features indicating changes between frames without learning features that are constant within a time-sequence. Therefore, the approach disentangles content from transformations. Different model architectures are applied to affine image transformations on MNIST as well as a car-racing video-game task.
Results show that the model discovers relevant parameterizations; however, model architecture has a major impact on the feature space. It turns out that there is an advantage to only learning features describing the change of state between images, over learning the states of the images at each frame. In this case, we not only achieve higher accuracy but also more interpretable linear features. Our results also uncover the need for model architectures that combine global transformations with convolutional architectures. | reject | This work proposes a VAE-based model for learning transformations of sequential data (the main intuition here is to have the model learn changes between frames without learning features that are constant within a time-sequence). All reviewers agreed that this is a very interesting submission, but have all challenged the novelty and rigor of this paper, asking for more experimental evidence supporting the strengths of the model. After having read the paper, I agree with the reviewers and I currently see this one as a weak submission without potentially comparing against other models or showing whether the representations learned from the proposed model lead to downstream improvements in a task that uses these representations. | train | [
"ByxdzON3iH",
"SklrpUSliH",
"BkgBjsNgsB",
"HJlFpNXeoB",
"SJeUvbXF_H",
"Byxeg0D3KH",
"ryxFa1xkqr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"About issue 2: the formulation in the paper was such that it would seem that the data is stationary and you want to add a constraint to the model to make it conform to that. But it's really more like you have some assumptions about invariances that your representation should obey and you enforce with your model. T... | [
-1,
-1,
-1,
-1,
1,
1,
3
] | [
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"BkgBjsNgsB",
"ryxFa1xkqr",
"Byxeg0D3KH",
"SJeUvbXF_H",
"iclr_2020_HkxIIaVKPB",
"iclr_2020_HkxIIaVKPB",
"iclr_2020_HkxIIaVKPB"
] |
iclr_2020_rklPITVKvS | BRIDGING ADVERSARIAL SAMPLES AND ADVERSARIAL NETWORKS | Generative adversarial networks have achieved remarkable performance on various tasks but suffer from sensitivity to hyper-parameters, training instability, and mode collapse. We find that this is partly due to gradient given by non-robust discriminator containing non-informative adversarial noise, which can hinder generator from catching the pattern of real samples. Inspired by defense against adversarial samples, we introduce adversarial training of discriminator on real samples that does not exist in classic GANs framework to make adversarial training symmetric, which can balance min-max game and make discriminator more robust. Robust discriminator can give more informative gradient with less adversarial noise, which can stabilize training and accelerate convergence. We validate the proposed method on image generation tasks with varied network architectures quantitatively. Experiments show that training stability, perceptual quality, and diversity of generated samples are consistently improved with small additional training computation cost. | reject | This paper proposes incorporating adversarial training on real images to improve the stability of GAN training. The key idea relies on the observation that GAN training already implicitly does a form of adversarial training on the generated images, and so this work proposes adding adversarial training on real images as well. In practice, adversarial training on real images is performed using FGSM, and experiments are conducted on CelebA, CIFAR10, and LSUN, reporting results using standard generative metrics like FID.
Initially all reviewers were in agreement that this work should not be accepted. However, in response to the discussion with the authors, Reviewer 2 updated their score from weak reject to weak accept. The other reviewers' recommendations remained unchanged. The core concerns of reviewers 3 and 1 are limited technical contribution and unconvincing experimental evidence. In particular, concerns were raised about the overlap with [1] from CVPR 2019. The authors argue that their work is different due to the focus on the unsupervised setting; however, this application distinction is minor and doesn't result in any major algorithmic changes. With respect to experiments, the authors do provide performance across multiple datasets and architectures, which is encouraging; however, to distinguish this work it would have been helpful to provide further study and analysis into the aspects unique to this work -- such as the settings and type of adversarial attack (as mentioned by R3) and stability across GAN variants.
After considering all reviewer and author comments, the AC does not recommend this work for publication in its current form and recommends the authors consider both additional experiments and text description to clarify and solidify their contributions over prior work.
[1] Liu, X., & Hsieh, C. J. (2019). Rob-gan: Generator, discriminator, and adversarial attacker. CVPR 2019.
| train | [
"rJx0AWlwKB",
"S1e6fCpzoH",
"rJesYR6MoB",
"Hkg9ippGjr",
"SkguG1AGiH",
"rker41CGoB",
"rJxNP1RfsS",
"ByeFqVKpdr",
"HylxPVL3Yr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents an interesting idea based on introducing adversarial noise on real samples during GAN training. This novel approach may improve GAN training and have potentially large impact, but the paper in its current form is slightly below the standard of ICLR due to its lack of clarity.\n\nWhile it is ver... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_rklPITVKvS",
"rJx0AWlwKB",
"rJx0AWlwKB",
"HylxPVL3Yr",
"ByeFqVKpdr",
"ByeFqVKpdr",
"iclr_2020_rklPITVKvS",
"iclr_2020_rklPITVKvS",
"iclr_2020_rklPITVKvS"
] |
iclr_2020_BJxYUaVtPB | Match prediction from group comparison data using neural networks | We explore the match prediction problem where one seeks to estimate the likelihood of a group of M items preferred over another, based on partial group comparison data. Challenges arise in practice. As existing state-of-the-art algorithms are tailored to certain statistical models, we have different best algorithms across distinct scenarios. Worse yet, we have no prior knowledge on the underlying model for a given scenario. These call for a unified approach that can be universally applied to a wide range of scenarios and achieve consistently high performances. To this end, we incorporate deep learning architectures so as to reflect the key structural features that most state-of-the-art algorithms, some of which are optimal in certain settings, share in common. This enables us to infer hidden models underlying a given dataset, which govern in-group interactions and statistical patterns of comparisons, and hence to devise the best algorithm tailored to the dataset at hand. Through extensive experiments on synthetic and real-world datasets, we evaluate our framework in comparison to state-of-the-art algorithms. It turns out that our framework consistently leads to the best performance across all datasets in terms of cross entropy loss and prediction accuracy, while the state-of-the-art algorithms suffer from inconsistent performances across different datasets. Furthermore, we show that it can be easily extended to attain satisfactory performances in rank aggregation tasks, suggesting that it can be adaptable for other tasks as well. | reject | This paper investigates neural networks for group comparison -- i.e., deciding if one group of objects would be preferred over another. The paper received 4 reviews (we requested an emergency review because of a late review that eventually did arrive). 
R1 recommends Weak Reject, based primarily on unclear presentation, missing details, and concerns about experiments. R2 recommends Reject, also based on concerns about writing, unclear notation, weak baselines, and unclear technical details. In a short review, R3 recommends Weak Accept and suggests some additional experiments, but also indicates that their familiarity with this area is not strong. R4 also recommends Weak Accept and suggests some clarifications in the writing (e.g. additional motivation future work). The authors submitted a response and revision that addresses many of these concerns. Given the split decision, the AC also read the paper; while we see that it has significant merit, we agree with R1 and R2's concerns, and feel the paper needs another round of peer review to address the remaining concerns. | train | [
"SJg58BU6Fr",
"rygH2lkYjr",
"BkxbtekFsB",
"SJxm8eyFiH",
"S1g6ml1KjB",
"ByeF2bR9YB",
"r1xPqmwoqr",
"S1xrIzbC5H"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper provides a technique to solve match prediction problem -- the problem of estimating likelihood of preference between a pair of M-sized sets. The paper replaces the previously proposed conventional statistical models with a deep learning architecture and achieve superior performance than some of the base... | [
3,
-1,
-1,
-1,
-1,
1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"iclr_2020_BJxYUaVtPB",
"ByeF2bR9YB",
"SJg58BU6Fr",
"r1xPqmwoqr",
"S1xrIzbC5H",
"iclr_2020_BJxYUaVtPB",
"iclr_2020_BJxYUaVtPB",
"iclr_2020_BJxYUaVtPB"
] |
iclr_2020_B1x9ITVYDr | Compressive Recovery Defense: A Defense Framework for ℓ0,ℓ2 and ℓ∞ norm attacks. | We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in \cite{bafna2018thwarting} to defend neural networks against ℓ0, ℓ2, and ℓ∞-norm attacks. In the case of ℓ0-norm noise, we provide recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For ℓ2-norm bounded noise, we provide recovery guarantees for BP, and for the case of ℓ∞-norm bounded noise, we provide recovery guarantees for Dantzig Selector (DS). These guarantees theoretically bolster the defense framework introduced in \cite{bafna2018thwarting} for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate the effectiveness of this defense framework against an array of ℓ0, ℓ2 and ℓ∞-norm attacks. | reject | After reading the author's response, all the reviwers still think that this paper is a simple extension of gradient masking, and can not provide the robustness in neural networks. | test | [
"HJgxdIHqFB",
"S1x280xqsS",
"SylWlBkgjr",
"B1gXsVkejS",
"HJgGL4yesr",
"Hye14Eygjr",
"SylPwe8AKB",
"H1xdXAoCYB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the problem of the robustness of the neural network-based classification models under adversarial attacks. The paper improves upon the known framework on defending against l_0, l_2 norm attackers. \n\nThe main idea of the algorithm is to use the \"compress sensing\" framework to preprocess the im... | [
3,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_B1x9ITVYDr",
"iclr_2020_B1x9ITVYDr",
"H1xdXAoCYB",
"HJgxdIHqFB",
"SylPwe8AKB",
"SylPwe8AKB",
"iclr_2020_B1x9ITVYDr",
"iclr_2020_B1x9ITVYDr"
] |
iclr_2020_rJlcLaVFvB | Effect of top-down connections in Hierarchical Sparse Coding | Hierarchical Sparse Coding (HSC) is a powerful model to efficiently represent multi-dimensional, structured data such as images. The simplest solution to solve this computationally hard problem is to decompose it into independent layerwise subproblems. However, neuroscientific evidence would suggest inter-connecting these subproblems as in the Predictive Coding (PC) theory, which adds top-down connections between consecutive layers. In this study, a new model called Sparse Deep Predictive Coding (SDPC) is introduced to assess the impact of this inter-layer feedback connection. In particular, the SDPC is compared with a Hierarchical Lasso (Hi-La) network made out of a sequence of Lasso layers. A 2-layered SDPC and a Hi-La networks are trained on 3 different databases and with different sparsity parameters on each layer. First, we show that the overall prediction error generated by SDPC is lower thanks to the feedback mechanism as it transfers prediction error between layers. Second, we demonstrate that the inference stage of the SDPC is faster to converge than for the Hi-La model. Third, we show that the SDPC also accelerates the learning process. Finally, the qualitative analysis of both models dictionaries, supported by their activation probability, show that the SDPC features are more generic and informative. | reject | This paper introduces a new architecture for sparse coding.
The reviewers gave long and constructive feedback, to which the authors in turn responded at length. There is consensus among the reviewers that, despite its contributions, this paper in its current form is not ready for acceptance.
Rejection is therefore recommended, with encouragement to prepare an updated version for a future conference.
| train | [
"rygVHXratH",
"Bkef0R-doS",
"SkgZFLmQsH",
"BkgzRVXXjB",
"Syxig4mQsr",
"Hyxh8GFTtr",
"BJg036X15S"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes Sparse Deep Predictive Coding (SDPC) to access the impact of the inter layer feedback, which is suggested by neuro-scientific evidence. The SDPC model is compared with HILA on 2 different databases, and the experimental results show that SDPC achieved lower prediction error, faster converge rat... | [
3,
-1,
-1,
-1,
-1,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_rJlcLaVFvB",
"iclr_2020_rJlcLaVFvB",
"rygVHXratH",
"Hyxh8GFTtr",
"BJg036X15S",
"iclr_2020_rJlcLaVFvB",
"iclr_2020_rJlcLaVFvB"
] |
iclr_2020_rkej86VYvB | Temporal Difference Weighted Ensemble For Reinforcement Learning | Combining multiple function approximators in machine learning models typically leads to better performance and robustness compared with a single function. In reinforcement learning, ensemble algorithms such as an averaging method and a majority voting method are not always optimal, because each function can learn fundamentally different optimal trajectories from exploration. In this paper, we propose a Temporal Difference Weighted (TDW) algorithm, an ensemble method that adjusts weights of each contribution based on accumulated temporal difference errors. The advantage of this algorithm is that it improves ensemble performance by reducing weights of Q-functions unfamiliar with current trajectories. We provide experimental results for Gridworld tasks and Atari tasks that show significant performance improvements compared with baseline algorithms. | reject | The paper proposes a method to combine the decision of an ensemble of RL agents. It uses an uncertainty measure based on the TD error, and suggests a weighted average or weighted voting mechanism to combine their policy or value functions to come up with a joint decision.
The reviewers raised several concerns, including whether the method works in the stochastic setting, whether it favours deterministic parts of the state space, its sensitivity to bias, and unfair comparison to a single agent setting.
There is also a relevant PhD dissertation (Elliot, 2017), which the authors surprisingly refused to discuss and cite because apparently it was not published at any conference. A PhD dissertation is a citable reference if it is relevant; and if it is, good scholarship requires proper citation.
Overall, even though the proposed method might potentially be useful, it requires further investigations. Two out of three reviewers are not positive about the paper in its current form. Therefore, I cannot recommend acceptance at this stage.
Elliott, Daniel L., The Wisdom of the crowd : reliable deep reinforcement learning through ensembles of Q-functions, PhD Dissertation, Colorado State University, 2017 | train | [
"HylTZ00jjS",
"H1e6WXItsB",
"HklmafYUjH",
"rJeVP1izsH",
"rJlYinFzsr",
"Byl--SFMjH",
"Ske-uTrptr",
"SJeFubbPcH",
"S1xL3IS55r"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I'd like to announce the paper revision uploaded. I appreciate all the reviewers' works. Let me list the changes below.\n\n1. add minor changes to introduction and atari experiment\n2. remove section 3.3\n3. add a new citation of Doya et al. 2002 to section 2\n\n\nBest regards.",
"Dear reviewers,\n\nThank you ve... | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"iclr_2020_rkej86VYvB",
"iclr_2020_rkej86VYvB",
"rJeVP1izsH",
"Ske-uTrptr",
"SJeFubbPcH",
"S1xL3IS55r",
"iclr_2020_rkej86VYvB",
"iclr_2020_rkej86VYvB",
"iclr_2020_rkej86VYvB"
] |
iclr_2020_SJgs8TVtvr | Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations | Clustering high-dimensional data, such as images or biological measurements, is a long-standing problem and has been studied extensively. Recently, Deep Clustering gained popularity due to the non-linearity of neural networks, which allows for flexibility in fitting the specific peculiarities of complex data. Here we introduce the Mixture-of-Experts Similarity Variational Autoencoder (MoE-Sim-VAE), a novel generative clustering model. The model can learn multi-modal distributions of high-dimensional data and use these to generate realistic data with high efficacy and efficiency. MoE-Sim-VAE is based on a Variational Autoencoder (VAE), where the decoder consists of a Mixture-of-Experts (MoE) architecture. This specific architecture allows for various modes of the data to be automatically learned by means of the experts. Additionally, we encourage the latent representation of our model to follow a Gaussian mixture distribution and to accurately represent the similarities between the data points. We assess the performance of our model on synthetic data, the MNIST benchmark data set, and a challenging real-world task of defining cell subpopulations from mass cytometry (CyTOF) measurements on hundreds of different datasets. MoE-Sim-VAE exhibits superior clustering performance on all these tasks in comparison to the baselines and we show that the MoE architecture in the decoder reduces the computational cost of sampling specific data modes with high fidelity. | reject | The paper proposes a VAE with a mixture-of-experts decoder for clustering and generation of high-dimensional data. Overall, the reviewers found the paper well-written and structured , but in post rebuttal discussion questioned the overall importance and interest of the work to the community. This is genuinely a borderline submission. 
However, the calibrated average score currently falls below the acceptance threshold, so I’m recommending rejection, while strongly encouraging the authors to continue the work, better motivate its importance, and resubmit.
"Skgy1BS4iB",
"H1eo64BVor",
"ByxBU4SEir",
"rygr7VrEoS",
"H1gaakiptr",
"H1gqaqHe5H",
"S1gYbn7fqS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"————--\n\n***- in section 4.1 the MNIST data are taken with k=10. Though it is nicely explained and illustrated on this data set, it is possibly somewhat misleading as an example. The reason is that this is a classification problem with 10 classes, therefore the choice k=10 is obvious. It would be more important t... | [
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"H1eo64BVor",
"H1gaakiptr",
"H1gqaqHe5H",
"S1gYbn7fqS",
"iclr_2020_SJgs8TVtvr",
"iclr_2020_SJgs8TVtvr",
"iclr_2020_SJgs8TVtvr"
] |
iclr_2020_rke3U6NtwH | MxPool: Multiplex Pooling for Hierarchical Graph Representation Learning | Graphs are known to have complicated structures and have myriad applications. How to utilize deep learning methods for graph classification tasks has attracted considerable research attention in the past few years. Two properties of graph data have imposed significant challenges on existing graph learning techniques. (1) Diversity: each graph has a variable size of unordered nodes and diverse node/edge types. (2) Complexity: graphs have not only node/edge features but also complex topological features. These two properties motivate us to use multiplex structure to learn graph features in a diverse way. In this paper, we propose a simple but effective approach, MxPool, which concurrently uses multiple graph convolution networks and graph pooling networks to build hierarchical learning structure for graph representation learning tasks. Our experiments on numerous graph classification benchmarks show that our MxPool has marked superiority over other state-of-the-art graph representation learning methods. For example, MxPool achieves 92.1% accuracy on the D&D dataset while the second best method DiffPool only achieves 80.64% accuracy. | reject | All three reviewers are consistently negative on this paper. Thus a reject is recommended. | train | [
"H1xnZGs3YH",
"HJxlWY32KS",
"rye-2tQptr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Strengths:\n-- The paper is well written and easy to follow\n-- Learning graph representation learning is a very important problem\n-- The performance of the proposed approach are strong on the existing data sets\n\nWeakness\n-- the novelty of the proposed method is marginal\n-- Some real-case study on why the mo... | [
3,
3,
3
] | [
5,
4,
5
] | [
"iclr_2020_rke3U6NtwH",
"iclr_2020_rke3U6NtwH",
"iclr_2020_rke3U6NtwH"
] |
iclr_2020_HJxhUpVKDr | Branched Multi-Task Networks: Deciding What Layers To Share | In the context of multi-task learning, neural networks with branched architectures have often been employed to jointly tackle the tasks at hand. Such ramified networks typically start with a number of shared layers, after which different tasks branch out into their own sequence of layers. Understandably, as the number of possible network configurations is combinatorially large, deciding what layers to share and where to branch out becomes cumbersome. Prior works have either relied on ad hoc methods to determine the level of layer sharing, which is suboptimal, or utilized neural architecture search techniques to establish the network design, which is considerably expensive. In this paper, we go beyond these limitations and propose a principled approach to automatically construct branched multi-task networks, by leveraging the employed tasks' affinities. Given a specific budget, i.e. number of learnable parameters, the proposed approach generates architectures, in which shallow layers are task-agnostic, whereas deeper ones gradually grow more task-specific. Extensive experimental analysis across numerous, diverse multi-tasking datasets shows that, for a given budget, our method consistently yields networks with the highest performance, while for a certain performance threshold it requires the least amount of learnable parameters. | reject | The authors present an approach to multi-task learning. Reviews are mixed. The main worries seem to be computational feasibility and lack of comparison with existing work. Clearly, one advantage to Cross-stitch networks over the proposed approach is that their approach learns sharing parameters in an end-to-end fashion and scales more efficiently to more tasks. 
Note: The authors mention SluiceNets in their discussion, but I think it would be appropriate to directly compare against this architecture - or DARTS [https://arxiv.org/abs/1806.09055], maybe - since the offline RSA computations only seem worth it if better than *anything* you can do end-to-end. I would encourage the authors to map out this space and situate their proposed method properly in the landscape of existing work. I also think it would be interesting to think of their approach as an ensemble learning approach and look at work in this space on using correlations between representations to learn what and how to combine. Finally, some work has suggested that benefits from MTL are a result of easier optimization, e.g., [3]; if that is true, will you not potentially miss out on good task combinations with your approach?
Other related work:
[0] https://www.aclweb.org/anthology/C18-1175/
[1] https://www.aclweb.org/anthology/P19-1299/
[2] https://www.aclweb.org/anthology/N19-1355.pdf - a somewhat similar two-stage approach
[3] https://www.aclweb.org/anthology/E17-2026/ | train | [
"SJxbgg25iS",
"rJxu2vT_sr",
"HJlRhJwdsB",
"SJxq-0I_sH",
"Syxsm2UuiH",
"ByxBHkb3FS",
"Hyg9Q-42tB",
"H1e1AbZM5H"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank Reviewer 3 for taking the time to respond to us on such short notice, and provide analytic points. Some of the expressed concerns are already addressed in the paper. We apologize if some things were not stated clearly enough, and re-iterate below to avoid further confusion.\n\n1. We agree that mtl is not ... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
1
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"rJxu2vT_sr",
"HJlRhJwdsB",
"ByxBHkb3FS",
"Hyg9Q-42tB",
"H1e1AbZM5H",
"iclr_2020_HJxhUpVKDr",
"iclr_2020_HJxhUpVKDr",
"iclr_2020_HJxhUpVKDr"
] |
iclr_2020_S1l6ITVKPS | An Explicitly Relational Neural Network Architecture | With a view to bridging the gap between deep learning and symbolic AI, we present a novel end-to-end neural network architecture that learns to form propositional representations with an explicitly relational structure from raw pixel data. In order to evaluate and analyse the architecture, we introduce a family of simple visual relational reasoning tasks of varying complexity. We show that the proposed architecture, when pre-trained on a curriculum of such tasks, learns to generate reusable representations that better facilitate subsequent learning on previously unseen tasks when compared to a number of baseline architectures. The workings of a successfully trained model are visualised to shed some light on how the architecture functions. | reject | This paper proposes a model that can learn predicates (symbolic relations) from pixels and can be trained end to end. They show that the relations learned generate a representation that generalizes well, and provide some interpretation of the model.
Though it is reasonable to develop a model with synthetic data, the reviewers did wonder if the findings would generalize to new data from real situations. The authors argue that a new model should be understood (using synthetic data) before it can reasonably be applied to natural data. I hope the reviews have shown the authors which areas of the paper need further explanation, and that the use of a synthetic dataset needs strong justification, or should be accompanied by some evidence that the method is likely to work on real data (e.g. how it could be extended to natural images). | train | [
"rkgPum56YS",
"B1xvZT1Gsr",
"r1g_q21fir",
"HkeiL3kMjr",
"rJxJz3kMjr",
"B1gh7vPTYr",
"SJefjgYRFB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents PrediNet: an architecture explicitly designed to extract representations in the form of three-place predicates (relations). They evaluate the architecture on a visual relational task called the \"Relations Game\" which involves comparing Tetris-like shapes according to their appearance, relativ... | [
6,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
1
] | [
"iclr_2020_S1l6ITVKPS",
"SJefjgYRFB",
"B1gh7vPTYr",
"rkgPum56YS",
"rkgPum56YS",
"iclr_2020_S1l6ITVKPS",
"iclr_2020_S1l6ITVKPS"
] |
iclr_2020_BylaUTNtPS | Recurrent Independent Mechanisms | Learning modular structures which reflect the dynamics of the environment can lead to better generalization and robustness to changes which only affect a few of the underlying causes. We propose Recurrent Independent Mechanisms (RIMs), a new recurrent architecture in which multiple groups of recurrent cells operate with nearly independent transition dynamics, communicate only sparingly through the bottleneck of attention, and are only updated at time steps where they are most relevant. We show that this leads to specialization amongst the RIMs, which in turn allows for dramatically improved generalization on tasks where some factors of variation differ systematically between training and evaluation. | reject | This paper has, at its core, a potential for constituting a valuable contribution. However, there was a shared belief among reviewers (that I also share) that the paper still has much room for improvement in terms of presentation and justification of the claims. I hope that the authors will be able to address the feedback they received to make this submission get where it should be.
| train | [
"SkxGMecLFB",
"HJeCqprooH",
"H1eVR3Hsor",
"BkxoZnrsjS",
"rkgM0skisH",
"B1x4z2rcir",
"SyxeOPO8tr",
"SylHp2mciB",
"Byg1WAQcjB",
"r1xY9hm9sB",
"SklPIVgOjB",
"SkeBPQlOjB",
"HkestRy_iH",
"SkeBftGWsr",
"rJxjgFGbsH",
"B1lZO_ly9B"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a neural network architecture consisting of multiple independent recurrent modules that interact sparingly. These independent modules are not all used simultaneously, a subset of them is active at each time step. This subset of active modules is chosen through an attention mechanism. The idea b... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
1,
-1,
-1,
-1,
-1,
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1
] | [
"iclr_2020_BylaUTNtPS",
"H1eVR3Hsor",
"BkxoZnrsjS",
"rkgM0skisH",
"SkxGMecLFB",
"r1xY9hm9sB",
"iclr_2020_BylaUTNtPS",
"SkxGMecLFB",
"B1lZO_ly9B",
"SyxeOPO8tr",
"SkxGMecLFB",
"B1lZO_ly9B",
"iclr_2020_BylaUTNtPS",
"rJxjgFGbsH",
"SyxeOPO8tr",
"iclr_2020_BylaUTNtPS"
] |
iclr_2020_HJlAUaVYvH | Optimising Neural Network Architectures for Provable Adversarial Robustness | Existing Lipschitz-based provable defences to adversarial examples only cover the L2 threat model. We introduce the first bound that makes use of Lipschitz continuity to provide a more general guarantee for threat models based on any p-norm. Additionally, a new strategy is proposed for designing network architectures that exhibit superior provable adversarial robustness over conventional convolutional neural networks. Experiments are conducted to validate our theoretical contributions, show that the assumptions made during the design of our novel architecture hold in practice, and quantify the empirical robustness of several Lipschitz-based adversarial defence methods. | reject | The authors propose a novel method to estimate the Lipschitz constant of a neural network, and use this estimate to derive architectures that will have improved adversarial robustness. While the paper contains interesting ideas, the reviewers felt it was not ready for publication due to the following factors:
1) The novelty and significance of the bound derived by the authors are unclear. In particular, the bound used is coarse and likely to be loose, and hence is not likely to be useful in general.
2) The bound on adversarial risk seems of limited significance, since in practice, this can be estimated accurately based on the adversarial risk measured on the test set.
3) The paper is poorly organized with several typos and is hard to read in its present form.
The reviewers were in consensus and the authors did not respond during the rebuttal phase.
Therefore, I recommend rejection. However, all the reviewers found interesting ideas in the paper. Hence, I encourage the authors to consider the reviewers' feedback and submit a revised version to a future venue. | train | [
"BkljE2Bntr",
"SyenrE9TYB",
"SJgeTXH6KS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThe author show that lipschitz constants of the neural network can be used to bound adversarial robustness. They then propose an adjustment to the network architecture that, according to their assumption, would result in smaller lipschitz constant, which according to the theory would correlate with bette... | [
3,
1,
1
] | [
3,
4,
4
] | [
"iclr_2020_HJlAUaVYvH",
"iclr_2020_HJlAUaVYvH",
"iclr_2020_HJlAUaVYvH"
] |
iclr_2020_Bkl086VYvH | Feature-map-level Online Adversarial Knowledge Distillation | Feature maps contain rich information about image intensity and spatial correlation. However, previous online knowledge distillation methods only utilize the class probabilities. Thus in this paper, we propose an online knowledge distillation method that transfers not only the knowledge of the class probabilities but also that of the feature map using the adversarial training framework. We train multiple networks simultaneously by employing discriminators to distinguish the feature map distributions of different networks. Each network has its corresponding discriminator which discriminates the feature map from its own as fake while classifying that of the other network as real. By training a network to fool the corresponding discriminator, it can learn the other network’s feature map distribution. Discriminators and networks are trained concurrently in a minimax two-player game. Also, we propose a novel cyclic learning scheme for training more than two networks together. We have applied our method to various network architectures on the classification task and discovered a significant improvement of performance especially in the case of training a pair of a small network and a large one. | reject | The paper received scores of WR (R1) WR (R2) WA (R3), although R3 stated that they were borderline. The main issues were (i) lack of novelty and (ii) insufficient experiments. The AC has closely looked at the reviews/comments/rebuttal and examined the paper. Unfortunately, the AC feels that with no-one strongly advocating for acceptance, the paper cannot be accepted at this time. The authors should use the feedback from reviewers to improve their paper. | val | [
"BJl9oF0riS",
"BJeo-P2HjH",
"SylbYnkLsS",
"ByxizXVtiS",
"SJxW-yvHFr",
"S1xs8081qS",
"HJgVgUR-cH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"First of all, thank you for all your great comments and reviews. We really appreciate it.\nWe did our best to answer your comments, hope it resolves some of questions you have.\n\n1. Well, it is quite hard to compare the computational cost fairly between the online and offline distillation method since their mecha... | [
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"S1xs8081qS",
"HJgVgUR-cH",
"SJxW-yvHFr",
"BJeo-P2HjH",
"iclr_2020_Bkl086VYvH",
"iclr_2020_Bkl086VYvH",
"iclr_2020_Bkl086VYvH"
] |
iclr_2020_HJx0U64FwS | A Mechanism of Implicit Regularization in Deep Learning | Despite a lot of theoretical efforts, very little is known about mechanisms of implicit regularization by which the low complexity contributes to generalization in deep learning. In particular, causality between the generalization performance, implicit regularization and nonlinearity of activation functions is one of the basic mysteries of deep neural networks (DNNs). In this work, we introduce a novel technique for DNNs called random walk analysis and reveal a mechanism of the implicit regularization caused by nonlinearity of ReLU activation. Surprisingly, our theoretical results suggest that the learned DNNs interpolate almost linearly between data points, which leads to the low complexity solutions in the over-parameterized regime. As a result, we prove that stochastic gradient descent can learn a class of continuously differentiable functions with generalization bounds of the order of O(n^{-2}) (n: the number of samples). Furthermore, our analysis is independent of the kernel methods, including neural tangent kernels. | reject | This paper analyzes a mechanism of the implicit regularization caused by nonlinearity of ReLU activation, and suggests that the learned DNNs interpolate almost linearly between data points, which leads to the low complexity solutions in the over-parameterized regime. The main objections include (1) some claims in this paper are not appropriate; (2) lack of proper comparison with prior work; and many other issues in the presentation. I agree with the reviewers’ evaluation and encourage the authors to improve this paper and resubmit to a future conference.
| train | [
"Bklb9IfRtS",
"BJe4g3-KsH",
"SylfHnK2sr",
"HJgg2T2sor",
"S1g8tT3sjS",
"B1exQCbFir",
"HJla2aZYiH",
"SyeXQpZtoH",
"rJl2t9bKir",
"HJe0PiZFoH",
"HyggycZFor",
"SJl7uVgTKH",
"HyxTMY0aKS"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Review of \"A mechanism of ... deep learning\"\n\nThis paper studies the generalization performance and implicit regularization of deep learning. In particular, the authors propose a novel technique called \"random walk analysis\" to study the nonlinearity of the neural network with respect to the input data point... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_HJx0U64FwS",
"HyxTMY0aKS",
"HJgg2T2sor",
"BJe4g3-KsH",
"HJe0PiZFoH",
"SJl7uVgTKH",
"SJl7uVgTKH",
"SJl7uVgTKH",
"Bklb9IfRtS",
"HyxTMY0aKS",
"Bklb9IfRtS",
"iclr_2020_HJx0U64FwS",
"iclr_2020_HJx0U64FwS"
] |
iclr_2020_rJe1DTNYPH | Towards Disentangling Non-Robust and Robust Components in Performance Metric | The vulnerability to slight input perturbations is a worrying yet intriguing property of deep neural networks (DNNs). Though some efforts have been devoted to investigating the reason behind such adversarial behavior, the relation between standard accuracy and adversarial behavior of DNNs is still little understood. In this work, we reveal such relation by first introducing a metric characterizing the standard performance of DNNs. Then we theoretically show this metric can be disentangled into an information-theoretic non-robust component that is related to adversarial behavior, and a robust component. Then, we show by experiments that DNNs under standard training rely heavily on optimizing the non-robust component in achieving decent performance. We also demonstrate current state-of-the-art adversarial training algorithms indeed try to robustify DNNs by preventing them from using the non-robust component to distinguish samples from different categories. Based on our findings, we take a step forward and point out the possible direction of simultaneously achieving decent standard generalization and adversarial robustness. It is hoped that our theory can further inspire the community to make more interesting discoveries about the relation between standard accuracy and adversarial robustness of DNNs. | reject | All reviewers suggest rejection. Beyond that, the more knowledgable two have consistent questions about the motivation for using the CCKL objective. As such, the exposition of this paper, and justification of the work could use improvement, so that experienced reviewers understand the contributions of the paper. | test | [
"rkgWLbqnjS",
"SygDSurIir",
"HJgHGdHIsS",
"SJxL95SIiS",
"HkgHYFBUoS",
"H1ea-6ssFS",
"SkevaUL15S",
"B1x0RyRZcB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your response.\n\n-- It is now clear that CCKL only serves as an auxiliary loss for your analysis. As such I will not consider it is as a contribution of its own and only evaluate in the context of the other contributions.\n\n-- As mentioned in my original review, the experiments performed only estab... | [
-1,
-1,
-1,
-1,
-1,
1,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
1
] | [
"HkgHYFBUoS",
"SkevaUL15S",
"B1x0RyRZcB",
"iclr_2020_rJe1DTNYPH",
"H1ea-6ssFS",
"iclr_2020_rJe1DTNYPH",
"iclr_2020_rJe1DTNYPH",
"iclr_2020_rJe1DTNYPH"
] |
iclr_2020_HJekvT4twr | RGTI:Response generation via templates integration for End to End dialog | End-to-end models have achieved considerable success in task-oriented dialogue area, but suffer from the challenges of (a) poor semantic control, and (b) little interaction with auxiliary information. In this paper, we propose a novel yet simple end-to-end model for response generation via mixed templates, which can address above challenges.
In our model, we retrieve candidate responses which contain abundant syntactic and sequence information, using dialogue semantic information related to the dialogue history. Then, we exploit candidate response attention to obtain templates that should be mentioned in the response. Our model can integrate multi-template information to guide the decoder module in generating better responses. We show that our proposed model learns useful template information, which improves the performance of "how to say" and "what to say" in response generation. Experiments on the large-scale Multiwoz dataset demonstrate the effectiveness of our proposed model, which attains state-of-the-art performance. | reject | This paper describes a method to incorporate multiple candidate templates to aid in response generation for an end-to-end dialog system. Reviewers thought the basic idea is novel and interesting. However, they also agree that the paper is far from complete, results are missing, further experiments are needed as justification, and the presentation of the paper is not very clear. Given this feedback from the reviews, I suggest rejecting the paper. | train | [
"ByeBak8bqH",
"Skx9kUBXcH",
"SJejyVYGqS",
"HkxAk9lVcS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"1. Summary: The authors proposed a deep neural network-based model to generate responses fro task-oriented dialogue systems. The model mainly contains two parts, the first part is to retrieve relevant responses based on question and encode them into templates, the second part is a decoder to generate the response ... | [
1,
1,
1,
1
] | [
3,
4,
4,
5
] | [
"iclr_2020_HJekvT4twr",
"iclr_2020_HJekvT4twr",
"iclr_2020_HJekvT4twr",
"iclr_2020_HJekvT4twr"
] |
iclr_2020_r1glDpNYwS | LabelFool: A Trick in the Label Space | It is widely known that well-designed perturbations can cause state-of-the-art machine learning classifiers to mis-label an image, with sufficiently small perturbations that are imperceptible to the human eyes. However, by detecting the inconsistency between the image and wrong label, the human observer would be alerted of the attack. In this paper, we aim to design attacks that not only make classifiers generate wrong labels, but also make the wrong labels imperceptible to human observers. To achieve this, we propose an algorithm called LabelFool which identifies a target label similar to the ground truth label and finds a perturbation of the image for this target label. We first find the target label for an input image by a probability model, then move the input in the feature space towards the target label. Subjective studies on ImageNet show that in the label space, our attack is much less recognizable by human observers, while objective experimental results on ImageNet show that we maintain similar performance in the image space as well as attack rates to state-of-the-art attack algorithms. | reject | Thanks for the discussion with reviewers, which improved our understanding of your paper significantly.
However, we concluded that this paper is still premature to be accepted to ICLR2020. We hope that the detailed comments by the reviewers help improve your paper for potential future submission. | train | [
"r1g-9ZHlsB",
"S1lFBoocjH",
"HygViL9cjS",
"r1ggfNdqor",
"HJxDPHLcjS",
"BklW6d6tsB",
"r1gvWsUYsB",
"Ske39HWtiB",
"r1e0X66zsr",
"r1xM_96for",
"SJgk3O6GoS",
"HkgFLke2tr",
"B1g1QouqYB"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method to create adversarial perturbations whose target labels are similar to their ground truth. The target labels are selected using an existing perceptual similarity measure for images. Perturbations are generated using a DeepFool-like algorithm. Human evaluation supports that the pair of... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
5
] | [
"iclr_2020_r1glDpNYwS",
"HJxDPHLcjS",
"r1ggfNdqor",
"BklW6d6tsB",
"r1gvWsUYsB",
"SJgk3O6GoS",
"Ske39HWtiB",
"r1e0X66zsr",
"B1g1QouqYB",
"HkgFLke2tr",
"r1g-9ZHlsB",
"iclr_2020_r1glDpNYwS",
"iclr_2020_r1glDpNYwS"
] |
iclr_2020_BJexP6VKwH | Generalized Domain Adaptation with Covariate and Label Shift CO-ALignment | Unsupervised knowledge transfer has a great potential to improve the generalizability of deep models to novel domains. Yet the current literature assumes that the label distribution is domain-invariant and only aligns the covariate or vice versa. In this paper, we explore the task of Generalized Domain Adaptation (GDA): How to transfer knowledge across different domains in the presence of both covariate and label shift? We propose a covariate and label distribution CO-ALignment (COAL) model to tackle this problem. Our model leverages prototype-based conditional alignment and label distribution estimation to diminish the covariate and label shifts, respectively. We demonstrate experimentally that when both types of shift exist in the data, COAL leads to state-of-the-art performance on several cross-domain benchmarks. | reject | This paper proposes a method to address the covariate shift and label shift problems simultaneously.
The paper is an interesting attempt towards an important problem. However, Reviewers and AC commonly believe that the current version is not acceptable due to several major misconceptions and misleading presentations. In particular:
- The novelty of the paper is not very significant.
- The main concern of this work is that its shift assumption is not well justified.
- The proposed method may be problematic by using the minimax entropy and self-training with resampling.
- The presentation has many errors that require a full rewrite.
Hence I recommend rejection. | train | [
"S1xKW4jjiS",
"SyxAC7jsjS",
"SklibXissB",
"HJgoLzjjiH",
"SyeNofoooB",
"HkgBK9A1sr",
"r1gBdreAKH",
"BJlagbmAKS",
"Hyxpo0_N9r",
"r1ekjdP_qH",
"H1eomCuqwH",
"HkeTC4QsPB",
"BkgGDdnqPr"
] | [
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public",
"author"
] | [
"Thank you for your interest in our work! \n\nWe will answer your concerns below:\n\nQ1. Where and how did you estimate the target label distribution?\nThe target label distribution is estimated with our self-training process. As we have trained the classifier with class-balanced source samples, we believe the outp... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
1,
1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
-1,
-1,
-1,
-1
] | [
"r1ekjdP_qH",
"HkgBK9A1sr",
"Hyxpo0_N9r",
"BJlagbmAKS",
"r1gBdreAKH",
"Hyxpo0_N9r",
"iclr_2020_BJexP6VKwH",
"iclr_2020_BJexP6VKwH",
"iclr_2020_BJexP6VKwH",
"iclr_2020_BJexP6VKwH",
"iclr_2020_BJexP6VKwH",
"BkgGDdnqPr",
"H1eomCuqwH"
] |
iclr_2020_SJlbvp4YvS | Risk Averse Value Expansion for Sample Efficient and Robust Policy Learning | Model-based Reinforcement Learning(RL) has shown great advantage in sample-efficiency, but suffers from poor asymptotic performance and high inference cost. A promising direction is to combine model-based reinforcement learning with model-free reinforcement learning, such as model-based value expansion(MVE). However, the previous methods do not take into account the stochastic character of the environment, thus still suffers from higher function approximation errors. As a result, they tend to fall behind the best model-free algorithms in some challenging scenarios. We propose a novel Hybrid-RL method, which is developed from MVE, namely the Risk Averse Value Expansion(RAVE). In the proposed method, we use an ensemble of probabilistic models for environment modeling to generate imaginative rollouts, based on which we further introduce the aversion of risks by seeking the lower confidence bound of the estimation. Experiments on different environments including MuJoCo and robo-school show that RAVE yields state-of-the-art performance. Also we found that it greatly prevented some catastrophic consequences such as falling down and thus reduced the variance of the rewards. | reject | The authors propose to extend model-based/model-free hybrid methods (e.g., MVE, STEVE) to stochastic environments. They use an ensemble of probabilistic models to model the environment and use a lower confidence bound of the estimate to avoid risk. They found that their proposed method yields state-of-the-art performance over previous methods.
The valid concerns by Reviewers 1 & 4 were not addressed by the authors and although the authors responded to Reviewer 3, they did not revise the paper to address their concerns. The ideas and results in this paper are interesting, but without addressing the valid concerns raised by reviewers, I cannot recommend acceptance. | test | [
"SJ06CesjB",
"ByghG225sH",
"SyextZ29sr",
"SJezRg29sr",
"Bylh6JURtH",
"S1etZbgy9B",
"Ske-fbzGqr",
"SylI6P9iqr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the comments and clarifications! I look forward to seeing the new version.",
"Thanks for the explanations. For the loss of the negative log-likelihood, yes I understand it's used for inferring the parameters from the reward samples. However, you should make it clearer how this idea is used and appl... | [
-1,
-1,
-1,
-1,
3,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
4,
3,
1,
4
] | [
"SJezRg29sr",
"SyextZ29sr",
"Ske-fbzGqr",
"SylI6P9iqr",
"iclr_2020_SJlbvp4YvS",
"iclr_2020_SJlbvp4YvS",
"iclr_2020_SJlbvp4YvS",
"iclr_2020_SJlbvp4YvS"
] |
iclr_2020_SkeGvaEtPr | Neural Markov Logic Networks | We introduce Neural Markov Logic Networks (NMLNs), a statistical relational learning system that borrows ideas from Markov logic. Like Markov Logic Networks (MLNs), NMLNs are an exponential-family model for modelling distributions over possible worlds, but unlike MLNs, they do not rely on explicitly specified first-order logic rules. Instead, NMLNs learn an implicit representation of such rules as a neural network that acts as a potential function on fragments of the relational structure. Interestingly, any MLN can be represented as an NMLN. Similarly to recently proposed Neural theorem provers (NTPs) (Rocktaschel at al. 2017), NMLNs can exploit embeddings of constants but, unlike NTPs, NMLNs work well also in their absence. This is extremely important for predicting in settings other than the transductive one. We showcase the potential of NMLNs on knowledge-base completion tasks and on generation of molecular (graph) data. | reject | This paper on extending MLNs using NNs is borderline acceptable: one reviewer is strongly opposed, although I confess I don't really understand their response to the rebuttal or see what the issue with novelty is (a position shared by the other reviewers). I'm not sure how to weigh this review, but there is not a lot of signal in favour of rejection aside from the rating.
The remaining two reviews are in favour of acceptance, with their enthusiasm only bounded by the lack of scalability of the method, something they appreciate the authors are upfront about. My view is this paper brings something new to the table which will interest the community, but doesn't oversell the result.
Given the distribution of papers in my area, this one is just a little too borderline to accept, but this is primarily a reflection of the number of high-quality papers reviewed and the limited space of the conference. I have no doubt this paper will be successful at another conference, and it's a bit of a shame we were not in a position to accept it to this one. | train | [
"BkllX-qycH",
"S1xZDwXgsS",
"B1l4a8mesr",
"rkeBlLXljr",
"BylW-TCTtS",
"rkxIiJNo5r"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents Neural Markov Logic Networks (NMLN), which is a generalization of Markov Logic Networks (MLN). Unlike MLN which relies on pre-specified first-order logic (FOL) rules, NMLN learns potential functions parameterized by neural networks on fragments of the graph. The potential function can possibly... | [
6,
-1,
-1,
-1,
1,
6
] | [
3,
-1,
-1,
-1,
4,
1
] | [
"iclr_2020_SkeGvaEtPr",
"rkxIiJNo5r",
"BkllX-qycH",
"BylW-TCTtS",
"iclr_2020_SkeGvaEtPr",
"iclr_2020_SkeGvaEtPr"
] |
iclr_2020_SkgQwpVYwH | Credible Sample Elicitation by Deep Learning, for Deep Learning | It is important to collect credible training samples (x,y) for building data-intensive learning systems (e.g., a deep learning system). In the literature, there is a line of studies on eliciting distributional information from self-interested agents who hold a relevant information. Asking people to report complex distribution p(x), though theoretically viable, is challenging in practice. This is primarily due to the heavy cognitive loads required for human agents to reason and report this high dimensional information. Consider the example where we are interested in building an image classifier via first collecting a certain category of high-dimensional image data. While classical elicitation results apply to eliciting a complex and generative (and continuous) distribution p(x) for this image data, we are interested in eliciting samples xi∼p(x) from agents. This paper introduces a deep learning aided method to incentivize credible sample contributions from selfish and rational agents. The challenge to do so is to design an incentive-compatible score function to score each reported sample to induce truthful reports, instead of an arbitrary or even adversarial one. We show that with accurate estimation of a certain f-divergence function we are able to achieve approximate incentive compatibility in eliciting truthful samples. We then present an efficient estimator with theoretical guarantee via studying the variational forms of f-divergence function. Our work complements the literature of information elicitation via introducing the problem of \emph{sample elicitation}. We also show a connection between this sample elicitation problem and f-GAN, and how this connection can help reconstruct an estimator of the distribution based on collected samples. 
| reject | The primary contribution of this manuscript is a conceptual and theoretical solution to the sample elicitation problem, where agents are asked to report samples. The procedure is implemented using score functions to evaluate the quality of the samples.
The reviewers and AC agree that the problem studied is timely and interesting, as there is limited work on credible sample elicitation in the literature. However, the reviewers were unconvinced about the motivation of the work, and the clarity of the conceptual results. There is also a lack of empirical evaluation. In the opinion of the AC, this manuscript, while interesting, can be improved by significant revision for clarity and context, and revisions should ideally include some empirical evaluation.
"B1xaYslOor",
"BklArEngjr",
"S1eZp6oliS",
"SkeHgYNhtB",
"Skgt6nCCKS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewers,\n\nWe have updated our draft according to your comments and uploaded a revised draft. We have better motivated why we consider using deep learning to design a data-driven elicitation mechanism. We have also fixed typos and made several clarifications. We thank all reviewers for the helpful comments... | [
-1,
-1,
-1,
6,
1
] | [
-1,
-1,
-1,
1,
3
] | [
"iclr_2020_SkgQwpVYwH",
"SkeHgYNhtB",
"Skgt6nCCKS",
"iclr_2020_SkgQwpVYwH",
"iclr_2020_SkgQwpVYwH"
] |
iclr_2020_rJgVwTVtvS | Gradient Perturbation is Underrated for Differentially Private Convex Optimization | Gradient perturbation, widely used for differentially private optimization, injects noise at every iterative update to guarantee differential privacy. Previous work first determines the noise level that can satisfy the privacy requirement and then analyzes the utility of noisy gradient updates as in non-private case. In this paper, we explore how the privacy noise affects the optimization property. We show that for differentially private convex optimization, the utility guarantee of both DP-GD and DP-SGD is determined by an \emph{expected curvature} rather than the minimum curvature. The \emph{expected curvature} represents the average curvature over the optimization path, which is usually much larger than the minimum curvature and hence can help us achieve a significantly improved utility guarantee. By using the \emph{expected curvature}, our theory justifies the advantage of gradient perturbation over other perturbation methods and closes the gap between theory and practice. Extensive experiments on real world datasets corroborate our theoretical findings. | reject | In this paper, the authors showed that for differentially private convex optimization, the utility guarantee of both DP-GD and DP-SGD is determined by the expected curvature rather than the worst-case minimum curvature. Based on this motivation, the authors justified the advantage of gradient perturbation over other perturbation methods. This is a borderline paper, and has been discussed after author response. The main concerns of this paper include (1) the authors failed to show any loss function that can satisfy the expected curvature inequality; (2) the contribution of this paper is limited, since all the proofs in the paper are just small tweak of existing proofs; (3) this paper does not really improve any existing gradient perturbation based differentially private methods. 
Due to the above concerns, I have to recommend reject. | train | [
"S1xDBYpXiS",
"HJlR86LBsB",
"Bklof5amiS",
"SJxP_5T7or",
"SJl7v3nptS",
"rkgbIe-x9B",
"BJxSHXMe5S"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the review. We hope the following answers can address your concerns.\n\n1. Indeed, our convergence analysis shares the same procedure as the classical optimization analysis. Such analysis is widely used in the DP-ERM literature (i.e. Bassily et al. (2014)). But it is not simply replacing $\\mu$ by $... | [
-1,
-1,
-1,
-1,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"BJxSHXMe5S",
"iclr_2020_rJgVwTVtvS",
"rkgbIe-x9B",
"SJl7v3nptS",
"iclr_2020_rJgVwTVtvS",
"iclr_2020_rJgVwTVtvS",
"iclr_2020_rJgVwTVtvS"
] |
iclr_2020_H1gEP6NFwr | On the Tunability of Optimizers in Deep Learning | There is no consensus yet on the question whether adaptive gradient methods like Adam are easier to use than non-adaptive optimization methods like SGD. In this work, we fill in the important, yet ambiguous concept of ‘ease-of-use’ by defining an optimizer’s tunability: How easy is it to find good hyperparameter configurations using automatic random hyperparameter search? We propose a practical and universal quantitative measure for optimizer tunability that can form the basis for a fair optimizer benchmark. Evaluating a variety of optimizers on an extensive set of standard datasets and architectures, we find that Adam is the most tunable for the majority of problems, especially with a low budget for hyperparameter tuning. | reject | The paper proposed a new metric to define the quality of optimizers as a weighted average of the scores reached after a certain number of hyperparameters have been tested.
While reviewers (and myself) understood the need to better be able to compare optimizers, they failed to be convinced by the proposed solutions. In particular (setting aside several complaints of the reviewers with which I disagree), by defining a very versatile metric, this paper lacks a strong conclusion as the ranking of optimizers would clearly depend on the instantiation of that metric.
Although that is to be expected, by the very behaviour of these optimizers, it makes it unclear what the added value of the metric is. As one reviewer pointed out, all the points made could have been similarly made with other, more common plots.
Ultimately, it wasn't clear to me what the paper was trying to achieve beyond defining a mathematical formula encompassing all "standard" evaluation metric, which I unfortunately see of limited value. | train | [
"HylIN9DOoH",
"r1xnfqPOoS",
"B1iEvPOjB",
"HkeCkPpMoH",
"HylQWL6xiH",
"BylSI7T6qr",
"BJeO-ILxjB",
"rJeWBbVyjS",
"SJx0zByAYr",
"HJeWF-C65S",
"H1gduyRp9r",
"SygUzyAp5B",
"HyxrKf6Tqr",
"rkeO59Qj5B",
"rkxpx9HoqS",
"Hkx_YZzicB",
"SJexXkMjqS",
"r1g3XJCNqB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"public",
"public",
"author",
"public",
"public",
"public"
] | [
"\nc. Importance of search order: \nYou correctly observe that the tunability metric depends strongly on the order in which different hyperparameters are found, and can have a high variance because of that. In Section 4.1 we acknowledge that variance and explain how we quantify it: We simulate many reruns of the HP... | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"r1xnfqPOoS",
"BylSI7T6qr",
"SJx0zByAYr",
"rJeWBbVyjS",
"BJeO-ILxjB",
"iclr_2020_H1gEP6NFwr",
"BylSI7T6qr",
"iclr_2020_H1gEP6NFwr",
"iclr_2020_H1gEP6NFwr",
"rkeO59Qj5B",
"Hkx_YZzicB",
"SJexXkMjqS",
"iclr_2020_H1gEP6NFwr",
"iclr_2020_H1gEP6NFwr",
"r1g3XJCNqB",
"iclr_2020_H1gEP6NFwr",
... |
iclr_2020_ryg8wpEtvB | Evaluating and Calibrating Uncertainty Prediction in Regression Tasks | Predicting not only the target but also an accurate measure of uncertainty is important for many applications and in particular safety-critical ones. In this work we study the calibration of uncertainty prediction for regression tasks which often arise in real-world systems. We show that the existing definition for calibration of a regression uncertainty [Kuleshov et al. 2018] has severe limitations in distinguishing informative from non-informative uncertainty predictions. We propose a new definition that escapes this caveat and an evaluation method using a simple histogram-based approach inspired by reliability diagrams used in classification tasks. Our method clusters examples with similar uncertainty prediction and compares the prediction with the empirical uncertainty on these examples. We also propose a simple scaling-based calibration that preforms well in our experimental tests. We show results on both a synthetic, controlled problem and on the object detection bounding-box regression task using the COCO and KITTI datasets. | reject | The paper investigates calibration for regression problems. The paper identifies a shortcoming of previous work by Kuleshov et al. 2018 and proposes an alternative.
All the reviewers agreed that while this is an interesting direction, the paper requires more work before it can be accepted. In particular, the reviewers raised several concerns about motivation, clarity of the presentation and lack of in-depth empirical evaluation.
I encourage the authors to revise the draft based on the reviewers’ feedback and resubmit to a different venue.
| val | [
"r1lQVT49FB",
"HJxjYldctH",
"HkgpKUBecS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"First, I do not follow the authors’ argumentation in the motivating example where the discus calibration in a mis-specified model. They argue that it should not be possible to calibrate the output of a mis-specified model. It is of course a bad idea to model a Gaussian with a Cauchy distribution (the way it is alw... | [
1,
3,
1
] | [
4,
4,
3
] | [
"iclr_2020_ryg8wpEtvB",
"iclr_2020_ryg8wpEtvB",
"iclr_2020_ryg8wpEtvB"
] |
iclr_2020_SyevDaVYwr | Confidence Scores Make Instance-dependent Label-noise Learning Possible | Learning with noisy labels has drawn a lot of attention. In this area, most of recent works only consider class-conditional noise, where the label noise is independent of its input features. This noise model may not be faithful to many real-world applications. Instead, few pioneer works have studied instance-dependent noise, but these methods are limited to strong assumptions on noise models. To alleviate this issue, we introduce confidence-scored instance-dependent noise (CSIDN), where each instance-label pair is associated with a confidence score. The confidence scores are sufficient to estimate the noise functions of each instance with minimal assumptions. Moreover, such scores can be easily and cheaply derived during the construction of the dataset through crowdsourcing or automatic annotation. To handle CSIDN, we design a benchmark algorithm termed instance-level forward correction. Empirical results on synthetic and real-world datasets demonstrate the utility of our proposed method. | reject | While two reviewers rated this paper as an accept, reviewer 3 strongly believes there are unresolved issues with the work as summarized in their post-rebuttal review. This work seems very promising and while the AC will recommend rejection at this time, the authors are strongly encouraged to resubmit this work. | val | [
"B1eOSJtsjS",
"SJlwDyNooH",
"r1xGNNd2FS",
"HyxhFl1osr",
"ryeinWE_or",
"H1llYpv8oB",
"Bylyi3PUsH",
"BJlzaySMsB",
"SJe2gf3Zsr",
"BJl1Xsq-oS",
"B1xv33IbjB",
"Bye6SmyRYS",
"BJxXPiqHqB"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your appreciation and comments! Please find our responses below.\n\n\"This paper also introduces an assumption that the confidence score for each data is given. To me, this is also a strong assumption. Although the authors have provided examples of how to collect confidence score, collecting confiden... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"BJxXPiqHqB",
"BJlzaySMsB",
"iclr_2020_SyevDaVYwr",
"Bylyi3PUsH",
"Bye6SmyRYS",
"BJlzaySMsB",
"BJl1Xsq-oS",
"SJe2gf3Zsr",
"B1xv33IbjB",
"r1xGNNd2FS",
"Bye6SmyRYS",
"iclr_2020_SyevDaVYwr",
"iclr_2020_SyevDaVYwr"
] |
iclr_2020_rJxwDTVFDB | Pushing the bounds of dropout | We push on the boundaries of our knowledge about dropout by showing theoretically that dropout training can be understood as performing MAP estimation concurrently for an entire family of conditional models whose objectives are themselves lower bounded by the original dropout objective. This discovery allows us to pick any model from this family after training, which leads to a substantial improvement on regularisation-heavy language modelling. The family includes models that compute a power mean over the sampled dropout masks, and their less stochastic subvariants with tighter and higher lower bounds than the fully stochastic dropout objective. The deterministic subvariant's bound is equal to its objective, and the highest amongst these models. It also exhibits the best model fit in our experiments. Together, these results suggest that the predominant view of deterministic dropout as a good approximation to MC averaging is misleading. Rather, deterministic dropout is the best available approximation to the true objective. | reject | The reviewers have uniformly had significant reservations for the paper. Given that the authors did not even try to address them, this suggests the paper should be rejected. | val | [
"BkenstNatH",
"rJlsryRpFB",
"SyluRQZf9r"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new understanding of dropout on top of variational dropout, which shows that training with dropout equals to maximizing an empirical variational lower bound on the log-likelihood. This paper shows that the log posterior have the same lower bound when the inference model p(y|x) is defined by d... | [
3,
3,
3
] | [
4,
5,
3
] | [
"iclr_2020_rJxwDTVFDB",
"iclr_2020_rJxwDTVFDB",
"iclr_2020_rJxwDTVFDB"
] |
iclr_2020_r1x_DaVKwH | Is Deep Reinforcement Learning Really Superhuman on Atari? Leveling the playing field | Consistent and reproducible evaluation of Deep Reinforcement Learning (DRL) is not straightforward. In the Arcade Learning Environment (ALE), small changes in environment parameters such as stochasticity or the maximum allowed play time can lead to very different performance. In this work, we discuss the difficulties of comparing different agents trained on ALE. In order to take a step further towards reproducible and comparable DRL, we introduce SABER, a Standardized Atari BEnchmark for general Reinforcement learning algorithms. Our methodology extends previous recommendations and contains a complete set of environment parameters as well as train and test procedures. We then use SABER to evaluate the current state of the art, Rainbow. Furthermore, we introduce a human world records baseline, and argue that previous claims of expert or superhuman performance of DRL might not be accurate. Finally, we propose Rainbow-IQN by extending Rainbow with Implicit Quantile Networks (IQN) leading to new state-of-the-art performance. Source code is available for reproducibility. | reject | This paper proposes a new benchmark that compares performance of deep reinforcement learning algorithms on the Atari Learning Environment to the best human players. The paper identifies limitations of past evaluations of deep RL agents on Atari. The human baseline scores commonly used in deep RL are not the highest known human scores. To enable learning agents to reach these high scores, the paper recommends allowing the learning agents to play without a time limit. The time limit in Atari is not always consistent across papers, and removing the time limit requires additional software fixes due to some bugs in the game software. These ideas form the core of the paper's proposed new benchmark (SABER). The paper also proposes a new deep RL algorithm that combines earlier ideas.
The reviews and the discussion with the authors brought out several strengths and weaknesses of the proposal. One strength was identifying the best known human performance in these Atari games.
However, the reviewers were not convinced that this new benchmark is useful. The reviewers raised concerns about using clipped rewards, using games that received substantially different amounts of human effort, comparing learning algorithms to human baselines instead of other learning algorithms, and also the continued use of the Atari environment. Given all these many concerns about a new benchmark, the newly proposed algorithm was not viewed as a distraction.
This paper is not ready for publication. The new benchmark proposed for deep reinforcement learning on Atari was not convincing to the reviewers. The paper requires further refinement of the benchmark or further justification for the new benchmark. | train | [
"r1gj4nbAKS",
"S1xBWv6ciH",
"SygUxzacjB",
"r1l_jHntoS",
"r1gUOd5tjB",
"Hke1KModoB",
"HJl_jpzQjS",
"HJlUI5CWoB",
"BJlCZER-oB",
"S1lYKrA-jB",
"rylUiKKiYH",
"HJxg5_Satr"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper revisits the way RL algorithms are typically evaluated on the ALE benchmark, advocating for several key changes that contribute to more robust and reliable comparisons between algorithms. It also brings the following additional contributions: (1) a new measure of comparison to human performance based on... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_r1x_DaVKwH",
"r1l_jHntoS",
"HJl_jpzQjS",
"HJlUI5CWoB",
"Hke1KModoB",
"S1lYKrA-jB",
"iclr_2020_r1x_DaVKwH",
"rylUiKKiYH",
"r1gj4nbAKS",
"HJxg5_Satr",
"iclr_2020_r1x_DaVKwH",
"iclr_2020_r1x_DaVKwH"
] |
iclr_2020_SkgODpVFDr | Incorporating Horizontal Connections in Convolution by Spatial Shuffling | Convolutional Neural Networks (CNNs) are composed of multiple convolution layers and show elegant performance in vision tasks.
The design of the regular convolution is based on the Receptive Field (RF) where the information within a specific region is processed.
In the view of the regular convolution's RF, the outputs of neurons in lower layers with smaller RF are bundled to create neurons in higher layers with larger RF.
As a result, the neurons in high layers are able to capture the global context even though the neurons in low layers only see the local information.
However, in lower layers of the biological brain, the information outside of the RF changes the properties of neurons.
In this work, we extend the regular convolution and propose spatially shuffled convolution (ss convolution).
In ss convolution, the regular convolution is able to use the information outside of its RF by spatial shuffling which is a simple and lightweight operation.
We perform experiments on CIFAR-10 and ImageNet-1k dataset, and show that ss convolution improves the classification performance across various CNNs. | reject | The paper is well-motivated by neuroscience that our brains use information from outside the receptive field of convolutive processes through top-down mechanisms. However, reviewers feel that the results are not near the state of the art and the paper needs further experiments and need to scale to larger datasets. | train | [
"B1l-8NElcH",
"SJelH-XZcH",
"rJeDcCUb9S"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: The authors extended the regular convolution and proposed spatially shuffled convolution to use the information outside of its RF, which is inspired by the idea that horizontal connections are believed to be important for visual processing in the visual cortex in biological brain. The authors proposed ss ... | [
3,
3,
3
] | [
3,
4,
4
] | [
"iclr_2020_SkgODpVFDr",
"iclr_2020_SkgODpVFDr",
"iclr_2020_SkgODpVFDr"
] |
iclr_2020_rkeYvaNKPr | Trajectory representation learning for Multi-Task NMRDPs planning | Expanding Non Markovian Reward Decision Processes (NMRDP) into Markov Decision Processes (MDP) enables the use of state of the art Reinforcement Learning (RL) techniques to identify optimal policies. In this paper an approach to exploring NMRDPs and expanding them into MDPs, without the prior knowledge of the reward structure, is proposed. The non Markovianity of the reward function is disentangled under the assumption that sets of similar and dissimilar trajectory batches can be sampled. More precisely, within the same batch, measuring the similarity between any couple of trajectories is permitted, although comparing trajectories from different batches is not possible. A modified version of the triplet loss is optimised to construct a representation of the trajectories under which rewards become Markovian. | reject | The paper considers a special case of decision making processes with non-Markovian reward functions, where conditioned on an unobserved task-label the reward function becomes Markovian. A semi-supervised loss for learning trajectory embeddings is proposed. The approach is tested on a multi-task grid-world environment and ablation studies are performed.
The reviewers mainly criticize the experiments in the paper. The environments studied are quite simple, leaving it uncertain if the approach still works in more complex settings. Apart from ablation results, no baselines were presented although the setting is similar to continual learning / multi-task learning (with unobserved task label) where prior work does exist. Furthermore, the writing was found to be partially lacking in clarity, although the authors addressed this in the rebuttal.
The paper is somewhat below acceptance threshold, judging from reviews and my own reading, mostly due to lack of convincing experiments. Furthermore, the general setting considered in this paper seems quite specific, and therefore of limited impact. | train | [
"r1gyX6sqjr",
"Hkgj2ho5jS",
"Skgzmnj5sr",
"B1e1Wqs9iH",
"BklQHOmjtH",
"SJeWQR52KS",
"Hylke1JaqS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Detailed comments: \nA point of criticism is that it seems that the authors do not compare with similar or related methods in the experiments. In any case, I think that the paper is interesting and will receive the attention of the community. \n\n> The main reason we do not compare to similar methods is that our w... | [
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
1,
5,
5
] | [
"BklQHOmjtH",
"SJeWQR52KS",
"SJeWQR52KS",
"Hylke1JaqS",
"iclr_2020_rkeYvaNKPr",
"iclr_2020_rkeYvaNKPr",
"iclr_2020_rkeYvaNKPr"
] |
iclr_2020_rJx9vaVtDS | Individualised Dose-Response Estimation using Generative Adversarial Nets | The problem of estimating treatment responses from observational data is by now a well-studied one. Less well studied, though, is the problem of treatment response estimation when the treatments are accompanied by a continuous dosage parameter. In this paper, we tackle this lesser studied problem by building on a modification of the generative adversarial networks (GANs) framework that has already demonstrated effectiveness in the former problem. Our model, DRGAN, is flexible, capable of handling multiple treatments each accompanied by a dosage parameter. The key idea is to use a significantly modified GAN model to generate entire dose-response curves for each sample in the training data which will then allow us to use standard supervised methods to learn an inference model capable of estimating these curves for a new sample. Our model consists of 3 blocks: (1) a generator, (2) a discriminator, (3) an inference block. In order to address the challenge presented by the introduction of dosages, we propose novel architectures for both our generator and discriminator. We model the generator as a multi-task deep neural network. In order to address the increased complexity of the treatment space (because of the addition of dosages), we develop a hierarchical discriminator consisting of several networks: (a) a treatment discriminator, (b) a dosage discriminator for each treatment. In the experiments section, we introduce a new semi-synthetic data simulation for use in the dose-response setting and demonstrate improvements over the existing benchmark models. | reject | This paper addresses the problem of estimating treatment responses involving a continuous dosage parameter. 
The basic idea is to learn a GAN model capable of generating synthetic dose-response curves for each training sample, which then facilitates the supervised training of an inference model that estimates these curves for new cases. For this purpose, specialized architectures are also proposed for the GAN, which involves a multi-task generator network and a hierarchical discriminator network. Empirical results demonstrate improvement over existing methods.
While there is always a chance that reviewers may underappreciate certain aspects of a submission, the fact that there was a unanimous decision to reject this work indicates that the contribution must be better marketed to the ML community. For example, after the rebuttal one reviewer remained unconvinced regarding explanations for why the proposed method is likely to learn the full potential outcome distribution. Among other things, another reviewer felt that both the proposed DRGAN model, and the GANITE framework upon which it is based, were not necessarily working as advertised in the present context. | train | [
"S1l8cmXhYB",
"H1ezwCitiH",
"Skg2l9kXoH",
"HJeeMtkQjS",
"HygWWdJmjS",
"BkeJEv17oH",
"SJlmIlnecH",
"SJxMxb4S5S"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces Dose Response Generative Adversarial Network (DRGAN) that is aimed at generating entire dose-response curve from observational data with single dose treatments. This work is an extension of GANITE (Yoon et al., 2018) for the case of real-valued treatments (i.e., dosage). The proposed model con... | [
1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_rJx9vaVtDS",
"HygWWdJmjS",
"HJeeMtkQjS",
"SJxMxb4S5S",
"SJlmIlnecH",
"S1l8cmXhYB",
"iclr_2020_rJx9vaVtDS",
"iclr_2020_rJx9vaVtDS"
] |
iclr_2020_SJl9PTNYDS | NPTC-net: Narrow-Band Parallel Transport Convolutional Neural Network on Point Clouds | Convolution plays a crucial role in various applications in signal and image processing, analysis and recognition. It is also the main building block of convolution neural networks (CNNs). Designing appropriate convolution neural networks on manifold-structured point clouds can inherit and empower recent advances of CNNs to analyzing and processing point cloud data. However, one of the major challenges is to define a proper way to "sweep" filters through the point cloud as a natural generalization of the planar convolution and to reflect the point cloud's geometry at the same time. In this paper, we consider generalizing convolution by adapting parallel transport on the point cloud. Inspired by a triangulated surface based method \cite{DBLP:journals/corr/abs-1805-07857}, we propose the Narrow-Band Parallel Transport Convolution (NPTC) using a specifically defined connection on a voxelized narrow-band approximation of point cloud data. With that, we further propose a deep convolutional neural network based on NPTC (called NPTC-net) for point cloud classification and segmentation. Comprehensive experiments show that the proposed NPTC-net achieves similar or better results than current state-of-the-art methods on point clouds classification and segmentation. | reject | All the reviewers recommend rejecting the paper. There is no basis for acceptance. | train | [
"B1gacTQ2sr",
"S1xmM0mniS",
"BkgbudX2sS",
"SyxSXmjEYB",
"SJlAu2L0FH",
"ryxzaToRFr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for reading and suggestions. \n\nWe modified Section 1 and defined the notations before use. To make it easier to understand, we provide a more intuitive point of view in Section 2 to explain what we are doing.\n\nOur results may be similar to the SOTA, but we hope to encourage more explorations about th... | [
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
5,
1,
1
] | [
"SJlAu2L0FH",
"SyxSXmjEYB",
"ryxzaToRFr",
"iclr_2020_SJl9PTNYDS",
"iclr_2020_SJl9PTNYDS",
"iclr_2020_SJl9PTNYDS"
] |
iclr_2020_HygiDTVKPr | A Mention-Pair Model of Annotation with Nonparametric User Communities | The availability of large datasets is essential for progress in coreference and other areas of NLP. Crowdsourcing has proven a viable alternative to expert annotation, offering similar quality for better scalability. However, crowdsourcing requires adjudication, and most models of annotation focus on classification tasks where the set of classes is predetermined. This restriction does not apply to anaphoric annotation, where coders relate markables to coreference chains whose number cannot be predefined. This gap was recently covered with the introduction of a mention pair model of anaphoric annotation (MPA). In this work we extend MPA to alleviate the effects of sparsity inherent in some crowdsourcing environments. Specifically, we use a nonparametric partially pooled structure (based on a stick breaking process), fitting jointly with the ability of the annotators hierarchical community profiles. The individual estimates can thus be improved using information about the community when the data is scarce. We show, using a recently published large-scale crowdsourced anaphora dataset, that the proposed model performs better than its unpooled counterpart in conditions of sparsity, and on par when enough observations are available. The model is thus more resilient to different crowdsourcing setups, and, further provides insights into the community of workers. The model is also flexible enough to be used in standard annotation tasks for classification where it registers on par performance with the state of the art. | reject | Thanks to the reviewers and the authors for an interesting discussion. The reviewers are mixed, leaning toward positive, but a few shortcomings were left unaddressed: (i) Turning the task into a mention-pair classification problem ignores the mention detection step, and synergies from joint modeling are lost. (ii) Lee et al.
(2018) has been surpassed by some margin by BERT and SpanBERT, models ignored in this paper. (iii) Several approaches to aggregating structured annotations have already been introduced, e.g., for sequence labelling tasks. [0] Overall, the limited novelty, the missing baselines, and the missing related work lead me to not favor acceptance at this point.
[0] https://www.aclweb.org/anthology/P17-1028/ | train | [
"BJgPWS1oiS",
"SJxqXLiZsH",
"Hkg-QSoZjr",
"rkli1EsWsS",
"SygYYjF-sr",
"Byec-4kCYr",
"H1l4l3C-qS",
"BJgzkH-vqH",
"HJxjk86dcB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I am happy that you will add these clarifications in the paper. It does not change the overall score. ",
"We wish to thank the reviewer for the comments.\n\nConcern\n \nThe paper is an incremental work to the existing one by Paun et al. (2018b). \n \nAnswer\n\nThe paper does build on Paun et al 2018, but the ext... | [
-1,
-1,
-1,
-1,
-1,
3,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
1,
3,
1
] | [
"Hkg-QSoZjr",
"HJxjk86dcB",
"BJgzkH-vqH",
"H1l4l3C-qS",
"Byec-4kCYr",
"iclr_2020_HygiDTVKPr",
"iclr_2020_HygiDTVKPr",
"iclr_2020_HygiDTVKPr",
"iclr_2020_HygiDTVKPr"
] |
iclr_2020_ryx2wp4tvS | MLModelScope: A Distributed Platform for ML Model Evaluation and Benchmarking at Scale | Machine Learning (ML) and Deep Learning (DL) innovations are being introduced at such a rapid pace that researchers are hard-pressed to analyze and study them. The complicated procedures for evaluating innovations, along with the lack of standard and efficient ways of specifying and provisioning ML/DL evaluation, is a major "pain point" for the community. This paper proposes MLModelScope, an open-source, framework/hardware agnostic, extensible and customizable design that enables repeatable, fair, and scalable model evaluation and benchmarking. We implement the distributed design with support for all major frameworks and hardware, and equip it with web, command-line, and library interfaces. To demonstrate MLModelScope's capabilities we perform parallel evaluation and show how subtle changes to model evaluation pipeline affects the accuracy and HW/SW stack choices affect performance. | reject | The paper proposes a platform for benchmarking, and in particular hardware-agnostic evaluation of machine learning models. This is an important problem as our field strives for more reproducibility.
This was a very confusing paper to discuss and review, since most of the reviewers (myself included) do not know much about the area. Two of the reviewers found the paper's contributions sufficient for it to be (weakly) accepted. The third reviewer had many issues with the work and engaged in a lengthy debate with the authors, but there was strong disagreement regarding their understanding of the scope of the paper as a Tools/Systems submission.
Given the lack of consensus, I must recommend rejection at this time, but highly encourage the authors to take the feedback into account and resubmit to a future venue. | train | [
"rkeYrUo3oB",
"BJgsVWo3or",
"r1xlT9d5oH",
"HkllQKc3jH",
"BylQSLt2sr",
"B1lRxZUniS",
"S1e26grhiS",
"rygAs5E2iB",
"Bye2Avy2jr",
"rJxTJ1disB",
"rJxJh5IioH",
"SygzqdociH",
"HyxQI4NPiB",
"H1eVkNEwjB",
"rklBR5qaKr",
"rJggCzLbcB"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Being able to conduct model evaluation at scale is one of the features of the design and follows from how the system is architected.\n\nAs stated in our previous comments, MLModel leverages a microservice architecture design (e.g. as described in https://pytorch.org/blog/model-serving-in-pyorch/) where each evalua... | [
-1,
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
"BJgsVWo3or",
"HkllQKc3jH",
"iclr_2020_ryx2wp4tvS",
"BylQSLt2sr",
"B1lRxZUniS",
"S1e26grhiS",
"rygAs5E2iB",
"Bye2Avy2jr",
"rJxTJ1disB",
"rJxJh5IioH",
"SygzqdociH",
"r1xlT9d5oH",
"rklBR5qaKr",
"rJggCzLbcB",
"iclr_2020_ryx2wp4tvS",
"iclr_2020_ryx2wp4tvS"
] |
iclr_2020_Bke6vTVYwH | Graph convolutional networks for learning with few clean and many noisy labels | In this work we consider the problem of learning a classifier from noisy labels when a few clean labeled examples are given. The structure of clean and noisy data is modeled by a graph per class and Graph Convolutional Networks (GCN) are used to predict class relevance of noisy examples. For each class, the GCN is treated as a binary classifier learning to discriminate clean from noisy examples using a weighted binary cross-entropy loss function, and then the GCN-inferred "clean" probability is exploited as a relevance measure. Each noisy example is weighted by its relevance when learning a classifier for the end task. We evaluate our method on an extended version of a few-shot learning problem, where the few clean examples of novel classes are supplemented with additional noisy data. Experimental results show that our GCN-based cleaning process significantly improves the classification accuracy over not cleaning the noisy data and standard few-shot classification where only few clean examples are used. The proposed GCN-based method outperforms the transductive approach (Douze et al., 2018) that is using the same additional data without labels. | reject | The paper combines graph convolutional networks with noisy label learning. The reviewers feel that novelty in the work is limited and there is a need for further experiments and extensions. | train | [
"rJxXNsJLor",
"SyxC53fBir",
"HkgHcqLVoH",
"r1lg9KIEjS",
"SylESLC6Fr",
"rkgIzF3Zcr"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the review for their positive comments. We agree that our main contribution is to cast a graph convolution network as a binary classifier learning to discriminate clean from noisy data and show its excellent results for few-shot learning. \n\nQ1: In future, I would like to see a joint approach to such tr... | [
-1,
6,
-1,
-1,
6,
6
] | [
-1,
4,
-1,
-1,
3,
1
] | [
"SyxC53fBir",
"iclr_2020_Bke6vTVYwH",
"SylESLC6Fr",
"rkgIzF3Zcr",
"iclr_2020_Bke6vTVYwH",
"iclr_2020_Bke6vTVYwH"
] |
iclr_2020_SkxpDT4YvS | Policy Optimization with Stochastic Mirror Descent | Improving sample efficiency has been a longstanding goal in reinforcement learning.
In this paper, we propose the VRMPO: a sample efficient policy gradient method with stochastic mirror descent.
A novel variance-reduced policy gradient estimator is the key to VRMPO's improved sample efficiency.
Our VRMPO needs only O(ϵ^{-3}) sample trajectories to achieve an ϵ-approximate first-order stationary point,
which matches the best-known sample complexity.
We conduct extensive experiments to show our algorithm outperforms state-of-the-art policy gradient methods in various settings. | reject | This paper proposes a new policy gradient method based on stochastic mirror descent and variance reduction. Both theoretical analysis and experiments are provided to demonstrate the sample efficiency of the proposed algorithm. The main concerns of this paper include: (1) unclear presentation in both the main results and the proof; and (2) missing baselines (e.g., HAPG) in the experiments. This paper has been carefully discussed but even after author response and reviewer discussion, it does not gather sufficient support.
Note: the authors disclosed their identity by adding the author names in the revision during the author response. After discussion with the PC chair, the OpenReview team helped remove that revision during the reviewer discussion to avoid a desk rejection.
| train | [
"rJxuWhn2YS",
"H1er_pvEoB",
"S1xrcRXviH",
"HkeT5vnGoS",
"rJgs13QDsH",
"HJxca3xViB",
"HJe4JbcTtS",
"H1lx1SBgqS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a variant of policy gradient algorithm with mirror descent update, which is a natural generalization of projected policy gradient descent. The authors also proposed a variance reduced policy gradient algorithm following the variance reduction techniques in optimization. The authors further prov... | [
3,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2020_SkxpDT4YvS",
"HJe4JbcTtS",
"iclr_2020_SkxpDT4YvS",
"rJxuWhn2YS",
"rJxuWhn2YS",
"H1lx1SBgqS",
"iclr_2020_SkxpDT4YvS",
"iclr_2020_SkxpDT4YvS"
] |
iclr_2020_rylCP6NFDB | Hindsight Trust Region Policy Optimization | As reinforcement learning continues to drive machine intelligence beyond its conventional boundary, unsubstantial practices in sparse reward environment severely limit further applications in a broader range of advanced fields. Motivated by the demand for an effective deep reinforcement learning algorithm that accommodates sparse reward environment, this paper presents Hindsight Trust Region Policy Optimization (HTRPO), a method that efficiently utilizes interactions in sparse reward conditions to optimize policies within trust region and, in the meantime, maintains learning stability. Firstly, we theoretically adapt the TRPO objective function, in the form of the expected return of the policy, to the distribution of hindsight data generated from the alternative goals. Then, we apply Monte Carlo with importance sampling to estimate KL-divergence between two policies, taking the hindsight data as input. Under the condition that the distributions are sufficiently close, the KL-divergence is approximated by another f-divergence. Such approximation results in the decrease of variance and alleviates the instability during policy update. Experimental results on both discrete and continuous benchmark tasks demonstrate that HTRPO converges significantly faster than previous policy gradient methods. It achieves effective performances and high data-efficiency for training policies in sparse reward environments. | reject | The paper pursues an interesting approach, but requires additional maturation. The experienced reviewers raise several concerns about the current version of the paper. The significance of the contribution was questioned. The paper missed key opportunities to evaluate and justify critical aspects of the proposed approach, via targeted ablation and baseline studies. The quality and clarity of the technical exposition was also criticized. 
The comments submitted by the reviewers should help the authors strengthen the paper. | train | [
"rJlwrDqojB",
"SJemAN9ojH",
"BJxyRm9joB",
"SylU9Q5ojr",
"BklB8lqioB",
"Bye56P4fFH",
"rklIz1_4tH",
"B1xQBe0E5r"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to emphasize the contributions of this paper.\n1. We propose a methodology called Hindsight Trust Region Policy Optimization (HTRPO). In HTRPO, a hindsight form of policy optimization problem within trust region is theoretically derived, which can be approximately solved with the Monte Carlo estimato... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
1
] | [
"iclr_2020_rylCP6NFDB",
"Bye56P4fFH",
"SylU9Q5ojr",
"rklIz1_4tH",
"B1xQBe0E5r",
"iclr_2020_rylCP6NFDB",
"iclr_2020_rylCP6NFDB",
"iclr_2020_rylCP6NFDB"
] |
iclr_2020_B1l0wp4tvr | Information Plane Analysis of Deep Neural Networks via Matrix--Based Renyi's Entropy and Tensor Kernels | Analyzing deep neural networks (DNNs) via information plane (IP) theory has gained tremendous attention recently as a tool to gain insight into, among others, their generalization ability. However, it is by no means obvious how to estimate mutual information (MI) between each hidden layer and the input/desired output, to construct the IP. For instance, hidden layers with many neurons require MI estimators with robustness towards the high dimensionality associated with such layers. MI estimators should also be able to naturally handle convolutional layers, while at the same time being computationally tractable to scale to large networks. None of the existing IP methods to date have been able to study truly deep Convolutional Neural Networks (CNNs), such as the e.g.\ VGG-16. In this paper, we propose an IP analysis using the new matrix--based R\'enyi's entropy coupled with tensor kernels over convolutional layers, leveraging the power of kernel methods to represent properties of the probability distribution independently of the dimensionality of the data. The obtained results shed new light on the previous literature concerning small-scale DNNs, however using a completely new approach. Importantly, the new framework enables us to provide the first comprehensive IP analysis of contemporary large-scale DNNs and CNNs, investigating the different training phases and providing new insights into the training dynamics of large-scale neural networks. | reject | This paper considers the information plane analysis of DNNs. Estimating mutual information is required in such analysis, which is a difficult task for high-dimensional problems. This paper proposes a new "matrix-based Rényi's entropy coupled with tensor kernels over convolutional layers" to solve this problem.
The method seems to be related to an existing approach but is derived using a different "starting point". Overall, the method is able to show improvements in the high-dimensional case.
Both R1 and R3 have been critical of the approach. R3 is not convinced that the method would work in the high-dimensional case and also notes that no simulation studies were provided. In the revised version the authors added a new experiment to address this. Another comment by R3 makes an interesting point: "the estimated quantities evolve during training, and that may be interesting in itself, but calling the estimated quantities mutual information seems like a leap that's not justified in the paper." I could not find an answer in the rebuttal regarding this.
R1 has also commented that the contribution is incremental in light of existing work. The authors mostly agree with this, but insist that the method is derived differently.
Overall, I think this is a reasonable paper with some minor issues. I think it could use another review cycle, in which the paper can be improved with additional results and some of the doubts the reviewers had this time can be addressed.
For now, I recommend rejecting this paper, but encourage the authors to resubmit to another venue after revision.
"rkephS4DsB",
"ryx8NLNwsS",
"ByxPexVvor",
"r1lO2AmwjH",
"Hye7dGFTKB",
"S1lNaTIRKr",
"Bkg2H7Kp9H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"- Comment 5: Imprecise formulation (\"have the same functional form of the statistical quantity\"), \"marginal\", comma between f(X) and g(Y).\n\n- Answer 5: We have taken the liberty to collect three of your comments so they can be answered simultaneously. First, we acknowledge that the sentence \"have the same f... | [
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
3,
1
] | [
"Bkg2H7Kp9H",
"Bkg2H7Kp9H",
"S1lNaTIRKr",
"Hye7dGFTKB",
"iclr_2020_B1l0wp4tvr",
"iclr_2020_B1l0wp4tvr",
"iclr_2020_B1l0wp4tvr"
] |
iclr_2020_HklJdaNYPH | Augmenting Self-attention with Persistent Memory | Transformer networks have led to important progress in language modeling and machine translation. These models include two consecutive modules, a feed-forward layer and a self-attention layer. The latter allows the network to capture long term dependencies and is often regarded as the key ingredient in the success of Transformers. Building upon this intuition, we propose a new model that solely consists of attention layers. More precisely, we augment the self-attention layers with persistent memory vectors that play a similar role as the feed-forward layer. Thanks to these vectors, we can remove the feed-forward layer without degrading the performance of a transformer. Our evaluation shows the benefits brought by our model on standard character and word level language modeling benchmarks. | reject | This paper proposes a modification to the Transformer architecture in which the self-attention and feed-forward layer are merged into a self-attention layer with "persistent" memory vectors. This involves concatenating the contextual representations with global, learned memory vectors, which are attended over. Experiments show slight gains in character and word-level language modeling benchmarks.
While the proposed architectural changes are interesting, they are also rather minor and had a small impact on performance and on the number of model parameters. The motivation of the persistent memory vector as replacing the FF-layer is a bit tenuous since Eqs 5 and 9 are substantially different. Overall the contribution seems a bit thin for an ICLR paper. I suggest more analysis and possibly experimentation in other tasks in a future iteration of this paper.
"BkehdZb_uB",
"H1g81d42oB",
"BJeXmDNnsH",
"S1lXnkxpKB",
"ByxAgnZl9B"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a simple modification to the ubiquitous Transformer model. Noticing that the feed-forward layer of a Transformer layer looks a bit like an attention over \"persistent\" memory vectors, the authors propose to explicitly incorporate this notion directly into the self-attention layer. This involve... | [
3,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
4,
5
] | [
"iclr_2020_HklJdaNYPH",
"BkehdZb_uB",
"S1lXnkxpKB",
"iclr_2020_HklJdaNYPH",
"iclr_2020_HklJdaNYPH"
] |
iclr_2020_HJxJdp4YvS | Variational pSOM: Deep Probabilistic Clustering with Self-Organizing Maps | Generating visualizations and interpretations from high-dimensional data is a
common problem in many fields. Two key approaches for tackling this problem
are clustering and representation learning. There are very performant deep
clustering models on the one hand and interpretable representation learning techniques,
often relying on latent topological structures such as self-organizing maps,
on the other hand. However, current methods do not yet successfully combine
these two approaches. We present a new deep architecture for probabilistic clustering,
VarPSOM, and its extension to time series data, VarTPSOM, composed of VarPSOM
modules connected by LSTM cells. We show that they achieve superior
clustering performance compared to current deep clustering methods on static
MNIST/Fashion-MNIST data as well as medical time series, while inducing an
interpretable representation. Moreover, on the medical time series, VarTPSOM
successfully predicts future trajectories in the original data space. | reject | The authors present a deep model for probabilistic clustering and extend it to handle time series data. The proposed method beats existing deep models on two datasets and the representations learned in the process are also interpretable.
Unfortunately, despite detailed responses by the authors, the reviewers felt that some of their main concerns were not addressed. For example, the authors and the reviewers are still not converging on whether SOM-VAE uses a VAE or an autoencoder. Further, the discussion about the advantages of VAE over AE is still not very convincing. Currently the work is positioned as a variational clustering method but the reviewers feel that it is a clustering method which uses a VAE (yes, I understand that this difference is subtle but needs to be clarified).
The reviewers read the responses of the authors and, during discussions with the AC, suggested that they were still not convinced about some of their initial questions. Given this, at this point I would prefer going by the consensus of the reviewers and recommend that this paper cannot be accepted.
"S1xttm4hiS",
"HJlOgsr5iS",
"Hkgq1iXusr",
"BylOiq7djr",
"SyeBpRGOir",
"BJgyVCz_oH",
"S1l5BDKiYS",
"S1e-sprCtS",
"SyxaZiL6Fr"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your feedback. We respond to the different points raised in your response below.\n\nComparison with SOM-VAE:\nWe agree that the SOM-VAE is very similar to the VQ-VAE framework proposed by van den Oord et al. [2]. Nevertheless, by looking at their reconstruction loss (Equation 1 of the SOM-VAE paper [... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"HJlOgsr5iS",
"SyeBpRGOir",
"BylOiq7djr",
"S1l5BDKiYS",
"SyxaZiL6Fr",
"S1e-sprCtS",
"iclr_2020_HJxJdp4YvS",
"iclr_2020_HJxJdp4YvS",
"iclr_2020_HJxJdp4YvS"
] |
iclr_2020_BJeguTEKDB | INSTANCE CROSS ENTROPY FOR DEEP METRIC LEARNING | Loss functions play a crucial role in deep metric learning thus a variety of them have been proposed. Some supervise the learning process by pairwise or tripletwise similarity constraints while others take the advantage of structured similarity information among multiple data points. In this work, we approach deep metric learning from a novel perspective. We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one. ICE has three main appealing properties. Firstly, similar to categorical cross entropy (CCE), ICE has clear probabilistic interpretation and exploits structured semantic similarity information for learning supervision. Secondly, ICE is scalable to infinite training data as it learns on mini-batches iteratively and is independent of the training set size. Thirdly, motivated by our relative weight analysis, seamless sample reweighting is incorporated. It rescales samples’ gradients to control the differentiation degree over training examples instead of truncating them by sample mining. In addition to its simplicity and intuitiveness, extensive experiments on three real-world benchmarks demonstrate the superiority of ICE. | reject | The paper proposes a new objective function called ICE for metric learning.
There was a substantial discussion with the authors about this paper. The two reviewers most experienced in the field found the novelty compared to the vast existing literature lacking, and remained unconvinced after the discussion. Some reviewers also found the technical presentation and interpretations to need improvement, and this was partially addressed by a new revision.
Based on this discussion, I recommend a rejection at this time, but encourage the authors to incorporate the feedback and in particular place the work in context more fully, and resubmit to another venue. | train | [
"BkelHNmssH",
"BklyihIijS",
"BJlv3C1ssS",
"SJg5d965oS",
"BylCWh7loH",
"BkxTY8RGjS",
"rklBK-kqsS",
"Hkg88mzdsr",
"HJew1MGhFr",
"ryxSBCNAFr"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThank you for your reply and new reviews. \n\n3.a. Comments on the feature normalisation. \nIt is true that traditionally, CCE is trained naively without feature/weight normalisation. \nRecently, normalisation for weights parameters and output representations becomes popular and is widely studied. It has been d... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
1
] | [
"SJg5d965oS",
"BJlv3C1ssS",
"BylCWh7loH",
"rklBK-kqsS",
"HJew1MGhFr",
"ryxSBCNAFr",
"Hkg88mzdsr",
"iclr_2020_BJeguTEKDB",
"iclr_2020_BJeguTEKDB",
"iclr_2020_BJeguTEKDB"
] |
iclr_2020_Syl-_aVtvH | Federated User Representation Learning | Collaborative personalization, such as through learned user representations (embeddings), can improve the prediction accuracy of neural-network-based models significantly. We propose Federated User Representation Learning (FURL), a simple, scalable, privacy-preserving and resource-efficient way to utilize existing neural personalization techniques in the Federated Learning (FL) setting. FURL divides model parameters into federated and private parameters. Private parameters, such as private user embeddings, are trained locally, but unlike federated parameters, they are not transferred to or averaged on the server. We show theoretically that this parameter split does not affect training for most model personalization approaches. Storing user embeddings locally not only preserves user privacy, but also improves memory locality of personalization compared to on-server training. We evaluate FURL on two datasets, demonstrating a significant improvement in model quality with 8% and 51% performance increases, and approximately the same level of performance as centralized training with only 0% and 4% reductions. Furthermore, we show that user embeddings learned in FL and the centralized setting have a very similar structure, indicating that FURL can learn collaboratively through the shared parameters while preserving user privacy. | reject | This manuscript applies personalization techniques to improve the scalability and privacy preservation of federated learning. Empirical results are provided which suggest improved performance.
The reviewers and AC agree that the problem studied is timely and interesting, as the tradeoffs between personalization and performance are a known concern in federated learning. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty and clarity of the conceptual and empirical results. Reviewers were also unconvinced by the provided empirical evaluation results. | val | [
"B1lEfksosH",
"ByxxiNE8iH",
"SJeGpxFFYB",
"BkgTC6L6KS",
"HJx8Dzey5S",
"SyggxlxnKS",
"r1x97E5B_S"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Dear reviewers,\n\nThank you for the thorough reviews!\n\nFor the comments on novelty, although our method is not based on a complex theoretical ideas, it is simple yet practical. With minimal modification, the any personalization models which satisfy the split-personalization constraint can be used in FL in a sca... | [
-1,
-1,
8,
3,
1,
-1,
-1
] | [
-1,
-1,
1,
4,
4,
-1,
-1
] | [
"iclr_2020_Syl-_aVtvH",
"SJeGpxFFYB",
"iclr_2020_Syl-_aVtvH",
"iclr_2020_Syl-_aVtvH",
"iclr_2020_Syl-_aVtvH",
"r1x97E5B_S",
"iclr_2020_Syl-_aVtvH"
] |
iclr_2020_rkezdaEtvH | Hyperbolic Discounting and Learning Over Multiple Horizons | Reinforcement learning (RL) typically defines a discount factor as part of the Markov Decision Process. The discount factor values future rewards by an exponential scheme that leads to theoretical convergence guarantees of the Bellman equation. However, evidence from psychology, economics and neuroscience suggests that humans and animals instead have hyperbolic time-preferences. Here we extend earlier work of Kurth-Nelson and Redish and propose an efficient deep reinforcement learning agent that acts via hyperbolic discounting and other non-exponential discount mechanisms. We demonstrate that a simple approach approximates hyperbolic discount functions while still using familiar temporal-difference learning techniques in RL. Additionally, and independent of hyperbolic discounting, we make a surprising discovery that simultaneously learning value functions over multiple time-horizons is an effective auxiliary task which often improves over state-of-the-art methods. | reject | While there was some support for the ideas presented in this paper, it was on the borderline, and ultimately did not make the cut for publication at ICLR.
Concerns were raised as to the significance of the contribution, beyond that of past work. | train | [
"BJgxFCYssS",
"Sye_B8wjjB",
"H1epWKSoiH",
"BJl73H99jB",
"ryeh5wkcjB",
"H1e2NwJ4jB",
"SJgtrgdXsS",
"ryldEjDmor",
"SJgBEDMhtB",
"S1lYtMykcB"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"OK - makes sense. Yes, a deeper analysis of the auxiliary task, including as you suggested earlier, (1) measuring interaction with other auxiliary tasks and (2) performance as a function of number of heads would be useful future work. \n\nFinally, in a related note on our auxiliary task, we are using shallow fun... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
1,
4
] | [
"Sye_B8wjjB",
"H1epWKSoiH",
"BJl73H99jB",
"ryeh5wkcjB",
"H1e2NwJ4jB",
"iclr_2020_rkezdaEtvH",
"S1lYtMykcB",
"SJgBEDMhtB",
"iclr_2020_rkezdaEtvH",
"iclr_2020_rkezdaEtvH"
] |
iclr_2020_S1emOTNKvS | Robust Graph Representation Learning via Neural Sparsification | Graph representation learning serves as the core of many important prediction tasks, ranging from product recommendation in online marketing to fraud detection in financial domain. Real-life graphs are usually large with complex local neighborhood, where each node is described by a rich set of features and easily connects to dozens or even hundreds of neighbors. Most existing graph learning techniques rely on neighborhood aggregation, however, the complexity on real-life graphs is usually high, posing non-trivial overfitting risk during model training. In this paper, we present Neural Sparsification (NeuralSparse), a supervised graph sparsification technique that mitigates the overfitting risk by reducing the complexity of input graphs. Our method takes both structural and non-structural information as input, utilizes deep neural networks to parameterize the sparsification process, and optimizes the parameters by feedback signals from downstream tasks. Under the NeuralSparse framework, supervised graph sparsification could seamlessly connect with existing graph neural networks for more robust performance on testing data. Experimental results on both benchmark and private datasets show that NeuralSparse can effectively improve testing accuracy and bring up to 7.4% improvement when working with existing graph neural networks on node classification tasks. | reject | This submission proposes a graph sparsification mechanism that can be used when training GNNs.
Strengths:
-The paper is easy to follow.
-The proposed method is sound and effective.
Weaknesses:
-The novelty is limited.
Given the limited novelty and the number of strong submissions to ICLR, this submission, while promising, does not meet the bar for acceptance. | train | [
"HyxjK2qniS",
"HkxPy4yhoS",
"B1xlsMPssr",
"r1ei61PDYr",
"ByxhW2TNsH",
"r1xxZipViH",
"rklYK5TEsB",
"r1xkiEt2Fr",
"ryeIwujgcB"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear AC and reviewers,\n\nWe sincerely appreciate the valuable comments from the reviewers. The constructive suggestions indeed help us further improve the paper.\n\nMeanwhile, we feel sorry for not receiving further feedback from reviewer#2. We fully respect the reviewer's comments and have spent significant effo... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
1,
8
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
5,
3
] | [
"iclr_2020_S1emOTNKvS",
"ByxhW2TNsH",
"r1xkiEt2Fr",
"iclr_2020_S1emOTNKvS",
"r1ei61PDYr",
"r1xkiEt2Fr",
"ryeIwujgcB",
"iclr_2020_S1emOTNKvS",
"iclr_2020_S1emOTNKvS"
] |
iclr_2020_B1eX_a4twH | Superseding Model Scaling by Penalizing Dead Units and Points with Separation Constraints | In this article, we study a proposal that enables to train extremely thin (4 or 8 neurons per layer) and relatively deep (more than 100 layers) feedforward networks without resorting to any architectural modification such as Residual or Dense connections, data normalization or model scaling. We accomplish that by alleviating two problems. One of them are neurons whose output is zero for all the dataset, which renders them useless. This problem is known to the academic community as \emph{dead neurons}. The other is a less studied problem, dead points. Dead points refers to data points that are mapped to zero during the forward pass of the network. As such, the gradient generated by those points is not propagated back past the layer where they die, thus having no effect in the training process. In this work, we characterize both problems and propose a constraint formulation that added to the standard loss function solves them both. As an additional benefit, the proposed method allows to initialize the network weights with constant or even zero values and still allowing the network to converge to reasonable results. We show very promising results on a toy, MNIST, and CIFAR-10 datasets. | reject | This paper proposes constraints to tackle the problems of dead neurons and dead points. The reviewers point out that the experiments are only done on small datasets and it is not clear if the experiments will scale further. I encourage the authors to carry out further experiments and submit to another venue. | train | [
"HyeNFfRWnr",
"SyghLfCbhH",
"SJldceB15H",
"BJeUAR0wKS",
"BklblRx5iS",
"HylKOcadsr",
"BJlb3O3djH",
"H1lcSshdoH",
"rJlcmT2doH",
"r1xpaZa_jr",
"rkxSv_adoH",
"BJx-JVRujH",
"SyxIfjTuiS",
"Hyga_pTdor",
"SklSReRusS",
"BJgL7CxOqS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"\n(comments are in reverse order)\n\n> 2.3\n\n5000 epochs should be enough indeed.\n\nIn short, what I meant is that controlling the learning rate by fixing it instead of optimizing it will likely lead to an exacerbated difference between ReLU and other results. I don't believe it would turn the results upside-dow... | [
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
-1,
-1,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1
] | [
"BklblRx5iS",
"BklblRx5iS",
"iclr_2020_B1eX_a4twH",
"iclr_2020_B1eX_a4twH",
"iclr_2020_B1eX_a4twH",
"iclr_2020_B1eX_a4twH",
"iclr_2020_B1eX_a4twH",
"iclr_2020_B1eX_a4twH",
"iclr_2020_B1eX_a4twH",
"iclr_2020_B1eX_a4twH",
"iclr_2020_B1eX_a4twH",
"iclr_2020_B1eX_a4twH",
"iclr_2020_B1eX_a4twH",
... |
iclr_2020_r1lEd64YwH | Learning Semantically Meaningful Representations Through Embodiment | How do humans acquire a meaningful understanding of the world with little to no supervision or semantic labels provided by the environment? Here we investigate embodiment and a closed loop between action and perception as one key component in this process. We take a close look at the representations learned by a deep reinforcement learning agent that is trained with visual and vector observations collected in a 3D environment with sparse rewards. We show that this agent learns semantically meaningful and stable representations of its environment without receiving any semantic labels. Our results show that the agent learns to represent the action relevant information extracted from pixel input in a wide variety of sparse activation patterns. The quality of the representations learned shows the strength of embodied learning and its advantages over fully supervised approaches with regards to robustness and generalizability. | reject | The paper investigates what kind of representations are formed by embodied agents; it is argued that these are different from those of non-embodied agents. This is an interesting question related to foundational AI and Alife questions, such as the symbol grounding problem. Unfortunately, the empirical investigations are insufficient. In particular, there is no comparison with a non-embodied control condition. The reviewers point this out, and the authors propose a different control condition, which unfortunately is not sufficient to test the hypothesis.
This paper should be rejected in its current form, but the question is interesting and hopefully the authors will do the missing experiments and submit a new version of the paper. | train | [
"H1eyyIUhsS",
"B1el1NKijH",
"rJxQskxsir",
"Skxrl81XsB",
"BylBESy7iH",
"Bkg8A4k7iB",
"Hylx6OChtB",
"SJgrTBn1KS",
"rkgKdL1RtS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I do not think that the autoencoder experiments are a useful comparison because the autoencoder is trained on a completely different task than the RL agent: image reconstruction vs. value/action prediction. The autoencoder is trained to encode a lot of information that is irrelevant to the RL agent.\n\nInstead of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"Bkg8A4k7iB",
"rJxQskxsir",
"BylBESy7iH",
"SJgrTBn1KS",
"Hylx6OChtB",
"rkgKdL1RtS",
"iclr_2020_r1lEd64YwH",
"iclr_2020_r1lEd64YwH",
"iclr_2020_r1lEd64YwH"
] |
iclr_2020_BJgEd6NYPH | Ellipsoidal Trust Region Methods for Neural Network Training | We investigate the use of ellipsoidal trust region constraints for second-order optimization of neural networks. This approach can be seen as a higher-order counterpart of adaptive gradient methods, which we here show to be interpretable as first-order trust region methods with ellipsoidal constraints. In particular, we show that the preconditioning matrix used in RMSProp and Adam satisfies the necessary conditions for provable convergence of second-order trust region methods with standard worst-case complexities. Furthermore, we run experiments across different neural architectures and datasets to find that the ellipsoidal constraints constantly outperform their spherical counterpart both in terms of number of backpropagations and asymptotic loss value. Finally, we find comparable performance to state-of-the-art first-order methods in terms of backpropagations, but further advances in hardware are needed to render Newton methods competitive in terms of time. | reject | This paper interprets adaptive gradient methods as trust region methods, and then extends the trust regions to axis-aligned ellipsoids determined by the approximate curvature. It's fairly natural to try to extend the algorithms in this way, but the paper doesn't show much evidence that this is actually effective. (The experiments show an improvement only in terms of iterations, which doesn't account for the computational cost or the increased batch size; there doesn't seem to be an improvement in terms of epochs.) I suspect the second-order version might also lose some of the online convex optimization guarantees of the original methods, raising the question of whether the trust-region interpretation really captures the benefits of the original methods. The reviewers recommend rejection (even after discussion) because they are unsatisfied with the experiments; I agree with their assessment.
| train | [
"SklwP8PbcS",
"H1e970N2Fr",
"Byg1zRwCFB",
"HkeB7BrKor",
"BkeW27rtir",
"ryx_SIWwiB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"In this paper, the authors investigate the use of ellipsoidal trust region constraints for second order optimization. The authors first show that adaptive gradient methods can be viewed as first-order trust region methods with ellipsoid constraints. The authors then show that the preconditioning matrix of RMSProp ... | [
3,
3,
6,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
-1,
-1
] | [
"iclr_2020_BJgEd6NYPH",
"iclr_2020_BJgEd6NYPH",
"iclr_2020_BJgEd6NYPH",
"SklwP8PbcS",
"Byg1zRwCFB",
"H1e970N2Fr"
] |
iclr_2020_H1xSOTVtvH | Robust Domain Randomization for Reinforcement Learning | Producing agents that can generalize to a wide range of environments is a significant challenge in reinforcement learning. One method for overcoming this issue is domain randomization, whereby at the start of each training episode some parameters of the environment are randomized so that the agent is exposed to many possible variations. However, domain randomization is highly inefficient and may lead to policies with high variance across domains. In this work, we formalize the domain randomization problem, and show that minimizing the policy's Lipschitz constant with respect to the randomization parameters leads to low variance in the learned policies. We propose a method where the agent only needs to be trained on one variation of the environment, and its learned state representations are regularized during training to minimize this constant. We conduct experiments that demonstrate that our technique leads to more efficient and robust learning than standard domain randomization, while achieving equal generalization scores. | reject | The paper presents a technique for learning RL agents to generalize well to unseen environments.
All reviewers and the AC think that the paper has some potential but is a bit below the bar for acceptance, for the following reasons:
(a) Limited experiments: the paper should consider stronger baselines/scenarios and provide more experimental details.
(b) The proposed method/idea is simple and reasonable, but not sufficiently novel for the high ICLR standard (though potentially suitable for a workshop paper).
Hence, I think this is a borderline paper toward rejection.
| test | [
"S1gzAdmp_r",
"SyeW2CIEsH",
"SkluvCIVsS",
"rke1cC8VsH",
"H1gGDk4uKH",
"Bklrq4s0KS"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces the high variance policies challenge in domain randomization for reinforcement learning. The paper gives a new bound for the expected return of the policy when the policy is Lipschitz continuous. Then the paper proposes a new method to minimize the Lipschitz constant for policies of all random... | [
3,
-1,
-1,
-1,
3,
3
] | [
3,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_H1xSOTVtvH",
"S1gzAdmp_r",
"Bklrq4s0KS",
"H1gGDk4uKH",
"iclr_2020_H1xSOTVtvH",
"iclr_2020_H1xSOTVtvH"
] |
iclr_2020_r1lUdpVtwB | Context Based Machine Translation With Recurrent Neural Network For English-Amharic Translation | The current approaches for machine translation usually require large set of parallel corpus in order to achieve fluency like in the case of neural machine translation (NMT), statistical machine translation (SMT) and example-based machine translation (EBMT). The context awareness of phrase-based machine translation (PBMT) approaches is also questionable. This research develops a system that translates English text to Amharic text using a combination of context based machine translation (CBMT) and a recurrent neural network machine translation (RNNMT). We built a bilingual dictionary for the CBMT system to use along with a large target corpus. The RNNMT model has then been provided with the output of the CBMT and a parallel corpus for training. Our combinational approach on English-Amharic language pair yields a performance improvement over the simple neural machine translation (NMT). | reject | The authors propose a model that combines a neural machine translation system with a context-based machine translation model, which incorporates some aspects of rule- and example-based MT. This paper presents work based on obsolete techniques, has relatively low novelty, has a problematic experimental design, and lacks compelling performance improvements. The authors rebutted some of the reviewers' claims, but did not convince them to change their scores.
"rJg7IowhsB",
"rkl7jCgiiS",
"S1g_CoeosB",
"H1gL3ngoiH",
"r1e-hTxAKH",
"Hyl4Le2g9r",
"r1gCkziQcr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Regarding the suitability for ICLR: I agree that machine translation is well within the domain of interest of ICLR participants. The issue is w.r.t. the low novelty and significance of this specific paper - in the context of a machine learning conference. For a more focused conference on machine translation - this... | [
-1,
-1,
-1,
-1,
1,
1,
1
] | [
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"S1g_CoeosB",
"r1e-hTxAKH",
"r1gCkziQcr",
"Hyl4Le2g9r",
"iclr_2020_r1lUdpVtwB",
"iclr_2020_r1lUdpVtwB",
"iclr_2020_r1lUdpVtwB"
] |
iclr_2020_S1xI_TEtwS | Amata: An Annealing Mechanism for Adversarial Training Acceleration | Despite of the empirical success in various domains, it has been revealed that deep neural networks are vulnerable to maliciously perturbed input data that much degrade their performance. This is known as adversarial attacks. To counter adversarial attacks, adversarial training formulated as a form of robust optimization has been demonstrated to be effective. However, conducting adversarial training brings much computational overhead compared with standard training. In order to reduce the computational cost, we propose a simple yet effective modification to the commonly used projected gradient descent (PGD) adversarial training by increasing the number of adversarial training steps and decreasing the adversarial training step size gradually as training proceeds. We analyze the optimality of this annealing mechanism through the lens of optimal control theory, and we also prove the convergence of our proposed algorithm. Numerical experiments on standard datasets, such as MNIST and CIFAR10, show that our method can achieve similar or even better robustness with around 1/3 to 1/2 computation time compared with PGD. | reject | The paper proposes a modification for adversarial training in order to improve the robustness of the algorithm by developing an annealing mechanism for PGD adversarial training. This mechanism gradually reduces the step size and increases the number of iterations of PGD maximization. One reviewer found the paper to be clear and competitive with existing work, but raised concerns of novelty and significance. Another reviewer noted the significant improvements in training times but had concerns about small scale datasets. The final reviewer liked the optimal control formulation, and requested further details. The authors provided detailed answers and responses to the reviews, although some of these concerns remain. 
The paper has improved over the course of the review, but due to a large number of stronger papers, was not accepted at this time. | val | [
"rJlHPwG3or",
"B1lyevMniB",
"Sye_9DMniB",
"SkeNVvMnjH",
"BJgaBtsAKH",
"BylzHYg15H",
"S1ebs9cc5H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your comments. \n\nOn the clarity role of optimal control:\nIn our revision, we have improved the clarity by adding explanations of the different variables and the optimal control results used (e.g. the optimal control criterion). See also the answers to clarity questions to reviewer 1. \nT... | [
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"BylzHYg15H",
"iclr_2020_S1xI_TEtwS",
"BJgaBtsAKH",
"S1ebs9cc5H",
"iclr_2020_S1xI_TEtwS",
"iclr_2020_S1xI_TEtwS",
"iclr_2020_S1xI_TEtwS"
] |
iclr_2020_HJlP_pEFPH | SRDGAN: learning the noise prior for Super Resolution with Dual Generative Adversarial Networks | Single Image Super Resolution (SISR) is the task of producing a high resolution (HR) image from a given low-resolution (LR) image. It is a well researched problem with extensive commercial applications like digital camera, video compression, medical imaging, etc. Most recent super resolution works focus on the feature learning architecture, like Chao Dong (2016); Dong et al. (2016); Wang et al. (2018b); Ledig et al. (2017). However, these works suffer from the following challenges: (1) The low-resolution (LR) training images are artificially synthesized using HR images with bicubic downsampling, which have much more information than real demosaic-upscaled images. The mismatch between training and realistic mobile data heavily blocks the effect on practical SR problem. (2) These methods cannot effectively handle the blind distortions during super resolution in practical applications. In this work, an end-to-end novel framework, including high-to-low network and low-to-high network, is proposed to solve the above problems with dual Generative Adversarial Networks (GAN). First, the above mismatch problems are well explored with the high-to-low network, where clear high-resolution image and the corresponding realistic low-resolution image pairs can be generated. With high-to-low network, a large-scale General Mobile Super Resolution Dataset, GMSR, is proposed, which can be utilized for training or as a benchmark for super resolution methods. Second, an effective low-to-high network (super resolution network) is proposed in the framework. Benefiting from the GMSR dataset and novel training strategies, the proposed super resolution model can effectively handle detail recovery and denoising at the same time. 
| reject | All reviewers agree that the authors have done a great job identifying weaknesses with the current SOTA in super-resolution. However, there is also agreement that the proposed approach may be too simple to accurately capture a range of real camera distortions, and more comparisons to the SOTA are needed. While this paper certainly has merits and opens the door for strong work in the future, there is not enough support to accept the paper in its current form. | train | [
"BygtgrvnKS",
"SkgLCEuTKB",
"HkevFaACKB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In the paper, the authors proposed an end-to-end single image super-resolution framework, which is composed of two parts. First, a high-to-low network is trained to generate realistic HR/LR image pairs for training super-resolution models. Using this network, the authors designed a large-scale General Mobile Super... | [
3,
3,
1
] | [
4,
5,
5
] | [
"iclr_2020_HJlP_pEFPH",
"iclr_2020_HJlP_pEFPH",
"iclr_2020_HJlP_pEFPH"
] |
iclr_2020_rJlDO64KPH | Self-Supervised Speech Recognition via Local Prior Matching | We propose local prior matching (LPM), a self-supervised objective for speech recognition. The LPM objective leverages a strong language model to provide learning signal given unlabeled speech. Since LPM uses a language model, it can take advantage of vast quantities of both unpaired text and speech. The loss is theoretically well-motivated and simple to implement. More importantly, LPM is effective. Starting from a model trained on 100 hours of labeled speech, with an additional 360 hours of unlabeled data LPM reduces the WER by 26% and 31% relative on a clean and noisy test set, respectively. This bridges the gap by 54% and 73% WER on the two test sets relative to a fully supervised model on the same 360 hours with labels. By augmenting LPM with an additional 500 hours of noisy data, we further improve the WER on the noisy test set by 15% relative. Furthermore, we perform extensive ablative studies to show the importance of various configurations of our self-supervised approach. | reject | The paper proposes local prior matching, which utilizes a language model to rescore the hypotheses generated by a teacher model on unlabeled data; these are then used to train the student model for improvement. The experimental results on Librispeech are thorough. But there are two concerns with this paper: 1) limited novelty: an LM trained on large text data is already used in weak distillation, and the only difference is the use of multiple hypotheses. As pointed out by the reviewers, the method is better understood through distillation, even though the authors try to derive it from a Bayesian perspective. 2) Librispeech is a medium-sized dataset; justification on much larger datasets for ASR would make it more convincing.
"rJgtIA0isH",
"ryeVVYzMir",
"Bkgs5uzGoS",
"BJl1NPzMjB",
"SJeEslb2tS",
"H1g7aNtaYB",
"B1lHoTTIqS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We created another three subsets: (1) train-clean-180, (2) train-other-180, and (3) train-other-360 to study the LPM performances with different unlabeled speech sizes. In particular, (1) contains 180 hrs of speech sampled from train-clean-360, (3) contains 360 hrs of speech sampled from train-other-500, and (2) c... | [
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"ryeVVYzMir",
"SJeEslb2tS",
"H1g7aNtaYB",
"B1lHoTTIqS",
"iclr_2020_rJlDO64KPH",
"iclr_2020_rJlDO64KPH",
"iclr_2020_rJlDO64KPH"
] |
iclr_2020_SJldu6EtDS | Wasserstein Adversarial Regularization (WAR) on label noise | Noisy labels often occur in vision datasets, especially when they are obtained from crowdsourcing or Web scraping. We propose a new regularization method, which enables learning robust classifiers in presence of noisy data. To achieve this goal, we propose a new adversarial regularization scheme based on the Wasserstein distance. Using this distance allows taking into account specific relations between classes by leveraging the geometric properties of the labels space. Our Wasserstein Adversarial Regularization (WAR) encodes a selective regularization, which promotes smoothness of the classifier between some classes, while preserving sufficient complexity of the decision boundary between others. We first discuss how and why adversarial regularization can be used in the context of label noise and then show the effectiveness of our method on five datasets corrupted with noisy labels: in both benchmarks and real datasets, WAR outperforms the state-of-the-art
competitors. | reject | This article proposes a regularisation scheme to learn classifiers that take into account similarity of labels, and presents a series of experiments. The reviewers found the approach plausible, the paper well written, and the experiments sufficient. At the same time, they expressed concerns, mentioning that the technical contribution is limited (in particular, the Wasserstein distance has been used before in estimation of conditional distributions and in multi-label learning), and that it would be important to put more effort into learning the metric. The author responses clarified a few points and agreed that learning the metric is an interesting problem. There were also concerns about the competitiveness of the approach, which were addressed in part in the authors' responses, albeit not fully convincing all of the reviewers. This article proposes an interesting technique for a relevant type of problem, and demonstrates through extensive experiments that it can be competitive. Although this is a reasonably good article, it is not good enough, given the very high acceptance bar for this year's ICLR.
| train | [
"BJx9vswatr",
"ByethZeojr",
"rJlyYK2cor",
"B1gLWF3_oH",
"rygdfE3dir",
"HkgzZ73dsr",
"rkeJnoruKS",
"rylCRiKCYS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nUpdate after rebuttal:\nThe rebuttal addresses my concerns. The point raised by reviewer 2 regarding other results on Clothing1M is concerning, but the authors' response is reasonable. Moreover, the proposed method is novel and could be complementary with other methods that achieve good results on Clothing1M. I ... | [
8,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"iclr_2020_SJldu6EtDS",
"rJlyYK2cor",
"B1gLWF3_oH",
"rkeJnoruKS",
"rylCRiKCYS",
"BJx9vswatr",
"iclr_2020_SJldu6EtDS",
"iclr_2020_SJldu6EtDS"
] |
iclr_2020_rked_6NFwH | Path Space for Recurrent Neural Networks with ReLU Activations | It is well known that neural networks with rectified linear units (ReLU) activation functions are positively scale-invariant (i.e., the neural network is invariant to positive rescaling of weights). Optimization algorithms like stochastic gradient descent optimize neural networks in the vector space of weights, which is not positively scale-invariant. To resolve this mismatch, a new parameter space called path space has been proposed for feedforward and convolutional neural networks. The path space is positively scale-invariant, and optimization algorithms operating in path space have been shown to be superior to those in the original weight space. However, the theory of path space and the corresponding optimization algorithms cannot be naturally extended to more complex neural networks, like Recurrent Neural Networks (RNN), due to the recurrent structure and the parameter sharing scheme over time. In this work, we aim to construct a path space for RNN with ReLU activations so that we can employ optimization algorithms in path space. To achieve this goal, we propose leveraging the reduction graph of an RNN, which removes the influence of time-steps, and prove that the values of its paths can serve as a sufficient representation of the RNN with ReLU activations. We then prove that the path space for RNN is composed of the basis paths in the reduction graph, and design a \emph{Skeleton Method} to identify the basis paths efficiently. With the identified basis paths, we develop an optimization algorithm in path space for RNN models. Our experiments on several benchmark datasets show that we can obtain significantly more effective RNN models in this way than by using optimization methods in the weight space. | reject | The scores of the reviewers are just far too low to warrant an acceptance recommendation from the AC. | train | [
"SJea-J2ior",
"BylsACcssS",
"BJg3raqoiH",
"B1eWBC5ooS",
"Skg8bjXTFH",
"BJgnP_s15B",
"ryxFe1MT9B"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"1. Thanks for your valuable suggestions on this paper. We have made the following revisions. In Section 2, we have revised the description of the graph for RNN by adding the description of edges. We also refine Definition 1 and change to use different notations to denote node and its value. \n\n2. Please... | [
-1,
-1,
-1,
-1,
3,
3,
1
] | [
-1,
-1,
-1,
-1,
3,
5,
1
] | [
"Skg8bjXTFH",
"ryxFe1MT9B",
"iclr_2020_rked_6NFwH",
"BJgnP_s15B",
"iclr_2020_rked_6NFwH",
"iclr_2020_rked_6NFwH",
"iclr_2020_rked_6NFwH"
] |
iclr_2020_HJlY_6VKDr | BUZz: BUffer Zones for defending adversarial examples in image classification | We propose a novel defense against all existing gradient-based adversarial attacks on deep neural networks for image classification problems. Our defense is based on a combination of deep neural networks and simple image transformations. While straightforward to implement, this defense yields a unique security property which we term buffer zones. In this paper, we formalize the concept of buffer zones. We argue that our defense based on buffer zones is secure against state-of-the-art black box attacks. We are able to achieve this security even when the adversary has access to the entire original training data set and unlimited query access to the defense. We verify our security claims through experimentation using FashionMNIST, CIFAR-10 and CIFAR-100. We demonstrate <10% attack success rate -- significantly lower than what other well-known defenses offer -- at the price of only a 15-20% drop in clean accuracy. By using a new intuitive metric, we explain why this trade-off offers a significant improvement over prior work. | reject | This paper formalizes the concept of buffer zones and proposes a defense method based on a combination of deep neural networks and simple image transformations. The authors argue that the proposed method based on buffer zones is robust against state-of-the-art black-box attack methods. This paper, however, suffers from (1) unjustified claims (e.g., that buffer zones are widened when the models are diverse); (2) an incomplete literature survey and related work; (3) similar ideas that are well known in the literature; (4) unfair experimental evaluations; and many other issues. Even after the author response, it still does not gather support from the reviewers. Thus I recommend rejection. | train | [
"SkgZFs_2iS",
"SyeDWuk2oH",
"SJehaSr9sH",
"Hyx1yGVqoH",
"Syg-K1E5sH",
"H1xzzmnIFr",
"SkgjFhCcKr",
"B1gGCEmg5S"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your helpful comments about the usage of \"Undecided\" class label. Hopefully, our answer can somehow address your concern. \n\n1. We have checked the following calibration papers:\n\n1. https://arxiv.org/abs/1702.01691 \n2. https://arxiv.org/pdf/1910.06259.pdf\n3. https://openreview.net/pdf?id=Bkx... | [
-1,
-1,
-1,
-1,
-1,
1,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
1
] | [
"SyeDWuk2oH",
"Syg-K1E5sH",
"H1xzzmnIFr",
"SkgjFhCcKr",
"B1gGCEmg5S",
"iclr_2020_HJlY_6VKDr",
"iclr_2020_HJlY_6VKDr",
"iclr_2020_HJlY_6VKDr"
] |
iclr_2020_H1lKd6NYPS | Online Meta-Critic Learning for Off-Policy Actor-Critic Methods | Off-Policy Actor-Critic (Off-PAC) methods have proven successful in a variety of continuous control tasks. Normally, the critic’s action-value function is updated using temporal-difference learning, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return. In this paper, we introduce a novel and flexible meta-critic that observes the learning process and meta-learns an additional loss for the actor that accelerates and improves actor-critic learning. Compared to the vanilla critic, the meta-critic network is explicitly trained to accelerate the learning process; and compared to existing meta-learning algorithms, the meta-critic is rapidly learned online for a single task, rather than slowly over a family of tasks. Crucially, our meta-critic framework is designed for off-policy based learners, which currently provide state-of-the-art reinforcement learning sample efficiency. We demonstrate that online meta-critic learning leads to improvements in a variety of continuous control environments when combined with the contemporary Off-PAC methods DDPG, TD3 and the state-of-the-art SAC. | reject | There was extensive discussion of the paper among the reviewers. It's clear that the reviewers appreciated the main idea in the paper, and the notion of an "online" meta-critic that accelerates the RL process is definitely very appealing. However, there were unanswered questions about what the method is actually doing that make me reticent to recommend acceptance at this point. I would refer the authors to R3 and R1 for an in-depth discussion of the issues, but the short summary is that it's not clear whether and how the meta-loss in this case actually converges, and what the meta-critic is actually doing.
In the absence of a theoretical understanding of what the modification does to accelerate RL, we are left with the empirical experiments, and there it is necessary to consider alternative hypotheses and perform detailed ablation analyses to confirm that the method really works for the reasons stated by the authors (and not some of the alternative explanations; see e.g. R3). While there is nothing wrong with a result that is primarily empirical, it is important to verify that the empirical gains really are happening for the reasons claimed, and to carefully study the convergence and asymptotic properties of the algorithm. The comparatively diminished gains with the stronger algorithms (TD3 and especially SAC) make me more skeptical. Therefore, I would recommend that the paper not be accepted at this time, though I encourage the authors to resubmit with a more in-depth experimental evaluation. | test | [
"rkeydHcRKH",
"rJxpsZGsjS",
"BJxgdqkKoH",
"ByxrQ3kYoS",
"HJgvVYJYjS",
"Bygg1PktiS",
"B1xtiPJKoB",
"r1xRDNSNKS",
"Skgom5J6FS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I am very torn about this paper as the proposed approach is a fairly straightforward extension of past work on the meta-critic approach to meta-learning and the results are pretty good, but nothing amazing. I tend to accept this paper because I like their general direction and think what they are proposing is pret... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_H1lKd6NYPS",
"HJgvVYJYjS",
"r1xRDNSNKS",
"r1xRDNSNKS",
"Skgom5J6FS",
"rkeydHcRKH",
"rkeydHcRKH",
"iclr_2020_H1lKd6NYPS",
"iclr_2020_H1lKd6NYPS"
] |
iclr_2020_HJe5_6VKwS | Model-based Saliency for the Detection of Adversarial Examples | Adversarial perturbations cause a shift in the salient features of an image, which may result in a misclassification. We demonstrate that gradient-based saliency approaches are unable to capture this shift, and develop a new defense which detects adversarial examples based on learnt saliency models instead. We study two approaches: a CNN trained to distinguish between natural and adversarial images using the saliency masks produced by our learnt saliency model, and a CNN trained on the salient pixels themselves as its input. On MNIST, CIFAR-10 and ASSIRA, our defenses are able to detect various adversarial attacks, including strong attacks such as C&W and DeepFool, contrary to gradient-based saliency and detectors which rely on the input image. The latter are unable to detect adversarial images when the L_2- and L_infinity- norms of the perturbations are too small. Lastly, we find that the salient pixel based detector improves on saliency map based detectors as it is more robust to white-box attacks. | reject | This submission proposes a method for detecting adversarial attacks using saliency maps.
Strengths:
-The experimental results are encouraging.
Weaknesses:
-The novelty is minor.
-Experimental validation of some claims (e.g. robustness to white-box attacks) is lacking.
These weaknesses were not sufficiently addressed in the discussion phase. AC agrees with the majority recommendation to reject.
| train | [
"rJeB5tK3sH",
"ByxGwA3ojB",
"B1eJDNjqsB",
"Syx9bGl9jB",
"Skls0bl5jr",
"ByeVbWx5jB",
"SyliC__PYB",
"rklB0nk6tB",
"r1luCy_aYS"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your quick response- we highly appreciate it. \n\nWhile experimenting with white-box attacks, our goal was to break our defense. As such, we generated attacks with large perturbations. (And wanted to show that iteratively, we could learn to also defend against such attacks.) However, the resultant image... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"ByxGwA3ojB",
"B1eJDNjqsB",
"rklB0nk6tB",
"SyliC__PYB",
"r1luCy_aYS",
"iclr_2020_HJe5_6VKwS",
"iclr_2020_HJe5_6VKwS",
"iclr_2020_HJe5_6VKwS",
"iclr_2020_HJe5_6VKwS"
] |
iclr_2020_HylhuTEtwr | The advantage of using Student's t-priors in variational autoencoders | Is it optimal to use the standard Gaussian prior in variational autoencoders? With Gaussian distributions, which are not weakly informative priors, variational autoencoders struggle to reconstruct the actual data. We provide numerical evidence that encourages using Student's t-distributions as default priors in variational autoencoders, and we challenge the usual setup for the variational autoencoder structure by comparing Gaussian and Student's t-distribution priors with different forms of the covariance matrix. | reject | The consensus among all reviewers was to reject this paper, and the authors did not provide a rebuttal. | test | [
"B1etdm5itr",
"SygHTGnhYr",
"HJg7nEQatr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: This paper studies alternative priors for VAEs, comparing Normal distributions (diagonal and full covariance) against Student-t’s (diagonal and full covariance). In particular, the paper is concerned with posterior collapse---i.e. posterior remains at the prior, limiting the model’s ability to reconstr... | [
1,
1,
1
] | [
5,
4,
4
] | [
"iclr_2020_HylhuTEtwr",
"iclr_2020_HylhuTEtwr",
"iclr_2020_HylhuTEtwr"
] |
iclr_2020_H1xTup4KPr | Needles in Haystacks: On Classifying Tiny Objects in Large Images | In some important computer vision domains, such as medical or hyperspectral imaging, we care about the classification of tiny objects in large images. However, most Convolutional Neural Networks (CNNs) for image classification were developed using biased datasets that contain large objects, in mostly central image positions. To assess whether classical CNN architectures work well for tiny object classification, we build a comprehensive testbed containing two datasets: one derived from MNIST digits and one from histopathology images. This testbed allows controlled experiments to stress-test CNN architectures with a broad spectrum of signal-to-noise ratios. Our observations indicate that: (1) there exists a limit to signal-to-noise below which CNNs fail to generalize, and this limit is affected by dataset size - more data leading to better performance; however, the amount of training data required for the model to generalize scales rapidly with the inverse of the object-to-image ratio; (2) in general, higher capacity models exhibit better generalization; (3) when the approximate object sizes are known, adapting the receptive field is beneficial; and (4) for very small signal-to-noise ratios the choice of global pooling operation affects optimization, whereas for relatively large signal-to-noise values, all tested global pooling operations exhibit similar performance. | reject | This paper performs an empirical study to evaluate CNN-based object classifiers for the case where the object of interest is very small relative to the size of the image. Two synthetic databases are used to conduct the experiments, through which the authors made a number of observations and conclusions. The reviewers are concerned that the databases used are too structured or artificial, and that one of the two databases is very small as well.
On top of that, only one network architecture is used for evaluation. Furthermore, the conclusions from the two databases seem inconsistent as well. The authors provided detailed responses to the reviewers' comments but were not able to change the overall rating of the paper. Given these concerns, as well as the lack of a methodological contribution, there is general agreement among the reviewers that the contribution of this work is not sufficient for ICLR. The ACs concur with these concerns, and the paper cannot be accepted in its current state. | train | [
"SJgeIVgssH",
"HkgaQNxsjH",
"BJeXgVlsoS",
"SylBTBUTYr",
"HylRTeIAYH",
"Sye60k9rqS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Many thanks for dedicating time to review our paper. We feel the reviewer is missing some important points of our study, which we would like to clarify below:\n\n[Depth of the contribution]\nWe argue that the depth of our contribution is significant: the paper makes an important contribution by revealing a substan... | [
-1,
-1,
-1,
3,
3,
1
] | [
-1,
-1,
-1,
4,
4,
3
] | [
"SylBTBUTYr",
"HylRTeIAYH",
"Sye60k9rqS",
"iclr_2020_H1xTup4KPr",
"iclr_2020_H1xTup4KPr",
"iclr_2020_H1xTup4KPr"
] |
iclr_2020_ryx6daEtwr | GPNET: MONOCULAR 3D VEHICLE DETECTION BASED ON LIGHTWEIGHT WHEEL GROUNDING POINT DETECTION NETWORK | We present a method to infer the 3D location and orientation of vehicles from a single image. To tackle this problem, we optimize the mapping relation between the vehicle’s wheel grounding points on the image and the real locations of the wheels in the 3D world coordinate frame. We also integrate three task priors: a ground plane constraint, the vehicle wheel grounding point positions, and a small projection error from the image to the ground plane. A robust, lightweight network for grounding point detection in autonomous driving is proposed, based on the vehicle and wheel detection results. In this lightweight grounding point detection network, the DSNT key point regression method is used to balance the speed of convergence and the accuracy of position, and has proven more robust and accurate than other key point detection methods. Moreover, the size of the grounding point detection network is less than 1 MB, so it can be executed quickly in an embedded environment. The code will be available soon. | reject | This paper aims to estimate the 3D location and orientation of vehicles from a 2D image. Instead of using a CNN-based 3D detection pipeline, the authors propose to detect the vehicle’s wheel grounding points and then use the ground plane constraint for the estimation. All three reviewers provided a unanimous rating of rejection. Many concerns are raised by the reviewers, including poor generalization to new situations, small improvement over prior work, low presentation quality, the lack of detailed description of the experiments, etc. The authors did not respond to the reviewers’ comments. The AC agrees with the reviewers’ comments and recommends rejection. | test | [
"SJgLFl3CFB",
"HkgHJ8y1cB",
"rkg6-V6B9B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a method to detect cars from a single image. The method imposes several handcrafted constraints specific to the dataset in order to achieve higher improvement and efficiency. These constraints are quite strong and they not generalize to new situations (eg. a car in the sky, a car upside down ... | [
1,
1,
1
] | [
5,
5,
5
] | [
"iclr_2020_ryx6daEtwr",
"iclr_2020_ryx6daEtwr",
"iclr_2020_ryx6daEtwr"
] |
iclr_2020_S1xCuTNYDr | Regularizing Black-box Models for Improved Interpretability | Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, which lack guarantees about their explanation quality. We explore an alternative to these approaches by directly regularizing a black-box model for interpretability at training time. Our approach explicitly connects three key aspects of interpretable machine learning: (i) the model’s internal interpretability, (ii) the explanation system used at test time, and (iii) the metrics that measure explanation quality. Our regularization results in substantial improvement in terms of the explanation fidelity and stability metrics across a range of datasets and black-box explanation systems while slightly improving accuracy. Finally, we justify theoretically that the benefits of our regularization generalize to unseen points. | reject | This paper investigates a promising direction on the important topic of interpretability; the reviewers find a variety of issues with the work, and I urge the authors to refine and extend their investigations. | train | [
"BylipNhFoH",
"rJlTVc1PsS",
"BJxxPi1woH",
"H1xNq5JvjH",
"SJlIdq1woB",
"HkgJFF9GoS",
"SkgZLZH2Kr",
"B1eqkesCFH",
"H1lHgrse9S"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"-- “The distinction…agnostic. Further, we believe …total variation.” \n\nIn the present manuscript the authors claim: \"Moreover, recent approaches that claim to overcome this apparent trade-off between prediction accuracy and explanation quality are in fact by-design proposals that impose certain constraints on ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"rJlTVc1PsS",
"H1lHgrse9S",
"HkgJFF9GoS",
"SkgZLZH2Kr",
"B1eqkesCFH",
"iclr_2020_S1xCuTNYDr",
"iclr_2020_S1xCuTNYDr",
"iclr_2020_S1xCuTNYDr",
"iclr_2020_S1xCuTNYDr"
] |
iclr_2020_rJeA_aVtPB | Decaying momentum helps neural network training | Momentum is a simple and popular technique in deep learning for gradient-based optimizers. We propose a decaying momentum (Demon) rule, motivated by decaying the total contribution of a gradient to all future updates. Applying Demon to Adam leads to significantly improved training, notably competitive with momentum SGD with learning rate decay, even in settings in which adaptive methods are typically non-competitive. Similarly, applying Demon to momentum SGD rivals momentum SGD with learning rate decay, and in many cases leads to improved performance. Demon is trivial to implement and incurs limited extra computational overhead compared to the vanilla counterparts. | reject | This paper proposes a new decaying momentum rule to improve existing optimization algorithms for training deep neural networks, including momentum SGD and Adam. The main objections from the reviewers include: (1) its novelty is limited compared with prior work; (2) the experimental comparison needs to be improved (e.g., the baselines might not be carefully tuned, and learning rate decay is not applied, while it usually boosts the performance of all the algorithms a lot). After reviewer discussion, I agree with the reviewers’ evaluation and recommend rejection. | train | [
"HkgsNW4njH",
"B1xIpD_isH",
"HylkHQlojB",
"S1xO2hS9jB",
"SJecAor5oB",
"S1xuG64qsr",
"B1xkhtvaKr",
"HJlqQLMmsS",
"SJxfOBMQoS",
"S1eb8SzXiB",
"Byeu6VMmsB",
"Bye55GlTKS",
"HJguTJHpKS"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your response.\n\n(1) It seems ... it is indeed very small.\n\n>> While for our experiments for the ResNet18 CIFAR10 setting the gap is small, the gap for VGG16 CIFAR100 is substantial (1.5%) gap, given the very optimized performance of existing algorithms.\n\n(2) One-cycle is ... using \"learning ra... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
5
] | [
"B1xIpD_isH",
"HJlqQLMmsS",
"S1xO2hS9jB",
"S1eb8SzXiB",
"S1xuG64qsr",
"Byeu6VMmsB",
"iclr_2020_rJeA_aVtPB",
"Bye55GlTKS",
"HJguTJHpKS",
"HJguTJHpKS",
"B1xkhtvaKr",
"iclr_2020_rJeA_aVtPB",
"iclr_2020_rJeA_aVtPB"
] |
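In the rating and confidence columns above, -1 serves as a placeholder for entries that are not official reviews (e.g., author replies carry no score), so aggregating a row's scores requires filtering those out first. A minimal sketch of that aggregation, assuming the list columns have already been parsed into Python lists (the function and variable names here are illustrative, not part of the dataset):

```python
# Aggregate review scores for one row of this dataset.
# Entries of -1 mark non-review items (author/public comments) and are skipped.

def mean_rating(ratings):
    """Average only the real review scores, ignoring -1 placeholders."""
    scores = [r for r in ratings if r != -1]
    return sum(scores) / len(scores) if scores else None

# Example: the ratings column of the "Decaying momentum" row above.
demon_ratings = [-1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 3, 3]
print(mean_rating(demon_ratings))  # -> 4.0
```

The same filter applies to the confidence column, since it uses the identical -1 convention.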