paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
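The schema above can be read as one record type per dataset row. The sketch below is a minimal, hypothetical Python model of that structure — the `PaperRecord` class and its `official_ratings` helper are illustrative assumptions, not part of any official loader; the convention that `-1` marks non-review entries (e.g. author replies) is inferred from the rows that follow.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PaperRecord:
    """One dataset row: a paper plus its aligned review threads.

    All review_* lists are parallel: index i describes the same comment.
    """
    paper_id: str
    paper_title: str
    paper_abstract: str
    paper_acceptance: str        # decision string, e.g. "reject"
    meta_review: str
    label: str                   # split label, e.g. "train" or "val"
    review_ids: List[str] = field(default_factory=list)
    review_writers: List[str] = field(default_factory=list)      # "official_reviewer", "author", "public", ...
    review_contents: List[str] = field(default_factory=list)
    review_ratings: List[int] = field(default_factory=list)      # -1 for non-review comments (assumed convention)
    review_confidences: List[int] = field(default_factory=list)  # -1 likewise
    review_reply_tos: List[str] = field(default_factory=list)    # id of the item each comment replies to

    def official_ratings(self) -> List[int]:
        """Ratings from entries that are actual reviews (rating != -1)."""
        return [r for r in self.review_ratings if r != -1]

# Toy row patterned on the first record below (long text fields elided).
row = PaperRecord(
    paper_id="iclr_2020_rkx3-04FwB",
    paper_title="MONET: Debiasing Graph Embeddings via the Metadata-Orthogonal Training Unit",
    paper_abstract="...",
    paper_acceptance="reject",
    meta_review="...",
    label="train",
    review_ids=["Bye6Jh_TFB", "S1lWDbG0FH", "ryerBa9nsB", "Bke4vIkIcr"],
    review_writers=["official_reviewer", "official_reviewer", "author", "official_reviewer"],
    review_contents=["...", "...", "...", "..."],
    review_ratings=[6, 6, -1, 3],
    review_confidences=[1, 1, -1, 3],
    review_reply_tos=["iclr_2020_rkx3-04FwB"] * 4,
)
print(row.official_ratings())  # [6, 6, 3]
```

Keeping the lists parallel (rather than nesting one object per review) mirrors the columnar layout of the dump, so a row can be built directly from the raw fields.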
iclr_2020_rkx3-04FwB
MONET: Debiasing Graph Embeddings via the Metadata-Orthogonal Training Unit
Are Graph Neural Networks (GNNs) fair? In many real world graphs, the formation of edges is related to certain node attributes (e.g. gender, community, reputation). In this case, any GNN using these edges will be biased by this information, as it is encoded in the structure of the adjacency matrix itself. In this paper, we show that when metadata is correlated with the formation of node neighborhoods, unsupervised node embedding dimensions learn this metadata. This bias implies an inability to control for important covariates in real-world applications, such as recommendation systems. To solve these issues, we introduce the Metadata-Orthogonal Node Embedding Training (MONET) unit, a general model for debiasing embeddings of nodes in a graph. MONET achieves this by ensuring that the node embeddings are trained on a hyperplane orthogonal to that of the node metadata. This effectively organizes unstructured embedding dimensions into an interpretable topology-only, metadata-only division with no linear interactions. We illustrate the effectiveness of MONET through our experiments on a variety of real world graphs, which show that our method can learn and remove the effect of arbitrary covariates in tasks such as preventing the leakage of political party affiliation in a blog network, and thwarting the gaming of embedding-based recommendation systems.
reject
This work presents a method for debiasing graph embeddings. The main concerns for the work were originally identified by Reviewer 3, who pointed out that the method is only capable of linear debiasing. The authors responded by updating the manuscript in several places to mention this limitation, as well as adding Table 3 to the Appendix showing that SVMs with non-linear kernels are still able to identify bias in the embeddings. Reviewers agreed that this addition improved the manuscript; however, some reviewers still had concerns about the revised manuscript. This AC has several recommendations for improving the paper. First, additional revision is needed to better address the limitations of linear debiasing; for example, Table 1 still reads "MONET is successful in removing all metadata information from the topology embeddings – the links in the graph are no longer an effective predictor of political party". Statements like this are a bit misleading, as the embeddings will still be biased with respect to non-linear classifiers (as evident from Table 3). Additionally, updating Table 1 and related experiments to measure embedding bias with respect to non-linear classifiers would help clarify the limitations for prospective readers. Second, the paper should be updated to address remaining concerns that the linear debiasing assumption limits the applicability of the method. One could either discuss or demonstrate additional applications of the method that work even with the linear assumption, extend MONET so it can improve model bias with respect to non-linear classifiers, or show that MONET still outperforms baselines when the non-linear assumption is violated.
train
[ "Bye6Jh_TFB", "S1lWDbG0FH", "ryerBa9nsB", "BkeWkT93jB", "rJec8hGtiB", "B1gzMsfFsS", "SkgQ3qfFiB", "Bke4vIkIcr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary: The paper introduces a GNN model (MONET) for debiasing graph embeddings, by enforcing orthogonality between the embedding spaces of the graph topology & the graph metadata. They show that unsupervised learning induces bias from important graph metadata, when the metadata is correlated with the node edges....
[ 6, 6, -1, -1, -1, -1, -1, 3 ]
[ 1, 1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_rkx3-04FwB", "iclr_2020_rkx3-04FwB", "Bye6Jh_TFB", "Bke4vIkIcr", "Bye6Jh_TFB", "S1lWDbG0FH", "Bke4vIkIcr", "iclr_2020_rkx3-04FwB" ]
iclr_2020_SJlRWC4FDB
Adversarial Attacks on Copyright Detection Systems
It is well-known that many machine learning models are susceptible to adversarial attacks, in which an attacker evades a classifier by making small perturbations to inputs. This paper discusses how industrial copyright detection tools, which serve a central role on the web, are susceptible to adversarial attacks. We discuss a range of copyright detection systems, and why they are particularly vulnerable to attacks. These vulnerabilities are especially apparent for neural network based systems. As proof of concept, we describe a well-known music identification method and implement this system in the form of a neural net. We then attack this system using simple gradient methods. Adversarial music created this way successfully fools industrial systems, including the AudioTag copyright detector and YouTube's Content ID system. Our goal is to raise awareness of the threats posed by adversarial examples in this space and to highlight the importance of hardening copyright detection systems to attacks.
reject
This paper shows a case study of an adversarial attack on a copyright detection system. The paper implements a music identification method with a simple convolutional neural network, and shows that it is possible to fool such a CNN with adversarial learning. After the discussion period, two of the three reviewers inclined toward rejecting the paper. Although the majority of the reviewers agree that this is an interesting problem with an important application, they also find that many of their concerns remain unaddressed. These include the generality of the finding, as the current paper is more of a proof of concept that black-/white-box attacks can work on copyright systems. The reviewers are also concerned that the technical solution/finding is not novel, as it is very similar to prior work in other domains (e.g., image classification). One reviewer was particularly concerned that a user study is missing, making it difficult to judge whether the quality of the modified audio is reasonable.
val
[ "S1e3vPwiiH", "Hklj0rPoor", "S1gBwSPjir", "rJxU_LDssS", "rylqQBwijr", "ryxuDm2AKr", "H1lUxX-H9H", "Bkx-RNuUqH", "Bkeg58SwqS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for taking the time to review our work.\n\nFirst, we would like to point out that as mentioned in the general response, this is intended to be an applications paper. The main contribution of this paper is neither building a new fingerprinting model (we used the model proposed by Wang et al., with an extr...
[ -1, -1, -1, -1, -1, 3, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, 1, 5, 4, 1 ]
[ "ryxuDm2AKr", "Bkx-RNuUqH", "Bkeg58SwqS", "H1lUxX-H9H", "iclr_2020_SJlRWC4FDB", "iclr_2020_SJlRWC4FDB", "iclr_2020_SJlRWC4FDB", "iclr_2020_SJlRWC4FDB", "iclr_2020_SJlRWC4FDB" ]
iclr_2020_SylR-CEKDS
Modeling question asking using neural program generation
People ask questions that are far richer, more informative, and more creative than current AI systems. We propose a neural program generation framework for modeling human question asking, which represents questions as formal programs and generates programs with an encoder-decoder based deep neural network. From extensive experiments using an information-search game, we show that our method can ask optimal questions in synthetic settings, and predict which questions humans are likely to ask in unconstrained settings. We also propose a novel grammar-based question generation framework trained with reinforcement learning, which is able to generate creative questions without supervised data.
reject
The authors explore different ways to generate questions about the current state of a “Battleship” game. Overall, the reviewers feel that the problem setting is interesting, and the program generation component is also interesting. However, the proposed approach is evaluated on tangential tasks rather than on learning to generate questions that achieve the goal. Improving this part is essential to improving the quality of the work.
train
[ "SkeBcFvEKH", "HygUfJiojH", "HJlX3DqjoS", "r1xhDv9iir", "rJe6MwcooB", "S1lCTIcojS", "Hye7F-uXtB", "Bye4GluptS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper uses a deep neural network architecture (CNN + Transformer) to model logical translations of questions in the form of programs. The experimental setup uses the \"battleship\" game scenario, which is an interesting domain for questions because of the inherent partial observability present in the game. The...
[ 6, -1, -1, -1, -1, -1, 1, 6 ]
[ 5, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_SylR-CEKDS", "HJlX3DqjoS", "Bye4GluptS", "SkeBcFvEKH", "Hye7F-uXtB", "iclr_2020_SylR-CEKDS", "iclr_2020_SylR-CEKDS", "iclr_2020_SylR-CEKDS" ]
iclr_2020_HkeJzANFwS
Contextual Text Style Transfer
In this paper, we introduce a new task, Contextual Text Style Transfer, to translate a sentence within a paragraph context into the desired style (e.g., informal to formal, offensive to non-offensive). Two new datasets, Enron-Context and Reddit-Context, are introduced for this new task, focusing on formality and offensiveness, respectively. Two key challenges exist in contextual text style transfer: 1) how to preserve the semantic meaning of the target sentence and its consistency with the surrounding context when generating an alternative sentence with a specific style; 2) how to deal with the lack of labeled parallel data. To address these challenges, we propose a Context-Aware Style Transfer (CAST) model, which leverages both parallel and non-parallel data for joint model training. For parallel training data, CAST uses two separate encoders to encode each input sentence and its surrounding context, respectively. The encoded feature vector, together with the target style information, is then used to generate the target sentence. A classifier is further used to ensure contextual consistency of the generated sentence. In order to leverage a massive non-parallel corpus and to enhance sentence encoder and decoder training, additional self-reconstruction and back-translation losses are introduced. Experimental results on Enron-Context and Reddit-Context demonstrate the effectiveness of the proposed model over state-of-the-art style transfer methods, across style accuracy, content preservation, and contextual consistency metrics.
reject
The paper proposes a new style transfer task, contextual style transfer, which hypothesises that the document context of the sentence is important, as opposed to previous work which only looked at sentence context. A major contribution of the paper is the creation of two new crowd-sourced datasets, Enron-Context and Reddit-Context, focussed on formality and offensiveness. The reviewers are skeptical that it was context that really improved results on the style transfer tasks. The authors responded to all the reviewers but there was no further discussion. I feel that this paper has not convinced me or the reviewers of the strength of its contribution and, although interesting, I recommend that it be rejected.
train
[ "r1lVfEwqYr", "rJlA-nW1qS", "SkeIzu9tYB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose the task of contextual text style transfer: transferring the style of one text into another (i.e., informal to formal, or offensive to non-offensive), when the text is present within some larger, provided context. The authors propose a model (CAST) which takes advantage of the additional contex...
[ 3, 3, 6 ]
[ 3, 3, 1 ]
[ "iclr_2020_HkeJzANFwS", "iclr_2020_HkeJzANFwS", "iclr_2020_HkeJzANFwS" ]
iclr_2020_BkggGREKvS
Promoting Coordination through Policy Regularization in Multi-Agent Deep Reinforcement Learning
A central challenge in multi-agent reinforcement learning is the induction of coordination between agents of a team. In this work, we investigate how to promote inter-agent coordination using policy regularization and discuss two possible avenues respectively based on inter-agent modelling and synchronized sub-policy selection. We test each approach in four challenging continuous control tasks with sparse rewards and compare them against three baselines including MADDPG, a state-of-the-art multi-agent reinforcement learning algorithm. To ensure a fair comparison, we rely on a thorough hyper-parameter selection and training methodology that allows a fixed hyper-parameter search budget for each algorithm and environment. We consequently assess the hyper-parameter sensitivity, sample efficiency, and asymptotic performance of each learning method. Our experiments show that the proposed methods lead to significant improvements on cooperative problems. We further analyse the effects of the proposed regularizations on the behaviors learned by the agents.
reject
After reading the reviews and discussing this paper with the reviewers, I believe that this paper is not quite ready for publication at this time. While there was some enthusiasm from the reviewers about the paper, there were also major concerns raised about the comparisons and experimental evaluation, as well as some concerns about novelty. The major concerns about experimental evaluation center around the experiments being restricted to continuous action settings where there is a limited set of baselines (see R3). While I see the authors' point that the method is not restricted to this setting, showing more experiments with more baselines would be important: the demonstrated experiments do strike me as somewhat simplistic, and the standardized comparisons are limited. This might not by itself be that large of an issue, if it wasn't for the other problem: the contribution strikes me as somewhat ad-hoc. While I can see the intuition behind why these two auxiliary objectives might work well, since there is only intuition, then the burden in terms of showing that this is a good idea falls entirely on the experiments. And this is where in my opinion the work comes up short: if we are going to judge the efficacy of the method entirely on the experimental evaluation without any theoretical motivation, then the experimental evaluation does not seem to me to be sufficient. This issue could be addressed either with more extensive and complete experiments and comparisons, or a more convincing conceptual or theoretical argument explaining why we should expect these two particular auxiliary objectives to make a big difference.
train
[ "r1eVV2HJqS", "Hkl-jIN7oS", "SJlmfvEQoS", "rklo3SNXjr", "B1e115VmoS", "ryg3ucNXjr", "Skg2C94QiB", "rklRKFVXoS", "H1ljmcEmjS", "ryxwmjNQoB", "HJecIEN7ir", "rygkUJhntS", "BJgFjR-CKS", "rJxog3L9Dr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "This paper proposed two approaches to encourage cooperation among multi-agents under the *centralized training decentralized execution* framework. The main contribution of this paper is that they propose to allow agents to predict the behavior of others and introduce this prediction loss into the RL learning objec...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, -1 ]
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, -1 ]
[ "iclr_2020_BkggGREKvS", "BJgFjR-CKS", "BJgFjR-CKS", "rygkUJhntS", "r1eVV2HJqS", "r1eVV2HJqS", "r1eVV2HJqS", "r1eVV2HJqS", "r1eVV2HJqS", "r1eVV2HJqS", "iclr_2020_BkggGREKvS", "iclr_2020_BkggGREKvS", "iclr_2020_BkggGREKvS", "iclr_2020_BkggGREKvS" ]
iclr_2020_H1eWGREFvB
Stein Self-Repulsive Dynamics: Benefits from Past Samples
We propose a new Stein self-repulsive dynamics for obtaining diversified samples from intractable un-normalized distributions. Our idea is to introduce Stein variational gradient as a repulsive force to push the samples of Langevin dynamics away from the past trajectories. This simple idea allows us to significantly decrease the auto-correlation in Langevin dynamics and hence increase the effective sample size. Importantly, as we establish in our theoretical analysis, the asymptotic stationary distribution remains correct even with the addition of the repulsive force, thanks to the special properties of the Stein variational gradient. We perform extensive empirical studies of our new algorithm, showing that our method yields much higher sample efficiency and better uncertainty estimation than vanilla Langevin dynamics.
reject
This paper proposes a new sampling mechanism which uses a self-repulsive term to increase the diversity of the samples. The reviewers had concerns, most of which were addressed in the rebuttal. Unfortunately, none of the reviewers genuinely championed the paper. Since there were a lot of good submissions this year, we had to make decisions on the borderline papers, and this lack of full support means that this submission will be rejected. I highly encourage you to keep updating the manuscript and to resubmit it to a later conference.
val
[ "H1xeyoxG9S", "ryl_R727jH", "S1eiENh7jB", "rJlvwS37sS", "H1eSoNhmjr", "r1lL1H3QsH", "B1gmV727oH", "Bkx5zShQjS", "rJeL6D2XiH", "BkgV-wG5FH", "rJeF90yCKS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I have the rebuttal of the authors, the paper improved indeed and some point on role of M is better clarified now although it is still a bit convoluted. The paper would be stronger if the analysis shows any theoretical advantage to the presented method. I think the author put a good effort in addressing some of my...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2020_H1eWGREFvB", "BkgV-wG5FH", "H1xeyoxG9S", "rJeF90yCKS", "H1xeyoxG9S", "H1xeyoxG9S", "BkgV-wG5FH", "H1xeyoxG9S", "iclr_2020_H1eWGREFvB", "iclr_2020_H1eWGREFvB", "iclr_2020_H1eWGREFvB" ]
iclr_2020_B1lXfA4Ywr
Towards Modular Algorithm Induction
We present a modular neural network architecture MAIN that learns algorithms given a set of input-output examples. MAIN consists of a neural controller that interacts with a variable-length input tape and learns to compose modules together with their corresponding argument choices. Unlike previous approaches, MAIN uses a general domain-agnostic mechanism for selection of modules and their arguments. It uses a general input tape layout together with a parallel history tape to indicate most recently used locations. Finally, it uses a memoryless controller with a length-invariant self-attention based input tape encoding to allow for random access to tape locations. The MAIN architecture is trained end-to-end using reinforcement learning from a set of input-output examples. We evaluate MAIN on five algorithmic tasks and show that it can learn policies that generalize perfectly to inputs of much longer lengths than the ones used for training.
reject
The reviewers all agreed that although there is a sensible idea here, the method and presentation need a lot of work, especially their treatment of related methods.
train
[ "B1lEE5CKiH", "HygP6m6FjB", "SJlbBsl5FB", "H1gspdehFH" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks to all of the reviewers for your time and for the very helpful comments. We will take them into account when revising this paper.", "This paper proposes a new variation of modular neural networks. Specifically, they proposed a new architecture for the neural controller that selects which module to use giv...
[ -1, 1, 1, 1 ]
[ -1, 3, 5, 1 ]
[ "iclr_2020_B1lXfA4Ywr", "iclr_2020_B1lXfA4Ywr", "iclr_2020_B1lXfA4Ywr", "iclr_2020_B1lXfA4Ywr" ]
iclr_2020_Bke7MANKvS
A Kolmogorov Complexity Approach to Generalization in Deep Learning
Deep artificial neural networks can achieve an extremely small difference between training and test accuracies on identically distributed training and test sets, which is a standard measure of generalization. However, the training and test sets may not be sufficiently representative of the empirical sample set, which consists of real-world input samples. When samples are drawn from an underrepresented or unrepresented subset during inference, the gap between the training and inference accuracies can be significant. To address this problem, we first reformulate a classification algorithm as a procedure for searching for a source code that maps input features to classes. We then derive a necessary and sufficient condition for generalization using a universal cognitive similarity metric, namely information distance, based on Kolmogorov complexity. Using this condition, we formulate an optimization problem to learn a more general classification function. To achieve this end, we extend the input features by concatenating encodings of them, and then train the classifier on the extended features. As an illustration of this idea, we focus on image classification, where we use channel codes on the input features as a systematic way to improve the degree to which the training and test sets are representative of the empirical sample set. To showcase our theoretical findings, considering that corrupted or perturbed input features belong to the empirical sample set, but typically not to the training and test sets, we demonstrate through extensive systematic experiments that, as a result of learning a more general classification function, a model trained on encoded input features is significantly more robust to common corruptions, e.g., Gaussian and shot noise, as well as adversarial perturbations, e.g., those found via projected gradient descent, than the model trained on uncoded input features.
reject
This is an interesting paper that aims to redefine generalization based on the difference between the training error and the inference error (measured on the empirical sample set), rather than the test error. The authors propose to improve generalization in image classification by augmenting the input with encodings of the image using a source code, and learn this encoding using the compression distance, an approximation of the Kolmogorov complexity. They show that training in this fashion leads to performance that is more robust to corruption and adversarial perturbations that exist in the empirical sample set. Reviewers agree on the importance of this topic and the novelty of the approach, but sharp disagreement remains in the ratings. Most have concerns about the formalism and clarity of the presentation. Especially given that the paper is 10 pages, it should be evaluated against a more rigorous standard, which doesn't appear to be met. I encourage the authors to consider a rewrite with a goal towards clarity for a more general ML audience and to resubmit to a future conference.
train
[ "rklnz1f9jB", "S1ersDyciS", "SyeHOwy9or", "ByxCMwk9sH", "H1xhHHk9jr", "H1xBfS19oB", "Byxgs4k5jH", "rkgUNMk9iB", "Ske87byqsH", "S1xOJc_pFS", "HJxnAMqatB", "BkldR7lIcB", "S1xdSRt39r" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the detailed responses and clarifications. I read them and the revised version of the paper. I appreciate the clarifications.\n\nRegarding the response RW4, the idea that this may be used in combination with other techniques is reasonable, but this has not yet been demonstrated (e.g., in Section 3), ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 8, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 1, 3 ]
[ "S1xdSRt39r", "S1xdSRt39r", "S1xdSRt39r", "S1xdSRt39r", "S1xOJc_pFS", "S1xOJc_pFS", "S1xOJc_pFS", "HJxnAMqatB", "BkldR7lIcB", "iclr_2020_Bke7MANKvS", "iclr_2020_Bke7MANKvS", "iclr_2020_Bke7MANKvS", "iclr_2020_Bke7MANKvS" ]
iclr_2020_H1g4M0EtPS
Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks
We study the problem of generating adversarial examples in a black-box setting, where we only have access to a zeroth order oracle, providing us with loss function evaluations. We employ Markov Random Fields (MRF) to exploit the structure of input data to systematically model the covariance structure of the gradients. The MRF structure in addition to Bayesian inference for the gradients facilitates one-step attacks akin to Fast Gradient Sign Method (FGSM) albeit in the black-box setting. The resulting method uses fewer queries than the current state of the art to achieve comparable performance. In particular, in the regime of lower query budgets, we show that our method is particularly effective in terms of fewer average queries with high attack accuracy while employing one-step attacks.
reject
This paper presents a Markov Random Field (MRF) approach for generating adversarial examples in a black-box setting, where it only has access to loss function evaluations. The method exploits the structure of input data to model the covariance structure of the gradients. Empirically, the resulting method uses fewer queries than the current state of the art to achieve comparable performance. Overall, the paper has valuable contributions. The main issue is the empirical evaluation, which can be strengthened, e.g., by including results with multi-step methods and a more thorough analysis of the estimated gradients.
train
[ "BJemKc6tjS", "Bkenvq9VjH", "HJl6O25EjS", "HyxfK59NjH", "Bkg0b_9EsB", "HkxENuvTFH", "B1gYI3PpFB", "HJe8shqRYB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all the reviewers for the reviews and the insightful comments. We have uploaded a revised draft.\n\n- The revised draft incorporates the gradient estimation performance on MNIST and ImageNet for the LeNet and VGG-16bn architectures in Section A.6\n- We also added autocorrelation plots for th...
[ -1, -1, -1, -1, -1, 8, 3, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2020_H1g4M0EtPS", "HJe8shqRYB", "B1gYI3PpFB", "Bkenvq9VjH", "HkxENuvTFH", "iclr_2020_H1g4M0EtPS", "iclr_2020_H1g4M0EtPS", "iclr_2020_H1g4M0EtPS" ]
iclr_2020_ryxUMREYPr
Is There Mode Collapse? A Case Study on Face Generation and Its Black-box Calibration
Generative adversarial networks (GANs) nowadays are capable of producing images of incredible realism. One concern raised is whether the state-of-the-art GAN’s learned distribution still suffers from mode collapse. Existing evaluation metrics for image synthesis focus on low-level perceptual quality. Diversity tests of samples from GANs are usually conducted qualitatively on a small scale. In this work, we devise a set of statistical tools, that are broadly applicable to quantitatively measuring the mode collapse of GANs. Strikingly, we consistently observe strong mode collapse on several state-of-the-art GANs using our toolset. We analyze possible causes, and for the first time present two simple yet effective “black-box” methods to calibrate the GAN learned distribution, without accessing either model parameters or the original training data.
reject
This paper studies the problem of mode collapse in GANs. The authors present new metrics to judge the diversity of the model's generated faces. The authors present two black-box approaches to increasing model diversity. The benefit of using a black-box approach is that the method does not require access to the weights of the model and hence is more easily usable than white-box approaches. However, there are significant evaluation problems and a lack of theoretical and empirical motivation for why the methods proposed by the paper are good. The reviewers have not changed their scores after having read the response, and there are still some gaps in the evaluation that could be improved in the paper. Thus, I'm recommending a Rejection.
train
[ "SJlp3GIRtr", "B1xNDIKosH", "B1lO4M2jsr", "Hye2jRossr", "B1e7BLKijr", "S1xrU4J2FS", "HJlStebC5r" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The direction of this work, to evaluate whether mode collapse exists without using label information, is very good. Rather than using the labels, the authors use an off-the-shelf model (on faces) to provide a space on which to measure distances between generated images. They use this distance to test the hypothesi...
[ 3, -1, -1, -1, -1, 6, 1 ]
[ 4, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_ryxUMREYPr", "SJlp3GIRtr", "HJlStebC5r", "SJlp3GIRtr", "S1xrU4J2FS", "iclr_2020_ryxUMREYPr", "iclr_2020_ryxUMREYPr" ]
iclr_2020_SyeUMRNYDr
Generating Dialogue Responses From A Semantic Latent Space
Generic responses are a known issue for open-domain dialog generation. Most current approaches model this one-to-many task as a one-to-one task, hence being unable to integrate information from multiple semantically similar valid responses of a prompt. We propose a novel dialog generation model that learns a semantic latent space, on which representations of semantically related sentences are close to each other. This latent space is learned by maximizing correlation between the features extracted from prompt and responses. Learning the pair relationship between the prompts and responses as a regression task on the latent space, instead of classification on the vocabulary using MLE loss, enables our model to view semantically related responses collectively. An additional autoencoder is trained, for recovering the full sentence from the latent space. Experimental results show that our proposed model eliminates the generic response problem, while achieving comparable or better coherence compared to baselines.
reject
This paper proposes a response generation approach that aims to tackle the generic response problem. The approach learns a latent semantic space by maximizing the correlation between features extracted from prompts and responses. The reviewers were concerned about the lack of comparison with previous papers tackling the same problem, and did not change their decision (i.e., were not convinced) even after the rebuttal. Hence, I suggest a reject for this paper.
train
[ "Bylt08JZor", "ryeEwPybir", "HkeiEU1WjB", "r1xLvGk6Yr", "Bkl5T12pFB", "B1loCsNZqS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the review. We respond to the comments in your review below, and we will revise the paper to clarify those issues.\n\n - There seems to be a need for more detail on comparisons of papers that previously tackle the generic response problem.\n\nWe will revise the paper and add more discussion on previo...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 4, 3, 3 ]
[ "Bkl5T12pFB", "r1xLvGk6Yr", "B1loCsNZqS", "iclr_2020_SyeUMRNYDr", "iclr_2020_SyeUMRNYDr", "iclr_2020_SyeUMRNYDr" ]
iclr_2020_rkxUfANKwB
All SMILES Variational Autoencoder for Molecular Property Prediction and Optimization
Variational autoencoders (VAEs) defined over SMILES string and graph-based representations of molecules promise to improve the optimization of molecular properties, thereby revolutionizing the pharmaceuticals and materials industries. However, these VAEs are hindered by the non-unique nature of SMILES strings and the computational cost of graph convolutions. To efficiently pass messages along all paths through the molecular graph, we encode multiple SMILES strings of a single molecule using a set of stacked recurrent neural networks, harmonizing hidden representations of each atom between SMILES representations, and use attentional pooling to build a final fixed-length latent representation. By then decoding to a disjoint set of SMILES strings of the molecule, our All SMILES VAE learns an almost bijective mapping between molecules and latent representations near the high-probability-mass subspace of the prior. Our SMILES-derived but molecule-based latent representations significantly surpass the state-of-the-art in a variety of fully- and semi-supervised property regression and molecular property optimization tasks.
reject
The paper proposes the All SMILES VAE, which can capture the chemical properties of small molecules and also optimize the structures of these molecules. The model achieves a significant performance improvement over existing methods on the Zinc250K and Tox21 datasets. Overall it is a very solid paper - it addresses an important problem, provides a detailed description of the proposed method and shows promising experimental results. The work could be a landmark piece, leading to major impacts in the field. However, given its potential, the paper could benefit from major revisions of the draft. Below are some suggestions on improving the work: 1. The current version contains a lot of material. It tries to strike a balance between machine learning methodology and details of the application domain. But the reality is that the lack of architecture details and some sloppy definitions of ML terms make it hard for readers to fully appreciate the methodological novelty. 2. There is still room for improvement in the experiments. As suggested in the review, more datasets should be used to evaluate the proposed model. Since it is hard to provide a theoretical analysis of the proposed model, extensive experiments should be provided. 3. The complexity analysis is not fully convincing. A fair comparison with the alternative approaches should be provided. In summary, it is a paper with great potential. The current version is a step away from being ready for publication. We hope the reviews can help improve the paper for a strong publication in the future.
val
[ "SkeICkT_iB", "BkeNT1aOoB", "Hklfcxausr", "Hkl-dgpdjH", "BklSrgp_ir", "ByxEblTOiS", "B1eUYyTdjr", "HylmnBCatS", "S1xGhOBM5H", "BJeCVDLX5H", "ryxBDC5H_r", "S1gJseWHdS" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "1: Avrim Blum, John Hopcroft, and Ravi Kannan. Foundations of Data Science. June 2017. URL https://www. microsoft.com/en-us/research/publication/foundations-of-data-science-2/.\n2: Seokho Kang and Kyunghyun Cho. Conditional molecular design with deep generative models. arXiv preprint arXiv:1805.00108, 2018. \n3:Ra...
[ -1, -1, -1, -1, -1, -1, -1, 3, 6, 3, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, -1, -1 ]
[ "BJeCVDLX5H", "BJeCVDLX5H", "HylmnBCatS", "HylmnBCatS", "HylmnBCatS", "S1xGhOBM5H", "BJeCVDLX5H", "iclr_2020_rkxUfANKwB", "iclr_2020_rkxUfANKwB", "iclr_2020_rkxUfANKwB", "S1gJseWHdS", "iclr_2020_rkxUfANKwB" ]
iclr_2020_SJgwf04KPr
Confidence-Calibrated Adversarial Training: Towards Robust Models Generalizing Beyond the Attack Used During Training
Adversarial training is the standard to train models robust against adversarial examples. However, especially for complex datasets, adversarial training incurs a significant loss in accuracy and is known to generalize poorly to stronger attacks, e.g., larger perturbations or other threat models. In this paper, we introduce confidence-calibrated adversarial training (CCAT) where the key idea is to enforce that the confidence on adversarial examples decays with their distance to the attacked examples. We show that CCAT preserves better the accuracy of normal training while robustness against adversarial examples is achieved via confidence thresholding. Most importantly, in strong contrast to adversarial training, the robustness of CCAT generalizes to larger perturbations and other threat models, not encountered during training. We also discuss our extensive work to design strong adaptive attacks against CCAT and standard adversarial training which is of independent interest. We present experimental results on MNIST, SVHN and Cifar10.
reject
This paper proposes a confidence-calibrated adversarial training (CCAT). The key idea is to enforce that the confidence on adversarial examples decays with their distance to the attacked examples. The authors show that CCAT can achieve better natural accuracy and robustness. After the author response and reviewer discussion, all the reviewers still think more work (e.g., improving the motivation to better position this work, conducting a fair comparison with adversarial training which does not have adversarial example detection component) needs to be done to make it a strong case. Therefore, I recommend reject.
train
[ "Hkeh7P29tr", "SJxBFIb8KS", "Skx1F-7VtH", "Bkg-iqohjr", "rye0Ot4hjH", "HJlyVYE2sS", "rkg0P-8jor", "HygRdeUsir", "BkxbpkUosr", "SJepa0rjsr", "rygbfpBsiS", "BJga21DvKr", "rylBKvYsPS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "\n====== AFTER READING THE AUTHOR RESPONSE ======\n\nMany thanks for the extensive response and the respective revision from the author(s).\nMainly, I found the main results are adjusted in the revision to rather demonstrate its good detection performance at 99% TPR, and I feel the message of the manuscript become...
[ 3, 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_SJgwf04KPr", "iclr_2020_SJgwf04KPr", "iclr_2020_SJgwf04KPr", "rye0Ot4hjH", "SJepa0rjsr", "HygRdeUsir", "iclr_2020_SJgwf04KPr", "SJxBFIb8KS", "Hkeh7P29tr", "Skx1F-7VtH", "rylBKvYsPS", "rylBKvYsPS", "iclr_2020_SJgwf04KPr" ]
iclr_2020_Hke_f0EYPH
Efficient Training of Robust and Verifiable Neural Networks
Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees. We propose that many common certified defenses can be viewed under a unified framework of regularization. This unified framework provides a technique for comparing different certified defenses with respect to robust generalization. In addition, we develop a new regularizer that is both more efficient than existing certified defenses and can be used to train networks with higher certified accuracy. Our regularizer also extends to an L0 threat model and ensemble models. Through experiments on MNIST, CIFAR-10 and GTSRB, we demonstrate improvements in training speed and certified accuracy compared to state-of-the-art certified defenses.
reject
This paper studies the problem of certified robustness to adversarial examples. It first demonstrates that many existing certified defenses can be viewed under a unified framework of regularization. Then, it proposes a new double-margin-based regularizer to obtain better certified robustness. Overall, the paper has major technical issues and the rebuttal is not satisfactory.
train
[ "Hkx64BnJ5r", "r1lGr2q3ir", "Skl7wnchsS", "rkeGn2q3sH", "Hkl1c392oS", "BJlT0292oS", "B1xpb39hor", "B1gnQjh5KH", "rygwJPQ0FH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "At a first glance, this paper proposed an interesting refinement of interval bound propagation (IBP). However, it has a major flaw in empirical evaluation, and the proposed \"theory\" and \"bounds\" are also questionable and have many issues.\n\nIn short, the main results of the paper in Figure 1 and Table 2 are p...
[ 1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ 5, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_Hke_f0EYPH", "B1xpb39hor", "r1lGr2q3ir", "B1gnQjh5KH", "rygwJPQ0FH", "Hkx64BnJ5r", "iclr_2020_Hke_f0EYPH", "iclr_2020_Hke_f0EYPH", "iclr_2020_Hke_f0EYPH" ]
iclr_2020_rJeuMREKwS
Using Logical Specifications of Objectives in Multi-Objective Reinforcement Learning
In the multi-objective reinforcement learning (MORL) paradigm, the relative importance of each environment objective is often unknown prior to training, so agents must learn to specialize their behavior to optimize different combinations of environment objectives that are specified post-training. These are typically linear combinations, so the agent is effectively parameterized by a weight vector that describes how to balance competing environment objectives. However, many real world behaviors require non-linear combinations of objectives. Additionally, the conversion between desired behavior and weightings is often unclear. In this work, we explore the use of a language based on propositional logic with quantitative semantics--in place of weight vectors--for specifying non-linear behaviors in an interpretable way. We use a recurrent encoder to encode logical combinations of objectives, and train a MORL agent to generalize over these encodings. We test our agent in several grid worlds with various objectives and show that our agent can generalize to many never-before-seen specifications with performance comparable to single policy baseline agents. We also demonstrate our agent's ability to generate meaningful policies when presented with novel specifications and quickly specialize to novel specifications.
reject
The reviewers generally agreed that the technical novelty of the work was limited, and the experimental evaluation was insufficient to make up for this, evaluating the method only on relatively simple toy tasks. As such, I do not think that the paper is ready for publication at this time.
train
[ "rJlMSeZNqS", "HJljch5njH", "r1eSBnqniB", "H1xM0o9nir", "BylNXo52sB", "HyxCK2baKH", "r1efsV40Fr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank the authors for the response. The major novelty of this paper is encoding the objective as a logical expression and the experiment part is limited. I will keep my score.\n----------------------------------------\nSummary\nThis paper presents a new approach for MORL (Multi-Objective Reinforcement Learning), w...
[ 3, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_rJeuMREKwS", "rJlMSeZNqS", "HyxCK2baKH", "r1efsV40Fr", "iclr_2020_rJeuMREKwS", "iclr_2020_rJeuMREKwS", "iclr_2020_rJeuMREKwS" ]
iclr_2020_rJxYMCEFDr
Leveraging Adversarial Examples to Obtain Robust Second-Order Representations
Deep neural networks represent data as projections on trained weights in a high dimensional manifold. This is a first-order based absolute representation that is widely used due to its interpretable nature and simple mathematical functionality. However, in the application of visual recognition, first-order representations trained on pristine images have shown a vulnerability to distortions. Visual distortions including imaging acquisition errors and challenging environmental conditions like blur, exposure, snow and frost cause incorrect classification in first-order neural nets. To eliminate vulnerabilities under such distortions, we propose representing data points by their relative positioning in a high dimensional manifold instead of their absolute positions. Such a positioning scheme is based on a data point’s second-order property. We obtain a data point’s second-order representation by creating adversarial examples to all possible decision boundaries and tracking the movement of corresponding boundaries. We compare our representation against first-order methods and show that there is an increase of more than 14% under severe distortions for ResNet-18. We test the generalizability of the proposed representation on larger networks and on 19 complex and real-world distortions from CIFAR-10-C. Furthermore, we show how our proposed representation can be used as a plug-in approach on top of any network. We also provide methodologies to scale our proposed representation to larger datasets.
reject
The authors propose a method to train a neural network that is robust to visual distortions of the input image. The reviewers agree that the paper lacks justification of the proposed method and experimental evidence of its performance.
train
[ "SJeEI7t0YS", "H1eERColcB", "SylTiBon9S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a method to represent data with its second-order information in the deep network, to improve its representation robustness. To obtain this representation, the authors used artificial adversarial examples to obtain the robust representations. Experimental results show that the new representation...
[ 3, 1, 1 ]
[ 1, 4, 5 ]
[ "iclr_2020_rJxYMCEFDr", "iclr_2020_rJxYMCEFDr", "iclr_2020_rJxYMCEFDr" ]
iclr_2020_S1xsG0VYvB
Understanding the functional and structural differences across excitatory and inhibitory neurons
One of the most fundamental organizational principles of the brain is the separation of excitatory (E) and inhibitory (I) neurons. In addition to their opposing effects on post-synaptic neurons, E and I cells tend to differ in their selectivity and connectivity. Although many such differences have been characterized experimentally, it is not clear why they exist in the first place. We studied this question in deep networks equipped with E and I cells. We found that salient distinctions between E and I neurons emerge across various deep convolutional recurrent networks trained to perform standard object classification tasks. We explored the necessary conditions for the networks to develop distinct selectivity and connectivity across cell types. We found that neurons that project to higher-order areas will have greater stimulus selectivity, regardless of whether they are excitatory or not. Sparser connectivity is required for higher selectivity, but only when the recurrent connections are excitatory. These findings demonstrate that the functional and structural differences observed across E and I neurons are not independent, and can be explained using a smaller number of factors.
reject
This paper explores the role of excitatory and inhibitory neurons, and how their properties might differ based on simulations. A few issues were raised during the review period, and I commend the authors for stepping up to address these comments and run additional experiments. It seems, though, that the reviewer's worries were born out in the results of the additional experiments: "1. The object classification task is not really relevant to elicit the observed behavior and 2. Inhibitory neurons are not essential (at least when training with batch norm)." I hope the authors can make improvements in light of these observations, and discuss their implications in a future version of this paper.
test
[ "Byl5iH3ptH", "HJxQCIL3iS", "BJeKgTEniB", "HygZ_5NhjB", "SygNwIROiS", "Byg5g7JhoH", "BylvOn4k5H", "ryeony13jr", "r1e-SACisB", "ryxXTNhoiH", "Bygxp8R_iS", "HkeO1L0OjH", "SylL9rC_jS", "B1g27loaKH" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "[Update after rebuttal period]\n\nI would like to thank the authors for the detailed response and for addressing the concerns on such a tight schedule.\n\nOverall my two concerns appear to have been right: 1. The object classification task is not really relevant to elicit the observed behavior and 2. Inhibitory ne...
[ 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8 ]
[ 5, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_S1xsG0VYvB", "Byl5iH3ptH", "Byg5g7JhoH", "r1e-SACisB", "Byl5iH3ptH", "BylvOn4k5H", "iclr_2020_S1xsG0VYvB", "HkeO1L0OjH", "SygNwIROiS", "iclr_2020_S1xsG0VYvB", "B1g27loaKH", "SylL9rC_jS", "BylvOn4k5H", "iclr_2020_S1xsG0VYvB" ]
iclr_2020_HJxiMAVtPH
Multi-scale Attributed Node Embedding
We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram. Observations from neighborhoods of different sizes are either pooled (AE) or encoded distinctly in a multi-scale approach (MUSAE). Capturing attribute-neighborhood relationships over multiple scales is useful for a diverse range of applications, including latent feature identification across disconnected networks with similar attributes. We prove theoretically that matrices of node-feature pointwise mutual information are implicitly factorized by the embeddings. Experiments show that our algorithms are robust, computationally efficient and outperform comparable models on social, web and citation network datasets.
reject
This paper constitutes interesting progress on an important topic; the reviewers identify certain improvements and directions for future work, and I urge the authors to continue to develop refinements and extensions.
train
[ "HylFMo1DjH", "B1lKGEX8ir", "HygA2fQ8sS", "r1xh7AzIsS", "Sye0HTfIjr", "ryxLNTmaYS", "S1lMVC0pKr", "Skx3TcKRKB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for their valuable feedback. We have responded to most comments individually. The paper has been updated to include:\n\n- Low and high baseline for transfer learning.", "\"Figure 2 shows that the presented two approaches just slightly better or equivalent, or sometimes worse than baseline ...
[ -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, 4, 1, 5 ]
[ "iclr_2020_HJxiMAVtPH", "Skx3TcKRKB", "Skx3TcKRKB", "S1lMVC0pKr", "ryxLNTmaYS", "iclr_2020_HJxiMAVtPH", "iclr_2020_HJxiMAVtPH", "iclr_2020_HJxiMAVtPH" ]
iclr_2020_rJehf0VKwS
Proactive Sequence Generator via Knowledge Acquisition
Sequence-to-sequence models such as transformers, which are now being used in a wide variety of NLP tasks, typically need to have very high capacity in order to perform well. Unfortunately, in production, memory size and inference speed are both strictly constrained. To address this problem, Knowledge Distillation (KD), a technique to train small models to mimic larger pre-trained models, has drawn lots of attention. The KD approach basically attempts to maximize recall, i.e., ranking the top-k tokens in teacher models as high as possible; however, precision is more important for sequence generation because of exposure bias. Motivated by this, we develop Knowledge Acquisition (KA), where student models receive log q(y_t|y_{<t},x) as rewards when producing the next token y_t given previous tokens y_{<t} and the source sentence x. We demonstrate the effectiveness of our approach on the WMT'17 De-En and IWSLT'15 Th-En translation tasks, with experimental results showing that our approach gains +0.7-1.1 BLEU score compared to token-level knowledge distillation.
reject
This paper presents a nice idea for transferring knowledge from larger sequence models to small models. However, all the reviewers find that the contribution is too limited and the experiments are insufficient. All the reviewers agree to reject.
train
[ "SJlYKaY2iB", "BylOS13oir", "SkxROnojiB", "HJgdWXTejB", "SklYRK9loH", "HylC3UclsH", "SylRvuv6KS", "SkggMGYTtB", "BkgFPO-W5B" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The difference in objectives shows that KL(student | teacher) encourages the exploration a lot, which results in much smooth distributions that are able to capture multiple modes. That means, the probabilities of words with similar semantic meanings should be pushed up. Thus, the probability of ground truth word (...
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2020_rJehf0VKwS", "SkxROnojiB", "SylRvuv6KS", "iclr_2020_rJehf0VKwS", "iclr_2020_rJehf0VKwS", "iclr_2020_rJehf0VKwS", "iclr_2020_rJehf0VKwS", "iclr_2020_rJehf0VKwS", "iclr_2020_rJehf0VKwS" ]
iclr_2020_r1g6MCEtwr
Zero-Shot Out-of-Distribution Detection with Feature Correlations
When presented with Out-of-Distribution (OOD) examples, deep neural networks yield confident, incorrect predictions. Detecting OOD examples is challenging, and the potential risks are high. In this paper, we propose to detect OOD examples by identifying inconsistencies between activity patterns and class predicted. We find that characterizing activity patterns by feature correlations and identifying anomalies in pairwise feature correlation values can yield high OOD detection rates. We identify anomalies in the pairwise feature correlations by simply comparing each pairwise correlation value with its respective range observed over the training data. Unlike many approaches, this can be used with any pre-trained softmax classifier and does not require access to OOD data for fine-tuning hyperparameters, nor does it require OOD access for inferring parameters. The method is applicable across a variety of architectures and vision datasets and generally performs better than or equal to state-of-the-art OOD detection methods, including those that do assume access to OOD examples.
reject
The paper proposes a new scoring function for OOD detection based on calculating the total deviation of the pairwise feature correlations. This is an important problem that is of general interest in our community. Reviewer 2 found the paper to be clear, and provided a set of weaknesses relating to the lack of explanations of performance and more careful ablations, along with a set of strategies to address them. Reviewer 1 recognized the importance of being useful for pretrained networks but also raised questions of explanation and theoretical motivation. Reviewer 3 was extremely supportive, and used the authors' code to highlight the difference between far-from-distribution behaviour versus near-distribution OOD examples. The authors provided detailed responses to all points raised and provided additional evidence. There was no convergence of the review recommendations. The review process added much more clarity to the paper and it is now a better paper. The paper demonstrates all the features of a good paper, but unfortunately did not yet reach the level required for acceptance at this conference.
train
[ "rJe62ec0tH", "rkgcvXssor", "ByxOb5VioH", "Bye2FcK9ir", "rylP-azSoH", "HkxQUC19iS", "HyxcOnJ5or", "H1xozffaFB", "HyxoOsxKiH", "SkeLa9gtir", "ryx7TVmroB", "r1xrLymBjr", "rJghOAGBjr", "Bkxza1qaKH" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a new scoring function for OOD detection based on calculating the total deviation of the pairwise feature correlations. The method only requires in-distribution data for tuning its hyper-parameters and can use a pre-trained classifier directly. Its performance is evaluated with small image data...
[ 3, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_r1g6MCEtwr", "iclr_2020_r1g6MCEtwr", "Bye2FcK9ir", "HkxQUC19iS", "rJe62ec0tH", "Bkxza1qaKH", "rJe62ec0tH", "iclr_2020_r1g6MCEtwr", "H1xozffaFB", "H1xozffaFB", "H1xozffaFB", "Bkxza1qaKH", "rylP-azSoH", "iclr_2020_r1g6MCEtwr" ]
iclr_2020_BJlaG0VFDH
Decoupling Weight Regularization from Batch Size for Model Compression
Conventionally, compression-aware training performs weight compression for every mini-batch to compute the impact of compression on the loss function. In this paper, in order to study when would be the right time to compress weights during optimization steps, we propose a new hyper-parameter called the Non-Regularization period, or NR period, during which weights are not updated for regularization. We first investigate the influence of the NR period on regularization using weight decay and weight random noise insertion. Through various experiments, we show that stronger weight regularization demands a longer NR period (regardless of batch size) to best utilize regularization effects. From our empirical evidence, we argue that weight regularization for every mini-batch allows only small weight updates and limited regularization effects, such that there is a need to search for the right NR period and weight regularization strength to enhance model accuracy. Consequently, the NR period becomes especially crucial for model compression, where large weight updates are necessary to increase the compression ratio. Using various models, we show that simple weight updates to comply with compression formats, along with a long NR period, are enough to achieve a high compression ratio and model accuracy.
reject
This paper proposes to apply regularizers such as weight decay or weight noise only periodically, rather than every epoch. It investigates how the "non-regularization period", or period between regularization steps, interacts with other hyperparameters. Overall, the writing feels somewhat scattered, and it is hard to identify a clear argument for why the NRP should help. Certainly one could save computation this way, but regularizers like weight decay or weight noise incur only a small computational cost anyway. One explicit claim from the paper is that a higher NRP allows larger regularization. There's a sense in which this is demonstrated, though not a very interesting sense: Figure 4 shows that the weight decay strength should be adjusted proportionally to the NRP. But varying the parameters in this way simply results in an unbiased (but noisier) estimate of gradients of exactly the same regularization penalty, so I don't think there's much surprising here. Similarly, Section 3 argues that a higher NRP allows for larger stochastic perturbations, which makes it easier to escape local optima. But this isn't demonstrated experimentally, nor does it seem obvious that stochasticity will help find a better local optimum. Overall, I think this paper needs substantial cleanup before it's ready to be published at a venue such as ICLR.
val
[ "Hkl-BaaqjS", "rkeT1bLCFH", "BJg9k5UqoB", "rygfeJMvsS", "HyeHg5ZPoH", "Byx5P39pKB", "H1l86F3Isr", "SkxAxFnLjS", "Hye9L_hUir", "B1xn2oQs5r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Feel free to let us know if the updated paper addresses some of your concerns, the paper is currently borderline. ", "I think that the paper is very well written, I like it. The authors localized a phenomenon and demonstrated how to exploit it. I trust the results because I performed exactly the same experiment...
[ -1, 8, -1, -1, -1, 3, -1, -1, -1, 3 ]
[ -1, 4, -1, -1, -1, 3, -1, -1, -1, 4 ]
[ "B1xn2oQs5r", "iclr_2020_BJlaG0VFDH", "iclr_2020_BJlaG0VFDH", "HyeHg5ZPoH", "H1l86F3Isr", "iclr_2020_BJlaG0VFDH", "Byx5P39pKB", "B1xn2oQs5r", "rkeT1bLCFH", "iclr_2020_BJlaG0VFDH" ]
iclr_2020_r1e0G04Kvr
Deep Graph Translation
Deep graph generation models have achieved great successes recently; however, these are typically unconditioned generative models that have no control over the target graphs given an input graph. In this paper, we propose a novel Graph-Translation-Generative-Adversarial-Networks (GT-GAN) model that transforms input graphs into their target output graphs. GT-GAN consists of a graph translator equipped with innovative graph convolution and deconvolution layers to learn the translation mapping considering both global and local features, and a new conditional graph discriminator to classify target graphs by conditioning on input graphs. Extensive experiments on multiple synthetic and real-world datasets demonstrate that our proposed GT-GAN significantly outperforms other baseline methods in terms of both effectiveness and scalability. For instance, GT-GAN achieves at least 10X and 15X faster runtimes than GraphRNN and RandomVAE, respectively, when the size of the graph is around 50.
reject
This paper studies the problem of graph translation, which aims at learning a graph translator that maps an input graph to a target graph using an adversarial training framework. The reviewers think the problem is interesting. However, the paper needs further improvement in terms of novelty and writing.
val
[ "ryxsS9TDoH", "SklKLtTvsH", "rJeZ08aDsS", "Skg74UTPsS", "rkgnHQpvjB", "H1x0ltmTFS", "BklKS6YRtB", "H1eMKX61qS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "--------------------------------------------\nQ3: Though the studied problem is interesting, the proposed method makes sense but is not very novel. It seems to be adopting GAN with GNN and l1 regularizer.\n\nAnswer: Although it seems like it is “easy” to adopting GAN with GNN, we would like to argue that it is def...
[ -1, -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "H1x0ltmTFS", "H1x0ltmTFS", "BklKS6YRtB", "BklKS6YRtB", "H1eMKX61qS", "iclr_2020_r1e0G04Kvr", "iclr_2020_r1e0G04Kvr", "iclr_2020_r1e0G04Kvr" ]
iclr_2020_rkly70EKDH
Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently
It has been observed \citep{zhang2016understanding} that deep neural networks can memorize: they achieve 100\% accuracy on training data. Recent theoretical results explained such behavior in highly overparametrized regimes, where the number of neurons in each layer is larger than the number of training samples. In this paper, we show that neural networks can be trained to memorize training data perfectly in a mildly overparametrized regime, where the number of parameters is just a constant factor more than the number of training samples, and the number of neurons is much smaller.
reject
The paper studies the amount of over-parameterization needed for a quadratic 2- or 3-layer neural network to memorize a separable training data set with arbitrary labels. While the reviewers agree that this paper contains interesting results, the review process uncovered highly related prior work, which requires a major revision to put the current paper into perspective, and generally various clarifications. The paper will benefit from a revision and resubmission to another venue, and in its current form is not ready for acceptance at ICLR-2020.
train
[ "B1l8dPX2KB", "B1loxGgTYB", "HJgMadUAYS", "HJxU_Ooijr", "rkgcHdsoor", "BJlexdojjB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper studies the mildly over-parameterized neural networks. In particular, it shows that when the width is at least O(sqrt{n}) where n is the number of samples, PDG fits the training data. The analysis is done for 2-layer or 3-layer networks with (mainly) quadratic activations, with only one-layer weights (t...
[ 1, 3, 8, -1, -1, -1 ]
[ 5, 4, 5, -1, -1, -1 ]
[ "iclr_2020_rkly70EKDH", "iclr_2020_rkly70EKDH", "iclr_2020_rkly70EKDH", "HJgMadUAYS", "B1l8dPX2KB", "B1loxGgTYB" ]
iclr_2020_Byekm0VtwS
A Training Scheme for the Uncertain Neuromorphic Computing Chips
Uncertainty is a very important feature of intelligence and helps the brain become a flexible, creative and powerful intelligent system. Crossbar-based neuromorphic computing chips, in which the computing is mainly performed by analog circuits, exhibit uncertainty and can be used to imitate the brain. However, most current deep neural networks have not taken the uncertainty of the neuromorphic computing chip into consideration. Therefore, their performance on neuromorphic computing chips is not as good as on the original platforms (CPUs/GPUs). In this work, we propose the uncertainty adaptation training scheme (UATS), which exposes the chip's uncertainty to the neural network during the training process. The experimental results show that neural networks trained with UATS can achieve inference performance on the uncertain neuromorphic computing chip comparable to the results on the original platforms, and much better than the performance without this training scheme.
reject
The paper proposes incorporating the uncertainty of NNs into the training process on analog-circuit-based chips. As one reviewer emphasized, the paper addresses an important and unique research problem: running NNs on such chips. Unfortunately, a few issues are raised by reviewers, including presentation, novelty and experiments. These might be partially mitigated by 1) writing the motivation/intro in the most accessible way possible, and 2) giving an easy contrast to normal NNs (on computers) to emphasize the unique and interesting challenges in this setting. We encourage the authors to take a few more cycles of revision, and hope this paper sees the light soon.
train
[ "r1eAqlKqjS", "HygtdA_9sB", "S1lkFcwCFB", "Bylo-7Sy9S", "rJlZhT3p9B" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In Bayesian neural network, the uncertainty of the weight, such as the standard deviation, is usually a trainable parameter. However, in neuromorphic computing chip, this is a fixed restriction, which is determined by the device or the circuit. Our method is to deal with this underlying uncertainties. ", "Thanks...
[ -1, -1, 1, 6, 1 ]
[ -1, -1, 1, 1, 3 ]
[ "S1lkFcwCFB", "Bylo-7Sy9S", "iclr_2020_Byekm0VtwS", "iclr_2020_Byekm0VtwS", "iclr_2020_Byekm0VtwS" ]
iclr_2020_rJggX0EKwS
The Benefits of Over-parameterization at Initialization in Deep ReLU Networks
It has been noted in existing literature that over-parameterization in ReLU networks generally improves performance. While there could be several factors involved behind this, we prove some desirable theoretical properties at initialization which may be enjoyed by ReLU networks. Specifically, it is known that He initialization in deep ReLU networks asymptotically preserves variance of activations in the forward pass and variance of gradients in the backward pass for infinitely wide networks, thus preserving the flow of information in both directions. Our paper goes beyond these results and shows novel properties that hold under He initialization: i) the norm of hidden activation of each layer is equal to the norm of the input, and, ii) the norm of weight gradient of each layer is equal to the product of norm of the input vector and the error at output layer. These results are derived using the PAC analysis framework, and hold true for finitely sized datasets such that the width of the ReLU network only needs to be larger than a certain finite lower bound. As we show, this lower bound depends on the depth of the network and the number of samples, and by the virtue of being a lower bound, over-parameterized ReLU networks are endowed with these desirable properties. For the aforementioned hidden activation norm property under He initialization, we further extend our theory and show that this property holds for a finite width network even when the number of data samples is infinite. Thus we overcome several limitations of existing papers, and show new properties of deep ReLU networks at initialization.
reject
The article studies benefits of over-parametrization and theoretical properties at initialization in ReLU networks. The reviewers raised concerns about the work being very close to previous works and also about the validity of some assumptions and derivations. Nonetheless, some reviewers mentioned that the analysis might be a starting point in understanding other phenomena and made some suggestions. However, the authors did not provide a rebuttal nor a revision.
train
[ "BkxInWuatH", "Hkx9eVWAYS", "SkgXIteScr", "SkxEBvuUcS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies the norm of hidden activation of each layer and the norm of weight gradient of each layer for deep ReLU neural network. By using concentralization property of the random initialization, the paper derives their expected values and high probability range when the network width is sufficiently wide....
[ 1, 3, 3, 3 ]
[ 4, 5, 3, 3 ]
[ "iclr_2020_rJggX0EKwS", "iclr_2020_rJggX0EKwS", "iclr_2020_rJggX0EKwS", "iclr_2020_rJggX0EKwS" ]
iclr_2020_rygG7AEtvB
Finding Mixed Strategy Nash Equilibrium for Continuous Games through Deep Learning
Nash equilibrium has long been a desired solution concept in multi-player games, especially those on continuous strategy spaces, which have attracted rapidly growing interest due to advances in research applications such as generative adversarial networks. Despite the fact that several deep learning based approaches are designed to obtain a pure strategy Nash equilibrium, it is rather luxurious to assume the existence of such an equilibrium. In this paper, we present a new method to approximate mixed strategy Nash equilibria in multi-player continuous games, which always exist and include pure ones as a special case. We remedy the pure strategy weakness by adopting the pushforward measure technique to represent a mixed strategy in continuous spaces. This allows us to generalize the Gradient-based Nikaido-Isoda (GNI) function to measure the distance between the players' joint strategy profile and a Nash equilibrium. Applying the gradient descent algorithm, our approach is shown to converge to a stationary Nash equilibrium under the convexity assumption on payoff functions, the same popular setting as in previous studies. In numerical experiments, our method consistently and significantly outperforms recent works on approximating Nash equilibria for quadratic games, general Blotto games, and GAMUT games.
reject
The paper presents an algorithm to compute mixed-strategy Nash equilibria for continuous action space games. While the paper has some novelty, reviewers are generally unimpressed with the assumptions made, and the quality of the writing. Reviewers were also not swayed by the responses from the authors. Additionally, it could be argued that the paper is somewhat peripheral to the topic of the conference. On balance, I would recommend reject for now; the paper needs more work.
train
[ "HJez7rpsjB", "H1lZW28sjH", "rJlYNwy5sr", "SkxZ9bycjS", "Hkl0_UxHsB", "Skg-r1GKYr", "BJguqGlHsH", "HylyFRWXjB", "SyluSCZXor", "HklBzRZmjB", "rJg8GDbTFB", "BklEtuqg5H" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks very much for your comment.\n\n(1) Zeroth order optimization can indeed be used to optimize neural network parameters. However, due to the fact that neural networks usually have a huge number of parameters, zeroth order optimization becomes less efficient and lacks theoretical guarantee. Also, zeroth order ...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 6, 3 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, 4, 5 ]
[ "H1lZW28sjH", "SyluSCZXor", "Hkl0_UxHsB", "BJguqGlHsH", "HklBzRZmjB", "iclr_2020_rygG7AEtvB", "HylyFRWXjB", "Skg-r1GKYr", "rJg8GDbTFB", "BklEtuqg5H", "iclr_2020_rygG7AEtvB", "iclr_2020_rygG7AEtvB" ]
iclr_2020_BkeMXR4KvS
DASGrad: Double Adaptive Stochastic Gradient
Adaptive moment methods have been remarkably successful for optimization in the presence of high-dimensional or sparse gradients. In parallel, adaptive sampling probabilities for SGD have allowed optimizers to improve convergence rates by prioritizing the examples from which to learn efficiently. Numerous applications in the past have implicitly combined adaptive moment methods with adaptive probabilities, yet the theoretical guarantees of such procedures have not been explored. We formalize double adaptive stochastic gradient methods (DASGrad) as an optimization technique and analyze its convergence improvements in a stochastic convex optimization setting, and we provide empirical validation of our findings with convex and non-convex objectives. We observe that the benefits of the method increase with the model complexity and the variability of the gradients, and we explore the resulting utility in extensions to transfer learning.
reject
The reviewers were confused by several elements of the paper, as mentioned in their reviews and, despite the authors' rebuttal, still have several areas of concerns. I encourage you to read the reviews carefully and address the reviewers' concerns for a future submission.
train
[ "BJxOsdd2jB", "HJgBuNd3ir", "Hye3nf-ptr", "HklOgsSRFr", "SyxiU5SatH" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1) Thanks for this comment, $\\beta_{1}(t)$ should be $\\beta_{1}{t}$ from adaptive moment methods.\n\n2) The matrix $V_{t}$ is defined in page 5 on the algorithm. Thanks for pointing your concerns regarding the convergence of Adam, we used AMSGrad fixed version and proof to account for the Adam convergence proble...
[ -1, -1, 6, 3, 3 ]
[ -1, -1, 3, 4, 3 ]
[ "SyxiU5SatH", "HklOgsSRFr", "iclr_2020_BkeMXR4KvS", "iclr_2020_BkeMXR4KvS", "iclr_2020_BkeMXR4KvS" ]
iclr_2020_ryeQmCVYPS
Defective Convolutional Layers Learn Robust CNNs
Robustness of convolutional neural networks has recently been highlighted by adversarial examples, i.e., inputs with well-designed perturbations that are imperceptible to humans but can cause the network to give incorrect outputs. Recent research suggests that the noise in adversarial examples breaks the textural structure, which eventually leads to wrong predictions by convolutional neural networks. To help a convolutional neural network make predictions that rely less on textural information, we propose defective convolutional layers, which contain defective neurons whose activations are set to be a constant function. As the defective neurons contain no information and are far different from the standard neurons in their spatial neighborhood, the textural features cannot be accurately extracted and the model has to seek other features for classification, such as shape. We first show that predictions made by the defective CNN are less dependent on textural information and more on shape information, and we further find that adversarial examples generated by the defective CNN appear to have semantic shapes. Experimental results demonstrate that the defective CNN has a higher defense ability than the standard CNN against various types of attack. In particular, it achieves state-of-the-art performance against transfer-based attacks without applying any adversarial training.
reject
The reviewers wondered about the practical application of this method, given that the performance was lower. The reviewers were also surprised by some of your claims and wanted you to explore them more deeply. On the positive side, the reviewers found your experiments to be very thorough. You also performed additional experiments during the rebuttal period. We hope that those experiments will help you to build a better paper as you work towards publishing this work.
test
[ "S1x5zqVhiB", "HylPFmN3iH", "H1gO5pmhsS", "HyesL3Q3ir", "SyxFZjSItB", "B1xJXVihYr", "rylp1fFAtS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank Reviewer #3 for the feedback and suggestions. The suggestions are really helpful in further improving our work.\n\nRegarding sampling of 4000 images in the patch experiment with confidence > 99%:\n\nThe phenomenon is similar between the images that are correctly predicted with confidence score > 99% and <...
[ -1, -1, -1, -1, 3, 1, 6 ]
[ -1, -1, -1, -1, 3, 4, 3 ]
[ "rylp1fFAtS", "H1gO5pmhsS", "SyxFZjSItB", "B1xJXVihYr", "iclr_2020_ryeQmCVYPS", "iclr_2020_ryeQmCVYPS", "iclr_2020_ryeQmCVYPS" ]
iclr_2020_r1eVX0EFvH
Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness
Adversarial examples are malicious inputs crafted to cause a model to misclassify them. In their most common instantiation, "perturbation-based" adversarial examples introduce changes to the input that leave its true label unchanged, yet result in a different model prediction. Conversely, "invariance-based" adversarial examples insert changes to the input that leave the model's prediction unaffected despite the underlying input's label having changed. So far, the relationship between these two notions of adversarial examples has not been studied; we close this gap. We demonstrate that solely achieving perturbation-based robustness is insufficient for complete adversarial robustness. Worse, we find that classifiers trained to be Lp-norm robust are more vulnerable to invariance-based adversarial examples than their undefended counterparts. We construct theoretical arguments and analytical examples to justify why this is the case. We then illustrate empirically that the consequences of excessive perturbation-robustness can be exploited to craft new attacks. Finally, we show how to attack a provably robust defense --- certified on the MNIST test set to have at least 87% accuracy (with respect to the original test labels) under perturbations of Linfinity-norm below epsilon=0.4 --- and reduce its accuracy (under this threat model with respect to an ensemble of human labelers) to 60% with an automated attack, or just 12% with human-crafted adversarial examples.
reject
The paper considers the relationship between: (i) perturbations to an input x which change the predictions of a model but not the ground-truth label, and (ii) perturbations to an input x which do not change a model's prediction but do change the ground-truth label. The authors show that achieving robustness to the former need not guarantee robustness to the latter. While these ideas are interesting, the reviewers would like to see a tighter connection between the two forms of robustness developed.
train
[ "rygGyhMTFS", "rJlJAnX3YB", "B1eYn6KniH", "Bkl_5BcojS", "HkgfWr9ijB", "ByeOpEcsjS", "rkg0F4ciir", "ByxBzghkcH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\n================ Update after reading the rebuttal and the revised paper ======================================= \n\nI have now read the author rebuttal and the revised paper. I had raised two main issues with the paper in my initial review (see below): 1) experiments don't provide enough support for the main cl...
[ 3, 6, -1, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_r1eVX0EFvH", "iclr_2020_r1eVX0EFvH", "iclr_2020_r1eVX0EFvH", "rJlJAnX3YB", "ByxBzghkcH", "rkg0F4ciir", "rygGyhMTFS", "iclr_2020_r1eVX0EFvH" ]
iclr_2020_BygSXCNFDB
Exploration Based Language Learning for Text-Based Games
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games. Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text. These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents. Moreover, they provide a learning environment in which these skills can be acquired through interactions with an environment rather than using fixed corpora. One aspect that makes these games particularly challenging for learning agents is the combinatorially large action space. Existing methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions. In this work, we propose to use the exploration approach of Go-Explore (Ecoffet et al., 2019) for solving text-based games. More specifically, in an initial exploration phase, we first extract trajectories with high rewards, after which we train a policy to solve the game by imitating these trajectories. Our experiments show that this approach outperforms existing solutions in solving text-based games, and it is more sample efficient in terms of the number of interactions with the environment. Moreover, we show that the learned policy can generalize better than existing solutions to unseen games without using any restriction on the action space.
reject
The paper applies the Go-Explore algorithm to text-based games and shows that it is able to solve text-based games with better sample efficiency and generalization than some alternatives. The Go-Explore algorithm is used to extract high-reward trajectories that can be used to train a policy using a seq2seq model that maps observations to actions. The paper received 1 weak accept and 2 weak rejects. Initially the paper received three weak rejects, with the author response and revision convincing one reviewer to increase their score to a weak accept. Overall, the reviewers liked the paper and thought that it was well-written with good experiments. However, there is concern that the paper lacks technical novelty and would not be of interest to the broader ICLR community (beyond those interested in text-based games). Another concern reviewers expressed was that the proposed method was only compared against baselines with simple exploration strategies and that baselines with more advanced exploration strategies should be included. The AC agrees with the above concerns and encourages the authors to improve their paper based on the reviewer feedback, and to consider resubmitting to a venue that is more focused on text-based games (perhaps an NLP conference).
train
[ "BJldEiamKH", "B1e9WmK2sH", "SkexqYFiiB", "H1xYdUShjB", "Syg2zKFsir", "BkxdUtYssr", "Byxbair19H", "ByljVLmfcS" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper applies the Go-Explore algorithm to the domain of text-based games and shows significant performance gains on Textworld's Coin Collector and Cooking sets of games. Additionally, the authors evaluate 3 different paradigms for training agents on (1) single games, (2) jointly on multiple games, and (3) tra...
[ 6, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2020_BygSXCNFDB", "H1xYdUShjB", "BJldEiamKH", "BkxdUtYssr", "ByljVLmfcS", "Byxbair19H", "iclr_2020_BygSXCNFDB", "iclr_2020_BygSXCNFDB" ]
iclr_2020_H1eD7REtPr
CAN ALTQ LEARN FASTER: EXPERIMENTS AND THEORY
Unlike the popular Deep Q-Network (DQN) learning, Alternating Q-learning (AltQ) does not fully fit a target Q-function at each iteration, and is generally known to be unstable and inefficient. Limited applications of AltQ mostly rely on substantially altering the algorithm architecture in order to improve its performance. Although Adam appears to be a natural solution, its performance in AltQ has rarely been studied before. In this paper, we first provide a solid exploration of how well AltQ performs with Adam. We then take a further step to improve the implementation by adopting the technique of parameter restart. More specifically, the proposed algorithms are tested on a batch of Atari 2600 games and exhibit superior performance to the DQN learning method. The convergence rate of a slightly modified version of the proposed algorithms is characterized under linear function approximation. To the best of our knowledge, this is the first theoretical study of Adam-type algorithms in Q-learning.
reject
The reviewers attempted to provide a fair assessment of this work, albeit with varying qualifications. Nevertheless, the depth and significance of the technical contribution was unanimously questioned, and the experimental evaluation was not considered to be convincing by any of the assessors. The criticisms are sufficient to ask the authors to further strengthen this work before it can be considered for a top conference.
train
[ "S1xkUljkqH", "HJxEMKiEYB", "H1gauDWnir", "H1xvN8bhor", "BygqxrW2sS", "H1x4aHQlcB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper describes a method to improve the AltQ algorithm (which is typically unstable and inefficient) by using a combination of an Adam optimizer and regularly restarting the internal parameters of the Adam optimizer. The approach is evaluated on both a synthetic problem and on Atari games.\n\nThe core of the ...
[ 3, 3, -1, -1, -1, 1 ]
[ 1, 5, -1, -1, -1, 3 ]
[ "iclr_2020_H1eD7REtPr", "iclr_2020_H1eD7REtPr", "HJxEMKiEYB", "S1xkUljkqH", "H1x4aHQlcB", "iclr_2020_H1eD7REtPr" ]
iclr_2020_HkxwmRVtwH
Gaussian Process Meta-Representations Of Neural Networks
Bayesian inference offers a theoretically grounded and general way to train neural networks and can potentially give calibrated uncertainty. It is, however, challenging to specify a meaningful and tractable prior over the network parameters. More crucially, many existing inference methods assume mean-field approximate posteriors, ignoring interactions between parameters in high-dimensional weight space. To this end, this paper introduces two innovations: (i) a Gaussian process-based hierarchical model for the network parameters based on recently introduced unit embeddings that can flexibly encode weight structures, and (ii) input-dependent contextual variables for the weight prior that can provide convenient ways to regularize the function space being modeled by the NN through the use of kernels. Furthermore, we develop an efficient structured variational inference scheme that alleviates the need to perform inference in the weight space whilst retaining and learning non-trivial correlations between network parameters. We show these models provide desirable test-time uncertainty estimates, demonstrate cases of modeling inductive biases for neural networks with kernels and demonstrate competitive predictive performance of the proposed model and algorithm over alternative approaches on a range of classification and active learning tasks.
reject
The authors propose an approach to Bayesian deep learning, by representing neural network weights as latent variables mapped through a Kronecker factored Gaussian process. The ideas have merit and are well-motivated. Reviewers were primarily concerned by the experimental validation, and lack of discussion and comparisons with related work. After the rebuttal, reviewers still expressed concern regarding both points, with no reviewer championing the work. One reviewer writes: "I have read the authors' rebuttal. I still have reservation regarding the gain of a GP over an NN in my original review and I do not think the authors have addressed this very convincingly -- while I agree that in general, sparse GP can match the performance of GP with a sufficiently large number of inducing inputs, the proposed method also incurs extra approximations so arguing for the advantage of the proposed method in term of the accurate approximate inference of sparse GP seems problematic." Another reviewer points out that the comment in the author rebuttal about Kronecker factored methods (Saatci, 2011) for non-Gaussian likelihoods and with variational inference being an open question is not accurate: SV-DKL (https://arxiv.org/abs/1611.00336) and other approaches (http://proceedings.mlr.press/v37/flaxman15.pdf) were specifically designed to address this question, and are implemented in popular packages. Moreover, there is highly relevant additional work on latent variable representations for neural network weights, inducing priors on p(w) through p(z), which is not discussed or compared against (https://arxiv.org/abs/1811.07006, https://arxiv.org/abs/1907.07504). The revision only includes a minor consideration of DKL in the appendix. While the ideas in the paper are promising, and the generally thoughtful exchanges were appreciated, there is clearly related work that should be discussed in the main text, with appropriate comparisons. With reviewers expressing additional reservations after rebuttal, and the lack of a clear champion, the paper would benefit from significant revisions in these directions. Note: In the text, it says: "However, obtaining p(w|D) and p(D) exactly is intractable when N is large or when the network is large and as such, approximation methods are often required." One cannot exactly obtain p(D), or the predictive distribution, regardless of N or the size of the network; exact inference is intractable because the relevant integrals cannot be expressed in closed form, since the parameters are mapped through non-linearities, in addition to typically non-Gaussian likelihoods.
train
[ "S1ly24CaKr", "rkxRezOpFH", "Byxfmb92jB", "S1lhZ0F3oS", "BJeDVuxhoB", "BJgWhXZojr", "HylfQkBYir", "BylDgG2Oir", "Byg0gWh_sS", "SkxQ78aPsH", "rkgbHvpviH", "BylqqvTvoH", "HkgGDDaDoH", "BJgS28awiS", "B1xym1CyqS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "**Summary**: This paper proposes a hierarchical Bayesian approach to hyper-networks by placing a Gaussian process prior over the latent representation for each weight. A stochastic variational inference scheme is then proposed to infer the posterior over both the Gaussian process and the weights themselves. Experi...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_HkxwmRVtwH", "iclr_2020_HkxwmRVtwH", "S1ly24CaKr", "iclr_2020_HkxwmRVtwH", "BJgWhXZojr", "HylfQkBYir", "Byg0gWh_sS", "HkgGDDaDoH", "rkgbHvpviH", "iclr_2020_HkxwmRVtwH", "S1ly24CaKr", "B1xym1CyqS", "rkgbHvpviH", "rkxRezOpFH", "iclr_2020_HkxwmRVtwH" ]
iclr_2020_HJew70NYvH
TPO: TREE SEARCH POLICY OPTIMIZATION FOR CONTINUOUS ACTION SPACES
Monte Carlo Tree Search (MCTS) has achieved impressive results in a range of discrete environments, such as Go, Mario and Arcade games, but it has not yet fulfilled its true potential in continuous domains. In this work, we introduce TPO, a tree-search-based policy optimization method for continuous environments. TPO takes a hybrid approach to policy optimization. Building the MCTS tree in a continuous action space and updating the policy gradient using off-policy MCTS trajectories are non-trivial. To overcome these challenges, we propose limiting the tree search branching factor by drawing only a few action samples from the policy distribution, and we define a new loss function based on the trajectories' means and standard deviations. Our approach led to some non-intuitive findings. MCTS training generally requires a large number of samples and simulations. However, we observed that bootstrapping tree search with a pre-trained policy allows us to achieve high-quality results with a low MCTS branching factor and a small number of simulations. Without the proposed policy bootstrapping, continuous MCTS would require a much larger branching factor and simulation count, rendering it computationally prohibitively expensive. In our experiments, we use PPO as our baseline policy optimization algorithm. TPO significantly improves the policy on nearly all of our benchmarks. For example, in complex environments such as Humanoid, we achieve a 2.5× improvement over the baseline algorithm.
reject
The paper proposes a tree-search-based policy optimization method for continuous action spaces. The paper does not have a theoretical guarantee, but has empirical results. Reviewers brought up issues such as the lack of comparisons with other policy optimization methods (SAC, RERPI, etc.), sample inefficiency, and an unclear difference from some other similar papers. Even though the authors provided a rebuttal to address these issues, all the reviewers remained negative. So I can only recommend rejection at this stage.
train
[ "BJxaxoUqqH", "S1xeDjc2sS", "rkxdBs9hiS", "BJlAMjc2iB", "H1xnei5noB", "SkgKfHeRYS", "ByeT7Brx9B" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes Tree search Policy Optimization (TPO) algorithm for tasks with continuous action spaces. TPO works after a well trained policy can be obtained (PPO is used in the paper). After a well trained policy is obtained, TPO firstly uses the policy to do Monte Carlo Tree Search (MCTS), with at most 32 a...
[ 1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_HJew70NYvH", "SkgKfHeRYS", "ByeT7Brx9B", "BJxaxoUqqH", "iclr_2020_HJew70NYvH", "iclr_2020_HJew70NYvH", "iclr_2020_HJew70NYvH" ]
iclr_2020_B1gd7REFDB
Context-Aware Object Detection With Convolutional Neural Networks
Although state-of-the-art object detection methods are successful in detecting and classifying objects by leveraging deep convolutional neural networks (CNNs), these methods overlook the semantic context, which implies the probabilities that different classes of objects occur jointly. In this work, we propose a context-aware CNN (or conCNN for short) that for the first time effectively enforces semantic context constraints in a CNN-based object detector by leveraging the popular conditional random field (CRF) model in a CNN. In particular, conCNN features a context-aware module that naturally models the mean-field inference method for CRFs using a stack of common CNN operations. It can be seamlessly plugged into any existing region-based object detection paradigm. Our experiments on the COCO dataset showcase that conCNN improves the average precision (AP) of object detection by 2 percentage points, while introducing only negligible extra training overhead.
reject
The paper proposes a contextual reasoning module for object detection following the approach proposed by the NIPS 2011 paper. Although the reviewers find the proposed approach reasonable, the experimental results are weak and noisy. Multiple reviewers believe that the paper would benefit from another review cycle, pointing out that the authors' response confirmed that multiple additional experiments (or redone experiments) are needed.
train
[ "SkxCuJN3ir", "BJlt36m2jH", "rkgUoCQnoH", "Sylu28cFYB", "rkgZwtqI9r", "BJguUtsIcS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the comments. Here we respond to your main concern.\n\nComment: \"My main concern is that the experimental results (Page 8 Table 2) does not support the merits of the proposed approach. When comparing with the existing Faster R-CNN + Relation work of [2], the improvement provided by the proposed appr...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 5, 4, 4 ]
[ "Sylu28cFYB", "rkgZwtqI9r", "BJguUtsIcS", "iclr_2020_B1gd7REFDB", "iclr_2020_B1gd7REFDB", "iclr_2020_B1gd7REFDB" ]
iclr_2020_BJeFQ0NtPS
Parallel Neural Text-to-Speech
In this work, we first propose ParaNet, a non-autoregressive seq2seq model that converts text to spectrogram. It is fully convolutional and obtains 46.7 times speed-up over Deep Voice 3 at synthesis while maintaining comparable speech quality using a WaveNet vocoder. ParaNet also produces stable alignment between text and speech on the challenging test sentences by iteratively improving the attention in a layer-by-layer manner. Based on ParaNet, we build the first fully parallel neural text-to-speech system using parallel neural vocoders, which can synthesize speech from text through a single feed-forward pass. We investigate several parallel vocoders within the TTS system, including variants of IAF vocoders and bipartite flow vocoder.
reject
The paper proposed a non-autoregressive attention-based encoder-decoder model for text-to-spectrogram conversion using attention distillation. It is shown to bring a good speedup over conventional autoregressive models. The paper further adopted a VAE for vocoder training, which trains from scratch although it performs worse than existing methods (e.g. ClariNet). The main concerns for this paper come from the unclear presentation: * As the reviewer pointed out, there are some misleading claims that the speedup gains were obtained without consideration of the full context (i.e. not including the whole inference time). * The paper failed to clearly present the architectures developed/used in the paper and their differences from those used in the literature. The reviewers suggested the use of diagrams to aid the presentation. * The two contributions are presented in an unbalanced way. Given the complexities involved, it would be better to explain things in more detail. The authors acknowledged the reviewers' comments during the rebuttal, but did not make any changes to the paper.
test
[ "ryxYOSF2ir", "H1xnOZbnor", "H1eR-cCiiS", "H1gad04AKr", "B1xmNSiAYS", "BJgsIVtN9r", "BJloo6R_ur", "Hkg5fqD5dr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author" ]
[ "Thank you for the detailed comments and suggestions. Our response is as follows.\n\n1, We organize the paper in a way that focuses more on building a fully parallel neural TTS system by proposing ParaNet and pairing it with various parallel neural vocoders, which also leads to an interesting comparison of vocoders...
[ -1, -1, -1, 3, 6, 1, -1, -1 ]
[ -1, -1, -1, 5, 4, 4, -1, -1 ]
[ "H1gad04AKr", "B1xmNSiAYS", "BJgsIVtN9r", "iclr_2020_BJeFQ0NtPS", "iclr_2020_BJeFQ0NtPS", "iclr_2020_BJeFQ0NtPS", "iclr_2020_BJeFQ0NtPS", "BJloo6R_ur" ]
iclr_2020_rkgFXR4KPr
A Simple Recurrent Unit with Reduced Tensor Product Representations
Widely used recurrent units, including Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), perform well on natural language tasks, but their ability to learn structured representations is still questionable. Exploiting reduced Tensor Product Representations (TPRs) --- distributed representations of symbolic structure in which vector-embedded symbols are bound to vector-embedded structural positions --- we propose the TPRU, a simple recurrent unit that, at each time step, explicitly executes structural-role binding and unbinding operations to incorporate structural information into learning. A gradient analysis of our proposed TPRU is conducted to support our model design, and its performance on multiple datasets demonstrates its effectiveness. Furthermore, observations from a linguistically grounded study demonstrate the interpretability of the TPRU.
reject
This paper has been reviewed by three reviewers and received scores of 3/3/6. The reviewers took the rebuttal into account in their final verdict. The major criticisms concerned the somewhat ad-hoc notion of interpretability and the analysis of vanishing/exploding gradients in the TPRU, which is experimental and lacks theory. Finally, all reviewers noted that the paper is difficult to read and contains grammar issues, which does not help. On balance, we regret that this paper cannot be accepted to ICLR2020.
train
[ "Bklsv1j2oS", "SJxro8sodH", "ByeJA1orFr", "HklyTuFRFB" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks sincerely for the comments and we have a new version uploaded. ", "This paper proposes a new recurrent unit with a simplified dynamics during training leading to more stable training algorithms and better performance. The paper is difficult to read because it assumes the reader is an expert on Tensor Prod...
[ -1, 6, 3, 3 ]
[ -1, 1, 1, 3 ]
[ "iclr_2020_rkgFXR4KPr", "iclr_2020_rkgFXR4KPr", "iclr_2020_rkgFXR4KPr", "iclr_2020_rkgFXR4KPr" ]
iclr_2020_Hklcm0VYDS
How noise affects the Hessian spectrum in overparameterized neural networks
Stochastic gradient descent (SGD) forms the core optimization method for deep neural networks. While some theoretical progress has been made, it still remains unclear why SGD leads the learning dynamics in overparameterized networks to solutions that generalize well. Here we show that for overparameterized networks with a degenerate valley in their loss landscape, SGD on average decreases the trace of the Hessian of the loss. We also generalize this result to other noise structures and show that isotropic noise in the non-degenerate subspace of the Hessian decreases its determinant. In addition to explaining SGD's role in sculpting the Hessian spectrum, this opens the door to new optimization approaches that may confer better generalization performance. We test our results with experiments on toy models and deep neural networks.
reject
The study of the impact of the noise on the Hessian is interesting and I commend the authors for attacking this difficult problem. After the rebuttal and discussion, the reviewers had two concerns:
- The strength of the assumptions of the theorem
- Assuming the assumptions are reasonable, the conclusions to draw given the current weak link between the Hessian and generalization.
I'm confident the authors will be able to address these issues for a later submission.
train
[ "BkermuA2tS", "S1gtovl5iB", "r1xd1Gqtjr", "BJxQhPWKoH", "S1giGv-FoS", "SJebdIbKir", "SkeUbS-KsS", "H1lum-xpYr", "BJxOztS89r" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers the problem of how noise coming from the gradient-based update affects the geometry of the hessian matrix when training a neural network. \n\nThe paper makes an interesting claim that around a local minimum, if the noise in SGD is aligned with the hessian matrix of the network, then doing SGD ...
[ 6, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_Hklcm0VYDS", "r1xd1Gqtjr", "S1giGv-FoS", "BkermuA2tS", "H1lum-xpYr", "BJxOztS89r", "iclr_2020_Hklcm0VYDS", "iclr_2020_Hklcm0VYDS", "iclr_2020_Hklcm0VYDS" ]
iclr_2020_rkgqm0VKwB
End-to-end named entity recognition and relation extraction using pre-trained language models
Named entity recognition (NER) and relation extraction (RE) are two important tasks in information extraction and retrieval (IE & IR). Recent work has demonstrated that it is beneficial to learn these tasks jointly, which avoids the propagation of error inherent in pipeline-based systems and improves performance. However, state-of-the-art joint models typically rely on external natural language processing (NLP) tools, such as dependency parsers, limiting their usefulness to domains (e.g. news) where those tools perform well. The few neural, end-to-end models that have been proposed are trained almost completely from scratch. In this paper, we propose a neural, end-to-end model for jointly extracting entities and their relations which does not rely on external NLP tools and which integrates a large, pre-trained language model. Because the bulk of our model's parameters are pre-trained and we eschew recurrence for self-attention, our model is fast to train. On 5 datasets across 3 domains, our model matches or exceeds state-of-the-art performance, sometimes by a large margin.
reject
This paper presents an end-to-end technique for named entity recognition, that uses pre-trained models so as to avoid long training times, and evaluates it against several baselines. The paper was reviewed by three experts working in this area. R1 recommends Reject, giving the opinion that although the paper is well-written and results are good, they feel the technique itself has little novelty and that the main reason the technique works well is using BERT. R2 recommends Weak Reject based on similar reasoning, that the approach consists of existing components (albeit combined in a novel way) and suggest some ablation experiments to isolate the source of the good performance. R3 recommends Weak Accept but feels it is "unsurprising" that BERT allows for faster training and higher accuracy. In their response, authors emphasize that the application of pretraining to named entity recognition is new, and that theirs is a methodological advance, not purely a practical one (as R1 suggests and other reviews imply). They also argue it is not possible to do a fair ablation study that removes BERT, but make an attempt. The reviewers chose to keep their scores after the response. Given the split decision, the AC also read the paper. It is clear the paper has significant merit and significant practical value, as the reviews indicate. However, given that three expert reviewers -- all of whom are NLP researchers at top institutions -- feel that the contribution of the paper is weak (in the context of the expectations of ICLR) makes it not possible for us to recommend acceptance at this time.
val
[ "SyeYlggviB", "Bkg8JkNJsB", "ryeNd3WQsr", "HJxKkZIRtB", "BJeklIo0tr", "SJlvvnJkcB", "HkgY2Kfv9B", "HkgRyog0YS", "B1l3qZskur", "SygMrl53vr", "H1l6GYl3DS", "SJe9dIW3Pr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "public", "public" ]
[ "Hello,\n\nThank you for your review of our paper. We appreciate the positive assessment of the clarity of our writing.\n\nRegarding the suggested ablations,\n\n1. For reasons outlined in our response to the public comment you reference, we do not believe this ablation (as suggested) would be meaningful. For conven...
[ -1, -1, -1, 6, 3, 1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, 3, 3, -1, -1, -1, -1, -1, -1 ]
[ "BJeklIo0tr", "HJxKkZIRtB", "SJlvvnJkcB", "iclr_2020_rkgqm0VKwB", "iclr_2020_rkgqm0VKwB", "iclr_2020_rkgqm0VKwB", "HkgRyog0YS", "iclr_2020_rkgqm0VKwB", "H1l6GYl3DS", "SJe9dIW3Pr", "iclr_2020_rkgqm0VKwB", "H1l6GYl3DS" ]
iclr_2020_B1lsXREYvr
One-Shot Neural Architecture Search via Compressive Sensing
Neural architecture search (NAS), or automated design of neural network models, remains a very challenging meta-learning problem. Several recent works (called "one-shot" approaches) have focused on dramatically reducing NAS running time by leveraging proxy models that still provide architectures with competitive performance. In our work, we propose a new meta-learning algorithm that we call CoNAS, or Compressive sensing-based Neural Architecture Search. Our approach merges ideas from one-shot NAS approaches with iterative techniques for learning low-degree sparse Boolean polynomial functions. We validate our approach on several standard test datasets, discover novel architectures hitherto unreported, and achieve competitive (or better) results in both performance and search time compared to existing NAS approaches. Further, we provide theoretical analysis via upper bounds on the number of validation error measurements needed to perform reliable meta-learning; to our knowledge, these analysis tools are novel to the NAS literature and may be of independent interest.
reject
This paper proposed to use a compressive sensing approach for neural architecture search, similar to Harmonica for hyperparameter optimization. In the discussion, the reviewers noted that the empirical evaluation is not comparing apples to apples; the authors could not provide a fair evaluation. Code availability is not mentioned. The proof of theorem 3.2 was missing in the original submission and was only provided during the rebuttal. All reviewers gave rejecting scores, and I also recommend rejection.
train
[ "S1xpDjp5or", "BylD73zPoS", "rJxzDAMPjB", "Bkg3PaGwjS", "HJxQXdh-Fr", "rygy26eptr", "HJxy8jjAcH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for answering to all the points I wrote on the review. \n\nHowever, I am not still convinced regarding the correlation between the stand-alone architectures evaluated with the one-shot weights vs. retrained from scratch.\nOf course Bender et al. [1] show that this correlation is highly dependen...
[ -1, -1, -1, -1, 1, 3, 3 ]
[ -1, -1, -1, -1, 1, 3, 4 ]
[ "BylD73zPoS", "HJxy8jjAcH", "HJxQXdh-Fr", "rygy26eptr", "iclr_2020_B1lsXREYvr", "iclr_2020_B1lsXREYvr", "iclr_2020_B1lsXREYvr" ]
iclr_2020_Hyes70EYDB
Visual Interpretability Alone Helps Adversarial Robustness
Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability, and interpretability is itself susceptible to adversarial attacks. In this paper, we theoretically show that with the correct measurement of interpretation, it is actually difficult to hide adversarial examples, as confirmed by experiments on MNIST, CIFAR-10 and Restricted ImageNet. Spurred by that, we develop a novel defensive scheme built only on robust interpretation (without resorting to adversarial loss minimization). We show that our defense achieves similar classification robustness to state-of-the-art robust training methods while attaining higher interpretation robustness under various settings of adversarial attacks.
reject
This work focuses on how one can design models with robustness of interpretations. While this is an interesting direction, the paper would benefit from a more careful treatment of its technical claims.
train
[ "BylQGnF_jr", "BJlD8OODoH", "r1lgak4PsB", "HkxDzAXPsB", "r1lqKyVwjH", "S1gJybVvsB", "S1lhseVDjB", "H1eezZEDjr", "BkxsmlNwsH", "rJlQeu8GiB", "rkeeRAvCtB", "S1eSBR_CtH" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your question. \n\nThe attack (fooling an interpretability method, not just the classifier) could be a threat model for a neural network whose usage relies jointly on interpretation maps and classification results. \n\nOne can use an interpretability method as a beneficial post-hoc supplement to visuali...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "BJlD8OODoH", "BkxsmlNwsH", "r1lqKyVwjH", "iclr_2020_Hyes70EYDB", "rJlQeu8GiB", "S1lhseVDjB", "BkxsmlNwsH", "rkeeRAvCtB", "S1eSBR_CtH", "iclr_2020_Hyes70EYDB", "iclr_2020_Hyes70EYDB", "iclr_2020_Hyes70EYDB" ]
iclr_2020_BklTQCEtwH
Curriculum Learning for Deep Generative Models with Clustering
Training generative models like the Generative Adversarial Network (GAN) is challenging for noisy data. A novel curriculum learning algorithm based on clustering is proposed in this paper to address this issue. The curriculum construction is based on the centrality of the underlying clusters in the data points. Data points of high centrality take priority in being fed into the generative models during training. To make our algorithm scalable to large-scale data, an active set is devised, in the sense that every round of training proceeds only on an active subset containing a small fraction of already trained data and the incremental data of lower centrality. Moreover, a geometric analysis is presented to interpret the necessity of the cluster curriculum for generative models. The experiments on cat and human-face data validate that our algorithm is able to learn the optimal generative models (e.g. ProGAN) with respect to specified quality metrics for noisy data. An interesting finding is that the optimal cluster curriculum is closely related to the critical point of the geometric percolation process formulated in the paper.
reject
The paper proposes a curriculum learning approach to training generative models like GANs. The reviewers had a number of questions and concerns related to specific details in the paper and experimental results. While the authors were able to address some of these concerns, the reviewers believe that further refinement is necessary before the paper is ready for publication.
train
[ "HkllQRtbsH", "SklENMcIoH", "Skx7STYbjB", "rkeN7KLoKS", "r1xYpE3w9B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nQ1: “I am wondering if they would not have been better off by describing the proposed approach in terms of outlier removal.”\nA1: Indeed, outlier removal is more clear and easier to understand. But we use “clustering” to emphasize the dynamic process when using cluster curriculum for training generative models....
[ -1, -1, -1, 6, 1 ]
[ -1, -1, -1, 3, 1 ]
[ "rkeN7KLoKS", "iclr_2020_BklTQCEtwH", "r1xYpE3w9B", "iclr_2020_BklTQCEtwH", "iclr_2020_BklTQCEtwH" ]
iclr_2020_BJlJVCEYDB
Neural networks with motivation
How can animals behave effectively in conditions involving different motivational contexts? Here, we propose how reinforcement learning neural networks can learn optimal behavior for dynamically changing motivational salience vectors. First, we show that Q-learning neural networks with motivation can navigate in environments with dynamic rewards. Second, we show that such networks can learn complex behaviors simultaneously directed towards several goals distributed in an environment. Finally, we show that in a Pavlovian conditioning task, the responses of the neurons in our model resemble the firing patterns of neurons in the ventral pallidum (VP), a basal ganglia structure involved in motivated behaviors. We show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely-tuned classes of neurons, responding to positive and negative rewards. Our model generates predictions for the VP connectivity. We conclude that networks with motivation can rapidly adapt their behavior to varying conditions without changes in synaptic strength when expected reward is modulated by motivation. Such networks may also provide a mechanism for how hierarchical reinforcement learning is implemented in the brain.
reject
This paper proposes a deep RL framework that incorporates motivation as input features, and is tested on 3 simplified domains, including one which is presented to rodents. While R2 found the paper well-written and interesting to read, a common theme among reviewer comments is that it’s not clear what the main contribution is, as it seems to simultaneously be claiming a ML contribution (motivation as a feature input helps with certain tasks) as well as a neuroscientific contribution (their agent exhibited representations that clustered similarly to those in animals). In trying to do both, it’s perhaps doing both a disservice. I think it’s commendable to try to bridge the fields of deep RL and neuroscience, and this is indeed an intriguing paper. However any such paper still needs to have a clear contribution. It seems that the ML contributions are too slight to be of general practical use, while the neuroscientific contributions are muddled somewhat. The authors several times mentioned the space constraints limiting their explanations. Perhaps this is an indication that they are trying to cover too much within one paper. I urge the authors to consider splitting it up into two separate works in order to give both the needed focus. I also have some concerns about the results themselves. R1 and R3 both mentioned that the comparison between the non-motivated agent and the motivated agent wasn’t quite fair, since one is essentially only given partial information. It’s therefore not clear how we should be interpreting the performance difference. Second, why was the non-motivated agent not analyzed in the same way as the motivated agent for the Pavlovian task? Isn’t this a crucial comparison to make, if one wanted to argue that the motivational salience is key to reproducing the representational similarities of the animals? (The new experiment with the random fixed weights is interesting, I would have liked to see those results.) 
For these reasons and the ones laid out in the extensive comments of the reviewers, I’m afraid I have to recommend reject.
val
[ "H1eJ5Dc3jS", "S1lkrUqiiB", "S1lar9ndjr", "rJlgLxBuoS", "S1latUrwsH", "rJgsMISPiH", "B1xu1JSPor", "SkgQWpVDiS", "HJxeysVDsB", "SyllsqVvjr", "B1la1uEvjr", "rJgOwFa1oB", "ryed1wPvKH", "B1gtEtq2YH" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the reviewer’s comments.\n\nWe would like to argue that the Reviewer #3 correctly understood most of the technical details from the original text. To increase clarity in the paper's presentation, we followed the Reviewer #3's recommendation and included all technical details in the revised manuscript...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "S1lkrUqiiB", "S1latUrwsH", "rJlgLxBuoS", "B1la1uEvjr", "rJgsMISPiH", "B1xu1JSPor", "SkgQWpVDiS", "rJgOwFa1oB", "SyllsqVvjr", "B1gtEtq2YH", "ryed1wPvKH", "iclr_2020_BJlJVCEYDB", "iclr_2020_BJlJVCEYDB", "iclr_2020_BJlJVCEYDB" ]
iclr_2020_S1e1EAEFPB
Perceptual Regularization: Visualizing and Learning Generalizable Representations
A deployable machine learning model relies on a good representation. Two desirable criteria of a good representation are to be understandable, and to generalize to new tasks. We propose a technique termed perceptual regularization that enables both visualization of the latent representation and control over the generality of the learned representation. In particular our method provides a direct visualization of the effect that adversarial attacks have on the internal representation of a deep network. By visualizing the learned representation, we are also able to understand the attention of a model, obtaining visual evidence that supervised networks learn task-specific representations. We show models trained with perceptual regularization learn transferrable features, achieving significantly higher accuracy in unseen tasks compared to standard supervised learning and multi-task methods.
reject
This paper proposes a new mechanism to visualize the latent space of a neural network. The idea is simple and the paper includes several experiments to test the effectiveness of the method. However, the method bears similarity to previous work and the evaluation does not sufficiently show quantitative improvements over other introspection techniques. The reviewers found this was a substantial problem and for this reason the paper is not ready for publication. The paper should improve its discussion of prior work and better establish its place in this regard.
train
[ "Hygzaao2iH", "r1ez-H92sS", "SJlwNGI8or", "SkxHLQUIjB", "SJebyXUUoB", "Bkg7e7ILjH", "SylCVZigtH", "HJeJcEfMtS", "HJeEts4PqH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your clarifications. Regarding the novelty of the paper (which is the main criticism), \n\n> The main contribution of our paper is to apply a simple method to obtain interesting observations which are completely novel: a) Visualization of the effect of adversarial attack on the latent encoding\n\nThi...
[ -1, -1, -1, -1, -1, -1, 6, 1, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "SJlwNGI8or", "iclr_2020_S1e1EAEFPB", "HJeEts4PqH", "SylCVZigtH", "HJeJcEfMtS", "SJebyXUUoB", "iclr_2020_S1e1EAEFPB", "iclr_2020_S1e1EAEFPB", "iclr_2020_S1e1EAEFPB" ]
iclr_2020_BklxN0NtvB
Noisy Machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation
The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware has been heavily motivated specifically for accelerating neural networks, based on either electronic, optical or photonic devices, which may well achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks to be robust to random continuous noise in the network weights, which is a somewhat new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output. To combat this, we propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks, which is demonstrated experimentally across different networks and datasets, including ImageNet. Our method achieves models with as much as 2X greater noise tolerance compared with the previous best attempts, which is a significant step towards making analog hardware practical for deep learning.
reject
This paper argues that NNs deployed to hardware need to be robust to additive noise and introduces two methods to achieve this. The reviewers liked aspects of the paper and the paper is borderline. However, all in all, sufficient reservations were raised to put the paper below the threshold. The criticism was constructive and can be used for an updated version submitted to the next conference. Rejection is recommended.
train
[ "B1eV9goooS", "SkxJx-3jsS", "ByxvEHhosr", "Hyekd8hjjB", "HylIKtijiS", "Bkl8CPGrjH", "rkeopjLGor", "Syg_H-lOKB", "SJxtkoSRKS", "BJgHvakR5S" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Author Response:\n\nMany thanks for taking the time to review our paper and the constructive comments. We've added additional experimental results and reworked the relevant sections of the paper in response to the reviewer feedback.\n\n- Q1 Noise model \nWe believe that the simple Gaussian noise model used in the ...
[ -1, -1, -1, -1, -1, 3, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, 3, -1, 5, 3, 1 ]
[ "Syg_H-lOKB", "Bkl8CPGrjH", "BJgHvakR5S", "rkeopjLGor", "SJxtkoSRKS", "iclr_2020_BklxN0NtvB", "iclr_2020_BklxN0NtvB", "iclr_2020_BklxN0NtvB", "iclr_2020_BklxN0NtvB", "iclr_2020_BklxN0NtvB" ]
iclr_2020_rkeZNREFDr
Not All Features Are Equal: Feature Leveling Deep Neural Networks for Better Interpretation
Self-explaining models are models that reveal their decision-making parameters in an interpretable manner so that the model's reasoning process can be directly understood by human beings. General Linear Models (GLMs) are self-explaining because the model weights directly show how each feature contributes to the output value. However, deep neural networks (DNNs) are in general not self-explaining due to the non-linearity of the activation functions, complex architectures, and the obscure feature extraction and transformation process. In this work, we illustrate that existing deep architectures are hard to interpret because each hidden layer carries a mix of low-level and high-level features. As a solution, we propose a novel feature leveling architecture that isolates low-level features from high-level features on a per-layer basis to better utilize the GLM layer in the proposed architecture for interpretation. Experimental results show that our modified models are able to achieve competitive results compared to mainstream architectures on standard datasets while being more self-explainable. Our implementations and configurations are publicly available for reproduction.
reject
This paper proposes to learn self-explaining neural networks using a feature leveling idea. Unfortunately, the reviewers have raised several concerns about the paper, including insufficient novelty and weak experiments. The authors did not provide a rebuttal. We hope the authors can improve the paper in a future submission based on the comments.
train
[ "S1xlfwiujS", "SkgEmy9zsS", "rJgZ18JcFH", "Skl_RbuoKH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a categorization of inner layer weights to be linearly or non-linearly correlated with the output. The motivation on why this is important is somewhat weak in the paper. But I could see cases where this is important, if there is supporting evidence that it helps with interpretability. However, ...
[ 1, 3, 3, 3 ]
[ 4, 3, 5, 1 ]
[ "iclr_2020_rkeZNREFDr", "iclr_2020_rkeZNREFDr", "iclr_2020_rkeZNREFDr", "iclr_2020_rkeZNREFDr" ]
iclr_2020_SylWNC4FPH
Auto Completion of User Interface Layout Design Using Transformer-Based Tree Decoders
There has been increasing interest in the field in developing automatic machinery to facilitate the design process. In this paper, we focus on assisting graphical user interface (UI) layout design, a crucial task in app development. Given a partial layout, which a designer has entered, our model learns to complete the layout by predicting the remaining UI elements with correct positions and dimensions as well as the hierarchical structure. Such automation will significantly ease the effort of UI designers and developers. While we focus on interface layout prediction, our model is generally applicable to other layout prediction problems that involve tree structures and 2-dimensional placements. In particular, we design two versions of Transformer-based tree decoders, the Pointer and Recursive Transformer, and experiment with these models on a public dataset. We also propose several metrics for measuring the accuracy of tree prediction and ground these metrics in the domain of user experience. These contribute a new task and methods to deep learning research.
reject
The paper introduces an interesting application of GNNs, but the reviewers find that the contribution is too limited and the motivation is too weak.
train
[ "BJxCz0ihsr", "BJeNDds2iH", "HylnnWjhor", "rygHqoLsKB", "H1g05tby5H", "r1edRi1-9r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. We have revised the paper to address the issues you brought up.\n\n- Contribution\nWe developed our approach based on Transformer models. We agree with the reviewer that the model novelty is relatively incremental. However, the focus of the paper is to contribute a new prediction probl...
[ -1, -1, -1, 3, 1, 3 ]
[ -1, -1, -1, 3, 3, 3 ]
[ "r1edRi1-9r", "H1g05tby5H", "rygHqoLsKB", "iclr_2020_SylWNC4FPH", "iclr_2020_SylWNC4FPH", "iclr_2020_SylWNC4FPH" ]
iclr_2020_SygfNCEYDH
Weakly-supervised Knowledge Graph Alignment with Adversarial Learning
This paper studies aligning knowledge graphs from different sources or languages. Most existing methods train supervised models for the alignment, which usually require a large number of aligned knowledge triplets. However, such a large number of aligned knowledge triplets may not be available or may be expensive to obtain in many domains. Therefore, in this paper we propose to study aligning knowledge graphs in a fully unsupervised or weakly supervised fashion, i.e., without or with only a few aligned triplets. We propose an unsupervised framework to align the entity and relation embeddings of different knowledge graphs with an adversarial learning framework. Moreover, a regularization term which maximizes the mutual information between the embeddings of different knowledge graphs is used to mitigate the problem of mode collapse when learning the alignment functions. Such a framework can be further seamlessly integrated with existing supervised methods by utilizing a limited number of aligned triplets as guidance. Experimental results on multiple datasets prove the effectiveness of our proposed approach in both the unsupervised and the weakly-supervised settings.
reject
Thanks for your detailed feedback to the reviewers, which clarified many points for us. This paper potentially discusses an interesting problem, and the concern raised by Reviewer #2 was addressed in the revised paper. However, given the high competition at ICLR2020, this paper is unfortunately below the bar. We hope that the reviewers' comments are useful for improving the paper for potential future publication.
train
[ "BklJa_CtjS", "SygSFVRFsB", "SyeEjXRtjr", "rJlItzxroS", "S1lVTZvPKr", "Byl2uPu0Fr", "HJl5yDjEqS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all the reviewers for the helpful comments and suggestions!\nBased on the comments, we have made the following changes in the revised draft.\n\n1. We have added the comparison with the KBGAN paper (Cai & Wang, 2018) in the related work section, based on the suggestions from the reviewer #2.\...
[ -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, 1, 4, 3 ]
[ "iclr_2020_SygfNCEYDH", "S1lVTZvPKr", "Byl2uPu0Fr", "HJl5yDjEqS", "iclr_2020_SygfNCEYDH", "iclr_2020_SygfNCEYDH", "iclr_2020_SygfNCEYDH" ]
iclr_2020_r1xfECEKvr
Analyzing the Role of Model Uncertainty for Electronic Health Records
In medicine, both ethical and monetary costs of incorrect predictions can be significant, and the complexity of the problems often necessitates increasingly complex models. Recent work has shown that changing just the random seed is enough for otherwise well-tuned deep neural networks to vary in their individual predicted probabilities. In light of this, we investigate the role of model uncertainty methods in the medical domain. Using RNN ensembles and various Bayesian RNNs, we show that population-level metrics, such as AUC-PR, AUC-ROC, log-likelihood, and calibration error, do not capture model uncertainty. Meanwhile, the presence of significant variability in patient-specific predictions and optimal decisions motivates the need for capturing model uncertainty. Understanding the uncertainty for individual patients is an area with clear clinical impact, such as determining when a model decision is likely to be brittle. We further show that RNNs with only Bayesian embeddings can be a more efficient way to capture model uncertainty compared to ensembles, and we analyze how model uncertainty is impacted across individual input features and patient subgroups.
reject
The paper considers an important problem in medical applications of deep learning, such as the variability/stability of a model's predictions in the face of various perturbations to the model (e.g., the random seed), and evaluates different approaches to capturing model uncertainty. However, there appears to be little innovation in terms of machine-learning methodology, so ICLR might not be the best venue for this work, while other venues focused more on medical applications might perhaps be a better fit.
test
[ "BkluZR93jH", "HklNAxs3jS", "SyecET5nir", "BklmTfihKS", "Skea4CjLcB", "rklZZfX8tS", "r1lbccrrYr", "H1etuF1JtS", "BkxMoE0AdH", "S1xlnJrpdS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "public" ]
[ "Thank you for your feedback!\n\n> I am not exactly sure if ICLR is the best venue for this submission, as there is quite little innovation in modelling methodology, and the empirical analysis is domain specific.\n\nThis paper highlights the field of medicine as an example of a field for which per-example model unc...
[ -1, -1, -1, 3, 3, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, 1, -1, -1, -1, -1, -1 ]
[ "BklmTfihKS", "rklZZfX8tS", "Skea4CjLcB", "iclr_2020_r1xfECEKvr", "iclr_2020_r1xfECEKvr", "r1lbccrrYr", "H1etuF1JtS", "BkxMoE0AdH", "S1xlnJrpdS", "iclr_2020_r1xfECEKvr" ]
iclr_2020_rkxXNR4tvH
Semantic Pruning for Single Class Interpretability
Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in different computer vision tasks, but at the price of being computationally and power intensive. At the same time, only a few attempts have been made toward a deeper understanding of CNNs. In this work, we propose to use a semantic pruning technique not only for CNN optimization but also as a way to gain insight into the correlation and interference between convolutional filters. We start with a pre-trained network and prune it until it behaves as a single-class classifier for a selected class. Unlike more traditional approaches, which retrain the pruned CNN, the proposed semantic pruning does not use retraining. Our experiments showed that a) for each class there is a pruning ratio that allows removing filters with either an increase in or no loss of classification accuracy, b) pruning can reduce the interference between filters used for the classification of different classes, and c) there is a relationship between classification accuracy and the correlation between groups of pruned filters specific to different classes.
reject
The authors propose to use pruning to study/interpret learned CNNs. The reviewers believed the results were not surprising and/or had no practical relevance. Unlike in many cases, two of the reviewers acknowledged reading the rebuttals, but were unswayed.
train
[ "Syl8uSvDiH", "SJg4gzvviS", "SJxyyNwwsr", "HJlgifPDjr", "HyeEvevvoS", "SkgMdNkFKH", "rJldqmS6Kr", "HylbYWO6tS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. Since each filter is still like a black box, is it possible to visualize some result of the discovered interpretability?\n\nThank you for the suggestion. We also think this will improve the paper; however, due to shortage of time, we have not been able to include it in the article. But we definitely will includ...
[ -1, -1, -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "SkgMdNkFKH", "rJldqmS6Kr", "rJldqmS6Kr", "SJg4gzvviS", "HylbYWO6tS", "iclr_2020_rkxXNR4tvH", "iclr_2020_rkxXNR4tvH", "iclr_2020_rkxXNR4tvH" ]
iclr_2020_Hkl4EANFDH
Regularizing Trajectories to Mitigate Catastrophic Forgetting
Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective. However in most practical optimization scenarios with noisy data and/or gradients, it is possible that stochastic gradient descent can inadvertently change critical parameters. In this paper, we argue for the importance of regularizing optimization trajectories directly. We derive a new co-natural gradient update rule for continual learning whereby the new task gradients are preconditioned with the empirical Fisher information of previously learnt tasks. We show that using the co-natural gradient systematically reduces forgetting in continual learning. Moreover, it helps combat overfitting when learning a new task in a low resource scenario.
reject
The submission proposes a 'co-natural' gradient update rule to precondition the optimization trajectory using a Fisher information estimate acquired from previous experience. This results in reduced sensitivity and forgetting when new tasks are learned. The reviews were mixed on this paper, and unfortunately not all reviewers had enough expertise in the field. After reading the paper carefully, I believe that the paper has significance and relevance to the field of continual learning, however it will benefit from more careful positioning with respect to other work as well as more empirical support. The application to the low-data-regime is interesting and could be expanded and refined in a future submission. The recommendation is for rejection.
train
[ "BJx1bKaTFH", "ryxtooH2jS", "B1lrIXGooB", "HygoSO6Lor", "rye67npUoS", "HJe47LaIiH", "B1gs04aIiS", "Hyxd4ffhFr", "rkeTDi2TYB" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper amends the gradient update rule for continual learning using a natural-gradient-style formulation in order to regularise the trajectory during learning to forget previous task(s) less. They show experiments where this 'co-natural gradient' update rule improves some baselines. They also provide experimen...
[ 6, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_Hkl4EANFDH", "B1lrIXGooB", "B1gs04aIiS", "Hyxd4ffhFr", "iclr_2020_Hkl4EANFDH", "rkeTDi2TYB", "BJx1bKaTFH", "iclr_2020_Hkl4EANFDH", "iclr_2020_Hkl4EANFDH" ]
iclr_2020_SJlVVAEKwS
Adversarial Imitation Attack
Deep learning models are known to be vulnerable to adversarial examples. A practical adversarial attack should require as little knowledge of the attacked model T as possible. Current substitute attacks need pre-trained models to generate adversarial examples, and their attack success rates rely heavily on the transferability of adversarial examples. Current score-based and decision-based attacks require many queries to T. In this study, we propose a novel adversarial imitation attack. First, it produces a replica of T via a two-player game similar to generative adversarial networks (GANs). The objective of the generative model G is to generate examples that lead D to return outputs different from those of T. The objective of the discriminative model D is to output the same labels as T under the same inputs. The adversarial examples generated against D are then utilized to fool T. Compared with current substitute attacks, the imitation attack can use less training data to produce a replica of T and improve the transferability of adversarial examples. Experiments demonstrate that our imitation attack requires less training data than black-box substitute attacks, yet achieves an attack success rate close to that of a white-box attack on unseen data with no queries.
reject
This paper proposes to use a generative adversarial network to train a substitute that replicates (imitates) a learned model under attack. It then shows that the adversarial examples for the substitute can be effectively used to attack the learned model. The proposed approach leads to better attack success rates than other substitute-training approaches that require more training examples. The condition for obtaining a well-trained imitation model is that a sufficient number of queries are obtained from the target model. This paper makes valuable contributions by developing an imitation attacker. However, some key issues remain. In particular, I agree with R1 that the average number of queries per image is relatively high, even during training. In the rebuttal, the authors made the assumption that “suppose their method could make an infinite number of queries for target models”, which is unfortunately not realistic. Another point that I found confusing: at testing, I don’t see how you can use the imitation model D to generate adversarial samples (D is a discriminative model, not a generator); it should be G, right?
train
[ "Bkxx3wM3oS", "B1giDwM2sB", "ByeQWDM2jH", "BJxYga09Kr", "BklEYLy0tS", "SkeKJn8y5H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Q1: ‘a bit unclear of the model configuration of the generator’\nA1: The model architecture of the generator is simple, and we found that the model configuration of the generator is not a key factor that influences training stability. In order to allow readers to get more information about our method, we add the m...
[ -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, 4, 5, 1 ]
[ "BJxYga09Kr", "BklEYLy0tS", "SkeKJn8y5H", "iclr_2020_SJlVVAEKwS", "iclr_2020_SJlVVAEKwS", "iclr_2020_SJlVVAEKwS" ]
iclr_2020_BklLVAEKvH
Generalized Clustering by Learning to Optimize Expected Normalized Cuts
We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity and inter-cluster dissimilarity. We define a differentiable loss function equivalent to the expected normalized cuts. Unlike much of the work in unsupervised deep learning, our trained model directly outputs final cluster assignments, rather than embeddings that need further processing to be usable. Our approach generalizes to unseen datasets across a wide variety of domains, including text and images. Specifically, we achieve state-of-the-art results on popular unsupervised clustering benchmarks (e.g., MNIST, Reuters, CIFAR-10, and CIFAR-100), outperforming the strongest baselines by up to 10.9%. Our generalization results are superior (by up to 21.9%) to the recent top-performing clustering approach with the ability to generalize.
reject
This paper proposes a deep clustering method based on normalized cuts. As the general idea of deep clustering has been investigated a fair bit, the reviewers suggest a more thorough empirical validation. Myself, I would also like further justification of many of the choices within the algorithm, as well as of the effect of changing the architecture.
train
[ "Hygd6m3lKr", "Bkg-IHS5jB", "HkgmmEHcoH", "Bkg2sQrcjB", "BJeC87H5jH", "Hyx0Uc59tS", "BkgdoUmCYr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new clustering method, called CNC, which is composed of two-step procedures.\nIt first embeds an input dataset into a d-dimensional space, followed by performing relaxed normalized cut to detect clusters.\nAlthough the contribution of introducing a new relaxed formulation of the normalized cu...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_BklLVAEKvH", "iclr_2020_BklLVAEKvH", "Hygd6m3lKr", "Hyx0Uc59tS", "BkgdoUmCYr", "iclr_2020_BklLVAEKvH", "iclr_2020_BklLVAEKvH" ]
iclr_2020_ByePEC4KDS
Situating Sentence Embedders with Nearest Neighbor Overlap
As distributed approaches to natural language semantics have developed and diversified, embedders for linguistic units larger than words (e.g., sentences) have come to play an increasingly important role. To date, such embedders have been evaluated using benchmark tasks (e.g., GLUE) and linguistic probes. We propose a comparative approach, nearest neighbor overlap (N2O), that quantifies similarity between embedders in a task-agnostic manner. N2O requires only a collection of examples and is simple to understand: two embedders are more similar if, for the same set of inputs, there is greater overlap between the inputs' nearest neighbors. We use N2O to compare 21 sentence embedders and show the effects of different design choices and architectures.
reject
This paper proposes to analyze the space of known sentence-to-vector functions by comparing the ways in which they induce nearest neighbor lists in a text corpus. The primary results of the study are somewhat unclear, and the reviewers do not find the method to be novel enough—or sufficiently well motivated a priori—to warrant publication in spite of these results.
train
[ "SkxzoaQniS", "HyglRomisr", "SyeEdLChFS", "r1xgsDq6KB", "S1eHKnBw5r" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks a lot for your reply!\n\nI think a key point here is the statement: \"N2O can identify when two embedders are functionally similar and therefore not worth both exploring.\", which also summarizes the main claim/purpose of the paper.\n\nThis statement is not substantiated by the experiments in the paper, bec...
[ -1, -1, 3, 1, 1 ]
[ -1, -1, 3, 5, 3 ]
[ "HyglRomisr", "iclr_2020_ByePEC4KDS", "iclr_2020_ByePEC4KDS", "iclr_2020_ByePEC4KDS", "iclr_2020_ByePEC4KDS" ]
iclr_2020_rklw4AVtDH
Optimistic Adaptive Acceleration for Optimization
This paper considers a new variant of AMSGrad called Optimistic-AMSGrad. AMSGrad is a popular adaptive gradient based optimization algorithm that is widely used in training deep neural networks. The new variant assumes that mini-batch gradients in consecutive iterations have some underlying structure, which makes the gradients sequentially predictable. By exploiting the predictability and some ideas from Optimistic Online learning, the proposed algorithm can accelerate the convergence and also enjoys a tighter regret bound. We evaluate Optimistic-AMSGrad and AMSGrad in terms of various performance measures (i.e., training loss, testing loss, and classification accuracy on training/testing data), which demonstrate that Optimistic-AMSGrad improves AMSGrad.
reject
The paper introduces a variant of AMSGrad ("Optimistic-AMSGrad"), which integrates an estimate of the future gradient into the optimization problem. While the method is interesting, reviewers agree that novelty is on the low side. The motivation of the approach should also be clarified. The experimental section should be made stronger; in particular, reporting convincing wall-clock running time advantages is critical for validating the viability of the proposed approach.
val
[ "r1xxZUGRKB", "r1ek9rqjor", "SJeHuS5osB", "BJe9UB9iir", "Bkl-NUGTtB", "B1xqSC6atS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies an optimistic variant of AMSGrad algorithm, where an estimate of the future gradient is incorporated into the optimization problem. The main claim is that when we have good enough (distance from the ground truth is small) estimate of the unknown gradient, the proposed algorithm will enjoy lower ...
[ 3, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, 4, 4 ]
[ "iclr_2020_rklw4AVtDH", "Bkl-NUGTtB", "B1xqSC6atS", "r1xxZUGRKB", "iclr_2020_rklw4AVtDH", "iclr_2020_rklw4AVtDH" ]
iclr_2020_HJg_ECEKDr
Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data
This paper investigates the intriguing question of whether we can create learning algorithms that automatically generate training data, learning environments, and curricula in order to help AI agents rapidly learn. We show that such algorithms are possible via Generative Teaching Networks (GTNs), a general approach that is applicable to supervised, unsupervised, and reinforcement learning. GTNs are deep neural networks that generate data and/or training environments that a learner (e.g.\ a freshly initialized neural network) trains on before being tested on a target task. We then differentiate \emph{through the entire learning process} via meta-gradients to update the GTN parameters to improve performance on the target task. GTNs have the beneficial property that they can theoretically generate any type of data or training environment, making their potential impact large. This paper introduces GTNs, discusses their potential, and showcases that they can substantially accelerate learning. We also demonstrate a practical and exciting application of GTNs: accelerating the evaluation of candidate architectures for neural architecture search (NAS), which is rate-limited by such evaluations, enabling massive speed-ups in NAS. GTN-NAS improves the NAS state of the art, finding higher performing architectures when controlling for the search proposal mechanism. GTN-NAS also is competitive with the overall state of the art approaches, which achieve top performance while using orders of magnitude less computation than typical NAS methods. Overall, GTNs represent a first step toward the ambitious goal of algorithms that generate their own training data and, in doing so, open a variety of interesting new research questions and directions.
reject
Overview: This paper introduces a method to distill a large dataset into a smaller one that allows for faster training. The main application of this technique being studied is neural architecture search, which can be sped up by quickly evaluating architectures on the generated data rather than slowly evaluating them on the original data. Summary of discussion: During the discussion period, the authors appear to have updated the paper quite a bit, leading to the reviewers feeling more positive about it now than in the beginning. In particular, in the beginning, it appears to have been unclear that the distillation is merely used as a speedup trick, not to generate additional information out of thin air. The reviewers' scores left the paper below the decision boundary, but closely enough so that I read it myself. My own judgement: I like the idea, which I find very novel. However, I have to push back on the authors' claims about their good performance in NAS. This has several reasons: 1. In contrast to what is claimed by the authors, the comparison to graph hypernetworks (Zhang et al) is not fair, since the authors used a different protocol: Zhang et al sampled 800 networks and reported the performance (mean +/- std) of the 10 judged to be best by the hypernetwork. In contrast, the authors of the current paper sampled 1000 networks and reported the performance of the single one judged to be best. They repeated this procedure 5 times to get mean +/- std. The best architecture of 1000 is of course more likely to be strong than the average of the top 10 of 800. 2. The comparison to random search with weight sharing (here: 3.92% error) does not appear fair. The cited paper in Table 1 is *not* the paper introducing random search + weight sharing, but the neural architecture optimization paper. The original one reported an error of 2.85% +/- 0.08% with 4.3M params. 
That paper also has the full source code available, so the authors could have performed a true apples-to-apples comparison. 3. The authors' method requires an additional (one-time) cost for actually creating the 'fake' training data, so their runtimes should be increased by the 8h required for that. 4. The fact that the authors achieve 2.42% error doesn't mean much; that result is just based on scaling the network up to 100M params. (The network obtained by random search also achieves 2.51%.) As it stands, I cannot judge whether the authors' approach yields strong performance for NAS. In order to allow that conclusion, the authors would have to compare to another method based on the same underlying code base and experimental protocol. Also, the authors do not make code available at this time. Their method has a lot of bells and whistles, and I do not expect that I could reproduce it. They promise code, but it is unclear what this would include: the generated training data, code for training the networks, code for the meta-approach, etc? This would have been much easier to judge had the authors made the code available in anonymized fashion during the review. Because of these reasons, in terms of making progress on NAS, the paper does not quite clear the bar for me. The authors also evaluated their method in several other scenarios, including reinforcement learning. These results appear to be very promising, but largely preliminary due to lack of time in the rebuttal phase. Recommendation: The paper is very novel and the results appear very promising, but they are also somewhat preliminary. The reviewers' scores leave the paper just below the acceptance threshold and my own borderline judgement is not positive enough to overrule this. I believe that some more time, and one more iteration of reorganization and review, would allow this paper to ripen into a very strong paper. 
For a resubmission to the next venue, I would recommend to either perform an apples-to-apples comparison for NAS or reorganize and just use NAS as one of several equally-weighted possible applications. In the current form, I believe the paper is not using its full potential.
val
[ "HJxKcbi-5B", "BJlacioyqr", "HJlryzt3sr", "BJlAybFhjB", "BJljtQd3or", "SygGDA82oH", "BkeFVk4ior", "BJgfhTXDiS", "H1lnMpXPiS", "Bkl4SpQDiS", "BkgkKpmvjS", "Sye-yRQwoH", "ryl4MA7vsH", "SJgZp3Qvor", "rklUdhQDsB", "S1x3_iukqH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a meta-learning algorithm Generative Teaching Networks (GTN) to generate fake training data for models to learn more accurate models. In the inner loop, a generator produces training data and the learner takes gradient steps on this data. In the outer loop, the parameters of the generator are u...
[ 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_HJg_ECEKDr", "iclr_2020_HJg_ECEKDr", "BJljtQd3or", "SygGDA82oH", "ryl4MA7vsH", "BkgkKpmvjS", "iclr_2020_HJg_ECEKDr", "HJxKcbi-5B", "BJlacioyqr", "BJlacioyqr", "HJxKcbi-5B", "HJxKcbi-5B", "S1x3_iukqH", "rklUdhQDsB", "iclr_2020_HJg_ECEKDr", "iclr_2020_HJg_ECEKDr" ]
iclr_2020_rkltE0VKwH
Coordinated Exploration via Intrinsic Rewards for Multi-Agent Reinforcement Learning
Solving tasks with sparse rewards is one of the most important challenges in reinforcement learning. In the single-agent setting, this challenge has been addressed by introducing intrinsic rewards that motivate agents to explore unseen regions of their state spaces. Applying these techniques naively to the multi-agent setting results in agents exploring independently, without any coordination among themselves. We argue that learning in cooperative multi-agent settings can be accelerated and improved if agents coordinate with respect to what they have explored. In this paper we propose an approach for learning how to dynamically select between different types of intrinsic rewards which consider not just what an individual agent has explored, but all agents, such that the agents can coordinate their exploration and maximize extrinsic returns. Concretely, we formulate the approach as a hierarchical policy where a high-level controller selects among sets of policies trained on different types of intrinsic rewards and the low-level controllers learn the action policies of all agents under these specific rewards. We demonstrate the effectiveness of the proposed approach in a multi-agent gridworld domain with sparse rewards, and then show that our method scales up to more complex settings by evaluating on the VizDoom platform.
reject
The authors present a method that utilizes intrinsic rewards to coordinate the exploration of agents in a multi-agent reinforcement learning setting. The reviewers agreed that the proposed approach was relatively novel and an interesting research direction for multiagent RL. However, the reviewers had substantial concerns about writing clarity, the significance of the contribution of the proposed method, and the thoroughness of evaluation (particularly the number of agents used and the limited baselines). While the writing clarity and several technical points (including additional ablations) were addressed in the rebuttal, the reviewers still felt that the core contribution of the work was a bit too marginal. Thus, I recommend this paper be rejected at this time.
train
[ "H1xJLKdhKH", "S1xQNzaoor", "S1gOkfpiiB", "SkeGhZaioS", "rJeoQlwTKB", "SkgAf17b9H" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Contribution:\n\nThe paper proposes to use a set of handcrafted intrinsic rewards that depend on the novelty of an observation as perceived by the rest of the other agents. For each pair of reward and agent, they learn a policy and a value through actor critic method, and then a meta-policy choses at the beginning...
[ 6, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, 5, 4 ]
[ "iclr_2020_rkltE0VKwH", "H1xJLKdhKH", "SkgAf17b9H", "rJeoQlwTKB", "iclr_2020_rkltE0VKwH", "iclr_2020_rkltE0VKwH" ]
iclr_2020_Ske9VANKDH
An Optimization Principle Of Deep Learning?
Training deep neural networks (DNNs) has achieved great success in recent years. Modern DNN training utilizes various types of training techniques developed in different aspects, e.g., activation functions for neurons, batch normalization for hidden layers, skip connections for network architecture, and stochastic algorithms for optimization. Despite the effectiveness of these techniques, it is still mysterious how they help accelerate DNN training in practice. In this paper, we propose an optimization principle that is parameterized by γ>0 for stochastic algorithms in nonconvex and over-parameterized optimization. The principle guarantees the convergence of stochastic algorithms to a global minimum with a monotonically diminishing parameter distance to the minimizer and leads to an O(1/γK) sub-linear convergence rate, where K is the number of iterations. Through extensive experiments, we show that DNN training consistently obeys the γ-optimization principle and its theoretical implications. In particular, we observe that trainings that apply these techniques achieve accelerated convergence and obey the principle with a large γ, which is consistent with the O(1/γK) convergence rate result under the optimization principle. We think the γ-optimization principle captures and quantifies the impacts of various DNN training techniques and can be of independent interest from a theoretical perspective.
reject
The paper is rejected based on unanimous reviews.
train
[ "S1gTwOG2Yr", "SkgvijzLtB", "S1eAE0MwsH", "ByeL56GDjB", "SygoeaGPsS", "Hkg8nsMwiH", "H1gGrehCOr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a \"gamma principle\" for stochastic updates in deep neural network optimization. First, the authors propose if Eq. (2) is satisfied then convergence is guaranteed (Thm. 1). Second, they use experimental results of Alexnet and Resnet-18 on cifar10/100 to show that Eq. (2) is satisfied by SGD, S...
[ 3, 1, -1, -1, -1, -1, 1 ]
[ 3, 5, -1, -1, -1, -1, 5 ]
[ "iclr_2020_Ske9VANKDH", "iclr_2020_Ske9VANKDH", "H1gGrehCOr", "SkgvijzLtB", "S1gTwOG2Yr", "iclr_2020_Ske9VANKDH", "iclr_2020_Ske9VANKDH" ]
iclr_2020_SyxjVRVKDB
Switched linear projections and inactive state sensitivity for deep neural network interpretability
We introduce switched linear projections for expressing the activity of a neuron in a ReLU-based deep neural network in terms of a single linear projection in the input space. The method works by isolating the active subnetwork, a series of linear transformations, that completely determines the entire computation of the deep network for a given input instance. We also propose that for interpretability it is more instructive and meaningful to focus on the patterns that deactivate the neurons in the network, which are ignored by the existing methods that implicitly track only the active aspect of the network's computation. We introduce a novel interpretability method for inactive state sensitivity (Insens). Comparison against existing methods shows that Insens is more robust (in the presence of noise), more complete (in terms of patterns that affect the computation), and a very effective interpretability method for deep neural networks.
reject
This paper proposes a method to capture patterns of the so-called “off” neurons using a newly proposed metric. The idea is interesting and worth pursuing. However, the paper needs another round of revision to improve both the writing and the experiments.
val
[ "S1g_D3_3jB", "BJg9itO3iB", "Byg-akO2sH", "HyggU9LhiH", "BkxxCrz2iB", "HJlv61GhsH", "rkg75bUcir", "rJxLsgU9jr", "BJxJDtl9or", "BJlpI5CYiH", "r1xYj6hKsr", "rklEnhhtjB", "HkeK4g-KoH", "BJxUOK6OoB", "SkxlgZMOjH", "r1xb4ciIoB", "HJgXvKjIoS", "Hklt9Yo8oB", "rklJ1VSbjH", "Syl-oOHTYB"...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", ...
[ "Indeed, looking at the sanity check in Figure 7 of Insesn over different layers we do wonder about possibility of using some means of separating the decision impacting and input-reflecting information coming through Insens in the early layers. We are not able to incorporate classifier probes analysis into this ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 3, 5, 5 ]
[ "HyggU9LhiH", "Byg-akO2sH", "rJxLsgU9jr", "BkxxCrz2iB", "rkg75bUcir", "iclr_2020_SyxjVRVKDB", "r1xb4ciIoB", "rklEnhhtjB", "BJlpI5CYiH", "Hklt9Yo8oB", "HkeK4g-KoH", "BJxUOK6OoB", "BJxUOK6OoB", "SkxlgZMOjH", "iclr_2020_SyxjVRVKDB", "S1lRpmI9Kr", "rklJ1VSbjH", "Syl-oOHTYB", "iclr_20...
iclr_2020_SJeoE0VKDS
Novelty Search in representational space for sample efficient exploration
We present a new approach for efficient exploration which leverages a low-dimensional encoding of the environment learned with a combination of model-based and model-free objectives. Our approach uses intrinsic rewards that are based on a weighted distance of nearest neighbors in the low dimensional representational space to gauge novelty. We then leverage these intrinsic rewards for sample-efficient exploration with planning routines in representational space. One key element of our approach is that we perform more gradient steps in-between every environment step in order to ensure the model accuracy. We test our approach on a number of maze tasks, as well as a control problem and show that our exploration approach is more sample-efficient compared to strong baselines.
reject
The two most experienced reviewers recommended the paper be rejected. The submission lacks technical depth, which calls the significance of the contribution into question. This work would be greatly strengthened by a theoretical justification of the proposed approach. The reviewers also criticized the quality of the exposition, noting that key parts of the presentation were unclear. The experimental evaluation was not considered to be sufficiently convincing. The review comments should be able to help the authors strengthen this work.
test
[ "S1gNLkLsoS", "rJghN08aFr", "SkgMTxMYjB", "BygE-3-YsH", "HkeWLTChtr", "rkeK9jTnFH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your response.\n\n(1,2) I genuinely fail to see why the bonus would converge to 0. I think it would converge to omega -- the constraint enforced for successive embeddings. But in any case, finding k-nearest neighbours seems sensitive to the contents of the replay buffer. What is the size of the buffe...
[ -1, 1, -1, -1, 6, 3 ]
[ -1, 4, -1, -1, 1, 3 ]
[ "BygE-3-YsH", "iclr_2020_SJeoE0VKDS", "rkeK9jTnFH", "rJghN08aFr", "iclr_2020_SJeoE0VKDS", "iclr_2020_SJeoE0VKDS" ]
iclr_2020_SklnVAEFDB
BERT-AL: BERT for Arbitrarily Long Document Understanding
Pretrained language models have attracted a great deal of attention, and they take advantage of a two-stage training process: pretraining on a huge corpus and finetuning on specific tasks. Among them, BERT (Devlin et al., 2019) is a Transformer (Vaswani et al., 2017) based model and has been the state-of-the-art for many kinds of Natural Language Processing (NLP) tasks. However, BERT cannot take text longer than the maximum length as input, since the maximum length is predefined during pretraining. When we apply BERT to long text tasks, e.g., document-level text summarization: 1) Truncating inputs to the maximum sequence length will decrease performance, since the model cannot capture long dependencies and global information spanning the whole document. 2) Extending the maximum length requires re-pretraining, which will cost a great deal of time and computing resources. What's even worse is that the computational complexity will increase quadratically with the length, which will result in an unacceptable training time. To resolve these problems, we propose to apply the Transformer to model only local dependencies and to recurrently capture long dependencies by inserting a multi-channel LSTM into each layer of BERT. The proposed model is named BERT-AL (BERT for Arbitrarily Long Document Understanding), and it can accept arbitrarily long input without re-pretraining from scratch. We demonstrate BERT-AL's effectiveness on text summarization by conducting experiments on the CNN/Daily Mail dataset. Furthermore, our method can be adapted to other Transformer based models, e.g., XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019), for various NLP tasks with long text.
reject
This paper proposes a hybrid LSTM-Transformer method to use pretrained Transformers like BERT that have a fixed maximum sequence lengths on texts longer than that limit. The consensus of the reviewers is that the results aren't sufficient to justify the primary claims of the paper, and that—in addition—the missing details and ablations cast doubt on the reliability of those results. This is an interesting research direction, but substantial further experimental work would be needed to turn this into something that's ready for publication at a top venue.
train
[ "SJelPR1a9B", "SkeVjGKGqH", "Hke48Wc_KS", "S1gQytXw5r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a methodology to overcome the problem of processing long sequences with a pre-trained Transformer model, which suffers from high computational costs due to the complexity being quadratic in the length of the sequence. The authors also point out that BERT needs to be retrained from scratch if seq...
[ 3, 3, 3, 6 ]
[ 3, 5, 1, 1 ]
[ "iclr_2020_SklnVAEFDB", "iclr_2020_SklnVAEFDB", "iclr_2020_SklnVAEFDB", "iclr_2020_SklnVAEFDB" ]
iclr_2020_HJg6VREFDH
iWGAN: an Autoencoder WGAN for Inference
Generative Adversarial Networks (GANs) have been impactful on many problems and applications but suffer from unstable training. Wasserstein GAN (WGAN) leverages the Wasserstein distance to avoid the caveats of the min-max two-player training of GANs but has other defects, such as mode collapse and the lack of a metric to detect convergence. We introduce a novel inference WGAN (iWGAN) model, which is a principled framework to fuse auto-encoders and WGANs. The iWGAN jointly learns an encoder network and a generative network using an iterative primal-dual optimization process. We establish the generalization error bound of iWGANs. We further provide a rigorous probabilistic interpretation of our model under the framework of maximum likelihood estimation. The iWGAN, with a clear stopping criterion, has many advantages over other autoencoder GANs. The empirical experiments show that our model greatly mitigates the symptom of mode collapse, speeds up convergence, and is able to provide a quality check for each individual sample. We illustrate the ability of iWGANs by obtaining competitive and stable performance relative to the state of the art on benchmark datasets.
reject
This paper proposes a new way to stabilise GAN training. The reviews were very mixed but, taken together, fall below the acceptance threshold. Rejection is recommended, with strong encouragement to improve the paper for the next conference. This is potentially an important contribution.
test
[ "S1xig5LvoB", "B1eELtUPiB", "SJeDmv8vsB", "Ske99gOnKH", "Sylubr6TtH", "HkgZBiELqr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the valuable comments and encouragement.\n\n--------------------------------\nComment 1: This paper presents an inference WGAN (iWGAN) which fully considers to reduce the difference between distributions of G(X) and Z, G(Z) and X. In this algorithm, the authors show a rigorous probabilist...
[ -1, -1, -1, 8, 3, 3 ]
[ -1, -1, -1, 3, 4, 3 ]
[ "Ske99gOnKH", "Sylubr6TtH", "HkgZBiELqr", "iclr_2020_HJg6VREFDH", "iclr_2020_HJg6VREFDH", "iclr_2020_HJg6VREFDH" ]
iclr_2020_r1l0VCNKwB
LOSSLESS SINGLE IMAGE SUPER RESOLUTION FROM LOW-QUALITY JPG IMAGES
Super Resolution (SR) is a fundamental and important low-level computer vision (CV) task. Different from traditional SR models, this study concentrates on a specific but realistic SR issue: how can we obtain satisfactory SR results from compressed JPG (C-JPG) images, which are widespread on the Internet? In general, C-JPG saves storage space while keeping considerable visual quality. However, further image processing operations, e.g., SR, will amplify the inner compression artifacts and result in unacceptable outputs. To address this problem, we propose a novel SR structure with two specifically designed components, as well as a cycle loss. In short, this paper makes three main contributions. First, our research can generate high-quality SR images for prevalent C-JPG images. Second, we propose a functional sub-model to recover information for C-JPG images, instead of taking the noise-elimination perspective of traditional SR approaches. Third, we further integrate a cycle loss into the SR solver to build a hybrid loss function for better SR generation. Experiments show that our approach achieves outstanding performance among state-of-the-art methods.
reject
Main summary: a single-image super-resolution network that can generate high-resolution images from the corresponding C-JPG images. Discussions: Reviewer 3 has a few issues, including the claim that the method is lossless, and wants more information about the JPG recovering step. Reviewer 1 (not knowledgeable): the paper is well written and the reviewer gives very few cons. Reviewer 2: main concerns are with respect to novelty and technical soundness. Recommendation: the two more knowledgeable reviewers mark this as Reject; I agree.
train
[ "rJxMjzFcoB", "Hkgt6CCpKH", "rylH1m2AKr", "Hyg0GawAqB" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your kind remarks and suggestions. In the paper, we focus on a novel SR issue deriving from the practical SR application. None of similar work has been done on condition that there are existing JPG compression removal model and SR generating model. We have tried all existing SR models and can’t obtai...
[ -1, 3, 6, 1 ]
[ -1, 5, 1, 3 ]
[ "Hyg0GawAqB", "iclr_2020_r1l0VCNKwB", "iclr_2020_r1l0VCNKwB", "iclr_2020_r1l0VCNKwB" ]
iclr_2020_Skl1HCNKDr
Learning Generative Models using Denoising Density Estimators
Learning generative probabilistic models that can estimate the continuous density given a set of samples, and that can sample from that density is one of the fundamental challenges in unsupervised machine learning. In this paper we introduce a new approach to obtain such models based on what we call denoising density estimators (DDEs). A DDE is a scalar function, parameterized by a neural network, that is efficiently trained to represent a kernel density estimator of the data. In addition, we show how to leverage DDEs to develop a novel approach to obtain generative models that sample from given densities. We prove that our algorithms to obtain both DDEs and generative models are guaranteed to converge to the correct solutions. Advantages of our approach include that we do not require specific network architectures like in normalizing flows, ODE solvers as in continuous normalizing flows, nor do we require adversarial training as in generative adversarial networks (GANs). Finally, we provide experimental results that demonstrate practical applications of our technique.
reject
The majority of reviewers suggest that this paper is not yet ready for publication. The idea presented in the paper is interesting, but there are concerns about what experiments are done, what papers are cited, and how polished the paper is. This all suggests that the paper could benefit from a bit more time to thoughtfully go through some of the criticisms, and make sure that everything reviewers suggest is covered.
train
[ "H1x2OqunjH", "SJlAtI4zoS", "rkxBZVf-jH", "rkefjLGboB", "Hyxdozf-or", "HJxvuxujtr", "H1lgAj6y9B", "ryg_dkuQqB", "H1guxE9S5S", "SyxUUFYrcr", "r1xxKWCVcS", "HkguejX79B", "H1lRnVraFB", "SyxcRrbAtS" ]
[ "author", "public", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "public", "author" ]
[ "Dear reviewers, thank you again for your insightful feedback. We have uploaded a revised version of our paper that includes the following major changes:\n\n - We added a quantitative comparison with GAN models and Score-Matching using the Stacked-MNIST dataset, which consists of 10^3 classes of triple-digit images...
[ -1, -1, -1, -1, -1, 1, 3, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 5, 5, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_Skl1HCNKDr", "HJxvuxujtr", "H1lgAj6y9B", "HJxvuxujtr", "ryg_dkuQqB", "iclr_2020_Skl1HCNKDr", "iclr_2020_Skl1HCNKDr", "iclr_2020_Skl1HCNKDr", "SyxUUFYrcr", "r1xxKWCVcS", "HkguejX79B", "iclr_2020_Skl1HCNKDr", "iclr_2020_Skl1HCNKDr", "H1lRnVraFB" ]
iclr_2020_BkxgrAVFwH
Wasserstein-Bounded Generative Adversarial Networks
In the field of Generative Adversarial Networks (GANs), how to design a stable training strategy remains an open problem. Wasserstein GANs have largely promoted the stability over the original GANs by introducing Wasserstein distance, but still remain unstable and are prone to a variety of failure modes. In this paper, we present a general framework named Wasserstein-Bounded GAN (WBGAN), which improves a large family of WGAN-based approaches by simply adding an upper-bound constraint to the Wasserstein term. Furthermore, we show that WBGAN can reasonably measure the difference of distributions which almost have no intersection. Experiments demonstrate that WBGAN can stabilize as well as accelerate convergence in the training processes of a series of WGAN-based variants.
reject
The paper presents a framework named Wasserstein-bounded GANs which generalizes WGAN. The paper shows that WBGAN can improve stability. The reviewers raised several questions about the method and the experiments, but these were not addressed. I encourage the authors to revise the draft and resubmit to a different venue.
train
[ "B1gjePeUFB", "Hkgl4zLaKH", "rye7_6u1cS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new way of stabilizing Wasserstein GANs by using Sinkhorn distance to upper-bound the objective of WGAN's critic's loss during the training. GAN stabilization is a well-motivated problem and limiting the dramatic changes of discriminator loss clearly helps achieving this goal. Experiments show...
[ 1, 3, 6 ]
[ 5, 4, 3 ]
[ "iclr_2020_BkxgrAVFwH", "iclr_2020_BkxgrAVFwH", "iclr_2020_BkxgrAVFwH" ]
iclr_2020_BkgZSCEtvr
Continuous Graph Flow
In this paper, we propose Continuous Graph Flow, a generative continuous flow based method that aims to model complex distributions of graph-structured data. Once learned, the model can be applied to an arbitrary graph, defining a probability density over the random variables represented by the graph. It is formulated as an ordinary differential equation system with shared and reusable functions that operate over the graphs. This leads to a new type of neural graph message passing scheme that performs continuous message passing over time. This class of models offers several advantages: a flexible representation that can generalize to variable data dimensions; ability to model dependencies in complex data distributions; reversible and memory-efficient; and exact and efficient computation of the likelihood of the data. We demonstrate the effectiveness of our model on a diverse set of generation tasks across different domains: graph generation, image puzzle generation, and layout generation from scene graphs. Our proposed model achieves significantly better performance compared to state-of-the-art models.
reject
The novelty of the proposed model is low, and the experimental results are weak.
train
[ "ryeE5v8joS", "H1xDXdR7sS", "H1xMx907jH", "SylErbrzoS", "ByeLVtNWiS", "B1xZgPNbsH", "H1luoS4bsS", "BkgReQ9aKB", "SkgZi5y0Yr", "Byxc74-0Yr", "Hkgp-7dstr", "rylhIt7wKS" ]
[ "author", "author", "author", "public", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We would like to thank Reviewer2 for the constructive comments regarding the experiments.\n\n*Discussion about CGF and GNF. \nGNF and CGF are fundamentally two different flow based models for graph-structured data. Both these models have their own advantages as well as challenges and it is valuable that these co-e...
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, -1, -1 ]
[ "Byxc74-0Yr", "BkgReQ9aKB", "SkgZi5y0Yr", "B1xZgPNbsH", "iclr_2020_BkgZSCEtvr", "iclr_2020_BkgZSCEtvr", "Hkgp-7dstr", "iclr_2020_BkgZSCEtvr", "iclr_2020_BkgZSCEtvr", "iclr_2020_BkgZSCEtvr", "rylhIt7wKS", "iclr_2020_BkgZSCEtvr" ]
iclr_2020_SyezSCNYPB
Disentangled GANs for Controllable Generation of High-Resolution Images
Generative adversarial networks (GANs) have achieved great success at generating realistic samples. However, achieving disentangled and controllable generation still remains challenging for GANs, especially in the high-resolution image domain. Motivated by this, we introduce AC-StyleGAN, a combination of AC-GAN and StyleGAN, for demonstrating that the controllable generation of high-resolution images is possible with sufficient supervision. More importantly, only using 5% of the labelled data significantly improves the disentanglement quality. Inspired by the observed separation of fine and coarse styles in StyleGAN, we then extend AC-StyleGAN to a new image-to-image model called FC-StyleGAN for semantic manipulation of fine-grained factors in a high-resolution image. In experiments, we show that FC-StyleGAN performs well in only controlling fine-grained factors, with the use of instance normalization, and also demonstrate its good generalization ability to unseen images. Finally, we create two new datasets -- Falcor3D and Isaac3D with higher resolution, more photorealism, and richer variation, as compared to existing disentanglement datasets.
reject
The paper presents a model combining AC-GAN and StyleGAN for semi-supervised learning of disentangled generative adversarial networks. It also proposes new datasets of 3D images as benchmarks. The main claim is that the proposed model can achieve strong disentanglement by using 1-5% of the annotations on the factors of variation. The technical contribution is moderate, and the architecture itself is not highly novel. While the proposed method seems to work for controlled/synthetic datasets, the overall technical contribution seems incremental, and it's unclear whether it can perform well on larger-scale, real datasets. The experimental results on CelebA don't look convincing enough.
train
[ "BJxgb6u2jS", "HyeJ7i_hsH", "rJeWSDD5jr", "H1l5PLw5ir", "rkxJEUD9ir", "B1ldFEHFdH", "B1gjLKW0FB", "HJxsmxLE9S" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for very useful comments and suggestions. Below are majors changes in the revised paper according reviewers’ suggestions:\n\n- We want to emphasize that the motivation of this work is to investigate 1) how a *generic* disentanglement learning model behaves in the *high-resolution* image ...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 5, 5, 5 ]
[ "iclr_2020_SyezSCNYPB", "HJxsmxLE9S", "B1ldFEHFdH", "rkxJEUD9ir", "B1gjLKW0FB", "iclr_2020_SyezSCNYPB", "iclr_2020_SyezSCNYPB", "iclr_2020_SyezSCNYPB" ]
iclr_2020_BygfrANKvB
Learning to Make Generalizable and Diverse Predictions for Retrosynthesis
We propose a new model for making generalizable and diverse retrosynthetic reaction predictions. Given a target compound, the task is to predict the likely chemical reactants to produce the target. This generative task can be framed as a sequence-to-sequence problem by using the SMILES representations of the molecules. Building on top of the popular Transformer architecture, we propose two novel pre-training methods that construct relevant auxiliary tasks (plausible reactions) for our problem. Furthermore, we incorporate a discrete latent variable model into the architecture to encourage the model to produce a diverse set of alternative predictions. On the 50k subset of reaction examples from the United States patent literature (USPTO-50k) benchmark dataset, our model greatly improves performance over the baseline, while also generating predictions that are more diverse.
reject
The authors present a new approach to improve performance for retro-synthesis using a seq2seq model, achieving significant improvement over the baseline. There are a number of lingering questions regarding the significance and impact of this work. Hence, my recommendation is to reject.
test
[ "HJlAN9CrjH", "HyeBVs0Bir", "BkxAJiArir", "r1gJ2cCSoS", "rJl66yICKB", "BkxFpvFMqS", "HkeLifuQ5B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the feedback and comments. We address each of them in turn:\n\n- \"Novelty and domain-specificity of pre-training methods\"\n\nConventional pre-training methods use a masked language modeling (MLM) objective (Devlin et., 2018), but this kind of objective does not work for our problem. Whe...
[ -1, -1, -1, -1, 6, 6, 1 ]
[ -1, -1, -1, -1, 1, 4, 5 ]
[ "HkeLifuQ5B", "rJl66yICKB", "BkxFpvFMqS", "HJlAN9CrjH", "iclr_2020_BygfrANKvB", "iclr_2020_BygfrANKvB", "iclr_2020_BygfrANKvB" ]
iclr_2020_rJg7BA4YDr
NEURAL EXECUTION ENGINES
Turing complete computation and reasoning are often regarded as necessary precursors to general intelligence. There has been a significant body of work studying neural networks that mimic general computation, but these networks fail to generalize to data distributions that are outside of their training set. We study this problem through the lens of fundamental computer science problems: sorting and graph processing. We modify the masking mechanism of a transformer in order to allow it to implement rudimentary functions with strong generalization. We call this model the Neural Execution Engine, and show that it learns, through supervision, to numerically compute the basic subroutines comprising these algorithms with near perfect accuracy. Moreover, it retains this level of accuracy while generalizing to unseen data and long sequences outside of the training distribution.
reject
This paper investigates the problem of building a program execution engine with neural networks. While the reviewers find this paper to contain interesting ideas, the technical contributions, scope of experiments, and the presentation of results would need to be significantly improved in order for this work to reach the quality bar of ICLR.
train
[ "SJe2uhfzor", "rJx4R3MfiH", "Hkx1eYGMsB", "Bklg3iffsB", "SJl4vu-hFB", "rke3dweCtB", "HygQp_7b9r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for taking the time to write a detailed review. \n\nThe main contention appears to be that the generalization ability of the NEE is based on the iterative structure that we provide. In Section 4.1, we provide a baseline against this argument, and show that providing iterative structure does not guarantee...
[ -1, -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "HygQp_7b9r", "rke3dweCtB", "iclr_2020_rJg7BA4YDr", "SJl4vu-hFB", "iclr_2020_rJg7BA4YDr", "iclr_2020_rJg7BA4YDr", "iclr_2020_rJg7BA4YDr" ]
iclr_2020_ryeEr0EFvS
A Hierarchy of Graph Neural Networks Based on Learnable Local Features
Graph neural networks (GNNs) are a powerful tool to learn representations on graphs by iteratively aggregating features from node neighbourhoods. Many variant models have been proposed, but there is limited understanding on both how to compare different architectures and how to construct GNNs systematically. Here, we propose a hierarchy of GNNs based on their aggregation regions. We derive theoretical results about the discriminative power and feature representation capabilities of each class. Then, we show how this framework can be utilized to systematically construct arbitrarily powerful GNNs. As an example, we construct a simple architecture that exceeds the expressiveness of the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theory on both synthetic and real-world benchmarks, and demonstrate our example's theoretical power translates to state-of-the-art results on node classification, graph classification, and graph regression tasks.
reject
This paper proposes a modification to GCNs that generalizes the aggregation step to multiple levels of neighbors; in theory, the new class of models has better discriminative power. The main criticism raised is the lack of sufficient evidence to distinguish this work's theoretical contribution from that of Xu et al. Two reviewers also pointed out concerns around the experimental results and suggested including more recent state-of-the-art (SOTA) results. While the authors disagree that the contributions of their work are incremental, the reviewers' concerns are a good sample of the general readership of this paper: general readers may also read this paper as incremental. We highly encourage the authors to take another cycle of edits to better distinguish their work from others before future submissions.
train
[ "HyeFsf2Qjr", "SJeYnjiQir", "HJlPGpjQsS", "H1ekheP1oB", "SJxbd5v1sr", "ryeih6LJoS", "rygpeIGAKr", "H1xXObBAFH", "r1gNFCj0FS" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you Reviewer #3 for your comments. We have substantially revised the paper to address all your comments and uploaded the paper. We have addressed all of your other comments as below (labeling corresponds with your comments):\n\n1. We agree our Theorem 3 is a natural extension of Xu's original theorem but The...
[ -1, -1, -1, -1, -1, -1, 3, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "SJxbd5v1sr", "ryeih6LJoS", "H1ekheP1oB", "H1xXObBAFH", "rygpeIGAKr", "r1gNFCj0FS", "iclr_2020_ryeEr0EFvS", "iclr_2020_ryeEr0EFvS", "iclr_2020_ryeEr0EFvS" ]
iclr_2020_B1grSREtDH
Bayesian Residual Policy Optimization: Scalable Bayesian Reinforcement Learning with Clairvoyant Experts
Informed and robust decision making in the face of uncertainty is critical for robots that perform physical tasks alongside people. We formulate this as a Bayesian Reinforcement Learning problem over latent Markov Decision Processes (MDPs). While Bayes-optimality is theoretically the gold standard, existing algorithms do not scale well to continuous state and action spaces. We propose a scalable solution that builds on the following insight: in the absence of uncertainty, each latent MDP is easier to solve. We split the challenge into two simpler components. First, we obtain an ensemble of clairvoyant experts and fuse their advice to compute a baseline policy. Second, we train a Bayesian residual policy to improve upon the ensemble's recommendation and learn to reduce uncertainty. Our algorithm, Bayesian Residual Policy Optimization (BRPO), imports the scalability of policy gradient methods as well as the initialization from prior models. BRPO significantly improves the ensemble of experts and drastically outperforms existing adaptive RL methods.
reject
This paper constitutes interesting progress on an important topic; the reviewers identify certain improvements and directions for future work (see in particular the updates from AnonReviewer1), and I urge the authors to continue to develop refinements and extensions.
train
[ "BylbDBJaYH", "BJl0Yoo2jr", "S1xDuso3or", "B1l0Vos3iS", "r1xvIqjnjH", "BygdF6XLqH", "HJxPLs_qcr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers Bayesian Reinforcement Learning problem over latent Markov Decision Processes (MDPs). The authors consider making decisions with experts, where each expert performs well under some latent MDPs. An ensemble of experts is constructed, and then a Bayesian residual policy is learned to balance exp...
[ 3, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_B1grSREtDH", "iclr_2020_B1grSREtDH", "BygdF6XLqH", "HJxPLs_qcr", "BylbDBJaYH", "iclr_2020_B1grSREtDH", "iclr_2020_B1grSREtDH" ]
iclr_2020_HylrB04YwH
Overparameterized Neural Networks Can Implement Associative Memory
Identifying computational mechanisms for memorization and retrieval is a long-standing problem at the intersection of machine learning and neuroscience. In this work, we demonstrate empirically that overparameterized deep neural networks trained using standard optimization methods provide a mechanism for memorization and retrieval of real-valued data. In particular, we show that overparameterized autoencoders store training examples as attractors, and thus, can be viewed as implementations of associative memory with the retrieval mechanism given by iterating the map. We study this phenomenon under a variety of common architectures and optimization methods and construct a network that can recall 500 real-valued images without any apparent spurious attractor states. Lastly, we demonstrate how the same mechanism allows encoding sequences, including movies and audio, instead of individual examples. Interestingly, this appears to provide an even more efficient mechanism for storage and retrieval than autoencoding single instances.
reject
The paper shows that overparameterized autoencoders can be trained to memorize a small number of training samples, which can be retrieved via fixed point iteration. After rounds of discussion with the authors, the reviewers agree that the idea is interesting and overall quality of writing and experiments is reasonable, but they were skeptical regarding the significance of the finding and impact to the field and thus encourage studying the phenomenon further and resubmitting in a future conference. I thus recommend rejecting this submission for now.
train
[ "BkxmAKuhiS", "SyeX7xY3or", "SJlEsh_hjr", "HkgugfG3iB", "B1lZwfXqoB", "S1gMyg0tiH", "HJlCRukzoB", "HJegoO1fir", "HJgqVm1for", "rJxPhC1FYr", "HJlgnPFiKS", "rJx5RNjiqS" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would appreciate if the reviewer could actually read our paper. We cite Zhang et al, 2019 in the first paragraph of the introduction. We do not cite our own paper, which precedes Zhang et al, in order to comply with the double blind policy. \n\nWe have done an extensive investigation of these basins of attracti...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 1 ]
[ "HkgugfG3iB", "BkxmAKuhiS", "HkgugfG3iB", "HJlCRukzoB", "S1gMyg0tiH", "HJegoO1fir", "rJxPhC1FYr", "HJlgnPFiKS", "rJx5RNjiqS", "iclr_2020_HylrB04YwH", "iclr_2020_HylrB04YwH", "iclr_2020_HylrB04YwH" ]
iclr_2020_HJlISCEKvB
Improving Multi-Manifold GANs with a Learned Noise Prior
Generative adversarial networks (GANs) learn to map samples from a noise distribution to a chosen data distribution. Recent work has demonstrated that GANs are consequently sensitive to, and limited by, the shape of the noise distribution. For example, a single generator struggles to map continuous noise (e.g. a uniform distribution) to discontinuous output (e.g. separate Gaussians) or complex output (e.g. intersecting parabolas). We address this problem by learning to generate from multiple models such that the generator's output is actually the combination of several distinct networks. We contribute a novel formulation of multi-generator models where we learn a prior over the generators conditioned on the noise, parameterized by a neural network. Thus, this network not only learns the optimal rate to sample from each generator but also optimally shapes the noise received by each generator. The resulting Noise Prior GAN (NPGAN) achieves expressivity and flexibility that surpasses both single generator models and previous multi-generator models.
reject
This paper introduces a modified GAN architecture that looks a lot like a mixture of experts, to address the problem of learning multiple disconnected manifolds. They show this method helps on 2D toy experiments, and artificial tasks where different datasets are combined, but not on CIFAR. They also introduced a new variant of FID that they claim is more sensitive to the improvements made by their model. R2 didn't seem to think too hard about the paper, and R3 seemed a bit dismissive. Overall the idea seems sensible but the particulars of this approach aren't all that well-motivated in my opinion, especially since the cost of the generator is increased. Why not just use a mixture of Gaussians in the original untransformed space? I also found the toy experiments unconvincing, particularly the claim that a standard GAN couldn't learn a mixture of 3 Gaussians. Learning a mixture of 8 Gaussians was one of the results in the unrolled GAN paper, for instance. The results on the mixed datasets experiments seem encouraging, but I'm afraid that proposing a new GAN architecture in 2019 requires even more baselines than the authors compared against, and the fact that the task was artificially constructed undercuts its importance.
train
[ "BJx2oY5toB", "HJeHGK9KoH", "r1lOFOqtoS", "Hyx9BI-F_B", "BJlAxDuqdr", "rkgCn9KRtS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Reviewer#3, we thank you for your time and effort in reading our manuscript!\n\nWe respectfully disagree with this reviewer and believe that she/he misunderstood our goals. Succinctly stated our motivating problem is as follows: Since a generator learns a smooth function of its input (which is normally a smooth is...
[ -1, -1, -1, 3, 8, 6 ]
[ -1, -1, -1, 5, 4, 5 ]
[ "Hyx9BI-F_B", "BJlAxDuqdr", "rkgCn9KRtS", "iclr_2020_HJlISCEKvB", "iclr_2020_HJlISCEKvB", "iclr_2020_HJlISCEKvB" ]
iclr_2020_H1lDSCEYPH
Beyond GANs: Transforming without a Target Distribution
While generative neural networks can learn to transform a specific input dataset into a specific target dataset, they require having just such a paired set of input/output datasets. For instance, to fool the discriminator, a generative adversarial network (GAN) exclusively trained to transform images of black-haired *men* to blond-haired *men* would need to change gender-related characteristics as well as hair color when given images of black-haired *women* as input. This is problematic, as often it is possible to obtain *a* pair of (source, target) distributions but then have a second source distribution where the target distribution is unknown. The computational challenge is that generative models are good at generation within the manifold of the data that they are trained on. However, generating new samples outside of the manifold or extrapolating "out-of-sample" is a much harder problem that has been less well studied. To address this, we introduce a technique called *neuron editing* that learns how neurons encode an edit for a particular transformation in a latent space. We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons. By performing the transformation in a trained latent space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neurons' activations. Our technique is general and works on a wide variety of data domains and applications. We first demonstrate it on image transformations and then move to our two main biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.
reject
This paper presents a new generative modeling approach to transform between data domains via a neuron editing technique. The authors address the scenario of source to target domain translation that can be applied to a new source domain. While the reviewers acknowledged that the idea of neuron editing is interesting, they have raised several concerns that were viewed by the AC as critical issues: (1) given the progress that has been made in the field, an empirical comparison with SOTA GAN models is required to assess the benefits/competitiveness of the proposed approach -- see R1’s comments, also [StarGAN by Choi et al, CVPR 2018], (2) the literature review is incomplete and requires a major revision -- see R1’s and R3’s suggestions, also [CYCADA by Hoffman et al, ICML 2018], (3) presentation clarity -- see R1’s and R2’s comments. The AC suggests that, in its current state, the manuscript is not ready for publication. We hope the detailed reviews are useful for improving and revising the paper.
train
[ "rkeKYn9toS", "HJgBm29FjH", "B1lr5ictoS", "HJgLTQdTYS", "SylMq1hk9r", "Bkxm9SqlqH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Reviewer#3, thank you for your recommendation of accepting our paper, and thank you for the time and energy spent carefully reading it! We are glad that you found our writing clear and a valuable contribution to the literature.\n\nWe also appreciate being pointed to these relevant works. While our technique is dif...
[ -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, 1, 3, 3 ]
[ "HJgLTQdTYS", "SylMq1hk9r", "Bkxm9SqlqH", "iclr_2020_H1lDSCEYPH", "iclr_2020_H1lDSCEYPH", "iclr_2020_H1lDSCEYPH" ]
iclr_2020_BylDrRNKvH
Understanding Attention Mechanisms
Attention mechanisms have advanced the state of the art in several machine learning tasks. Despite significant empirical gains, there is a lack of theoretical analyses on understanding their effectiveness. In this paper, we address this problem by studying the landscape of population and empirical loss functions of attention-based neural networks. Our results show that, under mild assumptions, every local minimum of a two-layer global attention model has low prediction error, and attention models require lower sample complexity than models not employing attention. We then extend our analyses to the popular self-attention model, proving that they deliver consistent predictions with a more expressive class of functions. Additionally, our theoretical results provide several guidelines for designing attention mechanisms. Our findings are validated with satisfactory experimental results on MNIST and IMDB reviews dataset.
reject
This paper aims to theoretically understand the benefit of attention mechanisms. The reviewers agreed that a better understanding of attention mechanisms is an important direction. However, the paper studies a weaker form of attention which does not correspond well to the attention models used in the literature. The paper should better motivate why the theoretical results for this restricted model would carry over to more realistic mechanisms.
train
[ "SyxWAKbRYr", "H1evZoi3jH", "BkeWqqktiB", "rJeqVcJYoB", "rJePWokKjr", "BygCmj1Yir", "HygmoapJ9H", "rkxwfYlG9r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies attention and self-attention networks from the theoretical perspective, giving the first (as far as this reviewer knows) results proving that attention networks can generalize better than non-attention baselines. This has been observed empirically before and it is very good to start the analysis...
[ 6, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_BylDrRNKvH", "BkeWqqktiB", "HygmoapJ9H", "iclr_2020_BylDrRNKvH", "SyxWAKbRYr", "rkxwfYlG9r", "iclr_2020_BylDrRNKvH", "iclr_2020_BylDrRNKvH" ]
iclr_2020_S1g_S0VYvr
Learning to Combat Compounding-Error in Model-Based Reinforcement Learning
Despite its potential to improve sample complexity versus model-free approaches, model-based reinforcement learning can fail catastrophically if the model is inaccurate. An algorithm should ideally be able to trust an imperfect model over a reasonably long planning horizon, and only rely on model-free updates when the model errors get infeasibly large. In this paper, we investigate techniques for choosing the planning horizon on a state-dependent basis, where a state's planning horizon is determined by the maximum cumulative model error around that state. We demonstrate that these state-dependent model errors can be learned with Temporal Difference methods, based on a novel approach of temporally decomposing the cumulative model errors. Experimental results show that the proposed method can successfully adapt the planning horizon to account for state-dependent model accuracy, significantly improving the efficiency of policy learning compared to model-based and model-free baselines.
reject
The paper received mixed reviews: Reject (R3), Weak Accept (R2), Accept (R1). The AC has read the reviews, rebuttal, and paper. The AC is concerned about the short planning horizon, which seems like a major issue: (i) as R1 notes, most MPC algorithms use much longer horizons as they find it helps performance, and (ii) the claim that the approach can pick the planning horizon is moot if its dynamic range is small. Overall, the paper is very borderline. The idea is interesting but without addressing longer horizons, the contribution is limited. Under guidance from the PCs, the AC feels that the paper just falls below the acceptance threshold and thus cannot be accepted unfortunately. The work is definitely interesting however and should be revised for a future submission.
train
[ "ryxG7NUmcS", "SJlGiJetcH", "Hkx2eJlhor", "r1lxs1gnsH", "BJgHQA1nsS", "BygwEZxFYr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "#rebuttal responses\n \nThe authors' reply does not convince me, and I still think the paper has some problems:\n(1) I do not believe that the cumulative model-error can not be learned efficiently;\n(2) Experimental results are weak as some baselines do not converge! \n\nThus I keep my rating as reject.\n\n#review...
[ 1, 6, -1, -1, -1, 8 ]
[ 4, 3, -1, -1, -1, 5 ]
[ "iclr_2020_S1g_S0VYvr", "iclr_2020_S1g_S0VYvr", "ryxG7NUmcS", "BygwEZxFYr", "SJlGiJetcH", "iclr_2020_S1g_S0VYvr" ]
iclr_2020_H1xKBCEYDr
Black-box Adversarial Attacks with Bayesian Optimization
We focus on the problem of black-box adversarial attacks, where the aim is to generate adversarial examples using information limited to loss function evaluations of input-output pairs. We use Bayesian optimization (BO) to specifically cater to scenarios involving low query budgets to develop query efficient adversarial attacks. We alleviate the issues surrounding BO in regards to optimizing high dimensional deep learning models by effective dimension upsampling techniques. Our proposed approach achieves performance comparable to the state of the art black-box adversarial attacks albeit with a much lower average query count. In particular, in low query budget regimes, our proposed method reduces the query count up to 80% with respect to the state of the art methods.
reject
The paper proposes a Bayesian optimization approach to creating adversarial examples. The general idea has been in the air for some years, and over the last year especially there have been a number of approaches using BayesOpt for this purpose. Reviewers raised concerns about differences between this approach and related work, and practical challenges in general for using BayesOpt in this domain (regarding dimensionality, etc.). The authors provided thoughtful responses, although some of these concerns still remain. The authors are encouraged to address all comments carefully in future revisions, which are sufficiently substantial that the paper would benefit from additional review.
train
[ "rkgOQMOAtS", "SyxmXbdojH", "Syel4g2KsB", "HJxxOnUQor", "SJlShiI7sr", "Byg9f5BXjr", "r1x7sqfQjS", "rylgel1fYr", "HyljiA_RFB" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a BO-based black-box attack generation method. In general, it is very well written and easy to follow. The main contribution is to combine BO with dimension reduction, which leads to the effectiveness in generating black-box adversarial examples in the regime of limited queries. However, I sti...
[ 6, -1, -1, -1, -1, -1, -1, 3, 1 ]
[ 5, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2020_H1xKBCEYDr", "iclr_2020_H1xKBCEYDr", "HJxxOnUQor", "r1x7sqfQjS", "HyljiA_RFB", "rylgel1fYr", "rkgOQMOAtS", "iclr_2020_H1xKBCEYDr", "iclr_2020_H1xKBCEYDr" ]
iclr_2020_ByxtHCVKwB
Targeted sampling of enlarged neighborhood via Monte Carlo tree search for TSP
The travelling salesman problem (TSP) is a well-known combinatorial optimization problem with a variety of real-life applications. We tackle TSP by incorporating machine learning methodology and leveraging the variable neighborhood search strategy. More precisely, the search process is considered as a Markov decision process (MDP), where a 2-opt local search is used to search within a small neighborhood, while a Monte Carlo tree search (MCTS) method (which iterates through simulation, selection and back-propagation steps), is used to sample a number of targeted actions within an enlarged neighborhood. This new paradigm clearly distinguishes itself from the existing machine learning (ML) based paradigms for solving the TSP, which either uses an end-to-end ML model, or simply applies traditional techniques after ML for post optimization. Experiments based on two public data sets show that, our approach clearly dominates all the existing learning based TSP algorithms in terms of performance, demonstrating its high potential on the TSP. More importantly, as a general framework without complicated hand-crafted rules, it can be readily extended to many other combinatorial optimization problems.
reject
This paper contributes to the recently emerging literature about applying reinforcement learning methods to combinatorial optimization problems. The authors consider TSPs and propose a search method that interleaves greedy local search with Monte Carlo Tree Search (MCTS). This approach does not contain learned function approximation for transferring knowledge across problem instances, which is usually considered the main motivation for applying RL to comb opt problems. The reviewers state that, although the approach is a relatively straight-forward combination of two existing methods, it is in principle somewhat interesting. However, the experiments indicate a large gap to SOTA solvers for TSPs. No rebuttal was submitted. In absence of both SOTA results and methodological novelty, as assessed by the reviewers and my own reading, I recommend to reject the paper in its current form.
train
[ "S1x3aFAFuS", "r1ezQrs3tB", "SylAGeiatB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new RL-based algorithm for solving the traveling salesman problem (TSP). Its main component is the combination of OR-based 2-opt search and learning-based k-opt search. Monte Carlo tree search is employed to train the learning-based k-opt search. The experimental result suggests state-of-the...
[ 1, 1, 3 ]
[ 3, 4, 4 ]
[ "iclr_2020_ByxtHCVKwB", "iclr_2020_ByxtHCVKwB", "iclr_2020_ByxtHCVKwB" ]
iclr_2020_rJl5rRVFvH
Way Off-Policy Batch Deep Reinforcement Learning of Human Preferences in Dialog
Most deep reinforcement learning (RL) systems are not able to learn effectively from off-policy data, especially if they cannot explore online in the environment. This is a critical shortcoming for applying RL to real-world problems where collecting data is expensive, and models must be tested offline before being deployed to interact with the environment -- e.g. systems that learn from human interaction. Thus, we develop a novel class of off-policy batch RL algorithms which use KL-control to penalize divergence from a pre-trained prior model of probable actions. This KL-constraint reduces extrapolation error, enabling effective offline learning, without exploration, from a fixed batch of data. We also use dropout-based uncertainty estimates to lower bound the target Q-values as a more efficient alternative to Double Q-Learning. This Way Off-Policy (WOP) algorithm is tested on both traditional RL tasks from OpenAI Gym, and on the problem of open-domain dialog generation; a challenging reinforcement learning problem with a 20,000 dimensional action space. WOP allows for the extraction of multiple different reward functions post-hoc from collected human interaction data, and can learn effectively from all of these. We test real-world generalization by deploying dialog models live to converse with humans in an open-domain setting, and demonstrate that WOP achieves significant improvements over state-of-the-art prior methods in batch deep RL.
reject
This paper offers a possibly novel approach to regularizing policy learning to make it suitable for large-scale divergence in the underlying domain. Unfortunately all the reviewers are unanimous that the paper is not acceptable in present form. Insufficient clarity regarding the contribution relative to several references, some of which were missing from the submitted version, is perhaps the most significant issue in the view of the AC.
train
[ "SJgY9XFssr", "H1l-WEYjjS", "BJxBS4tjjr", "rkx10Mr1cH", "SkgQVvQ0tS", "SkgumSL19S" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank reviewer one for the feedback and evaluation.\n\nWe respectfully disagree with the reviewer that solely maximizing the reward could prevent the policy from diverging to generate non-realistic dialog. One of the biggest challenges of a dialog task is that rewards that fully encapsulate conversation quality...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 3, 1, 1 ]
[ "SkgQVvQ0tS", "rkx10Mr1cH", "SkgumSL19S", "iclr_2020_rJl5rRVFvH", "iclr_2020_rJl5rRVFvH", "iclr_2020_rJl5rRVFvH" ]
iclr_2020_rke2HRVYvH
Stochastic Prototype Embeddings
Supervised deep-embedding methods project inputs of a domain to a representational space in which same-class instances lie near one another and different-class instances lie far apart. We propose a probabilistic method that treats embeddings as random variables. Extending a state-of-the-art deterministic method, Prototypical Networks (Snell et al., 2017), our approach supposes the existence of a class prototype around which class instances are Gaussian distributed. The prototype posterior is a product distribution over labeled instances, and query instances are classified by marginalizing relative prototype proximity over embedding uncertainty. We describe an efficient sampler for approximate inference that allows us to train the model at roughly the same space and time cost as its deterministic sibling. Incorporating uncertainty improves performance on few-shot learning and gracefully handles label noise and out-of-distribution inputs. Compared to the state-of-the-art stochastic method, Hedged Instance Embeddings (Oh et al., 2019), we achieve superior large- and open-set classification accuracy. Our method also aligns class-discriminating features with the axes of the embedding space, yielding an interpretable, disentangled representation.
reject
The consensus of reviewers is that this paper is not acceptable in present form, and the AC concurs.
train
[ "B1xqrC5hoB", "B1gG2TG5oB", "SJlLZYfcoS", "S1evFDzciB", "HkgvVLf5sB", "S1xNTO8ptS", "Syx5X7zatB", "BJgLg5vH5H" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Moghaddam et al. [2] and more specifically, Chen et al. [1], model the distribution of same-class and different-class pairs in a probabilistic manner. However,\n * Embeddings are of input pairs, not single inputs.\n * Embeddings are deterministic, not stochastic. There is no marginalization over uncert...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 1, 3, 4 ]
[ "B1gG2TG5oB", "HkgvVLf5sB", "Syx5X7zatB", "S1xNTO8ptS", "BJgLg5vH5H", "iclr_2020_rke2HRVYvH", "iclr_2020_rke2HRVYvH", "iclr_2020_rke2HRVYvH" ]
iclr_2020_ryex8CEKPr
Knockoff-Inspired Feature Selection via Generative Models
We propose a feature selection algorithm for supervised learning inspired by the recently introduced knockoff framework for variable selection in statistical regression. While variable selection in statistics aims to distinguish between true and false predictors, feature selection in machine learning aims to reduce the dimensionality of the data while preserving the performance of the learning method. The knockoff framework has attracted significant interest due to its strong control of false discoveries while preserving predictive power. In contrast to the original approach and later variants that assume a given probabilistic model for the variables, our proposed approach relies on data-driven generative models that learn mappings from data space to a parametric space that characterizes the probability distribution of the data. Our approach requires only the availability of mappings from data space to a distribution in parametric space and from parametric space to a distribution in data space; thus, it can be integrated with multiple popular generative models from machine learning. We provide example knockoff designs using a variational autoencoder and a Gaussian process latent variable model. We also propose a knockoff score metric for a softmax classifier that accounts for the contribution of each feature and its knockoff during supervised learning. Experimental results with multiple benchmark datasets for feature selection showcase the advantages of our knockoff designs and the knockoff framework with respect to existing approaches.
reject
This manuscript proposes feature selection inspired by knockoffs, where the generative models are implemented using modern deep generative techniques. The resulting procedure is evaluated in a variety of empirical settings and shown to improve performance. The reviewers and AC agree that the problem studied is timely and interesting, as knockoffs combined with generative models have recently shown promise for inferential problems. However, the reviewers were unconvinced about the motivation of the work, and the strength of the empirical evaluation results. In the opinion of the AC, this work might be improved by focusing (both conceptually and empirically) on applications where inferential variable selection is most relevant e.g. causal settings, healthcare applications, and so on.
test
[ "rJgnuG3BsS", "HkexfMnHir", "HyeQfbhrjH", "Hye4vzrzFS", "Hke1qd36Yr", "SJgD5mIQqH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your thoughtful comments and careful reading of the manuscript. We have posted a revision to address some of your questions and provide some responses below (matching the order of the comments).\n\nWe agree with the reviewer that the statistical guarantees from the original knockoff variable framewor...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 4, 3, 1 ]
[ "Hye4vzrzFS", "Hke1qd36Yr", "SJgD5mIQqH", "iclr_2020_ryex8CEKPr", "iclr_2020_ryex8CEKPr", "iclr_2020_ryex8CEKPr" ]
iclr_2020_BklxI0VtDB
ROS-HPL: Robotic Object Search with Hierarchical Policy Learning and Intrinsic-Extrinsic Modeling
Despite significant progress in Robotic Object Search (ROS) over the recent years with deep reinforcement learning based approaches, the sparsity issue in reward setting as well as the lack of interpretability of the previous ROS approaches leave much to be desired. We present a novel policy learning approach for ROS, based on a hierarchical and interpretable modeling with intrinsic/extrinsic reward setting, to tackle these two challenges. More specifically, we train the low-level policy by deliberating between an action that achieves an immediate sub-goal and the one that is better suited for achieving the final goal. We also introduce a new evaluation metric, namely the extrinsic reward, as a harmonic measure of the object search success rate and the average steps taken. Experiments conducted with multiple settings on the House3D environment validate and show that the intelligent agent, trained with our model, can achieve a better object search performance (higher success rate with lower average steps, measured by SPL: Success weighted by inverse Path Length). In addition, we conduct studies w.r.t. the parameter that controls the weighted overall reward from intrinsic and extrinsic components. The results suggest it is critical to devise a proper trade-off strategy to perform the object search well.
reject
This paper introduces a two-level hierarchical reinforcement learning approach, applied to the problem of a robot searching for an object specified by an image. The system incorporates a human-specified subgoal space, and learns low-level policies that balance the intrinsic and extrinsic rewards. The method is tested in simulations against several baselines. The reviewer discussion highlighted strengths and weaknesses of the paper. One strength is the extensive comparisons with alternative approaches on this task. The main weakness is the paper did not adequately distinguish between which aspects of the system were generic to HRL and which aspects are particular to robot object search. The paper was not general enough to be understood as a generic HRL method. It was also ignoring much relevant background knowledge (robot mapping and navigation) if the paper is intended to be primarily about robot object search. The paper did not convince the reviewers that the proposed method was desirable for either hierarchical reinforcement learning or for robot object search. This paper is not ready for publication as the contribution was not sufficiently clear to the readers.
train
[ "SJx3B1Oucr", "B1eCqqGsjB", "SklzbdfjoB", "SJgejYGioS", "Bke-0uyAtr", "ByeFI71rcB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n------------------------------------------------------------------------------------\nRebuttal Response:\nThanks for the clarifications. Nevertheless, the rebuttal and the comments of the other reviewers did not convince me that this paper is ready for publication at ICLR and I keep my vote with weak reject. IMO...
[ 3, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, 1, 1 ]
[ "iclr_2020_BklxI0VtDB", "Bke-0uyAtr", "SJx3B1Oucr", "ByeFI71rcB", "iclr_2020_BklxI0VtDB", "iclr_2020_BklxI0VtDB" ]
iclr_2020_SJeeL04KvH
Robust Federated Learning Through Representation Matching and Adaptive Hyper-parameters
Federated learning is a distributed, privacy-aware learning scenario which trains a single model on data belonging to several clients. Each client trains a local model on its data and the local models are then aggregated by a central party. Current federated learning methods struggle in cases with heterogeneous client-side data distributions which can quickly lead to divergent local models and a collapse in performance. Careful hyper-parameter tuning is particularly important in these cases but traditional automated hyper-parameter tuning methods would require several training trials which is often impractical in a federated learning setting. We describe a two-pronged solution to the issues of robustness and hyper-parameter tuning in federated learning settings. We propose a novel representation matching scheme that reduces the divergence of local models by ensuring the feature representations in the global (aggregate) model can be derived from the locally learned representations. We also propose an online hyper-parameter tuning scheme which uses an online version of the REINFORCE algorithm to find a hyper-parameter distribution that maximizes the expected improvements in training loss. We show on several benchmarks that our two-part scheme of local representation matching and global adaptive hyper-parameters significantly improves performance and training robustness.
reject
This manuscript proposes strategies to improve both the robustness and accuracy of federated learning. Two proposals are online reinforcement learning for adaptive hyperparameter search, and local distribution matching to synchronize the learning trajectories of different local models. The reviewers and AC agree that the problem studied is timely and interesting, as it addresses known issues with federated learning. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty and clarity of the conceptual and empirical results. Taken together, the AC's opinion is that the paper may not be ready for publication.
test
[ "HJxa_iwmsB", "B1lqNsvQsr", "HkgorYvQor", "B1lkMYvXiH", "S1lsVxBjFS", "S1lYvJP6tH", "BkePCjw6KB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their comments. We submitted a revision of the paper to address the concerns raised. The most notable changes in the paper are:\n1)To address the concern over the computational overhead of our method that has been raised by several reviewers, we added a discussion of this overhead to th...
[ -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, 3, 4, 1 ]
[ "iclr_2020_SJeeL04KvH", "S1lsVxBjFS", "S1lYvJP6tH", "BkePCjw6KB", "iclr_2020_SJeeL04KvH", "iclr_2020_SJeeL04KvH", "iclr_2020_SJeeL04KvH" ]
iclr_2020_ryxW804FPH
ADAPTING PRETRAINED LANGUAGE MODELS FOR LONG DOCUMENT CLASSIFICATION
Pretrained language models (LMs) have shown excellent results in achieving human like performance on many language tasks. However, the most powerful LMs have one significant drawback: a fixed-sized input. With this constraint, these LMs are unable to utilize the full input of long documents. In this paper, we introduce a new framework to handle documents of arbitrary lengths. We investigate the addition of a recurrent mechanism to extend the input size and utilizing attention to identify the most discriminating segment of the input. We perform extensive validating experiments on patent and Arxiv datasets, both of which have long text. We demonstrate our method significantly outperforms state-of-the-art results reported in recent literature.
reject
This paper investigates ways of using pretrained transformer models like BERT for classification tasks on documents that are longer than a standard transformer can feasibly encode. This seems like a reasonable research goal, and none of the reviewers raised any concerns that seriously questioned the claims of the paper. However, neither of the more confident reviewers was convinced by the experiments in the paper (even after some private discussion) that the methods presented here represent a useful contribution. This is not an area that I (the area chair) know well, but it seems as though there aren't any easy fixes to suggest: Additional discussion of the choice of evaluation data (or new data), further ablations, and general refinement of the writing could help.
train
[ "B1lXR-pssr", "Hkg-TepsjB", "S1ekex6jjB", "HkxafCjfqB", "HkxSKurAYS", "SkxR9kKJ5H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for taking the time to review our paper. We address the issues below, and have updated the manuscript accordingly.\n\nIssue: simple methods, weak contributions\n\nWhile it is the case our primary model (ATT-LM) combines existing techniques, the intention with our method is to have a model that is ready ...
[ -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, 5, 5, 3 ]
[ "HkxSKurAYS", "SkxR9kKJ5H", "HkxafCjfqB", "iclr_2020_ryxW804FPH", "iclr_2020_ryxW804FPH", "iclr_2020_ryxW804FPH" ]
iclr_2020_HJlWIANtPH
Neural Embeddings for Nearest Neighbor Search Under Edit Distance
The edit distance between two sequences is an important metric with many applications. The drawback, however, is the high computational cost of many basic problems involving this notion, such as the nearest neighbor search. A natural approach to overcoming this issue is to embed the sequences into a vector space such that the geometric distance in the target space approximates the edit distance in the original space. However, the known edit distance embedding algorithms, such as Chakraborty et al.(2016), construct embeddings that are data-independent, i.e., do not exploit any structure of embedded sets of strings. In this paper we propose an alternative approach, which learns the embedding function according to the data distribution. Our experiments show that the new algorithm has much better empirical performance than prior data-independent methods.
reject
This paper presents an approach to improving the calculation of embeddings for nearest-neighbor search with respect to edit distance. Reading the reviews, it seems that the paper is greatly improved over its previous version, but still has significant clarity issues. Given that these issues remain even after one major revision, I would suggest that the paper not be accepted for this ICLR, but that the authors carefully revise the paper for clarity and submit to a following submission opportunity. It may help to share the paper with others who are not familiar with the research until they can read it once and understand the method well. I have quoted Reviewer 3 below in the author discussion, where there are some additional clarity issues that may help being resolved: ---------- Some specifics are clear now with their new edition. * The [relationship between] cgk' & cgk not as clear as it could be. For example the algorithms are designed for bits. So one should assume that they are applying it on the bits of the characters. But this should be clarified in the manuscript. * Also still backpropagating through f' is not clear to me. * And in the text for inference they still say: "We randomly select 100 queries and use the remainder of the dataset as the base set" which should be "the remainder excluding the training set" or "including?".
train
[ "BJe0VGqnoS", "BJeozMdnsS", "SJgPAR6hYr", "HJxlNfsFsB", "SygCENHHjH", "BkerHGrSjH", "S1gaimrBiB", "rJggF6khFS", "BJljrxLpKr", "BkxL1N7e5S", "HklJ9PRk9r" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for your response! The question you ask (about \"rounding\") is very interesting but it appears to be quite non-trivial. We will investigate it over the next few weeks. ", "I have read the other reviews and authors' responses; I also briefly looked into the updated paper. These have clarified a number ...
[ -1, -1, 6, -1, -1, -1, -1, 6, 3, -1, -1 ]
[ -1, -1, 3, -1, -1, -1, -1, 3, 4, -1, -1 ]
[ "BJeozMdnsS", "S1gaimrBiB", "iclr_2020_HJlWIANtPH", "SJgPAR6hYr", "rJggF6khFS", "BJljrxLpKr", "SJgPAR6hYr", "iclr_2020_HJlWIANtPH", "iclr_2020_HJlWIANtPH", "HklJ9PRk9r", "iclr_2020_HJlWIANtPH" ]
iclr_2020_SkeXL0NKwH
Low Rank Training of Deep Neural Networks for Emerging Memory Technology
The recent success of neural networks for solving difficult decision tasks has incentivized incorporating smart decision making "at the edge." However, this work has traditionally focused on neural network inference, rather than training, due to memory and compute limitations, especially in emerging non-volatile memory systems, where writes are energetically costly and reduce lifespan. Yet, the ability to train at the edge is becoming increasingly important as it enables applications such as real-time adaptability to device drift and environmental variation, user customization, and federated learning across devices. In this work, we address four key challenges for training on edge devices with non-volatile memory: low weight update density, weight quantization, low auxiliary memory, and online learning. We present a low-rank training scheme that addresses these four challenges while maintaining computational efficiency. We then demonstrate the technique on a representative convolutional neural network across several adaptation problems, where it out-performs standard SGD both in accuracy and in number of weight updates.
reject
The reviewers generally agreed that the novelty of the work was very limited. This is not necessarily a deal-breaker for a largely applied contribution, but for an applied paper, an evaluation of the actual application on edge devices should be present, and it is not. So if the main contribution is the application, and there is no evaluation of this application, then the paper does not really seem complete. As such, I cannot recommend it for acceptance.
val
[ "Hkeyp1RFqB", "BJlMoRsWqB", "Hyxzg_vAFS", "HJlTK8qhjr", "Hygld8c2iH", "BJx9BI5nsH", "SyeTz8c3jH", "S1eXiBc3sr", "r1liBXXTKB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a low-rank training method targeting for edge devices. The main contribution is an algorithm called Streaming Kronecker-Sum Approximation. The authors claim that the proposed method addresses four key challenges of low weight update density, weight quantization, low auxiliary memory, and online...
[ 3, 3, 3, -1, -1, -1, -1, -1, 6 ]
[ 3, 3, 1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_SkeXL0NKwH", "iclr_2020_SkeXL0NKwH", "iclr_2020_SkeXL0NKwH", "r1liBXXTKB", "Hyxzg_vAFS", "BJlMoRsWqB", "Hkeyp1RFqB", "iclr_2020_SkeXL0NKwH", "iclr_2020_SkeXL0NKwH" ]
iclr_2020_r1gNLAEFPS
Neural ODEs for Image Segmentation with Level Sets
We propose a novel approach for image segmentation that combines Neural Ordinary Differential Equations (NODEs) and the Level Set method. Our approach parametrizes the evolution of an initial contour with a NODE that implicitly learns from data a speed function describing the evolution. In addition, for cases where an initial contour is not available and to alleviate the need for careful choice or design of contour embedding functions, we propose a NODE-based method that evolves an image embedding into a dense per-pixel semantic label space. We evaluate our methods on kidney segmentation (KiTS19) and on salient object detection (PASCAL-S, ECSSD and HKU-IS). In addition to improving initial contours provided by deep learning models while using a fraction of their number of parameters, our approach achieves F scores that are higher than those of several state-of-the-art deep learning algorithms.
reject
This paper addresses the classic medical image segmentation problem by combining Neural Ordinary Differential Equations (NODEs) and the level set method. The proposed method is evaluated on kidney segmentation and salient object detection problems. Reviewer #1 provided a brief review arguing that ICLR is not the appropriate venue for this work. Reviewer #2 praises the underlying concept as interesting, while pointing out that the presentation and experiments are not ready for publication yet. Reviewer #3 raises concerns about whether the methods are presented properly. The authors did not respond to any of these concerns. Given these concerns and the overall negative rating (two weak rejects and one reject), the AC recommends rejection.
test
[ "rylKU1IVKr", "BJx6B8qpYS", "SyljiI4e5r", "HJxLOjasvH", "HJlti-yiwH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper proposes to utilize Neural ODEs (NODEs) and the Level Set Method (LSM) for the task of image segmentation. The argument is that the NODE can be used to learn the force function in an LSM and solve the contour evolution process. The authors propose two architectures and demonstrate promising performance...
[ 3, 1, 3, -1, -1 ]
[ 1, 5, 3, -1, -1 ]
[ "iclr_2020_r1gNLAEFPS", "iclr_2020_r1gNLAEFPS", "iclr_2020_r1gNLAEFPS", "HJlti-yiwH", "iclr_2020_r1gNLAEFPS" ]
iclr_2020_BJerUCEtPB
Smooth Kernels Improve Adversarial Robustness and Perceptually-Aligned Gradients
Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans are more sensitive to the lower-frequency (larger-scale) patterns we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel. We apply our regularization onto several popular training methods, demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness. Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients.
reject
The authors propose a regularizer for convolutional kernels that seeks to improve adversarial robustness of CNNs and produce more perceptually aligned gradients. While the topic studied by the paper is interesting, reviewers pointed out several deficiencies with the empirical evaluation that call into question the validity of the claims made by the authors. In particular: 1) Adversarial evaluation protocol: There are several red flags in the way the authors perform adversarial evaluation. The authors use a pre-defined adversarial attack toolbox (Foolbox) but are unable to produce successful attacks even for large perturbation radii, which suggests that the attack is not tuned properly. Further, the authors present results over the best-case performance over several attacks, which is dubious since the goal of adversarial evaluation is to reveal the worst-case performance of the model. 2) Perceptual alignment: The claim of perceptually aligned gradients also does not seem sufficiently justified given the experimental results, since the improvement over the baseline is quite marginal. Here too, the authors report failure of a standard visualization technique that has been successfully used in prior work, calling into question the validity of these results. The authors did not participate in the rebuttal phase and the reviewers maintained their scores after the initial reviews. Overall, given the significant flaws in the empirical evaluation, I recommend that the paper be rejected. I encourage the authors to rerun their experiments following the feedback from reviewers 1 and 3 and resubmit the paper with a more careful empirical evaluation.
train
[ "SkleCwjsYH", "Hygk9pgCKH", "BJgpiCsy5B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Paper summary: This paper argues that reducing the reliance of neural networks on high-frequency components of images could help robustness against adversarial examples. To attain this goal, the authors propose a new regularization scheme that encourages convolutional kernels to be smoother. The authors augment s...
[ 1, 1, 1 ]
[ 5, 5, 4 ]
[ "iclr_2020_BJerUCEtPB", "iclr_2020_BJerUCEtPB", "iclr_2020_BJerUCEtPB" ]
iclr_2020_HJeIU0VYwB
ADA+: A GENERIC FRAMEWORK WITH MORE ADAPTIVE EXPLICIT ADJUSTMENT FOR LEARNING RATE
Although adaptive algorithms have achieved significant success in training deep neural networks with faster training speed, they tend to have poor generalization performance compared to SGD with Momentum (SGDM). One of the state-of-the-art algorithms, PADAM, was proposed to close the generalization gap of adaptive methods, but it lacks an internal explanation. This work proposes a general framework, in which we use an explicit function Φ(·) as an adjustment to the actual step size, and presents a more adaptive specific form, AdaPlus (Ada+). Based on this framework, we analyze the various behaviors brought by different types of Φ(·), such as a constant function in SGDM, a linear function in Adam, a concave function in Padam, and a concave function with an offset term in AdaPlus. Empirically, we conduct experiments on classic benchmarks with both CNN and RNN architectures and achieve better performance (even better than SGDM).
reject
In this paper, the authors proposed a general framework, which uses an explicit function as an adjustment to the actual learning rate, and presented a more adaptive specific form, Ada+. Based on this framework, they analyzed the various behaviors brought by different types of the function. Empirical experiments on benchmarks demonstrate better performance than some baseline algorithms. The main concerns about this paper are: (1) lack of justification or interpretation for the proposed framework; (2) the performance of the proposed algorithm is on a par with Padam; (3) missing comparisons with some other baselines on more benchmark datasets. In addition, the authors did not submit a response. I agree with the reviewers' evaluation.
train
[ "SygSmKDatB", "S1gUbdrTFH", "H1g4Pa9zcS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes a general framework for adaptive algorithms, and presents a specific form: ADAPLUS. In the theory part, this work gives convergence analysis of ADAPLUS. For experiments, this work analyzes several algorithm's empirical performances including SGDM, ADAM, AMSGRAD, PADAM, ADAPLUS on CV and NLP task...
[ 3, 3, 1 ]
[ 3, 3, 5 ]
[ "iclr_2020_HJeIU0VYwB", "iclr_2020_HJeIU0VYwB", "iclr_2020_HJeIU0VYwB" ]
iclr_2020_rkePU0VYDr
A Perturbation Analysis of Input Transformations for Adversarial Attacks
The existence of adversarial examples, or intentional mis-predictions constructed from small changes to correctly predicted examples, is one of the most significant challenges in neural network research today. Ironically, many new defenses are based on a simple observation: the adversarial inputs themselves are not robust, and small perturbations to the attacking input often recover the desired prediction. While the intuition is somewhat clear, a detailed understanding of this phenomenon is missing from the research literature. This paper presents a comprehensive experimental analysis of when and why perturbation defenses work and potential mechanisms that could explain their effectiveness (or ineffectiveness) in different settings.
reject
This paper presents an analysis of different methods of noise injection against adversarial examples, for example using Gaussian noise. There are important issues raised by reviewers 1 & 2 about some conclusions not being well supported by the experiments and about the utility/importance of some conclusions. After a discussion among the reviewers, as of now all 3 reviewers stand by the view that substantial improvements and analysis can still be made in the paper. Thus, I'm recommending a rejection.
train
[ "B1xDnB_aKH", "HJg-Z7MpKr", "rkg0xD_hjS", "rkxEhIS2oB", "rJgEv1nujH", "HyleLr_OoB", "H1ei1Lm-jr", "ByxxO61cor", "SJl1MBCKjr", "HyxyKWpnYr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThis paper studies the noise injection as defense methods against adversarial perturbations. It presents several experiments on the relationship between clean and robust accuracy. Conclusions of this study are (1-1) several defense methods have the same underlying mechanism (noise injection) and behave similarly...
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_rkePU0VYDr", "iclr_2020_rkePU0VYDr", "HyleLr_OoB", "HyleLr_OoB", "B1xDnB_aKH", "HJg-Z7MpKr", "HyxyKWpnYr", "SJl1MBCKjr", "H1ei1Lm-jr", "iclr_2020_rkePU0VYDr" ]
iclr_2020_Syeu8CNYvS
MODELLING BIOLOGICAL ASSAYS WITH ADAPTIVE DEEP KERNEL LEARNING
Due to the significant costs of data generation, many prediction tasks within drug discovery are by nature few-shot regression (FSR) problems, including accurate modelling of biological assays. Although a number of few-shot classification and reinforcement learning methods exist for similar applications, we find relatively few FSR methods meeting the performance standards required for such tasks under real-world constraints. Inspired by deep kernel learning, we develop a novel FSR algorithm that is better suited to these settings. Our algorithm consists of learning a deep network in combination with a kernel function and a differentiable kernel algorithm. As the choice of the kernel is critical, our algorithm learns to find the appropriate one for each task during inference. It thus performs more effectively with complex task distributions, outperforming current state-of-the-art algorithms on both toy and novel, real-world benchmarks that we introduce herein. By introducing novel benchmarks derived from biological assays, we hope that the community will progress towards the development of FSR algorithms suitable for use in noisy and uncertain environments such as drug discovery.
reject
This work applies deep kernel learning to the problem of few-shot regression for modeling biological assays. To deal with sparse data on new tasks, the authors propose to adapt the learned kernel to each task. Reviews were mixed about the method and experiments; some reviewers were satisfied with the author rebuttal while others did not support acceptance during the discussion period. Some reviewers ultimately felt that the experimental results were too weak to warrant publication. On the binding task the method is comparable with simpler baselines, and some felt that the gains on the antibacterial task were unconvincing. Other reviewers felt that there remained simpler baselines to compare with, for example ablating the effect of learning the kernel versus simply hand-picking one. While the authors commented that they tried this, no details were given on the results or what exactly they tried. Based on the reviewer discussion, the work feels too preliminary in its current form to warrant publication at ICLR. However, given that there are clearly some interesting ideas proposed in this work, I recommend resubmitting with stronger experimental evidence that the method helps over baselines.
test
[ "BygUBEE3oH", "H1luXI4nsS", "S1e-gL4noB", "rJemABVhjS", "r1eOtBE3jr", "SyefwSV3sr", "HylKJH42sH", "SkeVO44noH", "rJxWx_3O5S", "Bkxkgp7qqr", "BJltL5L55H", "BJgdYgX9qr", "B1l2o9Jp5B", "rye5AH_E5H", "SygFo6h75H", "HyeAZo-htH" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "We offer a sincere thank you to all reviewers for their insightful comments and suggestions to improve our paper. We have addressed the primary concerns of each reviewer by responding directly to your comments and have updated the manuscript accordingly. We have also taken special care to clarify any confusing sec...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 8, 3, 6, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4, 4, -1, -1, -1 ]
[ "iclr_2020_Syeu8CNYvS", "rJxWx_3O5S", "BJgdYgX9qr", "BJgdYgX9qr", "Bkxkgp7qqr", "Bkxkgp7qqr", "BJltL5L55H", "B1l2o9Jp5B", "iclr_2020_Syeu8CNYvS", "iclr_2020_Syeu8CNYvS", "iclr_2020_Syeu8CNYvS", "iclr_2020_Syeu8CNYvS", "iclr_2020_Syeu8CNYvS", "SygFo6h75H", "HyeAZo-htH", "iclr_2020_Syeu8...
iclr_2020_rkguLC4tPB
Unknown-Aware Deep Neural Network
An important property of image classification systems in the real world is that they both accurately classify objects from target classes (``knowns'') and safely reject unknown objects (``unknowns'') that belong to classes not present in the training data. Unfortunately, although the strong generalization ability of existing CNNs ensures their accuracy when classifying known objects, it also causes them to often assign an unknown to a target class with high confidence. As a result, simply using low-confidence detections as a way to detect unknowns does not work well. In this work, we propose an Unknown-aware Deep Neural Network (UDN for short) to solve this challenging problem. The key idea of UDN is to enhance existing CNNs to support a product operation that models the product relationship among the features produced by convolutional layers. This way, missing a single key feature of a target class will greatly reduce the probability of assigning an object to this class. UDN uses a learned ensemble of these product operations, which allows it to balance the contradictory requirements of accurately classifying known objects and correctly rejecting unknowns. To further improve the performance of UDN at detecting unknowns, we propose an information-theoretic regularization strategy that incorporates the objective of rejecting unknowns into the learning process of UDN. We experiment on benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN, adding unknowns by injecting one dataset into another. Our results demonstrate that UDN significantly outperforms state-of-the-art methods at rejecting unknowns, with a 25-percentage-point improvement in accuracy, while still preserving the classification accuracy.
reject
This paper proposes the unknown-aware deep neural network (UDN), which can discover out-of-distribution samples for CNN classifiers. Experiments show that the proposed method has improved rejection accuracy while maintaining good classification accuracy on the test set. The three reviewers have split reviews. Reviewer #2 provides a positive review for this work, while indicating that he is not an expert in image classification. Reviewer #1 agrees that the topic is interesting, yet finds the experiments not so convincing, especially with limited and simple databases. Reviewer #3 shared a similar concern that the experiments are not sufficient. Further, R3 felt that the main idea is not well explained. The ACs concur with these major concerns and agree that the paper cannot be accepted in its current state.
train
[ "Sye25E-3FS", "SylGXQV2sB", "S1x9NBEhor", "ByeyxZN2iB", "ByxRE_5hFH", "rJlN0QoQ5S" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a neural network architecture for image classification, which can more accurately recognize the unknown class that is not presented in the training data than the prior work. The key idea is to organize the features into a binary tree and use the product of probabilities along the paths to the l...
[ 8, -1, -1, -1, 3, 3 ]
[ 1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_rkguLC4tPB", "ByxRE_5hFH", "Sye25E-3FS", "rJlN0QoQ5S", "iclr_2020_rkguLC4tPB", "iclr_2020_rkguLC4tPB" ]
iclr_2020_ryxF80NYwS
Neural Clustering Processes
Mixture models, a basic building block in countless statistical models, involve latent random variables over discrete spaces, and existing posterior inference methods can be inaccurate and/or very slow. In this work we introduce a novel deep learning architecture for efficient amortized Bayesian inference over mixture models. While previous approaches to amortized clustering assumed a fixed or maximum number of mixture components and only amortized over the continuous parameters of each mixture component, our method amortizes over the local discrete labels of all the data points, and performs inference over an unbounded number of mixture components. The latter property makes our method natural for the challenging case of nonparametric Bayesian models, where the number of mixture components grows with the dataset. Our approach exploits the exchangeability of the generative models and is based on mapping distributed, permutation-invariant representations of discrete arrangements into varying-size multinomial conditional probabilities. The resulting algorithm parallelizes easily, yields iid samples from the approximate posteriors along with a normalized probability estimate of each sample (a quantity generally unavailable using Markov Chain Monte Carlo) and can easily be applied to both conjugate and non-conjugate models, as training only requires samples from the generative model. We also present an extension of the method to models of random communities (such as infinite relational or stochastic block models). As a scientific application, we present a novel approach to neural spike sorting for high-density multielectrode arrays.
reject
This paper uses neural amortized inference for clustering processes to automatically tune the number of clusters based on the observed data. The main contribution of the paper is the design of the posterior parametrization based on the DeepSet method. The reviewers feel that the paper has limited novelty since it mainly follows from existing methodologies. Also, experiments are limited and not all comparisons are made.
val
[ "SJxZrE8BcS", "SkgTmMI8oS", "HJlx9-LIiH", "SylFkW8LsB", "Hyemu9Byqr", "H1g5dPCpFr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nIn this paper, the authors consider the neural amortized inference for clustering processes, in which the number of cluster can be automatically adapted based on the observed samples. The proposed algorithm largely follows the standard variational auto-encoder. The major contribution of the paper is the design o...
[ 3, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, 5, 3 ]
[ "iclr_2020_ryxF80NYwS", "H1g5dPCpFr", "Hyemu9Byqr", "SJxZrE8BcS", "iclr_2020_ryxF80NYwS", "iclr_2020_ryxF80NYwS" ]
iclr_2020_BkgqL0EtPH
{COMPANYNAME}11K: An Unsupervised Representation Learning Dataset for Arrhythmia Subtype Discovery
We release the largest public ECG dataset of continuous raw signals for representation learning containing over 11k patients and 2 billion labelled beats. Our goal is to enable semi-supervised ECG models to be made as well as to discover unknown subtypes of arrhythmia and anomalous ECG signal events. To this end, we propose an unsupervised representation learning task, evaluated in a semi-supervised fashion. We provide a set of baselines for different feature extractors that can be built upon. Additionally, we perform qualitative evaluations on results from PCA embeddings, where we identify some clustering of known subtypes indicating the potential for representation learning in arrhythmia sub-type discovery.
reject
This paper introduces a new ECG dataset. While I appreciate the efforts to clarify several points raised by the reviewers, I still believe this contribution to be of limited interest to the broad ICLR community. As such, I suggest this paper to be submitted to a more specialised venue.
train
[ "rJetsletir", "ryg6feKotr", "HyxavWxtir", "HygH-rostB" ]
[ "author", "official_reviewer", "author", "official_reviewer" ]
[ "It seems that you appreciate this work but decided to reject this paper due to missing details and the inclusion of an experimental section. Baseline experiments for dataset papers provide a useful benchmark for the community to compare with and guide further work on the topic. Discussions with cardiologists have ...
[ -1, 3, -1, 3 ]
[ -1, 3, -1, 3 ]
[ "HygH-rostB", "iclr_2020_BkgqL0EtPH", "ryg6feKotr", "iclr_2020_BkgqL0EtPH" ]
iclr_2020_Ske5UANYDB
Benefit of Interpolation in Nearest Neighbor Algorithms
Over-parameterized models attract much attention in the era of data science and deep learning. It is empirically observed that although these models, e.g. deep neural networks, over-fit the training data, they can still achieve small testing error, and sometimes even outperform traditional algorithms which are designed to avoid over-fitting. The major goal of this work is to sharply quantify the benefit of data interpolation in the context of the nearest neighbors (NN) algorithm. Specifically, we consider a class of interpolated weighting schemes and then carefully characterize their asymptotic performance. Our analysis reveals a U-shaped performance curve with respect to the level of data interpolation, and proves that a mild degree of data interpolation strictly improves the prediction accuracy and statistical stability over those of the (un-interpolated) optimal kNN algorithm. This theoretically justifies (predicts) the existence of the second U-shaped curve in the recently discovered double descent phenomenon. Note that our goal in this study is not to promote the use of the interpolated-NN method, but to obtain theoretical insights into data interpolation inspired by the aforementioned phenomenon.
reject
The authors show that data interpolation in the context of nearest neighbor algorithms can sometimes strictly improve performance. The paper is poorly written for an ICLR audience and the added value compared to extensive prior work in the area is not clearly demonstrated.
val
[ "SygzwYFssr", "HJxqBtYijS", "B1eGR_toiS", "rkx3udtsiS", "B1eNXtKjiH", "rJl9DU5MoH", "rklTS2CAKS", "rJeZ9nU15S" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The goal of this work is not to pursue the rate of convergence of interpolated-NN (which has been done in Belkin (2018) and Xing (2019). ) Our main theorem provides the EXACT MSE and Regret, rather than rate result up to unknown multiplicative constants. Therefore, it is not a comparable result to Belkin (2018) an...
[ -1, -1, -1, -1, -1, 6, 1, 3 ]
[ -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "rklTS2CAKS", "rJeZ9nU15S", "rJeZ9nU15S", "rJl9DU5MoH", "rklTS2CAKS", "iclr_2020_Ske5UANYDB", "iclr_2020_Ske5UANYDB", "iclr_2020_Ske5UANYDB" ]
iclr_2020_rkgiURVFDS
Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing
This paper considers label-flipping attacks, a type of data poisoning attack where an adversary relabels a small number of examples in a training set in order to degrade the performance of the resulting classifier. In this work, we propose a strategy to build classifiers that are certifiably robust against a strong variant of label-flipping, where the adversary can target each test example independently. In other words, for each test point, our classifier makes a prediction and includes a certification that its prediction would be the same had some number of training labels been changed adversarially. Our approach leverages randomized smoothing, a technique that has previously been used to guarantee test-time robustness to adversarial manipulation of the input to a classifier. Further, we obtain these certified bounds with no additional runtime cost over standard classification. On the Dogfish binary classification task from ImageNet, in the face of an adversary who is allowed to flip 10 labels to individually target each test point, the baseline undefended classifier achieves no more than 29.3% accuracy; we obtain a classifier that maintains 64.2% certified accuracy against the same adversary.
reject
The authors develop a certified defense for label-flipping attacks (where an adversary can flip labels of a small number of training set samples) based on the randomized smoothing technique developed for certified defenses to adversarial perturbations of the input. The framework applies to least-squares classifiers acting on pretrained features learned by a deep network. The authors show that the resulting framework can obtain significant improvements in certified accuracy against targeted label flipping attacks for each test example. While the paper makes some interesting contributions, the reviewers had the following shared concerns regarding the paper: 1) Reality of threat model: The threat model assumes that the adversary has access to the model and all of the training data (so as to choose which labels to flip), which is very unlikely in practice. 2) Limitation to least squares on pre-trained features: The only practical instantiation of the framework presented in the paper is on least squares classifiers acting on pre-trained features learned by a deep network. In the rebuttal phase, the authors clarified some of the more minor concerns raised by the reviewers, but the above concerns remained. Overall, I feel that this paper is borderline. If the authors extend the applicability of the framework (for example, by relaxing the restriction to pre-trained deep features) and motivate the threat model more strongly, this could be an interesting paper.
val
[ "BygjoYalsB", "HJxYTUXjoH", "BJeJqUQojB", "B1lYELQsiS", "H1xIiv2tjB", "HklRUvploB", "SyxduPagsS", "Hkl5bLTliH", "Hklgwt92Yr", "BkenaNa6tH", "SJeyIIenFB" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your comments! Below we respond to all of these, and hope that the clarifications can improve your evaluation of the paper.\n\n1. The cost of training our algorithm is simply the cost of training the pre-trained classifier. In other words, there is zero increase to training time, and in fact in some cas...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "SJeyIIenFB", "H1xIiv2tjB", "Hklgwt92Yr", "BkenaNa6tH", "BygjoYalsB", "Hklgwt92Yr", "HklRUvploB", "BkenaNa6tH", "iclr_2020_rkgiURVFDS", "iclr_2020_rkgiURVFDS", "iclr_2020_rkgiURVFDS" ]
iclr_2020_rygoURNYvS
Pre-trained Contextual Embedding of Source Code
The source code of a program not only serves as a formal description of an executable task, but it also serves to communicate developer intent in a human-readable form. To facilitate this, developers use meaningful identifier names and natural-language documentation. This makes it possible to successfully apply sequence-modeling approaches, shown to be effective in natural-language processing, to source code. A major advancement in natural-language understanding has been the use of pre-trained token embeddings; BERT and other works have further shown that pre-trained contextual embeddings can be extremely powerful and can be finetuned effectively for a variety of downstream supervised tasks. Inspired by these developments, we present the first attempt to replicate this success on source code. We curate a massive corpus of Python programs from GitHub to pre-train a BERT model, which we call Code Understanding BERT (CuBERT). We also pre-train Word2Vec embeddings on the same dataset. We create a benchmark of five classification tasks and compare finetuned CuBERT against sequence models trained with and without the Word2Vec embeddings. Our results show that CuBERT outperforms the baseline methods by a margin of 2.9-22%. We also show its superiority when finetuned with smaller datasets, and over fewer epochs.
reject
The paper presents CuBERT (Code Understanding BERT), which is a BERT-inspired pretraining/finetuning setup for source code contextual embedding. The embedding results are tested on classification tasks to demonstrate the effectiveness of CuBERT. This is an interesting application paper that extends existing models to source code analysis. The authors did a good job of motivating the applications, describing the proposed models, and discussing the experiments. The authors also agree to share all the datasets and source code so that the experimental results can be replicated and compared against by other researchers. One major concern is the lack of strong baselines; all reviewers are concerned about this issue. The paper could lead to a good publication in the future if these issues can be addressed.
val
[ "S1lNE5poor", "HyeWwisosr", "r1gz76diir", "BygpADHoiB", "rkxNHZXoir", "SJgw3J7osS", "rylnnhfsir", "HkxPK6RqsH", "S1eV3hAqoS", "HJxaoap9iH", "ryeldyfcjS", "rke73iWciB", "B1l63cWcjH", "H1lNfO-9sB", "BygAMHy5_H", "SklbqFhatB", "Hkgs8EAk5r" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "It would be great if the datasets and code can be shared so that readers can reproduce the results in the paper. ", "Thank you for the answers and clarifications. I read the new draft, and it addresses my concerns much better — I particularly appreciated addition of a more complex version of the VarMisuse task.\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "SJgw3J7osS", "r1gz76diir", "H1lNfO-9sB", "rkxNHZXoir", "S1eV3hAqoS", "rylnnhfsir", "HkxPK6RqsH", "HJxaoap9iH", "B1l63cWcjH", "rke73iWciB", "iclr_2020_rygoURNYvS", "BygAMHy5_H", "SklbqFhatB", "Hkgs8EAk5r", "iclr_2020_rygoURNYvS", "iclr_2020_rygoURNYvS", "iclr_2020_rygoURNYvS" ]
iclr_2020_ryxn8RNtvr
NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks
The problem of explaining deep learning models, and model predictions generally, has attracted intensive interest recently. Many successful approaches forgo global approximations in order to provide more faithful local interpretations of the model’s behavior. LIME develops multiple interpretable models, each approximating a large neural network on a small region of the data manifold, and SP-LIME aggregates the local models to form a global interpretation. Extending this line of research, we propose a simple yet effective method, NormLIME, for aggregating local models into global and class-specific interpretations. A human user study strongly favored the class-specific interpretations created by NormLIME over other feature importance metrics. Numerical experiments employing Keep And Retrain (KAR) based feature ablation across various baselines (Random, Gradient-based, LIME, SHAP) confirm NormLIME’s effectiveness at recognizing important features.
reject
The paper aims to extract the set of features explaining a class from a trained DNN classifier. The proposed approach relies on LIME (Ribeiro et al. 2016), modified as follows: i) around a point x, a linearized sparse approximation of the classifier is found (as in LIME); ii) for a given class, the importance of a feature aggregates the relative absolute weight of this feature across the linearized sparse approximations above; iii) the explanation is made of the top features in terms of importance. This simple modification yields visual explanations that match human perception significantly better than the SOTA competitors. The experimental setting based on human evaluation via a Mechanical Turk study is the second contribution of the approach. The feature importance measure is also assessed with a Keep and Retrain mechanism, showing that the approach selects features that are actually relevant for prediction. Incidentally, it would be good to see the sensitivity of the method to the parameter $k$ (in Eq. 1). As noted by Rev#1, NormLIME is simple (and simplicity is a strength) and it demonstrates its effectiveness on the MNIST data. However, as noted by Rev#4, it is hard to assess the significance of the approach from this dataset alone. It is understood that the Mechanical Turk-based assessment can only be used with a sufficiently simple problem. However, complementary experiments, e.g., on ImageNet, showing which pixels are retained to classify an image as a husky dog, would be much appreciated to confirm the merits and investigate the limitations of the approach.
train
[ "BJeRKr52jr", "HJgnZXqhiH", "B1lNtM53iS", "ByxujsOoFB", "Hygc2fIhtS", "rJxzAY-a9S" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "To address the comments first: Yes it is possible to apply this method to any generic explainer. We haven’t experimented, but it is similar to a weighted version of SmoothGrad except the local models are not localized to the same point. Hence the resulting interpretation is no longer local. Regarding the choice of...
[ -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, 4, 1, 3 ]
[ "ByxujsOoFB", "Hygc2fIhtS", "rJxzAY-a9S", "iclr_2020_ryxn8RNtvr", "iclr_2020_ryxn8RNtvr", "iclr_2020_ryxn8RNtvr" ]
iclr_2020_SJl28R4YPr
Graph Neural Networks for Reasoning 2-Quantified Boolean Formulas
It is valuable yet remains challenging to apply neural networks in logical reasoning tasks. Despite some successes witnessed in learning SAT (Boolean Satisfiability) solvers for propositional logic via Graph Neural Networks (GNN), there haven't been any successes in learning solvers for more complex predicate logic. In this paper, we target the QBF (Quantified Boolean Formula) satisfiability problem, whose complexity lies in between propositional logic and predicate logic, and investigate the feasibility of learning GNN-based solvers and GNN-based heuristics for the cases with a universal-existential quantifier alternation (so-called 2QBF problems). We conjecture, with empirical support, that GNNs have certain limitations in learning 2QBF solvers, primarily due to the inability to reason about a set of assignments. Then we show the potential of GNN-based heuristics in CEGAR-based solvers and explore the interesting challenges in generalizing them to larger problem instances. In summary, this paper provides a comprehensive survey of applying GNN-based embeddings to 2QBF problems and aims to offer insights into applying machine learning tools to more complicated symbolic reasoning problems.
reject
This work investigates the use of graph NNs for solving 2QBF. The authors provide empirical evidence that for this type of satisfiability decision problem, GNNs are not able to provide solutions, and claim this is due to the message-passing mechanism, which cannot support complex reasoning. Finally, the authors propose a number of heuristics that extend GNNs and show that these improve their performance. The 2QBF problem is used as a playground since, as the authors also point out, its complexity is in between that of predicate and propositional logic. This on its own is not bad, as it can be used as a minimal environment for the type of investigation the authors are interested in. That being said, I find a number of flaws in the current form of the paper (some of them pointed out by R3 as well), with the main issue being a lack of experimental rigor. Given the restricted set of problems the authors consider, I think the experiments on identifying pathologies of GNNs in this setup could have gone more in depth. Let me be specific. 1) The bad performance is attributed to message passing. However, this claim feels anecdotal at the moment, and the authors do not draw firm conclusions about it. The only evidence they provide is that performance improves with more message-passing iterations. This is a hint to dive deeper rather than a firm conclusion. For example, do we know whether the sensitivity to message passing is due to the small size of the network or to the training procedure? 2) To add to that, there is virtually no information in the paper about the specifics of the experimental setup, so the reader cannot be convinced that the negative results do not arise from a bad experimental configuration (e.g., a small network). 3) Moreover, the negative results here, as the authors point out, seem to contradict previous work, as they argue against GNNs.
Again, this would be a valuable contribution if that is indeed the case, but the paper does not provide enough evidence. In lieu of a convincing set of experiments, the paper could provide a proof (as also asked for by R3). However, with no proof and no strong empirical evidence, this result does not feel ready to be published at ICLR. Overall, I think this paper, with a bit more rigor, could be a very good submission for a later conference. However, as it stands, I cannot recommend acceptance.
train
[ "HJgGOTcDtS", "B1xdwB9TKr", "ByefKW4V5r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper first presents GNN architectures to solve 2-QBFs. They show that similar GNN architectures which work for propositional logic do not transfer to 2-QBFs, and provide some explanation for the result. Finally, they show how GNN modules can be used to speed up existing 2-QBF solvers instead. I mostly like t...
[ 6, 8, 3 ]
[ 1, 1, 4 ]
[ "iclr_2020_SJl28R4YPr", "iclr_2020_SJl28R4YPr", "iclr_2020_SJl28R4YPr" ]
iclr_2020_H1lTUCVYvH
Rethinking Curriculum Learning With Incremental Labels And Adaptive Compensation
Like humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum. While conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, this forces networks to learn from small subsets of data while introducing pre-computation overheads. In this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), which introduces a novel approach to curriculum learning. LILAC emphasizes incrementally learning labels instead of incrementally learning difficult samples. It works in two distinct phases: first, in the incremental label introduction phase, we unmask ground-truth labels in fixed increments during training, to improve the starting point from which networks learn. In the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution. We evaluate LILAC against the closest comparable methods in batch and curriculum learning and label smoothing, across three standard image benchmarks: CIFAR-10, CIFAR-100, and STL-10. We show that our method outperforms batch learning with higher mean recognition accuracy as well as lower standard deviation in performance consistently across all benchmarks. We further extend LILAC to state-of-the-art performance on CIFAR-10 using simple data augmentation while exhibiting label order invariance among other important properties.
reject
While the reviewers appreciated the ideas presented in the paper and their novelty, major concerns were raised about the experimental evaluation. Due to the serious doubts the reviewers raised about the effectiveness of the proposed approach, I do not think the paper is quite ready for publication at this time, though I would encourage the authors to revise and resubmit the work at the next opportunity.
train
[ "BJeWvQqcFr", "H1x4pfa_oH", "Hkl2Nmpujr", "BkgeLUtPsS", "rkeXjeEPsr", "SJgN1K_0FS", "rkeSQLs0KS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "1. Summary:\n\nThis paper proposes a novel direction for curriculum learning. Previous works in the area of curriculum learning focused on choosing easier samples first and harder samples later when learning the neural network models. This is problematic since we need to first compute how difficult each samples ...
[ 3, -1, -1, -1, -1, 1, 6 ]
[ 3, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_H1lTUCVYvH", "SJgN1K_0FS", "H1x4pfa_oH", "rkeSQLs0KS", "BJeWvQqcFr", "iclr_2020_H1lTUCVYvH", "iclr_2020_H1lTUCVYvH" ]
iclr_2020_BylT8RNKPH
A Base Model Selection Methodology for Efficient Fine-Tuning
While the accuracy of image classification has improved significantly with deep Convolutional Neural Networks (CNN), training a deep CNN is a time-consuming task because it requires a large amount of labeled data and takes a long time to converge even with high-performance computing resources. Fine-tuning, one of the transfer learning methods, is effective in decreasing the time and the amount of data necessary for CNN training. It is known that fine-tuning can be performed efficiently if the source and the target tasks are closely related. However, a technique to quantitatively evaluate the relatedness or transferability of trained models from their parameters has not been established. In this paper, we propose and evaluate several metrics to estimate the transferability of pre-trained CNN models for a given target task using the feature maps of the last convolutional layer. We found that some of the proposed metrics are good predictors of fine-tuned accuracy, but their effectiveness depends on the structure of the network. Therefore, we also propose to combine two metrics to obtain a generally applicable indicator. The experimental results reveal that one of the combined metrics is well correlated with fine-tuned accuracy across a variety of network structures, and our method has good potential to reduce the burden of CNN training.
reject
This paper proposes to speed up finetuning of pretrained deep image classification networks by predicting the success rate of a zoo of pre-trained networks without fully running them on the test set. The idea is that a sensible measure computed from the output layer may correlate well with the performance of the network. All reviewers consider this an important problem and a good direction for the effort. However, various concerns were raised, and all reviewers unanimously rate the paper weak reject. The major concerns include the unclear relationship between the metrics and fine-tuning performance, non-comprehensive experiments, and poor writing quality. The authors responded to the reviewers' concerns but did not resolve the major ones. The ACs concur with these concerns, and the paper cannot be accepted in its current state.
train
[ "S1x958KoiH", "r1xwvUFoiB", "r1g5hBKsoB", "SJxrFStjoH", "HkgI8AGkYr", "H1xRBbkYtB", "r1lkxZb0FH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. Why during fine-tuning, do you use SGD instead of Adam?\nReply: Since SGD is simpler than Adam, we thought that the results and findings using SGD could be more general and interpretable.\n\n2. What kind of correlation metrics do you use in Eqn. (4)? And will the correlation metric influence the effectiveness o...
[ -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, 5, 4, 3 ]
[ "HkgI8AGkYr", "H1xRBbkYtB", "r1lkxZb0FH", "iclr_2020_BylT8RNKPH", "iclr_2020_BylT8RNKPH", "iclr_2020_BylT8RNKPH", "iclr_2020_BylT8RNKPH" ]