paper_id: string (lengths 19-21)
paper_title: string (lengths 8-170)
paper_abstract: string (lengths 8-5.01k)
paper_acceptance: string (18 classes)
meta_review: string (lengths 29-10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
iclr_2020_B1gX8kBtPr
Universal Approximation with Certified Networks
Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks. However, it is currently very difficult to train a neural network that is both accurate and certifiably robust. In this work we take a step towards addressing this challenge. We prove that for every continuous function f, there exists a network n such that: (i) n approximates f arbitrarily close, and (ii) simple interval bound propagation of a region B through n yields a result that is arbitrarily close to the optimal output of f on B. Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.
accept-poster
This work shows that there exist neural networks that can be certified by interval bound propagation. It provides interesting and surprising theoretical insights, although the analysis requires the networks to be impractically large and hence does not directly yield practical advances.
train
[ "Hyl3mPnOsB", "BygpXxpDjH", "S1lbHrKwiB", "BkefWiKwir", "ByehldKDjS", "r1gcEwFvsS", "Hyg928YvoH", "SkenkkVpFB", "Hkg_xk_TFH", "HJgjoVkWcr", "SyefKVndur" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "Thank you for the questions.\n\nQ: What is the best upper bound on network size that can be given? What do you think is the best upper bound that could be achieved with this approach? You say \"This drastically reduces the number of neurons\", but it is not clear to me what the new result should be. It sounds from...
[ -1, -1, -1, -1, -1, -1, -1, 3, 8, 6, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, -1 ]
[ "BygpXxpDjH", "Hyg928YvoH", "iclr_2020_B1gX8kBtPr", "Hkg_xk_TFH", "HJgjoVkWcr", "SkenkkVpFB", "SkenkkVpFB", "iclr_2020_B1gX8kBtPr", "iclr_2020_B1gX8kBtPr", "iclr_2020_B1gX8kBtPr", "iclr_2020_B1gX8kBtPr" ]
iclr_2020_rkeIIkHKvS
Measuring and Improving the Use of Graph Information in Graph Neural Networks
Graph neural networks (GNNs) have been widely used for representation learning on graph data. However, there is limited understanding on how much performance GNNs actually gain from graph data. This paper introduces a context-surrounding GNN framework and proposes two smoothness metrics to measure the quantity and quality of information obtained from graph data. A new, improved GNN model, called CS-GNN, is then devised to improve the use of graph information based on the smoothness values of a graph. CS-GNN is shown to achieve better performance than existing methods in different types of real graphs.
accept-poster
Two reviewers are positive about this paper while the other reviewer is negative. The low-scoring reviewer did not respond during the discussion phase. I also read the paper and found it interesting. Thus an accept is recommended.
train
[ "BJxsAEBNor", "r1x6NrBEsH", "B1lwIIHVjr", "B1eu28SVsB", "ryl-NEH4jS", "SJlJQTncFH", "ryxJ4-ExqH", "H1ep9tkO9H" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments and the detailed suggestions. We try to address your main concerns below and hopefully this will make the contributions of our paper clearer.\n\nFirst, we would like to discuss your concern that our method is close to GAT. While our GNN model, i.e., CS-GNN, is also an attention-based m...
[ -1, -1, -1, -1, -1, 8, 3, 8 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "ryxJ4-ExqH", "ryxJ4-ExqH", "ryxJ4-ExqH", "SJlJQTncFH", "H1ep9tkO9H", "iclr_2020_rkeIIkHKvS", "iclr_2020_rkeIIkHKvS", "iclr_2020_rkeIIkHKvS" ]
iclr_2020_HJgLLyrYwB
State-only Imitation with Transition Dynamics Mismatch
Imitation Learning (IL) is a popular paradigm for training agents to achieve complicated goals by leveraging expert behavior, rather than dealing with the hardships of designing a correct reward function. With the environment modeled as a Markov Decision Process (MDP), most of the existing IL algorithms are contingent on the availability of expert demonstrations in the same MDP as the one in which a new imitator policy is to be learned. This is uncharacteristic of many real-life scenarios where discrepancies between the expert and the imitator MDPs are common, especially in the transition dynamics function. Furthermore, obtaining expert actions may be costly or infeasible, making the recent trend towards state-only IL (where expert demonstrations constitute only states or observations) ever so promising. Building on recent adversarial imitation approaches that are motivated by the idea of divergence minimization, we present a new state-only IL algorithm in this paper. It divides the overall optimization objective into two subproblems by introducing an indirection step and solves the subproblems iteratively. We show that our algorithm is particularly effective when there is a transition dynamics mismatch between the expert and imitator MDPs, while the baseline IL methods suffer from performance degradation. To analyze this, we construct several interesting MDPs by modifying the configuration parameters for the MuJoCo locomotion tasks from OpenAI Gym.
accept-poster
This paper addresses the setting of imitation learning from state observations only, where the system dynamics under which the demonstrations are performed differ from the target environment. The paper proposes to circumvent this dynamics shift with an algorithm whereby the target policy is trained to imitate its own past trajectories, re-ranked based on the similarity in state occupancies as judged by a WGAN critic. The reviewers found the paper to be clearly written and enjoyable. The paper improved considerably through reviewer feedback. Notably, a behavior cloning from observations (BCO) baseline was added, which was stronger than the authors expected but still helped highlight the strength of the proposed method by comparison. R1 had a particularly productive multi-round exchange, clarifying the description of previous work, clarifying the details of the proposed procedure, and strengthening the presentation of empirical evidence. This work compellingly addresses an important problem, and in its final form is a polished piece of work. I recommend acceptance.
train
[ "SJeATxLnYr", "BJebQ1o2iS", "ryeRChQ2iB", "BygOeTvssB", "HkxzZ1dioB", "BkgygtDjoH", "r1gZxPviiS", "S1lu8BPijr", "B1lITN63FS", "SJg24Pj6tS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe manuscript considers the problem of imitation learning when the system dynamics of the agent are different from the dynamics of the expert. The paper proposes Indirect Imitation Learning (I2L), which aims to perform imitation learning with respect to a trajectory buffer that contains some of the prev...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_HJgLLyrYwB", "ryeRChQ2iB", "HkxzZ1dioB", "SJeATxLnYr", "BygOeTvssB", "B1lITN63FS", "SJg24Pj6tS", "iclr_2020_HJgLLyrYwB", "iclr_2020_HJgLLyrYwB", "iclr_2020_HJgLLyrYwB" ]
iclr_2020_ByxdUySKvS
Adversarial AutoAugment
Data augmentation (DA) has been widely utilized to improve generalization in training deep neural networks. Recently, human-designed data augmentation has been gradually replaced by automatically learned augmentation policy. Through finding the best policy in well-designed search space of data augmentation, AutoAugment (Cubuk et al., 2019) can significantly improve validation accuracy on image classification tasks. However, this approach is not computationally practical for large-scale problems. In this paper, we develop an adversarial method to arrive at a computationally-affordable solution called Adversarial AutoAugment, which can simultaneously optimize target related object and augmentation policy search loss. The augmentation policy network attempts to increase the training loss of a target network through generating adversarial augmentation policies, while the target network can learn more robust features from harder examples to improve the generalization. In contrast to prior work, we reuse the computation in target network training for policy evaluation, and dispense with the retraining of the target network. Compared to AutoAugment, this leads to about 12x reduction in computing cost and 11x shortening in time overhead on ImageNet. We show experimental results of our approach on CIFAR-10/CIFAR-100, ImageNet, and demonstrate significant performance improvements over state-of-the-art. On CIFAR-10, we achieve a top-1 test error of 1.36%, which is the currently best performing single model. On ImageNet, we achieve a leading performance of top-1 accuracy 79.40% on ResNet-50 and 80.00% on ResNet-50-D without extra data.
accept-poster
This paper proposes a method to learn data augmentation policies using an adversarial loss. In contrast to AutoAugment where an augmentation policy generator is trained by RL (computationally expensive), the authors propose to train a policy generator and the target classifier simultaneously. This is done in an adversarial fashion by computing augmentation policies which increase the loss of the classifier. The authors show that this approach leads to roughly an order of magnitude improvement in computational cost over AutoAugment, while improving the test performance. The reviewers agree that the presentation is clear and that the proposed method is sound, and that there is a significant practical benefit of using such a technique. As most of the concerns were addressed in the discussion phase, I will recommend acceptance of this paper. We ask the authors to update the manuscript to address the remaining (minor) concerns.
train
[ "HJx91T8hjr", "rkg8m0BKtr", "SJgRylr2sr", "HygeSPE3jr", "r1el_7gojr", "SJepTa1jsS", "S1lbi609iS", "ryeaPUKKsH", "B1l0K2oLjS", "BkxK-ssIjr", "rJlknssIjH", "SkeDSmkQ5r", "SJxZB33S5S" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you very much. The helpful discussion improves the paper substantially. We will consider your suggestion.", "This paper proposes a technique called Adversarial AutoAugment which dynamically learns good data augmentation policies during training. An adversarial approach is used: a target network tries to ac...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "SJgRylr2sr", "iclr_2020_ByxdUySKvS", "HygeSPE3jr", "r1el_7gojr", "S1lbi609iS", "BkxK-ssIjr", "ryeaPUKKsH", "B1l0K2oLjS", "rkg8m0BKtr", "SJxZB33S5S", "SkeDSmkQ5r", "iclr_2020_ByxdUySKvS", "iclr_2020_ByxdUySKvS" ]
iclr_2020_BJgd81SYwr
Meta Dropout: Learning to Perturb Latent Features for Generalization
A machine learning model that generalizes well should obtain low errors on unseen test examples. Thus, if we know how to optimally perturb training examples to account for test examples, we may achieve better generalization performance. However, obtaining such perturbation is not possible in standard machine learning frameworks as the distribution of the test data is unknown. To tackle this challenge, we propose a novel regularization method, meta-dropout, which learns to perturb the latent features of training examples for generalization in a meta-learning framework. Specifically, we meta-learn a noise generator which outputs a multiplicative noise distribution for latent features, to obtain low errors on the test instances in an input-dependent manner. Then, the learned noise generator can perturb the training examples of unseen tasks at the meta-test time for improved generalization. We validate our method on few-shot classification datasets, whose results show that it significantly improves the generalization performance of the base model, and largely outperforms existing regularization methods such as information bottleneck, manifold mixup, and information dropout.
accept-poster
This paper proposes a type of adaptive dropout to regularize gradient-based meta-learning models. The reviewers found the idea interesting, and it is supported by improvements on standard benchmarks. The authors addressed several concerns of the reviewers during the rebuttal phase. In particular, revisions added results against other regularization methods. We recommend that further attention be given to ablations, in particular the baseline proposed by Reviewer 1.
train
[ "rJxKsYxycS", "HkgSRBhniH", "HJe3wYbPoS", "ryg4oUWPsH", "rJgYVv-PoB", "BJgDya-vsB", "ByemJtbwsB", "H1lRMzRpYr", "S1e1sc_GqB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose to meta-learn, using MAML, the mean of an elementwise, input-dependent, multiplicative noise to improve generalization in few-shot learning.\nThe motivation is that meta-learning the noise allows to learn how to best perturb examples in order to improve generlization. This claim is supported b...
[ 8, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_BJgd81SYwr", "ByemJtbwsB", "iclr_2020_BJgd81SYwr", "S1e1sc_GqB", "rJxKsYxycS", "H1lRMzRpYr", "H1lRMzRpYr", "iclr_2020_BJgd81SYwr", "iclr_2020_BJgd81SYwr" ]
iclr_2020_HkgsUJrtDB
Rényi Fair Inference
Machine learning algorithms have been increasingly deployed in critical automated decision-making systems that directly affect human lives. When these algorithms are solely trained to minimize the training/test error, they could suffer from systematic discrimination against individuals based on their sensitive attributes, such as gender or race. Recently, there has been a surge in machine learning society to develop algorithms for fair machine learning. In particular, several adversarial learning procedures have been proposed to impose fairness. Unfortunately, these algorithms either can only impose fairness up to linear dependence between the variables, or they lack computational convergence guarantees. In this paper, we use Rényi correlation as a measure of fairness of machine learning models and develop a general training framework to impose fairness. In particular, we propose a min-max formulation which balances the accuracy and fairness when solved to optimality. For the case of discrete sensitive attributes, we suggest an iterative algorithm with theoretical convergence guarantee for solving the proposed min-max problem. Our algorithm and analysis are then specialized to fair classification and fair clustering problems. To demonstrate the performance of the proposed Rényi fair inference framework in practice, we compare it with well-known existing methods on several benchmark datasets. Experiments indicate that the proposed method has favorable empirical performance against state-of-the-art approaches.
accept-poster
The paper addresses the problem of fair representation learning. The authors propose to use Rényi correlation as a measure of (in)dependence between the predictor and the sensitive attribute and developed a general training framework to impose fairness with theoretical properties. The empirical evaluations have been performed using standard benchmarks for fairness methods and the SOTA baselines -- all this supports the main claims of this work's contributions. All the reviewers and AC agree that this work has made a valuable contribution and recommend acceptance. Congratulations to the authors!
train
[ "r1eBV_Q5sS", "rJxe__mcsS", "SylrUPXqsH", "HyxTwiqAKH", "r1gYZ0Tg5B", "HJlAucY_qB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "First, we thank the reviewer for his/her valuable assessment of our paper. Below, we address the main concerns of the reviewer and mention the revisions we have made on the paper. \n\n[Fair word embeddings]\nWe appreciate the reviewer for bringing this point up. The notions of fairness and the proposed optimizatio...
[ -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, 3, 3, 1 ]
[ "r1gYZ0Tg5B", "HyxTwiqAKH", "HJlAucY_qB", "iclr_2020_HkgsUJrtDB", "iclr_2020_HkgsUJrtDB", "iclr_2020_HkgsUJrtDB" ]
iclr_2020_SJlRUkrFPS
Learning transport cost from subset correspondence
Learning to align multiple datasets is an important problem with many applications, and it is especially useful when we need to integrate multiple experiments or correct for confounding. Optimal transport (OT) is a principled approach to align datasets, but a key challenge in applying OT is that we need to specify a cost function that accurately captures how the two datasets are related. Reliable cost functions are typically not available and practitioners often resort to using hand-crafted or Euclidean cost even if it may not be appropriate. In this work, we investigate how to learn the cost function using a small amount of side information which is often available. The side information we consider captures subset correspondence---i.e. certain subsets of points in the two data sets are known to be related. For example, we may have some images labeled as cars in both datasets; or we may have a common annotated cell type in single-cell data from two batches. We develop an end-to-end optimizer (OT-SI) that differentiates through the Sinkhorn algorithm and effectively learns the suitable cost function from side information. On systematic experiments in images, marriage-matching and single-cell RNA-seq, our method substantially outperform state-of-the-art benchmarks.
accept-poster
The paper proposes an algorithm for learning a transport cost function that accurately captures how two datasets are related by leveraging side information such as a subset of correctly labeled points. The reviewers believe that this is an interesting and novel idea. There were several questions and comments, which the authors adequately addressed. I recommend that the paper be accepted.
train
[ "rJeKM2Z2oH", "H1e1pibnir", "rylf9sZ2ir", "r1lYrm46FS", "SklG_8hatS", "BJx8MZsRFS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your thoughtful review and suggestions. \n\nRegarding your motivation for dataset alignment, aligning datasets across different conditions is a very important problem in biology, especially in single cell analysis. This is highlighted in several recent high-profile papers, e.g. Butler et al Nature Bi...
[ -1, -1, -1, 3, 6, 8 ]
[ -1, -1, -1, 4, 3, 1 ]
[ "r1lYrm46FS", "SklG_8hatS", "BJx8MZsRFS", "iclr_2020_SJlRUkrFPS", "iclr_2020_SJlRUkrFPS", "iclr_2020_SJlRUkrFPS" ]
iclr_2020_SklkDkSFPB
BlockSwap: Fisher-guided Block Substitution for Network Compression on a Budget
The desire to map neural networks to varying-capacity devices has led to the development of a wealth of compression techniques, many of which involve replacing standard convolutional blocks in a large network with cheap alternative blocks. However, not all blocks are created equally; for a required compute budget there may exist a potent combination of many different cheap blocks, though exhaustively searching for such a combination is prohibitively expensive. In this work, we develop BlockSwap: a fast algorithm for choosing networks with interleaved block types by passing a single minibatch of training data through randomly initialised networks and gauging their Fisher potential. These networks can then be used as students and distilled with the original large network as a teacher. We demonstrate the effectiveness of the chosen networks across CIFAR-10 and ImageNet for classification, and COCO for detection, and provide a comprehensive ablation study of our approach. BlockSwap quickly explores possible block configurations using a simple architecture ranking system, yielding highly competitive networks in orders of magnitude less time than most architecture search techniques (e.g. under 5 minutes on a single GPU for CIFAR-10).
accept-poster
Two reviewers recommend acceptance. One reviewer is negative but does not provide reasons for rejection. The AC read the paper and agrees with the positive reviewers that the paper provides value for the community on the important topic of network compression.
test
[ "BJxriKiZiH", "Syghz8ibsr", "S1ljr-j-ir", "ByxSoEyLKr", "rkxUDhtpFB", "rkl7kPYxcS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would first like to thank the reviewer for their detailed comments and analysis of our work. We are glad they liked the idea, and we are particularly happy that they appreciated the simplicity of the method, as we believe that complicated methods can present a major obstacle to deployment.\n\nWe answer the quer...
[ -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, 4, 4, 3 ]
[ "ByxSoEyLKr", "rkxUDhtpFB", "rkl7kPYxcS", "iclr_2020_SklkDkSFPB", "iclr_2020_SklkDkSFPB", "iclr_2020_SklkDkSFPB" ]
iclr_2020_Syx1DkSYwB
Variance Reduction With Sparse Gradients
Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients to reduce the variance of stochastic gradients. Compared to SGD, these methods require at least double the number of operations per update to model parameters. To reduce the computational cost of these methods, we introduce a new sparsity operator: The random-top-k operator. Our operator reduces computational complexity by estimating gradient sparsity exhibited in a variety of applications by combining the top-k operator and the randomized coordinate descent operator. With this operator, large batch gradients offer an extra benefit beyond variance reduction: A reliable estimate of gradient sparsity. Theoretically, our algorithm is at least as good as the best algorithm (SpiderBoost), and further excels in performance whenever the random-top-k operator captures gradient sparsity. Empirically, our algorithm consistently outperforms SpiderBoost using various models on various tasks including image classification, natural language processing, and sparse matrix factorization. We also provide empirical evidence to support the intuition behind our algorithm via a simple gradient entropy computation, which serves to quantify gradient sparsity at every iteration.
accept-poster
Congratulations on getting your paper accepted to ICLR. Please make sure to incorporate the reviewers' suggestions for the final version.
val
[ "S1g8clOTFH", "Bylt-T73oH", "ByeDcCHjiS", "Hyexcsydor", "H1eEA5y_sr", "B1ldttyujr", "S1xl3O1uiB", "rJx0y3HZYS", "S1gBwSJRtB" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper aims at improving the computational cost of variance reduction methods while preserving their benefits regarding the fast provable convergence. The existing variance reduction based methods suffer from higher per-iteration gradient query complexity as compared to the vanilla mini-batch SGD, which limits...
[ 6, -1, -1, -1, -1, -1, -1, 3, 8 ]
[ 3, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_Syx1DkSYwB", "iclr_2020_Syx1DkSYwB", "H1eEA5y_sr", "S1gBwSJRtB", "rJx0y3HZYS", "S1g8clOTFH", "iclr_2020_Syx1DkSYwB", "iclr_2020_Syx1DkSYwB", "iclr_2020_Syx1DkSYwB" ]
iclr_2020_Byg1v1HKDB
Abductive Commonsense Reasoning
Abductive reasoning is inference to the most plausible explanation. For example, if Jenny finds her house in a mess when she returns from work, and remembers that she left a window open, she can hypothesize that a thief broke into her house and caused the mess, as the most plausible explanation. While abduction has long been considered to be at the core of how people interpret and read between the lines in natural language (Hobbs et al., 1988), there has been relatively little research in support of abductive natural language inference and generation. We present the first study that investigates the viability of language-based abductive reasoning. We introduce a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations. Based on this dataset, we conceptualize two new tasks – (i) Abductive NLI: a multiple-choice question answering task for choosing the more likely explanation, and (ii) Abductive NLG: a conditional generation task for explaining given observations in natural language. On Abductive NLI, the best model achieves 68.9% accuracy, well below human performance of 91.4%. On Abductive NLG, the current best language generators struggle even more, as they lack reasoning capabilities that are trivial for humans. Our analysis leads to new insights into the types of reasoning that deep pre-trained language models fail to perform—despite their strong performance on the related but more narrowly defined task of entailment NLI—pointing to interesting avenues for future research.
accept-poster
This paper presents a dataset, created using a combination of existing resources, crowdsourcing, and model-based filtering, that aims to test models' understanding of typical progressions of events in everyday situations. The dataset represents a challenge for a range of state-of-the-art models for NLP and commonsense reasoning, and also can be used productively as a training task in transfer learning. After some discussion, reviewers came to a consensus that this represents an interesting contribution and a potentially valuable resource. There were some concerns—not fully resolved—about the implications of using model-based filtering during data creation, but these were not so serious as to invalidate the primary contributions of the paper. While the thematic fit with ICLR is a bit weak—the primary contribution of the paper appears to be a dataset and task definition, rather than anything specific to representation learning—there are relevant secondary contributions, and I think that this work will be practically of interest to a reasonable fraction of the ICLR audience.
train
[ "rylgzEOqsB", "rkxSWudcor", "H1gU-PuqiS", "Sye4_Uu9or", "BklqO2_AKH", "BJlqUqW3tH", "ByltNYc0tr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank AnonReviewer1 for their positive comments about the interesting-ness of our proposed abductive reasoning tasks (inference and generation) and the associated benchmark dataset. We address specific concerns individually below:\n\nDiscussion about e-SNLI:\nA key distinction between e-SNLI and Abductive-NLI i...
[ -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, 3, 4, 5 ]
[ "ByltNYc0tr", "iclr_2020_Byg1v1HKDB", "BJlqUqW3tH", "BklqO2_AKH", "iclr_2020_Byg1v1HKDB", "iclr_2020_Byg1v1HKDB", "iclr_2020_Byg1v1HKDB" ]
iclr_2020_Byg-wJSYDS
Discrepancy Ratio: Evaluating Model Performance When Even Experts Disagree on the Truth
In most machine learning tasks unambiguous ground truth labels can easily be acquired. However, this luxury is often not afforded to many high-stakes, real-world scenarios such as medical image interpretation, where even expert human annotators typically exhibit very high levels of disagreement with one another. While prior works have focused on overcoming noisy labels during training, the question of how to evaluate models when annotators disagree about ground truth has remained largely unexplored. To address this, we propose the discrepancy ratio: a novel, task-independent and principled framework for validating machine learning models in the presence of high label noise. Conceptually, our approach evaluates a model by comparing its predictions to those of human annotators, taking into account the degree to which annotators disagree with one another. While our approach is entirely general, we show that in the special case of binary classification, our proposed metric can be evaluated in terms of simple, closed-form expressions that depend only on aggregate statistics of the labels and not on any individual label. Finally, we demonstrate how this framework can be used effectively to validate machine learning models using two real-world tasks from medical imaging. The discrepancy ratio metric reveals what conventional metrics do not: that our models not only vastly exceed the average human performance, but even exceed the performance of the best human experts in our datasets.
accept-poster
This paper tackles an interesting problem: "How should we evaluate models when the test data contains noisy labels?". This is a particularly relevant question in the medical imaging domain, where expert annotators often disagree with each other. The paper proposes a new metric, the "discrepancy ratio", which computes the ratio of how often the model disagrees with humans to how often humans disagree with each other. The paper shows that under certain noise models for the human annotations the discrepancy ratio can exactly determine when a model is more accurate than humans, whereas commonly used baselines such as comparing with the majority vote do not have this property. Reviewers were satisfied with the author rebuttal, particularly with the clarification that the goal of the metric is to accurately determine when model performance exceeds that of human annotators, and not to better rank models. The metric should be quite useful, assuming users are cautious of the limitations described by the authors.
train
[ "r1eMkC3mnS", "BJg_rIUitr", "SJxfhzwjtS", "rylN2D7DoB", "SygxEwQvsH", "r1xu9IQwsH", "Syx3rIXDor", "HkexErQvjH", "rJxRqQToFr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors asked me that:\n\"We would like to ask for clarification from the reviewer on this comment. Is the reviewer asking for an additional panel to demonstrate how the Relative F1 score scales with the label swap probability?\"\n\nThe short answer is yes. If I understand correctly, there is no F1 score scale...
[ -1, 8, 6, -1, -1, -1, -1, -1, 6 ]
[ -1, 1, 4, -1, -1, -1, -1, -1, 3 ]
[ "rylN2D7DoB", "iclr_2020_Byg-wJSYDS", "iclr_2020_Byg-wJSYDS", "SygxEwQvsH", "BJg_rIUitr", "Syx3rIXDor", "SJxfhzwjtS", "rJxRqQToFr", "iclr_2020_Byg-wJSYDS" ]
iclr_2020_HJgSwyBKvr
Weakly Supervised Disentanglement with Guarantees
Learning disentangled representations that correspond to factors of variation in real-world data is critical to interpretable and human-controllable machine learning. Recently, concerns about the viability of learning disentangled representations in a purely unsupervised manner has spurred a shift toward the incorporation of weak supervision. However, there is currently no formalism that identifies when and how weak supervision will guarantee disentanglement. To address this issue, we provide a theoretical framework to assist in analyzing the disentanglement guarantees (or lack thereof) conferred by weak supervision when coupled with learning algorithms based on distribution matching. We empirically verify the guarantees and limitations of several weak supervision methods (restricted labeling, match-pairing, and rank-pairing), demonstrating the predictive power and usefulness of our theoretical framework.
accept-poster
This paper first discusses some concepts related to disentanglement. The authors propose to decompose disentanglement into two distinct concepts: consistency and restrictiveness. Then, a calculus of disentanglement is introduced to reveal the relationship between restrictiveness and consistency. The proposed concepts are applied to analyze weak supervision methods. The reviewers ultimately decided this paper is well-written and has content which is of general interest to the ICLR community.
train
[ "rkll00W3jS", "HylboUWhsB", "BylIHsasjB", "Bke7wjpsjr", "rkez3PlAtr", "HylxlVijor", "B1gJb49sor", "SJgQ9TMl5r", "S1ehjYSojS", "SyereqHjiS", "SJgpa_BijS", "HJeDluCQjB", "HyeZnJF6KH", "r1lvXDuHYr", "ryli6_oGKr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "Dear Reviewer,\n\nThank you for your kind reconsideration of our paper. We will adjust our paper accordingly to emphasize that consistency and restrictiveness are primarily intended as tools for theoretical analysis.\n\nRegarding your hypothesis that using consistency and restrictiveness metrics for model selectio...
[ -1, -1, -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, 3, -1, -1 ]
[ -1, -1, -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, 4, -1, -1 ]
[ "HylxlVijor", "B1gJb49sor", "HyeZnJF6KH", "BylIHsasjB", "iclr_2020_HJgSwyBKvr", "HJeDluCQjB", "SJgQ9TMl5r", "iclr_2020_HJgSwyBKvr", "SJgpa_BijS", "S1ehjYSojS", "SJgQ9TMl5r", "rkez3PlAtr", "iclr_2020_HJgSwyBKvr", "ryli6_oGKr", "iclr_2020_HJgSwyBKvr" ]
iclr_2020_SJlHwkBYDH
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on benign inputs. However, under the black-box setting, most existing adversaries often have poor transferability to attack other defense models. In this work, viewing adversarial example generation as an optimization process, we propose two new methods to improve the transferability of adversarial examples, namely Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM). NI-FGSM adapts Nesterov accelerated gradient to the iterative attacks so as to effectively look ahead and improve the transferability of adversarial examples. SIM is based on our discovery of the scale-invariant property of deep learning models, which we leverage to optimize the adversarial perturbations over the scale copies of the input images so as to avoid "overfitting" on the white-box model being attacked and generate more transferable adversarial examples. NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack to generate more transferable adversarial examples against the defense models. Empirical results on the ImageNet dataset demonstrate that our attack methods exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks.
accept-poster
Under the optimization formulation of adversarial attack, this paper proposes two methods to improve the transferability of adversarial examples, namely Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM). NI-FGSM adapts Nesterov accelerated gradient into the iterative attacks to effectively look ahead and avoid the “missing” of the global maximum, and SIM optimizes the adversarial perturbations over the scale copies of the input images so as to avoid “overfitting” on the white-box model being attacked and generate more transferable adversarial examples. Empirical results demonstrate the effectiveness of the proposed methods. The ideas are sensible, and the empirical studies were strengthened during rebuttal.
train
[ "HkepTE9csB", "Byl6Fbc9jB", "S1gH-At9oB", "rklpKcYcsS", "HJl0R8y5KB", "SyxQje86tr", "BygheOf79r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your insightful comments. We have performed the corresponding revision based on your constructive suggestions. \n\nA1. Thank you for the valuable suggestion. We agree that the comparison of Nesterov Accelerated Gradient (NAG) with other momentum methods is important. Indeed, we have compared our NI-...
[ -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, 5, 3, 5 ]
[ "SyxQje86tr", "HJl0R8y5KB", "BygheOf79r", "iclr_2020_SJlHwkBYDH", "iclr_2020_SJlHwkBYDH", "iclr_2020_SJlHwkBYDH", "iclr_2020_SJlHwkBYDH" ]
iclr_2020_SJgIPJBFvH
Fantastic Generalization Measures and Where to Find Them
Generalization of deep networks has been intensely researched in recent years, resulting in a number of theoretical bounds and empirically motivated measures. However, most papers proposing such measures only study a small set of models, leaving open the question of whether these measures are truly useful in practice. We present the first large scale study of generalization bounds and measures in deep networks. We train over two thousand CIFAR-10 networks with systematic changes in important hyper-parameters. We attempt to uncover potential causal relationships between each measure and generalization, by using rank correlation coefficient and its modified forms. We analyze the results and show that some of the studied measures are very promising for further research.
accept-poster
This paper provides a valuable survey, summary, and empirical comparison of many generalization quantities from throughout the literature. It is comprehensive, thorough, and will be useful to a variety of researchers (both theoretical and applied).
test
[ "ryeab8ZKjS", "S1xtJSZtjH", "HkeGTU-KoH", "ByxTYS-KsB", "SJewlC1jFr", "ByxTlM6e5H", "HJxvboKicr", "H1eyz2M2DH", "rJxvnPrqwS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for your encouraging review! We next address your questions:\n\n 1) The stochasticity due to initialization and optimization: We have added this discussion to the revision. We ran all experiments 5 times and calculated the standard deviation (table 8&9). The resulting standard deviation shows that our...
[ -1, -1, -1, -1, 8, 3, 8, -1, -1 ]
[ -1, -1, -1, -1, 4, 3, 3, -1, -1 ]
[ "SJewlC1jFr", "iclr_2020_SJgIPJBFvH", "ByxTlM6e5H", "HJxvboKicr", "iclr_2020_SJgIPJBFvH", "iclr_2020_SJgIPJBFvH", "iclr_2020_SJgIPJBFvH", "rJxvnPrqwS", "iclr_2020_SJgIPJBFvH" ]
iclr_2020_BJxwPJHFwS
Robustness Verification for Transformers
Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding model behavior and obtaining safety guarantees. However, previous methods can usually only handle neural networks with relatively simple architectures. In this paper, we consider the robustness verification problem for Transformers. Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous works. We resolve these challenges and develop the first robustness verification algorithm for Transformers. The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation. These bounds also shed light on interpreting Transformers as they consistently reflect the importance of different words in sentiment analysis.
accept-poster
A robustness verification method for transformers is presented. While robustness verification has previously been attempted for other types of neural networks, this is the first method for transformers. Reviewers are generally happy with the work done, but there were complaints about not comparing with and citing previous work, and only analyzing a simple one-layer version of transformers. The authors convincingly respond to these complaints. I think that the paper can be accepted, given that the reviewers' complaints have been addressed and the paper seems to be sufficiently novel and have practical importance for understanding transformers.
train
[ "rJeXo8rYjH", "BkeqhHSKsr", "SkeQbNBKir", "r1xFcQBYjH", "S1elFelaKS", "r1xpvEcpKH", "ryxVD3SG5B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the constructive feedback. We address point 1, 2, and 4 in “weakness bullets” below (we have addressed point 3 in another reply).\n\n(1) Regarding comparison with previous SOTA methods, and time cost comparison:\n\nBecause we propose the first algorithm for verifying Transformers, no prev...
[ -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, 3, 3, 1 ]
[ "S1elFelaKS", "S1elFelaKS", "r1xpvEcpKH", "ryxVD3SG5B", "iclr_2020_BJxwPJHFwS", "iclr_2020_BJxwPJHFwS", "iclr_2020_BJxwPJHFwS" ]
iclr_2020_HJgcvJBFvB
Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning
Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (yet semantically similar to trained agents), particularly when they are trained on high-dimensional state spaces, such as images. In this paper, we propose a simple technique to improve the generalization ability of deep RL agents by introducing a randomized (convolutional) neural network that randomly perturbs input observations. It enables trained agents to adapt to new domains by learning robust features invariant across varied and randomized environments. Furthermore, we consider an inference method based on the Monte Carlo approximation to reduce the variance induced by this randomization. We demonstrate the superiority of our method across 2D CoinRun, 3D DeepMind Lab exploration and 3D robotics control tasks: it significantly outperforms various regularization and data augmentation methods for the same purpose.
accept-poster
This submission proposes an RL method for learning policies that generalize better in novel visual environments. The authors propose to introduce some noise in the feature space rather than in the input space as is typically done for visual inputs. They also propose an alignment loss term to enforce invariance to the random perturbation. Reviewers agreed that the experimental results were extensive and that the proposed method is novel and works well. One reviewer felt that the experiments didn’t sufficiently demonstrate invariance to additional potential domain shifts. AC believes that additional experiments to probe this would indeed be interesting but that the demonstrated improvements when compared to existing image perturbation methods and existing regularization methods are sufficient experimental justification of the usefulness of the approach. Two reviewers felt that the method should be more extensively compared to “data augmentation” methods for computer vision tasks. AC believes that the proposed method is not only a data augmentation method given that the added loss tries to enforce representation invariance to perturbations as well. As such, comparisons to feature adaptation techniques to tackle domain shift would be appropriate, but it is reasonable to consider this line of comparison beyond the scope of this particular work. AC agrees with the majority opinion that the submission should be accepted.
train
[ "BJlOAwOpYr", "S1lX_V5ijS", "S1gKJLqijS", "SyxUfr9ojB", "rJePkVciir", "Hyl49oREcH", "Hke62XQY9H" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes methods to improve generalization in deep reinforcement learning with an emphasis on unseen environments. The main contribution is essentially a data augmentation technique that perturbs the input observations using a noise generated from the range space of a random convolutional network. The e...
[ 6, -1, -1, -1, -1, 3, 8 ]
[ 3, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_HJgcvJBFvB", "Hyl49oREcH", "iclr_2020_HJgcvJBFvB", "BJlOAwOpYr", "Hke62XQY9H", "iclr_2020_HJgcvJBFvB", "iclr_2020_HJgcvJBFvB" ]
iclr_2020_rke2P1BFwS
Tensor Decompositions for Temporal Knowledge Base Completion
Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries of the form (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.
accept-poster
The authors propose a new algorithm based on tensor decompositions for the problem of knowledge base completion. They also introduce new regularisers to augment their method. In addition, they propose a new dataset for temporal KB completion. All the reviewers agreed that the paper addresses an important problem and presents interesting results. The authors diligently responded to reviewer queries and addressed most of the concerns raised by the reviewers. Since all the reviewers are in agreement, I recommend that this paper be accepted.
train
[ "BJeL9wg4YB", "Ske_fgwVsH", "BJlXSxDVsH", "rye601PVoB", "r1eF91vVjB", "SkeVp-x0FS", "SJeB8K5AYS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors study an important problem, i.e., time-aware link prediction in a knowledge base. Specifically, the authors focus on predicting the missing link in a quadruple, i.e., (subject, predicate, ?, timestamp). In particular, the authors design a new tensor (order 4) factorization based method w...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_rke2P1BFwS", "SkeVp-x0FS", "BJeL9wg4YB", "SJeB8K5AYS", "iclr_2020_rke2P1BFwS", "iclr_2020_rke2P1BFwS", "iclr_2020_rke2P1BFwS" ]
iclr_2020_HkxTwkrKDB
On Universal Equivariant Set Networks
Using deep neural networks that are either invariant or equivariant to permutations in order to learn functions on unordered sets has become prevalent. The most popular, basic models are DeepSets (Zaheer et al. 2017) and PointNet (Qi et al. 2017). While known to be universal for approximating invariant functions, DeepSets and PointNet are not known to be universal when approximating equivariant set functions. On the other hand, several recent equivariant set architectures have been proven equivariant universal (Sannai et al. 2019, Keriven and Peyre 2019), however these models either use layers that are not permutation equivariant (in the standard sense) and/or use higher order tensor variables which are less practical. There is, therefore, a gap in understanding the universality of popular equivariant set models versus theoretical ones. In this paper we close this gap by proving that: (i) PointNet is not equivariant universal; and (ii) adding a single linear transmission layer makes PointNet universal. We call this architecture PointNetST and argue it is the simplest permutation equivariant universal model known to date. Another consequence is that DeepSets is universal, and also PointNetSeg, a popular point cloud segmentation network (used e.g., in Qi et al. 2017) is universal. The key theoretical tool used to prove the above results is an explicit characterization of all permutation equivariant polynomial layers. Lastly, we provide numerical experiments validating the theoretical results and comparing different permutation equivariant models.
accept-poster
This paper shows that DeepSets and PointNet, which are known to be universal for approximating functions, are also universal for approximating equivariant set functions. Reviewers are in agreement that this paper is interesting and makes important contributions. However, they feel the paper could be written to be more accessible. Based on the reviews and discussions following the author response, I recommend accepting this paper. I appreciate the authors for an interesting paper and look forward to seeing it at the conference.
train
[ "S1ljmyyjsH", "SJgNL-s_ir", "S1gah0BboB", "H1xgIgLWjr", "BJxuByU-oS", "HylapLxoKH", "SyxwgLc0tB", "rke3Frfx5S" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their comments. We have uploaded a revision of our paper. The following are the changes made:\n\nReviewer1: We added an appendix in which we discuss the ability of equivariant universal models to approximate an equivariant graph convolution network. We also added an experiment where we u...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2020_HkxTwkrKDB", "H1xgIgLWjr", "HylapLxoKH", "rke3Frfx5S", "SyxwgLc0tB", "iclr_2020_HkxTwkrKDB", "iclr_2020_HkxTwkrKDB", "iclr_2020_HkxTwkrKDB" ]
iclr_2020_rklk_ySYPB
Provable robustness against all adversarial lp-perturbations for p≥1
In recent years several adversarial attacks and defenses have been proposed. Often seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is provable robustness guarantees. While provably robust models for specific lp-perturbation models have been developed, we show that they do not come with any guarantee against other lq-perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt l1- \textit{and} l∞-perturbations and show how that leads to the first provably robust models wrt any lp-norm for p≥1.
accept-poster
This paper extends the degree to which ReLU networks can be provably resistant to a broader class of adversarial attacks using a MMR-Universal regularization scheme. In particular, the first provably robust model in terms of lp norm perturbations is developed, where robustness holds with respect to *any* p greater than or equal to one (as opposed to prior work that may only apply to specific lp-norm perturbations). While I support accepting this paper based on the strong reviews and significant technical contribution, one potential drawback is the lack of empirical tests with a broader cohort of representative CNN architectures (as pointed out by R1). In this regard, the rebuttal promises that additional experiments with larger models will be added in the future to the final version, but obviously such results cannot be used to evaluate performance at this time.
train
[ "rkeaaOD7qS", "rJlbXqZsjS", "B1x6FtZisB", "BJlX4K-jjB", "BylLN_bojr", "ryegXTSpYr", "BygCdma6FH" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe author proposed MMR regularization for the provable robustness of union of l-1 and l-infty balls, which is robust to any l-p norm for p>=1. \n\nStrengths:\n1. The paper is well organized. \n2. The theoretical part is completed and experiments are done on MNIST, Fashion-MNIST, GTS, CIFAR-10. \n3. The ...
[ 6, -1, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_rklk_ySYPB", "ryegXTSpYr", "BJlX4K-jjB", "BygCdma6FH", "rkeaaOD7qS", "iclr_2020_rklk_ySYPB", "iclr_2020_rklk_ySYPB" ]
iclr_2020_B1eyO1BFPr
Don't Use Large Mini-batches, Use Local SGD
Mini-batch stochastic gradient methods (SGD) are state of the art for distributed training of deep neural networks. Drastic increases in the mini-batch sizes have led to key efficiency and scalability gains in recent years. However, progress faces a major roadblock, as models trained with large batches often do not generalize well, i.e. they do not show good accuracy on new data. As a remedy, we propose a \emph{post-local} SGD and show that it significantly improves the generalization performance compared to large-batch training on standard benchmarks while enjoying the same efficiency (time-to-accuracy) and scalability. We further provide an extensive study of the communication efficiency vs. performance trade-offs associated with a host of \emph{local SGD} variants.
accept-poster
The authors propose a simple modification of local SGD for parallel training, starting with standard SGD and then switching to local SGD. The resulting method provides good results and makes a practical contribution. Please carefully account for reviewer comments in future revisions.
train
[ "H1lcegDaKr", "HklP6PCtiH", "HJx9y-jOsS", "SJffksdsr", "rkeltAcOir", "r1xplIO6YS", "r1gp2D8yqS", "S1g1nOZDKr", "rylU6gfSYB", "SkeiU0ZHFH", "H1eGZobrYS", "H1e8UK-SKS", "ryxvs7ZBFS", "rkxB3y-BKH", "HkxrQ7yHKH", "rJgWOs2EYS", "B1xFASpXFS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "public", "official_reviewer", "public", "official_reviewer", "author", "public" ]
[ "In this paper, the authors propose a variant of local SGD: post-local SGD, which improves the generalization performance compared to large-batch SGD. This paper also empirically studies the trade-off between communication efficiency and performance. Additionally, this paper proposes hierarchical local SGD. The pap...
[ 6, -1, -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_B1eyO1BFPr", "HJx9y-jOsS", "H1lcegDaKr", "r1xplIO6YS", "r1gp2D8yqS", "iclr_2020_B1eyO1BFPr", "iclr_2020_B1eyO1BFPr", "rylU6gfSYB", "H1eGZobrYS", "H1e8UK-SKS", "HkxrQ7yHKH", "ryxvs7ZBFS", "rkxB3y-BKH", "HkxrQ7yHKH", "rJgWOs2EYS", "B1xFASpXFS", "iclr_2020_B1eyO1BFPr" ]
iclr_2020_B1eWOJHKvB
Kernel of CycleGAN as a principal homogeneous space
Unpaired image-to-image translation has attracted significant interest due to the invention of CycleGAN, a method which utilizes a combination of adversarial and cycle consistency losses to avoid the need for paired data. It is known that the CycleGAN problem might admit multiple solutions, and our goal in this paper is to analyze the space of exact solutions and to give perturbation bounds for approximate solutions. We show theoretically that the exact solution space is invariant with respect to automorphisms of the underlying probability spaces, and, furthermore, that the group of automorphisms acts freely and transitively on the space of exact solutions. We examine the case of zero pure CycleGAN loss first in its generality, and, subsequently, expand our analysis to approximate solutions for extended CycleGAN loss where identity loss term is included. In order to demonstrate that these results are applicable, we show that under mild conditions nontrivial smooth automorphisms exist. Furthermore, we provide empirical evidence that neural networks can learn these automorphisms with unexpected and unwanted results. We conclude that finding optimal solutions to the CycleGAN loss does not necessarily lead to the envisioned result in image-to-image translation tasks and that underlying hidden symmetries can render the result useless.
accept-poster
This paper theoretically studies one of the fundamental issues in CycleGAN (which recently gained much attention for image-to-image translation). The authors analyze the space of exact and approximate solutions under automorphisms. Reviewers mostly agree on the theoretical value of the paper. Some concerns about practical value are also raised, e.g., limited or unsurprising experimental results. Overall, I think this is a borderline paper, but I lean a bit toward acceptance as the theoretical contribution is solid and potentially beneficial to many future works on unpaired image-to-image translation.
train
[ "H1e26UE6tr", "rylknOkRFr", "B1l_X-GJcS", "HygDhZNnjH", "SkelelV3sB", "H1exJyVnoB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "I have read the rebuttal of the authors . Thank you for you answer and for addressing some concerns. While the question addressed is important, the theory presented here does not seem to hint to a solution, hence I am keeping my score. \n\n###\nSummary of the paper: \n\nThis paper shows that the cycle GAN loss...
[ 3, 6, 8, -1, -1, -1 ]
[ 4, 1, 3, -1, -1, -1 ]
[ "iclr_2020_B1eWOJHKvB", "iclr_2020_B1eWOJHKvB", "iclr_2020_B1eWOJHKvB", "H1e26UE6tr", "rylknOkRFr", "B1l_X-GJcS" ]
iclr_2020_ryxGuJrFvS
Distributionally Robust Neural Networks
Overparameterized neural networks can be highly accurate on average on an i.i.d. test set, yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that hold on average but not in such groups). Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups. However, we find that naively applying group DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss also already has vanishing worst-case training loss. Instead, the poor worst-case performance arises from poor generalization on some groups. By coupling group DRO models with increased regularization---stronger-than-typical L2 regularization or early stopping---we achieve substantially higher worst-group accuracies, with 10-40 percentage point improvements on a natural language inference task and two image tasks, while maintaining high average accuracies. Our results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization. Finally, we introduce a stochastic optimization algorithm for the group DRO setting and provide convergence guarantees for the new algorithm.
accept-poster
This paper proposes distributionally robust optimization (DRO) to learn robust models that minimize worst-case training loss over a set of pre-defined groups. They find that increased regularization is necessary for worst-group performance in the overparametrized regime (something that is not needed for non-robust average performance). This is an interesting paper and I recommend acceptance. The discussion phase suggested a change in the title which slightly overstated the paper's contributions (a comment which I agree with). The authors agreed to change the title in the final version.
train
[ "HJg4BjDy5B", "HJgQOqWqiB", "HkeL89WqsB", "ryxpHc-qoS", "Syge19-5jr", "rJgFdjSYFr", "BJexEXB09r" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper describes a method of training neural networks to be robust to a worse case mixture of a set of predefined example attributes. This is done with a loss in accuracy in the average case but improvements in the worse case. The proposed algorithm is relatively simple and convergence rates are also given for...
[ 8, -1, -1, -1, -1, 3, 6 ]
[ 3, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_ryxGuJrFvS", "BJexEXB09r", "rJgFdjSYFr", "iclr_2020_ryxGuJrFvS", "HJg4BjDy5B", "iclr_2020_ryxGuJrFvS", "iclr_2020_ryxGuJrFvS" ]
iclr_2020_Hkx7_1rKwS
On Solving Minimax Optimization Locally: A Follow-the-Ridge Approach
Many tasks in modern machine learning can be formulated as finding equilibria in sequential games. In particular, two-player zero-sum sequential games, also known as minimax optimization, have received growing interest. It is tempting to apply gradient descent to solve minimax optimization given its popularity and success in supervised learning. However, it has been noted that naive application of gradient descent fails to find some local minimax and can converge to non-local-minimax points. In this paper, we propose Follow-the-Ridge (FR), a novel algorithm that provably converges to and only converges to local minimax. We show theoretically that the algorithm addresses the notorious rotational behaviour of gradient dynamics, and is compatible with preconditioning and positive momentum. Empirically, FR solves toy minimax problems and improves the convergence of GAN training compared to the recent minimax optimization algorithms.
accept-poster
The submission proposes a novel solution for minimax optimization which has strong theoretical and empirical results as well as broad relevance for the community. The approach, Follow-the-Ridge, has theoretical guarantees and is compatible with preconditioning and momentum optimization strategies. The paper is well-written and the authors engaged in a lengthy discussion with the reviewers, leading to a clearer understanding of the paper for all. The reviews all recommend acceptance.
train
[ "r1xED6s2YB", "BJxZKhUnoB", "BylKxgP3oH", "S1lLhU1hsB", "SJlGh5Aoor", "rke54KqoiH", "r1g7wOsooH", "BJx9afqiiB", "SJxTI6tior", "rylHGhdjjH", "SJeHw7FsiH", "SJlg77SioH", "B1eWY0Voir", "SklQELDcjB", "SJePYIDciH", "HyxYimzssH", "rkls3yVosH", "BJlOStb5ir", "ByxHav0yoB", "S1gN8fC1jS"...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", ...
[ "Summary: This paper designs a set of dynamics for learning in games called follow-the-ridge with the goal of finding local stackelberg equilibria. The main theoretical results show that the only stable attractors of the dynamics are stackelberg equilibria. Moreover, the authors give a deterministic convergence rat...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, -1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 1, -1 ]
[ "iclr_2020_Hkx7_1rKwS", "r1xED6s2YB", "rkls3yVosH", "SJlGh5Aoor", "rke54KqoiH", "rylHGhdjjH", "rylHGhdjjH", "SJxTI6tior", "SJeHw7FsiH", "SJePYIDciH", "rylHGhdjjH", "B1eWY0Voir", "SklQELDcjB", "BJlOStb5ir", "BJlOStb5ir", "BJlOStb5ir", "H1gvHQuxir", "ByxHav0yoB", "r1xED6s2YB", "i...
iclr_2020_SJxSOJStPr
A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning
Despite the growing interest in continual learning, most contemporary works have studied it in a rather restricted setting where tasks are clearly distinguishable, and task boundaries are known during training. However, if our goal is to develop an algorithm that learns as humans do, this setting is far from realistic, and it is essential to develop a methodology that works in a task-free manner. Meanwhile, among several branches of continual learning, expansion-based methods have the advantage of eliminating catastrophic forgetting by allocating new resources to learn new data. In this work, we propose an expansion-based approach for task-free continual learning. Our model, named Continual Neural Dirichlet Process Mixture (CN-DPM), consists of a set of neural network experts that are in charge of a subset of the data. CN-DPM expands the number of experts in a principled way under the Bayesian nonparametric framework. With extensive experiments, we show that our model successfully performs task-free continual learning for both discriminative and generative tasks such as image classification and image generation.
accept-poster
This paper proposes an expansion-based approach for task-free continual learning, using a Bayesian nonparametric framework (a Dirichlet process mixture model). It was well-reviewed, with reviewers agreeing that the paper is well-written, the experiments are thorough, and the results are impressive. Another positive is that the code has been released, meaning it’s likely to be reproducible. The main concern shared among reviewers is the limited novelty of the approach, which I also share. Reviewers all mentioned that the approach itself isn’t novel, but they like the contribution of applying it to task-free continual learning. This wasn’t mentioned, but I’m concerned about the overlap between this approach and CURL (Rao et al 2019) published in NeurIPS 2019, which also deals with task-free continual learning using a generative, nonparametric approach. Could the authors comment on this in their final version? In sum, it seems that this paper is well-done, with reproducible experiments and impressive results, but limited novelty. Given that reviewers are all satisfied with this, I’m willing to recommend acceptance.
train
[ "BklLaCQFsH", "r1lgVyEYsr", "r1e__A7KiB", "r1l_mbemoB", "rke8X-igiB", "Syx2wD4nKB", "r1x6ECnhYr", "rygcWQYf9r" ]
[ "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the thoughtful review. Below, we present our answers to the questions.\n\n\n1. Feasibility of the posterior inference\n\nThis is closely related to the first question of R1. CN-DPM may not be successful when the discrepancy between tasks is too marginal. Conversely, the posterior inference gets easie...
[ -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "Syx2wD4nKB", "r1x6ECnhYr", "rygcWQYf9r", "rke8X-igiB", "iclr_2020_SJxSOJStPr", "iclr_2020_SJxSOJStPr", "iclr_2020_SJxSOJStPr", "iclr_2020_SJxSOJStPr" ]
iclr_2020_ryeHuJBtPH
Hyper-SAGNN: a self-attention based graph neural network for hypergraphs
Graph representation learning for hypergraphs can be utilized to extract patterns among higher-order interactions that are critically important in many real world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic for various learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms state-of-the-art methods on traditional tasks while also achieving great performance on a new task called outsider identification. We believe that Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
accept-poster
This work introduces a new neural network model that can represent hyperedges of variable size, which is experimentally shown to improve or match the state of the art on several problems. Both reviewers were in favor of acceptance given the method's strong performance, and had their concerns resolved by the rebuttals and the discussion. I am therefore recommending acceptance.
train
[ "SkgvBi60YB", "HJlYRqICtH", "ByxHQc2ojr", "rkld5J3oor", "S1eSbb3isH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a new graph neural network model capable of performing tasks over hyperedges of variable type and size (i.e. different number and types of graph nodes connected by different hyperedges). They experimentally verify its effectiveness over the previous state of the art on several datasets and tas...
[ 8, 8, -1, -1, -1 ]
[ 1, 1, -1, -1, -1 ]
[ "iclr_2020_ryeHuJBtPH", "iclr_2020_ryeHuJBtPH", "iclr_2020_ryeHuJBtPH", "HJlYRqICtH", "SkgvBi60YB" ]
iclr_2020_HyxjOyrKvr
Neural Epitome Search for Architecture-Agnostic Network Compression
Traditional compression methods including network pruning, quantization, low rank factorization and knowledge distillation all assume that network architectures and parameters should be hardwired. In this work, we propose a new perspective on network compression, i.e., network parameters can be disentangled from the architectures. From this viewpoint, we present the Neural Epitome Search (NES), a new neural network compression approach that learns to find compact yet expressive epitomes for weight parameters of a specified network architecture end-to-end. The complete network to compress can be generated from the learned epitome via a novel transformation method that adaptively transforms the epitomes to match shapes of the given architecture. Compared with existing compression methods, NES allows the weight tensors to be independent of the architecture design and hence can achieve a good trade-off between model compression rate and performance given a specific model size constraint. Experiments demonstrate that, on ImageNet, when taking MobileNetV2 as backbone, our approach improves the full-model baseline by 1.47% in top-1 accuracy with 25% MAdd reduction and AutoML for Model Compression (AMC) by 2.5% with nearly the same compression ratio. Moreover, taking EfficientNet-B0 as baseline, our NES yields an improvement of 1.2% while using 10% less MAdd. In particular, our method achieves a new state-of-the-art result of 77.5% under mobile settings (<350M MAdd). Code will be made publicly available.
accept-poster
The paper proposes a novel way to compress arbitrary networks by learning epitomes and corresponding transformations of them to reconstruct the original weight tensors. The idea is very interesting, and the paper presents good experimental validation of the proposed method on state-of-the-art models, showing good MAdd reduction. The authors also put a lot of effort into addressing the concerns of all the reviewers by improving the presentation of the paper (which can still be further improved) and adding more explanations and validations of the proposed method. Although there are still concerns about whether the reduction in MAdd really translates to computation reduction, all the reviewers agreed the paper is interesting and useful, and that further development of such work would be worthwhile.
train
[ "rkenIi7GqB", "Bkxrjd83jS", "rJxZSF8hsB", "rygGUdInjr", "Skej2PP_jH", "HJe0vwDOsH", "B1gMrWSXFS", "S1lORAAAcB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this work, the authors describe a technique for compressing neural networks by learning a so-called Epitome (E), and a transformation function (\\theta) such that the weights for each layer can be constructed using \\theta(E). The epitome and the transformation function can be learnt jointly while optimizing th...
[ 6, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_HyxjOyrKvr", "B1gMrWSXFS", "B1gMrWSXFS", "B1gMrWSXFS", "rkenIi7GqB", "S1lORAAAcB", "iclr_2020_HyxjOyrKvr", "iclr_2020_HyxjOyrKvr" ]
iclr_2020_SJxzFySKwH
On the Equivalence between Positional Node Embeddings and Structural Graph Representations
This work provides the first unifying theoretical framework for node (positional) embeddings and structural graph representations, bridging methods like matrix factorization and graph neural networks. Using invariant theory, we show that the relationship between structural representations and node embeddings is analogous to that of a distribution and its samples. We prove that all tasks that can be performed by node embeddings can also be performed by structural representations and vice-versa. We also show that the concept of transductive and inductive learning is unrelated to node embeddings and graph representations, clearing another source of confusion in the literature. Finally, we introduce new practical guidelines to generating and using node embeddings, which further augments standard operating procedures used today.
accept-poster
The paper shows the relationship between node embeddings and structural graph representations. By careful definition of what structural node representation means, and what node embedding means, using the permutation group, the authors show in Theorem 2 that node embeddings cannot represent any extra information that is not already in the structural representation. The paper then provides empirical experiments on three tasks, and shows in a fourth task an illustration of the theoretical results. The reviewers of the paper scored the paper highly, but with low confidence. I read the paper myself (unfortunately not with a lot of time), with the aim of increasing the confidence of the resulting decision. The main gap in the paper is between the phrases "structural node representation" and "node embedding", and their theoretical definitions. The analogy of distribution and its samples follows unsurprisingly from the definitions (8 and 12), but the interpretation of those definitions as the corresponding English phrases is not obvious by only looking at the definitions. There also seems to be a sleight of hand going on with the most expressive representations (Definitions 9 and 11), which is used to make the conditional independence statement of Theorem 2. The authors should clarify in the final version whether the existence of such a representation can be shown, or, even better, provide a constructive way to obtain it from data. Given the significance of the theoretical results, the authors should improve the introduction of the two main concepts by: - relating them to prior work (one way is to move Section 5 towards the front) - explaining in greater detail why Definitions 8 and 12 correspond to the two concepts. For example expanding the part of the proof of Corollary 1 about SVD, to make clear what Definition 12 means. - a corresponding simple example of Definition 8 to relate to a classical method. The paper provides a nice connection between two disparate concepts.
Unfortunately, the connection uses graph invariance and equivariance, which is unfamiliar to many of the ICLR audience. On balance, I believe that the authors can improve the presentation such that a reader can understand the implications of the connection without being an expert in graph isomorphism. As such, I am recommending an accept.
train
[ "S1ewogbg5r", "r1ePLWdhsH", "SJeRzb_njH", "H1gU0lOnoH", "SkgQ7VmcjB", "r1g1qS8VjH", "r1gcBHIVjr", "B1eQ7B8NjS", "SkgiTN8EjS", "HyeXHFGS9B", "SkgAYSx2qr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors present mostly theoretical analysis indicating the equivalence of embeddings and structural graph representations. The authors argue that while most of the earlier work consider these to be different, they are actually the same and give theory and empirical results to back up this claim.\n\nThis is not...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_SJxzFySKwH", "r1g1qS8VjH", "r1gcBHIVjr", "B1eQ7B8NjS", "iclr_2020_SJxzFySKwH", "S1ewogbg5r", "HyeXHFGS9B", "SkgAYSx2qr", "iclr_2020_SJxzFySKwH", "iclr_2020_SJxzFySKwH", "iclr_2020_SJxzFySKwH" ]
iclr_2020_S1g8K1BFwS
Probability Calibration for Knowledge Graph Embedding Models
Knowledge graph embedding research has overlooked the problem of probability calibration. We show popular embedding models are indeed uncalibrated. That means probability estimates associated with predicted triples are unreliable. We present a novel method to calibrate a model when ground truth negatives are not available, which is the usual case in knowledge graphs. We propose to use Platt scaling and isotonic regression alongside our method. Experiments on three datasets with ground truth negatives show our contribution leads to well-calibrated models when compared to the gold standard of using negatives. We get significantly better results than the uncalibrated models with all calibration methods. We show isotonic regression offers the best performance overall, not without trade-offs. We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.
accept-poster
The paper proposes a novel method to calibrate a knowledge graph embedding model when ground truth negatives are not available. Essentially, the method relies on generating corrupted triples as negative examples to be used by known approaches (Platt scaling and isotonic regression). This is claimed as the first approach to probability calibration for knowledge graph embedding models, which is considered to be very relevant for practitioners working on knowledge graph embedding (although this is a narrow audience). The paper does not propose a wholly novel method for probability calibration. Instead, the value lies in the experimental insights provided. Some reviewers would have liked to see a more in-depth analysis, but reviewers appreciated the thoroughness of the results, the clear articulation of the findings, and the fact that multiple datasets and models are studied. There was an animated discussion about this paper, but it seems to be a useful contribution to the ICLR community, and I would like to recommend acceptance.
test
[ "HkeLm0hpqH", "BkgKWAKc9S", "ryg1q3W3jr", "rJeJL3ZhsB", "HJxEVhZ3sB", "rygfk2bhoS", "HygS2oZnjr", "BJgfSVlTKB", "HJxwSZbY9r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This is the first work that studies probability calibration for knowledge graph embedding models. In the case where ground-truth negatives are available the authors directly use off-the-shelf established calibration techniques (Platt scaling, isotonic regression). When ground-truth negatives are not available they...
[ 6, 8, -1, -1, -1, -1, -1, 6, 3 ]
[ 3, 4, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_S1g8K1BFwS", "iclr_2020_S1g8K1BFwS", "iclr_2020_S1g8K1BFwS", "BJgfSVlTKB", "HJxwSZbY9r", "BkgKWAKc9S", "HkeLm0hpqH", "iclr_2020_S1g8K1BFwS", "iclr_2020_S1g8K1BFwS" ]
iclr_2020_BylsKkHYvH
Why Not to Use Zero Imputation? Correcting Sparsity Bias in Training Neural Networks
Handling missing data is one of the most fundamental problems in machine learning. Among many approaches, the simplest and most intuitive way is zero imputation, which treats the value of a missing entry simply as zero. However, many studies have experimentally confirmed that zero imputation results in suboptimal performance in training neural networks. Yet, none of the existing work has explained what causes such performance degradation. In this paper, we introduce the variable sparsity problem (VSP), which describes a phenomenon where the output of a predictive model largely varies with respect to the rate of missingness in the given input, and show that it adversarially affects the model performance. We first theoretically analyze this phenomenon and propose a simple yet effective technique to handle missingness, which we refer to as Sparsity Normalization (SN), that directly targets and resolves the VSP. We further experimentally validate SN on diverse benchmark datasets, to show that debiasing the effect of input-level sparsity improves the performance and stabilizes the training of neural networks.
accept-poster
This paper investigates the problem of using zero imputation when input features are missing. The authors study this problem, propose a solution, and evaluate on several benchmark datasets. The reviewers were generally positive about the paper, but had some questions and concerns about the experimental results. The authors addressed these concerns in the rebuttal. The reviewers are generally satisfied and believe that the paper should be accepted.
train
[ "r1lmuyH9tr", "SkxqwIb3or", "Bkl5QuoYoH", "Bkl49OsYoS", "rkl98aE9or", "SkeTlustsB", "Hkx5MFotjH", "Hkg_4YiFiH", "rylbIKoFsH", "SJeshdoFoH", "HJxNcFitiH", "BJgXpqjYsS", "Bkei_wiFoB", "rkgNqJ7iKS", "r1eWppVRFr", "S1g-T-zX5B" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "Zero imputation is studied from a different view by investigating its impact on prediction variability and a normalization scheme is proposed to alleviate this variation. This normalization scales the input neural network so that the output would not be affected much. While such simple yet helpful algorithms are p...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 4, -1 ]
[ "iclr_2020_BylsKkHYvH", "Bkl5QuoYoH", "r1lmuyH9tr", "r1lmuyH9tr", "Hkx5MFotjH", "r1lmuyH9tr", "rkgNqJ7iKS", "rkgNqJ7iKS", "rkgNqJ7iKS", "r1lmuyH9tr", "rkgNqJ7iKS", "S1g-T-zX5B", "r1eWppVRFr", "iclr_2020_BylsKkHYvH", "iclr_2020_BylsKkHYvH", "iclr_2020_BylsKkHYvH" ]
iclr_2020_Hkx1qkrKPr
DropEdge: Towards Deep Graph Convolutional Networks on Node Classification
Over-fitting and over-smoothing are two main obstacles of developing deep Graph Convolutional Networks (GCNs) for node classification. In particular, over-fitting weakens the generalization ability on small dataset, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. This paper proposes DropEdge, a novel and flexible technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graph at each training epoch, acting like a data augmenter and also a message passing reducer. Furthermore, we theoretically demonstrate that DropEdge either reduces the convergence speed of over-smoothing or relieves the information loss caused by it. More importantly, our DropEdge is a general skill that can be equipped with many other backbone models (e.g. GCN, ResGCN, GraphSAGE, and JKNet) for enhanced performance. Extensive experiments on several benchmarks verify that DropEdge consistently improves the performance on a variety of both shallow and deep GCNs. The effect of DropEdge on preventing over-smoothing is empirically visualized and validated as well. Codes are released on~https://github.com/DropEdge/DropEdge.
accept-poster
The paper proposes a very simple but thoroughly evaluated and investigated idea for improving generalization in GCNs. Though the reviews are mixed, and in the post-rebuttal discussion the two negative reviewers stuck to their ratings, the area chair feels that there are no strong grounds for rejection in the negative reviews. Accept.
train
[ "BketVLLVsS", "ByxtBPLNsr", "rJx638IVjB", "B1eC_r5atS", "rJlykwUUcr", "ryl-3sBhcH", "rke283wbYS", "r1lmckKyYr", "HyxLTU-p_r", "S1xtst6ndH", "HyxukVbidr", "HklLbb8quH", "ryxwiUP4uS", "S1lB1PPVdr", "r1l2CcPKdS", "S1lR897_dH", "ryxAKgmudB", "Syemfc3E_B", "r1l1bxGEOr", "BJgOg_9WOH"...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "official_reviewer", "public", "public", "author", "author", "author", "author", "public", "public", "public", "public", "public", "public", "public"...
[ "We really thank the reviewer for the recognition of our contributions to the experimental evaluations and the theoretical justification. Here, we would like to provide more explanations to address the reviewer's concerns.\n  \nQ1. The novelty of DropEdge.\n\nWe agree that our DropEdge is simple and is inspired by...
[ -1, -1, -1, 3, 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, 3, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "ryl-3sBhcH", "B1eC_r5atS", "rJlykwUUcr", "iclr_2020_Hkx1qkrKPr", "iclr_2020_Hkx1qkrKPr", "iclr_2020_Hkx1qkrKPr", "iclr_2020_Hkx1qkrKPr", "HyxukVbidr", "S1xtst6ndH", "HklLbb8quH", "iclr_2020_Hkx1qkrKPr", "r1l2CcPKdS", "BJgOg_9WOH", "r1l1bxGEOr", "Syemfc3E_B", "ryxAKgmudB", "S1lB1PPVd...
iclr_2020_BJe-91BtvH
Masked Based Unsupervised Content Transfer
We consider the problem of translating, in an unsupervised manner, between two domains where one contains some additional information compared to the other. The proposed method disentangles the common and separate parts of these domains and, through the generation of a mask, focuses the attention of the underlying network to the desired augmentation alone, without wastefully reconstructing the entire target. This enables state-of-the-art quality and variety of content translation, as demonstrated through extensive quantitative and qualitative evaluation. Our method is also capable of adding the separate content of different guide images and domains, as well as removing existing separate content. Furthermore, our method enables weakly-supervised semantic segmentation of the separate part of each domain, where only class labels are provided. Our code is available at https://github.com/rmokady/mbu-content-tansfer.
accept-poster
This paper extends the prior work on disentanglement and attention-guided translation to instance-based unsupervised content transfer. The method is somewhat complicated, with five different networks and a multi-component loss function; however, the importance of each component appears to be well justified in the ablation study. Overall the reviewers agree that the experimental section is solid and supports the proposed method well. It demonstrates good performance across a number of transfer tasks, including transfer to out-of-domain images, and that the method outperforms the baselines. For these reasons, I recommend the acceptance of this paper.
train
[ "SkgBIOljjr", "S1eCIHHcoH", "BJxWXS6VsB", "Bygy8Up4jH", "HJlLRHaNoB", "BylcPH6NoB", "S1g1wmaEsr", "H1eBm1OYOB", "S1xJ_D2jtr", "BylwOzYpqr" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your question! \n\nThe mask in z(a,a) is generated using the encodings of the common and specific parts of a, and z(b,b) uses the encodings of the common and specific parts of b. Also, a and b are not symmetrical and since images in A do not contain the specific part, their mask should be minimal.\n\...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 1, 3, 1 ]
[ "S1eCIHHcoH", "BJxWXS6VsB", "S1xJ_D2jtr", "iclr_2020_BJe-91BtvH", "H1eBm1OYOB", "S1xJ_D2jtr", "BylwOzYpqr", "iclr_2020_BJe-91BtvH", "iclr_2020_BJe-91BtvH", "iclr_2020_BJe-91BtvH" ]
iclr_2020_BJlZ5ySKPH
U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based methods, which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters.
accept-poster
The paper proposes a new architecture for unsupervised image2image translation. Following the revision/discussion, all reviewers agree that the proposed ideas are reasonable, well described, convincingly validated, and of clear though limited novelty. Accept.
train
[ "rJeTLRLCFr", "SJl2YNBMjB", "BklxmksZjB", "SkgVvj-ZiS", "B1g6gslaFr", "Syg1daPM9B", "HJeuDHzfYr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "I have read the authors' rebuttal and satisfied with their response. Novelty is a little on the lower side, but thorough writing, results, and insightful comparisons make up for this in my opinion. I have updated my score to 8: Accept.\n\n=====\n\nThis paper proposes an approach to perform image translation called...
[ 8, -1, -1, -1, 6, 6, -1 ]
[ 1, -1, -1, -1, 4, 4, -1 ]
[ "iclr_2020_BJlZ5ySKPH", "B1g6gslaFr", "rJeTLRLCFr", "Syg1daPM9B", "iclr_2020_BJlZ5ySKPH", "iclr_2020_BJlZ5ySKPH", "iclr_2020_BJlZ5ySKPH" ]
iclr_2020_rkem91rtDB
Inductive and Unsupervised Representation Learning on Graph Structured Objects
Inductive and unsupervised graph learning is a critical technique for predictive or information retrieval tasks where label information is difficult to obtain. It is also challenging to make graph learning inductive and unsupervised at the same time, as learning processes guided by reconstruction error based loss functions inevitably demand graph similarity evaluation that is usually computationally intractable. In this paper, we propose a general framework SEED (Sampling, Encoding, and Embedding Distributions) for inductive and unsupervised representation learning on graph structured objects. Instead of directly dealing with the computational challenges raised by graph similarity evaluation, given an input graph, the SEED framework samples a number of subgraphs whose reconstruction errors could be efficiently evaluated, encodes the subgraph samples into a collection of subgraph vectors, and employs the embedding of the subgraph vector distribution as the output vector representation for the input graph. By theoretical analysis, we demonstrate the close connection between SEED and graph isomorphism. Using public benchmark datasets, our empirical study suggests the proposed SEED framework is able to achieve up to 10% improvement, compared with competitive baseline methods.
accept-poster
The paper focuses on the problem of finding dense representations of graph-structured objects in an unsupervised manner. The authors propose a novel framework for solving this problem and show that it improves over competitive baselines. The reviewers generally liked the paper, although were concerned with the strength of the experimental results. During the discussion phase, the authors bolstered the experimental results. The reviewers are satisfied with the resulting paper and agree that it should be accepted.
train
[ "SJgTQ4kwFB", "SJglRfghoS", "B1gi4Thsor", "Hklsbarior", "rklsO2xtiH", "SklWvalKoB", "rkxn8kbtoB", "HkxYLFlKiH", "HkeAKNxptr", "S1xbh5Gg9r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a method for learning graph embeddings and focus specifically on a setting where not all graphs are part of the training data (the inductive setting). The core problem of graph embedding methods is to find a learnable function that maps arbitrary graphs into a fixed-sized vector representation....
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_rkem91rtDB", "SklWvalKoB", "iclr_2020_rkem91rtDB", "HkeAKNxptr", "HkeAKNxptr", "SJgTQ4kwFB", "SJgTQ4kwFB", "S1xbh5Gg9r", "iclr_2020_rkem91rtDB", "iclr_2020_rkem91rtDB" ]
iclr_2020_Bke89JBtvB
Batch-shaping for learning conditional channel gated networks
We present a method that trains large capacity neural networks with significantly improved accuracy and lower dynamic computational cost. This is achieved by gating the deep-learning architecture on a fine-grained-level. Individual convolutional maps are turned on/off conditionally on features in the network. To achieve this, we introduce a new residual block architecture that gates convolutional channels in a fine-grained manner. We also introduce a generally applicable tool batch-shaping that matches the marginal aggregate posteriors of features in a neural network to a pre-specified prior distribution. We use this novel technique to force gates to be more conditional on the data. We present results on CIFAR-10 and ImageNet datasets for image classification, and Cityscapes for semantic segmentation. Our results show that our method can slim down large architectures conditionally, such that the average computational cost on the data is on par with a smaller architecture, but with higher accuracy. In particular, on ImageNet, our ResNet50 and ResNet34 gated networks obtain 74.60% and 72.55% top-1 accuracy compared to the 69.76% accuracy of the baseline ResNet18 model, for similar complexity. We also show that the resulting networks automatically learn to use more features for difficult examples and fewer features for simple examples.
accept-poster
The paper describes a method to train a convolutional network with large capacity, where input-conditioned channel gating is implemented; thus, only parts of the network are used at inference time. The paper builds on previous work, with the main contribution being a "batch-shaping" technique that regularizes the channel gating to follow a beta distribution, combined with L0 regularization. The paper shows that a ResNet trained with this technique can achieve higher accuracy with lower theoretical MACs. A weakness of the paper is that more engineering would be required to convert the theoretical MACs into actual running time, which would further validate the practicality of the approach.
train
[ "rkeAl96dir", "rkgAVHjOjr", "Bkl6sIs_sH", "SJepfyO6Fr", "rJe55cygqH", "SygoXqOR5r", "H1l-C03WYB", "Bkekd9Q0dr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We thank the reviewer for their thoughtful review and constructive suggestions. Responses are included inline:\n\n- We moved Figure 7 to the main paper as suggested. A regular neural network does show similar patterns. However, the interpretability of the model behavior could potentially become easier with gated n...
[ -1, -1, -1, 8, 6, 6, -1, -1 ]
[ -1, -1, -1, 3, 4, 4, -1, -1 ]
[ "rJe55cygqH", "SygoXqOR5r", "SJepfyO6Fr", "iclr_2020_Bke89JBtvB", "iclr_2020_Bke89JBtvB", "iclr_2020_Bke89JBtvB", "Bkekd9Q0dr", "iclr_2020_Bke89JBtvB" ]
iclr_2020_B1xwcyHFDr
Learning Robust Representations via Multi-View Information Bottleneck
The information bottleneck principle provides an information-theoretic method for representation learning, by training an encoder to retain all information which is relevant for predicting the label while minimizing the amount of other, excess information in the representation. The original formulation, however, requires labeled data to identify the superfluous information. In this work, we extend this ability to the multi-view unsupervised setting, where two views of the same underlying entity are provided but the label is unknown. This enables us to identify superfluous information as that not shared by both views. A theoretical analysis leads to the definition of a new multi-view model that produces state-of-the-art results on the Sketchy dataset and label-limited versions of the MIR-Flickr dataset. We also extend our theory to the single-view setting by taking advantage of standard data augmentation techniques, empirically showing better generalization capabilities when compared to common unsupervised approaches for representation learning.
accept-poster
This paper extends the information bottleneck method to the unsupervised representation learning under the multi-view assumption. The work couples the multi-view InfoMax principle with the information bottleneck principle to derive an objective which encourages the representations to contain only the information shared by both views and thus eliminate the effect of independent factors of variations. Recent advances in estimating lower-bounds on mutual information are applied to perform approximate optimisation in practice. The authors empirically validate the proposed approach in two standard multi-view settings. Overall, the reviewers found the presentation clear, and the paper well written and well motivated. The issues raised by the reviewers were addressed in the rebuttal and we feel that the work is well suited for ICLR. We ask the authors to carefully integrate the detailed comments from the reviewers into the manuscript. Finally, the work should investigate and briefly establish a connection to [1]. [1] Wang et al. "Deep Multi-view Information Bottleneck". International Conference on Data Mining 2019 (https://epubs.siam.org/doi/pdf/10.1137/1.9781611975673.5)
train
[ "SJgPFlDnoH", "HJgPS1v2oH", "HygAlJDhsH", "rJgX9T8njB", "rJgolpUhsS", "SkglAno0YS", "BJxd8x7rqH", "HJx75ajTtr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1) How limiting is the multi-view assumption? Are there well-known cases where it doesn't hold? I feel it would be hard to use, say, with text. Has this been discussed in the literature? Some pointers or discussion would be interesting.\n\nWe answered this above as part of shared question (1)\n\n2) Sketchy dataset...
[ -1, -1, -1, -1, -1, 8, 6, 8 ]
[ -1, -1, -1, -1, -1, 1, 5, 4 ]
[ "SkglAno0YS", "HJx75ajTtr", "HJx75ajTtr", "BJxd8x7rqH", "iclr_2020_B1xwcyHFDr", "iclr_2020_B1xwcyHFDr", "iclr_2020_B1xwcyHFDr", "iclr_2020_B1xwcyHFDr" ]
iclr_2020_SJeq9JBFvH
Deep probabilistic subsampling for task-adaptive compressed sensing
The field of deep learning is commonly concerned with optimizing predictive models using large pre-acquired datasets of densely sampled datapoints or signals. In this work, we demonstrate that the deep learning paradigm can be extended to incorporate a subsampling scheme that is jointly optimized under a desired minimum sample rate. We present Deep Probabilistic Subsampling (DPS), a widely applicable framework for task-adaptive compressed sensing that enables end-to-end optimization of an optimal subset of signal samples with a subsequent model that performs a required task. We demonstrate strong performance on reconstruction and classification tasks of a toy dataset, MNIST, and CIFAR10 under stringent subsampling rates in both the pixel and the spatial frequency domain. Due to the task-agnostic nature of the framework, DPS is directly applicable to all real-world domains that benefit from sample rate reduction.
accept-poster
This paper introduces a probabilistic data subsampling scheme that can be optimized end-to-end. The experimental evaluation is a bit weak, focusing mostly on toy-scale problems, and I would have liked to see a discussion of bias in the Gumbel-max gradient estimator. It's also not clear how the free hyperparameters for this method were chosen, which makes me suspect they were tuned on the test set. However, the overall idea is sensible, and the area seems under-explored.
train
[ "Syerno7kqH", "SkxGMh42oS", "HygTnR92iS", "BJxGYGYijH", "BJxwQLGOsH", "B1gZYamuor", "r1gTIpmuiH", "BkeuqUfOsr", "H1lPXEMuoH", "r1xxdbB9KH", "r1lbykR6YB" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a novel DPS(Deep Probabilistic Subsampling) framework for the task-adaptive subsampling case, which attempts to resolve the issue of end-to-end optimization of an optimal subset of signal with jointly learning a sub-Nyquist sampling scheme and a predictive model for downstream tasks. The pa...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_SJeq9JBFvH", "BJxGYGYijH", "H1lPXEMuoH", "r1gTIpmuiH", "r1lbykR6YB", "Syerno7kqH", "Syerno7kqH", "r1lbykR6YB", "r1xxdbB9KH", "iclr_2020_SJeq9JBFvH", "iclr_2020_SJeq9JBFvH" ]
iclr_2020_SJx0q1rtvS
Robust anomaly detection and backdoor attack detection via differential privacy
Outlier detection and novelty detection are two important topics for anomaly detection. Suppose the majority of a dataset are drawn from a certain distribution, outlier detection and novelty detection both aim to detect data samples that do not fit the distribution. Outliers refer to data samples within this dataset, while novelties refer to new samples. In the meantime, backdoor poisoning attacks for machine learning models are achieved through injecting poisoning samples into the training dataset, which could be regarded as “outliers” that are intentionally added by attackers. Differential privacy has been proposed to avoid leaking any individual’s information, when aggregated analysis is performed on a given dataset. It is typically achieved by adding random noise, either directly to the input dataset, or to intermediate results of the aggregation mechanism. In this paper, we demonstrate that applying differential privacy could improve the utility of outlier detection and novelty detection, with an extension to detect poisoning samples in backdoor attacks. We first present a theoretical analysis on how differential privacy helps with the detection, and then conduct extensive experiments to validate the effectiveness of differential privacy in improving outlier detection, novelty detection, and backdoor attack detection.
accept-poster
Thanks for the submission. This paper leverages the stability of differential privacy for the problems of anomaly and backdoor attack detection. The reviewers agree that this application of differential privacy is novel. The theory of the paper appears to be a bit weak (with very strong assumptions on the private learner), although it reflects the basic underlying idea of the detection technique. The paper also provides some empirical evaluation of the technique.
train
[ "SJlQuoeTcB", "HygQfGu2jH", "H1gknyb_or", "H1xu6x-Oor", "Bke8IlWuoB", "S1gm7kORtB", "Syxdja9Mqr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper leverages differential privacy’s stability properties to investigate its use for improved anomaly and backdoor attack detection. Under an assumption (called “uniformly asymptotic empirical risk minimization”), the authors show that difference between the expected loss of a differentially private learnin...
[ 6, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_SJx0q1rtvS", "H1gknyb_or", "SJlQuoeTcB", "S1gm7kORtB", "Syxdja9Mqr", "iclr_2020_SJx0q1rtvS", "iclr_2020_SJx0q1rtvS" ]
iclr_2020_B1gHokBKwS
Learning to Guide Random Search
We are interested in derivative-free optimization of high-dimensional functions. The sample complexity of existing methods is high and depends on problem dimensionality, unlike the dimensionality-independent rates of first-order methods. The recent success of deep learning suggests that many datasets lie on low-dimensional manifolds that can be represented by deep nonlinear models. We therefore consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold. We develop an online learning approach that learns this manifold while performing the optimization. In other words, we jointly learn the manifold and optimize the function. Our analysis suggests that the presented method significantly reduces sample complexity. We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems. Our method achieves significantly lower sample complexity than Augmented Random Search, Bayesian optimization, covariance matrix adaptation (CMA-ES), and other derivative-free optimization algorithms.
accept-poster
This paper develops a methodology to perform global derivative-free optimization of high dimensional functions through random search on a lower dimensional manifold that is carefully learned with a neural network. In thorough experiments on reinforcement learning tasks and a real world airfoil optimization task, the authors demonstrate the effectiveness of their method compared to strong baselines. The reviewers unanimously agreed that the paper was above the bar for acceptance and thus the recommendation is to accept. An interesting direction for future work might be to combine this methodology with REMBO. REMBO seems competitive in the experiments (but maybe doesn't work as well early on since the model needs to learn the manifold). Learning both the low dimensional manifold to do the optimization over and then performing a guided search through Bayesian optimization instead of a random strategy might get the best of both worlds?
train
[ "H1gLHy9L9B", "Hyx0NWWCcB", "r1lilgjFjB", "B1lfR0FYjS", "rkeVidtYjH", "H1ltjUFtoH", "B1lK0Nv6KS", "rye3yb6ptr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nIn this paper, the authors first improve the gradient estimator in (Flaxman et al., 2004) zeroth-order optimization by exploiting low-rank structure. Then, the authors exploit machine learning to automatically discover the lower dimensional space in which the optimization is actually conducted. The authors justi...
[ 6, 8, -1, -1, -1, -1, 6, 6 ]
[ 3, 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_B1gHokBKwS", "iclr_2020_B1gHokBKwS", "B1lK0Nv6KS", "rye3yb6ptr", "H1gLHy9L9B", "Hyx0NWWCcB", "iclr_2020_B1gHokBKwS", "iclr_2020_B1gHokBKwS" ]
iclr_2020_B1lDoJSYDH
Lagrangian Fluid Simulation with Continuous Convolutions
We present an approach to Lagrangian fluid simulation with a new type of convolutional network. Our networks process sets of moving particles, which describe fluids in space and time. Unlike previous approaches, we do not build an explicit graph structure to connect the particles but use spatial convolutions as the main differentiable operation that relates particles to their neighbors. To this end we present a simple, novel, and effective extension of N-D convolutions to the continuous domain. We show that our network architecture can simulate different materials, generalizes to arbitrary collision geometries, and can be used for inverse problems. In addition, we demonstrate that our continuous convolutions outperform prior formulations in terms of accuracy and speed.
accept-poster
The paper proposes an approach for N-D continuous convolution on unordered particle set and applies it to Lagrangian fluid simulation. All reviewers found the paper to be a novel and useful contribution towards the problem of N-D continuous convolution on unordered particles. I recommend acceptance.
train
[ "HJxau-3tir", "r1gOjm0YiS", "SygIGXDRtB", "HygzfjhFjB", "B1xTMG3toH", "Bkl6MZhKsr", "HylajenFiH", "rkx3iqf0KH", "H1li4T0GcS" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank R4 for the comments and suggestions.\n\nQ: “However, one major concern I had was that it seems all of the training data was generated in box-like environments, which could easily lead to overfitting. This was alleviated by the results showing that although the network was trained only in boxes, it general...
[ -1, -1, 8, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, 5, 3 ]
[ "rkx3iqf0KH", "Bkl6MZhKsr", "iclr_2020_B1lDoJSYDH", "iclr_2020_B1lDoJSYDH", "H1li4T0GcS", "HylajenFiH", "SygIGXDRtB", "iclr_2020_B1lDoJSYDH", "iclr_2020_B1lDoJSYDH" ]
iclr_2020_rkxDoJBYPB
Reinforced Genetic Algorithm Learning for Optimizing Computation Graphs
We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler. Unlike earlier learning-based works that require training the optimizer on the same graph to be optimized, we propose a learning approach that trains an optimizer offline and then generalizes to previously unseen graphs without further training. This allows our approach to produce high-quality execution decisions on real-world TensorFlow graphs in seconds instead of hours. We consider two optimization tasks for computation graphs: minimizing running time and peak memory usage. In comparison to an extensive set of baselines, our approach achieves significant improvements over classical and other learning-based methods on these two tasks.
accept-poster
The submission presents an approach that leverages machine learning to optimize the placement and scheduling of computation graphs (such as TensorFlow graphs) by a compiler. The work is interesting and well-executed. All reviewers recommend accepting the paper.
train
[ "HJxVQ5MnjH", "S1xv-cfhsH", "r1ezDrz2iS", "rkezBNn5oH", "ryxXud915S", "SJlyOpyNcH", "SJlqiP0S5H" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "> - Could the authors clarify why the two methods mentioned in “Learning to directly predict a solution” has quadratic complexity w.r.t. # of nodes and whereas REGEL is linear?\n\nLet n be the number of nodes in the input graph for which placement and scheduling decisions need to be predicted. Predicting the decis...
[ -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "S1xv-cfhsH", "SJlyOpyNcH", "SJlqiP0S5H", "ryxXud915S", "iclr_2020_rkxDoJBYPB", "iclr_2020_rkxDoJBYPB", "iclr_2020_rkxDoJBYPB" ]
iclr_2020_SylKikSYDH
Compressive Transformers for Long-Range Sequence Modelling
We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning. We find the Compressive Transformer obtains state-of-the-art language modelling results in the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97 bpc respectively. We also find it can model high-frequency speech effectively and can be used as a memory mechanism for RL, demonstrated on an object matching task. To promote the domain of long-range sequence learning, we propose a new open-vocabulary language modelling benchmark derived from books, PG-19.
accept-poster
The paper proposes a "compressive transformer", an extension of the transformer, that keeps a compressed long-term memory in addition to the fixed-size memory. Both memories can be queried using attention weights. Unlike TransformerXL, which discards the oldest memories, the authors propose to "compress" those memories. The main contribution of this work is that it introduces a model that can handle extremely long sequences. The authors also introduce a new language modeling dataset based on text from Project Gutenberg that has much longer sequences of words than existing datasets. They provide comprehensive experiments comparing against different compression strategies and compare against previous methods, showing that this method is able to result in lower word-level perplexity. In addition, the authors also present evaluations on speech and image sequences for RL. Initially the paper received weak positive responses from the reviewers. The reviewers pointed out some clarity issues with details of the method and figures and some questions about design decisions. After rebuttal, all of the reviewers expressed that they were very satisfied with the authors responses and increased their scores (for a final of 2 accepts and 1 weak accept). The authors have provided a thorough and well-written paper, with comprehensive and convincing experiments. In addition, the ability to model long-range sequences and dependencies is an important problem and the AC agrees that this paper makes a solid contribution in tackling that problem. Thus, acceptance is recommended.
train
[ "HkxPYrrhYS", "rkloDrohKB", "Hylu-DRpYH", "ryl_UkcYoH", "BkgQZoLdoB", "B1e2NFwSjH", "ByltkFPHsB", "ByebjUDHjB", "rJgAnjgk5S", "B1eU8key9B", "HyxIZ7ZaYr", "rkxqwxZatB", "SylGq1FVdH", "BkeY9OZfdH", "Skxx2RqbOr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "author", "author", "public", "public", "public" ]
[ "This paper proposes a way to compress past hidden states for modeling long sequences. Attention is used to query the compressed representation. The authors introduce several methods for compression such as convolution, pooling etc. The outcome is a versatile model that enables long-range sequence modeling, achievi...
[ 8, 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_SylKikSYDH", "iclr_2020_SylKikSYDH", "iclr_2020_SylKikSYDH", "BkgQZoLdoB", "iclr_2020_SylKikSYDH", "HkxPYrrhYS", "rkloDrohKB", "Hylu-DRpYH", "B1eU8key9B", "rkxqwxZatB", "SylGq1FVdH", "BkeY9OZfdH", "iclr_2020_SylKikSYDH", "Skxx2RqbOr", "iclr_2020_SylKikSYDH" ]
iclr_2020_HylAoJSKvH
A Stochastic Derivative Free Optimization Method with Momentum
We consider the problem of unconstrained minimization of a smooth objective function in Rd in a setting where only function evaluations are possible. We propose and analyze a stochastic zeroth-order method with heavy ball momentum. In particular, we propose SMTP, a momentum version of the stochastic three-point method (STP) of Bergou et al. (2019). We show new complexity results for non-convex, convex and strongly convex functions. We test our method on a collection of continuous control tasks on several MuJoCo (Todorov et al., 2012) environments with varying difficulty and compare against STP, other state-of-the-art derivative-free optimization algorithms and against policy gradient methods. SMTP significantly outperforms STP and all other methods that we considered in our numerical experiments. Our second contribution is SMTP with importance sampling which we call SMTP_IS. We provide convergence analysis of this method for non-convex, convex and strongly convex objectives.
accept-poster
A new method for derivative-free optimization including momentum and importance sampling is proposed. All reviewers agreed that the paper deserves acceptance. Acceptance is recommended.
train
[ "Hye_vaCb5B", "Sye7wOqniB", "ryxmy_9nsH", "B1g-jNqhoS", "S1lvKE9noS", "rJxwjjnhYr", "Hye-lZhE5B" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors extend recent the recent stochastic three-point (STP) method to allow for Polyak-style momentum, as well as momentum with importance sampling. They provide a range of analysis that mostly extends existing STP results to the STP+momentum case. Most of these results are similar in spirit to stochastic gr...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_HylAoJSKvH", "rJxwjjnhYr", "Hye_vaCb5B", "S1lvKE9noS", "Hye-lZhE5B", "iclr_2020_HylAoJSKvH", "iclr_2020_HylAoJSKvH" ]
iclr_2020_SylzhkBtDB
Understanding and Improving Information Transfer in Multi-Task Learning
We investigate multi-task learning approaches that use a shared feature representation for all tasks. To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task. We study the theory of this setting on linear and ReLU-activated models. Our key observation is that whether or not tasks' data are well-aligned can significantly affect the performance of multi-task learning. We show that misalignment between task data can cause negative transfer (or hurt performance) and provide sufficient conditions for positive transfer. Inspired by the theoretical insights, we show that aligning tasks' embedding layers leads to performance gains for multi-task training and transfer learning on the GLUE benchmark and sentiment analysis tasks; for example, we obtained a 2.35% GLUE score average improvement on 5 GLUE tasks over BERT LARGE using our alignment method. We also design an SVD-based task re-weighting scheme and show that it improves the robustness of multi-task training on a multi-label image dataset.
accept-poster
Many existing approaches in multi-task learning rely on intuitions about how to transfer information. This paper, instead, tries to answer what "information transfer" even means in this context. Such ideas have already been presented in the past, but the approach taken here is novel, rigorous and well-explained. The reviewers agreed that this is a good paper, although they wished to see the analysis conducted using more practical models. For the camera ready version it would help to make the paper look less dense.
train
[ "SJeBOI5jjB", "SylK5Ywoir", "rJea5zaPir", "SkgiOcavjB", "r1x_6VTwsS", "rkee1Z6PiS", "rJlRuaviKB", "SJeNT1T3YH", "rkl9a2T3KS" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We use alternative optimization in our implementation of Alg 1. For each epoch, we iterate over all the task batches. If the current batch is from task $i$, then the SGD is applied on $A_i$ and $R_i$. The other parameters are fixed. We have revised Alg 1 to clarify this step and also included the description of ou...
[ -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "SylK5Ywoir", "rJea5zaPir", "rkl9a2T3KS", "rJlRuaviKB", "SJeNT1T3YH", "iclr_2020_SylzhkBtDB", "iclr_2020_SylzhkBtDB", "iclr_2020_SylzhkBtDB", "iclr_2020_SylzhkBtDB" ]
iclr_2020_HklXn1BKDH
Learning To Explore Using Active Neural SLAM
This work presents a modular and hierarchical approach to learn policies for exploring 3D environments, called `Active Neural SLAM'. Our approach leverages the strengths of both classical and learning-based methods, by using analytical path planners with a learned SLAM module, and global and local policies. The use of learning provides flexibility with respect to input modalities (in the SLAM module), leverages structural regularities of the world (in global policies), and provides robustness to errors in state estimation (in local policies). Such use of learning within each module retains its benefits, while at the same time, hierarchical decomposition and modular training allow us to sidestep the high sample complexities associated with training end-to-end policies. Our experiments in visually and physically realistic simulated 3D environments demonstrate the effectiveness of our approach over past learning and geometry-based approaches. The proposed model can also be easily transferred to the PointGoal task and was the winning entry of the CVPR 2019 Habitat PointGoal Navigation Challenge.
accept-poster
The paper presents a method for visual robot navigation in simulated environments. The proposed method combines several modules, such as a mapper, global policy, planner, and local policy, for point-goal navigation. The overall approach is reasonable and the pipeline can be modularly trained. The experimental results on navigation tasks show strong performance, especially in generalization settings.
train
[ "BJgKgprsjr", "B1xj41B9sS", "Bklq_0EqsH", "SylTeTEqsH", "SyeU6jNqor", "S1lLOLkZYr", "SklnIsw2FB", "r1e0iqQIqB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your answers, they address my questions.", "We thank the reviewers for the helpful feedback. The reviewers have appreciated our realistic experimental design, strong generalization results, and ablation studies. They found our experiments convincing in their comparisons with the state of the art. W...
[ -1, -1, -1, -1, -1, 6, 3, 8 ]
[ -1, -1, -1, -1, -1, 4, 1, 3 ]
[ "Bklq_0EqsH", "iclr_2020_HklXn1BKDH", "S1lLOLkZYr", "SklnIsw2FB", "r1e0iqQIqB", "iclr_2020_HklXn1BKDH", "iclr_2020_HklXn1BKDH", "iclr_2020_HklXn1BKDH" ]
iclr_2020_HJem3yHKwH
EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks
Ensuring robustness of Deep Neural Networks (DNNs) is crucial to their adoption in safety-critical applications such as self-driving cars, drones, and healthcare. Notably, DNNs are vulnerable to adversarial attacks in which small input perturbations can produce catastrophic misclassifications. In this work, we propose EMPIR, ensembles of quantized DNN models with different numerical precisions, as a new approach to increase robustness against adversarial attacks. EMPIR is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. EMPIR overcomes this limitation to achieve the ``best of both worlds", i.e., the higher unperturbed accuracies of the full precision models combined with the higher robustness of the low precision models, by composing them in an ensemble. Further, as low precision DNN models have significantly lower computational and storage requirements than full precision models, EMPIR models only incur modest compute and memory overheads compared to a single full-precision model (<25% in our evaluations). We evaluate EMPIR across a suite of DNNs for 3 different image recognition tasks (MNIST, CIFAR-10 and ImageNet) and under 4 different adversarial attacks. Our results indicate that EMPIR boosts the average adversarial accuracies by 42.6%, 15.2% and 10.5% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively, when compared to single full-precision models, without sacrificing accuracy on the unperturbed inputs.
accept-poster
This paper proposed to apply ensembles of high-precision deep networks and low-precision ones to improve robustness against adversarial attacks without heavily increasing time and memory costs. Experiments on different tasks under various types of adversarial attacks show the proposed method improves the robustness of the models without sacrificing accuracy on normal input. The idea is simple and effective. Some reviewers raised concerns about the novelty of the idea and the comparisons with related work, but I think the authors give convincing answers to these questions.
train
[ "HklscZ5doS", "SJx1vWc_sr", "B1x6WxqOoB", "H1xg6DlTFr", "BkgEnMcyqS", "HJgE7loI5H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their positive comments. As correctly pointed out by the reviewer, this work was intended to showcase an alternative low-cost approach to increasing the robustness of deep learning models through the use of low-precision models, without sacrificing accuracy on the original unperturbed exa...
[ -1, -1, -1, 6, 6, 1 ]
[ -1, -1, -1, 1, 3, 5 ]
[ "H1xg6DlTFr", "BkgEnMcyqS", "HJgE7loI5H", "iclr_2020_HJem3yHKwH", "iclr_2020_HJem3yHKwH", "iclr_2020_HJem3yHKwH" ]
iclr_2020_rkxNh1Stvr
Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel
Neural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough: the uncertainty (i.e. risk or confidence) of that prediction must also be estimated. Standard NNs, which are most often used in such tasks, do not provide uncertainty information. Existing approaches address this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not predict as accurately as standard NNs. In this paper, a new framework (RIO) is developed that makes it possible to estimate uncertainty in any pretrained standard NN. The behavior of the NN is captured by modeling its prediction residuals with a Gaussian Process, whose kernel includes both the NN's input and its output. The framework is justified theoretically and evaluated in twelve real-world datasets, where it is found to (1) provide reliable estimates of uncertainty, (2) reduce the error of the point predictions, and (3) scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient for building real-world NN applications.
accept-poster
This paper presents a method to model uncertainty in deep learning regressors by applying a post-hoc procedure. Specifically, the authors model the residuals of neural networks using Gaussian processes, which provide a principled Bayesian estimate of uncertainty. The reviewers were initially mixed and a fourth reviewer was brought in for an additional perspective. The reviewers found that the paper was well written, well motivated and found the methodology sensible and experiments compelling. AnonReviewer4 raised issues with the theoretical exposition of the paper (going so far as to suggest that moving the theory into the supplementary and using the reclaimed space for additional clarifications would make the paper stronger). The reviewers found the author response compelling and as a result the reviewers have come to a consensus to accept. Thus the recommendation is to accept the paper. Please do take the reviewer feedback into account in preparing the camera ready version. In particular, please do address the remaining concerns from AnonReviewer4 regarding the theoretical portion of the paper. It seems that the methodological and empirical portions of the paper are strong enough to stand on their own (and therefore the recommendation for an accept). Adding theory just for the sake of having theory seems to detract from the message (particularly if it is irrelevant or incorrect as initially pointed out by the reviewer).
test
[ "S1gTU-Ev5S", "HJx95fEwYH", "HkgA-Il2jH", "HyebnYqnoB", "HJxRrPxhiS", "HyeqxqPniB", "ByeMeJe3oS", "Skl8jSxnjr", "rJxjWOxhjr", "BklZCwgnsB", "ryeapbx2sB", "BJgfqye3sH", "BkleE8xhor", "Syge0X1BqB", "ryx5FWhcqS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "# Summary\n\nThe authors propose a method for post-hoc correction and predictive variance estimation for neural network models. The method fits a GP to the model residuals, and learns a composite kernel that combines two kernels defined on the input space and the model’s output space (called RIO, R for residual, a...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_rkxNh1Stvr", "iclr_2020_rkxNh1Stvr", "HJx95fEwYH", "HyeqxqPniB", "ryeapbx2sB", "ryeapbx2sB", "iclr_2020_rkxNh1Stvr", "Syge0X1BqB", "BklZCwgnsB", "HJxRrPxhiS", "S1gTU-Ev5S", "ryx5FWhcqS", "HkgA-Il2jH", "iclr_2020_rkxNh1Stvr", "iclr_2020_rkxNh1Stvr" ]
iclr_2020_H1gBhkBFDH
B-Spline CNNs on Lie groups
Group convolutional neural networks (G-CNNs) can be used to improve classical CNNs by equipping them with the geometric structure of groups. Central in the success of G-CNNs is the lifting of feature maps to higher dimensional disentangled representations, in which data characteristics are effectively learned, geometric data-augmentations are made obsolete, and predictable behavior under geometric transformations (equivariance) is guaranteed via group theory. Currently, however, the practical implementations of G-CNNs are limited to either discrete groups (that leave the grid intact) or continuous compact groups such as rotations (that enable the use of Fourier theory). In this paper we lift these limitations and propose a modular framework for the design and implementation of G-CNNs for arbitrary Lie groups. In our approach the differential structure of Lie groups is used to expand convolution kernels in a generic basis of B-splines that is defined on the Lie algebra. This leads to a flexible framework that enables localized, atrous, and deformable convolutions in G-CNNs by means of respectively localized, sparse and non-uniform B-spline expansions. The impact and potential of our approach is studied on two benchmark datasets: cancer detection in histopathology slides (PCam dataset) in which rotation equivariance plays a key role and facial landmark localization (CelebA dataset) in which scale equivariance is important. In both cases, G-CNN architectures outperform their classical 2D counterparts and the added value of atrous and localized group convolutions is studied in detail.
accept-poster
The paper describes principles for endowing a neural architecture with invariance with respect to a Lie group. The contribution is that these principles can accommodate discrete and continuous groups, through approximation via a basis family (B-splines). The main criticisms were related to the intelligibility of the paper and the practicality of the approach, implementation-wise. Significant improvements have been made and the paper has been partially rewritten during the rebuttal period. Other criticisms were related to the efficiency of the approach, regarding how the property of invariance holds under the approximations done. These comments were addressed in the rebuttal and the empirical comparison with data augmentation also supports the merits of the approach. This leads me to recommend acceptance. I urge the authors to extend the description and discussion about the experimental validation.
train
[ "ByeIRAAjiS", "BJxluT0ijS", "Hke2w8J7jH", "rJgO04yXjr", "HyxYiVyXjS", "Hkg4Wa0MiH", "SkgVdEbAFB", "HyeS9jyxqS", "BJgI5GfF9B" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "P.s. With respect to implementation challenges: Here is a code snippet (also promised to Rev1) that illustrates the type of coding that needs to be done. See code link above for more detail.\n\n** In a “group class” file “SE2.py” we define:\n\nclass H:\n # Group product: two rotation angles simply add up\n def...
[ -1, -1, -1, -1, -1, -1, 8, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 1 ]
[ "Hke2w8J7jH", "iclr_2020_H1gBhkBFDH", "SkgVdEbAFB", "HyxYiVyXjS", "HyeS9jyxqS", "BJgI5GfF9B", "iclr_2020_H1gBhkBFDH", "iclr_2020_H1gBhkBFDH", "iclr_2020_H1gBhkBFDH" ]
iclr_2020_Skx82ySYPH
Neural Outlier Rejection for Self-Supervised Keypoint Learning
Identifying salient points in images is a crucial component for visual odometry, Structure-from-Motion or SLAM algorithms. Recently, several learned keypoint methods have demonstrated compelling performance on challenging benchmarks. However, generating consistent and accurate training data for interest-point detection in natural images still remains challenging, especially for human annotators. We introduce IO-Net (i.e. InlierOutlierNet), a novel proxy task for the self-supervision of keypoint detection, description and matching. By making the sampling of inlier-outlier sets from point-pair correspondences fully differentiable within the keypoint learning framework, we show that we are able to simultaneously self-supervise keypoint description and improve keypoint matching. Second, we introduce KeyPointNet, a keypoint-network architecture that is especially amenable to robust keypoint detection and description. We design the network to allow local keypoint aggregation to avoid artifacts due to spatial discretizations commonly used for this task, and we improve fine-grained keypoint descriptor performance by taking advantage of efficient sub-pixel convolutions to upsample the descriptor feature-maps to a higher operating resolution. Through extensive experiments and ablative analysis, we show that the proposed self-supervised keypoint learning method greatly improves the quality of feature matching and homography estimation on challenging benchmarks over the state-of-the-art.
accept-poster
This paper proposes a solid (if somewhat incremental) improvement on an interesting and well-studied problem. I suggest accepting it.
test
[ "HJeKkwvk5H", "rklyawTEYS", "Bkes8ku3sS", "BkluihCpYr", "SJg5VTHnsH", "rJgDnhXcjH", "Hkx7DhXqoH", "Bylh6imcsr", "S1gX9jm5sr", "BygUyjQ5iH", "HklcBcmcsr" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "The following work proposes several improvements over prior works in unsupervised/self-supervised keypoint-descriptor learning such as Christiansen et al. One improvement is the relaxation of the cell-boundaries for keypoint prediction -- specifically allowing keypoints anchored at the cell's center to be offset i...
[ 6, 8, -1, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, -1, 1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_Skx82ySYPH", "iclr_2020_Skx82ySYPH", "SJg5VTHnsH", "iclr_2020_Skx82ySYPH", "Bylh6imcsr", "rklyawTEYS", "rklyawTEYS", "BkluihCpYr", "BkluihCpYr", "HJeKkwvk5H", "iclr_2020_Skx82ySYPH" ]
iclr_2020_SylO2yStDr
Reducing Transformer Depth on Demand with Structured Dropout
Overparametrized transformer networks have obtained state of the art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time. In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality than when training from scratch or using distillation.
accept-poster
This paper presents LayerDrop, a form of structured dropout that allows training one model and then pruning it to a desired depth at test time. This simple method is exciting because you get a smaller, more efficient model at test time for free, as it does not need fine-tuning. The authors show strong results on machine translation, language modelling and a couple of other NLP benchmarks. The reviews are consistently positive, with significant author and reviewer discussion. This is clearly an approach which merits attention, and should be included in ICLR.
test
[ "HJxQ91dooH", "Hye41XHsoB", "rye_ClKqsH", "r1g895NcjB", "HJlfzJEYiH", "HkgC027tjH", "S1xKtsXYjB", "SkgVhtmtsH", "Hkg3glcHtr", "S1xLPNk3YH", "BklKzkFpYS", "r1lYne7iFS", "H1gVxO9vYS", "H1gM1Ef8Yr", "HklXwwmCOS", "S1x8GuyW_B", "rJg38HoyuS" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public" ]
[ "Thanks for your fast response! \n\nRe: dev results - We will make clear that the BERT results are already on dev set (and edit the caption of Figure 3) and the LM results as well in the ablations, by adding them to Appendix Table 8 which also shows dev results on WMT and Long Form Question Answering (the first par...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "Hye41XHsoB", "HJlfzJEYiH", "r1g895NcjB", "S1xKtsXYjB", "Hkg3glcHtr", "iclr_2020_SylO2yStDr", "S1xLPNk3YH", "BklKzkFpYS", "iclr_2020_SylO2yStDr", "iclr_2020_SylO2yStDr", "iclr_2020_SylO2yStDr", "H1gVxO9vYS", "iclr_2020_SylO2yStDr", "HklXwwmCOS", "iclr_2020_SylO2yStDr", "rJg38HoyuS", ...
iclr_2020_HJeT3yrtDr
Cross-Lingual Ability of Multilingual BERT: An Empirical Study
Recent work has exhibited the surprising cross-lingual abilities of multilingual BERT (M-BERT) -- surprising since it is trained without any cross-lingual objective and with no aligned data. In this work, we provide a comprehensive study of the contribution of different components in M-BERT to its cross-lingual ability. We study the impact of linguistic properties of the languages, the architecture of the model, and the learning objectives. The experimental study is done in the context of three typologically different languages -- Spanish, Hindi, and Russian -- and using two conceptually different NLP tasks, textual entailment and named entity recognition. Among our key conclusions is the fact that the lexical overlap between languages plays a negligible role in the cross-lingual success, while the depth of the network is an integral part of it. All our models and implementations can be found on our project page: http://cogcomp.org/page/publication_view/900.
accept-poster
This paper introduces a set of new analysis methods to try to better understand the reasons that multilingual BERT succeeds. The findings substantially bolster the hypothesis behind the original multilingual BERT work: that this kind of model discovers and uses substantial structural and semantic correspondences between languages in a fully unsupervised setting. This is a remarkable result with serious implications for representation learning work more broadly. All three reviewers saw ways in which the paper could be expanded or improved, and one reviewer argued that the novelty and scope of the paper are below the standard for ICLR. However, I am inclined to side with the two more confident reviewers and argue for acceptance. I don't see any substantive reasons to reject the paper, the methods are novel and appropriate (even in light of the prior work that exists on this question), and the results are surprising and relevant to a high-profile ongoing discussion in the literature on representation learning for language.
train
[ "rylalmYhiH", "SJlML-Y2iB", "BJg2ogFnir", "rkl0klY2sS", "ryekGZ3ijB", "B1xvDADRYH", "Sklz-Z8JcH", "rJlUTFWI9B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have updated our paper with additional experiments that strengthen our contributions. Please refer to the appendix.\n\nStructural Similarity: \nWe found that reviewers and others are slightly concerned about the structural similarity, mainly due to its abstract nature. To illustrate the necessity of structural ...
[ -1, -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2020_HJeT3yrtDr", "B1xvDADRYH", "Sklz-Z8JcH", "rJlUTFWI9B", "B1xvDADRYH", "iclr_2020_HJeT3yrtDr", "iclr_2020_HJeT3yrtDr", "iclr_2020_HJeT3yrtDr" ]
iclr_2020_rkl03ySYDH
SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition
The ability to decompose complex multi-object scenes into meaningful abstractions like objects is fundamental to achieve higher-level cognition. Previous approaches for unsupervised object-oriented scene representation learning are either based on spatial-attention or scene-mixture approaches and limited in scalability which is a main obstacle towards modeling real-world scenes. In this paper, we propose a generative latent variable model, called SPACE, that provides a unified probabilistic modeling framework that combines the best of spatial-attention and scene-mixture approaches. SPACE can explicitly provide factorized object representations for foreground objects while also decomposing background segments of complex morphology. Previous models are good at either of these, but not both. SPACE also resolves the scalability problems of previous methods by incorporating parallel spatial-attention and thus is applicable to scenes with a large number of objects without performance degradations. We show through experiments on Atari and 3D-Rooms that SPACE achieves the above properties consistently in comparison to SPAIR, IODINE, and GENESIS. Results of our experiments can be found on our project website: https://sites.google.com/view/space-project-page
accept-poster
The paper makes a reasonable contribution to generative modeling for unsupervised scene decomposition. The revision and rebuttal addressed the primary criticisms concerning the qualitative comparison and clarity, which caused some of the reviewers to increase their rating. I think the authors have adequately addressed the reviewer concerns. The final version of the paper should still strive to improve clarity, and strengthen the evaluation and ablation studies.
test
[ "S1xhornvcH", "r1g_bxtAYS", "HkxPUtvwsr", "HkePhKL3or", "BJliMFUnjS", "HJx8RO8hoS", "Hkl0ycvvsH", "HyejZnvPjS", "r1x9ajDwiS", "ByxTGjPvsB", "SkglMXumjr", "HJep1tq-qr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n\nIn this paper, the authors propose a generative latent variable model, which is named as SPACE, for unsupervised scene decomposition. The proposed model is built on a hierarchical mixture model: one component for generating foreground and the other one for generating the background, while the model for generat...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_rkl03ySYDH", "iclr_2020_rkl03ySYDH", "iclr_2020_rkl03ySYDH", "HyejZnvPjS", "HJx8RO8hoS", "SkglMXumjr", "S1xhornvcH", "r1x9ajDwiS", "r1g_bxtAYS", "HJep1tq-qr", "iclr_2020_rkl03ySYDH", "iclr_2020_rkl03ySYDH" ]
iclr_2020_rkg-TJBFPB
RIDE: Rewarding Impact-Driven Exploration for Procedurally-Generated Environments
Exploration in sparse reward environments remains one of the key challenges of model-free reinforcement learning. Instead of solely relying on extrinsic rewards provided by the environment, many state-of-the-art methods use intrinsic rewards to encourage exploration. However, we show that existing methods fall short in procedurally-generated environments where an agent is unlikely to visit a state more than once. We propose a novel type of intrinsic reward which encourages the agent to take actions that lead to significant changes in its learned state representation. We evaluate our method on multiple challenging procedurally-generated tasks in MiniGrid, as well as on tasks with high-dimensional observations used in prior work. Our experiments demonstrate that this approach is more sample efficient than existing exploration methods, particularly for procedurally-generated MiniGrid environments. Furthermore, we analyze the learned behavior as well as the intrinsic reward received by our agent. In contrast to previous approaches, our intrinsic reward does not diminish during the course of training and it rewards the agent substantially more for interacting with objects that it can control.
accept-poster
This paper tackles the problem of exploration in deep reinforcement learning in procedurally-generated environments, where the same state is rarely encountered twice. The authors show that existing methods do not perform well in these settings and propose an approach based on an intrinsic reward bonus to address this problem. More specifically, they combine two existing ideas for training RL policies: 1) using implicit rewards based on latent state representations (Pathak et al. 2017) and 2) using implicit rewards based on the difference between subsequent states (Marino et al. 2019). Most concerns of the reviewers have been addressed in the rebuttals. Given that it builds so closely on existing ideas, the main weakness of this work seems to be the novelty. The strength of this paper resides in the extensive experiments and analysis that highlight the shortcomings of current techniques and provide insight into the behaviour of trained agents, in addition to proposing a strategy which improves upon existing methods. The reviewers all agree that the paper should be accepted. I therefore recommend acceptance.
train
[ "S1xw9yhZcH", "SkxCUr2QsH", "Bkg-fH3QiH", "rkgST4hQsS", "SyxiSE2XiB", "r1l45lqTKH", "HkxHBiPAKr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\nThis paper proposes a Rewarding Impact-Driven Exploration (RIDE), which is an intrinsic exploration bonus for procedurally-generated environments. RIDE is built upon the ICM architecture (Pathak et al. 2017), which learns a state feature representation by minimizing the L2 distance between the actual next...
[ 6, -1, -1, -1, -1, 8, 6 ]
[ 3, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rkg-TJBFPB", "HkxHBiPAKr", "r1l45lqTKH", "S1xw9yhZcH", "iclr_2020_rkg-TJBFPB", "iclr_2020_rkg-TJBFPB", "iclr_2020_rkg-TJBFPB" ]
iclr_2020_SkxQp1StDH
Low-dimensional statistical manifold embedding of directed graphs
We propose a novel node embedding of directed graphs to statistical manifolds, which is based on a global minimization of pairwise relative entropy and graph geodesics in a non-linear way. Each node is encoded with a probability density function over a measurable space. Furthermore, we analyze the connection between the geometrical properties of such an embedding and its efficient learning procedure. Extensive experiments show that our proposed embedding better preserves the global geodesic information of graphs and outperforms existing embedding models on directed graphs across a variety of evaluation metrics, in an unsupervised setting.
accept-poster
The paper proposes an embedding for nodes in a directed graph, which takes into account the asymmetry. The proposed method learns an embedding of a node as an exponential distribution (e.g. Gaussian), on a statistical manifold. The authors also provide an approximation for large graphs, and show that the method performs well in empirical comparisons. The authors were very responsive in the discussion phase, providing new experiments in response to the reviews. This is a nice example where a good paper is improved by several extra suggestions by reviewers. I encourage the authors to provide all the software for reproducing their work in the final version. Overall, this is a great paper which proposes a new graph embedding approach that is scalable and provides nice empirical results.
train
[ "BklRqyqtsH", "SkxP3rGcjr", "HygOs6FtoB", "HJgXmiYFsr", "Sye7nK0ItH", "B1lxARc3KS", "SyetJDjTYH" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n1. In our paper, we have used the following baselines for directed graphs: HOPE [Ou et al. in ACM SIGKDD 2016], APP [ZHOU et al. in AAAI 2017] and Graph2Gauss [BOJCHEVSKI et al. in ICLR 2018]. All of them (HOPE, APP, Graph2Gauss) have explicitly written that they consider directions. We will make sure to stress ...
[ -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, 5, 4, 1 ]
[ "B1lxARc3KS", "HJgXmiYFsr", "SyetJDjTYH", "Sye7nK0ItH", "iclr_2020_SkxQp1StDH", "iclr_2020_SkxQp1StDH", "iclr_2020_SkxQp1StDH" ]
iclr_2020_rJg76kStwH
Efficient Probabilistic Logic Reasoning with Graph Neural Networks
Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLN. We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning.
accept-poster
This paper is far more borderline than the review scores indicate. The authors certainly did themselves no favours by posting a response so close to the end of the discussion period, but there was sufficient time to consider the responses after this, and it is somewhat disappointing that the reviewers did not engage. Reviewer 2 states that their only reason for not recommending acceptance is the lack of experiments on more than one KG. The authors point out they have experiments on more than one KG in the paper. From my reading, this is the case. I will consider R2 in favour of the paper in the absence of a response. Reviewer 3 gives a fairly clear initial review which states the main reasons they do not recommend acceptance. While not an expert on the topic of GNNs, I have enough of a technical understanding to deem that the detailed response from the authors to each of the points does address these concerns. In the absence of a response from the reviewer, it is difficult to ascertain whether they would agree, but I will lean towards assuming they are satisfied. Reviewer 1 gives a positive sounding review, with as main criticism "Overall, the work of this paper seems technically sound but I don’t find the contributions particularly surprising or novel. Along with plogicnet, there have been many extensions and applications of Gnns, and I didn’t find that the paper expands this perspective in any surprising way." This statement is simply re-asserted after the author response. I find this style of review entirely inappropriate and unfair: it is not the role of a good scientific publication to "surprise". If it is technically sound, and in an area that the reviewer admits generates interest from reviewers, vague weasel words do not a reason for rejection make. I recommend acceptance.
train
[ "rkgx3TK2tr", "S1xWpo55or", "HJxrNxsciS", "BJlgEyicsr", "SkgLKByAtS", "HygnUyGAYH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper the authors propose a system, called ExpressGNN, that combines MLNs and GNNs. This system is able to perform inference and learning the weights of the logic formulas.\n\nThe proposed approach seems valid and really intriguing. Moreover the problems it tackles, i.e. inference and learning over big kno...
[ 3, -1, -1, -1, 3, 1 ]
[ 3, -1, -1, -1, 4, 4 ]
[ "iclr_2020_rJg76kStwH", "SkgLKByAtS", "HygnUyGAYH", "rkgx3TK2tr", "iclr_2020_rJg76kStwH", "iclr_2020_rJg76kStwH" ]
iclr_2020_BJe8pkHFwS
GraphSAINT: Graph Sampling Based Inductive Learning Method
Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs. To scale GCNs to large graphs, state-of-the-art methods use various layer sampling techniques to alleviate the "neighbor explosion" problem during minibatch training. We propose GraphSAINT, a graph sampling based inductive learning method that improves training efficiency and accuracy in a fundamentally different way. By changing perspective, GraphSAINT constructs minibatches by sampling the training graph, rather than the nodes or edges across GCN layers. In each iteration, a complete GCN is built from the properly sampled subgraph, thus ensuring a fixed number of well-connected nodes in all layers. We further propose a normalization technique to eliminate bias, and sampling algorithms for variance reduction. Importantly, we can decouple the sampling from the forward and backward propagation, and extend GraphSAINT with many architecture variants (e.g., graph attention, jumping connection). GraphSAINT demonstrates superior performance in both accuracy and training time on five large graphs, and achieves new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970).
accept-poster
All three reviewers advocated acceptance. The AC agrees, feeling the paper is interesting.
train
[ "HJl2Jtsfor", "SyeIxl6MiB", "S1x9OrPhjH", "BklyDVDhsH", "rygiO3JFjS", "SJe_auDjFr", "BJeVLY_aFS", "rJlxKLYhcH", "rkgtzFwvFB", "B1e-fZBLtr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thanks a lot for your valuable feedback. We will state our assumption on the theorem more explicitly in our next revision. \n\nAnswer to the question:\n\nYes, there is a typo in Equation 3. Thanks for pointing this out! The correct expression should be $\\mathbb{E}(L_\\text{batch})=\\frac{1}{|\\mathbb{G}|} \\sum\\...
[ -1, -1, -1, -1, -1, 6, 6, 6, -1, -1 ]
[ -1, -1, -1, -1, -1, 5, 1, 3, -1, -1 ]
[ "BJeVLY_aFS", "SJe_auDjFr", "SyeIxl6MiB", "HJl2Jtsfor", "rJlxKLYhcH", "iclr_2020_BJe8pkHFwS", "iclr_2020_BJe8pkHFwS", "iclr_2020_BJe8pkHFwS", "B1e-fZBLtr", "iclr_2020_BJe8pkHFwS" ]
iclr_2020_HyxY6JHKwr
You Only Train Once: Loss-Conditional Training of Deep Networks
In many machine learning problems, loss functions are weighted sums of several terms. A typical approach to dealing with these is to train multiple separate models with different selections of weights and then either choose the best one according to some criterion or keep multiple models if it is desirable to maintain a diverse set of solutions. This is inefficient both at training and at inference time. We propose a method that allows replacing multiple models trained on one loss function each by a single model trained on a distribution of losses. At test time a model trained this way can be conditioned to generate outputs corresponding to any loss from the training distribution of losses. We demonstrate this approach on three tasks with parametrized losses: beta-VAE, learned image compression, and fast style transfer.
accept-poster
The paper proposes and validates a simple idea of training a neural network for a parametric family of losses, using a popular AdaIN mechanism. Following the rebuttal and the revision, all three reviewers recommend acceptance (though weakly). There is a valid concern about the overlap with an ICLR19-workshop paper with essentially the same idea, however the submission is broader in scope and validates the idea on several applications.
train
[ "S1gGP5untH", "rJxU3GVhsH", "ByxPVGVhiS", "HkgibGN3oS", "S1etcbNhjS", "HJegbBKCFr", "r1lkpJl6qS", "B1gpCylRFH", "rJeHvbXItr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Due to the late rebuttal, I was not able to respond during discussion time.\n\nQ1, Q2) sound and look convincing.\nQ3) I still cannot find details on this in the paper. How the validation set was chosen? What was the size? The authors need to make all the experiments fully reproducible.\nQ4, Q5) ok\n\nMy concerns ...
[ 6, -1, -1, -1, -1, 6, 6, -1, -1 ]
[ 4, -1, -1, -1, -1, 3, 4, -1, -1 ]
[ "iclr_2020_HyxY6JHKwr", "iclr_2020_HyxY6JHKwr", "r1lkpJl6qS", "HJegbBKCFr", "S1gGP5untH", "iclr_2020_HyxY6JHKwr", "iclr_2020_HyxY6JHKwr", "rJeHvbXItr", "iclr_2020_HyxY6JHKwr" ]
iclr_2020_rke3TJrtPS
Projection-Based Constrained Policy Optimization
We consider the problem of learning control policies that optimize a reward function while satisfying constraints due to considerations of safety, fairness, or other costs. We propose a new algorithm - Projection-Based Constrained Policy Optimization (PCPO), an iterative method for optimizing policies in a two-step process - the first step performs an unconstrained update while the second step reconciles the constraint violation by projecting the policy back onto the constraint set. We theoretically analyze PCPO and provide a lower bound on reward improvement, as well as an upper bound on constraint violation for each policy update. We further characterize the convergence of PCPO with projection based on two different metrics - L2 norm and Kullback-Leibler divergence. Our empirical results over several control tasks demonstrate that our algorithm achieves superior performance, averaging more than 3.5 times less constraint violation and around 15% higher reward compared to state-of-the-art methods.
accept-poster
The paper proposes a new algorithm for solving constrained MDPs called Projection-Based Constrained Policy Optimization. Compared to CPO, it projects the solution back to the feasible region after each step, which results in improvements on some of the tasks considered. The problem addressed is relevant, as many tasks could have important constraints, e.g. concerning fairness or safety. The method is supported through theory and empirical results. It is great to have theoretical bounds on the policy improvement and constraint violation of the algorithm, although they only apply to the intractable version of the algorithm (another, approximate algorithm is proposed that is used in practice). The experimental evidence is a bit mixed, with the best of the proposed projections (based on the KL approach) sometimes beating CPO but also sometimes being beaten by it, both on the obtained reward and on constraint satisfaction. The method only considers a single constraint, and I'm not sure how trivial it would be to add more than one. The reviewers also mention that the paper does not implement TRPO as in the original paper, where the step size in the direction of the natural gradient is refined with a line search if the original step size (calculated using the quadratic expansion of the expected KL) violates the original constraints. (Line search on the constraint as mentioned by the authors would be a different issue.) Furthermore, the quadratic expansion of the KL is symmetric around the current policy in parameter space. This means that, starting from a feasible solution, the trust region should always overlap with the constraint set when feasibility is maintained, going somewhat against the argument for PCPO as opposed to CPO brought up by the authors in the discussion with R2. I would also show this symmetry in illustrations such as Fig 1 to aid understanding.
train
[ "S1eaRqTtjH", "SyemnsTFoH", "HJx656pYjH", "rkeuT36KjH", "rJxNx-bBKB", "B1xGfhvptS", "BkgdO3S0FS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank Reviewer #1 for the helpful and insightful feedback. We have updated a version based on your suggestions. We provide answers to individual questions below.\n\nReviewer #1’s comment #1: Comparison between theorem 3.1 and theorem 3.2, and the proof\nResponse: The difference between these theorems is that th...
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "rJxNx-bBKB", "B1xGfhvptS", "iclr_2020_rke3TJrtPS", "BkgdO3S0FS", "iclr_2020_rke3TJrtPS", "iclr_2020_rke3TJrtPS", "iclr_2020_rke3TJrtPS" ]
iclr_2020_ryxC6kSYPr
Infinite-Horizon Differentiable Model Predictive Control
This paper proposes a differentiable linear quadratic Model Predictive Control (MPC) framework for safe imitation learning. The infinite-horizon cost is enforced using a terminal cost function obtained from the discrete-time algebraic Riccati equation (DARE), so that the learned controller can be proven to be stabilizing in closed-loop. A central contribution is the derivation of the analytical derivative of the solution of the DARE, thereby allowing the use of differentiation-based learning methods. A further contribution is the structure of the MPC optimization problem: an augmented Lagrangian method ensures that the MPC optimization is feasible throughout training whilst enforcing hard constraints on state and input, and a pre-stabilizing controller ensures that the MPC solution and derivatives are accurate at each iteration. The learning capabilities of the framework are demonstrated in a set of numerical studies.
accept-poster
This paper develops a linear quadratic model predictive control approach for safe imitation learning. The main contribution is an analytic solution for the derivative of the discrete-time algebraic Riccati equation (DARE). This allows an infinite horizon optimality objective to be used with differentiation-based learning methods. An additional contribution is the problem reformulation with a pre-stabilizing controller and the support of state constraints throughout the learning process. The method is tested on a damped-spring system and a vehicle platooning problem. The reviewers and the author response covered several topics. The reviewers appreciated the research direction and theoretical contributions of this work. The reviewers' main concern was the experimental evaluation, which was originally limited to a damped spring system. The authors added another experiment for a substantially more complex continuous control domain. In response to the reviewers, the authors also described how this work relates to non-linear control problems. The authors also clarified the ability of the proposed method to handle state-based constraints that are not handled by earlier methods. The reviewers were largely satisfied with these changes. This paper should be accepted as the reviewers are satisfied that the paper has useful contributions.
train
[ "rkeMnzRk9H", "S1en1khhiB", "SkxpsghhoH", "H1gTi35hjr", "HyxliPNhor", "r1e8h2-MsH", "HJlVVT-GsH", "r1xCKjZfoB", "H1x0gLTRKH", "rJeZNvcxcB" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper shows how to use the Discrete-time Algebraic Riccati Equation (DARE) to provide infinite horizon stability & optimality to differentiable MPC learning. The paper also shows how to use DARE to derive a pre-stabilizing (linear state-feedback) controller. The paper provides a theoretical characterization ...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_ryxC6kSYPr", "H1gTi35hjr", "S1en1khhiB", "HJlVVT-GsH", "iclr_2020_ryxC6kSYPr", "rkeMnzRk9H", "H1x0gLTRKH", "rJeZNvcxcB", "iclr_2020_ryxC6kSYPr", "iclr_2020_ryxC6kSYPr" ]
iclr_2020_SkeAaJrKDS
Combining Q-Learning and Search with Amortized Value Estimates
We introduce "Search with Amortized Value Estimates" (SAVE), an approach for combining model-free Q-learning with model-based Monte-Carlo Tree Search (MCTS). In SAVE, a learned prior over state-action values is used to guide MCTS, which estimates an improved set of state-action values. The new Q-estimates are then used in combination with real experience to update the prior. This effectively amortizes the value computation performed by MCTS, resulting in a cooperative relationship between model-free learning and model-based search. SAVE can be implemented on top of any Q-learning agent with access to a model, which we demonstrate by incorporating it into agents that perform challenging physical reasoning tasks and Atari. SAVE consistently achieves higher rewards with fewer training steps, and---in contrast to typical model-based search approaches---yields strong performance with very small search budgets. By combining real experience with information computed during search, SAVE demonstrates that it is possible to improve on both the performance of model-free learning and the computational cost of planning.
accept-poster
This paper proposes Search with Amortized Value Estimates (SAVE) that combines Q-learning and MCTS. SAVE uses the estimated Q-values obtained by MCTS at the root node to update the value network, and uses the learned value function to guide MCTS. The rebuttal addressed the reviewers’ concerns, and they are now all positive about the paper. I recommend acceptance.
train
[ "Hkxs3EFlcr", "SJgkxPunoB", "B1eSGNwnjB", "rkxz7yRciH", "rJgc-yRqir", "SkxOuncqjH", "S1eRClLYjH", "HJx6w1CWsH", "BygniAabiH", "rJlddRTbiS", "ByxdSCpWjS", "BklMPEJx5H", "H1xECZ5_qS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes Search with Amortized Value Estimates (SAVE), which combines Q-learning and Monte-Carlo Tree Search (MCTS). SAVE makes use of the estimated Q-values obtained by MCTS at the root node (Q_MCTS), rather than using only the resulting action or counts to learn a policy. It trains the amortized value...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_SkeAaJrKDS", "B1eSGNwnjB", "HJx6w1CWsH", "rJgc-yRqir", "iclr_2020_SkeAaJrKDS", "S1eRClLYjH", "iclr_2020_SkeAaJrKDS", "BklMPEJx5H", "Hkxs3EFlcr", "Hkxs3EFlcr", "H1xECZ5_qS", "iclr_2020_SkeAaJrKDS", "iclr_2020_SkeAaJrKDS" ]
iclr_2020_Hye1RJHKwB
Training Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators
Generative adversarial networks (GANs) have shown great success in applications such as image generation and inpainting. However, they typically require large datasets, which are often not available, especially in the context of prediction tasks such as image segmentation that require labels. Therefore, methods such as the CycleGAN use more easily available unlabelled data, but do not offer a way to leverage additional labelled data for improved performance. To address this shortcoming, we show how to factorise the joint data distribution into a set of lower-dimensional distributions along with their dependencies. This allows splitting the discriminator in a GAN into multiple "sub-discriminators" that can be independently trained from incomplete observations. Their outputs can be combined to estimate the density ratio between the joint real and the generator distribution, which enables training generators as in the original GAN framework. We apply our method to image generation, image segmentation and audio source separation, and obtain improved performance over a standard GAN when additional incomplete training examples are available. For the Cityscapes segmentation task in particular, our method also improves accuracy by an absolute 14.9% over CycleGAN while using only 25 additional paired examples.
accept-poster
All three reviewers appreciate the new method (FactorGAN) for training generative networks from incomplete observations. At the same time, the quality of the experimental results can still be improved. On balance, the paper will make a good poster.
train
[ "rkxZuQfJcH", "rklgOOV9jr", "r1lnE_4qjS", "HkxJkw45sS", "Ske4xC31ir", "SygA0eq1qr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors present FactorGANs, which handle missing data scenarios by constructing conditional marginal estimates from ratios of joint and marginal distributions, estimated with GANs. FactorGANs are applied to the problem of semi-supervised (paired+unpaired) translation and demonstrate good performance.\n\nStreng...
[ 6, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, 4, 3 ]
[ "iclr_2020_Hye1RJHKwB", "rkxZuQfJcH", "SygA0eq1qr", "Ske4xC31ir", "iclr_2020_Hye1RJHKwB", "iclr_2020_Hye1RJHKwB" ]
iclr_2020_SkgGCkrKvH
Decentralized Deep Learning with Arbitrary Communication Compression
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context. We show that Choco-SGD achieves linear speedup in the number of workers for arbitrarily high compression ratios on general non-convex functions, and non-IID training data. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over decentralized user devices, connected by a peer-to-peer network and (ii) in a datacenter.
accept-poster
The authors present an algorithm, CHOCO-SGD, to make use of communication compression in a decentralized setting. This is an interesting problem, and the paper is well-motivated and well-written. On the theoretical side, the authors prove the convergence rate of the algorithm on non-convex smooth functions, which shows a nearly linear speedup. The experimental results on several benchmark datasets validate that the algorithm achieves better performance than baselines. These can be made more convincing by comparing with more baselines (including DeepSqueeze and other centralized algorithms with a compression scheme), and on larger datasets. The authors should also clarify results on consensus.
val
[ "SkxoY4D2oH", "SyeX0EMciB", "HygWCrGcjr", "BJgWiHfqjB", "B1l17Sz5sS", "rkeaNDmTKS", "BJgEjGYpKH", "Hkx924tpKr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers again for useful comments. \n\nWe did our best to provide new results for the additional experiments. \n", "Thank you for your positive assessment of our work. \n\n1.&2. Thank you for spotting this. We addressed these comments in the revision.\n\n[Experiments on Imagenet]\nFrom Table 1 we ...
[ -1, -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "HygWCrGcjr", "Hkx924tpKr", "iclr_2020_SkgGCkrKvH", "rkeaNDmTKS", "BJgEjGYpKH", "iclr_2020_SkgGCkrKvH", "iclr_2020_SkgGCkrKvH", "iclr_2020_SkgGCkrKvH" ]
iclr_2020_SylL0krYPS
Toward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control
Deep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks. Prior works mostly focus on model-free adversarial attacks and agents with discrete actions. In this work, we study the problem of continuous control agents in deep RL with adversarial attacks and propose the first two-step algorithm based on learned model dynamics. Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free based attacks baselines in degrading agent performance as well as driving agents to unsafe states.
accept-poster
This paper considers adversarial attacks in continuous action model-based deep reinforcement learning. An optimisation-based approach is presented, and evaluated on Mujoco tasks. There were two main concerns from the reviewers. The first was that the approach requires strong assumptions, but in the rebuttal some relaxations were demonstrated (e.g., not attacking every step). Additionally, there were issues raised with the choice of baselines, but in the discussion the reviewers did not agree on any other reasonable baselines to use. This is a novel and interesting contribution nonetheless, which could open the field to much additional discussion, and so should be accepted.
val
[ "SkgbcLA1cS", "Hkxbh_c3sB", "BJga0IY3jB", "HkgbGBQhir", "SygTDK73oH", "Hke-i_X3jr", "rkgXl_Qnir", "HJllfUXhoS", "Bye037XhoB", "ByljlkVFFH", "S1eTSCxTtr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents an adversarial attack for perturbing\nthe actions or observations of an agent acting near-optimally\nin an MDP so that the policy performs poorly.\nI think understanding the sensitivity of a policy to\nslight perturbations in the actions it takes or the\nobservations that it receives is importa...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_SylL0krYPS", "Bye037XhoB", "SygTDK73oH", "ByljlkVFFH", "Hke-i_X3jr", "rkgXl_Qnir", "S1eTSCxTtr", "HkgbGBQhir", "SkgbcLA1cS", "iclr_2020_SylL0krYPS", "iclr_2020_SylL0krYPS" ]
iclr_2020_ryxK0JBtPr
Gradient ℓ1 Regularization for Quantization Robustness
We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization. By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on-demand to different bit-widths as energy and memory requirements of the application change. Unlike quantization-aware training using the straight-through estimator that only targets a specific bit-width and requires access to training data and pipeline, our regularization-based method paves the way for ``on the fly'' post-training quantization to various bit-widths. We show that by modeling quantization as a ℓ∞-bounded perturbation, the first-order term in the loss expansion can be regularized using the ℓ1-norm of gradients. We experimentally validate our method on different vision architectures on CIFAR-10 and ImageNet datasets and show that the regularization of a neural network using our method improves robustness against quantization noise.
accept-poster
Reviewers uniformly suggest acceptance. Please take their comments into account in the camera-ready. Congratulations!
train
[ "r1eFtky0tr", "r1gvs0imcr", "r1evGIc2jr", "r1lcj_tDor", "SJgXpIKwsB", "ryxSorYvjH", "HyxmIYPEYH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper models the quantization errors of weights and activations as additive l_inf bounded perturbations and uses first-order approximation of loss function to derive a gradient norm penalty regularization that encourage the network's robustness to any bit-width quantization. The authors claim that this method...
[ 6, 6, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, -1, 4 ]
[ "iclr_2020_ryxK0JBtPr", "iclr_2020_ryxK0JBtPr", "r1eFtky0tr", "r1eFtky0tr", "r1gvs0imcr", "HyxmIYPEYH", "iclr_2020_ryxK0JBtPr" ]
iclr_2020_rkxs0yHFPH
SpikeGrad: An ANN-equivalent Computation Model for Implementing Backpropagation with Spikes
Event-based neuromorphic systems promise to reduce the energy consumption of deep neural networks by replacing expensive floating point operations on dense matrices by low energy, sparse operations on spike events. While these systems can be trained increasingly well using approximations of the backpropagation algorithm, this usually requires high precision errors and is therefore incompatible with the typical communication infrastructure of neuromorphic circuits. In this work, we analyze how the gradient can be discretized into spike events when training a spiking neural network. To accelerate our simulation, we show that using a special implementation of the integrate-and-fire neuron allows us to describe the accumulated activations and errors of the spiking neural network in terms of an equivalent artificial neural network, allowing us to largely speed up training compared to an explicit simulation of all spike events. This way we are able to demonstrate that even for deep networks, the gradients can be discretized sufficiently well with spikes if the gradient is properly rescaled. This form of spike-based backpropagation enables us to achieve equivalent or better accuracies on the MNIST and CIFAR10 datasets than comparable state-of-the-art spiking neural networks trained with full precision gradients. The algorithm, which we call SpikeGrad, is based on only accumulation and comparison operations and can naturally exploit sparsity in the gradient computation, which makes it an interesting choice for spiking neuromorphic systems with on-chip learning capacities.
accept-poster
This paper proposes a learning framework for spiking neural networks that exploits the sparsity of the gradient during backpropagation to reduce the computational cost of training. The method is evaluated against prior works that use full precision gradients and shown comparable performance. Overall, the contribution of the paper is solid, and after a constructive rebuttal cycle, all reviewers reached a consensus of weak accept. Therefore, I recommend accepting this submission.
train
[ "BkgT1iYcYS", "HklkGczhsS", "BylBV_MhiB", "HylXmDzhjB", "rJlYxLacKr", "S1x7zEO9oB", "SJgz3mV7oS", "rJlKh0QmoB", "r1liPFXmiH", "rJgnwZSTYH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a first framework of large-scale spiking neural network that exploits the the sparsity of the gradient during backpropagation, to save training energy. \nLater, it provides detailed analysis to show the equivalence of accumulated response and the corresponding integer activation ANN.\n\nThe pap...
[ 6, -1, -1, -1, 6, -1, -1, -1, -1, 6 ]
[ 5, -1, -1, -1, 1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_rkxs0yHFPH", "BkgT1iYcYS", "S1x7zEO9oB", "rJgnwZSTYH", "iclr_2020_rkxs0yHFPH", "rJlKh0QmoB", "BkgT1iYcYS", "rJlYxLacKr", "rJgnwZSTYH", "iclr_2020_rkxs0yHFPH" ]
iclr_2020_HJlnC1rKPB
On the Relationship between Self-Attention and Convolutional Layers
Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis. Our code is publicly available.
accept-poster
This paper studies the relationship between attention networks such as Transformers and convolutional networks. The paper shows that a special case of attention can be cast as convolution. However, this link depends on using relative positional embeddings, and generalization to other encodings is not given in the paper. The reviewers found the results correct, but we caution that the writing should better reflect the caveats of the approach.
train
[ "H1eGmNvsFr", "rJg607FnsS", "Bkxt7WL2jr", "S1gMv2aYoS", "rkxRCMjKoS", "SJe23bsYsH", "ryxN-estjr", "HkgXKBVXtr", "ryxtiK90Fr" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper claims that 1. multi-head self-attention(MHSA) is at least as powerful as convolutions by showing that a CONV can be cast as a special case of MHSA and 2. that in practice, MHSA often mimic convolutional layers.\n\nThese claims are interesting and timely, given that there has been a fair amount of recent...
[ 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_HJlnC1rKPB", "iclr_2020_HJlnC1rKPB", "S1gMv2aYoS", "SJe23bsYsH", "HkgXKBVXtr", "H1eGmNvsFr", "ryxtiK90Fr", "iclr_2020_HJlnC1rKPB", "iclr_2020_HJlnC1rKPB" ]
iclr_2020_HyxJ1xBYDH
Learning-Augmented Data Stream Algorithms
The data stream model is a fundamental model for processing massive data sets with limited memory and fast processing time. Recently Hsu et al. (2019) incorporated machine learning techniques into the data stream model in order to learn relevant patterns in the input data. Such techniques were encapsulated by training an oracle to predict item frequencies in the streaming model. In this paper we explore the full power of such an oracle, showing that it can be applied to a wide array of problems in data streams, sometimes resulting in the first optimal bounds for such problems. Namely, we apply the oracle to counting distinct elements on the difference of streams, estimating frequency moments, estimating cascaded aggregates, and estimating moments of geometric data streams. For the distinct elements problem, we obtain the first memory-optimal algorithms. For estimating the p-th frequency moment for 0<p<2 we obtain the first algorithms with optimal update time. For estimating the p-th frequency moment for p>2 we obtain a quadratic saving in memory. We empirically validate our results, demonstrating also our improvements in practice.
accept-poster
This paper theoretically analyzes the use of an oracle to predict various quantities in data stream models. Building upon Hsu et al. (2019), the overriding goal is to examine the degree to which such an oracle can provide memory and time improvements across broad streaming regimes. In doing so, optimal bounds are derived in conjunction with a heavy hitter oracle. Although the rebuttal and discussion period did not lead to a consensus in the scoring of this paper, two reviewers were highly supportive. However, the primary criticism from the lone dissenting reviewer was based on the high-level presentation and motivation, and in particular, the impression that the paper read more like a STOC theory paper. In this regard though, my belief is that the authors can easily tailor a revision to increase the accessibility to a wider ICLR audience.
train
[ "Syem-mAisB", "SyxcJXRjjr", "Hkl2wG0joB", "BJloMuTnFr", "HJxwv99GqS", "Sye1Z22R5B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the comments on our paper.\n- We have included the result concerning a noisy oracle for the F_p moment estimation problem in the paper.\n- We like the question of minimizing the number of oracle calls. This is an interesting open problem and we intend to explore it in future work.\n", "...
[ -1, -1, -1, 8, 8, 3 ]
[ -1, -1, -1, 3, 1, 1 ]
[ "BJloMuTnFr", "HJxwv99GqS", "Sye1Z22R5B", "iclr_2020_HyxJ1xBYDH", "iclr_2020_HyxJ1xBYDH", "iclr_2020_HyxJ1xBYDH" ]
iclr_2020_B1e-kxSKDH
Structured Object-Aware Physics Prediction for Video Modeling and Planning
When humans observe a physical system, they can easily locate components, understand their interactions, and anticipate future behavior, even in settings with complicated and previously unseen interactions. For computers, however, learning such models from videos in an unsupervised fashion is an unsolved research problem. In this paper, we present STOVE, a novel state-space model for videos, which explicitly reasons about objects and their positions, velocities, and interactions. It is constructed by combining an image model and a dynamics model in a compositional manner and improves on previous work by reusing the dynamics model for inference, accelerating and regularizing training. STOVE predicts videos with convincing physical behavior over hundreds of timesteps, outperforms previous unsupervised models, and even approaches the performance of supervised baselines. We further demonstrate the strength of our model as a simulator for sample efficient model-based control, in a task with heavily interacting objects.
accept-poster
The paper presents a method for modeling videos with object-centric structured representations. The paper is well written and clearly motivated. Using a Graph Neural Network for modeling latent physics is a sensible idea and can be beneficial for planning/control. Experimental results show improved performance over the baselines. After the rebuttal, many questions/concerns from the reviewers were addressed, and all reviewers recommend weak acceptance.
train
[ "r1lMMgfssr", "H1lg7JLcsB", "S1gNkPJnYr", "HkgCxPNusB", "rylOjQdDuS", "rJlPFpxdiH", "HylUcrfQsr", "ryg8vXzXjr", "rJlOFefQjr", "SJxYFRbQoS", "BJgAmQbAFB" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Thank you for updating your evaluation. We are glad that the revision addressed your concerns.", "Thank you for the clear response and updated document. I have updated my review in response as I now believe that the paper should be accepted.", "In this paper the authors present a graph neural network for model...
[ -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 5, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "H1lg7JLcsB", "HylUcrfQsr", "iclr_2020_B1e-kxSKDH", "rJlPFpxdiH", "iclr_2020_B1e-kxSKDH", "HylUcrfQsr", "rylOjQdDuS", "S1gNkPJnYr", "BJgAmQbAFB", "iclr_2020_B1e-kxSKDH", "iclr_2020_B1e-kxSKDH" ]
iclr_2020_Hyl7ygStwB
Incorporating BERT into Neural Machine Translation
The recently proposed BERT (Devlin et al., 2019) has shown great power on a variety of natural language understanding tasks, such as text classification, reading comprehension, etc. However, how to effectively apply BERT to neural machine translation (NMT) has not been sufficiently explored. While BERT is more commonly used for fine-tuning than as a contextual embedding for downstream language understanding tasks, in NMT our preliminary exploration shows that using BERT as a contextual embedding works better than using it for fine-tuning. This motivates us to explore how to better leverage BERT for NMT along this direction. We propose a new algorithm named BERT-fused model, in which we first use BERT to extract representations for an input sequence, and then the representations are fused with each layer of the encoder and decoder of the NMT model through attention mechanisms. We conduct experiments on supervised (including sentence-level and document-level translations), semi-supervised and unsupervised machine translation, and achieve state-of-the-art results on seven benchmark datasets. Our code is available at https://github.com/bert-nmt/bert-nmt
accept-poster
The authors propose a novel way of incorporating a large pretrained language model (BERT) into neural machine translation using an extra attention model for both the NMT encoder and decoder. The paper presents thorough experimental design, with strong baselines and consistent positive results for supervised, semi-supervised and unsupervised experiments. The reviewers all mentioned lack of clarity in the writing and there was significant discussion with the authors. After improvements and clarifications, all reviewers agree that this paper would make a good contribution to ICLR and be of general use to the field.
test
[ "rJxBSkziYr", "BkxicGFioB", "H1xIHxKsjr", "rklGWyKojB", "HyeCZACtYr", "rJgB3-oRtS", "S1epjIzD5B", "SkeX8nWv5S" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper explores the use of BERT to improve Neural Machine Translation (NMT) both in supervised, semi-supervised and unsupervised settings. The authors first show that using BERT to initialize the encoder and/or the decoder does not bring any clear improvement, while using it as a feature extractor performs bet...
[ 6, -1, -1, -1, 6, 6, -1, -1 ]
[ 5, -1, -1, -1, 5, 5, -1, -1 ]
[ "iclr_2020_Hyl7ygStwB", "rJxBSkziYr", "HyeCZACtYr", "rJgB3-oRtS", "iclr_2020_Hyl7ygStwB", "iclr_2020_Hyl7ygStwB", "SkeX8nWv5S", "iclr_2020_Hyl7ygStwB" ]
iclr_2020_HkeryxBtPB
MMA Training: Direct Input Space Margin Maximization through Adversarial Training
We study adversarial robustness of neural networks from a margin maximization perspective, where margins are defined as the distances from inputs to a classifier's decision boundary. Our study shows that maximizing margins can be achieved by minimizing the adversarial loss on the decision boundary at the "shortest successful perturbation", demonstrating a close connection between adversarial losses and the margins. We propose Max-Margin Adversarial (MMA) training to directly maximize the margins to achieve adversarial robustness. Instead of adversarial training with a fixed ϵ, MMA offers an improvement by enabling adaptive selection of the "correct" ϵ as the margin individually for each datapoint. In addition, we rigorously analyze adversarial training with the perspective of margin maximization, and provide an alternative interpretation for adversarial training, maximizing either a lower or an upper bound of the margins. Our experiments empirically confirm our theory and demonstrate MMA training's efficacy on the MNIST and CIFAR10 datasets w.r.t. ℓ∞ and ℓ2 robustness.
accept-poster
This work presents a new loss function that combines the usual cross-entropy term with a margin maximization term applied to the correctly classified examples. There have been a lot of recent ideas on how to incorporate margin into the training process for deep learning. The paper differs from those in the way that it computes the margin. The paper shows that training with the proposed max-margin loss results in robustness against some adversarial attacks. There were initially some concerns about baseline comparisons; one reviewer requested a comparison against TRADES, and another commented on CW-L2. In response, the authors ran additional experiments and listed those in their rebuttal and in the revised draft. This led some reviewers to raise their initial scores. In the end, the majority of reviewers recommended accept. Along with them, I find extensions of classic large-margin ideas to deep learning settings (where margin is not necessarily defined at the output layer) an important research direction for constructing deep models that are robust and can generalize.
train
[ "r1xsfMaxFr", "SJxahVPkcS", "H1xiiPCosr", "HklombujiB", "B1lW_kdosr", "BJeaEE88sr", "SJlA4H8UiB", "r1er1S8UoB", "S1ey_4ULsH", "Hyet07UIoS", "ByxNq7ULjH", "H1llwmULiS", "SyenNZBnFS", "Skg9K9pV_r", "Hklcw_H1uH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "Summary: \nThis paper proposes an adaptive margin-based adversarial training (eg. MMA) approach to train robust DNNs by maximizing the shortest margin of inputs to the decision boundary. Theoretical analyses have been provided to understand the connection between robust optimization and margin maximization. The ma...
[ 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1 ]
[ "iclr_2020_HkeryxBtPB", "iclr_2020_HkeryxBtPB", "B1lW_kdosr", "BJeaEE88sr", "Hyet07UIoS", "r1xsfMaxFr", "r1xsfMaxFr", "r1xsfMaxFr", "r1xsfMaxFr", "SJxahVPkcS", "SyenNZBnFS", "iclr_2020_HkeryxBtPB", "iclr_2020_HkeryxBtPB", "Hklcw_H1uH", "iclr_2020_HkeryxBtPB" ]
iclr_2020_rkgU1gHtvr
Infinite-horizon Off-Policy Policy Evaluation with Multiple Behavior Policies
We consider off-policy policy evaluation when the trajectory data are generated by multiple behavior policies. Recent work has shown the key role played by the state or state-action stationary distribution corrections in the infinite horizon context for off-policy policy evaluation. We propose estimated mixture policy (EMP), a novel class of partially policy-agnostic methods to accurately estimate those quantities. With careful analysis, we show that EMP gives rise to estimates with reduced variance for estimating the state stationary distribution correction while it also offers a useful inductive bias for estimating the state-action stationary distribution correction. In extensive experiments with both continuous and discrete environments, we demonstrate that our algorithm offers significantly improved accuracy compared to the state-of-the-art methods.
accept-poster
The authors present a method to address off-policy policy evaluation in the infinite horizon case, when the available data comes from multiple unknown behavior policies. Their solution -- the estimated mixture policy -- combines recent ideas from both infinite horizon OPE and regression importance sampling, a recent importance sampling based method. At first, the reviewers were concerned about writing clarity, feasibility in the continuous case, and comparisons to contemporary methods like DualDICE. After the rebuttal period, the reviewers agreed that all the major issues had been addressed through clarifications, rewriting, code release, and additional empirical comparisons. Thus, I recommend to accept this paper.
train
[ "SJx9Xl7aYB", "SklbL9CnFB", "SJe5dmAijS", "H1e6OxCjjH", "Bkl8G1RsjB", "BJgZdaaoiB", "Sygmew3xqH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "After rebuttal:\nThank the authors for the clarification. The new version looks better and I tend to accept the paper in the current version.\n=========\nThis paper provides an algorithm to solve infinite-horizon off-policy evaluation with multiple behavior policies by estimating a mixed policy under regression, and follo...
[ 6, 6, -1, -1, -1, -1, 3 ]
[ 4, 3, -1, -1, -1, -1, 5 ]
[ "iclr_2020_rkgU1gHtvr", "iclr_2020_rkgU1gHtvr", "SklbL9CnFB", "SJx9Xl7aYB", "Sygmew3xqH", "iclr_2020_rkgU1gHtvr", "iclr_2020_rkgU1gHtvr" ]
iclr_2020_rylwJxrYDS
vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations
We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task. The algorithm uses either a gumbel softmax or online k-means clustering to quantize the dense representations. Discretization enables the direct application of algorithms from the NLP community which require discrete inputs. Experiments show that BERT pre-training achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition.
accept-poster
This paper proposes a new self-supervised pre-trained speech model that improves speech recognition performance. The idea combines an earlier pre-training approach (wav2vec) with discretization followed by BERT-style masked reconstruction. The result is a fairly complex approach, with not too much novelty but with a good amount of engineering and analysis, and ultimately very good performance. The reviewers agree that the work deserves publication at ICLR, and the authors have addressed some of the reviewer concerns in their revision. The complexity of the approach may mean that it is not immediately widely adopted by others, but it is a good proof of concept and may well inspire other related work. I believe the ICLR community will find this work interesting.
train
[ "r1e9Ipmo_r", "H1gRCgthsH", "ByeQLeYhsH", "BJlsGxtnsS", "BkxWkgY2sB", "ryllakFnsS", "HyxsH-ORFH", "B1xXOITI9H", "rJxLRp2s9H" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nOverview:\n\nThis paper considers unsupervised (or self-supervised) discrete representation learning of speech using a combination of a recent vector quantized neural network discritization method and future time step prediction. Discrete representations are fine-tuned by using these as input to a BERT model; th...
[ 8, -1, -1, -1, -1, -1, 8, 6, 8 ]
[ 5, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2020_rylwJxrYDS", "iclr_2020_rylwJxrYDS", "rJxLRp2s9H", "B1xXOITI9H", "HyxsH-ORFH", "r1e9Ipmo_r", "iclr_2020_rylwJxrYDS", "iclr_2020_rylwJxrYDS", "iclr_2020_rylwJxrYDS" ]
iclr_2020_BygdyxHFDS
Meta-learning curiosity algorithms
We hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in an agent's life in order to expose it to experiences that enable it to obtain high rewards over the course of its lifetime. We formulate the problem of generating curious behavior as one of meta-learning: an outer loop will search over a space of curiosity mechanisms that dynamically adapt the agent's reward signal, and an inner loop will perform standard reinforcement learning using the adapted reward signal. However, current meta-RL methods based on transferring neural network weights have only generalized between very similar tasks. To broaden the generalization, we instead propose to meta-learn algorithms: pieces of code similar to those designed by humans in ML papers. Our rich language of programs combines neural networks with other building blocks such as buffers, nearest-neighbor modules and custom loss functions. We demonstrate the effectiveness of the approach empirically, finding two novel curiosity algorithms that perform on par with or better than human-designed published curiosity algorithms in domains as disparate as grid navigation with image inputs, acrobot, lunar lander, ant and hopper.
accept-poster
This paper proposes meta-learning auxiliary rewards as specified by a DSL. The approach was considered innovative and the results interesting by all reviewers. The paper is clearly of an acceptable standard, with the main concerns raised by reviewers having been addressed (admittedly at the 11th hour) by the authors during the discussion period. Accept.
train
[ "H1gaST1RYB", "SylcTN43sS", "Syehlm42sB", "SJlc0SVnsS", "BkgcsU4hiS", "H1gsicx6FH", "B1lw3b8TFr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to meta-learn a curiosity module via neural architecture search. The curiosity module, which outputs a meta-reward derived from the agent’s history of transitions, is optimized via black box search in order to optimize the agent’s lifetime reward over a (very) long horizon. The agent in contras...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_BygdyxHFDS", "H1gsicx6FH", "B1lw3b8TFr", "H1gaST1RYB", "iclr_2020_BygdyxHFDS", "iclr_2020_BygdyxHFDS", "iclr_2020_BygdyxHFDS" ]
iclr_2020_SygKyeHKDH
Making Efficient Use of Demonstrations to Solve Hard Exploration Problems
This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state of the art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration.
accept-poster
This paper tackles hard-exploration RL problems using learning from demonstrations. The idea is to combine the existing R2D2 algorithm with imitation learning from human demonstrations. Experiments are conducted on a new set of challenging tasks, highlighting limitations of strong current baselines while demonstrating the strength of the proposed approach. The contribution is two-fold: the proposed algorithm, which clearly outperforms previous SOTA agents, and the set of benchmarks. All reviewers being positive about this paper, I therefore recommend acceptance.
test
[ "rJl2wiVt9S", "rkeHw5y9or", "SJeB6YkqjB", "Skgbqt19ir", "Bkx4SKkqsH", "rkgS-t2htH", "HkgqZVuaYS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this work, R2D3 (Recurrent Replay Distributed DQN from Demonstration), which combines R2D2 [1] with imitation learning (IL), is proposed. Similar to the existing works on “reinforcement learning (RL) with demonstration” such as DQfD, DDPGfD, policy optimization with demonstration (POfD) [2], hard exploration co...
[ 6, -1, -1, -1, -1, 6, 8 ]
[ 4, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_SygKyeHKDH", "iclr_2020_SygKyeHKDH", "rkgS-t2htH", "HkgqZVuaYS", "rJl2wiVt9S", "iclr_2020_SygKyeHKDH", "iclr_2020_SygKyeHKDH" ]
iclr_2020_Hkl9JlBYvr
VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning
Trading off exploration and exploitation in an unknown environment is key to maximising expected return during learning. A Bayes-optimal policy, which does so optimally, conditions its actions not only on the environment state but on the agent's uncertainty about the environment. Computing a Bayes-optimal policy is however intractable for all but the smallest tasks. In this paper, we introduce variational Bayes-Adaptive Deep RL (variBAD), a way to meta-learn to perform approximate inference in an unknown environment, and incorporate task uncertainty directly during action selection. In a grid-world domain, we illustrate how variBAD performs structured online exploration as a function of task uncertainty. We further evaluate variBAD on MuJoCo domains widely used in meta-RL and show that it achieves higher online return than existing methods.
accept-poster
This paper considers the problem of transfer learning among families of MDP, and proposes a variational Bayesian approach to learn a probabilistic model of a new problem drawn from the same distribution as previous tasks, which is then leveraged during action selection. After discussion, the three respondent reviewers converged to the opinion that the paper is novel and interesting, and well evaluated. (Reviewer 1 never responded to any questions from the authors or me, so I have disregarded their review.) I am therefore recommending an accept.
train
[ "HkgOjdCb5r", "Hkgsq_lhsS", "Hye3gDe2iH", "BylpcZxoor", "SyxKPjfroH", "rkeFf-CcjH", "ryeWBjZXiH", "HkeZujWmsB", "ByxG4q-XiS", "ByeWje_-jB", "B1lTqBu_KB", "SkeDSGonKS" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThis paper considers a version of reinforcement learning problem where an unknown prior distribution over Markov decision processes are assumed and the learner can sample from it. After sampling a MDP, a standard reinforcement learning is done. Then the paper investigates the Bayes-optimal strategy for ...
[ 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, 1, 8 ]
[ 1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_Hkl9JlBYvr", "BylpcZxoor", "ByxG4q-XiS", "rkeFf-CcjH", "iclr_2020_Hkl9JlBYvr", "SyxKPjfroH", "SkeDSGonKS", "B1lTqBu_KB", "HkgOjdCb5r", "B1lTqBu_KB", "iclr_2020_Hkl9JlBYvr", "iclr_2020_Hkl9JlBYvr" ]
iclr_2020_ryl3ygHYDB
Lookahead: A Far-sighted Alternative of Magnitude-based Pruning
Magnitude-based pruning is one of the simplest methods for pruning neural networks. Despite its simplicity, magnitude-based pruning and its variants demonstrated remarkable performance for pruning modern architectures. Based on the observation that magnitude-based pruning indeed minimizes the Frobenius distortion of a linear operator corresponding to a single layer, we develop a simple pruning method, coined lookahead pruning, by extending the single layer optimization to a multi-layer optimization. Our experimental results demonstrate that the proposed method consistently outperforms magnitude-based pruning on various networks, including VGG and ResNet, particularly in the high-sparsity regime. See https://github.com/alinlab/lookahead_pruning for code.
accept-poster
This paper introduces a pruning criterion which is similar to magnitude-based pruning, but which accounts for the interactions between layers. The reviewers have gone through the paper carefully, and after back-and-forth with the authors, they are all satisfied with the paper and support acceptance.
train
[ "Bkgyu_mq5r", "SJx4Fh7nsB", "S1xq-3m2sB", "rklT2imhoB", "H1gzqsX2iH", "SJe7b2Hjir", "HJlG50bnqr", "BkgBN_OqsH", "SylR_f9tir", "HygqMm9tsr", "rkg5cECtsH", "Syl2AZ5KjH", "HklMrm5tiH", "BklvCf5YsH", "BkgrU-5tiS", "SJla8mLA_r", "S1eyoo0F9B" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "*Summary*\nThe paper proposes a multi-layer alternative to magnitude-based pruning. The operations entailed in the previous, current, and subsequent layers are treated as linear operations (by omitting any nonlinearities), weights are selected for pruning to minimize the \"Frobenius distortion\", the Frobenius nor...
[ 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_ryl3ygHYDB", "iclr_2020_ryl3ygHYDB", "SJe7b2Hjir", "rkg5cECtsH", "BkgBN_OqsH", "Syl2AZ5KjH", "iclr_2020_ryl3ygHYDB", "HklMrm5tiH", "Bkgyu_mq5r", "S1eyoo0F9B", "HygqMm9tsr", "BkgrU-5tiS", "SJla8mLA_r", "SylR_f9tir", "HJlG50bnqr", "iclr_2020_ryl3ygHYDB", "iclr_2020_ryl3ygHYD...
iclr_2020_rJxWxxSYvB
Spike-based causal inference for weight alignment
In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called "weight transport problem" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST and CIFAR-10. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.
accept-poster
All reviewers agree the paper is well written, and there is a clear consensus on acceptance. The last reviewer was concerned about a lack of diversity in datasets, but this was addressed in the rebuttal.
train
[ "S1eQziLmFB", "H1xV5ffosr", "HyxVOMzior", "Hyxe0-GoiS", "BJx5tbGjjB", "H1lpyw_atB", "H1lQdqkRKB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper introduces a training mechanism for spiking neural nets that employs a causal inference technique, called RDD, for adjustment of backward spiking weights. This technique induces the backward influence strengths to be reciprocal to the forward ones, bringing desirable symmetry properties.\n\nPros:\n * The...
[ 6, -1, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_rJxWxxSYvB", "iclr_2020_rJxWxxSYvB", "S1eQziLmFB", "H1lpyw_atB", "H1lQdqkRKB", "iclr_2020_rJxWxxSYvB", "iclr_2020_rJxWxxSYvB" ]
iclr_2020_Hkg-xgrYvH
Empirical Bayes Transductive Meta-Learning with Synthetic Gradients
We propose a meta-learning approach that learns from multiple tasks in a transductive setting, by leveraging the unlabeled query set in addition to the support set to generate a more powerful model for each task. To develop our framework, we revisit the empirical Bayes formulation for multi-task learning. The evidence lower bound of the marginal log-likelihood of empirical Bayes decomposes as a sum of local KL divergences between the variational posterior and the true posterior on the query set of each task. We derive a novel amortized variational inference that couples all the variational posteriors via a meta-model, which consists of a synthetic gradient network and an initialization network. Each variational posterior is derived from synthetic gradient descent to approximate the true posterior on the query set, even though we do not have access to the true gradient. Our results on the Mini-ImageNet and CIFAR-FS benchmarks for episodic few-shot classification outperform previous state-of-the-art methods. In addition, we conduct two zero-shot learning experiments to further explore the potential of the synthetic gradient.
accept-poster
Three reviewers have assessed this paper and they have scored it 6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission.
train
[ "Skx5esH3iB", "S1gFwdapYr", "HkgSFawkqr", "Hkgz8nS3oH", "Bklq9Z83or", "BJgV-MI3sB", "Hkev_eo2oB", "SJedK9owqS" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Dear R4, \n\nThank you for your insightful comments. Below we address one by one the issues that you mentioned.\n\n“1. The first paragraph on page 5, which describes the key step of syntheising gradient, can be made clearer”\n\nAnswer: We agree and have added Figure 2 to illustrate the idea of the amortization usi...
[ -1, 6, 6, -1, -1, -1, -1, 6 ]
[ -1, 3, 4, -1, -1, -1, -1, 4 ]
[ "SJedK9owqS", "iclr_2020_Hkg-xgrYvH", "iclr_2020_Hkg-xgrYvH", "HkgSFawkqr", "S1gFwdapYr", "Bklq9Z83or", "iclr_2020_Hkg-xgrYvH", "iclr_2020_Hkg-xgrYvH" ]
iclr_2020_rke7geHtwH
Keep Doing What Worked: Behavior Modelling Priors for Offline Reinforcement Learning
Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set (batch) of environment interactions is available and no new experience can be acquired. This property makes these algorithms appealing for real world problems such as robot control. In practice, however, standard off-policy algorithms fail in the batch setting for continuous control. In this paper, we propose a simple solution to this problem. It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task. Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data-sources. We find improvements on competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.
accept-poster
The authors present a novel stable RL algorithm for the batch off-policy setting, through the use of a learned prior. Initially, reviewers had significant concerns about (1) reproducibility, (2) technical details, including the non-negativity of the Lagrange multiplier, (3) a lack of separation between performance contributions of ABM and MPO, (4) baseline comparisons. The authors satisfactorily clarified points (1)-(3) and the simulated baseline comparisons for (4) seem reasonable in light of how long the real robot experiments took, as reported by the authors. Furthermore, the reviewers all agree on the contribution of the core ideas. Thus, I recommend this paper for acceptance.
train
[ "B1eFTysoKS", "H1eygCrnjr", "Byehanrnir", "HJl6c3HnjS", "S1gZXAlnKH", "HygcOQSTKH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a method for “offline RL” (a.k.a “batch RL”), i.e. reinforcement learning from a given static dataset, with no option to perform on-policy data collection. Contrary to prior work (Fujimoto et al, Kumar et al, Agarwal et al) in this area which focuses on making Q-learning robust in the offline R...
[ 6, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, 3, 4 ]
[ "iclr_2020_rke7geHtwH", "B1eFTysoKS", "HygcOQSTKH", "S1gZXAlnKH", "iclr_2020_rke7geHtwH", "iclr_2020_rke7geHtwH" ]
iclr_2020_r1lPleBFvH
Understanding the Limitations of Conditional Generative Models
Class-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts. They are a natural choice to solve discriminative tasks in a robust manner as they jointly optimize for predictive performance and accurate modeling of the input distribution. In this work, we investigate robust classification with likelihood-based generative models from a theoretical and practical perspective to determine whether they can deliver on their promises. Our analysis focuses on a spectrum of robustness properties: (1) Detection of worst-case outliers in the form of adversarial examples; (2) Detection of average-case outliers in the form of ambiguous inputs and (3) Detection of incorrectly labeled in-distribution inputs. Our theoretical result reveals that it is impossible to guarantee detectability of adversarially-perturbed inputs even for near-optimal generative classifiers. Experimentally, we find that while we are able to train robust models for MNIST, robustness completely breaks down on CIFAR10. We relate this failure to various undesirable model properties that can be traced to the maximum likelihood training objective. Despite being a common choice in the literature, our results indicate that likelihood-based conditional generative models may be surprisingly ineffective for robust classification.
accept-poster
This paper presents theoretical results showing that conditional generative models cannot be robust. The paper also provides counterexamples and some empirical evidence showing that the theory is reflected in practice. Some reviewers doubt how much of the theory holds in reality, but they still think that this paper could be useful for the community. After the rebuttal period, R2 increased their score and it seems that with the current score the paper can be accepted.
train
[ "Sklh8s_atB", "S1ljBraStH", "SyxZWEY3oH", "HJlNVQYnsB", "SkxwWQt2sr", "S1eyhzF2jH", "rJx8deVBcS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Update: I thank the authors' for their response, and have read the other reviews.\n\nThis paper demonstrates some theoretical and practical limitations on the use of likelihood based generative models for detecting adversarial examples. They construct a simple counterexample showing that there are adversarial exam...
[ 6, 6, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, -1, 1 ]
[ "iclr_2020_r1lPleBFvH", "iclr_2020_r1lPleBFvH", "S1ljBraStH", "Sklh8s_atB", "rJx8deVBcS", "iclr_2020_r1lPleBFvH", "iclr_2020_r1lPleBFvH" ]
iclr_2020_Hyl9xxHYPr
Demystifying Inter-Class Disentanglement
Learning to disentangle the hidden factors of variations within a set of observations is a key task for artificial intelligence. We present a unified formulation for class and content disentanglement and use it to illustrate the limitations of current methods. We therefore introduce LORD, a novel method based on Latent Optimization for Representation Disentanglement. We find that latent optimization, along with an asymmetric noise regularization, is superior to amortized inference for achieving disentangled representations. In extensive experiments, our method is shown to achieve better disentanglement performance than both adversarial and non-adversarial methods that use the same level of supervision. We further introduce a clustering-based approach for extending our method for settings that exhibit in-class variation with promising results on the task of domain translation.
accept-poster
This paper proposes a novel method for class-supervised disentangled representation learning. The method augments an autoencoder with asymmetric noise regularisation and is able to disentangle content (class) and style information from each other. The reviewers agree that the method achieves impressive empirical results and significantly outperforms the baselines. Furthermore, the authors were able to alleviate some of the initial concerns raised by the reviewers during the discussion stage by providing further experimental results and modifying the paper text. By the end of the discussion period some of the reviewers raised their scores and everyone agreed that the paper should be accepted. Hence, I am happy to recommend acceptance.
train
[ "rklwiTzhiH", "rkeZVQEecr", "r1xm5GG2sr", "H1xVCs-nsH", "HklL3yW9sB", "ryla4a5BiB", "H1lSLh9SoH", "SklcVicBor", "SJxMpdc5KS", "HyeLAxM9FB" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the dedicated and fruitful review.\n\nTable 5 only shows losses on Cars3D as we have fully assessed the results of the entire ablation study only on Cars3D (as presented in Table 3). As per your request, an extended ablation evaluation on the other datasets (including individual loss valu...
[ -1, 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "r1xm5GG2sr", "iclr_2020_Hyl9xxHYPr", "H1xVCs-nsH", "HklL3yW9sB", "SklcVicBor", "HyeLAxM9FB", "SJxMpdc5KS", "rkeZVQEecr", "iclr_2020_Hyl9xxHYPr", "iclr_2020_Hyl9xxHYPr" ]
iclr_2020_S1g6xeSKDS
Mixed-curvature Variational Autoencoders
Euclidean space has historically been the typical workhorse geometry for machine learning applications due to its power and simplicity. However, it has recently been shown that geometric spaces with constant non-zero curvature improve representations and performance on a variety of data types and downstream tasks. Consequently, generative models like Variational Autoencoders (VAEs) have been successfully generalized to elliptical and hyperbolic latent spaces. While these approaches work well on data with particular kinds of biases e.g. tree-like data for a hyperbolic VAE, there exists no generic approach unifying and leveraging all three models. We develop a Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant curvature Riemannian manifolds, where the per-component curvature is fixed or learnable. This generalizes the Euclidean VAE to curved latent spaces and recovers it when curvatures of all latent space components go to 0.
accept-poster
This paper studies generalizations of Variational Autoencoders to Non-Euclidean domains, modeled as products of constant curvature Riemannian manifolds. The framework allows to simultaneously learn the latent representations as well as the curvature of the latent domain. Reviewers were unanimous at highlighting the significance of this work at developing non-Euclidean tools for generative modeling. Despite the somewhat preliminary nature of the empirical evaluation, there was consensus that the paper puts forward interesting tools that might spark future research in this direction. Given those positive assessments, the AC recommends acceptance.
train
[ "rJljvhvsoS", "rkeOhAPTYB", "ryey_6Gsir", "HyehuQCusr", "Bye_MXC_ir", "SyxZzGAdiH", "B1eOr-Rdjr", "HyxpFesnKS", "r1lBH4iE9S" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for the quick reply and the extensive feedback. \n\n- As can be seen from the experiments, the sign agnostic models (universal component, denoted $\\mathbb{U}$) do perform better in higher dimensions than models with fixed signs, which we also mentioned in Section 4, in the second paragraph of the Summar...
[ -1, 8, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 1 ]
[ "ryey_6Gsir", "iclr_2020_S1g6xeSKDS", "HyehuQCusr", "Bye_MXC_ir", "rkeOhAPTYB", "r1lBH4iE9S", "HyxpFesnKS", "iclr_2020_S1g6xeSKDS", "iclr_2020_S1g6xeSKDS" ]
iclr_2020_r1x0lxrFPS
BinaryDuo: Reducing Gradient Mismatch in Binary Activation Network by Coupling Binary Activations
Binary Neural Networks (BNNs) have been garnering interest thanks to their compute cost reduction and memory savings. However, BNNs suffer from performance degradation mainly due to the gradient mismatch caused by binarizing activations. Previous works tried to address the gradient mismatch problem by reducing the discrepancy between the activation function used in the forward pass and its differentiable approximation used in the backward pass, which is an indirect measure. In this work, we use the gradient of the smoothed loss function to better estimate the gradient mismatch in quantized neural networks. Analysis using the gradient mismatch estimator indicates that using higher precision for activation is more effective than modifying the differentiable approximation of the activation function. Based on this observation, we propose a new training scheme for binary activation networks called BinaryDuo in which two binary activations are coupled into a ternary activation during training. Experimental results show that BinaryDuo outperforms state-of-the-art BNNs on various benchmarks with the same amount of parameters and computing cost.
accept-poster
Three reviewers suggest acceptance. Reviewers were impressed by the thoroughness of the author response. Please take reviewer comments into account in the camera ready. Congratulations!
train
[ "HJgd6tglcS", "HyemulP_sB", "HygLemDdiS", "ByeFTEDOir", "BkeuMLwujS", "SJg8uvvusr", "Skga_kvOiS", "HklEl_vusS", "H1ldnneeiH", "rJgT-SZ0tB", "HJgBJh3KuS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "This paper proposes a new measure of gradient mismatch for training binary networks, and additionally proposes a method for getting better performance out of binary networks by initializing them to behave like a ternary network.\n\nI found the new measure of gradient deviation fairly underdeveloped, and I suspect ...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 1, 4, -1 ]
[ "iclr_2020_r1x0lxrFPS", "rJgT-SZ0tB", "HJgd6tglcS", "HJgd6tglcS", "HJgd6tglcS", "HJgd6tglcS", "rJgT-SZ0tB", "H1ldnneeiH", "iclr_2020_r1x0lxrFPS", "iclr_2020_r1x0lxrFPS", "iclr_2020_r1x0lxrFPS" ]
iclr_2020_HklxbgBKvr
Model-based reinforcement learning for biological sequence design
The ability to design biological structures such as DNA or proteins would have considerable medical and industrial impact. Doing so presents a challenging black-box optimization problem characterized by the large-batch, low round setting due to the need for labor-intensive wet lab evaluations. In response, we propose using reinforcement learning (RL) based on proximal-policy optimization (PPO) for biological sequence design. RL provides a flexible framework for optimizing generative sequence models to achieve specific criteria, such as diversity among the high-quality sequences discovered. We propose a model-based variant of PPO, DyNA-PPO, to improve sample efficiency, where the policy for a new round is trained offline using a simulator fit on functional measurements from prior rounds. To accommodate the growing number of observations across rounds, the simulator model is automatically selected at each round from a pool of diverse models of varying capacity. On the tasks of designing DNA transcription factor binding sites, designing antimicrobial proteins, and optimizing the energy of Ising models based on protein structure, we find that DyNA-PPO performs significantly better than existing methods in settings in which modeling is feasible, while still not performing worse in situations in which a reliable model cannot be learned.
accept-poster
The paper proposes a model-based proximal policy optimization reinforcement learning algorithm for designing biological sequences. The policy for a new round is trained on data generated by a simulator. The paper presents empirical results on designing sequences for transcription factor binding sites, antimicrobial proteins, and Ising model protein structures. Two of the reviewers are happy to accept the paper, and the third reviewer was not confident. The paper has improved significantly during the discussion period, and the authors have updated the approach as well as improved the presented results in response to comments raised by the reviewers. This is a good example of how an open review process with a long discussion period can improve the quality of accepted papers. A new method, several nice applications, based on a combination of two ideas (simulating a model to train a policy RL method, and discrete space search as RL). This is a good addition to the ICLR literature.
train
[ "S1xqbYYhjB", "SJxKdDt3oS", "ryeOxDK3or", "BJxXnrF3iH", "rylwJMHTKH", "ryxDkJqCYH", "S1gxUQJD9B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "> Penalizing the reward if the same sequence is seen multiple times seems decent and works well compared to the entropy regularization but it is still questionable if it is the best solution for biological sequences. Showing results when the reward is penalized using the hamming loss or the biological similarity ...
[ -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, 3, 4, 3 ]
[ "rylwJMHTKH", "ryxDkJqCYH", "S1gxUQJD9B", "iclr_2020_HklxbgBKvr", "iclr_2020_HklxbgBKvr", "iclr_2020_HklxbgBKvr", "iclr_2020_HklxbgBKvr" ]
iclr_2020_Hkem-lrtvH
BayesOpt Adversarial Attack
Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input. Current approaches relying on substitute model training, gradient estimation or genetic algorithms often require an excessive number of queries. Therefore, they are not suitable for real-world systems where the maximum query number is limited due to cost. We propose a query-efficient black-box attack which uses Bayesian optimisation in combination with Bayesian model selection to optimise over the adversarial perturbation and the optimal degree of search space dimension reduction. We demonstrate empirically that our method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks.
accept-poster
This paper proposes a query-efficient black-box attack that uses Bayesian optimization in combination with Bayesian model selection to optimize over the adversarial perturbation and the optimal degree of search space dimension reduction. The method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks. The paper should be further improved in the final version (e.g., including more results on ImageNet data).
val
[ "SJlXnJtvjS", "rye_muURtr", "S1lJ4g_voS", "BJg3u7uDiS", "BJep-hOPjB", "HJx2OHC2tS", "Bkxnc7g79r" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for his insightful comments. We address the concerns below: \n\n1. \"Need a discussion on the surprising phenomenon that vanilla GP-BO can work at all for such problem of extraordinarily high dimensionality.\"\nPlease refer to our reply to all Reviewers for the detailed discussion.\n\n2. \"a ...
[ -1, 6, -1, -1, -1, 3, 6 ]
[ -1, 5, -1, -1, -1, 5, 3 ]
[ "HJx2OHC2tS", "iclr_2020_Hkem-lrtvH", "iclr_2020_Hkem-lrtvH", "Bkxnc7g79r", "rye_muURtr", "iclr_2020_Hkem-lrtvH", "iclr_2020_Hkem-lrtvH" ]
iclr_2020_HkgsWxrtPB
Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies
We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent. The agent needs to quickly adapt to the task over few episodes during adaptation phase to maximize the return in the test phase. Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference (MSGI), which infers the latent parameter of the task by interacting with the environment and maximizes the return given the latent parameter. To facilitate learning, we adopt an intrinsic reward inspired by upper confidence bound (UCB) that encourages efficient exploration. Our experiment results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter, and to adapt more efficiently than existing meta RL and hierarchical RL methods.
accept-poster
This work formulates and tackles a few-shot RL problem called subtask graph inference, where hierarchical tasks are characterized by a graph describing all subtasks and their dependencies. In other words, each task consists of multiple subtasks and completing a subtask provides a reward. The authors propose a meta-RL approach to meta-train a policy that infers the subtask graph from any new task data in a few shots. Empirical experiments are performed on different domains, including StarCraft II, highlighting the efficiency and scalability of the proposed approach. Most concerns of reviewers were addressed in the rebuttal. The main remaining concerns about this work are that it is mainly an extension of Sohn et al. (2018), making the contribution somewhat incremental, and that its applicability is limited to problems where subtasks are provided. However, all reviewers being positive about this paper, I would still recommend acceptance.
train
[ "HJgCe5G2iB", "rJlWTrU2oH", "HyxG49f2jB", "r1g4ssfhjB", "SyxbwjGnsH", "B1eAkDUuKH", "S1lrmJG2tH", "r1l0Qu3nKS" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the reviewer for their positive evaluation and detailed, constructive comments. We have updated the draft to address some concerns, and below we answer to the questions in more detail.\n\n>>> “Why MSGI-Meta and RL^2 would overfit in the SC2LE case and are unable to adapt to new tasks. Is that a limit...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 1, 4, 4 ]
[ "S1lrmJG2tH", "r1g4ssfhjB", "S1lrmJG2tH", "r1l0Qu3nKS", "B1eAkDUuKH", "iclr_2020_HkgsWxrtPB", "iclr_2020_HkgsWxrtPB", "iclr_2020_HkgsWxrtPB" ]
iclr_2020_ryx6WgStPB
Hypermodels for Exploration
We study the use of hypermodels to represent epistemic uncertainty and guide exploration. This generalizes and extends the use of ensembles to approximate Thompson sampling. The computational cost of training an ensemble grows with its size, and as such, prior work has typically been limited to ensembles with tens of elements. We show that alternative hypermodels can enjoy dramatic efficiency gains, enabling behavior that would otherwise require hundreds or thousands of elements, and even succeed in situations where ensemble methods fail to learn regardless of size. This allows more accurate approximation of Thompson sampling as well as use of more sophisticated exploration schemes. In particular, we consider an approximate form of information-directed sampling and demonstrate performance gains relative to Thompson sampling. As alternatives to ensembles, we consider linear and neural network hypermodels, also known as hypernetworks. We prove that, with neural network base models, a linear hypermodel can represent essentially any distribution over functions, and as such, hypernetworks do not extend what can be represented.
accept-poster
This paper considers ensembles of deep learning models as a way to quantify their epistemic uncertainty and use it for exploration in RL. The authors first show that limiting the ensemble to a small number of models, which is typically done for computational reasons, can severely limit the approximation of the posterior, which can translate into poor learning behaviours (e.g. over-exploitation). Instead, they propose a general approach based on hypermodels which can achieve the benefits of a large ensemble of models without the computational issues. They perform experiments in the bandit setting supporting their claim. They also provide a theoretical contribution, proving that an arbitrary distribution over functions can be represented by a linear hypermodel. The decision boundary for this paper is unclear given the confidence of reviewers and their scores. However, the tackled problem is important, and the proposed approach is sound and backed up by experiments. Most of the reviewers' concerns seemed to be addressed by the rebuttal, with the exception of a few missing references which the authors should really consider adding. I would therefore recommend acceptance.
val
[ "HJgv4jb5YH", "r1gZ_23X5H", "BylA9vPnsH", "BJgUqGcdir", "S1xIag9usB", "rJlvT-lGiB", "rkxXO-lGsB", "B1lsm-ezor", "SJlvWZxziS", "B1eEz8qRKH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors demonstrate advantages of a linear hypermodel over an ensemble method in exploration guided by epistemic uncertainty. They perform an empirical study in the bandit setting and claim that their approach both outperforms the ensemble method and offers a significant increase in computational efficiency. T...
[ 6, 8, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 5, 1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_ryx6WgStPB", "iclr_2020_ryx6WgStPB", "r1gZ_23X5H", "B1eEz8qRKH", "iclr_2020_ryx6WgStPB", "iclr_2020_ryx6WgStPB", "HJgv4jb5YH", "B1eEz8qRKH", "r1gZ_23X5H", "iclr_2020_ryx6WgStPB" ]
iclr_2020_HkgeGeBYDB
RaPP: Novelty Detection with Reconstruction along Projection Pathway
We propose RaPP, a new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder. Precisely, RaPP compares input and its autoencoder reconstruction not only in the input space but also in the hidden spaces. We show that if we feed a reconstructed input to the same autoencoder again, its activated values in a hidden space are equivalent to the corresponding reconstruction in that hidden space given the original input. In order to aggregate the hidden space activation values, we propose two metrics, which enhance the novelty detection performance. Through extensive experiments using diverse datasets, we validate that RaPP improves novelty detection performances of autoencoder-based approaches. Besides, we show that RaPP outperforms recent novelty detection methods evaluated on popular benchmarks.
accept-poster
The paper proposes to extend the autoencoder loss in a deep generative model to include per-latent-layer loss terms. Two variants are proposed: SAP (simple aggregation along pathway) and NAP (normalized aggregation along pathway). SAP is simply the sum of the squared norm, while NAP performs decorrelation and normalization of the magnitude. This was viewed as novel by the reviewers, and the experiments supported the proposed approach. In the post rebuttal phase, the inclusion of an ablation study has led to an upgrade in the reviewer recommendation. As a result, there was a unanimous opinion that the paper is suitable for publication at ICLR.
train
[ "HylFcgFRFB", "B1x9jGHjFH", "HJgBJ1i3jr", "HyxKako2jS", "rJgiUNqjjH", "rklHQgohoH", "BygE_gj3sr", "BklJSBISjH", "S1lmbNX0cB", "BkeM2DIHoB", "r1gVRNLSjr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author" ]
[ "I have read the reviews and the comments.\n\nI appreciate the effort of the authors. I feel positive about the paper and I think it should be accepted.\n\nI confirm my rating.\n\n=================\nThe paper proposes a new method for novelty detection that is based on measuring the reconstruction error in latent s...
[ 6, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, 1, -1, -1 ]
[ "iclr_2020_HkgeGeBYDB", "iclr_2020_HkgeGeBYDB", "S1lmbNX0cB", "HylFcgFRFB", "B1x9jGHjFH", "B1x9jGHjFH", "B1x9jGHjFH", "HylFcgFRFB", "iclr_2020_HkgeGeBYDB", "B1x9jGHjFH", "S1lmbNX0cB" ]
iclr_2020_BJgZGeHFPH
Dynamics-Aware Embeddings
In this paper we consider self-supervised representation learning to improve sample efficiency in reinforcement learning (RL). We propose a forward prediction objective for simultaneously learning embeddings of states and actions. These embeddings capture the structure of the environment's dynamics, enabling efficient policy learning. We demonstrate that our action embeddings alone improve the sample efficiency and peak performance of model-free RL on control from low-dimensional states. By combining state and action embeddings, we achieve efficient learning of high-quality policies on goal-conditioned continuous control from pixel observations in only 1-2 million environment steps.
accept-poster
This paper studies how self-supervised objectives can improve representations for efficient RL. The reviewers are generally in agreement that the method is interesting, the paper is well-written, and the results are convincing. The paper should be accepted.
train
[ "Bkl-M2D2sr", "SkeVUnNnjS", "HyeWCm6oiB", "r1lVq9f-5H", "BJlOLl_jjB", "r1llrMWjjH", "SkgGlfbooB", "HyeooW-oor", "BJeoXWbsoB", "SkeQW6MAKB", "ByxvmHURtr", "HyxtfU92qB" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the clarification.", "In this work the abstract action space is learned ahead of time, though this need not be true in general. When learning the action space online, you are exactly right that the shifting map between abstract and raw actions needs to be corrected. HIRO is a good option for this. ...
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, 8, 6, 3 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "SkeVUnNnjS", "HyeWCm6oiB", "SkgGlfbooB", "iclr_2020_BJgZGeHFPH", "r1lVq9f-5H", "SkeQW6MAKB", "ByxvmHURtr", "r1lVq9f-5H", "HyxtfU92qB", "iclr_2020_BJgZGeHFPH", "iclr_2020_BJgZGeHFPH", "iclr_2020_BJgZGeHFPH" ]
iclr_2020_HkxCzeHFDB
Functional Regularisation for Continual Learning with Gaussian Processes
We introduce a framework for Continual Learning (CL) based on Bayesian inference over the function space rather than the parameters of a deep neural network. This method, referred to as functional regularisation for Continual Learning, avoids forgetting a previous task by constructing and memorising an approximate posterior belief over the underlying task-specific function. To achieve this we rely on a Gaussian process obtained by treating the weights of the last layer of a neural network as random and Gaussian distributed. Then, the training algorithm sequentially encounters tasks and constructs posterior beliefs over the task-specific functions by using inducing point sparse Gaussian process methods. At each step a new task is first learnt and then a summary is constructed consisting of (i) inducing inputs – a fixed-size subset of the task inputs selected such that it optimally represents the task – and (ii) a posterior distribution over the function values at these inputs. This summary then regularises learning of future tasks, through Kullback-Leibler regularisation terms. Our method thus unites approaches focused on (pseudo-)rehearsal with those derived from a sequential Bayesian inference perspective in a principled way, leading to strong results on accepted benchmarks.
accept-poster
The authors introduce a framework for continual learning in neural networks based on sparse Gaussian process methods. The reviewers had a number of questions and concerns, that were adequately addressed during the discussion phase. This is an interesting addition to the continual learning literature. Please be sure to update the paper based on the discussion.
train
[ "HkeRF7LnKH", "HklXQqHKir", "HygVsr7miH", "rJgDaEQXsr", "B1gBw8Q7oS", "HJlEyIXXiH", "B1xoID52tS", "HJeWwjLpqH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper develops a continual learning method based on Gaussian Processes (GPs) applied in the way introduced by prior work as Deep Kernel Learning (DKL). The proposed method summarizes tasks as sparse GPs and use them as regularizers for the subsequent tasks in order to avoid catastrophic forgetting. Salleviatin...
[ 3, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_HkxCzeHFDB", "B1gBw8Q7oS", "HkeRF7LnKH", "HJeWwjLpqH", "HkeRF7LnKH", "B1xoID52tS", "iclr_2020_HkxCzeHFDB", "iclr_2020_HkxCzeHFDB" ]
iclr_2020_BkxSmlBFvr
You CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings
Knowledge graph embedding (KGE) models learn algebraic representations of the entities and relations in a knowledge graph. A vast number of KGE techniques for multi-relational link prediction have been proposed in the recent literature, often with state-of-the-art performance. These approaches differ along a number of dimensions, including different model architectures, different training strategies, and different approaches to hyperparameter optimization. In this paper, we take a step back and aim to summarize and quantify empirically the impact of each of these dimensions on model performance. We report on the results of an extensive experimental study with popular model architectures and training strategies across a wide range of hyperparameter settings. We found that when trained appropriately, the relative performance differences between various model architectures often shrinks and sometimes even reverses when compared to prior results. For example, RESCAL~\citep{nickel2011three}, one of the first KGE models, showed strong performance when trained with state-of-the-art techniques; it was competitive to or outperformed more recent architectures. We also found that good (and often superior to prior studies) model configurations can be found by exploring relatively few random samples from a large hyperparameter space. Our results suggest that many of the more advanced architectures and techniques proposed in the literature should be revisited to reassess their individual benefits. To foster further reproducible research, we provide all our implementations and experimental results as part of the open source LibKGE framework.
accept-poster
The authors analyze knowledge graph embedding models for multi-relational link prediction. Three reviewers like the work and recommend acceptance. The paper further received several positive comments from the public. This is solid work and should be accepted.
train
[ "SyeyqnfijH", "S1x__2kciB", "SygQK6kqoS", "S1g1xsJqoH", "H1gAye5RtB", "Syljyis0Kr", "SkemmQtecS", "r1xoUNL2YB", "BkgToE_PtH", "SJgHpTUPtr", "H1edT58ztH", "BklXjDgCur", "HygxCrNiOB", "r1x8cXwS_B", "HkeWpqyWdr", "HJxeXqyb_H", "BkgFtc1Z_H", "rylDuHjyuH", "Hkg-AmjJdH", "r1g5orsAPH"...
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "public", "public", "author", "public", "author", "author", "author", "public", "public", "public" ]
[ "We have added Fig. 9 to our paper along the lines discussed above. The figure suggests that decent (but often not very good) configurations can be found by simply training for less than 400 epochs.", "We thank you for your feedback and appreciate your support. In what follows, we briefly comment on the points ra...
[ -1, -1, -1, -1, 6, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 3, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "S1x__2kciB", "Syljyis0Kr", "H1gAye5RtB", "SkemmQtecS", "iclr_2020_BkxSmlBFvr", "iclr_2020_BkxSmlBFvr", "iclr_2020_BkxSmlBFvr", "BkgToE_PtH", "H1edT58ztH", "BklXjDgCur", "iclr_2020_BkxSmlBFvr", "iclr_2020_BkxSmlBFvr", "r1x8cXwS_B", "iclr_2020_BkxSmlBFvr", "rylDuHjyuH", "r1g5orsAPH", ...
iclr_2020_H1eqQeHFDS
AdvectiveNet: An Eulerian-Lagrangian Fluidic Reservoir for Point Cloud Processing
This paper presents a novel physics-inspired deep learning approach for point cloud processing motivated by the natural flow phenomena in fluid mechanics. Our learning architecture jointly defines data in an Eulerian world space, using a static background grid, and a Lagrangian material space, using moving particles. By introducing this Eulerian-Lagrangian representation, we are able to naturally evolve and accumulate particle features using flow velocities generated from a generalized, high-dimensional force field. We demonstrate the efficacy of this system by solving various point cloud classification and segmentation problems with state-of-the-art performance. The entire geometric reservoir and data flow mimic the pipeline of the classic PIC/FLIP scheme in modeling natural flow, bridging the disciplines of geometric machine learning and physical simulation.
accept-poster
This paper treats the task of point cloud learning as a dynamic advection problem in conjunction with a learned background velocity field. The resulting system, which bridges geometric machine learning and physical simulation, achieves promising performance on various classification and segmentation problems. Although the initial scores were mixed, all reviewers converged to acceptance after the rebuttal period. For example, a better network architecture, along with an improved interpolation stencil and initialization, lead to better performance (now rivaling the state-of-the-art) as compared to the original submission. This helps to mitigate an initial reviewer concern in terms of competitiveness with existing methods like PointCNN or SE-Net. Likewise, interesting new experiments such as PIC vs. FLIP were included.
train
[ "ryxSuiIjFB", "H1g3s5IisH", "SJghlklntH", "HJgxfRhEqS" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents a method for point-based learning that is inspired by a hybrid Eulerian-Lagrangian fluid simulation method. The work first explains how the simulation algorithm is mapped to the learning problem: MLPs are employed to learn sets of particle based features which are mapped to a Eulerian grid. A se...
[ 6, -1, 6, 6 ]
[ 5, -1, 5, 3 ]
[ "iclr_2020_H1eqQeHFDS", "iclr_2020_H1eqQeHFDS", "iclr_2020_H1eqQeHFDS", "iclr_2020_H1eqQeHFDS" ]
iclr_2020_Sye57xStvB
Never Give Up: Learning Directed Exploration Strategies
We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies. We construct an episodic memory-based intrinsic reward using k-nearest neighbors over the agent's recent experience to train the directed exploratory policies, thereby encouraging the agent to repeatedly revisit all states in its environment. A self-supervised inverse dynamics model is used to train the embeddings of the nearest neighbour lookup, biasing the novelty signal towards what the agent can control. We employ the framework of Universal Value Function Approximators to simultaneously learn many directed exploration policies with the same neural network, with different trade-offs between exploration and exploitation. By using the same neural network for different degrees of exploration/exploitation, transfer is demonstrated from predominantly exploratory policies yielding effective exploitative policies. The proposed method can be incorporated to run with modern distributed RL agents that collect large amounts of experience from many actors running in parallel on separate environment instances. Our method doubles the performance of the base agent in all hard exploration in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human normalised score of 1344.0%. Notably, the proposed method is the first algorithm to achieve non-zero rewards (with a mean score of 8,400) in the game of Pitfall! without using demonstrations or hand-crafted features.
accept-poster
This paper tackles hard-exploration RL problems. The idea is to learn separate exploration and exploitation strategies using the same network (representation). The exploration is driven by intrinsic rewards, which are generated using an episodic memory and a lifelong novelty modules. Several experiments (simple and Atari domains) show that the proposed approach compares favourably with the baselines. The work is novel both in terms of the episodic curiosity metric and its integration with the life-long curiosity metric, and the results are convincing. All reviewers being positive about this paper, I therefore recommend acceptance.
train
[ "H1lEldIhtH", "BkgvBeU5jr", "H1luPLVOsH", "r1xyl8VOir", "B1euYoQ_iB", "HJxKMiQOoS", "SJlUtgf2Yr", "rkgUCLxTKr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "-after rebuttal:\nI read the replies from the authors and re-read the modified version of the paper and I believe there has been a noticeable improvement in the presentation. I still think it could be improved more (in terms of wording and better exposition of the results) but due to the in-place improvements I in...
[ 6, -1, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, -1, 3, 1 ]
[ "iclr_2020_Sye57xStvB", "B1euYoQ_iB", "H1lEldIhtH", "H1lEldIhtH", "SJlUtgf2Yr", "rkgUCLxTKr", "iclr_2020_Sye57xStvB", "iclr_2020_Sye57xStvB" ]
iclr_2020_ByexElSYDr
Fair Resource Allocation in Federated Learning
Federated learning involves training statistical models in massive, heterogeneous networks. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair (specifically, a more uniform) accuracy distribution across devices in federated networks. To solve q-FFL, we devise a communication-efficient method, q-FedAvg, that is suited to federated networks. We validate both the effectiveness of q-FFL and the efficiency of q-FedAvg on a suite of federated datasets with both convex and non-convex models, and show that q-FFL (along with q-FedAvg) outperforms existing baselines in terms of the resulting fairness, flexibility, and efficiency.
accept-poster
This manuscript proposes and analyzes a federated learning procedure with more uniform performance across devices, motivated as resulting in a fairer performance distribution. The resulting algorithm is tunable in terms of the fairness-performance tradeoff and is evaluated on a variety of datasets. The reviewers and AC agree that the problem studied is timely and interesting, as there is limited work on fairness in federated learning. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty and clarity of the conceptual and empirical results. In reviews and discussion, the reviewers noted insufficient justification of the approach and results, particularly in terms of broad empirical evaluation, and sensitivity of the results to misestimation of various constants. In the opinion of the AC, while the paper can be much improved, it seems to be technically correct, and the results are of sufficiently broad interest to consider publication.
train
[ "SygvdKqYsH", "r1gqMq5KiH", "B1xcCt5Yor", "H1eYY_9KoH", "HJlUqN9Kor", "rJeNTLRhFr", "BJg7OyyM9r", "SJxeQHzEqH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n[Accuracy discrepancies with previous work] \nWe note that the goal of our experiments is not to show superior accuracy on a certain benchmark, but rather to show that our q-FFL objective and our proposed algorithms provide a flexible tradeoff between performance (accuracy) and fairness on a range of datasets. T...
[ -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, 1, 3, 4 ]
[ "H1eYY_9KoH", "rJeNTLRhFr", "BJg7OyyM9r", "SJxeQHzEqH", "iclr_2020_ByexElSYDr", "iclr_2020_ByexElSYDr", "iclr_2020_ByexElSYDr", "iclr_2020_ByexElSYDr" ]
iclr_2020_B1xMEerYvB
Smooth markets: A basic mechanism for organizing gradient-based learners
With the success of modern machine learning, it is becoming increasingly important to understand and control how learning algorithms interact. Unfortunately, negative results from game theory show there is little hope of understanding or controlling general n-player games. We therefore introduce smooth markets (SM-games), a class of n-player games with pairwise zero sum interactions. SM-games codify a common design pattern in machine learning that includes some GANs, adversarial training, and other recent algorithms. We show that SM-games are amenable to analysis and optimization using first-order methods.
accept-poster
The paper discusses smooth market games and demonstrate the merit of the approach. The reviewers agree on the quality of the paper, and the comments have been addressed well by the authors.
val
[ "S1x3tao5iB", "SJljlpi9sr", "SJepamwKYr", "H1gVKATRFS" ]
[ "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their time and detailed feedback. \n\n1. More background material. \nThe reviewer is correct, the paper covers a wide range of topics quite rapidly. We will provide more discussion in the related work section and also the Appendix to help orient readers.\n\n2. Move Lemma 1 before Definiti...
[ -1, -1, 8, 8 ]
[ -1, -1, 3, 1 ]
[ "H1gVKATRFS", "SJepamwKYr", "iclr_2020_B1xMEerYvB", "iclr_2020_B1xMEerYvB" ]
iclr_2020_BJgQ4lSFPH
StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
Recently, the pre-trained language model, BERT (and its robustly optimized version RoBERTa), has attracted a lot of attention in natural language understanding (NLU), and achieved state-of-the-art accuracy in various NLU tasks, such as sentiment classification, natural language inference, semantic textual similarity and question answering. Inspired by the linearization exploration work of Elman, we extend BERT to a new model, StructBERT, by incorporating language structures into pre-training. Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential order of words and sentences, which leverage language structures at the word and sentence levels, respectively. As a result, the new model is adapted to different levels of language understanding required by downstream tasks. The StructBERT with structural pre-training gives surprisingly good empirical results on a variety of downstream tasks, including pushing the state-of-the-art on the GLUE benchmark to 89.0 (outperforming all published models at the time of model submission), the F1 score on SQuAD v1.1 question answering to 93.0, the accuracy on SNLI to 91.7.
accept-poster
This paper proposes a pair of complementary word- and sentence-level pretraining objectives for BERT-style models, and shows that they are empirically effective, especially when used with an already-pretrained RoBERTa model. Work of this kind has been extremely impactful in NLP, and so I'm somewhat biased toward acceptance: If this isn't published, it seems likely that other groups will go to the trouble to replicate roughly these experiments. However, I think the paper is borderline. Reviewers were impressed by the results, but not convinced that the ablations and analyses were sufficient to motivate the proposed methods, suggesting that some variants of the proposed methods could likely be substantially better. In addition, I agree strongly with R3 that framing this work around 'language structure' is disingenuous, and actively misleads readers about the contribution to the paper.
train
[ "HJgNRaZ0tr", "rJxVMwHhsH", "S1gP2REhjH", "H1eXLxr2oB", "rJer0FEJqB", "H1lYNb_e5S" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to use additional structures within and between sentences for pre-training BERT. The basic idea is to shuffle either some n-grams within sentences or the sentences in texts, then train the model to predict the correct orders. Experiments in this work show that, with this additional training obj...
[ 3, -1, -1, -1, 8, 6 ]
[ 5, -1, -1, -1, 4, 3 ]
[ "iclr_2020_BJgQ4lSFPH", "HJgNRaZ0tr", "H1lYNb_e5S", "rJer0FEJqB", "iclr_2020_BJgQ4lSFPH", "iclr_2020_BJgQ4lSFPH" ]
iclr_2020_BJg4NgBKvH
Training binary neural networks with real-to-binary convolutions
This paper shows how to train binary networks to within a few percent points (~3-5%) of the full precision counterpart. We first show how to build a strong baseline, which already achieves state-of-the-art accuracy, by combining recently proposed advances and carefully adjusting the optimization procedure. Secondly, we show that by attempting to minimize the discrepancy between the output of the binary and the corresponding real-valued convolution, additional significant accuracy gains can be obtained. We materialize this idea in two complementary ways: (1) with a loss function, during training, by matching the spatial attention maps computed at the output of the binary and real-valued convolutions, and (2) in a data-driven manner, by using the real-valued activations, available during inference prior to the binarization process, for re-scaling the activations right after the binary convolution. Finally, we show that, when putting all of our improvements together, the proposed model beats the current state of the art by more than 5% top-1 accuracy on ImageNet and reduces the gap to its real-valued counterpart to less than 3% and 5% top-1 accuracy on CIFAR-100 and ImageNet respectively when using a ResNet-18 architecture. Code available at https://github.com/brais-martinez/real2binary
accept-poster
This paper proposes methodology to train binary neural networks. The reviewers and authors engaged in a constructive discussion. All the reviewers like the contributions of the paper. Acceptance is therefore recommended.
train
[ "rkedHE72sS", "B1eQf7Whor", "HygFsW-hir", "rJe8vWZ2iH", "Skl1k-bniB", "B1eNcAxhsS", "S1lgK6m-oB", "S1eork6liS", "B1l7Lb0ysS", "rygS_Rv15B", "HJx2ejZS5r", "Hkls16xPqS", "B1lneTXwcH", "SygUKEvt5S", "HyeRiKbtcH", "B1lW7dAwqH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "R4.1: yes, 2bits NNs and 1bit NNs are not straightforwardly comparable. R4.4 this makes the comparison between models difficult as the overall performance (e.g. latency) highly depends on the implementation (see the answer from Da Quexian: about latency on real devices). R4.3 as data augmentation and mix-up are co...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 1, 1, -1, -1, -1 ]
[ "B1eNcAxhsS", "rygS_Rv15B", "HJx2ejZS5r", "Skl1k-bniB", "Hkls16xPqS", "B1lneTXwcH", "HJx2ejZS5r", "B1l7Lb0ysS", "iclr_2020_BJg4NgBKvH", "iclr_2020_BJg4NgBKvH", "iclr_2020_BJg4NgBKvH", "iclr_2020_BJg4NgBKvH", "iclr_2020_BJg4NgBKvH", "HyeRiKbtcH", "B1lW7dAwqH", "iclr_2020_BJg4NgBKvH" ]
iclr_2020_SylVNerFvr
Permutation Equivariant Models for Compositional Generalization in Language
Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for natural language modeling fail when such compositional generalization is required. The main contribution of this paper is to hypothesize that language compositionality is a form of group-equivariance. Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models. Throughout a variety of experiments on the SCAN tasks, we analyze the behavior of existing models under the lens of equivariance, and demonstrate that our equivariant architecture is able to achieve the type of compositional generalization required in human language understanding.
accept-poster
This paper proposes an equivariant sequence-to-sequence model for dealing with compositionality of language. They show these models are better at SCAN tasks. Reviewers expressed two major concerns: 1) Limited clarity of section 4 which makes the paper difficult to understand. 2) Whether this could generalize to more complex types of compositionality. Authors responded by revising Section 4 and answering the question of generalization. While the reviewers are not 100% satisfied, they agree there is enough novel contribution in this paper. I thank the authors for submitting and look forward to seeing a clearer revision in the conference.
train
[ "SJlQ3BLkqr", "Hyl3-5ICFS", "H1eazy1iiS", "rJgX-aA5iS", "ByeyyTRqor", "BJgrD305oB", "Hyluf3R9jB", "HkgCK-1RKB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\nSummary\n---\n\n(motivation)\nConsider SCAN, a synthetic task where setences like S1=\"jump twice and run left\" are supposed to be translated into action sequences like A1=JUMP JUMP LTURN RUN. One might replace the word \"jump\" in S1 with \"walk\" then translate to get A2=WALK WALK LTURN RUN. If instead S1 is ...
[ 8, 6, -1, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_SylVNerFvr", "iclr_2020_SylVNerFvr", "iclr_2020_SylVNerFvr", "ByeyyTRqor", "Hyl3-5ICFS", "SJlQ3BLkqr", "HkgCK-1RKB", "iclr_2020_SylVNerFvr" ]
iclr_2020_HJloElBYvB
Phase Transitions for the Information Bottleneck in Representation Learning
In the Information Bottleneck (IB), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what's their relationship with the dataset and the learned representation? In this paper, we set out to answer these questions by studying multiple phase transitions in the IB objective: IB_β[p(z|x)] = I(X; Z) − βI(Y; Z) defined on the encoding distribution p(z|x) for input X, target Y and representation Z, where sudden jumps of dI(Y; Z)/dβ and prediction accuracy are observed with increasing β. We introduce a definition for IB phase transitions as a qualitative change of the IB loss landscape, and show that the transitions correspond to the onset of learning new classes. Using second-order calculus of variations, we derive a formula that provides a practical condition for IB phase transitions, and draw its connection with the Fisher information matrix for parameterized models. We provide two perspectives to understand the formula, revealing that each IB phase transition is finding a component of maximum (nonlinear) correlation between X and Y orthogonal to the learned representation, in close analogy with canonical-correlation analysis (CCA) in linear settings. Based on the theory, we present an algorithm for discovering phase transition points. Finally, we verify that our theory and algorithm accurately predict phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent phase transitions in CIFAR10.
accept-poster
This submission presents a theoretical study of phase transitions in IB: adjusting the IB parameter leads to step-wise behaviour of the prediction. Quoting R3: “The core result is given by theorem 1: the phase transition betas necessarily satisfy an equation, where the LHS is expressed in terms of an optimal perturbation of the encoding function X->Z.” This paper received a borderline review and two votes for weak accept. The main comment for the borderline review was about the rigor of a proof and the use of << symbols. The authors have updated the proof using limits as requested, addressing this primary concern. On the balance, the paper makes a strong contribution to understanding an important learning setting and a contribution to theoretical understanding of the behavior of information bottleneck predictors.
val
[ "BylPScBuoB", "SkxA-tDviH", "BkeEi_wPjS", "r1lf7dvPsS", "SklpmME05H", "HkgODnIrtr", "r1gosu5aFB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewers for the constructive reviews! We have revised the paper according to the reviewers' comments, and provided detailed responses to each reviewer. A summary of the modification in the revised paper is as follows:\n\n(1) We have rewritten the theorems and proofs using limits instea...
[ -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, 1, 3, 4 ]
[ "iclr_2020_HJloElBYvB", "HkgODnIrtr", "r1gosu5aFB", "SklpmME05H", "iclr_2020_HJloElBYvB", "iclr_2020_HJloElBYvB", "iclr_2020_HJloElBYvB" ]
iclr_2020_HkejNgBtPB
Variational Template Machine for Data-to-Text Generation
How to generate descriptions from structured data organized in tables? Existing approaches using neural encoder-decoder models often suffer from lacking diversity. We claim that an open set of templates is crucial for enriching the phrase constructions and realizing varied generations. Learning such templates is prohibitive since it often requires a large paired <table,description> corpus, which is seldom available. This paper explores the problem of automatically learning reusable "templates" from paired and non-paired data. We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables. Our contributions include: a) we carefully devise a specific model architecture and losses to explicitly disentangle text template and semantic content information in the latent spaces, and b) we utilize both small parallel data and large raw text without aligned tables to enrich the template learning. Experiments on datasets from a variety of different domains show that VTM is able to generate more diverse descriptions while maintaining good fluency and quality.
accept-poster
The paper addresses the problem of generating descriptions from structured data. In particular, the authors propose a Variational Template Machine which explicitly disentangles templates from semantic content. They empirically demonstrate that their model performs better than existing methods on different datasets. This paper has received a strong acceptance from two reviewers. In particular, the reviewers have appreciated the novelty and empirical evaluation of the proposed approach. R3 has raised quite a few concerns but I feel they were adequately addressed by the authors. Hence, I recommend that the paper be accepted.
train
[ "rkxYRkH3sH", "HJepPJHniB", "HJg5xRE3jS", "ryxcKa4hsr", "B1x-h8r9dH", "HJlIc6pdYH", "SkeetYT-5H" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks very much for your valuable comments.\n\nQ: It would be good if the authors could provide an analysis of the computational costs of their methods, as well as of the considered competitors. \n\nA: We compare the training and testing time cost on the WIKI dataset, and with raw data added, VTM spends more time...
[ -1, -1, -1, -1, 8, 3, 8 ]
[ -1, -1, -1, -1, 5, 5, 1 ]
[ "B1x-h8r9dH", "HJlIc6pdYH", "SkeetYT-5H", "iclr_2020_HkejNgBtPB", "iclr_2020_HkejNgBtPB", "iclr_2020_HkejNgBtPB", "iclr_2020_HkejNgBtPB" ]
iclr_2020_r1laNeBYPB
Memory-Based Graph Networks
Graph neural networks (GNNs) are a class of deep models that operate on data with arbitrary topology represented as graphs. We introduce an efficient memory layer for GNNs that can jointly learn node representations and coarsen the graph. We also introduce two new networks based on this layer: memory-based GNN (MemGNN) and graph memory network (GMN) that can learn hierarchical graph representations. The experimental results show that the proposed models achieve state-of-the-art results in eight out of nine graph classification and regression benchmarks. We also show that the learned representations could correspond to chemical features in the molecule data.
accept-poster
Four reviewers have assessed this paper and they have scored it as 6/6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission.
train
[ "B1xbJPHpYH", "SkgoeVO2jS", "Hkgk-7wqoH", "rJgM2hZeqB", "BJlnBpQtir", "Byxl0jmYoS", "HkgA4sXKjS", "H1gxH9XKiS", "BJleWuQFjB", "BygMPvXtjH", "B1xaBdyN9S", "BJeuzS4ucB" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents \"memory layer\" to simultaneously do graph representation learning and pooling in a hierarchical way. It shares the same spirit with the previous models (DiffPool and Mincut pooling) which cluster nodes and learn representation of the coarsened graph. In DiffPool, Graph convolutional Neural Net...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_r1laNeBYPB", "Hkgk-7wqoH", "H1gxH9XKiS", "iclr_2020_r1laNeBYPB", "iclr_2020_r1laNeBYPB", "B1xbJPHpYH", "B1xbJPHpYH", "rJgM2hZeqB", "B1xaBdyN9S", "BJeuzS4ucB", "iclr_2020_r1laNeBYPB", "iclr_2020_r1laNeBYPB" ]