paper_id — stringlengths 19–21
paper_title — stringlengths 8–170
paper_abstract — stringlengths 8–5.01k
paper_acceptance — stringclasses (18 values)
meta_review — stringlengths 29–10k
label — stringclasses (3 values)
review_ids — list
review_writers — list
review_contents — list
review_ratings — list
review_confidences — list
review_reply_tos — list
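The schema above can be sketched as a plain Python record. This is an illustrative reconstruction, not an official loader; the field names come from the schema listing and the example values are copied from the first row of the dump.

```python
# Illustrative record matching the schema; values copied from the first
# row of the dump (paper iclr_2020_rklOg6EFwS). Long text fields
# (paper_abstract, meta_review, review_contents) are omitted for brevity.
record = {
    "paper_id": "iclr_2020_rklOg6EFwS",
    "paper_title": "Improving Adversarial Robustness Requires "
                   "Revisiting Misclassified Examples",
    "paper_acceptance": "accept-poster",
    "label": "train",
    "review_ids": ["H1eyg-mJ9H", "SklVmkA9ir", "BylGoTXcjH", "BklJo_3Bir",
                   "SJlUNm3HsS", "S1lh5x3SsH", "Hyl_k_T2YH", "HJxsqDY7cH"],
    "review_writers": ["official_reviewer", "author", "official_reviewer",
                       "author", "author", "author", "official_reviewer",
                       "official_reviewer"],
    "review_ratings": [6, -1, -1, -1, -1, -1, 6, 8],   # -1: not a scored review
    "review_confidences": [5, -1, -1, -1, -1, -1, 3, 5],
}

# The review_* lists are parallel: one entry per forum comment.
assert len(record["review_ids"]) == len(record["review_ratings"])
```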
iclr_2020_rklOg6EFwS
Improving Adversarial Robustness Requires Revisiting Misclassified Examples
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by imperceptible perturbations. A range of defense techniques have been proposed to improve DNN robustness to adversarial examples, among which adversarial training has been demonstrated to be the most effective. Adversarial training is often formulated as a min-max optimization problem, with the inner maximization for generating adversarial examples. However, there exists a simple, yet easily overlooked fact that adversarial examples are only defined on correctly classified (natural) examples, but inevitably, some (natural) examples will be misclassified during training. In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training. Specifically, we find that misclassified examples indeed have a significant impact on the final robustness. More surprisingly, we find that different maximization techniques on misclassified examples may have a negligible influence on the final robustness, while different minimization techniques are crucial. Motivated by the above discovery, we propose a new defense algorithm called {\em Misclassification Aware adveRsarial Training} (MART), which explicitly differentiates the misclassified and correctly classified examples during the training. We also propose a semi-supervised extension of MART, which can leverage the unlabeled data to further improve the robustness. Experimental results show that MART and its variant could significantly improve the state-of-the-art adversarial robustness.
accept-poster
This paper presents modifications to the adversarial training loss that yield improvements in adversarial robustness. While some reviewers were concerned by the lack of mathematical elegance in the proposed method, there is consensus that the proposed method clears a tough bar by increasing SOTA robustness on CIFAR-10.
train
[ "H1eyg-mJ9H", "SklVmkA9ir", "BylGoTXcjH", "BklJo_3Bir", "SJlUNm3HsS", "S1lh5x3SsH", "Hyl_k_T2YH", "HJxsqDY7cH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper essentially presented a viewpoint, i.e. misclassified examples may have a significant impact on the final robustness.\nThe authors conducted a series of qualitative experiments to verify \n1) Misclassified examples have more impact on the final robustness than correctly classified examples.\n2) For miscl...
[ 6, -1, -1, -1, -1, -1, 6, 8 ]
[ 5, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_rklOg6EFwS", "BylGoTXcjH", "SJlUNm3HsS", "Hyl_k_T2YH", "H1eyg-mJ9H", "HJxsqDY7cH", "iclr_2020_rklOg6EFwS", "iclr_2020_rklOg6EFwS" ]
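The review_reply_tos field appears to encode the forum thread: an entry equal to the paper id marks a top-level review, while any other entry names the comment being replied to. A minimal sketch of reconstructing that tree from the record above (this reading of the field is an assumption inferred from the data, not documented behavior):

```python
# Ids copied from the first record of the dump.
paper_id = "iclr_2020_rklOg6EFwS"
review_ids = ["H1eyg-mJ9H", "SklVmkA9ir", "BylGoTXcjH", "BklJo_3Bir",
              "SJlUNm3HsS", "S1lh5x3SsH", "Hyl_k_T2YH", "HJxsqDY7cH"]
reply_tos = ["iclr_2020_rklOg6EFwS", "BylGoTXcjH", "SJlUNm3HsS",
             "Hyl_k_T2YH", "H1eyg-mJ9H", "HJxsqDY7cH",
             "iclr_2020_rklOg6EFwS", "iclr_2020_rklOg6EFwS"]

# Group each comment under its parent; top-level reviews hang off the paper id.
children = {}
for cid, parent in zip(review_ids, reply_tos):
    children.setdefault(parent, []).append(cid)

top_level = children[paper_id]  # the three official reviews of this paper
```

Note that the three top-level ids line up with the three entries whose writer is "official_reviewer" and whose rating is not -1, which supports this interpretation.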
iclr_2020_SylOlp4FvH
V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control
Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported. V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.
accept-poster
This paper proposes an extension of MPO for on-policy reinforcement learning. The proposed method achieved promising results in a relatively hyper-parameter-insensitive manner. One concern of the reviewers is the lack of comparison with previous works, such as the original MPO, which has been partially addressed by the authors in the rebuttal. In addition, Blind Review #3 has some concerns with the fairness of the experimental comparison, though the other reviewers accept the comparison on standardized benchmarks. Overall, the paper proposes a promising extension of MPO; thus, I recommend it for acceptance.
val
[ "H1eIln83iB", "SJgQXIShiS", "H1xZZD2KsB", "SyxO1w3tir", "Bygmi8hFor", "HklBFUnFsB", "rkxb78hYsS", "r1guwy_2FB", "SyeDKDGJcr", "HJg9i7Sl5r", "HJgj-EQz_H", "HygILBnCDB" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We have added explicit references to Section 4.1 and 4.2 and now preview the derivation. This does indeed improve readability, thank you. We also fixed the typo, thanks for reading so carefully!\n\nWe have now removed the stop gradient from the derivation of the M-step, as the algorithm itself does not require it....
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, -1, -1 ]
[ "SJgQXIShiS", "Bygmi8hFor", "SyxO1w3tir", "r1guwy_2FB", "HklBFUnFsB", "SyeDKDGJcr", "HJg9i7Sl5r", "iclr_2020_SylOlp4FvH", "iclr_2020_SylOlp4FvH", "iclr_2020_SylOlp4FvH", "HygILBnCDB", "iclr_2020_SylOlp4FvH" ]
iclr_2020_S1xFl64tDr
Interpretable Complex-Valued Neural Networks for Privacy Protection
Previous studies have found that an adversary can often infer unintended input information from intermediate-layer features. We study the possibility of preventing such adversarial inference, yet without too much accuracy degradation. We propose a generic method to revise the neural network to boost the challenge of inferring input attributes from features, while maintaining highly accurate outputs. In particular, the method transforms real-valued features into complex-valued ones, in which the input is hidden in a randomized phase of the transformed features. The knowledge of the phase acts like a key, with which any party can easily recover the output from the processing result, but without which the party can neither recover the output nor distinguish the original input. Preliminary experiments on various datasets and network structures have shown that our method significantly diminishes the adversary's ability to infer information about the input while largely preserving the resulting accuracy.
accept-poster
The reviewers are unanimous in their opinion that this paper offers a novel approach to secure edge learning. I concur. Reviewers mention clarity concerns, but I find the latest revision clear enough.
train
[ "SJeLYESoKB", "r1gdMyUpYB", "B1lGNL4qiS", "r1g9Xr45jH", "rklPgr4qoS", "ryxy2OvJjS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "After rebuttal,\n\nI really appreciate the authors' effort during the rebuttal, and most of my concerns are addressed well. \n\n===\n\nSummary:\n\nThis paper proposed a complex-valued neural network to protect the input data from hidden features of DNNs. Specifically, the authors introduce (1) encoder: producing a...
[ 6, 6, -1, -1, -1, 6 ]
[ 1, 1, -1, -1, -1, 1 ]
[ "iclr_2020_S1xFl64tDr", "iclr_2020_S1xFl64tDr", "SJeLYESoKB", "r1gdMyUpYB", "ryxy2OvJjS", "iclr_2020_S1xFl64tDr" ]
iclr_2020_r1gixp4FPH
Accelerating SGD with momentum for over-parameterized learning
Nesterov SGD is widely used for training modern neural networks and other machine learning models. Yet, its advantages over SGD have not been theoretically clarified. Indeed, as we show in this paper, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleration over ordinary SGD. Furthermore, Nesterov SGD may diverge for step sizes that ensure convergence of ordinary SGD. This is in contrast to the classical results in the deterministic setting, where the same step size ensures accelerated convergence of Nesterov's method over optimal gradient descent. To address the non-acceleration issue, we introduce a compensation term to Nesterov SGD. The resulting algorithm, which we call MaSS, converges for the same step sizes as SGD. We prove that MaSS obtains accelerated convergence rates over SGD for any mini-batch size in the linear setting. For full batch, the convergence rate of MaSS matches the well-known accelerated rate of Nesterov's method. We also analyze the practically important question of the dependence of the convergence rate and optimal hyper-parameters on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns and saturation. Experimental evaluation of MaSS for several standard architectures of deep networks, including ResNet and convolutional networks, shows improved performance over SGD, Nesterov SGD and Adam.
accept-poster
The authors provide an empirical and theoretical exploration of Nesterov momentum, particularly in the over-parametrized settings. Nesterov momentum has attracted great interest at various times in deep learning, but its properties and practical utility are not well understood. This paper makes an important step towards shedding some light on this approach for training models with a large number of parameters.
train
[ "BJgXkQrmoS", "SyldVMSXiB", "S1lLS-GZoH", "SkgtBCT1YH", "rJly18kqKr", "r1gR1zv3KB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the encouraging comments. \n\n>>”In H2, it is mentioned that the algorithm is restarted (the momentum is reset) when the learning rate is annealed. Was this also done for SGD+nesterov? Also, I think it is an important implementation detail that should be mentioned outside of the appendix”\n\nThe repo...
[ -1, -1, -1, 3, 8, 8 ]
[ -1, -1, -1, 5, 4, 3 ]
[ "r1gR1zv3KB", "rJly18kqKr", "SkgtBCT1YH", "iclr_2020_r1gixp4FPH", "iclr_2020_r1gixp4FPH", "iclr_2020_r1gixp4FPH" ]
iclr_2020_B1esx6EYvr
A critical analysis of self-supervision, or what we can learn from a single image
We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can learn the first few layers of a convolutional network from a single image as well as using millions of images and manual labels, provided that strong data augmentation is used. However, for deeper layers the gap with manual supervision cannot be closed even if millions of unlabelled images are used for training. We conclude that: (1) the weights of the early layers of deep networks contain limited information about the statistics of natural images, that (2) such low-level statistics can be learned through self-supervision just as well as through strong supervision, and that (3) the low-level statistics can be captured via synthetic transformations instead of using a large image dataset.
accept-poster
This paper studies the effectiveness of self-supervised approaches by characterising how much information they can extract from a given dataset of images on a per-layer basis. Based on an empirical evaluation of RotNet, BiGAN, and DeepCluster, the authors argue that the early layers of CNNs can be effectively learned from a single image coupled with strong data augmentation. Secondly, the authors provide some empirical evidence that supervision might still be necessary to learn the deeper layers (even in the presence of millions of images for self-supervision). Overall, the reviews agree that the paper is well written and timely given the growing popularity of self-supervised methods. Given that most of the issues raised by the reviewers were adequately addressed in the rebuttal, I will recommend acceptance. We ask the authors to include the additional experiments requested by the reviewers (they are valuable even if the conclusions are not perfectly aligned with the main message).
train
[ "Ske6OaZ-qr", "BkgPU07CFS", "r1lgQemhjB", "BJx7C1bfjH", "Hkl_Uybzor", "Syg3upgMjH", "r1x7498kcS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Update 11/21\nWith the additional experiments (testing a new image, testing fine-tuning of hand-crafted features), additions to related work, and clarifications, I am happy to raise my score to accept. Overall, I think this paper is a nice sanity check on recent self-supervision methods. In the future, I am quite ...
[ 6, 6, -1, -1, -1, -1, 1 ]
[ 3, 4, -1, -1, -1, -1, 5 ]
[ "iclr_2020_B1esx6EYvr", "iclr_2020_B1esx6EYvr", "iclr_2020_B1esx6EYvr", "BkgPU07CFS", "r1x7498kcS", "Ske6OaZ-qr", "iclr_2020_B1esx6EYvr" ]
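Because -1 in review_ratings and review_confidences marks comments that carry no score (author and public replies), aggregate statistics should skip that sentinel. A sketch using the ratings from the record above; the -1 convention is an assumption inferred from its alignment with review_writers:

```python
# Ratings and confidences copied from the record above
# (paper iclr_2020_B1esx6EYvr); -1 marks unscored replies.
ratings = [6, 6, -1, -1, -1, -1, 1]
confidences = [3, 4, -1, -1, -1, -1, 5]

# Keep only the actual review scores before averaging.
scored = [r for r in ratings if r != -1]
mean_rating = sum(scored) / len(scored)
print(round(mean_rating, 2))  # average over the three official reviews
```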
iclr_2020_SygagpEKwB
Disentangling Factors of Variations Using Few Labels
Learning disentangled representations is considered a cornerstone problem in representation learning. Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow to consistently learn disentangled representations. However, in many practical settings, one might have access to a limited amount of supervision, for example through manual labeling of (some) factors of variation in a few training examples. In this paper, we investigate the impact of such supervision on state-of-the-art disentanglement methods and perform a large scale study, training over 52000 models under well-defined and reproducible experimental conditions. We observe that a small number of labeled examples (0.01--0.5% of the data set), with potentially imprecise and incomplete labels, is sufficient to perform model selection on state-of-the-art unsupervised models. Further, we investigate the benefit of incorporating supervision into the training process. Overall, we empirically validate that with little and imprecise supervision it is possible to reliably learn disentangled representations.
accept-poster
This paper addresses the problem of learning disentangled representations and shows that the introduction of a few labels corresponding to the desired factors of variation can be used to increase the separation of the learned representation. There were mixed scores for this work. Two reviewers recommended weak acceptance while one reviewer recommended rejection. All reviewers and authors agreed that the main conclusion, that the labeled factors of variation can be used to improve disentanglement, is perhaps expected. However, reviewers 2 and 3 argue that this work presents extensive experimental evidence to support this claim, which will be of value to the community. The main concerns of R1 center around a lack of clear analysis and synthesis of the large number of experiments. Though there is a page limit, we encourage the authors to revise their manuscript with a specific focus on clarity and take-away messages from their results. After careful consideration of all reviewer comments and author rebuttals, the AC recommends acceptance of this work. The potential contribution of the extensive experimental evidence warrants presentation at ICLR. However, again, we encourage the authors to consider ways to mitigate the concerns of R1 in their final manuscript.
train
[ "r1lytPh9ir", "ryxyH98XYr", "rJeLLYEdsB", "BklhRqW7sr", "Hklmq5ZQjH", "S1gcXcbXsB", "SJeN77VTYH", "r1x0CupCYH" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "GENERAL COMMENT:\nWe disagree with the reviewer's post-rebuttal response that \"no clarifications were made\". The reviewer's main concern appears to be \"This paper needs a substantial rewrite to make clear what specific contributions are from the multitude of experiments run in this study.\" We strongly disagree...
[ -1, 1, -1, -1, -1, -1, 6, 6 ]
[ -1, 1, -1, -1, -1, -1, 3, 3 ]
[ "BklhRqW7sr", "iclr_2020_SygagpEKwB", "iclr_2020_SygagpEKwB", "ryxyH98XYr", "SJeN77VTYH", "r1x0CupCYH", "iclr_2020_SygagpEKwB", "iclr_2020_SygagpEKwB" ]
iclr_2020_Bylx-TNKvH
Functional vs. parametric equivalence of ReLU networks
We address the following question: How redundant is the parameterisation of ReLU networks? Specifically, we consider transformations of the weight space which leave the function implemented by the network intact. Two such transformations are known for feed-forward architectures: permutation of neurons within a layer, and positive scaling of all incoming weights of a neuron coupled with inverse scaling of its outgoing weights. In this work, we show for architectures with non-increasing widths that permutation and scaling are in fact the only function-preserving weight transformations. For any eligible architecture we give an explicit construction of a neural network such that any other network that implements the same function can be obtained from the original one by the application of permutations and rescaling. The proof relies on a geometric understanding of boundaries between linear regions of ReLU networks, and we hope the developed mathematical tools are of independent interest.
accept-poster
This work proves that the weights of feed-forward ReLU networks are determined, up to a specified set of symmetries, by the functions they define. Reviewers found the paper easy to read and the proof technically sound. There was some debate over the motivation for the paper: Reviewer 1 argues that there is no practical significance to the result, a point that the authors do not deny. I appreciate the concerns raised by Reviewer 1; theorists in machine learning should think carefully about the motivation for their work. However, while there is no clear practical significance of this work, I believe there is value in accepting it. Because the question concerns a sufficiently fundamental property of neural networks, and the proof is both easy to read and provides insights into a well-studied class of models, I believe many researchers will find value in reading this paper.
val
[ "SJxkVFSoKB", "HkgHROFFsS", "SylTylPHir", "BygPU3ISor", "HJxI1gIriH", "Bygn2z2nYr", "HygiRiyRtB", "BJlnq_gZKS", "Bkef8RXAuS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "\nIn this paper, the authors studied the equivalence class of ReLU networks with non-increasing weights, and proved that permutation and scaling are the only function preserving weight transformations. The proof technique is novel, and provides some insights in the geometry space of the loss surface. I think the p...
[ 3, -1, -1, -1, -1, 8, 6, -1, -1 ]
[ 5, -1, -1, -1, -1, 1, 1, -1, -1 ]
[ "iclr_2020_Bylx-TNKvH", "SJxkVFSoKB", "SJxkVFSoKB", "Bygn2z2nYr", "HygiRiyRtB", "iclr_2020_Bylx-TNKvH", "iclr_2020_Bylx-TNKvH", "Bkef8RXAuS", "iclr_2020_Bylx-TNKvH" ]
iclr_2020_SyxIWpVYvr
Input Complexity and Out-of-distribution Detection with Likelihood-based Generative Models
Likelihood-based generative models are a promising resource to detect out-of-distribution (OOD) inputs which could compromise the robustness or reliability of a machine learning system. However, likelihoods derived from such models have been shown to be problematic for detecting certain types of inputs that significantly differ from training data. In this paper, we posit that this problem is due to the excessive influence that input complexity has on generative models' likelihoods. We report a set of experiments supporting this hypothesis, and use an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be seen as a likelihood ratio, akin to Bayesian model comparison. We find this score to perform comparably to, or even better than, existing OOD detection approaches under a wide range of data sets, models, model sizes, and complexity estimates.
accept-poster
I have read the paper and the reviews carefully. Despite the numerical scores, I think this paper is above the bar for ICLR, and recommend acceptance. This paper addresses the now-well-known problem that generative models often assign higher likelihoods to out-of-distribution examples, rendering likelihoods useless for OOD detection. They diagnose this as resulting from differences in compressibility of the input, and propose to compensate for this by comparing the log-likelihood to the description length from a strong image compressor. They show this performs well against a variety of OOD detection methods. The idea is a natural one, and certainly should have been one of the first things tried in addressing this phenomenon. I'm a little surprised it hasn't been done before, but none of the reviewers or I are aware of a prior reference, so AFAIK it's novel. One reviewer believes the contribution is small; while it's simple, I think the field will benefit from a careful implementation and testing of this approach. Multiple reviewers raise the concern of whether generative models' bias towards low-complexity inputs is just a matter of needing better generative models. I don't think so: even arbitrarily good generative models will still be limited by the inherent compressibility of an input (e.g. as measured by Kolmogorov complexity). I'm also not concerned about the lack of an explicit threshold; if one has proposed a good score function, there are many ways one could choose a threshold, depending on the task.
train
[ "HygYvt0RFr", "HJlR5sBjoH", "H1gkB-h5or", "rkxYFuWQjB", "SJgrUOZ7jr", "B1e4WOZQor", "SkeLO7hqFr", "SygCwHW0Fr" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper analyzes the peculiar case that deep generative models often assign a higher likelihood to other datasets than they were trained on. The running hypothesis here is, that input complexity plays a central role. Measuring a proxy for input complexity shows that it is tightly anticorrelated with likelihood ...
[ 6, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_SyxIWpVYvr", "H1gkB-h5or", "SJgrUOZ7jr", "HygYvt0RFr", "SygCwHW0Fr", "SkeLO7hqFr", "iclr_2020_SyxIWpVYvr", "iclr_2020_SyxIWpVYvr" ]
iclr_2020_SJgob6NKvH
RTFM: Generalising to New Environment Dynamics via Reading
Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps.
accept-poster
This paper proposes RTFM, a new model in the field of language-conditioned policy learning. This approach is promising and important in reinforcement learning because of the difficulty of learning policies in new environments. Reviewers appreciate the importance of the problem and the effectiveness of the approach. After the author response, which addressed some of the major concerns, reviewers felt more positive about the paper. They comment, though, that the presentation could be clearer, and that the limitations of using synthetic data should be discussed in depth. I thank the authors for submitting this paper.
train
[ "SJerltuHcB", "BylH2AYTFS", "r1eZ4mqgsr", "SJl21mqesB", "Sye4Af9xjS", "rJlzOGclsB", "S1gpAKB15H", "Ske5hhi9DH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author" ]
[ "First of all, I should acknowledge that this is a last-minute emergency review. Second, I should also acknowledge that I haven’t actively followed the specific research topic handled in this paper, although I work quite a lot in reinforcement learning and dialogue systems. The last paper I read on the topic was Br...
[ 6, 6, -1, -1, -1, -1, 6, -1 ]
[ 3, 1, -1, -1, -1, -1, 1, -1 ]
[ "iclr_2020_SJgob6NKvH", "iclr_2020_SJgob6NKvH", "S1gpAKB15H", "Sye4Af9xjS", "BylH2AYTFS", "SJerltuHcB", "iclr_2020_SJgob6NKvH", "iclr_2020_SJgob6NKvH" ]
iclr_2020_B1l2bp4YwS
What graph neural networks cannot learn: depth vs width
This paper studies the expressive power of graph neural networks falling within the message-passing framework (GNNmp). Two results are presented. First, GNNmp are shown to be Turing universal under sufficient conditions on their depth, width, node attributes, and layer expressiveness. Second, it is discovered that GNNmp can lose a significant portion of their power when their depth and width is restricted. The proposed impossibility statements stem from a new technique that enables the repurposing of seminal results from distributed computing and leads to lower bounds for an array of decision, optimization, and estimation problems involving graphs. Strikingly, several of these problems are deemed impossible unless the product of a GNNmp's depth and width exceeds a polynomial of the graph size; this dependence remains significant even for tasks that appear simple or when considering approximation.
accept-poster
This paper provides a theoretical background for the expressive power of graph convolutional networks. The results are clearly useful, and the discussion went in a positive direction. All reviewers recommend acceptance, and I am with them.
train
[ "BJxkAi7AFS", "BJgvajA15r", "B1lohYs9oH", "SkglkFo5jr", "ryx1zOoqor", "SkgSO7s9ir", "r1xzqQvMiS", "rkxbq22pYH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "This paper studies theoretical properties of GNN in particular their expressive power. There are many recent works on this topic and the 2019 ICLR paper 'How Powerful are Graph Neural Networks?' is the closes related to this paper. In the 2019 paper connects GNN with the Weisfeiler-Lehman graph isomorphism test i...
[ 6, 8, -1, -1, -1, -1, -1, 8 ]
[ 3, 5, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_B1l2bp4YwS", "iclr_2020_B1l2bp4YwS", "r1xzqQvMiS", "rkxbq22pYH", "BJxkAi7AFS", "BJgvajA15r", "iclr_2020_B1l2bp4YwS", "iclr_2020_B1l2bp4YwS" ]
iclr_2020_BkepbpNFwr
Progressive Memory Banks for Incremental Domain Adaptation
This paper addresses the problem of incremental domain adaptation (IDA) in natural language processing (NLP). We assume each domain comes one after another, and that we could only access data in the current domain. The goal of IDA is to build a unified model performing well on all the domains that we have encountered. We adopt the recurrent neural network (RNN) widely used in NLP, but augment it with a directly parameterized memory bank, which is retrieved by an attention mechanism at each step of RNN transition. The memory bank provides a natural way of performing IDA: when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the number of parameters, and thus the model capacity. We learn the new memory slots and fine-tune existing parameters by back-propagation. Experimental results show that our approach achieves significantly better performance than fine-tuning alone. Compared with expanding hidden states, our approach is more robust for old domains, shown by both empirical and theoretical results. Our model also outperforms previous work of IDA including elastic weight consolidation and progressive neural networks in the experiments.
accept-poster
This paper introduces an RNN-based approach to incremental domain adaptation in natural language processing, where the RNN is progressively augmented with a parameterized memory bank, which is shown to be better than expanding the RNN states. Reviewers and AC acknowledge that this paper is well written with interesting ideas and practical value. Domain adaptation in the incremental setting, where domains come in a streaming way with only the current one accessible, can find realistic application scenarios. The proposed extensible attention mechanism is solid and works well on several NLP tasks. Several concerns were raised by the reviewers regarding the comparative and ablation studies, which were well resolved in the rebuttal. The authors are encouraged to generalize their approach to application domains other than NLP to demonstrate its generality. I recommend acceptance.
val
[ "r1gnymt9cH", "Hklyku7AYB", "r1gIw_JlcB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n###Summary###\nThis paper introduces incremental domain adaptation for natural language processing, assuming that each domain comes one after another and only the current domain can be accessed in the application scenario. The basic framework of this paper is based on RNN but augmented with the directly paramet...
[ 6, 6, 6 ]
[ 4, 4, 3 ]
[ "iclr_2020_BkepbpNFwr", "iclr_2020_BkepbpNFwr", "iclr_2020_BkepbpNFwr" ]
iclr_2020_H1e0Wp4KvH
Automated curriculum generation through setter-solver interactions
Reinforcement learning algorithms use correlations between policies and rewards to improve agent performance. But in dynamic or sparsely rewarding environments these correlations are often too small, or rewarding events are too infrequent, to make learning feasible. Human education instead relies on curricula (the breakdown of tasks into simpler, static challenges with dense rewards) to build up to complex behaviors. While curricula are also useful for artificial agents, hand-crafting them is time consuming. This has led researchers to explore automatic curriculum generation. Here we explore automatic curriculum generation in rich, dynamic environments. Using a setter-solver paradigm we show the importance of considering goal validity, goal feasibility, and goal coverage to construct useful curricula. We demonstrate the success of our approach in rich but sparsely rewarding 2D and 3D environments, where an agent is tasked to achieve a single goal selected from a set of possible goals that varies between episodes, and identify challenges for future work. Finally, we demonstrate the value of a novel technique that guides agents towards a desired goal distribution. Altogether, these results represent a substantial step towards applying automatic task curricula to learn complex, otherwise unlearnable goals, and to our knowledge are the first to demonstrate automated curriculum generation for goal-conditioned agents in environments where the possible goals vary between episodes.
accept-poster
The authors introduce a method to automatically generate a learning curriculum (of goals) in a sparse reward RL setting, examining several criteria for goal setting to induce a useful curriculum. The reviewers agreed that this was an exciting research direction but also had concerns about baseline comparisons, clarity of some technical points, hyperparameter tuning (and the effect on the strength of empirical results), and computational tractability. After discussion, the reviewers felt most of these points were sufficiently addressed. Thus, I recommend acceptance at this time.
train
[ "rJgFyaPJ9S", "HJgk9s5TYH", "HJl9XcMzjr", "rklzjKzfiH", "Bkes_FzMiB", "r1x6rtzziS", "H1lArVYnKS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper tackles the task of automatically inducing a curriculum for agents learning through reinforcement. Specifically, they use two agents — a setter agent that sets goals, and a solver agent that solves the goals provided by the setter. While this has been explored before, the difficulty lies in training bo...
[ 6, 6, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, 3 ]
[ "iclr_2020_H1e0Wp4KvH", "iclr_2020_H1e0Wp4KvH", "H1lArVYnKS", "Bkes_FzMiB", "HJgk9s5TYH", "rJgFyaPJ9S", "iclr_2020_H1e0Wp4KvH" ]
iclr_2020_BJg1f6EFDB
On Identifiability in Transformers
In this paper we delve deep in the Transformer architecture by investigating two of its core components: self-attention and contextual embeddings. In particular, we study the identifiability of attention weights and token embeddings, and the aggregation of context into hidden tokens. We show that, for sequences longer than the attention head dimension, attention weights are not identifiable. We propose effective attention as a complementary tool for improving explanatory interpretations based on attention. Furthermore, we show that input tokens retain to a large degree their identity across the model. We also find evidence suggesting that identity information is mainly encoded in the angle of the embeddings and gradually decreases with depth. Finally, we demonstrate strong mixing of input information in the generation of contextual embeddings by means of a novel quantification method based on gradient attribution. Overall, we show that self-attention distributions are not directly interpretable and present tools to better understand and further investigate Transformer models.
accept-poster
This paper investigates the identifiability of attention distributions in the context of Transformer architectures. The main result is that, if the sentence length is long enough, different choices of attention weights may result in the same contextual embeddings (i.e. the attention weights are not identifiable). A notion of "effective attention" is proposed that projects out the null space from attention weights. In the discussion period, there were some doubts about the technical correctness of the identifiability result that were clarified by the authors. The attention matrix A results from a softmax transformation, therefore each of its rows is constrained to be in the probability simplex -- i.e. we have A >= 0 (elementwise) and A1 = 1. In the present version of the paper, when analyzing the null space of T (Eqs. 4 and 5) this constraint on A is not taken into account. In particular, in Eq. 5 the existence of a \tilde{A} in the null space of T is not clear at all, since for (A + \tilde{A})T = AT to hold we would need to require, besides A >= 0 and A1 = 1, that A + \tilde{A} >= 0 and A1 + \tilde{A}1 = 1, i.e. \tilde{A} >= -A (elementwise) and \tilde{A}1 = 0. The present version of the paper does not make it clear that the intersection of the null space of T with these two constraints is non-empty in general -- which would be necessary for attention not to be identifiable, one of the main points of the paper. The authors acknowledged this concern and provided a proof. I suggest the following simplified version of their proof: We're looking for a vector \tilde{A} satisfying (1) \tilde{A}^T T = 0 (to be in the null space of T), (2) \tilde{A}^T 1 = 0, and (3) \tilde{A} >= -A (to make sure A + \tilde{A} is in the probability simplex). Conditions (1) and (2) are equivalent to requiring \tilde{A} to be in the null space of [T; 1]. It is fine to assume this null space is non-trivial for a general T (it will be a linear subspace of dimension ds - dv - 1). 
To take into account condition (3), here's a simpler proof: since A is a probability vector coming from a softmax transformation (hence it is strictly > 0 elementwise), there is some epsilon > 0 such that any point in the ball centered on 0 with radius epsilon is >= -A. Since the null space of [T; 1] contains 0, any point \tilde{A} in the intersection of this null space with the epsilon-ball above satisfies (1), (2), and (3). This should work for any ds - dv > 1 and as long as A is not a one-hot distribution (otherwise it collapses to the single point \tilde{A} = 0). I am less convinced about the justification for using an "effective attention" which is not in the probability simplex, though (not even in the null space of [T; 1] but only null(T)). That part deserves more clarification in the paper. I recommend acceptance of this paper provided these clarifications are made and the proof is included in the final version.
train
[ "B1xrDOr3jB", "rylD-hNhjB", "Hygm-yVhiS", "BkePFwsijB", "HygsW3p8ir", "Syxjcj68sS", "SygXZjTUir", "rygqy5YTtB", "rkxDvlu0tr", "SyejH3d6FH" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks again for helping us improve the paper.", "I have read the reviews and responses (thank you, authors!) and on the basis of those comments and the numerous clarifications and substantial additions to the paper I am raising my score to 8: Accept.", "Thanks for the clarification and constructive comments. ...
[ -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "rylD-hNhjB", "SygXZjTUir", "BkePFwsijB", "HygsW3p8ir", "SyejH3d6FH", "rygqy5YTtB", "rkxDvlu0tr", "iclr_2020_BJg1f6EFDB", "iclr_2020_BJg1f6EFDB", "iclr_2020_BJg1f6EFDB" ]
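The null-space argument in the meta-review above can be checked numerically. The sketch below is illustrative only (the matrix T, the dimensions ds and dv, and all variable names are stand-ins, not taken from the paper): it draws a random T, picks a direction v in the null space of [T; 1] via the SVD, and shrinks it into the epsilon-ball so that A + \tilde{A} remains in the probability simplex while yielding the same product with T.

```python
import numpy as np

rng = np.random.default_rng(0)
ds, dv = 10, 4                        # sequence length ds > dv + 1 (illustrative sizes)

T = rng.standard_normal((ds, dv))     # stand-in for the value projection in Eqs. 4-5
M = np.hstack([T, np.ones((ds, 1))])  # [T; 1]: vectors with x^T M = 0 meet conditions (1) and (2)

# A strictly positive attention row from a softmax, so A > 0 elementwise and A·1 = 1.
logits = rng.standard_normal(ds)
A = np.exp(logits) / np.exp(logits).sum()

# In a full SVD M = U S V^T, the columns of U beyond rank(M) span {x : x^T M = 0}.
U, _, _ = np.linalg.svd(M, full_matrices=True)
v = U[:, -1]                          # satisfies v^T T = 0 and v^T 1 = 0

# Epsilon-ball step: scale v so every entry of tilde is at most half of min(A),
# guaranteeing A + tilde >= 0 (condition (3)).
eps = 0.5 * A.min() / np.abs(v).max()
tilde = eps * v

same_output = np.allclose((A + tilde) @ T, A @ T)   # (A + tilde) T = A T
still_simplex = (A + tilde > 0).all() and abs((A + tilde).sum() - 1.0) < 1e-9
nontrivial = np.linalg.norm(tilde) > 1e-9           # tilde is not the zero vector
```

As the review notes, the construction degenerates exactly when A is one-hot: then min(A) = 0, eps = 0, and tilde collapses to the zero vector.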
iclr_2020_H1exf64KwH
Exploring Model-based Planning with Policy Networks
Model-based reinforcement learning (MBRL) with model-predictive control or online planning has shown great potential for locomotion control tasks in both sample efficiency and asymptotic performance. Despite the successes, the existing planning methods search from candidate sequences randomly generated in the action space, which is inefficient in complex high-dimensional environments. In this paper, we propose a novel MBRL algorithm, model-based policy planning (POPLIN), that combines policy networks with online planning. More specifically, we formulate action planning at each time-step as an optimization problem using neural networks. We experiment with both optimization w.r.t. the action sequences initialized from the policy network, and also online optimization directly w.r.t. the parameters of the policy network. We show that POPLIN obtains state-of-the-art performance in the MuJoCo benchmarking environments, being about 3x more sample efficient than the state-of-the-art algorithms, such as PETS, TD3 and SAC. To explain the effectiveness of our algorithm, we show that the optimization surface in parameter space is smoother than in action space. Furthermore, we found the distilled policy network can be effectively applied without the expensive model predictive control during test time for some environments such as Cheetah. Code is released.
accept-poster
This paper proposes a model-based policy optimization approach that uses both a policy and model to plan online at test time. The paper includes significant contributions and strong results in comparison to a number of prior works, and is quite relevant to the ICLR community. There are a couple of related works that are missing [1,2] that combine learned policies and learned models, but generally the discussion of prior work is thorough. Overall, the paper is clearly above the bar for acceptance. [1] https://arxiv.org/pdf/1703.04070.pdf [2] https://arxiv.org/pdf/1904.05538.pdf
train
[ "H1e3tFvstH", "B1e6w2V2or", "BklYenE3iB", "BJeLws4nsS", "Hylu2qNhsH", "SJeV9O1dYB", "Hkxyvbc6Fr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThis work provides a novel model-based reinforcement learning algorithm for continuous domains (Mujoco) dubbed POPLIN. The presented algorithm is similar in vein to the state-of-the-art PETS algorithm, a planning algorithm that uses state-unconditioned action proposal distributions to identify good acti...
[ 8, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_H1exf64KwH", "iclr_2020_H1exf64KwH", "SJeV9O1dYB", "H1e3tFvstH", "Hkxyvbc6Fr", "iclr_2020_H1exf64KwH", "iclr_2020_H1exf64KwH" ]
iclr_2020_rke-f6NKvS
Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling
Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently. However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned policy. We introduce a notion of conservatively extrapolated value functions, which provably lead to policies with self-correction. We design an algorithm Value Iteration with Negative Sampling (VINS) that practically learns such value functions with conservative extrapolation. We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks. We also propose the algorithm of using VINS to initialize a reinforcement learning algorithm, which is shown to outperform prior works in sample efficiency.
accept-poster
The paper introduces the Value Iteration with Negative Sampling (VINS) algorithm as a method to accelerate RL using expert demonstrations. VINS learns an initial value function that has smaller values at states not encountered during the demonstrations. The reviewers raised several issues regarding the assumptions, theoretical results, and experiments. The method seems to be most natural for robotic control problems. Nonetheless, it seems that the rebuttal addressed most of the concerns, and two of the reviewers increased their scores accordingly. Since we have three Weak Accepts, I believe this paper can be accepted at the conference.
train
[ "H1gCYBFkcB", "S1liW_icKH", "rkxZvWG3iH", "rJlyzZGnjB", "HJlj1VxhoH", "BkxaamgniH", "Hyg0iQghsS", "Bylgr7yRFr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\nThis paper tackles an issue imitation learning approaches face. More specifically, policies learned in this manner can often fail when they encounter new states not seen in demonstrations. The paper proposes a method for learning value functions that are more conservative on unseen states, which encourages the l...
[ 6, 6, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_rke-f6NKvS", "iclr_2020_rke-f6NKvS", "Bylgr7yRFr", "S1liW_icKH", "H1gCYBFkcB", "H1gCYBFkcB", "H1gCYBFkcB", "iclr_2020_rke-f6NKvS" ]
iclr_2020_SJezGp4YPr
Geometric Insights into the Convergence of Nonlinear TD Learning
While there are convergence guarantees for temporal difference (TD) learning when using linear function approximators, the situation for nonlinear models is far less understood, and divergent examples are known. Here we take a first step towards extending theoretical convergence guarantees to TD learning with nonlinear function approximation. More precisely, we consider the expected learning dynamics of the TD(0) algorithm for value estimation. As the step-size converges to zero, these dynamics are defined by a nonlinear ODE which depends on the geometry of the space of function approximators, the structure of the underlying Markov chain, and their interaction. We find a set of function approximators that includes ReLU networks and has geometry amenable to TD learning regardless of environment, so that the solution performs about as well as linear TD in the worst case. Then, we show how environments that are more reversible induce dynamics that are better for TD learning and prove global convergence to the true value function for well-conditioned function approximators. Finally, we generalize a divergent counterexample to a family of divergent problems to demonstrate how the interaction between approximator and environment can go wrong and to motivate the assumptions needed to prove convergence.
accept-poster
This paper takes steps towards a theory of convergence for TD(0) with non-linear function approximation. The paper provides two theoretical results. One result bounds the error when training the sum of linear and homogenous parameterized functions. The second result shows global convergence when the environment dynamics are sufficiently reversible and the differentiable function approximation is sufficiently well-conditioned. The paper provides additional insight using a family of environments with partially reversible dynamics. The reviewers commented on several aspects of this work. The reviewers wrote that the presentation was clear and that the topic was relevant. The reviewers were satisfied with the correctness of the results. The reviewers liked the result that state value function estimation error is bounded when using homogeneous functions. They also noted that the deep networks in common use are not homogeneous so this result does not apply directly. The result showing global convergence of TD(0) with partial reversibility was also appreciated. Finally, the reviewers liked the family of examples. This paper is acceptable for publication as the presentation was clear, the results are solid, and the research direction could lead to additional insights.
train
[ "ryxJd6UDiB", "HJxnrp8DoS", "B1e8zT8PsB", "HkgWl6IDiH", "B1eT1g545S", "HygHpb4qcr", "rJl4tiBa9S", "HkxUUm6Tcr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the detailed comments and we will respond to each of the comments in order.\n\nBy neighborhood we mean a set containing the true value function. In this case the set that the approximate value function is attracted to is a ball in the mu-norm with radius B as defined in Theorem 1, which ...
[ -1, -1, -1, -1, 8, 6, 3, 8 ]
[ -1, -1, -1, -1, 1, 1, 5, 4 ]
[ "rJl4tiBa9S", "B1eT1g545S", "HygHpb4qcr", "HkxUUm6Tcr", "iclr_2020_SJezGp4YPr", "iclr_2020_SJezGp4YPr", "iclr_2020_SJezGp4YPr", "iclr_2020_SJezGp4YPr" ]
iclr_2020_H1emfT4twB
Few-shot Text Classification with Distributional Signatures
In this paper, we explore meta-learning for few-shot text classification. Meta-learning has shown strong performance in computer vision, where low-level patterns are transferable across learning tasks. However, directly applying this approach to text is challenging--lexical features highly informative for one task may be insignificant for another. Thus, rather than learning solely from words, our model also leverages their distributional signatures, which encode pertinent word occurrence patterns. Our model is trained within a meta-learning framework to map these signatures into attention scores, which are then used to weight the lexical representations of words. We demonstrate that our model consistently outperforms prototypical networks learned on lexical knowledge (Snell et al., 2017) in both few-shot text classification and relation classification by a significant margin across six benchmark datasets (20.0% on average in 1-shot classification).
accept-poster
This paper proposes a meta-learning approach for few-shot text classification. The main idea is to use an attention mechanism over the distributional signatures of the inputs to weight word importance. Experiments on text classification datasets show that the proposed method improves over baselines in 1-shot and 5-shot settings. The paper addresses an important problem of learning from a few labeled examples. The proposed approach makes sense and the results clearly show the strength of the proposed approach. R1 had some questions regarding the proposed method and experimental details. I believe these have been addressed by the authors in their rebuttal. R2 suggested that the authors clarify their experimental setup with respect to prior work and improve the clarity of their paper. The authors have made some adjustments based on this feedback, including adding new sections in the appendix. R3 had concerns regarding the contribution of the approach and whether it trades variance for bias. The authors have addressed most of these concerns and R3 has updated their review accordingly. I think all the reviewers gave valuable feedback that has been incorporated by the authors to improve their paper. While the overall scores remain low, I believe that they would have been increased had R1 and R2 reassessed the revised submission. I recommend to accept this paper.
train
[ "rklewo2-9r", "Bklwa634sB", "ryl0H6hEiS", "Byx7SzJZiB", "Hyl3_-kWsH", "ryeH-6BycH", "BkglFISTYS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "UPDATE: Based on the extensive improvements by the authors, I have updated my rating. However, I still have doubts about the potential of this approach to reach practically useful levels of accuracy.\n\nThis paper introduces a simple method to weight pretrained lexical features for use in meta learning of few-shot...
[ 6, -1, -1, -1, -1, 1, 3 ]
[ 5, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_H1emfT4twB", "BkglFISTYS", "iclr_2020_H1emfT4twB", "rklewo2-9r", "ryeH-6BycH", "iclr_2020_H1emfT4twB", "iclr_2020_H1emfT4twB" ]
iclr_2020_rkeNfp4tPr
Escaping Saddle Points Faster with Stochastic Momentum
Stochastic gradient descent (SGD) with stochastic momentum is popular in nonconvex stochastic optimization and particularly for the training of deep neural networks. In standard SGD, parameters are updated by improving along the path of the gradient at the current iterate on a batch of examples, where the addition of a ``momentum'' term biases the update in the direction of the previous change in parameters. In non-stochastic convex optimization one can show that a momentum adjustment provably reduces convergence time in many settings, yet such results have been elusive in the stochastic and non-convex settings. At the same time, a widely-observed empirical phenomenon is that in training deep networks stochastic momentum appears to significantly improve convergence time, and variants of it have flourished in the development of other popular update methods, e.g. ADAM, AMSGrad, etc. Yet theoretical justification for the use of stochastic momentum has remained a significant open question. In this paper we propose an answer: stochastic momentum improves deep network training because it modifies SGD to escape saddle points faster and, consequently, to more quickly find a second order stationary point. Our theoretical results also shed light on the related question of how to choose the ideal momentum parameter--our analysis suggests that β∈[0,1) should be large (close to 1), which comports with empirical findings. We also provide experimental findings that further validate these conclusions.
accept-poster
This paper studies the impact of using momentum to escape saddle points. The authors show that a heavy use of momentum improves the convergence rate to second order stationary points. The reviewers agreed that this type of analysis is interesting and helps understand the benefits of this standard method in deep learning. The authors were able to address most of the concerns of the reviewers during the rebuttal, but the paper remains borderline due to lingering concerns about the presentation of the results. We encourage the authors to give more thought to the presentation before publication.
train
[ "rJef0QxAKH", "ryxDWRRaFr", "Ske5JRjsiH", "Skxo4PPiir", "rygUZNMssH", "Sye_nW4IjB", "rkgV_lV8jr", "rkxkUk48jB", "B1lLT0QLir", "rylP6jm8or", "BylwHh78ir", "HyeaI6L0KB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "*Summary*\nThis paper studies the impact of momentum for escaping Saddle points with SGD (+momentum) in a non-convex optimization setting. \nThey prove that using a large momentum value (i.e. close to one) provides a better constant for the convergence rate to second order stationary points.\nThe approach is well ...
[ 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_rkeNfp4tPr", "iclr_2020_rkeNfp4tPr", "Sye_nW4IjB", "rygUZNMssH", "rkgV_lV8jr", "HyeaI6L0KB", "rkxkUk48jB", "B1lLT0QLir", "rJef0QxAKH", "ryxDWRRaFr", "rylP6jm8or", "iclr_2020_rkeNfp4tPr" ]
iclr_2020_HJgEMpVFwB
Adversarial Policies: Attacking Deep Reinforcement Learning
Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers. However, an attacker is not usually able to directly modify another agent's observations. This might lead one to wonder: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial? We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots with proprioceptive observations, against state-of-the-art victims trained via self-play to be robust to opponents. The adversarial policies reliably win against the victims but generate seemingly random and uncoordinated behavior. We find that these policies are more successful in high-dimensional environments, and induce substantially different activations in the victim policy network than when the victim plays against a normal opponent. Videos are available at https://adversarialpolicies.github.io/.
accept-poster
This paper demonstrates that for deep RL problems one can construct adversarial policies that do not even need to be stronger than the victim's normal opponents. Surprisingly, the adversarial opponent is sometimes less capable than normal opponents the victim plays successfully against, yet it can still disrupt the victim's policy. The authors present a physically realistic threat model and demonstrate that adversarial policies can exist in this threat model. The reviewers agree that this paper presents results (proof of concept) that are "timely" and that the RL community will benefit from this result. Based on the reviewers' comments, I recommend to accept this paper.
train
[ "SylJdDmKtr", "Sylzno3oiS", "SJl39ihsjB", "Hyg_UVu5iH", "H1eq47OcjH", "S1xWUXjUsH", "S1l7TWAmoH", "rJgWrbR7jH", "BkeFTl0moH", "Syxtou3uFS", "rJlKI8M49H" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank the authors for the response. The update looks good to me. The discussion section looks much better now.\n----------------------------------------\nSummary\nThis paper conducts research on adversarial policy against a fixed and black-box policy (victim). In this setting, the victim has a fixed policy but the...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2020_HJgEMpVFwB", "Hyg_UVu5iH", "H1eq47OcjH", "BkeFTl0moH", "rJlKI8M49H", "rJgWrbR7jH", "SylJdDmKtr", "rJlKI8M49H", "Syxtou3uFS", "iclr_2020_HJgEMpVFwB", "iclr_2020_HJgEMpVFwB" ]
iclr_2020_rJgUfTEYvH
VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation
Generative models that can model and predict sequences of future events can, in principle, learn to capture complex real-world phenomena, such as physical interactions. However, a central challenge in video prediction is that the future is highly uncertain: a sequence of past observations of events can imply many possible futures. Although a number of recent works have studied probabilistic models that can represent uncertain futures, such models are either extremely expensive computationally as in the case of pixel-level autoregressive models, or do not directly optimize the likelihood of the data. To our knowledge, our work is the first to propose multi-frame video prediction with normalizing flows, which allows for direct optimization of the data likelihood, and produces high-quality stochastic predictions. We describe an approach for modeling the latent space dynamics, and demonstrate that flow-based generative models offer a viable and competitive approach to generative modeling of video.
accept-poster
The authors explore the use of flow-based models for video prediction. The idea is interesting and the paper is well-written; it is a good paper worth presenting at ICLR. For the final version, we suggest that the authors significantly improve the experiments: (1) report results on human motion datasets; (2) include results under the FVD metric.
train
[ "HJe8zpxjYr", "BJx9IOSisS", "HylEO9Hoir", "Byxtvr4ssr", "rkgkwXVoir", "S1eSqUQojS", "SkgfX6HsjS", "rylSiSzbsH", "SygDedWRtH", "HJetRuM0KH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "This paper presents a stochastic model based on Glow for conditional video generation. The major novelty of this work is to introduce the flow-based models to video modeling and learn the video dynamics via the dependencies of the latent variables. The general idea is reasonable and the proposed model is technical...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2020_rJgUfTEYvH", "HJe8zpxjYr", "BJx9IOSisS", "rkgkwXVoir", "SygDedWRtH", "HJetRuM0KH", "iclr_2020_rJgUfTEYvH", "SygDedWRtH", "iclr_2020_rJgUfTEYvH", "iclr_2020_rJgUfTEYvH" ]
iclr_2020_BkxpMTEtPB
GLAD: Learning Sparse Graph Recovery
Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications. A popular formulation of the problem is an ℓ1 regularized maximum likelihood estimation. Many convex optimization algorithms have been designed to solve this formulation to recover the graph structure. Recently, there is a surge of interest to learn algorithms directly based on data, and in this case, learn to map empirical covariance to the sparse precision matrix. However, it is a challenging task in this case, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may contain many parameters. We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data.
accept-poster
The paper proposes a neural network architecture to address the problem of estimating a sparse precision matrix from data, which can be used for inferring conditional independence if the random variables are Gaussian. The authors propose an Alternating Minimisation procedure for solving the l1 regularized maximum likelihood which can be unrolled and parameterized. This method is shown to converge faster at inference time than other methods and it is also far more effective in terms of training time compared to an existing data-driven method. Reviewers had good initial impressions of this paper, pointing out the significance of the idea and the soundness of the setup. After a productive rebuttal phase the authors significantly improved the readability and successfully clarified the remaining concerns of the reviewers. This AC thus recommends acceptance.
train
[ "HygGLUDcir", "BJlLQCOSKB", "rylult-dsr", "r1eHWOZuor", "r1gqUDWdor", "Syldaha4iH", "rJelxHe8ir", "Bkl23Nx8sB", "H1lchXZSiB", "HklJkBeHoH", "HJlxv6C4iB", "Skxon8paYH" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for their constructive comments and helping us improve our paper. Listing down a summary of changes made in the new version based on the recommendations received:\n1. Made the explanation in our introduction more descriptive for clarifying the doubts raised about input and output of the ...
[ -1, 8, -1, -1, -1, 8, -1, -1, -1, -1, -1, 6 ]
[ -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_BkxpMTEtPB", "iclr_2020_BkxpMTEtPB", "r1eHWOZuor", "BJlLQCOSKB", "Skxon8paYH", "iclr_2020_BkxpMTEtPB", "Bkl23Nx8sB", "Syldaha4iH", "HklJkBeHoH", "HJlxv6C4iB", "Syldaha4iH", "iclr_2020_BkxpMTEtPB" ]
iclr_2020_rJeg7TEYwB
Pruned Graph Scattering Transforms
Graph convolutional networks (GCNs) have achieved remarkable performance in a variety of network science learning tasks. However, theoretical analysis of such approaches is still at its infancy. Graph scattering transforms (GSTs) are non-trainable deep GCN models that are amenable to generalization and stability analyses. The present work addresses some limitations of GSTs by introducing a novel so-termed pruned (p)GST approach. The resultant pruning algorithm is guided by a graph-spectrum-inspired criterion, and retains informative scattering features on-the-fly while bypassing the exponential complexity associated with GSTs. It is further established that pGSTs are stable to perturbations of the input graph signals with bounded energy. Experiments showcase that i) pGST performs comparably to the baseline GST that uses all scattering features, while achieving significant computational savings; ii) pGST achieves comparable performance to state-of-the-art GCNs; and iii) Graph data from various domains lead to different scattering patterns, suggesting domain-adaptive pGST network architectures.
accept-poster
Main content: The authors developed graph scattering transforms (GST) with a pruning algorithm, aiming to reduce the running time and space cost, improve robustness to perturbations of the input graph signal, and allow flexibility for domain adaptation. Discussion: reviewer 1: likes the idea, considers it to be elegant and to work well; had some questions regarding the proofs in the paper, but it sounds like the authors have addressed the concerns. reviewer 2: solid paper and results; has questions on the stability results, like reviewer 1. reviewer 3: likes the idea, including the sufficient theoretical analysis and algorithmic stability; the main concern is around the complexity analysis, but it sounds like the authors have addressed the concerns. Recommendation: A well-written, solid paper with good proofs. The authors addressed the reviewer concerns and all 3 reviewers vote weak accept. This is good for poster.
train
[ "BJluIJRpYS", "BylBAys3jH", "BJgpdDghsr", "BkxSp8l2or", "BJe3TnjosB", "rJgKojioiB", "Skgr39ojiB", "SJeJxLojsH", "HygTQ7jjir", "rJlnxSdzoH", "HklFB48pFr" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "A scattering transform on graphs consists in the cascade of wavelets, modulus non-linearity and a low-pass filter. The wavelets and the low-pass are designed in the spectral domain, which is computationally extensive. Instead to compute any cascades of wavelets, this paper proposes to prune scattering paths which ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_rJeg7TEYwB", "BkxSp8l2or", "Skgr39ojiB", "SJeJxLojsH", "iclr_2020_rJeg7TEYwB", "HklFB48pFr", "SJeJxLojsH", "BJluIJRpYS", "rJlnxSdzoH", "iclr_2020_rJeg7TEYwB", "iclr_2020_rJeg7TEYwB" ]
iclr_2020_BJlzm64tDH
Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model
Recent breakthroughs of pretrained language models have shown the effectiveness of self-supervised learning for a wide range of natural language processing (NLP) tasks. In addition to standard syntactic and semantic NLP tasks, pretrained models achieve strong improvements on tasks that involve real-world knowledge, suggesting that large-scale language modeling could be an implicit method to capture knowledge. In this work, we further investigate the extent to which pretrained models such as BERT capture knowledge using a zero-shot fact completion task. Moreover, we propose a simple yet effective weakly supervised pretraining objective, which explicitly forces the model to incorporate knowledge about real-world entities. Models trained with our new objective yield significant improvements on the fact completion task. When applied to downstream tasks, our model consistently outperforms BERT on four entity-related question answering datasets (i.e., WebQuestions, TriviaQA, SearchQA and Quasar-T) with an average 2.7 F1 improvement, and on a standard fine-grained entity typing dataset (i.e., FIGER) with a 5.7 accuracy gain.
accept-poster
This submission proposes a secondary objective when learning language models like BERT that improves the ability of such models to learn entity-centric information. This additional objective involves predicting whether an entity has been replaced. Replacement entities are mined using Wikidata. Strengths: -The proposed method is simple and shows significant performance improvements for various tasks including fact completion and question answering. Weaknesses: -The experimental settings and data splits were not always clear. This was sufficiently addressed in a revised version. -The paper could have probed performance on tasks involving less common entities. The reviewer consensus was to accept this submission.
train
[ "HyxwTpAVoB", "S1lCG5PIsH", "HkxKZJyBiS", "SJgOezYBiS", "rJxh5MyHsB", "Bkxt_nQFtr", "Bkeu-1ZTKH", "r1llQTd0FS" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the detailed comments, we have updated the paper accordingly. Please see our reply below:\n\nOn Fact Completion Evaluation:\n\nCandidate selection: For each relation, we use the groundtruth answer entities from all queries as the candidate set. Using the full set of entities of the same type could re...
[ -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "r1llQTd0FS", "SJgOezYBiS", "Bkeu-1ZTKH", "rJxh5MyHsB", "Bkxt_nQFtr", "iclr_2020_BJlzm64tDH", "iclr_2020_BJlzm64tDH", "iclr_2020_BJlzm64tDH" ]
iclr_2020_rklB76EKPr
Can gradient clipping mitigate label noise?
Gradient clipping is a widely-used technique in the training of deep networks, and is generally motivated from an optimisation lens: informally, it controls the dynamics of iterates, thus enhancing the rate of convergence to a local minimum. This intuition has been made precise in a line of recent works, which show that suitable clipping can yield significantly faster convergence than vanilla gradient descent. In this paper, we propose a new lens for studying gradient clipping, namely, robustness: informally, one expects clipping to provide robustness to noise, since one does not overly trust any single sample. Surprisingly, we prove that for the common problem of label noise in classification, standard gradient clipping does not in general provide robustness. On the other hand, we show that a simple variant of gradient clipping is provably robust, and corresponds to suitably modifying the underlying loss function. This yields a simple, noise-robust alternative to the standard cross-entropy loss which performs well empirically.
accept-poster
This paper studies the effect of clipping on mitigating label noise. The authors demonstrate that standard gradient clipping does not suffice for achieving robustness to label noise, and suggest a noise-robust alternative. In the discussion, the reviewers raised some interesting questions and technical details but mostly agreed that the paper is well-written with nice contributions. I concur with the reviewers that this is a nicely written paper with good contributions. I recommend acceptance, but recommend the authors continue to improve their paper based on the reviewers' suggestions.
train
[ "BJggf3N3Kr", "Hke2FQomsB", "SygGu7oXoH", "SkeULmo7sS", "SkgTDboqtH", "r1e6vm9pFH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nGradient clipping has been studied as an optimization technique and also as a tool for privacy preserving, but in this paper, it studies the robustness properties of gradient clipping. More specifically, the main question of the paper is: Can gradient clipping mitigate label noise? The paper reveals ...
[ 6, -1, -1, -1, 8, 6 ]
[ 3, -1, -1, -1, 5, 3 ]
[ "iclr_2020_rklB76EKPr", "SkgTDboqtH", "BJggf3N3Kr", "r1e6vm9pFH", "iclr_2020_rklB76EKPr", "iclr_2020_rklB76EKPr" ]
iclr_2020_HJedXaEtvS
Editable Neural Networks
These days deep neural networks are ubiquitously used in a wide range of tasks, from image classification and machine translation to face identification and self-driving cars. In many applications, a single model error can lead to devastating financial, reputational and even life-threatening consequences. Therefore, it is crucially important to correct model mistakes quickly as they appear. In this work, we investigate the problem of neural network editing - how one can efficiently patch a mistake of the model on a particular sample, without influencing the model behavior on other samples. Namely, we propose Editable Training, a model-agnostic training technique that encourages fast editing of the trained model. We empirically demonstrate the effectiveness of this method on large-scale image classification and machine translation tasks.
accept-poster
This paper proposes a method which patches/edits a pre-trained neural network's predictions on problematic data points. They do this without the need to retrain the network on the entire data, using only a few steps of stochastic gradient descent, and thereby avoid influencing model behaviour on other samples. The post-patching training encourages reliability, locality, and efficiency by using a loss function which incorporates these three criteria, weighted by hyperparameters. Experiments are done on CIFAR-10 toy experiments, large-scale image classification with adversarial examples, and machine translation. The reviews are generally positive, with a significant author response, a new improved version of the paper, and further discussion. This is a well-written paper with convincing results, and it addresses a serious problem for production models; I therefore recommend that it be accepted.
train
[ "r1x1pGP0tH", "Sygcaq5jsB", "B1l1TZCcjS", "SJe1orYYir", "SkerIVUOoH", "SkxXbpUeiB", "rkxhnGUxjH", "Syxy7gwboH", "ryx3VUkXsH", "BylifkEWqH", "SklpoELX9r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n-----------\nUpdate after the authors' response: the authors addressed some of my concerns and presented some new results that improve the paper. I am therefore upgrading my score to \"Weak Accept\".\n-----------\n\nThis paper proposes a way to effectively \"patch\" and edit a pre-trained neural network's predic...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8 ]
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_HJedXaEtvS", "rkxhnGUxjH", "SJe1orYYir", "SkerIVUOoH", "SkxXbpUeiB", "r1x1pGP0tH", "BylifkEWqH", "SklpoELX9r", "iclr_2020_HJedXaEtvS", "iclr_2020_HJedXaEtvS", "iclr_2020_HJedXaEtvS" ]
iclr_2020_SJetQpEYvB
LEARNING EXECUTION THROUGH NEURAL CODE FUSION
As the performance of computer systems stagnates due to the end of Moore’s Law, there is a need for new models that can understand and optimize the execution of general purpose code. While there is a growing body of work on using Graph Neural Networks (GNNs) to learn static representations of source code, these representations do not understand how code executes at runtime. In this work, we propose a new approach using GNNs to learn fused representations of general source code and its execution. Our approach defines a multi-task GNN over low-level representations of source code and program state (i.e., assembly code and dynamic memory states), converting complex source code constructs and data structures into a simpler, more uniform format. We show that this leads to improved performance over similar methods that do not use execution and it opens the door to applying GNN models to new tasks that would not be feasible from static code alone. As an illustration of this, we apply the new model to challenging dynamic tasks (branch prediction and prefetching) from the SPEC CPU benchmark suite, outperforming the state-of-the-art by 26% and 45% respectively. Moreover, we use the learned fused graph embeddings to demonstrate transfer learning with high performance on an indirectly related algorithm classification task.
accept-poster
This paper presents a method to learn representations of programs via code and execution. The paper presents an interesting method, and results on branch prediction and address pre-fetching are conclusive. The main critiques of this paper were (1) a potential lack of interest to the ICLR community, and (2) a lack of comparison to other methods that similarly improve performance using other varieties of information. I am satisfied by the authors' responses to these concerns, and believe the paper warrants acceptance.
train
[ "Hkl6SPwGiS", "H1lysUvzjr", "Byl-vIwziB", "HJlUX4sTFH", "BkeiWrw1qH", "H1e_TZJS9H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for taking the time to review the paper. \n\nBinary: We only mean this as an empirical claim. We observed surprisingly good performance with the binary representation, and we dug into how it was generalizing by running the experiments in section 4.6. These results show the generalization of the binary re...
[ -1, -1, -1, 6, 8, 3 ]
[ -1, -1, -1, 1, 1, 1 ]
[ "H1e_TZJS9H", "HJlUX4sTFH", "BkeiWrw1qH", "iclr_2020_SJetQpEYvB", "iclr_2020_SJetQpEYvB", "iclr_2020_SJetQpEYvB" ]
iclr_2020_BJgqQ6NYvB
FasterSeg: Searching for Faster Real-time Semantic Segmentation
We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods. Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, which has recently been found to be vital in manually designed segmentation models. To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, which effectively overcomes our observed phenomenon that the searched networks are prone to "collapsing" to low-latency yet poor-accuracy models. Moreover, we seamlessly extend FasterSeg to a new collaborative search (co-searching) framework, simultaneously searching for a teacher and a student network in the same single run. The teacher-student distillation further boosts the student model's accuracy. Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg. For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy.
accept-poster
This paper presents neural architecture search for semantic segmentation, with search space that integrates multi-resolution branches. The method also uses a regularization to overcome the issue of learned networks collapsing to low-latency but poor accuracy models. Another interesting contribution is a collaborative search procedure to simultaneously search for student and teacher networks in a single run. All reviewers agree that the proposed method is well-motivated and shows promising empirical results. Author response satisfactorily addressed most of the points raised by the reviewers. I recommend acceptance.
train
[ "H1xVcoRjFS", "SkenQmO19S", "H1lO_ys8oB", "B1g0jWi8iB", "HJeJ7Ws8or", "B1eeZa58jB", "HklZAC5IjH", "SkevyWznYr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents an automatically designed semantic segmentation network utilising neural architecture search. The proposed method is discovered from a search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models. To calibrate the balance ...
[ 8, 6, -1, -1, -1, -1, -1, 8 ]
[ 5, 4, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_BJgqQ6NYvB", "iclr_2020_BJgqQ6NYvB", "SkenQmO19S", "H1xVcoRjFS", "SkevyWznYr", "SkenQmO19S", "SkenQmO19S", "iclr_2020_BJgqQ6NYvB" ]
iclr_2020_rygjmpVFvB
Difference-Seeking Generative Adversarial Network--Unseen Sample Generation
Unseen data, which are not samples from the distribution of training data and are difficult to collect, have exhibited importance in numerous applications ({\em e.g.,} novelty detection, semi-supervised learning, and adversarial training). In this paper, we introduce a general framework called \textbf{d}ifference-\textbf{s}eeking \textbf{g}enerative \textbf{a}dversarial \textbf{n}etwork (DSGAN), to generate various types of unseen data. Its novelty is the consideration of the probability density of the unseen data distribution as the difference between two distributions, $p_{\bar{d}}$ and $p_d$, whose samples are relatively easy to collect. The DSGAN can learn the target distribution, $p_t$ (or the unseen data distribution), from only the samples of the two distributions $p_d$ and $p_{\bar{d}}$. In our scenario, $p_d$ is the distribution of the seen data, and $p_{\bar{d}}$ can be obtained from $p_d$ via simple operations, so that we only need the samples of $p_d$ during training. Two key applications, semi-supervised learning and novelty detection, are taken as case studies to illustrate that the DSGAN enables the production of various unseen data. We also provide theoretical analyses about the convergence of the DSGAN.
accept-poster
The authors propose a way to generate unseen examples in GANs by learning the difference of two distributions to which we have access. The majority of reviewers agree on the originality and practicality of the idea.
train
[ "Bkg3cauHjr", "Skg49XNciH", "SyesrjuHsB", "BkgZq5uHor", "r1efiwuroH", "HJe5Adneor", "S1gKDV-0Fr", "BygFHVOeqB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for all the comments!\n\nWe list all the modifications as follows.\n\n1. We add Sec. F in appendix for the ablation study on different $\\alpha$.\n2. We add Sec. G in appendix to demonstrate the sample quality of DSGAN on CelebA.", "In this revision, our modifications are listed as follows.\n\n1. We add m...
[ -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2020_rygjmpVFvB", "iclr_2020_rygjmpVFvB", "S1gKDV-0Fr", "BygFHVOeqB", "HJe5Adneor", "iclr_2020_rygjmpVFvB", "iclr_2020_rygjmpVFvB", "iclr_2020_rygjmpVFvB" ]
iclr_2020_HJepXaVYDr
Stochastic AUC Maximization with Deep Neural Networks
Stochastic AUC maximization has garnered an increasing interest due to its better fit to imbalanced data classification. However, existing works are limited to stochastic AUC maximization with a linear predictive model, which restricts its predictive power when dealing with extremely complex data. In this paper, we consider the stochastic AUC maximization problem with a deep neural network as the predictive model. Building on the saddle point reformulation of a surrogated loss of AUC, the problem can be cast into a {\it non-convex concave} min-max problem. The main contribution made in this paper is to make stochastic AUC maximization more practical for deep neural networks and big data with theoretical insights as well. In particular, we propose to explore the Polyak-\L{}ojasiewicz (PL) condition that has been proved and observed in deep learning, which enables us to develop new stochastic algorithms with an even faster convergence rate and a more practical step size scheme. An AdaGrad-style algorithm is also analyzed under the PL condition with an adaptive convergence rate. Our experimental results demonstrate the effectiveness of the proposed algorithms.
accept-poster
The paper proposes stochastic AUC maximization with deep neural networks for dealing with imbalanced data. It provides useful insights and experiments on this important problem. I recommend acceptance.
train
[ "Syxs8KUnoH", "SkgauQLnjH", "H1xgTHnssH", "HJlRg82sir", "SJxEOH3sjS", "H1lKWB2ooH", "rkximJxaKH", "HklnD5Ke9r", "rJxAPtR45H" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the insightful question. \n\nPlease note that there is a multiplicative constant $\\mu$ in the definition of PL condition. Recall that the definition of PL condition of function $\\phi$ is $\\phi(v)-\\phi(v_*)\\leq\\frac{1}{2\\mu}\\|\\nabla\\phi(v)\\|^2$, where $\\mu>0$ and $v_*$ is the global minima. I...
[ -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 1, 4, 1 ]
[ "SkgauQLnjH", "H1xgTHnssH", "rkximJxaKH", "iclr_2020_HJepXaVYDr", "HklnD5Ke9r", "rJxAPtR45H", "iclr_2020_HJepXaVYDr", "iclr_2020_HJepXaVYDr", "iclr_2020_HJepXaVYDr" ]
iclr_2020_ByxT7TNFvH
Semantically-Guided Representation Learning for Self-Supervised Monocular Depth
Self-supervised learning is showing great promise for monocular depth estimation, using geometry as the only source of supervision. Depth networks are indeed capable of learning representations that relate visual appearance to 3D properties by implicitly leveraging category-level patterns. In this work we investigate how to leverage this semantic structure more directly to guide geometric representation learning, while remaining in the self-supervised regime. Instead of using semantic labels and proxy losses in a multi-task approach, we propose a new architecture leveraging fixed pretrained semantic segmentation networks to guide self-supervised representation learning via pixel-adaptive convolutions. Furthermore, we propose a two-stage training process to overcome a common semantic bias on dynamic objects via resampling. Our method improves upon the state of the art for self-supervised monocular depth prediction over all pixels, fine-grained details, and per semantic categories.
accept-poster
The paper proposes using pixel-adaptive convolutions to leverage semantic labels in self-supervised monocular depth estimation. Although there were initial concerns from the reviewers regarding the technical details and limited experiments, the authors responded reasonably to the issues raised by the reviewers. Reviewer 2, who gave a weak reject rating, did not provide any answer to the authors' comments. We do not see any major flaws that would justify rejecting this paper.
train
[ "HyeZcWk9Kr", "B1lhpqP3sS", "HJe4RjJOoS", "rkxq23y_jH", "r1gzNC1uoB", "SylGZRJdsS", "BJe3ja1ujH", "H1lYWaJdiB", "B1lyKhJOoS", "SkgdigaTFH", "rkxuEDryqr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work proposes to leverage a pre-trained semantic segmentation network to learn semantically adaptive filters for self-supervised monocular depth estimation. Additionally, a simple two-stage training heuristic is proposed to improve depth estimation performance for dynamic objects that move in a way that induc...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_ByxT7TNFvH", "iclr_2020_ByxT7TNFvH", "iclr_2020_ByxT7TNFvH", "rkxuEDryqr", "HyeZcWk9Kr", "HyeZcWk9Kr", "HyeZcWk9Kr", "SkgdigaTFH", "rkxuEDryqr", "iclr_2020_ByxT7TNFvH", "iclr_2020_ByxT7TNFvH" ]
iclr_2020_rJx1Na4Fwr
MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work shows that randomized smoothing can be used to provide a certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius.
accept-poster
The submission proposes a robustness certification technique for smoothed classifiers for a given l_2 attack radius. Strengths: -The majority opinion is that this work is a non-trivial extension of prior work to provide radius certification. -The work is more efficient than strong recent baselines and provides better performance. -It successfully achieves this while avoiding adversarial training, which is another novel aspect. Weaknesses: -There were some initial concerns about missing experiments and unfair comparisons, but these were sufficiently addressed in the discussion. AC shares the majority opinion and recommends acceptance.
train
[ "SkeTVpZxcH", "Sylsy_D5oB", "r1xu1ETQsB", "Skx7Pmpmir", "H1eXFmTmjS", "r1xYHETXjS", "SJlcg4aXiS", "rJeCqcNIKr", "ryg6D-wptS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper improves the robustness of smoothed classifiers by maximizing the certified radius, which is more efficient than adversarially train the smoothed classifier and achieves higher average robust radius and better certified robustness when the radius is not much larger than the training sigma. It proposes a...
[ 8, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rJx1Na4Fwr", "rJeCqcNIKr", "rJeCqcNIKr", "SkeTVpZxcH", "ryg6D-wptS", "iclr_2020_rJx1Na4Fwr", "r1xu1ETQsB", "iclr_2020_rJx1Na4Fwr", "iclr_2020_rJx1Na4Fwr" ]
iclr_2020_Skgy464Kvr
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
Adversarial examples raise questions about whether neural network models are sensitive to the same visual features as humans. In this paper, we first detect adversarial examples or otherwise corrupted images based on a class-conditional reconstruction of the input. To specifically attack our detection mechanism, we propose the Reconstructive Attack, which seeks both to cause a misclassification and a low reconstruction error. This reconstructive attack produces undetected adversarial examples, but with a much smaller success rate. Among all these attacks, we find that CapsNets always perform better than convolutional networks. Then, we diagnose the adversarial examples for CapsNets and find that the success of the reconstructive attack is highly related to the visual similarity between the source and target class. Additionally, the resulting perturbations can cause the input image to appear visually more like the target class and hence become non-adversarial. This suggests that CapsNets use features that are more aligned with human perception and have the potential to address the central issue raised by adversarial examples.
accept-poster
This paper presents a mechanism for capsule networks to defend against adversarial examples, and a new attack, the reconstructive attack. The differing success of these attacks on CapsNets and ConvNets is used to argue that CapsNets find features that are more similar to what humans use. Reviewers generally like the paper, but took issue with the strength of the claim (about the usefulness of the examples) and argued that the paper might not be as novel as it claims. Still, this seems like a valuable contribution that should be published.
train
[ "ByeNI910FS", "rylNjbVcjB", "Skl8EWE5or", "ryl7blVcsH", "SJg7sxEcjS", "rkeJjZa0tB", "SJxlSj5VcS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a new defense method for capsule networks. For both white-box and black-box settings, the proposed CapNets has shown superior performance than two variants of CNNs. The visualizations of adversarial examples generated by the CapNets are more aligned with the human perception which is very insig...
[ 6, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_Skgy464Kvr", "ByeNI910FS", "rkeJjZa0tB", "iclr_2020_Skgy464Kvr", "SJxlSj5VcS", "iclr_2020_Skgy464Kvr", "iclr_2020_Skgy464Kvr" ]
iclr_2020_SJeQEp4YDH
GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification
The vulnerabilities of deep neural networks against adversarial examples have become a significant concern for deploying these models in sensitive domains. Devising a definitive defense against such attacks has proven to be challenging, and methods relying on detecting adversarial samples are only valid when the attacker is oblivious to the detection mechanism. In this paper we present an adversarial example detection method that provides a performance guarantee against norm-constrained adversaries. The method is based on the idea of training adversarially robust subspace detectors using generative adversarial training (GAT). The novel GAT objective presents a saddle point problem similar to that of GANs; it has the same convergence property, and consequently supports the learning of class conditional distributions. We demonstrate that the saddle point problem can be reasonably solved by a PGD attack, and further use the learned class conditional generative models to define generative detection/classification models that are both robust and more interpretable. We provide comprehensive evaluations of the above methods, and demonstrate their competitive performance and compelling properties on adversarial detection and robust classification problems.
accept-poster
This work addresses the problem of detecting an adversarial attack. This is a challenging problem as the detection mechanism itself is also vulnerable to attack. The paper proposes asymmetrical adversarial training as a robust solution. This approach partitions the feature space according to the output of the robust classifier and trains an adversarial example detector per partition. The paper demonstrates improvements over state-of-the-art detection techniques. All three reviewers recommend acceptance of this work. Some positive points include the paper being well-written with strong experimental evidence. One potential difficulty with the proposed approach is the additional computational cost associated with a per-class adversarial attack detector. The authors have responded to this concern by claiming that the straightforward version of their approach is K times slower (10 in the case of 10 classes), but their integrated version is 2x slower as they only run the detector associated with the example-specific class prediction. We encourage the authors to include a discussion on computational cost in the final version. In addition, there was a community comment about black-box testing which will be of relevance to many in the community. The authors have already provided additional experiments to address this question, as well as code to reproduce the new experiment. Overall, the paper addresses an important problem with a two-step solution of training a robust model and detecting potentially perturbed samples per class. This is a novel solution with comprehensive experiments, and I therefore recommend acceptance.
train
[ "ryloEnTVKS", "H1lI1gUwiB", "r1emkzy8iB", "rJeXAGbLoH", "S1eVLOH8jB", "rklkefNcFr", "B1xdR_S6YH", "SklzDtn-9H", "S1lmGGSatS", "S1eCbL0IKr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Dear reviewers and readers,\n\nOur source code is available at https://github.com/xuwangyin . Please leave a comment if you have any problem with the code. \n\nThanks!", "Dear Reviewers,\n\nThank you again for your thought-provoking comments and your valuable time. We have responded to all comments in detail and...
[ -1, -1, -1, -1, -1, 6, 6, 6, -1, -1 ]
[ -1, -1, -1, -1, -1, 3, 1, 1, -1, -1 ]
[ "iclr_2020_SJeQEp4YDH", "iclr_2020_SJeQEp4YDH", "rklkefNcFr", "B1xdR_S6YH", "SklzDtn-9H", "iclr_2020_SJeQEp4YDH", "iclr_2020_SJeQEp4YDH", "iclr_2020_SJeQEp4YDH", "S1eCbL0IKr", "iclr_2020_SJeQEp4YDH" ]
iclr_2020_r1lL4a4tDB
Variational Recurrent Models for Solving Partially Observable Control Tasks
In partially observable (PO) environments, deep reinforcement learning (RL) agents often suffer from unsatisfactory performance, since two problems need to be tackled together: how to extract information from the raw observations to solve the task, and how to improve the policy. In this study, we propose an RL algorithm for solving PO tasks. Our method comprises two parts: a variational recurrent model (VRM) for modeling the environment, and an RL controller that has access to both the environment and the VRM. The proposed algorithm was tested in two types of PO robotic control tasks: those in which either coordinates or velocities were not observable, and those that require long-term memorization. Our experiments show that the proposed algorithm achieved better data efficiency and/or learned a more optimal policy than alternative approaches in tasks in which unobserved states cannot be inferred from raw observations in a simple manner.
accept-poster
The authors propose to decompose control in a POMDP into learning a model of the environment (via a VRNN) and learning a feed-forward policy that has access to both the environment and environment model. They argue that learning the recurrent environment model is easier than learning a recurrent policy. They demonstrate improved performance over existing state-of-the-art approaches on several PO tasks. Reviewers found the motivation for the proposed approach convincing, and the experimental results demonstrated the effectiveness of the method. The authors' response resolved the reviewers' concerns, so I recommend acceptance.
test
[ "SkgNF3n7iS", "Sklk8C2pKB", "rkx3DcUhsS", "HJg-z9n7iH", "rJgTIOnQir", "BJefHq2QiB", "r1xUEF3QsB", "SylTqP96FS", "H1gTJc8QqH" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for pointing some shortages of our paper out. We posted a revised version according to the reviews, and here we shortly introduce what has been modified:\n\n1. The original Fig.1 has been divided into two (now Fig.1 and Fig.2) for clarity. \n2. We added a new section 5.4 to discuss how conv...
[ -1, 6, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_r1lL4a4tDB", "iclr_2020_r1lL4a4tDB", "BJefHq2QiB", "Sklk8C2pKB", "SylTqP96FS", "HJg-z9n7iH", "H1gTJc8QqH", "iclr_2020_r1lL4a4tDB", "iclr_2020_r1lL4a4tDB" ]
iclr_2020_rJeINp4KwH
Population-Guided Parallel Policy Search for Reinforcement Learning
In this paper, a new population-guided parallel learning scheme is proposed to enhance the performance of off-policy reinforcement learning (RL). In the proposed scheme, multiple identical learners with their own value-functions and policies share a common experience replay buffer, and search a good policy in collaboration with the guidance of the best policy information. The key point is that the information of the best policy is fused in a soft manner by constructing an augmented loss function for policy update to enlarge the overall search region by the multiple learners. The guidance by the previous best policy and the enlarged range enable faster and better policy search, and monotone improvement of the expected cumulative return by the proposed scheme is proved theoretically. Working algorithms are constructed by applying the proposed scheme to the twin delayed deep deterministic (TD3) policy gradient algorithm, and numerical results show that the constructed P3S-TD3 outperforms most of the current state-of-the-art RL algorithms, and the gain is significant in the case of sparse reward environment.
accept-poster
The paper proposes a new approach to multi-actor RL, which ensures diversity and performance of the population of actors by distilling the policy of the best-performing agent in a soft way and maintaining some distance between the agents. The authors show improved performance over several state-of-the-art mono-actor algorithms and over several other multi-actor RL algorithms. Initially, reviewers were concerned with the magnitude of the contribution/novelty, as well as some technical issues (e.g. the beta update) and a relative lack of baseline comparisons. However, after discussion the reviewers largely agree that their main concerns have been addressed. Therefore, I recommend this paper for acceptance.
val
[ "r1lL6hgdKH", "B1e8Q9IiuH", "BJlQNc2joB", "HygG-q3iiS", "HJgwiY2osS", "S1enLtnjsB", "ryxMDGOKur" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors propose another method of doing population-based training of RL policies. During the training process, there are N workers running in N copies of the environment, each with different parameter settings for the policies and value networks. Each worker pushes data to a shared replay buffer of experience....
[ 6, 8, -1, -1, -1, -1, 3 ]
[ 4, 3, -1, -1, -1, -1, 4 ]
[ "iclr_2020_rJeINp4KwH", "iclr_2020_rJeINp4KwH", "ryxMDGOKur", "B1e8Q9IiuH", "r1lL6hgdKH", "iclr_2020_rJeINp4KwH", "iclr_2020_rJeINp4KwH" ]
iclr_2020_HkePNpVKPB
Compositional languages emerge in a neural iterated learning model
The principle of compositionality, which enables natural language to represent complex concepts via a structured combination of simpler ones, allows us to convey an open-ended set of messages using a limited vocabulary. If compositionality is indeed a natural property of language, we may expect it to appear in communication protocols that are created by neural agents via grounded language learning. Inspired by the iterated learning framework, which simulates the process of language evolution, we propose an effective neural iterated learning algorithm that, when applied to interacting neural agents, facilitates the emergence of a more structured type of language. Indeed, these languages provide specific advantages to neural agents during training, which translate into a larger posterior probability that is then incrementally amplified via the iterated learning procedure. Our experiments confirm our analysis, and also demonstrate that the emergent languages largely improve the generalization of neural agent communication.
accept-poster
This paper examines the correspondence between the topological similarity of languages (the correlation between the message space and the object space) and the agents' ability to learn quickly in an emergent-communication setting. While this paper is not without issues, it does seem to present a nice contribution that all of the reviewers appreciated to some extent. I think it will spark further discussions in this area, and thus I can recommend it for acceptance.
train
[ "Skev5xvMcB", "B1gQtlE3iB", "HkeVnkN3jB", "ByxMZbIFoS", "S1xgIIHdjB", "SyeiU2UAtr", "rJeDXYBPsS", "BJxWCjoLiB", "rkl_9ij8oS", "BJgb92sUsH", "ByxE_2iIsS", "Byg766iLoH", "SJlcjaoLsS", "SJgrsqoLsS", "rkeMC5i8sS", "r1l61jj8sB", "HJxHxtFZiB", "BJgKdiolir", "H1grR5wesS", "SkgR8TeAKH"...
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ "This paper proposed a neural iterated learning algorithm to encourage the dominance of high compositional language in the multi-agent communication game. The author shows that the iterative training of two agents playing a referential game can incrementally increase the agent to use the language with high topologi...
[ 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_HkePNpVKPB", "SkgR8TeAKH", "Skev5xvMcB", "SyeiU2UAtr", "rJeDXYBPsS", "iclr_2020_HkePNpVKPB", "ByxE_2iIsS", "Skev5xvMcB", "Skev5xvMcB", "SyeiU2UAtr", "SyeiU2UAtr", "SkgR8TeAKH", "SkgR8TeAKH", "Skev5xvMcB", "SyeiU2UAtr", "SkgR8TeAKH", "BJgKdiolir", "H1grR5wesS", "Skev5xv...
iclr_2020_SJxhNTNYwB
Black-Box Adversarial Attack with Transferable Model-based Embedding
We present a new method for black-box adversarial attack. Unlike previous methods that combined transfer-based and scored-based methods by using the gradient or initialization of a surrogate white-box model, this new method tries to learn a low-dimensional embedding using a pretrained model, and then performs efficient search within the embedding space to attack an unknown target network. The method produces adversarial perturbations with high level semantic patterns that are easily transferable. We show that this approach can greatly improve the query efficiency of black-box adversarial attack across different target network architectures. We evaluate our approach on MNIST, ImageNet and Google Cloud Vision API, resulting in a significant reduction on the number of queries. We also attack adversarially defended networks on CIFAR10 and ImageNet, where our method not only reduces the number of queries, but also improves the attack success rate.
accept-poster
This paper proposes a new black-box adversarial attack approach which learns a low-dimensional embedding using a pretrained model and then performs an efficient search in the embedding space to attack target networks. The proposed approach can produce perturbations with semantic patterns that are easily transferable and improve the query efficiency of black-box attacks. All reviewers are in support of the paper after the author response. I am very happy to recommend acceptance.
train
[ "rJgyVOG6tr", "BkejP5_8iH", "B1lPScdLor", "Hkllm5_Ljr", "H1x_HPraFB", "r1l9pBNPqS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a new method for black-box adversarial attacks which tries to learn a low-dimensional embedding using a pretrained model and then performs efficient search within the embedding space to attack the target network. The proposed method can produce perturbation with semantic patterns are easily tr...
[ 6, -1, -1, -1, 8, 6 ]
[ 5, -1, -1, -1, 3, 1 ]
[ "iclr_2020_SJxhNTNYwB", "rJgyVOG6tr", "H1x_HPraFB", "r1l9pBNPqS", "iclr_2020_SJxhNTNYwB", "iclr_2020_SJxhNTNYwB" ]
iclr_2020_rJehNT4YPr
I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively
The learning of hierarchical representations for image classification has experienced an impressive series of successes due in part to the availability of large-scale labeled data for training. On the other hand, the trained classifiers have traditionally been evaluated on small and fixed sets of test images, which are deemed to be extremely sparsely distributed in the space of all natural images. It is thus questionable whether recent performance improvements on the excessively re-used test sets generalize to real-world natural images with much richer content variations. Inspired by efficient stimulus selection for testing perceptual models in psychophysical and physiological studies, we present an alternative framework for comparing image classifiers, which we name the MAximum Discrepancy (MAD) competition. Rather than comparing image classifiers using fixed test images, we adaptively sample a small test set from an arbitrarily large corpus of unlabeled images so as to maximize the discrepancies between the classifiers, measured by the distance over WordNet hierarchy. Human labeling on the resulting model-dependent image sets reveals the relative performance of the competing classifiers, and provides useful insights on potential ways to improve them. We report the MAD competition results of eleven ImageNet classifiers while noting that the framework is readily extensible and cost-effective to add future classifiers into the competition. Codes can be found at https://github.com/TAMU-VITA/MAD.
accept-poster
This paper proposes a new way of comparing classifiers that does not use a fixed test set but instead adaptively samples one from an arbitrarily large corpus of unlabeled images, i.e. replacing the conventional test-set-based evaluation methods with a more flexible mechanism. The main proposal is to build a test set adaptively in a manner that captures how classifiers disagree, as measured by the WordNet tree. As noted by R2, this work has the potential to be of interest to a broad audience and can motivate many subsequent works. While the reviewers acknowledged the importance of this work, they raised several concerns: (1) the proposed approach is too immature to be considered for benchmarking yet (R1, R4); (2) selecting k and studying its influence on the performance (R1, R3, R4); (3) the proposed approach requires data annotation, which might not be straightforward (R3, R4). The authors provided a detailed rebuttal addressing the reviewer concerns. There is reviewer disagreement on this paper. The comments from R3 were valuable for the discussion, but at the same time too brief to be adequately addressed by the authors. The comments from the emergency reviewer were helpful in making the decision. The AC decided to recommend acceptance of the paper, seeing its valuable contributions towards re-thinking the evaluation of current SOTA models.
test
[ "SyenX6MYjr", "BJlV02zYor", "Hkl673fKoH", "BJx9Y2wuiB", "SkgcWZsPsr", "BJgM_eowiS", "HJxHV7JPjB", "HJloCFFSiB", "ryePhK-giS", "BJgfCUWlsB", "HkgKMIsTFr", "r1xYhv5J5H" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Q5: \"we believe a much larger test set in the order of millions or even billions must be used\" I think this goes against the rest of the paper (ie a small high quality test set is ok) - and even with infinite budget annotating billions of test images from the same distribution would be of little use.\n\nResponse...
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, 3, 5 ]
[ "BJx9Y2wuiB", "BJx9Y2wuiB", "BJx9Y2wuiB", "iclr_2020_rJehNT4YPr", "HJxHV7JPjB", "HJxHV7JPjB", "iclr_2020_rJehNT4YPr", "ryePhK-giS", "HkgKMIsTFr", "r1xYhv5J5H", "iclr_2020_rJehNT4YPr", "iclr_2020_rJehNT4YPr" ]
iclr_2020_HkgaETNtDB
Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
In natural language processing, it has been observed recently that generalization could be greatly improved by finetuning a large-scale language model pretrained on a large unlabeled corpus. Despite its recent success and wide adoption, finetuning a large pretrained language model on a downstream task is prone to degenerate performance when there are only a small number of training instances available. In this paper, we introduce a new regularization technique, to which we refer as “mixout”, motivated by dropout. Mixout stochastically mixes the parameters of two models. We show that our mixout technique regularizes learning to minimize the deviation from one of the two models and that the strength of regularization adapts along the optimization trajectory. We empirically evaluate the proposed mixout and its variants on finetuning a pretrained language model on downstream tasks. More specifically, we demonstrate that the stability of finetuning and the average accuracy greatly increase when we use the proposed approach to regularize finetuning of BERT on downstream tasks in GLUE.
accept-poster
This paper presents mixout, a regularization method that stochastically mixes the parameters of a pretrained language model and a target language model. Experiments on GLUE show that the proposed technique improves the stability and accuracy of finetuning a pretrained BERT on several downstream tasks. The paper is well written and the proposed idea is applicable in many settings. The authors addressed the reviewers' concerns during the rebuttal period, and all reviewers are now in agreement that this paper should be accepted. I think this paper would be a good addition to ICLR and recommend accepting it.
train
[ "BJxftmXRqS", "rye8Ro4nor", "HkxBPMH8jH", "r1lFiNrUir", "SklcH4HLiH", "r1xnAZSLjS", "HkgxWvcTKS", "S1elsTST9S" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors introduce a new regularization technique for the specific task of finetuning models. It's inspired by dropout and stochastically mixes source and target weights in order to avoid moving the parameters towards 0.\n\nThe authors provide a theoretical justification as to why mixout would do useful things ...
[ 6, -1, -1, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_HkgaETNtDB", "iclr_2020_HkgaETNtDB", "S1elsTST9S", "SklcH4HLiH", "BJxftmXRqS", "HkgxWvcTKS", "iclr_2020_HkgaETNtDB", "iclr_2020_HkgaETNtDB" ]
iclr_2020_BkglSTNFDB
Q-learning with UCB Exploration is Sample Efficient for Infinite-Horizon MDP
A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient. Recently, Jin et al. (2018) proposed a Q-learning algorithm with a UCB exploration policy, and proved it has a nearly optimal regret bound for finite-horizon episodic MDP. In this paper, we adapt Q-learning with a UCB exploration bonus to infinite-horizon MDP with discounted rewards \emph{without} accessing a generative model. We show that the \textit{sample complexity of exploration} of our algorithm is bounded by $\tilde{O}\big(\frac{SA}{\epsilon^2(1-\gamma)^7}\big)$. This improves the previously best known result of $\tilde{O}\big(\frac{SA}{\epsilon^4(1-\gamma)^8}\big)$ in this setting, achieved by delayed Q-learning (Strehl et al., 2006), and matches the lower bound in terms of $\epsilon$ as well as $S$ and $A$ up to logarithmic factors.
accept-poster
In this paper, the authors extend the Q-learning algorithm with UCB exploration bonus of Jin et al., which enjoys a nearly optimal regret bound for finite-horizon episodic MDPs, to infinite-horizon MDPs with discounted rewards without accessing a generative model. The authors prove a PAC-type sample complexity of exploration bound, which matches the lower bound up to logarithmic factors. Overall this is a solid theoretical reinforcement learning work. After the author response, we reached a unanimous agreement to accept this paper.
train
[ "HJgxIUpioS", "HyxP6tXTtB", "r1eZ4jA9iS", "S1leQbZqir", "HJlWhQI79B", "BylcpEQYsS", "r1xYsw1rjS", "SyeQKO1HoB", "HJxbUu1SoB", "B1xjlOyroB", "r1gUR9K1qr", "H1x2aR8Z9S" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We sincerely thank the reviewer for the quick response and new comments.\n\nRegarding the definition of PAC in infinite horizon settings: Actually this has been extensively discussed in the past two decades. It is believed that sample complexity of exploration is the most natural PAC measurement in this setting. P...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 3, -1, -1, 5, -1, -1, -1, -1, -1, 4, 1 ]
[ "BylcpEQYsS", "iclr_2020_BkglSTNFDB", "S1leQbZqir", "SyeQKO1HoB", "iclr_2020_BkglSTNFDB", "r1xYsw1rjS", "HJlWhQI79B", "HyxP6tXTtB", "r1gUR9K1qr", "H1x2aR8Z9S", "iclr_2020_BkglSTNFDB", "iclr_2020_BkglSTNFDB" ]
iclr_2020_SJxWS64FwH
Deep Network Classification by Scattering and Homotopy Dictionary Learning
We introduce a sparse scattering deep convolutional neural network, which provides a simple model to analyze properties of deep representation learning for classification. Learning a single dictionary matrix with a classifier yields a higher classification accuracy than AlexNet over the ImageNet 2012 dataset. The network first applies a scattering transform that linearizes variabilities due to geometric transformations such as translations and small deformations. A sparse ℓ1 dictionary coding reduces intra-class variability while preserving class separation through projections over unions of linear spaces. It is implemented in a deep convolutional network with a homotopy algorithm having an exponential convergence. A convergence proof is given in a general framework that includes ALISTA. Classification results are analyzed on ImageNet.
accept-poster
After the rebuttal period the ratings on this paper increased and it now has a strong assessment across reviewers. The AC recommends acceptance.
train
[ "HJl5HgCpFB", "rJg3_X7osS", "ryla7UXjsB", "ByeTcFMiiH", "rJxLcUzssB", "SkgtJzcntH", "SkeHSjoe9H" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n## Summary\n\nThe paper proposes an interpretable architecture for image classification based on a scattering transform and sparse dictionary learning approach. The scattering transform acts as a pre-trained interpretable feature extractor that does not require data. A sparse dictionary on top of this representa...
[ 8, -1, -1, -1, -1, 6, 8 ]
[ 1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_SJxWS64FwH", "HJl5HgCpFB", "SkgtJzcntH", "SkeHSjoe9H", "iclr_2020_SJxWS64FwH", "iclr_2020_SJxWS64FwH", "iclr_2020_SJxWS64FwH" ]
iclr_2020_H1gmHaEKwB
Data-Independent Neural Pruning via Coresets
Previous work showed empirically that large neural networks can be significantly reduced in size while preserving their accuracy. Model compression became a central research topic, as it is crucial for the deployment of neural networks on devices with limited computational and memory resources. The majority of compression methods are based on heuristics and offer no worst-case guarantees on the trade-off between the compression rate and the approximation error for an arbitrary new sample. We propose the first efficient, data-independent neural pruning algorithm with a provable trade-off between its compression rate and the approximation error for any future test sample. Our method is based on the coreset framework, which finds a small weighted subset of points that provably approximates the original inputs. Specifically, we approximate the output of a layer of neurons by a coreset of neurons in the previous layer and discard the rest. We apply this framework in a layer-by-layer fashion from the top to the bottom. Unlike previous works, our coreset is data independent, meaning that it provably guarantees the accuracy of the function for any input $x \in \mathbb{R}^d$, including an adversarial one. We demonstrate the effectiveness of our method on popular network architectures. In particular, our coresets yield 90% compression of the LeNet-300-100 architecture on MNIST while improving the accuracy.
accept-poster
The rebuttal period led R1 to raise their rating of the paper. The most negative reviewer did not respond to the author response. This work proposes an interesting approach that will be of interest to the community. The AC recommends acceptance.
test
[ "SkgB97XXqB", "S1girxv_iH", "r1ePEqLuiB", "Byeh_dLdjH", "SygftbOCKr", "rJlj2ZzVcH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "------ after reading authors' response ------\nThanks for the complete response and revision. The new Fig 4 is very nice, and this helps address my concerns. I'm more favorable of the paper now, changing from \"Weak Accept\" to \"Accept\".\n\nNote a small typo in the displayed equation just above Fig 4: the sum is...
[ 8, -1, -1, -1, 3, 6 ]
[ 3, -1, -1, -1, 1, 1 ]
[ "iclr_2020_H1gmHaEKwB", "SkgB97XXqB", "SygftbOCKr", "rJlj2ZzVcH", "iclr_2020_H1gmHaEKwB", "iclr_2020_H1gmHaEKwB" ]
iclr_2020_BkgXHTNtvS
Bounds on Over-Parameterization for Guaranteed Existence of Descent Paths in Shallow ReLU Networks
We study the landscape of the squared loss in neural networks with one hidden layer and ReLU activation functions. Let m and d be the widths of the hidden and input layers, respectively. We show that there exist poor local minima with positive curvature for some training sets of size n≥m+2d−2. By positive curvature of a local minimum, we mean that within a small neighborhood the loss function is strictly increasing in all directions. Consequently, for such training sets, there are initializations of weights from which there is no descent path to global optima. It is known that for n≤m, there always exist descent paths to global optima from all initial weights. In this perspective, our results provide a somewhat sharp characterization of the over-parameterization required for the "existence of descent paths" in the loss landscape.
accept-poster
This article investigates the optimization landscape of shallow ReLU networks, showing that for sufficiently narrow networks there are data sets for which there are no descent paths to the global minimiser. The topic and the nature of the results are very interesting. The reviewers found that this article makes important contributions in a relevant line of investigation and had generally positive ratings. The authors' responses addressed questions from the initial reviews, and the discussion helped identify questions for future study departing from the present contribution.
train
[ "SkeUbodptr", "Syxifrvdir", "rJlAaVwOjB", "r1lu5o0TtB" ]
[ "official_reviewer", "author", "author", "official_reviewer" ]
[ "This paper analyzes the existence of descent paths from any initial point to the global minimum for the two-layer ReLU network and gives a better characterization of the network width that guarantees the descent path property. Concretely, the paper shows that there exists poor local minima under the case of $n > m...
[ 6, -1, -1, 6 ]
[ 3, -1, -1, 4 ]
[ "iclr_2020_BkgXHTNtvS", "SkeUbodptr", "r1lu5o0TtB", "iclr_2020_BkgXHTNtvS" ]
iclr_2020_ByeNra4FDB
Novelty Detection Via Blurring
Conventional out-of-distribution (OOD) detection schemes based on variational autoencoders or Random Network Distillation (RND) are known to assign lower uncertainty to OOD data than to the target distribution. In this work, we discover that such conventional novelty detection schemes are also vulnerable to blurred images. Based on this observation, we construct a novel RND-based OOD detector, SVD-RND, that utilizes blurred images during training. Our detector is simple, efficient at test time, and outperforms baseline OOD detectors in various domains. Further results show that SVD-RND learns a better representation of the target distribution than the baselines. Finally, SVD-RND combined with geometric transforms achieves near-perfect detection accuracy in the CelebA domain.
accept-poster
The paper proposes a new method for out-of-distribution detection by combining random network distillation (RND) and blurring (via SVD). The proposed idea is very simple but achieves strong empirical performance, outperforming baseline methods in several OOD detection benchmarks. There were many detailed questions raised by the reviewers but they got mostly resolved, and all reviewers recommend acceptance, and this AC agrees that it is an interesting and effective method worth presenting at ICLR.
train
[ "rJeaGSQoKr", "B1xtec95ir", "BJxTL59qjr", "Bygv8r9qjS", "Bkx8zr9csS", "rkeCTWq9sH", "SkxW3Xq9iB", "rJemcX4AYH", "Sylw6lsCYS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "UPDATE: \nI acknowledge that I‘ve read the author responses as well as the other reviews. \nI appreciate the clarifications and improvements made to the paper. I‘ve updated my score to 6 Weak Accept. \n\n####################\n\nThis paper presents the idea to use blurred images as regularizing examples to improve ...
[ 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, 3, 1 ]
[ "iclr_2020_ByeNra4FDB", "rJeaGSQoKr", "rJeaGSQoKr", "rJemcX4AYH", "rJemcX4AYH", "Sylw6lsCYS", "iclr_2020_ByeNra4FDB", "iclr_2020_ByeNra4FDB", "iclr_2020_ByeNra4FDB" ]
iclr_2020_B1x6BTEKwr
Piecewise linear activations substantially shape the loss surfaces of neural networks
Understanding the loss surface of a neural network is fundamentally important to the understanding of deep learning. This paper presents how piecewise linear activation functions substantially shape the loss surfaces of neural networks. We first prove that {\it the loss surfaces of many neural networks have infinitely many spurious local minima}, which are defined as local minima with higher empirical risks than the global minima. Our result demonstrates that networks with piecewise linear activations possess substantial differences from the well-studied linear neural networks. This result holds for any neural network with arbitrary depth and arbitrary piecewise linear activation functions (excluding linear functions) under most loss functions used in practice. Essentially, the underlying assumptions are consistent with most practical circumstances, where the output layer is narrower than any hidden layer. In addition, the loss surface of a neural network with piecewise linear activations is partitioned into multiple smooth and multilinear cells by nondifferentiable boundaries. The constructed spurious local minima are concentrated in one cell as a valley: they are connected with each other by a continuous path on which the empirical risk is invariant. Further, for one-hidden-layer networks, we prove that all local minima in a cell constitute an equivalence class; they are concentrated in a valley; and they are all global minima in the cell.
accept-poster
Quoting R3: "This paper studies the theoretical property of neural network's loss surface. The main contribution is to prove that the loss surface of every neural network (with arbitrary depth) with piecewise linear activations has infinite spurious local minima." There were split reviews, with two reviewers recommending acceptance and one recommending rejection. During a robust rebuttal and discussion phase, both R2 and R3's appreciation for the work was strengthened. The authors also provided a robust response to R1, whose main concerns included (i) that the paper's analysis is limited to piecewise linear activation functions, (ii) technical questions about the difficulty of proving theorem 2, which appear to have been answered in the discussion, and (iii) concerns about the strength of the language employed. On the balance, the reviewers were positively impressed with the relevance of the theoretical study and its contributions. Genuine shortcomings and misunderstandings were systematically resolved during the rebuttal process.
train
[ "B1e6XZRcsB", "SJg4hxCcoH", "BylzHCT5sS", "r1xCmsqKjH", "rkxc9bmFjB", "S1eHva6_ir", "Bkl1dQB2YH", "HkeojKmBjS", "ByxK47GSsH", "B1gUH9Rfor", "rklHAmAVsB", "S1lXLEcNsr", "S1xg7CLEiH", "Skx7pZyVjS", "SklDLOHXiS", "SJlQN4Z7sH", "Syeg-vRziH", "HkgQJvRMoS", "Byg9OHRfjS", "S1lvlHCMjB"...
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "public", "official_reviewer", "author", "author", "author", "author", "author", ...
[ "Q2.3: Mathematically speaking, I agree there is a difference between deep and shallow. And I do appreciate the effort to really prove it. But the explanation is not for the technical nontriviality of proving \"deep\" instead of shallow.\n\nA2.3: Thank you for the recognization of our effort. We would love to clari...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "r1xCmsqKjH", "r1xCmsqKjH", "r1xCmsqKjH", "Byg9OHRfjS", "S1eHva6_ir", "B1gUH9Rfor", "iclr_2020_B1x6BTEKwr", "ByxK47GSsH", "rklHAmAVsB", "Bkl1dQB2YH", "S1lXLEcNsr", "Skx7pZyVjS", "SklDLOHXiS", "SJlQN4Z7sH", "iclr_2020_B1x6BTEKwr", "Syeg-vRziH", "B1g3dnrRKH", "B1g3dnrRKH", "SJxEpgt...
iclr_2020_B1lGU64tDr
Relational State-Space Model for Stochastic Multi-Object Systems
Real-world dynamical systems often consist of multiple stochastic subsystems that interact with each other. Modeling and forecasting the behavior of such dynamics are generally not easy, due to the inherent hardness in understanding the complicated interactions and evolutions of their constituents. This paper introduces the relational state-space model (R-SSM), a sequential hierarchical latent variable model that makes use of graph neural networks (GNNs) to simulate the joint state transitions of multiple correlated objects. By letting GNNs cooperate with SSM, R-SSM provides a flexible way to incorporate relational information into the modeling of multi-object dynamics. We further suggest augmenting the model with normalizing flows instantiated for vertex-indexed random variables and propose two auxiliary contrastive objectives to facilitate the learning. The utility of R-SSM is empirically evaluated on synthetic and real time series datasets.
accept-poster
The paper proposes what is termed the Relational State-Space Model (R-SSM), which can be used for modeling interacting time-series data. The model essentially consists of a set of (nonlinear) state space models whose states are jointly evolved in a way that takes into account a known interaction structure between them (the "relational" part, even though technically it is just a coupling structure -- the term relational structure has in the past been used for models with objects and classes; for example, see the difference between "coupled HMM" vs "relational HMM"). The authors also propose a graph normalizing flow operation to model the joint state evolution. The main weakness of the paper is the complexity of the model. However, from a modeling point of view, R-SSM seems suitable in situations where the interaction structure is known, and this is demonstrated in the experimental results when comparing against the baselines.
train
[ "HJeoZfEKiB", "SygFRKocor", "HJgLdOCIiH", "ByxfCSnyqS", "SyxZ56f9jr", "rklcBcUdsr", "SygRpc8OsS", "r1l0rn7DoH", "S1eg3T6ZoS", "r1eQTVd4qB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for your constructive feedback. We address your concerns in order.\n\n1. RE: Lack of relevant baselines.\n(1) >\"baselines are overall focused on recurrent and autoregressive models which seem underequipped to address these problems\"\nFirst of all, we clarify that this statement is not accurate because sto...
[ -1, -1, -1, 6, -1, -1, -1, -1, 3, 6 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, 1, 3 ]
[ "S1eg3T6ZoS", "SyxZ56f9jr", "iclr_2020_B1lGU64tDr", "iclr_2020_B1lGU64tDr", "SygRpc8OsS", "ByxfCSnyqS", "rklcBcUdsr", "r1eQTVd4qB", "iclr_2020_B1lGU64tDr", "iclr_2020_B1lGU64tDr" ]
iclr_2020_rJxX8T4Kvr
Learning Efficient Parameter Server Synchronization Policies for Distributed SGD
We apply a reinforcement learning (RL) based approach to learning optimal synchronization policies used for Parameter Server-based distributed training of machine learning models with Stochastic Gradient Descent (SGD). Utilizing a formal synchronization policy description in the PS-setting, we are able to derive a suitable and compact description of states and actions, allowing us to efficiently use the standard off-the-shelf deep Q-learning algorithm. As a result, we are able to learn synchronization policies which generalize to different cluster environments, different training datasets and small model variations and (most importantly) lead to considerable decreases in training time when compared to standard policies such as bulk synchronous parallel (BSP), asynchronous parallel (ASP), or stale synchronous parallel (SSP). To support our claims we present extensive numerical results obtained from experiments performed in simulated cluster environments. In our experiments training time is reduced by 44% on average and learned policies generalize to multiple unseen circumstances.
accept-poster
The authors consider a parameter-server setup where the learner acts as a server communicating updated weights to workers and receiving gradient updates from them. A major question then relates to the synchronisation of the gradient updates, for which a couple of *fixed* heuristics exist that trade off accuracy of updates (BSP) for speed (ASP), or even combine the two by allowing workers to be at most k steps out of sync. Instead, the authors propose to learn a synchronisation policy using RL. The authors present results on both a simulated and a real environment. Overall, the RL-based method seems to provide some improvement over the fixed protocols; however, the margin between the fixed protocols and RL gets smaller on the real clusters. This is actually the main concern raised by the reviewers as well (especially R2) -- the paper in its initial submission did not include the real cluster results; rather, these were added at the rebuttal. I find this to be an interesting real-world application of RL and I think it provides an alternative environment for testing RL algorithms beyond simulated environments. As such, I’m recommending acceptance. However, I do ask the authors to be upfront with the real cluster results and move them into the main paper.
train
[ "BJlqk6sjsr", "rklzmaFUiS", "r1e5aiYLiB", "r1lBQhF8sH", "ByltAYYLsB", "SJgDVutUjB", "HJle0nKIjH", "SyeD3aF8sB", "H1e_dFFLiB", "SkedtDontS", "BJlg5IHCFS", "Byl_kmjk9S" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your responses to my review. I appreciate the effort that went into running the initial experiments on a real cluster. At this stage, I still don't feel that this evaluation is sufficient to convince me to raise my review score. I would expect that the workload (e.g., larger dataset and model) can ha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 1 ]
[ "r1e5aiYLiB", "SkedtDontS", "BJlg5IHCFS", "BJlg5IHCFS", "Byl_kmjk9S", "iclr_2020_rJxX8T4Kvr", "SkedtDontS", "SkedtDontS", "Byl_kmjk9S", "iclr_2020_rJxX8T4Kvr", "iclr_2020_rJxX8T4Kvr", "iclr_2020_rJxX8T4Kvr" ]
iclr_2020_ryg48p4tPH
Action Semantics Network: Considering the Effects of Actions in Multiagent Systems
In multiagent systems (MASs), each agent makes individual decisions but all of them contribute globally to the system evolution. Learning in MASs is difficult since each agent's selection of actions must take place in the presence of other co-learning agents. Moreover, the environmental stochasticity and uncertainties increase exponentially with the increase in the number of agents. Previous works borrow various multiagent coordination mechanisms into deep learning architecture to facilitate multiagent coordination. However, none of them explicitly consider action semantics between agents that different actions have different influences on other agents. In this paper, we propose a novel network architecture, named Action Semantics Network (ASN), that explicitly represents such action semantics between agents. ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them. ASN can be easily combined with existing deep reinforcement learning (DRL) algorithms to boost their performance. Experimental results on StarCraft II micromanagement and Neural MMO show ASN significantly improves the performance of state-of-the-art DRL approaches compared with several network architectures.
accept-poster
The authors address the challenge of sample-efficient learning in multi-agent systems. They propose a model that distinguishes actions in terms of their semantics, specifically in terms of whether they influence the acting agent and environment or whether they influence other agents. This additional structure is shown to substantially benefit learning speed when composed with a range of state of the art multi-agent RL algorithms. During the rebuttal, technical questions were well addressed and the overall quality of the paper improved. The paper provides interesting novel insights on how the proposed structure improves learning.
val
[ "SJgwr3-TtS", "BJlkkc1DsB", "B1gQ0um4oB", "BkxFAPQNjS", "H1xavvm4jB", "HJg38MWCKS", "r1lZZbJMqH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a neural network architecture that provides an agent-agent based embeddings that are used for actions that directly affect specific agent. Proposed architectural choice exploits (implicitly assumed) independence of some actions wrt. observations of agents that it is not directly affecting. Auth...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_ryg48p4tPH", "B1gQ0um4oB", "SJgwr3-TtS", "HJg38MWCKS", "r1lZZbJMqH", "iclr_2020_ryg48p4tPH", "iclr_2020_ryg48p4tPH" ]
iclr_2020_SkxBUpEKwH
Vid2Game: Controllable Characters Extracted from Real-World Videos
We extract a controllable model from a video of a person performing a certain activity. The model generates novel image sequences of that person, according to user-defined control signals, typically marking the displacement of the moving body. The generated video can have an arbitrary background, and effectively capture both the dynamics and appearance of the person. The method is based on two networks. The first maps a current pose, and a single-instance control signal to the next pose. The second maps the current pose, the new pose, and a given background, to an output frame. Both networks include multiple novelties that enable high-quality performance. This is demonstrated on multiple characters extracted from various videos of dancers and athletes.
accept-poster
This paper proposes to extract a character from a video, manually control the character, and render it into the background in real time. The rendered video can have an arbitrary background and capture both the dynamics and appearance of the person. All three reviewers praise the visual quality of the synthesized video, and the paper is well written with extensive details. Some concerns are raised: for example, despite an excellent engineering effort, there are few things the reader would scientifically learn from this paper. An additional ablation study on each component would also help better understanding of the approach. Given the level of effort, the quality of the results and the reviewers’ comments, the ACs recommend acceptance as a poster.
val
[ "B1lphCIxoB", "rkeqvIaJor", "SkxzOluyor", "Hkx_-lO1oB", "H1gcRli3YB", "Byxe5udgcS" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We agree with most of the comments.", "The paper presents an approach to extract a character from a video and then maneuver that character in the plane, optionally with other backgrounds. The character is then redrawn into the background with a neural net, and all of this is done in real time.\n\nAll in all, thi...
[ -1, 6, -1, -1, 6, 6 ]
[ -1, 3, -1, -1, 5, 3 ]
[ "rkeqvIaJor", "iclr_2020_SkxBUpEKwH", "H1gcRli3YB", "Byxe5udgcS", "iclr_2020_SkxBUpEKwH", "iclr_2020_SkxBUpEKwH" ]
iclr_2020_B1l8L6EtDS
Self-Adversarial Learning with Comparative Discrimination for Text Generation
Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples. To address the issues, we propose a novel self-adversarial learning (SAL) paradigm for improving GANs' performance in text generation. In contrast to standard GANs that use a binary classifier as its discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator which is a pairwise classifier for comparing the text quality between a pair of samples. During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples. This self-improvement reward mechanism allows the model to receive credits more easily and avoid collapsing towards the limited number of real samples, which not only helps alleviate the reward sparsity issue but also reduces the risk of mode collapse. Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity, and yields more stable performance compared to the previous GANs for text generation.
accept-poster
This paper proposes a method for improving training of text generation with GANs by performing discrimination between different generated examples, instead of solely between real and generated examples. R3 and R1 appreciated the general idea, and thought that while there are still concerns, overall the paper seems to be interesting enough to warrant publication at ICLR. R2 has a rating of "weak reject", but I tend to agree with the authors that comparison with other methods that use different model architectures is orthogonal to the contribution of this paper. In sum, I think that this paper would likely make a good contribution to ICLR and recommend acceptance.
train
[ "HklAibsRYr", "SyeAycg2oB", "BkxpIGkhjB", "HklMyhAMjB", "H1xDr_AziB", "SJeCKuRziB", "BJl00PCGir", "HJgVe1df9H", "SygZ4flAYH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper describes a self-adversarial method to train a GAN for text generation that circumventes the problems of mode collapse and reward sparsity. They replace the traditional binary discriminator with a comparative discriminator, which provides the generator with more frequent rewards that are not al...
[ 8, -1, -1, -1, -1, -1, -1, 3, 8 ]
[ 5, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_B1l8L6EtDS", "BkxpIGkhjB", "BJl00PCGir", "iclr_2020_B1l8L6EtDS", "SygZ4flAYH", "HJgVe1df9H", "HklAibsRYr", "iclr_2020_B1l8L6EtDS", "iclr_2020_B1l8L6EtDS" ]
iclr_2020_ryxOUTVYDH
Robust training with ensemble consensus
Since deep neural networks are over-parameterized, they can memorize noisy examples. We address such a memorization issue in the presence of label noise. From the fact that deep neural networks cannot generalize to neighborhoods of memorized features, we hypothesize that noisy examples do not consistently incur small losses on the network under a certain perturbation. Based on this, we propose a novel training method called Learning with Ensemble Consensus (LEC) that prevents overfitting to noisy examples by removing them based on the consensus of an ensemble of perturbed networks. One of the proposed LECs, LTEC, outperforms the current state-of-the-art methods on noisy MNIST, CIFAR-10, and CIFAR-100 in an efficient manner.
accept-poster
This paper proposes an ensemble method to identify noisy labels in the training data of supervised learning. The underlying hypothesis is that examples with label noise require memorization. The paper proposes methods to identify and remove bad training examples by retaining only the training data that maintains low losses after perturbations to the model parameters. This idea is developed in several candidate ensemble algorithms. One of the proposed ensemble methods exceeds the performance of state-of-the-art methods on MNIST, CIFAR-10 and CIFAR-100. The reviewers found several strengths and a few weaknesses in the paper. The paper was well motivated and clear. The proposed solution was novel and plausible. The experiments were comprehensive. The reviewers identified several parts of the paper that could be clearer or where more detail could be provided, including a complexity analysis and extended experiments. The author response addressed the reviewer questions directly and also in a revised document. In the discussion phase, the reviewers were largely satisfied that their concerns were addressed. This paper should be accepted for publication as it presents a clear problem and solution method along with convincing evidence of the method's merits.
test
[ "Bkg1C7AjoS", "rkgwA61njr", "HJxVK9CioH", "H1ge8YenjS", "BJgtvXIzYB", "SyxRhFeaYH", "rkeszSQ8qr", "r1l-Lo_LqS", "BkeqDNvH5B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for your detailed feedback on our paper. We would like to answer your questions as follows:\n\nQ1. The authors proposed to perturb model parameters in order to find noisy training examples. Is there any reason that the authors did not perturb feature values in order to find noisy training examples? I wou...
[ -1, -1, -1, -1, 3, 6, 8, -1, -1 ]
[ -1, -1, -1, -1, 1, 3, 3, -1, -1 ]
[ "SyxRhFeaYH", "BJgtvXIzYB", "rkeszSQ8qr", "iclr_2020_ryxOUTVYDH", "iclr_2020_ryxOUTVYDH", "iclr_2020_ryxOUTVYDH", "iclr_2020_ryxOUTVYDH", "BkeqDNvH5B", "iclr_2020_ryxOUTVYDH" ]
iclr_2020_SklOUpEYvB
Identifying through Flows for Recovering Latent Representations
Identifiability, or recovery of the true latent representations from which the observed data originates, is de facto a fundamental goal of representation learning. Yet, most deep generative models do not address the question of identifiability, and thus fail to deliver on the promise of the recovery of the true latent sources that generate the observations. Recent work proposed identifiable generative modelling using variational autoencoders (iVAE) with a theory of identifiability. Due to the intractability of the KL divergence between the variational approximate posterior and the true posterior, however, iVAE has to maximize the evidence lower bound (ELBO) of the marginal likelihood, leading to suboptimal solutions in both theory and practice. In contrast, we propose an identifiable framework for estimating latent representations using a flow-based model (iFlow). Our approach directly maximizes the marginal likelihood, allowing for theoretical guarantees on identifiability, thereby dispensing with variational approximations. We derive its optimization objective in analytical form, making it possible to train iFlow in an end-to-end manner. Simulations on synthetic data validate the correctness and effectiveness of our proposed method and demonstrate its practical advantages over other existing methods.
accept-poster
Main content: Blind review #1 summarizes it well: This paper is about learning an identifiable generative model, iFlow, that builds upon a recent result on nonlinear ICA. The key idea is providing side information to identify the latent representation, i.e., essentially a prior conditioned on extra information such as labels, and restricting the mapping to flows so that the likelihood can be computed. As the log-likelihood of a flow model is readily available, a direct approach can be used for learning that optimizes both the prior and the observation model. -- Discussion: Reviewer questions were mostly about clarification, which the authors addressed during the rebuttal period. -- Recommendation and justification: All reviewers agree the paper is a weak accept based on degree of depth, novelty, and impact.
train
[ "HklCCT9ecr", "SJeepPCzsH", "HJxYQkyXsB", "rJx50KRfjr", "SJl5FcOf9B" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper is about learning an identifiable generative model, iFlow, that builds upon a recent result on nonlinear ICA. The key idea is providing side information to identify the latent representation, i.e., essentially a prior conditioned on extra information such as labels and restricting the mapping to flows f...
[ 6, -1, -1, -1, 6 ]
[ 3, -1, -1, -1, 1 ]
[ "iclr_2020_SklOUpEYvB", "HklCCT9ecr", "SJl5FcOf9B", "HklCCT9ecr", "iclr_2020_SklOUpEYvB" ]
iclr_2020_BkeWw6VFwr
Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing
It is well-known that classifiers are vulnerable to adversarial perturbations. To defend against adversarial perturbations, various certified robustness results have been derived. However, existing certified robustnesses are limited to top-1 predictions. In many real-world applications, top-k predictions are more relevant. In this work, we aim to derive certified robustness for top-k predictions. In particular, our certified robustness is based on randomized smoothing, which turns any classifier to a new classifier via adding noise to an input example. We adopt randomized smoothing because it is scalable to large-scale neural networks and applicable to any classifier. We derive a tight robustness in ℓ2 norm for top-k predictions when using randomized smoothing with Gaussian noise. We find that generalizing the certified robustness from top-1 to top-k predictions faces significant technical challenges. We also empirically evaluate our method on CIFAR10 and ImageNet. For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8\% when the ℓ2-norms of the adversarial perturbations are less than 0.5 (=127/255). Our code is publicly available at: \url{https://github.com/jjy1994/Certify_Topk}.
accept-poster
The paper extends the work on randomized smoothing for certifiably robust classifiers developed in prior work to a weaker specification requiring that the set of top-k predictions remain unchanged under adversarial perturbations of the input (rather than just the top-1). This enables the authors to achieve stronger results on robustness of classifiers on CIFAR10 and ImageNet (where the authors report the top-5 accuracy). This is an interesting extension of certified defenses that is likely to be relevant for complex prediction tasks with several classes (ImageNet and beyond), where top-1 robustness may be difficult and unrealistic to achieve. The reviewers were in consensus on acceptance and minor concerns were alleviated during the rebuttal phase. I therefore recommend acceptance.
train
[ "r1lZzDODsH", "SkgZDld6Yr", "SyeQ3xEPir", "rygi0v5liS", "rylPsDceiH", "SJxN8wqesH", "Hkxarun2tB", "Hyxynw6pFS" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the insightful comments and carefully reading of our responses. ", "This paper builds upon the random smoothing technique for top-1 prediction proposed by Cohen et al. for certifying top-k predictions with probabilistic guarantees, which enjoys good scalability to large neural networks ...
[ -1, 6, -1, -1, -1, -1, 6, 6 ]
[ -1, 5, -1, -1, -1, -1, 4, 3 ]
[ "SyeQ3xEPir", "iclr_2020_BkeWw6VFwr", "rylPsDceiH", "Hkxarun2tB", "SkgZDld6Yr", "Hyxynw6pFS", "iclr_2020_BkeWw6VFwr", "iclr_2020_BkeWw6VFwr" ]
iclr_2020_r1xGP6VYwH
Optimistic Exploration even with a Pessimistic Initialisation
Optimistic initialisation is an effective strategy for efficient exploration in reinforcement learning (RL). In the tabular case, all provably efficient model-free algorithms rely on it. However, model-free deep RL algorithms do not use optimistic initialisation despite taking inspiration from these provably efficient tabular algorithms. In particular, in scenarios with only positive rewards, Q-values are initialised at their lowest possible values due to commonly used network initialisation schemes, a pessimistic initialisation. Merely initialising the network to output optimistic Q-values is not enough, since we cannot ensure that they remain optimistic for novel state-action pairs, which is crucial for exploration. We propose a simple count-based augmentation to pessimistically initialised Q-values that separates the source of optimism from the neural network. We show that this scheme is provably efficient in the tabular setting and extend it to the deep RL setting. Our algorithm, Optimistic Pessimistically Initialised Q-Learning (OPIQ), augments the Q-value estimates of a DQN-based agent with count-derived bonuses to ensure optimism during both action selection and bootstrapping. We show that OPIQ outperforms non-optimistic DQN variants that utilise a pseudocount-based intrinsic motivation in hard exploration tasks, and that it predicts optimistic estimates for novel state-action pairs.
accept-poster
The paper proposes a scheme to enable optimistic initialization in the deep RL setting, and shows that it's helpful. The reviewers agreed that the paper is well-motivated and executed, but had some minor reservations (e.g. about the proposal scaling in practice). In an example of a successful rebuttal, two of the reviewers raised their scores after the authors clarified the paper and added an experiment on Montezuma's Revenge. The paper proposes a useful, simple and practical idea on the bridge between tabular and deep RL, and I gladly recommend acceptance.
train
[ "rJeE7x9-5r", "S1g-THFhiB", "SkxvBHthjS", "rke3ol-ItS", "HkeeoJ-iir", "rJe0fZhKiS", "BJe-11eDoH", "S1ljc0kPir", "SJxJIRJDor", "BJgEwAgZ5S" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "#rebuttal responses\n I am pleased by the authors' responses. Thus I change the score to weak accept.\n\n#review\nThis paper presented OPIQ, a model-free algorithm that does not rely on an optimistic initialization\nto ensure efficient exploration. OPIQ augments the Q-values with a new count-based optimism bonus. ...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ 3, -1, -1, 5, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_r1xGP6VYwH", "HkeeoJ-iir", "rJe0fZhKiS", "iclr_2020_r1xGP6VYwH", "BJe-11eDoH", "SJxJIRJDor", "rke3ol-ItS", "BJgEwAgZ5S", "rJeE7x9-5r", "iclr_2020_r1xGP6VYwH" ]
iclr_2020_SygXPaEYvH
VL-BERT: Pre-training of Generic Visual-Linguistic Representations
We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input. In it, each element of the input is either of a word from the input sentence, or a region-of-interest (RoI) from the input image. It is designed to fit for most of the visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset, together with text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure can better align the visual-linguistic clues and benefit the downstream tasks, such as visual commonsense reasoning, visual question answering and referring expression comprehension. It is worth noting that VL-BERT achieved the first place of single model on the leaderboard of the VCR benchmark.
accept-poster
The paper proposes a new pretrained language model that incorporates visual information into the embeddings. Experiments show state-of-the-art results on three downstream tasks. The paper is well written, and detailed comparisons with related work are given. Some concerns about clarity and novelty were raised by the reviewers and answered in detail, and I think the paper is acceptable.
train
[ "r1ghNL9qoB", "SJeJP4cqjr", "rkluSQc5oH", "SyxqeQccoB", "ByxhvaYpYr", "SyxJtUoRtr", "S1ludY4GqS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We feel we can well address the concerns of R#3, and hope R#3 give a second thought about the paper.\n\nQ#1: Concerns about novelty.\n\nA#1: First of all, the existence of concurrent works does not hurt the novelty of our method. And it should not be a reason for rejecting the paper. One cannot forecast what other...
[ -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "ByxhvaYpYr", "iclr_2020_SygXPaEYvH", "SyxJtUoRtr", "S1ludY4GqS", "iclr_2020_SygXPaEYvH", "iclr_2020_SygXPaEYvH", "iclr_2020_SygXPaEYvH" ]
iclr_2020_SkxSv6VFvS
Deformable Kernels: Adapting Effective Receptive Fields for Object Deformation
Convolutional networks are not aware of an object's geometric variations, which leads to inefficient utilization of model and data capacity. To overcome this issue, recent works on deformation modeling seek to spatially reconfigure the data towards a common arrangement such that semantic recognition suffers less from deformation. This is typically done by augmenting static operators with learned free-form sampling grids in the image space, dynamically tuned to the data and task for adapting the receptive field. Yet adapting the receptive field does not quite reach the actual goal -- what really matters to the network is the *effective* receptive field (ERF), which reflects how much each pixel contributes. It is thus natural to design other approaches to adapt the ERF directly during runtime. In this work, we instantiate one possible solution as Deformable Kernels (DKs), a family of novel and generic convolutional operators for handling object deformations by directly adapting the ERF while leaving the receptive field untouched. At the heart of our method is the ability to resample the original kernel space towards recovering the deformation of objects. This approach is justified with theoretical insights that the ERF is strictly determined by data sampling locations and kernel values. We implement DKs as generic drop-in replacements of rigid kernels and conduct a series of empirical studies whose results conform with our theories. Over several tasks and standard base models, our approach compares favorably against prior works that adapt during runtime. In addition, further experiments suggest a working mechanism orthogonal and complementary to previous works.
accept-poster
In my opinion, this paper is borderline (but my expertise is not in this area) and the reviewers are too uncertain to be of help in making an informed decision.
train
[ "SkxYphBsir", "SkxDm5xCYH", "SkgzaqHjir", "SJgy9-BisB", "Bkgp-8SsiH", "H1guw8eW9S", "H1eXSFlSqr" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We have updated our paper during the rebuttal session to include more details regarding to \n + implementation of the operators (Appendix A), \n + model specifics we used for experiments (Appendix B),\n + and more visualization of ERFs under different forms of object deformation (Appendix C).\n\nWe will m...
[ -1, 6, -1, -1, -1, 6, 6 ]
[ -1, 1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_SkxSv6VFvS", "iclr_2020_SkxSv6VFvS", "SkxDm5xCYH", "H1eXSFlSqr", "H1guw8eW9S", "iclr_2020_SkxSv6VFvS", "iclr_2020_SkxSv6VFvS" ]
iclr_2020_BygSP6Vtvr
Ensemble Distribution Distillation
Ensembles of models often yield improvements in system performance. These ensemble approaches have also been empirically shown to yield robust measures of uncertainty, and are capable of distinguishing between different forms of uncertainty. However, ensembles come at a computational and memory cost which may be prohibitive for many applications. There has been significant work done on the distillation of an ensemble into a single model. Such approaches decrease computational cost and allow a single model to achieve an accuracy comparable to that of an ensemble. However, information about the diversity of the ensemble, which can yield estimates of different forms of uncertainty, is lost. This work considers the novel task of Ensemble Distribution Distillation (EnD^2) - distilling the distribution of the predictions from an ensemble, rather than just the average prediction, into a single model. EnD^2 enables a single model to retain both the improved classification performance of ensemble distillation as well as information about the diversity of the ensemble, which is useful for uncertainty estimation. A solution for EnD^2 based on Prior Networks, a class of models which allow a single neural network to explicitly model a distribution over output distributions, is proposed in this work. The properties of EnD^2 are investigated on both an artificial dataset, and on the CIFAR-10, CIFAR-100 and TinyImageNet datasets, where it is shown that EnD^2 can approach the classification performance of an ensemble, and outperforms both standard DNNs and Ensemble Distillation on the tasks of misclassification and out-of-distribution input detection.
accept-poster
The paper investigates how to distill an ensemble effectively (using a prior network) in order to reap the benefits of uncertainty estimation provided by ensembling (in addition to the accuracy gains provided by ensembling). Overall, the paper is nicely written, and makes a valuable contribution. The authors also addressed most of the initial concerns raised by the reviewers. I recommend the paper for acceptance, and encourage the authors to take into account the reviewer feedback when preparing the final version.
val
[ "rygXFaj3tB", "H1lCf2NFor", "r1xzMickcB", "r1goHTNYjH", "H1l_eIBFoH", "H1xidUSYjS", "BJePTRNYiS", "BkxPqUmUtH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "\n\n========= Post Rebuttal ========= \n\n\nI appreciate the authors' effort in addressing the raised issues. I think the revised paper has higher quality in that the additional ablation studies are very useful to understand the effectiveness of the method and the importance of the hyperparameters. It now also has...
[ 6, -1, 6, -1, -1, -1, -1, 8 ]
[ 4, -1, 1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_BygSP6Vtvr", "iclr_2020_BygSP6Vtvr", "iclr_2020_BygSP6Vtvr", "BkxPqUmUtH", "rygXFaj3tB", "rygXFaj3tB", "r1xzMickcB", "iclr_2020_BygSP6Vtvr" ]
iclr_2020_B1lLw6EYwB
Gap-Aware Mitigation of Gradient Staleness
Cloud computing is becoming increasingly popular as a platform for distributed training of deep neural networks. Synchronous stochastic gradient descent (SSGD) suffers from substantial slowdowns due to stragglers if the environment is non-dedicated, as is common in cloud computing. Asynchronous SGD (ASGD) methods are immune to these slowdowns but are scarcely used due to gradient staleness, which encumbers the convergence process. Recent techniques have had limited success mitigating the gradient staleness when scaling up to many workers (computing nodes). In this paper we define the Gap as a measure of gradient staleness and propose Gap-Aware (GA), a novel asynchronous-distributed method that penalizes stale gradients linearly to the Gap and performs well even when scaling to large numbers of workers. Our evaluation on the CIFAR, ImageNet, and WikiText-103 datasets shows that GA outperforms the currently accepted gradient penalization method in final test accuracy. We also provide a convergence rate proof for GA. Despite prior beliefs, we show that if GA is applied, momentum becomes beneficial in asynchronous environments, even when the number of workers scales up.
accept-poster
The authors propose a novel approach for measuring gradient staleness and use this measure to penalize stale gradients in an asynchronous stochastic gradient set up. Following previous work, they provide a convergence proof for their approach. Most importantly, they provide extensive evaluations comparing against previous approaches and show impressive gains over previous work. After the author response, the primary concern from reviewers is the gap between the proposed method and single worker SGD/synchronous SGD. I feel that the authors have made compelling arguments that ASGD is an important optimization paradigm to consider, so their improvements in narrowing the gap are of interest to the community. There were some concerns about the novelty of the theory, and my impression is that the theorem is straightforward to prove based on assumptions and previous work; however, I view the main contribution of the paper as empirical. This paper is borderline, but I think the impressive empirical results over existing work on ASGD is a worthwhile contribution and others will find it interesting, so I am recommending acceptance.
train
[ "Sylx4uze9H", "Hkga3y35iS", "rklFnWdcoH", "BJxz2j3FsS", "HkepkNFesr", "S1xMsPtxjB", "rJe6bXtljH", "Hyejd9jotH", "B1lQL-d6YH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies training large machine learning models in a distributed setup. For such a setup, as the number of workers increases, employing synchronous stochastic gradient descent incurs a significant delay due to the presence of straggling workers. Using asynchronous methods should circumvent the issue of s...
[ 6, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_B1lLw6EYwB", "rklFnWdcoH", "BJxz2j3FsS", "HkepkNFesr", "B1lQL-d6YH", "Sylx4uze9H", "Hyejd9jotH", "iclr_2020_B1lLw6EYwB", "iclr_2020_B1lLw6EYwB" ]
iclr_2020_SJxDDpEKvH
Counterfactuals uncover the modular structure of deep generative models
Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data. However, manipulating such representation to perform meaningful and controllable transformations in the data space remains challenging without some form of supervision. While previous work has focused on exploiting statistical independence to \textit{disentangle} latent factors, we argue that such requirement can be advantageously relaxed and propose instead a non-statistical framework that relies on identifying a modular organization of the network, based on counterfactual manipulations. Our experiments support that modularity between groups of channels is achieved to a certain degree on a variety of generative models. This allowed the design of targeted interventions on complex image datasets, opening the way to applications such as computationally efficient style transfer and the automated assessment of robustness to contextual changes in pattern recognition systems.
accept-poster
This paper provides a fresh application of tools from causality theory to investigate modularity and disentanglement in learned deep generative models. It also goes one step further towards making these models more transparent by studying their internal components. While there is still margin for improving the experiments, I believe this paper is a timely contribution to the ICLR/ML community. This paper has high variance in the reviewer scores, but I believe the authors did a good job with the revision and rebuttal. I recommend acceptance.
test
[ "S1eYvp42jB", "B1lzZnEhiS", "HJguPKE3oB", "BJxcC6MWjH", "rkeRx2DpFB", "BJlEdHcYqH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your useful comments and positive review of our work. We uploaded a revision of the manuscript based on all reviewers’ comments and address yours specifically in this reply.\n\n1) “How early does this sort of modularity arise over the course of training? Does it vary for GANs versus beta-vae like mod...
[ -1, -1, -1, 8, 8, 3 ]
[ -1, -1, -1, 3, 3, 3 ]
[ "rkeRx2DpFB", "BJlEdHcYqH", "BJxcC6MWjH", "iclr_2020_SJxDDpEKvH", "iclr_2020_SJxDDpEKvH", "iclr_2020_SJxDDpEKvH" ]
iclr_2020_BJeKwTNFvB
Physics-as-Inverse-Graphics: Unsupervised Physical Parameter Estimation from Video
We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available. Existing physical scene understanding methods require either object state supervision, or do not integrate with differentiable physics to learn interpretable system parameters and states. We address this problem through a \textit{physics-as-inverse-graphics} approach that brings together vision-as-inverse-graphics and differentiable physics engines, where objects and explicit state and velocity representations are discovered by the model. This framework allows us to perform long term extrapolative video prediction, as well as vision-based model-predictive control. Our approach significantly outperforms related unsupervised methods in long-term future frame prediction of systems with interacting objects (such as ball-spring or 3-body gravitational systems), due to its ability to build dynamics into the model as an inductive bias. We further show the value of this tight vision-physics integration by demonstrating data-efficient learning of vision-actuated model-based control for a pendulum system. We also show that the controller's interpretability provides unique capabilities in goal-driven control and physical reasoning for zero-data adaptation.
accept-poster
The submission presents an approach to estimating physical parameters from video. The approach is sensible and is presented fairly well. The main criticism is that the approach is only demonstrated in simplistic "toy" settings. Nevertheless, the reviewers recommend (weakly) accepting the paper and the AC concurs.
train
[ "ryl2_pasoS", "HkgvJIR9iS", "Bygy8cvFoB", "r1eBE5vYiH", "BJxRhYPtjS", "HJxujYvYjB", "BylhQFvYjB", "H1exMYPtoB", "BylN9dvYjH", "rkxP3xuEYB", "ryeX_6kAtH", "S1xGzyBJcH" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for taking the time to review our responses.\n\nWe decided against running these experiments since we believe that most systems of interest where we are looking to estimate physical parameters have at most 3 objects (vide Physics101). While we could have simply made a 5-object bouncing ball dataset, we b...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "HkgvJIR9iS", "H1exMYPtoB", "rkxP3xuEYB", "rkxP3xuEYB", "ryeX_6kAtH", "ryeX_6kAtH", "S1xGzyBJcH", "S1xGzyBJcH", "iclr_2020_BJeKwTNFvB", "iclr_2020_BJeKwTNFvB", "iclr_2020_BJeKwTNFvB", "iclr_2020_BJeKwTNFvB" ]
iclr_2020_HJeiDpVFPr
An Inductive Bias for Distances: Neural Nets that Respect the Triangle Inequality
Distances are pervasive in machine learning. They serve as similarity measures, loss functions, and learning targets; it is said that a good distance measure solves a task. When defining distances, the triangle inequality has proven to be a useful constraint, both theoretically---to prove convergence and optimality guarantees---and empirically---as an inductive bias. Deep metric learning architectures that respect the triangle inequality rely, almost exclusively, on Euclidean distance in the latent space. Though effective, this fails to model two broad classes of subadditive distances, common in graphs and reinforcement learning: asymmetric metrics, and metrics that cannot be embedded into Euclidean space. To address these problems, we introduce novel architectures that are guaranteed to satisfy the triangle inequality. We prove our architectures universally approximate norm-induced metrics on Rn, and present a similar result for modified Input Convex Neural Networks. We show that our architectures outperform existing metric approaches when modeling graph distances and have a better inductive bias than non-metric approaches when training data is limited in the multi-goal reinforcement learning setting.
accept-poster
This paper proposes a neural network approach to approximate distances, based on a representation of norms in terms of convex homogeneous functions. The authors show universal approximation of norm-induced metrics and present applications to value-function approximation in RL and graph distance problems. Reviewers were in general agreement that this is a solid paper, well-written and with compelling results. The AC shares this positive assessment and therefore recommends acceptance.
train
[ "r1epIKj2sr", "HJeozQQ2ir", "Hkx-hcqNjH", "BJxB7a9Nir", "S1loT2qNjS", "B1lP9s9VsB", "HJgoBv0JcB", "HyxOWBlB9B", "r1lR8p5IqS", "Ske1Vfqa9H" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the response and additional reference! I've read over the other reviews and responses and still maintain my original score of an accept. ", "We thank the reviewers for their time and comments. We have uploaded a minor revision based on the feedback. In particular, we added a paragraph to Section 4 on ...
[ -1, -1, -1, -1, -1, -1, 8, 3, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 1 ]
[ "Hkx-hcqNjH", "iclr_2020_HJeiDpVFPr", "HJgoBv0JcB", "Ske1Vfqa9H", "r1lR8p5IqS", "HyxOWBlB9B", "iclr_2020_HJeiDpVFPr", "iclr_2020_HJeiDpVFPr", "iclr_2020_HJeiDpVFPr", "iclr_2020_HJeiDpVFPr" ]
iclr_2020_ryenvpEKDr
A Constructive Prediction of the Generalization Error Across Scales
The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks. Nevertheless, the functional form of this dependency remains elusive. In this work, we present a functional form which approximates well the generalization error in practice. Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales. Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks. We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.
accept-poster
The paper presents a very interesting idea for estimating the held-out error of deep models as a function of model and data set size. The authors intuit what the shape of the error should be, then they fit the parameters of a function of the desired shape and show that this has predictive power. I find this idea quite refreshing and the paper is well written with good experiments. Please make sure that the final version contains the cross-validation results provided during the rebuttal.
train
[ "H1l4XwSwiH", "HyejNTPssH", "BJxakKXRFS", "H1xrAsVPor", "SkxT1pNwir", "B1giWnNwsB", "rkeru_QAYH", "HJl_1dRl5r" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We would like the thank the reviewers for their helpful comments. We have updated the paper accordingly. Please see detailed responses in the individual comments on each review.", "I would like to thank the authors for updating the results of their error estimation experiments with 10 fold cross validation, whic...
[ -1, -1, 6, -1, -1, -1, 8, 1 ]
[ -1, -1, 1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_ryenvpEKDr", "H1xrAsVPor", "iclr_2020_ryenvpEKDr", "rkeru_QAYH", "HJl_1dRl5r", "BJxakKXRFS", "iclr_2020_ryenvpEKDr", "iclr_2020_ryenvpEKDr" ]
iclr_2020_BJlguT4YPr
Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base
We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations.
accept-poster
This paper proposes an approach to representing a symbolic knowledge base as a sparse matrix, which enables the use of differentiable neural modules for inference. This approach scales to large knowledge bases and is demonstrated on several tasks. Post-discussion and rebuttal, all three reviewers are in agreement that this is an interesting and useful paper. There were initially some concerns about clarity and polish, but these were resolved upon rebuttal and discussion. Therefore I recommend acceptance.
val
[ "BklxFnq2YS", "S1xi_7koYB", "rJl7JTbijS", "HJgpu3Wiir", "Bkefz3bsoB", "S1lWRZMTYS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes sparse-matrix KB representation for end-to-end KB reasoning tasks. They demonstrate that their algorithm is scalable to large knowledge graphs which is the central contribution of the paper.\nThey apply this to a bunch of tasks such as KB Completion and KBQA. This is done by mapping the query to...
[ 6, 6, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, 4 ]
[ "iclr_2020_BJlguT4YPr", "iclr_2020_BJlguT4YPr", "S1xi_7koYB", "BklxFnq2YS", "S1lWRZMTYS", "iclr_2020_BJlguT4YPr" ]
iclr_2020_HJlfuTEtvB
CLN2INV: Learning Loop Invariants with Continuous Logic Networks
Program verification offers a framework for ensuring program correctness and therefore systematically eliminating different classes of bugs. Inferring loop invariants is one of the main challenges behind automated verification of real-world programs which often contain many loops. In this paper, we present the Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants directly from program execution traces. Unlike existing neural networks, CLNs can learn precise and explicit representations of formulas in Satisfiability Modulo Theories (SMT) for loop invariants from program execution traces. We develop a new sound and complete semantic mapping for assigning SMT formulas to continuous truth values that allows CLNs to be trained efficiently. We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms existing approaches on the popular Code2Inv dataset. CLN2INV is the first tool to solve all 124 theoretically solvable problems in the Code2Inv dataset. Moreover, CLN2INV takes only 1.1 second on average for each problem, which is 40 times faster than existing approaches. We further demonstrate that CLN2INV can even learn 12 significantly more complex loop invariants than the ones required for the Code2Inv dataset.
accept-poster
This paper implements a novel architecture for inferring loop invariants in verification (though the paper bridges to compilers). The idea is novel and the paper is well executed. It is not the usual topic for ICLR, but it presents an important application of deep learning done well, and it has interesting implications for program synthesis. Therefore, I recommend acceptance.
train
[ "HJgTNLJFsB", "rklXASJtjB", "rJeB9uhBir", "rklXNHgroB", "r1l631xroB", "rklrlliQjS", "r1gfEdsysr", "SygB2FUk5r" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer" ]
[ "Thank you for taking the time to review our submission and providing thoughtful suggestions. We have made revisions to the submission based on your earlier feedback as follows: \n\n1.) As recommended, we have expanded out related works to specifically discuss relaxation efforts for satisfiability problems like Cir...
[ -1, -1, -1, -1, -1, 3, -1, 8 ]
[ -1, -1, -1, -1, -1, 3, -1, 1 ]
[ "SygB2FUk5r", "rklXNHgroB", "rklXNHgroB", "r1l631xroB", "rklrlliQjS", "iclr_2020_HJlfuTEtvB", "SygB2FUk5r", "iclr_2020_HJlfuTEtvB" ]
iclr_2020_HygrdpVKvr
NAS evaluation is frustratingly hard
Neural Architecture Search (NAS) is an exciting new field which promises to be as much as a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method’s relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macrostructure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures. To conclude, we suggest best practices, that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods. The code used is available at https://github.com/antoyang/NAS-Benchmark.
accept-poster
Summary: This paper provides comprehensive empirical evidence for some of the systemic issues in the NAS community, for example showing that several published NAS algorithms do not outperform random sampling on previously unseen data and that the training pipeline is more important in the DARTS space than the exact choice of neural architecture. I very much appreciate that code is available for reproducibility. Reviewer scores and discussion: The reviewers' scores have very high variance: 2/3 reviewers gave clear acceptance scores (8,8), very much liking the paper, whereas one reviewer gave a clear rejection score (1). In the discussion between the reviewers and the AC, despite the positive comments of the other reviewers, AnonReviewer 2 defended his/her position, arguing that the novelty is too low given previous works. The other reviewers argued against this, emphasizing that it is an important contribution to show empirical evidence for the importance of the training protocol (note that the intended contribution is *not* to introduce these training protocols; they are taken from previous work). Due to the high variance, I read the paper myself in detail. Here are my own two cents: - It is not new to compare to a single random sample. Sciuto et al clearly proposed this first; see Figure 1 (c) in https://arxiv.org/abs/1902.08142 - The systematic experiments showing the importance of the training pipeline are very useful, providing proper and much needed empirical evidence for the many existing suggestions that this might be the case. Figure 3 is utterly convincing. - Throughout, it would be good to put the work into perspective a bit more. E.g., correlations have been studied by many authors before. Also, the paper cites the best practice checklist in the beginning, but does not mention it in the section on best practices (my view is that this paper is in line with that checklist and provides important evidence for several points in it; the checklist also contains other points not being discussed in this paper; it would be good to know whether this paper suggests any new points for the checklist). Recommendation: Overall, I firmly believe that this paper is an important contribution to the NAS community. It may be viewed by some as "just" running some experiments, but the experiments it shows are very informative and will impact the community and help guide it in the right direction. I therefore recommend acceptance (as a poster).
train
[ "H1xfPBr3FH", "SylHHra8iB", "BJe39ytloS", "HygNjE9gor", "rygE_lYejr", "HygAVxO3OH", "Hyezhj16YS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThe paper scrutinizes commonly used evaluation strategies for neural architecture search.\nThe first contribution is to compare architectures found by 5 different search strategies from the literature against randomly sampled architectures from the same underlying search space.\nThe paper shows that across diffe...
[ 8, -1, -1, -1, -1, 1, 8 ]
[ 4, -1, -1, -1, -1, 3, 1 ]
[ "iclr_2020_HygrdpVKvr", "HygNjE9gor", "HygAVxO3OH", "H1xfPBr3FH", "Hyezhj16YS", "iclr_2020_HygrdpVKvr", "iclr_2020_HygrdpVKvr" ]
iclr_2020_B1eY_pVYvB
Efficient and Information-Preserving Future Frame Prediction and Beyond
Applying resolution-preserving blocks is a common practice to maximize information preservation in video prediction, yet their high memory consumption greatly limits their application scenarios. We propose CrevNet, a Conditionally Reversible Network that uses reversible architectures to build a bijective two-way autoencoder and its complementary recurrent predictor. Our model enjoys the theoretically guaranteed property of no information loss during the feature extraction, much lower memory consumption and computational efficiency. The lightweight nature of our model enables us to incorporate 3D convolutions without concern of memory bottleneck, enhancing the model's ability to capture both short-term and long-term temporal dependencies. Our proposed approach achieves state-of-the-art results on Moving MNIST, Traffic4cast and KITTI datasets. We further demonstrate the transferability of our self-supervised learning method by exploiting its learnt features for object detection on KITTI. Our competitive results indicate the potential of using CrevNet as a generative pre-training strategy to guide downstream tasks.
accept-poster
This paper introduces a new approach that consists of an invertible autoencoder and a reversible predictive module (RPM) for video future-frame prediction. Reviewers agree that the paper is well-written and the contributions are clear. It achieves new state-of-the-art results on a diverse set of video prediction datasets, with techniques that enable more efficient computation and a smaller memory footprint. Also, the video representation learned in a self-supervised way by the approach can generalize well to downstream tasks such as object detection. The concerns about the paper were relatively minor and were successfully addressed in the rebuttal. AC feels that this work makes a solid contribution with a well-designed model and strong empirical performance, which will attract wide interest in the areas of video future-frame prediction and self-supervised video representation learning. Hence, I recommend accepting this paper.
train
[ "B1xCq-c6tB", "S1eme707oB", "ByxTLfAmor", "ryx_nG0QoB", "B1x9JLh6Fr", "BkxpyxE79H" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper introduces Conditionally Reversible Network (CrevNet) that consists of the invertible autoencoder and a reversible predictive module (RPM). The two-way autoencoder is an invertible network that preserves the volume with no information loss while reducing memory consumption by using bijective downsamplin...
[ 6, -1, -1, -1, 6, 3 ]
[ 5, -1, -1, -1, 3, 3 ]
[ "iclr_2020_B1eY_pVYvB", "B1xCq-c6tB", "BkxpyxE79H", "B1x9JLh6Fr", "iclr_2020_B1eY_pVYvB", "iclr_2020_B1eY_pVYvB" ]
iclr_2020_HygsuaNFwr
Order Learning and Its Application to Age Estimation
We propose order learning to determine the order graph of classes, representing ranks or priorities, and classify an object instance into one of the classes. To this end, we design a pairwise comparator to categorize the relationship between two instances into one of three cases: one instance is `greater than,' `similar to,' or `smaller than' the other. Then, by comparing an input instance with reference instances and maximizing the consistency among the comparison results, the class of the input can be estimated reliably. We apply order learning to develop a facial age estimator, which provides the state-of-the-art performance. Moreover, the performance is further improved when the order graph is divided into disjoint chains using gender and ethnic group information or even in an unsupervised manner.
accept-poster
This paper addresses a promising method for order learning and applies the new ideas of multiple-chain learning and anchor selection to age estimation and aesthetic regression. The decision regarding instance class is made by comparing it with anchor instances in the same chain and maximizing the consistency among the comparison results. In a multi-chain setting, each chain may correspond to a higher-level attribute class, for example, gender or ethnic group. Supervised and unsupervised learning of multiple ordered chains is proposed. As rightly acknowledged by R4: “What more promising is the unsupervised chains, which could automatically search for a more optimal multi-chain division scheme than the pre-defined data division.” All three reviewers and AC agree that the proposed approach is interesting and shows promising results. There are several potential weaknesses and suggestions to further strengthen this work: (1) more quantitative results are needed for assessing the benefits of this approach (R3, R4) -- see R3’s request to complete the results for FG-Net, to include the results for CLAP2016 and a comparison with the SOTA method BridgeNet. Pleased to report that the authors have revised the manuscript and have included performance of the arithmetic scheme as well as the geometric scheme for FG-Net. Also the authors have provided some initial evaluations of BridgeNet and promised to report the final results as well as the results for CLAP2016 in the final version. (2) R3 and R4 have expressed concerns regarding using the geometric ratio of the class distances in age estimation and that the improvement may be caused by the data distribution that favours it (R4) or because the baseline methods are not fine-tuned in the same manner (R3). The authors have partially addressed this concern in the rebuttal. There is a large body of work in computer vision that is focused on relative comparison of samples based on attributes (e.g. age) that is not clearly articulated in the discussions / baseline comparisons (1CH) -- see the seminal work [Relative attributes by Parikh and Grauman, ICCV2011] and the follow up works. Considering the author response, the AC decided that the most crucial concerns have been addressed in the revision and that the paper could be accepted, but the authors are strongly urged to include additional results that were promised in the rebuttal for the final revision.
train
[ "BkgfsCoijr", "rye32TojjS", "H1xU9KYDoB", "Hke9YexXiH", "B1lsKTpGir", "Byg6NpMCKH", "HygmIfiAKB" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We are training the proposed algorithm (1CH) on MORPH II using setting A. The training is not complete yet, but its current results are as follows.\n\n ----------------------------------------------------------\n BridgeNet MAE=2.38\t CS=91.0%*\n Proposed(1CH) MAE=2.44\t CS=91.2%\n --...
[ -1, -1, 6, -1, -1, 8, 6 ]
[ -1, -1, 4, -1, -1, 1, 3 ]
[ "Hke9YexXiH", "H1xU9KYDoB", "iclr_2020_HygsuaNFwr", "HygmIfiAKB", "Byg6NpMCKH", "iclr_2020_HygsuaNFwr", "iclr_2020_HygsuaNFwr" ]
iclr_2020_HJgJtT4tvB
ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning
Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning of text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor) extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into EASY set while the rest as HARD set. Empirical results show that state-of-the-art models have an outstanding ability to capture biases contained in the dataset with high accuracy on EASY set. However, they struggle on HARD set with poor performance near that of random guess, indicating more research is needed to essentially enhance the logical reasoning ability of current models.
accept-poster
Main content: Blind review #1 summarizes it well: This paper presents a new reading comprehension dataset for logical reasoning. It is a multi-choice problem where questions are mainly from GMAT and LSAT, containing 4139 data points. The analyses of the data demonstrate that questions require diverse types of reasoning such as finding necessary/sufficient assumptions, whether statements strengthen/weaken the argument or explain/resolve the situation. The paper includes comprehensive experiments with baselines to identify bias in the dataset, where the answer-options-only model achieves near half (random is 25%). Based on this result, the test set is split into the easy and hard set, which will help better evaluation of the future models. The paper also reports the numbers on the split data using competitive baselines where the models achieve low performance on the hard set. -- Discussion: While the authors agree this is an important direction, there are reservations concerning the small size of the dataset that have not been fully addressed. -- Recommendation and justification: I still believe this paper should be accepted, as the existing datasets for reading comprehension are inadequate and it is important for the field not to be climbing the wrong hill.
train
[ "r1gcuJ9qtH", "r1lhnRVTOB", "r1gjVaysiB", "HkeNChJijH", "r1lPkUJiiB", "SkgqJD1siB", "B1laI41ooH", "SJe7pOKoKH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\nPaper Summary:\n\nThis paper presents a machine reading comprehension dataset called ReClor. It is different from existing datasets in that ReClor targets logical reasoning. The authors identified biased data points and separated the testing dataset into biased and non-biased sets. Experimental results show that...
[ 6, 8, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_HJgJtT4tvB", "iclr_2020_HJgJtT4tvB", "HkeNChJijH", "r1lhnRVTOB", "SJe7pOKoKH", "r1gcuJ9qtH", "iclr_2020_HJgJtT4tvB", "iclr_2020_HJgJtT4tvB" ]
iclr_2020_SJgMK64Ywr
AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures
Learning to represent videos is a very challenging task both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to include the time dimension, using modules such as 3D convolutions, or by using a two-stream design to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream convolutional blocks connected to each other, and propose the approach of automatically finding neural architectures with better connectivity and spatio-temporal interactions for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. Architectures combining representations that abstract different input types (i.e., RGB and optical flow) at multiple temporal resolutions are searched for, allowing different types or sources of information to interact with each other. Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a great margin. We obtain 58.6% mAP on Charades and 34.27% accuracy on Moments-in-Time.
accept-poster
The submission applies architecture search to find effective architectures for video classification. The work is not terribly innovative, but the results are good. All reviewers recommend accepting the paper.
test
[ "Hkln1LQnoB", "r1xft7kdiH", "rked7QJ_ir", "rkx9gQk_iH", "S1lGGDPUtB", "r1l1KJOAFH", "HkgiqJWDqB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the reply. Nevertheless, I think the NAS-baseline I proposed will further evidence the contribution but does not take away your contribution. ", "We thank the reviewer for the comments and questions. Please find our answers to the comments below:\n\n1. \"Most work in video action recognition tend not ...
[ -1, -1, -1, -1, 8, 8, 6 ]
[ -1, -1, -1, -1, 5, 1, 4 ]
[ "rkx9gQk_iH", "S1lGGDPUtB", "r1l1KJOAFH", "HkgiqJWDqB", "iclr_2020_SJgMK64Ywr", "iclr_2020_SJgMK64Ywr", "iclr_2020_SJgMK64Ywr" ]
iclr_2020_H1gfFaEYDS
Adversarially Robust Representations with Smooth Encoders
This paper studies the undesired phenomenon of over-sensitivity of representations learned by deep networks to semantically-irrelevant changes in data. We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO). We show that the ELBO fails to control the behaviour of the encoder outside the support of the empirical data distribution and this behaviour of the VAE can lead to extreme errors in the learned representation. This is a key hurdle in the effective use of representations for data-efficient learning and transfer. To address this problem, we propose to augment the data with specifications that enforce insensitivity of the representation with respect to families of transformations. To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point. For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations. We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure.
accept-poster
This paper proposes a novel way of expanding our VAE toolkit by tying it to adversarial robustness. It should thus be of interest to the respective communities.
train
[ "rkgkgZjYiB", "HklWVgsFoB", "rylXL1jYjr", "Sygb7qk0tr", "HJg8AR0-9H", "rJe5MJ5_9B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the constructive feedback.\n\n++++++++++++++++++++++\n#3a (The illustration of the problem in VAE is interesting. However, one missing point is to theoretically quantify the effect of the proposed regularization (in some simple cases). In particular, it is claimed that the regularization ...
[ -1, -1, -1, 6, 3, 8 ]
[ -1, -1, -1, 3, 4, 5 ]
[ "Sygb7qk0tr", "HJg8AR0-9H", "rJe5MJ5_9B", "iclr_2020_H1gfFaEYDS", "iclr_2020_H1gfFaEYDS", "iclr_2020_H1gfFaEYDS" ]
iclr_2020_S1g7tpEYDS
From Variational to Deterministic Autoencoders
Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models. However, learning a VAE from data poses still unanswered theoretical questions and considerable practical challenges. In this work, we propose an alternative framework for generative modeling that is simpler, easier to train, and deterministic, yet has many of the advantages of the VAE. We observe that sampling a stochastic encoder in a Gaussian VAE can be interpreted as simply injecting noise into the input of a deterministic decoder. We investigate how substituting this kind of stochasticity, with other explicit and implicit regularization schemes, can lead to an equally smooth and meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism to sample new data points, we introduce an ex-post density estimation step that can be readily applied to the proposed framework as well as existing VAEs, improving their sample quality. We show, in a rigorous empirical study, that the proposed regularized deterministic autoencoders are able to generate samples that are comparable to, or better than, those of VAEs and more powerful alternatives when applied to images as well as to structured data such as molecules.
accept-poster
This paper proposes an extension to deterministic autoencoders, namely instead of noise injection in the encoders of VAEs to use deterministic autoencoders with an explicit regularization term on the latent representations. While the reviewers agree that the paper studies an important question for the generative modeling community, the paper has been limited in terms of theoretical analysis and experimental validation. The authors, however, provided further experimental results to support the claims empirically during the discussion period and the reviewers agree that the paper is now acceptable for publication in ICLR-2020.
val
[ "rkxPP-2itH", "BkgpIOihYS", "rJgNoi5nsS", "r1egWs9hjB", "S1gILqc3oB", "BkgjBne5sr", "SJltgjgcsS", "ByxI2wx9or", "HJxWtux5oS", "HygPryzMjB", "B1xzMA-fir", "r1g1rpZziB", "rJxoPh-fsS", "r1lYVM27cr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper propose an extension to deterministic autoencoders. Motivated from VAEs, the authors propose RAEs, which replace the noise injection in the encoders of VAEs with an explicit regularization term on the latent representations. As a result, the model becomes a deterministic autoencoder with a L_2 regulariz...
[ 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_S1g7tpEYDS", "iclr_2020_S1g7tpEYDS", "iclr_2020_S1g7tpEYDS", "BkgpIOihYS", "ByxI2wx9or", "iclr_2020_S1g7tpEYDS", "rkxPP-2itH", "r1lYVM27cr", "BkgpIOihYS", "rkxPP-2itH", "BkgpIOihYS", "r1lYVM27cr", "iclr_2020_S1g7tpEYDS", "iclr_2020_S1g7tpEYDS" ]
iclr_2020_SkxLFaNKwB
Computation Reallocation for Object Detection
The allocation of computation resources in the backbone is a crucial issue in object detection. However, the classification allocation pattern is usually adopted directly for object detectors, which has been shown to be sub-optimal. In order to reallocate the engaged computation resources in a more efficient way, we present CR-NAS (Computation Reallocation Neural Architecture Search), which can learn computation reallocation strategies across different feature resolutions and spatial positions directly on the target detection dataset. A two-level reallocation space is proposed for both stage and spatial reallocation. A novel hierarchical search procedure is adopted to cope with the complex search space. We apply CR-NAS to multiple backbones and achieve consistent improvements. Our CR-ResNet50 and CR-MobileNetV2 outperform their baselines by 1.9% and 1.7% COCO AP respectively without any additional computation budget. The models discovered by CR-NAS can be equipped with other powerful detection necks/heads and easily transferred to other datasets, e.g. PASCAL VOC, and other vision tasks, e.g. instance segmentation. Our CR-NAS can be used as a plugin to improve the performance of various networks, which is in high demand.
accept-poster
The submission applies architecture search to object detection architectures. The work is fairly incremental but the results are reasonable. After revision, the scores are 8, 6, 6, 3. The reviewer who gave "3" wrote after the authors' responses and revision that "Authors' responses partly resolved my concerns on the experiments. I have no object to accept this paper. [sic]". The AC recommends adopting the majority recommendation and accepting the paper.
train
[ "SJgu9zzXtr", "SygeNOnwqB", "SkgZyD0qoS", "Bkx4r4ldiH", "H1xHIyldsr", "HkxZLK1uiB", "Hye28XkOjS", "Sye9kzy_oS", "H1x_OVc85S", "SkeuiUTw9H" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "<Strengths> \n+ This paper performs architecture search for object detection, especially for the computation allocation across different resolutions. It is a new application for NAS research. \n+ The proposed approach shows some marginal improvement of object detection accuracy across multiple backbones and datase...
[ 3, 6, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_SkxLFaNKwB", "iclr_2020_SkxLFaNKwB", "iclr_2020_SkxLFaNKwB", "SygeNOnwqB", "SkeuiUTw9H", "SJgu9zzXtr", "Sye9kzy_oS", "H1x_OVc85S", "iclr_2020_SkxLFaNKwB", "iclr_2020_SkxLFaNKwB" ]
iclr_2020_rylvYaNYDH
Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents
As deep reinforcement learning driven by visual perception becomes more widely used, there is a growing need to better understand and probe the learned agents. Understanding the decision-making process and its relationship to visual inputs can be very valuable to identify problems in learned behavior. However, this topic has been relatively under-explored in the research community. In this work we present a method for synthesizing visual inputs of interest for a trained agent. Such inputs or states could be situations in which specific actions are necessary. Further, critical states in which a very high or a very low reward can be achieved are often interesting to understand the situational awareness of the system as they can correspond to risky states. To this end, we learn a generative model over the state space of the environment and use its latent space to optimize a target function for the state of interest. In our experiments we show that this method can generate insights for a variety of environments and reinforcement learning methods. We explore results in the standard Atari benchmark games as well as in an autonomous driving simulator. Based on the efficiency with which we have been able to identify behavioural weaknesses with this technique, we believe this general approach could serve as an important tool for AI safety applications.
accept-poster
This paper proposes a tool for visualizing the behaviour of deep RL agents, for example to observe the behaviour of an agent in critical scenarios. The idea is to learn a generative model of the environment and use it to artificially generate novel states in order to induce specific agent actions. States can then be generated so as to optimize a given target function, for example states in which the agent takes a specific action or states with very high/low reward. They evaluate the proposed visualization on Atari games and on a driving simulation environment, where the authors use their approach to investigate the behaviour of different deep RL agents such as DQN. The paper is very controversial. On the one hand, as far as we know, this is the first approach that explicitly generates states that are meant to induce specific agent behaviour, although one could relate this to adversarial sample generation. Interpretability in deep RL is a known problem and this work could bring an interesting tool to the community. However, the proposed approach lacks theoretical foundations, thus feels quite ad-hoc, and results are limited to a qualitative, visual evaluation. At the same time, one could say that the approach is not more ad hoc than other gradient saliency visualization approaches, and one could argue that the lack of theoretical soundness is due to the difficulty of defining good measures of interpretability that apply well to image-based environments. Nonetheless, this paper is a step in the right direction in a field that could really benefit from it.
train
[ "HyxtJXydtS", "Syxk_h7IFH", "rkeifKv3jB", "B1euuIOHoB", "rkxGkUdrsB", "SkgaerdHiS", "rkxjJnqCFS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The authors propose learning a generative model of states to visualize the behavior of different RL agents. Given states s from the environment, a VAE is trained to reconstruct states s, with an added loss term encouraging the action of the agent to stay the same between the original s and reconstructed s. The L2 ...
[ 6, 3, -1, -1, -1, -1, 8 ]
[ 4, 1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_rylvYaNYDH", "iclr_2020_rylvYaNYDH", "SkgaerdHiS", "Syxk_h7IFH", "HyxtJXydtS", "rkxjJnqCFS", "iclr_2020_rylvYaNYDH" ]
iclr_2020_HygDF6NFPB
A Fair Comparison of Graph Neural Networks for Graph Classification
Experimental reproducibility and replicability are critical topics in machine learning. Authors have often raised concerns about the lack of both in scientific publications, in an effort to improve the quality of the field. Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works. As such, several Graph Neural Network models have been developed to effectively tackle graph classification. However, experimental procedures often lack rigor and are hardly reproducible. Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art. To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks. Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet. We believe that this work can contribute to the development of the graph learning field, by providing a much-needed grounding for rigorous evaluations of graph classification models.
accept-poster
The paper provides a careful, reproducible empirical comparison of 5 graph neural network models on 9 datasets for graph classification. The paper shows that baseline methods that use only node features (either counting node types, or summing node features) can be competitive. The authors also provide some guidelines for ways to improve reproducibility in empirical comparisons of graph classification. The authors responded well to the issues raised during review, and updated the paper during the discussion period. The reviewers improved their scores, and while there were reservations about the comprehensiveness of the set of experiments, they all agreed that the paper provides a solid empirical contribution to the literature. As machine learning becomes increasingly popular, papers that perform a careful empirical survey of baselines provide an important sanity check that future work can be built upon. Therefore, this paper, while not covering all possible graph neural network questions, provides an excellent starting point for future work to extend.
train
[ "B1gg3FECKr", "HJlHur8QYS", "S1e7s3Hnor", "r1x03iFKor", "HyeZHNWwsH", "HJx7uGWvoS", "SJx-p-WPoH", "BkxIneZvsS", "ryxNVWbDiS", "BJlNA1Zvor", "SyxyKk-viH", "B1eA8kWvir", "B1e2H0gPjB", "S1gvCalPoB", "BJgN1kMXiS", "ByliE0f49r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "This paper provides an empirical comparison between several existing graph classification algorithms, aimed at providing a fair comparison among them, as well as proposing a simple baseline that does not take into account graph structural information.\n\nOverall, I found the experimental section very thorough and ...
[ 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_HygDF6NFPB", "iclr_2020_HygDF6NFPB", "BkxIneZvsS", "BJgN1kMXiS", "BJlNA1Zvor", "iclr_2020_HygDF6NFPB", "BJlNA1Zvor", "BJlNA1Zvor", "BJlNA1Zvor", "HJlHur8QYS", "B1e2H0gPjB", "B1e2H0gPjB", "B1gg3FECKr", "ByliE0f49r", "iclr_2020_HygDF6NFPB", "iclr_2020_HygDF6NFPB" ]
iclr_2020_r1e_FpNFDr
Generalization bounds for deep convolutional neural networks
We prove bounds on the generalization error of convolutional networks. The bounds are in terms of the training loss, the number of parameters, the Lipschitz constant of the loss and the distance from the weights to the initial weights. They are independent of the number of pixels in the input, and the height and width of hidden feature maps. We present experiments using CIFAR-10 with varying hyperparameters of a deep convolutional network, comparing our bounds with practical generalization gaps.
accept-poster
The authors present several theorems bounding the generalization error of a class of conv nets (CNNs) with high probability by O(sqrt[W(beta + log(lambda)) + log(1/delta)]/sqrt(n)), where W is the number of weights, beta is the distance from initialization in operator norm, lambda is the margin, n is the number of data, and the bound holds with prob. at least 1-delta. (They also present a bound that is tighter when the empirical risk is small.) The bounds are "size free" in the sense that they do not depend on the size of the *input*, which is assumed to be, say, a d x d image. While there is dependence on the number of parameters, W, there is no implicit dependence on d here. The paper received the following feedback: 1. Reviewer 3 mostly had clarifying questions, especially with respect to (essentially independent) work by Wei and Ma. Reviewer 3 also pressed the authors to discuss how the bounds compared in absolute terms to the bounds of Bartlett et al. The authors stated that they did not have explicit constants to make such a comparison. Reviewer 3 was satisfied enough to raise their score to a 6. 2. Reviewer 1 admitted they were not experts and raised some issues around novelty/simplicity. I do not think the simplicity of the paper is a drawback. The reviewers unfortunately did not participate in the rebuttal, despite repeated attempts. 3. Reviewer 2 argued for weak reject, despite an interaction with the authors. The reviewer raised the issue of bounds based on control of the Lipschitz constant. The conversation was slightly marred by a typo in the reviewer's original comment. I don't believe the authors ultimately responded to the reviewer's point. There was another discussion about simultaneous work and compression-based bounds. I would agree with the authors that they need not have cited simultaneous work, especially since the details are quite different. Ultimately, this reviewer still argued for rejection (weakly). 
After the rebuttal period ended, the reviewers raised some further concerns with me. I tried to assess these on my own, and ended up with my own questions. I raise these in no particular order. Each of them may have a simple resolution. In that case, the authors should take them as possible sources of confusion. Addressing them may significantly improve the readability of the paper. i. Lemma A.3. The order of quantification is poorly expressed and so I was not confident in the statement. In particular, the theorem starts \forall \eta >0 \exists C, .... but then C is REINTRODUCED later, subsequent to existential quantification over M, B, and d and so it seems there is dependence. If there is no dependence, this presentation is sloppy and should be fixed. ii. Lemma A.4, the same dependence of C on M, B and d holds here and this is quite problematic for the later applications. If this constant is independent of these quantities, then the order of quantifiers has been stated incorrectly. Again, this is sloppy if it is wrong. If it's correct, then we need to know how C grows. Based on other claims by the authors, it is my understanding that, in both cases, the constant C does not depend on M, B, or d. Regardless, the authors should clarify the dependence. If C does in fact depend on these quantities, and the conclusions change, the paper should be retracted. iii. Proof of Lemma 2.3. I'd remind the reader that the parametrization maps the unit ball to G. iv. The bound depends on control of operator norms and empirical margins. It is not clear how these interact and whether, for margin parameters necessary to achieve small empirical margin risk, the bounds pick up dependence on other aspects of the learning problem (e.g., depth). I think the only way to assess this would be to investigate these quantities empirically, say, by varying the size and depth of the network on a fixed data set, trained to achieve the same empirical risk (or margin). 
I'll add that I was also disappointed that the authors did not attempt to address any of the issues by a revision of the actual paper. In particular, the authors promise several changes that would have been straightforward to make in the two weeks of rebuttal. Instead, the reviewers and I are left to imagine how things would change. I see at least two promises: A. To walk back some of the empirical claims about distance from initialization that are based on somewhat flimsy empirical evaluations. I would add to this the need to investigate how the margin and operator norms depend on depth empirically. B. Attribute Dziugaite and Roy for establishing the first bounds in terms of distance from initialization, though their bounds were numerical. I think a mention of simultaneous work would also be generous, even if not strictly necessary.
train
[ "Byx_mr2g9H", "HJejsKV3ir", "rJgzWheqiS", "BJga7KcFjB", "SygeSFOusS", "rygCM0IOsB", "BygaxE0WsS", "rkeWdQAbjr", "HJeB9b0-jB", "H1xkWFYgcr", "rkegKo2aKH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThe paper describes new norm-based generalization bounds that were specifically adapted to convolutional neural networks. Since convolutional neural networks do not explicitly depend on the input dimension, these bounds share the same property. Further additional improvement over Bartlett et al. ‘17 bound, is th...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_r1e_FpNFDr", "rJgzWheqiS", "BJga7KcFjB", "BygaxE0WsS", "rygCM0IOsB", "HJeB9b0-jB", "rkegKo2aKH", "H1xkWFYgcr", "Byx_mr2g9H", "iclr_2020_r1e_FpNFDr", "iclr_2020_r1e_FpNFDr" ]
iclr_2020_rye5YaEtPr
SAdam: A Variant of Adam for Strongly Convex Functions
The Adam algorithm has become extremely popular for large-scale machine learning. Under the convexity condition, it has been proved to enjoy a data-dependent O(√T) regret bound where T is the time horizon. However, whether strong convexity can be utilized to further improve the performance remains an open problem. In this paper, we give an affirmative answer by developing a variant of Adam (referred to as SAdam) which achieves a data-dependent O(log T) regret bound for strongly convex functions. The essential idea is to maintain a faster-decaying yet controlled step size for exploiting strong convexity. In addition, under a special configuration of hyperparameters, our SAdam reduces to SC-RMSprop, a recently proposed variant of RMSprop for strongly convex functions, for which we provide the first data-dependent logarithmic regret bound. Empirical results on optimizing strongly convex functions and training deep networks demonstrate the effectiveness of our method.
accept-poster
The reviewers all appreciated the results. They expressed doubts regarding the discrepancy between the assumptions made and the reality of the loss landscape of deep networks. I share these concerns with the reviewers but also believe that, due to the popularity of Adam, a careful analysis of a variant is worthy of publication.
train
[ "ryghUaohjB", "ByZyAKc2jS", "HkepewY3oB", "HJg5icVniH", "H1lCDDQtsS", "Hyxyk3cSsB", "r1eacscHoB", "rkezMo5rjr", "SJxbm19sKB", "HJgSWr8htH", "Syg4z4lAFr" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the response.\n\nQ1: However, given that ICLR is a specialized conference (compared to ICML or Neurips), assuming a very strong structure, which does not hold in many applications that ICLR community is interested in, causes me to justify the suitability of the paper for the conference.\n...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "ByZyAKc2jS", "HkepewY3oB", "HJg5icVniH", "rkezMo5rjr", "r1eacscHoB", "SJxbm19sKB", "HJgSWr8htH", "Syg4z4lAFr", "iclr_2020_rye5YaEtPr", "iclr_2020_rye5YaEtPr", "iclr_2020_rye5YaEtPr" ]
iclr_2020_SJlsFpVtDB
Continual Learning with Bayesian Neural Networks for Non-Stationary Data
This work addresses continual learning for non-stationary data, using Bayesian neural networks and memory-based online variational Bayes. We represent the posterior approximation of the network weights by a diagonal Gaussian distribution and a complementary memory of raw data. This raw data corresponds to likelihood terms that cannot be well approximated by the Gaussian. We introduce a novel method for sequentially updating both components of the posterior approximation. Furthermore, we propose Bayesian forgetting and a Gaussian diffusion process for adapting to non-stationary data. The experimental results show that our update method improves on existing approaches for streaming data. Additionally, the adaptation methods lead to better predictive performance for non-stationary data.
accept-poster
This paper introduces an algorithm for online Bayesian learning of both streaming and non-stationary data. The algorithmic choices are heuristic but motivated by sensible principles. The reviewers' main concerns were with novelty, but because the paper was well-written and addressing an important problem they all agreed it should be accepted.
train
[ "H1xtbmP2FB", "rkxMz-wniH", "rygJAyfhiB", "HJlXEHvioH", "BJlGBMQojB", "rkx65-Xssr", "rJl_NgmoiB", "H1xZi0GijB", "SyeVlFw1qr", "rygdmkifqr" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Post-rebuttal: My questions below have been addressed and the submission has been modified accordingly. My concern regarding novelty remains unchanged but I still suggest acceptance since the contributions are of practical interests and the paper is well written.\n\n1. Summary:\nThis proposes considers neural netw...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_SJlsFpVtDB", "rJl_NgmoiB", "HJlXEHvioH", "BJlGBMQojB", "H1xtbmP2FB", "SyeVlFw1qr", "rygdmkifqr", "iclr_2020_SJlsFpVtDB", "iclr_2020_SJlsFpVtDB", "iclr_2020_SJlsFpVtDB" ]
iclr_2020_rylnK6VtDH
Multiplicative Interactions and Where to Find Them
We explore the role of multiplicative interaction as a unifying framework to describe a range of classical and modern neural network architectural motifs, such as gating, attention layers, hypernetworks, and dynamic convolutions amongst others. Multiplicative interaction layers as primitive operations have a long-established presence in the literature, though this is often not emphasized and thus under-appreciated. We begin by showing that such layers strictly enrich the representable function classes of neural networks. We conjecture that multiplicative interactions offer a particularly powerful inductive bias when fusing multiple streams of information or when conditional computation is required. We therefore argue that they should be considered in many situations where multiple compute or information paths need to be combined, in place of the simple and oft-used concatenation operation. Finally, we back up our claims and demonstrate the potential of multiplicative interactions by applying them in large-scale complex RL and sequence modelling tasks, where their use allows us to deliver state-of-the-art results, and thereby provides new evidence in support of multiplicative interactions playing a more prominent role when designing new neural network architectures.
accept-poster
This paper provides a unifying perspective regarding a variety of popular DNN architectures in terms of the inclusion of multiplicative interaction layers. Such layers increase the representational power of conventional linear layers, which the paper argues can induce a useful inductive bias in practical scenarios such as when multiple streams of information are fused. Empirical support is provided to validate these claims and showcase the potential of multiplicative interactions in occupying broader practical roles. All reviewers agreed to accept this paper, although some concerns were raised in terms of novelty, clarity, and the relationship with state-of-the-art models. However, the author rebuttal and updated revision are adequate, and I believe that this paper should be accepted.
train
[ "BJlQXYsl5B", "r1ehDoH_ir", "S1ebql-7oB", "BklwSe-msS", "HkgyJKGZor", "BJl4_UpTYH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper explores different types of multiplicative interactions. This allows understanding of both efficacy of multiplicative interaction (i.e., MI vs MLP), and the common between multiplicative-type models (e.g., hypernetworks, FiLM, gating, attention etc). The authors also find MI models able to achieve a sta...
[ 8, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, 5, 1 ]
[ "iclr_2020_rylnK6VtDH", "BJlQXYsl5B", "BJl4_UpTYH", "HkgyJKGZor", "iclr_2020_rylnK6VtDH", "iclr_2020_rylnK6VtDH" ]
iclr_2020_Bkeeca4Kvr
FEW-SHOT LEARNING ON GRAPHS VIA SUPER-CLASSES BASED ON GRAPH SPECTRAL MEASURES
We propose to study the problem of few-shot graph classification in graph neural networks (GNNs) to recognize unseen classes, given limited labeled graph examples. Despite several interesting GNN variants being proposed recently for node and graph classification tasks, when faced with scarce labeled examples in the few-shot setting, these GNNs exhibit significant loss in classification performance. Here, we present an approach where a probability measure is assigned to each graph based on the spectrum of the graph’s normalized Laplacian. This enables us to accordingly cluster the graph base-labels associated with each graph into super-classes, where the L^p Wasserstein distance serves as our underlying distance metric. Subsequently, a super-graph constructed based on the super-classes is then fed to our proposed GNN framework which exploits the latent inter-class relationships made explicit by the super-graph to achieve better class label separation among the graphs. We conduct exhaustive empirical evaluations of our proposed method and show that it outperforms both the adaptation of state-of-the-art graph classification methods to few-shot scenario and our naive baseline GNNs. Additionally, we also extend and study the behavior of our method to semi-supervised and active learning scenarios.
accept-poster
The authors propose a method for few-shot learning for graph classification. The majority of reviewers agree on the novelty of the proposed method and that the problem is interesting. The authors have addressed all major concerns.
test
[ "SyeJ_2r0tH", "B1x0N0pmsr", "rylOp2aXsH", "S1x3ALTQjS", "HyxyssTQsr", "BylX4567or", "B1l8tM0QoB", "ryxUOGLHcH", "S1g3KSyaqr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper introduces few-shot graph classification problem and proposes super-class based graph neural network (GNN) to solve it. Experiments on two datasets demonstrate that the proposed model outperforms a number of baseline methods. Some ablation study and analysis are also provided. Followings are my detail r...
[ 6, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_Bkeeca4Kvr", "rylOp2aXsH", "SyeJ_2r0tH", "ryxUOGLHcH", "BylX4567or", "S1g3KSyaqr", "iclr_2020_Bkeeca4Kvr", "iclr_2020_Bkeeca4Kvr", "iclr_2020_Bkeeca4Kvr" ]
iclr_2020_BJl-5pNKDB
On Computation and Generalization of Generative Adversarial Imitation Learning
Generative Adversarial Imitation Learning (GAIL) is a powerful and practical approach for learning sequential decision-making policies. Different from Reinforcement Learning (RL), GAIL takes advantage of demonstration data by experts (e.g., human), and learns both the policy and reward function of the unknown environment. Despite the significant empirical progress, the theory behind GAIL is still largely unknown. The major difficulty comes from the underlying temporal dependency of the demonstration data and the minimax computational formulation of GAIL without convex-concave structure. To bridge such a gap between theory and practice, this paper investigates the theoretical properties of GAIL. Specifically, we show: (1) For GAIL with general reward parameterization, the generalization can be guaranteed as long as the class of the reward functions is properly controlled; (2) For GAIL, where the reward is parameterized as a reproducing kernel function, GAIL can be efficiently solved by stochastic first order optimization algorithms, which attain sublinear convergence to a stationary solution. To the best of our knowledge, these are the first results on statistical and computational guarantees of imitation learning with reward/policy function approximation. Numerical experiments are provided to support our analysis.
accept-poster
The paper provides a theoretical analysis of the recent and popular Generative Adversarial Imitation Learning (GAIL) approach. Valuable new insights on generalization and convergence are developed, and put GAIL on a stronger theoretical foundation. Reviewer questions and suggestions were largely addressed during the rebuttal.
train
[ "rkg-kdH3ir", "S1gkJDy2oS", "H1e0Bv1noS", "BylGsvynjH", "SklghBKaYS", "HJlR4KlCKH", "Hkx21bQxqS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the clarifications. I did not have time yet to look at them in detail, but they will likely be useful during the post-rebuttal.", "We appreciate your valuable comments and questions.\n\nQ: Regarding the proof sketch of Theorem 1.\n\nA: The expectation on $\\phi$ is indeed taken with respect to the ...
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 4, 3, 1 ]
[ "S1gkJDy2oS", "HJlR4KlCKH", "SklghBKaYS", "Hkx21bQxqS", "iclr_2020_BJl-5pNKDB", "iclr_2020_BJl-5pNKDB", "iclr_2020_BJl-5pNKDB" ]
iclr_2020_BylVcTNtDS
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Due to insufficient training data and the high computational cost to train a deep neural network from scratch, transfer learning has been extensively used in many deep-neural-network-based applications. A commonly used transfer learning approach involves taking a part of a pre-trained model, adding a few layers at the end, and re-training the new layers with a small dataset. This approach, while efficient and widely used, introduces a security vulnerability because the pre-trained model used in transfer learning is usually publicly available, including to potential attackers. In this paper, we show that without any additional knowledge other than the pre-trained model, an attacker can launch an effective and efficient brute force attack that can craft instances of input to trigger each target class with high confidence. We assume that the attacker has no access to any target-specific information, including samples from target classes, the re-trained model, and probabilities assigned by Softmax to each class, thus making the attack target-agnostic. These assumptions render all previous attack models inapplicable, to the best of our knowledge. To evaluate the proposed attack, we perform a set of experiments on face recognition and speech recognition tasks and show the effectiveness of the attack. Our work reveals a fundamental security weakness of the Softmax layer when used in transfer learning settings.
accept-poster
The reviewers were generally in agreement that the paper presents a valuable contribution and should be accepted for publication. However, I would strongly encourage the authors to carefully read over the reviews and address the suggestions and concerns insofar as possible for the final.
train
[ "SJgHO8bnFH", "rkxNFu9jsr", "S1l04OcoiS", "rJlZhmBfjS", "HJxyY7BziB", "SklqSmrzsr", "H1gIREuItr", "rklYDMstKB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposed a method of attacking deep neural networks that were trained using transfer learning. The primary claim is that the proposed technique only requires knowledge of the base model (i.e., if the frozen parameters taken from pretrained VGG, ResNET models are known, then that is sufficient) and doesn’...
[ 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_BylVcTNtDS", "rklYDMstKB", "SJgHO8bnFH", "H1gIREuItr", "rklYDMstKB", "SJgHO8bnFH", "iclr_2020_BylVcTNtDS", "iclr_2020_BylVcTNtDS" ]
iclr_2020_rJeIcTNtvS
Low-Resource Knowledge-Grounded Dialogue Generation
Responding with knowledge has been recognized as an important capability for an intelligent conversational agent. Yet knowledge-grounded dialogues, as training data for learning such a response generation model, are difficult to obtain. Motivated by the challenge in practice, we consider knowledge-grounded dialogue generation under a natural assumption that only limited training examples are available. In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model. By this means, the major part of the model can be learned from a large number of ungrounded dialogues and unstructured documents, while the remaining small parameters can be well fitted using the limited training examples. Evaluation results on two benchmarks indicate that with only 1/8 training data, our model can achieve the state-of-the-art performance and generalize well on out-of-domain knowledge.
accept-poster
The paper considers the problem of knowledge-grounded dialogue generation with low resources. The authors propose to disentangle the model into three components that can be trained on separate data, and achieve SOTA on three datasets. The reviewers agree that this is a well-written paper with a good idea, and strong empirical results, and I happily recommend acceptance.
train
[ "BkgyhNioqH", "BJeJKg73jH", "Bkle6ymnoB", "S1eNaW73sB", "rJgl4bQ3oH", "ryed7RM8tH", "r1em_DSCYH" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies knowledge-grounded dialogue response generation in the low-resource setting. More precisely, it proposes a disentangled decoder consisting of three components: language model, context processor, and document reader. Disentangled decoder architecture provides a flexibility to train (or pre-train)...
[ 6, -1, -1, -1, -1, 8, 8 ]
[ 4, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2020_rJeIcTNtvS", "BkgyhNioqH", "iclr_2020_rJeIcTNtvS", "ryed7RM8tH", "r1em_DSCYH", "iclr_2020_rJeIcTNtvS", "iclr_2020_rJeIcTNtvS" ]
iclr_2020_B1gF56VYPH
Deep 3D Pan via local adaptive "t-shaped" convolutions with global and local adaptive dilations
Recent advances in deep learning have shown promising results in many low-level vision tasks. However, solving single-image-based view synthesis is still an open problem. In particular, the generation of new images at parallel camera views given a single input image is of great interest, as it enables 3D visualization of the 2D input scenery. We propose a novel network architecture to perform stereoscopic view synthesis at arbitrary camera positions along the X-axis, or “Deep 3D Pan”, with “t-shaped” adaptive kernels equipped with globally and locally adaptive dilations. Our proposed network architecture, the monster-net, is devised with a novel t-shaped adaptive kernel with globally and locally adaptive dilation, which can efficiently incorporate global camera shift and handle the local 3D geometries of the target image’s pixels for the synthesis of natural-looking 3D panned views given a 2D input image. Extensive experiments were performed on the KITTI, CityScapes, and our VICLAB_STEREO indoors dataset to prove the efficacy of our method. Our monster-net outperforms the state-of-the-art (SOTA) method by a large margin in all metrics of RMSE, PSNR, and SSIM. Our proposed monster-net is capable of reconstructing more reliable image structures in synthesized images with coherent geometry. Moreover, the disparity information that can be extracted from the “t-shaped” kernel is much more reliable than that of the SOTA for the unsupervised monocular depth estimation task, confirming the effectiveness of our method.
accept-poster
Two reviewers recommend acceptance while one is negative. The authors propose t-shaped kernels for view synthesis, focusing on stereo images. AC finds the problem and method interesting and the results to be sufficiently convincing to warrant acceptance.
train
[ "rkxUbQk2jH", "SJlEThBVoH", "BkgmL3S4jB", "B1euNLVnFH", "ryeKDII6YB", "rJem4jsAtB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Please take a quick look at the revision uploaded on the 12th where we included the comments from: \n* Reviewer 3 on extending our t-shaped kernel to any rigid motion and increased citation frequency (we are a little bit limited here as we run out of space due to the ICLR citation style).\n* Reviewer 2 on minor wr...
[ -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, 1, 3, 3 ]
[ "iclr_2020_B1gF56VYPH", "ryeKDII6YB", "rJem4jsAtB", "iclr_2020_B1gF56VYPH", "iclr_2020_B1gF56VYPH", "iclr_2020_B1gF56VYPH" ]
iclr_2020_HJxK5pEYvr
Tree-Structured Attention with Hierarchical Accumulation
Incorporating hierarchical structures like constituency trees has been shown to be effective for various natural language processing (NLP) tasks. However, it is evident that state-of-the-art (SOTA) sequence-based models like the Transformer struggle to encode such structures inherently. On the other hand, dedicated models like the Tree-LSTM, while explicitly modeling hierarchical structures, do not perform as efficiently as the Transformer. In this paper, we attempt to bridge this gap with Hierarchical Accumulation to encode parse tree structures into self-attention at constant time complexity. Our approach outperforms SOTA methods in four IWSLT translation tasks and the WMT'14 English-German task. It also yields improvements over Transformer and Tree-LSTM on three text classification tasks. We further demonstrate that using hierarchical priors can compensate for data shortage, and that our model prefers phrase-level attentions over token-level attentions.
accept-poster
This paper incorporates tree-structured information about a sentence into how transformers process it. Results are improved. The paper is clear. Reviewers liked it. Clear accept.
train
[ "H1g_PfRoYS", "BJgV33DTYB", "HklbTMp0YH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper extends transformers by enabling them to incorporate hierarchical structures like constituency trees structured attention with hierarchical accumulation. In particular, they modified the architecture of transformers such that they can learn phrase-level attention scores, and use them in the final assign...
[ 8, 6, 6 ]
[ 3, 5, 3 ]
[ "iclr_2020_HJxK5pEYvr", "iclr_2020_HJxK5pEYvr", "iclr_2020_HJxK5pEYvr" ]
iclr_2020_SkgscaNYPS
The asymptotic spectrum of the Hessian of DNN throughout training
The dynamics of DNNs during gradient descent is described by the so-called Neural Tangent Kernel (NTK). In this article, we show that the NTK allows one to gain precise insight into the Hessian of the cost of DNNs: we obtain a full characterization of the asymptotics of the spectrum of the Hessian, at initialization and during training.
accept-poster
This paper studies the spectrum of the Hessian through training, making connections with the NTK limit. While many of the results are perhaps unsurprising, and more empirically driven, together the paper represents a valuable contribution towards our understanding of generalization in deep learning. Please carefully account for the reviewer comments in the final version.
train
[ "SkgmG-iQ9B", "H1eMFX_VjS", "ryefUmuVoS", "SkgNTGuNor", "BJeJ57o3FS", "BklmK1PpKS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper uses the Neural Tangent Kernel (NTK) to presents an asymptotic analysis of the evolution of Hessian of the loss (w.r.t. model parameters) throughout training. The authors leverage the Neural Tangent Kernel to analyze the evolution of the Hessian of the loss w.r.t the model parameters. Specifically the a...
[ 3, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, 3, 5 ]
[ "iclr_2020_SkgscaNYPS", "SkgmG-iQ9B", "BJeJ57o3FS", "BklmK1PpKS", "iclr_2020_SkgscaNYPS", "iclr_2020_SkgscaNYPS" ]
iclr_2020_H1lhqpEYPr
Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games
We study discrete-time mean-field Markov games with infinite numbers of agents where each agent aims to minimize its ergodic cost. We consider the setting where the agents have identical linear state transitions and quadratic cost functions, while the aggregated effect of the agents is captured by the population mean of their states, namely, the mean-field state. For such a game, based on the Nash certainty equivalence principle, we provide sufficient conditions for the existence and uniqueness of its Nash equilibrium. Moreover, to find the Nash equilibrium, we propose a mean-field actor-critic algorithm with linear function approximation, which does not require knowing the model of dynamics. Specifically, at each iteration of our algorithm, we use the single-agent actor-critic algorithm to approximately obtain the optimal policy of each agent given the current mean-field state, and then update the mean-field state. In particular, we prove that our algorithm converges to the Nash equilibrium at a linear rate. To the best of our knowledge, this is the first success of applying model-free reinforcement learning with function approximation to discrete-time mean-field Markov games with provable non-asymptotic global convergence guarantees.
accept-poster
The authors propose an actor-critic method for finding Nash equilibrium in linear-quadratic mean field games and establish linear convergence under some assumptions. There were some minor concerns about motivation and clarity, especially with regards to the simulator. In an extensive and interactive rebuttal, the authors were able to argue that their results/methods, which appear to be rather specialized to the LQ setting, offer insight/methods beyond the LQ setting.
train
[ "Byx4wYANtr", "BJlDfbinor", "Skl4qtNjir", "BygVhSmojH", "SJxx7axosH", "H1gIOuhqjr", "SkxwPf0coH", "HkgarI_9iH", "Hyg2uGaqsS", "BkgiHbt5oB", "SklwnhBqjB", "Byegc4d7iH", "HJxu0YQqsr", "B1l7iIoFor", "Hygrb7uQoS", "H1xajzdQor", "rkecTMdmsB", "HkeV7N2CKS", "ByxCE7vQcS" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "...
[ "Summary and Decision \n\nThis paper studied an actor-critic learning algorithm for solving a mean field game. More specifically, the authors showed a particular actor-critic algorithm converges for linear-quadratic games with a quantitative bound in Theorem 4.1. Notably, results on learning algorithms for solving ...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_H1lhqpEYPr", "Skl4qtNjir", "SJxx7axosH", "Hygrb7uQoS", "SklwnhBqjB", "BkgiHbt5oB", "Hyg2uGaqsS", "HJxu0YQqsr", "H1gIOuhqjr", "HkgarI_9iH", "B1l7iIoFor", "Byx4wYANtr", "rkecTMdmsB", "Byegc4d7iH", "HkeV7N2CKS", "ByxCE7vQcS", "H1xajzdQor", "iclr_2020_H1lhqpEYPr", "iclr_20...
iclr_2020_SJx-j64FDr
In Search for a SAT-friendly Binarized Neural Network Architecture
Analyzing the behavior of neural networks is one of the most pressing challenges in deep learning. Binarized Neural Networks are an important class of networks that allow equivalent representation in Boolean logic and can be analyzed formally with logic-based reasoning tools like SAT solvers. Such tools can be used to answer existential and probabilistic queries about the network, perform explanation generation, etc. However, the main bottleneck for all methods is their ability to reason about large BNNs efficiently. In this work, we analyze architectural design choices of BNNs and discuss how they affect the performance of logic-based reasoners. We propose changes to the BNN architecture and the training procedure to get a simpler network for SAT solvers without sacrificing accuracy on the primary task. Our experimental results demonstrate that our approach scales to larger deep neural networks compared to existing work for existential and probabilistic queries, leading to significant speed ups on all tested datasets.
accept-poster
This paper studies how the architecture and training procedure of binarized neural networks can be changed in order to make it easier for SAT solvers to verify certain properties of them. All of the reviewers were positive about the paper, and their questions were addressed to their satisfaction, so all reviewers are in favor of accepting the paper. I therefore recommend acceptance.
val
[ "BkgsCUn0qS", "ryx-rJkoiS", "BkgKPTXYsS", "HkxKUnWYsr", "HJlUFu5Mor", "rygn1tcfiS", "Hyl2Vz5ziH", "H1ep15J9OH", "Syxn9SsnYr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper deals with the scalability of Binarized Neural Networks (BNNs) and their representation in Boolean logic. This encoding enables SAT solvers to reason about the underlying variables and query existential or counting clauses. The main contribution of this paper is the analysis of the architectural design ...
[ 8, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 1, -1, -1, -1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_SJx-j64FDr", "HkxKUnWYsr", "iclr_2020_SJx-j64FDr", "Hyl2Vz5ziH", "Syxn9SsnYr", "H1ep15J9OH", "BkgsCUn0qS", "iclr_2020_SJx-j64FDr", "iclr_2020_SJx-j64FDr" ]
iclr_2020_SJg7spEYDS
Generative Ratio Matching Networks
Deep generative models can learn to generate realistic-looking images, but many of the most effective methods are adversarial and involve a saddlepoint optimization, which requires a careful balancing of training between a generator network and a critic network. Maximum mean discrepancy networks (MMD-nets) avoid this issue by using a kernel as a fixed adversary, but unfortunately, they have not on their own been able to match the generative quality of adversarial training. In this work, we take their insight of using kernels as fixed adversaries further and present a novel method for training deep generative models that does not involve saddlepoint optimization. We call our method generative ratio matching, or GRAM for short. In GRAM, the generator and the critic networks do not play a zero-sum game against each other; instead, each plays against a fixed kernel. Thus GRAM networks are not only stable to train like MMD-nets, but also match and beat the generative quality of adversarially trained generative networks.
accept-poster
The paper proposes a training method for generative adversarial network that avoids solving a zero-sum game between the generator and the critic, hence leading to more stable optimization problems. It is similar to MMD-GAN, in which MMD is computed on a projected low-dim space, but the projection is trained to match the density ratio between the observed and the latent space. The reviewers raised several questions. Most of them have been addressed after several rounds of discussions. Overall, they are all positive about this paper, so I recommend acceptance. I encourage the authors to incorporate those discussions in their revised paper.
train
[ "HkgbUoiioH", "BkgbEu5dKS", "B1xPogdsjH", "Skg6DUpFsS", "rJx8F1kYiS", "BylSbyR3YH", "H1eiS5wdoH", "r1gza58dsH", "BkxC02DUsH", "SJxsWsULjr", "BJlNTtUIjr", "S1eK79IIoB", "HkebyY-VFB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thank you for the very clear refutation of my doubts. I was indeed confused by the LOTUS being in \"reverse,\" but I'm happy now. :)\n\nIt might be helpful for the final version of the paper to include this more detailed derivation in the appendix.", "In this paper, authors propose a new generative adversarial n...
[ -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 5, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 5 ]
[ "Skg6DUpFsS", "iclr_2020_SJg7spEYDS", "S1eK79IIoB", "H1eiS5wdoH", "BJlNTtUIjr", "iclr_2020_SJg7spEYDS", "r1gza58dsH", "BkxC02DUsH", "SJxsWsULjr", "HkebyY-VFB", "BylSbyR3YH", "BkgbEu5dKS", "iclr_2020_SJg7spEYDS" ]
iclr_2020_rylHspEKPr
Learning to Represent Programs with Property Signatures
We introduce the notion of property signatures, a representation for programs and program specifications meant for consumption by machine learning algorithms. Given a function with input type τ_in and output type τ_out, a property is a function of type: (τ_in, τ_out) → Bool that (informally) describes some simple property of the function under consideration. For instance, if τ_in and τ_out are both lists of the same type, one property might ask ‘is the input list the same length as the output list?’. If we have a list of such properties, we can evaluate them all for our function to get a list of outputs that we will call the property signature. Crucially, we can ‘guess’ the property signature for a function given only a set of input/output pairs meant to specify that function. We discuss several potential applications of property signatures and show experimentally that they can be used to improve over a baseline synthesizer so that it emits twice as many programs in less than one-tenth of the time.
accept-poster
The authors propose improved techniques for program synthesis by introducing the idea of property signatures. Property signatures help capture the specifications of the program, and the authors show that using such property signatures they can synthesize programs more efficiently. I think this is interesting work. Unfortunately, one of the reviewers has strong reservations about the work. However, after reading the reviewer's comments and the authors' rebuttal to these comments, I am convinced that the initial reservations of R1 have been adequately addressed. Similarly, the authors have done a great job of addressing the concerns of the other reviewers and have significantly updated their paper (including more experiments to address some of the concerns). Unfortunately, R1 did not participate in subsequent discussions, and it is not clear whether he/she read the rebuttal. Given the efforts put in by the authors to address different concerns of all the reviewers, and considering the positive ratings given by the other two reviewers, I recommend that this paper be accepted. Authors, please include all the modifications done during the rebuttal period in your final version. Also move the comparison with DeepCoder to the main body of the paper.
train
[ "HJge9Qkx9r", "r1xnqi9noB", "r1l4aLe9oS", "H1l-7Slqir", "HklF9bg9ir", "Byldv8pesB", "H1xC3Npeir", "rkxbx93xsS", "Hkln52sgiB", "BJxYtKBlcr", "Skxqu-xjcH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I would like to be able to recommend accepting this paper, but I can't. It describes two contributions to the community that could be valuable:\n\n1: searcho, a programming language designed for studying program synthesis via search over programs\n2: A method of automatically choosing interesting features for gui...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_rylHspEKPr", "iclr_2020_rylHspEKPr", "Skxqu-xjcH", "HJge9Qkx9r", "iclr_2020_rylHspEKPr", "BJxYtKBlcr", "HJge9Qkx9r", "Skxqu-xjcH", "iclr_2020_rylHspEKPr", "iclr_2020_rylHspEKPr", "iclr_2020_rylHspEKPr" ]
iclr_2020_SJeLopEYDH
V4D: 4D Convolutional Neural Networks for Video-level Representation Learning
Most existing 3D CNN structures for video representation learning are clip-based methods, and do not consider video-level temporal evolution of spatio-temporal features. In this paper, we propose Video-level 4D Convolutional Neural Networks, namely V4D, to model the evolution of long-range spatio-temporal representation with 4D convolutions, as well as preserving 3D spatio-temporal representations with residual connections. We further introduce the training and inference methods for the proposed V4D. Extensive experiments are conducted on three video recognition benchmarks, where V4D achieves excellent results, surpassing recent 3D CNNs by a large margin.
accept-poster
This paper proposes video-level 4D CNNs and the corresponding training and inference methods for improved video representation learning. The proposed model achieves state-of-the-art performance on three action recognition tasks. Reviewers agree that the idea is well motivated and interesting, but were initially concerned with positioning with respect to the related work, novelty, and computational tractability. As these issues were mostly resolved during the discussion phase, I will recommend the acceptance of this paper. We ask the authors to address the points raised during the discussion in the manuscript, with a focus on the tradeoff between the improved performance and computational cost.
train
[ "SklAqZB3oB", "BkexWBXKor", "S1xUCqYpYH", "B1g4kymYjS", "ByeBzTMtiS", "HyecCk_TFH", "rkgoA2UCKB" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We have updated a third version of the paper. In the appendix, we further add visualization results and compare V4D with 3D TSN by implementing 3D Class Activation Maps.", "Thank you for your comments and suggestions. We will address the issues you mentioned.\n\n\n1.\tThank you for the insightful suggestion. We ...
[ -1, -1, 6, -1, -1, 6, 3 ]
[ -1, -1, 5, -1, -1, 4, 3 ]
[ "iclr_2020_SJeLopEYDH", "HyecCk_TFH", "iclr_2020_SJeLopEYDH", "S1xUCqYpYH", "rkgoA2UCKB", "iclr_2020_SJeLopEYDH", "iclr_2020_SJeLopEYDH" ]
iclr_2020_B1gqipNYwH
Option Discovery using Deep Skill Chaining
Autonomously discovering temporally extended actions, or skills, is a longstanding goal of hierarchical reinforcement learning. We propose a new algorithm that combines skill chaining with deep neural networks to autonomously discover skills in high-dimensional, continuous domains. The resulting algorithm, deep skill chaining, constructs skills with the property that executing one enables the agent to execute another. We demonstrate that deep skill chaining significantly outperforms both non-hierarchical agents and other state-of-the-art skill discovery techniques in challenging continuous control tasks.
accept-poster
This paper tackles the problem of autonomous skill discovery by recursively chaining skills backwards from the goal in a deep learning setting, taking the initial conditions of one skill to be the goal of the previous one. The approach is evaluated on several domains and compared against other state of the art algorithms. This is clearly a novel and interesting paper. Two minor outstanding issues are that the domains are all related to navigation, and it would be interesting to see the approach on other domains, and that the method involves a fair bit of engineering in piecing different methods together. Regardless, this paper should be accepted.
train
[ "Bkgp_NaFYB", "S1gYb6EhiS", "rye4yG5oiS", "HJg6B0IjsB", "rJxauwPciH", "HJgqn0lYoS", "HkgSWexYor", "BkliFcR_jr", "SJxLXHtaFS", "Syx0cm0pKB" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the problem of learning suitable action abstractions (i.e., options or skills) that can be composed hierarchically to solve control tasks. The starting point for the paper is the (classic) observation that one skill should end where another can start. The paper then proposes a recursive algorith...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_B1gqipNYwH", "rye4yG5oiS", "BkliFcR_jr", "rJxauwPciH", "HJgqn0lYoS", "Syx0cm0pKB", "SJxLXHtaFS", "Bkgp_NaFYB", "iclr_2020_B1gqipNYwH", "iclr_2020_B1gqipNYwH" ]
iclr_2020_HyxG3p4twS
Quantifying the Cost of Reliable Photo Authentication via High-Performance Learned Lossy Representations
Detection of photo manipulation relies on subtle statistical traces, notoriously removed by aggressive lossy compression employed online. We demonstrate that end-to-end modeling of complex photo dissemination channels allows for codec optimization with explicit provenance objectives. We design a lightweight trainable lossy image codec that delivers competitive rate-distortion performance, on par with the best hand-engineered alternatives, but has a lower computational footprint on modern GPU-enabled platforms. Our results show that significant improvements in manipulation detection accuracy are possible at fractional costs in bandwidth/storage. Our codec improved the accuracy from 37% to 86% even at very low bit-rates, well below the practicality of JPEG (QF 20).
accept-poster
The paper introduces a new image compression approach that preserves the patterns indicating image manipulation. The reviewers appreciate the idea and the method. Please take into account the suggestions of Reviewer 1 when preparing the final version.
train
[ "Bygdj379iB", "rkgpVnXqiB", "rkxW2iQcoB", "BJeC0l9yjS", "H1l-LJJAKS", "Byegr-2J5B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. ", "Thank you for detailed comments. \n\nOur setup involves a forensic analysis network which learns to distinguish basic image manipulations. This corresponds to a well-established foundational test scenario, which can later be built upon to deliver practical manipulation detection ...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 1, 1, 5 ]
[ "H1l-LJJAKS", "BJeC0l9yjS", "Byegr-2J5B", "iclr_2020_HyxG3p4twS", "iclr_2020_HyxG3p4twS", "iclr_2020_HyxG3p4twS" ]
iclr_2020_rkgz2aEKDr
On the Variance of the Adaptive Learning Rate and Beyond
The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate -- its variance is problematically large in the early stage, and presume warmup works as a variance reduction technique. We provide both empirical and theoretical evidence to verify our hypothesis. We further propose Rectified Adam (RAdam), a novel variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the efficacy and robustness of RAdam.
accept-poster
The paper considers an important topic of the warmup in deep learning, and investigates the problem of the adaptive learning rate. While the paper is somewhat borderline, the reviewers agree that it might be useful to present it to the ICLR community.
train
[ "HkxT4aeSor", "Hyl4O6eHiH", "Skl7-oeHiB", "SyenwjxriB", "BJeTkngrsH", "Hye4w1Tyjr", "HJxzMg4AYH", "SyeHRMd1cS", "Hkxsh6bZcS", "HJgZiW0ddH", "S1ew6gRO_B", "BkgVzLSdOS", "HJla5-ydOH", "S1ljoi3Dur", "SJlhBzow_H", "B1l02LcDuB" ]
[ "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "author", "public" ]
[ "> **Math Derivations**\n\nOne purpose of our study is to rigorously explore the underpinning of warmup — we believe that only functioning as the outcome of math derivations, our method can explicitly handle the variance issue. However, the comment criticizes math derivations as insufficient to justify our algorith...
[ -1, -1, -1, -1, -1, -1, 6, 6, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 5, 1, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "Hye4w1Tyjr", "Hye4w1Tyjr", "HJxzMg4AYH", "SyeHRMd1cS", "Hkxsh6bZcS", "iclr_2020_rkgz2aEKDr", "iclr_2020_rkgz2aEKDr", "iclr_2020_rkgz2aEKDr", "iclr_2020_rkgz2aEKDr", "S1ew6gRO_B", "BkgVzLSdOS", "HJla5-ydOH", "SJlhBzow_H", "iclr_2020_rkgz2aEKDr", "B1l02LcDuB", "iclr_2020_rkgz2aEKDr" ]
iclr_2020_H1lmhaVtvr
Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery
Reinforcement learning requires manual specification of a reward function to learn a task. While in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. This shaping is difficult to specify by hand, particularly when the task is learned from raw observations, such as images. In this paper, we study how we can automatically learn dynamical distances: a measure of the expected number of time steps to reach a given goal state from any other state. These dynamical distances can be used to provide well-shaped reward functions for reaching new goals, making it possible to learn complex tasks efficiently. We show that dynamical distances can be used in a semi-supervised regime, where unsupervised interaction with the environment is used to learn the dynamical distances, while a small amount of preference supervision is used to determine the task goal, without any manually engineered reward function or goal examples. We evaluate our method both on a real-world robot and in simulation. We show that our method can learn to turn a valve with a real-world 9-DoF hand, using raw image observations and just ten preference labels, without any other supervision. Videos of the learned skills can be found on the project website: https://sites.google.com/view/dynamical-distance-learning
accept-poster
The authors present a method to learn the expected number of time steps to reach any given state from any other state in a reinforcement learning setting. They show that these so-called dynamical distances can be used to increase learning efficiency by helping to shape reward. After some initial discussion, the reviewers had concerns about the applicability of this method to continuing problems without a clear goal state, learning issues due to the dependence of distance estimates on policy (and vice versa), experimental thoroughness, and a variety of smaller technical issues. While some of these were resolved, the largest outstanding issue is whether the proper comparisons were made to existing work other than DIAYN. The authors appear to agree that additional baselines would benefit the paper, but are uncertain whether this can occur in time. Nonetheless, after discussion the reviewers all appeared to agree on the merit of the core idea, though I strongly encourage the authors to address as many technical and baseline issues as possible before the camera ready deadline. In summary, I recommend this paper for acceptance.
val
[ "HJgmdWcJqr", "HJxXo4ShoB", "SyenCMHhjS", "rJxWw7wcjH", "SJlKw_r5iS", "S1l-_6UGtB", "rJelXZdYoB", "BylYpXBDiH", "HygnSiQQsS", "S1gDcxUXjH", "r1gip2Qmir", "SJghMgk0YH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "I'm afraid I found this paper somewhat confusing and hard to see the big picture but I also acknowledge that I am not an expert in deep, model-free RL and my RL experience is mostly in model-based RL and I am happy for this to be taken into consideration when evaluating my review - apologies if I miss something th...
[ 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ 1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_H1lmhaVtvr", "HygnSiQQsS", "rJxWw7wcjH", "S1gDcxUXjH", "rJelXZdYoB", "iclr_2020_H1lmhaVtvr", "BylYpXBDiH", "r1gip2Qmir", "HJgmdWcJqr", "SJghMgk0YH", "S1l-_6UGtB", "iclr_2020_H1lmhaVtvr" ]
iclr_2020_HkgB2TNYPS
A Theoretical Analysis of the Number of Shots in Few-Shot Learning
Few-shot classification is the task of predicting the category of an example from a set of few labeled examples. The number of labeled examples per category is called the number of shots (or shot number). Recent works tackle this task through meta-learning, where a meta-learner extracts information from observed tasks during meta-training to quickly adapt to new tasks during meta-testing. In this formulation, the number of shots exploited during meta-training has an impact on the recognition performance at meta-test time. Generally, the shot number used in meta-training should match the one used in meta-testing to obtain the best performance. We introduce a theoretical analysis of the impact of the shot number on Prototypical Networks, a state-of-the-art few-shot classification method. From our analysis, we propose a simple method that is robust to the choice of shot number used during meta-training, which is a crucial hyperparameter. Our model trained with an arbitrary meta-training shot number performs well across different values of meta-testing shot numbers. We experimentally demonstrate our approach on different few-shot classification benchmarks.
accept-poster
The reviewers generally found the paper's contribution to be valuable and informative, and I believe that this paper should be accepted for publication and a poster presentation. I would strongly recommend to the authors to carefully read over the reviews and address any comments or concerns that were not yet addressed in the rebuttal.
train
[ "H1g-Zyvc5B", "BkeqTsTFor", "HkgzZnaYor", "r1eNvsTFoS", "Byl_4sTKoS", "rygYKpMVcB", "ByeLxCR1qB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update 11/21\nWith the additional experiments and text clarifications, I'm happy to raise my score to accept.\n\nSummary: This paper addresses the dependence of few-shot classification with Prototypical Networks on “shot”, or the number of examples given per class. Typically, performance suffers if the algorithm i...
[ 8, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_HkgB2TNYPS", "rygYKpMVcB", "ByeLxCR1qB", "H1g-Zyvc5B", "iclr_2020_HkgB2TNYPS", "iclr_2020_HkgB2TNYPS", "iclr_2020_HkgB2TNYPS" ]
iclr_2020_SyxL2TNtvr
Unsupervised Model Selection for Variational Disentangled Representation Learning
Disentangled representations have recently been shown to improve fairness, data efficiency and generalisation in simple supervised and reinforcement learning tasks. To extend the benefits of disentangled representations to more complex domains and practical applications, it is important to enable hyperparameter tuning and model selection of existing unsupervised approaches without requiring access to ground truth attribute labels, which are not available for most datasets. This paper addresses this problem by introducing a simple yet robust and reliable method for unsupervised disentangled model selection. We show that our approach performs comparably to the existing supervised alternatives across 5400 models from six state of the art unsupervised disentangled representation learning model classes. Furthermore, we show that the ranking produced by our approach correlates well with the final task performance on two different domains.
accept-poster
The authors address the important and understudied problem of tuning of unsupervised models, in particular variational models for learning disentangled representations. They propose an unsupervised measure for model selection that correlates well with performance on multiple tasks. After significant fruitful discussion with the reviewers and resulting revisions, many reviewer concerns have been addressed. There are some remaining concerns that there may still be a gap in the theoretical basis for the application of the proposed measure to some models, that for different downstream tasks the best model selection criteria may vary, and that the method might be too cumbersome and not quite reliable enough for practitioners to use it broadly. All of that being said, the reviewers (and I) agree that the approach is sufficiently interesting, and the empirical results sufficiently convincing, to make the paper a good contribution and hopefully motivation for additional methods addressing this problem.
train
[ "rkehdCaaKH", "SyewUag2jH", "Byx8ouwAFr", "Syl9XBhooH", "HJxUH8voiS", "r1xDKSPjir", "BylnVVwioB", "rJl4JpvqjS", "SJxor_UcjB", "rkxuk7PujS", "rJl9JqrDir", "SygVY8UXir", "rygVXyLQiB", "ryx4jaGQjB", "HkeGvpGmoB", "ryx9uOzmjB", "SJgQGjb39r", "rylxuPrq_r", "Byec_l_zuH", "ByxXVo1fuS"...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "public", ...
[ "The paper proposes a metric for unsupervised model (and hyperparameter) selection for VAE-based models. The essential basis for the metric is to rank the models based on how much disentanglement they provide. This method relies on a key observation from this paper [A] viz., disentangled representations by any VAE-...
[ 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1 ]
[ 1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1 ]
[ "iclr_2020_SyxL2TNtvr", "Syl9XBhooH", "iclr_2020_SyxL2TNtvr", "r1xDKSPjir", "iclr_2020_SyxL2TNtvr", "rkxuk7PujS", "SJxor_UcjB", "iclr_2020_SyxL2TNtvr", "SygVY8UXir", "rJl9JqrDir", "HkeGvpGmoB", "rygVXyLQiB", "ryx9uOzmjB", "rkehdCaaKH", "Byx8ouwAFr", "SJgQGjb39r", "iclr_2020_SyxL2TNtv...
iclr_2020_BkgnhTEtDS
Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection
Recommendation is a prevalent application of machine learning that affects many users; therefore, it is important for recommender models to be accurate and interpretable. In this work, we propose a method to both interpret and augment the predictions of black-box recommender systems. In particular, we propose to interpret feature interactions from a source recommender model and explicitly encode these interactions in a target recommender model, where both source and target models are black-boxes. By not assuming the structure of the recommender system, our approach can be used in general settings. In our experiments, we focus on a prominent use of machine learning recommendation: ad-click prediction. We found that our interaction interpretations are both informative and predictive, e.g., significantly outperforming existing recommender models. What's more, the same approach to interpret interactions can provide new insights into domains even beyond recommendation, such as text and image classification.
accept-poster
The paper extracts feature interactions in recommender systems and studies the effect of these interactions on the recommendations. While the focus is on recommender systems, the authors claim that the ideas can be generalised to other domains also. All reviewers found the empirical results and analysis thereof to be very interesting and useful. This paper saw a healthy discussion between the authors and reviewers, and all reviewers agreed that this paper makes a useful contribution. I recommend that the authors address all the concerns of the reviewers in the final version of the paper.
train
[ "rklPOdey9H", "SyeDbrwtir", "Bker94DYoH", "BJlHAmvYoB", "HklcwLdRFr", "ryxiVxxs9H" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a method to detect which features in the input of recommender systems are interacted each other, i.e., combining them behaves useful information, and examines to feed extracted interactions directly into the recommender systems to measure effects on actual recommendation.\n\nThe interaction dete...
[ 6, -1, -1, -1, 6, 6 ]
[ 1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_BkgnhTEtDS", "HklcwLdRFr", "rklPOdey9H", "ryxiVxxs9H", "iclr_2020_BkgnhTEtDS", "iclr_2020_BkgnhTEtDS" ]
iclr_2020_B1x62TNtDS
Understanding the Limitations of Variational Mutual Information Estimators
Variational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables. However, they can be difficult to use in practice due to poorly understood bias/variance tradeoffs. We theoretically show that, under some conditions, estimators such as MINE exhibit variance that could grow exponentially with the true amount of underlying MI. We also empirically demonstrate that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction. Empirical results demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.
accept-poster
This paper presents a critical appraisal of variational mutual information estimators, and suggests a slight variance-reducing improvement based on clipping density ratio estimates, proving that this reduces variance (at the cost of bias). They also propose a set of criteria they term "self-consistency" for the evaluation of MI estimators, and show convincingly that variational MI estimators fall short with respect to these. Reviewers were generally positive about the contribution, and were happy with improvements made. While somewhat limited in scope, I believe this is nonetheless a valuable contribution to the conversation surrounding mutual information objectives that have become popular recently. I therefore recommend acceptance.
train
[ "S1lVRYFatS", "Sye0PZtoFr", "rkgLMqFl9r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work summarizes the existing methods of mutual information estimation in a variational inference framework and describes the limitations in terms of bias-variance tradeoffs. Further, the authors care about the self-consistency, namely, independence, data processing, and additivity, which are properties of bot...
[ 6, 6, 6 ]
[ 3, 3, 1 ]
[ "iclr_2020_B1x62TNtDS", "iclr_2020_B1x62TNtDS", "iclr_2020_B1x62TNtDS" ]
iclr_2020_BkxfaTVFwH
GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations
Generative latent-variable models are emerging as promising tools in robotics and reinforcement learning. Yet, even though tasks in these domains typically involve distinct objects, most state-of-the-art generative models do not explicitly capture the compositional nature of visual scenes. Two recent exceptions, MONet and IODINE, decompose scenes into objects in an unsupervised fashion. Their underlying generative processes, however, do not account for component interactions. Hence, neither of them allows for principled sampling of novel scenes. Here we present GENESIS, the first object-centric generative model of 3D visual scenes capable of both decomposing and generating scenes by capturing relationships between scene components. GENESIS parameterises a spatial GMM over images which is decoded from a set of object-centric latent variables that are either inferred sequentially in an amortised fashion or sampled from an autoregressive prior. We train GENESIS on several publicly available datasets and evaluate its performance on scene generation, decomposition, and semi-supervised learning.
accept-poster
This paper offers a new method for scene generation. While there is some debate on the semantics of 'generative' and '3d', on balance the reviewers were positive, and more so after the rebuttal. I concur with their view that this paper deserves to be accepted.
train
[ "HJxO0sg6YH", "Bke06RuRYr", "H1ln2c42oH", "SJg2Qa7noH", "Hke-OBuciS", "rJeMdaSEiB", "HJgmtjSEor", "Hkgd-TrVir", "HJxK4nB4iB", "BylK-ZHRYS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "UPDATE: I appreciated the authors' discussion. The authors addressed my questions satisfactorily, and I maintain my original rating of accept.\n\n----\nSummary: This papers tackles the question of building an object-centric latent variable generative model of scenes that can sample novel scenes with coherent objec...
[ 8, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_BkxfaTVFwH", "iclr_2020_BkxfaTVFwH", "Hkgd-TrVir", "Hke-OBuciS", "rJeMdaSEiB", "HJxO0sg6YH", "BylK-ZHRYS", "HJxK4nB4iB", "Bke06RuRYr", "iclr_2020_BkxfaTVFwH" ]
iclr_2020_BJgza6VtPB
Language GANs Falling Short
Traditional natural language generation (NLG) models are trained using maximum likelihood estimation (MLE), which differs from the sample generation inference procedure. During training the ground truth tokens are passed to the model; however, during inference, the model instead reads its previously generated samples - a phenomenon coined exposure bias. Exposure bias was hypothesized to be a root cause of poor sample quality, and thus many generative adversarial networks (GANs) were proposed as a remedy since they have identical training and inference. However, many of the ensuing GAN variants validated sample quality improvements but ignored loss of sample diversity. This work reiterates the fallacy of quality-only metrics and clearly demonstrates that the well-established technique of reducing softmax temperature can outperform GANs on a quality-only metric. Further, we establish a definitive quality-diversity evaluation procedure using temperature tuning over local and global sample metrics. Under this, we find that MLE models consistently outperform the proposed GAN variants over the whole quality-diversity space. Specifically, we find that 1) exposure bias appears to be less of an issue than the complications arising from non-differentiable, sequential GAN training; 2) MLE trained models provide a better quality/diversity trade-off compared to their GAN counterparts, all while being easier to train, easier to cross-validate, and less computationally expensive.
accept-poster
Main content: Blind review #1 summarizes it well: Recently many language GAN papers have been published to overcome the so-called exposure bias, and demonstrated improvements in natural language generation in terms of sample quality; some works propose to assess the generation in terms of diversity, however, quality and diversity are two conflicting measures that are hard to meet. This paper is a groundbreaking work that proposes a receiver operating curve or Pareto optimality for quality and diversity measures, and shows through comprehensive experiments that simple temperature sweeping in MLE generates better quality-diversity curves than all language GAN models. It points out a good target that language GANs should aim at. -- Discussion: The main reservation was the originality of the idea of using a temperature sweep in the softmax. However, it turns out this idea came from the authors in the first place, which they have not been able to state directly due to the anonymity requirement. Per the program chair's instruction to direct this to the area chair, I think this has been handled correctly. -- Recommendation and justification: This paper should be accepted. It provides readers with insight in that it illuminates a misconception of how important exposure bias has been assumed to be, and provides a less expensive MLE-based way to train than GAN counterparts.
test
[ "rJeZWKD2sS", "HkxLcNH2oB", "rkx9Nl1zcH", "ByeapaqosB", "rJlyE45jsr", "rJeSMAtijS", "HkxpmSPjjr", "rkeOHFSDoH", "SkeCCwZWoB", "Sygas2rMor", "r1lEsKWzoH", "HyeAqdnCFr" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ "thanks for pointing this out, we'll carefully study this other submission and will consider citing it", "Yes, that will resolve my concern. I increased the score from 3 to 6.", "This paper concerns the limitation of the quality-only evaluation metric for text generation models. Instead, a desirable evaluation ...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "rJeSMAtijS", "ByeapaqosB", "iclr_2020_BJgza6VtPB", "rJlyE45jsr", "SkeCCwZWoB", "rkeOHFSDoH", "SkeCCwZWoB", "Sygas2rMor", "rkx9Nl1zcH", "r1lEsKWzoH", "HyeAqdnCFr", "iclr_2020_BJgza6VtPB" ]