Dataset schema (column name: type and observed range):

paper_id: string, length 19-21
paper_title: string, length 8-170
paper_abstract: string, length 8-5.01k
paper_acceptance: string, 18 classes
meta_review: string, length 29-10k
label: string, 3 classes
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
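The schema above can be sketched as a plain Python record; the snippet below is a minimal illustration, not an official loader. Field names and the sample identifiers are taken from the records that follow; the `mean_rating` helper is hypothetical. Note that `-1` entries in `review_ratings` and `review_confidences` appear to mark author replies, which carry no score.

```python
# One record in the schema above. Field names come from the schema;
# sample values are drawn from the first record in this dump, truncated
# where the dump itself truncates them.
record = {
    "paper_id": "iclr_2020_S1xKYJSYwS",      # string, length 19-21
    "paper_title": "VAENAS: Sampling Matters in Neural Architecture Search",
    "paper_abstract": "...",                  # string, up to ~5.01k chars
    "paper_acceptance": "reject",             # one of 18 acceptance classes
    "meta_review": "...",                     # string, up to ~10k chars
    "label": "test",                          # dataset split: one of 3 classes
    "review_ids": ["SylMS-x2tB", "ByxyBt_hsH"],
    "review_writers": ["official_reviewer", "author"],
    "review_contents": ["...", "..."],
    "review_ratings": [3, -1],                # -1: author reply, no rating
    "review_confidences": [4, -1],            # -1: author reply, no confidence
    "review_reply_tos": ["iclr_2020_S1xKYJSYwS", "SylMS-x2tB"],
}

def mean_rating(rec):
    """Average reviewer rating, skipping the -1 sentinel used for author replies."""
    scores = [r for r in rec["review_ratings"] if r != -1]
    return sum(scores) / len(scores) if scores else None

print(mean_rating(record))  # -> 3.0
```

The sentinel filtering matters: averaging the raw list would mix author replies into the reviewer scores.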
iclr_2020_S1xKYJSYwS
VAENAS: Sampling Matters in Neural Architecture Search
Neural Architecture Search (NAS) aims at automatically finding neural network architectures within an enormous designed search space. The search space usually contains billions of network architectures, which incurs extremely expensive computing costs when searching for the best-performing architecture. One-shot and gradient-based NAS approaches have recently been shown to achieve superior results on various computer vision tasks such as image recognition. With the weight sharing mechanism, these methods lead to efficient model search. Despite their success, however, current sampling methods are either fixed or hand-crafted and thus ineffective. In this paper, we propose a learnable sampling module based on a variational auto-encoder (VAE) for neural architecture search (NAS), named VAENAS, which can be easily embedded into existing weight-sharing NAS frameworks, e.g., one-shot and gradient-based approaches, and significantly improves the performance of the search results. VAENAS generates a series of competitive results on CIFAR-10 and ImageNet in a NasNet-like search space. Moreover, combined with the one-shot approach, our method achieves a new state-of-the-art result for ImageNet classification models under 400M FLOPs, with 77.4% accuracy in a ShuffleNet-like search space. Finally, we conduct a thorough analysis of VAENAS on the NAS-Bench-101 dataset, which demonstrates the effectiveness of our proposed methods.
reject
This paper proposes to represent the distribution w.r.t. which neural architecture search (NAS) samples architectures through a variational autoencoder, rather than through a fully factorized distribution (as previous work did). In the discussion, a few things improved (causing one reviewer to increase his/her score from 1 to 3), but it became clear that the empirical evaluation has issues, with a different search space being used for the method than for the baselines. There was unanimous agreement for rejection. I agree with this judgement and thus recommend rejection.
test
[ "SylMS-x2tB", "ByxyBt_hsH", "BklVk9d2oB", "Hkl-_u_3ir", "BJliF3yE9B", "Bkeu4_xZqB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "============ comments after rebuttal\nI would like to thank the authors for addressing some of my concerns. I believe the new results under Q2 and Q3 are useful additions to strengthen the paper.\n\nAs for the authors' comments for Q1, I'd like to point out that a \"larger\" search space is not necessarily more di...
[ 3, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, 3, 3 ]
[ "iclr_2020_S1xKYJSYwS", "Bkeu4_xZqB", "SylMS-x2tB", "BJliF3yE9B", "iclr_2020_S1xKYJSYwS", "iclr_2020_S1xKYJSYwS" ]
iclr_2020_SkeKtyHYPS
Data Augmentation in Training CNNs: Injecting Noise to Images
Noise injection is a fundamental tool for data augmentation, and yet there is no widely accepted procedure for incorporating it into learning frameworks. This study analyzes the effects of adding or applying different noise models of varying magnitudes to Convolutional Neural Network (CNN) architectures. Noise models distributed with different density functions are given common magnitude levels via the Structural Similarity (SSIM) metric in order to create an appropriate ground for comparison. The basic results conform with most of the common notions in machine learning, and the study also introduces some novel heuristics and recommendations on noise injection. The new approaches will provide a better understanding of optimal learning procedures for image classification.
reject
This paper studies the effect of various data augmentation methods on image classification tasks. The authors propose the structural similarity as a measure of the magnitude of the various types of data augmentation noise they consider and argue that it outperforms PSNR as a measure of the intensity of the noise. The authors performed an empirical analysis showing that speckle noise leads to improved CNN models on two subsets of ImageNet. While there is merit in thoroughly analysing data augmentation schemes for training CNNs, the reviewers argued that the main claims of the work were not substantiated and the issues raised were not addressed in the rebuttal. I will hence recommend rejection of this paper.
val
[ "ryxfxYLKYr", "BylWZ0s6YH", "rJxOAIYJcr", "HJe6hPSv_r", "rylesyBjPr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "The paper studies the effect of various data augmentation methods on image classification tasks. The Authors propose the Structural Similarity (SSIM) as a measure of the magnitude of the various types of data augmentation noise they consider. The Authors argue that SSIM is superior to PSNR as a measure of the inte...
[ 3, 1, 3, -1, -1 ]
[ 4, 4, 3, -1, -1 ]
[ "iclr_2020_SkeKtyHYPS", "iclr_2020_SkeKtyHYPS", "iclr_2020_SkeKtyHYPS", "iclr_2020_SkeKtyHYPS", "iclr_2020_SkeKtyHYPS" ]
iclr_2020_SJecKyrKPH
ICNN: INPUT-CONDITIONED FEATURE REPRESENTATION LEARNING FOR TRANSFORMATION-INVARIANT NEURAL NETWORK
We propose a novel framework, ICNN, which combines an input-conditioned filter generation module and a decoder-based network to incorporate the contextual information present in images into Convolutional Neural Networks (CNNs). In contrast to traditional CNNs, we do not employ the same set of learned convolution filters for all input image instances. Our proposed decoder network serves the purpose of reducing the transformation present in the input image by learning to construct a representative image of the input image class. Our proposed input-aware framework with joint supervision, when combined with techniques inspired by multi-instance learning and max-pooling, results in a transformation-invariant neural network. We investigated the performance of our proposed framework on three MNIST variations, which cover both rotation and scaling variance, and achieved 0.98% error on MNIST-rot-12k, 1.12% error on Half-rotated MNIST and 0.68% error on Scaling MNIST, which is significantly better than the state-of-the-art results. Our proposed model also showcased consistent improvement on the CIFAR dataset. We make use of visualization to further demonstrate the effectiveness of our input-aware convolution filters. Our proposed convolution filter generation framework can also serve as a plugin for any CNN-based architecture and enhance its modeling capacity.
reject
This paper proposes a CNN that is invariant to input transformations, achieved by making two modifications on top of the TI-pooling architecture: input-dependent convolutional filters, and a decoder network to ensure full transformation invariance. Reviewer #1 is concerned about the limited novelty and unconvincing experimental results. Reviewer #2 praises the paper for being well written, but is not convinced by the significance of the contributions. The authors responded to Reviewer #2, but the rating did not change. Reviewer #3 is especially concerned that the paper is not well positioned with respect to the related prior work. Given these concerns and the overall negative ratings (two weak rejects and one reject), the AC recommends rejection.
train
[ "rkgVT2jyir", "Hygt0c_Jir", "rylH3T2dKB", "B1g0BlpjKB" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Why decoder is needed?\n-> Dimension of extracted max pool features is different from the representative image. Therefore, a reconstruction decoder is employed. Another solution would have been to map the representative image to a lower dimension space and use it directly to calculate L-2 distance with max pool f...
[ -1, 3, 1, 3 ]
[ -1, 5, 5, 5 ]
[ "B1g0BlpjKB", "iclr_2020_SJecKyrKPH", "iclr_2020_SJecKyrKPH", "iclr_2020_SJecKyrKPH" ]
iclr_2020_Byg5KyHYwr
Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks
Imitation learning from human-expert demonstrations has been shown to be greatly helpful for challenging reinforcement learning problems with sparse environment rewards. However, it is very difficult to achieve similar success without relying on expert demonstrations. Recent works on self-imitation learning showed that imitating the agent's own past good experience could indirectly drive exploration in some environments, but these methods often lead to sub-optimal and myopic behavior. To address this issue, we argue that exploration in diverse directions by imitating diverse trajectories, instead of focusing on limited good trajectories, is more desirable for the hard-exploration tasks. We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards. Our method significantly outperforms existing self-imitation learning and count-based exploration methods on various hard-exploration tasks with local optima. In particular, we report a state-of-the-art score of more than 20,000 points on Montezuma's Revenge without using expert demonstrations or resetting to arbitrary states.
reject
This paper addresses the problem of exploration in challenging RL environments using self-imitation learning. The idea behind the proposed approach is for the agent to imitate a diverse set of its own past trajectories. To achieve this, the authors introduce a policy conditioned on trajectories. The proposed approach is evaluated on various domains including Atari Montezuma's Revenge and MuJoCo. Given that the evaluation is purely empirical, the major concern is in the design of experiments. The amount of stochasticity induced by the random initial state alone does not lead to convincing results regarding the performance of the proposed approach compared with baselines (e.g. Go-Explore). With such simple stochasticity, it is not clear why one could not use a model to recover from it and then rely on an existing technique like Go-Explore. Although this paper tackles an important problem (hard-exploration RL tasks), all reviewers agreed that this limitation is crucial and I therefore recommend to reject this paper.
train
[ "SyxtVdWg9S", "HyeWXOo3iH", "HJxc0JhjsH", "HkxBme2ijr", "rJxv9CsijH", "rylzEpoojB", "ryexs85aYS", "rJgVvrnCKS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Note: the style-formatting of this paper has been heavily tweaked, and so the evaluation should be calibrated for a 9-page paper.\n\nThis paper proposes an approach for diverse self-imitation for hard exploration problems. The idea is leverage recently proposed self-imitation approaches for learning to imitate go...
[ 6, -1, -1, -1, -1, -1, 1, 3 ]
[ 3, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2020_Byg5KyHYwr", "HJxc0JhjsH", "ryexs85aYS", "ryexs85aYS", "rJgVvrnCKS", "SyxtVdWg9S", "iclr_2020_Byg5KyHYwr", "iclr_2020_Byg5KyHYwr" ]
iclr_2020_ryestJBKPB
Graph Neural Networks for Soft Semi-Supervised Learning on Hypergraphs
Graph-based semi-supervised learning (SSL) assigns labels to initially unlabelled vertices in a graph. Graph neural networks (GNNs), esp. graph convolutional networks (GCNs), inspired the current state-of-the-art models for graph-based SSL problems. GCNs inherently assume that the labels of interest are numerical or categorical variables. However, in many real-world applications such as co-authorship networks, recommendation networks, etc., vertex labels can be naturally represented by probability distributions or histograms. Moreover, real-world network datasets have complex relationships going beyond pairwise associations. These relationships can be modelled naturally and flexibly by hypergraphs. In this paper, we explore GNNs for graph-based SSL of histograms. Motivated by complex relationships (those going beyond pairwise) in real-world networks, we propose a novel method for directed hypergraphs. Our work builds upon existing works on graph-based SSL of histograms derived from the theory of optimal transportation. A key contribution of this paper is to establish generalisation error bounds for a one-layer GNN within the framework of algorithmic stability. We also demonstrate our proposed methods' effectiveness through detailed experimentation on real-world data. We have made the code available.
reject
This paper proposes and evaluates using graph convolutional networks for semi-supervised learning of probability distributions (histograms). The paper was reviewed by three experts, all of whom gave a Weak Reject rating. The reviewers acknowledged the strengths of the paper, but also had several important concerns including quality of writing and significance of the contribution, in addition to several more specific technical questions. The authors submitted a response that addressed these concerns to some extent. However, in post-rebuttal discussions, the reviewers chose not to change their ratings, feeling that quality of writing still needed to be improved and that overall a significant revision and another round of peer review would be needed. In light of these reviews, we are not able to recommend accepting the paper, but hope the authors will find the suggestions of the reviewers helpful in preparing a revision for another venue.
train
[ "H1ezioD2jS", "B1l8cvghsH", "B1l_Erl2sS", "HkgWCIxnor", "rkllgc3atH", "BygAYy6y5H", "Bygmcc07cB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for their reviews. \nAll the reviewers expressed concerns on the presentation (paper writing). We have addressed the concerns and uploaded a revised version of our submission. We give a summary of our rebuttal below.\n\n\n$\\textbf{Reviewers #2 and #3 suggested evaluation on additional ...
[ -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2020_ryestJBKPB", "rkllgc3atH", "Bygmcc07cB", "BygAYy6y5H", "iclr_2020_ryestJBKPB", "iclr_2020_ryestJBKPB", "iclr_2020_ryestJBKPB" ]
iclr_2020_ByeAK1BKPB
Projected Canonical Decomposition for Knowledge Base Completion
The leading approaches to tensor completion and link prediction are based on the canonical polyadic (CP) decomposition of tensors. While these approaches were originally motivated by low rank approximations, the best performances are usually obtained for ranks as high as permitted by computation constraints. For large scale factorization problems where the factor dimensions have to be kept small, the performances of these approaches tend to drop drastically. The other main tensor factorization model, Tucker decomposition, is more flexible than CP for fixed factor dimensions, so we expect Tucker-based approaches to yield better performance under strong constraints on the number of parameters. However, as we show in this paper through experiments on standard benchmarks of link prediction in knowledge bases, ComplEx, a variant of CP, achieves similar performances to recent approaches based on Tucker decomposition on all operating points in terms of number of parameters. In a control experiment, we show that one problem in the practical application of Tucker decomposition to large-scale tensor completion comes from the adaptive optimization algorithms based on diagonal rescaling, such as Adagrad. We present a new algorithm for a constrained version of Tucker which implicitly applies Adagrad to a CP-based model with an additional projection of the embeddings onto a fixed lower dimensional subspace. The resulting Tucker-style extension of ComplEx obtains similar best performances as ComplEx, with substantial gains on some datasets under constraints on the number of parameters.
reject
The paper proposes a tensor decomposition method that interpolates between Tucker and CP decompositions. The authors also propose an optimization algorithm (AdaImp) and argue that it has superior performance against AdaGrad in this tensor decomposition task. The approach is evaluated on some NLP tasks. The reviewers raised some concerns related to clarity, novelty, and strength of experiments. As part of addressing the reviewers' concerns, the authors reported their own results on MurP and Tucker (instead of quoting results from reference papers). While the reviewers greatly appreciated these experiments as well as the authors' response to their questions and feedback, the concerns largely remained unresolved. In particular, R2 found the gain achieved by AdaImp not significantly large compared to Adagrad. In addition, R2 found very limited evaluation of how AdaImp outperforms Adagrad (thus little evidence to support that claim). Finally, AdaImp lacks any theoretical analysis (unlike Adagrad).
train
[ "BygaD5i_jS", "rJxI8qs_oH", "SklWEco_ir", "SygDbciOoH", "B1gTZuqaKB", "rkl0nrbb5S", "SJeYwE_VqB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "(A) The idea of combining CP and Tucker is not new. For example, Tomioka et al. (2010; Section 3.4) considered the Tucker-CP patterns (CP decomposition of the Tucker core). Although they used the Tucker-CP model to improve the interpretability rather than link prediction, the paper needs to make some attribution t...
[ -1, -1, -1, -1, 3, 8, 3 ]
[ -1, -1, -1, -1, 5, 3, 3 ]
[ "B1gTZuqaKB", "rkl0nrbb5S", "SJeYwE_VqB", "iclr_2020_ByeAK1BKPB", "iclr_2020_ByeAK1BKPB", "iclr_2020_ByeAK1BKPB", "iclr_2020_ByeAK1BKPB" ]
iclr_2020_rJxyqkSYDH
A Simple Dynamic Learning Rate Tuning Algorithm For Automated Training of DNNs
Training neural networks on image datasets generally requires extensive experimentation to find the optimal learning rate regime. In particular, for adversarial training or for training a newly synthesized model, one would not know the best learning rate regime beforehand. We propose an automated algorithm for determining the learning rate trajectory that works across datasets and models for both natural and adversarial training, without requiring any dataset/model-specific tuning. It is a stand-alone, parameterless, adaptive approach with no computational overhead. We theoretically discuss the algorithm's convergence behavior. We empirically validate our algorithm extensively. Our results show that our proposed approach consistently achieves top-level accuracy compared to SOTA baselines in the literature in natural training, as well as in adversarial training.
reject
This paper proposes an automatic tuning procedure for the learning rate of SGD. Reviewers were in agreement over several of the shortcomings of the paper, in particular its heuristic nature. They also took the time to provide several ways of improving the work which I suggest the authors follow should they decide to resubmit it to a later conference.
train
[ "Hygg_4zosH", "B1e-hZfsoH", "rJlPaXGojH", "r1glYXfioB", "H1eS-rGssB", "rkguCfMjsS", "HyeOzMfjsr", "H1xlCzUqtr", "Hyx_lophKr", "S1xV-3gVqr" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThank you for the insightful and thorough comments and review. \n\n1. We have tried on a different dataset FMNIST as suggested. \n\n2.We have experimentally verified the performance dependence of AALR on batchsize. We found that the generalization performance of AALR remains unaffected wrt batchsize. A more deta...
[ -1, -1, -1, -1, -1, -1, -1, 1, 1, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, 1, 3, 5 ]
[ "Hyx_lophKr", "iclr_2020_rJxyqkSYDH", "S1xV-3gVqr", "S1xV-3gVqr", "H1xlCzUqtr", "S1xV-3gVqr", "S1xV-3gVqr", "iclr_2020_rJxyqkSYDH", "iclr_2020_rJxyqkSYDH", "iclr_2020_rJxyqkSYDH" ]
iclr_2020_B1elqkrKPH
Learning robust visual representations using data augmentation invariance
Deep convolutional neural networks trained for image object categorization have shown remarkable similarities with representations found across the primate ventral visual stream. Yet, artificial and biological networks still exhibit important differences. Here we investigate one such property: increasing invariance to identity-preserving image transformations found along the ventral stream. Despite theoretical evidence that invariance should emerge naturally from the optimization process, we present empirical evidence that the activations of convolutional neural networks trained for object categorization are not robust to identity-preserving image transformations commonly used in data augmentation. As a solution, we propose data augmentation invariance, an unsupervised learning objective which improves the robustness of the learned representations by promoting the similarity between the activations of augmented image samples. Our results show that this approach is a simple, yet effective and efficient (10% increase in training time) way of increasing the invariance of the models while obtaining similar categorization performance.
reject
This paper introduces an unsupervised learning objective that attempts to improve the robustness of the learnt representations. The approach is empirically demonstrated on CIFAR-10 and Tiny ImageNet with different network architectures, including the all convolutional net, wide residual net and dense net. Two of three reviewers felt that the paper was not suitable for publication at ICLR in its current form. Self-supervision based on preserving network outputs despite data transformations is a relatively minor contribution, the framing of the approach as inspired by biological vision notwithstanding. Several relevant references, including one from a past ICLR: http://openaccess.thecvf.com/content_CVPR_2019/papers/Kolesnikov_Revisiting_Self-Supervised_Visual_Representation_Learning_CVPR_2019_paper.pdf and Gidaris, P. Singh, and N. Komodakis. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations (ICLR), 2018.
train
[ "SkgrSdX3iB", "r1gkbvQ3iH", "SyeKcLQhiH", "Syxxveg0tH", "rylAHbq7cS", "BklwIQg0cH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We first sincerely thank the reviewer for their feedback. We especially appreciate the interesting suggestions.\n\n\"I do, however, rate it as Weak Accept only for one reason: I would expect that making the model more robust should improve classification accuracy. But according to the paper, accuracy does not impr...
[ -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, 4, 3, 3 ]
[ "rylAHbq7cS", "Syxxveg0tH", "BklwIQg0cH", "iclr_2020_B1elqkrKPH", "iclr_2020_B1elqkrKPH", "iclr_2020_B1elqkrKPH" ]
iclr_2020_rkgl51rKDB
Efficient meta reinforcement learning via meta goal generation
Meta reinforcement learning (meta-RL) is able to accelerate the acquisition of new tasks by learning from past experience. Current meta-RL methods usually learn to adapt to new tasks by directly optimizing the parameters of policies over primitive actions. However, for complex tasks which require sophisticated control strategies, it would be quite inefficient to directly learn such a meta-policy. Moreover, this problem can become more severe and even cause failure in sparse reward settings, which are quite common in practice. To this end, we propose a new meta-RL algorithm called meta goal-generation for hierarchical RL (MGHRL), leveraging a hierarchical actor-critic framework. Instead of directly generating policies over primitive actions for new tasks, MGHRL learns to generate high-level meta strategies over subgoals given past experience and leaves the rest of how to achieve the subgoals as independent RL subtasks. Our empirical results on several challenging simulated robotics environments show that our method enables more efficient and effective meta-learning from past experience and outperforms state-of-the-art meta-RL and hierarchical-RL methods in sparse reward settings.
reject
This paper combines PEARL with HAC to create a hierarchical meta-RL algorithm that operates on goals at the high level and learns low-level policies to reach those goals. Reviewers remarked that it’s well-presented and well-organized, with enough details to be mostly reproducible. In the experiments conducted, it appears to show strong results. However there was strong consensus on two major weaknesses that render this paper unpublishable in its current form: 1) the continuous control tasks used don’t seem to require hierarchy, and 2) the baselines don’t appear to be appropriate. Reviewers remarked that a vital missing baseline is HER, and that it’s unfair to compare to PEARL, which is a more general meta-RL algorithm. The authors don’t appear to have made revisions in response to these concerns. All reviewers made useful and constructive comments, and I urge the authors to take them into consideration when revising for a future submission.
test
[ "r1lVZ2-ZqH", "SyllxgTtFS", "HkxcTp05iS", "r1x9iTAcsH", "r1lfqFR5iS", "rkg-Q1gy9S" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Update 11/21\nI maintain my score. I like the idea and hope the authors improve the paper and submit to a future conference.\n\nSummary\nThis paper combines hierarchical RL with meta-learning. The idea is that high-level plans transfer across settings (e.g. picking up a mug), while low-level execution may differ a...
[ 1, 1, -1, -1, -1, 3 ]
[ 4, 4, -1, -1, -1, 5 ]
[ "iclr_2020_rkgl51rKDB", "iclr_2020_rkgl51rKDB", "SyllxgTtFS", "rkg-Q1gy9S", "r1lVZ2-ZqH", "iclr_2020_rkgl51rKDB" ]
iclr_2020_rkgb9kSKwS
Spectral Nonlocal Block for Neural Network
The nonlocal network is designed for capturing long-range spatial-temporal dependencies in several computer vision tasks. Although it has shown excellent performance, it requires elaborate tuning of both the number and position of the building blocks. In this paper, we propose a new formulation of the nonlocal block and interpret it from the general graph signal processing perspective, where we view it as a fully-connected graph filter approximated by Chebyshev polynomials. The proposed nonlocal block is more efficient and robust, and is a generalized form of existing nonlocal blocks (e.g. nonlocal block, nonlocal stage). Moreover, we state a stability hypothesis and show that the steady state of deeper nonlocal structures should satisfy it. Based on this hypothesis, a full-order approximation of the nonlocal block is derived for consecutive connections. Experimental results illustrate the clear-cut improvement and practical applicability of the generalized nonlocal block on both image and video classification tasks.
reject
This paper proposes a new formulation of the non-local block and interprets it from the graph view. The idea is interesting and the experimental results seem promising. The reviewers raised two major concerns. The first is the presentation, which is not clear enough. The second is the experimental design and analysis. The authors added more video datasets in the revision, but the paper still lacks comprehensive experimental analysis for video-based applications. Overall, the idea of viewing the non-local block from the graph perspective is interesting. However, the presentation of the paper needs further polish and thus does not meet the standard of ICLR.
test
[ "S1luaFS3ir", "rkgAEzGhiB", "r1eqZWr3sS", "BJx4ktM3oH", "rJx8yhU2sr", "BJxv3o8njH", "Hyg-EusOKH", "rkeQXGZ_qS", "S1gW3qf29r", "rke6CgqpqH" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ ">6. Explanation of table 4.2. Where do we see different number of nonlocal units.\n\nSorry for the typo. Table.4.2 is actually the Table.4 mentioned in the context, we have corrected it in the updated version.\n\nThe experimental results of different numbers are shown in Table.4. According to the results of DP1 an...
[ -1, -1, -1, -1, -1, -1, 6, 6, 3, 1 ]
[ -1, -1, -1, -1, -1, -1, 3, 1, 5, 3 ]
[ "r1eqZWr3sS", "rkeQXGZ_qS", "rke6CgqpqH", "Hyg-EusOKH", "BJxv3o8njH", "S1gW3qf29r", "iclr_2020_rkgb9kSKwS", "iclr_2020_rkgb9kSKwS", "iclr_2020_rkgb9kSKwS", "iclr_2020_rkgb9kSKwS" ]
iclr_2020_HyeG9yHKPr
Causally Correct Partial Models for Reinforcement Learning
In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions. However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional (e.g. images). For this reason, previous works have considered partial models, which model only part of the observation. In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning. To address this, we introduce a general family of partial models that are provably causally correct, but avoid the need to fully model future observations.
reject
The authors show that in a reinforcement learning setting, partial models can be causally incorrect, leading to improper evaluation of policies that are different from those used to collect the data for the model. They then propose a backdoor correction to this problem that allows the model to generalize properly by separating the effects of the stochasticity of the environment and the policy. The reviewers had substantial concerns about both issues of clarity and the clear, but largely undiscussed, connection to off-policy policy evaluation (OPPE). In response, the authors made a significant number of changes for the sake of clarity, as well as further explained the differences between their approach and the OPPE setting. First, OPPE is not typically model-based. Second, while an importance sampling solution would be technically possible, by re-training the model based on importance-weighted experiences, this would need to be done for every evaluation policy considered, whereas the authors' solution uses a fundamentally different approach of causal reasoning so that a causally correct model can be learned once and work for all policies. After much discussion, the reviewers could not come to a consensus about the validity of these arguments. Furthermore, there were lingering questions about writing clarity. Thus, in the future, it appears the paper could be significantly improved if the authors cite more of the off-policy evaluation literature, in addition to their added textual clarifications of the relation of their work to that body of work. Overall, my recommendation at this time is to reject this paper.
train
[ "rkez7nP55B", "rkx7BVYhsB", "ryecJfD3iB", "HygH9Gmhir", "Sye2G8I_iH", "B1xcYH8uoS", "HkxGiNLuiS", "Bklram8_jH", "HJl1FwQaFH", "H1eOrcwaKB", "rJgKAsG09B" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "SUMMARY:\nThe authors apply ideas from causal learning to the problem of model learning in the context of sequential decision making problems. They show that models typically learned in this context can be problematic when used for planning. The authors then reformulate the model-learning problem using a causal le...
[ 8, -1, -1, -1, -1, -1, -1, -1, 1, 3, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 4, 1 ]
[ "iclr_2020_HyeG9yHKPr", "ryecJfD3iB", "HkxGiNLuiS", "iclr_2020_HyeG9yHKPr", "HJl1FwQaFH", "H1eOrcwaKB", "rkez7nP55B", "rJgKAsG09B", "iclr_2020_HyeG9yHKPr", "iclr_2020_HyeG9yHKPr", "iclr_2020_HyeG9yHKPr" ]
iclr_2020_SyxM51BYPB
A new perspective in understanding of Adam-Type algorithms and beyond
First-order adaptive optimization algorithms such as Adam play an important role in modern deep learning due to their super fast convergence speed in solving large scale optimization problems. However, Adam's non-convergence behavior and regrettable generalization ability make it fall into a love-hate relationship with the deep learning community. Previous studies on Adam and its variants (referred to as Adam-Type algorithms) mainly rely on theoretical regret bound analysis, which overlooks the natural characteristics residing in such algorithms and limits our thinking. In this paper, we aim at seeking a different interpretation of Adam-Type algorithms so that we can intuitively comprehend and improve them. The way we choose is based on a traditional online convex optimization algorithm scheme known as the mirror descent method. By bridging Adam and mirror descent, we receive a clear map of the functionality of each part in Adam. In addition, this new angle brings us new insight into identifying the non-convergence issue of Adam. Moreover, we provide a new variant of Adam-Type algorithm, namely AdamAL, which can naturally mitigate the non-convergence issue of Adam and improve its performance. We further conduct experiments on various popular deep learning tasks and models, and the results are quite promising.
reject
In this paper, the authors draw upon online convex optimization in order to derive a different interpretation of Adam-Type algorithms, allowing them to identify the functionality of each part of Adam. Based on these observations, the authors derive a new Adam-Type algorithm, AdamAL, and test it on 2 computer vision datasets using 3 CNN architectures. The main concern shared by all reviewers is the lack of novelty, but also the lack of rigor in both the experimental and theoretical justification provided by the authors. After having read carefully the reviews and main points of the paper, I will side with the reviewers, thus not recommending acceptance of this paper.
train
[ "SJg7hHphtH", "H1xy8D7-iS", "S1lPzDQWiB", "SyetOLmboH", "rJlnmSE-qH", "H1gj6JT7qr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis work proposed a framework to analyze both Adam-type algorithms and SGD-type algorithms. The authors considered both of them as specialized cases of mirror descent algorithms and provided a new algorithm AdamAL. The authors showed experiments to backup their theoretical results. \n\nPros:\n\nThe au...
[ 3, -1, -1, -1, 1, 3 ]
[ 4, -1, -1, -1, 4, 1 ]
[ "iclr_2020_SyxM51BYPB", "SJg7hHphtH", "rJlnmSE-qH", "H1gj6JT7qr", "iclr_2020_SyxM51BYPB", "iclr_2020_SyxM51BYPB" ]
iclr_2020_H1g79ySYvB
Revisiting Gradient Episodic Memory for Continual Learning
Gradient Episodic Memory (GEM) is an effective model for continual learning, where each gradient update for the current task is formulated as a quadratic program problem with inequality constraints that alleviate catastrophic forgetting of previous tasks. However, practical use of GEM is impeded by several limitations: (1) the data examples stored in the episodic memory may not be representative of past tasks; (2) the inequality constraints appear to be rather restrictive for competing or conflicting tasks; (3) the inequality constraints can only avoid catastrophic forgetting but can not assure positive backward transfer. To address these issues, in this paper we aim at improving the original GEM model via three handy techniques without extra computational cost. Experiments on MNIST Permutations and incremental CIFAR100 datasets demonstrate that our techniques enhance the performance of GEM remarkably. On CIFAR100 the average accuracy is improved from 66.48% to 68.76%, along with the backward (knowledge) transfer growing from 1.38% to 4.03%.
reject
This paper proposes an extension of Gradient Episodic Memory (GEM), namely support examples, soft gradient constraints, and positive backward transfer. The authors argue that experiments on MNIST and CIFAR show that the proposed method consistently improves over the original GEM. All three reviewers are not convinced by the experiments in the paper. R1 and R3 mentioned that the improvements over GEM appear to be small. R2 and R3 also have some concerns about the lack of results with multiple runs. R3 has questions about hyperparameter tuning. The authors also appear to be missing recent developments in this area (e.g., A-GEM). The authors did not provide a rebuttal to these concerns. I agree with the reviewers and recommend rejecting this paper.
train
[ "HJxngHKiKB", "Bylpt08hYS", "SkeYd-i2FB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an extension to gradient episodic memory (GEM) to improve its performance and backwards transfer. Specifically, the proposed method selects \"support examples\" to represent each task (versus the last M examples for GEM); introduces slack variables to ensure the constraints imposed by GEM are n...
[ 1, 3, 1 ]
[ 4, 4, 4 ]
[ "iclr_2020_H1g79ySYvB", "iclr_2020_H1g79ySYvB", "iclr_2020_H1g79ySYvB" ]
iclr_2020_BkgNqkHFPr
Enhanced Convolutional Neural Tangent Kernels
Recent research shows that for training with l2 loss, convolutional neural networks (CNNs) whose width (number of channels in convolutional layers) goes to infinity, correspond to regression with respect to the CNN Gaussian Process kernel (CNN-GP) if only the last layer is trained, and correspond to regression with respect to the Convolutional Neural Tangent Kernel (CNTK) if all layers are trained. An exact algorithm to compute CNTK (Arora et al., 2019) yielded the finding that classification accuracy of CNTK on CIFAR-10 is within 6-7% of that of the corresponding CNN architecture (best figure being around 78%), which is interesting performance for a fixed kernel. Here we show how to significantly enhance the performance of these kernels using two ideas. (1) Modifying the kernel using a new operation called Local Average Pooling (LAP) which preserves efficient computability of the kernel and inherits the spirit of standard data augmentation using pixel shifts. Earlier papers were unable to incorporate naive data augmentation because of the quadratic training cost of kernel regression. This idea is inspired by Global Average Pooling (GAP), which, as we show for CNN-GP and CNTK, is equivalent to full translation data augmentation. (2) Representing the input image using a pre-processing technique proposed by Coates et al. (2011), which uses a single convolutional layer composed of random image patches. On CIFAR-10 the resulting kernel, CNN-GP with LAP and horizontal flip data augmentation achieves 89% accuracy, matching the performance of AlexNet (Krizhevsky et al., 2012). Note that this is the best such result we know of for a classifier that is not a trained neural network. Similar improvements are obtained for Fashion-MNIST.
reject
This paper was assessed by three reviewers who scored it as 6/3/6. The reviewers liked some aspects of this paper, e.g., its good performance, but they also criticized aspects of the work such as inventing new names for existing pooling operators, the observation that large parts of the improvements come from the pre-processing step rather than the proposed method, and suspected overfitting. Taking into account all positives and negatives, the AC feels that while the proposed idea has some positives, it also falls short of the quality required by ICLR 2020, thus it cannot be accepted at this time. The AC strongly encourages the authors to go through all comments (especially the negative ones), address them and resubmit an improved version to another venue.
train
[ "Byxqt4VztH", "ryltv8TTFB", "SJgsaAK2sB", "H1eOHZN2or", "H1lNYzCooB", "Hke5xNAijH", "ryxC7H0oir", "H1x7kXRosB", "S1gzyEd6FH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper shows that there is a one-to-one correspondence between pixel-shift based data augmentation and average pooling operations in CNN-NNGP/NTK based ridge regression. Interestingly, the authors show that standard average pooling + flatten can lead to a better performance than simple global average pooling. ...
[ 3, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_BkgNqkHFPr", "iclr_2020_BkgNqkHFPr", "H1eOHZN2or", "H1lNYzCooB", "ryltv8TTFB", "Byxqt4VztH", "iclr_2020_BkgNqkHFPr", "S1gzyEd6FH", "iclr_2020_BkgNqkHFPr" ]
iclr_2020_rkeNqkBFPB
Deep automodulators
We introduce a novel autoencoder model that deviates from traditional autoencoders by using the full latent vector to independently modulate each layer in the decoder. We demonstrate how such an 'automodulator' allows for a principled approach to enforce latent space disentanglement, mixing of latent codes, and a straightforward way to utilize prior information that can be construed as a scale-specific invariance. Unlike GANs, autoencoder models can directly operate on new real input samples. This makes our model directly suitable for applications involving real-world inputs. As the architectural backbone, we extend recent generative autoencoder models that retain input identity and image sharpness at high resolutions better than VAEs. We show that our model achieves state-of-the-art latent space disentanglement and achieves high quality and diversity of output samples, as well as faithfulness of reconstructions.
reject
The manuscript proposes an autoencoder architecture incorporating two recent architectural innovations from the GAN literature (progressive growing & feature-wise modulation), trained with the adversarial generator-encoder paradigm with a novel cyclic loss meant to encourage disentangling, and a procedure for enforcing layerwise invariances. The authors demonstrate coarse/fine visual transfer on generative modeling of face images, as well as generative modeling results on several Large Scale Scene Understanding (LSUN) datasets. Reviewers generally found the results somewhat compelling and the ideas valuable and well-motivated, but criticized the presentation clarity, the lack of ablation studies, and that the claims made were not sufficiently supported by the empirical evidence. The authors revised, and while it was agreed that clarity was improved, some reviewers were still not satisfied with the level of clarity (the revision appeared at the very end of the discussion period, unfortunately not allowing for any further refinement). Ablation studies were added in the revised manuscript, which were appreciated, but seemed to suggest that the proposed loss function was of mixed utility: while style-mixing quantitatively improved, overall sample quality appeared to suffer. As the reviewers remain unconvinced as to the significance of the contribution and the clarity of its presentation, I recommend rejection at this time, while encouraging the authors to further refine the presentation of their ideas for a future resubmission.
val
[ "rJx497-hoB", "ryliKbbhoH", "SJe2blW2jS", "rkgdpClhsS", "HkxmXW7AYS", "rkgGbTwr5r", "rJxBHBNP5H" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the detailed requests and literature pointers. We hope that our extensive rewrites of Sec. 3 especially will address most of these questions. We also added the references to the other models you provided.\n\nThe parts about layer independence and style transfer have been rephrased. A working definiti...
[ -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, 5, 5, 1 ]
[ "HkxmXW7AYS", "rkgGbTwr5r", "rJxBHBNP5H", "iclr_2020_rkeNqkBFPB", "iclr_2020_rkeNqkBFPB", "iclr_2020_rkeNqkBFPB", "iclr_2020_rkeNqkBFPB" ]
iclr_2020_HJxV5yHYwB
Solving single-objective tasks by preference multi-objective reinforcement learning
There ubiquitously exist many single-objective tasks in the real world that are inevitably related to, and influenced by, some other objectives. We call such a task an objective-constrained task, which is inherently a multi-objective problem. Due to the conflict among different objectives, a trade-off is needed. A common compromise is to design a scalar reward function by clarifying the relationship among these objectives using the prior knowledge of experts. However, reward engineering is extremely cumbersome. This will result in behaviors that optimize our reward function without actually satisfying our preferences. In this paper, we explicitly cast the objective-constrained task as preference multi-objective reinforcement learning, with the overall goal of finding a Pareto optimal policy. Combined with Trajectory Preference Domination, which we propose, a weight vector that reflects the agent's preference for each objective can be learned. We analyze the feasibility of our algorithm in theory, and further show in experiments its better performance compared to methods that design the reward function by experts.
reject
The paper considers planning through the lenses of both single and multiple objectives. The paper then discusses the Pareto frontiers of this optimization. While this is an interesting direction, the reviewers feel a more careful comparison to related work is needed.
val
[ "Byg4aCzE5S", "rygBjki6tB", "Hke2FZqFiH", "SkxhzAFtiH", "Syg1Lx9Yor" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thank the authors for the response. I agree with R2 that the paper lacks comparisons with previous works. I will stick to my previous decision.\n----------------------------------------\nSummary\nThis paper presents a new approach for single-objective reinforcement learning by preferencing multi-objective reinforc...
[ 3, 1, -1, -1, -1 ]
[ 3, 4, -1, -1, -1 ]
[ "iclr_2020_HJxV5yHYwB", "iclr_2020_HJxV5yHYwB", "Syg1Lx9Yor", "Byg4aCzE5S", "rygBjki6tB" ]
iclr_2020_HygS91rYvH
Universal Adversarial Attack Using Very Few Test Examples
Adversarial attacks such as Gradient-based attacks, Fast Gradient Sign Method (FGSM) by Goodfellow et al. (2015) and DeepFool by Moosavi-Dezfooli et al. (2016) are input-dependent, small pixel-wise perturbations of images which fool state of the art neural networks into misclassifying images but are unlikely to fool any human. On the other hand, a universal adversarial attack is an input-agnostic perturbation. The same perturbation is applied to all inputs and yet the neural network is fooled on a large fraction of the inputs. In this paper, we show that multiple known input-dependent pixel-wise perturbations share a common spectral property. Using this spectral property, we show that the top singular vector of input-dependent adversarial attack directions can be used as a very simple universal adversarial attack on neural networks. We evaluate the error rates and fooling rates of three universal attacks, SVD-Gradient, SVD-DeepFool and SVD-FGSM, on state of the art neural networks. We show that these universal attack vectors can be computed using a small sample of test inputs. We establish our results both theoretically and empirically. On VGG19 and VGG16, the fooling rate of SVD-DeepFool and SVD-Gradient perturbations constructed from observing less than 0.2% of the validation set of ImageNet is as good as the universal attack of Moosavi-Dezfooli et al. (2017a). To prove our theoretical results, we use matrix concentration inequalities and spectral perturbation bounds. For completeness, we also discuss another recent approach to universal adversarial perturbations based on (p, q)-singular vectors, proposed independently by Khrulkov & Oseledets (2018), and point out the simplicity and efficiency of our universal attack as the key difference.
reject
The paper proposes to get universal adversarial examples using few test samples. The approach is very close to the Khrulkov & Oseledets, and the abstract for some reason claims that it was proposed independently, which looks like a very strange claim. Overall, all reviewers recommend rejection, and I agree with them.
test
[ "HJxMVJT3KS", "ByxmcCp6tB", "SyliF_gxcH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studied the problem of universal adversarial attack which is an input-agnostic perturbation. The authors proposed to use the top singular vector of input-dependent adversarial attack directions to perform universal adversarial attacks. The authors evaluated the error rates and fooling rates for three at...
[ 3, 3, 3 ]
[ 5, 3, 3 ]
[ "iclr_2020_HygS91rYvH", "iclr_2020_HygS91rYvH", "iclr_2020_HygS91rYvH" ]
iclr_2020_rJg851rYwH
Making the Shoe Fit: Architectures, Initializations, and Tuning for Learning with Privacy
Because learning sometimes involves sensitive data, standard machine-learning algorithms have been extended to offer strong privacy guarantees for training data. However, in practice, this has been mostly an afterthought, with privacy-preserving models obtained by re-running training with a different optimizer, but using the same model architecture that performed well in a non-privacy-preserving setting. This approach leads to less than ideal privacy/utility tradeoffs, as we show here. Instead, we propose that model architectures and initializations are chosen and hyperparameter tuning is performed, ab initio, explicitly for privacy-preserving training. Using this paradigm, we achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the fundamental learning procedures or differential-privacy analysis.
reject
This paper presents experimental evidence that learning with privacy requires optimization of the model settings (architectures and initializations) that are not identical to those used when learning without privacy. While acknowledging potential usefulness of this work for practitioners, the reviewers expressed several important concerns such as (1) lack of SOTA baseline comparisons, (2) lack of clarity of the empirical evaluation protocols, (3) large models (that are widely used in practice) have not been studied in the paper, (4) low technical novelty. The authors have successfully addressed some of the concerns regarding (1) and (2). However (3) and (4) make it difficult to assess the benefits of the proposed approach for the community and were viewed by AC as critical issues. We hope the detailed reviews are useful for improving and revising the paper.
train
[ "Skx5Yj8pcr", "HkgRvJUhor", "H1xIuTmhoB", "SJgELT72sH", "ryeAJTm2sB", "SyePp2XnsB", "HylXFnQ2oH", "BkeOIhXniB", "SJxqstEyqB", "B1gcmtH15B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents experimental evidence that learning with privacy requires approaches that are not identical to those used when learning without privacy. These approaches include re-considering different model choices (i.e., its structure and activation functions), its initialization, and its optimization proce...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2020_rJg851rYwH", "H1xIuTmhoB", "SJgELT72sH", "Skx5Yj8pcr", "SyePp2XnsB", "B1gcmtH15B", "BkeOIhXniB", "SJxqstEyqB", "iclr_2020_rJg851rYwH", "iclr_2020_rJg851rYwH" ]
iclr_2020_SJgw51HFDr
Sparse Weight Activation Training
Training convolutional neural networks (CNNs) is time consuming. Prior work has explored how to reduce the computational demands of training by eliminating gradients with relatively small magnitude. We show that eliminating small magnitude components has limited impact on the direction of high-dimensional vectors. However, in the context of training a CNN, we find that eliminating small magnitude components of weight and activation vectors allows us to train deeper networks on more complex datasets versus eliminating small magnitude components of gradients. We propose Sparse Weight Activation Training (SWAT), an algorithm that embodies these observations. SWAT reduces computations by 50% to 80% with better accuracy at a given level of sparsity versus the Dynamic Sparse Graph algorithm. SWAT also reduces memory footprint by 23% to 37% for activations and 50% to 80% for weights.
reject
The paper is recommended for rejection based on the majority of reviews.
train
[ "rylmGRcnjB", "SkekD9c2oS", "S1euzFc2iB", "BJlFU6ucjr", "ryxlrvZqjB", "SylCZRFtor", "SylcTNqXsH", "Bye0erq7jr", "Hyx9DUqmsH", "rklaf8q7jB", "S1xw9KbyKH", "ryxGyqqk9H" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We have added the comparison with the lottery ticket hypothesis in the appendix. Also, we have added a detailed performance estimation on the sparse accelerator in Appendix C. \n\n", "We have resolved all of your mentioned issues and have added most of the rebuttal discussions either in the appendix or clarifie...
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, 6, 3 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3, 5 ]
[ "ryxlrvZqjB", "Bye0erq7jr", "BJlFU6ucjr", "rklaf8q7jB", "SylCZRFtor", "iclr_2020_SJgw51HFDr", "ryxGyqqk9H", "ryxGyqqk9H", "S1xw9KbyKH", "S1xw9KbyKH", "iclr_2020_SJgw51HFDr", "iclr_2020_SJgw51HFDr" ]
iclr_2020_rJe_cyrKPB
GroSS Decomposition: Group-Size Series Decomposition for Whole Search-Space Training
We present Group-size Series (GroSS) decomposition, a mathematical formulation of tensor factorisation into a series of approximations of increasing rank terms. GroSS allows for dynamic and differentiable selection of factorisation rank, which is analogous to a grouped convolution. Therefore, to the best of our knowledge, GroSS is the first method to simultaneously train differing numbers of groups within a single layer, as well as all possible combinations between layers. In doing so, GroSS trains an entire grouped convolution architecture search-space concurrently. We demonstrate this with a proof-of-concept exhaustive architecture search with a performance objective. GroSS represents a significant step towards liberating network architecture search from the burden of training and finetuning.
reject
The authors use a Tucker decomposition to represent the weights of a network, for efficient computation. The idea is natural, and preliminary results promising. The main concern was lack of empirical validation and comparisons. While the authors have provided partial additional results in the rebuttal, which is appreciated, a thorough set of experiments and comparisons would ideally be included in a new version of the paper, and then considered again in review.
test
[ "BJlh8d6LoH", "rkesS2pIoH", "Hyg1RnaIoS", "Bkeqau6Lir", "Hyx77nBRFS", "HkgEA-hk5r", "HJloTnFxqB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "“I did not know about the expansion function, and while I trust the authors that it is correctly used, I would have like either more explanations on how it works or some reference.”\n\nAs far as we are aware, we are the first to exploit the expansion of grouped convolution weights. We can provide more detail on th...
[ -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, 5, 3, 1 ]
[ "HJloTnFxqB", "Hyx77nBRFS", "rkesS2pIoH", "HkgEA-hk5r", "iclr_2020_rJe_cyrKPB", "iclr_2020_rJe_cyrKPB", "iclr_2020_rJe_cyrKPB" ]
iclr_2020_Bkxd9JBYPH
Representing Model Uncertainty of Neural Networks in Sparse Information Form
This paper addresses the problem of representing a system's belief using multi-variate normal distributions (MND) where the underlying model is based on a deep neural network (DNN). The major challenge with DNNs is the computational complexity that is needed to obtain model uncertainty using MNDs. To achieve a scalable method, we propose a novel approach that expresses the parameter posterior in sparse information form. Our inference algorithm is based on a novel Laplace Approximation scheme, which involves a diagonal correction of the Kronecker-factored eigenbasis. As this makes the inversion of the information matrix, an operation that is required for full Bayesian analysis, intractable, we devise a low-rank approximation of this eigenbasis and a memory-efficient sampling scheme. We provide both a theoretical analysis and an empirical evaluation on various benchmark data sets, showing the superiority of our approach over existing methods.
reject
This paper presents a variant of recently developed Kronecker-factored approximations to BNN posteriors. It corrects the diagonal entries of the approximate Hessian, and in order to make this scalable, approximates the Kronecker factors as low-rank. The approach seems reasonable, and is a natural thing to try. The novelty is fairly limited, however, and the calculations are mostly routine. In terms of the experiments: it seems like it improved the Frobenius norm of the error, though it's not clear to me that this would be a good measure of practical effectiveness. On the toy regression experiment, it's hard for me to tell the difference from the other variational methods. It looks like it helped a bit in the quantitative comparisons, though the improvement over K-FAC doesn't seem significant enough to justify acceptance purely based on the results. Reviewers felt like there was a potentially useful idea here and didn't spot any serious red flags, but didn't feel like the novelty or the experimental results were enough to justify acceptance. I tend to agree with this assessment.
train
[ "B1xS0tY3oH", "HJxc4WunoH", "HJxo21UPoH", "HJx192Dnor", "Bklgj6SDsB", "r1xd1v3u9H", "HJgI0TqosH", "SyxXqyLvsB", "SJlxE0rPsB", "rJlKRnHPoH", "r1gQ31udtS", "S1xjOAscqr", "rJxnUJb6cH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n\nWe sincerely thank all the reviewers for their time and efforts. The paper has been thoroughly revised in light of your thoughtful feedback. In this post, we attempt to shortly summarize the main points of paper and changes in the new revision.\n\nA summary:\n\nThis work presents a sparse information form of m...
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, 1, 3, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2020_Bkxd9JBYPH", "rJlKRnHPoH", "SyxXqyLvsB", "HJgI0TqosH", "S1xjOAscqr", "iclr_2020_Bkxd9JBYPH", "HJxo21UPoH", "r1xd1v3u9H", "r1gQ31udtS", "rJxnUJb6cH", "iclr_2020_Bkxd9JBYPH", "iclr_2020_Bkxd9JBYPH", "iclr_2020_Bkxd9JBYPH" ]
iclr_2020_H1lK5kBKvr
Semi-supervised 3D Face Reconstruction with Nonlinear Disentangled Representations
Recovering 3D geometry shape, albedo and lighting from a single image has wide applications in many areas, and is also a typical ill-posed problem. In order to eliminate the ambiguity, face prior knowledge like linear 3D morphable models (3DMM) learned from limited scan data is often adopted in the reconstruction process. However, methods based on linear parametric models cannot generalize well for facial images in the wild with various ages, ethnicities, expressions, poses, and lightings. Recent methods aim to learn a nonlinear parametric model using convolutional neural networks (CNN) to regress the face shape and texture directly. However, the models were only trained on a dataset that is generated from a linear 3DMM. Moreover, the identity and expression representations are entangled in these models, which hinders many facial editing applications. In this paper, we train our model with adversarial loss in a semi-supervised manner on hybrid batches of unlabeled and labeled face images to exploit the value of large amounts of unlabeled face images from unconstrained photo collections. A novel center loss is introduced to make sure that different facial images from the same person have the same identity shape and albedo. Besides, our proposed model disentangles identity, expression, pose, and lighting representations, which improves the overall reconstruction performance and facilitates facial editing applications, e.g., expression transfer. Comprehensive experiments demonstrate that our model produces high-quality reconstructions compared to state-of-the-art methods and is robust to various expression, pose, and lighting conditions.
reject
This paper proposes a semi-supervised method for reconstructing 3D faces from images via a disentangled representation. The method builds on previous work by Tran et al. (2018, 2019). While some results presented in the paper show that this method works well, all reviewers agree that the authors should have provided more experimental evidence to convincingly demonstrate the benefits of their method. The reviewers also have concerns about how computationally expensive this method is and are unconvinced of the contribution of the unlabelled data to the performance of the proposed model. Given that the authors did not address the reviewers’ concerns, and for the reasons stated above, I recommend rejecting this paper.
val
[ "rkx9eFq6tH", "HklBjUQHtB", "Skx902J2FS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Overview:\nThis paper introduces a model for image-based facial 3D reconstruction. The proposed model is an encoder-decoder architecture that is trained in semi-supervised way to map images to sets of vectors representing identity (which encodes albedo and geometry), pose, expression and lighting. The encoder is a...
[ 3, 1, 3 ]
[ 4, 3, 5 ]
[ "iclr_2020_H1lK5kBKvr", "iclr_2020_H1lK5kBKvr", "iclr_2020_H1lK5kBKvr" ]
iclr_2020_S1eq9yrYvH
Subjective Reinforcement Learning for Open Complex Environments
Solving tasks in open environments has been one of the long-time pursuits of reinforcement learning research. We propose that data confusion is the core underlying problem. Although there exist methods that implicitly alleviate it from different perspectives, we argue that their solutions are based on task-specific prior knowledge that is constrained to certain kinds of tasks and lacks theoretical guarantees. In this paper, the Subjective Reinforcement Learning Framework is proposed to state the problem from a broader and more systematic view, and the subjective policy is proposed to represent existing related algorithms in general. Theoretical analysis is given of the conditions for the superiority of a subjective policy, and of the relationship between model complexity and the overall performance. The results are further applied as guidance for algorithm design without task-specific prior knowledge about tasks.
reject
The authors propose a learning framework to reframe non-stationary MDPs as smaller stationary MDPs, thus hopefully addressing problems with contradictory or continually changing environments. A policy is learned for each sub-MDP, and the authors present theoretical guarantees that the reframing does not inhibit agent performance. The reviewers discussed the paper and the authors' rebuttal. They were mainly concerned that the submission offered no practical implementation or demonstration of feasibility, and secondarily concerned that the paper was unclearly written and motivated. The authors' rebuttal did not resolve these issues. My recommendation is to reject the submission and encourage the authors to develop an empirical validation of their method before resubmitting.
train
[ "B1evH6posB", "BkxTucDssr", "rJgpj9tDjB", "ryedVcYPsr", "Hye0HYKwsB", "SklceeAvKS", "SJlHqPKaKS", "rJg8QSNJ9S" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Roughly speaking, the updated paper changes some writing problems, replaces sums by integrals around R, and replaces \"R\" by \"Risk\" where appropriate. While it is a small improvement, the whole paper still lacks a lot of clarity - a sentiment reflected by both other reviewers too.\n\nThis also does not address ...
[ -1, -1, -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, -1, -1, 1, 1, 4 ]
[ "Hye0HYKwsB", "ryedVcYPsr", "SklceeAvKS", "SJlHqPKaKS", "rJg8QSNJ9S", "iclr_2020_S1eq9yrYvH", "iclr_2020_S1eq9yrYvH", "iclr_2020_S1eq9yrYvH" ]
iclr_2020_ByxoqJrtvr
Learning to Reach Goals Without Reinforcement Learning
Imitation learning algorithms provide a simple and straightforward approach for training control policies via standard supervised learning methods. By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of reinforcement learning, at the cost of requiring an expert demonstrator -- typically a person -- to provide the demonstrations. In this paper, we ask: can we use imitation learning to train effective policies without any expert demonstrations? The key observation that makes this possible is that, in the multi-task setting, trajectories that are generated by a suboptimal policy can still serve as optimal examples for other tasks. In particular, in the setting where the tasks correspond to different goals, every trajectory is a successful demonstration for the state that it actually reaches. Informed by this observation, we propose a very simple algorithm for learning behaviors without any demonstrations, user-provided reward functions, or complex reinforcement learning methods. Our method simply maximizes the likelihood of actions the agent actually took in its own previous rollouts, conditioned on the goal being the state that it actually reached. Although related variants of this approach have been proposed previously in imitation learning settings with example demonstrations, we present the first instance of this approach as a method for learning goal-reaching policies entirely from scratch. We present a theoretical result linking self-supervised imitation learning and reinforcement learning, and empirical results showing that it performs competitively with more complex reinforcement learning methods on a range of challenging goal reaching problems.
reject
The authors present an algorithm that utilizes ideas from imitation learning to improve on goal-conditioned policy learning methods that rely on RL, such as hindsight experience replay. Several issues of clarity and the correctness of the main theoretical result were addressed during the rebuttal period in a way that satisfied the reviewers with respect to their concerns in these areas. However, after discussion, the reviewers still felt that there were some fundamental issues with the paper, namely that the applicability of this method to more general RL problems (complex reward functions rather than single-state goals, time) is unclear. The basic idea seems interesting, but it needs further development, and non-trivial modifications, to be broadly applicable as an approach to problems that RL is typically used on. Thus, I recommend rejection of the paper at this time.
train
[ "SkxlVY42sr", "rkx2BrNiiS", "S1ewOro5jB", "rkeZrXFcoS", "S1eiHXW9iB", "r1gszulcjH", "B1lVxde5oS", "Hkxzpwx5sr", "H1xKkYT3tS", "B1e5iMfpYS", "S1xCmqT0tB" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "So I believe the on-policy equvalence you describe in the rebuttal is correct when J_GCSL(pi) is evaluated for trajectories sampled from pi (and becomes a weaker approximation as pi and pi_old deviate). The way Theorem 4.1 is presented just does not make this clear. I would suggest reorganizing that section to r...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 5 ]
[ "rkx2BrNiiS", "S1ewOro5jB", "r1gszulcjH", "S1eiHXW9iB", "B1lVxde5oS", "H1xKkYT3tS", "B1e5iMfpYS", "S1xCmqT0tB", "iclr_2020_ByxoqJrtvr", "iclr_2020_ByxoqJrtvr", "iclr_2020_ByxoqJrtvr" ]
iclr_2020_rklhqkHFDB
LARGE SCALE REPRESENTATION LEARNING FROM TRIPLET COMPARISONS
In this paper, we discuss the fundamental problem of representation learning from a new perspective. It has been observed in many supervised/unsupervised DNNs that the final layer of the network often provides an informative representation for many tasks, even though the network has been trained to perform a particular task. The common ingredient in all previous studies is a low-level feature representation for items, for example, RGB values of images in the image context. In the present work, we assume that no meaningful representation of the items is given. Instead, we are provided with the answers to some triplet comparisons of the following form: Is item A more similar to item B or item C? We provide a fast algorithm based on DNNs that constructs a Euclidean representation for the items, using solely the answers to the above-mentioned triplet comparisons. This problem has been studied in a sub-community of machine learning by the name "Ordinal Embedding". Previous approaches to the problem are painfully slow and cannot scale to larger datasets. We demonstrate that our proposed approach is significantly faster than available methods, and can scale to real-world large datasets. Thereby, we also draw attention to the less explored idea of using neural networks to directly, approximately solve non-convex, NP-hard optimization problems that arise naturally in unsupervised learning problems.
reject
The authors demonstrate how neural networks can be used to learn vectorial representations of a set of items given only triplet comparisons among those items. The reviewers had some concerns regarding the scale of the experiments and strength of the conclusions: empirically, it seemed like there should be more truly large-scale experiments considering that this is a selling point; there should have been more analysis and/or discussion of why/how the neural networks help; and the claim that deep networks are approximately solving an NP-hard problem seemed unimportant as they are routinely used for this purpose in ML problems. With a combination of improved experiments and revised discussion/analysis, I believe a revised version of this paper could make a good submission to a future conference.
train
[ "HJe1Rvq2oH", "HkxS8w92oS", "HkgcGv52iH", "H1gENCyjjS", "ryeO3Av5sB", "ryln6geqiS", "rklIgxx5oH", "SJgYkJgqsS", "SJgNYnkcsS", "HJxl4O3AYB", "HygLj-cG9B", "rkgfiCmT5r", "rkxBU7f0qH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their constructive feedback! We address all the reviewer’s comments below and made necessary revisions to the paper.\n1) What are the nice properties of using the specific loss function and do we lose something by relaxation?\nShort answer: Using the hinge loss leads not to a relaxation b...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "H1gENCyjjS", "H1gENCyjjS", "H1gENCyjjS", "SJgNYnkcsS", "iclr_2020_rklhqkHFDB", "rkgfiCmT5r", "rkxBU7f0qH", "HygLj-cG9B", "HJxl4O3AYB", "iclr_2020_rklhqkHFDB", "iclr_2020_rklhqkHFDB", "iclr_2020_rklhqkHFDB", "iclr_2020_rklhqkHFDB" ]
iclr_2020_r1enqkBtwr
Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm
How many training data are needed to learn a supervised task? It is often observed that the generalization error decreases as n^{-β} where n is the number of training examples and β an exponent that depends on both data and algorithm. In this work we measure β when applying kernel methods to real datasets. For MNIST we find β≈0.4 and for CIFAR10 β≈0.1. Remarkably, β is the same for regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we introduce the Teacher-Student framework for kernels. In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. With a simplifying assumption --- namely that the data are sampled from a regular lattice --- we derive analytically β for translation invariant kernels, using previous results from the kriging literature. Provided that the Student is not too sensitive to high frequencies, β depends only on the training data and their dimension. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, our results quantify how smooth Gaussian data should be to avoid the curse of dimensionality, and indicate that for kernel learning the relevant dimension of the data should be defined in terms of how the distance between nearest data points depends on n. With this definition one obtains reasonable effective smoothness estimates for MNIST and CIFAR10.
reject
The paper studies, theoretically and empirically, the problem when generalization error decreases as $n^{-\beta}$ where $\beta$ is not $\frac{1}{2}$. It analyses a Teacher-Student problem where the Teacher generates data from a Gaussian random field. The paper provides a theorem that derives $\beta$ for Gaussian and Laplace kernels, and shows empirical evidence supporting the theory using MNIST and CIFAR. The reviews contained two low scores, both of which were not confident. A more confident reviewer provided a weak accept score, and interacted multiple times with the authors during the discussion period (which is one of the nice things about the ICLR review process). However, this reviewer also noted that ICLR may not be the best venue for this work. Overall, while this paper shows promise, the negative review scores show that the topic may not be the best fit for the ICLR audience.
train
[ "Sye_AQX0FH", "Skeke2rhjB", "rJl0KNrniH", "SygHlIsjiS", "BJlIOv6LiB", "rJgzzP6UjS", "B1lvLLaLiB", "HklSLYXrcr", "HylkirUAYS" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper experimentally investigates how fast the generalization error decreases when some specific kernel functions are used in real datasets. This paper conducted numerical experiments on several datasets to investigate the decreasing rate of the generalization error, and the rate is determined for such datase...
[ 3, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_r1enqkBtwr", "rJl0KNrniH", "SygHlIsjiS", "rJgzzP6UjS", "Sye_AQX0FH", "HylkirUAYS", "HklSLYXrcr", "iclr_2020_r1enqkBtwr", "iclr_2020_r1enqkBtwr" ]
iclr_2020_HJxp9kBFDS
Invariance vs Robustness of Neural Networks
Neural networks achieve human-level accuracy on many standard datasets used in image classification. The next step is to achieve better generalization to natural (or non-adversarial) perturbations as well as known pixel-wise adversarial perturbations of inputs. Previous work has studied generalization to natural geometric transformations (e.g., rotations) as invariance, and generalization to adversarial perturbations as robustness. In this paper, we examine the interplay between invariance and robustness. We empirically study the following two cases: (a) change in adversarial robustness as we improve only the invariance using equivariant models and training augmentation, (b) change in invariance as we improve only the adversarial robustness using adversarial training. We observe that the rotation invariance of equivariant models (StdCNNs and GCNNs) improves by training augmentation with progressively larger rotations but while doing so, their adversarial robustness does not improve, or worse, it can even drop significantly on datasets such as MNIST. As a plausible explanation for this phenomenon we observe that the average perturbation distance of the test points to the decision boundary decreases as the model learns larger and larger rotations. On the other hand, we take adversarially trained LeNet and ResNet models which have good \ell_\infty adversarial robustness on MNIST and CIFAR-10, and observe that adversarially training them with progressively larger norms keeps their rotation invariance essentially unchanged. In fact, the difference between test accuracy on unrotated test data and on randomly rotated test data up to \theta, for all \theta in [0, 180], remains essentially unchanged after adversarial training. As a plausible explanation for the observed phenomenon we show empirically that the principal components of adversarial perturbations and perturbations given by small rotations are nearly orthogonal.
reject
This paper examines the interplay between the related ideas of invariance and robustness in deep neural network models. Invariance is the notion that small perturbations to an input image (such as rotations or translations) should not change the classification of that image. Robustness is usually taken to be the idea that small perturbations to input images (e.g. noise, whether white or adversarial) should not significantly affect the model's performance. In the context of this paper, robustness is mostly considered in terms of adversarial perturbations that are imperceptible to humans and created to intentionally disrupt a model's accuracy. The results of this investigation suggest that these ideas are mostly unrelated: equivariant models (with architectures designed to encourage the learning of invariances) that are trained with data augmentation whereby input images are given random rotations do not seem to offer any additional adversarial robustness, and similarly using adversarial training to combat adversarial noise does not seem to confer any additional help for learning rotational invariance. (In some cases, these types of training on the one hand seem to make invariance to the other type of perturbations even worse.) Unfortunately, the reviewers do not believe the technical results are of sufficient interest to warrant publication at this time.
train
[ "rkx9Ros2FS", "r1lG531aKr", "rJxQy7DCKS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper examines the interplay between the related ideas of invariance and robustness in deep neural network models. Invariance is the notion that small perturbations to an input image (such as rotations or translations) should not change the classification of that image. Robustness is usually taken to be the i...
[ 3, 1, 3 ]
[ 4, 3, 5 ]
[ "iclr_2020_HJxp9kBFDS", "iclr_2020_HJxp9kBFDS", "iclr_2020_HJxp9kBFDS" ]
iclr_2020_BkxackSKvH
Learning Entailment-Based Sentence Embeddings from Natural Language Inference
Large datasets on natural language inference are a potentially valuable resource for inducing semantic representations of natural language sentences. But in many such models the embeddings computed by the sentence encoder go through an MLP-based interaction layer before predicting the label, and thus some of the information about textual entailment is encoded in the interpretation of sentence embeddings given by this parameterised MLP. In this work we propose a simple interaction layer based on predefined entailment and contradiction scores applied directly to the sentence embeddings. This parameter-free interaction model achieves results on natural language inference competitive with MLP-based models, demonstrating that the trained sentence embeddings directly represent the information needed for textual entailment, and the inductive bias of this model leads to better generalisation to other related datasets.
reject
This paper proposes a method for learning sentence embeddings such that entailment and contradiction relationships between sentence pairs can be inferred by a simple parameter-free operation on the vectors for the two sentences. Reviewers found the method and the results interesting, but in private discussion, couldn't reach a consensus on what (if any) substantial valuable contributions the paper had proven. The performance of the method isn't compellingly strong in absolute or relative terms, yielding doubts about the value of the method for entailment applications, and the reviewers didn't see a strong enough motivation for the line of work to justify publishing it as a tentative or exploratory effort at ICLR.
test
[ "HJgW7iEhYr", "SkgU_Tl9sS", "SJljPhlcor", "Skxf25e9oS", "ryx2DriFtr", "Bkgq7QUptr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n***Update***\nI'd like to thank the authors for responding to my questions and for the additional experiments. I think the new sentence embedding experiments make the paper quite a bit stronger - it would be interesting to scale them up to using SNLI + MNLI to see how much further they can go (right now they are...
[ 6, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, 5, 5 ]
[ "iclr_2020_BkxackSKvH", "ryx2DriFtr", "HJgW7iEhYr", "Bkgq7QUptr", "iclr_2020_BkxackSKvH", "iclr_2020_BkxackSKvH" ]
iclr_2020_Ske6qJSKPH
Scheduling the Learning Rate Via Hypergradients: New Insights and a New Algorithm
We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization. This allows us to explicitly search for schedules that achieve good generalization. We describe the structure of the gradient of a validation error w.r.t. the learning rates, the hypergradient, and based on this we introduce a novel online algorithm. Our method adaptively interpolates between two recently proposed techniques (Franceschi et al., 2017; Baydin et al., 2018), featuring increased stability and faster convergence. We show empirically that the proposed technique compares favorably with baselines and related methods in terms of final test accuracy.
reject
First, I'd like to apologize once again for failing to secure a third reviewer for this paper. To compensate, I checked the paper more thoroughly than standard. The area of online adaptation of the learning rate is of great importance and I appreciate the authors' effort in that direction. The authors abundantly cite the research on gradient-based hyperparameter optimization but I would have appreciated to also see past works on stochastic line search (for instance "A stochastic line-search method with convergence rate") or statistical methods ("Using Statistics to Automate Stochastic Optimization"). The issue with these methods is that, despite usually very positive claims in the paper, they are not that competitive against a carefully tuned fixed schedule and end up not being used in practice. Hence, it is critical to develop a convincing experimental section to assuage doubts. Unfortunately, the experimental section of this work is a bit lacking, as pointed out by both reviewers. I would like to comment on two points specifically: - First, no plot uses wall-clock time as the x-axis. Since the authors state that it can be up to 4 times as slow per iteration, the gains compared to a carefully tuned schedule are unclear. - Second, the use of a single (albeit two variants) dataset also leads to skepticism. Datasets have vastly different optimization properties and, by not using a wide range of them, one can miss the true sensitivity of the proposed algorithm. While I do not think that the paper is ready for publication, I feel like there is a clear path to an improved version that could be submitted to a later conference.
train
[ "HyxaEj1zqB", "rkeIq2w3oS", "rJlwitIoiS", "HyxPrb8ijS", "rklSLolsjS", "rkxda68for", "ryxSgYLMsr", "SkgnLRm0tB" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "In this paper, the authors introduce a hypergradient optimization algorithm for finding learning rate schedules that maximize test set accuracy. The proposed algorithm adaptively interpolates between two recently proposed hyperparameter optimization algorithms and performs comparably in terms of convergence and ge...
[ 6, -1, -1, -1, -1, -1, -1, 1 ]
[ 3, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_Ske6qJSKPH", "iclr_2020_Ske6qJSKPH", "HyxPrb8ijS", "rklSLolsjS", "rkxda68for", "SkgnLRm0tB", "HyxaEj1zqB", "iclr_2020_Ske6qJSKPH" ]
iclr_2020_HkxCcJHtPr
CAT: Compression-Aware Training for bandwidth reduction
Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving visual processing tasks. One of the major obstacles hindering the ubiquitous use of CNNs for inference is their relatively high memory bandwidth requirements, which can be a main energy consumer and throughput bottleneck in hardware accelerators. Accordingly, an efficient feature map compression method can result in substantial performance gains. Inspired by quantization-aware training approaches, we propose a compression-aware training (CAT) method that involves training the model in a way that allows better compression of feature maps during inference. Our method trains the model to achieve low-entropy feature maps, which enables efficient compression at inference time using classical transform coding methods. CAT significantly improves the state-of-the-art results reported for quantization. For example, on ResNet-34 we achieve 73.1% accuracy (0.2% degradation from the baseline) with an average representation of only 1.79 bits per value. Reference implementation accompanies the paper.
reject
This work proposes a compression-aware training (CAT) method that allows efficient compression of feature maps during inference. I read the paper myself. The proposed method is quite straightforward and looks incremental compared with existing approaches based on entropy regularization.
train
[ "rklhGYNC5B", "BJgsBhO8oB", "B1x25mjeiS", "rJxtzQieiB", "HkgO4zslsS", "rJgdsSVysS", "rJghghXPtr", "BklsH3PCcr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The format of the paper does not meet the requirement of ICLR. Due to this, I will give a 3. I suggest the authors to change it as soon as possible.\n\nBesides that, the main idea of the paper is to regularize the training of a neural network to reduce the entropy of its activations. There are extensive experiment...
[ 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_HkxCcJHtPr", "rJxtzQieiB", "rJghghXPtr", "rklhGYNC5B", "BklsH3PCcr", "rklhGYNC5B", "iclr_2020_HkxCcJHtPr", "iclr_2020_HkxCcJHtPr" ]
iclr_2020_S1xJikHtDH
Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration
Generative Adversarial Networks (GANs) are a powerful family of models that learn an underlying distribution to generate synthetic data. Many existing studies of GANs focus on improving the realness of the generated image data for visual applications, and few of them concern improving the quality of the generated data for training other classifiers---a task known as the model compatibility problem. As a consequence, existing GANs often prefer generating `easier' synthetic data that are far from the boundaries of the classifiers, and refrain from generating near-boundary data, which are known to play an important role in training the classifiers. To improve GAN in terms of model compatibility, we propose Boundary-Calibration GANs (BCGANs), which leverage the boundary information from a set of pre-trained classifiers using the original data. In particular, we introduce an auxiliary Boundary-Calibration loss (BC-loss) into the generator of GAN to match the statistics between the posterior distributions of original data and generated data with respect to the boundaries of the pre-trained classifiers. The BC-loss is provably unbiased and can be easily coupled with different GAN variants to improve their model compatibility. Experimental results demonstrate that BCGANs not only generate realistic images like original GANs but also achieve superior model compatibility compared to the original GANs.
reject
The paper presents a method for increasing the "model compatibility" of Generative Adversarial Networks by adding a term to the loss function relating to classification boundaries. The reviewers recognized the importance of the problem, but several concerns were raised about the clarity of the paper, as well as the significance of the experimental results.
train
[ "BJgUq9uoKH", "HkgBnP1b2r", "r1elU4ZhsS", "BJlUb4bnjr", "HJlLQ8ERYH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "In this work authors consider a problem of 'model compatibility' of GANs, i.e. usefulness of the generated samples for classification tasks. Proposed 'Boundary Calibration' GAN attempts to tackle this issue by adding non-adversarial terms to discriminator, obtained as outputs of the classifiers trained on the ori...
[ 3, 3, -1, -1, 3 ]
[ 4, 3, -1, -1, 4 ]
[ "iclr_2020_S1xJikHtDH", "iclr_2020_S1xJikHtDH", "HJlLQ8ERYH", "BJgUq9uoKH", "iclr_2020_S1xJikHtDH" ]
iclr_2020_HyxgoyHtDB
Policy Optimization by Local Improvement through Search
Imitation learning has emerged as a powerful strategy for learning initial policies that can be refined with reinforcement learning techniques. Most strategies in imitation learning, however, rely on per-step supervision either from expert demonstrations, referred to as behavioral cloning or from interactive expert policy queries such as DAgger. These strategies differ on the state distribution at which the expert actions are collected -- the former using the state distribution of the expert, the latter using the state distribution of the policy being trained. However, the learning signal in both cases arises from the expert actions. On the other end of the spectrum, approaches rooted in Policy Iteration, such as Dual Policy Iteration do not choose next step actions based on an expert, but instead use planning or search over the policy to choose an action distribution to train towards. However, this can be computationally expensive, and can also end up training the policy on a state distribution that is far from the current policy's induced distribution. In this paper, we propose an algorithm that finds a middle ground by using Monte Carlo Tree Search (MCTS) to perform local trajectory improvement over rollouts from the policy. We provide theoretical justification for both the proposed local trajectory search algorithm and for our use of MCTS as a local policy improvement operator. We also show empirically that our method (Policy Optimization by Local Improvement through Search or POLISH) is much faster than methods that plan globally, speeding up training by a factor of up to 14 in wall clock time. Furthermore, the resulting policy outperforms strong baselines in both reinforcement learning and imitation learning.
reject
Thanks for your detailed responses to the reviewers, which helped us a lot to better understand your paper. However, given that the current manuscript still contains many unclear parts, we decided not to accept the paper. We hope that the reviewers' comments help you improve your paper for potential future submission.
train
[ "H1gDz3KpYS", "Bygc2W9njS", "HylNqb53iS", "S1eqLZ92or", "rkgsiAyOOS", "S1g0TOI8KS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n[Summary]\nThis paper proposes POLISH, an imitation learning algorithm that provides a balance between Behavioral Cloning (BC) and DAgger. The algorithm reduces the mismatch between the target policy and an expert policy on states obtained from starting at the target policy's state distribution and following the...
[ 3, -1, -1, -1, 3, 1 ]
[ 3, -1, -1, -1, 5, 1 ]
[ "iclr_2020_HyxgoyHtDB", "rkgsiAyOOS", "S1g0TOI8KS", "H1gDz3KpYS", "iclr_2020_HyxgoyHtDB", "iclr_2020_HyxgoyHtDB" ]
iclr_2020_HklliySFDS
Continual Learning with Gated Incremental Memories for Sequential Data Processing
The ability to learn over changing task distributions without forgetting previous knowledge, also known as continual learning, is a key enabler for scalable and trustworthy deployments of adaptive solutions. While the importance of continual learning is largely acknowledged in machine vision and reinforcement learning problems, this is mostly under-documented for sequence processing tasks. This work focuses on characterizing and quantitatively assessing the impact of catastrophic forgetting and task interference when dealing with sequential data in recurrent neural networks. We also introduce a general architecture, named Gated Incremental Memory, for augmenting recurrent models with continual learning skills, whose effectiveness is demonstrated through the benchmarks introduced in this paper.
reject
This manuscript describes a continual learning approach where individual instances consist of sequences, such as language modeling. The paper consists of a definition of a problem setting, tasks in that problem setting, baselines (not based on existing continual learning approaches, which the authors argue is to highlight the need for such techniques, but with which the reviewers took issue), and a novel architecture. Reviews focused on the gravity of the contribution. R1 and R2, in particular, argued that the paper is written as though the problem/benchmark definition is the main contribution. R2 mentions that in spite of this, the methods section jumps directly into the candidate architecture. As mentioned above, several reviewers also took issue with the fact that existing CL techniques are not employed as baselines. The authors engaged with reviewers and promised updates, but did not take the opportunity to update their paper. As many of the reviewers' comments remain unaddressed and the authors' updates did not materialize, I recommend rejection, and encourage the authors to incorporate the feedback they have received in a future submission.
test
[ "rJlJckeOjB", "H1ebIkl_jH", "rylS_A1OoS", "r1esT_GTKr", "r1xv8YTatr", "r1lsUfWpKr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We agree that the model size is the main limitation GIM. However, as you say, it is reasonable to assume that the number of hidden units of each module could decrease as the number of modules increases. Inter-modules connections can foster reuse of previous features, thus reducing the need to learn them from scrat...
[ -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "r1lsUfWpKr", "r1esT_GTKr", "r1xv8YTatr", "iclr_2020_HklliySFDS", "iclr_2020_HklliySFDS", "iclr_2020_HklliySFDS" ]
iclr_2020_H1gZsJBYwH
Hybrid Weight Representation: A Quantization Method Represented with Ternary and Sparse-Large Weights
Previous ternarizations such as the trained ternary quantization (TTQ), which quantized weights to three values (e.g., {−Wn, 0, +Wp}), achieved small model size and an efficient inference process. However, the extreme limit on the number of quantization steps causes some degradation in accuracy. To solve this problem, we propose a hybrid weight representation (HWR) method which produces a network consisting of two types of weights, i.e., ternary weights (TW) and sparse-large weights (SLW). The TW is similar to the TTQ's and requires three states to be stored in memory with 2 bits. We utilize the one remaining state to indicate the SLW, which is referred to as very rare and greater than TW. In HWR, we represent TW with values and SLW with indices of values. By encoding SLW, the networks can preserve their model size while improving their accuracy. To fully utilize HWR, we also introduce a centralized quantization (CQ) process with a weighted ridge (WR) regularizer. They aim to reduce the entropy of weight distributions by centralizing weights toward ternary values. Our comprehensive experiments show that HWR outperforms the state-of-the-art compressed models in terms of the trade-off between model size and accuracy. Our proposed representation increased the AlexNet performance on CIFAR-100 by 4.15% with only a 1.13% increase in model size.
reject
The paper proposes a hybrid weight representation method for deep networks. The authors propose to utilize the extra state in 2-bit ternary representation to encode large weight values. The idea is simple and straightforward. The main concern is with the experimental results. The use of mixed bit width for neural network quantization is not new, but the authors only compare with basic quantization methods in the original submission. In the revised version of the paper, the proposed method performs significantly worse than recent quantization methods such as PACT and QIL. Moreover, the writing can be improved, and parts of the paper need to be clarified.
train
[ "Byxk-x1AFr", "Byei_Y9liB", "SJe_ApUejr", "B1lF3WYliS", "B1g3htmatr", "SJlv2o_Jqr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper works on weight quantization in deep networks. The authors propose to utilize the extra state in 2-bit ternary representation to encode large weight values. The authors also propose to use a weighted ridge regularizer which contains a \"part of L1\" term to make the weights with large values sparse.\n\n...
[ 3, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, 5, 1 ]
[ "iclr_2020_H1gZsJBYwH", "B1g3htmatr", "SJlv2o_Jqr", "Byxk-x1AFr", "iclr_2020_H1gZsJBYwH", "iclr_2020_H1gZsJBYwH" ]
iclr_2020_Hyxfs1SYwH
Alleviating Privacy Attacks via Causal Learning
Machine learning models, especially deep neural networks, have been shown to reveal membership information of inputs in the training data. Such membership inference attacks are a serious privacy concern; for example, patients providing medical records to build a model that detects HIV would not want their identity to be leaked. Further, we show that the attack accuracy amplifies when the model is used to predict samples that come from a different distribution than the training set, which is often the case in real world applications. Therefore, we propose the use of causal learning approaches where a model learns the causal relationship between the input features and the outcome. Causal models are known to be invariant to the training distribution and hence generalize well to shifts between samples from the same distribution and across different distributions. First, we prove that models learned using causal structure provide stronger differential privacy guarantees than associational models under reasonable assumptions. Next, we show that causal models trained on sufficiently large samples are robust to membership inference attacks across different distributions of datasets and those trained on smaller sample sizes always have lower attack accuracy than corresponding associational models. Finally, we confirm our theoretical claims with experimental evaluation on 4 datasets with moderately complex Bayesian networks. We observe that neural network-based associational models exhibit up to 80% attack accuracy under different test distributions and sample sizes whereas causal models exhibit attack accuracy close to a random guess. Our results confirm the value of the generalizability of causal models in reducing susceptibility to privacy attacks.
reject
Maintaining the privacy of membership information contained within the data used to train machine learning models is paramount across many application domains. Moreover, this risk can be more acute when the model is used to make predictions using out-of-sample data. This paper applies a causal learning framework to mitigate this problem, motivated by the fact that causal models can be invariant to the training distribution and therefore potentially more resistant to certain privacy attacks. Both theoretical and empirical results are provided in support of this application of causal modeling. Overall, during the rebuttal period there was no strong support for this paper, and one reviewer in particular mentioned lingering unresolved yet non-trivial concerns. For example, to avoid counter-examples raised by the reviewer, a deterministic labeling function must be introduced, which trivializes the distribution p(Y|X) and leads to a problematic training and testing scenario from a practical standpoint. Similarly, the theoretical treatment involving Markov blankets was deemed confusing and/or misleading even after careful inspection of all author response details. At the very least, this suggests that another round of review is required to clarify these issues before publication, and hence the decision to reject at this time.
train
[ "BklhfeIKjr", "HyxvmkIFjH", "SygxVTHYiH", "S1xL2WPdjS", "rJxRKNwujS", "rJlBIGPOjS", "SJxQuPxuir", "SyeCQLgdjr", "SyePtPLvjB", "HkgBXMUDoS", "B1eaRRSPoS", "SklDkRrDir", "HJlDQaBvjB", "BJxTSqHDjH", "ByxkWp-WjS", "r1lAKWKcKH", "ByxFctTrcS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The short answer is that the generative mechanism for a particular data distribution is not always the causal mechanism. Based on our answer on the need for the definition of a causal Markov Blanket, we really need to find the causal mechanism to construct a causal MB. In causal inference literature, this phenomen...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 1, 4 ]
[ "rJlBIGPOjS", "rJxRKNwujS", "S1xL2WPdjS", "SyeCQLgdjr", "HJlDQaBvjB", "SJxQuPxuir", "SyePtPLvjB", "HkgBXMUDoS", "HkgBXMUDoS", "HJlDQaBvjB", "r1lAKWKcKH", "ByxFctTrcS", "ByxkWp-WjS", "iclr_2020_Hyxfs1SYwH", "iclr_2020_Hyxfs1SYwH", "iclr_2020_Hyxfs1SYwH", "iclr_2020_Hyxfs1SYwH" ]
iclr_2020_SyxGoJrtPr
SPROUT: Self-Progressing Robust Training
Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy and reliable machine learning systems. Current robust training methods such as adversarial training explicitly specify an ``attack'' (e.g., ℓ∞-norm bounded perturbation) to generate adversarial examples during model training in order to improve adversarial robustness. In this paper, we take a different perspective and propose a new framework SPROUT, self-progressing robust training. During model training, SPROUT progressively adjusts training label distribution via our proposed parametrized label smoothing technique, making training free of attack generation and more scalable. We also motivate SPROUT using a general formulation based on vicinity risk minimization, which includes many robust training methods as special cases. Compared with state-of-the-art adversarial training methods (PGD-ℓ∞ and TRADES) under ℓ∞-norm bounded attacks and various invariance tests, SPROUT consistently attains superior performance and is more scalable to large neural networks. Our results shed new light on scalable, effective and attack-independent robust training methods.
reject
This paper proposes a new training technique to produce a learned model robust against adversarial attacks -- without explicitly training on example attacked images. The core idea being that such a training scheme has the potential to reduce the cost in terms of training time for obtaining robustness, while also potentially increasing the clean performance. The method does so by proposing a version of label smoothing and doing two forms of data augmentations (gaussian noise and mixup). The reviewers were mixed on this work. Two recommended weak reject while one recommended weak accept. All agreed that this work addressed an important problem and that the proposed solution was interesting. The authors and reviewers actively engaged in a discussion, in some cases with multiple back and forths. The main concern of the reviewers is the inconclusive experimental evidence. Though the authors did demonstrate strong performance on PGD attacks, the reviewers had concerns about some attack settings like epsilon and how that may unfairly disadvantage the baselines. In addition, the results on CW presented a different story than the results with PGD. Therefore, we do not recommend this work for acceptance in its current form. The work offers strong preliminary evidence of a potential solution to provide robustness without direct adversarial training, but more analysis and explanation of when each component of their proposed solution should increase robustness is needed.
test
[ "rJlmCMu3sH", "SJgRZwmisS", "SklbGQCtjS", "B1llENCFsB", "S1ltyVRKiS", "r1eLHmRYsS", "ryxPtG13KB", "HkeBBIr3KS", "rkgYpift9B" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Response to extra questions/comments:\n\nWe also thank the reviewer for your responsiveness and efforts for reviewing our submission. We did find your comments very helpful in further strengthening our research findings and in improving the presentation of this paper. We have managed to perform all the extra comme...
[ -1, -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "SJgRZwmisS", "B1llENCFsB", "HkeBBIr3KS", "rkgYpift9B", "rkgYpift9B", "ryxPtG13KB", "iclr_2020_SyxGoJrtPr", "iclr_2020_SyxGoJrtPr", "iclr_2020_SyxGoJrtPr" ]
iclr_2020_SJeQi1HKDH
Learning with Social Influence through Interior Policy Differentiation
Animals develop novel skills not only through interaction with the environment but also from the influence of others. In this work we model social influence in the scheme of reinforcement learning, enabling agents to learn both from the environment and from their peers. Specifically, we first define a metric to measure the distance between policies and then quantitatively derive a definition of uniqueness. Unlike previous precarious joint optimization approaches, the social uniqueness motivation in our work is imposed as a constraint to encourage the agent to learn a policy different from the existing agents while still solving the primal task. The resulting algorithm, namely Interior Policy Differentiation (IPD), brings about performance improvement as well as a collection of policies that solve a given task with distinct behaviors.
reject
The paper proposes a mechanism for obtaining diverse policies for solving a task by posing it as a multi-agent problem, and incentivizing the agents to be different from each other via maximizing total variation. The reviewers agreed that this is an interesting idea, but had issues with the placement and exact motivations -- precisely what kind of diversity is the work after, why, and what accordingly related approaches does it need to be compared to. Some reviewers also found the technical and exposition clarity to be lacking. Given the consensus, I recommend rejection at this time, but encourage the authors to take the reviewers' feedback into account and resubmit to another venue.
val
[ "BkxvJHLDsB", "B1xKqSYboB", "HkladEF-jH", "HkxNv7YZjr", "S1g9D58Rtr", "r1lkFUpCKr", "BklTKgR-qB", "ryeNTMTUOr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "Thank you for the rebuttal. After consideration, I'm keeping my score the same, as I am still not convinced by the utility of the policy diversity argument. I'd encourage the authors to explore their method in a concrete setting where this has demonstrable advantages. ", "Thank you for the insightful comments.\n...
[ -1, -1, -1, -1, 3, 3, 3, -1 ]
[ -1, -1, -1, -1, 4, 1, 4, -1 ]
[ "B1xKqSYboB", "S1g9D58Rtr", "r1lkFUpCKr", "BklTKgR-qB", "iclr_2020_SJeQi1HKDH", "iclr_2020_SJeQi1HKDH", "iclr_2020_SJeQi1HKDH", "iclr_2020_SJeQi1HKDH" ]
iclr_2020_SJgXs1HtwH
TreeCaps: Tree-Structured Capsule Networks for Program Source Code Processing
Program comprehension is a fundamental task in software development and maintenance processes. Software developers often need to understand a large amount of existing code before they can develop new features or fix bugs in existing programs. Being able to process programming language code automatically and provide summaries of code functionality accurately can significantly help developers to reduce time spent in code navigation and understanding, and thus increase productivity. Different from natural language articles, source code in programming languages often follows rigid syntactical structures and there can exist dependencies among code elements that are located far away from each other through complex control flows and data flows. Existing studies on tree-based convolutional neural networks (TBCNN) and gated graph neural networks (GGNN) are not able to capture essential semantic dependencies among code elements accurately. In this paper, we propose novel tree-based capsule networks (TreeCaps) and relevant techniques for processing program code in an automated way that encodes code syntactical structures and captures code dependencies more accurately. Based on evaluation on programs written in different programming languages, we show that our TreeCaps-based approach can outperform other approaches in classifying the functionalities of many programs.
reject
This paper proposes an application of capsule networks to code modeling. I see the potential in this approach, but as the reviewers pointed out, in the current draft there are significant issues with respect to both the clarity of the motivation and the empirical results (which start at a much lower baseline than previous work). I am not recommending acceptance at this time, but would encourage the authors to clarify the issues raised in the reviews for a future submission.
val
[ "HJxpWZAiiS", "SklWo-RsjH", "B1g1azRooB", "S1xcQm8k5B", "r1eiYk0oiH", "r1llKxqKYH", "SklkAftTtH" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewer for his valuable time, helpful feedback and insightful suggestions to further improve our study. \n\nQ2-1: It doesn’t appear that some of the motivation for capsule networks on images didn’t seem to transfer neatly to this setting; for example, there is no equivalent of inverse...
[ -1, -1, -1, 1, -1, 3, 1 ]
[ -1, -1, -1, 3, -1, 5, 4 ]
[ "S1xcQm8k5B", "SklkAftTtH", "r1llKxqKYH", "iclr_2020_SJgXs1HtwH", "iclr_2020_SJgXs1HtwH", "iclr_2020_SJgXs1HtwH", "iclr_2020_SJgXs1HtwH" ]
iclr_2020_rygEokBKPS
Yet another but more efficient black-box adversarial attack: tiling and evolution strategies
We introduce a new black-box attack achieving state-of-the-art performance. Our approach is based on a new objective function, borrowing ideas from ℓ∞ white-box attacks and particularly designed to fit derivative-free optimization requirements. It only requires access to the logits of the classifier without any other information, which is a more realistic scenario. Not only do we introduce a new objective function, but we also extend previous work on black-box adversarial attacks to a larger spectrum of evolution strategies and other derivative-free optimization methods. We also highlight a new intriguing property: deep neural networks are not robust to single-shot tiled attacks. With a budget limited to 10,000 queries, our models achieve a success rate of up to 99.2% against the InceptionV3 classifier in the untargeted attack setting, using 630 queries to the network on average, an improvement of 90 queries over the current state of the art. In the targeted setting, with a limited budget of 100,000, we are able to reach a 100% success rate with 6,662 queries on average, i.e., we need 800 fewer queries than the current state of the art.
reject
This paper proposes a new black-box adversarial attack based on tiling and evolution strategies. While the experimental results look promising, the main concern of the reviewers is the novelty of the proposed algorithm, and many things need to be improved in terms of clarity and experiments. The paper did not gather sufficient support from the reviewers even after the author response. I encourage the authors to improve this paper and resubmit to a future conference.
train
[ "rkxAlx2iKS", "HylN_kKRKr", "SJlZXsL2iH", "BJxAAmW9oH", "SkeUlIkviH", "rJxFN8ywsS", "Bylcoryvir", "SkgdbG96KH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposed a new query efficient black-box attack algorithm using better evolution strategies. The authors also add tiling trick to make the attack even more efficient. The experimental results show that the proposed method achieves state-of-the-art attack efficiency in black-box setting.\n\nThe paper ind...
[ 3, 3, -1, -1, -1, -1, -1, 3 ]
[ 4, 5, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_rygEokBKPS", "iclr_2020_rygEokBKPS", "iclr_2020_rygEokBKPS", "HylN_kKRKr", "HylN_kKRKr", "SkgdbG96KH", "rkxAlx2iKS", "iclr_2020_rygEokBKPS" ]
iclr_2020_SJlEs1HKDr
Attentive Sequential Neural Processes
Sequential Neural Processes (SNP) is a new class of models that can meta-learn a temporal stochastic process of stochastic processes by modeling temporal transition between Neural Processes. As Neural Processes (NP) suffers from underfitting, SNP is also prone to the same problem, even more severely due to its temporal context compression. Applying attention which resolves the problem of NP, however, is a challenge in SNP, because it cannot store the past contexts over which it is supposed to apply attention. In this paper, we propose the Attentive Sequential Neural Processes (ASNP) that resolve the underfitting in SNP by introducing a novel imaginary context as a latent variable and by applying attention over the imaginary context. We evaluate our model on 1D Gaussian Process regression and 2D moving MNIST/CelebA regression. We apply ASNP to implement Attentive Temporal GQN and evaluate on the moving-CelebA task.
reject
This manuscript outlines a method to address the described under-fitting issues of sequential neural processes. The primary contribution is an attention mechanism depending on a context generated through an RNN network. Empirical evaluation indicates promising results on some benchmark tasks. In reviews and discussion, the reviewers and AC agreed that the results look promising, albeit on somewhat simplified tasks. It was also brought up in reviews and discussions that the technical contributions seem incremental. This, combined with the limited empirical evaluation, suggests that this work might be preliminary for conference publication. Overall, the manuscript in its current state is borderline and would be significantly improved either by additional conceptual contributions or by a more thorough empirical evaluation.
train
[ "S1lXjVv2KH", "rylOAyonsB", "BkeKZHc2oS", "BJlWHUD3iH", "r1emYWw3sS", "BJl48Ww2ir", "Sylh5bkbir", "rkgjX7waFr", "ryeKOFZW5S" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Authors present a method to address the problem of underfitting found in sequential neural processes. They cover the literature appropriately in regards to neural processes and developments pertaining to tackling the underfitting problem by applying an attention mechanism. Although, this has successfully been achi...
[ 6, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_SJlEs1HKDr", "BkeKZHc2oS", "BJl48Ww2ir", "iclr_2020_SJlEs1HKDr", "S1lXjVv2KH", "rkgjX7waFr", "ryeKOFZW5S", "iclr_2020_SJlEs1HKDr", "iclr_2020_SJlEs1HKDr" ]
iclr_2020_B1eBoJStwr
Semi-supervised semantic segmentation needs strong, high-dimensional perturbations
Consistency regularization describes a class of approaches that have yielded ground-breaking results in semi-supervised classification problems. Prior work has established the cluster assumption, under which the data distribution consists of uniform class clusters of samples separated by low-density regions, as key to its success. We analyze the problem of semantic segmentation and find that the data distribution does not exhibit low-density regions separating classes, and offer this as an explanation for why semi-supervised segmentation is a challenging problem. We then identify the conditions that allow consistency regularization to work even without such low-density regions. This allows us to generalize the recently proposed CutMix augmentation technique to a powerful masked variant, CowMix, leading to a successful application of consistency regularization in the semi-supervised semantic segmentation setting and reaching state-of-the-art results on several standard datasets.
reject
This paper proposes a method for semi-supervised semantic segmentation through consistency (with respect to various perturbations) regularization. While the reviewers believe that this paper contains interesting ideas and that it has been substantially improved from its original form, it is not yet ready for acceptance to ICLR-2020. With a little bit of polish, this paper is likely to be accepted at another venue.
train
[ "HyxVtjH3sB", "HyxCccH3iB", "rJg6t8rPsr", "H1lvGISPoB", "HkxEhHrDoS", "SJlKzMliKB", "H1eLF2upFH", "BklLvzGCFS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have added a little more to Section 3.2 paragraph 4, in that we have stated that the perturbations should be high dimensional in order to adequately constrain a decision boundary in the high-dimensional space of natural images.", "We have added results for ICT, CutOut, CutMix and CowOut using DeepLab2 for Cit...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 3, 1, 4 ]
[ "H1lvGISPoB", "HkxEhHrDoS", "SJlKzMliKB", "H1eLF2upFH", "BklLvzGCFS", "iclr_2020_B1eBoJStwr", "iclr_2020_B1eBoJStwr", "iclr_2020_B1eBoJStwr" ]
iclr_2020_ryedjkSFwr
Global Momentum Compression for Sparse Communication in Distributed SGD
With the rapid growth of data, distributed stochastic gradient descent~(DSGD) has been widely used for solving large-scale machine learning problems. Due to the latency and limited bandwidth of networks, communication has become the bottleneck of DSGD when training large-scale models like deep neural networks. Communication compression with sparsified gradients, abbreviated as \emph{sparse communication}, has been widely used for reducing the communication cost of DSGD. Recently, a method called deep gradient compression~(DGC) was proposed to combine memory gradient and momentum SGD for sparse communication. DGC has achieved promising performance in practice. However, theory about the convergence of DGC is lacking. In this paper, we propose a novel method, called \emph{\underline{g}}lobal \emph{\underline{m}}omentum \emph{\underline{c}}ompression~(GMC), for sparse communication in DSGD. GMC also combines memory gradient and momentum SGD, but unlike DGC, which adopts local momentum, GMC adopts global momentum. We theoretically prove the convergence rate of GMC for both convex and non-convex problems. To the best of our knowledge, this is the first work that proves the convergence of distributed momentum SGD~(DMSGD) with sparse communication and memory gradient. Empirical results show that, compared with the DMSGD counterpart without sparse communication, GMC can reduce the communication cost by approximately 100-fold without loss of generalization accuracy. GMC can also achieve comparable~(sometimes better) performance compared with DGC, with an extra theoretical guarantee.
reject
The authors propose a method called global momentum compression for the sparse communication setting and provide some theoretical results on the convergence rate. The convergence result is interesting, but the underlying assumptions used in the analysis appear very strong. Moreover, the proposed algorithm has limited novelty as it is only a minor modification. Another main concern is that the proposed algorithm shows little performance improvement in the experiments. Moreover, more related algorithms should be included in the experimental comparison.
train
[ "HJg4VKwbKB", "B1gmm0BAKB", "HJgWOKP0YH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Gradient sparsification is an important technique to reduce the communication overhead in distributed training. In this paper, the authors proposed a training method called global momentum compression (GMC) for distributed momentum SGD with sparse gradient. Following existing gradient sparsification techniques suc...
[ 3, 3, 3 ]
[ 3, 3, 4 ]
[ "iclr_2020_ryedjkSFwr", "iclr_2020_ryedjkSFwr", "iclr_2020_ryedjkSFwr" ]
iclr_2020_SyeYiyHFDH
Convergence Analysis of a Momentum Algorithm with Adaptive Step Size for Nonconvex Optimization
Although Adam is a very popular algorithm for optimizing the weights of neural networks, it has been recently shown that it can diverge even in simple convex optimization examples. Therefore, several variants of Adam have been proposed to circumvent this convergence issue. In this work, we study the algorithm for smooth nonconvex optimization under a boundedness assumption on the adaptive learning rate. The bound on the adaptive step size depends on the Lipschitz constant of the gradient of the objective function and provides safe theoretical adaptive step sizes. Under this boundedness assumption, we show a novel first order convergence rate result in both deterministic and stochastic contexts. Furthermore, we establish convergence rates of the function value sequence using the Kurdyka-Lojasiewicz property.
reject
The reviewers have reached consensus that while the paper is interesting, it could use more time. We urge the authors to continue their investigations.
train
[ "BJlNQLdatS", "SJxODyRTtB", "H1exRRr0FH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper provides convergence analyses for momentum methods using adaptive step size for non-convex problems under a bounded assumption on learning rates. Concretely, a sublinear convergence rate under a general setting and improved convergence rates under KL-condition are provided.\n\nAn interesting point of th...
[ 3, 3, 3 ]
[ 3, 3, 3 ]
[ "iclr_2020_SyeYiyHFDH", "iclr_2020_SyeYiyHFDH", "iclr_2020_SyeYiyHFDH" ]
iclr_2020_S1eqj1SKvr
TOWARDS FEATURE SPACE ADVERSARIAL ATTACK
We propose a new type of adversarial attack on Deep Neural Networks (DNNs) for image classification. Different from most existing attacks that directly perturb input pixels, our attack focuses on perturbing abstract features, more specifically features that denote styles, including interpretable styles such as vivid colors and sharp outlines, and uninterpretable ones. It induces model misclassification by injecting style changes insensitive to humans through an optimization procedure. We show that state-of-the-art pixel-space adversarial attack detection and defense techniques are ineffective in guarding against feature-space attacks.
reject
This paper provides an improved feature-space adversarial attack. However, the significance of the contribution is unclear, in part because an important prior reference (Song et al.) was omitted. Unfortunately the paper is borderline, and not above the bar for acceptance in the current pool.
train
[ "SJgzniW5sr", "BygAhn-5sr", "HkxaE3ZcoH", "SklK3X3ptB", "ryxidrATYB", "BJlmoHxaKH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank for your constructive comments. Here we list your concerns and answer them one by one.\n\nR2Q1: novelty\nIt is a novel way to conduct feature space attack with the combination of style transfer and manipulation of model internal embedding, as pointed out by Review #1. Common style transfer task requires a...
[ -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, 3, 5, 4 ]
[ "ryxidrATYB", "BJlmoHxaKH", "SklK3X3ptB", "iclr_2020_S1eqj1SKvr", "iclr_2020_S1eqj1SKvr", "iclr_2020_S1eqj1SKvr" ]
iclr_2020_BJlisySYPS
Modelling the influence of data structure on learning in neural networks
The lack of crisp mathematical models that capture the structure of real-world data sets is a major obstacle to the detailed theoretical understanding of deep neural networks. Here, we first demonstrate the effect of structured data sets by experimentally comparing the dynamics and the performance of two-layer networks trained on two different data sets: (i) an unstructured synthetic data set containing random i.i.d. inputs, and (ii) a simple canonical data set such as MNIST images. Our analysis reveals two phenomena related to the dynamics of the networks and their ability to generalise that only appear when training on structured data sets. Second, we introduce a generative model for data sets, where high-dimensional inputs lie on a lower-dimensional manifold and have labels that depend only on their position within this manifold. We call it the *hidden manifold model* and we experimentally demonstrate that training networks on data sets drawn from this model reproduces both the phenomena seen during training on MNIST.
reject
The paper examines the idea that real world data is highly structured / lies on a low-dimensional manifold. The authors show differences in neural network dynamics when trained on structured (MNIST) vs. unstructured datasets (random), and show that "structure" can be captured by their new "hidden manifold" generative model that explicitly considers some low-dimensional manifold. The reviewers perceived a lack of actionable insights following the paper, since in general these ideas are known, and for MNIST to be a limited dataset, despite finding the paper generally clear and correct. Following the discussion, I must recommend rejection at this time, but highly encourage the authors to take the insights developed in the paper a bit further and submit to another venue. E.g. trying to improve our algorithms by considering the inductive bias of structure of the hidden manifold, or developing a systematic and quantifiable notion of structure for many different datasets that correlate with difficulty of training would both be great contributions.
train
[ "HklA9S5hsH", "r1g-YBknoB", "ByleSIzooB", "Syx18KbjiS", "BkxrJY-sjH", "ryx4oO-iir", "Skxrfl2ptB", "BJePTrbx9H", "HkeSXiWb9B" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I suppose my point was more of a meta-scientific one: many projects aim to identify something interesting by comparing between [synthetic toy dataset] and MNIST, and many such projects find something interesting. The moment those results are tried on [something more complicated than MNIST], they either fail, or t...
[ -1, -1, -1, -1, -1, -1, 1, 1, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "ryx4oO-iir", "ByleSIzooB", "BkxrJY-sjH", "Skxrfl2ptB", "BJePTrbx9H", "HkeSXiWb9B", "iclr_2020_BJlisySYPS", "iclr_2020_BJlisySYPS", "iclr_2020_BJlisySYPS" ]
iclr_2020_HygaikBKvS
Off-Policy Actor-Critic with Shared Experience Replay
We investigate the combination of actor-critic reinforcement learning algorithms with uniform large-scale experience replay and propose solutions for two challenges: (a) efficient actor-critic learning with experience replay (b) stability of very off-policy learning. We employ those insights to accelerate hyper-parameter sweeps in which all participating agents run concurrently and share their experience via a common replay module. To this end we analyze the bias-variance tradeoffs in V-trace, a form of importance sampling for actor-critic methods. Based on our analysis, we then argue for mixing experience sampled from replay with on-policy experience, and propose a new trust region scheme that scales effectively to data distributions where V-trace becomes unstable. We provide extensive empirical validation of the proposed solution. We further show the benefits of this setup by demonstrating state-of-the-art data efficiency on Atari among agents trained up until 200M environment frames.
reject
The paper presents an off-policy actor-critic scheme where i) a buffer storing the trajectories from several agents is used (off-policy replay) and mixed with the on-line data from the current agent; ii) a trust-region estimator is used to select trajectories that are sufficiently close to the current policy (e.g. in the sense of a KL divergence). As noted by the reviews, the results are impressive. Quite a few concerns still remain: * After Fig. 1 (revised version), what matters is the shared replay, where the agent actually benefits from the experience of 9 other different agents; this implies that the population-based training observes 9x more frames than the non-shared version, raising the question of whether the comparison is fair; * the trust-region estimator might reduce the data seen by the agent, leading it to overfit the past (Fig. 3, left); * the influence of the $b$ hyper-parameter (the trust threshold) is not discussed. In standard trust-region-based optimization methods, the trust region is gradually narrowed, suggesting that parameter $b$ here should evolve over time.
train
[ "SyeIzEp_iH", "SkgcOXT_iH", "SJxs4m6doS", "rkgGtMTdiH", "SylTACkgjH", "ryxo9N4wKH", "rJget8QTtH", "HygTpk4Z9S" ]
[ "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the question. We have addressed it in the updated version of the paper. In Figure 1 we now also present a single agent that uses the same hyper-parameter schedule that was published by Espeholt et al. (2018). This agent obtains a score of 431% human normalized median across the 57 atari games, achiev...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 1, 5, 5 ]
[ "SylTACkgjH", "ryxo9N4wKH", "rJget8QTtH", "HygTpk4Z9S", "iclr_2020_HygaikBKvS", "iclr_2020_HygaikBKvS", "iclr_2020_HygaikBKvS", "iclr_2020_HygaikBKvS" ]
iclr_2020_BJgRsyBtPB
A Greedy Approach to Max-Sliced Wasserstein GANs
Generative Adversarial Networks have made data generation possible in various use cases, but in case of complex, high-dimensional distributions it can be difficult to train them, because of convergence problems and the appearance of mode collapse. Sliced Wasserstein GANs and especially the application of the Max-Sliced Wasserstein distance made it possible to approximate Wasserstein distance during training in an efficient and stable way and helped ease convergence problems of these architectures. This method transforms sample assignment and distance calculation into sorting the one-dimensional projection of the samples, which results in a sufficient approximation of the high-dimensional Wasserstein distance. In this paper we will demonstrate that the approximation of the Wasserstein distance by sorting the samples is not always the optimal approach and the greedy assignment of the real and fake samples can result in faster convergence and a better approximation of the original distribution.
reject
The paper proposes a variant of the max-sliced Wasserstein distance, where instead of sorting, a greedy assignment is performed. As no theory is provided, the paper is purely of experimental nature. Unfortunately the work is too preliminary to warrant publication at this time, and would need further experimental or theoretical strengthening to be of general interest to the ICLR community.
val
[ "rkxm6Fp2tr", "BylRM78J9S", "rygE9UQgqS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a variant of the max-sliced Wasserstein distance, where instead of sorting, a greedy assignment is performed. As no theory is provided, the paper is purely of experimental nature. \n\nConsidering the above, the experimental evaluation is way too preliminary:\n\n- Looking at the generated images,...
[ 1, 1, 1 ]
[ 4, 1, 3 ]
[ "iclr_2020_BJgRsyBtPB", "iclr_2020_BJgRsyBtPB", "iclr_2020_BJgRsyBtPB" ]
iclr_2020_SygRikHtvS
Coresets for Accelerating Incremental Gradient Methods
Many machine learning problems reduce to the problem of minimizing an expected risk. Incremental gradient (IG) methods, such as stochastic gradient descent and its variants, have been successfully used to train the largest of machine learning models. IG methods, however, are in general slow to converge and sensitive to stepsize choices. Therefore, much work has focused on speeding them up by reducing the variance of the estimated gradient or choosing better stepsizes. An alternative strategy would be to select a carefully chosen subset of training data, train only on that subset, and hence speed up optimization. However, it remains an open question how to achieve this, both theoretically as well as practically, while not compromising on the quality of the final model. Here we develop CRAIG, a method for selecting a weighted subset (or coreset) of training data in order to speed up IG methods. We prove that by greedily selecting a subset S of training data that minimizes the upper-bound on the estimation error of the full gradient, running IG on this subset will converge to the (near)optimal solution in the same number of epochs as running IG on the full data. But because at each epoch the gradients are computed only on the subset S, we obtain a speedup that is inversely proportional to the size of S. Our subset selection algorithm is fully general and can be applied to most IG methods. We further demonstrate practical effectiveness of our algorithm, CRAIG, through an extensive set of experiments on several applications, including logistic regression and deep neural networks. Experiments show that CRAIG, while achieving practically the same loss, speeds up IG methods by up to 10x for convex and 3x for non-convex (deep learning) problems.
reject
This paper investigates the practical and theoretical consequences of speeding up training using incremental gradient methods (such as stochastic descent) by calculating the gradients with respect to a specifically chosen sparse subset of data. The reviewers were quite split on the paper. On the one hand, there was a general excitement about the direction of the paper. The idea of speeding up gradient descent is of course hugely relevant to the current machine learning landscape. The approach was also considered novel, and the paper well-written. However, the reviewers also pointed out multiple shortcomings. The experimental section was deemed to lack clarity and baselines. The results on standard dataset were very different from expected, causing worry about the reliability, although this has partially been addressed in additional experiments. The applicability to deep learning and large dataset, as well as the significance of time saved by using this method, were other worries. Unfortunately, I have to agree with the majority of the reviewers that the idea is fascinating, but that more work is required for acceptance to ICLR.
train
[ "rkgf0hu2iS", "H1xpWpO2iB", "rygP1n_hsr", "BkghxNl2OS", "Hkg22VYtKH", "H1l_fhnnqH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for insightful feedback and for acknowledging the novelty and the algorithmic strength of our work. The reviewer asks great questions, and we provide detailed answers below. \n\nRE: Approximation of $d_{ij}$s\nBetter estimation of $d_{ij}$s results in a smaller error in estimating the full gr...
[ -1, -1, -1, 8, 3, 3 ]
[ -1, -1, -1, 3, 3, 4 ]
[ "Hkg22VYtKH", "BkghxNl2OS", "H1l_fhnnqH", "iclr_2020_SygRikHtvS", "iclr_2020_SygRikHtvS", "iclr_2020_SygRikHtvS" ]
iclr_2020_BJgyn1BFwS
Global Adversarial Robustness Guarantees for Neural Networks
We investigate global adversarial robustness guarantees for machine learning models. Specifically, given a trained model we consider the problem of computing the probability that its prediction at any point sampled from the (unknown) input distribution is susceptible to adversarial attacks. Assuming continuity of the model, we prove measurability for a selection of local robustness properties used in the literature. We then show how concentration inequalities can be employed to compute global robustness with estimation error upper-bounded by ϵ, for any ϵ>0 selected a priori. We utilise the methods to provide statistically sound analysis of the robustness/accuracy trade-off for a variety of neural networks architectures and training methods on MNIST, Fashion-MNIST and CIFAR. We empirically observe that robustness and accuracy tend to be negatively correlated for networks trained via stochastic gradient descent and with iterative pruning techniques, while a positive trend is observed between them in Bayesian settings.
reject
The authors propose a framework for estimating "global robustness" of a neural network, defined as the expected value of "local robustness" (robustness to small perturbations) over the data distribution. The authors prove that the local robustness metric is measurable and, under this condition, derive a statistically efficient estimator. The authors use gradient-based attacks to approximate local robustness in practice and report extensive experimental results across several datasets. While the paper does make some interesting contributions, the reviewers were concerned about the following issues: 1) The measurability result, while technically important, is not surprising and does not add much insight algorithmically or statistically into the problem at hand. Outside of this, the paper does not make any significant technical contributions. 2) The paper is poorly organized and does not clearly articulate the main contributions and their significance relative to prior work. 3) The fact that the local robustness metric is approximated via gradient-based attacks makes the final results void of any guarantees, since there are no guarantees that gradient-based attacks compute the worst-case adversarial perturbation. This calls into question the main contribution claim of the paper on computing global robustness guarantees. While some of the technical aspects of the reviewers' concerns were clarified during the discussion phase, this was not sufficient to address the fundamental issues raised above. Hence, I recommend rejection.
train
[ "Skl1SoDAFH", "rygk6TWqiH", "Byeyd6Zqir", "Skgtq0ZqjH", "SylLPC-5sS", "S1xJZR-5sH", "rJxOXFAoFS", "SkxcESapFS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the adversarial robustness of neural networks by giving theoretical guarantees, providing statistical estimators and running experiments. It is a lot of work and it is reasonably written. The problem is that a fair bit of it is quite basic: for example the measurability property is very much exp...
[ 3, -1, -1, -1, -1, -1, 1, 1 ]
[ 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_BJgyn1BFwS", "SkxcESapFS", "Skl1SoDAFH", "rJxOXFAoFS", "rJxOXFAoFS", "rJxOXFAoFS", "iclr_2020_BJgyn1BFwS", "iclr_2020_BJgyn1BFwS" ]
iclr_2020_H1xJhJStPS
Equilibrium Propagation with Continual Weight Updates
Equilibrium Propagation (EP) is a learning algorithm that bridges Machine Learning and Neuroscience, by computing gradients closely matching those of Backpropagation Through Time (BPTT), but with a learning rule local in space. Given an input x and associated target y, EP proceeds in two phases: in the first phase neurons evolve freely towards a first steady state; in the second phase output neurons are nudged towards y until they reach a second steady state. However, in existing implementations of EP, the learning rule is not local in time: the weight update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically. This is a major impediment to the biological plausibility of EP and its efficient hardware implementation. In this work, we propose a version of EP named Continual Equilibrium Propagation (C-EP) where neuron and synapse dynamics occur simultaneously throughout the second phase, so that the weight update becomes local in time. We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT (Theorem 1). We demonstrate training with C-EP on MNIST and generalize C-EP to neural networks where neurons are connected by asymmetric connections. We show through experiments that the more closely the network updates follow the gradients of BPTT, the better it performs in terms of training. These results bring EP a step closer to biology while maintaining its intimate link with backpropagation.
reject
Main content: the paper introduces a new variant of the equilibrium propagation algorithm that continually updates the weights, making it unnecessary to save steady states. Summary of discussion: reviewer 1 likes the idea but points out many issues with the proofs; reviewer 2 really likes the novelty of the paper, but the review is not detailed, particularly in discussing pros/cons; reviewer 3 likes the ideas but has questions on the proofs, and also questions why MNIST is used as the evaluation task. Recommendation: interesting idea, but the writing/proofs could be clarified further. Vote reject.
train
[ "ByxzbYN3oS", "HJxY6VU4sS", "H1ewjm8EiH", "SJlBGB8Nir", "SyxII4IEiB", "rygnS_8nYH", "BkxSJJoJ9H", "H1e010yl9r" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their valuable comments, which have help us improve our manuscript - see our revised version.\n\nBased on their feedback, we have now revised our manuscript with the following amendments:\n\n1- To address the request of Reviewer # 3, we have now defined precisely what we meant by \"biolo...
[ -1, -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, -1, 4, 1, 3 ]
[ "iclr_2020_H1xJhJStPS", "rygnS_8nYH", "BkxSJJoJ9H", "H1e010yl9r", "H1ewjm8EiH", "iclr_2020_H1xJhJStPS", "iclr_2020_H1xJhJStPS", "iclr_2020_H1xJhJStPS" ]
iclr_2020_Sygg3JHtwB
Step Size Optimization
This paper proposes a new approach for step size adaptation in gradient methods. The proposed method called step size optimization (SSO) formulates the step size adaptation as an optimization problem which minimizes the loss function with respect to the step size for the given model parameters and gradients. Then, the step size is optimized based on alternating direction method of multipliers (ADMM). SSO does not require the second-order information or any probabilistic models for adapting the step size, so it is efficient and easy to implement. Furthermore, we also introduce stochastic SSO for stochastic learning environments. In the experiments, we integrated SSO to vanilla SGD and Adam, and they outperformed state-of-the-art adaptive gradient methods including RMSProp, Adam, L4-Adam, and AdaBound on extensive benchmark datasets.
reject
The paper is rejected based on unanimous reviews.
train
[ "Bkecwky9KH", "S1xEkJKysr", "ryeaLodyoS", "HkxuABPC5S", "S1l1vLzkqB", "SkeABj_TKH", "rkx1fl00YH", "HJx3p4tTYH", "r1xG0jhCuB" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "public", "public", "author", "public" ]
[ "First, I would like to point out that there has not been a conclusion or discussion section included, therefore the paper appears to be incomplete.\nAside from this the main contribution of the paper is a study on optimising the step size in gradient methods. They achieve this through the use of alternating direct...
[ 3, -1, -1, 3, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2020_Sygg3JHtwB", "Bkecwky9KH", "HkxuABPC5S", "iclr_2020_Sygg3JHtwB", "rkx1fl00YH", "iclr_2020_Sygg3JHtwB", "HJx3p4tTYH", "SkeABj_TKH", "iclr_2020_Sygg3JHtwB" ]
iclr_2020_H1gx3kSKPS
Stein Bridging: Enabling Mutual Reinforcement between Explicit and Implicit Generative Models
Deep generative models are generally categorized into explicit models and implicit models. The former assumes an explicit density form whose normalizing constant is often unknown; while the latter, including generative adversarial networks (GANs), generates samples using a push-forward mapping. In spite of substantial recent advances demonstrating the power of the two classes of generative models in many applications, both of them, when used alone, suffer from respective limitations and drawbacks. To mitigate these issues, we propose Stein Bridging, a novel joint training framework that connects an explicit density estimator and an implicit sample generator with Stein discrepancy. We show that the Stein Bridge induces new regularization schemes for both explicit and implicit models. Convergence analysis and extensive experiments demonstrate that the Stein Bridging i) improves the stability and sample quality of the GAN training, and ii) facilitates the density estimator to seek more modes in data and alleviate the mode-collapse issue. Additionally, we discuss several applications of Stein Bridging and useful tricks in practical implementation used in our experiments.
reject
The paper proposes a generative model that jointly trains an implicit generative model and an explicit energy based model using Stein's method. There are concerns about technical correctness of the proofs and the authors are advised to look carefully into the points raised by the reviewers.
test
[ "SklGwvQpKH", "B1l7gurisS", "rkeBXBZiiH", "SJeaJON9oH", "BJxgEG-joH", "B1xDd0essH", "BkeDzdV5oH", "HyepbwEcsS", "SJe9euyhFS", "rklud-o3tH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your rebuttal. The paper improved after the rebuttal but I still think point 5 in the rebuttal is problematic since using d'=d, may not guarantee that we have a proper metric as claimed by the authors. I m updating my score as a weak reject for this paper.\n\nSummary of the paper: \n\nThe paper prop...
[ 3, -1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_H1gx3kSKPS", "rkeBXBZiiH", "BJxgEG-joH", "rklud-o3tH", "B1xDd0essH", "BkeDzdV5oH", "SJe9euyhFS", "SklGwvQpKH", "iclr_2020_H1gx3kSKPS", "iclr_2020_H1gx3kSKPS" ]
iclr_2020_SkxWnkStvS
Searching for Stage-wise Neural Graphs In the Limit
Search space is a key consideration for neural architecture search. Recently, Xie et al. (2019a) found that randomly generated networks from the same distribution perform similarly, which suggests we should search for random graph distributions instead of graphs. We propose graphon as a new search space. A graphon is the limit of a Cauchy sequence of graphs and a scale-free probabilistic distribution, from which graphs with different numbers of vertices can be drawn. This property enables us to perform NAS using fast, low-capacity models and scale the found models up when necessary. We develop an algorithm for NAS in the space of graphons and empirically demonstrate that it can find stage-wise graphs that outperform DenseNet and other baselines on ImageNet.
reject
This paper proposes a graphon-based search space for neural architecture search. Unfortunately, the paper as currently stands and the small effect sizes in the experimental results raise questions about the merits of actually employing such a search space for the specific task of NAS. The reviewers expressed concerns that the results do not convincingly support graphon being a superior search space as claimed in the paper.
train
[ "B1ljIu0jjr", "HyxpXICoor", "Hyxu-HCojB", "B1xkzupttH", "BklbTSL0Fr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I agree that their proposed model allows for more architectures but in practice it is not much stronger than WS-G. \n\nWe have updated results and as graph sizes increase, performance gaps become more apparent and we go up to Densenet 264 where connectivity improvements results in improvements of up to 0.8%.\n\n\n...
[ -1, -1, -1, 3, 1 ]
[ -1, -1, -1, 5, 4 ]
[ "B1xkzupttH", "BklbTSL0Fr", "iclr_2020_SkxWnkStvS", "iclr_2020_SkxWnkStvS", "iclr_2020_SkxWnkStvS" ]
iclr_2020_ryGWhJBtDB
Hyperparameter Tuning and Implicit Regularization in Minibatch SGD
This paper makes two contributions towards understanding how the hyperparameters of stochastic gradient descent affect the final training loss and test accuracy of neural networks. First, we argue that stochastic gradient descent exhibits two regimes with different behaviours; a noise dominated regime which typically arises for small or moderate batch sizes, and a curvature dominated regime which typically arises when the batch size is large. In the noise dominated regime, the optimal learning rate increases as the batch size rises, and the training loss and test accuracy are independent of batch size under a constant epoch budget. In the curvature dominated regime, the optimal learning rate is independent of batch size, and the training loss and test accuracy degrade as the batch size rises. We support these claims with experiments on a range of architectures including ResNets, LSTMs and autoencoders. We always perform a grid search over learning rates at all batch sizes. Second, we demonstrate that small or moderately large batch sizes continue to outperform very large batches on the test set, even when both models are trained for the same number of steps and reach similar training losses. Furthermore, when training Wide-ResNets on CIFAR-10 with a constant batch size of 64, the optimal learning rate to maximize the test accuracy only decays by a factor of 2 when the epoch budget is increased by a factor of 128, while the optimal learning rate to minimize the training loss decays by a factor of 16. These results confirm that the noise in stochastic gradients can introduce beneficial implicit regularization.
reject
The authors provide an empirical evaluation of batch size and learning rate selection and its effect on training and generalization performance. As the authors and reviewers note, this is an active area of research, with many results closely related to the contributions of this paper already existing in the literature. In light of this work, reviewers felt that this paper did not clearly place itself in the appropriate context to make its contributions clear. Following the rebuttal, the reviewers' minds remained unchanged.
train
[ "rkgI_uAYjB", "rye3zaZ7or", "HkemB2WXiB", "ryltuFZ7sr", "r1l1CEFwKr", "rJxkq6waYr", "BJgmhEfTcH", "S1xad3lftr", "rJgnJRYXuS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "I've read your response and my score remains unchanged because I haven't seen any update of the paper.", "We thank the reviewer for their comments.\n\nAlthough our primary contributions are empirical, we also provided a detailed theoretical discussion in section 2, where we give a clear and simple account of why...
[ -1, -1, -1, -1, 3, 3, 3, -1, -1 ]
[ -1, -1, -1, -1, 4, 3, 3, -1, -1 ]
[ "ryltuFZ7sr", "BJgmhEfTcH", "rJxkq6waYr", "r1l1CEFwKr", "iclr_2020_ryGWhJBtDB", "iclr_2020_ryGWhJBtDB", "iclr_2020_ryGWhJBtDB", "rJgnJRYXuS", "iclr_2020_ryGWhJBtDB" ]
iclr_2020_rklMnyBtPB
Adversarial Robustness Against the Union of Multiple Perturbation Models
Owing to the susceptibility of deep learning systems to adversarial attacks, there has been a great deal of work in developing (both empirically and certifiably) robust classifiers, but the vast majority has defended against single types of attacks. Recent work has looked at defending against multiple attacks, specifically on the MNIST dataset, yet this approach used a relatively complex architecture, claiming that standard adversarial training can not apply because it "overfits" to a particular norm. In this work, we show that it is indeed possible to adversarially train a robust model against a union of norm-bounded attacks, by using a natural generalization of the standard PGD-based procedure for adversarial training to multiple threat models. With this approach, we are able to train standard architectures which are robust against l_inf, l_2, and l_1 attacks, outperforming past approaches on the MNIST dataset and providing the first CIFAR10 network trained to be simultaneously robust against (l_inf, l_2, l_1) threat models, which achieves adversarial accuracy rates of (47.6%, 64.3%, 53.4%) for (l_inf, l_2, l_1) perturbations with epsilon radius = (0.03,0.5,12).
reject
Thanks to the authors for submitting the paper and providing further explanations and experiments. This paper aims to ensure robustness against several perturbation models simultaneously. While the authors' response has addressed several issues raised by the reviewers, the concern on the lack of novelty remains. Overall, there is not enough support among the reviewers for the paper to be accepted.
train
[ "SklAX8hm5H", "SyeDf91Ojr", "BJeDyFyOsr", "r1g1oFy_ir", "HklqfdkdjB", "H1g-LzbbiB", "rJxZsWWWiS", "SJxeVZ-bjS", "rJgRBxZ-iB", "SkexvzFOKB", "HylxMAzTtB", "SJl0iGAcFH", "HkxGjYo5FH", "H1xW1_ptdS", "HylF9H3F_S", "rkeCf_Ukur", "rkeBvmUJOr", "S1ehCh5pPr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "public", "author", "public" ]
[ "The paper proposes to do adversarial training on multiple L_p norm perturbation models simultaneously, to make the model robust against various types of attacks. \n\n[Novelty] I feel this is just a natural extension of adversarial training. If we define the perturbation set in PGD to be S, then in general S can be...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 6, 1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_rklMnyBtPB", "rJxZsWWWiS", "SJxeVZ-bjS", "H1g-LzbbiB", "iclr_2020_rklMnyBtPB", "HylxMAzTtB", "SkexvzFOKB", "SklAX8hm5H", "iclr_2020_rklMnyBtPB", "iclr_2020_rklMnyBtPB", "iclr_2020_rklMnyBtPB", "HkxGjYo5FH", "H1xW1_ptdS", "HylF9H3F_S", "rkeCf_Ukur", "rkeBvmUJOr", "S1ehCh5pP...
iclr_2020_SygD31HFvB
A Novel Analysis Framework of Lower Complexity Bounds for Finite-Sum Optimization
This paper studies the lower bound complexity for the optimization problem whose objective function is the average of n individual smooth convex functions. We consider the algorithm which gets access to gradient and proximal oracle for each individual component. For the strongly-convex case, we prove such an algorithm cannot reach an ϵ-suboptimal point in fewer than Ω((n+√(κn))log(1/ϵ)) iterations, where κ is the condition number of the objective function. This lower bound is tighter than previous results and perfectly matches the upper bound of the existing proximal incremental first-order oracle algorithm Point-SAGA. We develop a novel construction to show the above result, which partitions the tridiagonal matrix of classical examples into n groups to make the problem difficult enough for stochastic algorithms. This construction is friendly to the analysis of the proximal oracle and can also be used naturally in the general convex and average smooth cases.
reject
The paper considers a lower bound complexity for convex problems. The reviewers raised concerns about whether the scope of this paper fits ICLR, about initialization issues, and about the novelty, among other problems.
train
[ "r1xmsSQhiS", "HklQaUm2jS", "S1lhVSXhiB", "rJetIwmnsH", "S1xPXyAiYH", "H1ezmonaYS", "rJxQHktG5r", "BkxbWS7wtr", "SyxXlwcXYB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thanks for the reviewer's insightful and helpful comments. \n\n1. We have appended a new lower bound in the case of $\\kappa = \\mathcal{O}(n)$, which matches the upper bound of IFO algorithm (Hannah et al., 2018) (see Table 1, Theorem 3.1 and Section 4.1 in our latest version of the paper). Please note that the...
[ -1, -1, -1, -1, 3, 8, 6, -1, -1 ]
[ -1, -1, -1, -1, 3, 3, 3, -1, -1 ]
[ "S1xPXyAiYH", "H1ezmonaYS", "rJxQHktG5r", "iclr_2020_SygD31HFvB", "iclr_2020_SygD31HFvB", "iclr_2020_SygD31HFvB", "iclr_2020_SygD31HFvB", "SyxXlwcXYB", "iclr_2020_SygD31HFvB" ]
iclr_2020_SkeP3yBFDS
Reducing Computation in Recurrent Networks by Selectively Updating State Neurons
Recurrent Neural Networks (RNN) are the state-of-the-art approach to sequential learning. However, standard RNNs use the same amount of computation at each timestep, regardless of the input data. As a result, even for high-dimensional hidden states, all dimensions are updated at each timestep regardless of the recurrent memory cell. Reducing this rigid assumption could allow for models with large hidden states to perform inference more quickly. Intuitively, not all hidden state dimensions need to be recomputed from scratch at each timestep. Thus, recent methods have begun studying this problem by imposing mainly a priori-determined patterns for updating the state. In contrast, we now design a fully-learned approach, SA-RNN, that augments any RNN by predicting discrete update patterns at the fine granularity of independent hidden state dimensions through the parameterization of a distribution of update-likelihoods driven entirely by the input data. We achieve this without imposing assumptions on the structure of the update pattern. Better yet, our method adapts the update patterns online, allowing different dimensions to be updated conditional to the input. To learn which to update, the model solves a multi-objective optimization problem, maximizing accuracy while minimizing the number of updates based on a unified control. Using publicly-available datasets we demonstrate that our method consistently achieves higher accuracy with fewer updates compared to state-of-the-art alternatives. Additionally, our method can be directly applied to a wide variety of models containing RNN architectures.
reject
This paper introduces a new RNN architecture which uses a small network to decide which cells get updated at each time step, with the goal of reducing computational cost. The idea makes sense, although it requires the use of a heuristic gradient estimator because of the non-differentiability of the update gate. The main problem with this paper in my view is that the reduction in FLOPs was not demonstrated to correspond to a reduction in wall-clock time, and I don't expect it would, since the sparse updates are different for each example in each batch, and only affect one hidden unit at a time. The only discussion of this problem is "we compute the FLOPs for each method as a surrogate for wall-clock time, which is hardware-dependent and often fluctuates dramatically in practice." Because this method reduces predictive accuracy, the reduction in FLOPs should be worth it! Minor criticism: 1) Figure 1 is confusing, showing not the proposed architecture in general but instead the connections remaining after computing the sparse updates.
train
[ "B1xwoW0njB", "Byxm0iP5KH", "BklWehDioS", "rJgFtz44iB", "B1lnoIMQiB", "BJecY0BnFr", "S1xB6zspFr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the additional clarification and experiment, it helped to contextualize the difficulty of the problem and value added of the method. I've revised my score accordingly.", "Summary: This paper proposes selective activation RNN (SA-RNN), by using an update coordinator to determine which subset of the RNN...
[ -1, 6, -1, -1, -1, 6, 6 ]
[ -1, 1, -1, -1, -1, 4, 4 ]
[ "BklWehDioS", "iclr_2020_SkeP3yBFDS", "Byxm0iP5KH", "BJecY0BnFr", "S1xB6zspFr", "iclr_2020_SkeP3yBFDS", "iclr_2020_SkeP3yBFDS" ]
iclr_2020_rJgD2ySFDr
Neural Communication Systems with Bandwidth-limited Channel
Reliably transmitting messages despite information loss due to a noisy channel is a core problem of information theory. One of the most important aspects of real-world communication is that it may happen at varying levels of information transfer. The bandwidth-limited channel models this phenomenon. In this study we consider learning joint coding with the bandwidth-limited channel. Although classical results suggest that it is asymptotically optimal to separate the sub-tasks of compression (source coding) and error correction (channel coding), it is well known that for finite block-length problems, and when there are restrictions on the computational complexity of coding, this optimality may not be achieved. Thus, we empirically compare the performance of joint and separate systems, and conclude that joint systems outperform their separate counterparts when coding is performed by flexible learnable function approximators such as neural networks. Specifically, we cast the joint communication problem as a variational learning problem. To facilitate this, we introduce a differentiable and computationally efficient version of this channel. We show that our design compensates for the loss of information by two mechanisms: (i) missing information is modelled by a prior model incorporated in the channel model, and (ii) sampling from the joint model is improved by auxiliary latent variables in the decoder. Experimental results justify the validity of our design decisions through improved distortion and FID scores.
reject
There was some support for this paper, but it was on the borderline and significant concerns were raised. It did not compare to the existing related literature on communications, compression, and coding. There were significant issues with clarity.
train
[ "H1gA7XOEhS", "r1g4BlWssB", "BJgDWQejsS", "r1ginQgssH", "B1g4UQY3Kr", "ryeWu1pp9S" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nPaper Summary:\n\nThe paper proposes to use ML methods, specifically neural networks, to learn source and/or channel coding systems, either jointly or separately. Specifically, they investigate these systems under the bandwidth-limited channel. They investigate their models applied to the task of transferring i...
[ 3, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, 1, 1 ]
[ "iclr_2020_rJgD2ySFDr", "B1g4UQY3Kr", "iclr_2020_rJgD2ySFDr", "ryeWu1pp9S", "iclr_2020_rJgD2ySFDr", "iclr_2020_rJgD2ySFDr" ]
iclr_2020_BJg_2JHKvH
Semi-Supervised Learning with Normalizing Flows
We propose Flow Gaussian Mixture Model (FlowGMM), a general-purpose method for semi-supervised learning based on a simple and principled probabilistic framework. We approximate the joint distribution of the labeled and unlabeled data with a flexible mixture model implemented as a Gaussian mixture transformed by a normalizing flow. We train the model by maximizing the exact joint likelihood of the labeled and unlabeled data. We evaluate FlowGMM on a wide range of semi-supervised classification problems across different data types: AG-News and Yahoo Answers text data, MNIST, SVHN and CIFAR-10 image classification problems as well as tabular UCI datasets. FlowGMM achieves promising results on image classification problems and outperforms the competing methods on other types of data. FlowGMM learns an interpretable latent representation space and allows hyper-parameter-free feature visualization at real-time rates. Finally, we show that FlowGMM can be calibrated to produce meaningful uncertainty estimates for its predictions.
reject
This paper offers a novel method for semi-supervised learning using GMMs. Unfortunately the novelty of the contribution is unclear, and the majority of the reviewers find the paper not acceptable in its present form. The AC concurs.
test
[ "SJlkOR-cFH", "rkeEIJV3ir", "rJg2gb6ooB", "S1gJ3l6ioS", "H1xq0J6siH", "r1xTxK8CYr", "BygksbcRtB", "BkeV9headB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "The paper describes how to use normalising flows for Semi Supervised Learning (SSL). Briefly, the method consists in finding a (bijective) map for transforming a mixture of Gaussians into a density approximating the empirical data-distribution -- as usual for flow methods, the parameters are found through likeliho...
[ 6, -1, -1, -1, -1, 1, 1, -1 ]
[ 3, -1, -1, -1, -1, 3, 3, -1 ]
[ "iclr_2020_BJg_2JHKvH", "rJg2gb6ooB", "BygksbcRtB", "SJlkOR-cFH", "r1xTxK8CYr", "iclr_2020_BJg_2JHKvH", "iclr_2020_BJg_2JHKvH", "iclr_2020_BJg_2JHKvH" ]
iclr_2020_HJxKhyStPH
Toward Understanding The Effect of Loss Function on The Performance of Knowledge Graph Embedding
Knowledge graphs (KGs) represent the world's facts in structured forms. KG completion exploits the existing facts in a KG to discover new ones. The translation-based embedding model (TransE) is a prominent formulation for KG completion. Despite the efficiency of TransE in memory and time, it suffers from several limitations in encoding relation patterns such as symmetric, reflexive, etc. To resolve this problem, most attempts have circled around revising the score function of TransE, i.e., proposing a more complicated score function such as Trans(A, D, G, H, R, etc.) to mitigate the limitations. In this paper, we tackle this problem from a different perspective. We show that existing theories corresponding to the limitations of TransE are inaccurate because they ignore the effect of the loss function. Accordingly, we pose theoretical investigations of the main limitations of TransE in the light of the loss function. To the best of our knowledge, this has not been comprehensively investigated so far. We show that by a proper selection of the loss function for training the TransE model, the main limitations of the model are mitigated. This is explained by setting an upper bound for the scores of positive samples, showing the region of truth (i.e., the region in which a triple is considered positive by the model). Our theoretical proofs, together with experimental results, fill the gap between the capability of the translation-based class of embedding models and the loss function. The theories emphasize the importance of the selection of loss functions for training the models. Our experimental evaluations on different loss functions used for training the models justify our theoretical proofs and confirm the importance of the loss functions on the performance.
reject
The paper analyses the effect of different loss functions for TransE and argues that certain limitations of TransE can be mitigated by choosing more appropriate loss functions. The submission then proposes TransComplEx to further improve results. This paper received four reviews, with three recommending rejection, and one recommending weak acceptance. A main concern was in the clarity of motivating the different models. Another was in the relatively low performance of RotatE compared with [1], which was raised by multiple reviewers. The authors provided extensive responses to the concerns raised by the reviewers. However, at least the implementation of RotatE remains of concern, with the response of the authors indicating "Please note that we couldn’t use exactly the same setting of RotatE due to limitations in our infrastructure." On the balance, a majority of reviewers felt that the paper was not suitable for publication in its current form.
train
[ "BJevC-Onsr", "BkldNUq2sS", "B1eqmUAsoH", "B1erEVO5oH", "rygRl-HcjB", "H1xjq6N5ir", "SJeLKcEqor", "Hkg-SNR_or", "HyenwvQbsS", "H1eX02mVYB", "BygbP8ShYH", "BylVtzu35S", "rklmtk369H" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Hello,\n\nThank you for the valuable comments and interest.\n\nComment:\"I think this is the main contribution of this paper\".\n\nResponse:\nWe would agree. The main contribution of the paper is re-investigation of the limitations of the translation-based class of embedding models reported in the recent work. \n\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 1, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 1 ]
[ "HyenwvQbsS", "iclr_2020_HJxKhyStPH", "BygbP8ShYH", "H1eX02mVYB", "BylVtzu35S", "BylVtzu35S", "BylVtzu35S", "rklmtk369H", "BylVtzu35S", "iclr_2020_HJxKhyStPH", "iclr_2020_HJxKhyStPH", "iclr_2020_HJxKhyStPH", "iclr_2020_HJxKhyStPH" ]
iclr_2020_H1eF3kStPS
Redundancy-Free Computation Graphs for Graph Neural Networks
Graph Neural Networks (GNNs) are based on repeated aggregations of information across nodes’ neighbors in a graph. However, because common neighbors are shared between different nodes, this leads to repeated and inefficient computations. We propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN graph representation that explicitly avoids redundancy by managing intermediate aggregation results hierarchically, and eliminating repeated computations and unnecessary data transfers in GNN training and inference. We introduce an accurate cost function to quantitatively evaluate the runtime performance of different HAGs and use a novel search algorithm to find optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN graph representation by increasing the end-to-end training throughput by up to 2.8x and reducing the aggregations and data transfers in GNN training by up to 6.3x and 5.6x. Meanwhile, HAGs improve runtime performance by preserving GNN computation, and maintain the original model accuracy for arbitrary GNNs.
reject
This paper proposes a new graph hierarchy representation (HAG) which eliminates redundancy during the aggregation stage and improves computational efficiency. It achieves good speedups and also provides theoretical analysis. There have been several concerns from the reviewers; the authors' response addressed them partially. Despite this, due to the large number of strong papers, we cannot accept the paper at this time. We encourage the authors to further improve the work for a future version.
val
[ "ryxmxkCRtB", "rkl9lZZhoB", "SygZInyhsS", "ryexiZJ2jB", "r1gKPbyhor", "S1x2xZ1njH", "rJxFqXA1ir", "rkxBrYqaYH", "Syly8HwpKH", "BJlw11mpFr" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper aims to propose a speeding-up strategy to reduce the training time for existing GNN models by reducing the redundant neighbor pairs. The idea is simple and clear. The paper is well-written. However, major concerns are:\n\n1. This strategy is only for equal contribution models (e.g., GCN, GraphSAGE), not...
[ 3, -1, -1, -1, -1, -1, 6, 6, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, 3, 4, -1, -1 ]
[ "iclr_2020_H1eF3kStPS", "SygZInyhsS", "ryexiZJ2jB", "rkxBrYqaYH", "ryxmxkCRtB", "rJxFqXA1ir", "iclr_2020_H1eF3kStPS", "iclr_2020_H1eF3kStPS", "BJlw11mpFr", "iclr_2020_H1eF3kStPS" ]
iclr_2020_Hygq3JrtwS
On the Reflection of Sensitivity in the Generalization Error
Even though recent works have brought some insight into the performance improvement of techniques used in state-of-the-art deep-learning models, more work is needed to understand the generalization properties of over-parameterized deep neural networks. We shed light on this matter by linking the loss function to the output’s sensitivity to its input. We find a rather strong empirical relation between the output sensitivity and the variance in the bias-variance decomposition of the loss function, which hints at using sensitivity as a metric for comparing the generalization performance of networks, without requiring labeled data. We find that sensitivity is decreased by applying popular methods which improve the generalization performance of the model, such as (1) using a deep network rather than a wide one, (2) adding convolutional layers to baseline classifiers instead of adding fully connected layers, (3) using batch normalization, dropout and max-pooling, and (4) applying parameter initialization techniques.
reject
The paper proposes a definition of the sensitivity of the output to random perturbations of the input and its link to generalization. While both reviewers appreciated the timeliness of this research, they were taken aback by the striking similarity with the work of Novak et al. I encourage the authors to resubmit to a later conference with a lengthier analysis of the differences between the two frameworks, as they started to do in their rebuttal.
train
[ "Ske_OjzaYr", "H1xu68SnsB", "SkgmWi2Kor", "Hkg-VbqFjB", "S1llvxcYoS", "SJxoegqtoH", "BJgIJnMy5S" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the connection between sensitivity and generalization where sensitivity is roughly defined as the variance of the output of the network when gaussian noise is added to the input data (generated from the same distribution as the training error).\n\nThe paper is well-written and the experiments ar...
[ 3, -1, -1, -1, -1, -1, 3 ]
[ 5, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_Hygq3JrtwS", "SkgmWi2Kor", "Hkg-VbqFjB", "Ske_OjzaYr", "BJgIJnMy5S", "BJgIJnMy5S", "iclr_2020_Hygq3JrtwS" ]
iclr_2020_rJxq3kHKPH
A Simple Approach to the Noisy Label Problem Through the Gambler's Loss
Learning in the presence of label noise is a challenging yet important task. It is crucial to design models that are robust to noisy labels. In this paper, we discover that a new class of loss functions called the gambler's loss provides strong robustness to label noise across various levels of corruption. Training with this modified loss function reduces memorization of data points with noisy labels and is a simple yet effective method to improve robustness and generalization. Moreover, using this loss function allows us to derive an analytical early stopping criterion that accurately estimates when memorization of noisy labels begins to occur. Our overall approach achieves strong results, outperforming existing baselines.
reject
This paper focuses on mitigating the effect of label noise. The authors provide a new class of loss functions along with a new stopping criterion for this problem. The authors claim that these new losses improve the test accuracy in the presence of label corruption and help avoid memorization. The reviewers raised concerns about (1) lack of proper comparison with many baselines, (2) a subpar literature review, and (3) vagueness in parts of the paper. The authors partially addressed these concerns and significantly updated the paper, including comparisons with some of the baselines. However, the reviewers were not fully satisfied with the new updates. I mostly agree with the reviewers. I think the paper has potential but requires a bit more work to be ready for publication, and I cannot recommend acceptance at this time. I have to say that the authors really put a lot of effort into their response and significantly improved their submission during the discussion period. I recommend the authors follow the reviewers' suggestions to further improve the paper (e.g., comparing with other baselines) for future submissions.
train
[ "SyxBYBr6Fr", "rJepBpantB", "Skl4y3ytoB", "rkxyVdJFjH", "B1eD-EkKjr", "ByluCM5MsB", "BklPOzqGor", "HkghJo9-jS", "ryxCkugxjr", "HJleZsO2KB", "BkxvUxMA5H" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "public" ]
[ "\nUpdate after rebuttal:\n\nThe good:\nThe rebuttal and updated paper address many of my concerns. Most importantly, the updated paper demonstrates the three-stage phenomenon on Open Images and adds experiments on IMDB showing that the Gambler's loss with AES helps a lot. The LAES iteration introduced in the updat...
[ 6, 3, -1, -1, -1, -1, -1, -1, -1, 3, -1 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, 5, -1 ]
[ "iclr_2020_rJxq3kHKPH", "iclr_2020_rJxq3kHKPH", "HJleZsO2KB", "rJepBpantB", "SyxBYBr6Fr", "BklPOzqGor", "rJepBpantB", "HJleZsO2KB", "BkxvUxMA5H", "iclr_2020_rJxq3kHKPH", "iclr_2020_rJxq3kHKPH" ]
iclr_2020_rkeqn1rtDH
Hierarchical Graph Matching Networks for Deep Graph Similarity Learning
While the celebrated graph neural networks yield effective representations for individual nodes of a graph, there has been relatively less success in extending them to deep graph similarity learning. Recent work has considered either global-level graph-graph interactions or low-level node-node interactions, ignoring the rich cross-level interactions between parts of a graph and a whole graph. In this paper, we propose a Hierarchical Graph Matching Network (HGMN) for computing the graph similarity between any pair of graph-structured objects. Our model jointly learns graph representations and a graph matching metric function for computing graph similarity in an end-to-end fashion. The proposed HGMN model consists of a multi-perspective node-graph matching network for effectively learning cross-level interactions between parts of a graph and a whole graph, and a siamese graph neural network for learning global-level interactions between two graphs. Our comprehensive experiments demonstrate that the proposed HGMN consistently outperforms state-of-the-art graph matching network baselines on both classification and regression tasks.
reject
The submission proposes an architecture to learn a similarity metric for graph matching. The architecture uses node-graph information in order to learn a more expressive, multi-level similarity score. The hierarchical approach is empirically validated on a limited set of graphs for which pairwise matching information is available and is shown to outperform other methods for classification and regression tasks. The reviewers were divided in their scores for this paper, but all noted that the approach was somewhat incremental and empirically motivated, without adequate analysis, theoretical justification, or extensive benchmark validation. Although the approach has value, more work is needed to support the method fully. Recommendation is to reject at this time.
train
[ "SylL1lIOoH", "rklNbW8_oS", "HklRSgLOiB", "ryxXCRrujS", "S1lZD04oYr", "rJe9pCD6tH", "BJgjovGVqB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Author response:\n\nFirst of all, we want to thank the reviewer for their thorough reading and valuable comments! However, there are some points of misunderstanding that we address in this rebuttal. \n\nBelow we address the concerns mentioned in the review:\n\n1) The novelty of the paper is incremental. The major ...
[ -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "rJe9pCD6tH", "S1lZD04oYr", "SylL1lIOoH", "BJgjovGVqB", "iclr_2020_rkeqn1rtDH", "iclr_2020_rkeqn1rtDH", "iclr_2020_rkeqn1rtDH" ]
iclr_2020_Bylh2krYPr
Probing Emergent Semantics in Predictive Agents via Question Answering
Recent work has demonstrated how predictive modeling can endow agents with rich knowledge of their surroundings, improving their ability to act in complex environments. We propose question-answering as a general paradigm to decode and understand the representations that such agents develop, applying our method to two recent approaches to predictive modeling – action-conditional CPC (Guo et al., 2018) and SimCore (Gregor et al., 2019). After training agents with these predictive objectives in a visually-rich, 3D environment with an assortment of objects, colors, shapes, and spatial configurations, we probe their internal state representations with a host of synthetic (English) questions, without backpropagating gradients from the question-answering decoder into the agent. The performance of different agents when probed in this way reveals that they learn to encode detailed, and seemingly compositional, information about objects, properties and spatial relations from their physical environment. Our approach is intuitive, i.e. humans can easily interpret the responses of the model as opposed to inspecting continuous vectors, and model-agnostic, i.e. applicable to any modeling approach. By revealing the implicit knowledge of objects, quantities, properties and relations acquired by agents as they learn, question-conditional agent probing can stimulate the design and development of stronger predictive learning objectives.
reject
This paper proposes question-answering as a general paradigm to decode and understand the representations that agents develop, with application to two recent approaches to predictive modeling. During rebuttal, some critical issues still exist, e.g., as Reviewer#3 pointed out, the submission in its current form lacks experimental analysis of the proposed conditional probes, especially the trade-offs on the reliability of the representation analysis when performed with a conditional probe as well as a clear motivation for the need of a language interface. The authors are encouraged to incorporate the refined motivation and add more comprehensive experimental evaluation for a possible resubmission.
train
[ "r1ep1oO0YS", "B1eOCAMhsH", "B1gI0H9oiH", "rJeYs91jor", "HJg4tUz5jB", "BygryLlKjB", "H1ltX8ltiS", "HylfJebMsB", "BylkT1WMoS", "HJemQxbMoH", "rJx8VFf0Kr", "rklyq2NAFS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n########## Post-Rebuttal Summary ###########\nThe authors engaged actively in the rebuttal discussion and in the process we were able to concretize the motivation of the submission (as a result in increased my score). However, I think that the submission in its current form lacks experimental analysis of the pro...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_Bylh2krYPr", "B1gI0H9oiH", "rJeYs91jor", "HJg4tUz5jB", "BygryLlKjB", "BylkT1WMoS", "HylfJebMsB", "rklyq2NAFS", "r1ep1oO0YS", "rJx8VFf0Kr", "iclr_2020_Bylh2krYPr", "iclr_2020_Bylh2krYPr" ]
iclr_2020_Hkeh21BKPH
Towards Finding Longer Proofs
We present a reinforcement learning (RL) based guidance system for automated theorem proving geared towards Finding Longer Proofs (FLoP). FLoP focuses on generalizing from short proofs to longer ones of similar structure. To achieve that, FLoP uses state-of-the-art RL approaches that were previously not applied in theorem proving. In particular, we show that curriculum learning significantly outperforms previous learning-based proof guidance on a synthetic dataset of increasingly difficult arithmetic problems.
reject
This paper proposes a curriculum-based reinforcement learning approach to improve theorem proving towards longer proofs. While the authors are tackling an important problem, and their method appears to work on the environment it was tested in, the reviewers found the experimental section too narrow and not convincing enough. In particular, the authors are encouraged to apply their methods to more complex domains beyond Robinson arithmetic. It would also be helpful to get a more in depth analysis of the role of the curriculum. The discussion period did not lead to improvements in the reviewers’ scores, hence I recommend that this paper is rejected at this time.
train
[ "H1ec98wcsH", "Hkx7iVuYiB", "rkxeRXOKoH", "SylpAGdFjB", "SJeKpyOtur", "SJeNm88iKr", "HJxLhLwAtS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Your response was very helpful. Thanks!", "Dear Reviewer,\nThank you for your comments.\n\nYou are right that from a strictly RL perspective, our system brings no methodological novelty. However, this is a new method in automatic theorem proving and we argue that it is a good approach to address the sparse rewar...
[ -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, 1, 3, 4 ]
[ "SylpAGdFjB", "SJeKpyOtur", "SJeNm88iKr", "HJxLhLwAtS", "iclr_2020_Hkeh21BKPH", "iclr_2020_Hkeh21BKPH", "iclr_2020_Hkeh21BKPH" ]
iclr_2020_Hkxp3JHtPr
Deep Variational Semi-Supervised Novelty Detection
In anomaly detection (AD), one seeks to identify whether a test sample is abnormal, given a data set of normal samples. A recent and promising approach to AD relies on deep generative models, such as variational autoencoders (VAEs), for unsupervised learning of the normal data distribution. In semi-supervised AD (SSAD), the data also includes a small sample of labeled anomalies. In this work, we propose two variational methods for training VAEs for SSAD. The intuitive idea in both methods is to train the encoder to ‘separate’ between latent vectors for normal and outlier data. We show that this idea can be derived from principled probabilistic formulations of the problem, and propose simple and effective algorithms. Our methods can be applied to various data types, as we demonstrate on SSAD datasets ranging from natural images to astronomy and medicine, and can be combined with any VAE model architecture. When comparing to state-of-the-art SSAD methods that are not specific to particular data types, we obtain marked improvement in outlier detection.
reject
This paper presents two novel VAE-based methods for semi-supervised anomaly detection (SSAD) where one has also access to a small set of labeled anomalous samples. The reviewers had several concerns about the paper, in particular completely addressing reviewer #3's comments would strengthen the paper.
train
[ "H1x83bU7cS", "BylzJCALjS", "B1xga208oH", "H1eZ96CLir", "SkgaN20Usr", "BkgL_iMqKH", "BJgu6JrjtS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes two variational methods for training VAEs for SSAD (Semi-supervised Anomaly Detection). Experiments on benchmarking datasets show improvements over state-of-the-art SSAD methods.\n\nIn generally, the paper is well written. But I have some concerns.\n\n1. Some of the results have not yet been ob...
[ 6, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_Hkxp3JHtPr", "iclr_2020_Hkxp3JHtPr", "BJgu6JrjtS", "BkgL_iMqKH", "H1x83bU7cS", "iclr_2020_Hkxp3JHtPr", "iclr_2020_Hkxp3JHtPr" ]
iclr_2020_HyeCnkHtwH
Efficient generation of structured objects with Constrained Adversarial Networks
Despite their success, generative adversarial networks (GANs) cannot easily generate structured objects like molecules or game maps. The issue is that such objects must satisfy structural requirements (e.g., molecules must be chemically valid, game maps must guarantee reachability of the end goal) that are difficult to capture with examples alone. As a remedy, we propose constrained adversarial networks (CANs), which embed the constraints into the model during training by penalizing the generator whenever it outputs invalid structures. As in unconstrained GANs, new objects can be sampled straightforwardly from the generator, but in addition they satisfy the constraints with high probability. Our approach handles arbitrary logical constraints and leverages knowledge compilation techniques to efficiently evaluate the expected disagreement between the model and the constraints. This setup is further extended to hybrid logical-neural constraints for capturing complex requirements like graph reachability. An extensive empirical analysis on constrained images, molecules, and video game levels shows that CANs efficiently generate valid structures that are both high-quality and novel.
reject
This paper develops ideas for enabling the data generation with GANs in the presence of structured constraints on the data manifold. This problem is interesting and quite relevant to the ICLR community. The reviewers raised concerns about the similarity to prior work (Xu et al '17), and missing comparisons to previous approaches that study this problem (e.g. Hu et al '18) that make it difficult to judge the significance of the work. Overall, the paper is slightly below the bar for acceptance.
train
[ "BJgmAroiFB", "r1xvGsaZsr", "Hyx9h93-oH", "SJgSmV5bsr", "rkeHQhM3Kr", "SygPGB_3YB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper the authors present a Generative Adversarial Neural Networks with Xu et al.’s semantic loss applied to the generator. They call this GAN a Constrained Adversarial Network or (CAN) and identify it as a new class of GAN. The authors present three different problem domains for their experiments focused ...
[ 3, -1, -1, -1, 3, 3 ]
[ 5, -1, -1, -1, 3, 5 ]
[ "iclr_2020_HyeCnkHtwH", "SygPGB_3YB", "BJgmAroiFB", "rkeHQhM3Kr", "iclr_2020_HyeCnkHtwH", "iclr_2020_HyeCnkHtwH" ]
iclr_2020_S1lk61BtvB
"Best-of-Many-Samples" Distribution Matching
Generative Adversarial Networks (GANs) can achieve state-of-the-art sample quality in generative modelling tasks but suffer from the mode collapse problem. Variational Autoencoders (VAEs), on the other hand, explicitly maximize a reconstruction-based data log-likelihood, forcing them to cover all modes, but suffer from poorer sample quality. Recent works have proposed hybrid VAE-GAN frameworks which integrate a GAN-based synthetic likelihood into the VAE objective to address both the mode collapse and sample quality issues, with limited success. This is because the VAE objective forces a trade-off between the data log-likelihood and the divergence to the latent prior. The synthetic likelihood ratio term also shows instability during training. We propose a novel objective with a "Best-of-Many-Samples" reconstruction cost and a stable direct estimate of the synthetic likelihood. This enables our hybrid VAE-GAN framework to achieve high data log-likelihood and low divergence to the latent prior at the same time, and shows significant improvement over both hybrid VAE-GANs and plain GANs in mode coverage and quality.
reject
This paper proposed an improvement on VAE-GAN which draws multiple samples from the reparameterized latent distribution for each inferred q(z|x), and only backpropagates reconstruction error for the resulting G(z) that has the lowest reconstruction error. While the idea is interesting, the novelty is not high compared with existing similar works, and the improvement is not significant.
train
[ "SkxNbzojYB", "B1lD7rTsFB", "r1ejzBAX5r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "“Best of Many Samples” Distribution matching\n\nSummary:\n\nThis paper proposes to a novel VAE-GAN hybrid which, during training, draws multiple samples from the reparameterized latent distribution for each inferred q(z|x), and only backpropagates reconstruction error for the resulting G(z) which has the lowest re...
[ 3, 3, 6 ]
[ 5, 4, 1 ]
[ "iclr_2020_S1lk61BtvB", "iclr_2020_S1lk61BtvB", "iclr_2020_S1lk61BtvB" ]
iclr_2020_ByggpyrFPS
Bayesian Variational Autoencoders for Unsupervised Out-of-Distribution Detection
Despite their successes, deep neural networks still make unreliable predictions when faced with test data drawn from a distribution different to that of the training data, constituting a major problem for AI safety. While this motivated a recent surge in interest in developing methods to detect such out-of-distribution (OoD) inputs, a robust solution is still lacking. We propose a new probabilistic, unsupervised approach to this problem based on a Bayesian variational autoencoder model, which estimates a full posterior distribution over the decoder parameters using stochastic gradient Markov chain Monte Carlo, instead of fitting a point estimate. We describe how information-theoretic measures based on this posterior can then be used to detect OoD data both in input space as well as in the model’s latent space. The effectiveness of our approach is empirically demonstrated.
reject
This paper tackles the problem of detecting out-of-distribution (OoD) samples. The proposed solution is based on a Bayesian variational autoencoder. The authors show that information-theoretic measures applied to the posterior distribution over the decoder parameters can be used to detect OoD samples. The resulting approach is shown to outperform baselines in experiments conducted on three benchmarks (CIFAR-10 vs SVHN and two based on FashionMNIST). Following the rebuttal, major concerns remained regarding the justification of the approach. The reason why relying on active learning principles should allow for OoD detection would need to be clarified, and the use of the effective sample size (ESS) would require stronger motivation. Overall, although a theoretically-informed OoD strategy is indeed interesting and relevant, reviewers were not convinced by the provided theoretical justifications. I therefore recommend rejecting this paper.
train
[ "Syg9CadRFr", "ryx7HOcniS", "rJe_GY83sB", "HJeMtmghjH", "S1lvLqFYor", "SyxKnFtKsB", "BJlSh3OFiH", "H1gQjvYpFS", "H1lxgQ5-9B" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "After reading all the reviews, the comments, and the additional work done by the Authors, I have decided to confirm my rating.\n\n==================\n\nThis paper leverage probabilistic inference techniques to maintain a posterior distribution over the parameters of a variational autoencoder (VAE). This results in...
[ 3, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_ByggpyrFPS", "HJeMtmghjH", "iclr_2020_ByggpyrFPS", "S1lvLqFYor", "H1gQjvYpFS", "H1lxgQ5-9B", "Syg9CadRFr", "iclr_2020_ByggpyrFPS", "iclr_2020_ByggpyrFPS" ]
iclr_2020_SklgTkBKDr
Neural Non-additive Utility Aggregation
Neural architectures for set regression problems aim at learning representations such that good predictions can be made based on the learned representations. This strategy, however, ignores the fact that meaningful intermediate results might be helpful to perform well. We study two new architectures that explicitly model latent intermediate utilities and use non-additive utility aggregation to estimate the set utility based on the latent utilities. We evaluate the new architectures with visual and textual datasets, which have non-additive set utilities due to redundancy and synergy effects. We find that the new architectures perform substantially better in this setup.
reject
This paper presents two new architectures that model latent intermediate utilities and use non-additive utility aggregation to estimate the set utility based on the computed latent utilities. These two extensions are easy to understand and seem like a simple extension to the existing RNN model architectures, so that they can be implemented easily. However, the connection to Choquet integral is not clear and no theory has been provided to make that connection. Hence, it is hard for the reader to understand why the integral is useful here. The reviewers have also raised objection about the evaluation which does not seem to be fair to existing methods. These comments can be incorporated to make the paper more accessible and the results more appreciable.
test
[ "ByguiZo2iS", "Hkex-DF3oS", "SJgNfb_tor", "B1eKyZOtoH", "ryeOhg_YiS", "BJl-jwU6tB", "ryxs4ypg9r", "BJeididLqS" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer 2, thank you very much for your reply, we appreciate the effort. \n\nWe agree that it would be great to test our work on the same datasets/tasks previous approaches such as the DeepSets approach have been tested on. Unfortunately, the problems in the DeepSets paper do not have the properties we are m...
[ -1, -1, -1, -1, -1, 3, 1, 1 ]
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "Hkex-DF3oS", "B1eKyZOtoH", "BJl-jwU6tB", "ryxs4ypg9r", "BJeididLqS", "iclr_2020_SklgTkBKDr", "iclr_2020_SklgTkBKDr", "iclr_2020_SklgTkBKDr" ]
iclr_2020_SJlgTJHKwB
Continual Learning with Delayed Feedback
Most artificial neural networks benefit from labeled datasets, whereas in the human brain, learning is often unsupervised. The feedback, or label, for a given input or sensory stimulus is often not available instantly. After some time, when the brain receives the feedback, it updates its knowledge. That is how the brain learns. Moreover, there is no training or testing phase; humans learn continually. This work proposes a model-agnostic continual learning framework which can be used with neural networks as well as decision trees to incorporate continual learning. Specifically, this work investigates how delayed feedback can be handled. In addition, a way to update machine learning models with unlabeled data is proposed. Promising results are obtained from the experiments done on neural networks and decision trees.
reject
This paper claims to present a model-agnostic continual learning framework which uses a queue to work with delayed feedback. All reviewers agree that the paper is difficult to follow. I also have a difficult time reading the paper. In addition, all reviewers mentioned there is no baseline in the experiments, which makes it difficult to empirically analyze the strengths and weaknesses of the proposed model. R2 and R3 also have some concerns regarding the motivation and claims made in the paper, especially in relation to previous work in this area. The authors did not respond to any of the concerns raised by the reviewers. It is very clear that the paper is not ready for publication at a venue such as ICLR in its current state, so I recommend rejecting the paper.
test
[ "Hyx_QzKsFH", "SklhMPn0Fr", "BklCS-XNqS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper describes a method that draws inspiration from neuroscience and aims to handle delayed feedback in continual learning (ie. When labels are provided for images after a phase of unsupervised learning on the same classes). It is an interesting idea, and worth exploring.\n\nI found the paper quite hard to f...
[ 1, 1, 1 ]
[ 4, 4, 4 ]
[ "iclr_2020_SJlgTJHKwB", "iclr_2020_SJlgTJHKwB", "iclr_2020_SJlgTJHKwB" ]
iclr_2020_HJx-akSKPS
Neural Subgraph Isomorphism Counting
In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms. Although the learning based approach is inexact, we are able to generalize to count large patterns and data graphs in polynomial time compared to the exponential time of the original NP-complete problem. Different from other traditional graph learning problems such as node classification and link prediction, subgraph isomorphism counting requires more global inference to oversee the whole graph. To tackle this problem, we propose a dynamic intermedium attention memory network (DIAMNet) which augments different representation learning architectures and iteratively attends pattern and target data graphs to memorize different subgraph isomorphisms for the global counting. We develop both small graphs (<= 1,024 subgraph isomorphisms in each) and large graphs (<= 4,096 subgraph isomorphisms in each) sets to evaluate different models. Experimental results show that learning based subgraph isomorphism counting can help reduce the time complexity with acceptable accuracy. Our DIAMNet can further improve existing representation learning models for this more global problem.
reject
This paper proposes a method called Dynamic Intermedium Attention Memory Network (DIAMNet) to learn subgraph isomorphism counting for a given pattern graph P and target graph G. However, the reviewers think the experimental comparisons are insufficient. Furthermore, the evaluation uses only synthetic datasets whose generating process is designed by the authors. If possible, evaluation on benchmark graph datasets would be more convincing, though creating the ground truth might be difficult for larger graphs.
val
[ "SJgrUVjFjS", "r1xZ5ViKiB", "Syec_BjYsr", "HJe_WNjFiS", "HJg6JmstoH", "B1lIjMJPjr", "Hkeg_NnxoB", "r1x1gwAcYH", "SylbC532YB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your questions.\n\nQ.1 \nQ.1.1. Besides what we explained in the “general response of why counting”, we would like to emphasize that simply using a binary classifier for subgraph isomorphism and graph isomorphism would be less useful than counting in knowledge discovery and \"how many\" based KBQA, alth...
[ -1, -1, -1, -1, -1, 6, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 5, 5, 1, 4 ]
[ "SylbC532YB", "r1x1gwAcYH", "Hkeg_NnxoB", "B1lIjMJPjr", "iclr_2020_HJx-akSKPS", "iclr_2020_HJx-akSKPS", "iclr_2020_HJx-akSKPS", "iclr_2020_HJx-akSKPS", "iclr_2020_HJx-akSKPS" ]
iclr_2020_S1ef6JBtPr
Probabilistic View of Multi-agent Reinforcement Learning: A Unified Approach
Formulating the reinforcement learning (RL) problem in the framework of probabilistic inference not only offers a new perspective about RL, but also yields practical algorithms that are more robust and easier to train. While this connection between RL and probabilistic inference has been extensively studied in the single-agent setting, it has not yet been fully understood in the multi-agent setup. In this paper, we pose the problem of multi-agent reinforcement learning as the problem of performing inference in a particular graphical model. We model the environment, as seen by each of the agents, using separate but related Markov decision processes. We derive a practical off-policy maximum-entropy actor-critic algorithm that we call Multi-agent Soft Actor-Critic (MA-SAC) for performing approximate inference in the proposed model using variational inference. MA-SAC can be employed in both cooperative and competitive settings. Through experiments, we demonstrate that MA-SAC outperforms a strong baseline on several multi-agent scenarios. While MA-SAC is one resultant multi-agent RL algorithm that can be derived from the proposed probabilistic framework, our work provides a unified view of maximum-entropy algorithms in the multi-agent setting.
reject
The paper takes the perspective of "reinforcement learning as inference", extends it to the multi-agent setting and derives a multi-agent RL algorithm that extends Soft Actor Critic. Several reviewer questions were addressed in the rebuttal phase, including key design choices. A common concern was the limited empirical comparison, including comparisons to existing approaches.
train
[ "S1lHqUjRKB", "BkgSsm3TKS", "BJlf6OhIsr", "BJeG9_38jS", "H1lxwu3LiS", "r1etri0TtS", "Skxdcm4zdB", "SyghKkD5PB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "The paper extends soft actor-critic (SAC) to Markov games, or in other words multi-agent reinforcement learning setting. The paper is very nicely written, derives MA-SAC in a fairly general way, and introduces a variational approximation of the distribution over optimal trajectories which enables centralized train...
[ 3, 1, -1, -1, -1, 3, -1, -1 ]
[ 4, 5, -1, -1, -1, 3, -1, -1 ]
[ "iclr_2020_S1ef6JBtPr", "iclr_2020_S1ef6JBtPr", "BkgSsm3TKS", "r1etri0TtS", "S1lHqUjRKB", "iclr_2020_S1ef6JBtPr", "SyghKkD5PB", "iclr_2020_S1ef6JBtPr" ]
iclr_2020_BylfTySYvB
GATO: Gates Are Not the Only Option
Recurrent Neural Networks (RNNs) facilitate prediction and generation of structured temporal data such as text and sound. However, training RNNs is hard. Vanishing gradients cause difficulties for learning long-range dependencies. Hidden states can explode for long sequences and send unbounded gradients to model parameters, even when hidden-to-hidden Jacobians are bounded. Models like the LSTM and GRU use gates to bound their hidden state, but most choices of gating functions lead to saturating gradients that contribute to, instead of alleviate, vanishing gradients. Moreover, performance of these models is not robust across random initializations. In this work, we specify desiderata for sequence models. We develop one model that satisfies them and that is capable of learning long-term dependencies, called GATO. GATO is constructed so that part of its hidden state does not have vanishing gradients, regardless of sequence length. We study GATO on copying and arithmetic tasks with long dependencies and on modeling intensive care unit and language data. Training GATO is more stable across random seeds and learning rates than GRUs and LSTMs. GATO solves these tasks using an order of magnitude fewer parameters.
reject
This paper proposes a modification of RNN that does not suffer from vanishing and exploding gradient problems. The proposed model, GATO partitions the RNN hidden state into two channels, and both are updated by the previous state. This model ensures that the state in one of the parts is time-independent by using residual connections. The reviews are mixed for this paper, but the general consensus was that the experiments could be better (baseline comparisons could have been fairer). The reviewers have low confidence in the revised/updated results. Moreover, it remains unclear what the critical components are that make things work. It would be great to read a paper and understand why something works and not that something works. Overall: Nice idea, but the paper is not quite ready yet.
train
[ "BJxeizHRtS", "SJxnjAcnoB", "Syxg9BPojH", "B1lAmSwojH", "BJezyHPiir", "Syxn8EDojB", "SygdaoLEtr", "SJxmtISTFS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a modification of RNN that does not suffer from vanishing and exploding gradient problems. The proposed model, GATO partitions the RNN hidden state into two channels, and both are updated by the previous state. This model ensures that the state in one of the parts is time-independent by using r...
[ 3, -1, -1, -1, -1, -1, 8, 3 ]
[ 5, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2020_BylfTySYvB", "iclr_2020_BylfTySYvB", "SJxmtISTFS", "SygdaoLEtr", "BJxeizHRtS", "iclr_2020_BylfTySYvB", "iclr_2020_BylfTySYvB", "iclr_2020_BylfTySYvB" ]
iclr_2020_BJeXaJHKvB
P-BN: Towards Effective Batch Normalization in the Path Space
Neural networks with ReLU activation functions have demonstrated their success in many applications. Recently, researchers noticed a potential issue with the optimization of ReLU networks: the ReLU activation functions are positively scale-invariant (PSI), while the weights are not. This mismatch may lead to undesirable behaviors in the optimization process. Hence, some new algorithms that conduct optimizations directly in the path space (the path space is proven to be PSI) were developed, such as Stochastic Gradient Descent (SGD) in the path space, and it was shown that SGD in the path space is superior to that in the weight space. However, it is still unknown whether other deep learning techniques beyond SGD, such as batch normalization (BN), could also have their counterparts in the path space. In this paper, we conduct a formal study on the design of BN in the path space. According to our study, the key challenge is how to ensure the forward propagation in the path space, because BN is utilized during the forward process. To tackle such challenge, we propose a novel re-parameterization of ReLU networks, with which we replace each weight in the original neural network, with a new value calculated from one or several paths, while keeping the outputs of the network unchanged for any input. Then we show that BN in the path space, namely P-BN, is just a slightly modified conventional BN on the re-parameterized ReLU networks. Our experiments on two benchmark datasets, CIFAR and ImageNet, show that the proposed P-BN can significantly outperform the conventional BN in the weight space.
reject
This paper addresses the extension of path-space-based SGD (which has some previously-acknowledged advantages over traditional weight-space SGD) to handle batch normalization. Given the success of BN in traditional settings, this is a reasonable scenario to consider. The analysis and algorithm development involved exploits a reparameterization process to transition from the weight space to the path space. Empirical tests are then conducted on CIFAR and ImageNet. Overall, there was a consensus among reviewers to reject this paper, and the AC did not find sufficient justification to overrule this consensus. Note that some of the negative feedback was likely due, at least in part, to unclear aspects of the paper, an issue either explicitly stated or implied by all reviewers. While obviously some revisions were made, at this point it seems that a new round of review is required to reevaluate the contribution and ensure that it is properly appreciated.
train
[ "HygQZEdrir", "BygbSE_riH", "BJl__VdBiH", "rygGgK_roS", "ryeSKamyoB", "r1leWJ7TYr", "HkxkpF_j5S" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. The following is our responses.\n\nQ1: “Let start with Theorem 3.1: I am not sure about the statement of the theorem. Is this result for a linear net? I think for a Relu net, outputs need an additional scaling parameter that depends on all past hidden states (outputs).”\nA1: No, all re...
[ -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, 4, 4, 1 ]
[ "ryeSKamyoB", "HkxkpF_j5S", "r1leWJ7TYr", "iclr_2020_BJeXaJHKvB", "iclr_2020_BJeXaJHKvB", "iclr_2020_BJeXaJHKvB", "iclr_2020_BJeXaJHKvB" ]
iclr_2020_rJg46kHYwH
Adaptive Generation of Unrestricted Adversarial Inputs
Neural networks are vulnerable to adversarially-constructed perturbations of their inputs. Most research so far has considered perturbations of a fixed magnitude under some lp norm. Although studying these attacks is valuable, there has been increasing interest in the construction of—and robustness to—unrestricted attacks, which are not constrained to a small and rather artificial subset of all possible adversarial inputs. We introduce a novel algorithm for generating such unrestricted adversarial inputs which, unlike prior work, is adaptive: it is able to tune its attacks to the classifier being targeted. It also offers a 400–2,000× speedup over the existing state of the art. We demonstrate our approach by generating unrestricted adversarial inputs that fool classifiers robust to perturbation-based attacks. We also show that, by virtue of being adaptive and unrestricted, our attack is able to bypass adversarial training against it.
reject
This paper presents an interesting method for creating adversarial examples using a GAN. Reviewers are concerned that ImageNet Results, while successfully evading a classifier, do not appear to be natural images. Furthermore, the attacks are demonstrated on fairly weak baseline classifiers that are known to be easily broken. They attack Resnet50 (without adv training), for which Lp-bounded attacks empirically seem to produce more convincing images. For MNIST, they attack Wong and Kolter’s "certifiable" defense, which is empirically much weaker than an adversarially trained network, and also weaker than more recent certifiable baselines.
train
[ "rJeOnu0jYr", "rJxptTW6tH", "HyetH8SnjH", "SkgDv7mhoS", "rJxomm72or", "HJeH1uLssB", "ryeTAIYdsr", "SylNpBFusr", "Syl3VrKOsB", "HklaLEY_iS", "rJgbx4Y_sH", "rkgqAJxpFr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes using GANs to generate unrestricted adversarial examples. They seek to generate examples that are adversarial for a specific classifier, and they do so by using class-conditional GANs and a fine-tuning loss. The fine-tuning loss consists of both the ordinary GAN loss (to fool the discriminator) ...
[ 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_rJg46kHYwH", "iclr_2020_rJg46kHYwH", "SkgDv7mhoS", "rJxomm72or", "HJeH1uLssB", "SylNpBFusr", "iclr_2020_rJg46kHYwH", "Syl3VrKOsB", "rJeOnu0jYr", "rkgqAJxpFr", "rJxptTW6tH", "iclr_2020_rJg46kHYwH" ]
iclr_2020_H1gN6kSFwS
Learning Neural Causal Models from Unknown Interventions
Meta-learning over a set of distributions can be interpreted as learning different types of parameters corresponding to short-term vs long-term aspects of the mechanisms underlying the generation of data. These are respectively captured by quickly-changing \textit{parameters} and slowly-changing \textit{meta-parameters}. We present a new framework for meta-learning causal models where the relationship between each variable and its parents is modeled by a neural network, modulated by structural meta-parameters which capture the overall topology of a directed graphical model. Our approach avoids a discrete search over models in favour of a continuous optimization procedure. We study a setting where interventional distributions are induced as a result of a random intervention on a single unknown variable of an unknown ground truth causal model, and the observations arising after such an intervention constitute one meta-example. To disentangle the slow-changing aspects of each conditional from the fast-changing adaptations to each intervention, we parametrize the neural network into fast parameters and slow meta-parameters. We introduce a meta-learning objective that favours solutions \textit{robust} to frequent but sparse interventional distribution change, and which generalize well to previously unseen interventions. Optimizing this objective is shown experimentally to recover the structure of the causal graph. Finally, we find that when the learner is unaware of the intervention variable, it is able to infer that information, improving results further and focusing the parameter and meta-parameter updates where needed.
reject
This paper proposes a metalearning objective to infer causal graphs from data based on masked neural networks to capture arbitrary conditional relationships. While the authors agree that the paper contains various interesting ideas, the theoretical and conceptual underpinnings of the proposed methodology are still lacking and the experiments cannot sufficiently make up for this. The method is definitely worth exploring more and a revision is likely to be accepted at another venue.
train
[ "rJlp-hUnjr", "SJlO234hsr", "BkgBSh0iiS", "r1eGmbwssr", "B1gMQPO5jS", "rylbZRDpYS", "BJlYsorcsS", "ByxAt1yFsH", "Hyg7Tw9OsH", "BJgyDD5dsS", "H1lA_85_oH", "r1lSLH9diH", "SJlarIt_sS", "r1ekeUYOjH", "HJxO6r5HYr", "Skx29AZdFH" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer, \n\nCould you let us know if our response has addressed the concerns raised in your review? I think our response in point (b) above clarifies your main concern about insufficient comparisons (as in Asia graph, it was not simulated from MLP).\n\nWe would be happy to provide further revisions to addr...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "SJlO234hsr", "BkgBSh0iiS", "B1gMQPO5jS", "HJxO6r5HYr", "Skx29AZdFH", "iclr_2020_H1gN6kSFwS", "ByxAt1yFsH", "r1ekeUYOjH", "Skx29AZdFH", "Skx29AZdFH", "Skx29AZdFH", "Skx29AZdFH", "rylbZRDpYS", "rylbZRDpYS", "iclr_2020_H1gN6kSFwS", "iclr_2020_H1gN6kSFwS" ]
iclr_2020_rJxBa1HFvS
Value-Driven Hindsight Modelling
Value estimation is a critical component of the reinforcement learning (RL) paradigm. The question of how to effectively learn predictors for value from data is one of the major problems studied by the RL community, and different approaches exploit structure in the problem domain in different ways. Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function. In contrast, model-free methods directly leverage the quantity of interest from the future but have to compose with a potentially weak scalar signal (an estimate of the return). In this paper we develop an approach for representation learning in RL that sits in between these two extremes: we propose to learn what to model in a way that can directly help value prediction. To this end we determine which features of the future trajectory provide useful information to predict the associated return. This provides us with tractable prediction targets that are directly relevant for a task, and can thus accelerate learning of the value function. The idea can be understood as reasoning, in hindsight, about which aspects of the future observations could help past value prediction. We show how this can help dramatically even in simple policy evaluation settings. We then test our approach at scale in challenging domains, including on 57 Atari 2600 games.
reject
This paper studies the problem of estimating the value function in an RL setting by learning a representation of the value function. While this topic is one of general interest to the ICLR community, the paper would benefit from a more careful revision and reorganization following the suggestions of the reviewers.
test
[ "Byev_nUoFr", "Hkeh6HcjsB", "ryezItX9jr", "HyeFfYm5sS", "Skg30_m5sB", "BylXiem0KH", "Skg7ydsXcr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a new model-based reinforcement learning method, termed hindsight modelling. The method works by training a value function which, in addition to depending on information available at the present time is conditioned on some learned embedding of a partial future trajectory. A model is then traine...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rJxBa1HFvS", "HyeFfYm5sS", "Byev_nUoFr", "BylXiem0KH", "Skg7ydsXcr", "iclr_2020_rJxBa1HFvS", "iclr_2020_rJxBa1HFvS" ]
iclr_2020_H1g8p1BYvS
Adversarial Filters of Dataset Biases
Large-scale benchmark datasets have been among the major driving forces in AI, supporting training of models and measuring their progress. The key assumption is that these benchmarks are realistic approximations of the target tasks in the real world. However, while machine performance on these benchmarks advances rapidly --- often surpassing human performance --- it still struggles on the target tasks in the wild. This raises an important question: whether the surreally high performance on existing benchmarks is inflated due to spurious biases in them, and if so, how we can effectively revise these benchmarks to better simulate more realistic problem distributions in the real world.   In this paper, we posit that while real world problems consist of a great deal of long-tail problems, existing benchmarks are overly populated with a great deal of similar (thus non-tail) problems, which in turn, leads to a major overestimation of true AI performance. To address this challenge, we present a novel framework of Adversarial Filters to investigate model-based reduction of dataset biases. We discuss that the optimum bias reduction via AFOptimum is intractable, thus propose AFLite, an iterative greedy algorithm that adversarially filters out data points to identify a reduced dataset with more realistic problem distributions and considerably less spurious biases. AFLite is lightweight and can in principle be applied to any task and dataset. We apply it to popular benchmarks that are practically solved --- ImageNet and Natural Language Inference (SNLI, MNLI, QNLI) --- and present filtered counterparts as new challenge datasets where the model performance drops considerably (e.g., from 84% to 24% for ImageNet and from 92% to 62% for SNLI), while human performance remains high. An extensive suite of analysis demonstrates that AFLite effectively reduces measurable dataset biases in both the synthetic and real datasets. Finally, we introduce new measures of dataset biases based on K-nearest-neighbors to help guide future research on dataset development and bias reduction.
reject
This paper proposes to address the issue of biases and artifacts in benchmark datasets through the use of adversarial filtering. That is, removing training and test examples that a baseline model or ensemble gets right. The paper is borderline, and could have flipped to an accept if the target acceptance rate for the conference were a bit higher. All three reviewers ultimately voted weakly in favor of it, especially after the addition of the new out-of-domain generalization results. However, reviewers found it confusing in places, and R2 wasn't fully convinced that this should be applied in the settings the authors suggest. This paper raises some interesting and controversial points, but after some private discussion, there wasn't a clear consensus that publishing it as is would do more good than harm.
train
[ "S1gBu5PLtB", "HkgJVxk6tS", "rJeVCr85sS", "rJgdQ_85sS", "HylVBv8csr", "SJxEp8U9oB", "H1xnLtZkcS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes to learn a subset of a given dataset that acts as an adversary, that hurts the model performance when used as a training dataset. The central claim of the paper is that existing datasets on which models are trained are potentially biased, and are not reflective of real world scenarios. By disca...
[ 6, 6, -1, -1, -1, -1, 6 ]
[ 3, 1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_H1g8p1BYvS", "iclr_2020_H1g8p1BYvS", "iclr_2020_H1g8p1BYvS", "S1gBu5PLtB", "HkgJVxk6tS", "H1xnLtZkcS", "iclr_2020_H1g8p1BYvS" ]
iclr_2020_SJev6JBtvH
Testing For Typicality with Respect to an Ensemble of Learned Distributions
Good methods of performing anomaly detection on high-dimensional data sets are needed, since algorithms which are trained on data are only expected to perform well on data that is similar to the training data. There are theoretical results on the ability to detect if a population of data is likely to come from a known base distribution, which is known as the goodness-of-fit problem, but those results require knowing a model of the base distribution. The ability to correctly reject anomalous data hinges on the accuracy of the model of the base distribution. For high dimensional data, learning an accurate-enough model of the base distribution such that anomaly detection works reliably is very challenging, as many researchers have noted in recent years. Existing methods for the goodness-of-fit problem do not account for the fact that a model of the base distribution is learned. To address that gap, we offer a theoretically motivated approach to account for the density learning procedure. In particular, we propose training an ensemble of density models, considering data to be anomalous if the data is anomalous with respect to any member of the ensemble. We provide a theoretical justification for this approach, proving first that a test on typicality is a valid approach to the goodness-of-fit problem, and then proving that for a correctly constructed ensemble of models, the intersection of typical sets of the models lies in the interior of the typical set of the base distribution. We present our method in the context of an example on synthetic data in which the effects we consider can easily be seen.
reject
The paper proposes a new method for testing whether new data comes from the same distribution as training data without having an a-priori density model of the training data. This is done by looking at the intersection of typical sets of an ensemble of learned models. On the theoretical side, the paper was received positively by all reviewers. The theoretical results were deemed strong, and the ideas in the paper were considered novel. The problem setting was considered relevant, and seen as a good proposal to deal with the shortcoming of models on out of distribution data. However, the lack of empirical results on at least somewhat realistic datasets (e.g. MNIST) was commented on by all reviewers. The authors only present a toy experiment. The authors have explained their decision, but I agree with R1 that it would be appropriate in such situations to present the toy experiment next to a more realistic dataset. This also means that the effectiveness of the proposed method in real settings is as of yet unclear. Although the provided toy example was considered clear and illuminating, the clarity of the text could still be improved. Although the reviewers had a spread in their final score, I think they would all agree that the direction this paper takes is very exciting, but that the current version of the paper is somewhat premature. Thus, unfortunately, I have to recommend rejection at this point.
train
[ "rygtsAw7YB", "rJx8OwSnsH", "H1l0rTEhjr", "HJl_WFgjir", "Syg0lMC9oH", "B1gQLJXzYB", "BkxifG02KH", "Skxa7Wkmcr", "Skeymf3J5B", "BylKbgx8KH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "This paper proposes to use ensembles of estimated probability distributions in hypothesis testing for anomaly detection.\nWhile the problem of density estimation with its application to anomaly detection is relevant, I have a number of concerns listed below:\n\n- Overall, this paper is not clearly written and it i...
[ 3, -1, -1, -1, -1, 1, 6, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, 4, 4, -1, -1, -1 ]
[ "iclr_2020_SJev6JBtvH", "H1l0rTEhjr", "iclr_2020_SJev6JBtvH", "rygtsAw7YB", "B1gQLJXzYB", "iclr_2020_SJev6JBtvH", "iclr_2020_SJev6JBtvH", "Skeymf3J5B", "BylKbgx8KH", "iclr_2020_SJev6JBtvH" ]
iclr_2020_HyewT1BKvr
SpectroBank: A filter-bank convolutional layer for CNN-based audio applications
We propose and investigate the design of a new convolutional layer where kernels are parameterized functions. This layer aims at being the input layer of convolutional neural networks for audio applications. The kernels are defined as functions having a band-pass filter shape, with a limited number of trainable parameters. We show that networks having such an input layer can achieve state-of-the-art accuracy on several audio classification tasks. This approach, while reducing the number of weights to be trained along with network training time, enables larger kernel sizes, an advantage for audio applications. Furthermore, the learned filters bring additional interpretability and a better understanding of the data properties exploited by the network.
reject
The paper proposed a parameterized convolution layer using predefined filterbanks. It has the benefit of fewer parameters to optimize and better interpretability. The original submission failed to include much related work in the discussion, which was addressed during the rebuttal. The main concerns for this paper are the limited novelty and insufficient experimental validation and comparisons: * There has been existing work using sinc-parameterized filters, learnable Gammatones, etc., which is very similar to the proposed method. Also, in the rebuttal, the authors acknowledged that "We did not claim that cosine modulation was the novelty in our paper" and that it is "just a way of simplifying implementation and dealing with real values instead of complex ones" and "addressing the question of convergence of parametric filter banks to perceptual scale". * Although the authors addressed the missing related work problem by including it in the discussion, the experimental sections need more work to include comparisons to those methods, as well as more validation on different datasets to address the concern about the generalization of the proposed method.
train
[ "rkge3Hj6tS", "rkxEAorhsH", "BJgpQEuhtr", "SJeDD_-nor", "B1etp1gisB", "HJgqWxgsoH", "Syxf1vAciS", "HygHvXCqjB", "SygoIGCcoH", "Bkg5AvUOor", "BklbJDXJqB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes to specify the first layer of a CNN for audio applications with predefined filterbanks from the signal processing community. Those latter are only specified by a limited number of parameters, such as the bandwidth or the central frequency of the filter, and those parameters are then optimized t...
[ 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 5, -1, 3, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_HyewT1BKvr", "SJeDD_-nor", "iclr_2020_HyewT1BKvr", "BklbJDXJqB", "Syxf1vAciS", "HygHvXCqjB", "SygoIGCcoH", "rkge3Hj6tS", "rkge3Hj6tS", "BJgpQEuhtr", "iclr_2020_HyewT1BKvr" ]
iclr_2020_B1lda1HtvB
Feature Selection using Stochastic Gates
Feature selection problems have been extensively studied in the setting of linear estimation, for instance LASSO, but less emphasis has been placed on feature selection for non-linear functions. In this study, we propose a method for feature selection in high-dimensional non-linear function estimation problems. The new procedure is based on directly penalizing the ℓ0 norm of features, or the count of the number of selected features. Our ℓ0 based regularization relies on a continuous relaxation of the Bernoulli distribution, which allows our model to learn the parameters of the approximate Bernoulli distributions via gradient descent. The proposed framework simultaneously learns a non-linear regression or classification function while selecting a small subset of features. We provide an information-theoretic justification for incorporating Bernoulli distribution into our approach. Furthermore, we evaluate our method using synthetic and real-life data and demonstrate that our approach outperforms other embedded methods in terms of predictive performance and feature selection.
reject
The authors propose a method for feature selection in non-linear models by using an appropriate continuous relaxation of binary feature selection variables. The reviewers found that the paper contains several interesting methodological contributions. However, they noted that the methodology rests on very strong assumptions. Moreover, the experimental evaluation lacks comparisons with other methods for non-linear feature selection, such as those of Doquet et al and Chang et al.
train
[ "ryeWRH4TFr", "HylAbOm2iB", "ByeWmvX3jH", "rkeInLm3jr", "rye3z0ytcr", "H1e8061p9H", "SJetxqOMuB", "SkeKx9U6vB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "The author rebuttal sufficiently addresses my concerns, so I am upgrading my score.\n\n***\n\nThe paper considers the problem of embedded feature selection for supervised learning with nonlinear functions. A feature subset is evaluated via the loss function in a \"soft\" manner: a fraction of an individual feature...
[ 6, -1, -1, -1, 3, 3, -1, -1 ]
[ 3, -1, -1, -1, 4, 3, -1, -1 ]
[ "iclr_2020_B1lda1HtvB", "ryeWRH4TFr", "rye3z0ytcr", "H1e8061p9H", "iclr_2020_B1lda1HtvB", "iclr_2020_B1lda1HtvB", "SkeKx9U6vB", "iclr_2020_B1lda1HtvB" ]
iclr_2020_B1xu6yStPH
Using Explainabilty to Detect Adversarial Attacks
Deep learning models are often sensitive to adversarial attacks, where carefully-designed input samples can cause the system to produce incorrect decisions. Here we focus on the problem of detecting attacks, rather than robust classification, since detecting that an attack occurs may be even more important than avoiding misclassification. We build on advances in explainability, where activity-map-like explanations are used to justify and validate decisions, by highlighting features that are involved with a classification decision. The key observation is that it is hard to create explanations for incorrect decisions. We propose EXAID, a novel attack-detection approach, which uses model explainability to identify images whose explanations are inconsistent with the predicted class. Specifically, we use SHAP, which uses Shapley values in the space of the input image, to identify which input features contribute to a class decision. Interestingly, this approach does not require modifying the attacked model, and it can be applied without modelling a specific attack. It can therefore be applied successfully to detect unfamiliar attacks, that were unknown at the time the detection model was designed. We evaluate EXAID on two benchmark datasets CIFAR-10 and SVHN, and against three leading attack techniques, FGSM, PGD and C&W. We find that EXAID improves over the SoTA detection methods by a large margin across a wide range of noise levels, improving detection from 70% to over 90% for small perturbations.
reject
This paper proposes EXAID, a method to detect adversarial attacks by building on advances in explainability (particularly SHAP), where activity-map-like explanations are used to justify and validate decisions. Though it may contain some valuable ideas, the execution is not satisfying, with various issues raised in the reviews. No rebuttal was provided.
test
[ "r1xUKwB0FH", "Bke1qCDAtB", "Skxtu9afqB", "HJeuJ1NX9S", "SylGovyRFS", "rJeIFIAcYS", "Hyxa0GMcFH", "r1l5cbcsPB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "This paper suggests a method for detecting adversarial attacks known as EXAID, which leverages deep learning explainability techniques to detect adversarial examples. The method works by looking at the prediction made by the classifier as well as the output of the explainability method, and labelling the input as ...
[ 3, 3, 1, 3, -1, -1, -1, -1 ]
[ 5, 3, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2020_B1xu6yStPH", "iclr_2020_B1xu6yStPH", "iclr_2020_B1xu6yStPH", "iclr_2020_B1xu6yStPH", "rJeIFIAcYS", "Hyxa0GMcFH", "r1l5cbcsPB", "iclr_2020_B1xu6yStPH" ]
iclr_2020_H1eKT1SFvH
Towards Effective 2-bit Quantization: Pareto-optimal Bit Allocation for Deep CNNs Compression
State-of-the-art quantization methods can compress deep neural networks down to 4 bits without losing accuracy. However, when it comes to 2 bits, the performance drop is still noticeable. One problem in these methods is that they assign equal bit rate to quantize weights and activations in all layers, which is not reasonable in the case of high rate compression (such as 2-bit quantization), as some of the layers in deep neural networks are sensitive to quantization and performing coarse quantization on these layers can hurt the accuracy. In this paper, we address an important problem of how to optimize the bit allocation of weights and activations for deep CNNs compression. We first explore the additivity of output error caused by quantization and find that the additivity property holds for deep neural networks which are continuously differentiable in the layers. Based on this observation, we formulate the optimal bit allocation problem of weights and activations in a joint framework and propose a very efficient method to solve the optimization problem via Lagrangian Formulation. Our method obtains excellent results on deep neural networks. It can compress deep CNN ResNet-50 down to 2 bits with only 0.7% accuracy loss. To the best of our knowledge, this is the first paper that reports 2-bit results on deep CNNs without hurting the accuracy.
reject
This work presents a method for inferring the optimal bit allocation for quantization of weights and activations in CNNs. The formulation is sound and the experiments are complete. However, the main concern is that the paper is very similar to a recent work by the authors, which is not cited.
train
[ "SkgGDCrhiS", "rJlLBT1Yjr", "SkgvAjJKsr", "rJehZskYoB", "B1etb9GNjr", "SyeB8pofiS", "SkxS-gdzjH", "BklwCkdGoH", "Hkx3g0MZoS", "rylTYKApFB", "rJx2sPqAFS", "HJeAzEJQqS", "BJezfXDH_S", "BJl5neg1_H", "Sygq4eUhDH", "Skg_McpoPH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public", "public" ]
[ "I appreciate the authors' responses to my questions. I can see that the paper now considers the efficiency issue of the mixed-precision feedforward more fairly. ", "\nWe thank all reviewers for their careful reviews, insightful comments and feedback on our paper. The draft has been revised accordingly. The revis...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, -1, -1, -1, -1 ]
[ "rJx2sPqAFS", "iclr_2020_H1eKT1SFvH", "rJx2sPqAFS", "HJeAzEJQqS", "rylTYKApFB", "Skg_McpoPH", "BklwCkdGoH", "rJx2sPqAFS", "HJeAzEJQqS", "iclr_2020_H1eKT1SFvH", "iclr_2020_H1eKT1SFvH", "iclr_2020_H1eKT1SFvH", "BJl5neg1_H", "Sygq4eUhDH", "iclr_2020_H1eKT1SFvH", "iclr_2020_H1eKT1SFvH" ]
iclr_2020_Skg2pkHFwS
Emergence of Collective Policies Inside Simulations with Biased Representations
We consider a setting where biases are involved when agents internalise an environment. Agents have different biases, all of which result in imperfect evidence collected for taking optimal actions. Throughout the interactions, each agent asynchronously internalises their own predictive model of the environment and forms a virtual simulation within which the agent plays trials of the episodes in entirety. In this research, we focus on developing a collective policy trained solely inside agents' simulations, which can then be transferred to the real-world environment. The key idea is to let agents imagine together; make them take turns to host virtual episodes within which all agents participate and interact with their own biased representations. Since agents' biases vary, the collective policies developed while sequentially visiting the internal simulations complement one another's shortcomings. In our experiment, the collective policies consistently achieve significantly higher returns than the best individually trained policies.
reject
This paper presents an ensemble method for reinforcement learning. The method trains an ensemble of transition and reward models. Each element of this ensemble has a different view of the data (for example, ablated observation pixels) and a different latent space for its models. A single (collective) policy is then trained, by learning from trajectories generated from each of the models in the ensemble. The collective policy makes direct use of the latent spaces and models in the ensemble by means of a translator that maps one latent space into all the other latent spaces, and an aggregator that combines all the model outputs. The method is evaluated on the CarRacing and VizDoom environments. The reviewers raised several concerns about the paper. The evaluations were not convincing, with artificially weak baselines, and the method only worked well in one of the two tested environments (reviewer 2). The paper does not adequately connect to related work on model-based RL (reviewers 1 and 2). The paper does not motivate its artificial setting (reviewers 2 and 1). The paper's presentation lacks clarity, as it uses non-standard terminology and notation without adequate explanation (reviewers 1 and 3). Technical aspects of the translator component were also unclear to multiple reviewers (reviewers 1, 2 and 3). The authors found the review comments to be helpful for future work, but provided no additional clarifications. The paper is not ready for publication.
train
[ "HJlU5046OH", "H1eTRv42sH", "Bkg92xTnFr", "rJlsPbo0KH" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies the settings where multiple agents each act independently in different copies of an environment, without interacting with each other. Each agent uses model-based learning, learning a representation of the world, then learning a controller against the learned world model (learned with an evolution...
[ 3, -1, 1, 3 ]
[ 3, -1, 5, 3 ]
[ "iclr_2020_Skg2pkHFwS", "iclr_2020_Skg2pkHFwS", "iclr_2020_Skg2pkHFwS", "iclr_2020_Skg2pkHFwS" ]
iclr_2020_H1epaJSYDS
Anchor & Transform: Learning Sparse Representations of Discrete Objects
Learning continuous representations of discrete objects such as text, users, and items lies at the heart of many applications including text and user modeling. Unfortunately, traditional methods that embed all objects do not scale to large vocabulary sizes and embedding dimensions. In this paper, we propose a general method, Anchor & Transform (ANT) that learns sparse representations of discrete objects by jointly learning a small set of anchor embeddings and a sparse transformation from anchor objects to all objects. ANT is scalable, flexible, end-to-end trainable, and allows the user to easily incorporate domain knowledge about object relationships (e.g. WordNet, co-occurrence, item clusters). ANT also recovers several task-specific baselines under certain structural assumptions on the anchors and transformation matrices. On text classification and language modeling benchmarks, ANT demonstrates stronger performance with fewer parameters as compared to existing vocabulary selection and embedding compression baselines.
reject
The paper proposes a method to produce embeddings of discrete objects, jointly learning a small set of anchor embeddings and a sparse transformation from anchor objects to all the others. While the paper is well written, and proposes an interesting solution, the contribution seems rather incremental (as noted by several reviewers), considering the existing literature in the area. Also, after discussions the usefulness of the method remains a bit unclear - it seems some engineering (related to sparse operations) is still required to validate the viability of the approach.
train
[ "B1l9VtzZqB", "HygycRZhoS", "ByeF7mM3sH", "BkxKAnZ3sH", "SJxj4JM3jr", "S1ghoqZhiB", "HyxrQW3foH", "S1eBeeATKS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This manuscript proposed to represent the embedding matrix as a small set of anchor embedding and sparse transformation. The paper is trying to be general-purpose, end-to-end trainable, and able to incorporate domain knowledge. Experimental results show that it is possible to compress the embedding in the proposed...
[ 3, -1, -1, -1, -1, -1, 3, 6 ]
[ 1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_H1epaJSYDS", "S1eBeeATKS", "iclr_2020_H1epaJSYDS", "HyxrQW3foH", "HygycRZhoS", "B1l9VtzZqB", "iclr_2020_H1epaJSYDS", "iclr_2020_H1epaJSYDS" ]
iclr_2020_Hygy01StvH
Impact of the latent space on the ability of GANs to fit the distribution
The goal of generative models is to model the underlying data distribution of a sample based dataset. Our intuition is that an accurate model should in principle also include the sample based dataset as part of its induced probability distribution. To investigate this, we look at fully trained generative models using the Generative Adversarial Networks (GAN) framework and analyze the resulting generator on its ability to memorize the dataset. Further, we show that the size of the initial latent space is paramount to allow for an accurate reconstruction of the training data. This gives us a link to compression theory, where Autoencoders (AE) are used to lower bound the reconstruction capabilities of our generative model. Here, we observe similar results to the perception-distortion tradeoff (Blau & Michaeli (2018)). Given a small latent space, the AE produces low quality and the GAN produces high quality outputs from a perceptual viewpoint. In contrast, the distortion error is smaller for the AE. By increasing the dimensionality of the latent space the distortion decreases for both models, but the perceptual quality only increases for the AE.
reject
The reviewers have pointed out several major deficiencies of the paper, which the authors decided not to address.
train
[ "SkxSd5_otS", "S1ecMH32YS", "SkxN3oeEqB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Impact of the latent space on the ability of GANs to fit the distribution\n\nThis paper purports to study the behavior of latent variable generative models by examining how the dimensionality of the latent space affects the ability of said models to reconstruct samples from the dataset. The authors perform experim...
[ 1, 1, 1 ]
[ 5, 4, 5 ]
[ "iclr_2020_Hygy01StvH", "iclr_2020_Hygy01StvH", "iclr_2020_Hygy01StvH" ]
iclr_2020_r1geR1BKPr
MULTI-STAGE INFLUENCE FUNCTION
Multi-stage training and knowledge transfer from a large-scale pretrain task to various fine-tune end tasks have revolutionized natural language processing (NLP) and computer vision (CV), with state-of-the-art performances constantly being improved. In this paper, we develop a multi-stage influence function score to track predictions from a finetune model all the way back to the pretrain data. With this score, we can identify the pretrain examples in the pretrain task that contribute most to a prediction in the fine-tune task. The proposed multi-stage influence function generalizes the original influence function for a single model in Koh et al 2017, thereby enabling influence computation through both pretrain and fine-tune models. We test our proposed method in various experiments to show its effectiveness and potential applications.
reject
This paper extends the idea of influence functions (aka the implicit function theorem) to multi-stage training pipelines, and also adds an L2 penalty to approximate the effect of training for a limited number of iterations. I think this paper is borderline. I also think that R3 had the best take and questions on this paper. Pros: - The main idea makes sense, and could be used to understand real training pipelines better. - The experiments, while mostly small-scale, answer most of the immediate questions about this model. Cons: - The paper still isn't all that polished. E.g. on page 4: "Algorithm 1 shows how to compute the influence score in (11). The pseudocode for computing the influence function in (11) is shown in Algorithm 1" - I wish the image dataset experiments had been done with larger images and models. Ultimately, the straightforwardness of the extension and the relatively niche applications mean that although the main idea is sound, the quality and the overall impact of this paper don't quite meet the bar.
train
[ "rJeh0c5hjB", "SylSRUuqjH", "ByxWItucsr", "SkeI0v_9sS", "rkxd5xEcFB", "rJlosyF6tB", "rJgdpYkL9S" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for their constructive comments! Below is a summary of our reply to the reviewers and the main changes to the paper.\n\n1) To answer R1 and R3’s questions on the use case of our model, we ran additional experiments to show that removing examples from the highest influence scores in pretr...
[ -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, 5, 4, 5 ]
[ "iclr_2020_r1geR1BKPr", "rJgdpYkL9S", "rkxd5xEcFB", "rJlosyF6tB", "iclr_2020_r1geR1BKPr", "iclr_2020_r1geR1BKPr", "iclr_2020_r1geR1BKPr" ]
iclr_2020_B1xxAJHFwS
A Finite-Time Analysis of Q-Learning with Neural Network Function Approximation
Q-learning with neural network function approximation (neural Q-learning for short) is among the most prevalent deep reinforcement learning algorithms. Despite its empirical success, the non-asymptotic convergence rate of neural Q-learning remains virtually unknown. In this paper, we present a finite-time analysis of a neural Q-learning algorithm, where the data are generated from a Markov decision process and the action-value function is approximated by a deep ReLU neural network. We prove that neural Q-learning finds the optimal policy with O(1/T) convergence rate if the neural function approximator is sufficiently overparameterized, where T is the number of iterations. To our best knowledge, our result is the first finite-time analysis of neural Q-learning under non-i.i.d. data assumption.
reject
This was an extremely difficult paper to decide, as it attracted significant commentary (and controversy) that led to non-trivial corrections in the results. One of the main criticisms is that the work is an incremental combination of existing results. A potentially bigger concern is that of correctness: the main convergence rate was changed from 1/T to 1/sqrt{T} during the rebuttal and revision process. Such a change is not trivial and essentially proves the initial submission was incorrect. In general, it is not prudent to accept a hastily revised theory paper without a proper assessment of correctness in its modified form. Therefore, I think it would be premature to accept this paper without a full review cycle that assessed the revised form. There also appear to be technical challenges from the discussion that remain unaddressed. Any resubmission will also have to highlight significance and make a stronger case for the novelty of the results.
train
[ "SyeDLwWqjH", "HJgIU4UPoS", "rklbQE8vjS", "HygAFXIwjB", "rylHBQLvsH", "B1l0WBG3tB", "HJxVNuXatB", "HJeNiNNRYS", "BylHEpDWcH", "BylDqq5RKS", "rkgFLhm3YB", "BJgdOt4ttr", "Hkx_AGivYH", "rkxIbYcwtS", "H1g49268FS", "rJgmqLVQtH", "BklyMs56ur", "BJxg-5cadS", "H1e34GDFdH", "r1g3c5IYuB"...
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "public", "public", "author", "public", "author", "author", "public", "public" ]
[ "Thank the authors for addressing all my comments. I feel satisfied with all the revisions. Although the rate has been changed to $O(1/\\sqrt{T})$, I still feel this paper makes a good theoretical contribution to neural Q-learning. ", "Thank you for reviewing our submission. We have addressed all your questions i...
[ -1, -1, -1, -1, -1, 3, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 3, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rylHBQLvsH", "iclr_2020_B1xxAJHFwS", "HJeNiNNRYS", "B1l0WBG3tB", "HJxVNuXatB", "iclr_2020_B1xxAJHFwS", "iclr_2020_B1xxAJHFwS", "iclr_2020_B1xxAJHFwS", "BylDqq5RKS", "rkgFLhm3YB", "BJgdOt4ttr", "Hkx_AGivYH", "iclr_2020_B1xxAJHFwS", "H1g49268FS", "rJgmqLVQtH", "H1e34GDFdH", "H1e34GDFd...
iclr_2020_r1xZAkrFPr
Deep Ensembles: A Loss Landscape Perspective
Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space. We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions. Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods.
reject
Paper https://arxiv.org/abs/1802.10026 (Garipov et. al, NeurIPS 2018) shows that one can find curves between two independently trained solutions along which the loss is relatively constant. The authors of this ICLR submission claim as a key contribution that they show the weights along the path correspond to different models that make different predictions ("Note that prior work on loss landscapes has focused on mode-connectivity and low-loss tunnels, but has not explicitly focused on how diverse the functions from different modes are, beyond an initial exploration in Fort & Jastrzebski (2019)"). Much of the disagreement between two of the reviewers and the authors is whether this point had already been shown in 1802.10026. It is in fact very clear that 1802.10026 shows that different points on the curve correspond to diverse functions. Figure 2 (right) of this paper shows the test error of an _ensemble_ of predictions made by the network for the parameters at one end of the curve, and the network described by \phi_\theta(t) at some point t along the curve: since the error goes down and changes significantly as t varies, the functions corresponding to different parameter settings along these curves must be diverse. This functional diversity is also made explicit multiple times in 1802.10026, which clearly says that this result shows that the curves contain meaningfully different representations. In response to R3, the authors incorrectly claim that "Figure 2 in Garipov et al. only plots loss and accuracy, and does not measure function space similarity, between different initializations, or along the tunnel at all. Just by looking at accuracy and loss values, there is no way to infer how similar the predictions of the two functions are." 
But Figure 2 (right) is actually showing the test error of an average of predictions of networks with parameters at different points along the curve, how it changes as one moves along the curve, and the improved accuracy of the ensemble over using one of the endpoints. If the functions associated with different parameters along the curve were the same, averaging their predictions would not help performance. Moreover, Figure 6 (bottom left, dashed lines) in the appendix of 1802.10026 shows the improvement in performance in ensembling points along the curve over ensembling independently trained networks. Section A6 (Appendix) also describes ensembling along the curve in some detail, with several quantitative results. There is no sense in ensembling models along the curve if they were the same model. These results unequivocally demonstrate that the points on the curve have functional diversity, and this connection is made explicit multiple times in 1802.10026 with the claim of meaningfully different representations: “This result also demonstrates that these curves do not exist only due to degenerate parametrizations of the network (such as rescaling on either side of a ReLU); instead, points along the curve correspond to meaningfully different representations of the data that can be ensembled for improved performance.” Additionally, other published work has built on this observation, such as 1907.07504 (UAI 2019), which performs Bayesian model averaging over the mode connecting subspace, relying on diversity of functions in this space; that work also visualizes the different functions arising in this space. It is incorrect to attribute these findings to Fort & Jastrzebski (2019) or the current submission. It is a positive contribution to build on prior work, but what is prior work and what is new should be accurately characterized, and currently is not, even after the discussion phase where multiple reviewers raised the same concern. 
Reviewers appreciated the broader investigation of diversity and its effect on ensembling, and the more detailed study regarding connecting curves. In addition to the concerns about inaccurate claims regarding prior work and novelty (which included aspects of the mode connectivity work but also other works), several reviewers also felt that the time-accuracy trade-offs of deep ensembles relative to standard approaches were not clearly presented, and comparisons were lacking. It would be simple and informative to do an experiment showing a runtime-accuracy trade-off curve for deep ensembles alongside FGE and various Bayesian deep learning methods and mc-dropout. It's also possible to use for example parallel MCMC chains to explore multiple quite different modes like deep ensembles but for Bayesian deep learning. For the paper to be accepted, it would need significant revisions, correcting the accuracy of claims, and providing such experiments.
train
[ "SkxGPxKsir", "ryxpOFb5ir", "rJxUsCwvjS", "HJe6GkuvoS", "rJg4TaPDjr", "rkl8KjQ6YH", "BJxk8KaptB", "H1l0JeopFB" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your response. \n\nReading your recent comments (“The authors indeed conducted much broader investigation than was done in previous works and these results are clearly written. Nevertheless, the investigated phenomena are not quite new“) it seems like you agree that our paper conducts a much broader ...
[ -1, -1, -1, -1, -1, 8, 3, 3 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "ryxpOFb5ir", "rJxUsCwvjS", "H1l0JeopFB", "rkl8KjQ6YH", "BJxk8KaptB", "iclr_2020_r1xZAkrFPr", "iclr_2020_r1xZAkrFPr", "iclr_2020_r1xZAkrFPr" ]
iclr_2020_rkxZCJrtwS
D3PG: Deep Differentiable Deterministic Policy Gradients
Over the last decade, two competing control strategies have emerged for solving complex control tasks with high efficacy. Model-based control algorithms, such as model-predictive control (MPC) and trajectory optimization, peer into the gradients of underlying system dynamics in order to solve control tasks with high sample efficiency. However, like all gradient-based numerical optimization methods, model-based control methods are sensitive to initializations and are prone to becoming trapped in local minima. Deep reinforcement learning (DRL), on the other hand, can somewhat alleviate these issues by exploring the solution space through sampling — at the expense of computational cost. In this paper, we present a hybrid method that combines the best aspects of gradient-based methods and DRL. We base our algorithm on the deep deterministic policy gradients (DDPG) algorithm and propose a simple modification that uses true gradients from a differentiable physical simulator to increase the convergence rate of both the actor and the critic. We demonstrate our algorithm on seven 2D robot control tasks, with the most complex one being a differentiable half cheetah with hard contact constraints. Empirical results show that our method boosts the performance of DDPG without sacrificing its robustness to local minima.
reject
This paper proposes a hybrid RL algorithm that uses model-based gradients from a differentiable simulator to accelerate learning of a model-free policy. While the method seems sound, the reviewers raised concerns about the experimental evaluation, particularly the lack of comparisons to prior works, and that the experiments do not show a clear improvement over the base algorithms that do not make use of the differentiable dynamics. I recommend rejecting this paper, since it is not obvious from the results that the increased complexity of the method can be justified by better performance, particularly since the method requires access to a simulator, which is not available for real-world experiments where sample complexity matters more.
train
[ "SJek1Z91cS", "rkxuCmq3iH", "SJgwyeiooH", "rklgox5jiS", "B1eIgYQjsr", "B1eFQ9Lcor", "rJgVDqggoB", "BJxm08CJcr", "rJxqEfvvcS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies optimal control problems where a physical simulator of the system is available, which outputs the gradient of the dynamics. Using the gradients proposed by the model, the authors propose to add two additional terms in the loss function for critic training in DDPG, where these to terms correspond...
[ 3, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rkxZCJrtwS", "rklgox5jiS", "iclr_2020_rkxZCJrtwS", "BJxm08CJcr", "SJek1Z91cS", "rJxqEfvvcS", "SJek1Z91cS", "iclr_2020_rkxZCJrtwS", "iclr_2020_rkxZCJrtwS" ]
iclr_2020_S1e-0kBYPB
Can I Trust the Explainer? Verifying Post-Hoc Explanatory Methods
For AI systems to garner widespread public acceptance, we must develop methods capable of explaining the decisions of black-box models such as neural networks. In this work, we identify two issues of current explanatory methods. First, we show that two prevalent perspectives on explanations—feature-additivity and feature-selection—lead to fundamentally different instance-wise explanations. In the literature, explainers from different perspectives are currently being directly compared, despite their distinct explanation goals. The second issue is that current post-hoc explainers have only been thoroughly validated on simple models, such as linear regression, and, when applied to real-world neural networks, explainers are commonly evaluated under the assumption that the learned models behave reasonably. However, neural networks often rely on unreasonable correlations, even when producing correct decisions. We introduce a verification framework for explanatory methods under the feature-selection perspective. Our framework is based on a non-trivial neural network architecture trained on a real-world task, and for which we are able to provide guarantees on its inner workings. We validate the efficacy of our evaluation by showing the failure modes of current explainers. We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed.
reject
The paper proposes a framework for generating evaluation tests for feature-based explainers. The framework provides guarantees on the behaviors of each trained model in that non-selected tokens are irrelevant for each prediction, and for each instance in the pruned dataset, one subset of clearly relevant tokens is selected. After reading the paper, I think there are a few issues with the current version: (1) the writing can be significantly improved: the motivation is unclear, which makes it difficult for readers to fully appreciate the work. It reads as if each part of the paper was written by a different person, so the transitions between parts are abrupt and the text is inconsistent. For example, the framework is targeted at NLP applications, but the introduction focuses on general-purpose explainers. The transition from the RCNN approach to the proposed framework is not well thought out, which leaves readers confused about what exactly the proposed framework is and what the novelty is. (2) The claimed properties of the proposed framework are rather straightforward derivations. The technical novelty is not as high as claimed in the paper. (3) The experiment results are not fully convincing. All the reviewers have read the authors' feedback and responded. It is agreed that the current version of the paper is not ready for publication.
train
[ "Bkgnv8LCtH", "HJerWDThiS", "rJxy1Mc0YH", "B1xuk-mfoB", "r1gcz1QziS", "HJlsZl7fiS", "r1gL0Rzzir", "Byg_L_yEqH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary - \n\nThe paper proposes a verification method for instance wise feature explanations. The verification framework uses an RCNN to identify two types of tokens a) the tokens that are not predictive of outcome b) the subset of clearly relevant tokens for prediction. The data used for RCNN is a pruned version...
[ 3, -1, 3, -1, -1, -1, -1, 3 ]
[ 3, -1, 3, -1, -1, -1, -1, 4 ]
[ "iclr_2020_S1e-0kBYPB", "B1xuk-mfoB", "iclr_2020_S1e-0kBYPB", "rJxy1Mc0YH", "Byg_L_yEqH", "Bkgnv8LCtH", "iclr_2020_S1e-0kBYPB", "iclr_2020_S1e-0kBYPB" ]
iclr_2020_SJlM0JSFDr
A Theoretical Analysis of Deep Q-Learning
Despite the great empirical success of deep reinforcement learning, its theoretical foundation is less well understood. In this work, we make the first attempt to theoretically understand the deep Q-network (DQN) algorithm (Mnih et al., 2015) from both algorithmic and statistical perspectives. Specifically, we focus on a slight simplification of DQN that fully captures its key features. Under mild assumptions, we establish the algorithmic and statistical rates of convergence for the action-value functions of the iterative policy sequence obtained by DQN. In particular, the statistical error characterizes the bias and variance that arise from approximating the action-value function using a deep neural network, while the algorithmic error converges to zero at a geometric rate. As a byproduct, our analysis provides justifications for the techniques of experience replay and target network, which are crucial to the empirical success of DQN. Furthermore, as a simple extension of DQN, we propose the Minimax-DQN algorithm for zero-sum Markov games with two players, which is deferred to the appendix due to space limitations.
reject
The authors offer theoretical guarantees for a simplified version of the deep Q-learning algorithm. However, the majority of the reviewers agree that the simplifying assumptions are so many that the results do not capture major important aspects of deep Q-learning (e.g., understanding good exploration strategies, understanding why deep nets are better approximators, and not using neural net classes that are so large that they can capture all non-parametric functions). To justify calling the paper a theoretical analysis of deep Q-learning, some of these aspects need to be addressed, or the motivation/title of the paper needs to be redefined.
train
[ "Skx3BiTjiH", "BJe7ITasir", "Hklsij6joH", "Bylbko6ooH", "SJej2cpsor", "r1xwdcaiiS", "SJeRYDrnYr", "SyeZTEeEFH", "SJlXXDdAcr" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nWe appreciate the valuable comments from the reviewer. We first address the concern on the assumption of i.i.d. sampling from a fixed behavioural policy and then address each detailed comments separately.\n\n\nSampling i.i.d. data from a behavioural policy:\n\nAs also pointed out by Reviewer 4, the challenges of...
[ -1, -1, -1, -1, -1, -1, 8, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "SyeZTEeEFH", "iclr_2020_SJlM0JSFDr", "Skx3BiTjiH", "SJeRYDrnYr", "r1xwdcaiiS", "SJlXXDdAcr", "iclr_2020_SJlM0JSFDr", "iclr_2020_SJlM0JSFDr", "iclr_2020_SJlM0JSFDr" ]
iclr_2020_rJe7CkrFvS
Improving Exploration of Deep Reinforcement Learning using Planning for Policy Search
Most Deep Reinforcement Learning methods perform local search and therefore are prone to get stuck on non-optimal solutions. Furthermore, in simulation based training, such as domain-randomized simulation training, the availability of a simulation model is not exploited, which potentially decreases efficiency. To overcome issues of local search and exploit access to simulation models, we propose the use of kino-dynamic planning methods as part of a model-based reinforcement learning method and to learn in an off-policy fashion from solved planning instances. We show that, even on a simple toy domain, D-RL methods (DDPG, PPO, SAC) are not immune to local optima and require additional exploration mechanisms. We show that our planning method exhibits a better state space coverage, collects data that allows for better policies than D-RL methods without additional exploration mechanisms and that starting from the planner data and performing additional training results in as good as or better policies than vanilla D-RL methods, while also creating data that is more fit for re-use in modified tasks.
reject
The paper is about exploration in deep reinforcement learning. The reviewers agree that this is an interesting and important topic, but the authors provide only a slim analysis and theoretical support for the proposed methods. Furthermore, the authors are encouraged to evaluate the proposed method on more than a single benchmark problem.
test
[ "rklbNiVniB", "S1ez8cE2jB", "HJeltOV2jS", "SygxmJz8YH", "ByxPXHFptS", "Bkefy-y0Fr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "• Thank you for your review!\n• @1.) Eventually, probably the most interesting final metric is task success and therefore achieved return – but that strongly depends on the task.\n\nWithout assumptions on how the reward is structured (with respect to the state space), it is not possible to exclude portions of the ...
[ -1, -1, -1, 3, 1, 1 ]
[ -1, -1, -1, 1, 4, 5 ]
[ "Bkefy-y0Fr", "ByxPXHFptS", "SygxmJz8YH", "iclr_2020_rJe7CkrFvS", "iclr_2020_rJe7CkrFvS", "iclr_2020_rJe7CkrFvS" ]
iclr_2020_HklE01BYDB
Improving Sample Efficiency in Model-Free Reinforcement Learning from Images
Training an agent to solve control tasks directly from high-dimensional images with model-free reinforcement learning (RL) has proven difficult. The agent needs to learn a latent representation together with a control policy to perform the task. Fitting a high-capacity encoder using a scarce reward signal is not only extremely sample inefficient, but also prone to suboptimal convergence. Two ways to improve sample efficiency are to learn a good feature representation and use off-policy algorithms. We dissect various approaches of learning good latent features, and conclude that the image reconstruction loss is the essential ingredient that enables efficient and stable representation learning in image-based RL. Following these findings, we devise an off-policy actor-critic algorithm with an auxiliary decoder that trains end-to-end and matches state-of-the-art performance across both model-free and model-based algorithms on many challenging control tasks. We release our code to encourage future research on image-based RL.
reject
The paper investigates how sample efficiency of image based model-free RL can be improved by including an image reconstruction loss as an auxiliary task and applies it to soft actor-critic. The method is demonstrated to yield a substantial improvement compared to SAC learned directly from pixels, and comparable performance to other prior works, such as SLAC and PlaNet, but with a simpler learning setup. The reviewers generally appreciate the clarity of presentation and good experimental evaluation. However, all reviewers raise concerns regarding limited novelty, as auxiliary losses for RL have been studied before, and the contribution is mainly in the design choices of the implementation. In this view, and given that the results are on a par with SOTA, the contribution of this paper seems too incremental for publishing in this venue, and I’m recommending rejection.
train
[ "r1ly8pPKuH", "H1l3eXJ8tB", "SJlvWnSmjS", "S1lOShrmsB", "HkeFDVFtur" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "The paper aims to tackle the problem of improving sample efficiency of model-free, off-policy reinforcement learning in an image-based environment. They do so by taking SAC and adding a deterministic autoencoder, trained end-to-end with the actor and critic, with the actor and critic trained on top of the learned ...
[ 6, 3, -1, -1, 6 ]
[ 5, 5, -1, -1, 4 ]
[ "iclr_2020_HklE01BYDB", "iclr_2020_HklE01BYDB", "iclr_2020_HklE01BYDB", "iclr_2020_HklE01BYDB", "iclr_2020_HklE01BYDB" ]
iclr_2020_Bkx4AJSFvB
Efficient Bi-Directional Verification of ReLU Networks via Quadratic Programming
Neural networks are known to be sensitive to adversarial perturbations. To investigate this undesired behavior we consider the problem of computing the distance to the decision boundary (DtDB) from a given sample for a deep NN classifier. In this work we present an iterative procedure where in each step we solve a convex quadratic programming (QP) task. Solving the single initial QP already results in a lower bound on the DtDB and can be used as a robustness certificate of the classifier around a given sample. In contrast to currently known approaches our method also provides upper bounds used as a measure of quality for the certificate. We show that our approach provides better or competitive results in comparison with a wide range of existing techniques.
reject
This article is concerned with sensitivity to adversarial perturbations. It studies the computation of the distance to the decision boundary from a given sample in order to obtain robustness certificates, and presents an iterative procedure to this end. This is a very relevant line of investigation. The reviewers found that the approach is different from previous ones (even if related quadratic constraints had been formulated in previous works). However, they expressed concerns with the presentation, missing details or intuition for the upper bounds, and the small size of the networks that are tested. The reviewers also mentioned that the paper could be clearer about the strengths and weaknesses of the proposed algorithm. The responses clarified a number of points from the initial reviews. However, some reviewers found that important aspects were still not addressed satisfactorily, specifically in relation to the justification of the approach to obtain upper bounds (although they acknowledge that the strategy seems at least empirically validated), and reiterated concerns about the scalability of the approach. Overall, this article is good, but not good enough.
train
[ "Bkegk_OnoH", "H1geDDdniH", "rkeJ4oFnjB", "r1xNzFdnoB", "rJedSuu2jH", "SyeGBpm3tH", "HyxxRpXCYB", "BklGRAFe9B" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "2. STRONGER ATTACKS:\nWe conducted additional experiments with FGSM replaced by 200-steps PGD (starting from the anchor point, no random sampling).\nExperiments show that QPRel-UB still outperforms them in $l_2$ setting on all architectures allowing for more samples to be verified as non-robust.\nResults are inclu...
[ -1, -1, -1, -1, -1, 3, 8, 6 ]
[ -1, -1, -1, -1, -1, 4, 1, 5 ]
[ "H1geDDdniH", "BklGRAFe9B", "r1xNzFdnoB", "SyeGBpm3tH", "HyxxRpXCYB", "iclr_2020_Bkx4AJSFvB", "iclr_2020_Bkx4AJSFvB", "iclr_2020_Bkx4AJSFvB" ]
iclr_2020_HJxVC1SYwr
Crafting Data-free Universal Adversaries with Dilate Loss
We introduce a method to create Universal Adversarial Perturbations (UAP) for a given CNN in a data-free manner. Data-free approaches suit scenarios where the original training data is unavailable for crafting adversaries. We show that the adversary generation with full training data can be approximated to a formulation without data. This is realized through a sequential optimization of the adversarial perturbation with the proposed dilate loss. Dilate loss basically maximizes the Euclidean norm of the output before nonlinearity at any layer. By doing so, the perturbation constrains the ReLU activation function at every layer to act roughly linear for data points and thus eliminates the dependency on data for crafting UAPs. Extensive experiments demonstrate that our method not only has theoretical support, but also achieves a higher fooling rate than the existing data-free work. Furthermore, we evidence improvement in limited data cases as well.
reject
This paper focuses on finding universal adversarial perturbations, that is, a single noise pattern that can be applied to any input to fool the network in many cases. Furthermore, it focuses on the data-free setting, where such a perturbation is found without having access to data (images) from the distribution that train and test data come from. The reviewers were very conflicted about this paper. Among other things, the strong experimental results and the clarity of writing and analysis were praised. However, there was also criticism of the amount of novelty compared to GDUAP, of the strong assumptions needed (potentially limiting the applicability), and of some weakness in the theoretical analysis. In the end, the paper in its current form does not seem convincing enough for me to recommend acceptance at ICLR.
train
[ "rygchoCqiS", "r1lsVjRcjB", "Syxaq9RqjS", "HJx99dCqiS", "H1l5NeGTFS", "BJxB-51Rtr", "ByeI1QS99B", "r1xcIWR35r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the Reviewer for the valuable feedback.\n\n1. Section 3.2 (the top of page 4) clarification: Additive means that $\\sigma_{R}(W_{1}X+W_{1}p1^{T})=\\sigma_{R}(W_{1}X)+\\sigma_{R}(W_{1}p1^{T})$. Please see the response to Reviewer 4 Q1.", "We thank the Reviewer for the valuable feedback.\n\n1. Novelty: Th...
[ -1, -1, -1, -1, 8, 3, 6, 3 ]
[ -1, -1, -1, -1, 3, 1, 1, 4 ]
[ "H1l5NeGTFS", "BJxB-51Rtr", "ByeI1QS99B", "r1xcIWR35r", "iclr_2020_HJxVC1SYwr", "iclr_2020_HJxVC1SYwr", "iclr_2020_HJxVC1SYwr", "iclr_2020_HJxVC1SYwr" ]
iclr_2020_Bklr0kBKvB
Geometry-aware Generation of Adversarial and Cooperative Point Clouds
Recent studies show that machine learning models are vulnerable to adversarial examples. In 2D image domain, these examples are obtained by adding imperceptible noises to natural images. This paper studies adversarial generation of point clouds by learning to deform those approximating object surfaces of certain categories. As 2D manifolds embedded in the 3D Euclidean space, object surfaces enjoy the general properties of smoothness and fairness. We thus argue that in order to achieve imperceptible surface shape deformations, adversarial point clouds should have the same properties with similar degrees of smoothness/fairness to the benign ones, while being close to the benign ones as well when measured under certain distance metrics of point clouds. To this end, we propose a novel loss function to account for imperceptible, geometry-aware deformations of point clouds, and use the proposed loss in an adversarial objective to attack representative models of point set classifiers. Experiments show that our proposed method achieves stronger attacks than existing methods, without introduction of noticeable outliers and surface irregularities. In this work, we also investigate an opposite direction that learns to deform point clouds of object surfaces in the same geometry-aware, but cooperative manner. Cooperatively generated point clouds are more favored by machine learning models in terms of improved classification confidence or accuracy. We present experiments verifying that our proposed objective succeeds in learning cooperative shape deformations.
reject
This paper offers an improved attack on 3D point clouds. Unfortunately, the contribution is unclear and on balance insufficient for acceptance.
train
[ "ByeqWqh5sS", "rklgh93cjH", "Byx3zY39sH", "SJxzT10hYH", "ryle0T519r", "H1e7frzf9H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your constructive comments. We have improved the paper based on these comments. Our responses to individual comments are as follows.\n\nQ1. Is Chamfer distance / Hausdorff distance essentially the L_2 / L_infinity norm?\n\nReply: We emphasize that the nature of Chamfer / Hausdorff distance is fundame...
[ -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, 4, 3, 3 ]
[ "ryle0T519r", "SJxzT10hYH", "H1e7frzf9H", "iclr_2020_Bklr0kBKvB", "iclr_2020_Bklr0kBKvB", "iclr_2020_Bklr0kBKvB" ]
iclr_2020_SkxHRySFvr
LEARNING TO IMPUTE: A GENERAL FRAMEWORK FOR SEMI-SUPERVISED LEARNING
Recent semi-supervised learning methods have been shown to achieve results comparable to their supervised counterparts while using only a small portion of labels in image classification tasks, thanks to their regularization strategies. In this paper, we take a more direct approach to semi-supervised learning and propose learning to impute the labels of unlabeled samples such that a network achieves better generalization when it is trained on these labels. We pose the problem in a learning-to-learn formulation which can easily be incorporated into state-of-the-art semi-supervised techniques and boost their performance, especially when the labels are limited. We demonstrate that our method is applicable to both classification and regression problems, including image classification and facial landmark detection tasks.
reject
There is insufficient support to recommend accepting this paper. The reviewers unanimously criticize the quality of the exposition, noting that many key elements in the main development and experimental setup are not clear. The significance of the contribution could be made stronger with some form of theoretical analysis. The current paper lacks depth and provides insufficient justification for the proposed approach. The submitted comments should help the authors improve the paper.
val
[ "S1giyndnsr", "BJxafid3jr", "SylTUcdhjB", "HyxT__vnir", "BkgLDSmstB", "BkxfphJd5H", "Syeqoi3d9r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the feedback and respond to the individual points below. \n\nQ1: why the strategy is effective should be further analyzed.\nRE: Our hypothesis is that models trained on more accurate labels will yield higher performance for a given task. Assuming that evaluation on a meta-validation set i...
[ -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "BkgLDSmstB", "BkxfphJd5H", "Syeqoi3d9r", "iclr_2020_SkxHRySFvr", "iclr_2020_SkxHRySFvr", "iclr_2020_SkxHRySFvr", "iclr_2020_SkxHRySFvr" ]