paper_id
stringlengths
19
21
paper_title
stringlengths
8
170
paper_abstract
stringlengths
8
5.01k
paper_acceptance
stringclasses
18 values
meta_review
stringlengths
29
10k
label
stringclasses
3 values
review_ids
list
review_writers
list
review_contents
list
review_ratings
list
review_confidences
list
review_reply_tos
list
iclr_2019_SJzvDjAcK7
Intriguing Properties of Learned Representations
A key feature of neural networks, particularly deep convolutional neural networks, is their ability to learn useful representations from data. The very last layer of a neural network is then simply a linear model trained on these learned representations. Despite their numerous applications in other tasks such as classification, retrieval, clustering etc., a.k.a. transfer learning, not much work has been published that investigates the structure of these representations or indeed whether structure can be imposed on them during the training process. In this paper, we study the effective dimensionality of the learned representations by models that have proved highly successful for image classification. We focus on ResNet-18, ResNet-50 and VGG-19 and observe that when trained on CIFAR10 or CIFAR100, the learned representations exhibit a fairly low rank structure. We propose a modification to the training procedure, which further encourages low rank structure on learned activations. Empirically, we show that this has implications for robustness to adversarial examples and compression.
rejected-papers
Dear authors, The reviewers all appreciated the interest of studying properties of the latent representations rather than of the weights. The impact of the rank on the robustness to adversarial attacks is also of interest. There were, however, two main issues raised. Due to the lack of confidence of some reviewers, I reviewed the paper myself and found the same issues: - Clarity could be improved. Some models are mentioned before being described (N-LR) and some important details are missing. In particular, we sometimes lose track of the goal of the experiments. For instance, there are quite a few experiments on the further reduction of the rank of the representation but it is not clear what to extract from them. - More importantly, there are several important gaps in the analysis. In particular: a/ As many reviewers have pointed out, low-rank constraints on the weight matrices induce low-rank representations if the activation function is linear. As it is not, this might not be true but deserves a discussion. b/ You state that the rank constraint has little effect given that the actual rank is much less than the constraint. However, one would expect the resulting rank to be a smooth function of the rank of the constraint. Since there is a discrepancy between ResNet N-LR and ResNet 1-LR, this should be investigated. c/ For the robustness to black-box adversarial attacks, these attacks are constructed using the N-LR models. Is is thus not too surprising that those models do not perform as well. Thus, despite the lack of confidence of one reviewer (the question about the N-LR models might stem from the fact that it is used before being introduced), I strongly encourage you to take their comments into account for a future submission.
train
[ "HkeQVaRCC7", "B1en5riAAm", "ByxW00ct07", "r1ezu6bEA7", "Bkglnp-N0X", "rJlSboq02m", "rJg8x9KChQ", "rylpQJf_37" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Comparison with [2]\n\nThe method proposed in this paper is indeed different than [2], the proposed one being in the form of a penalty while the latter is a reparametrization. There is nonetheless a strong similarity and a comparison with [2] should be done, even if they do not study it from the same perspective. ...
[ -1, -1, -1, -1, -1, 3, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, 2, 2 ]
[ "B1en5riAAm", "ByxW00ct07", "r1ezu6bEA7", "rJlSboq02m", "rylpQJf_37", "iclr_2019_SJzvDjAcK7", "iclr_2019_SJzvDjAcK7", "iclr_2019_SJzvDjAcK7" ]
iclr_2019_SJzwvoCqF7
On Tighter Generalization Bounds for Deep Neural Networks: CNNs, ResNets, and Beyond
We propose a generalization error bound for a general family of deep neural networks based on the depth and width of the networks, as well as the spectral norm of weight matrices. Through introducing a novel characterization of the Lipschitz properties of neural network family, we achieve a tighter generalization error bound. We further obtain a result that is free of linear dependence on norms for bounded losses. Besides the general deep neural networks, our results can be applied to derive new bounds for several popular architectures, including convolutional neural networks (CNNs), residual networks (ResNets), and hyperspherical networks (SphereNets). When achieving same generalization errors with previous arts, our bounds allow for the choice of much larger parameter spaces of weight matrices, inducing potentially stronger expressive ability for neural networks.
rejected-papers
I'm quite concerned by the conversation with Anonymous, entitled "Why is the dependence...". My issues concern the empirical Rademacher complexity (ERC) and in particular the choice of the loss class for which the ERC is being computed. This class is obviously data dependent, but the Reviewers concerns centers on the nature of its data dependence. It is not valid to define the classes by the Jacobian's norm on the input data, as this _structure_ over the space of classes is data dependent, which is not kosher. The reviewer was gently pushing the authors towards a very strong assumption... i'm guessing that the jacobian norm over all data sets was bounded by a particular constant. This seems like a whopping assumption. The fact that I can so easily read this concern off of the reviewer's comments and the authors seem to not be able to understand what the reviewer is getting at, concerns me. Besides this concern, it seems that this paper has undergone a rather significant revision. I'm not convinced the new version has been properly reviewed. For a theory paper, I'm concerned about letting work through that's not properly vetted, and I'm really not certain this has been. I suggest the authors consider sending it to COLT.
train
[ "H1gtrvTL37", "BklVMpXrnm", "HJxH0BvcCQ", "r1g1nZwcCX", "HJxTtpB5A7", "SklBAHZt0X", "H1xI1kWYRQ", "Hkl1cCxYRQ", "rygmMJlZAQ", "S1g9WgktRm", "r1xvGqCOA7", "SJgLZvA_0X", "B1gKBvnd0X", "ryeZDD3u0X", "Sye88ggb0X", "H1lv6JlZ0X", "SklcK0k-RX", "Hyx7z-fvCX", "HJeACKWDR7", "H1er84lwRX"...
[ "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "author", "author", "public", "author", "public", "author", "author", "author", "author", "author", "public", "public", "author", "public", "author", "public", "public", "offic...
[ "The rebutal and the revision of the paper solve my comments.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe paper presents a new characterization of generalization error bound for general deep neural networks in terms of the depth and width of the networks and the spectral norm of weight matrices. The proof follows th...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2019_SJzwvoCqF7", "iclr_2019_SJzwvoCqF7", "r1g1nZwcCX", "HJxTtpB5A7", "SklBAHZt0X", "Sye88ggb0X", "S1g9WgktRm", "iclr_2019_SJzwvoCqF7", "H1gtrvTL37", "r1xvGqCOA7", "SJgLZvA_0X", "ryeZDD3u0X", "Hyx7z-fvCX", "HJeACKWDR7", "BJluyTu3Tm", "BklVMpXrnm", "HkeC6lVonX", "H1er84lwRX", ...
iclr_2019_SkGH2oRcYX
DEEP ADVERSARIAL FORWARD MODEL
Learning world dynamics has recently been investigated as a way to make reinforcement learning (RL) algorithms to be more sample efficient and interpretable. In this paper, we propose to capture an environment dynamics with a novel forward model that leverages recent works on adversarial learning and visual control. Such a model estimates future observations conditioned on the current ones and other input variables such as actions taken by an RL-agent. We focus on image generation which is a particularly challenging topic but our method can be adapted to other modalities. More precisely, our forward model is trained to produce realistic observations of the future while a discriminator model is trained to distinguish between real images and the model’s prediction of the future. This approach overcomes the need to define an explicit loss function for the forward model which is currently used for solving such a class of problem. As a consequence, our learning protocol does not have to rely on an explicit distance such as Euclidean distance which tends to produce unsatisfactory predictions. To illustrate our method, empirical qualitative and quantitative results are presented on a real driving scenario, along with qualitative results on Atari game Frostbite.
rejected-papers
The paper presents an action conditioned video prediction method that combines previous losses in the literature, such as, perceptual, adversarial and infogan type of losses. The reviewers point out the lack of novelty in the formulation, as well as the lack of experiments that would verify its usefulness in model based RL. There is no rebuttal thus no ground for discussion or acceptance.
train
[ "Byl3zi5K2X", "BklJKsfYnQ", "BygIV9GUnQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: Model-based RL that work on pixel-based environments tend to use forward models trained with pixel-wise loss. Rather than using pixel-wise loss for an action-conditioned video prediction model (\"Forward Model\"), they use an adversarial loss combined with mutual-information loss (from InfoGAN) and conten...
[ 4, 4, 4 ]
[ 5, 5, 4 ]
[ "iclr_2019_SkGH2oRcYX", "iclr_2019_SkGH2oRcYX", "iclr_2019_SkGH2oRcYX" ]
iclr_2019_SkGNrnC9FQ
Manifold Alignment via Feature Correspondence
We propose a novel framework for combining datasets via alignment of their associated intrinsic dimensions. Our approach assumes that the two datasets are sampled from a common latent space, i.e., they measure equivalent systems. Thus, we expect there to exist a natural (albeit unknown) alignment of the data manifolds associated with the intrinsic geometry of these datasets, which are perturbed by measurement artifacts in the sampling process. Importantly, we do not assume any individual correspondence (partial or complete) between data points. Instead, we rely on our assumption that a subset of data features have correspondence across datasets. We leverage this assumption to estimate relations between intrinsic manifold dimensions, which are given by diffusion map coordinates over each of the datasets. We compute a correlation matrix between diffusion coordinates of the datasets by considering graph (or manifold) Fourier coefficients of corresponding data features. We then orthogonalize this correlation matrix to form an isometric transformation between the diffusion maps of the datasets. Finally, we apply this transformation to the diffusion coordinates and construct a unified diffusion geometry of the datasets together. We show that this approach successfully corrects misalignment artifacts, and allows for integrated data.
rejected-papers
The diffusion maps framework is used to embed a given collection of datasets into diffusion coordinates that capture intrinsic geometry. Then a correspondence map is constructed between datasets by finding rotations that align these coordinates. The approach is interesting. The reviewers, however, found the empirical analysis somewhat simplistic with inadequate comparisons to other correspondence construction methods in the literature.
test
[ "rJxIvXajhm", "r1gFHAhF2Q", "HyxNF2Jr2Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors pointed out that the measurements in biology and natural science suffer from batch effects such as the variations between batches of data measured at different times or by different sensors. In order to analyze different batches of data, an alignment or a calibration is frequently needed. The authors p...
[ 5, 5, 4 ]
[ 4, 4, 3 ]
[ "iclr_2019_SkGNrnC9FQ", "iclr_2019_SkGNrnC9FQ", "iclr_2019_SkGNrnC9FQ" ]
iclr_2019_SkGQujR5FX
DANA: Scalable Out-of-the-box Distributed ASGD Without Retuning
Distributed computing can significantly reduce the training time of neural networks. Despite its potential, however, distributed training has not been widely adopted: scaling the training process is difficult, and existing SGD methods require substantial tuning of hyperparameters and learning schedules to achieve sufficient accuracy when increasing the number of workers. In practice, such tuning can be prohibitively expensive given the huge number of potential hyperparameter configurations and the effort required to test each one. We propose DANA, a novel approach that scales out-of-the-box to large clusters using the same hyperparameters and learning schedule optimized for training on a single worker, while maintaining similar final accuracy without additional overhead. DANA estimates the future value of model parameters by adapting Nesterov Accelerated Gradient to a distributed setting, and so mitigates the effect of gradient staleness, one of the main difficulties in scaling SGD to more workers. Evaluation on three state-of-the-art network architectures and three datasets shows that DANA scales as well as or better than existing work without having to tune any hyperparameters or tweak the learning schedule. For example, DANA achieves 75.73% accuracy on ImageNet when training ResNet-50 with 16 workers, similar to the non-distributed baseline.
rejected-papers
The paper needs more revisions and better presentation of empirical study.
test
[ "HkgMC3jORm", "S1lcLmmdaX", "SJxRtzQ_a7", "S1e2BMQOpX", "SJgkF-mdam", "Skxqr-7OaQ", "B1gf7bQ_a7", "HJxD0Jmu6Q", "rye5N_bah7", "B1xCQE3KhQ", "Bylab3SKh7" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank again the anonymous reviewers for their insightful and helpful reviews which helped us improve our paper. We have updated the paper to address your concerns in the following way:\n\n1) We added a new simulation that is based on well-documented research into modeling task execution times [Ali et al., 2000]...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2019_SkGQujR5FX", "rye5N_bah7", "B1xCQE3KhQ", "Bylab3SKh7", "HJxD0Jmu6Q", "HJxD0Jmu6Q", "HJxD0Jmu6Q", "iclr_2019_SkGQujR5FX", "iclr_2019_SkGQujR5FX", "iclr_2019_SkGQujR5FX", "iclr_2019_SkGQujR5FX" ]
iclr_2019_SkGT6sRcFX
Infinitely Deep Infinite-Width Networks
Infinite-width neural networks have been extensively used to study the theoretical properties underlying the extraordinary empirical success of standard, finite-width neural networks. Nevertheless, until now, infinite-width networks have been limited to at most two hidden layers. To address this shortcoming, we study the initialisation requirements of these networks and show that the main challenge for constructing them is defining the appropriate sampling distributions for the weights. Based on these observations, we propose a principled approach to weight initialisation that correctly accounts for the functional nature of the hidden layer activations and facilitates the construction of arbitrarily many infinite-width layers, thus enabling the construction of arbitrarily deep infinite-width networks. The main idea of our approach is to iteratively reparametrise the hidden-layer activations into appropriately defined reproducing kernel Hilbert spaces and use the canonical way of constructing probability distributions over these spaces for specifying the required weight distributions in a principled way. Furthermore, we examine the practical implications of this construction for standard, finite-width networks. In particular, we derive a novel weight initialisation scheme for standard, finite-width networks that takes into account the structure of the data and information about the task at hand. We demonstrate the effectiveness of this weight initialisation approach on the MNIST, CIFAR-10 and Year Prediction MSD datasets.
rejected-papers
The paper studies how to construct infinitely deep infinite-width networks from a theoretical point of view, and uses the results of its theoretical analysis to design a weight initialization scheme for finite-width networks. While the idea is interesting and the paper may contain novel theoretical contributions, the experimental results are weak, as pointed out by all three reviewers from several different perspectives. In particular, it seems that the presented theoretical analysis is useful mainly for weight initialization and hence has limited potential impacts. In addition, the authors have responded to neither the AC's question, nor a detailed anonymous comment that challenges the value of Proposition 1 given the previous work by Aronszajn.
train
[ "HygAk060A7", "B1lP1Ca0A7", "HkechuUDnQ", "rkxaa2b5Rm", "S1ldNRec07", "rylf0Tgc0m", "r1ezww6Up7", "BJldu_uza7", "rJlx_rSo3m" ]
[ "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I have several confusions regarding Proposition 1. I will try to informally describe the proof in Proposition 1. Kindly clarify if my understanding is correct or not.\n\nFirstly regarding the statement of the proof: You are trying to construct a distribution over the weights connecting two (infinitely wide) hidden...
[ -1, -1, 6, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, 4, 2 ]
[ "iclr_2019_SkGT6sRcFX", "iclr_2019_SkGT6sRcFX", "iclr_2019_SkGT6sRcFX", "BJldu_uza7", "rylf0Tgc0m", "HkechuUDnQ", "rJlx_rSo3m", "iclr_2019_SkGT6sRcFX", "iclr_2019_SkGT6sRcFX" ]
iclr_2019_SkGpW3C5KX
Heated-Up Softmax Embedding
Metric learning aims at learning a distance which is consistent with the semantic meaning of the samples. The problem is generally solved by learning an embedding, such that the samples of the same category are close (compact) while samples from different categories are far away (spread-out) in the embedding space. One popular way of generating such embeddings is to use the second-to-last layer of a deep neural network trained as a classifier with the softmax cross-entropy loss. In this paper, we show that training classifiers with different temperatures of the softmax function lead to different distributions of the embedding space. And finding a balance between the compactness, 'spread-out' and the generalization ability of the feature is critical in metric learning. Leveraging these insights, we propose a 'heating-up' strategy to train a classifier with increasing temperatures. Extensive experiments show that the proposed method achieves state-of-the-art embeddings on a variety of metric learning benchmarks.
rejected-papers
1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion. - The method and justification are clear - The quantitative results are promising. 2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision. - The contribution is minor - Analysis of the properties of the method is lacking. The first point was the major factor in the final decision. 3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately. Reviewer opinion was quite divergent but both AR1 and AR2 had concerns about the 2 weaknesses mentioned in the previous section (which remained after the author rebuttal). 4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another. No consensus was reached. The source of disagreement was on how to weigh the pros vs the cons. The final decision was aligned with the lower ratings. The AC agrees that the contribution is minor.
train
[ "r1xD4p4FRm", "HkgwBhNKA7", "rJlU-sVtR7", "ByeFu5_ahQ", "S1loo0Von7", "BJlWjOyc2X" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Q1: How to select the intermediate temperature alpha.\nA1: A good intermediate temperature value can be selected by cross-validation. According to our experiment, choosing alpha value = 16 generally gives a good performance. According to the new experiment in Appendix E, we can learn an alpha value and apply the h...
[ -1, -1, -1, 8, 3, 5 ]
[ -1, -1, -1, 4, 5, 4 ]
[ "ByeFu5_ahQ", "BJlWjOyc2X", "S1loo0Von7", "iclr_2019_SkGpW3C5KX", "iclr_2019_SkGpW3C5KX", "iclr_2019_SkGpW3C5KX" ]
iclr_2019_SkGtjjR5t7
Learning to Drive by Observing the Best and Synthesizing the Worst
Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress -- the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world ( https://sites.google.com/view/learn-to-drive ).
rejected-papers
The authors present a method for training a policy for a self-driving car. The inputs to the policy are map-based perceptual features and the outputs are waypoints on a trajectory, and the method is an augmented imitation learning framework that uses perturbations and additional losses to make the policy more robust and effective in rare events. The paper is clear and well-written and the authors do demonstrate that it can be used to control a real vehicle. However, the reviewers all had concerns about the oracle feature representation which is the input and also concerns about the lack of baselines such as optimization based methods. They also felt that the approach was limited to self-driving cars and thus would have limited interest for the community.
train
[ "Hkg_TlxqAQ", "HJlaL4CX6Q", "ryx7irA7aX", "rJl5LICQa7", "rkxhXU07aX", "HJgYFS07pQ", "Hkl6CDSR3Q", "rJl-iWUTnX", "rJefh7l5hm" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I would like to thank the authors for their feedback and the various updates to the paper, the text is definitely clearer now.\n\nWhen it comes to evaluation, I am still not convinced that the proposed baselines would not be useful. E.g., Figure 5 results could have these extra baselines included and their displac...
[ -1, -1, -1, -1, -1, -1, 3, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "ryx7irA7aX", "rJefh7l5hm", "HJgYFS07pQ", "rkxhXU07aX", "Hkl6CDSR3Q", "rJl-iWUTnX", "iclr_2019_SkGtjjR5t7", "iclr_2019_SkGtjjR5t7", "iclr_2019_SkGtjjR5t7" ]
iclr_2019_SkMON20ctX
On the Trajectory of Stochastic Gradient Descent in the Information Plane
Studying the evolution of information theoretic quantities during Stochastic Gradient Descent (SGD) learning of Artificial Neural Networks (ANNs) has gained popularity in recent years. Nevertheless, these type of experiments require estimating mutual information and entropy which becomes intractable for moderately large problems. In this work we propose a framework for understanding SGD learning in the information plane which consists of observing entropy and conditional entropy of the output labels of ANN. Through experimental results and theoretical justifications it is shown that, under some assumptions, the SGD learning trajectories appear to be similar for different ANN architectures. First, the SGD learning is modeled as a Hidden Markov Process (HMP) whose entropy tends to increase to the maximum. Then, it is shown that the SGD learning trajectory appears to move close to the shortest path between the initial and final joint distributions in the space of probability measures equipped with the total variation metric. Furthermore, it is shown that the trajectory of learning in the information plane can provide an alternative for observing the learning process, with potentially richer information about the learning than the trajectories in training and test error.
rejected-papers
The paper proposes a quantity to monitor learning on an information plane which is related to the information curves considered in the bottleneck analysis but is more reliable and easier to compute. The main concern with the paper is the lack of interpretation and elaboration of potential uses. A concern is raised that the proposed method abstracts away way too much detail, so that the shapes of the curves are to be expected and contain little useful information (see AnonReviewer2 comments). The authors agree to some of the main issues, as they pointed out in the discussion, although they maintain that the method could still contain useful information. The reviewers are not very convinced by this paper, with ratings either marginally above the acceptance threshold, marginally below the acceptance threshold, or strong reject.
train
[ "HJlbKkWqA7", "rkeVazW5AX", "HJluK-W9RQ", "B1xvAxW5C7", "Byg9ksJZaQ", "BkgWjike67", "BJlBFC6EjQ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for their comments. Here we address some concerns about the paper that are common among the reviews:\n\n1. (Generality) How General is this trajectory?\n\nAn important motivation of this work is to explore the generality of the observed trajectories and its interpretation. \n\nWe have tried ...
[ -1, -1, -1, -1, 4, 6, 2 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2019_SkMON20ctX", "BJlBFC6EjQ", "BkgWjike67", "Byg9ksJZaQ", "iclr_2019_SkMON20ctX", "iclr_2019_SkMON20ctX", "iclr_2019_SkMON20ctX" ]
iclr_2019_SkMx_iC9K7
DelibGAN: Coarse-to-Fine Text Generation via Adversarial Network
In this paper, we propose a novel adversarial learning framework, namely DelibGAN, for generating high-quality sentences without supervision. Our framework consists of a coarse-to-fine generator, which contains a first-pass decoder and a second-pass decoder, and a multiple instance discriminator. And we propose two training mechanisms DelibGAN-I and DelibGAN-II. The discriminator is used to fine-tune the second-pass decoder in DelibGAN-I and further evaluate the importance of each word and tune the first-pass decoder in DelibGAN-II. We compare our models with several typical and state-of-the-art unsupervised generic text generation models on three datasets (a synthetic dataset, a descriptive text dataset and a sentimental text dataset). Both qualitative and quantitative experimental results show that our models produce more realistic samples, and DelibGAN-II performs best.
rejected-papers
All reviewers are in agreement for a rejection decision. Details below.
train
[ "BklJ7d4n2X", "H1lg-gtK2m", "BkxT8cgF2X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes using a deliberation network approach to text generation with GANs. It does this by using two decoders where the discriminator is trained on the second decoder and signals from that training are also used to improve the first decoder.\n\nSince both approaches have been published before (deliber...
[ 4, 3, 4 ]
[ 4, 4, 4 ]
[ "iclr_2019_SkMx_iC9K7", "iclr_2019_SkMx_iC9K7", "iclr_2019_SkMx_iC9K7" ]
iclr_2019_SkNSOjR9Y7
Training Variational Auto Encoders with Discrete Latent Representations using Importance Sampling
The Variational Auto Encoder (VAE) is a popular generative latent variable model that is often applied for representation learning. Standard VAEs assume continuous valued latent variables and are trained by maximization of the evidence lower bound (ELBO). Conventional methods obtain a differentiable estimate of the ELBO with reparametrized sampling and optimize it with Stochastic Gradient Descend (SGD). However, this is not possible if we want to train VAEs with discrete valued latent variables, since reparametrized sampling is not possible. Till now, there exist no simple solutions to circumvent this problem. In this paper, we propose an easy method to train VAEs with binary or categorically valued latent representations. Therefore, we use a differentiable estimator for the ELBO which is based on importance sampling. In experiments, we verify the approach and train two different VAEs architectures with Bernoulli and Categorically distributed latent representations on two different benchmark datasets.
rejected-papers
The paper is addressing an important problem, but misses many related references (see Reviewer 2's comments for a long list of highly relevant papers). More importantly, as Reviewer 3 pointed out (which the AC fully agrees): "The gradient estimator the paper proposes is the REINFORCE estimator [Williams, ML 1992] re-derived through importance sampling." "The equivalence would not be exact if the authors chose the importance distribution to be different than the variational approximation q(z|x), so there still may be room for novelty in their proposal, but in the current draft only q(z|x) is considered."
train
[ "BJxYsjJchQ", "SkxYMKKu2Q", "ryejpQThsX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In computing the gradient of the ELBO, the main challenge lies in computing the gradient of the reconstruction loss with respect to the encoder parameters. VAEs traditionally rely on reparameterization in order to obtain a low-variance estimate, but there are a number of other gradient estimators that one can appl...
[ 3, 1, 3 ]
[ 5, 5, 5 ]
[ "iclr_2019_SkNSOjR9Y7", "iclr_2019_SkNSOjR9Y7", "iclr_2019_SkNSOjR9Y7" ]
iclr_2019_SkNSehA9FQ
Open Vocabulary Learning on Source Code with a Graph-Structured Cache
Machine learning models that take computer program source code as input typically use Natural Language Processing (NLP) techniques. However, a major challenge is that code is written using an open, rapidly changing vocabulary due to, e.g., the coinage of new variable and method names. Reasoning over such a vocabulary is not something for which most NLP methods are designed. We introduce a Graph-Structured Cache to address this problem; this cache contains a node for each new word the model encounters with edges connecting each word to its occurrences in the code. We find that combining this graph-structured cache strategy with recent Graph-Neural-Network-based models for supervised learning on code improves the models' performance on a code completion task and a variable naming task --- with over 100\% relative improvement on the latter --- at the cost of a moderate increase in computation time.
rejected-papers
This paper introduces fairly complex methods for dealing with OOV words in graphs representing source code, and aims to show that these improve over existing methods. The chief and valid concern raised by the reviewers was that the experiments had been changed so as to not allow proper comparison to prior work, or where comparison can be made. It is essential that a new method such as this be properly evaluated against existing benchmarks, under the same experimental conditions as presented in related literature. It seems that while the method is interesting, the empirical section of this paper needs reworking in order to be suitable for publication.
train
[ "B1l4NLJk1E", "rklbFlAO2X", "ByewoRiy67", "BkxrXrewT7", "B1xbctmITX", "S1eEG1gLam", "HJehnx0bam", "HJexTpnWaQ", "rylubM3k6Q", "S1xu_TT1T7", "HkgDMtLd37", "rJxfsg4Dh7" ]
[ "official_reviewer", "official_reviewer", "author", "public", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you to the reviewers of this paper for engaging in discussion not just with the authors, but with one another, and providing substantial and detailed reviews. You are an excellent example for the community, and demonstrate the high standard according to which papers should be evaluated in ML conferences. You...
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2019_SkNSehA9FQ", "iclr_2019_SkNSehA9FQ", "rklbFlAO2X", "S1eEG1gLam", "S1eEG1gLam", "ByewoRiy67", "HJexTpnWaQ", "rylubM3k6Q", "HkgDMtLd37", "rJxfsg4Dh7", "iclr_2019_SkNSehA9FQ", "iclr_2019_SkNSehA9FQ" ]
iclr_2019_SkVRTj0cYQ
Differentially Private Federated Learning: A Client Level Perspective
Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client's contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.
rejected-papers
Following the unanimous vote of the reviewers, this paper is not ready for publication at ICLR. The greatest concern was that the novelty beyond past work has not been sufficiently demonstrated.
test
[ "r1eXjyItn7", "r1g9TUss3m", "SkenUlAliQ", "BygeEICS3X", "rkgvwEsl2m", "Bkx4AgGjim", "BklIfi7voQ", "rJl-Xe68j7", "S1x_i3hUsm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public", "public", "public", "public" ]
[ "[Post-rebuttal update] No author response was provided to address the reviewer comments. In particular, the paper's contributions and novelty compared with previous work seem limited, and no author response was provided to address this concern. I've left my overall score for the paper unchanged.\n\n[Summary] The a...
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_SkVRTj0cYQ", "iclr_2019_SkVRTj0cYQ", "iclr_2019_SkVRTj0cYQ", "rkgvwEsl2m", "iclr_2019_SkVRTj0cYQ", "rJl-Xe68j7", "S1x_i3hUsm", "iclr_2019_SkVRTj0cYQ", "iclr_2019_SkVRTj0cYQ" ]
iclr_2019_SkVe3iA9Ym
Beyond Winning and Losing: Modeling Human Motivations and Behaviors with Vector-valued Inverse Reinforcement Learning
In recent years, reinforcement learning methods have been applied to model gameplay with great success, achieving super-human performance in various environments, such as Atari, Go and Poker. However, those studies mostly focus on winning the game and have largely ignored the rich and complex human motivations, which are essential for understanding the agents' diverse behavior. In this paper, we present a multi-motivation behavior modeling which investigates the multifaceted human motivations and models the underlying value structure of the agents. Our approach extends inverse RL to the vectored-valued setting which imposes a much weaker assumption than previous studies. The vectorized rewards incorporate Pareto optimality, which is a powerful tool to explain a wide range of behavior by its optimality. For practical assessment, our algorithm is tested on the World of Warcraft Avatar History dataset spanning three years of the gameplay. Our experiments demonstrate the improvement over the scalarization-based methods on real-world problem settings.
rejected-papers
Pros: - new multi-objective approach to IRL - new algorithm - strong results - real-world dataset Cons: - straightforward theoretical extensions - unclear motivation - inappropriate empirical assessment metrics - weak rebuttal All the reviewers feel that the paper needs further improvements, and while the authors comment on some of these concerns, their rebuttal and revised paper does not address them sufficiently. So at this stage it is a (borderline) reject.
train
[ "BkxdGEkwaX", "Byl9CJ1Da7", "rJeG4mqH6Q", "SkeDO_x1am", "r1xcwnpq3Q", "SkgHFfw52X" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the review! The most important clarification that we would like to make, is that \"Pareto dominance is a rather weak relation\" makes the model rather strong. That is because the dominance relation is the assumption of the IRL models, and weak assumptions are desired. We believe that justifies our mo...
[ -1, -1, -1, 5, 4, 4 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "SkeDO_x1am", "SkgHFfw52X", "r1xcwnpq3Q", "iclr_2019_SkVe3iA9Ym", "iclr_2019_SkVe3iA9Ym", "iclr_2019_SkVe3iA9Ym" ]
iclr_2019_Ske1-209Y7
Probabilistic Model-Based Dynamic Architecture Search
The architecture search methods for convolutional neural networks (CNNs) have shown promising results. These methods require significant computational resources, as they repeat the neural network training many times to evaluate and search the architectures. Developing the computationally efficient architecture search method is an important research topic. In this paper, we assume that the structure parameters of CNNs are categorical variables, such as types and connectivities of layers, and they are regarded as the learnable parameters. Introducing the multivariate categorical distribution as the underlying distribution for the structure parameters, we formulate a differentiable loss for the training task, where the training of the weights and the optimization of the parameters of the distribution for the structure parameters are coupled. They are trained using the stochastic gradient descent, leading to the optimization of the structure parameters within a single training. We apply the proposed method to search the architecture for two computer vision tasks: image classification and inpainting. The experimental results show that the proposed architecture search method is fast and can achieve comparable performance to the existing methods.
rejected-papers
The paper presents an architecture search method which jointly optimises the architecture and its weights. As noted by reviewers, the method is very close to Shirakawa et al., with the main innovation being the use of categorical distributions to model the architecture. This is a minor innovation, and while the results are promising, they are not strong enough to justify acceptance based on the results alone.
train
[ "ryeWhXFa0m", "Bkl7FniF0m", "rJxkP2jFCm", "HkxSJ3oKC7", "rylZqklgTX", "ByxqzZ6jnX", "rJlLArz9hX" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I'd like to first thank the authors for their reply. They have tried conscientiously to improve the paper.\n\nHowever, in its current form, I believe the paper still has two shortcomings, namely the similarity to the work of Shirakawa et al (2018) and its comparison to ENAS. I think with a bit of thinking, you may...
[ -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "Bkl7FniF0m", "rJlLArz9hX", "ByxqzZ6jnX", "rylZqklgTX", "iclr_2019_Ske1-209Y7", "iclr_2019_Ske1-209Y7", "iclr_2019_Ske1-209Y7" ]
iclr_2019_Ske25sC9FQ
Robustness and Equivariance of Neural Networks
Neural networks models are known to be vulnerable to geometric transformations as well as small pixel-wise perturbations of input. Convolutional Neural Networks (CNNs) are translation-equivariant but can be easily fooled using rotations and small pixel-wise perturbations. Moreover, CNNs require sufficient translations in their training data to achieve translation-invariance. Recent work by Cohen & Welling (2016), Worrall et al. (2016), Kondor & Trivedi (2018), Cohen & Welling (2017), Marcos et al. (2017), and Esteves et al. (2018) has gone beyond translations, and constructed rotation-equivariant or more general group-equivariant neural network models. In this paper, we do an extensive empirical study of various rotation-equivariant neural network models to understand how effectively they learn rotations. This includes Group-equivariant Convolutional Networks (GCNNs) by Cohen & Welling (2016), Harmonic Networks (H-Nets) by Worrall et al. (2016), Polar Transformer Networks (PTN) by Esteves et al. (2018) and Rotation equivariant vector field networks by Marcos et al. (2017). We empirically compare the ability of these networks to learn rotations efficiently in terms of their number of parameters, sample complexity, rotation augmentation used in training. We compare them against each other as well as Standard CNNs. We observe that as these rotation-equivariant neural networks learn rotations, they instead become more vulnerable to small pixel-wise adversarial attacks, e.g., Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), in comparison with Standard CNNs. In other words, robustness to geometric transformations in these models comes at the cost of robustness to small pixel-wise perturbations.
rejected-papers
Positives: The paper proposes an interesting idea: to study the effect on vulnerability to adversarial attacks of training for invariance with respect to rotations. Experiments on MNIST, FashionMNIST, and CIFAR10. An interesting hypothesis partially borne out in experiments. Negatives: no accept recommendation from any reviewer insufficient empirical results not a clear enough message very limited theoretical contribution Although additional experimental results on FashionMNIST and CIFAR10 were added to the initial very limited results on MNIST, the main claim of the paper seems to be somewhat weakened. The effect of increased vulnerability to adversarial attacks as invariance is increased is less pronounced on the additional datasets. This calls into question how relevant this effect is on more realistic data than the toy problems considered here. The size of the network is not varied in the experiments. If increased invariance results in poorer performance with respect to attacks, one possible explanation is that the invariance taxes the capacity of the network architecture. Varying architecture depth could partially answer whether this is relevant. Given the lack of theoretical contribution, more insights along these lines would potentially strengthen the work. The title uses the term "equivariance," which strictly speaking is when the inputs and outputs of a function vary equally, e.g. an image and its segmentation are equivariant under rotations, but classification tasks should probably be called "invariant." The reviewers were unanimous in not recommending the paper for acceptance. The key concerns remain after the author response.
train
[ "S1eeUtHP2m", "BJg9YVHKRQ", "rygy4NHY0m", "SyljZ4St0X", "r1gzY7zB2X", "Hye_9ozisX" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper empirically studies various CNN robustifying mechanisms aiming to achieve rotational invariance. The main finding is that such robustifying mechanisms may lead to lack of robustness against pixel-level attacks such as FGSM and its variants. The paper does a comprehensive job in studying relevant robusti...
[ 3, -1, -1, -1, 4, 5 ]
[ 5, -1, -1, -1, 4, 3 ]
[ "iclr_2019_Ske25sC9FQ", "Hye_9ozisX", "r1gzY7zB2X", "S1eeUtHP2m", "iclr_2019_Ske25sC9FQ", "iclr_2019_Ske25sC9FQ" ]
iclr_2019_Ske6wiAcKQ
Real-time Neural-based Input Method
The input method is an essential service on every mobile and desktop devices that provides text suggestions. It converts sequential keyboard inputs to the characters in its target language, which is indispensable for Japanese and Chinese users. Due to critical resource constraints and limited network bandwidth of the target devices, applying neural models to input method is not well explored. In this work, we apply a LSTM-based language model to input method and evaluate its performance for both prediction and conversion tasks with Japanese BCCWJ corpus. We articulate the bottleneck to be the slow softmax computation during conversion. To solve the issue, we propose incremental softmax approximation approach, which computes softmax with a selected subset vocabulary and fix the stale probabilities when the vocabulary is updated in future steps. We refer to this method as incremental selective softmax. The results show a two order speedup for the softmax computation when converting Japanese input sequences with a large vocabulary, reaching real-time speed on commodity CPU. We also exploit the model compressing potential to achieve a 92% model size reduction without losing accuracy.
rejected-papers
All reviewers agree in their assessment that this paper is not ready for acceptance into ICLR.
train
[ "BJlrdnMbAQ", "BkgoW2z-AQ", "HJlsGizZ07", "SyelRMzX6X", "BJlxSBTA3X", "Bked30s_hm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Our work originally focuses on conversion task for Japanese and Chinese input method. As reviewer mentioned, it is a better contribution if the approach can be demonstrated on other classic tasks. We choose a simple LSTM model as a baseline for our selective softmax for its simplicity. Choosing complex network ar...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 3, 4, 3 ]
[ "Bked30s_hm", "BJlxSBTA3X", "SyelRMzX6X", "iclr_2019_Ske6wiAcKQ", "iclr_2019_Ske6wiAcKQ", "iclr_2019_Ske6wiAcKQ" ]
iclr_2019_Ske7ToC5Km
Graph2Seq: Scalable Learning Dynamics for Graphs
Neural networks have been shown to be an effective tool for learning algorithms over graph-structured data. However, graph representation techniques---that convert graphs to real-valued vectors for use with neural networks---are still in their infancy. Recent works have proposed several approaches (e.g., graph convolutional networks), but these methods have difficulty scaling and generalizing to graphs with different sizes and shapes. We present Graph2Seq, a new technique that represents vertices of graphs as infinite time-series. By not limiting the representation to a fixed dimension, Graph2Seq scales naturally to graphs of arbitrary sizes and shapes. Graph2Seq is also reversible, allowing full recovery of the graph structure from the sequence. By analyzing a formal computational model for graph representation, we show that an unbounded sequence is necessary for scalability. Our experimental results with Graph2Seq show strong generalization and new state-of-the-art performance on a variety of graph combinatorial optimization problems.
rejected-papers
This was an extremely difficult case. There are many positive aspects of Graph2Seq, as detailed by all of the reviewers, however two of the reviewers have issue with the current theory, specifically the definition of k-local-gather and its relation to existing models. The authors and reviewers have had a detailed and discussion on the issue, however we do not seem to have come to a resolution. I will not wade into the specifics of the argument, however, ultimately, the onus is on the authors to convince the reviewers of the merits/correctness, and in this case two reviewers had the same issue, and their concerns have not been resolved. The best advice I can give is to consider the discussion so far and why this misunderstanding occurred, so that it might lead the best version of this paper possible.
train
[ "SJgD7cA9k4", "B1x4zMpOy4", "SJeHmb5uJ4", "BygyngcdyN", "B1e-SxqOkN", "SkxrWlq_kV", "HJeJ-oBmJV", "SyeD-YBXyN", "B1eiod9GJ4", "HJg70ZAFCm", "BJe-fWUL0Q", "HkgHmDu26Q", "SJlWaUdha7", "H1lebgVjp7", "BkxxnJOcTX", "Skxg-2w5pQ", "rklM99wcTX", "BklljYaanQ", "BygIHT5jn7", "SyexIerDnQ"...
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ "Dear reviewer, \n \nFirst, we would like to again thank for taking time to provide feedback which has helped improve the paper. But with all due respect, we do not see inconsistencies in our definition or arguments. What we are saying is that a GCN with k convolutional layers uses a neighborhood of distance k arou...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "B1x4zMpOy4", "B1e-SxqOkN", "HJeJ-oBmJV", "SyeD-YBXyN", "SkxrWlq_kV", "B1eiod9GJ4", "Skxg-2w5pQ", "B1eiod9GJ4", "HkgHmDu26Q", "BJe-fWUL0Q", "iclr_2019_Ske7ToC5Km", "SJlWaUdha7", "H1lebgVjp7", "BkxxnJOcTX", "SyexIerDnQ", "BygIHT5jn7", "BklljYaanQ", "iclr_2019_Ske7ToC5Km", "iclr_20...
iclr_2019_SkeJ6iR9Km
Variational Sparse Coding
Variational auto-encoders (VAEs) offer a tractable approach when performing approximate inference in otherwise intractable generative models. However, standard VAEs often produce latent codes that are disperse and lack interpretability, thus making the resulting representations unsuitable for auxiliary tasks (e.g. classification) and human interpretation. We address these issues by merging ideas from variational auto-encoders and sparse coding, and propose to explicitly model sparsity in the latent space of a VAE with a Spike and Slab prior distribution. We derive the evidence lower bound using a discrete mixture recognition function thereby making approximate posterior inference as computational efficient as in the standard VAE case. With the new approach, we are able to infer truly sparse representations with generally intractable non-linear probabilistic models. We show that these sparse representations are advantageous over standard VAE representations on two benchmark classification tasks (MNIST and Fashion-MNIST) by demonstrating improved classification accuracy and significantly increased robustness to the number of latent dimensions. Furthermore, we demonstrate qualitatively that the sparse elements capture subjectively understandable sources of variation.
rejected-papers
The paper develops and investigates the use of a spike-and-slab prior and approximate posterior for a VAE. It uses a continuous relaxation for the discrete binary component in the reconstruction term of the ELBO, and an analytic expression for the KL term between the spike-and-slab prior and approximate posterior. Experiments on MNIS, Fashion-MNIST and CelebA convincingly show that the approach works to learn sparse representations with improved interpretability that also yield more robust classification All reviewers agreed that this approach to sparsity in VAEs is well motivated and sound, that the paper is well written and clear, and the experiments interesting. One reviewer noted that the accuracy on MNIST remains really poor, so the approach does not cure VAEs yielding subpar representations for classification (although not the goal of this research). The reviewers and the AC however all judged that it currently constitutes a too limited contribution because a) the approach is a straightforward application of vanilla VAEs with a different prior/posterior, and is thus rather incremental. b) the scope of the paper is rather limited, in particular as it does not sufficiently discuss and does not empirically compare with other (VAE-related) approaches from the literature that were developed for sparse latent representations.
train
[ "rJesxiCnam", "Skx0DAtK6m", "HJeE9pYK6Q", "Syxnw6tFTQ", "rkxTh3YFpX", "rkeBK2FKaQ", "rkl2hXp627", "SJlf7Bfanm", "Syedv6PB3X", "B1lX0JeT57", "HylDUcPiq7", "S1lEYOzrq7" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public" ]
[ "We have now uploaded a revised version of our paper. We have made the following changes:\n\n-\t*Additional Related Work Section* We have added a subsection (2.3) covering related work on discrete VAEs and sparsity in VAEs.\n-\t*KL Divergence Derivation* We have included a derivation of the analytic KL divergence t...
[ -1, -1, -1, -1, -1, -1, 4, 5, 5, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4, -1, -1, -1 ]
[ "rkeBK2FKaQ", "Syedv6PB3X", "Syxnw6tFTQ", "SJlf7Bfanm", "rkl2hXp627", "iclr_2019_SkeJ6iR9Km", "iclr_2019_SkeJ6iR9Km", "iclr_2019_SkeJ6iR9Km", "iclr_2019_SkeJ6iR9Km", "S1lEYOzrq7", "S1lEYOzrq7", "iclr_2019_SkeJ6iR9Km" ]
iclr_2019_SkeL6sCqK7
REPRESENTATION COMPRESSION AND GENERALIZATION IN DEEP NEURAL NETWORKS
Understanding the groundbreaking performance of Deep Neural Networks is one of the greatest challenges to the scientific community today. In this work, we introduce an information theoretic viewpoint on the behavior of deep networks optimization processes and their generalization abilities. By studying the Information Plane, the plane of the mutual information between the input variable and the desired label, for each hidden layer. Specifically, we show that the training of the network is characterized by a rapid increase in the mutual information (MI) between the layers and the target label, followed by a longer decrease in the MI between the layers and the input variable. Further, we explicitly show that these two fundamental information-theoretic quantities correspond to the generalization error of the network, as a result of introducing a new generalization bound that is exponential in the representation compression. The analysis focuses on typical patterns of large-scale problems. For this purpose, we introduce a novel analytic bound on the mutual information between consecutive layers in the network. An important consequence of our analysis is a super-linear boost in training time with the number of non-degenerate hidden layers, demonstrating the computational benefit of the hidden layers.
rejected-papers
The authors admit the paper "was not written carefully enough and requires major rewriting." This seems to be a frustratingly common phenomenon with work on the information bottleneck.
val
[ "S1xlhulPCX", "Skg-RwX2aX", "rJx3jDQham", "BkxBcybAh7", "rJx9I0BgaQ", "B1gMg00tn7" ]
[ "public", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The relation between compression (information reduction), flat minima (SGD), and generalization is also described in Achile https://arxiv.org/abs/1706.01350 which proves that flatness bounds information in the weights, and information in the weights bounds information in the activations, which is the form of compr...
[ -1, -1, -1, 3, 6, 4 ]
[ -1, -1, -1, 3, 3, 3 ]
[ "iclr_2019_SkeL6sCqK7", "rJx3jDQham", "iclr_2019_SkeL6sCqK7", "iclr_2019_SkeL6sCqK7", "iclr_2019_SkeL6sCqK7", "iclr_2019_SkeL6sCqK7" ]
iclr_2019_SkeQniAqK7
Combining Learned Representations for Combinatorial Optimization
We propose a new approach to combine Restricted Boltzmann Machines (RBMs) that can be used to solve combinatorial optimization problems. This allows synthesis of larger models from smaller RBMs that have been pretrained, thus effectively bypassing the problem of learning in large RBMs, and creating a system able to model a large, complex multi-modal space. We validate this approach by using learned representations to create ``invertible boolean logic'', where we can use Markov chain Monte Carlo (MCMC) approaches to find the solution to large scale boolean satisfiability problems and show viability towards other combinatorial optimization problems. Using this method, we are able to solve 64 bit addition based problems, as well as factorize 16 bit numbers. We find that these combined representations can provide a more accurate result for the same sample size as compared to a fully trained model.
rejected-papers
Dear authors, Thank you for submitting your work to ICLR. The original goal of using smaller models to train a bigger one is definitely interesting and has been the topic of a lot of works. However, the reviewers had two major complaints: the first one is about the clarity of the paper and the second one is about the significance of the tasks on which the algorith is tested. For the latter point, your rebuttal uses arguments which are little known in the ML community and so should be expanded in a future submission.
train
[ "HJec4jK_R7", "HklrWoFdCX", "S1e9R9KOCQ", "SJee_5FOC7", "SklwBiVQaQ", "rker_F1q3X", "BylH3HnQnm" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments, we will be responding with specific comments to AnonReviewer3 here, and more general comments to the reviewer above. \n\nR2: For instance, in the introduced approach, only an example of combination is provided in Figure 1. It is not clear how smaller RBMs (and their associated paramete...
[ -1, -1, -1, -1, 4, 4, 5 ]
[ -1, -1, -1, -1, 3, 5, 3 ]
[ "BylH3HnQnm", "rker_F1q3X", "SklwBiVQaQ", "iclr_2019_SkeQniAqK7", "iclr_2019_SkeQniAqK7", "iclr_2019_SkeQniAqK7", "iclr_2019_SkeQniAqK7" ]
iclr_2019_SkeUG30cFQ
The Expressive Power of Deep Neural Networks with Circulant Matrices
Recent results from linear algebra stating that any matrix can be decomposed into products of diagonal and circulant matrices has lead to the design of compact deep neural network architectures that perform well in practice. In this paper, we bridge the gap between these good empirical results and the theoretical approximation capabilities of Deep diagonal-circulant ReLU networks. More precisely, we first demonstrate that a Deep diagonal-circulant ReLU networks of bounded width and small depth can approximate a deep ReLU network in which the dense matrices are of low rank. Based on this result, we provide new bounds on the expressive power and universal approximativeness of this type of networks. We support our experimental results with thorough experiments on a large, real world video classification problem.
rejected-papers
The paper conveys interesting study but the reviewers expressed concerns regarding the difference of this work compared to existing approaches and pointed a room for more thorough empirical evaluation.
train
[ "B1gyINsFhX", "Hygqlw3G07", "Bkeezu4n6X", "B1xQrvG_pX", "S1leyKeua7", "r1l27pBshX" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ "The paper proposes using structured matrices, specifically circulant and diagonal matrices, to speed up computation and reduce memory requirements in NNs. The idea has been previously explored by a number of papers, as described in the introduction and related work. The main contribution of the paper is to do som...
[ 6, -1, -1, 4, -1, 7 ]
[ 5, -1, -1, 4, -1, 4 ]
[ "iclr_2019_SkeUG30cFQ", "B1xQrvG_pX", "B1xQrvG_pX", "iclr_2019_SkeUG30cFQ", "B1gyINsFhX", "iclr_2019_SkeUG30cFQ" ]
iclr_2019_SkeXehR9t7
Graph2Seq: Graph to Sequence Learning with Attention-Based Neural Networks
The celebrated Sequence to Sequence learning (Seq2Seq) technique and its numerous variants achieve excellent performance on many tasks. However, many machine learning tasks have inputs naturally represented as graphs; existing Seq2Seq models face a significant challenge in achieving accurate conversion from graph form to the appropriate sequence. To address this challenge, we introduce a general end-to-end graph-to-sequence neural encoder-decoder architecture that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors. Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings. We further introduce an attention mechanism that aligns node embeddings and the decoding sequence to better cope with large graphs. Experimental results on bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms existing graph neural networks, Seq2Seq, and Tree2Seq models; using the proposed bi-directional node embedding aggregation strategy, the model can converge rapidly to the optimal performance.
rejected-papers
Strengths: The work proposes a novel architecture for graph to sequence learning. The paper shows improved performance on synthetic transduction tasks and for graph to text generation. Weaknesses: Multiple reviewers felt that the experiments were insufficient to evaluate the novel aspects of the submission relative to prior work. Newer experiments with the proposed aggregation strategy and a different graph representation were not as promising with respect to simple baselines. Points of contention: The discussion with the authors and one of the reviewers was particular contentious. The title of the paper & sentences within the paper such as "We propose a new attention-based neural networks paradigm to elegantly address graph- to-sequence learning problems" caused significant contention, as this was perceived to discount the importance of prior work on graph-to-sequence problems which led to a perception of the paper "overclaiming" novelty. Consensus: Consensus was not reached, but both the reviewer with the lowest score and one of the reviewers giving a 6 came to the consensus that the experimental evaluation does not yet evaluate the novel aspects of the submission thoroughly enough. Due to the aggregate score, factors discussed above (and others) the AC recommends rejection; however, this work shows promise and additional experimental work should allow a new set of reviewers to better understand the behaviour and utility of the proposed method.
train
[ "SJeVih7mJN", "SyxNIcGX1V", "r1xMb8TaCm", "BklNzoQaR7", "rkliiSzaCQ", "BJehojWpCm", "rJgfjwl2AX", "SJl8tDxhCm", "r1gV0ocjC7", "BylxnicsAm", "HJlZDoqo0m", "HJKBboE3m", "r1e4N3kj07", "rJlfIu89Cm", "rkgCK_UqAQ", "B1eTkY89Cm", "HkgbmcIcCm", "H1ebaFIc0Q", "BylkEj0gTm", "B1x_Q__qhQ" ...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_r...
[ "We are sorry to hear that the reviewer 1 did not feel that this is a strong enough submission. However, we would like to clarify some points as follows:\n\nQ1: - The paper cannot claim that its main contribution is to propose a general graph-to-seq framework as (a) multiple models falling under the framework alrea...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "SyxNIcGX1V", "B1eTkY89Cm", "BklNzoQaR7", "rkliiSzaCQ", "rJgfjwl2AX", "SJl8tDxhCm", "r1gV0ocjC7", "r1gV0ocjC7", "r1e4N3kj07", "r1e4N3kj07", "r1e4N3kj07", "iclr_2019_SkeXehR9t7", "HkgbmcIcCm", "BylkEj0gTm", "BylkEj0gTm", "B1x_Q__qhQ", "HJKBboE3m", "HJKBboE3m", "iclr_2019_SkeXehR9t...
iclr_2019_SkelJnRqt7
Neural separation of observed and unobserved distributions
Separating mixed distributions is a long standing challenge for machine learning and signal processing. Applications include: single-channel multi-speaker separation (cocktail party problem), singing voice separation and separating reflections from images. Most current methods either rely on making strong assumptions on the source distributions (e.g. sparsity, low rank, repetitiveness) or rely on having training samples of each source in the mixture. In this work, we tackle the scenario of extracting an unobserved distribution additively mixed with a signal from an observed (arbitrary) distribution. We introduce a new method: Neural Egg Separation - an iterative method that learns to separate the known distribution from progressively finer estimates of the unknown distribution. In some settings, Neural Egg Separation is initialization sensitive, we therefore introduce GLO Masking which ensures a good initialization. Extensive experiments show that our method outperforms current methods that use the same level of supervision and often achieves similar performance to full supervision.
rejected-papers
This paper presents a novel technique for separating signals in a given mixture, a common problem encountered in audio and vision tasks. The algorithm assumes that training samples from only one of the sources and the mixture distributions are available, which is a realistic assumption in a lot of cases. It then iteratively learns a model that can separate the mixture by using the available samples in a clever fashion. Strengths: - The novelty lies in how the authors formulate the problem, and the iterative approach used to learn the unknown distribution and thereby improve source separation. - The use of existing GLO masking techniques for initialization to improve performance is also novel and interesting. Weaknesses - There are some concerns around guarantees of convergence. Empirically, the algorithm works well, but it is unclear when the algorithm will fail. Some analysis here would have greatly improved the quality of the paper. - The reviewers also raised concerns around clarity of presentation and consistency of notation. While the presentation improved after revision, there are parts which remain unclear (e.g., those raised by R3) that may hinder readability and reproducibility. - The mixing model assumed by the authors is additive, which may not always be the case, e.g. when noise is convolutive (room reverberation, for instance). - (Minor) Experiments can also be improved. The vision tasks are not very realistic. For the speech separation task, relatively clean speech is easy to obtain. Therefore, it would be worth considering speech as observed, and noise as unobserved. The authors cite separating animal sounds from background, but the task chosen does not quite match that setup. Overall, the reviewers agree that the paper presents an interesting approach to separation. But given the issues with presentation and evaluations, the recommendation is to reject the paper. We strongly encourage the authors to address these concerns and resubmit in the future.
train
[ "BkxTHIvXx4", "HkxhxAtn3X", "rJg_cCnmpQ", "SJx8ouYFh7", "SyxqMBBwA7", "BylFNTTg67", "S1xwYnry67", "H1eXejrJpm" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "We thank the reviewers for the detailed discussion and believe that the manuscript has greatly benefited from it. We also thank all reviewers for increasing their scores.\n\nAll reviewers agree that the task we tackle is important, ideas presented are interesting and experimental performance is convincing. \n\nIt ...
[ -1, 5, 6, 6, -1, -1, -1, -1 ]
[ -1, 4, 2, 4, -1, -1, -1, -1 ]
[ "iclr_2019_SkelJnRqt7", "iclr_2019_SkelJnRqt7", "iclr_2019_SkelJnRqt7", "iclr_2019_SkelJnRqt7", "rJg_cCnmpQ", "S1xwYnry67", "SJx8ouYFh7", "HkxhxAtn3X" ]
iclr_2019_SkenUj0qYm
Semi-supervised Learning with Multi-Domain Sentiment Word Embeddings
Word embeddings are known to boost performance of many NLP tasks such as text classification, meanwhile they can be enhanced by labels at the document level to capture nuanced meaning such as sentiment and topic. Can one combine these two research directions to benefit from both? In this paper, we propose to jointly train a text classifier with a label-enhanced and domain-aware word embedding model, using an unlabeled corpus and only a few labeled data from non-target domains. The embeddings are trained on the unlabed corpus and enhanced by pseudo labels coming from the classifier, and at the same time are used by the classifier as input and training signals. We formalize this symbiotic cycle in a variational Bayes framework, and show that our method improves both the embeddings and the text classifier, outperforming state-of-the-art domain adaptation and semi-supervised learning techniques. We conduct detailed ablative tests to reveal gains from important components of our approach. The source code and experiment data will be publicly released.
rejected-papers
Pros: - The paper is well written Cons: - Not very novel - Evaluation only on sentiment classification, whereas approaches applicable in broader context exists - There are question re baselines (R3) Neither reviewer was particularly enthusiastic about the paper, I believe, mostly because of the limited score and novelty.
train
[ "SJx79i1WRm", "rkeChhJ-0X", "Hklu8akbC7", "SkeDSxzF37", "r1xl1tcu37", "r1gMSyoHh7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the informative comment. Hu et al.’s ICML2017 paper is especially a very interesting read. It is not directly comparable to this paper because, as the reviewer already pointed out, Hu et al.’s is about (sentence) generation but this paper is about (document) classification. Nevertheless, let’s have a de...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 3, 3, 3 ]
[ "SkeDSxzF37", "r1xl1tcu37", "r1gMSyoHh7", "iclr_2019_SkenUj0qYm", "iclr_2019_SkenUj0qYm", "iclr_2019_SkenUj0qYm" ]
iclr_2019_Skf-oo0qt7
On Generalization Bounds of a Family of Recurrent Neural Networks
Recurrent Neural Networks (RNNs) have been widely applied to sequential data analysis. Due to their complicated modeling structures, however, the theory behind is still largely missing. To connect theory and practice, we study the generalization properties of vanilla RNNs as well as their variants, including Minimal Gated Unit (MGU) and Long Short Term Memory (LSTM) RNNs. Specifically, our theory is established under the PAC-Learning framework. The generalization bound is presented in terms of the spectral norms of the weight matrices and the total number of parameters. We also establish refined generalization bounds with additional norm assumptions, and draw a comparison among these bounds. We remark: (1) Our generalization bound for vanilla RNNs is significantly tighter than the best of existing results; (2) We are not aware of any other generalization bounds for MGU and LSTM in the exiting literature; (3) We demonstrate the advantages of these variants in generalization.
rejected-papers
Some expert reviewers have raised novelty issues, that the authors have addressed in detail. Still, these expert reviewers are not entirely convinced. If this were a journal, I would recommend a major revision or reject-and-resubmit in order to allow the authors to anticipate the reviewers' concerns in the body of the paper and get some fresh reviews. I compliment the authors on the diligence they have put into the rebuttal stage, and look forward to reading the next version of the work. I will note that the bounds by Bartlett, Foster, and Telgarsky (and then the PAC-Bayes versions by Neyshabur et al.) are numerically vacuous empirically, and so whether those bounds or these bounds for RNNs explain generalization is up for debate.
test
[ "rJlmjXByJN", "BklaeRh207", "BkxRG1XIhX", "rygsfIzs07", "Skgd5kAtAX", "rJlN81CFAm", "Hyx17yRtAQ", "H1gfl1CtAX", "SJgzW2Vqh7", "SklswyoFnX", "SJgZc5nt97", "S1gZ3ent9X", "BkgrYFOH9Q", "S1eG6BLE5m" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "public", "public" ]
[ "I have read the response provided by the authors , the reviews of the other authors and have also looked at the updated version of the paper for the changes made. \n\nThe authors have added in the numerical numbers and comparison with previous bounds. However, as has also been pointed by Reviewer 1, the paper seem...
[ -1, -1, 3, -1, -1, -1, -1, -1, 4, 6, -1, -1, -1, -1 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1 ]
[ "H1gfl1CtAX", "rygsfIzs07", "iclr_2019_Skf-oo0qt7", "rJlN81CFAm", "iclr_2019_Skf-oo0qt7", "BkxRG1XIhX", "SklswyoFnX", "SJgzW2Vqh7", "iclr_2019_Skf-oo0qt7", "iclr_2019_Skf-oo0qt7", "S1eG6BLE5m", "BkgrYFOH9Q", "iclr_2019_Skf-oo0qt7", "iclr_2019_Skf-oo0qt7" ]
iclr_2019_SkfTIj0cKX
Purchase as Reward : Session-based Recommendation by Imagination Reconstruction
One of the key challenges of session-based recommender systems is to enhance users’ purchase intentions. In this paper, we formulate the sequential interactions between user sessions and a recommender agent as a Markov Decision Process (MDP). In practice, the purchase reward is delayed and sparse, and may be buried by clicks, making it an impoverished signal for policy learning. Inspired by the prediction error minimization (PEM) and embodied cognition, we propose a simple architecture to augment reward, namely Imagination Reconstruction Network (IRN). Specifically, IRN enables the agent to explore its environment and learn predictive representations via three key components. The imagination core generates predicted trajectories, i.e., imagined items that users may purchase. The trajectory manager controls the granularity of imagined trajectories using the planning strategies, which balances the long-term rewards and short-term rewards. To optimize the action policy, the imagination-augmented executor minimizes the intrinsic imagination error of simulated trajectories by self-supervised reconstruction, while maximizing the extrinsic reward using model-free algorithms. Empirically, IRN promotes quicker adaptation to user interest, and shows improved robustness to the cold-start scenario and ultimately higher purchase performance compared to several baselines. Somewhat surprisingly, IRN using only the purchase reward achieves excellent next-click prediction performance, demonstrating that the agent can "guess what you like" via internal planning.
rejected-papers
This paper addresses the problem of recommendations within user sessions from a reinforcement learning perspective. The problem is naturally modeled as an RL problem, given its sequential nature and inherent uncertainty of any model over user preferences. The problem suffers from delayed and sparse rewards, which the authors propose to address using self-supervised prediction. The approach is empirically validated in a simulated setting, using data from the 2015 ACM RecSys Challenge. The reviewers and AC note that the problem studied is an important application area where RL has high potential to improve over current research results and industry practice. The proposed idea is interesting, and the strong empirical evaluation on a publicly available data set is highlighted. R1 also commends the authors' decision to address the challenging cold-start problem. The reviewers and AC also note several potential weaknesses. The choice of addressing the problem from a reinforcement learning perspective is not clearly motivated. This is needed, as many supervised learning (and other types) approaches to the problem exist. A performance comparison to current state-of-the-art RL baselines is missing. The proposed approach is related to both imagination augmented (I2A, Racaniere et al. 2017) and agents with auxiliary rewards (UNREAL, Jaderberg et al. 2016), but does not compare to either method. Neither does the related work section sufficiently clarify why the proposed approach is expected to improve over these prior approaches. A thorough comparison to these baselines in a real-world application like session-based recommendation would be a strong contribution in itself, but without the contributions of the paper are hard to assess. Reviewers also noted lack of clarity. Some concerns are addressed by the authors, but the consensus is that the paper would benefit from a major revision to clearly work out the method, as well as it's conceptual and empirical differences from existing reinforcement learning approaches. R3 mentions missing related work, some of which the authors include in the revision. The AC recommends also following up on references in cited papers to ensure a future revision of the paper is well placed in the context of prior work on recommender systems, especially when modeled as a reinforcement learning problem. Overall, the paper was assessed as borderline by the reviewers. The ACs view is that there are too many concerns for acceptance at ICLR in the present form, and that the paper will benefit from a thorough revision.
train
[ "SyenKQGCA7", "rye-UEDYC7", "BJlDlN9KRm", "Hkl4OErXa7", "B1lGkaXqnm", "r1e_dvHgnX" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Q1: What is L_A3C in “L = L_A3C + L_IRN” in the first paragraph of session 4? It looks like a loss from a previous paper, but it’s kind hard to track what it is exactly.\nA1: Thanks. Based on your comment, we have added detailed descriptions for both L_A3C and L_IRN. The L_A3C loss function is defined in Section 3...
[ -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, 3, 2, 5 ]
[ "B1lGkaXqnm", "Hkl4OErXa7", "r1e_dvHgnX", "iclr_2019_SkfTIj0cKX", "iclr_2019_SkfTIj0cKX", "iclr_2019_SkfTIj0cKX" ]
iclr_2019_SkfhIo0qtQ
Volumetric Convolution: Automatic Representation Learning in Unit Ball
Convolution is an efficient technique to obtain abstract feature representations using hierarchical layers in deep networks. Although performing convolution in Euclidean geometries is fairly straightforward, its extension to other topological spaces---such as a sphere S^2 or a unit ball B^3---entails unique challenges. In this work, we propose a novel `"volumetric convolution" operation that can effectively convolve arbitrary functions in B^3. We develop a theoretical framework for "volumetric convolution" based on Zernike polynomials and efficiently implement it as a differentiable and an easily pluggable layer for deep networks. Furthermore, our formulation leads to derivation of a novel formula to measure the symmetry of a function in B^3 around an arbitrary axis, that is useful in 3D shape analysis tasks. We demonstrate the efficacy of proposed volumetric convolution operation on a possible use-case i.e., 3D object recognition task.
rejected-papers
Using volumetric convolutions, this paper focuses on learning in (rather than on) the unit sphere. The novelty of the approach is debatable, and the mathematical analysis not strong enough to merit that. In combination with good but not outstanding results, interest of the research community is doubted. An extended experimental analysis of the method would greatly improve the paper.
train
[ "H1lmSxlYTQ", "BJeT3lDZ6X", "S1eV8V6RTX", "Byxk5M8Bam", "HJemb8lrCm", "Byx2gqCUam", "B1x5PlVW67", "SkepEpn1aQ", "BylyjcBTnm", "ryeNQ-Zj37" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank you for the insightful and valuable reply. Please find our responses below.\n\nQ0 - Radial component and Weiler, et al., 2018\n\nWe agree with your point that the addition of the radial component is a straightforward extension. However, the radial component essentially moves the functions from S^2 to B^3...
[ -1, 6, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, 2, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "Byx2gqCUam", "iclr_2019_SkfhIo0qtQ", "BJeT3lDZ6X", "BJeT3lDZ6X", "BJeT3lDZ6X", "SkepEpn1aQ", "ryeNQ-Zj37", "BylyjcBTnm", "iclr_2019_SkfhIo0qtQ", "iclr_2019_SkfhIo0qtQ" ]
iclr_2019_SkgCV205tQ
Accelerating first order optimization algorithms
There exist several stochastic optimization algorithms. However in most cases, it is difficult to tell for a particular problem which will be the best optimizer to choose as each of them are good. Thus, we present a simple and intuitive technique, when applied to first order optimization algorithms, is able to improve the speed of convergence and reaches a better minimum for the loss function compared to the original algorithms. The proposed solution modifies the update rule, based on the variation of the direction of the gradient during training. We conducted several tests with Adam and AMSGrad on two different datasets. The preliminary results show that the proposed technique improves the performance of existing optimization algorithms and works well in practice.
rejected-papers
Dear authors, All reviewers commented that the paper had issues with the presentations and the results, making it unsuitable for publication to ICLR. Please address these comments should you decide to resubmit this work.
train
[ "B1lJIzckam", "Bklc29M5nX", "rJg32aeL2X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper considers a simplistic extension of first order methods typically used for neural network training. Apart from the basic idea the paper's actual algorithm is hard to read because it is full of lacking definitions. I have tried to piece together whatever I could by reading the proof. The algorithm box is ...
[ 3, 4, 4 ]
[ 3, 3, 5 ]
[ "iclr_2019_SkgCV205tQ", "iclr_2019_SkgCV205tQ", "iclr_2019_SkgCV205tQ" ]
iclr_2019_SkgE8sRcK7
Sample Efficient Deep Neuroevolution in Low Dimensional Latent Space
Current deep neuroevolution models are usually trained in a large parameter search space for complex learning tasks, e.g. playing video games, which needs billions of samples and thousands of search steps to obtain significant performance. This raises a question of whether we can make use of sequential data generated during evolution, encode input samples, and evolve in low dimensional parameter space with latent state input in a fast and efficient manner. Here we give an affirmative answer: we train a VAE to encode input samples, then an RNN to model environment dynamics and handle temporal information, and last evolve our low dimensional policy network in latent space. We demonstrate that this approach is surprisingly efficient: our experiments on Atari games show that within 10M frames and 30 evolution steps of training, our algorithm could achieve competitive result compared with ES, A3C, and DQN which need billions of frames.
rejected-papers
Pros: - compelling idea to use VAEs to reduce the dimensionality of the space in which to run evolution - non-trivial benchmark results - clearly written, solid background Cons: - moderate novelty (as compared to [1]) - performance results are sup-par - no rebuttal, despite constructive and detailed review comments (and an explicit willingness to raise scores by multiple points!) The reviewers agree that the paper should be rejected in its current form, but would plausibly have been willing to reassess their scores for a major revision -- which did not materialize.
train
[ "rkeSEvPo3m", "B1ehYPDtn7", "ryesWWEY27" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a combination of Evolutionary methods and variational representation learning to improve the sample efficiency of RL methods.\nThey train a VAE on environment frames, as well as an action-conditioned Dynamics model to predict the next frames, and these form the representations fed into a policy...
[ 4, 5, 4 ]
[ 4, 5, 4 ]
[ "iclr_2019_SkgE8sRcK7", "iclr_2019_SkgE8sRcK7", "iclr_2019_SkgE8sRcK7" ]
iclr_2019_SkgKzh0cY7
Unsupervised Video-to-Video Translation
Unsupervised image-to-image translation is a recently proposed task of translating an image to a different style or domain given only unpaired image examples at training time. In this paper, we formulate a new task of unsupervised video-to-video translation, which poses its own unique challenges. Translating video implies learning not only the appearance of objects and scenes but also realistic motion and transitions between consecutive frames. We investigate the performance of per-frame video-to-video translation using existing image-to-image translation networks, and propose a spatio-temporal 3D translator as an alternative solution to this problem. We evaluate our 3D method on multiple synthetic datasets, such as moving colorized digits, as well as the realistic segmentation-to-video GTA dataset and a new CT-to-MRI volumetric images translation dataset. Our results show that frame-wise translation produces realistic results on a single frame level but underperforms significantly on the scale of the whole video compared to our three-dimensional translation approach, which is better able to learn the complex structure of video and motion and continuity of object appearance.
rejected-papers
In this work, a central idea introduced by CycleGAN is extended from 2D convolutions to 3D convolutions to ensure better consistency of style transfer across time. Authors demonstrate improvements on a variety of datasets in comparison to frame-by-frame style transfer. Reviewer Pros: + Seems to be effective at enforcing improved consistency over time + Proposed medical dataset may be good contribution to community. + Good quality evaluation Reviewer Cons: - All reviewers felt the technical novelty was low. - Some questions arose around quantitative results, left unanswered by authors. - Experiments missing some baseline approaches - Architecture limited to fixed length video segments Reviewer consensus is to reject. Authors are encouraged to continue their work and take into account suggestions made by reviewers, including adding additional comparison baselines
train
[ "HyekQszq27", "S1bVl93d27", "H1e426gJhX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper present a spatio-temporal (i.e., 3D version) of Cycle-Consistent Adversarial Networks (CycleGAN) for unsupervised video-to-video translation. The evaluations on multiple datasets show the proposed model is better able to work for video translation in terms of image continuity and frame-wise translation ...
[ 3, 4, 4 ]
[ 4, 5, 5 ]
[ "iclr_2019_SkgKzh0cY7", "iclr_2019_SkgKzh0cY7", "iclr_2019_SkgKzh0cY7" ]
iclr_2019_SkgToo0qFm
Transferrable End-to-End Learning for Protein Interface Prediction
While there has been an explosion in the number of experimentally determined, atomically detailed structures of proteins, how to represent these structures in a machine learning context remains an open research question. In this work we demonstrate that representations learned from raw atomic coordinates can outperform hand-engineered structural features while displaying a much higher degree of transferrability. To do so, we focus on a central problem in biology: predicting how proteins interact with one another—that is, which surfaces of one protein bind to which surfaces of another protein. We present Siamese Atomic Surfacelet Network (SASNet), the first end-to-end learning method for protein interface prediction. Despite using only spatial coordinates and identities of atoms as inputs, SASNet outperforms state-of-the-art methods that rely on hand-engineered, high-level features. These results are particularly striking because we train the method entirely on a significantly biased data set that does not account for the fact that proteins deform when binding to one another. Demonstrating the first successful application of transfer learning to atomic-level data, our network maintains high performance, without retraining, when tested on real cases in which proteins do deform.
rejected-papers
Two out of three reviews for this paper were provided in detail, but all three reviewers agreed unanimously that this paper is below the acceptance bar for ICLR. The reviewers admired the clarity of writing, and appreciated the importance of the application, but none recommended the paper for acceptance due largely to concerns on the experimental setup.
train
[ "H1etg-g5pX", "B1gm0gxqp7", "H1lU_ZlqaX", "BklQEZg5TX", "SkeItzcq3m", "Skl6siujnm", "S1gJdPVn3m" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "> Moreover, it is the prediction performance that matters to such task, but the authors remove the non-structure features from the compared methods. Results and discussion about how the previous methods with full features perform compared to SASNet, and also how we can include those features into SASNet should com...
[ -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "B1gm0gxqp7", "S1gJdPVn3m", "SkeItzcq3m", "Skl6siujnm", "iclr_2019_SkgToo0qFm", "iclr_2019_SkgToo0qFm", "iclr_2019_SkgToo0qFm" ]
iclr_2019_SkgVRiC9Km
Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations
Deep networks have achieved impressive results across a variety of important tasks. However, a known weakness is a failure to perform well when evaluated on data which differ from the training distribution, even if these differences are very small, as is the case with adversarial examples. We propose \emph{Fortified Networks}, a simple transformation of existing networks, which “fortifies” the hidden layers in a deep network by identifying when the hidden states are off of the data manifold, and maps these hidden states back to parts of the data manifold where the network performs well. Our principal contribution is to show that fortifying these hidden states improves the robustness of deep networks and our experiments (i) demonstrate improved robustness to standard adversarial attacks in both black-box and white-box threat models; (ii) suggest that our improvements are not primarily due to the problem of deceptively good results due to degraded quality in the gradient signal (the gradient masking problem) and (iii) show the advantage of doing this fortification in the hidden layers instead of the input space. We demonstrate improvements in adversarial robustness on three datasets (MNIST, Fashion MNIST, CIFAR10), across several attack parameters, both white-box and black-box settings, and the most widely studied attacks (FGSM, PGD, Carlini-Wagner). We show that these improvements are achieved across a wide variety of hyperparameters.
rejected-papers
This paper suggests a method for defending against adversarial examples and out-of-distribution samples via projection onto the data manifold. The paper suggests a new method for detecting when hidden layers are off of the manifold, and uses auto encoders to map them back onto the manifold. The paper is well-written and the method is novel and interesting. However, most of the reviewers agree that the original robustness evaluations were not sufficient due to restricting the evaluation to using FGSM baseline and comparison with thermometer encoding (which both are known to not be fully effective baselines). After rebuttal, Reviewer 4 points out that the method offers very little robustness over adversarial training alone, even though it is combined with adversarial training, which suggests that the method itself provides very little robustness.
train
[ "ryxxQlaWy4", "r1gedxOiCX", "S1gcR8XcAX", "BkeHBHy50Q", "Bkgo0MaF0m", "HJe8N-s_CQ", "HyxDFKXdC7", "SkenZJ6URQ", "SklESyJGCm", "SklG26PgAm", "Syell2DxRQ", "SyeyA7BlA7", "ByxB567yRQ", "B1ekccmyAQ", "r1gVS6NaaX", "rJgZu_ba6X", "BJxi0d9q6Q", "H1e2sO9567", "Bye09P9caQ", "S1eCNv5c6X"...
[ "author", "official_reviewer", "author", "author", "author", "author", "public", "public", "public", "public", "public", "official_reviewer", "author", "public", "author", "public", "author", "author", "author", "author", "author", "author", "author", "official_reviewer...
[ "Hello, \n\nWe've updated the paper with new results on the PGD attack with many more iterations, architectures (including large architectures like wideresnet), and setups (especially see Tables 2 and 3). This directly addresses the over-reliance on FGSM as an attack, which was the focus of Ian Goodfellow's commen...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, 5, 9, 6, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, 3, 4, 3, -1, -1, -1 ]
[ "BylDnFpep7", "S1gcR8XcAX", "iclr_2019_SkgVRiC9Km", "SklG26PgAm", "SyeyA7BlA7", "HyxDFKXdC7", "iclr_2019_SkgVRiC9Km", "SyeyA7BlA7", "r1gVS6NaaX", "iclr_2019_SkgVRiC9Km", "ByxB567yRQ", "H1e2sO9567", "B1ekccmyAQ", "H1e2sO9567", "ByxZaNAChm", "H1e2sO9567", "B1li57iX6Q", "B1li57iX6Q", ...
iclr_2019_SkgZNnR5tX
Uncovering Surprising Behaviors in Reinforcement Learning via Worst-case Analysis
Reinforcement learning agents are typically trained and evaluated according to their performance averaged over some distribution of environment settings. But does the distribution over environment settings contain important biases, and do these lead to agents that fail in certain cases despite high average-case performance? In this work, we consider worst-case analysis of agents over environment settings in order to detect whether there are directions in which agents may have failed to generalize. Specifically, we consider a 3D first-person task where agents must navigate procedurally generated mazes, and where reinforcement learning agents have recently achieved human-level average-case performance. By optimizing over the structure of mazes, we find that agents can suffer from catastrophic failures, failing to find the goal even on surprisingly simple mazes, despite their impressive average-case performance. Additionally, we find that these failures transfer between different agents and even significantly different architectures. We believe our findings highlight an important role for worst-case analysis in identifying whether there are directions in which agents have failed to generalize. Our hope is that the ability to automatically identify failures of generalization will facilitate development of more general and robust agents. To this end, we report initial results on enriching training with settings causing failure.
rejected-papers
The paper presents adversarial "attacks" to maze generation for RL agents trained to perform 2D navigation tasks in 3D environments (DM Lab). The paper is well written, and the rebuttal(s) and additional experiments (section 4) make the paper better. The approach itself is very interesting. However, there are a few limitations, and thus I am very borderline on this submission: - the analysis of why and how the navigation trained models fail, is rather succinct. Analyzing what happens on the model side (not just the features of the adversarial mazes vs. training mazes) would make the paper stronger. - (more importantly) Section 4: "adapting the training distribution" by incorporating adversarial mazes into training feels incomplete. That is a pithy as giving an adversarial attack for RL trained navigation agents would be much more complete of a contribution if at least the most obvious way to defend the attack was studied in depth. The authors themselves are honest about it and write "Therefore, it is possible that many more training iterations are necessary for agents to learn to perform well in each adversarial setting." (under 4.4 / Expensive Training). I would invite the authors to submit this version to the workshop track, and/or to finish the work started in Section 4 and make it a strong paper.
train
[ "ByxGf2I5nQ", "BJlJ9lVN0Q", "HylA5R1cp7", "SJgT7Rk5a7", "B1xuCa1cTX", "rJxuv6y5T7", "B1xFkeP5hX", "SygK7DGMnX" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update:\n\nI appreciate the clarifications and the extension of the paper in response to the reviews. I think it made the work stronger. The results in the newly added section are interesting and actually suggest that by putting more effort into training set design/augmentation, one could further robustify the age...
[ 7, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2019_SkgZNnR5tX", "iclr_2019_SkgZNnR5tX", "iclr_2019_SkgZNnR5tX", "ByxGf2I5nQ", "B1xFkeP5hX", "SygK7DGMnX", "iclr_2019_SkgZNnR5tX", "iclr_2019_SkgZNnR5tX" ]
iclr_2019_Skgge3R9FQ
Controlling Over-generalization and its Effect on Adversarial Examples Detection and Generation
Convolutional Neural Networks (CNNs) significantly improve the state-of-the-art for many applications, especially in computer vision. However, CNNs still suffer from a tendency to confidently classify out-distribution samples from unknown classes into pre-defined known classes. Further, they are also vulnerable to adversarial examples. We are relating these two issues through the tendency of CNNs to over-generalize for areas of the input space not covered well by the training set. We show that a CNN augmented with an extra output class can act as a simple yet effective end-to-end model for controlling over-generalization. As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples. To help select a representative natural out-distribution set among available ones, we propose a simple measurement to assess an out-distribution set's fitness. We also demonstrate that training such an augmented CNN with representative out-distribution natural datasets and some interpolated samples allows it to better handle a wide range of unseen out-distribution samples and black-box adversarial examples without training it on any adversaries. Finally, we show that generation of white-box adversarial attacks using our proposed augmented CNN can become harder, as the attack algorithms have to get around the rejection regions when generating actual adversaries.
rejected-papers
The reviewers agree the paper is not ready for publication at ICLR.
train
[ "SylJ-xIopQ", "SkevWvSj6m", "BJeC2_Hspm", "Skl-BtQq6X", "rylwp4Sm67", "H1ghn977a7", "SygmR_X7pX", "H1eqf5OA3m", "H1gMR8h6h7", "rygaTnElhQ", "ByelE-bMjX", "rkxFFJ-jsX", "B1lmogO4j7", "Bkeqla1No7", "S1evvljJsQ", "rkgfQcOksQ", "rkelT3a3c7" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "public", "public", "public", "author", "public", "author", "public" ]
[ "We are thankful for the reviewer 2 to provide us with his/her feedback,\n\nThe reviewer mentioned: \"the interpolation mechanism is also too simple\":\nWe would like to highlight that despite the simplicity of interpolated samples, there has been demonstrated the effectiveness of using such samples on developing m...
[ -1, -1, -1, 4, -1, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, -1, -1, -1, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "H1gMR8h6h7", "iclr_2019_Skgge3R9FQ", "H1eqf5OA3m", "iclr_2019_Skgge3R9FQ", "rygaTnElhQ", "rkxFFJ-jsX", "B1lmogO4j7", "iclr_2019_Skgge3R9FQ", "iclr_2019_Skgge3R9FQ", "iclr_2019_Skgge3R9FQ", "iclr_2019_Skgge3R9FQ", "iclr_2019_Skgge3R9FQ", "iclr_2019_Skgge3R9FQ", "S1evvljJsQ", "rkgfQcOksQ"...
iclr_2019_SkghBoR5FX
Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers
The success of deep learning research has catapulted deep models into production systems that our society is becoming increasingly dependent on, especially in the image and video domains. However, recent work has shown that these largely uninterpretable models exhibit glaring security vulnerabilities in the presence of an adversary. In this work, we develop a powerful untargeted adversarial attack for action recognition systems in both white-box and black-box settings. Action recognition models differ from image-classification models in that their inputs contain a temporal dimension, which we explicitly target in the attack. Drawing inspiration from image classifier attacks, we create new attacks which achieve state-of-the-art success rates on a two-stream classifier trained on the UCF-101 dataset. We find that our attacks can significantly degrade a model’s performance with sparsely and imperceptibly perturbed examples. We also demonstrate the transferability of our attacks to black-box action recognition systems.
rejected-papers
With scores of 5, 4 and 3 the paper is just too far away from the threshold for acceptance.
val
[ "SJgJZi2_pQ", "Hyxl5T2u6Q", "BJgL6T3ua7", "HkeOoRnd6m", "rkeFEMNT27", "HkgdPJ_9hQ", "HyghzyUUnQ" ]
[ "public", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer,\n\nThank you for your review and thoughtful comments. We would like to now address each of your observed weaknesses with some of our thoughts and explanations.\n\n**Weakness 1:** As we observed in the literature review stage, the problem of action recognition/video classification has not yet been “...
[ -1, -1, -1, -1, 4, 3, 5 ]
[ -1, -1, -1, -1, 5, 4, 4 ]
[ "rkeFEMNT27", "HkgdPJ_9hQ", "Hyxl5T2u6Q", "HyghzyUUnQ", "iclr_2019_SkghBoR5FX", "iclr_2019_SkghBoR5FX", "iclr_2019_SkghBoR5FX" ]
iclr_2019_SkghN205KQ
Search-Guided, Lightly-supervised Training of Structured Prediction Energy Networks
In structured output prediction tasks, labeling ground-truth training output is often expensive. However, for many tasks, even when the true output is unknown, we can evaluate predictions using a scalar reward function, which may be easily assembled from human knowledge or non-differentiable pipelines. But searching through the entire output space to find the best output with respect to this reward function is typically intractable. In this paper, we instead use efficient truncated randomized search in this reward function to train structured prediction energy networks (SPENs), which provide efficient test-time inference using gradient-based search on a smooth, learned representation of the score landscape, and have previously yielded state-of-the-art results in structured prediction. In particular, this truncated randomized search in the reward function yields previously unknown local improvements, providing effective supervision to SPENs, avoiding their traditional need for labeled training data.
rejected-papers
This paper proposes search-guided training for structured prediction energy networks (SPENs). The reviewers found some interest in this approach, though were somewhat underwhelmed by the experimental comparison and the details provided about the method. R1 was positive and recommends acceptance; R2 and R3 thought the paper was on the incremental side and recommend rejection. Given the space restriction to this year's conference, we have to reject some borderline papers. The AC thus recommends the authors to take the reviewers comments in consideration for a "revise and resubmit".
test
[ "BJxCJPkzCQ", "SkxM8XSfRm", "Hygke-HfAX", "rkePgshh3Q", "HyeoPm4t2Q", "rkxbqGkLh7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. \nWe should first clarify that the R-SPEN training algorithm collects samples of structured outputs by performing gradient-descent inference over the energy function of SPEN not over the reward function as the reward function is not differentiable in most cases.\nThe major contribution...
[ -1, -1, -1, 5, 7, 4 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "rkePgshh3Q", "HyeoPm4t2Q", "rkxbqGkLh7", "iclr_2019_SkghN205KQ", "iclr_2019_SkghN205KQ", "iclr_2019_SkghN205KQ" ]
iclr_2019_SkgiX2Aqtm
PIE: Pseudo-Invertible Encoder
We consider the problem of information compression from high dimensional data. Where many studies consider the problem of compression by non-invertible trans- formations, we emphasize the importance of invertible compression. We introduce new class of likelihood-based auto encoders with pseudo bijective architecture, which we call Pseudo Invertible Encoders. We provide the theoretical explanation of their principles. We evaluate Gaussian Pseudo Invertible Encoder on MNIST, where our model outperform WAE and VAE in sharpness of the generated images.
rejected-papers
The presented approach demonstrates an invertible architecture for auto-encoding, which demonstrates improvements in performance relative to VAE and WAE's on MNIST. Pros: + R3: The idea of pseudo-inversion is interesting. + R3: Manuscript is clear. Cons: - R1,2,3: Additional experiments needed on CIFAR, ImageNet, others. - R1: Presentation unclear. Authors have not made any apparent attempt to improve the clarity of the manuscript, though they make their point that the method allows dimensionality reduction in their response. - R1, R2: Main advantages not clear. - R3: Text could be compressed further to allow room for additional experiments. Reviewers lean reject, and authors have not updated experiments. Authors are encouraged to continue to improve the work.
val
[ "Hye21tiOC7", "SJl6C6od0X", "HJgQj2qd0Q", "BJg5ZZkp3Q", "rJxyHtAn37", "SkgeFa7c37", "S1e7sluJjX", "S1xo1vQAqX" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for your review! \n\nFirst of all, we would like to emphasize the fact the PIE is an autoencoder and allows for dimensionality reduction.\nWe refer to PIE as an Autoencoder as it performs the encoding of the data automatically. \nIn contrast to previously published paper on invertible models, PIE allows ...
[ -1, -1, -1, 3, 5, 5, -1, -1 ]
[ -1, -1, -1, 4, 5, 4, -1, -1 ]
[ "BJg5ZZkp3Q", "rJxyHtAn37", "SkgeFa7c37", "iclr_2019_SkgiX2Aqtm", "iclr_2019_SkgiX2Aqtm", "iclr_2019_SkgiX2Aqtm", "S1xo1vQAqX", "iclr_2019_SkgiX2Aqtm" ]
iclr_2019_SkgkJn05YX
RANDOM MASK: Towards Robust Convolutional Neural Networks
Robustness of neural networks has recently been highlighted by the adversarial examples, i.e., inputs added with well-designed perturbations which are imperceptible to humans but can cause the network to give incorrect outputs. In this paper, we design a new CNN architecture that by itself has good robustness. We introduce a simple but powerful technique, Random Mask, to modify existing CNN structures. We show that CNN with Random Mask achieves state-of-the-art performance against black-box adversarial attacks without applying any adversarial training. We next investigate the adversarial examples which “fool” a CNN with Random Mask. Surprisingly, we find that these adversarial examples often “fool” humans as well. This raises fundamental questions on how to define adversarial examples and robustness properly.
rejected-papers
This paper presents a new technique for modifying neural network structure, and suggest that this structure provides improved robustness to black-box attacks, as compared to standard architectures. The paper is very thorough in its experimentation, and the method is simple and quite easy to understand. It also raises some important questions about adversarial examples. However, there are serious concerns regarding the evaluation methodology. In particular, the authors claim "black-box robustness" but do not test against any query-based attacks, which are known to perform better against gradient masking-based adversarial defenses. Furthermore, it is not clear why one would expect adversarial examples to transfer between models representing two completely different functions (i.e. from a standard model to a random mask model). So, the gray-box evaluation is much more informative and, unfortunately, random-mask seems to provide little to no robustness in this setting. Given how fundamental sound and convincing evaluation is for proposed defense methods, the submission is not ready for publication yet. In particular, the authors are urged to (a) evaluate on stronger black-box attacks, and (b) compare to a baseline that is known to be non-robust, (e.g. JPEG encoding or SAP), to verify that these results are actually due to black-box robustness and not simply obfuscation.
train
[ "BkeE9tvwgN", "S1eHtOdR14", "BJeroNlpJV", "BkeDueELk4", "r1lnh14c2Q", "H1lZi4IVAQ", "SkeohX84Cm", "HylaV7U40Q", "SkxtXXUE0m", "SJgVG7L4Am", "H1gNJm8N0Q", "SyxehfLEAQ", "S1g1sfLNRQ", "HylhdMI4RX", "H1lRgzLNAQ", "H1x5kMLEAX", "rylQObUVRm", "rylQUW8VRX", "HkepklN96X", "HklI1zYRhX"...
[ "public", "author", "public", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public"...
[ "Thanks for your reply!", "> There is an \"overfitting\" phenomenon [3] of the adversarial examples generated by PGD and CW. So changing the network architecture (e.g., masking neurons) could be useful to defend against them. But it's not clear whether the proposed method is generally robust to more powerful tran...
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "S1eHtOdR14", "BJeroNlpJV", "H1x5kMLEAX", "r1lnh14c2Q", "iclr_2019_SkgkJn05YX", "iclr_2019_SkgkJn05YX", "S1lY888zq7", "rke9ze4W5X", "H1e3ueVW9m", "B1lJQZVZcX", "r1lvubNZqQ", "SyemKNUf5X", "H1xUboODhX", "r1lnh14c2Q", "HkepklN96X", "rkxrChj23Q", "Hyg6j7t02Q", "HklI1zYRhX", "rkxrChj...
iclr_2019_SkguE30ct7
Neural Model-Based Reinforcement Learning for Recommendation
There are great interests as well as many challenges in applying reinforcement learning (RL) to recommendation systems. In this setting, an online user is the environment; neither the reward function nor the environment dynamics are clearly defined, making the application of RL challenging. In this paper, we propose a novel model-based reinforcement learning framework for recommendation systems, where we develop a generative adversarial network to imitate user behavior dynamics and learn her reward function. Using this user model as the simulation environment, we develop a novel DQN algorithm to obtain a combinatorial recommendation policy which can handle a large number of candidate items efficiently. In our experiments with real data, we show this generative adversarial user model can better explain user behavior than alternatives, and the RL policy based on this model can lead to a better long-term reward for the user and higher click rate for the system.
rejected-papers
This paper formulates the recommendation as a model-based reinforcement learning problem. Major concerns of the paper include: paper writing needs improvement; many decisions in experimental design were not justified; lack of sufficient baselines; results not convincing. Overall, this paper cannot be published in its current form.
train
[ "H1xg9f5PCX", "BkePyI9vAX", "B1lpq75DRQ", "SJl7NZqD0Q", "Byljz3KPC7", "SklsdEJh2m", "Bylww959nX", "HygC7wF9nQ" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "(7)Section 4.3 to be relatively unclear.\n\nOne can interpret the mini-max optimization in two ways: \nThe user behavior model \\phi acts as a generator which generates the user's next actions based on her history, while the reward r acts as a discriminator which tries to differentiate user's actual actions from t...
[ -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "SJl7NZqD0Q", "HygC7wF9nQ", "Bylww959nX", "SklsdEJh2m", "iclr_2019_SkguE30ct7", "iclr_2019_SkguE30ct7", "iclr_2019_SkguE30ct7", "iclr_2019_SkguE30ct7" ]
iclr_2019_SkgzYiRqtX
Graph Neural Networks with Generated Parameters for Relation Extraction
Recently, progress has been made towards improving relational reasoning in machine learning field. Among existing models, graph neural networks (GNNs) is one of the most effective approaches for multi-hop relational reasoning. In fact, multi-hop relational reasoning is indispensable in many natural language processing tasks such as relation extraction. In this paper, we propose to generate the parameters of graph neural networks (GP-GNNs) according to natural language sentences, which enables GNNs to process relational reasoning on unstructured text inputs. We verify GP-GNNs in relation extraction from text. Experimental results on a human-annotated dataset and two distantly supervised datasets show that our model achieves significant improvements compared to the baselines. We also perform a qualitative analysis to demonstrate that our model could discover more accurate relations by multi-hop relational reasoning.
rejected-papers
+ experiments on an interesting task: inferring relations which are not necessarily explicitly mentioned in a sentence but need to be induced relying on other relations + the idea to frame the relation prediction task as an inference task on a graph is interesting - the paper is not very well written, and it is hard to understand what exactly the contribution is. E.g., the authors contrast with previous work saying that previous work was relying on pre-defined graphs rather than inducing them. However, here they actually rely on predefined full graphs as well (i.e. full graphs connecting all entities). (See questions from R1) - the idea of predicting edge embeddings from the sentence is an interesting one. However, I do not see results studying alternative architectures (e.g., fixed transition matrices + gates / attention), or careful ablation studies. It is hard to say if this modification is indeed necessary / beneficial. (See also R3, agreeing that experiments look preliminary) - Extra baselines? E.g., what about layers of multi-head self-attention across entities? (as in Transformer). What about the number of parameters for the proposed model? Is there chance that it works better simply because it is a larger model? (See also R3) - evaluation only one dataset (not clear if any other datasets of this kind exist though) Overall, though I find the direction and certain aspects of the model quite interesting, the paper is not ready for publication.
train
[ "ryxJBSl5CQ", "HkeucHlq07", "H1loPUeqRQ", "ByeiKLgq0m", "HkekMdOihm", "BJelhSgoh7", "BJlOlHzcnm" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the thoughtful advice and the following is our response.\n\n\n> - “l” is not defined -- I assume it denotes the number of tokens in the sentence but |s| is used in other places. Are “entires” and “entities” the same? a series of tokens => a sequence of tokens.\n\nWe have changed the notations and words ac...
[ -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "HkekMdOihm", "BJelhSgoh7", "BJlOlHzcnm", "BJlOlHzcnm", "iclr_2019_SkgzYiRqtX", "iclr_2019_SkgzYiRqtX", "iclr_2019_SkgzYiRqtX" ]
iclr_2019_Skl3M20qYQ
Non-Synergistic Variational Autoencoders
Learning disentangling representations of the independent factors of variations that explain the data in an unsupervised setting is still a major challenge. In the following paper we address the task of disentanglement and introduce a new state-of-the-art approach called Non-synergistic variational Autoencoder (Non-Syn VAE). Our model draws inspiration from population coding, where the notion of synergy arises when we describe the encoded information by neurons in the form of responses from the stimuli. If those responses convey more information together than separate as independent sources of encoding information, they are acting synergetically. By penalizing the synergistic mutual information within the latents we encourage information independence and by doing that disentangle the latent factors. Notably, our approach could be added to the VAE framework easily, where the new ELBO function is still a lower bound on the log likelihood. In addition, we qualitatively compare our model with Factor VAE and show that this one implicitly minimises the synergy of the latents.
rejected-papers
The paper introduces a form of variational auto encoder for learning disentangled representations. The idea is to penalise synergistic mutual information. The introduction of concepts from synergy to the community is appreciated. Although the approach appears interesting and forward looking in understanding complex models, at this point the paper does not convince on the theoretical nor on the experimental side. The main concepts used in the paper are developed elsewhere, the potential value of synergy is not properly examined. The reviewers agree on a not so positive view on this paper, with ratings either ok, but not good enough, or clear rejection. There is a consensus that the paper needs more work.
val
[ "HJe7r3bRhm", "BJx7Hedv3X", "BklcjjTljm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new objective function for learning disentangled representations in a variational framework, building on the beta-VAE work by Higgins et al, 2017. The approach attempts to minimise the synergy of the information provided by the independent latent dimensions of the model. Unfortunately, the aut...
[ 3, 4, 3 ]
[ 4, 3, 5 ]
[ "iclr_2019_Skl3M20qYQ", "iclr_2019_Skl3M20qYQ", "iclr_2019_Skl3M20qYQ" ]
iclr_2019_Skl6k209Ym
Alignment Based Mathching Networks for One-Shot Classification and Open-Set Recognition
Deep learning for object classification relies heavily on convolutional models. While effective, CNNs are rarely interpretable after the fact. An attention mechanism can be used to highlight the area of the image that the model focuses on thus offering a narrow view into the mechanism of classification. We expand on this idea by forcing the method to explicitly align images to be classified to reference images representing the classes. The mechanism of alignment is learned and therefore does not require that the reference objects are anything like those being classified. Beyond explanation, our exemplar based cross-alignment method enables classification with only a single example per category (one-shot). Our model cuts the 5-way, 1-shot error rate in Omniglot from 2.1\% to 1.4\% and in MiniImageNet from 53.5\% to 46.5\% while simultaneously providing point-wise alignment information providing some understanding on what the network is capturing. This method of alignment also enables the recognition of an unsupported class (open-set) in the one-shot setting while maintaining an F1-score of above 0.5 for Omniglot even with 19 other distracting classes while baselines completely fail to separate the open-set class in the one-shot setting.
rejected-papers
The reviewers are polarized on this paper and the overall feeling is that it is not quite ready for publication. There is also an interesting interpretability aspect that, while given as a motivation for the approach, is never really explored beyond showing some figures of alignments. One of the main concerns of the method’s effectiveness in practice is the computational cost. There is also concern from one of the reviewers that the formulation could result in creating sparse matching maps where only a few pixels get matched. The authors provide some justification for why this wouldn’t happen, and this should be put in a future draft. Even better would be to show statistics to demonstrate empirically that this doesn’t happen. There were a number of clarifications that were brought up during the discussion, and the authors should go over this carefully and update the draft to resolve these issues. There is also a typo in the title that should be fixed.
test
[ "S1e9sNWygE", "S1eVQB6FTQ", "S1gHbfVFaQ", "Hye4nbNtaX", "Bye9YWnphm", "BkxJlR5ahX", "SylnZdjD37" ]
[ "official_reviewer", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Authors argue that using average (independent) greedy matching of pixel embedding (based on 4-6 layer cnn hypercolumns) is a better metric for one-shot learning than just using final layer embedding of a 4-6 layer cnn for the whole image. Their argument is backed by outperforming their baseline and getting compet...
[ 7, -1, -1, -1, 6, 7, 4 ]
[ 3, -1, -1, -1, 2, 4, 4 ]
[ "iclr_2019_Skl6k209Ym", "SylnZdjD37", "BkxJlR5ahX", "Bye9YWnphm", "iclr_2019_Skl6k209Ym", "iclr_2019_Skl6k209Ym", "iclr_2019_Skl6k209Ym" ]
iclr_2019_SklR_iCcYm
Faster Training by Selecting Samples Using Embeddings
Long training times have increasingly become a burden for researchers by slowing down the pace of innovation, with some models taking days or weeks to train. In this paper, a new, general technique is presented that aims to speed up the training process by using a thinned-down training dataset. By leveraging autoencoders and the unique properties of embedding spaces, we are able to filter training datasets to include only those samples that matter the most. Through evaluation on a standard CIFAR-10 image classification task, this technique is shown to be effective. With this technique, training times can be reduced with a minimal loss in accuracy. Conversely, given a fixed training time budget, the technique was shown to improve accuracy by over 50%. This technique is a practical tool for achieving better results with large datasets and limited computational budgets.
rejected-papers
The paper proposes a filtering technique to use less training examples in order to train faster; the filtering step is done with an autoencoder. Experiments are done on CIFAR-10. Reviewers point to a lack of convincing experiments, weak evidence, lack of experimental details. Overall, all reviewers converge to reject this paper, and I agree with them.
train
[ "rygiGu-T37", "HkxUGRec3m", "SkeOLXkc2Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThe manuscript introduces a dataset filtering technique for the purpose of speeding up training of machine learning models.\nThe technique filters the training set, yielding a subset of examples that are as diverse as possible, according to an autoencoder embedding of the input space. First, one trains ...
[ 3, 3, 2 ]
[ 5, 3, 5 ]
[ "iclr_2019_SklR_iCcYm", "iclr_2019_SklR_iCcYm", "iclr_2019_SklR_iCcYm" ]
iclr_2019_SklVEnR5K7
Making Convolutional Networks Shift-Invariant Again
Modern convolutional networks are not shift-invariant, despite their convolutional nature: small shifts in the input can cause drastic changes in the internal feature maps and output. In this paper, we isolate the cause -- the downsampling operation in convolutional and pooling layers -- and apply the appropriate signal processing fix -- low-pass filtering before downsampling. This simple architectural modification boosts the shift-equivariance of the internal representations and consequently, shift-invariance of the output. Importantly, this is achieved while maintaining downstream classification performance. In addition, incorporating the inductive bias of shift-invariance largely removes the need for shift-based data augmentation. Lastly, we observe that the modification induces spatially-smoother learned convolutional kernels. Our results suggest that this classical signal processing technique has a place in modern deep networks.
rejected-papers
The reviewers are reasonably positive about this submission although two of them feel the paper is below acceptance threshold. AR1 advocates large scale experiments on ILSVRC2012/Cifar10/Cifar100 and so on. AR3 would like to see more comparisons to similar works and feels that the idea is not that significant. AR2 finds evaluations flawed. On balance, the reviewers find numerous flaws in experimentation that need to be improved. Additionally, AC is aware that approaches such as 'Convolutional Kernel Networks' by J. Mairal et al. derive a pooling layer which, by its motivation and design, obeys the sampling theorem to attain anti-aliasing. Essentially, for pooling, they obtain a convolution of feature maps with an appropriate Gaussian prior to sampling. Thus, on balance, the idea proposed in this ICLR submission may sound novel but it is not. Ideas such as 'blurring before downsampling' or 'low-pass filter kernels' applied here are simply special cases of anti-aliasing. The authors may also want to read about aliasing in 'Invariance, Stability, and Complexity of Deep Convolutional Representations' to see how to prevent aliasing. On balance, the theory behind this problem is mostly solved even if standard networks overlook this mechanism. Note also that there exist a fundamental trade-off between shift-invariance plus anti-aliasing (stability) and performance; this being a reason why max-pooling is still preferred over anti-aliasing (better performance versus stability). Though, this is nothing new for those who delve into more theoretical papers on CNNs: this is an invite for the authors to go thoroughly first through the relevant literature/numerous prior works on this topic.
train
[ "HklzYf-SJ4", "B1gszqoVkV", "Hkx4V6w9h7", "S1xu-L_41N", "r1lNpJD907", "BkeFl6LcRX", "r1exflw9RX", "B1gzMCIqCm", "HkexkySc27", "rJgwq8nFnX", "S1gnfAzDhQ" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "Sorry for making it sound that way, ImageNet is not a requirement, I did change my rating from weak reject to weak accept after all. I did this because of the additional experiments. Everything else I wrote are simply suggestions for possible future re-submissions. ", "Thank you for reading the rebuttal. While w...
[ -1, -1, 6, -1, -1, -1, -1, -1, 5, 5, -1 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, 4, 4, -1 ]
[ "B1gszqoVkV", "S1xu-L_41N", "iclr_2019_SklVEnR5K7", "BkeFl6LcRX", "rJgwq8nFnX", "Hkx4V6w9h7", "rJgwq8nFnX", "HkexkySc27", "iclr_2019_SklVEnR5K7", "iclr_2019_SklVEnR5K7", "iclr_2019_SklVEnR5K7" ]
iclr_2019_SklXvs0qt7
Curiosity-Driven Experience Prioritization via Density Estimation
In Reinforcement Learning (RL), an agent explores the environment and collects trajectories into the memory buffer for later learning. However, the collected trajectories can easily be imbalanced with respect to the achieved goal states. The problem of learning from imbalanced data is a well-known problem in supervised learning, but has not yet been thoroughly researched in RL. To address this problem, we propose a novel Curiosity-Driven Prioritization (CDP) framework to encourage the agent to over-sample those trajectories that have rare achieved goal states. The CDP framework mimics the human learning process and focuses more on relatively uncommon events. We evaluate our methods using the robotic environment provided by OpenAI Gym. The environment contains six robot manipulation tasks. In our experiments, we combined CDP with Deep Deterministic Policy Gradient (DDPG) with or without Hindsight Experience Replay (HER). The experimental results show that CDP improves both performance and sample-efficiency of reinforcement learning agents, compared to state-of-the-art methods.
rejected-papers
The manuscript describes a procedure for prioritizing the contents of an experience replay buffer in a UVFA setting based on a density model of the trajectory of the achieved goal states. A rank-based transformation of densities is used to stochastically prioritize the replay memory. Reducing the sample complexity of RL is a worthy goal and reviewers found the overall approach is interesting, if somewhat arbitrary in the implementation details. Concerns were raised about clarity and justification, and the restriction of experiments to fully deterministic environments. After personally reading the updated manuscript I found clarity to still be lacking. Statements like "... uses the ranking number (starting from zero) directly as the probability for sampling" -- this is not true (it is normalized, as confusingly laid out in equation 2 with the same symbol used for the unnormalized and normalized densities), and also implies that the least likely trajectory under the model is never sampled, which doesn't seem like a desirable property. Schaul's "prioritized experience replay" is cited for the choice of rank-based distribution, but the distribution employed in that work has rather different form. The related work section is also very poor given the existing body of literature on curiosity in a reinforcement learning context, and the new "importance sampling perspective" section serves little explicatory purpose given that an importance re-weighting is not performed. Overall, I concur most strongly with AnonReviewer1 that more work is needed to motivate the method and prove its robustness applicability, as well as to polish the presentation.
train
[ "SyeblhC2pQ", "Byl5tiR3pX", "HylYBoAhTm", "rJxr2qAnp7", "rJgNuJFq37", "H1g5Um_92m", "Byxla5U9h7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the valuable feedback!\nWe uploaded a revised version of the paper based on the comments.\n\n- The reason behind using V-GMM is that V-GMM is much faster than KDE in inference and has a better generalization ability compared to GMM. We use V-GMM as a proof of concept for the idea “Curiosity-Driven Ex...
[ -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "Byxla5U9h7", "H1g5Um_92m", "rJgNuJFq37", "iclr_2019_SklXvs0qt7", "iclr_2019_SklXvs0qt7", "iclr_2019_SklXvs0qt7", "iclr_2019_SklXvs0qt7" ]
iclr_2019_SklcFsAcKX
Deep Denoising: Rate-Optimal Recovery of Structured Signals with a Deep Prior
Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy image. The underlying principle is that neural networks trained on large datasets have empirically been shown to be able to generate natural images well from a low-dimensional latent representation of the image. Given such a generator network, or prior, a noisy image can be denoised by finding the closest image in the range of the prior. However, there is little theory to justify this success, let alone to predict the denoising performance as a function of the networks parameters. In this paper we consider the problem of denoising an image from additive Gaussian noise, assuming the image is well described by a deep neural network with ReLu activations functions, mapping a k-dimensional latent space to an n-dimensional image. We state and analyze a simple gradient-descent-like iterative algorithm that minimizes a non-convex loss function, and provably removes a fraction of (1 - O(k/n)) of the noise energy. We also demonstrate in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data.
rejected-papers
The paper analyzes the interesting problem of image denoising with neural networks by imposing simplifying assumptions on the Gaussianity and independence of the prior. A bound is established from the analysis of (Hand & Voroninksi, 2018) that can be algorithmically achieved through a small tweak to gradient descent. Unfortunately, the contribution of this paper is incremental given the recent works of (Hand & Voroninksi, 2018) and (Bora et al., 2017); an opinion the reviewers unanimously shared. Reviewer opinion differed on whether they found the overall contribution to be barely acceptable or simply insufficient. No reviewer detected a major advance, and there seems to be a question of whether the achievement is significant given the strength of the assumptions required to achieve the modest additions. After scrutiny, the main theoretical contributions of the paper appear to be a bit overstated. For example, the bound in Theorem 1 is quite weak: it does not establish convergence to a global minimizer (even under the strong assumptions given), but only that Algorithm 1 eventually remains in a neighborhood of the global minimizer. It is true that this neighborhood can be made arbitrarily small by increasing the strength of the assumptions made on epsilon and omega, but epsilon remains a constant with respect to iteration count. The subsequent claim that the algorithm achieves a denoising rate of sigma^2 k/n is not an accurate interpretation of Theorem 1, given that this claim would require require (at the very least) that epsilon can be made arbitrarily small, which it cannot be. More precision is required in stating supportable conclusions from the given results. The algorithmic motivation itself is rather weak, in the sense that this paper only provides an anecdotal demonstration that there are no spurious critical points beyond the negation of the global minimizer---the theoretical support for this claim already resides in (Hand & Voroninski, 2018). The provenance of such a central observation was not made sufficiently clear in the paper nor in the discussion. An additional quibble about the experimental evaluation is that it does not compare to plain gradient descent (or other baseline optimization techniques), which the authors observe almost always works in the scenario considered. It seems that the "negation tweak" embedded in Algorithm 1 has no real impact on the experimental results, raising the question of whether the contributions do indeed have any practical import. The descriptions offered in the current paper suggest that a serious algorithmic advantage has yet to be demonstrated in any real experiment. The paper requires a far better evaluation of Algorithm 1 in comparison to standard baseline optimizers, to support the case that the proposed algorithmic tweak has practical significance. This paper remained in a weak borderline position after the review and discussion period. In the end, this was a very difficult decision to make, but I think the paper would benefit from further strengthening before it can constitute a solid publication.
train
[ "B1l4G17zT7", "HyeSpphtam", "S1lQF03t6m", "S1eRyjhF6X", "SkxQw8EGpQ", "SJlH2n1Zh7" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the signal denoting problem. The theoretical results are nice, and supported by numerical experiments. I have the following two major concerns:\n\n(1) Using deep neural network as a prior in signal denoising is definitely an important and also challenging problem, only when the neural network is...
[ 5, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, 3, 4 ]
[ "iclr_2019_SklcFsAcKX", "B1l4G17zT7", "SJlH2n1Zh7", "SkxQw8EGpQ", "iclr_2019_SklcFsAcKX", "iclr_2019_SklcFsAcKX" ]
iclr_2019_SklckhR5Ym
Improved Language Modeling by Decoding the Past
Highly regularized LSTMs achieve impressive results on several benchmark datasets in language modeling. We propose a new regularization method based on decoding the last token in the context using the predicted distribution of the next token. This biases the model towards retaining more contextual information, in turn improving its ability to predict the next token. With negligible overhead in the number of parameters and training time, our Past Decode Regularization (PDR) method achieves a word level perplexity of 55.6 on the Penn Treebank and 63.5 on the WikiText-2 datasets using a single softmax. We also show gains by using PDR in combination with a mixture-of-softmaxes, achieving a word level perplexity of 53.8 and 60.5 on these datasets. In addition, our method achieves 1.169 bits-per-character on the Penn Treebank Character dataset for character level language modeling. These results constitute a new state-of-the-art in their respective settings.
rejected-papers
The paper proposes an additional module to train language models, adding a new loss that tries to predict the previous token given the next one, thus enforcing the model to remember the past. Two out of 3 reviewers recommend to accept the paper; the third one said it was misleading to claim SOTA since authors didn't try the mixture-of-softmax model that is actually currently SOTA. The authors acknowledged and modified the paper accordingly, and added a few more experiments. The reviewer still thinks the improvements are not important enough to claim significant novelty. Overall, I think the idea is simple and adds some structure to language modeling, but I also concur with the reviewer about limited improvements, which makes it a borderline paper. When calibrating with other area chairs, I decided to recommend to reject the paper.
train
[ "B1g6I0toy4", "Bkg7KrSUA7", "HkeEorv1hm", "SJg_ec1m0m", "BJe0cdJfC7", "BJlh2rJGCm", "HylZSxkGAQ", "ryevn2r93m", "S1gyA_wmhX", "SJedESYsj7", "BklBzOjYiQ", "ryltyRlNim", "ByeYhB1EjX", "B1xug9umjX", "ryeLx7S7jm", "rJxSPfnC5X", "HkeleVGa5Q" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public", "author", "public" ]
[ "I am sorry for the late reply, and thank you for the update. I am convinced that the suggested technique is useful.", "We have uploaded a revised version of the paper. The main changes in brief, which have been discussed in detail in the comments below, are as follows\n\n1. Addition of PDR to the Mixture-of-Soft...
[ -1, -1, 3, -1, -1, -1, -1, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 5, -1, -1, -1, -1, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "HylZSxkGAQ", "iclr_2019_SklckhR5Ym", "iclr_2019_SklckhR5Ym", "BJe0cdJfC7", "HkeEorv1hm", "ryevn2r93m", "S1gyA_wmhX", "iclr_2019_SklckhR5Ym", "iclr_2019_SklckhR5Ym", "BklBzOjYiQ", "HkeleVGa5Q", "ByeYhB1EjX", "B1xug9umjX", "ryeLx7S7jm", "rJxSPfnC5X", "HkeleVGa5Q", "iclr_2019_SklckhR5Y...
iclr_2019_SklgHoRqt7
Metric-Optimized Example Weights
Real-world machine learning applications often have complex test metrics, and may have training and test data that follow different distributions. We propose addressing these issues by using a weighted loss function with a standard convex loss, but with weights on the training examples that are learned to optimize the test metric of interest on the validation set. These metric-optimized example weights can be learned for any test metric, including black box losses and customized metrics for specific applications. We illustrate the performance of our proposal with public benchmark datasets and real-world applications with domain shift and custom loss functions that balance multiple objectives, impose fairness policies, and are non-convex and non-decomposable.
rejected-papers
While there was some support for the ideas presented, the majority of reviewers did not think this paper was ready for publication at ICLR. In particular the experiments need more work, including the protocol for validation, and attention to overfitting.
val
[ "ryeamta0hQ", "Byg37fo2hm", "Bke6g1Hc37" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose to optimize a black-box (validation/test) metric by learning to re-weight the training examples. The weights are calculated from a linear model on an auto-encoder-computed embedding, and the parameters of the linear model is found by an Gaussian-Process-Regression-(UCB)-guided global optimizati...
[ 4, 4, 7 ]
[ 4, 4, 3 ]
[ "iclr_2019_SklgHoRqt7", "iclr_2019_SklgHoRqt7", "iclr_2019_SklgHoRqt7" ]
iclr_2019_Sklqvo0qt7
A Priori Estimates of the Generalization Error for Two-layer Neural Networks
New estimates for the generalization error are established for a nonlinear regression problem using a two-layer neural network model. These new estimates are a priori in nature in the sense that the bounds depend only on some norms of the underlying functions to be fitted, not the parameters in the model. In contrast, most existing results for neural networks are a posteriori in nature in the sense that the bounds depend on some norms of the model parameters. The error rates are comparable to that of the Monte Carlo method in terms of the size of the dataset. Moreover, these bounds are equally effective in the over-parametrized regime when the network size is much larger than the size of the dataset.
rejected-papers
I enjoyed reading the paper myself and agree with some of the criticisms raised by the reviewers, but not all of them. In particular, I don't think it's a major issues that this work studies an explicit regularization scheme BECAUSE the state of our understanding of generalization in deep learning is so embarrassingly poor!! Unlike a lot of work, this work is engaging with the *approximation* error and developing risk bounds (called "generalization error" here ... not my favorite term for the risk!) rather than just controlling the generalization gap. The simple proof in the bounded noiseless case was nice to see. On the other hand, not being familiar with the work of Klusowski and Barron (2016), I'm not willing to overrule the reviewers on judgments that this work is not novel enough. I would suggest the authors take control of this and paint a more detailed picture of how these two bodies of work relate, including how the proof techniques and arguments overlap. Some other comments: 1. your theorem requires \lambda > 4, but then you're using \lambda = 0.1. this seems problematic to me. 2. your "nonvacuous upper bound" is path-norm/sqrt(n) ... but do the numbers in the table include the constants? looking at the constants that are likely to show up, (4Bn sqrt(2 log 2d), they are easily contributing a factor greater than 10 which would make these bounds vacuous as well. you need to explain how you are calculating these numbers more carefully. 3. several times Arora et al and Neyshabur et al are cited when reference is being made to numerical experiments to show that existing bounds are vacuously large. But Dziugaite and Roy, who you cite for the term "nonvacuous", made an earlier analysis of path-norm bounds in their appendix and point out that they are vacuous. 4. the paper does not really engage with the fact that you are unlikely to be exactly minimizing the functional J. any hope of bridging this gap? 5. the experiments in general are a bit too vaguely described. also, you control squared error but then only report classification error. would be interested to see both.
test
[ "BkeLnEkK6m", "HyeE_R5u67", "ryerHq0xaQ", "SklpV1w5nQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors consider the notion of path norm for two layer ReLu network, and derive a generalization bound for a path-norm regularized estimator under a regression model.\n\nI apologize for the cursory review, as I have only been asked to review this paper two days ago. I have two main concerns about this paper: i...
[ 4, 4, 4, 5 ]
[ 3, 3, 4, 3 ]
[ "iclr_2019_Sklqvo0qt7", "iclr_2019_Sklqvo0qt7", "iclr_2019_Sklqvo0qt7", "iclr_2019_Sklqvo0qt7" ]
iclr_2019_Sklr9i09KQ
Neural Networks for Modeling Source Code Edits
Programming languages are emerging as a challenging and interesting domain for machine learning. A core task, which has received significant attention in recent years, is building generative models of source code. However, to our knowledge, previous generative models have always been framed in terms of generating static snapshots of code. In this work, we instead treat source code as a dynamic object and tackle the problem of modeling the edits that software developers make to source code files. This requires extracting intent from previous edits and leveraging it to generate subsequent edits. We develop several neural networks and use synthetic data to test their ability to learn challenging edit patterns that require strong generalization. We then collect and train our models on a large-scale dataset consisting of millions of fine-grained edits from thousands of Python developers.
rejected-papers
This paper focuses on neural network models for source code edits. Compared to prior literature that focused on generative models of source codes, this paper focuses on the generative models of edit sequences of the source code. The paper explores both explicit and implicit representations of source code edits with experiments on synthetic and real code data. Pros: The task studied has a potential real world impact. The reviewers found the paper is generally clear to read. Cons: While the paper doesn't have a major flaw, the overall impact and novelty of the paper are considered to be relatively marginal. Even after the rebuttal, none of the reviewers felt compelled to increase their score. One point that came up multiple times is that the paper treats the source code as flat text and does not model the semantic and syntactic structure of the source code (via e.g., abstract syntax tree). While this alone would have not been a deal-breaker, the overall substance presented in the paper does not seem strong. Also, the empirical results are reasonable but not impressive given that the experiments are focused more on the synthetic data, and the experiments on the real source code are weaker and less clear as has been also noted by the fourth reviewer. Verdict: Possible weak reject. No significant deal breaker per say but the overall substance and novelty are marginal.
train
[ "Hkxm1kN214", "B1ey36qPk4", "Hkeev9g9CQ", "rJeObclc0Q", "B1e_Utl5CX", "r1xwoul9Cm", "Byxo3QfphQ", "rJeHXBvqnQ", "ByxTpyLc3X" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and for your questions.\n\n> 1. It's nice that you've tested out different single synthetic edit patterns, but what happens in a dataset with multiple edit patterns? Because this will be the scenario in a real world dataset.\n\nThis is already in the paper. See the “MultiTask” dataset in ...
[ -1, 5, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, 2, 4, 4 ]
[ "B1ey36qPk4", "iclr_2019_Sklr9i09KQ", "Byxo3QfphQ", "rJeHXBvqnQ", "ByxTpyLc3X", "iclr_2019_Sklr9i09KQ", "iclr_2019_Sklr9i09KQ", "iclr_2019_Sklr9i09KQ", "iclr_2019_Sklr9i09KQ" ]
iclr_2019_SklrrhRqFX
Learning Physics Priors for Deep Reinforcement Learing
While model-based deep reinforcement learning (RL) holds great promise for sample efficiency and generalization, learning an accurate dynamics model is challenging and often requires substantial interactions with the environment. Further, a wide variety of domains have dynamics that share common foundations like the laws of physics, which are rarely exploited by these algorithms. Humans often acquire such physics priors that allow us to easily adapt to the dynamics of any environment. In this work, we propose an approach to learn such physics priors and incorporate them into an RL agent. Our method involves pre-training a frame predictor on raw videos and then using it to initialize the dynamics prediction model on a target task. Our prediction model, SpatialNet, is designed to implicitly capture localized physical phenomena and interactions. We show the value of incorporating this prior through empirical experiments on two different domains – a newly created PhysWorld and games from the Atari benchmark, outperforming competitive approaches and demonstrating effective transfer learning.
rejected-papers
The paper suggests a new way to learn a physics prior, in an action-free way from raw frames. The idea is to "learn the common rules of physics" in some sense (from purely visual observations) and use that as pre-training. The authors made a number of experiments in response to the reviewer concerns, but the submission still fell short of their expectations. In the post-rebuttal discussion, the reviewers mentioned that it's not clear how SpatialNet is different from a ConvLSTM, mentioned the writing quality and the fact that the "physics prior" is really quite close to what others call video prediction in other baselines.
train
[ "Bkeq10KhJ4", "rJlJpsLiJE", "S1llsjn9JN", "HJetKohcJN", "r1xm853cyE", "S1ecIeKSkE", "H1euYXIq2m", "Skgq5LTxR7", "SkgxeUagCX", "S1gsaHTlR7", "ryl_iHpl07", "SJex19xJam", "H1ljcqfW37" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your response. The extra experiments definitely make this paper much convincing. However, learning *physics* priors is not clarified in the text/experiments. The ablation study is not convincing to me to show *physics priors* is different or superior to \"imagination augmented\"(video prediction) met...
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, 4, 5 ]
[ "ryl_iHpl07", "S1llsjn9JN", "SJex19xJam", "H1ljcqfW37", "S1ecIeKSkE", "S1gsaHTlR7", "iclr_2019_SklrrhRqFX", "iclr_2019_SklrrhRqFX", "SJex19xJam", "H1euYXIq2m", "H1ljcqfW37", "iclr_2019_SklrrhRqFX", "iclr_2019_SklrrhRqFX" ]
iclr_2019_Skluy2RcK7
Selectivity metrics can overestimate the selectivity of units: a case study on AlexNet
Various methods of measuring unit selectivity have been developed in order to understand the representations learned by neural networks (NNs). Here we undertake a comparison of four such measures on AlexNet, namely, localist selectivity, \precision (Zhou et al, ICLR 2015), class-conditional mean activity selectivity CCMAS; (Morcos et al, ICLR 2018), and a new measure called top-class selectivity. In contrast with previous work on recurrent neural networks (RNNs), we fail to find any 100\% selective `localist units' in AlexNet, and demonstrate that the \precision and CCMAS measures provide a much higher level of selectivity than is warranted, with the most selective hidden units only responding strongly to a small minority of images from within a category. We also generated activation maximization (AM) images that maximally activated individual units and found that under (5\%) of units in fc6 and conv5 produced interpretable images of objects, whereas fc8 produced over 50\% interpretable images. Furthermore, the interpretable images in the hidden layers were not associated with highly selective units. These findings highlight the problem with current selectivity measures and show that new measures are required in order to provide a better assessment of learned representations in NNs. We also consider why localist representations are learned in RNNs and not AlexNet.
rejected-papers
The paper examined the folk-knowledge that there are highly selective units in popular CNN architectures, and performed a detailed analysis of recent measures of unit selectivity, as well as introducing a novel one. The finding that units are not extremely selective in CNNs was intriguing to some (not all) reviewers. Further, they show recent measures of selectivity dramatically over-estimate selectivity. There was not tight agreement amongst the reviewers on the paper's rating, but it trended towards rejection. Weaknesses highlighted by reviewers include lack of visual clarity in their demonstrations, the use of a several-generations-old CNN architecture, as well as a lack of enthusiasm for the findings.
train
[ "r1l_LOiKCm", "SyewgdjtCm", "H1g9WtiYAX", "BJlGDtiYCX", "SJlixzjtCm", "BJeGSxsKRm", "rkej1sdjhm", "B1gQqFQq3m", "rkgdcTkchX" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewers for their positive comments and their constructive criticisms.   Below we respond to these criticisms, some of which reflect a misunderstanding of our our main objectives (this is our fault--and something we clarify here and in our revision).  We hope our revision makes it clea...
[ -1, -1, -1, -1, -1, -1, 5, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "iclr_2019_Skluy2RcK7", "iclr_2019_Skluy2RcK7", "rkgdcTkchX", "rkgdcTkchX", "B1gQqFQq3m", "rkej1sdjhm", "iclr_2019_Skluy2RcK7", "iclr_2019_Skluy2RcK7", "iclr_2019_Skluy2RcK7" ]
iclr_2019_SklzIjActX
HIGHLY EFFICIENT 8-BIT LOW PRECISION INFERENCE OF CONVOLUTIONAL NEURAL NETWORKS
High throughput and low latency inference of deep neural networks are critical for the deployment of deep learning applications. This paper presents a general technique toward 8-bit low precision inference of convolutional neural networks, including 1) channel-wise scale factors of weights, especially for depthwise convolution, 2) Winograd convolution, and 3) topology-wise 8-bit support. We experiment the techniques on top of a widely-used deep learning framework. The 8-bit optimized model is automatically generated with a calibration process from FP32 model without the need of fine-tuning or retraining. We perform a systematical and comprehensive study on 18 widely-used convolutional neural networks and demonstrate the effectiveness of 8-bit low precision inference across a wide range of applications and use cases, including image classification, object detection, image segmentation, and super resolution. We show that the inference throughput and latency are improved by 1.6X and 1.5X respectively with minimal within 0.6%1to no loss in accuracy from FP32 baseline. We believe the methodology can provide the guidance and reference design of 8-bit low precision inference for other frameworks. All the code and models will be publicly available soon.
rejected-papers
The paper proposes to combine three methods of quantization and apply them to neural network compression. The methods are known in the literature. There is a lack of theoretical contribution, and experimental results show variable speedups that may not be competitive with the current state-of-the-art in neural network compression. The majority of reviewers recommend that this paper be rejected. The authors have not provided a response.
train
[ "rJxpIeRR27", "HJx1v9Vo2m", "HJlm4Ejc2X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper designs a system to automatically quantize the CNN pretrained models. This system contains three main components: 1) different scale factors for channel-wise network; 2) Winograd 8bit quantization; 3) topology wise 8bit operation support. All these three techniques are standard ways to perform model qua...
[ 6, 4, 4 ]
[ 4, 4, 4 ]
[ "iclr_2019_SklzIjActX", "iclr_2019_SklzIjActX", "iclr_2019_SklzIjActX" ]
iclr_2019_SkxANsC9tQ
Learning Graph Representations by Dendrograms
Hierarchical clustering is a common approach to analysing the multi-scale structure of graphs observed in practice. We propose a novel metric for assessing the quality of a hierarchical clustering. This metric reflects the ability to reconstruct the graph from the dendrogram encoding the hierarchy. The best representation of the graph for this metric in turn yields a novel hierarchical clustering algorithm. Experiments on both real and synthetic data illustrate the efficiency of the approach.
rejected-papers
All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance.
val
[ "SylOF8qBRX", "rJlW_G5BRm", "r1x2LYtSRQ", "HJe8cc-3hm", "SylEVwTI3X", "HkliHhtr3Q" ]
[ "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "As stated in our previous post, our main contribution is a metric that is *interpretable* in terms of graph reconstruction. We do not claim that our metric is better than existing ones. It is simply different. Its theoretical and practical interests come from the fact that it is derived from an optimization proble...
[ -1, -1, -1, 4, 5, 5 ]
[ -1, -1, -1, 4, 4, 3 ]
[ "HkliHhtr3Q", "SylEVwTI3X", "HJe8cc-3hm", "iclr_2019_SkxANsC9tQ", "iclr_2019_SkxANsC9tQ", "iclr_2019_SkxANsC9tQ" ]
iclr_2019_SkxJ-309FQ
Hallucinations in Neural Machine Translation
Neural machine translation (NMT) systems have reached state of the art performance in translating text and are in wide deployment. Yet little is understood about how these systems function or break. Here we show that NMT systems are susceptible to producing highly pathological translations that are completely untethered from the source material, which we term hallucinations. Such pathological translations are problematic because they are are deeply disturbing of user trust and easy to find with a simple search. We describe a method to generate hallucinations and show that many common variations of the NMT architecture are susceptible to them. We study a variety of approaches to reduce the frequency of hallucinations, including data augmentation, dynamical systems and regularization techniques, showing that data augmentation significantly reduces hallucination frequency. Finally, we analyze networks that produce hallucinations and show that there are signatures in the attention matrix as well as in the hidden states of the decoder.
rejected-papers
Strengths - Hallucinations are a problem for seq2seq models, esp trained on small datasets Weankesses - Hallucinations are known to exists, the analyses / observations are not very novel - The considered space of hallucinations source (i.e. added noise) is fairly limited, it is not clear that these are the most natural sources of hallucination and not clear if the methods defined to combat these types would generalize to other types. E.g., I'd rather see hallucinations appearing when running NMT on some natural (albeit noisy) corpus, rather than defining the noise model manually. - The proposed approach is not particularly interesting, and may not be general. Alternative techniques (e.g., modeling coverage) have been proposed in the past. - A wider variety of language pairs, amounts of data, etc needed to validate the methods. This is an empirical paper, I would expect higher quality of evaluation. Two reviewers argued that the baseline system is somewhat weak and the method is not very exciting.
train
[ "ryxxt8I9RX", "rkg3VT4q0Q", "r1e3aqVqRQ", "rJedhZhYA7", "H1lf5bhKA7", "ryeoD08d0X", "rkxlNRL_C7", "rygelC8dR7", "Byg0UoU_RX", "SyeoGAwkAX", "BJxe2Ed9nQ", "HkgrJLDq3m", "rJgDGTh_2X" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The transformer_big are too large to feasibly run many experiments on. However, at your request, we are currently finishing a batch of transformer_base models (each takes about a week). We will include the results of these models in our manuscript.", "I suggest to use Transformer_big configuration, which shoul...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "rkg3VT4q0Q", "rkxlNRL_C7", "rygelC8dR7", "H1lf5bhKA7", "SyeoGAwkAX", "rJgDGTh_2X", "rygelC8dR7", "HkgrJLDq3m", "BJxe2Ed9nQ", "iclr_2019_SkxJ-309FQ", "iclr_2019_SkxJ-309FQ", "iclr_2019_SkxJ-309FQ", "iclr_2019_SkxJ-309FQ" ]
iclr_2019_SkxXwo0qYm
An Automatic Operation Batching Strategy for the Backward Propagation of Neural Networks Having Dynamic Computation Graphs
Organizing the same operations in the computation graph of a neural network into batches is one of the important methods to improve the speed of training deep learning models and applications since it helps to execute operations with the same type in parallel and to make full use of the available hardware resources. This batching task is usually done by the developers manually and it becomes more dif- ficult when the neural networks have dynamic computation graphs because of the input data with varying structures or the dynamic flow control. Several automatic batching strategies were proposed and integrated into some deep learning toolkits so that the programmers don’t have to be responsible for this task. These strategies, however, will miss some important opportunities to group the operations in the backward propagation of training neural networks. In this paper, we proposed a strategy which provides more efficient automatic batching and brings benefits to the memory access in the backward propagation. We also test our strategy on a variety of benchmarks with dynamic computation graphs. The result shows that it really brings further improvements in the training speed when our strategy is working with the existing automatic strategies.
rejected-papers
This paper describes a new batching strategy for more efficient training of deep neural nets. The idea stems from the observation that some operations can only be batched more efficiently in the backward, suggesting that batching should be different between forward and backward. The results show that the proposed method improves upon existing batch strategies across three tasks. The reviewers find the work novel, but note that it does not properly address the trade-offs made by the technique - such as memory consumption. They also argue that the writing should be improved before acceptance at ICLR.
train
[ "SJxssjhchm", "Hyghugeq3Q", "BJxJTFhd2X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Batching of similar and independent operations in a neural network computation graph is a common way to improve efficiency through computational parallelism. Optimization is often applied to the computation graph by grouping independent operations into batches that can be computed in parallel. \nExisting technique...
[ 5, 6, 4 ]
[ 3, 5, 4 ]
[ "iclr_2019_SkxXwo0qYm", "iclr_2019_SkxXwo0qYm", "iclr_2019_SkxXwo0qYm" ]
iclr_2019_SkxYOiCqKX
Pixel Chem: A Representation for Predicting Material Properties with Neural Network
In this work we developed a new representation of the chemical information for the machine learning models, with benefits from both the real space (R-space) and energy space (K-space). Different from the previous symmetric matrix presentations, the charge transfer channel based on Pauling’s electronegativity is derived from the dependence on real space distance and orbitals for the hetero atomic structures. This representation can work for the bulk materials as well as the low dimensional nano materials, and can map the R-space and K-space into the pixel space (P-space) by training and testing 130k structures. P-space can well reproduce the R-space quantities within error 0.53. This new asymmetric matrix representation double the information storage than the previous symmetric representations.This work provides a new dimension for the computational chemistry towards the machine learning architecture.
rejected-papers
I would like to highlight to the PCs that reviewers highlighted clear evidence of plagiarism from prior work, which I was able to easily verify (a full paragraph of text was copied, word-for-word, from a paper describing one of the baselines the current work compares against). Further, all reviewers unanimously agreed that the paper was poorly written, and contains no useful advances for the ICLR audience. I recommend a rejection, and further, examination by the PCs of the conduct of the authors.
train
[ "HkexnxSeRX", "SkxfUlreAQ", "HkxXfere0m", "S1gJzNO7Tm", "rJeBH0RxaQ", "SJxQQwGRjX", "H1lM7x8g37", "SklwYnBlhQ", "rJgh6oIk3X", "B1gwMINyn7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "public" ]
[ "Thanks for your constructive review.\nWe have added the result of [1] in our comparison, and fixed typos in our paper.\nAnd we notice that our PCnet underperformance some neural network currently, and we are trying to improve the performance based on your review.\nOur Pixel Chem is applicable to periodic structure...
[ -1, -1, -1, 3, 1, 3, -1, -1, -1, -1 ]
[ -1, -1, -1, 3, 5, 5, -1, -1, -1, -1 ]
[ "SJxQQwGRjX", "rJeBH0RxaQ", "S1gJzNO7Tm", "iclr_2019_SkxYOiCqKX", "iclr_2019_SkxYOiCqKX", "iclr_2019_SkxYOiCqKX", "rJgh6oIk3X", "B1gwMINyn7", "iclr_2019_SkxYOiCqKX", "iclr_2019_SkxYOiCqKX" ]
iclr_2019_SkxZFoAqtQ
Improving Composition of Sentence Embeddings through the Lens of Statistical Relational Learning
Various NLP problems -- such as the prediction of sentence similarity, entailment, and discourse relations -- are all instances of the same general task: the modeling of semantic relations between a pair of textual elements. We call them textual relational problems. A popular model for textual relational problems is to embed sentences into fixed size vectors and use composition functions (e.g. difference or concatenation) of those vectors as features for the prediction. Meanwhile, composition of embeddings has been a main focus within the field of Statistical Relational Learning (SRL) whose goal is to predict relations between entities (typically from knowledge base triples). In this work, we show that textual relational models implicitly use compositions from baseline SRL models. We show that such compositions are not expressive enough for several tasks (e.g. natural language inference). We build on recent SRL models to address textual relational problems, showing that they are more expressive, and can alleviate issues from simpler compositions. The resulting models significantly improve the state of the art in both transferable sentence representation learning and relation prediction.
rejected-papers
This paper offers a new angle through which to study the development of comparison functions for sentence pair classification tasks by drawing on the literature on statistical relational learning. All three reviewers seemed happy to see an attempt to unify these two closely related relation-learning problems. However, none of the reviewers were fully convinced that this attempt has yielded any substantial new knowledge: Many of the ideas that come out of this synthesis have already appeared in the sentence-pair modeling literature (in work cited in the paper under review), and the proposed new methods do not yield substantial improvements for the tasks they're tested on. I'm happy to accept the authors' arguments that sentence-to-vector models have practical value, and I'm not placing too much weight on the reviewer's comments about the choice to use that modeling framework. I am slightly concerned that the reviewers (especially R2) observed some overly broad statements in the paper, and I urge the authors to take those comments very seriously. I'm mostly concerned, though, about the lack of an impactful positive contribution: I'd have hoped for a paper of this kind to offer a a method with clear empirical advantages over prior work, or else a formal result which is more clearly new, and the reviewers are not convinced that this paper makes a contribution of either kind.
train
[ "HJlRzmOwpX", "rJe22muvTm", "rylyjQuwTX", "BJeOdQuPp7", "HyeT7CTY2Q", "BJxqOwHYnm", "rJgpfc2O37" ]
[ "public", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their detailed and constructive feedback; we will provide answers to common concerns below, and respond to specific concerns in separate replies to each reviewer. \nWe will also update the paper and take in account all suggestions.\n \n* Limited scope of the approach: \nReviewers raised ...
[ -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2019_SkxZFoAqtQ", "rJgpfc2O37", "BJxqOwHYnm", "HyeT7CTY2Q", "iclr_2019_SkxZFoAqtQ", "iclr_2019_SkxZFoAqtQ", "iclr_2019_SkxZFoAqtQ" ]
iclr_2019_SkxbDsR9Ym
RelWalk -- A Latent Variable Model Approach to Knowledge Graph Embedding
Knowledge Graph Embedding (KGE) is the task of jointly learning entity and relation embeddings for a given knowledge graph. Existing methods for learning KGEs can be seen as a two-stage process where (a) entities and relations in the knowledge graph are represented using some linear algebraic structures (embeddings), and (b) a scoring function is defined that evaluates the strength of a relation that holds between two entities using the corresponding relation and entity embeddings. Unfortunately, prior proposals for the scoring functions in the first step have been heuristically motivated, and it is unclear as to how the scoring functions in KGEs relate to the generation process of the underlying knowledge graph. To address this issue, we propose a generative account of the KGE learning task. Specifically, given a knowledge graph represented by a set of relational triples (h, R, t), where the semantic relation R holds between the two entities h (head) and t (tail), we extend the random walk model (Arora et al., 2016a) of word embeddings to KGE. We derive a theoretical relationship between the joint probability p(h, R, t) and the embeddings of h, R and t. Moreover, we show that marginal loss minimisation, a popular objective used by much prior work in KGE, follows naturally from the log-likelihood ratio maximisation under the probabilities estimated from the KGEs according to our theoretical relationship. We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph. The KGEs learnt by our proposed method obtain state-of-the-art performance on FB15K237 and WN18RR benchmark datasets, providing empirical evidence in support of the theory.
rejected-papers
This paper proposes a new scoring function for link prediction model that is based on a generative model for the knowledge graph, based on a random-walk model previously used for word embeddings. The new scoring function, as it is accompanied by the generative model, provides interesting theoretical results that the reviewers also appreciate. Finally, the results are quite strong, as they obtain state-of-art on the primary benchmarks for the task. Based on the submitted version, the reviewers and AC note the following potential weaknesses: (1) the reviewers felt that the proposed work is a direct application of the random-walk model from Arora et al. and thus limited in novelty, (2) given the generative model, the reviewers felt that the paper would benefit from an analysis of the learned embeddings, and their difference from ones from existing approaches, (3) The reviewers noted that the authors were using an incorrect version of FB15k and WN18, (4) the authors were not providing results for all the metrics, (5) the coverage of related work is quite limited. The authors addressed many of the concerns raised by the reviewers in their comments and revision, in particular, they obtained state-of-art results for the corrected versions of the benchmarks. Further, they clarified the assumptions made in their modeling and revised the related work to include the papers that the reviewers mentioned. However, the concerns regarding the lack of novelty of the proposed approach, w.r.t Arora et al 2016 and the need for further analysis of the learned embeddings, still remain. This paper comes really close to getting accepted, but ultimately the reviewers agree that the remaining concerns need to be addressed.
train
[ "HyxKd25wp7", "S1gUjPPvRm", "BJgxzoVCam", "HJlNV5yu6X", "H1ey7t1_6Q", "rkxdq_JOT7", "SklgyeG5hX", "r1eY4IjF2X", "rkxAisFCcm", "BJgu1f6cqQ", "r1eH1XsOc7", "HJgNw8ZOqm", "HJxptPnIqX", "rkl-f_f757", "rygUojWAKQ", "HJlffFoTFX", "r1l8S-KhY7", "r1ecOjvhFQ" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public", "author", "public", "author", "public" ]
[ "This paper proposes to perform the link prediction in knowledge bases by introducing a new scoring function and theoretically motivating their method. The authors validate their proposed approach through several experiments. \n\nThis paper reads well and the results appear sound. I personally find the theoretical ...
[ 6, -1, -1, -1, -1, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_SkxbDsR9Ym", "BJgxzoVCam", "rkxdq_JOT7", "r1eY4IjF2X", "SklgyeG5hX", "HyxKd25wp7", "iclr_2019_SkxbDsR9Ym", "iclr_2019_SkxbDsR9Ym", "BJgu1f6cqQ", "r1eH1XsOc7", "HJgNw8ZOqm", "HJxptPnIqX", "rkl-f_f757", "iclr_2019_SkxbDsR9Ym", "HJlffFoTFX", "r1l8S-KhY7", "r1ecOjvhFQ", "icl...
iclr_2019_SkxxIs0qY7
CoT: Cooperative Training for Generative Modeling of Discrete Data
We propose Cooperative Training (CoT) for training generative models that measure a tractable density for discrete data. CoT coordinately trains a generator G and an auxiliary predictive mediator M. The training target of M is to estimate a mixture density of the learned distribution G and the target distribution P, and that of G is to minimize the Jensen-Shannon divergence estimated through M. CoT achieves independent success without the necessity of pre-training via Maximum Likelihood Estimation or involving high-variance algorithms like REINFORCE. This low-variance algorithm is theoretically proved to be superior for both sample generation and likelihood prediction. We also theoretically and empirically show the superiority of CoT over most previous algorithms in terms of generative quality and diversity, predictive generalization ability and computational cost.
rejected-papers
The paper proposes an original and interesting alternative to GANs for optimizing a (proxy to) Jensen-Shannon divergence for discrete sequence data. Experimental results seem promising. Official reviewers were largely positive based on originality and results. However, as it currently stands, the paper still makes false claims that are not well explained or supported, in particular its repeated central claim to provide a "low-variance, bias-free algorithm" to optimize JS. Given that these central issues were clearly pointed out in a review from a prior submission of this work to another venue (review reposted on the current OpenReview thread on Nov. 6), the AC feels that the authors had had plenty of time to look into them and address them in the paper, as well as occasions to reference and discuss relevant related work pointed in that review. The current version of the paper does neither. The algorithm is not unbiased for at least two reasons pointed out in discussions: a) in practice a parameterized mediator will be unable to match the true P+G, at best yielding a useful biased estimate (not unlike how GAN's parameterized discriminator induces bias). b) One would need to use REINFORCE (or similar) to get an unbiased estimate of the gradient in Eq. 13, a key detail omitted from the paper. From the discussion thread it is possible that authors were initially confused about the fact that this fundamental issue did not disappear with Eq. 13 (they commented "most important idea we want to present in this paper is HOW TO avoid incorporating REINFORCE. Please refer to Eq.13, which is the key to the success of this."). But rather, as guessed by a commentator, that a heuristic implementation, not explained in the paper, dropped the REINFORCE term thus effectively trading variance for bias. On December 4th authors posted a justification confirming heuristically dropping the REINFORCE terms when taking the gradient of Eq. 13, and said they could attach detailed analysis and experiment results in the camera-ready version. However if one of the "most important idea" of the paper is how to avoid REINFORCE (as still implied and highlighted in the abstract), the AC finds it worrisome that the paper had no explanation of when and how this was done, and no analysis of the bias induced by (unreportedly) dropping the term. The approach remains original, interesting, and potentially promising, but as it currently stands, AC and SAC agreed that inexact theoretical over-claiming and insufficient justification and in-depth analysis of key heuristic shortcuts/tradeoffs (however useful) are too important for their fixing to be entrusted to a final camera-ready revision step. A major revision that clearly adresses these issues in depth (both in how the approach is presented and in supporting experiments) will constitute a much more convincing, sound, and impactful research contribution.
test
[ "SJefME_ZeN", "Bygvu1w2jm", "rkgfHtIWx4", "r1eZN4i4JN", "Skx7dQKjC7", "BklMQW7oRQ", "rygI5D8q0Q", "BJgeArCZ0m", "BklAEn3K67", "HyxkN2DDTm", "B1l29bRkpm", "HJl2Eka7pX", "r1eMNq-faQ", "S1xBB06y6Q", "BklsMW0kpX", "rJelZVAJpX", "Hkghpphk6X", "SJx1up316X", "r1lfAuiyaQ", "HkgJzPJP2X"...
[ "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "official_reviewer", "author", "public", "author", "author", "public", "author", "author", "author", "public", "public", "official_reviewer", "official_reviewer", "author", "public",...
[ "Thank you for your advice.\n\n1. The paper was revised in the revision period to address most of the comments on experiments. We consider it now to be more solid against the previous comments and criticisms. You may want to re-check it.\n\n2. We agree, as we promise the corresponding discussion would be included ...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rkgfHtIWx4", "iclr_2019_SkxxIs0qY7", "BklsMW0kpX", "Skx7dQKjC7", "S1xBB06y6Q", "rygI5D8q0Q", "iclr_2019_SkxxIs0qY7", "rJelZVAJpX", "iclr_2019_SkxxIs0qY7", "HJl2Eka7pX", "HkgJzPJP2X", "r1eMNq-faQ", "SJx1up316X", "SJx1up316X", "Bygvu1w2jm", "r1lfAuiyaQ", "SJx1up316X", "iclr_2019_Skx...
iclr_2019_Skz-3j05tm
Graph Convolutional Network with Sequential Attention For Goal-Oriented Dialogue Systems
Domain specific goal-oriented dialogue systems typically require modeling three types of inputs, viz., (i) the knowledge-base associated with the domain, (ii) the history of the conversation, which is a sequence of utterances and (iii) the current utterance for which the response needs to be generated. While modeling these inputs, current state-of-the-art models such as Mem2Seq typically ignore the rich structure inherent in the knowledge graph and the sentences in the conversation context. Inspired by the recent success of structure-aware Graph Convolutional Networks (GCNs) for various NLP tasks such as machine translation, semantic role labeling and document dating, we propose a memory augmented GCN for goal-oriented dialogues. Our model exploits (i) the entity relation graph in a knowledge-base and (ii) the dependency graph associated with an utterance to compute richer representations for words and entities. Further, we take cognizance of the fact that in certain situations, such as, when the conversation is in a code-mixed language, dependency parsers may not be available. We show that in such situations we could use the global word co-occurrence graph and use it to enrich the representations of utterances. We experiment with the modified DSTC2 dataset and its recently released code-mixed versions in four languages and show that our method outperforms existing state-of-the-art methods, using a wide range of evaluation metrics.
rejected-papers
This paper describes a graph convolutional network (GCN) approach to capture relational information in natural language as well as knowledge sources for goal-oriented dialogue systems. Relational information is captured by dependency parses, and when there is code switching in the input language, word co-occurrence information is used instead. Experiments on the modified DSTC2 dataset show significant improvements over baselines. The original version of the paper lacked comparison to some SOTA baselines as also raised by the reviewers, these are included in the revised version. Although the results show improvements over other approaches, it is arguable BLEU and ROUGE scores are not good enough for this task. Inclusion of human evaluation in the results would be very useful.
test
[ "r1g3xb9tRX", "BJgSYWnIR7", "Hye1xZ3I0Q", "ryxiXl2UCm", "HJeBukc62m", "r1lCQGAFhm", "SklThhKisX" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Many thanks for adding the figures - they have improved my understading of the paper and I think they make it more easy to understand.", "We would like to thank you for your comments and valuable suggestions on improving the clarity of the paper. Below, we provide updates on some of the improvements that we have...
[ -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, -1, -1, 3, 4, 2 ]
[ "BJgSYWnIR7", "SklThhKisX", "r1lCQGAFhm", "HJeBukc62m", "iclr_2019_Skz-3j05tm", "iclr_2019_Skz-3j05tm", "iclr_2019_Skz-3j05tm" ]
iclr_2019_Skz3Q2CcFX
Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae
Embeddings are a fundamental component of many modern machine learning and natural language processing models. Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models. State of the art in analyzing embeddings consists in projecting them in two-dimensional planes without any interpretable semantics associated to the axes of the projection, which makes detailed analyses and comparison among multiple sets of embeddings challenging. In this work, we propose to use explicit axes defined as algebraic formulae over embeddings to project them into a lower dimensional, but semantically meaningful subspace, as a simple yet effective analysis and visualization methodology. This methodology assigns an interpretable semantics to the measures of variability and the axes of visualizations, allowing for both comparisons among different sets of embeddings and fine-grained inspection of the embedding spaces. We demonstrate the power of the proposed methodology through a series of case studies that make use of visualizations constructed around the underlying methodology and through a user study. The results show how the methodology is effective at providing more profound insights than classical projection methods and how it is widely applicable to many other use cases.
rejected-papers
Several visualizations are shown in this paper but it is unclear if they are novel.
train
[ "S1lEgPp70m", "Hyl64g_mRQ", "rJgQde81pQ", "ryeGFtiDT7", "B1lGiFoDaQ", "S1eFYDjDaQ", "Hyl7dLiDaX", "S1g6_qV937", "rJxIh6zc3Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for providing this additional references, we'll add them as relevant literature.\n\nRegarding the first one (Figure 4 of Bolukbasi et Al.) it surely is relevant, but as you can see the axis are the difference of two embeddings in two different embedding spaces. This makes the plot difficult to interpret ...
[ -1, -1, 3, -1, -1, -1, -1, 3, 4 ]
[ -1, -1, 4, -1, -1, -1, -1, 3, 3 ]
[ "Hyl64g_mRQ", "Hyl7dLiDaX", "iclr_2019_Skz3Q2CcFX", "S1g6_qV937", "S1g6_qV937", "rJxIh6zc3Q", "rJgQde81pQ", "iclr_2019_Skz3Q2CcFX", "iclr_2019_Skz3Q2CcFX" ]
iclr_2019_SkzK4iC5Ym
Diminishing Batch Normalization
In this paper, we propose a generalization of the BN algorithm, diminishing batch normalization (DBN), where we update the BN parameters in a diminishing moving average way. Batch normalization (BN) is very effective in accelerating the convergence of a neural network training phase that it has become a common practice. Our proposed DBN algorithm remains the overall structure of the original BN algorithm while introduces a weighted averaging update to some trainable parameters. We provide an analysis of the convergence of the DBN algorithm that converges to a stationary point with respect to trainable parameters. Our analysis can be easily generalized for original BN algorithm by setting some parameters to constant. To the best knowledge of authors, this analysis is the first of its kind for convergence with Batch Normalization introduced. We analyze a two-layer model with arbitrary activation function. The primary challenge of the analysis is the fact that some parameters are updated by gradient while others are not. The convergence analysis applies to any activation function that satisfies our common assumptions. For the analysis, we also show the sufficient and necessary conditions for the stepsizes and diminishing weights to ensure the convergence. In the numerical experiments, we use more complex models with more layers and ReLU activation. We observe that DBN outperforms the original BN algorithm on Imagenet, MNIST, NI and CIFAR-10 datasets with reasonable complex FNN and CNN models.
rejected-papers
The paper introduces a modification of batch normalization technique. In contrast to the original batch normalization that normalizes minibatch examples using their mean and standard deviation, this modification uses weighted average of mean and standard deviation from the current and all previous minibatches. The authors then provide some theoretical justification for the superiority of their variant of BatchNorm. Unfortunately, the empirical demonstration of the improved performance seems not sufficient and thus fairly unconvincing.
train
[ "H1xR8d84RQ", "rJgohYm5pm", "BJx32um9TX", "rJerXYm967", "Bkg4xgD62m", "H1gyh8vq2m", "Sylf5fDPhm" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The convergence to the stationary point rather than to the minimum is still the major concern for the strength/significance of the analysis for optimizing networks. I do not have further questions. ", "Thanks for your delightful review. Please allow us to try to address your remarks below:\n\n1) We believe it is...
[ -1, -1, -1, -1, 4, 3, 4 ]
[ -1, -1, -1, -1, 4, 3, 5 ]
[ "rJerXYm967", "Sylf5fDPhm", "Bkg4xgD62m", "H1gyh8vq2m", "iclr_2019_SkzK4iC5Ym", "iclr_2019_SkzK4iC5Ym", "iclr_2019_SkzK4iC5Ym" ]
iclr_2019_SkzeJ3A9F7
Beyond Games: Bringing Exploration to Robots in Real-world
Exploration has been a long standing problem in both model-based and model-free learning methods for sensorimotor control. While there has been major advances over the years, most of these successes have been demonstrated in either video games or simulation environments. This is primarily because the rewards (even the intrinsic ones) are non-differentiable since they are function of the environment (which is a black-box). In this paper, we focus on the policy optimization aspect of the intrinsic reward function. Specifically, by using a local approximation, we formulate intrinsic reward as a differentiable function so as to perform policy optimization using likelihood maximization -- much like supervised learning instead of reinforcement learning. This leads to a significantly sample efficient exploration policy. Our experiments clearly show that our approach outperforms both on-policy and off-policy optimization approaches like REINFORCE and DQN respectively. But most importantly, we are able to implement an exploration policy on a robot which learns to interact with objects completely from scratch just using data collected via the differentiable exploration module. See project videos at https://doubleblindICLR.github.io/robot-exploration/
rejected-papers
The authors propose implementing intrinsic motivation as a differentiable supervised loss coming from the error of a forward model, rather than the black box style of curiosity reward. The motivation is that this approach will lead to more sample efficient exploration for real robots. The use of a differentiable loss for policy optimization is interesting and has some novelty. However, the reviewers were unanimous in their criticism of the paper for poor baselines, unclear experiments and results, and unsupported claims. Even after substantial revisions to the paper, the AC and reviewers were unconvinced of the basic claims of the paper.
train
[ "rJeScKSnyN", "Syxgdeci14", "Hyg-mVIjyN", "rygyz3owk4", "SJxhXwuOCX", "rklLqv7q2X", "B1g6IN8707", "SklfCrWWRX", "S1xtwO-WR7", "SkgfN_-ZAX", "S1eCHvZb0m", "B1xMn8bW0X", "ryeSpwrep7", "S1lHJGUk6m", "S1ghxZMR3X", "B1eBUWdThm", "BJeWVWOphX", "HJlFlZOa2X", "HJlxrKt0nQ", "S1ePezGAh7"...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "public", "public", "public", "author", "author" ]
[ "Thank for the follow-up comments.\n\nR3: \"I don't see the necessary connection between achieving Euclidean-distant outcomes and achieving outcomes that are hard to predict; I think that would need to be established rigorously\"\n=> We describe our intrinsic reward r_t mathematically as follows:\n\nr_t = || f(x_t,...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "Syxgdeci14", "Hyg-mVIjyN", "rygyz3owk4", "S1xtwO-WR7", "B1g6IN8707", "iclr_2019_SkzeJ3A9F7", "S1eCHvZb0m", "iclr_2019_SkzeJ3A9F7", "SkgfN_-ZAX", "S1lHJGUk6m", "rklLqv7q2X", "ryeSpwrep7", "iclr_2019_SkzeJ3A9F7", "iclr_2019_SkzeJ3A9F7", "B1eBUWdThm", "BJeWVWOphX", "HJlFlZOa2X", "icl...
iclr_2019_Sy4lojC9tm
Dataset Distillation
Model distillation aims to distill the knowledge of a complex model into a simpler one. In this paper, we consider an alternative formulation called {\em dataset distillation}: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one. The idea is to {\em synthesize} a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data. For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic {\em distilled images} (one per class) and achieve close to original performance with only a few steps of gradient descent, given a particular fixed network initialization. We evaluate our method in a wide range of initialization settings and with different learning objectives. Experiments on multiple datasets show the advantage of our approach compared to alternative methods in most settings.
rejected-papers
The reviewers agree that the idea for dataset distillation is novel, however it is unclear how practical it can be. The paper has been significantly improved through the addition of new baselines, however ultimately the performance is not quite good enough for the reviewers to advocate strongly on its behalf. Perhaps the paper would be better motivated by finding a realistic scenario in which it would make sense for someone to use this approach over reasonable alternatives.
train
[ "H1eF5Ki9Cm", "HygSJuj50X", "rJBddsq0X", "HkljCYi907", "S1xVvuOLh7", "S1gcr_G5hm", "S1lPGgNVs7", "S1gktV8BcX", "Syg9BHEHcX", "SJxZo-Y4qX", "SJxQBk_4qm" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "Q: Consistency among intro, Table 1, and Figure 2. \nA: These numbers are in fact consistent. We clarify the differences here. Given a fixed initialization that achieves 12% *initial* test accuracy on MNIST, ten distilled images can boost this network to 94% *final* test accuracy. Both intro and Figure 2 describes...
[ -1, -1, -1, -1, 6, 5, 5, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 4, 4, -1, -1, -1, -1 ]
[ "S1lPGgNVs7", "S1gcr_G5hm", "S1xVvuOLh7", "iclr_2019_Sy4lojC9tm", "iclr_2019_Sy4lojC9tm", "iclr_2019_Sy4lojC9tm", "iclr_2019_Sy4lojC9tm", "Syg9BHEHcX", "SJxZo-Y4qX", "SJxQBk_4qm", "iclr_2019_Sy4lojC9tm" ]
iclr_2019_SyG4RiR5Ym
Neural Distribution Learning for generalized time-to-event prediction
Predicting the time to the next event is an important task in various domains. However, due to censoring and irregularly sampled sequences, time-to-event prediction has resulted in limited success only for particular tasks, architectures and data. Using recent advances in probabilistic programming and density networks, we make the case for a generalized parametric survival approach, sequentially predicting a distribution over the time to the next event. Unlike previous work, the proposed method can use asynchronously sampled features for censored, discrete, and multivariate data. Furthermore, it achieves good performance and near perfect calibration for probabilistic predictions without using rigid network-architectures, multitask approaches, complex learning schemes or non-trivial adaptations of cox-models. We firmly establish that this can be achieved in the standard neural network framework by simply switching out the output layer and loss function.
rejected-papers
All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance.
train
[ "H1eEWxXcCm", "SkeMu1XcAX", "r1la-afcAm", "HyxXPizcRX", "HkgIpdGqC7", "rJgnHrMqAQ", "HygO5BG9CX", "SJePeMzp2X", "HklarYERom", "r1xplpiajm" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "> Also, while the HazardNet framework looks convenient, by using hazard and survival functions as discusses by the authors, it is not clear to me what are the benefits from recent works in neural temporal point processes which also define a general framework for temporal predictions of events. Approaches such at l...
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "SkeMu1XcAX", "r1xplpiajm", "HyxXPizcRX", "HkgIpdGqC7", "SJePeMzp2X", "HklarYERom", "rJgnHrMqAQ", "iclr_2019_SyG4RiR5Ym", "iclr_2019_SyG4RiR5Ym", "iclr_2019_SyG4RiR5Ym" ]
iclr_2019_SyGjQ30qFX
TopicGAN: Unsupervised Text Generation from Explainable Latent Topics
Learning discrete representations of data and then generating data from the discovered representations have been increasingly studied because the obtained discrete representations can benefit unsupervised learning. However, the performance of learning discrete representations of textual data with deep generative models has not been widely explored. In addition, although generative adversarial networks(GAN) have shown impressing results in many areas such as image generation, for text generation, it is notorious for extremely difficult to train. In this work, we propose TopicGAN, a two-step text generative model, which is able to solve those two important problems simultaneously. In the first step, it discovers the latent topics and produced bag-of-words according to the latent topics. In the second step, it generates text from the produced bag-of-words. In our experiments, we show our model can discover meaningful discrete latent topics of texts in an unsupervised fashion and generate high quality natural language from the discovered latent topics.
rejected-papers
This paper proposes TopicGAN, a generative adversarial approach to topic modeling and text generation. TopicGAN operates in two steps: it first generates latent topics and produces bag-of-words corresponding to those latent topics. In the second step, the model generates text conditioning on those topic words. Pros: It combines the strength of topic models (interpretable topics that are learned unsupervised) with GAN for text generation. Cons: There are three major concerns raised by reviewers: (1) clarity, (2) relatively thin experimental results, and (3) novelty. Of these, the first two were the main concerns. In particular, R1 and R2 raised concerns about insufficient component-wise evaluation (e.g., text classification from topic models) and insufficient GAN-based baselines. Also, the topic model part of TopicGAN seems somewhat underdeveloped in that the model assumes a single topic per document, which is a relatively strong simplifying assumption compared to most other topic models (R1, R3). The technical novelty is not extremely strong in that the proposed model combines existing components together. But this alone would have not been a deal breaker if the empirical results were rigorous and strong. Verdict: Reject. Many technical details require clarification and experiments lack sufficient comparisons against prior art.
val
[ "BkguegXjR7", "B1eCntlsRQ", "S1erKnbi0X", "B1gtuhMg6m", "SJlsDbO627", "H1l8tzxq3Q" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n(1)More details and writing:\nIn the revised version of paper, we have provided more details and clearer explanations of our model in revised version Section 3.3. We have also rewritten many parts of the article to make the paper easier to understand.\n\n(2)Baseline:\nBecause we use GAN to do the same fine tunin...
[ -1, -1, -1, 4, 4, 5 ]
[ -1, -1, -1, 2, 4, 4 ]
[ "H1l8tzxq3Q", "B1gtuhMg6m", "SJlsDbO627", "iclr_2019_SyGjQ30qFX", "iclr_2019_SyGjQ30qFX", "iclr_2019_SyGjQ30qFX" ]
iclr_2019_SyMras0cFQ
An adaptive homeostatic algorithm for the unsupervised learning of visual features
The formation of structure in the brain, that is, of the connections between cells within neural populations, is by large an unsupervised learning process: the emergence of this architecture is mostly self-organized. In the primary visual cortex of mammals, for example, one may observe during development the formation of cells selective to localized, oriented features. This leads to the development of a rough representation of contours of the retinal image in area V1. We modeled these mechanisms using sparse Hebbian learning algorithms. These algorithms alternate a coding step to encode the information with a learning step to find the proper encoder. A major difficulty faced by these algorithms is to deduce a good representation while knowing immature encoders, and to learn good encoders with a non-optimal representation. To address this problem, we propose to introduce a new regulation process between learning and coding, called homeostasis. Our homeostasis is compatible with a neuro-mimetic architecture and allows for the fast emergence of localized filters sensitive to orientation. The key to this algorithm lies in a simple adaptation mechanism based on non-linear functions that reconciles the antagonistic processes that occur at the coding and learning time scales. We tested this unsupervised algorithm with this homeostasis rule for a range of existing unsupervised learning algorithms coupled with different neural coding algorithms. In addition, we propose a simplification of this optimal homeostasis rule by implementing a simple heuristic on the probability of activation of neurons. Compared to the optimal homeostasis rule, we show that this heuristic allows to implement a more rapid unsupervised learning algorithm while keeping a large part of its effectiveness. These results demonstrate the potential application of such a strategy in machine learning and we illustrate this with one result in a convolutional neural network.
rejected-papers
This paper shows how to obtain more homogeneous activation of atoms in a dictionary. As reviewers point out, the paper is well written and indeed shows that the propose scheme results in a more uniform activation. However, the value of this contribution rests on making a case that uniformity is indeed a desirable outcome per se. As two reviewers explain, this crucial point is left unaddressed, which makes the paper too weak for ICLR.
val
[ "rylvv96opX", "H1gqIX0TAm", "ByxGwNP4Rm", "Syl4GvUHA7", "BylRLBcBAQ", "BylQtQPHRX", "H1eP8UR9hm", "ryxPs1nNnQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper discusses the addition of a regularizer to a standard sparse coding/dictionary learning algorithm to encourage the atoms to be used with uniform frequency. I do not think this work should be accepted to the conference for the following reasons:\n\n1: The authors show no benefit of this scheme except ...
[ 5, -1, -1, -1, -1, -1, 4, 9 ]
[ 5, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_SyMras0cFQ", "BylRLBcBAQ", "iclr_2019_SyMras0cFQ", "ryxPs1nNnQ", "rylvv96opX", "H1eP8UR9hm", "iclr_2019_SyMras0cFQ", "iclr_2019_SyMras0cFQ" ]
iclr_2019_SyNbRj09Y7
Visual Imitation Learning with Recurrent Siamese Networks
People are incredibly skilled at imitating others by simply observing them. They achieve this even in the presence of significant morphological differences and capabilities. Further, people are able to do this from raw perceptions of the actions of others, without direct access to the abstracted demonstration actions and with only partial state information. People therefore solve a difficult problem of understanding the salient features of both observations of others and the relationship to their own state when learning to imitate specific tasks. However, we can attempt to reproduce a similar demonstration via trail and error and through this gain more understanding of the task space. To reproduce this ability an agent would need to both learn how to recognize the differences between itself and some demonstration and at the same time learn to minimize the distance between its own performance and that of the demonstration. In this paper we propose an approach using only visual information to learn a distance metric between agent behaviour and a given video demonstration. We train an RNN-based siamese model to compute distances in space and time between motion clips while training an RL policy to minimize this distance. Furthermore, we examine a particularly challenging form of this problem where the agent must learn an imitation based task given a single demonstration. We demonstrate our approach in the setting of deep learning based control for physical simulation of humanoid walking in both 2D with 10 degrees of freedom (DoF) and 3D with 38 DoF.
rejected-papers
This paper proposes an approach for imitation learning from video data. The problem is important and the contribution is timely. The reviewers brought up several concerns regarding the clarity of the paper and the lack of sufficient comparisons. The authors have improved the paper significantly, adding several new comparisons and improving the presentation. However, concerns still remain regarding the description of the method and the presentation of the results. Hence, the reviewers agree that the paper does not meet the bar for publication.
train
[ "SJlSXJmUAX", "H1xY9HL7Cm", "SyeuNZLmRQ", "BkeohNIQ0m", "r1erAUxeaQ", "rylk79B52Q", "BJg5LiSD27" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "As requested, we have added numerous additional experiments which we outline below:\n\nWe have added two additional types of baseline comparisons. One comparing our method to GAIL and a VAE, and another comparing our method to a non-recurrent version that is similar to TCN. Please see figure 4a of the revised manu...
[ -1, -1, -1, -1, 4, 4, 5 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2019_SyNbRj09Y7", "BJg5LiSD27", "iclr_2019_SyNbRj09Y7", "rylk79B52Q", "iclr_2019_SyNbRj09Y7", "iclr_2019_SyNbRj09Y7", "iclr_2019_SyNbRj09Y7" ]
iclr_2019_SyVhg20cK7
Inducing Cooperation via Learning to reshape rewards in semi-cooperative multi-agent reinforcement learning
We propose a deep reinforcement learning algorithm for semi-cooperative multi-agent tasks, where agents are equipped with their separate reward functions, yet with willingness to cooperate. Under these semi-cooperative scenarios, popular methods of centralized training with decentralized execution for inducing cooperation and removing the non-stationarity problem do not work well due to lack of a common shared reward as well as inscalability in centralized training. Our algorithm, called Peer-Evaluation based Dual DQN (PED-DQN), proposes to give peer evaluation signals to observed agents, which quantifies how they feel about a certain transition. This exchange of peer evaluation over time turns out to render agents to gradually reshape their reward functions so that their action choices from the myopic best-response tend to result in the good joint action with high cooperation. This evaluation-based method also allows flexible and scalable training by not assuming knowledge of the number of other agents and their observation and action spaces. We provide the performance evaluation of PED-DQN for the scenarios ranging from a simple two-person prisoner's dilemma to more complex semi-cooperative multi-agent tasks. In special cases where agents share a common reward function as in the centralized training methods, we show that inter-agent evaluation leads to better performance
rejected-papers
This work introduces a reward-shaping scheme for multi-agent settings based on the TD-error of other agents. Overall, reviewers were positive about the direction and the presentation but had a variety of concerns and questions and felt more experiments were necessary to validate the claims of flexibility and scalability, with results more comparable to the scale of the contemporary multi-agent literature. One note in particular: a feed-forward Q network is used in a partially observable environment, which the authors seemed to dismiss in their rebuttal. I agree with the reviewer that this is an important consideration when comparing to baselines which were developed with recurrent networks in mind. A revised manuscript addressed concerns with the presentation but did not introduce new results or plots, and reviewers were not convinced to alter their evaluation. There is agreement that this is an interesting paper, so I recommend that the authors conduct a more thorough empirical evaluation and submit to another venue.
test
[ "BkgEMMW937", "SkgTcbF-07", "BklUQlK-A7", "BJgggeFW07", "rkeR41FW0m", "r1eC3Qh_hQ", "SygVYCxF27" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work is well-written, but the quality of some sections can be improved significantly as suggested in the comments. I have a few main concerns that I explain in detailed comments. Among those, the paper argues that the algorithms converge without discussing why. Also, the amount of overestimation of the Q-valu...
[ 5, -1, -1, -1, -1, 5, 5 ]
[ 3, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_SyVhg20cK7", "iclr_2019_SyVhg20cK7", "BkgEMMW937", "SygVYCxF27", "r1eC3Qh_hQ", "iclr_2019_SyVhg20cK7", "iclr_2019_SyVhg20cK7" ]
iclr_2019_SyVpB2RqFX
INFORMATION MAXIMIZATION AUTO-ENCODING
We propose the Information Maximization Autoencoder (IMAE), an information theoretic approach to simultaneously learn continuous and discrete representations in an unsupervised setting. Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data to a hybrid discrete and continuous representation with the objective of maximizing the mutual information between the data and their representations. A decoder is included to approximate the posterior distribution of the data given their representations, where a high fidelity approximation can be achieved by leveraging the informative representations. We show that the proposed objective is theoretically valid and provides a principled framework for understanding the tradeoffs regarding informativeness of each representation factor, disentanglement of representations, and decoding quality.
rejected-papers
The paper proposes a principled modeling framework to train a stochastic auto-encoder that is regularized with mutual information maximization. For unsupervised learning, this auto-encoder produces a hybrid continuous-discrete latent representation. While the authors' response and revision have partially addressed some of the raised concerns on the technical analyses, the experimental evaluations presented in the paper do not appear adequate to justify the advantages of the proposed method over previously proposed ones, and the clarity (in particular, notation) needs further improvement. The proposed framework and techniques are potentially of interest to the machine learning community, but the paper of its current form fells below the acceptance bar. The authors are encouraged to improve the clarify of the paper and provide more convincing experiments (e.g., on high-dimensional datasets beyond MNIST).
train
[ "rklyeyXiCQ", "Bkg6QFClk4", "SkeGsXiy1N", "BJxMsuuJyV", "r1eDov911V", "Syx8CbKyyE", "HJlHo760AX", "SygKzZnR0X", "SJxThwjCCQ", "Bke39kQsC7", "HJxEtON50m", "ryxOw1q2CX", "S1lkbOFhRm", "BklalyU5AQ", "BJxtpDEqRm", "r1l-arks0X", "H1eZTkUqRQ", "Bye3TDn_hX", "Hye1UdoD2X", "HkxV_FHCsQ"...
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_r...
[ "1) We seek to learn interpretable representations together with a decoding/generative model, where informative representations can then be leveraged to generate high fidelity data. The decoder is included to approximate the posterior distribution of the data given their representations, which together with the lea...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "BJxtpDEqRm", "SkeGsXiy1N", "BJxMsuuJyV", "HJlHo760AX", "SygKzZnR0X", "SJxThwjCCQ", "iclr_2019_SyVpB2RqFX", "S1lkbOFhRm", "Bke39kQsC7", "Bye3TDn_hX", "Bye3TDn_hX", "S1lkbOFhRm", "r1l-arks0X", "Hye1UdoD2X", "iclr_2019_SyVpB2RqFX", "H1eZTkUqRQ", "HkxV_FHCsQ", "iclr_2019_SyVpB2RqFX", ...
iclr_2019_Sye2doC9tX
Exploration by Uncertainty in Reward Space
Efficient exploration plays a key role in reinforcement learning tasks. Commonly used dithering strategies, such as-greedy, try to explore the action-state space randomly; this can lead to large demand for samples. In this paper, We propose an exploration method based on the uncertainty in reward space. There are two policies in this approach, the exploration policy is used for exploratory sampling in the environment, then the benchmark policy try to update by the data proven by the exploration policy. Benchmark policy is used to provide the uncertainty in reward space, e.g. td-error, which guides the exploration policy updating. We apply our method on two grid-world environments and four Atari games. Experiment results show that our method improves learning speed and have a better performance than baseline policies
rejected-papers
The paper has some nice ideas for efficient exploration, but reviewers think more work is needed before it is ready for publication. In particular, the paper should have an improved discussion of state-of-the-art work on exploration, compare the difference and benefits of the proposed approach, and then conduct proper experiments to validate the claims.
train
[ "BJeZDDMgaQ", "HJg5oYDUhX", "Bkxhq0z83Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper considered the idea of accelerating sampling process by exploring uncertainty of rewards. The authors claimed more efficient sampling by building a reference policy and an exploration policy. Algorithm was tested on grids and Atari games.\n\nThe authors proposed eDQN, which is more like exploring the un...
[ 5, 5, 3 ]
[ 3, 2, 5 ]
[ "iclr_2019_Sye2doC9tX", "iclr_2019_Sye2doC9tX", "iclr_2019_Sye2doC9tX" ]
iclr_2019_Sye7qoC5FQ
Adversarial Attacks on Node Embeddings
The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks. We further show that our attacks are transferable since they generalize to many models, and are successful even when the attacker is restricted.
rejected-papers
The paper provides a novel analysis of the robustness to adversarial attacks in network representation learning. It appears to be a useful contribution for important class of models; however, the detailed reviews (1 and 2) raise some concerns that may require a bit of further work (though partially addressed in revised version).
train
[ "SJxrTF8Y3X", "S1g0pL5gR7", "B1e3PL9l0m", "rJgJGLcgAQ", "rklsxjcMpQ", "HylYhqZRh7" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Brief Summary:\nThe authors present a novel adversarial attack on node embedding method based on random walks. They focus on perturbing the structure of the network. Because the bi-level optimization problem can be highly challenging, they refer to factorize a random walk matrix which is proved equivalent to DeepW...
[ 6, -1, -1, -1, 5, 6 ]
[ 4, -1, -1, -1, 5, 3 ]
[ "iclr_2019_Sye7qoC5FQ", "SJxrTF8Y3X", "HylYhqZRh7", "rklsxjcMpQ", "iclr_2019_Sye7qoC5FQ", "iclr_2019_Sye7qoC5FQ" ]
iclr_2019_SyeBqsRctm
Step-wise Sensitivity Analysis: Identifying Partially Distributed Representations for Interpretable Deep Learning
In this paper, we introduce a novel method, called step-wise sensitivity analysis, which makes three contributions towards increasing the interpretability of Deep Neural Networks (DNNs). First, we are the first to suggest a methodology that aggregates results across input stimuli to gain model-centric results. Second, we linearly approximate the neuron activation and propose to use the outlier weights to identify distributed code. Third, our method constructs a dependency graph of the relevant neurons across the network to gain fine-grained understanding of the nature and interactions of DNN's internal features. The dependency graph illustrates shared subgraphs that generalise across 10 classes and can be clustered into semantically related groups. This is the first step towards building decision trees as an interpretation of learned representations.
rejected-papers
This work proposes a modification of gradient based saliency map methods that measure the importance of all nodes at each layer. The reviewers found the novelty is rather marginal and that the evaluation is not up to par (since it's mostly qualitative). The reviewers are in strong agreement that this work does not pass the bar for acceptance.
train
[ "HJe2emInR7", "rkxd0SW7R7", "S1e_LoqbT7", "SJlqCanjnX", "Hygkyo2t27" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their thoughtful comments. \n\nThe main concern of the reviewers is whether the magnitude of gradients can be used to determine neuron relevance and they suggested to illustrate the validity of this approach through a toy example. \n\nFirst, the novelty of our work is the new approach of...
[ -1, -1, 3, 4, 3 ]
[ -1, -1, 5, 4, 4 ]
[ "iclr_2019_SyeBqsRctm", "iclr_2019_SyeBqsRctm", "iclr_2019_SyeBqsRctm", "iclr_2019_SyeBqsRctm", "iclr_2019_SyeBqsRctm" ]
iclr_2019_SyeKf30cFQ
A theoretical framework for deep and locally connected ReLU network
Understanding theoretical properties of deep and locally connected nonlinear network, such as deep convolutional neural network (DCNN), is still a hard problem despite its empirical success. In this paper, we propose a novel theoretical framework for such networks with ReLU nonlinearity. The framework bridges data distribution with gradient descent rules, favors disentangled representations and is compatible with common regularization techniques such as Batch Norm, after a novel discovery of its projection nature. The framework is built upon teacher-student setting, by projecting the student's forward/backward pass onto the teacher's computational graph. We do not impose unrealistic assumptions (e.g., Gaussian inputs, independence of activation, etc). Our framework could help facilitate theoretical analysis of many practical issues, e.g. disentangled representations in deep networks.
rejected-papers
This paper studies the behavior of gradient descent on deep neural network architectures with spatial locality, under generic input data distributions, using a planted or "teacher-student" model. Whereas R1 was supportive of this work, R2 and R3 could not verify the main statements and the proofs due to a severe lack of clarity and mathematical rigor. The AC strongly aligns with the latter, and therefore recommends rejection at this time, encouraging the authors to address clarity and rigor issues and resubmit their work again.
train
[ "SyeIhpXAR7", "HkeLn38Kn7", "HJeAzKeA0m", "rkgcPMmZAm", "ByxFsnnbp7", "r1gmaqNbam", "HJeS_wyh2m", "HklFc7kchX", "S1edfAN19m", "Syl7jfgxq7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "public", "author" ]
[ "Thanks R3 for taking time to read the revised paper! We really appreciated it. \n\nThe assumption of \"locally connected neural network\" is indeed important. However we regard this assumption as an important contribution of this paper rather than a restriction. First of all, network with such structures (e.g., CN...
[ -1, 5, -1, -1, -1, -1, 3, 7, -1, -1 ]
[ -1, 3, -1, -1, -1, -1, 4, 4, -1, -1 ]
[ "HJeAzKeA0m", "iclr_2019_SyeKf30cFQ", "rkgcPMmZAm", "iclr_2019_SyeKf30cFQ", "r1gmaqNbam", "iclr_2019_SyeKf30cFQ", "iclr_2019_SyeKf30cFQ", "iclr_2019_SyeKf30cFQ", "iclr_2019_SyeKf30cFQ", "S1edfAN19m" ]
iclr_2019_SyeLno09Fm
Few-Shot Intent Inference via Meta-Inverse Reinforcement Learning
A significant challenge for the practical application of reinforcement learning toreal world problems is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a "prior" that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.
rejected-papers
This work proposes to use the MAML meta-learning approach in order to tackle the typical problem of insufficient demonstrations in IRL. All reviewers found this work to contain a novel and well-motivated idea and the manuscript to be well-written. The combination of MAML and MaxEnt IRL is straightforward, as R2 points out, however the AC does not consider this to be a flaw given that the main novelty here is the high-level idea rather than the technical details. However, all reviewers agree that for this paper to meet the ICLR standards, there has to be an increase in rigorousness through (a) a more close examination of assumptions, sensitivity of parameters and connections to imitation learning (b) expanding the experimental section.
train
[ "BJg0ATQ2yE", "SJl9jB7nJV", "rJglubvh14", "ryxrkKm2kV", "BygzyiA4TQ", "rJgRd4i_RX", "SJgvMVsuCm", "BygVyNouCX", "SkeDtAClTX", "B1ef3ksY27" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Our paper is on inverse reinforcement learning. The goal of this setting to learn the cost function of another agent through observing behavior. Evaluation of the learned cost function should naturally be with respect to the original cost function. Since we are trying to explicitly recover the cost function of the...
[ -1, -1, -1, -1, 3, -1, -1, -1, 4, 4 ]
[ -1, -1, -1, -1, 5, -1, -1, -1, 3, 4 ]
[ "ryxrkKm2kV", "SJgvMVsuCm", "BygVyNouCX", "SJl9jB7nJV", "iclr_2019_SyeLno09Fm", "B1ef3ksY27", "BygzyiA4TQ", "SkeDtAClTX", "iclr_2019_SyeLno09Fm", "iclr_2019_SyeLno09Fm" ]
iclr_2019_SyeQFiCcF7
Siamese Capsule Networks
Capsule Networks have shown encouraging results on defacto benchmark computer vision datasets such as MNIST, CIFAR and smallNORB. Although, they are yet to be tested on tasks where (1) the entities detected inherently have more complex internal representations and (2) there are very few instances per class to learn from and (3) where point-wise classification is not suitable. Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points. In doing so we introduce Siamese Capsule Networks, a new variant that can be used for pairwise learning tasks. The model is trained using contrastive loss with l2-normalized capsule encoded pose features. We find that Siamese Capsule Networks perform well against strong baselines on both pairwise learning datasets, yielding best results in the few-shot learning setting where image pairs in the test set contain unseen subjects.
rejected-papers
The paper extends capsule networks with a pairwise learning objective and evaluates on small face verification datasets. The authors do a great job describing prior work, but lack clarity when articulating their contribution and proposed method. In addition, some important implementation details, such as hyperparameter selection, are missing causing further confusion as to the final approach. Overall, according to the experiments shown, the approach offers modest improvements over prior work. The approach offers an interesting and promising direction. We encourage the authors to revise the manuscript to clarify their approach and contribution and to improve their evaluation by including the relevant metrics and implementation details.
train
[ "SygYSXFuam", "rJxStfYda7", "Hkglv2F237", "SklHeL_2hX", "Byl6BILU27" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for reviewing the paper.\n\n5 - I will try to fit more training details \n6 - A revised diagram will be put into the update version.", "Yes, you are right we do not use face landmarks to align images. We instead rely on the final fully connected layer to carry this out from the flattened pose vectors. \nI...
[ -1, -1, 5, 6, 3 ]
[ -1, -1, 4, 4, 4 ]
[ "SklHeL_2hX", "Hkglv2F237", "iclr_2019_SyeQFiCcF7", "iclr_2019_SyeQFiCcF7", "iclr_2019_SyeQFiCcF7" ]
iclr_2019_Syeben09FQ
Evaluating GANs via Duality
Generative Adversarial Networks (GANs) have shown great results in accurately modeling complex distributions, but their training is known to be difficult due to instabilities caused by a challenging minimax optimization problem. This is especially troublesome given the lack of an evaluation metric that can reliably detect non-convergent behaviors. We leverage the notion of duality gap from game theory in order to propose a novel convergence metric for GANs that has low computational cost. We verify the validity of the proposed metric for various test scenarios commonly used in the literature.
rejected-papers
All reviewers still argue for rejection for the submitted paper. The AC thinks that this paper should be published at some point, but for now it is a "revise and resubmit".
train
[ "Hyl8kexEyE", "ByeTMuzaA7", "B1xtw3-50Q", "BJesks4t07", "Bye_KjCLCm", "rkeV_HNVCX", "H1eMeNcf0m", "SJx-uhFn6X", "Skgo3KthpX", "BJgjorY2TX", "HkeCF4Y3a7", "HJearNF36X", "H1gYbPGh3X", "S1xlTDG9nX", "B1x8hZ4q27" ]
[ "author", "official_reviewer", "public", "author", "public", "author", "public", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1) Thank you for your suggestion. We will change the term \"evaluation\" into \"monitoring\" .\n\n2) There are two good reasons to use the duality gap as we compute it in practice:\n a) If we compute an approximate worst case D/G (say up to some \\epsilon), then this affects the duality-gap up to a factor of \\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "ByeTMuzaA7", "HkeCF4Y3a7", "BJesks4t07", "Bye_KjCLCm", "rkeV_HNVCX", "H1eMeNcf0m", "iclr_2019_Syeben09FQ", "Skgo3KthpX", "S1xlTDG9nX", "B1x8hZ4q27", "HJearNF36X", "H1gYbPGh3X", "iclr_2019_Syeben09FQ", "iclr_2019_Syeben09FQ", "iclr_2019_Syeben09FQ" ]
iclr_2019_SyehMhC9Y7
Deep Imitative Models for Flexible Inference, Planning, and Control
Imitation learning provides an appealing framework for autonomous control: in many tasks, demonstrations of preferred behavior can be readily obtained from human experts, removing the need for costly and potentially dangerous online data collection in the real world. However, policies learned with imitation learning have limited flexibility to accommodate varied goals at test time. Model-based reinforcement learning (MBRL) offers considerably more flexibility, since a predictive model learned from data can be used to achieve various goals at test time. However, MBRL suffers from two shortcomings. First, the model does not help to choose desired or safe outcomes -- its dynamics estimate only what is possible, not what is preferred. Second, MBRL typically requires additional online data collection to ensure that the model is accurate in those situations that are actually encountered when attempting to achieve test time goals. Collecting this data with a partially trained model can be dangerous and time-consuming. In this paper, we aim to combine the benefits of imitation learning and MBRL, and propose imitative models: probabilistic predictive models able to plan expert-like trajectories to achieve arbitrary goals. We find this method substantially outperforms both direct imitation and MBRL in a simulated autonomous driving task, and can be learned efficiently from a fixed set of expert demonstrations without additional online data collection. We also show our model can flexibly incorporate user-supplied costs at test-time, can plan to sequences of goals, and can even perform well with imprecise goals, including goals on the wrong side of the road.
rejected-papers
This paper proposes to combine RL and imitation learning, and the proposed approach seems convincing. As is typical in RL work, the evaluation of the method is not strong enough to convince the reviewers. Increasing community criticism on RL methods not scaling must be taken seriously here, despite the authors' disagreement.
train
[ "SJgVjK_qAQ", "HklQf72q3m", "HygTv3B90m", "HylhHY7KTX", "rJgS9EXK6m", "SJxcueXtpQ", "S1echIHR3X", "H1eXBFr9h7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your reply.\n\nQ1:\nYes, we refer to the noise injection, used for better generalization and avoiding \"unstable policies\" (Codevilla, et al. 2018). Our reasoning is injected noise constitutes an intervention taken by the machine. Consequently, training could be more dangerous, especially at speed: ...
[ -1, 6, -1, -1, -1, -1, 5, 6 ]
[ -1, 3, -1, -1, -1, -1, 5, 1 ]
[ "HygTv3B90m", "iclr_2019_SyehMhC9Y7", "rJgS9EXK6m", "H1eXBFr9h7", "HklQf72q3m", "S1echIHR3X", "iclr_2019_SyehMhC9Y7", "iclr_2019_SyehMhC9Y7" ]
iclr_2019_Syeil309tX
Optimized Gated Deep Learning Architectures for Sensor Fusion
Sensor fusion is a key technology that integrates various sensory inputs to allow for robust decision making in many applications such as autonomous driving and robot control. Deep neural networks have been adopted for sensor fusion in a body of recent studies. Among these, the so-called netgated architecture was proposed, which has demonstrated improved performances over the conventional convolu- tional neural networks (CNN). In this paper, we address several limitations of the baseline negated architecture by proposing two further optimized architectures: a coarser-grained gated architecture employing (feature) group-level fusion weights and a two-stage gated architectures leveraging both the group-level and feature- level fusion weights. Using driving mode prediction and human activity recogni- tion datasets, we demonstrate the significant performance improvements brought by the proposed gated architectures and also their robustness in the presence of sensor noise and failures.
rejected-papers
The paper builds up on the gated fusion network architectures, and adapt those approaches to reach improved results. In that it is incrementally worthwhile. All the same, all reviewers agree that the work is not yet up to par. In particular, the paper is only incremental, and the novelty of it is not clear. It does not relate well to existing work in this field, and the results are not rigorously evaluated; thus its merit is unclear experimentally.
train
[ "rylmrpAo2Q", "HJluzZxjhX", "SJgnfvEqh7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThis paper tackles the problem of sensor fusion, where multiple (possibly differing) sensor modalities are available and neural network architectures are used to combine information from them to perform prediction tasks. The paper proposed modifications to a gated fusion network specifically: 1) Grouping sets of...
[ 4, 4, 3 ]
[ 5, 4, 4 ]
[ "iclr_2019_Syeil309tX", "iclr_2019_Syeil309tX", "iclr_2019_Syeil309tX" ]
iclr_2019_SyerAiCqt7
Hierarchical Bayesian Modeling for Clustering Sparse Sequences in the Context of Group Profiling
This paper proposes a hierarchical Bayesian model for clustering sparse sequences.This is a mixture model and does not need the data to be represented by a Gaussian mixture and that gives significant modelling freedom.It also generates a very interpretable profile for the discovered latent groups.The data that was used for the work have been contributed by a restaurant loyalty program company. The data is a collection of sparse sequences where each entry of each sequence is the number of user visits of one week to some restaurant. This algorithm successfully clustered the data and calculated the expected user affiliation in each cluster.
rejected-papers
All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance.
train
[ "ByeQzhQpnm", "S1e3Sz2jhX", "HJeUU5LIhQ", "B1xrAbZ9n7", "rylBeSoS3X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Pros:\n-- Clustering sequence vectors is a practical and useful problem. Some of the business use-cases described in the paper are indeed useful and relevant for analytics in healthcare and retail.\n\nCons:\n-- The paper is poorly written. There are numerous typos and grammatical errors throughout the paper. \n-- ...
[ 2, 2, 1, 3, 2 ]
[ 5, 5, 5, 4, 4 ]
[ "iclr_2019_SyerAiCqt7", "iclr_2019_SyerAiCqt7", "iclr_2019_SyerAiCqt7", "iclr_2019_SyerAiCqt7", "iclr_2019_SyerAiCqt7" ]
iclr_2019_Syez3j0cKX
Dissecting an Adversarial framework for Information Retrieval
Recent advances in Generative Adversarial Networks facilitated by improvements to the framework and successful application to various problems has resulted in extensions to multiple domains. IRGAN attempts to leverage the framework for Information-Retrieval (IR), a task that can be described as modeling the correct conditional probability distribution p(d|q) over the documents (d), given the query (q). The work that proposes IRGAN claims that optimizing their minimax loss function will result in a generator which can learn the distribution, but their setup and baseline term steer the model away from an exact adversarial formulation, and this work attempts to point out certain inaccuracies in their formulation. Analyzing their loss curves gives insight into possible mistakes in the loss functions and better performance can be obtained by using the co-training like setup we propose, where two models are trained in a co-operative rather than an adversarial fashion.
rejected-papers
The manuscript centers on a critique of IRGAN, a recently proposed extension of GANs to the information retrieval setting, and introduces a competing procedure. Reviewers found the findings and the proposed alternative to be interesting and in one case described the findings as "illuminating", but were overall unsatisfied with the depth of the analysis, and in more than one case complained that too much of the manuscript is spent reviewing IRGAN, with not enough emphasis and detailed investigation of the paper's own contribution. Notational issues, certain gaps in the related work and experiments were addressed in a revision but the paper still reads as spending a bit too much time on background relative to the contributions. Two reviewers seemed to agree that IRGAN's significance made at least some of the focus on it justifiable, but one remarked that SIGIR may be a better venue for this line of work (the AC doesn't necessarily agree). Given the nature of the changes and the status of the manuscript following revision, it does seem like a more comprehensive rewrite and reframing would be necessary to truly satisfy all reviewer concerns. I therefore recommend against acceptance at this point in time.
train
[ "S1xA6c49C7", "ByenCKVcRQ", "rke2DvNcRQ", "Syelph9yp7", "Syg60A2OhQ", "HJeFkJ9D2Q" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for carefully reading the paper and understanding the crux even though our writing was not very clear in a few sections. As pointed out by the reviewer, the main motivation of the paper was to point out a few loopholes in IRGAN, which led to the proposal of a co-training based setup. It furth...
[ -1, -1, -1, 6, 5, 4 ]
[ -1, -1, -1, 4, 3, 4 ]
[ "Syelph9yp7", "HJeFkJ9D2Q", "Syg60A2OhQ", "iclr_2019_Syez3j0cKX", "iclr_2019_Syez3j0cKX", "iclr_2019_Syez3j0cKX" ]
iclr_2019_SyezvsC5tX
The loss landscape of overparameterized neural networks
We explore some mathematical features of the loss landscape of overparameterized neural networks. A priori one might imagine that the loss function looks like a typical function from Rn to R - in particular, nonconvex, with discrete global minima. In this paper, we prove that in at least one important way, the loss function of an overparameterized neural network does not look like a typical function. If a neural net has n parameters and is trained on d data points, with n>d, we show that the locus M of global minima of L is usually not discrete, but rather an n−d dimensional submanifold of Rn. In practice, neural nets commonly have orders of magnitude more parameters than data points, so this observation implies that M is typically a very high-dimensional subset of Rn.
rejected-papers
The paper proves that the locus of the global minima of an over-parameterized neural nets objective forms a low-dimensional manifold. The reviewers and AC note the following potential weaknesses: --- it's not clear why the proved result is significant: it neither implies the SGD can find a global minimum, nor that the found solution can generalize. (Very likely, most of the global minima on the manifold cannot generalize.) --- the results seem very intuitive and are a straightforward application of certain topological theorem.
train
[ "SygNerAUJ4", "rklQPG0IkE", "HJgdPoB8yE", "HkgWAw29hm", "SJlgKxXq0Q", "Byl5UnmICQ", "rygVs1sSCm", "BkgQtf5S0X", "rJetN0YBR7", "SyxQoGBER7", "HklQuKk-0m", "rkgw4upgAX", "rkxd587yRm", "rJehBLmkCm", "rJe40SXkRm", "rkgAlW71AQ", "BkeYca75nX", "SkgSSMfBnQ", "r1lAj8TRhm" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "Great. Glad we reached a common understanding about the special properties of global minima that were used in the argument. Thanks for helping us clarify that.\n\nWe wanted to make a brief clarification about your first concern. It is true that given a set of data points {(x_i,y_i)}, x_i in R^a, y_i in R^b, $M$...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, -1 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, -1 ]
[ "SJlgKxXq0Q", "HJgdPoB8yE", "iclr_2019_SyezvsC5tX", "iclr_2019_SyezvsC5tX", "rJetN0YBR7", "rygVs1sSCm", "BkgQtf5S0X", "SyxQoGBER7", "HklQuKk-0m", "rkgAlW71AQ", "rkgw4upgAX", "rJehBLmkCm", "r1lAj8TRhm", "HkgWAw29hm", "BkeYca75nX", "SkgSSMfBnQ", "iclr_2019_SyezvsC5tX", "iclr_2019_Sye...
iclr_2019_Syf9Q209YQ
Manifold regularization with GANs for semi-supervised learning
Generative Adversarial Networks are powerful generative models that can model the manifold of natural images. We leverage this property to perform manifold regularization by approximating a variant of the Laplacian norm using a Monte Carlo approximation that is easily computed with the GAN. When incorporated into the semi-supervised feature-matching GAN we achieve state-of-the-art results for semi-supervised learning on CIFAR-10 benchmarks when few labels are used, with a method that is significantly easier to implement than competing methods. We find that manifold regularization improves the quality of generated images, and is affected by the quality of the GAN used to approximate the regularizer.
rejected-papers
The paper proposes a method to perform manifold regularization for semi-supervised learning using GANs. Although the SSL results in the paper are competitive with existing methods, R1 and R3 are concerned about the novelty of the work in the light of recent manifold regularization SSL papers with GANs, a point that the AC agrees with. Given the borderline reviews and limited novelty of the core method, the paper just falls short of the acceptance threshold for ICLR.
train
[ "H1elMgJc0m", "H1g061y90Q", "Syl4_k19Rm", "BJlYMJJ507", "Byl2kX8c2Q", "rkepvb7c2Q", "BklhkXfqn7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your constructive comments.\n\nFirst, with respect to baselines, we have updated the results tables to include the additional baselines mentioned, as well as runs for VAT(+EntMin) with lower numbers of labels on CIFAR-10. After updating these baselines, we note that our method still achieve...
[ -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "BklhkXfqn7", "rkepvb7c2Q", "Byl2kX8c2Q", "iclr_2019_Syf9Q209YQ", "iclr_2019_Syf9Q209YQ", "iclr_2019_Syf9Q209YQ", "iclr_2019_Syf9Q209YQ" ]
iclr_2019_SyfXKoRqFQ
Ada-Boundary: Accelerating the DNN Training via Adaptive Boundary Batch Selection
Neural networks can converge faster with help from a smarter batch selection strategy. In this regard, we propose Ada-Boundary, a novel adaptive-batch selection algorithm that constructs an effective mini-batch according to the learning progress of the model.Our key idea is to present confusing samples what the true label is. Thus, the samples near the current decision boundary are considered as the most effective to expedite convergence. Taking advantage of our design, Ada-Boundary maintains its dominance in various degrees of training difficulty. We demonstrate the advantage of Ada-Boundary by extensive experiments using two convolutional neural networks for three benchmark data sets. The experiment results show that Ada-Boundary improves the training time by up to 31.7% compared with the state-of-the-art strategy and by up to 33.5% compared with the baseline strategy.
rejected-papers
This paper introduced an adaptive importance sampling strategy to select mini-batches to speed up the convergence of network training. The method is well motivated and easy to follow. The main concerns raised by the reviewers are limited novelty of the proposed simple idea compared to related recent work, and moderate empirical performance. The authors argue that the particular choice of the adaptive sampling method comes after trying various methods. I believe providing more detailed discussion and comparison with different methods together with the "active bias" paper would help the readers appreciate the insights conveyed in this paper. The authors provide some additional experiments in the revision. It would make the whole experiment section a lot stronger and convincing if the authors could run more thorough experiments on extra challenging datasets and include all the results int the main text. Additional experiment to clarify the merit of the proposed method on either faster convergence or lower asymptotic error would also improve the contribution of this paper.
test
[ "Byg7rxNfy4", "BJek2eU80Q", "Hklk5-DLRX", "rklJrpUICm", "SyxUEYILCX", "H1giBYNWp7", "HJet7CSinX", "HyeO3O55nQ", "rJlcMLoPnm", "BygPAI5AhX", "rygEfdl1aQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "It is known that emphasizing uncertain examples or examples close to the decision boundary can improve asymptotic error (not only Active Bias paper, but a few other papers as well). I don't see how the warm-starting removes this effect. Thus, without showing the full learning curves, it is unclear if the better er...
[ -1, -1, -1, -1, -1, -1, 5, 5, 5, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, -1, -1 ]
[ "rygEfdl1aQ", "iclr_2019_SyfXKoRqFQ", "HJet7CSinX", "rJlcMLoPnm", "HyeO3O55nQ", "HyeO3O55nQ", "iclr_2019_SyfXKoRqFQ", "iclr_2019_SyfXKoRqFQ", "iclr_2019_SyfXKoRqFQ", "HJet7CSinX", "rJlcMLoPnm" ]
iclr_2019_Syfz6sC9tQ
Generative Feature Matching Networks
We propose a non-adversarial feature matching-based approach to train generative models. Our approach, Generative Feature Matching Networks (GFMN), leverages pretrained neural networks such as autoencoders and ConvNet classifiers to perform feature extraction. We perform an extensive number of experiments with different challenging datasets, including ImageNet. Our experimental results demonstrate that, due to the expressiveness of the features from pretrained ImageNet classifiers, even by just matching first order statistics, our approach can achieve state-of-the-art results for challenging benchmarks such as CIFAR10 and STL10.
rejected-papers
The paper proposes a method of training implicit generative models based on moment matching in the feature spaces of pre-trained feature extractors, derived from autoencoders or classifiers. The authors also propose a trick for tracking the moving averages by appealing to the Adam optimizer and deriving updates based on the implied loss function of a moving average update. It was generally agreed that the paper was well written and easy to follow, that empirical results were good, but that the novelty is relatively low. Generative models have been built out of pre-trained classifiers before (e.g. generative plug & play networks), feature matching losses for generator networks have been proposed before (e.g. Salimans et al, 2016). The contribution here is mainly the extensive empirical analysis plus the AMA trick. After receiving exclusively confidence score 3 reviews, I sought the opinion of a 4th reviewer, an expert on GANs and GAN-like generative models. Their remaining sticking points, after a rapid rebuttal, are with possible degeneracies in the loss function and class-level information leakage from pre-trained classifiers, making these results are not properly "unconditional". The authors rebutted this by suggesting that unlike Salimans et al (2016), there is no signal backpropagated from the label layer, but I find this particularly unconvincing: the objective in that work maximizes a "none-of-the-above" class (and thus minimizes *all* classes). The gradient backpropagated to the generator is uninformative about which particular class a sample should imitate, but the features learned by the discriminator needing to discriminate between classes shape those gradients in a particular way all the same, and the result is samples that look like distinct CIFAR classes. In the same way, the gradients used to train GFMN are "shaped" by particular class-discriminative features when trained against a classifier feature extractor. From my own perspective, while there is no theory presented to support why this method is a good idea (why matching arbitrary features unconnected with the generative objective should lead to good results), the idea of optimizing a moment matching objective in classifier feature space is rather obvious, and it is unsurprising that with enough "elbow grease" it can be made to work. The Adam moving average trick is interesting but a deeper analysis and ablation of why this works would have helped convince the reader that it is principled. This paper was very much on the borderline. Aside from quibbles over the fairness of comparisons above, I was forced to ask myself whether I could imagine that this would be a widely read, influential, and frequently cited piece of work. I believe that the carefully done empirical investigation has its merits, but that the core ideas are rather obvious and the added novelty of a poorly understood stabilized moving average is not enough to warrant acceptance.
val
[ "HylSrUITnm", "S1eBhca0JV", "BkefMVTAkN", "HyeRBO3RJE", "ByeVlu2AyV", "B1lFdFqHkN", "S1grfUFBJN", "SyefTUDBkN", "H1eUWZyryN", "SJg0BnM7R7", "HklWGPEQJE", "S1eBdGFMkN", "HketY5fDRQ", "Syx0-9GwAQ", "SJg3r9MwC7", "Hkln8-idpX", "rkeSiI7z07", "SyeV-VyzAX", "ByxhmrsOaX", "HygWkrouTQ"...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "author", "author", "public", "author", "author", "author", "author",...
[ "This paper consists of two contributions: (1) using a fixed pre-trained network as a discriminator in feature matching loss ((Salimans et al., 2016). Since it's fixed there is no GAN-like training procedure. (2) Using \"ADAM\"-moving average to improve the convergency for the feature matching loss.\n\nThe paper is...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2019_Syfz6sC9tQ", "BkefMVTAkN", "HyeRBO3RJE", "rkx9_6p42Q", "HylSrUITnm", "S1grfUFBJN", "SyefTUDBkN", "H1eUWZyryN", "Syx0-9GwAQ", "iclr_2019_Syfz6sC9tQ", "S1eBdGFMkN", "iclr_2019_Syfz6sC9tQ", "SJg0BnM7R7", "SJg0BnM7R7", "SJg0BnM7R7", "iclr_2019_Syfz6sC9tQ", "SyeV-VyzAX", "icl...
iclr_2019_SygHGnRqK7
Probabilistic Federated Neural Matching
In federated learning problems, data is scattered across different servers and exchanging or pooling it is often impractical or prohibited. We develop a Bayesian nonparametric framework for federated learning with neural networks. Each data server is assumed to train local neural network weights, which are modeled through our framework. We then develop an inference approach that allows us to synthesize a more expressive global network without additional supervision or data pooling. We then demonstrate the efficacy of our approach on federated learning problems simulated from two popular image classification datasets.
rejected-papers
While there was some support for the ideas presented, unfortunately this paper was on the borderline. Significant concerns were raised as to whether the setting studied was realistic, among others.
train
[ "B1lO8WT_p7", "HJx1mbpd67", "rJgwugpdaQ", "rJgnheTOaX", "B1x1LTanhQ", "SJekcov5nQ", "S1lgPnb_n7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their feedback. We've uploaded the revised draft to resolve reviewers' concerns. Individual responses follow below.", "We thank the reviewer for their time and interesting suggestions. We have added additional experiments to the draft (first paragraph of Section 4) to help address your...
[ -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2019_SygHGnRqK7", "S1lgPnb_n7", "B1x1LTanhQ", "SJekcov5nQ", "iclr_2019_SygHGnRqK7", "iclr_2019_SygHGnRqK7", "iclr_2019_SygHGnRqK7" ]
iclr_2019_SygInj05Fm
Physiological Signal Embeddings (PHASE) via Interpretable Stacked Models
In health, machine learning is increasingly common, yet neural network embedding (representation) learning is arguably under-utilized for physiological signals. This inadequacy stands out in stark contrast to more traditional computer science domains, such as computer vision (CV), and natural language processing (NLP). For physiological signals, learning feature embeddings is a natural solution to data insufficiency caused by patient privacy concerns -- rather than share data, researchers may share informative embedding models (i.e., representation models), which map patient data to an output embedding. Here, we present the PHASE (PHysiologicAl Signal Embeddings) framework, which consists of three components: i) learning neural network embeddings of physiological signals, ii) predicting outcomes based on the learned embedding, and iii) interpreting the prediction results by estimating feature attributions in the "stacked" models (i.e., feature embedding model followed by prediction model). PHASE is novel in three ways: 1) To our knowledge, PHASE is the first instance of transferal of neural networks to create physiological signal embeddings. 2) We present a tractable method to obtain feature attributions through stacked models. We prove that our stacked model attributions can approximate Shapley values -- attributions known to have desirable properties -- for arbitrary sets of models. 3) PHASE was extensively tested in a cross-hospital setting including publicly available data. In our experiments, we show that PHASE significantly outperforms alternative embeddings -- such as raw, exponential moving average/variance, and autoencoder -- currently in use. Furthermore, we provide evidence that transferring neural network embedding/representation learners between distinct hospitals still yields performant embeddings and offer recommendations when transference is ineffective.
rejected-papers
Authors present a technique to learn embeddings over physiological signals independently using univariate LSTMs tasked to predict future values. Supervised methods are them employed over these embeddings. Univariate approach is taken to improve transferability across institutions, and Shapley values are used to provide interpretable insight. The work is interesting, and authors have made a good attempt at answering reviewers' concerns, but more work remains to be done. Pros: - R1 & R3: Well written. - R3: Transferrable embeddings are useful in this domain, and not often researched. Cons: - R3: Method builds embeddings that assume that future task will be relevant to drops in signals. Authors confirm. - R3: Performance improvement is marginal versus baselines. Authors essentially confirm that the small improvement is the accurate number. - R2 & R3: Interpretability evaluation is not sufficient. Medical expert should rate interpretability of results. Authors did not include or revise according to suggestion.
train
[ "Hye0kEE907", "HyxUt7VqRQ", "S1xZdQVcRm", "H1rL7VqAQ", "ByxCez4qA7", "Skl3RJ4cR7", "H1eEGZV9AX", "Skg7B-V5AQ", "rkliQbEqAX", "S1xzxx4q0m", "HJxTNKXs3m", "Bkx_Q3Kq37", "rkg-azY9hX" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Most major changes are made in red. The appendix section is entirely new.", "*”- Lack of description on experiment setup. The authors do not describe how they pre-trained the LSTMs to obtain Min^h, Auto^h and Hypox^h, which significantly hurts reproducibility. Also I couldn't find any description regarding trai...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2019_SygInj05Fm", "S1xZdQVcRm", "H1rL7VqAQ", "ByxCez4qA7", "Bkx_Q3Kq37", "HJxTNKXs3m", "rkg-azY9hX", "rkliQbEqAX", "H1eEGZV9AX", "Skl3RJ4cR7", "iclr_2019_SygInj05Fm", "iclr_2019_SygInj05Fm", "iclr_2019_SygInj05Fm" ]
iclr_2019_SygJSiA5YQ
Weak contraction mapping and optimization
The weak contraction mapping is a self mapping that the range is always a subset of the domain, which admits a unique fixed-point. The iteration of weak contraction mapping is a Cauchy sequence that yields the unique fixed-point. A gradient-free optimization method as an application of weak contraction mapping is proposed to achieve global minimum convergence. The optimization method is robust to local minima and initial point position.
rejected-papers
This paper proposes an optimization algorithm based on 'weak contraction mapping'. The paper is written poorly without clear definitions and mathematical rigor. Reviewers doubt both the correctness and the usefulness of the proposed method. I strongly suggest authors to rewrite the paper addressing all the reviews before submitting to a different venue.
train
[ "S1e8d-Qf0Q", "rJlE1rck6X", "Hklb7Rt1pQ", "BJxCMkkqhX", "BkxP_4cQhQ", "H1xAcYdC3Q", "HkeFledC3X" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "This paper considers self-maps of metric spaces where the range is strictly smaller than the domain. Under this condition this tries to show that such a map has a fixed point. Now the paper suggests that such a \"weakly contractive\" map has a fixed point and tries to use such maps to find the global minima of f...
[ 3, -1, -1, 1, 4, -1, -1 ]
[ 2, -1, -1, 5, 5, -1, -1 ]
[ "iclr_2019_SygJSiA5YQ", "Hklb7Rt1pQ", "HkeFledC3X", "iclr_2019_SygJSiA5YQ", "iclr_2019_SygJSiA5YQ", "BkxP_4cQhQ", "BJxCMkkqhX" ]
iclr_2019_SygK6sA5tX
Graph Classification with Geometric Scattering
One of the most notable contributions of deep learning is the application of convolutional neural networks (ConvNets) to structured signal classification, and in particular image classification. Beyond their impressive performances in supervised learning, the structure of such networks inspired the development of deep filter banks referred to as scattering transforms. These transforms apply a cascade of wavelet transforms and complex modulus operators to extract features that are invariant to group operations and stable to deformations. Furthermore, ConvNets inspired recent advances in geometric deep learning, which aim to generalize these networks to graph data by applying notions from graph signal processing to learn deep graph filter cascades. We further advance these lines of research by proposing a geometric scattering transform using graph wavelets defined in terms of random walks on the graph. We demonstrate the utility of features extracted with this designed deep filter bank in graph classification of biochemistry and social network data (incl. state of the art results in the latter case), and in data exploration, where they enable inference of EC exchange preferences in enzyme evolution.
rejected-papers
AR1 is concerned about the overlap of this paper with Gama et al., 2018 as well as lack of theoretical analysis and poor results on REDDIT-5k and REDDIT-5B datasets. AR2 reflects the same concerns (lack of clear cut novelty over Zou & Lerman, 2018, Game, 2018. AR3 also points the same issue re. lack of theoretical results. The austhors admit that Zou and Lerman, 2018, and Gama, 2018, focus on stability results while this submission offers empirical evaluations. Unfortunately, reviewers did not find these arguments convincing. Thus, at this point, the paper cannot be accepted for publication in ICLR. AC strongly encourages authors to develop their theoretical 'edge' over this crowded market of GCNs and scattering approaches.
train
[ "ryxeDc7zy4", "H1eadygcRQ", "B1gnX1xqC7", "rJxjKP150m", "BygaT4Jq0Q", "SkejcEy50m", "rJek5Xl9n7", "rJxzUquw2Q", "BJxeBoLX3m" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the extensive changes of the paper! I appreciate the work that you did, but I am still not convinced about the novelty of the approach as well as its practical benefits. The 'edge' over existing methods does not appear to be sufficiently large to me.\n\nWith the new data sets, the difference over existi...
[ -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "H1eadygcRQ", "B1gnX1xqC7", "BJxeBoLX3m", "rJxzUquw2Q", "SkejcEy50m", "rJek5Xl9n7", "iclr_2019_SygK6sA5tX", "iclr_2019_SygK6sA5tX", "iclr_2019_SygK6sA5tX" ]
iclr_2019_SygONjRqKm
Amortized Context Vector Inference for Sequence-to-Sequence Networks
Neural attention (NA) has become a key component of sequence-to-sequence models that yield state-of-the-art performance in as hard tasks as abstractive document summarization (ADS), machine translation (MT), and video captioning (VC). NA mechanisms perform inference of context vectors; these constitute weighted sums of deterministic input sequence encodings, adaptively sourced over long temporal horizons. Inspired from recent work in the field of amortized variational inference (AVI), in this work we consider treating the context vectors generated by soft-attention (SA) models as latent variables, with approximate finite mixture model posteriors inferred via AVI. We posit that this formulation may yield stronger generalization capacity, in line with the outcomes of existing applications of AVI to deep networks. To illustrate our method, we implement it and experimentally evaluate it considering challenging ADS, VC, and MT benchmarks. This way, we exhibit its improved effectiveness over state-of-the-art alternatives.
rejected-papers
The overall view of the reviewers is that the paper is not quite good enough as it stands. The reviewers also appreciate the contributions so taking the comments into account and resubmit elsewhere is encouraged.
train
[ "HJg6yqxPaX", "rJg2m5lzTX", "BJeKvchrTm", "Skl_CWo1aQ", "SyxnVbbanQ", "HkeJCJAY2m" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. \"For video captioning task, various methods perform quite similar on test set, but vary a lot on validation set? \"\nA.: It seems all methods work much better in the validation set than in the test set. We suppose the way the dataset was split, the validation split was much \"easier\" than the \"test\" set. If...
[ -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, 3, 3, 4 ]
[ "SyxnVbbanQ", "HkeJCJAY2m", "Skl_CWo1aQ", "iclr_2019_SygONjRqKm", "iclr_2019_SygONjRqKm", "iclr_2019_SygONjRqKm" ]
iclr_2019_Sygx4305KQ
Small steps and giant leaps: Minimal Newton solvers for Deep Learning
We propose a fast second-order method that can be used as a drop-in replacement for current deep learning solvers. Compared to stochastic gradient descent (SGD), it only requires two additional forward-mode automatic differentiation operations per iteration, which has a computational cost comparable to two standard forward passes and is easy to implement. Our method addresses long-standing issues with current second-order solvers, which invert an approximate Hessian matrix every iteration exactly or by conjugate-gradient methods, procedures that are much slower than a SGD step. Instead, we propose to keep a single estimate of the gradient projected by the inverse Hessian matrix, and update it once per iteration with just two passes over the network. This estimate has the same size and is similar to the momentum variable that is commonly used in SGD. No estimate of the Hessian is maintained. We first validate our method, called CurveBall, on small problems with known solutions (noisy Rosenbrock function and degenerate 2-layer linear networks), where current deep learning solvers struggle. We then train several large models on CIFAR and ImageNet, including ResNet and VGG-f networks, where we demonstrate faster convergence with no hyperparameter tuning. We also show our optimiser's generality by testing on a large set of randomly-generated architectures.
rejected-papers
The proposal is a scheme for using implicit matrix-vector products to exploit curvature information for neural net optimization, roughly based on the adaptive learning rate and momentum tricks from Martens and Grosse (2015). The paper is well-written, and the proposed method seems like a reasonable thing to try. I don't see any critical flaws in the methods. While there was a long discussion between R1 and the authors on many detailed points, most of the points R1 raises seem very minor, and authors' response to the conceptual points seems satisfactory. In terms of novelty, the method is mostly a remixing of ideas that have already appeared in the neural net optimization literature. There is sufficient novelty to justify acceptance if there were strong experimental results, but in my opinion not enough for the conceptual contributions to stand on their own. There is not much evidence of a real optimization improvement. The per-epoch improvement over SGD is fairly small, and (as the reviewers point out) probably outweighed by the factor-of-2 computational overhead, so it's likely there is no wall-clock improvement. Other details of the experimental setup seem concerning; e.g., if I understand right, the SGD training curve flatlines because the SGD parameters were tuned for validation accuracy rather than training accuracy (as is reported). The only comparison to another second-order method is to K-FAC on an MNIST MLP, even though K-FAC and other methods have been applied to much larger-scale models. I think there's a promising idea here which could make a strong paper if the theory or experiments were further developed. But I can't recommend acceptance in its current form.
train
[ "SJgp6njLJN", "Byxdx-dHJE", "SygVWkE4JE", "rJxEV074kE", "SJx9di-XkV", "Ske076UWJV", "S1e706lJkN", "rkxy10K2C7", "HyeK5Jt3R7", "SyemYpEiam", "B1xyXcQ767", "SyxRWTJxpQ", "ryeMkWWQn7", "ryeIGPDv2m" ]
[ "author", "author", "author", "author", "public", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for pointing out this interesting connection. Quoting from the paper you mentioned:\n\n“If CG terminated after just 1 step, HF becomes equivalent to NAG, except that it uses a special formula based on the curvature matrix for the learning rate instead of a fixed constant.”\n\nThe same reasoning applies t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "SJx9di-XkV", "Ske076UWJV", "ryeMkWWQn7", "ryeMkWWQn7", "iclr_2019_Sygx4305KQ", "S1e706lJkN", "rkxy10K2C7", "HyeK5Jt3R7", "ryeMkWWQn7", "ryeIGPDv2m", "SyxRWTJxpQ", "iclr_2019_Sygx4305KQ", "iclr_2019_Sygx4305KQ", "iclr_2019_Sygx4305KQ" ]
iclr_2019_SygxYoC5FX
BIGSAGE: unsupervised inductive representation learning of graph via bi-attended sampling and global-biased aggregating
Different kinds of representation learning techniques on graph have shown significant effect in downstream machine learning tasks. Recently, in order to inductively learn representations for graph structures that is unobservable during training, a general framework with sampling and aggregating (GraphSAGE) was proposed by Hamilton and Ying and had been proved more efficient than transductive methods on fileds like transfer learning or evolving dataset. However, GraphSAGE is uncapable of selective neighbor sampling and lack of memory of known nodes that've been trained. To address these problems, we present an unsupervised method that samples neighborhood information attended by co-occurring structures and optimizes a trainable global bias as a representation expectation for each node in the given graph. Experiments show that our approach outperforms the state-of-the-art inductive and unsupervised methods for representation learning on graphs.
rejected-papers
AR2 is concerned about the marginal novelty, weak experiments and very high complexity of the algorithm. AR3 is concerned about lack of theoretical analysis and parameter setting. AR4 is concerned that the proposed method is useful in very restricted settings and the paper is incremental. Unfortunately, with strong critique from reviewers regarding the novelty, complexity, poor presentation and restricted setting, this draft cannot be accepted by ICLR.
val
[ "HyeZ0Llep7", "ByxStr6L3X", "r1e8qsZWhm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper modifies the GraphSAGE on unsupervised inductive node embedding.\nThe authors propose to use the bi-attention architecture to sample\ninteresting nodes (instead of the uniform sampler in GraphSAGE), and to use a\nglobal embedding bias matrix in the local aggregating functions. The method\nshowed improve...
[ 2, 4, 4 ]
[ 4, 3, 4 ]
[ "iclr_2019_SygxYoC5FX", "iclr_2019_SygxYoC5FX", "iclr_2019_SygxYoC5FX" ]
iclr_2019_Syl6tjAqKX
BEHAVIOR MODULE IN NEURAL NETWORKS
Prefrontal cortex (PFC) is a part of the brain which is responsible for behavior repertoire. Inspired by PFC functionality and connectivity, as well as human behavior formation process, we propose a novel modular architecture of neural networks with a Behavioral Module (BM) and corresponding end-to-end training strategy. This approach allows the efficient learning of behaviors and preferences representation. This property is particularly useful for user modeling (as for dialog agents) and recommendation tasks, as allows learning personalized representations of different user states. In the experiment with video games playing, the resultsshow that the proposed method allows separation of main task’s objectives andbehaviors between different BMs. The experiments also show network extendability through independent learning of new behavior patterns. Moreover, we demonstrate a strategy for an efficient transfer of newly learned BMs to unseen tasks.
rejected-papers
This paper takes inspiration from the brain to add a behavioral module to a deep reinforcement learning architecture. Unfortunately, the paper's structure and execution lacks clarity and requires a lot more work: as noted by reviewers, the link link between motivation and experiments is too fuzzy and their execution is not convincing.
train
[ "HyxcQgBonQ", "H1gNs4tqhX", "BygeJNU5hX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "# Summary\nThis paper proposes to learn behaviors independently from the main task. The main idea is to train a behavior classifier and use domain-adversarial training idea to make the features invariant to sources of behaviors for transfer learning to new behaviors/tasks. The results on Atari games show that the ...
[ 3, 3, 4 ]
[ 4, 5, 5 ]
[ "iclr_2019_Syl6tjAqKX", "iclr_2019_Syl6tjAqKX", "iclr_2019_Syl6tjAqKX" ]
iclr_2019_SylU3jC5Y7
ADAPTIVE NETWORK SPARSIFICATION VIA DEPENDENT VARIATIONAL BETA-BERNOULLI DROPOUT
While variational dropout approaches have been shown to be effective for network sparsification, they are still suboptimal in the sense that they set the dropout rate for each neuron without consideration of the input data. With such input-independent dropout, each neuron is evolved to be generic across inputs, which makes it difficult to sparsify networks without accuracy loss. To overcome this limitation, we propose adaptive variational dropout whose probabilities are drawn from sparsity-inducing beta-Bernoulli prior. It allows each neuron to be evolved either to be generic or specific for certain inputs, or dropped altogether. Such input-adaptive sparsity-inducing dropout allows the resulting network to tolerate larger degree of sparsity without losing its expressive power by removing redundancies among features. We validate our dependent variational beta-Bernoulli dropout on multiple public datasets, on which it obtains significantly more compact networks than baseline methods, with consistent accuracy improvements over the base networks.
rejected-papers
The paper proposes Variational Beta-Bernoulli Dropout,, a Bayesian method for sparsifying neural networks. The method adopts a spike-and-slab pior over parameter of the network. The paper proposes Beta hyperpriors over the network, motivated by the Indian Buffet Process, and propose a method for input-conditional priors. The paper is well-written and the material is communicated clearly. The topic is also of interest to the community and might have important implications down the road. The authors, however, failed to convince the reviewers that the paper is ready for publication at ICLR. The proposed method is very similar to earlier work. The reviewers think that the paper is not ready for publication.
train
[ "Skg3CTn00m", "SylamO6cRQ", "S1gDLqIdRQ", "Bkg_LDLdRQ", "ByeY-DL_07", "BygPuUI_AX", "HyejvX5hnQ", "HyeKbrJc27", "Byx1wGww2Q" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your comment.\n\nComparison to generalized dropout \nGeneralized dropout is similar to our beta-Bernoulli dropout in a sense that it places a beta prior Beta(alpha, beta) on the mask probability pi. The generalized dropout has several variants according to the choice of hyperparameters alpha and beta, a...
[ -1, -1, -1, -1, -1, -1, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "SylamO6cRQ", "ByeY-DL_07", "iclr_2019_SylU3jC5Y7", "Byx1wGww2Q", "HyeKbrJc27", "HyejvX5hnQ", "iclr_2019_SylU3jC5Y7", "iclr_2019_SylU3jC5Y7", "iclr_2019_SylU3jC5Y7" ]
iclr_2019_Syx4_iCqKQ
Polar Prototype Networks
This paper proposes a neural network for classification and regression, without the need to learn layout structures in the output space. Standard solutions such as softmax cross-entropy and mean squared error are effective but parametric, meaning that known inductive structures such as maximum margin separation and simplicity (Occam's Razor) need to be learned for the task at hand. Instead, we propose polar prototype networks, a class of networks that explicitly states the structure, \ie the layout, of the output. The structure is defined by polar prototypes, points on the hypersphere of the output space. For classification, each class is described by a single polar prototype and they are a priori distributed with maximal separation and equal shares on the hypersphere. Classes are assigned to prototypes randomly or based on semantic priors and training becomes a matter of minimizing angular distances between examples and their class prototypes. For regression, we show that training can be performed as a polar interpolation between two prototypes, arriving at a regression with higher-dimensional outputs. From empirical analysis, we find that polar prototype networks benefit from large margin separation and semantic class structure, while only requiring a minimal amount of output dimensions. While the structure is simple, the performance is on par with (classification) or better than (regression) standard network methods. Moreover, we show that we gain the ability to perform regression and classification jointly in the same space, which is disentangled and interpretable by design.
rejected-papers
This work proposes a class of neural networks that can jointly perform classification and regression in the output space. The authors explore the concept of polar prototypes which are points on the hypersphere in the output space. For classification, each class is described by a single polar prototype and training is equivalent to minimizing angular distances between examples and their class prototypes. For regression, training can be performed as a polar interpolation between two prototypes. As rightly acknowledged by R3, “it is nice to see an alternative to the dominant cross-entropy loss and l2 loss for deep classification and regression respectively, also the ability to tackle both” at the same time. However, all reviewers and AC agreed that the current manuscript lacks convincing empirical evaluations that clearly show the benefits of the proposed approach. To strengthen the evaluation, (1) see R1’s concern regarding the state-of-the-art performance on CIFAR-10; (2) see R3’s suggestion to use more challenging datasets (e.g. ImageNet), stronger backbone networks (e.g. densenet), and also other applications (e.g. object recognition and pose estimation; face recognition and age estimation as classification and regression problems); (3) see R2’s suggestions for more baselines to be compared to. Two other requests to further strengthen the manuscript are: (1) finding alternative ways to MC or evolutionary algorithms (R2); (2) exploring class correlation in the prototype space (R2). In the response, the authors acknowledged that their initial results were not aimed for state-of-the-art comparison, but to show that the proposed objective is comparable to minimizing softmax cross-entropy loss. The authors provide additional experiments using DenseNet as the base network and the results are still slightly inferior to state-of-the-art performance. The experiments using ImageNet dataset have been promised by the authors (in response to R3), but are not included in the current revision. AC suggests in its current state the manuscript is not ready for a publication. We hope the reviews are useful for improving and revising the paper.
train
[ "rkggi75dC7", "HkxJ8McO0m", "HJxkMfcu07", "H1g1RZ9u0Q", "HJeS3pI8hm", "SkeIzMP427", "SJeeJ8RQsQ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their feedback and suggestions to improve the paper. We have provided a rebuttal for each reviewer separately below. Based on the reviews, we have made the following updates to the paper:\n\n- We have added a classification experiment with improved performance using a deeper architecture...
[ -1, -1, -1, -1, 5, 3, 4 ]
[ -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2019_Syx4_iCqKQ", "SJeeJ8RQsQ", "SkeIzMP427", "HJeS3pI8hm", "iclr_2019_Syx4_iCqKQ", "iclr_2019_Syx4_iCqKQ", "iclr_2019_Syx4_iCqKQ" ]
iclr_2019_Syx9rnRcYm
A CASE STUDY ON OPTIMAL DEEP LEARNING MODEL FOR UAVS
Over the passage of time Unmanned Autonomous Vehicles (UAVs), especially Autonomous flying drones grabbed a lot of attention in Artificial Intelligence. Since electronic technology is getting smaller, cheaper and more efficient, huge advancement in the study of UAVs has been observed recently. From monitoring floods, discerning the spread of algae in water bodies to detecting forest trail, their application is far and wide. Our work is mainly focused on autonomous flying drones where we establish a case study towards efficiency, robustness and accuracy of UAVs where we showed our results well supported through experiments. We provide details of the software and hardware architecture used in the study. We further discuss about our implementation algorithms and present experiments that provide a comparison between three different state-of-the-art algorithms namely TrailNet, InceptionResnet and MobileNet in terms of accuracy, robustness, power consumption and inference time. In our study, we have shown that MobileNet has produced better results with very less computational requirement and power consumption. We have also reported the challenges we have faced during our work as well as a brief discussion on our future work to improve safety features and performance.
rejected-papers
The paper compared between different CNNs for UAV trail guidance. The reviewers arrived at a consensus on rejection due to lack of new ideas, and the paper is not well polished.
train
[ "BJeDB0wLTm", "Bkl5ifgIT7", "HJlVyMV63Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis paper considers the task of trail navigation task recently explored by Giusti et al. and Smolyanskiy et al. The authors describe their setup for physical experiments with a drone, and compare three neural network architectures for trail navigation on the IDSIA dataset. Experiments in a simulator a...
[ 3, 3, 2 ]
[ 3, 2, 2 ]
[ "iclr_2019_Syx9rnRcYm", "iclr_2019_Syx9rnRcYm", "iclr_2019_Syx9rnRcYm" ]
iclr_2019_SyxHKjAcYX
Zero-Resource Multilingual Model Transfer: Learning What to Share
Modern natural language processing and understanding applications have enjoyed a great boost utilizing neural networks models. However, this is not the case for most languages especially low-resource ones with insufficient annotated training data. Cross-lingual transfer learning methods improve the performance on a low-resource target language by leveraging labeled data from other (source) languages, typically with the help of cross-lingual resources such as parallel corpora. In this work, we propose a zero-resource multilingual transfer learning model that can utilize training data in multiple source languages, while not requiring target language training data nor cross-lingual supervision. Unlike most existing methods that only rely on language-invariant features for cross-lingual transfer, our approach utilizes both language-invariant and language-specific features in a coherent way. Our model leverages adversarial networks to learn language-invariant features and mixture-of-experts models to dynamically exploit the relation between the target language and each individual source language. This enables our model to learn effectively what to share between various languages in the multilingual setup. It results in significant performance gains over prior art, as shown in an extensive set of experiments over multiple text classification and sequence tagging tasks including a large-scale real-world industry dataset.
rejected-papers
The proposed method proposes a new architecture that uses mixture of experts to determine what to share between multiple languages for transfer learning. The results are quite good. There is still a bit of a lack of framing compared to the large amount of previous work in the field, even after initial revisions to cover reviewer comments. I think that probably this requires a significant rewrite of the intro and maybe even title of the paper to scope the contributions, and also make sure in the empirical analysis that the novel contributions are evaluated independently on their own (within the same experimental setting and hyperparameters). As such, and given the high quality bar of ICLR, I can't recommend this paper be accepted at this time, but I encourage the authors to revise this explanation and re-submit a new version elsewhere.
train
[ "S1eFzjDlAX", "Byl5skjDTX", "ByeWEkiP6m", "BJxG1kov6X", "BklQiRqvpX", "rJxtmM89n7", "r1lBbvaSnm", "HkeghLbf2m", "SJe0M38JpQ" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Dear Reviewers,\n\nThe authors have posted a response, and I'd appreciate if you could take a look at it and see if it has addressed any of your concerns. Thank you!", "Thank you for bringing this paper to our attention, and we added the citation and comparison in the latest update.\nOur results are similar with...
[ -1, -1, -1, -1, -1, 6, 5, 6, -1 ]
[ -1, -1, -1, -1, -1, 4, 5, 4, -1 ]
[ "iclr_2019_SyxHKjAcYX", "SJe0M38JpQ", "rJxtmM89n7", "r1lBbvaSnm", "HkeghLbf2m", "iclr_2019_SyxHKjAcYX", "iclr_2019_SyxHKjAcYX", "iclr_2019_SyxHKjAcYX", "iclr_2019_SyxHKjAcYX" ]
iclr_2019_SyxMWh09KX
Attentive Task-Agnostic Meta-Learning for Few-Shot Text Classification
Current deep learning based text classification methods are limited by their ability to achieve fast learning and generalization when the data is scarce. We address this problem by integrating a meta-learning procedure that uses the knowledge learned across many tasks as an inductive bias towards better natural language understanding. Inspired by the Model-Agnostic Meta-Learning framework (MAML), we introduce the Attentive Task-Agnostic Meta-Learning (ATAML) algorithm for text classification. The proposed ATAML is designed to encourage task-agnostic representation learning by way of task-agnostic parameterization and facilitate task-specific adaptation via attention mechanisms. We provide evidence to show that the attention mechanism in ATAML has a synergistic effect on learning performance. Our experimental results reveal that, for few-shot text classification tasks, gradient-based meta-learning approaches ourperform popular transfer learning methods. In comparisons with models trained from random initialization, pretrained models and meta trained MAML, our proposed ATAML method generalizes better on single-label and multi-label classification tasks in miniRCV1 and miniReuters-21578 datasets.
rejected-papers
This paper describes an incorporation of attention into model agnostic meta learning. The reviewers found that the paper was rather confusing in its presentation of both the method and the tasks. While the results seemed interesting, it was difficult to frame them due to lack of clarity as to what the task is, and the relation between attention and MAML. It sounds like this paper needs a bit more work, and thus is not suitable for publication at this time. It is disappointing that the reviews were so short, but as the authors did not challenge them, unfortunately the AC must decide on the basis of the first set of comments by reviewers.
train
[ "SygMnjw63X", "B1xKIsQcnQ", "rkljaHMGhm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary of paper: For the few shot text classification task, train a model with MAML where only a subset of parameters (attention parameters in this case) are updated in the inner loop of MAML. The empirical results suggest that this improves over the MAML baseline.\n\nI found this paper confusingly written. The a...
[ 5, 5, 7 ]
[ 4, 3, 3 ]
[ "iclr_2019_SyxMWh09KX", "iclr_2019_SyxMWh09KX", "iclr_2019_SyxMWh09KX" ]