paper_id
stringlengths
19
21
paper_title
stringlengths
8
170
paper_abstract
stringlengths
8
5.01k
paper_acceptance
stringclasses
18 values
meta_review
stringlengths
29
10k
label
stringclasses
3 values
review_ids
list
review_writers
list
review_contents
list
review_ratings
list
review_confidences
list
review_reply_tos
list
iclr_2018_SyhcXjy0Z
APPLICATION OF DEEP CONVOLUTIONAL NEURAL NETWORK TO PREVENT ATM FRAUD BY FACIAL DISGUISE IDENTIFICATION
The paper proposes and demonstrates a Deep Convolutional Neural Network (DCNN) architecture to identify users with disguised faces attempting a fraudulent ATM transaction. The recent introduction of the Disguised Face Identification (DFI) framework proves the applicability of deep neural networks to this very problem. All ATMs nowadays incorporate a hidden camera and capture footage of their users. However, it is impossible for the police to track down impersonators with disguised faces from ATM footage. The proposed deep convolutional neural network is trained to identify, in real time, whether the user in the captured image is trying to cloak his identity or not. The output of the DCNN is then reported to the ATM to take appropriate steps and prevent the swindler from completing the transaction. The network is trained using a dataset of images captured in situations similar to those at an ATM. The comparatively low background clutter in the images enables the network to demonstrate high accuracy in feature extraction and classification for all the different disguises.
rejected-papers
Reviewers are unanimous that this is a reject: a "class project"-level presentation with errors in methodology and presentation, and no author rebuttal or revision.
train
[ "Hk2HjIfxG", "r11aaNYez", "BJE2bF3lM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is relatively clear to follow, and implement. \n\nThe main concern is that this looks like a class project rather than a scientific paper. For a class project this could get an A in a ML class!\n\nIn particular, the authors take an already existing dataset, design a trivial convolutional neural network, ...
[ 1, 2, 3 ]
[ 5, 4, 5 ]
[ "iclr_2018_SyhcXjy0Z", "iclr_2018_SyhcXjy0Z", "iclr_2018_SyhcXjy0Z" ]
iclr_2018_HkGcX--0-
Auxiliary Guided Autoregressive Variational Autoencoders
Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields state-of-the-art quantitative results.
rejected-papers
To ensure that a VAE with a powerful autoregressive decoder does not ignore its latent variables, the authors propose adding an extra term to the ELBO, corresponding to a reconstruction with an auxiliary non-autoregressive decoder. This does indeed produce models that use latent variables and (with some tuning of the weight on the KL term) perform as well as the underlying autoregressive model alone. However, as the reviewers pointed out, the paper does not demonstrate the value of the resulting models. If the goal is learning meaningful latent representations, then the quality of the representations should be evaluated empirically. Currently it is not clear whether the proposed approach would yield better representations than a VAE with a non-autoregressive decoder or a VAE with an autoregressive decoder trained using the "free bits" trick of Kingma et al. (2016). This is certainly an interesting idea, but without a proper evaluation it is impossible to judge its value.
test
[ "rJuuHjLEG", "Bk9xqcOgG", "rJZx5rBNG", "B1LwiG9gz", "BylKxYolM", "HJGc1y3XG", "ByqjxYB-z", "HkwL0GWfM", "BJu573sZM", "r1zYWb9bG", "BkLUDaPZz", "ryc07KSZG", "rJTcZYS-M", "HJZ4bKSZM", "Byjzhwnlf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "author", "author", "author", "public" ]
[ "My main problem is still that it's not clear what this model has to offer. The model is neither able to improve density estimation over PixelCNNs (while adding complexity), nor has it been shown to learn better representations (none of the evaluations seem appropriate to evaluate representations). Nevertheless, I ...
[ -1, 5, -1, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Bk9xqcOgG", "iclr_2018_HkGcX--0-", "B1LwiG9gz", "iclr_2018_HkGcX--0-", "iclr_2018_HkGcX--0-", "iclr_2018_HkGcX--0-", "iclr_2018_HkGcX--0-", "Bk9xqcOgG", "r1zYWb9bG", "BkLUDaPZz", "Byjzhwnlf", "Bk9xqcOgG", "B1LwiG9gz", "BylKxYolM", "iclr_2018_HkGcX--0-" ]
iclr_2018_SkxqZngC-
A Bayesian Nonparametric Topic Model with Variational Auto-Encoders
Topic modeling of text documents is one of the most important tasks in representation learning. In this work, we propose iTM-VAE, a Bayesian nonparametric (BNP) topic model with variational auto-encoders. On one hand, as a BNP topic model, iTM-VAE potentially has infinite topics and can adapt the topic number to data automatically. On the other hand, unlike other BNP topic models, the inference of iTM-VAE is modeled by neural networks, which have rich representational capacity and can be computed in a simple feed-forward manner. Two variants of iTM-VAE are also proposed in this paper: iTM-VAE-Prod models the generative process in a products-of-experts fashion for better performance, and iTM-VAE-G places a prior over the concentration parameter so that the model can adapt a suitable concentration parameter to data automatically. Experimental results on the 20News and Reuters RCV1-V2 datasets show that the proposed models outperform the state of the art in terms of perplexity, topic coherence and document retrieval tasks. Moreover, the ability to adjust the concentration parameter to data is also confirmed by experiments.
rejected-papers
The paper proposes a BNP topic model that uses a stick-breaking prior over document topics and performs VAE-style inference over them. Unfortunately, the novelty of this work is limited, as VAE-like inference for LDA-like models, inference with stick-breaking priors for VAEs, and placing a prior on the concentration parameter in a non-parametric topic model have all been done before (see e.g. Srivastava & Sutton (2017), Nalisnick & Smyth (2017), and Teh, Kurihara & Welling (2007) respectively). There are also concerns about the correctness of treating topics as parameters (as opposed to random variables) in the proposed model. The authors' clarification regarding this point was helpful but not sufficient to show the validity of the approach.
train
[ "ByaL9g21M", "SyhzVlKez", "rJfo7HsxG", "SySKU3i7f", "r1ceZuQmf", "rJreJd77M", "ryyiTwS-f", "S1qr6DS-M", "SJCGAPH-f", "HyKCvwrWM", "HkykYDB-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "\"topic modeling of text documents one of most important tasks\"\nDoes this claim have any backing?\n\n\"inference of HDP is more complicated and not easy to be applied to new models\" Really an artifact of the misguided nature of earlier work. The posterior for the $\\vec\\pi$ of a elements of DP or HDP can be m...
[ 7, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkxqZngC-", "iclr_2018_SkxqZngC-", "iclr_2018_SkxqZngC-", "iclr_2018_SkxqZngC-", "SyhzVlKez", "rJfo7HsxG", "SyhzVlKez", "SyhzVlKez", "SyhzVlKez", "rJfo7HsxG", "ByaL9g21M" ]
iclr_2018_SJSVuReCZ
SHADE: SHAnnon DEcay Information-Based Regularization for Deep Learning
Regularization is a major issue in training deep neural networks. In this paper, we propose a new information-theory-based regularization scheme named SHADE, for SHAnnon DEcay. The originality of the approach is to define a prior based on conditional entropy, which explicitly decouples the learning of invariant representations in the regularizer from the learning of correlations between inputs and labels in the data-fitting term. We explain why this quantity enables our model to achieve invariance with respect to input variations. We empirically validate the efficiency of our approach in improving classification performance compared to standard regularization schemes on several standard architectures.
rejected-papers
The proposed conditional variance regularizer looks interesting and the results show some promise. However, as the reviewers pointed out, the connection between the information-theoretic argument provided and the final form of the regularizer is too tenuous in its current form. Since this argument is central to the paper, the authors are urged to either provide a more rigorous derivation or motivate the regularizer more directly and place more emphasis on its empirical evaluation.
train
[ "HyloFODVM", "By-4qhFeM", "SJ3gWWsxM", "Hkd4Dn2eG", "B1zvHR7mf", "S1KszjW7f", "SJyBX4gmM", "HJwva4tzM", "B17OvXYfz", "ByJAL7tzf", "ByGVIQFfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors propose a particular variance regularizer on activations and connect it to the conditional entropy of the activation given the class label. They also present some competitive results on CIFAR-10 and ImageNet.\n\nDespite some promising results, I found some issues with the paper. The main one is that th...
[ 4, 5, 7, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJSVuReCZ", "iclr_2018_SJSVuReCZ", "iclr_2018_SJSVuReCZ", "iclr_2018_SJSVuReCZ", "SJyBX4gmM", "HJwva4tzM", "B17OvXYfz", "ByGVIQFfz", "By-4qhFeM", "SJ3gWWsxM", "Hkd4Dn2eG" ]
iclr_2018_SJ3dBGZ0Z
LSH Softmax: Sub-Linear Learning and Inference of the Softmax Layer in Deep Architectures
Log-linear models are widely used in machine learning, and in particular are ubiquitous in deep learning architectures in the form of the softmax. While exact inference and learning of these models require linear time, they can be done approximately in sub-linear time with strong concentration guarantees. In this work, we present LSH Softmax, a method to perform sub-linear learning and inference of the softmax layer in the deep learning setting. Our method relies on the popular Locality-Sensitive Hashing to build a well-concentrated gradient estimator, using nearest neighbors and uniform samples. We also present an inference scheme in sub-linear time for LSH Softmax using the Gumbel distribution. On language modeling, we show that Recurrent Neural Networks trained with LSH Softmax perform on par with computing the exact softmax while requiring sub-linear computations.
rejected-papers
The authors propose an efficient LSH-based method for computing unbiased gradients for softmax layers, building on (Mussmann et al. 2017). Given the somewhat incremental nature of the method, a thorough experimental evaluation is essential to demonstrating its value. The reviewers however found the experimental section weak and expressed concerns about the choice of baselines and their surprisingly poor performance.
train
[ "rkQC_Rwlz", "Hy2-5bqeG", "S1FN4XcgM", "SJsvyvAzz", "rJ44ywRMz", "H1jyJw0Mf", "SyV2Rwtyf", "ByGORwtyz", "rkHB-57Jf", "BJQpqNNAZ", "BkN1Uh2R-", "Hy6mJ4qCZ", "r1CPEtQAZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public", "public", "public" ]
[ "The paper proposes to use LSH to approximate softmax, which greatly speeds up classification with large output space. The paper is overall well-written. However, similar ideas have been proposed before, such as \"Deep networks with large output spaces\" by Vijayanarasimhan et. al. (ICLR 2015). And this manuscript ...
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJ3dBGZ0Z", "iclr_2018_SJ3dBGZ0Z", "iclr_2018_SJ3dBGZ0Z", "rkQC_Rwlz", "Hy2-5bqeG", "S1FN4XcgM", "rkHB-57Jf", "Hy6mJ4qCZ", "iclr_2018_SJ3dBGZ0Z", "r1CPEtQAZ", "iclr_2018_SJ3dBGZ0Z", "iclr_2018_SJ3dBGZ0Z", "iclr_2018_SJ3dBGZ0Z" ]
iclr_2018_BJRxfZbAW
The Context-Aware Learner
One important aspect of generalization in machine learning involves reasoning about previously seen data in new settings. Such reasoning requires learning disentangled representations of data which are interpretable in isolation, but can also be combined in a new, unseen scenario. To this end, we introduce the context-aware learner, a model based on the variational autoencoding framework, which can learn such representations across data sets exhibiting a number of distinct contexts. Moreover, it is successfully able to combine these representations to generate data not seen at training time. The model enjoys an exponential increase in representational ability for a linear increase in context count. We demonstrate that the theory readily extends to a meta-learning setting such as this, and describe a fully unsupervised model in complete generality. Finally, we validate our approach using an adaptation with weak supervision.
rejected-papers
The paper proposes augmenting Neural Statistician with a meta-context variable that specifies the partitioning of the latent context into the per-dataset and per-datapoint dimensions. This idea makes a lot of sense but the reviewers found the experimental section clearly insufficient to demonstrate its effectiveness convincingly. Also introducing only the unsupervised version of the model, which looks challenging to train, but performing all the experiments with the less interesting semi-supervised version makes the paper both less compelling and harder to follow.
val
[ "BkzesZcxG", "BkJ3NH2lM", "B1CLys4bM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose an extension to the Neural Statistician which can model contexts with multiple partially overlapping features. This model can explain datasets by taking into account covariate structure needed to explain away factors of variation and it can also share this structure partially between datasets.\...
[ 6, 4, 4 ]
[ 5, 3, 4 ]
[ "iclr_2018_BJRxfZbAW", "iclr_2018_BJRxfZbAW", "iclr_2018_BJRxfZbAW" ]
iclr_2018_HkPCrEZ0Z
Combining Model-based and Model-free RL via Multi-step Control Variates
Model-free deep reinforcement learning algorithms are able to successfully solve a wide range of continuous control tasks, but typically require many on-policy samples to achieve good performance. Model-based RL algorithms, on the other hand, are sample-efficient, but learning accurate global models of complex dynamic environments has proved difficult in practice, leading to unsatisfactory performance of the learned policies. In this work, we combine the sample-efficiency of model-based algorithms with the accuracy of model-free algorithms. We leverage multi-step neural-network-based predictive models by embedding real trajectories into imaginary rollouts of the model, and use the imaginary cumulative rewards as control variates for model-free algorithms. In this way, we obtain the strengths of both sides and derive an estimator which is not only sample-efficient, but also unbiased and of very low variance. We present our evaluation on the MuJoCo and OpenAI Gym benchmarks.
rejected-papers
The paper has some potentially interesting ideas but it feels very preliminary. The experimental section in particular needs a lot more work.
train
[ "BkOz8MSxG", "rkiek__xz", "r1ftQIqgz", "Hk1nV1A7G", "Syhp8-b-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public" ]
[ "The paper studies a combination of model-based and model-free RL. The idea is to train a forward predictive model which provides multi-step estimates to facilitate model-free policy learning. Some parts of the paper lack clarity and the empirical results need improvement to support the claims (see details below)....
[ 5, 5, 4, -1, -1 ]
[ 4, 4, 3, -1, -1 ]
[ "iclr_2018_HkPCrEZ0Z", "iclr_2018_HkPCrEZ0Z", "iclr_2018_HkPCrEZ0Z", "iclr_2018_HkPCrEZ0Z", "iclr_2018_HkPCrEZ0Z" ]
iclr_2018_SkZ-BnyCW
Learning Deep Generative Models With Discrete Latent Variables
There have been numerous recent advancements in learning deep generative models with latent variables thanks to the reparameterization trick, which allows deep directed models to be trained effectively. However, since the reparameterization trick only works on continuous variables, deep generative models with discrete latent variables still remain hard to train and perform considerably worse than their continuous counterparts. In this paper, we attempt to shrink this gap by introducing a new architecture and its learning procedure. We develop a hybrid generative model with binary latent variables that consists of an undirected graphical model and a deep neural network. We propose an efficient two-stage pretraining and training procedure that is crucial for learning these models. Experiments on binarized digits and images of natural scenes demonstrate that our model achieves close to the state-of-the-art performance in terms of density estimation and is capable of generating coherent images of natural scenes.
rejected-papers
The reviewers agreed that while this is a well-written paper, it is low on novelty and does not make a substantial enough contribution. They also pointed out that although the reported MNIST results are highly competitive, possibly due to the use of a powerful ResNet decoder, the CIFAR10/ImageNet results are underwhelming.
train
[ "r1yDSeCJM", "SkHg5PQxf", "rkp8JGcef", "rJ7n07jGz", "BJ0mHjnlf", "SkaVZ-9xf", "H1gx50Xyf", "BJdtEE8AW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "public" ]
[ "Summary of the paper:\nThe paper proposes to augment a variational auto encoder (VAE) with a binary restricted Boltzmann machine (RBM) in the role of the prior of the generative model. To yield a good initialisation of the parameters of the RBM and the inference network a special pretraining procedure is introduce...
[ 4, 5, 4, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkZ-BnyCW", "iclr_2018_SkZ-BnyCW", "iclr_2018_SkZ-BnyCW", "iclr_2018_SkZ-BnyCW", "SkaVZ-9xf", "iclr_2018_SkZ-BnyCW", "BJdtEE8AW", "iclr_2018_SkZ-BnyCW" ]
iclr_2018_HkbmWqxCZ
The Mutual Autoencoder: Controlling Information in Latent Code Representations
Variational autoencoders (VAE) learn probabilistic latent variable models by optimizing a bound on the marginal likelihood of the observed data. Beyond providing a good density model, a VAE assigns to each data instance a latent code. In many applications, this latent code provides a useful high-level summary of the observation. However, the VAE may fail to learn a useful representation when the decoder family is very expressive, because maximum likelihood does not explicitly encourage useful representations and the latent variable is used only if it helps model the marginal distribution. This makes representation learning with VAEs unreliable. To address this issue, we propose a method for explicitly controlling the amount of information stored in the latent code. Our method can learn codes ranging from independent to nearly deterministic while benefiting from decoder capacity. Thus, we decouple the choice of decoder capacity and latent code dimensionality from the amount of information stored in the code.
rejected-papers
This is a well-written paper that aims to address an important problem. However, all the reviewers agreed that the experimental section is currently too weak for publication. They also made several good suggestions about improving the paper and the authors are encouraged to incorporate them before resubmitting.
train
[ "SkOy0Pokz", "rki0XSHlf", "Sy-QZtjgz", "SJBM0BYXz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "\nSummary\n\nThis paper proposes a penalized VAE training objection for the purpose of increasing the information between the data x and latent code z. Ideally, optimization would consist of maximizing log p(x) - | I(x,z) - M |, where M is the user-specified target mutual information (MI) and I(x,z) is the model’...
[ 4, 5, 4, -1 ]
[ 5, 4, 4, -1 ]
[ "iclr_2018_HkbmWqxCZ", "iclr_2018_HkbmWqxCZ", "iclr_2018_HkbmWqxCZ", "iclr_2018_HkbmWqxCZ" ]
iclr_2018_ryb83alCZ
Towards Unsupervised Classification with Deep Generative Models
Deep generative models have advanced the state of the art in semi-supervised classification; however, their capacity for deriving useful discriminative features in a completely unsupervised fashion, for classification in difficult real-world data sets where adequate manifold separation is required, has not been adequately explored. Most methods rely on a pipeline of deriving features via generative modeling and then applying clustering algorithms, separating the modeling and discriminative processes. We propose a deep hierarchical generative model which uses a mixture of discrete and continuous distributions to learn to effectively separate the different data manifolds, and which is trainable end-to-end. We show that by specifying the form of the discrete variable distribution we impose a specific structure on the model's latent representations. We test our model's discriminative performance on the task of CLL diagnosis against baselines from the field of computational FC, as well as from the Variational Autoencoder literature.
rejected-papers
The authors propose a hierarchical VAE model with a discrete latent variable in the top-most layer for unsupervised learning of discriminative representations. While the reported results on the two flow cytometry datasets are encouraging, they are insufficient to draw strong conclusions about the general effectiveness of the proposed architecture. Also, as two of the reviewers stated the proposed model is very similar to several VAE models in the literature. This paper seems better suited for a more applied venue than ICLR.
train
[ "SJk7H29xM", "SyangtilG", "BkmqxxDbz", "rJFs4Qh7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper addresses the question of unsupervised clustering with high classification performance. They propose a deep variational autoencoder architecture with categorical latent variables at the deepest layer and propose to train it with modifications of the standard variational approach with reparameterization ...
[ 4, 4, 4, -1 ]
[ 4, 4, 5, -1 ]
[ "iclr_2018_ryb83alCZ", "iclr_2018_ryb83alCZ", "iclr_2018_ryb83alCZ", "iclr_2018_ryb83alCZ" ]
iclr_2018_SkERSm-0-
Preliminary theoretical troubleshooting in Variational Autoencoder
What is learned by a variational autoencoder (VAE), and what influences its disentanglement? This paper makes a preliminary theoretical attempt to address the VAE's intrinsic dimension, real factor, disentanglement and indicator issues in the idealistic situation, and the implementation issue practically, from a noise modeling perspective, in the realistic case. On the intrinsic dimension issue, due to information conservation, the idealistic VAE learns exactly the intrinsic factor dimension. Moreover, as suggested by the mutual information separation property, the constraint induced by the Gaussian prior on the VAE objective encourages information sparsity across dimensions. On the disentanglement issue, the information conservation theorem subsequently inspires this paper's clarification of disentanglement. On the real factor issue, due to factor equivalence, the idealistic VAE may learn any factor set in the equivalence class. On the indicator issue, the behavior of the current disentanglement metric is discussed, and several performance indicators for disentanglement and generation are proposed to evaluate VAE models and to supervise the factors used. On the implementation issue, experiments under noise modeling and constraints empirically support the theoretical analysis and also exhibit their own characteristics in pursuing disentanglement.
rejected-papers
The reviewers agreed that the paper was too long (more than twice the recommended page limit not counting the appendix) and difficult to follow. They also pointed out that its central idea of learning the noise distribution in a VAE was not novel. While the shortened version uploaded by the authors looks like a step in the right direction, it was not sufficient to convince the reviewers.
train
[ "Hk-DIMdez", "r1-zOIFgM", "SJcdJ0tez", "SkljvChQz", "S1XZ-gBGG", "HJ3HXnNZM", "rJJokxrMf", "S1wuQeBMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper studies the importance of the noise modelling in Gaussian VAE. The original Gaussian VAE proposes to use the inference network for the noise that takes latent variables as inputs and outputs the variances, but most of the existing works on Gaussian VAE just use fixed noise probably because the inference...
[ 5, 3, 2, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkERSm-0-", "iclr_2018_SkERSm-0-", "iclr_2018_SkERSm-0-", "iclr_2018_SkERSm-0-", "r1-zOIFgM", "SJcdJ0tez", "SJcdJ0tez", "Hk-DIMdez" ]
iclr_2018_r1kj4ACp-
Understanding Deep Learning Generalization by Maximum Entropy
Deep learning achieves remarkable generalization capability with an overwhelming number of model parameters. Theoretical understanding of deep learning generalization has received recent attention yet remains not fully explored. This paper attempts to provide an alternative understanding from the perspective of maximum entropy. We first derive two feature conditions under which softmax regression strictly applies the maximum entropy principle. A DNN is then regarded as approximating these feature conditions with multilayer feature learning, and is proved to be a recursive solution towards the maximum entropy principle. The connection between DNNs and maximum entropy explains why typical designs such as shortcuts and regularization improve model generalization, and provides guidance for future model development.
rejected-papers
The reviewers are in agreement that the paper is a bit hard to follow and incorrect in places, with some claims not supported by experiments.
train
[ "HkBIjt2xz", "SyDSqb6gz", "Sy7fJuCxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis paper presents a derivation which links a DNN to recursive application of\nmaximum entropy model fitting. The mathematical notation is unclear, and in\none cases the lemmas are circular (i.e. two lemmas each assume the other is\ncorrect for their proof). Additionally the main theorem requires comp...
[ 2, 3, 6 ]
[ 3, 3, 2 ]
[ "iclr_2018_r1kj4ACp-", "iclr_2018_r1kj4ACp-", "iclr_2018_r1kj4ACp-" ]
iclr_2018_B1X4DWWRb
Learning Weighted Representations for Generalization Across Designs
Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing results in this space. Finally, we propose an algorithmic framework inspired by our bound and verify its effectiveness in causal effect estimation.
rejected-papers
The submission provides an interesting way to tackle the so-called distributional shift problem in machine learning. One familiar example is unsupervised domain adaptation. The main contribution of this work is deriving a bound on the generalization error/risk for a target domain as a combo of re-weighted empirical risk on the source domain and some discrepancy between the re-weighted source domain and the target domain. The authors then use this to formulate an objective function. The reviewers generally liked the paper for its theoretical results, but found the empirical evaluation somewhat lacking, as do I. Especially the unsupervised domain adaptation results are very toy-ish in nature (synthetic data), whereas the literature in this field, cited by the authors, does significantly larger scale experiments. I am unsure as to how much I value I can place in the IHDP results since I am not familiar with the benchmark (and hence my lower confidence in the recommendation). Finally, I am not very convinced that this is the appropriate venue for this work, despite containing some interesting results.
train
[ "H1HywYblM", "ByozI_rlG", "ryOA0TKgG", "BkGC4OpQM", "r1pGQ_aQM", "r1HF7_pQM", "Skwvf_pmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper proposes a novel way of causal inference in situations where in causal SEM notation the outcome Y = f(T,X) is a function of a treatment T and covariates X. The goal is to infer the treatment effect E(Y|T=1,X=x) - E(Y|T=0,X=x) for binary treatments at every location x. If the treatment effect can be learn...
[ 5, 8, 7, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_B1X4DWWRb", "iclr_2018_B1X4DWWRb", "iclr_2018_B1X4DWWRb", "ByozI_rlG", "ryOA0TKgG", "H1HywYblM", "iclr_2018_B1X4DWWRb" ]
iclr_2018_HJ4IhxZAb
Meta-Learning Transferable Active Learning Policies by Deep Reinforcement Learning
Active learning (AL) aims to enable training high-performance classifiers with low annotation cost by predicting which subset of unlabelled instances would be most beneficial to label. The importance of AL has motivated extensive research, proposing a wide variety of manually designed AL algorithms with diverse theoretical and intuitive motivations. In contrast to this body of research, we propose to treat active learning algorithm design as a meta-learning problem and learn the best criterion from data. We model an active learning algorithm as a deep neural network that inputs the base learner state and the unlabelled point set and predicts the best point to annotate next. Training this active query policy network with reinforcement learning produces the best non-myopic policy for a given dataset. The key challenge in achieving a general solution to AL then becomes that of learner generalisation, particularly across heterogeneous datasets. We propose a multi-task dataset-embedding approach that allows dataset-agnostic active learners to be trained. Our evaluation shows that AL algorithms trained in this way can directly generalize across diverse problems.
rejected-papers
In general, this seems like a sensible idea, but in my opinion the empirical results do not show a very compelling margin between using *entropy* as an active learning selection criterion vs the proposed methods. The difference is small enough that in practice it is very hard for me to believe that many researchers would choose to use the meta-learning via deep RL method (given that they'd need to train on multiple datasets and tune REINFORCE which is not going to be obviously easy). For that reason I am inclined to reject the paper. In a follow-up version, I would heed the advice of Reviewer 1 and do more ablation analyses to understand the value of myopic vs non-myopic, cross-dataset vs. not, bandits vs RL, on the fly vs not (these are all intermingled issues). The relative lack of such analyses in the paper does not help in terms of it passing the bar.
train
[ "By1MecNBG", "HkaYd1yHM", "H1g6bb9gG", "SJEDvEvez", "rki3FqilM", "rklZUu6Xf", "SydBHdTXM", "rkXXBdTmz", "ByEkBd67G" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Sorry about the confusion, this was our oversight. We will correct the inaccurate sentence. \n\nWe also agree T-LSA is relevant for comparison, and we are running the experiment now and will add it to the final version. To contrast them explicitly, we expect MAP-GAL to perform better: (i) Due to non-myopic RL lear...
[ -1, -1, 6, 7, 6, -1, -1, -1, -1 ]
[ -1, -1, 3, 4, 4, -1, -1, -1, -1 ]
[ "HkaYd1yHM", "rklZUu6Xf", "iclr_2018_HJ4IhxZAb", "iclr_2018_HJ4IhxZAb", "iclr_2018_HJ4IhxZAb", "rki3FqilM", "H1g6bb9gG", "SJEDvEvez", "iclr_2018_HJ4IhxZAb" ]
iclr_2018_SktLlGbRZ
CyCADA: Cycle-Consistent Adversarial Domain Adaptation
Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.
rejected-papers
I concur with two of the reviewers: the work is somewhat incremental in terms of technical novelty (it's effectively CycleGANs for domain adaptation with a couple of effective tricks) and the need/advantage of the cycle consistency loss is not demonstrated sufficiently. The only solid ablation evidence seems to be the SVHN-->MNIST experiment from post-submission; I would personally like to see this kind of empirical proof extended much further (the fact that Shrivastava et al.'s method doesn't work well on GTA-->Cityscapes is not itself proof that cycle consistency is needed). With more empirical evidence I can see this paper being a good candidate for a computer vision conference like CVPR or ICCV.
train
[ "S1Elwq_xf", "SyFscqngM", "S14j0RTxM", "B1upY6WmM", "SJGM5T-Xz", "S1bP5aZQG", "HyQBYTbXz", "SyqyFTWmM", "Sy-YBUn1G", "BJxW87myM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public" ]
[ "This paper proposed a domain adaptation approach by extending the CycleGAN with 1) task specific loss functions and 2) loss imposed over both pixels and features. Experiments on digit recognition and semantic segmentation verify the effectiveness of the proposed method.\n\nStrengths:\n+ It is a natural and intuiti...
[ 5, 5, 9, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SktLlGbRZ", "iclr_2018_SktLlGbRZ", "iclr_2018_SktLlGbRZ", "SyFscqngM", "S1Elwq_xf", "S14j0RTxM", "Sy-YBUn1G", "BJxW87myM", "BJxW87myM", "iclr_2018_SktLlGbRZ" ]
iclr_2018_SyhRVm-Rb
Automatic Goal Generation for Reinforcement Learning Agents
Reinforcement learning (RL) is a powerful technique to train an agent to perform a task. However, an agent that is trained using RL is only capable of achieving the single task that is specified via its reward function. Such an approach does not scale well to settings in which an agent needs to perform a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. Instead, we propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing in its environment. We use a generator network to propose tasks for the agent to try to achieve, each task being specified as reaching a certain parametrized subset of the state-space. The generator network is optimized using adversarial training to produce tasks that are always at the appropriate level of difficulty for the agent. Our method thus automatically produces a curriculum of tasks for the agent to learn. We show that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment (Videos and code available at: https://sites.google.com/view/goalgeneration4rl). Our method can also learn to achieve tasks with sparse rewards, which pose significant challenges for traditional RL methods.
rejected-papers
In principle, the idea behind the submission is sound: use a generative model (GANs in this case) to learn to generate desirable "goals" (subsets of the state space) and use that instead of uniform sampling for goals. Overall I tend to agree with Reviewer 3 in that the current set of results is not convincing in terms of it being able to generate goals in a high-dimensional state space, which seems to be the whole raison d'etre of GANs in this proposed method. The coverage experiment in Figure 5 seems like a good *illustration* of the method, but for this work to be convincing, I think we would need a more diverse set of experiments (a la Figure 2) showing how this method performs on complicated tasks. I encourage the authors to sharpen the definitions, as suggested by reviewers, and, if possible, provide experiments where the Assumptions being made in Section 3.3 are *violated* somehow (to actually test how the method fails in those cases).
val
[ "S1kxi6OlM", "S1m5kPUrz", "S10H-jEBG", "rJg5hxtgf", "Syx7RZ9eG", "HyhA3Pgmf", "ry2eoPlmG", "ry6s9De7f", "HkhKcvgXf", "BkuuqPlXz", "Sy98cwe7M", "BknhKwg7G", "HJvjFPl7G", "HkecKvemf", "H1TDFDeQG", "HyvUFDxXz", "HyaEYDx7z", "BkvMYveXM", "H1Oi_PgQG", "HyywOvxmM", "B1OrdweQz" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "In general I find this to be a good paper and vote for acceptance. The paper is well-written and easy to follow. The proposed approach is a useful addition to existing literature.\n\nBesides that I have not much to say except one point I would like to discuss:\n\nIn 4.2 I am not fully convinced of using an advers...
[ 8, -1, -1, 4, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyhRVm-Rb", "S10H-jEBG", "H1Oi_PgQG", "iclr_2018_SyhRVm-Rb", "iclr_2018_SyhRVm-Rb", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "S1kxi6OlM", "...
iclr_2018_ryj0790hb
Incremental Learning through Deep Adaptation
Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added task, typically as many as the original network. We propose a method called Deep Adaptation Networks (DAN) that constrains newly learned filters to be linear combinations of existing ones. DANs preserve performance on the original task, require a fraction (typically 13%) of the number of parameters compared to standard fine-tuning procedures and converge in fewer cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.
rejected-papers
This work tackles an important problem of incremental learning and does so with extensive experimentation. As pointed out by two reviewers, the idea does seem novel and interesting, but the submission would require some rewriting before being potentially accepted at a venue like ICLR. I suggest focusing the paper more on the task-incremental learning aspects, doing the ablation studies (and other changes) as requested by the reviewers, and having a rich appendix with details (with more discussion in the paper itself).
train
[ "BJJTve9gM", "HyOveS5gf", "HyK6w83xM", "rJsPGMmbz", "HyGLffmbM", "S1PmnJmbM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes to adapt convnet representations to new tasks while avoiding catastrophic forgetting by learning a per-task “controller” specifying weightings of the convolution-al filters throughout the network while keeping the filters themselves fixed.\n\n\nPros\n\nThe proposed approach is novel and broadly...
[ 6, 4, 5, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_ryj0790hb", "iclr_2018_ryj0790hb", "iclr_2018_ryj0790hb", "BJJTve9gM", "HyOveS5gf", "iclr_2018_ryj0790hb" ]
iclr_2018_r1DPFCyA-
Discriminative k-shot learning using probabilistic models
This paper introduces a probabilistic framework for k-shot image classification. The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples. The new approach not only leverages the feature-based representation learned by a neural network from the initial task (representational transfer), but also information about the classes (concept transfer). The concept information is encapsulated in a probabilistic model for the final layer weights of the neural network which acts as a prior for probabilistic k-shot learning. We show that even a simple probabilistic model achieves state-of-the-art on a standard k-shot learning dataset by a large margin. Moreover, it is able to accurately model uncertainty, leading to well calibrated classifiers, and is easily extensible and flexible, unlike many recent approaches to k-shot learning.
rejected-papers
This submission presents intriguingly good results on k-shot learning and I agree with the authors that the results are better than the presented previous work, and that the method is simple, so I took a deeper look into the paper despite the overall negative reviews. However, I think in its current form, the paper is not suitable for publication: - The previous work, that the authors compare to, were not really using comparable architectures: in fact, likely much worse base models with fewer parameters etc. I think any future version of this work would need to control for architecture capacity, otherwise how can we be sure where the gains come from? To me, this is a major unknown in terms of the credit assignment for the great results. - The authors should be comparing with MAML (and follow-up work) by Finn et al. (2017) - I don't really understand why the authors claim to have no need for validation sets. That's a very strong claim: are ALL the hyper-parameters (model architectures etc) just chosen in another, principled way? This issue would definitely need to be addressed in a follow-up work.
train
[ "rJ60euDeG", "SyJRNAKeG", "SknsYOMZf", "ryuLjYUfG", "HJsAYK8Gz", "r1hd5K8zG", "S1Wl_KLfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper presents a procedure to efficiently do K-shot learning in a classification setting by creating informative priors from information learned from a large, fully labeled dataset. Image features are learned using a standard convolutional neural network---the last layer form image features, while the last s...
[ 5, 5, 5, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_r1DPFCyA-", "iclr_2018_r1DPFCyA-", "iclr_2018_r1DPFCyA-", "SknsYOMZf", "SyJRNAKeG", "rJ60euDeG", "iclr_2018_r1DPFCyA-" ]
iclr_2018_rkeZRGbRW
Variance Regularizing Adversarial Learning
We study how, in generative adversarial networks, variance in the discriminator's output affects the generator's ability to learn the data distribution. In particular, we contrast the results from various well-known techniques for training GANs when the discriminator is near-optimal and updated multiple times per update to the generator. As an alternative, we propose an additional method to train GANs by explicitly modeling the discriminator's output as a bi-modal Gaussian distribution over the real/fake indicator variables. In order to do this, we train the Gaussian classifier to match the target bi-modal distribution implicitly through meta-adversarial training. We observe that our new method, when trained together with a strong discriminator, provides meaningful, non-vanishing gradients.
rejected-papers
The reviewers found a number of shortcomings in this work that would prevent it from being accepted at ICLR in its current form, both in terms of writing (not specifying the loss function), experiments that are too limited, and inconclusive comparisons with existing regularization techniques. I recommend the authors take into account the feedback from reviewers in any follow-up submissions.
train
[ "HyZBE0IlM", "ByjCp4qgM", "HkxvlUclG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies how the variance of the discriminator affect the gradient signal provided to the generator and therefore how it might limit its ability to learn the true data distribution.\n\nThe approach suggested in this paper models the output of the discriminator using a mixture of two Gaussians (one for “f...
[ 5, 4, 6 ]
[ 4, 4, 3 ]
[ "iclr_2018_rkeZRGbRW", "iclr_2018_rkeZRGbRW", "iclr_2018_rkeZRGbRW" ]
iclr_2018_H1BO9M-0Z
Lifelong Word Embedding via Meta-Learning
Learning high-quality word embeddings is of significant importance in achieving better performance in many down-stream learning tasks. On one hand, traditional word embeddings are trained on a large scale corpus for general-purpose tasks, which are often sub-optimal for many domain-specific tasks. On the other hand, many domain-specific tasks do not have a large enough domain corpus to obtain high-quality embeddings. We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings. In this paper, we formulate the learning of word embeddings as a lifelong learning process. Given knowledge learned from many previous domains and a small new domain corpus, the proposed method can effectively generate new domain embeddings by leveraging a simple but effective algorithm and a meta-learner, where the meta-learner is able to provide word context similarity information at the domain-level. Experimental results demonstrate that the proposed method can effectively learn new domain embeddings from a small corpus and past domain knowledge\footnote{We will release the code after final revisions.}. We also demonstrate that general-purpose embeddings trained from a large scale corpus are sub-optimal in domain-specific tasks.
rejected-papers
While the problem of learning word embeddings for a new domain is important, the proposed method was found to be unclearly presented and missing a number of important baselines. The reviewers found the technical contribution to be of only limited value.
val
[ "HyyI-JYlM", "SktcRWIgf", "HJ8Q-8deM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a lifelong learning method for learning word embeddings. Given a new domain of interest, the method leverages previously seen domains in order to hopefully generate better embeddings compared to ones computed over just the new domain, or standard pre-trained embeddings.\n\nThe general problem ...
[ 4, 5, 3 ]
[ 4, 4, 4 ]
[ "iclr_2018_H1BO9M-0Z", "iclr_2018_H1BO9M-0Z", "iclr_2018_H1BO9M-0Z" ]
iclr_2018_BJB7fkWR-
Domain Adaptation for Deep Reinforcement Learning in Visually Distinct Games
Many deep reinforcement learning approaches use graphical state representations; this means visually distinct games that share the same underlying structure cannot effectively share knowledge. This paper outlines a new approach for learning underlying game state embeddings irrespective of the visual rendering of the game state. We utilise approaches from multi-task learning and domain adaptation in order to place visually distinct game states on a shared embedding manifold. We present our results in the context of deep reinforcement learning agents.
rejected-papers
The reviewers have found that while the task of visual domain adaptation is meaningful to explore and improve, the proposed method is not sufficiently well-motivated, explained or empirically tested.
train
[ "H1xFygmyz", "SyZl4CKeM", "BJ5RWXilM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose a new approach for learning underlying structure of visually distinct games.\n\nThe proposed approach combines convolutional layers for processing input images, Asynchronous Advantage Actor Critic for deep reinforcement learning task and adversarial approach to force the embeddin...
[ 3, 2, 4 ]
[ 3, 4, 5 ]
[ "iclr_2018_BJB7fkWR-", "iclr_2018_BJB7fkWR-", "iclr_2018_BJB7fkWR-" ]
iclr_2018_B1suU-bAW
Learning Covariate-Specific Embeddings with Tensor Decompositions
Word embedding is a useful approach to capture co-occurrence structures in a large corpus of text. In addition to the text data itself, we often have additional covariates associated with individual documents in the corpus---e.g. the demographic of the author, time and venue of publication, etc.---and we would like the embedding to naturally capture the information of the covariates. In this paper, we propose a new tensor decomposition model for word embeddings with covariates. Our model jointly learns a \emph{base} embedding for all the words as well as a weighted diagonal transformation to model how each covariate modifies the base embedding. To obtain the specific embedding for a particular author or venue, for example, we can then simply multiply the base embedding by the transformation matrix associated with that author or venue. The main advantages of our approach are data efficiency and interpretability of the covariate transformation matrix. Our experiments demonstrate that our joint model learns substantially better embeddings conditioned on each covariate compared to the standard approach of learning a separate embedding for each covariate using only the relevant subset of data. Furthermore, our model encourages the embeddings to be ``topic-aligned'' in the sense that the dimensions have specific independent meanings. This allows our covariate-specific embeddings to be compared by topic, enabling downstream differential analysis. We empirically evaluate the benefits of our algorithm on several datasets, and demonstrate how it can be used to address many natural questions about the effects of covariates.
rejected-papers
The reviewers agree that this paper provides a sensible mechanism for producing word embeddings that exploit correlating features in the data (e.g. texts written by the same author), but point to other work doing the same thing. The lack of direct comparison in the experimental section is troublesome, although it is entirely possible the authors were not aware of related work. Unfortunately, the lack of an author response to the reviews makes it hard to see the argument in defense of this paper, and I must recommend rejection.
train
[ "HkqIGZGlG", "HkIjbYOxz", "SkNrPRFgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper produces word embedding tensors where the third order gives covariate information, via venue or author. The model is simple: tensor factorization, where the covariate can be viewed as warping the cosine distance to favor that covariate's more commonly cooccuring vocabulary (e.g. trump on hillary and cro...
[ 5, 5, 5 ]
[ 3, 5, 4 ]
[ "iclr_2018_B1suU-bAW", "iclr_2018_B1suU-bAW", "iclr_2018_B1suU-bAW" ]
iclr_2018_S16FPMgRZ
Tensor Contraction & Regression Networks
Convolutional neural networks typically consist of many convolutional layers followed by several fully-connected layers. While convolutional layers map between high-order activation tensors, the fully-connected layers operate on flattened activation vectors. Despite its success, this approach has notable drawbacks. Flattening discards the multi-dimensional structure of the activations, and the fully-connected layers require a large number of parameters. We present two new techniques to address these problems. First, we introduce tensor contraction layers which can replace the ordinary fully-connected layers in a neural network. Second, we introduce tensor regression layers, which express the output of a neural network as a low-rank multi-linear mapping from a high-order activation tensor to the softmax layer. Both the contraction and regression weights are learned end-to-end by backpropagation. By imposing low rank on both, we use significantly fewer parameters. Experiments on the ImageNet dataset show that applied to the popular VGG and ResNet architectures, our methods significantly reduce the number of parameters in the fully connected layers (about 65% space savings) while negligibly impacting accuracy.
rejected-papers
This paper proposes methods for replacing parts of neural networks with tensors, the values of which are efficiently estimated through factorisation methods. The paper is well written and clear, but the two main objections from reviewers surround the novelty and evaluation of the method proposed. I am conscious that the authors have responded to reviewers on the topic of novelty, but the case could be made more strongly in the paper, perhaps by showing significant improvements over alternatives. The evaluation was considered weak by reviewers, in particular due to the lack of comparable baselines. Interesting work, but I'm afraid on the basis of the reviews, I must recommend rejection.
test
[ "SJ3JSwBlG", "rysUKSIxM", "Byz0IGvgz", "SJl_QRsmM", "rJP6M0oQz", "ry1Kf0j7G", "ryINzRoXz", "SyZphSqeM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "In this paper, new layer architectures of neural networks using a low-rank representation of tensors are proposed. The main idea is assuming Tucker-type low-rank assumption for both a weight and an input. The performance is evaluated with toy data and Imagenet.\n\n[Clarity]\nThe paper is well written and easy to f...
[ 4, 6, 4, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S16FPMgRZ", "iclr_2018_S16FPMgRZ", "iclr_2018_S16FPMgRZ", "SyZphSqeM", "rysUKSIxM", "SJ3JSwBlG", "Byz0IGvgz", "iclr_2018_S16FPMgRZ" ]
iclr_2018_rkGZuJb0b
Compact Neural Networks based on the Multiscale Entanglement Renormalization Ansatz
The goal of this paper is to demonstrate a method for tensorizing neural networks based upon an efficient way of approximating scale invariant quantum states, the Multi-scale Entanglement Renormalization Ansatz (MERA). We employ MERA as a replacement for linear layers in a neural network and test this implementation on the CIFAR-10 dataset. The proposed method outperforms factorization using tensor trains, providing greater compression for the same level of accuracy and greater accuracy for the same level of compression. We demonstrate MERA-layers with 3900 times fewer parameters and a reduction in accuracy of less than 1% compared to the equivalent fully connected layers.
rejected-papers
This paper proposes a tree-structured tensor factorisation method for parameter reduction. The reviewers felt the paper was somewhat interesting, but agreed that more detail was needed in the method description, and that the experiments were on the whole uninformative. This seems like a promising research direction which needs more empirical work, but is not ready for publication as is.
train
[ "HJfixHulz", "B1TCDcOxG", "SyR6NUcxz", "SyMJMna7f", "By4ce367M", "S1UvxhaXz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "In the paper the authors suggest to use MERA tensorization technique for compressing neural networks. MERA itself is a known framework in QM but not in ML. Although the idea seems to be fruitful and interesting I find the paper quite unclear. The most important part is section 2 which presents the methodology used...
[ 5, 5, 4, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_rkGZuJb0b", "iclr_2018_rkGZuJb0b", "iclr_2018_rkGZuJb0b", "HJfixHulz", "B1TCDcOxG", "SyR6NUcxz" ]
iclr_2018_B1bgpzZAZ
ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions
The task of Reading Comprehension with Multiple Choice Questions, requires a human (or machine) to read a given \{\textit{passage, question}\} pair and select one of the n given options. The current state of the art model for this task first computes a query-aware representation for the passage and then \textit{selects} the option which has the maximum similarity with this representation. However, when humans perform this task they do not just focus on option selection but use a combination of \textit{elimination} and \textit{selection}. Specifically, a human would first try to eliminate the most irrelevant option and then read the document again in the light of this new information (and perhaps ignore portions corresponding to the eliminated option). This process could be repeated multiple times till the reader is finally ready to select the correct option. We propose \textit{ElimiNet}, a neural network based model which tries to mimic this process. Specifically, it has gates which decide whether an option can be eliminated given the \{\textit{document, question}\} pair and if so it tries to make the document representation orthogonal to this eliminated option (akin to ignoring portions of the document corresponding to the eliminated option). The model makes multiple rounds of partial elimination to refine the document representation and finally uses a selection module to pick the best option. We evaluate our model on the recently released large scale RACE dataset and show that it outperforms the current state of the art model on 7 out of the 13 question types in this dataset. Further we show that taking an ensemble of our \textit{elimination-selection} based method with a \textit{selection} based method gives us an improvement of 7\% (relative) over the best reported performance on this dataset.
rejected-papers
This paper provides a method for eliminating options in multiple-answer reading comprehension tasks, based on the contents of the text, in order to reduce the "answer space" a machine reading model must consider. While there's nothing wrong with this, conceptually, reviewers have questioned whether or not this is a particularly useful process to include in a machine reading pipeline, versus having agents that understand the text well enough to select the correct answer (which is, after all, the primary goal of machine reading). Some reviewers were uncomfortable with the choice of dataset (suggesting SQuAD might be a better alternative), and while I am not sure I agree with that recommendation, it would be good to see stronger positive results on more than one dataset. At the end of the day, it is the lack of convincing experimental results showing that this method yields substantial improvements over comparable baselines which does the most harm to this well written paper, and I must recommend rejection.
train
[ "SyfPjhYef", "HkHGUsPef", "HJViVF5gf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper gives an elaboration on the Gated Attention Reader (GAR) adding gates based on answer elimination in multiple choice reading comprehension. I found the formal presentation of the model reasonably clear the the empirical evaluation reasonably compelling.\n\nIn my opinion the main weakness of the paper i...
[ 5, 5, 4 ]
[ 3, 3, 4 ]
[ "iclr_2018_B1bgpzZAZ", "iclr_2018_B1bgpzZAZ", "iclr_2018_B1bgpzZAZ" ]
iclr_2018_ryF-cQ6T-
Machine Learning by Two-Dimensional Hierarchical Tensor Networks: A Quantum Information Theoretic Perspective on Deep Architectures
The resemblance between the methods used in studying quantum-many body physics and in machine learning has drawn considerable attention. In particular, tensor networks (TNs) and deep learning architectures bear striking similarities to the extent that TNs can be used for machine learning. Previous results used one-dimensional TNs in image recognition, showing limited scalability and requiring a high bond dimension. In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz (MERA). This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning. While keeping the TN unitary in the training phase, TN states can be defined, which optimally encodes each class of the images into a quantum many-body state. We study the quantum features of the TN states, including quantum entanglement and fidelity. We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks. Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods.
rejected-papers
This paper seeks to integrate tensor-based models from physics into machine learning architectures. The two main objections to this paper are first that, despite honest (I assume) efforts from the authors, it remains somewhat hard to understand without substantial background knowledge of physics. Second, that the experiments focus on MNIST and CIFAR image classification tasks, two datasets where linear models perform with high accuracy, and as such are unsuitable for properly evaluating the claims made about the models in this paper. Unfortunately, it does not seem there is sufficient enthusiasm for this paper amongst the reviewers to justify its inclusion in the conference.
train
[ "rycZrCJef", "S17TnsFez", "rkd7rq6gf", "Bylqi1qQz", "HJoitKU7G", "r1LNXkUmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Authors of this paper derived an efficient quantum-inspired learning algorithm based on a hierarchical representation that is known as tree tensor network, which is inspired by the multipartite entanglement renormalization ansatz approach where the tensors in the TN are kept to be unitary during training. Some obs...
[ 6, 4, 3, -1, -1, -1 ]
[ 3, 3, 2, -1, -1, -1 ]
[ "iclr_2018_ryF-cQ6T-", "iclr_2018_ryF-cQ6T-", "iclr_2018_ryF-cQ6T-", "S17TnsFez", "rycZrCJef", "rkd7rq6gf" ]
iclr_2018_HyHmGyZCZ
Comparison of Paragram and GloVe Results for Similarity Benchmarks
Distributional Semantics Models (DSM) derive a word space from linguistic items in context. Meaning is obtained by defining a distance measure between vectors corresponding to lexical entities. Such vectors present several problems. This work concentrates on the quality of word embeddings, the improvement of word embedding vectors, and the applicability of a novel similarity metric used ‘on top’ of the word embeddings. In this paper we provide a comparison between two methods for post-process improvements to the baseline DSM vectors. The counter-fitting method, which enforces antonymy and synonymy constraints on the Paragram vector space representations, recently showed improvement in the vectors’ capability for judging semantic similarity. The second method is our novel RESM method applied to GloVe baseline vectors. By applying the hubness reduction method, implementing relational knowledge into the model by retrofitting synonyms, and providing a new ranking similarity definition RESM that gives maximum weight to the top vector component values, we equal the results for the ESL and TOEFL sets in comparison with our calculations using the Paragram and Paragram + Counter-fitting methods. For the SIMLEX-999 gold standard, since we cannot use RESM, the results using GloVe and PPDB are significantly worse compared to Paragram. Apparently, counter-fitting corrects hubness. The Paragram or our cosine retrofitting method are state-of-the-art results for the SIMLEX-999 gold standard. They are 0.2 better for SIMLEX-999 than word2vec with sense de-conflation (which was announced as the state-of-the-art method for less reliable gold standards). Apparently relational knowledge and counter-fitting are more important for judging semantic similarity than sense determination for words. It should be mentioned, though, that Paragram hyperparameters are fitted to SIMLEX-999 results.
The lesson is that many corrections to word embeddings are necessary and methods with more parameters and hyperparameters perform better.
rejected-papers
This paper proposes a method for refining distributional semantic representation at the lexical level. The reviews are fairly unanimous in that they found both the initial version of the paper, which was deemed quite rushed, and the substantial revision unworthy of publication in their current state. The weakness of both the motivation and the experimental results, as well as the lack of a clear hypothesis being tested, or of an explanation as to why the proposed method should work, indicates that this work needs revision and further evaluation beyond what is possible for this conference. I unfortunately must recommend rejection.
train
[ "S1ZbRMqlM", "HJmKXVcgz", "SJWbIA3eG", "SJcyMXTmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "The paper suggests taking GloVe word vectors, adjust them, and then use a non-Euclidean similarity function between them. The idea is tested on very small data sets (80 and 50 examples, respectively). The proposed techniques are a combination of previously published steps, and the new algorithm fails to reach stat...
[ 2, 4, 3, -1 ]
[ 4, 5, 4, -1 ]
[ "iclr_2018_HyHmGyZCZ", "iclr_2018_HyHmGyZCZ", "iclr_2018_HyHmGyZCZ", "iclr_2018_HyHmGyZCZ" ]
iclr_2018_SJlhPMWAW
GraphVAE: Towards Generation of Small Graphs Using Variational Autoencoders
Deep learning on graphs has become a popular research topic with many applications. However, past work has concentrated on learning graph embedding tasks only, which is in contrast with advances in generative models for images and text. Is it possible to transfer this progress to the domain of graphs? We propose to sidestep hurdles associated with linearization of such discrete structures by having the decoder directly output a probabilistic fully-connected graph of a predefined maximum size in one shot. Our method is formulated as a variational autoencoder. We evaluate on the challenging task of conditional molecule generation.
rejected-papers
The authors present GraphVAE, a method for fitting a generative deep model, a variational autoencoder, to small graphs. Fitting deep learning models to graphs remains challenging (although there is relevant literature as brought up by the reviewers and anonymous comments) and this paper is a strong start. In weighing the various reviews, AnonReviewer3 is weighed more highly than AnonReviewer1 and AnonReviewer2 since that review is far more thorough and the reviewer is more expert on this subject. Unfortunately, the review from AnonReviewer1 is extremely short and of very low confidence. As such, this paper sits just below the borderline for acceptance. In general, the main criticisms of the paper are that some claims are too strong (e.g. non-differentiability of discrete structures), treatment of related work (missing references, etc.) and weak experiments and baselines. The consensus among the reviews (even AnonReviewer2) is that the paper is preliminary. The paper is close, however, and addressing these concerns will make the paper much stronger. Pros: - Proposes a method to build a generative deep model of graphs - Addresses a timely and interesting topic in deep learning - Exposition is clear Cons: - Treatment of related literature should be improved - Experiments and baselines are somewhat weak - "Preliminary" - Only works on rather small graphs (i.e. O(k^4) for graphs with k nodes)
train
[ "rJa-njiVz", "S1ccEH6VM", "SkfkNHa4z", "B1ubmvfZM", "ryqbAq3VG", "B1oxoZnEM", "r1W-8-8EG", "rJZdWiUxz", "ByvkN-k-G", "SJB8_E2zz", "Hypz2G3Gz", "H1lPszhMM", "H1KfiM3MM", "Hk1_cfnzf", "BkVdoCJZG" ]
[ "public", "author", "official_reviewer", "official_reviewer", "author", "author", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Interesting paper. How important is the graph matching layer to the whole network? There are recent graph matching methods that have been shown to outperform MPM (such as this one http://openaccess.thecvf.com/content_cvpr_2017/papers/Le-Huu_Alternating_Direction_Graph_CVPR_2017_paper.pdf). It is worth investigatin...
[ -1, -1, -1, 5, -1, -1, -1, 7, 7, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 3, -1, -1, -1, 2, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJlhPMWAW", "SkfkNHa4z", "Hypz2G3Gz", "iclr_2018_SJlhPMWAW", "rJa-njiVz", "r1W-8-8EG", "iclr_2018_SJlhPMWAW", "iclr_2018_SJlhPMWAW", "iclr_2018_SJlhPMWAW", "iclr_2018_SJlhPMWAW", "B1ubmvfZM", "BkVdoCJZG", "ByvkN-k-G", "rJZdWiUxz", "iclr_2018_SJlhPMWAW" ]
iclr_2018_S1fcY-Z0-
Bayesian Hypernetworks
We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks. A Bayesian hypernetwork, h, is a neural network which learns to transform a simple noise distribution, p(e) = N(0,I), to a distribution q(t) := q(h(e)) over the parameters t of another neural network (the ``primary network''). We train q with variational inference, using an invertible h to enable efficient estimation of the variational lower bound on the posterior p(t | D) via sampling. In contrast to most methods for Bayesian deep learning, Bayesian hypernets can represent a complex multimodal approximate posterior with correlations between parameters, while enabling cheap iid sampling of q(t). In practice, Bayesian hypernets provide a better defense against adversarial examples than dropout, and also exhibit competitive performance on a suite of tasks which evaluate model uncertainty, including regularization, active learning, and anomaly detection.
rejected-papers
This paper presents a new method for approximate Bayesian inference in neural networks. The reviewers all found the proposed idea interesting but originally had questions about its novelty (with regard to normalizing flows) and questioned the technical rigor of the approach. The authors did a good job of addressing the technical concerns, causing two of the reviewers to raise their scores. However, the paper remains just borderline and none of the reviewers are willing to champion the paper as their questions about novelty and empirical evaluation remain. The reviewers all questioned fundamental technical aspects of the paper (which were clarified in the discussion), indicating that the paper requires more careful exposition of the technical contributions. Taking the reviewers feedback and discussion into account, running some more compelling experiments and rewriting the paper to make the technical aspects more clear would make this a much stronger submission. Pros: - Provides an interesting idea for approximate Bayesian inference in deep networks - The paper appears correct - The approach is scalable and tractable Cons: - The technical writing is not rigorous - The reviewers don't seem convinced by the empirical analysis - Incremental over existing (but recent) work (Luizos and Welling)
train
[ "HJC7ApOHM", "HJPY0ycef", "Hyjt0II4M", "rJNPwwYef", "B1Ev6EUEz", "Hy5hZMulM", "Skyy9P6Qz", "BJIlL5aGM", "B1H3S5pGG", "HknN5vp7z", "B1Z75Ppmf", "HJgbqvp7M", "SJXrL9TGM", "B1_f8G0fG", "HkMZb19ef", "HkpJDjmez" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public", "public", "public", "public", "public", "public", "public", "official_reviewer" ]
[ "We agree that *mechanically*, the procedure for sampling the posterior in MNF and BHN is very similar, to whit:\n1. in BHNs, we sample the (scaling factors of the) parameters directly; this is equivalent to scaling units’ pre-activations.\n2. in MNF, they sample z (which can be viewed as a scaling factor of the ac...
[ -1, 6, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "HJPY0ycef", "iclr_2018_S1fcY-Z0-", "B1Z75Ppmf", "iclr_2018_S1fcY-Z0-", "HknN5vp7z", "iclr_2018_S1fcY-Z0-", "iclr_2018_S1fcY-Z0-", "iclr_2018_S1fcY-Z0-", "HkpJDjmez", "Hy5hZMulM", "rJNPwwYef", "HJPY0ycef", "HkMZb19ef", "SJXrL9TGM", "iclr_2018_S1fcY-Z0-", "iclr_2018_S1fcY-Z0-" ]
iclr_2018_SkqV-XZRZ
Variational Bi-LSTMs
Recurrent neural networks like long short-term memory (LSTM) are important architectures for sequential prediction tasks. LSTMs (and RNNs in general) model sequences along the forward time direction. Bidirectional LSTMs (Bi-LSTMs), which model sequences along both forward and backward directions, generally perform better at such tasks because they capture a richer representation of the data. In the training of Bi-LSTMs, the forward and backward paths are learned independently. We propose a variant of the Bi-LSTM architecture, which we call Variational Bi-LSTM, that creates a dependence between the two paths (during training, but which may be omitted during inference). Our model acts as a regularizer and encourages the two networks to inform each other in making their respective predictions using distinct information. We perform ablation studies to better understand the different components of our model and evaluate the method on various benchmarks, showing state-of-the-art performance.
rejected-papers
This paper proposes a method for performing stochastic variational inference for bidirectional LSTMs through introducing an additional latent variable that induces a dependence between the forward and backward directions. The authors demonstrate that their method achieves very strong empirical performance (log-likelihood on test data) on the benchmark TIMIT and BLIZZARD datasets. The paper is borderline in terms of scores with a 7, 6 and 4. Unfortunately the highest rating also corresponds to the least thorough review and that review seems to indicate that the reviewer found the technical exposition confusing. AnonReviewer2 also found the writing confusing and discovered mistakes in the technical aspects of the paper (e.g. in Eq 1). Unfortunately, the reviewer who seemed to find the paper most easy to understand also gave the lowest score. A trend among the reviewers and anonymous comments was that the paper didn't do a good enough job of placing itself in the context of related work (Goyal et al., "Z-forcing") in particular. The authors seem to have addressed this (curiously in an anonymous link and not in an updated manuscript) but the manuscript itself has not been updated. In general, this paper presents an interesting idea with strong empirical results. The paper itself is not well composed, however, and can be improved upon significantly. Taking the reviews into account and including a better treatment of related work in writing and empirically will make this a much stronger paper. Pros: - Strong empirical performance (log-likelihood on test data) - A neat idea - Deep generative models are of great interest to the community Cons: - Incremental in relation to Goyal et al., 2017 - Needs better treatment of related work - The writing is confusing and the technical exposition is not clear enough
test
[ "HyrTqpoVf", "Hy7uO8PlG", "HJg6l0FxM", "H1BQNZqgf", "SkL8dLWEf", "SkeZ-Btff", "BJRGgSFff", "ByA_NNKzz", "rJU7Epmef", "SknDf6mxz", "S1-LoAGxM", "H1RmckMeM", "rkK4-0bgG", "ByTDAogxG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "author", "public" ]
[ "We apologize for the confusion. If we consider the Jensen's inequality derived from the term \\log p(b,h) as a starting point, then your argument about using alpha and beta equal to 1 would be absolutely correct. However, we do not have the term \\log p(b,h) in the original objective (equation 4). To arrive at our...
[ -1, 4, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SkL8dLWEf", "iclr_2018_SkqV-XZRZ", "iclr_2018_SkqV-XZRZ", "iclr_2018_SkqV-XZRZ", "SkeZ-Btff", "H1BQNZqgf", "HJg6l0FxM", "Hy7uO8PlG", "H1RmckMeM", "S1-LoAGxM", "iclr_2018_SkqV-XZRZ", "rkK4-0bgG", "ByTDAogxG", "iclr_2018_SkqV-XZRZ" ]
iclr_2018_Bk6qQGWRb
Efficient Exploration through Bayesian Deep Q-Networks
We propose Bayesian Deep Q-Network (BDQN), a practical Thompson sampling based Reinforcement Learning (RL) Algorithm. Thompson sampling allows for targeted exploration in high dimensions through posterior sampling but is usually computationally expensive. We address this limitation by introducing uncertainty only at the output layer of the network through a Bayesian Linear Regression (BLR) model, which can be trained with fast closed-form updates and its samples can be drawn efficiently through the Gaussian distribution. We apply our method to a wide range of Atari Arcade Learning Environments. Since BDQN carries out more efficient exploration, it is able to reach higher rewards substantially faster than a key baseline, DDQN.
rejected-papers
This work develops a methodology for exploration in deep Q-learning through Thompson sampling to learn to play Atari games. The major innovation is to perform a Bayesian linear regression on the last layer of the deep neural network mapping from frames to Q-values. This Bayesian linear regression allows for efficiently drawing (approximate) samples from the network. A careful methodology is presented that achieves impressive results on a subset of Atari games. The initial reviews all indicated that the results were impressive but questioned the rigor of the empirical analysis and the implementation of the baselines. The authors have since improved the baselines and demonstrated impressive results across more games but questions over the empirical analysis remain (by AnonReviewer3 for instance) and the results still span only a small subset of the Atari suite. The reviewers took issue with the treatment of related work, placing the contributions of this paper in relation to previous literature. In general, this paper shows tremendous promise, but is just below borderline. It is very close to a strong and impressive paper, but requires more careful empirical work and a better treatment of related work. Hopefully the reviews and the discussion process will help make the paper much stronger for a future submission. Pros: - Very impressive results on a subset of Atari games - A simple and elegant solution to achieving approximate samples from the Q-network - The paper is well written and the methodology is clearly explained Cons: - Questions remain about the rigor of the empirical analysis (comparison to baselines) - Requires more thoughtful comparison in the manuscript to related literature - The theoretical justification for the proposed methods is not strong
val
[ "SkMlZJ9gG", "rJVfxQ2lf", "HkHX9bm-M", "SkCDCP6Qz", "r1fs6DpXf", "HkPHDDpXG", "S1Ux1vpmG", "Hy4eOLaXM", "HJ5TIU6Qz", "BkisdVq7f", "rk1A7xIGz", "B1Y_pIHfM", "ryfx6L7fM", "HkUmeK2WG", "B1MgfN_WM", "H1pN_dqlz", "BJpzYoqlz", "rJnxeAYeM", "ByuOaTteM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "public", "author", "author", "public", "public" ]
[ "The authors propose a new algorithm for exploration in Deep RL. They apply Bayesian linear regression, given the last layer of a DQN network as features, to estimate the Q function for each action. Posterior weights are sampled to select actions during execution (Thompson Sampling style). I generally liked the pap...
[ 6, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Bk6qQGWRb", "iclr_2018_Bk6qQGWRb", "iclr_2018_Bk6qQGWRb", "SkMlZJ9gG", "rJVfxQ2lf", "HkHX9bm-M", "HkUmeK2WG", "BkisdVq7f", "iclr_2018_Bk6qQGWRb", "iclr_2018_Bk6qQGWRb", "B1Y_pIHfM", "ryfx6L7fM", "B1MgfN_WM", "iclr_2018_Bk6qQGWRb", "iclr_2018_Bk6qQGWRb", "ByuOaTteM", "rJnxe...
iclr_2018_By-IifZRW
Gaussian Process Neurons
We propose a method to learn stochastic activation functions for use in probabilistic neural networks. First, we develop a framework to embed stochastic activation functions based on Gaussian processes in probabilistic neural networks. Second, we analytically derive expressions for the propagation of means and covariances in such a network, thus allowing for an efficient implementation and training without the need for sampling. Third, we show how to apply variational Bayesian inference to regularize and efficiently train this model. The resulting model can deal with uncertain inputs and implicitly provides an estimate of the confidence of its predictions. Like a conventional neural network it can scale to datasets of arbitrary size and be extended with convolutional and recurrent connections, if desired.
rejected-papers
The authors propose the use of Gaussian processes as the prior over activation functions in deep neural networks. This is a purely mathematical paper in which the authors derive an efficient and scalable approach to their problem. The idea of having flexible distributions over activation functions is interesting and possibly impactful. One reviewer recommended acceptance with low confidence. The other two found the idea interesting and compelling but confidently recommended rejection. These reviewers are concerned that the paper is unnecessarily complex in terms of the mathematical exposition and that it repeats existing derivations without citation. It is very important that the authors acknowledge existing literature for mathematical derivations. Furthermore, the reviewers question the correctness of some of the statements (e.g. is the variational bound preserved?). These reviewers agreed that the paper is incomplete without any empirical validation. Pros: - A compelling and promising idea - The approach seems to be scalable and highly plausible Cons: - No experiments - Significant issues with citing of related work - Significant questions about the novelty of the mathematical work
val
[ "H1IrTpFxz", "Skf5I79gf", "Bkhq035gz", "B1WzCoQZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "The paper addresses the problem of learning the form of the activation functions in neural networks. The authors propose to place Gaussian process (GP) priors on the functional form of each activation function (each associated with a hidden layer and unit) in the neural net. This somehow allows to non-parametric...
[ 5, 4, 7, -1 ]
[ 4, 5, 2, -1 ]
[ "iclr_2018_By-IifZRW", "iclr_2018_By-IifZRW", "iclr_2018_By-IifZRW", "Skf5I79gf" ]
iclr_2018_BJlrSmbAZ
Bayesian Uncertainty Estimation for Batch Normalized Deep Networks
Deep neural networks have led to a series of breakthroughs, dramatically improving the state-of-the-art in many domains. The techniques driving these advances, however, lack a formal method to account for model uncertainty. While the Bayesian approach to learning provides a solid theoretical framework to handle uncertainty, inference in Bayesian-inspired deep neural networks is difficult. In this paper, we provide a practical approach to Bayesian learning that relies on a regularization technique found in nearly every modern network, batch normalization. We show that training a deep network using batch normalization is equivalent to approximate inference in Bayesian models, and we demonstrate how this finding allows us to make useful estimates of the model uncertainty. Using our approach, it is possible to make meaningful uncertainty estimates using conventional architectures without modifying the network or the training procedure. Our approach is thoroughly validated in a series of empirical experiments on different tasks and using various measures, showing it to outperform baselines on a majority of datasets with strong statistical significance.
rejected-papers
This paper shows that batch normalization can be cast as approximate inference in deep neural networks. This is an appealing result as batch normalization is used in practice in a wide variety of models. The reviewers found the paper well written and easy to understand and were motivated by the underlying idea. However, they found the empirical analysis lacking and found that there was not enough detail in the main text to verify whether the claims were true. The authors empirically compared to a recent method showing that dropout can be cast as approximate inference with the claim that by transitivity they were comparing to a variety of recent methods. AnonReviewer1 casts significant doubt on the results of that work. This is very unfortunate and not the fault of the authors of this paper. The authors have since gone to great lengths to compare to Louizos and Welling, 2017. Unfortunately, that comparison doesn't appear to be complete in the manuscript. The main text was also lacking specific detail relating to fundamental parts of the proposed method (noted by all reviewers). Overall, this paper seems to be tremendously promising and the underlying idea potentially very impactful. However, given the reviews, it doesn't seem that this paper would achieve its potential impact. The response from the authors is appreciated and goes a long way to improving the paper. Taking the reviews into account, adding specific detail about the methodology and model (e.g. the prior) and completing careful empirical analysis will make this a strong paper that should be much more impactful.
train
[ "Hk7HI4h1G", "Bk8cjwFgz", "Bkw2_15xz", "S12zvunmz", "Hk6h1Rt7G", "rJXxD0YXG", "rkPNCRYmM", "BknqaaYmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper proposes an approximate method to construct Bayesian uncertainty estimates in networks trained with batch normalization.\n\nThere is a lot going on in this paper. Although the overall presentation is clean, there are few key shortfalls (see below). Overall, the reported functionality is nice, although t...
[ 5, 5, 6, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJlrSmbAZ", "iclr_2018_BJlrSmbAZ", "iclr_2018_BJlrSmbAZ", "BknqaaYmM", "Bkw2_15xz", "Bk8cjwFgz", "Hk7HI4h1G", "iclr_2018_BJlrSmbAZ" ]
iclr_2018_SknC0bW0-
Continuous-fidelity Bayesian Optimization with Knowledge Gradient
While Bayesian optimization (BO) has achieved great success in optimizing expensive-to-evaluate black-box functions, especially tuning hyperparameters of neural networks, methods such as random search (Li et al., 2016) and multi-fidelity BO (e.g. Klein et al. (2017)) that exploit cheap approximations, e.g. training on smaller training data or with fewer iterations, can outperform standard BO approaches that use only full-fidelity observations. In this paper, we propose a novel Bayesian optimization algorithm, the continuous-fidelity knowledge gradient (cfKG) method, that can be used when fidelity is controlled by one or more continuous settings such as training data size and the number of training iterations. cfKG characterizes the value of the information gained by sampling a point at a given fidelity, choosing to sample at the point and fidelity with the largest value per unit cost. Furthermore, cfKG can be generalized, following Wu et al. (2017), to settings where derivatives are available in the optimization process, e.g. large-scale kernel learning, and where more than one point can be evaluated simultaneously. Numerical experiments show that cfKG outperforms state-of-the-art algorithms when optimizing synthetic functions, tuning convolutional neural networks (CNNs) on CIFAR-10 and SVHN, and in large-scale kernel learning.
rejected-papers
This paper combines multiple existing ideas in Bayesian optimization (continuous-fidelity, use of gradient information and knowledge gradient) to develop their proposed cfKG method. While the methodology seems neat and effective, the reviewers (and AC) found that the presented approach was not quite novel enough in light of existing work to justify acceptance to ICLR. Continuous fidelity Bayesian optimization is well studied and knowledge gradient + derivative information was presented at NIPS. The combination of these things seems quite sensible but not sufficiently novel (unless the empirical results were *really* compelling). Pros: - The paper is clear and writing is of high quality - Bayesian optimization is interesting to the community and compelling methods are potentially practically impactful - Outperforms existing methods on the chosen benchmarks Cons: - Is an incremental combination of existing methods - The paper claims too much
test
[ "H1Dw9y51z", "Sy4mWsOeG", "ryZpu-qlM", "BJBvNdaXz", "ryq3aP37M", "Bkq8aP27M", "HkXHnv2XM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper studies hyperparameter-optimization by Bayesian optimization, using the Knowledge Gradient framework and allowing the Bayesian optimizer to tune fideltiy against cost.\n\nThere’s nothing majorly wrong with this paper, but there’s also not much that is exciting about it. As the authors point out very cle...
[ 5, 4, 6, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1 ]
[ "iclr_2018_SknC0bW0-", "iclr_2018_SknC0bW0-", "iclr_2018_SknC0bW0-", "Bkq8aP27M", "H1Dw9y51z", "Sy4mWsOeG", "ryZpu-qlM" ]
iclr_2018_rk8R_JWRW
Gating out sensory noise in a spike-based Long Short-Term Memory network
Spiking neural networks are being investigated both as biologically plausible models of neural computation and also as a potentially more efficient type of neural network. While convolutional spiking neural networks have been demonstrated to achieve near state-of-the-art performance, so far only one solution has been proposed to convert gated recurrent neural networks. Recurrent neural networks in the form of networks of gating memory cells have been central in state-of-the-art solutions in problem domains that involve sequence recognition or generation. Here, we design an analog gated LSTM cell whose neurons can be substituted for efficient stochastic spiking neurons. These adaptive spiking neurons implement an adaptive form of sigma-delta coding to convert internally computed analog activation values to spike-trains. For such neurons, we approximate the effective activation function, which resembles a sigmoid. We show how analog neurons with such activation functions can be used to create an analog LSTM cell; networks of these cells can then be trained with standard backpropagation. We train these LSTM networks on a noisy and noiseless version of the original sequence prediction task from Hochreiter & Schmidhuber (1997), and also on a noisy and noiseless version of a classical working memory reinforcement learning task, the T-Maze. Substituting the analog neurons for corresponding adaptive spiking neurons, we then show that almost all resulting spiking neural network equivalents correctly compute the original tasks.
rejected-papers
The reviewers agreed that the paper was somewhat preliminary in terms of the exposition and empirical work. They all find the underlying problem quite interesting and challenging (i.e. spiking recurrent networks). However, the manuscript failed to motivate the approach. In particular, everyone agrees that spiking networks are very interesting, but it's unclear what problem the presented work is solving. The authors need to be more clear about their motivation and then close the loop with empirical validation that their approach is solving the motivating problem (i.e. do we learn something about biological plausibility, are spiking networks better than traditional LSTMs at modeling a particular kind of data, or are they more efficiently implemented on hardware?). Motivating the work with one of these followed by convincing experiments would make this a much stronger paper. Pros: - Tackles an interesting and challenging problem at the intersection of neuroscience and ML - A novel method for creating a spiking LSTM Cons: - The motivation is not entirely clear - The empirical analysis is too simple and does not demonstrate the advantages of this approach - The paper seems unfocused and could use rewriting
train
[ "BkeiHSFxz", "SyWwzQceM", "BkQ6S3QZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "First the authors suggest an adaptive analog neuron (AAN) model which can be trained by back-propagation and then mapped to an Adaptive Spiking Neuron (ASN). Second, the authors suggest a network module called Adaptive Analog LSTM Cell (AA-LSTM) which contains input cells, input gates, constant error carousels (CE...
[ 5, 5, 4 ]
[ 4, 3, 4 ]
[ "iclr_2018_rk8R_JWRW", "iclr_2018_rk8R_JWRW", "iclr_2018_rk8R_JWRW" ]
iclr_2018_SyxCqGbRZ
Learning to Treat Sepsis with Multi-Output Gaussian Process Deep Recurrent Q-Networks
Sepsis is a life-threatening complication from infection and a leading cause of mortality in hospitals. While early detection of sepsis improves patient outcomes, there is little consensus on exact treatment guidelines, and treating septic patients remains an open problem. In this work we present a new deep reinforcement learning method that we use to learn optimal personalized treatment policies for septic patients. We model patient continuous-valued physiological time series using multi-output Gaussian processes, a probabilistic model that easily handles missing values and irregularly spaced observation times while maintaining estimates of uncertainty. The Gaussian process is directly tied to a deep recurrent Q-network that learns clinically interpretable treatment policies, and both models are learned together end-to-end. We evaluate our approach on a heterogeneous dataset of septic patients spanning 15 months from our university health system, and find that our learned policy could reduce patient mortality by as much as 8.2\% from an overall baseline mortality rate of 13.3\%. Our algorithm could be used to make treatment recommendations to physicians as part of a decision support tool, and the framework readily applies to other reinforcement learning problems that rely on sparsely sampled and frequently missing multivariate time series data.
rejected-papers
This paper brings recent innovations in reinforcement learning to bear on a tremendously important application, treating sepsis. The reviewers were all compelled by the application domain but thought that the technical innovation in the work was low. While ICLR welcomes application papers, in this instance the reviewers felt that the technical contribution was not justified well enough. Two of the reviewers asked for a clearer discussion of the underlying assumptions of the approach (i.e. offline policy evaluation and not missing at random). Unfortunately, the lack of significant revisions to the manuscript over the discussion period seems to have precluded changes to the reviewer scores. Overall, this could be a strong submission to a conference that is more closely tied to the application domain. Pros: - Very compelling application that is well motivated - Impressive (possibly impactful) results - Thorough empirical comparison Cons: - Lack of technical innovation - Questions about the underlying assumptions and choice of methodology
train
[ "BJlGBMKVz", "rJlaKw9lG", "rkM5HcFxf", "Hycqpx9lM", "HJ4SAuFXz", "SyQ5tOFmM", "rJ8O5vYQG", "SJLV9PKmG", "rkeR64qfM", "H1ghDZfMz", "SJO3UDJGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "public" ]
[ "I appreciate the authors' in-depth and very thoughtful responses to all of the reviews. I really REALLY like this work, and contrary to the other (IMO, overly negative) reviews, I feel that it fits at ICLR, which has recent history of accepting very solid clinical application work, even without significant methods...
[ -1, 6, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "rJlaKw9lG", "iclr_2018_SyxCqGbRZ", "iclr_2018_SyxCqGbRZ", "iclr_2018_SyxCqGbRZ", "rkM5HcFxf", "Hycqpx9lM", "SJLV9PKmG", "rJlaKw9lG", "iclr_2018_SyxCqGbRZ", "iclr_2018_SyxCqGbRZ", "iclr_2018_SyxCqGbRZ" ]
iclr_2018_rkTBjG-AZ
DeepArchitect: Automatically Designing and Training Deep Architectures
In deep learning, performance is strongly affected by the choice of architecture and hyperparameters. While there has been extensive work on automatic hyperparameter optimization for simple spaces, complex spaces such as the space of deep architectures remain largely unexplored. As a result, the choice of architecture is done manually by the human expert through a slow trial and error process guided mainly by intuition. In this paper we describe a framework for automatically designing and training deep models. We propose an extensible and modular language that allows the human expert to compactly represent complex search spaces over architectures and their hyperparameters. The resulting search spaces are tree-structured and therefore easy to traverse. Models can be automatically compiled to computational graphs once values for all hyperparameters have been chosen. We can leverage the structure of the search space to introduce different model search algorithms, such as random search, Monte Carlo tree search (MCTS), and sequential model-based optimization (SMBO). We present experiments comparing the different algorithms on CIFAR-10 and show that MCTS and SMBO outperform random search. We also present experiments on MNIST, showing that the same search space achieves near state-of-the-art performance with a few samples. These experiments show that our framework can be used effectively for model discovery, as it is possible to describe expressive search spaces and discover competitive models without much effort from the human expert. Code for our framework and experiments has been made publicly available
rejected-papers
This paper introduces a framework for specifying the model search space for exploring over the space of architectures and hyperparameters in deep learning models (often referred to as architecture search). Optimizing over complex architectures is a challenging problem that has received significant attention as deep learning models become more exotic and complex. This work helps to develop a methodology for describing and exploring the complex space of architectures, which is a challenging problem. The authors demonstrate that their method helps to structure the search over hyperparameters using sequential model based optimization and Monte Carlo tree search. The paper is well written and easy to follow. However, the level of technical innovation is low and the experiments don't really demonstrate the merits of the method over existing strategies. One reviewer took issue with the treatment of related work. The underlying idea is compelling and addresses an open question that is of great interest currently. However, without experiments demonstrating that this works better than, e.g., the specification in the hyperopt package, it is difficult to assess the contribution. The authors must do a better job of placing this contributing in the context of existing literature and empirically demonstrate its advantages. The presented experiments show that the method works in a limited setting and don't explore optimization over complex spaces (i.e. over architectures - e.g. number of layers, regularization for each layer, type of each layer, etc.). There's nothing presented empirically that hasn't been possible with standard Bayesian optimization techniques. This is a great start, but it needs more justification empirically (or theoretically). 
Pros: - Addresses an important and pertinent problem - architecture search for deep learning - Provides an intuitive and interesting solution to specifying the architecture search problem - Well written and clear Cons: - The empirical analysis does not demonstrate the advantages of this approach over existing literature - Needs to place itself better in the context of existing literature
train
[ "Skp1e-zgM", "H1ZsTiUEf", "B1BvqGB4f", "Sy4q4vBgf", "ByV24asxM", "r1RljwaQz", "rk6l68T7G", "BJS0iIpQG", "rJHFv8pXG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The author present a language for expressing hyperparameters (HP) of a network. This language allows to define a tree structure search space to cover the case where some HP variable exists only if some previous HP variable took some specific value. Using this tool, they explore the depth of the network, when to ap...
[ 4, -1, -1, 5, 4, -1, -1, -1, -1 ]
[ 5, -1, -1, 3, 5, -1, -1, -1, -1 ]
[ "iclr_2018_rkTBjG-AZ", "B1BvqGB4f", "BJS0iIpQG", "iclr_2018_rkTBjG-AZ", "iclr_2018_rkTBjG-AZ", "Skp1e-zgM", "Sy4q4vBgf", "rJHFv8pXG", "ByV24asxM" ]
iclr_2018_HyBbjW-RW
Open Loop Hyperparameter Optimization and Determinantal Point Processes
Driven by the need for parallelizable hyperparameter optimization methods, this paper studies \emph{open loop} search methods: sequences that are predetermined and can be generated before a single configuration is evaluated. Examples include grid search, uniform random search, low discrepancy sequences, and other sampling distributions. In particular, we propose the use of k-determinantal point processes in hyperparameter optimization via random search. Compared to conventional uniform random search where hyperparameter settings are sampled independently, a k-DPP promotes diversity. We describe an approach that transforms hyperparameter search spaces for efficient use with a k-DPP. In addition, we introduce a novel Metropolis-Hastings algorithm which can sample from k-DPPs defined over spaces with a mixture of discrete and continuous dimensions. Our experiments show significant benefits over uniform random search in realistic scenarios with a limited budget for training supervised learners, whether in serial or parallel.
rejected-papers
The idea of using the determinant of the covariance matrix over inputs to select experiments to run is a foundational concept of experimental design. Thus it is natural to think about extending such a strategy to sequential model based optimization for the hyperparameters of machine learning models, using recent advances in determinantal point processes. The idea of sampling from k-DPPs to do parallel hyperparameter search, balancing quality and diversity of expected outcomes, seems neat. While the reviewers found the idea interesting, they saw weaknesses in the approach and most importantly were not convinced by the empirical results. All reviewers thought that the baselines were inappropriate given recent work in hyperparameter optimization (and classic work in statistics). Pros: - Useful to a large portion of the community (if it works) - An interesting idea that seems timely Cons: - Only slightly outperforms baselines that are too weak - Not empirically compared to recent literature - Some of the design and methodology require more justification - Experiments are limited to small scale problems
train
[ "ryH3y2Oxf", "SyRrSKFef", "ry7VxCKlM", "rkIuDU27G", "r18Y8I2mM", "SyUfLUhmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "\nThis paper considers hyperparameter searches in which all of the\ncandidate points are selected in advance. The most common approaches\nare uniform random search and grid search, but more recently\nlow-discrepancy sequences have sometimes been used to try to achieve\nbetter coverage of the space. This paper pr...
[ 4, 4, 4, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1 ]
[ "iclr_2018_HyBbjW-RW", "iclr_2018_HyBbjW-RW", "iclr_2018_HyBbjW-RW", "ryH3y2Oxf", "SyRrSKFef", "ry7VxCKlM" ]
iclr_2018_H1Nyf7W0Z
Alpha-divergence bridges maximum likelihood and reinforcement learning in neural sequence generation
Neural sequence generation is commonly approached by using maximum- likelihood (ML) estimation or reinforcement learning (RL). However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL. In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL. We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α → 0 and RL to α → 1). We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α. Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.
rejected-papers
The reviewers agreed that this paper is not quite ready for publication at ICLR. One of the reviewers thought the paper was well written and easy to follow while the two others said the opposite. One of the main criticisms was issues with the composition. The paper seems to lack a clear formal explanation of the problem and the proposed methodology. The reviewers in general weren't convinced by the experiments, complaining about the lack of a required baseline and that the proposed method doesn't seem to significantly help in the experiment presented. Pros: - The proposed idea is interesting - The problem is timely and of interest to the community - Addresses multiple important problems at the intersection of ML and RL in sequence generation Cons: - Novel but somewhat incremental - The experiments are not compelling (i.e. the results are not strong) - A necessary baseline is missing - Significant issues with the writing - both in terms of clarity and correctness.
train
[ "HyTf0MygM", "BkwRmW1Wz", "HyiwD8N-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes another training objective for training neural sequence-to-sequence models. The objective is based on alpha-divergence between the true input-output distribution q and the model distribution p. The new objective generalizes Reward-Augmented Maximum Likelihood (RAML) and entropy-regularized Rein...
[ 4, 4, 4 ]
[ 5, 1, 3 ]
[ "iclr_2018_H1Nyf7W0Z", "iclr_2018_H1Nyf7W0Z", "iclr_2018_H1Nyf7W0Z" ]
iclr_2018_ryk77mbRZ
Noise-Based Regularizers for Recurrent Neural Networks
Recurrent neural networks (RNNs) are powerful models for sequential data. They can approximate arbitrary computations, and have been used successfully in domains such as text and speech. However, the flexibility of RNNs makes them susceptible to overfitting and regularization is important. We develop a noise-based regularization method for RNNs. The idea is simple and easy to implement: we inject noise in the hidden units of the RNN and then maximize the original RNN's likelihood averaged over the injected noise. On a language modeling benchmark, our method achieves better performance than the deterministic RNN and the variational dropout.
rejected-papers
This paper proposes a regularizer for recurrent neural networks, based on injecting random noise into the hidden unit activations. In general the reviewers thought that the paper was well written and easy to understand. However, the major concern among the reviewers was a lack of empirical evidence that the method works consistently. Essentially, the reviewers were not compelled by the presented experiments and demanded more rigorous empirical validation of the approach. Pros: - Well written and easy to follow - An interesting idea - Regularizing RNNs is an interesting and active area of research in the community Cons: - The experiments are not compelling and are questioned by all the reviewers - The writing does not cite relevant related work - The work seems underexplored (empirically and methodologically)
train
[ "Syr4moYxG", "Sk6_QZcgM", "ry22qzclM", "H17Hs5e-z", "ByHyFBcef", "ry5oc0bxM", "SJuWpvGAZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "author", "public" ]
[ "The authors of the paper advocate injecting noise into the activations of recurrent networks for regularisation. This is done by replacing the deterministic units with stochastic ones.\n\nThe paper has several issues with respect to the method and related work. \n\n- The paper needs to mention [Graves 2011], whic...
[ 2, 5, 3, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_ryk77mbRZ", "iclr_2018_ryk77mbRZ", "iclr_2018_ryk77mbRZ", "iclr_2018_ryk77mbRZ", "iclr_2018_ryk77mbRZ", "SJuWpvGAZ", "iclr_2018_ryk77mbRZ" ]
iclr_2018_r1vccClCb
Neighbor-encoder
We propose a novel unsupervised representation learning framework called neighbor-encoder in which domain knowledge can be trivially incorporated into the learning process without modifying the general encoder-decoder architecture. In contrast to autoencoder, which reconstructs the input data, neighbor-encoder reconstructs the input data's neighbors. The proposed neighbor-encoder can be considered as a generalization of autoencoder as the input data can be treated as the nearest neighbor of itself with zero distance. By reformulating the representation learning problem as a neighbor reconstruction problem, domain knowledge can be easily incorporated with appropriate definition of similarity or distance between objects. As such, any existing similarity search algorithms can be easily integrated into our framework. Applications of other algorithms (e.g., association rule mining) in our framework is also possible since the concept of ``neighbor" is an abstraction which can be appropriately defined differently in different contexts. We have demonstrated the effectiveness of our framework in various domains, including images, time series, music, etc., with various neighbor definitions. Experimental results show that neighbor-encoder outperforms autoencoder in most scenarios we considered.
rejected-papers
The paper proposes a form of autoencoder that learns to predict the neighbors of a given input vector rather than the input itself. The idea is nice but there are some reviewer concerns about insufficient evaluation and the effect of the curse of dimensionality. The revised paper does address some questions and includes additional helpful experiments with different types of autoencoders. However, the work is still a bit preliminary. The area of auto-encoder variants, and corresponding experiments on CIFAR-10 and the like, is crowded. In order to convince the reader that a new approach makes a real contribution, it should have very thorough experiments. Suggestions: try to improve the CIFAR-10 numbers (they need not be state-of-the-art but should be more credible), adding more data sets (especially high-dimensional ones), and analyzing the effects of factors that are likely to be important (e.g. dimensionality, choice of distance function for neighbor search).
train
[ "Hk4qYw7eG", "HJDy-RKef", "S1JBxOqlz", "SJzuXLaXz", "r1nqmLp7M", "rJ-BQUaXM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper describes a generalization of autoencoders that are trained to reconstruct a close neighbor of its input, instead of merely the input itself. Experiments on 3 datasets show that this yields better representations in terms of post hoc classification with a linear classifier or clustering, compared to a r...
[ 5, 6, 4, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_r1vccClCb", "iclr_2018_r1vccClCb", "iclr_2018_r1vccClCb", "HJDy-RKef", "Hk4qYw7eG", "S1JBxOqlz" ]
iclr_2018_SkYMnLxRW
Weighted Transformer Network for Machine Translation
State-of-the-art results on neural machine translation often use attentional sequence-to-sequence models with some form of convolution or recursion. Vaswani et. al. (2017) propose a new architecture that avoids recurrence and convolution completely. Instead, it uses only self-attention and feed-forward layers. While the proposed architecture achieves state-of-the-art results on several machine translation tasks, it requires a large number of parameters and training iterations to converge. We propose Weighted Transformer, a Transformer with modified attention layers, that not only outperforms the baseline network in BLEU score but also converges 15-40% faster. Specifically, we replace the multi-head attention by multiple self-attention branches that the model learns to combine during the training process. Our model improves the state-of-the-art performance by 0.5 BLEU points on the WMT 2014 English-to-German translation task and by 0.4 on the English-to-French translation task.
rejected-papers
The paper proposes a modification to the Transformer network, which mostly consists in changing how the attention heads are combined. The contribution is incremental, and its novelty is limited. The results demonstrate an improvement over the baseline at the cost of a more complicated training procedure with more hyper-parameters, and it is possible that comparable tuning of the baseline could yield a similar improvement.
train
[ "SJQVdQ5lG", "Hy_tscFgf", "SyIxIgcxf", "r18Flq4mz", "HJZvfPMzf", "BybJGDfGz", "B16UZDGMM", "HkFhFG1-f", "ByZykz-ez", "HJ0Fy8egG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public" ]
[ "This paper describes an extension to the recently introduced Transformer networks which shows better convergence properties and also improves results on standard machine translation benchmarks. \n\nThis is a great paper -- it introduces a relatively simple extension of Transformer networks which only adds very few...
[ 9, 4, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkYMnLxRW", "iclr_2018_SkYMnLxRW", "iclr_2018_SkYMnLxRW", "HkFhFG1-f", "Hy_tscFgf", "SyIxIgcxf", "SJQVdQ5lG", "iclr_2018_SkYMnLxRW", "HJ0Fy8egG", "iclr_2018_SkYMnLxRW" ]
iclr_2018_rJBiunlAW
Training RNNs as Fast as CNNs
Common recurrent neural network architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU) architecture, a recurrent unit that simplifies the computation and exposes more parallelism. In SRU, the majority of computation for each step is independent of the recurrence and can be easily parallelized. SRU is as fast as a convolutional layer and 5-10x faster than an optimized LSTM implementation. We study SRUs on a wide range of applications, including classification, question answering, language modeling, translation and speech recognition. Our experiments demonstrate the effectiveness of SRU and the trade-off it enables between speed and performance.
rejected-papers
The paper presents Simple Recurrent Unit, which is characterised by the lack of state-to-gates connections as used in conventional recurrent architectures. This allows for efficient implementation, and leads to results competitive with the recurrent baselines, as shown on several benchmarks. The submission lacks novelty, as the proposed method is essentially a special case of Quasi-RNN [Bradbury et al.], published at ICLR 2017. The comparison in Appendix A confirms that, as well as similar results of SRU and Quasi-RNN in Figures 4 and 5. Quasi-RNN has already been demonstrated to be amenable to efficient implementation and perform on a par with the recurrent baselines, so this submission doesn’t add much to that.
train
[ "BJsMKkGgf", "SyjjOZ5gM", "HyMadv_bz", "SyHeKNHmG", "B10wVlpGM", "SyT4TchzG", "Bkavi5nGM", "rycUihjGz", "HJ99HQzMf", "ryHvm3qZG", "BJ_3Gnc-M", "BkTOG25bf", "Hya4QnxWz", "SyAqQBqyz", "H10L7B91G", "BJABIzckG", "r1XmNCKkz", "HkeDMwt1f", "r1Yz8IYkG", "BJR39j4yM", "B1HgFDfJf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "public", "author", "author", "author", "public", "author", "author", "public", "official_reviewer", "author", "public", "author", "public" ]
[ "This work presents the Simple Recurrent Unit architecture which allows more parallelism than the LSTM architecture while maintaining high performance.\n\nSignificance, Quality and clarity:\nThe idea is well motivated: Faster training is important for rapid experimentation, and altering the RNN cell so it can be pa...
[ 7, 8, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJBiunlAW", "iclr_2018_rJBiunlAW", "iclr_2018_rJBiunlAW", "BJsMKkGgf", "Bkavi5nGM", "iclr_2018_rJBiunlAW", "rycUihjGz", "BJ_3Gnc-M", "iclr_2018_rJBiunlAW", "BJsMKkGgf", "SyjjOZ5gM", "HyMadv_bz", "iclr_2018_rJBiunlAW", "BJABIzckG", "r1XmNCKkz", "HkeDMwt1f", "iclr_2018_rJBiu...
iclr_2018_HJOQ7MgAW
Long Short-Term Memory as a Dynamically Computed Element-wise Weighted Sum
Long short-term memory networks (LSTMs) were introduced to combat vanishing gradients in simple recurrent neural networks (S-RNNs) by augmenting them with additive recurrent connections controlled by gates. We present an alternate view to explain the success of LSTMs: the gates themselves are powerful recurrent models that provide more representational power than previously appreciated. We do this by showing that the LSTM's gates can be decoupled from the embedded S-RNN, producing a restricted class of RNNs where the main recurrence computes an element-wise weighted sum of context-independent functions of the inputs. Experiments on a range of challenging NLP problems demonstrate that the simplified gate-based models work substantially better than S-RNNs, and often just as well as the original LSTMs, strongly suggesting that the gates are doing much more in practice than just alleviating vanishing gradients.
rejected-papers
The paper performs an ablation analysis on LSTM, showing that the gating component is the most important. There is little novelty in the analysis, and in its current form, its impact is rather limited.
train
[ "Bko-OSYgM", "HJ3eGCYlG", "Bk_Zgxcef", "SJ-VlLPZM", "ByUL_Swbz", "By5K36x-f", "SyaLnaeWG", "HyKV3TlZz", "HJ4IMQZxz", "BywjBIgxM", "Bk43StH0-", "SJeJ8KSAW", "rJuJg4dAW", "rJAIRiwR-", "HyIdLBH0W", "Bk_ZP7HRb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "public", "public", "public" ]
[ "This paper proposes a simplified LSTM variants by removing the non-linearity of content item and output gate. It shows comparable results with standard LSTM.\n\nI believe this is a updated version of https://arxiv.org/abs/1705.07393 (Recurrent Additive Networks) with stronger experimental results. \n\nHowever, the...
[ 6, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJOQ7MgAW", "iclr_2018_HJOQ7MgAW", "iclr_2018_HJOQ7MgAW", "ByUL_Swbz", "By5K36x-f", "Bko-OSYgM", "HJ3eGCYlG", "Bk_Zgxcef", "BywjBIgxM", "iclr_2018_HJOQ7MgAW", "Bk_ZP7HRb", "HyIdLBH0W", "rJAIRiwR-", "iclr_2018_HJOQ7MgAW", "iclr_2018_HJOQ7MgAW", "iclr_2018_HJOQ7MgAW" ]
iclr_2018_SkffVjUaW
Building effective deep neural networks one feature at a time
Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of high amounts of features. These networks typically rely on a variety of regularization and pruning techniques to converge to less redundant states. We introduce a novel bottom-up approach to expand representations in fixed-depth architectures. These architectures start from just a single feature per layer and greedily increase width of individual layers to attain effective representational capacities needed for a specific task. While network growth can rely on a family of metrics, we propose a computationally efficient version based on feature time evolution and demonstrate its potency in determining feature importance and a networks’ effective capacity. We demonstrate how automatically expanded architectures converge to similar topologies that benefit from lesser amount of parameters or improved accuracy and exhibit systematic correspondence in representational complexity with the specified task. In contrast to conventional design patterns with a typical monotonic increase in the amount of features with increased depth, we observe that CNNs perform better when there is more learnable parameters in intermediate, with falloffs to earlier and later layers.
rejected-papers
Regarding clarity, while the paper definitely needs work if it is to be resubmitted to an ML venue, different revisions would be appropriate for a physics audience. And given the above comment, any suggested changes are likely to be superfluous.
train
[ "SkvTjWqxG", "S1gFMVoeM", "BJWfJzTez", "rJhvF46Xf", "Bk2UTZgGM", "HyNZcZxfG", "S1Nnn6A-G", "rkdbjT0Zz", "rknxzzRAb", "BJ4OYp8Tb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "public", "public", "author", "public" ]
[ "The authors propose an approach to dynamically adjust the feature map depth of a fully convolutional neural network. The work formulates a measure of self-resemblance, to determine when to stop increasing the feature dimensionality at each convolutional layer. The experimental section evaluates this method on MNIS...
[ 4, 8, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkffVjUaW", "iclr_2018_SkffVjUaW", "iclr_2018_SkffVjUaW", "iclr_2018_SkffVjUaW", "SkvTjWqxG", "SkvTjWqxG", "S1gFMVoeM", "BJWfJzTez", "BJ4OYp8Tb", "iclr_2018_SkffVjUaW" ]
iclr_2018_SJmAXkgCb
DNN Feature Map Compression using Learned Representation over GF(2)
In this paper, we introduce a method to compress intermediate feature maps of deep neural networks (DNNs) to decrease memory storage and bandwidth requirements during inference. Unlike previous works, the proposed method is based on converting fixed-point activations into vectors over the smallest GF(2) finite field followed by nonlinear dimensionality reduction (NDR) layers embedded into a DNN. Such an end-to-end learned representation finds more compact feature maps by exploiting quantization redundancies within the fixed-point activations along the channel or spatial dimensions. We apply the proposed network architecture to the tasks of ImageNet classification and PASCAL VOC object detection. Compared to prior approaches, the conducted experiments show a factor of 2 decrease in memory requirements with minor degradation in accuracy while adding only bitwise computations.
rejected-papers
The paper presents a technique for feature map compression at inference time. As noted by reviewers, the main concern is that the method is applied to one NN architecture (SqueezeNet), which severely limits its impact and applicability to better performing state-of-the-art models.
train
[ "rJbz1nrgM", "SJG0Ga5ef", "BJ46Rwjez", "ByqVft3GG", "HyQcdMfGM", "S1uluMGGM", "BJR2wfGGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The method of this paper minimizes the memory usage of the activation maps of a CNN. It starts from a representation where activations are compressed with a uniform scalar quantizer and fused to reduce intermediate memory usage. This looses some accuracy, so the contribution of the paper is to add a pair of convol...
[ 7, 5, 4, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SJmAXkgCb", "iclr_2018_SJmAXkgCb", "iclr_2018_SJmAXkgCb", "iclr_2018_SJmAXkgCb", "rJbz1nrgM", "SJG0Ga5ef", "BJ46Rwjez" ]
iclr_2018_SJn0sLgRb
Data Augmentation by Pairing Samples for Images Classification
Data augmentation is a widely used technique in many machine learning tasks, such as image classification, to virtually enlarge the training dataset size and avoid overfitting. Traditional data augmentation techniques for image classification tasks create new samples from the original training data by, for example, flipping, distorting, adding a small amount of noise to, or cropping a patch from an original image. In this paper, we introduce a simple but surprisingly effective data augmentation technique for image classification tasks. With our technique, named SamplePairing, we synthesize a new sample from one image by overlaying another image randomly chosen from the training data (i.e., taking an average of two images for each pixel). By using two images randomly selected from the training set, we can generate N^2 new samples from N training samples. This simple data augmentation technique significantly improved classification accuracy for all the tested datasets; for example, the top-1 error rate was reduced from 33.5% to 29.0% for the ILSVRC 2012 dataset with GoogLeNet and from 8.22% to 6.93% in the CIFAR-10 dataset. We also show that our SamplePairing technique largely improved accuracy when the number of samples in the training set was very small. Therefore, our technique is more valuable for tasks with a limited amount of training data, such as medical imaging tasks.
rejected-papers
The paper proposes a data augmentation technique for image classification which consists in averaging two input images and using the label of one of them. The method is shown to outperform the baseline on the image classification task, the but evaluation doesn’t extend beyond that (to other tasks or alternative augmentation mechanisms); theoretical justification is also lacking.
train
[ "ryOCyetxf", "S1CYm85gM", "ryjXhymZM", "r1DpKzaQM", "Hy-FdzpXG", "Hy1zDzTmz", "SkqNBf6XG", "HJDYNmMff", "HJgwDzMff", "S1KnxyMGz", "Hkp3PtC-G", "ByZ_nVAbf", "HJJ1R06-z", "Hk0XcA6WM", "H1BRpQFbM", "BJZogjdZG", "r1zfLuSZf", "rybe_34-f", "HyMUHnsez", "SJXiG2oeM", "H1kIxTqxM", "...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "public", "author", "public", "author", "public", "author", "public", "author", "public", "author", "public", "author", "author", "public", "author...
[ "The paper proposes a new data augmentation technique based on picking random image pairs and producing \na new average image which is associated with the label of one of the two original samples. The experiments show\nthat this strategy allows to reduce the risk of overfitting especially in the case of a limited a...
[ 4, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb", "S1CYm85gM", "ryOCyetxf", "ryjXhymZM", "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb", "ByZ_nVAbf", "HJJ1R06-z", "Hk0XcA6WM", "r1zfLuSZf", "BJZogjdZG", "HyMUHnsez", "rybe_...
iclr_2018_Sy3fJXbA-
Connectivity Learning in Multi-Branch Networks
While much of the work in the design of convolutional networks over the last five years has revolved around the empirical investigation of the importance of depth, filter sizes, and number of feature channels, recent studies have shown that branching, i.e., splitting the computation along parallel but distinct threads and then aggregating their outputs, represents a new promising dimension for significant improvements in performance. To combat the complexity of design choices in multi-branch architectures, prior work has adopted simple strategies, such as a fixed branching factor, the same input being fed to all parallel branches, and an additive combination of the outputs produced by all branches at aggregation points. In this work we remove these predefined choices and propose an algorithm to learn the connections between branches in the network. Instead of being chosen a priori by the human designer, the multi-branch connectivity is learned simultaneously with the weights of the network by optimizing a single loss function defined with respect to the end task. We demonstrate our approach on the problem of multi-class image classification using four different datasets where it yields consistently higher accuracy compared to the state-of-the-art ``ResNeXt'' multi-branch network given the same learning capacity.
rejected-papers
The paper proposes a method for learning connectivity in neural networks, evaluated on the ResNeXt architecture. The novelty of the method is rather limited, and even though the method has been shown to improve on the ResNeXt baselines on CIFAR-100 and ImageNet classification tasks (which is encouraging), it should have been evaluated on more architectures and datasets to confirm its generality.
train
[ "ByyJKKXgz", "BJ9DfkxWM", "HJykWAMWG", "B1Vwd8pQf", "rJa9dLT7M", "B1HmOUTQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors extend the ResNeXt architecture. They substitute the simple add operation with a selection operation for each input in the residual module. The selection of the inputs happens through gate weights, which are sampled at train time. At test time, the gates with the highest values are kept on, while the o...
[ 5, 5, 5, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_Sy3fJXbA-", "iclr_2018_Sy3fJXbA-", "iclr_2018_Sy3fJXbA-", "BJ9DfkxWM", "ByyJKKXgz", "HJykWAMWG" ]
iclr_2018_rJoXrxZAZ
HybridNet: A Hybrid Neural Architecture to Speed-up Autoregressive Models
This paper introduces HybridNet, a hybrid neural network to speed up autoregressive models for raw audio waveform generation. As an example, we propose a hybrid model that combines an autoregressive network named WaveNet and a conventional LSTM model to address speech synthesis. Instead of generating one sample per time-step, the proposed HybridNet generates multiple samples per time-step by exploiting the long-term memory utilization property of LSTMs. In the evaluation, when applied to text-to-speech, HybridNet yields state-of-the-art performance. HybridNet achieves a 3.83 subjective 5-scale mean opinion score on US English, largely outperforming the same-size WaveNet in terms of naturalness and providing a 2x speed-up at inference.
rejected-papers
The paper presents a hybrid architecture which combines WaveNet and LSTM for speeding up raw audio generation. The novelty of the method is limited, as it is a simple combination of existing techniques. The practical impact of the approach is rather questionable, since the generated audio has significantly lower MOS scores than the state-of-the-art WaveNet model.
test
[ "r16uKJ5gG", "ryOLIn5lf", "ByDRVIuZG", "Sk3XOcp7f", "Byp0z9TQz", "rJ43n56XM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public" ]
[ "TL;DR of paper: for sequential prediction, in order to scale up the model size without increasing inference time, use a model that predicts multiple timesteps at once. In this case, use an LSTM on top of a Wavenet for audio synthesis, where the LSTM predicts N steps for every Wavenet forward pass. The main result ...
[ 6, 4, 4, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1 ]
[ "iclr_2018_rJoXrxZAZ", "iclr_2018_rJoXrxZAZ", "iclr_2018_rJoXrxZAZ", "r16uKJ5gG", "ryOLIn5lf", "ByDRVIuZG" ]
iclr_2018_S1NHaMW0b
ShakeDrop regularization
This paper proposes a powerful regularization method named \textit{ShakeDrop regularization}. ShakeDrop is inspired by Shake-Shake regularization, which decreases error rates by disturbing learning. While Shake-Shake can be applied only to ResNeXt, which has multiple branches, ShakeDrop can be applied not only to ResNeXt but also to ResNet, Wide ResNet and PyramidNet in a memory-efficient way. An important and interesting feature of ShakeDrop is that it strongly disturbs learning by multiplying the output of a convolutional layer even by a negative factor in the forward training pass. The effectiveness of ShakeDrop is confirmed by experiments on the CIFAR-10/100 and Tiny ImageNet datasets.
rejected-papers
The paper proposes a regularisation technique based on Shake-Shake which leads to state-of-the-art performance on the CIFAR-10 and CIFAR-100 datasets. Despite good results on CIFAR, the novelty of the method is low, justification for the method is not provided, and the impact of the method on tasks beyond CIFAR classification is unclear.
test
[ "r1Pm9nUNG", "r1HFPmSgG", "HkAAvk0gf", "HyI9Lxf-z", "Sy8VrBpXf", "HJ2pqLhQf", "ryGlO75QG", "Skv0E75Xz", "H11n1Xcmz", "BknEhzqmf", "HkmBcxr-f", "Sy3adjgWz", "r1F6F5Ygf", "S1rHOM-gz", "B16xxZzJf", "SyrG-ql1f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "author", "public", "author", "public" ]
[ "---\nRegd. the 'factual errors':\n\n1. My original review said \"the proposed method is *fundamentally* a combination of prior work\" --- in that the underlying ideas had been introduced before in prior work (dropout & shake shake), not that the proposed method involved literally applying a combination of dropout ...
[ -1, 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 2, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Skv0E75Xz", "iclr_2018_S1NHaMW0b", "iclr_2018_S1NHaMW0b", "iclr_2018_S1NHaMW0b", "iclr_2018_S1NHaMW0b", "BknEhzqmf", "r1HFPmSgG", "HkAAvk0gf", "HyI9Lxf-z", "iclr_2018_S1NHaMW0b", "Sy3adjgWz", "r1F6F5Ygf", "S1rHOM-gz", "iclr_2018_S1NHaMW0b", "SyrG-ql1f", "iclr_2018_S1NHaMW0b" ]
iclr_2018_rJ6iJmWCW
POLICY DRIVEN GENERATIVE ADVERSARIAL NETWORKS FOR ACCENTED SPEECH GENERATION
In this paper, we propose the generation of accented speech using generative adversarial networks. Through this work we make two main contributions: a) the ability to condition latent representations while generating realistic speech samples, and b) the ability to efficiently generate long speech samples by using a novel latent variable transformation module that is trained using policy gradients. Previous methods are limited to generating only relatively short samples or are not very efficient at generating long samples. The generated speech samples are validated through a number of evaluation measures, viz. a WGAN critic loss, subjective scores from user evaluations against competitive speech synthesis baselines, and a detailed ablation analysis of the proposed model. The evaluations demonstrate that the model efficiently generates realistic long speech samples conditioned on accent.
rejected-papers
The paper proposes a method for accented speech generation using GANs. The reviewers have pointed out the problems in the justification of the method (e.g. the need for using policy gradients with a differentiable objective) as well as its evaluation.
train
[ "rJfNnSxez", "rkKtPpFxz", "SkxuGmyZG", "BJBuWyf4z", "r1IP74-Ez", "B1LlRgbVM", "HJdYIP6Qz", "H1m-DPpmf", "SkuDwwaQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper presents a method for generating speech audio in a particular accent. The proposed approach relies on a generative adversarial network (GAN), combined with a policy approach for joining together generated speech segments. The latter is used to deal with the problem of generating very long sequences (whi...
[ 5, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJ6iJmWCW", "iclr_2018_rJ6iJmWCW", "iclr_2018_rJ6iJmWCW", "r1IP74-Ez", "B1LlRgbVM", "SkuDwwaQf", "SkxuGmyZG", "rkKtPpFxz", "rJfNnSxez" ]
iclr_2018_H1cKvl-Rb
UCB EXPLORATION VIA Q-ENSEMBLES
We show how an ensemble of Q∗-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well established algorithms from the bandit setting, and adapt them to the Q-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant gains on the Atari benchmark.
rejected-papers
The idea studied here is interesting, if incremental. The empirical results are not particularly stellar, but it's clear that the authors have done their best to provide reproducible and defensible results. A few sticking points: a) The use of the term 'UCB', as mentioned in an anonymous comment, is somewhat misleading. "Approximate Confidence Interval" might be less controversial; b) there are a number of recent research results on exploration that are worth paying attention to (Plappert et al, O'Donoghue et al.) and worth comparing to, and c) the theoretical results are not always justified or useful (e.g. Equation 9: the bound is trivial, posterior >= 0 or 1).
train
[ "rkTJ2wYeG", "B13fzyclG", "r1prpe1ZM", "BkyyDvTXG", "SkOHUDamG", "S1S54DamM", "SyQS9Go7f", "SJMbiMiXf", "BkI9qfsmG", "HyqLPh9Qz", "H14cttFXM", "HySlgMqMf", "rysAtjUMz", "r1w5R7Ufz", "SJsA9mLfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public", "official_reviewer", "public", "public" ]
[ "This paper paper uses an ensemble of networks to represent the uncertainty in deep reinforcement learning.\nThe algorithm then chooses optimistically over the distribution induced by the ensemble.\nThis leads to improved learning / exploration, notably better than the similar approach bootstrapped DQN.\n\nThere ar...
[ 6, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1cKvl-Rb", "iclr_2018_H1cKvl-Rb", "iclr_2018_H1cKvl-Rb", "iclr_2018_H1cKvl-Rb", "HySlgMqMf", "H14cttFXM", "r1prpe1ZM", "rkTJ2wYeG", "B13fzyclG", "SJsA9mLfz", "iclr_2018_H1cKvl-Rb", "iclr_2018_H1cKvl-Rb", "r1w5R7Ufz", "B13fzyclG", "iclr_2018_H1cKvl-Rb" ]
iclr_2018_B16yEqkCZ
Avoiding Catastrophic States with Intrinsic Fear
Many practical reinforcement learning problems contain catastrophic states that the optimal policy visits infrequently or never. Even on toy problems, deep reinforcement learners periodically revisit these states, once they are forgotten under a new policy. In this paper, we introduce intrinsic fear, a learned reward shaping that accelerates deep reinforcement learning and guards oscillating policies against periodic catastrophes. Our approach incorporates a second model trained via supervised learning to predict the probability of imminent catastrophe. This score acts as a penalty on the Q-learning objective. Our theoretical analysis demonstrates that the perturbed objective yields the same average return under strong assumptions and an ϵ-close average return under weaker assumptions. Our analysis also shows robustness to classification errors. Equipped with intrinsic fear, our DQNs solve the toy environments and improve on the Atari games Seaquest, Asteroids, and Freeway.
rejected-papers
This paper presents an interesting idea that is related to imitation learning, safe exploration, and intrinsic motivation. However, in its current state the paper needs improvement in clarity. There are also some concerns about the number of hyperparameters involved. Finally, the experimental results are not completely convincing and should reflect existing baselines in one of the areas described above.
test
[ "ByVTKgqEz", "SkYNcg5xz", "SyHc3kp1f", "S117txRef", "H11eRromM", "Sy1zTBomM", "rk6W9Si7z", "B10tYHsQG", "BkEm_rimM", "B1mRwQzQf", "BJZkdQzGG", "ry8OQ7zzz", "BybJ2MqxG", "S16Q2IXlM", "rJckvqkgz", "rk3V5Hjkf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "public", "author", "public", "author", "public" ]
[ "I've slightly increased my score to reflect the improvements made by the authors. Theorem 1 seems to have been corrected. Unfortunately, the bound now indicates that the average reward is within lambda * epsilon * (R_max - R_min) of the optimal average reward (where lambda can be arbitrarily large). This does not ...
[ -1, 5, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SkYNcg5xz", "iclr_2018_B16yEqkCZ", "iclr_2018_B16yEqkCZ", "iclr_2018_B16yEqkCZ", "SyHc3kp1f", "SkYNcg5xz", "S117txRef", "iclr_2018_B16yEqkCZ", "B1mRwQzQf", "BJZkdQzGG", "iclr_2018_B16yEqkCZ", "iclr_2018_B16yEqkCZ", "S16Q2IXlM", "iclr_2018_B16yEqkCZ", "rk3V5Hjkf", "iclr_2018_B16yEqkCZ"...
iclr_2018_BJ7d0fW0b
Faster Reinforcement Learning with Expert State Sequences
Imitation learning relies on expert demonstrations. Existing approaches often require that the complete demonstration data, including sequences of actions and states, are available. In this paper, we consider a realistic and more difficult scenario where a reinforcement learning agent only has access to the state sequences of an expert, while the expert actions are not available. Inferring the unseen expert actions in a stochastic environment is challenging and usually infeasible when combined with a large state space. We propose a novel policy learning method which only utilizes the expert state sequences without inferring the unseen actions. Specifically, our agent first learns to extract useful sub-goal information from the state sequences of the expert and then utilizes the extracted sub-goal information to factorize the action value estimate over state-action pairs and sub-goals. The extracted sub-goals are also used to synthesize guidance rewards in the policy learning. We evaluate our agent on five Doom tasks. Our empirical results show that the proposed method significantly outperforms the conventional DQN method.
rejected-papers
This paper proposes a simple idea for using expert data to improve a deep RL agent's performance. Its main flaw is the lack of justification for the specific techniques used. The empirical evaluation is also fairly limited.
train
[ "r1ke1YDlz", "H1lsqQdeG", "Hk-OLVKeM", "S1LqdbcMG", "BygudWqfz", "SJOQu-5Mf", "H1o2J2YfG", "r1J7Oaw1z", "HJv1OsDyM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public" ]
[ "SIGNIFICANCE AND ORIGINALITY:\n\nThe authors propose to accelerate the learning of complex tasks by exploiting traces of experts.\nUnlike the most common form of imitation learning or behavioral cloning, the authors \nformulate their solution in the case where the expert’s state trajectory is observable, \nbut the...
[ 6, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJ7d0fW0b", "iclr_2018_BJ7d0fW0b", "iclr_2018_BJ7d0fW0b", "Hk-OLVKeM", "H1lsqQdeG", "r1ke1YDlz", "iclr_2018_BJ7d0fW0b", "HJv1OsDyM", "iclr_2018_BJ7d0fW0b" ]
iclr_2018_HkpRBFxRb
Learning to Mix n-Step Returns: Generalizing Lambda-Returns for Deep Reinforcement Learning
Reinforcement Learning (RL) can model complex behavior policies for goal-directed sequential decision making tasks. A hallmark of RL algorithms is Temporal Difference (TD) learning: value function for the current state is moved towards a bootstrapped target that is estimated using the next state's value function. lambda-returns define the target of the RL agent as a weighted combination of rewards estimated by using multiple many-step look-aheads. Although mathematically tractable, the use of exponentially decaying weighting of n-step returns based targets in lambda-returns is a rather ad-hoc design choice. Our major contribution is that we propose a generalization of lambda-returns called Confidence-based Autodidactic Returns (CAR), wherein the RL agent learns the weighting of the n-step returns in an end-to-end manner. In contrast to lambda-returns wherein the RL agent is restricted to use an exponentially decaying weighting scheme, CAR allows the agent to learn to decide how much it wants to weigh the n-step returns based targets. Our experiments, in addition to showing the efficacy of CAR, also empirically demonstrate that using sophisticated weighted mixtures of multi-step returns (like CAR and lambda-returns) considerably outperforms the use of n-step returns. We perform our experiments on the Asynchronous Advantage Actor Critic (A3C) algorithm in the Atari 2600 domain.
rejected-papers
This is an interesting paper, but was quite difficult to follow. As they stand, the empirical results are not altogether convincing nor warrant acceptance.
train
[ "ryLFZHGgG", "BkWlPOFlM", "rkVjnRYef", "S1M2zLpXz", "B1wS3LaXz", "rk_TOUa7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "SUMMARY\nThe major contribution of the paper is a generalization of lambda-returns called Confidence-based Autodidactic Returns (CAR), wherein the RL agent learns the weighting of the n-step returns in an end-to-end manner. These CARs are used in the A3C algorithm. The weights are based on the confidence of the...
[ 5, 6, 5, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_HkpRBFxRb", "iclr_2018_HkpRBFxRb", "iclr_2018_HkpRBFxRb", "ryLFZHGgG", "rkVjnRYef", "BkWlPOFlM" ]
iclr_2018_HyunpgbR-
Structured Exploration via Hierarchical Variational Policy Networks
Reinforcement learning in environments with large state-action spaces is challenging, as exploration can be highly inefficient. Even if the dynamics are simple, the optimal policy can be combinatorially hard to discover. In this work, we propose a hierarchical approach to structured exploration to improve the sample efficiency of on-policy exploration in large state-action spaces. The key idea is to model a stochastic policy as a hierarchical latent variable model, which can learn low-dimensional structure in the state-action space, and to define exploration by sampling from the low-dimensional latent space. This approach enables lower sample complexity, while preserving policy expressivity. In order to make learning tractable, we derive a joint learning and exploration strategy by combining hierarchical variational inference with actor-critic learning. The benefits of our learning approach are that 1) it is principled, 2) simple to implement, 3) easily scalable to settings with many actions and 4) easily composable with existing deep learning approaches. We demonstrate the effectiveness of our approach on learning a deep centralized multi-agent policy, as multi-agent environments naturally have an exponentially large state-action space. In this setting, the latent hierarchy implements a form of multi-agent coordination during exploration and execution (MACE). We demonstrate empirically that MACE can more efficiently learn optimal policies in challenging multi-agent games with a large number (~20) of agents, compared to conventional baselines. Moreover, we show that our hierarchical structure leads to meaningful agent coordination.
rejected-papers
The reviewers feel there are two issues that make this paper fall short of acceptance: first, the lack of a clear emphasis and focus (evidenced by the significant revisions) and second, a lack of comparison to similar, existing methods for multi-agent reinforcement learning.
train
[ "Hk4yYjNef", "SJmr_aFgf", "B1hxXS9xM", "HkULJrp7M", "r1WspD_7f", "BywLPDO7M", "rJiQSDumf", "SJHUqM2MM", "rJ_4ESsff", "Bkz6yknZf", "HJITw-LWf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "public", "public" ]
[ "This paper proposes an approach to improve exploration in multiagent reinforcement learning by allowing the policies of the individual agents to be conditioned on an external coordination signal \\lambda. In order to find such parametrized policies, the approach combines deep RL with a variational inference approa...
[ 4, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyunpgbR-", "iclr_2018_HyunpgbR-", "iclr_2018_HyunpgbR-", "iclr_2018_HyunpgbR-", "SJHUqM2MM", "rJ_4ESsff", "HJITw-LWf", "iclr_2018_HyunpgbR-", "iclr_2018_HyunpgbR-", "HJITw-LWf", "iclr_2018_HyunpgbR-" ]
iclr_2018_BJvWjcgAZ
Sample-Efficient Deep Reinforcement Learning via Episodic Backward Update
We propose Episodic Backward Update - a new algorithm to boost the performance of a deep reinforcement learning agent by fast reward propagation. In contrast to the conventional use of the replay memory with uniform random sampling, our agent samples a whole episode and successively propagates the value of a state into its previous states. Our computationally efficient recursive algorithm allows sparse and delayed rewards to propagate effectively throughout the sampled episode. We evaluate our algorithm on 2D MNIST Maze Environment and 49 games of the Atari 2600 Environment and show that our agent improves sample efficiency with a competitive computational cost.
rejected-papers
The reviewers agree the proposed idea is relatively incremental, and the paper itself does not do an exemplary job in other areas to make up for this.
train
[ "SJ3y_pYxM", "H1gBrkcgM", "HyxmggJbM", "S1CtiJ4EM", "BJ1nRs3Xf", "Hync2bzMz", "B1__SU0Zz", "B11nkS0-M", "r1lBnW0Zf", "Hyp9C2pbf", "rkbujh2-M", "B1Q8wv2ZG", "Bk-FxwW-G", "HJIuwJGZz", "HygPguZ-M", "B1fm4OW-G", "ByT_zd-Wz", "H1GqLfb-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public", "author", "public", "author", "public", "author", "author", "author", "public" ]
[ "This paper proposes a new variant of DQN where the DQN targets are computed on a full episode by a « backward » update (i.e. from end to start of episode). The targets’ update rule is similar to a regular tabular Q-learning update with high learning rate beta: this allows faster propagation of rewards obtained at ...
[ 4, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJvWjcgAZ", "iclr_2018_BJvWjcgAZ", "iclr_2018_BJvWjcgAZ", "HygPguZ-M", "iclr_2018_BJvWjcgAZ", "iclr_2018_BJvWjcgAZ", "B11nkS0-M", "r1lBnW0Zf", "Hyp9C2pbf", "rkbujh2-M", "B1Q8wv2ZG", "Bk-FxwW-G", "H1GqLfb-f", "Bk-FxwW-G", "SJ3y_pYxM", "H1gBrkcgM", "HyxmggJbM", "iclr_2018_...
iclr_2018_Sy_MK3lAZ
PARAMETRIZED DEEP Q-NETWORKS LEARNING: PLAYING ONLINE BATTLE ARENA WITH DISCRETE-CONTINUOUS HYBRID ACTION SPACE
Most existing deep reinforcement learning (DRL) frameworks consider action spaces that are either discrete or continuous. Motivated by the project of designing a Game AI for King of Glory (KOG), one of the world’s most popular mobile games, we consider a scenario with a discrete-continuous hybrid action space. To directly apply existing DRL frameworks, existing approaches either approximate the hybrid space by a discrete set or relax it into a continuous set, which is usually less efficient and robust. In this paper, we propose a parametrized deep Q-network (P-DQN) for the hybrid action space without approximation or relaxation. Our algorithm combines DQN and DDPG and can be viewed as an extension of DQN to hybrid actions. The empirical study on the game KOG validates the efficiency and effectiveness of our method.
rejected-papers
The idea studied here is fairly incremental and the empirical evaluation could be improved.
train
[ "SkQBXUfJG", "r1j4GjKeM", "BkO0bWclz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper examines a modified NN architecture and algorithm (P-DQN) for learning in hybrid discrete/continuous action spaces. The authors come up with a clever way of modifying the architecture of parameterized-action-space DDPG (as in Hausknecht & Stone 16) in such a way that the actor only outputs values for th...
[ 5, 5, 4 ]
[ 4, 3, 4 ]
[ "iclr_2018_Sy_MK3lAZ", "iclr_2018_Sy_MK3lAZ", "iclr_2018_Sy_MK3lAZ" ]
iclr_2018_S1GDXzb0b
Model-based imitation learning from state trajectories
Imitation learning from demonstrations usually relies on learning a policy from trajectories of optimal states and actions. However, in real-life expert demonstrations, the action information is often missing and only state trajectories are available. We present a model-based imitation learning method that can learn environment-specific optimal actions only from expert state trajectories. Our proposed method starts with a model-free reinforcement learning algorithm with a heuristic reward signal to sample environment dynamics, which is then used to train the state-transition probability. Subsequently, we learn the optimal actions from expert state trajectories by supervised learning, while back-propagating the error gradients through the modeled environment dynamics. Experimental evaluations show that our proposed method achieves performance similar to traditional (state, action) trajectory-based imitation learning methods even in the absence of action information, with far fewer iterations than conventional model-free reinforcement learning methods. We also demonstrate that our method can learn to act from video demonstrations of an expert agent alone for simple games and can learn to achieve the desired performance in fewer iterations.
rejected-papers
The paper is hard to follow at times. The heuristic reward has little justification -- not clear how this would extend to other domains. Lack of empirical comparisons (see e.g. Hester et al., Deep Q-Learning from Demonstrations, 2017).
train
[ "rJynoO9zM", "SJlNadqfM", "ry8Bn_5Mz", "rJfWQ9OeG", "SymLN__gM", "HJO3Kl0ef", "B1V5vHb1M" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Thank you for the overall encouraging review. We address some of the concerns in the following,\n\nQ : Not clear that method converges on all problems. \nA: Yes it does not converge on all dynamics models. Currently, the main drawback of the method is that it cannot model complex dynamics models like raw video tra...
[ -1, -1, -1, 7, 4, 3, -1 ]
[ -1, -1, -1, 3, 4, 5, -1 ]
[ "rJfWQ9OeG", "B1V5vHb1M", "SymLN__gM", "iclr_2018_S1GDXzb0b", "iclr_2018_S1GDXzb0b", "iclr_2018_S1GDXzb0b", "iclr_2018_S1GDXzb0b" ]
iclr_2018_HyDAQl-AW
Time Limits in Reinforcement Learning
In reinforcement learning, it is common to let an agent interact with its environment for a fixed amount of time before resetting the environment and repeating the process in a series of episodes. The task that the agent has to learn can either be to maximize its performance over (i) that fixed amount of time, or (ii) an indefinite period where the time limit is only used during training. In this paper, we investigate theoretically how time limits could effectively be handled in each of the two cases. In the first one, we argue that the terminations due to time limits are in fact part of the environment, and propose to include a notion of the remaining time as part of the agent's input. In the second case, the time limits are not part of the environment and are only used to facilitate learning. We argue that such terminations should not be treated as environmental ones and propose a method, specific to value-based algorithms, that incorporates this insight by continuing to bootstrap at the end of each partial episode. To illustrate the significance of our proposals, we perform several experiments on a range of environments from simple few-state transition graphs to complex control tasks, including novel and standard benchmark domains. Our results show that the proposed methods improve the performance and stability of existing reinforcement learning algorithms.
rejected-papers
The reviewers agree that this paper suffers from a lack of novelty and does not make sufficient contributions to warrant acceptance.
train
[ "S1nWk0YgG", "ryeKvcAeM", "rkUhtNy-M", "BJA7nJp7z", "H122t1a7f", "Sk0rt1amM", "HyVrdyT7z", "ry-v6YFff", "SJB4mKFGM", "H1XFBYKMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "official_reviewer" ]
[ "Summary: This paper explores how to handle two practical issues in reinforcement learning. The first is including time remaining in the state, for domains where episodes are cut-off before a terminal state is reached in the usual way. The second idea is to allow bootstrapping at episode boundaries, but cutting off...
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyDAQl-AW", "iclr_2018_HyDAQl-AW", "iclr_2018_HyDAQl-AW", "S1nWk0YgG", "ryeKvcAeM", "rkUhtNy-M", "SJB4mKFGM", "H1XFBYKMf", "iclr_2018_HyDAQl-AW", "SJB4mKFGM" ]
iclr_2018_rkc_hGb0Z
A dynamic game approach to training robust deep policies
We present a method for evaluating the sensitivity of deep reinforcement learning (RL) policies. We also formulate a zero-sum dynamic game for designing robust deep reinforcement learning policies. Our approach mitigates the brittleness of policies when agents are trained in a simulated environment and are later exposed to the real world, where it is hazardous to employ RL policies. This framework for training deep RL policies involves a zero-sum dynamic game against an adversarial agent, where the goal is to drive the system dynamics to a saddle region. Using a variant of the guided policy search algorithm, our agent learns to adopt robust policies that require fewer samples for learning the dynamics and performs better than the GPS algorithm. Without loss of generality, we demonstrate that deep RL policies trained in this fashion will be maximally robust to the worst possible adversarial disturbances.
rejected-papers
The reviewers are unanimous that the paper is not sufficiently clear and could be improved with better empirical results.
test
[ "rJAdpODef", "ryha-G5lM", "HJ_WDu5xM", "SyVqfP7Wz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "The authors propose to incorporate elements of robust control into guided policy search, in order to devise a method that is resilient to perturbations and (presumably) model mismatch.\n\nThe idea behind the method and the discussion in the introduction and related work is interesting and worthwhile, and I think t...
[ 5, 3, 5, -1 ]
[ 4, 3, 2, -1 ]
[ "iclr_2018_rkc_hGb0Z", "iclr_2018_rkc_hGb0Z", "iclr_2018_rkc_hGb0Z", "rJAdpODef" ]
iclr_2018_rkvDssyRb
Multi-Advisor Reinforcement Learning
We consider tackling a single-agent RL problem by distributing it to n learners. These learners, called advisors, endeavour to solve the problem from a different focus. Their advice, taking the form of action values, is then communicated to an aggregator, which is in control of the system. We show that the local planning method for the advisors is critical and that none of the ones found in the literature is flawless: the \textit{egocentric} planning overestimates values of states where the other advisors disagree, and the \textit{agnostic} planning is inefficient around danger zones. We introduce a novel approach called \textit{empathic} and discuss its theoretical aspects. We empirically examine and validate our theoretical findings on a fruit collection task.
rejected-papers
The reviewers agree this is an interesting paper with interesting ideas, but it is not ready for publication in its current form. In particular, there is a need for strong empirical results.
train
[ "rJbjUB3JM", "B1m1clFlM", "H18ZJWAgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents Multi-Advisor RL (MAd-RL), a formalized view of many forms of performing RL by training multiple learners, then aggregating their results into a single decision-making agent. Previous work and citations are plentiful and complete, and the field of study is a promising approach to RL. Through M...
[ 4, 4, 4 ]
[ 4, 4, 5 ]
[ "iclr_2018_rkvDssyRb", "iclr_2018_rkvDssyRb", "iclr_2018_rkvDssyRb" ]
iclr_2018_rJIgf7bAZ
An inference-based policy gradient method for learning options
In the pursuit of increasingly intelligent learning systems, abstraction plays a vital role in enabling sophisticated decisions to be made in complex environments. The options framework provides a formalism for such abstraction over sequences of decisions. However, most models require that options be given a priori, presumably specified by hand, which is neither efficient nor scalable. Indeed, it is preferable to learn options directly from interaction with the environment. Despite several efforts, this remains a difficult problem: many approaches require access to a model of the environmental dynamics, and inferred options are often not interpretable, which limits our ability to explain the system behavior for verification or debugging purposes. In this work we develop a novel policy gradient method for the automatic learning of policies with options. This algorithm uses inference methods to simultaneously improve all of the options available to an agent, and thus can be employed in an off-policy manner, without observing option labels. Experimental results show that the options learned can be interpreted. Further, we find that the method presented here is more sample efficient than existing methods, leading to faster and more stable learning of policies with options.
rejected-papers
The reviewers are unanimous that this is an interesting paper, but that ultimately the empirical results are not sufficiently promising to warrant the added complexity.
train
[ "B1LVBmqgf", "B1F3lO2lG", "ByV--6Xbz", "SyDFSOp7z", "H1uwNUJMz", "BkGEN81fz", "BkV07I1fM", "H1ci0S1zG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper treats option discovery as being analogous to discovering useful latent variables. The proposed formulation assumes there is a policy over options, which invokes an option’s policy to select actions at each timestep until the option’s termination function is activated. A contribution of this paper is ...
[ 3, 3, 4, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJIgf7bAZ", "iclr_2018_rJIgf7bAZ", "iclr_2018_rJIgf7bAZ", "iclr_2018_rJIgf7bAZ", "ByV--6Xbz", "B1LVBmqgf", "B1F3lO2lG", "iclr_2018_rJIgf7bAZ" ]
iclr_2018_Bk-ofQZRb
TD Learning with Constrained Gradients
Temporal Difference Learning with function approximation is known to be unstable. Previous work like \citet{sutton2009fast} and \citet{sutton2009convergent} has presented alternative objectives that are stable to minimize. However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly \citep{mnih2015human}. In this work we propose a constraint on the TD update that minimizes change to the target values. This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation. We validate this update by applying our technique to deep Q-learning, and training without a target network. We also show that adding this constraint on Baird's counterexample keeps Q-learning from diverging.
rejected-papers
The reviewers agree this paper is not yet ready for publication.
train
[ "H1JLMpYlM", "rk0cH1cgM", "rJHtH-clM", "SJjLLwp7z", "SkQRzPJzz", "S1HLi7-bG", "HJmlAOcxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "public" ]
[ "Summary: This paper tackles the issue of combining TD learning methods with function approximation. The proposed algorithm constrains the gradient update to deal with the fact that canonical TD with function approximation ignores the impact of changing the weights on the target of the TD learning rule. Results wit...
[ 2, 3, 4, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_Bk-ofQZRb", "iclr_2018_Bk-ofQZRb", "iclr_2018_Bk-ofQZRb", "SkQRzPJzz", "iclr_2018_Bk-ofQZRb", "HJmlAOcxM", "iclr_2018_Bk-ofQZRb" ]
iclr_2018_SyF7Erp6W
Learning to play slot cars and Atari 2600 games in just minutes
Machine learning algorithms for controlling devices will need to learn quickly, with few trials. Such a goal can be attained with concepts borrowed from continental philosophy and formalized using tools from the mathematical theory of categories. Illustrations of this approach are presented on a cyberphysical system: the slot car game, and also on Atari 2600 games.
rejected-papers
This paper does not seem completely appropriate for ICLR.
val
[ "SyI9nuuez", "SJyMuKclM", "B1_Afzy-f", "SJxoffIbM", "Hk51IWIZG", "H1nySWL-G", "Sy_hfWU-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors argue that many machine learning systems need a large amount of data and long training times. To mend those shortcomings, their proposed algorithm takes the novel approach of combining mathematical category theory and continental philosophy. Instead of computation units, the concept of entities and a ...
[ 3, 2, 3, -1, -1, -1, -1 ]
[ 2, 5, 1, -1, -1, -1, -1 ]
[ "iclr_2018_SyF7Erp6W", "iclr_2018_SyF7Erp6W", "iclr_2018_SyF7Erp6W", "Sy_hfWU-z", "SyI9nuuez", "SJyMuKclM", "B1_Afzy-f" ]
iclr_2018_rJ3fy0k0Z
Deterministic Policy Imitation Gradient Algorithm
The goal of imitation learning (IL) is to enable a learner to imitate an expert’s behavior given the expert’s demonstrations. Recently, generative adversarial imitation learning (GAIL) has successfully achieved this even on complex continuous control tasks. However, GAIL requires a huge number of interactions with the environment during training. We believe that IL algorithms could be more applicable to real-world environments if the number of interactions could be reduced. To this end, we propose a model-free, off-policy IL algorithm for continuous control. Our algorithm has two key components: 1) a deterministic policy that allows us to derive a novel type of policy gradient, which we call the deterministic policy imitation gradient (DPIG), and 2) a function, which we call the state screening function (SSF), introduced to avoid noisy policy updates with states that are not typical of those appearing in the expert’s demonstrations. Experimental results show that our algorithm can achieve the goal of IL with at least tens of times fewer interactions than GAIL on a variety of continuous control tasks.
rejected-papers
All of the reviewers found some aspects of the formulation and experiments interesting, but they found the paper hard to read and understand. Some of the components of the technique such as the state screening function (SSF) seem ad-hoc and heuristic without much justification. Please improve the exposition and remove the unnecessary component of the technique, or come up with better justifications.
train
[ "S1_na_OlG", "B1nuCculG", "S1tVQ5Kef", "SypN6BT7M", "SknsnHTQG", "S1WJnrpmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes to extend the determinist policy gradient algorithm to learn from demonstrations. The method is combined with a type of density estimation of the expert to avoid noisy policy updates. It is tested on Mujoco tasks with expert demonstrations generated with a pre-trained network. \n\nI found the p...
[ 6, 5, 5, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_rJ3fy0k0Z", "iclr_2018_rJ3fy0k0Z", "iclr_2018_rJ3fy0k0Z", "S1_na_OlG", "B1nuCculG", "S1tVQ5Kef" ]
iclr_2018_B1mSWUxR-
Softmax Q-Distribution Estimation for Structured Prediction: A Theoretical Interpretation for RAML
Reward augmented maximum likelihood (RAML), a simple and effective learning framework to directly optimize towards the reward function in structured prediction tasks, has led to a number of impressive empirical successes. RAML incorporates task-specific reward by performing maximum-likelihood updates on candidate outputs sampled according to an exponentiated payoff distribution, which gives higher probabilities to candidates that are close to the reference output. While RAML is notable for its simplicity, efficiency, and its impressive empirical successes, the theoretical properties of RAML, especially the behavior of the exponentiated payoff distribution, have not been examined thoroughly. In this work, we introduce softmax Q-distribution estimation, a novel theoretical interpretation of RAML, which reveals the relation between RAML and Bayesian decision theory. The softmax Q-distribution can be regarded as a smooth approximation of the Bayes decision boundary, and the Bayes decision rule is achieved by decoding with this Q-distribution. We further show that RAML is equivalent to approximately estimating the softmax Q-distribution, with the temperature τ controlling approximation error. We perform two experiments, one on synthetic data of multi-class classification and one on real data of image captioning, to demonstrate the relationship between RAML and the proposed softmax Q-distribution estimation, verifying our theoretical analysis. Additional experiments on three structured prediction tasks with rewards defined on sequential (named entity recognition), tree-based (dependency parsing) and irregular (machine translation) structures show notable improvements over maximum likelihood baselines.
rejected-papers
There are some interesting ideas discussed in the paper, but the reviewers expressed difficulty understanding the motivation and the theoretical results. The experiments do not seem convincing in showing that SQDML achieves significant gains. Overall, the paper needs either stronger and clearer theoretical results, or more convincing experiments for publication at ICLR.
train
[ "S1B8Oq7ez", "BJNeA-cgG", "B16z4vAgG", "H12WLWmXM", "r10sQDqZz", "rJQYzP9bz", "H1S5eP9Zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper dives deeper into understand reward augmented maximum likelihood training. Overall, I feel that the paper is hard to understand and that it would benefit from more clarity, e.g., section 3.3 states that decoding from the softmax q-distribution is similar to the Bayes decision rule. Please elaborate on t...
[ 5, 5, 6, -1, -1, -1, -1 ]
[ 2, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_B1mSWUxR-", "iclr_2018_B1mSWUxR-", "iclr_2018_B1mSWUxR-", "iclr_2018_B1mSWUxR-", "B16z4vAgG", "BJNeA-cgG", "S1B8Oq7ez" ]
iclr_2018_rk3b2qxCW
Policy Gradient For Multidimensional Action Spaces: Action Sampling and Entropy Bonus
In recent years deep reinforcement learning has been shown to be adept at solving sequential decision processes with high-dimensional state spaces such as in the Atari games. Many reinforcement learning problems, however, involve high-dimensional discrete action spaces as well as high-dimensional state spaces. In this paper, we develop a novel policy gradient methodology for the case of large multidimensional discrete action spaces. We propose two approaches for creating parameterized policies: LSTM parameterization and a Modified MDP (MMDP) giving rise to Feed-Forward Network (FFN) parameterization. Both of these approaches provide expressive models to which backpropagation can be applied for training. We then consider entropy bonus, which is typically added to the reward function to enhance exploration. In the case of high-dimensional action spaces, calculating the entropy and the gradient of the entropy requires enumerating all the actions in the action space and running forward and backpropagation for each action, which may be computationally infeasible. We develop several novel unbiased estimators for the entropy bonus and its gradient. Finally, we test our algorithms on two environments: a multi-hunter multi-rabbit grid game and a multi-agent multi-arm bandit problem.
rejected-papers
The paper has some interesting ideas around auto-regressive policies and estimating their entropy for exploration. The use of autoregressive policies in RL is not particularly novel, and the estimate of entropy for such models is straightforward. Finally, the experiments focus on very simple tasks.
train
[ "HJs1WiFlM", "ry9X12Fgz", "Bk4yQ1Alz", "H1fJ6l2QM", "B1K3hg3mG", "S1kwnx2Qz", "rJWe3ehmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "In this paper, the authors suggest introducing dependencies between actions in RL settings with multi-dimensional action spaces by way of two mechanisms (using an RNN and making partial action specification as part of the state); they then introduce entropy pseudo-rewards whose maximization corresponds to joint ...
[ 6, 5, 5, -1, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1, -1 ]
[ "iclr_2018_rk3b2qxCW", "iclr_2018_rk3b2qxCW", "iclr_2018_rk3b2qxCW", "HJs1WiFlM", "ry9X12Fgz", "Bk4yQ1Alz", "iclr_2018_rk3b2qxCW" ]
iclr_2018_SyPMT6gAb
Variance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization
Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts. One of the major challenges of off-policy learning is to derive counterfactual estimators that also have low variance and thus low generalization error. In this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedback. Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need for iterating through all training samples to compute the sample variance regularization used in prior work. With neural network policies, our end-to-end training algorithms using variational divergence minimization showed significant improvement over conventional baseline algorithms and are also consistent with our theoretical results.
rejected-papers
The reviewers agree that the paper studies and interesting problem with an interesting approach. The reviewers raised some concerns regarding the theoretical and empirical results. The authors have made changes to the paper, but given the theoretical nature of the paper and the extent of changes, another review is needed before publication.
train
[ "H1rSYwTQG", "BJMTTZjNM", "r18QCeLNz", "r1Sed5uez", "BJ3VB6_xG", "r1gHCrFlM", "Bk-ucO6Xf", "HJvH6w67f", "HJkIbD6mG" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Dear reviewer,\n\nThanks a lot for the inspiring comments; below are our point-by-point responses, and we hope the revision can address these concerns and make the paper more solid.\n\n- (Citations formatting) We have fixed the missing parenthesis for end-of-sentence citations. We apologize for the inconvenien...
[ -1, -1, -1, 4, 5, 7, -1, -1, -1 ]
[ -1, -1, -1, 4, 5, 3, -1, -1, -1 ]
[ "BJ3VB6_xG", "r18QCeLNz", "HJkIbD6mG", "iclr_2018_SyPMT6gAb", "iclr_2018_SyPMT6gAb", "iclr_2018_SyPMT6gAb", "iclr_2018_SyPMT6gAb", "r1Sed5uez", "r1gHCrFlM" ]
iclr_2018_By5ugjyCb
PACT: Parameterized Clipping Activation for Quantized Neural Networks
Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To address this cost, a number of quantization schemes have been proposed - but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations. This paper proposes a novel quantization scheme for activations during training - that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. PACT allows quantizing activations to arbitrary bit precisions, while achieving much better accuracy relative to published state-of-the-art quantization schemes. We show, for the first time, that both weights and activations can be quantized to 4 bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets. We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance due to a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories.
rejected-papers
All of the reviewers agree that the experimental results are promising and the proposed activation function enables a decent degree of quantization. However, the main concern with the approach is its limited novelty compared to previous work on clipped activation functions. Minor comments: - Even though PACT is very similar to ReLU, the names are very different. - Please include a plot showing the proposed activation function as well.
test
[ "B1wlzrslf", "BkgW-ZteG", "S1-ToUJWz", "HJ1GluT7z", "r1VDedaXz", "SJgql_6Xf", "S1cLKVpZG", "Bk08vE6ZM", "HkQb4VabM", "By90AXpZM", "rJ1FCma-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "The authors have addressed my concerns, and clarified a misunderstanding of the baseline that I had, which I appreciate. I do think that it is a solid contribution with thorough experiments. I still keep my original rating of the paper because the method presented is heavily based on previous works, which limits t...
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_By5ugjyCb", "iclr_2018_By5ugjyCb", "iclr_2018_By5ugjyCb", "S1-ToUJWz", "B1wlzrslf", "BkgW-ZteG", "BkgW-ZteG", "S1-ToUJWz", "B1wlzrslf", "rJ1FCma-z", "iclr_2018_By5ugjyCb" ]
iclr_2018_HJDV5YxCW
Heterogeneous Bitwidth Binarization in Convolutional Neural Networks
Recent work has shown that performing inference with fast, very-low-bitwidth (e.g., 1 to 2 bits) representations of values in models can yield surprisingly accurate results. However, although 2-bit approximated networks have been shown to be quite accurate, 1 bit approximations, which are twice as fast, have restrictively low accuracy. We propose a method to train models whose weights are a mixture of bitwidths, that allows us to more finely tune the accuracy/speed trade-off. We present the “middle-out” criterion for determining the bitwidth for each value, and show how to integrate it into training models with a desired mixture of bitwidths. We evaluate several architectures and binarization techniques on the ImageNet dataset. We show that our heterogeneous bitwidth approximation achieves superlinear scaling of accuracy with bitwidth. Using an average of only 1.4 bits, we are able to outperform state-of-the-art 2-bit architectures.
rejected-papers
All of the reviewers find the approach interesting, but they have reservations regarding the practical impact and empirical evaluation. The paper needs improvement both on the motivation and on the experimental results by including more baseline methods and neural architectures.
train
[ "SkYPj5Hez", "H1Jn8QYeG", "SklRZUJ-G", "rkrJS-_Qz", "H1iBLz8Xf", "r1UdREX-M", "BJzQtmQbG", "rkiavXQ-z", "rkNfvQQbG", "Hy2OIQXWf", "H162w9x-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "This paper suggests a method for varying the degree of quantization in a neural network during the forward propagation phase.\n\nThough this is an important direction to investigate, there are several issues:\n\n1. Comparison with previous results is misleading:\na.\t1-bit weights and floating point activations: R...
[ 6, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJDV5YxCW", "iclr_2018_HJDV5YxCW", "iclr_2018_HJDV5YxCW", "H1iBLz8Xf", "BJzQtmQbG", "H162w9x-z", "SkYPj5Hez", "H1Jn8QYeG", "SklRZUJ-G", "iclr_2018_HJDV5YxCW", "iclr_2018_HJDV5YxCW" ]
iclr_2018_SJgf6Z-0W
Predicting Multiple Actions for Stochastic Continuous Control
We introduce a new approach to estimate continuous actions using actor-critic algorithms for reinforcement learning problems. Policy gradient methods usually predict one continuous action estimate or the parameters of a presumed distribution (most commonly Gaussian) for any given state, which might not be optimal as it may not capture the complete description of the target distribution. Our approach instead predicts M actions with the policy network (actor) and then uniformly samples one action at each state, during both training and testing. This allows the agent to learn a simple stochastic policy that has an easy-to-compute expected return. In all experiments, this facilitates better exploration of the state space during training and converges to a better policy.
rejected-papers
All of the reviewers agree that the paper presents strong experimental results on continuous control benchmarks. The reviewers raised concerns regarding the analysis of the behavior of the algorithm, the possible impact of the technique, and requested more references and comparison with related work. The paper has significantly improved since the initial submission, but is still not fully satisfactory to the reviewers, partly due to the large extent of the changes needed.
train
[ "Sk4kCOIEM", "B1B3e0Oef", "HJoqViKlM", "HyRqndjez", "rJrwYUp7G", "r1l7FUa7M", "Hy9nKUamM", "r125_LpQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The relationship with SVG0 is that both are off-policy stochastic algorithms learned with the reparametrization trick. Currently the comparisons you have are with DDPG (deterministic, off-policy), A3C (stochastic, on-policy) and MAPG (stochastic, on-policy). So it is difficult to separate which gains are simply due...
[ -1, 3, 7, 4, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1 ]
[ "Hy9nKUamM", "iclr_2018_SJgf6Z-0W", "iclr_2018_SJgf6Z-0W", "iclr_2018_SJgf6Z-0W", "HJoqViKlM", "HyRqndjez", "B1B3e0Oef", "iclr_2018_SJgf6Z-0W" ]
iclr_2018_r1BRfhiab
The Principle of Logit Separation
We consider neural network training, in applications in which there are many possible classes, but at test time, the task is to identify only whether the given example belongs to a specific class, which can be different in different applications of the classifier. For instance, this is the case in an image search engine. We consider the Single Logit Classification (SLC) task: training the network so that at test time, it would be possible to accurately identify if the example belongs to a given class, based only on the output logit for this class. We propose a natural principle, the Principle of Logit Separation, as a guideline for choosing and designing losses suitable for the SLC task. We show that the cross-entropy loss function is not aligned with the Principle of Logit Separation. In contrast, there are known loss functions, as well as novel batch loss functions that we propose, which are aligned with this principle. In total, we study seven loss functions. Our experiments show that indeed in almost all cases, losses that are aligned with the Principle of Logit Separation obtain a 20%-35% relative performance improvement in the SLC task, compared to losses that are not aligned with it. We therefore conclude that the Principle of Logit Separation sheds light on an important property of the most common loss functions used by neural network classifiers.
rejected-papers
All of the reviewers have found some aspects of the formulation interesting, but they raised concerns regarding the practical use of the experimental setup.
train
[ "B1mIOqdlz", "ryA44e5xf", "HyCT3vclM", "HJxlZlx7G", "B1Jp1ggmG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "The paper is well-written which makes it easy to understand its main\nthrust - choosing loss functions so that at test time one can\naccurately (and speedily) determine whether an example is in a given\nclass, ie loss functions which are aligned with the \"Principle of Logit\nSeparation (PoLS)\". \n\nWhen the \"Pr...
[ 6, 3, 4, -1, -1 ]
[ 3, 4, 4, -1, -1 ]
[ "iclr_2018_r1BRfhiab", "iclr_2018_r1BRfhiab", "iclr_2018_r1BRfhiab", "iclr_2018_r1BRfhiab", "ryA44e5xf" ]
iclr_2018_SJD8YjCpW
Balanced and Deterministic Weight-sharing Helps Network Performance
Weight-sharing plays a significant role in the success of many deep neural networks, by increasing memory efficiency and incorporating useful inductive priors about the problem into the network. But understanding how weight-sharing can be used effectively in general is a topic that has not been studied extensively. Chen et al. (2015) proposed HashedNets, which augments a multi-layer perceptron with a hash table, as a method for neural network compression. We generalize this method into a framework (ArbNets) that allows for efficient arbitrary weight-sharing, and use it to study the role of weight-sharing in neural networks. We show that common neural networks can be expressed as ArbNets with different hash functions. We also present two novel hash functions, the Dirichlet hash and the Neighborhood hash, and use them to demonstrate experimentally that balanced and deterministic weight-sharing helps with the performance of a neural network.
rejected-papers
An empirical study of weight sharing for neural networks is interesting, but all of the reviewers found the experiments insufficient without enough baseline comparisons.
test
[ "rJqGz8tlf", "rybTRlqgz", "BkmW-pbMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The manuscript advocates studying weight sharing in a more systematic way by proposing ArbNets, which define the weight-sharing function as a hash function. In this framework, any existing neural network architecture, including CNNs and RNNs, could be incorporated into ArbNets.\n\nThe manuscript is not well wri...
[ 4, 4, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_SJD8YjCpW", "iclr_2018_SJD8YjCpW", "iclr_2018_SJD8YjCpW" ]
iclr_2018_ByL48G-AW
Simple Nearest Neighbor Policy Method for Continuous Control Tasks
We design a new policy, called a nearest neighbor policy, that does not require any optimization for simple, low-dimensional continuous control tasks. As this policy does not require any optimization, it allows us to investigate the underlying difficulty of a task without being distracted by the optimization difficulty of a learning algorithm. We propose two variants, one that retrieves an entire trajectory based on a pair of initial and goal states, and the other retrieving a partial trajectory based on a pair of current and goal states. We test the proposed policies on five widely used benchmark continuous control tasks with a sparse reward: Reacher, Half Cheetah, Double Pendulum, Cart Pole and Mountain Car. We observe that the majority (the first four) of these tasks, which have been considered difficult, are easily solved by the proposed policies with high success rates, indicating that their reported difficulty may well have been due to optimization difficulty. Our work suggests that it is necessary to evaluate any sophisticated policy learning algorithm on more challenging problems in order to truly assess the advances it provides.
rejected-papers
Evaluating simple baselines for continuous control is important and nearest neighbor search methods are interesting. However, the reviewers think that the paper lacks citation and comparison to some prior work and evaluation on more challenging benchmarks.
train
[ "r1yRvu84z", "H1cX_a21z", "BkVx4mcez", "H1q18tjxM", "ryjPFIp7f", "SkIkY8pQM", "BJEAOU6Xf", "HyHjM9BmG", "B1NqTmtgz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public" ]
[ "Thanks for the author's response. As with the other reviewers, I continue to believe this is more suited for a workshop submission.\n\nAs I cited in my review (and hopefully this also addresses the follow-up comment), I don't believe there are recent, accepted papers which only use these simple tasks (except for s...
[ -1, 4, 4, 3, -1, -1, -1, -1, -1 ]
[ -1, 5, 4, 5, -1, -1, -1, -1, -1 ]
[ "SkIkY8pQM", "iclr_2018_ByL48G-AW", "iclr_2018_ByL48G-AW", "iclr_2018_ByL48G-AW", "H1cX_a21z", "BkVx4mcez", "H1q18tjxM", "BkVx4mcez", "iclr_2018_ByL48G-AW" ]
iclr_2018_rkw-jlb0W
Deep Lipschitz networks and Dudley GANs
Generative adversarial networks (GANs) have enjoyed great success, but often suffer from instability during training, which motivates many attempts to resolve this issue. A theoretical explanation for the cause of the instability is provided in Wasserstein GAN (WGAN), and the Wasserstein distance is proposed to stabilize training. Though WGAN is indeed more stable than previous GANs, it takes many more iterations and time to train. This is because the ways to ensure the Lipschitz condition in WGAN (such as weight clipping) significantly limit the capacity of the network. In this paper, we argue that it is beneficial to ensure the Lipschitz condition as well as maintain sufficient capacity and expressiveness of the network. To facilitate this, we develop both theoretical and practical building blocks, with which one can construct different neural networks using a large range of metrics, as well as ensure the Lipschitz condition and sufficient capacity of the networks. Using the proposed building blocks, and a special choice of metric called the Dudley metric, we propose Dudley GAN, which outperforms the state of the art in both convergence and sample quality. We discover a natural link between Dudley GAN (and its extension) and empirical risk minimization, which gives rise to a generalization analysis.
rejected-papers
Dear authors, While the reviewers appreciated your analysis, they all expressed concerns about the significance of the paper. Indeed, given the plethora of GAN variants, it would have been good to get stronger evidence about the advantages of the Dudley GAN. Even though I agree it is difficult to provide a clean comparison between generative models because of the lack of clear objectives, the log-likelihood on one dataset and the generated images are limited evidence. For instance, it would have been nice to show robustness results, as this is a clear issue with GANs.
train
[ "SyXkyOqJz", "H17IrS0lz", "BkYfM_Rgz", "S1TXw7EfG", "BJV5PQ4MM", "r1f5sXVMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Ensuring the Lipschitz condition in neural nets is essential for stabilizing GANs. This paper proposes two constraint-based optimizations to ensure the Lipschitz condition, and these proposed approaches maintain sufficient capacity, as well as expressiveness of the network. A simple theoretical result is given by empirical risk...
[ 8, 5, 5, -1, -1, -1 ]
[ 4, 3, 1, -1, -1, -1 ]
[ "iclr_2018_rkw-jlb0W", "iclr_2018_rkw-jlb0W", "iclr_2018_rkw-jlb0W", "BkYfM_Rgz", "H17IrS0lz", "SyXkyOqJz" ]
iclr_2018_SJtChcgAW
Cheap DNN Pruning with Performance Guarantees
Recent DNN pruning algorithms have succeeded in reducing the number of parameters in fully connected layers often with little or no drop in classification accuracy. However most of the existing pruning schemes either have to be applied during training or require a costly retraining procedure after pruning to regain classification accuracy. In this paper we propose a cheap pruning algorithm based on difference of convex (DC) optimisation. We also provide theoretical analysis for the growth in the Generalisation Error (GE) of the new pruned network. Our method can be used with any convex regulariser and allows for a controlled degradation in classification accuracy while being orders of magnitude faster than competing approaches. Experiments on common feedforward neural networks show that for sparsity levels above 90% our method achieves 10% higher classification accuracy compared to Hard Thresholding.
rejected-papers
Dear authors, While the reviewers appreciated the idea, the significant loss of accuracy was a concern. Even though you made significant changes to the submission, it is unfortunately unrealistic to ask the reviewers to do another review of a heavily modified version in such a short amount of time. Thus, I cannot accept this paper for publication but I encourage you to address the reviewers' concerns and resubmit at a later conference.
train
[ "Sk4UIHOlM", "rybUQFOgf", "Skse_Ydxz", "rJmwxI6XG", "HkemB8pQM", "SJu6kG_zM", "HkxGjb_zz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The manuscript mainly presents a cheap pruning algorithm for dense layers of DNNs. The proposed algorithm is an improvement of Net-Trim (Aghasi et al., 2016), which is to enforce the weights to be sparse.\n\nThe main contribution of this manuscript is that the non-convex optimization problem in (Aghasi et al., 201...
[ 6, 5, 5, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SJtChcgAW", "iclr_2018_SJtChcgAW", "iclr_2018_SJtChcgAW", "rybUQFOgf", "iclr_2018_SJtChcgAW", "Sk4UIHOlM", "Skse_Ydxz" ]
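The Hard Thresholding baseline that the pruning abstract above compares against can be sketched as follows; this is an illustrative sketch under my own naming (it is not the paper's DC-optimisation method, which additionally controls the degradation in accuracy):

```python
import numpy as np

def hard_threshold(w, sparsity):
    """Hard-thresholding pruning: keep only the largest-magnitude
    fraction (1 - sparsity) of weights and zero out the rest."""
    k = int(round(w.size * (1.0 - sparsity)))
    if k == 0:
        return np.zeros_like(w)
    # k-th largest magnitude becomes the cut-off.
    thresh = np.sort(np.abs(w).ravel())[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(100,))
pruned = hard_threshold(w, sparsity=0.9)
print((pruned != 0).sum())  # 10 weights survive at 90% sparsity
```

At the 90%+ sparsity levels discussed in the abstract, such magnitude-only pruning ignores how weights interact across a layer, which is the gap the paper's method targets.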
iclr_2018_Sy-tszZRZ
Bounding and Counting Linear Regions of Deep Neural Networks
In this paper, we study the representational power of deep neural networks (DNN) that belong to the family of piecewise-linear (PWL) functions, based on PWL activation units such as rectifier or maxout. We investigate the complexity of such networks by studying the number of linear regions of the PWL function. Typically, a PWL function from a DNN can be seen as a large family of linear functions acting on millions of such regions. We directly build upon the work of Montúfar et al. (2014), Montúfar (2017), and Raghu et al. (2017) by refining the upper and lower bounds on the number of linear regions for rectified and maxout networks. In addition to achieving tighter bounds, we also develop a novel method to perform exact enumeration or counting of the number of linear regions with a mixed-integer linear formulation that maps the input space to output. We use this new capability to visualize how the number of linear regions changes while training DNNs.
rejected-papers
Dear authors, The reviewers appreciated your work and recognized the importance of theoretical work to understand the behaviour of deep nets. That said, the improvement over existing work (especially Montufar, 2017) is minor. This, combined with the limited attraction of such work, means that the paper will not be accepted. I acknowledge the major modifications done but it is up to the reviewers to decide whether or not they agree to re-review a significantly updated version.
val
[ "SkfMvJqez", "SkSZLZ5gf", "r1knUinef", "BJpMV_p7G", "B126mvTXG", "SykyXwBmz", "HySqKpdZM", "H146OpdZf", "BkVEOadZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper investigates the complexity of neural networks with piecewise linear activations by studying the number of linear regions of the representable functions. It builds on previous works Montufar et al. (2014) and Raghu et al. (2017) and presents improved bounds on the maximum number of linear regions. It al...
[ 6, 4, 6, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Sy-tszZRZ", "iclr_2018_Sy-tszZRZ", "iclr_2018_Sy-tszZRZ", "iclr_2018_Sy-tszZRZ", "iclr_2018_Sy-tszZRZ", "iclr_2018_Sy-tszZRZ", "r1knUinef", "SkSZLZ5gf", "SkfMvJqez" ]
iclr_2018_H1l8sz-AW
Improving generalization by regularizing in L2 function space
Learning rules for neural networks necessarily include some form of regularization. Most regularization techniques are conceptualized and implemented in the space of parameters. However, it is also possible to regularize in the space of functions. Here, we propose to measure networks in an L2 Hilbert space, and test a learning rule that regularizes the distance a network can travel through L2-space each update. This approach is inspired by the slow movement of gradient descent through parameter space as well as by the natural gradient, which can be derived from a regularization term upon functional change. The resulting learning rule, which we call Hilbert-constrained gradient descent (HCGD), is thus closely related to the natural gradient but regularizes a different and more calculable metric over the space of functions. Experiments show that the HCGD is efficient and leads to considerably better generalization.
rejected-papers
Dear authors, Despite the desirable goal, that is, to move away from regularization in parameter space toward regularization in function space, the reviewers all thought that the paper was not convincing enough, both in the choice of the particular regularization and in the experimental section. While I appreciate that you have done a major rework of the paper, the rebuttal period should not be used for that and we cannot expect the reviewers to do a complete re-review of a new version. This paper thus cannot be accepted to ICLR.
train
[ "H1L5a2I1z", "S1-zlmikf", "HkI5OXsxz", "H1JU_P27M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "\nGENERAL IMPRESSION:\n\nOverall, the revised version of the paper is greatly improved. The new derivation of the method yields a much simpler interpretation, although the relation to the natural gradient remains weak (see below). The experimental evaluation is now far more solid. Multiple data sets and network ar...
[ 6, 5, 4, -1 ]
[ 3, 4, 3, -1 ]
[ "iclr_2018_H1l8sz-AW", "iclr_2018_H1l8sz-AW", "iclr_2018_H1l8sz-AW", "iclr_2018_H1l8sz-AW" ]
iclr_2018_ry831QWAb
BLOCK-NORMALIZED GRADIENT METHOD: AN EMPIRICAL STUDY FOR TRAINING DEEP NEURAL NETWORK
In this paper, we propose a generic and simple strategy for utilizing stochastic gradient information in optimization. The technique essentially contains two consecutive steps in each iteration: 1) computing and normalizing each block (layer) of the mini-batch stochastic gradient; 2) selecting an appropriate step size to update the decision variable (parameter) towards the negative of the block-normalized gradient. We conduct extensive empirical studies on various non-convex neural network optimization problems, including multilayer perceptrons, convolutional neural networks and recurrent neural networks. The results indicate that the block-normalized gradient can help accelerate the training of neural networks. In particular, we observe that the normalized gradient methods having a constant step size with occasional decay, such as SGD with momentum, perform better on deep convolutional neural networks, while those with adaptive step sizes, such as Adam, perform better on recurrent neural networks. In addition, we observe that this line of methods can lead to solutions with better generalization properties, which is confirmed by the performance improvement over strong baselines.
rejected-papers
The paper proposes to study the impact of normalizing the gradient for each layer before applying existing techniques such as SG + momentum, Adam or AdaGrad. The study is done on a reasonable number of datasets and, after the reviewers' comments, confidence intervals have been added, although Table 1 puts results in bold even when they are not statistically significant. The paper, however, lacks a proper analysis of the results. Two main things could be improved: - Normalization does not always have the same effect but the reasons for this are not discussed. This need not be done theoretically but a more thorough analysis would have been appreciated. - There is no hyperparameter tuning, which means that the results are heavily dependent on which hyperparameters were chosen. Thus, it is hard to draw any conclusion. Regarding the seemingly conflicting remarks of the two reviewers, it all depends on what the paper is trying to achieve. If it tries to show that it is state-of-the-art, then comparing to state-of-the-art algorithms on every dataset is crucial. If it tries to study the impact of one specific change, in this case layer normalization, on the optimization, then comparing to the vanilla version is fine. The paper seems to try to address the latter, so it is OK if it is not compared to all the state-of-the-art algorithms. However, proper tuning of existing methods is still required. Ultimately, a better understanding of layer normalization could be useful but the paper is not convincing enough to provide that understanding. There is no need to increase the number of datasets; the paper should rather focus on designing setups to test and validate hypotheses.
train
[ "BkTXGMKlf", "H1O8NOKeM", "HJl9PL1zM", "r1Na9Th7z", "By48Tpn7z", "SyqMa63mM", "S1YM2p2mz", "HJzDuph7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper proposes a family of first-order stochastic optimization schemes based on (1) normalizing (batches of) stochastic gradient descents and (2) choosing from a step size updating scheme. The authors argue that iterative first-order optimization algorithms can be interpreted as a choice of an update directi...
[ 4, 9, 2, -1, -1, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ry831QWAb", "iclr_2018_ry831QWAb", "iclr_2018_ry831QWAb", "BkTXGMKlf", "HJl9PL1zM", "HJl9PL1zM", "BkTXGMKlf", "H1O8NOKeM" ]
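As a concrete illustration of steps 1) and 2) described in the block-normalized gradient abstract above, here is a minimal sketch of one update; the function name, the toy "layers", and the step size are my own illustrative choices, not the authors' implementation:

```python
import numpy as np

def block_normalized_step(params, grads, lr=0.1, eps=1e-8):
    """One step of a block (per-layer) normalized gradient method:
    each layer's gradient is divided by its own L2 norm before the
    usual descent update, so every layer moves by roughly lr."""
    new_params = []
    for w, g in zip(params, grads):
        new_params.append(w - lr * g / (np.linalg.norm(g) + eps))
    return new_params

# Two "layers" with gradients of very different scale: after
# normalization both receive an update of magnitude close to lr.
params = [np.array([1.0, 1.0]), np.array([5.0])]
grads = [np.array([1000.0, 0.0]), np.array([1e-3])]
updated = block_normalized_step(params, grads, lr=0.1)
steps = [np.linalg.norm(u - w) for u, w in zip(updated, params)]
print(steps)  # both approximately 0.1
```

This equalizing of per-layer step magnitudes is what decouples the update direction from the raw gradient scale of each block.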
iclr_2018_H1pri9vTZ
Deep Function Machines: Generalized Neural Networks for Topological Layer Expression
In this paper we propose a generalization of deep neural networks called deep function machines (DFMs). DFMs act on vector spaces of arbitrary (possibly infinite) dimension, and we show that a family of DFMs is invariant to the dimension of input data; that is, the parameterization of the model does not directly hinge on the quality of the input (e.g. high-resolution images). Using this generalization we provide a new theory of universal approximation of bounded non-linear operators between function spaces. We then suggest that DFMs provide an expressive framework for designing new neural network layer types with topological considerations in mind. Finally, we introduce a novel architecture, RippLeNet, for resolution-invariant computer vision, which empirically achieves state-of-the-art invariance.
rejected-papers
The idea of extending deep nets to infinite dimensional inputs is interesting but, as the reviewers noted, the execution does not have the quality we can expect from an ICLR publication. I encourage the authors to consider the meaningful comments that were made and modify the paper accordingly.
train
[ "rkeYOm_lM", "SJ2P_-YgG", "SyjvRE9lG", "Hyy_AXnmM", "rk_s9M2mz", "ByftMXn7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper extends the framework of neural networks for finite-dimension to the case of infinite-dimension setting, called deep function machines. This theory seems to be interesting and might have further potential in applications.", "The main idea of this paper is to replace the feedforward summation\ny = f(W*...
[ 7, 3, 4, -1, -1, -1 ]
[ 1, 4, 3, -1, -1, -1 ]
[ "iclr_2018_H1pri9vTZ", "iclr_2018_H1pri9vTZ", "iclr_2018_H1pri9vTZ", "SyjvRE9lG", "rkeYOm_lM", "SJ2P_-YgG" ]
iclr_2018_rJma2bZCW
Three factors influencing minima in SGD
We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann-Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients. Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it is invariant under a simultaneous rescaling of each by the same amount. We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima; both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore by showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small.
rejected-papers
Dear authors, The reviewers agreed that the theoretical part lacked novelty and that the paper should focus on its experimental part, which at the moment is not strong enough to warrant publication. Regarding the theoretical part, here are the main concerns: - Even though it is used in previous works, the continuous time approximation of stochastic gradient descent overlooks its practical behaviour, especially since a good rule of thumb is to use as large a stepsize as possible (without reaching divergence), as for instance mentioned in The Marginal Value of Adaptive Gradient Methods in Machine Learning by Wilson et al. - The isotropic approximation is very strong and I don't know settings where this would hold. Since it seems central to your statements, I wonder what can be deduced from the obtained results. - I do not think the Gaussian assumption is unreasonable and I am fine with it. Though there are clearly cases where this will not be true, it will probably be OK most of the time. I encourage the authors to focus on the experimental part in a resubmission.
train
[ "ByBJy2Oef", "BkC-HgcxG", "H19fnlceG", "rk7v6dZGM", "ryzJa_ZMG", "ryUHh_bGG", "rkHxndbMM", "SkX0jdWGM", "BJOPjdZGG", "HJ4-tdWzz", "rJJ6atxzG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "The paper investigates how the learning rate and mini-batch size in SGD impact the optima that the SGD algorithm finds.\nEmpirically, the authors argue that it was observed that larger learning rates converge to minima which are wider,\nand that smaller learning rates more often lead to convergence to minima ...
[ 6, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJma2bZCW", "iclr_2018_rJma2bZCW", "iclr_2018_rJma2bZCW", "rJJ6atxzG", "iclr_2018_rJma2bZCW", "ByBJy2Oef", "BkC-HgcxG", "BkC-HgcxG", "BkC-HgcxG", "H19fnlceG", "iclr_2018_rJma2bZCW" ]
iclr_2018_rk3mjYRp-
Diffusing Policies : Towards Wasserstein Policy Gradient Flows
Policy gradient methods often achieve better performance when the change in policy is limited to a small Kullback-Leibler divergence. We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region). This is done in the discrete and continuous multi-armed bandit settings with entropy regularisation. We show that in the small steps limit with respect to the Wasserstein distance W2, policy dynamics are governed by the heat equation, following the Jordan-Kinderlehrer-Otto result. This means that policies undergo diffusion and advection, concentrating near actions with high reward. This helps elucidate the nature of convergence in the probability matching setup, and provides justification for empirical practices such as Gaussian policy priors and additive gradient noise.
rejected-papers
Dear authors, The reviewers all agreed that this was an interesting topic but that the novelty, either theoretical or empirical, was lacking. Thus, the paper cannot be accepted to ICLR in its current state, but I encourage the authors to make the recommended updates and to push their idea further.
train
[ "Sywhphuez", "Syy9DXtef", "rk0iJ0FgM", "rJmn-DTmG", "rJdx9LamM", "B1m4OIamM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper ‘Diffusing policies: Towards Wasserstein policy gradient flows’ explores \nthe connections between reinforcement learning and the theory of quadratic optimal transport (i.e.\nusing the Wasserstein_2 as a regularizer of an iterative problem that converges toward\nan optimal policy). Following a classical ...
[ 4, 5, 4, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1 ]
[ "iclr_2018_rk3mjYRp-", "iclr_2018_rk3mjYRp-", "iclr_2018_rk3mjYRp-", "Sywhphuez", "Syy9DXtef", "rk0iJ0FgM" ]
iclr_2018_HyxjwgbRZ
Convergence rate of sign stochastic gradient descent for non-convex functions
The sign stochastic gradient descent method (signSGD) utilizes only the sign of the stochastic gradient in its updates. Since signSGD carries out one-bit quantization of the gradients, it is extremely practical for distributed optimization where gradients need to be aggregated from different processors. For the first time, we establish convergence rates for signSGD on general non-convex functions under transparent conditions. We show that the rate of signSGD to reach first-order critical points matches that of SGD in terms of number of stochastic gradient calls, up to roughly a linear factor in the dimension. We carry out simple experiments to explore the behaviour of sign gradient descent (without the stochasticity) close to saddle points and show that it often helps completely avoid them without using either stochasticity or curvature information.
rejected-papers
Dear authors, After carefully reading the reviews, the rebuttal, and going through the paper, I regret to inform you that this paper does not meet the requirements for publication at ICLR. While the variance analysis is definitely of interest, the reality of the algorithm does not match the claims. The theoretical rate is worse than that of SGD but this could be an artefact of the analysis. Sadly, the experimental setup is lacking in several ways: - It is not yet clear whether escaping saddle points is really an issue in deep learning as the loss function is still poorly understood. - This analysis is done in the noiseless setting despite your argument being based around the variance of the gradients. - You report the test error on CIFAR-10. While interesting and required for an ML paper, you introduce an optimization algorithm and so the quantity that matters the most is the speed at which you achieve a given training accuracy. Also, your table lists the value of the test accuracy rather than the speed of increase. Thus, you test the generalization ability of your algorithm while making claims about the optimization performance.
train
[ "rJ6zUEaEf", "SytBqV64f", "Sy6g0wDxz", "S1CO_KVez", "rkMJQKYxz", "rkZ_o074M", "ryqhm0QEM", "S1-hX_XVG", "rySnzNa7G", "BkF5Z467M", "ByUdq7pmf", "S1NYkVpQG", "HJJB3OFmM", "rkzrZEXMz", "HyZFydp-G", "HJngWo3-M", "H1cWJi2Wf", "r18PT52-z" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "In what sense is the result far worse than Alistarh et al.?\n\nWe have now validated empirically that for resnet-20 on cifar-10, the squared gradient 1-norm dominates the squared gradient 2-norm by a factor O(d). Also the stochastic gradient variance is O(d).\n\nThe closest thing to our result in Alistarh et al. i...
[ -1, -1, 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rkZ_o074M", "S1-hX_XVG", "iclr_2018_HyxjwgbRZ", "iclr_2018_HyxjwgbRZ", "iclr_2018_HyxjwgbRZ", "S1NYkVpQG", "ByUdq7pmf", "BkF5Z467M", "Sy6g0wDxz", "HyZFydp-G", "iclr_2018_HyxjwgbRZ", "HJJB3OFmM", "r18PT52-z", "HyZFydp-G", "HJngWo3-M", "rkMJQKYxz", "Sy6g0wDxz", "S1CO_KVez" ]
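The one-bit update rule described in the signSGD abstract above can be sketched in a few lines; this is an illustrative toy run (the quadratic objective, step size, and iteration count are my own choices, not the paper's experiments):

```python
import numpy as np

def signsgd_step(w, stochastic_grad, lr):
    # signSGD: move each coordinate by a fixed amount lr in the direction
    # opposite the sign of its (stochastic) gradient -- only one bit of
    # gradient information per coordinate is used.
    return w - lr * np.sign(stochastic_grad)

# Toy run on f(w) = ||w||^2 / 2, whose exact gradient is w itself.
w = np.array([2.0, -3.0, 0.5])
for _ in range(400):
    w = signsgd_step(w, w, lr=0.01)
# Each coordinate ends within one step size (0.01) of the minimizer at 0,
# since a fixed-magnitude step cannot settle closer than lr.
print(np.abs(w).max())
```

The fixed step magnitude is also why, in practice, signSGD needs a decaying step size to converge rather than oscillate around a critical point.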
iclr_2018_B1uvH_gC-
Parametric Manifold Learning Via Sparse Multidimensional Scaling
We propose a metric-learning framework for computing distance-preserving maps that generate low-dimensional embeddings for a certain class of manifolds. We employ Siamese networks to solve the problem of least squares multidimensional scaling for generating mappings that preserve geodesic distances on the manifold. In contrast to previous parametric manifold learning methods we show a substantial reduction in training effort enabled by the computation of geodesic distances in a farthest point sampling strategy. Additionally, the use of a network to model the distance-preserving map reduces the complexity of the multidimensional scaling problem and leads to an improved non-local generalization of the manifold compared to analogous non-parametric counterparts. We demonstrate our claims on point-cloud data and on image manifolds and show a numerical analysis of our technique to facilitate a greater understanding of the representational power of neural networks in modeling manifold data.
rejected-papers
Dear authors, Thank you for your submission to ICLR. Sadly, the reviewers were not convinced by the novelty of your approach nor by its experimental results. Thus, your paper cannot be accepted to ICLR.
train
[ "Bku1giNxf", "rkTKyhFxG", "H1iRKhYxf", "B1-O-VImM", "SkTaJVLQM", "SJlng48Qz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper describes a manifold learning method that adapts the old ideas of multidimensional scaling, with geodesic distances in particular, to neural networks. The goal is to switch from a non-parametric to a parametric method and hence to have a straightforward out-of-sample extension.\n\nThe paper has several m...
[ 5, 4, 3, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_B1uvH_gC-", "iclr_2018_B1uvH_gC-", "iclr_2018_B1uvH_gC-", "Bku1giNxf", "H1iRKhYxf", "rkTKyhFxG" ]
iclr_2018_Sk0pHeZAW
Sparse Regularized Deep Neural Networks For Efficient Embedded Learning
Deep learning is becoming more widespread in its application due to its power in solving complex classification problems. However, deep learning models often require large memory and energy consumption, which may prevent them from being deployed effectively on embedded platforms, limiting their applications. This work addresses the problem by proposing the {\em Weight Reduction Quantisation} method for compressing the memory footprint of the models, including reducing the number of weights and the number of bits to store each weight. Besides, by applying sparsity-inducing regularization, our work focuses on speeding up stochastic variance reduced gradient (SVRG) optimization on non-convex problems. Our method, mini-batch SVRG with ℓ1 regularization on non-convex problems, has faster and smoother convergence rates than SGD through the use of adaptive learning rates. Experimental evaluation of our approach uses the MNIST and CIFAR-10 datasets on the LeNet-300-100 and LeNet-5 models, showing our approach can reduce the memory requirements in both the convolutional and fully connected layers by up to 60× without affecting their test accuracy.
rejected-papers
Dear authors, I agree with the reviewers that the paper tries to do several things at once and the results are not that convincing. Overall, this work is mostly incremental, which is fine if there is no issue in the execution. Thus, I regret to inform you that this paper will not be accepted to ICLR.
train
[ "SyPaSBDxz", "Hku4bLqgM", "SyRmCWAxf", "HklPfzTmG", "rk7ogG6Xz", "BkwVefp7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary: \nPaper proposes the compression method Delicate-SVRG-cumulative-L1 (combining minibatch SVRG with cumulative L1 regularization) which can significantly reduce the number of weights without affecting the test accuracy. Paper provides numerical experiments for MNIST and CIFAR-10 on LeNet-300-100 and LeNet-5...
[ 4, 4, 2, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1 ]
[ "iclr_2018_Sk0pHeZAW", "iclr_2018_Sk0pHeZAW", "iclr_2018_Sk0pHeZAW", "SyPaSBDxz", "Hku4bLqgM", "SyRmCWAxf" ]
iclr_2018_r1ISxGZRb
Generation and Consolidation of Recollections for Efficient Deep Lifelong Learning
Deep lifelong learning systems need to efficiently manage resources to scale to large numbers of experiences and non-stationary goals. In this paper, we explore the relationship between lossy compression and the resource constrained lifelong learning problem of function transferability. We demonstrate that lossy episodic experience storage can enable efficient function transferability between different architectures and algorithms at a fraction of the storage cost of lossless storage. This is achieved by introducing a generative knowledge distillation strategy that does not store any full training examples. As an important extension of this idea, we show that lossy recollections stabilize deep networks much better than lossless sampling in resource constrained settings of lifelong learning while avoiding catastrophic forgetting. For this setting, we propose a novel dual purpose recollection buffer used to both stabilize the recollection generator itself and an accompanying reasoning model.
rejected-papers
The reviewers were uniformly unimpressed with the contributions of this paper. The method is somewhat derivative and the paper is quite long and lacks clarity. Moreover, the tactic of storing autoencoder variables rather than full samples is clearly an improvement, but it still does not allow the method to scale to a truly lifelong learning setting.
val
[ "ryfA9SYez", "S1iEoBnlf", "B1GkSWIWM", "ryAT4O6QM", "rJ9c-u67z", "HyMjedamG", "ByV-gu6mG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper proposes an architecture for efficient deep lifelong learning. The key idea is to use recollection generator (autoencoder) to remember the previously processed data in a compact representation. Then when training a reasoning model, recollections generated from the recollection generator are used with rea...
[ 5, 5, 5, -1, -1, -1, -1 ]
[ 2, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_r1ISxGZRb", "iclr_2018_r1ISxGZRb", "iclr_2018_r1ISxGZRb", "B1GkSWIWM", "S1iEoBnlf", "ryfA9SYez", "iclr_2018_r1ISxGZRb" ]
iclr_2018_S14EogZAZ
Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning
Understanding physical phenomena is a key component of human intelligence and enables physical interaction with previously unseen environments. In this paper, we study how an artificial agent can autonomously acquire this intuition through interaction with the environment. We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error. Thereby, we bypass the need to explicitly model physical knowledge within the policy. We are specifically interested in tasks that require the agent to reach a given goal state that may be different for every new trial. To this end, we propose a deep reinforcement learning framework that learns policies which are parametrized by a goal. We validated the model on a toy example navigating in a grid world with different target positions and in a block stacking task with different target structures of the final tower. In contrast to prior work, our policies show better generalization across different goals.
rejected-papers
The authors present a toy stacking task where the goal is to stack blocks to match a given configuration, and a method that is a slightly modified DQN algorithm where the target configuration is observed by the network as well as the current state. There are a few problems with this paper. First, the method lacks novelty - it is very similar to DQN. Second, the claims of learning physical intuitions is not borne out by the method or experimental results. Third, the tasks are very simple and there is no held-out test set of target configurations.
train
[ "HJUMdjteM", "H1uUNm9ef", "B171xj6eM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a model for learning physical interaction skills through trial and error. They use end-to-end deep reinforcement learning - the DQN model - including the task goal as an input in order to improve generalization over several tasks, and shaping the reward depending on the visual differences be...
[ 5, 4, 5 ]
[ 4, 4, 3 ]
[ "iclr_2018_S14EogZAZ", "iclr_2018_S14EogZAZ", "iclr_2018_S14EogZAZ" ]
iclr_2018_rJssAZ-0-
TRL: Discriminative Hints for Scalable Reverse Curriculum Learning
Deep reinforcement learning algorithms have proven successful in a variety of domains. However, tasks with sparse rewards remain challenging when the state space is large. Goal-oriented tasks are among the most typical problems in this domain, where a reward can only be received when the final goal is accomplished. In this work, we propose a potential solution to such problems with the introduction of an experience-based tendency reward mechanism, which provides the agent with additional hints based on discriminative learning on past experiences during an automated reverse curriculum. This mechanism not only provides dense additional learning signals on what states lead to success, but also allows the agent to retain only this tendency reward instead of the whole histories of experience during multi-phase curriculum learning. We extensively study the advantages of our method on standard sparse reward domains like Maze and Super Mario Bros and show that our method performs more efficiently and robustly than prior approaches in tasks with long time horizons and large state spaces. In addition, we demonstrate that, using an optional keyframe scheme with a very small quantity of key states, our approach can solve difficult robot manipulation challenges directly from perception and sparse rewards.
rejected-papers
The paper proposes an extension to the reverse curriculum RL approach which uses a discriminator to label states as being on a goal trajectory or off the goal trajectory. The paper is well-written, with good empirical results on a number of task domains. However, the method relies on a number of assumptions on the ability of the agent to reset itself and the environment which are unrealistic and limiting, and beg the question as to why use the given method at all if this capability is assumed to exist. Overall, the method lacks significance and quality, and the motivation is not clear enough.
train
[ "B129GzFxf", "r1Kg9atxz", "BkFL6KCxf", "S1-osRMEM", "ryCOkjmmz", "rJRVQimmf", "Bkq9MWEmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a new method for reverse curriculum generation by gradually reseting the environment in phases and classifying states that tend to lead to success. It additionally proposes a mechanism for learning from human-provided \"key states\".\n\nThe ideas in this paper are quite nice, but the paper has ...
[ 4, 4, 5, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rJssAZ-0-", "iclr_2018_rJssAZ-0-", "iclr_2018_rJssAZ-0-", "Bkq9MWEmM", "BkFL6KCxf", "r1Kg9atxz", "B129GzFxf" ]
iclr_2018_BkeC_J-R-
Combination of Supervised and Reinforcement Learning For Vision-Based Autonomous Control
Reinforcement learning methods have recently achieved impressive results on a wide range of control problems. However, especially with complex inputs, they still require an extensive amount of training data in order to converge to a meaningful solution. This limitation largely prohibits their usage for complex input spaces such as video signals, and it is still impossible to use them for a number of complex problems in real-world environments, including many of those for video-based control. Supervised learning, on the contrary, is capable of learning on a relatively small number of samples; however, it does not take into account reward-based control policies and is not capable of providing independent control policies. In this article we propose a model-free control method, which uses a combination of reinforcement and supervised learning for autonomous control and paves the way towards policy-based control in real-world environments. We use the SpeedDreams/TORCS video game to demonstrate that our approach requires far fewer samples (hundreds of thousands versus millions or tens of millions) compared to state-of-the-art reinforcement learning techniques on similar data, and at the same time outperforms both supervised and reinforcement learning approaches in terms of quality. Additionally, we demonstrate the applicability of the method to MuJoCo control problems.
rejected-papers
The proposed method combines supervised pretraining given some expert data and further uses the supervision to regularize the Q-updates to prevent the agent from exploring 'nonsense' directions. There are significant problems with the paper: the approach is not novel, the assumption of large amounts of expert data is problematic, and the claim of vastly accelerated learning is not supported empirically, either in the main paper or in the additional MuJoCo experiments added in the appendix.
train
[ "BJCXSFZgz", "SJmcBU_ez", "Bk05KWcgz", "HkJ5oC37M", "rJyGRRhmM", "Sku3FC27f", "ry-wuA3QG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes leveraging labelled controlled data to accelerate reinforcement-based learning of a control policy. It provides two main contributions: pre-training the policy network of a DDPG agent in a supervised manner so that it begins in a reasonable state-action distribution and regularizing the Q-update...
[ 4, 5, 3, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_BkeC_J-R-", "iclr_2018_BkeC_J-R-", "iclr_2018_BkeC_J-R-", "SJmcBU_ez", "BJCXSFZgz", "Bk05KWcgz", "iclr_2018_BkeC_J-R-" ]
iclr_2018_HktXuGb0-
Reward Estimation via State Prediction
Reinforcement learning typically requires carefully designed reward functions in order to learn the desired behavior. We present a novel reward estimation method that is based on a finite sample of optimal state trajectories from expert demonstrations and can be used for guiding an agent to mimic the expert behavior. The optimal state trajectories are used to learn a generative or predictive model of the “good” states distribution. The reward signal is computed by a function of the difference between the actual next state acquired by the agent and the predicted next state given by the learned generative or predictive model. With this inferred reward function, we perform standard reinforcement learning in the inner loop to guide the agent to learn the given task. Experimental evaluations across a range of tasks demonstrate that the proposed method produces superior performance compared to standard reinforcement learning with either complete or sparse hand-engineered rewards. Furthermore, we show that our method successfully enables an agent to learn good actions directly from expert player video of games such as Super Mario Bros and Flappy Bird.
rejected-papers
The paper presents a method for learning from expert state trajectories using a similarity metric in a learned feature space. The approach uses only the states, not the actions, of the expert. The reviewers were variously dissatisfied with the novelty, the theoretical presentation, and the robustness of the approach. Though it empirically works better than the baselines (without expert demos), this is not surprising, especially since thousands of expert demonstrations were used. This would have been more impressive with fewer demonstrations, or more novelty in the method, or more evidence of robustness when the agent's state is far from the demonstrations.
train
[ "S1ucldOlf", "S1qg275gM", "SkwCEXalM", "r1WNJ7KfM", "B1TekQFMG", "B1tP0GFfG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors propose to solve the inverse reinforcement learning problem of inferring the reward function from observations of a behaving agent, i.e., trajectories, albeit without observing state-action pairs as is common in IRL, but only with the state sequences. This is an interesting problem setting. But, apparent...
[ 4, 5, 3, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1 ]
[ "iclr_2018_HktXuGb0-", "iclr_2018_HktXuGb0-", "iclr_2018_HktXuGb0-", "S1ucldOlf", "S1qg275gM", "SkwCEXalM" ]
iclr_2018_BJgVaG-Ab
AUTOMATA GUIDED HIERARCHICAL REINFORCEMENT LEARNING FOR ZERO-SHOT SKILL COMPOSITION
An obstacle that prevents the wide adoption of (deep) reinforcement learning (RL) in control systems is its need for a large number of interactions with the environment in order to master a skill. The learned skill usually generalizes poorly across domains and re-training is often necessary when presented with a new task. We present a framework that combines techniques in \textit{formal methods} with \textit{hierarchical reinforcement learning} (HRL). The set of techniques we provide allows for the convenient specification of tasks with logical expressions, learns hierarchical policies (meta-controller and low-level controllers) with well-defined intrinsic rewards using any RL method, and is able to construct new skills from existing ones without additional learning. We evaluate the proposed methods in a simple grid world simulation as well as in simulation on a Baxter robot.
rejected-papers
The authors make an argument for constructing an MDP from the formal structures of temporal logic and associated finite state automata and then applying RL to learn a policy for the MDP. This does not provide a solution for low-level skill composition, because there are discontinuities between states, but does provide a means for high level skill composition. The reviewers agreed that the paper suffered from sloppy writing and unclear methods. They had concerns about correctness, and were not impressed by the novelty (combining TL and RL has been done previously). These concerns tip this paper to rejection.
val
[ "ryRVwuOeM", "SJnC0yKez", "Syp3P75gz", "BJlOg2YXz", "r1i7y2FQG", "Sk3PqjK7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper argues for structured task representations (in TLTL) and shows how these representations can be used to reuse learned subtasks to decrease learning time.\n\nOverall, the paper is sloppily put together, so it's a little difficult to assess the completeness of the ideas. The problem being solved is not lit...
[ 5, 3, 4, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_BJgVaG-Ab", "iclr_2018_BJgVaG-Ab", "iclr_2018_BJgVaG-Ab", "ryRVwuOeM", "SJnC0yKez", "Syp3P75gz" ]
iclr_2018_rJFOptp6Z
Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification
Knowledge distillation is a potential solution for model compression. The idea is to make a small student network imitate the target of a large teacher network, so that the student network can be competitive with the teacher one. Most previous studies focus on model distillation in the classification task, where they propose different architectures and initializations for the student network. However, the classification task alone is not enough, and other related tasks such as regression and retrieval are barely considered. To solve this problem, in this paper, we take face recognition as a breaking point and propose model distillation with knowledge transfer from face classification to alignment and verification. By selecting appropriate initializations and targets in the knowledge transfer, the distillation can be made easier in non-classification tasks. Experiments on the CelebA and CASIA-WebFace datasets demonstrate that the student network can be competitive with the teacher one in alignment and verification, and even surpasses the teacher network under specific compression rates. In addition, to achieve stronger knowledge transfer, we also use a common initialization trick to improve the distillation performance of classification. Evaluations on the CASIA-WebFace and large-scale MS-Celeb-1M datasets show the effectiveness of this simple trick.
rejected-papers
The authors propose a distillation-based approach that is applied to transfer knowledge from a classification network to non-classification tasks (face alignment and verification). The writing is very imprecise: for instance, it repeatedly refers to a 'simple trick' rather than actually defining the procedure. The method is also described in very task-specific ways that make it hard to understand how or whether it would generalize to other problems.
train
[ "B1736j_gz", "SkX-5ijlG", "rJR-8EAgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes knowledge distillation on two very specific non-classification tasks. I find the scope of the paper is quite limited and the approach seems hard to generalize to other tasks. There is also very limited technical contribution. I think the paper might be a better fit in conferences on faces such a...
[ 3, 5, 3 ]
[ 4, 5, 4 ]
[ "iclr_2018_rJFOptp6Z", "iclr_2018_rJFOptp6Z", "iclr_2018_rJFOptp6Z" ]
iclr_2018_Sktm4zWRb
Soft Value Iteration Networks for Planetary Rover Path Planning
Value iteration networks are an approximation of the value iteration (VI) algorithm implemented with convolutional neural networks to make VI fully differentiable. In this work, we study these networks in the context of robot motion planning, with a focus on applications to planetary rovers. The key challenge in learning-based motion planning is to learn a transformation from terrain observations to a suitable navigation reward function. In order to deal with complex terrain observations and policy learning, we propose a value iteration recurrence, referred to as the soft value iteration network (SVIN). SVIN is designed to produce more effective training gradients through the value iteration network. It relies on a soft policy model, where the policy is represented with a probability distribution over all possible actions, rather than a deterministic policy that returns only the best action. We demonstrate the effectiveness of the proposed method in robot motion planning scenarios. In particular, we study the application of SVIN to very challenging problems in planetary rover navigation and present early training results on data gathered by the Curiosity rover that is currently operating on Mars.
rejected-papers
The authors have proposed a 'soft' version of VIN which is differentiable, where the cost function is trained by behavior cloning / imitation learning from expert/computer trajectories. The method is applied to a toy problem and to real historical data from Mars rovers. The paper does not acknowledge or compare against other methods, and the contribution is unclear, as is the justification for some aspects of the method. Additionally, it is difficult to interpret the relevance or significance of the results (45% correct).
train
[ "Bknbc_kxG", "Sksl-n_xf", "HJTvyeceM", "BkJ2XCpQz", "HyB1eAp7M", "rk130paQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary:\n\nThe Value-Iteration-Network (VIN) architecture is modified to have a softmax loss function at the end. This is termed SVIN. It is then applied in a behavior cloning manner to the task of rover path planning from start to goal from overhead imagery.\n\nSimulation results on binary obstacle maps and usin...
[ 3, 3, 4, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1 ]
[ "iclr_2018_Sktm4zWRb", "iclr_2018_Sktm4zWRb", "iclr_2018_Sktm4zWRb", "Bknbc_kxG", "Sksl-n_xf", "HJTvyeceM" ]
iclr_2018_S1xDcSR6W
Hybed: Hyperbolic Neural Graph Embedding
Neural embeddings have been used with great success in Natural Language Processing (NLP) where they provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks. The success of neural embeddings has prompted significant amounts of research into applications in domains other than language. One such domain is graph-structured data, where embeddings of vertices can be learned that encapsulate vertex similarity and improve performance on tasks including edge prediction and vertex labelling. For both NLP and graph-based tasks, embeddings in high-dimensional Euclidean spaces have been learned. However, recent work has shown that the appropriate isometric space for embedding complex networks is not the flat Euclidean space, but a negatively curved hyperbolic space. We present a new concept that exploits these recent insights and propose learning neural embeddings of graphs in hyperbolic space. We provide experimental evidence that hyperbolic embeddings significantly outperform Euclidean embeddings on vertex classification tasks for several real-world public datasets.
rejected-papers
This paper does not meet the acceptance bar this year, and thus I must recommend it for rejection.
train
[ "HyiISdgef", "rynh2mGgf", "SyVv9AjWG", "HkfDunaZf", "BJ930g14M", "SyfSY3hmf", "rydq92kzf", "Hk6Ye6CWG", "Hk88g60Zf", "r1hExT0WG", "SJaMeTRbz", "SJRleTC-G", "r1YtIJCbM", "HJ5atCj-G", "rkKEbeO-z", "Sk2scfwWM", "SyFo-HpeM", "Sy6BFU_eG", "r1aJmPBef", "SylAxmQeM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "author", "author", "public", "author", ...
[ "== Preamble ==\n\nAs promised, I have read the updated paper from scratch and this is my revised review. My original review is kept below for reference. My original review had rating \"4: Ok but not good enough - rejection\".\n\n== Updated review ==\n\nThe revised improves upon the original submission in several w...
[ 4, 7, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 2, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1xDcSR6W", "iclr_2018_S1xDcSR6W", "iclr_2018_S1xDcSR6W", "iclr_2018_S1xDcSR6W", "Hk88g60Zf", "iclr_2018_S1xDcSR6W", "Hk6Ye6CWG", "rkKEbeO-z", "rynh2mGgf", "HJ5atCj-G", "SyVv9AjWG", "HkfDunaZf", "SylAxmQeM", "iclr_2018_S1xDcSR6W", "Sk2scfwWM", "HyiISdgef", "Sy6BFU_eG", "...
iclr_2018_SJd0EAy0b
Generalized Graph Embedding Models
Many types of relations in physical, biological, social and information systems can be modeled as homogeneous or heterogeneous concept graphs. Hence, learning from and with graph embeddings has drawn a great deal of research interest recently, but only ad hoc solutions have been obtained thus far. In this paper, we conjecture that the one-shot supervised learning mechanism is a bottleneck in improving the performance of graph embedding learning algorithms, and propose to extend this by introducing a multi-shot unsupervised learning framework. Empirical results on several real-world data sets show that the proposed model consistently and significantly outperforms existing state-of-the-art approaches on knowledge base completion and graph-based multi-label classification tasks.
rejected-papers
This paper does not meet the acceptance bar this year, and thus I must recommend it for rejection.
train
[ "SJ5CeLYef", "S12o7fqlM", "Byp8oT3xf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is well-written and provides sufficient background on the knowledge graph tasks. The current state-of-the-art models are mentioned and the approach is evaluated against them. The proposed model is rather simple, so it is really surprising that it performs on par with or even outperforms existin...
[ 6, 4, 3 ]
[ 4, 4, 4 ]
[ "iclr_2018_SJd0EAy0b", "iclr_2018_SJd0EAy0b", "iclr_2018_SJd0EAy0b" ]
iclr_2018_S1viikbCW
TCAV: Relative concept importance testing with Linear Concept Activation Vectors
Despite neural networks’ high performance, the lack of interpretability has been the main bottleneck for their safe usage in practice. In domains with high stakes (e.g., medical diagnosis), gaining insights into the network is critical for gaining trust and being adopted. One of the ways to improve the interpretability of a NN is to explain the importance of a particular concept (e.g., gender) in prediction. This is useful for explaining the reasoning behind the network’s predictions, and for revealing any biases the network may have. This work aims to provide quantitative answers to \textit{the relative importance of concepts of interest} via concept activation vectors (CAV). In particular, this framework enables non-machine-learning experts to express concepts of interest and test hypotheses using examples (e.g., a set of pictures that illustrate the concept). We show that a CAV can be learned given a relatively small set of examples. Testing with CAV, for example, can answer whether a particular concept (e.g., gender) is more important in predicting a given class (e.g., doctor) than another set of concepts. Interpreting with CAV does not require any retraining or modification of the network. We show that many levels of meaningful concepts are learned (e.g., color, texture, objects, a person’s occupation), and we present CAV’s \textit{empirical deepdream}, where we maximize an activation using a set of example pictures. We show how various insights can be gained from the relative importance testing with CAV.
rejected-papers
This paper does not meet the acceptance bar this year, and thus I must recommend it for rejection.
train
[ "ryetNfcxG", "rkMtrl6bz", "H1EFxgC-f", "BykC2IA-G", "SJCVW6WfM", "r1C4WcxzM", "SyqwWceMM", "BJT3eceMG", "BkAu19xzM", "H18MkJAWf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "Summary\n---\nThis paper proposes the use of Concept Activation Vectors (CAVs) for interpreting deep models. It shows how concept activation vectors can be used to provide explanations where the user provides a concept (e.g., red) as a set of training examples and then the method provides explanations like \"If th...
[ 4, 4, 5, 3, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 2, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1viikbCW", "iclr_2018_S1viikbCW", "iclr_2018_S1viikbCW", "iclr_2018_S1viikbCW", "SyqwWceMM", "iclr_2018_S1viikbCW", "r1C4WcxzM", "BykC2IA-G", "H18MkJAWf", "iclr_2018_S1viikbCW" ]
iclr_2018_ryZ3KCy0W
Link Weight Prediction with Node Embeddings
Application of deep learning has been successful in various domains such as image recognition, speech recognition and natural language processing. However, the research on its application in graph mining is still in an early stage. Here we present the first generic deep learning approach to the graph link weight prediction problem based on node embeddings. We evaluate this approach with three different node embedding techniques experimentally and compare its performance with two state-of-the-art non deep learning baseline approaches. Our experiment results suggest that this deep learning approach outperforms the baselines by up to 70% depending on the dataset and embedding technique applied. This approach shows that deep learning can be successfully applied to link weight prediction to improve prediction accuracy.
rejected-papers
This paper does not meet the acceptance bar this year, and thus I must recommend it for rejection.
train
[ "SJFHMtueG", "H1Q3f2_ef", "BJ-Rv1neG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Although this paper aims at an interesting and important task, the reviewer does not feel it is ready to be published.\nBelow are some detailed comments:\n\nPros\n- Numerous public datasets are used for the experiments\n- Good introductions for some of the existing methods.\nCons\n- The novelty is limited. The bas...
[ 3, 3, 4 ]
[ 4, 5, 3 ]
[ "iclr_2018_ryZ3KCy0W", "iclr_2018_ryZ3KCy0W", "iclr_2018_ryZ3KCy0W" ]
iclr_2018_BJhxcGZCW
Generative Discovery of Relational Medical Entity Pairs
Online healthcare services can provide the general public with ubiquitous access to medical knowledge and reduce the information access cost for both individuals and societies. To promote these benefits, it is desirable to effectively expand the scale of high-quality yet novel relational medical entity pairs that embody rich medical knowledge in a structured form. To fulfill this goal, we introduce a generative model called Conditional Relationship Variational Autoencoder (CRVAE), which can discover meaningful and novel relational medical entity pairs without the requirement of additional external knowledge. Rather than discriminatively identifying the relationship between two given medical entities in a free-text corpus, we directly model and understand medical relationships from diversely expressed medical entity pairs. The proposed model introduces the generative modeling capacity of variational autoencoders to entity pairs, and has the ability to discover new relational medical entity pairs solely based on the existing entity pairs. Besides entity pairs, relationship-enhanced entity representations are obtained as another appealing benefit of the proposed method. Both quantitative and qualitative evaluations on real-world medical datasets demonstrate the effectiveness of the proposed method in generating relational medical entity pairs that are meaningful and novel.
rejected-papers
The authors seem to miss important related literature for their comparison. They also tuned hyperparameters and tested on the same validation set; they should split the data into train/validation/test sets. Reviews are just too low across the board to accept.
train
[ "Bym8Y7aXf", "r1hITk7ez", "H1_4279lf", "SJyOXNclf", "S1Gt1m6mM", "HJ-9qfp7M" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "Thanks for your review. \n\n1.\tThe medical entity pairs generated by the proposed model can be used to expand an existing knowledge graph with new entities as vertexes and relations as edges in a generative fashion. However, the KB completion task and the proposed entity pair discovery task have different objectives...
[ -1, 4, 4, 2, -1, -1 ]
[ -1, 3, 4, 5, -1, -1 ]
[ "SJyOXNclf", "iclr_2018_BJhxcGZCW", "iclr_2018_BJhxcGZCW", "iclr_2018_BJhxcGZCW", "H1_4279lf", "r1hITk7ez" ]
iclr_2018_ryA-jdlA-
A closer look at the word analogy problem
Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems. In this paper, I attempt to further our understanding of the subject by developing a simple but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns. My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining which relationship gets most highlighted. Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model’s ability to learn the relationship between a word pair the way a human does.
rejected-papers
This paper does not meet the acceptance bar this year, and thus I must recommend it for rejection.
train
[ "Hkkq0dDlM", "B1oFM1FeG", "ByWUtfoef" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new method for solving the analogy task, which can potentially provide some insight as to why word2vec recovers word analogies.\n\nIn my view, there are three main issues with this paper: (1) the assumptions it makes about our understanding of the analogy phenomenon; (2) the authors' understa...
[ 2, 3, 3 ]
[ 5, 4, 4 ]
[ "iclr_2018_ryA-jdlA-", "iclr_2018_ryA-jdlA-", "iclr_2018_ryA-jdlA-" ]