paper_id (string, length 19–21) | paper_title (string, length 8–170) | paper_abstract (string, length 8–5.01k) | paper_acceptance (string, 18 classes) | meta_review (string, length 29–10k) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2019_r1GbfhRqF7 | Kernel Change-point Detection with Auxiliary Deep Generative Models | Detecting the emergence of abrupt property changes in time series is a challenging problem. The kernel two-sample test has been studied for this task, as it makes fewer assumptions on the distributions than traditional parametric approaches. However, selecting kernels is non-trivial in practice. Although kernel selection for the two-sample test has been studied, the insufficient samples in the change-point detection problem hinder the success of those developed kernel selection algorithms. In this paper, we propose KL-CPD, a novel kernel learning framework for time series CPD that optimizes a lower bound of test power via an auxiliary generative model. With deep kernel parameterization, KL-CPD endows the kernel two-sample test with a data-driven kernel to detect different types of change-points in real-world applications. The proposed approach significantly outperformed other state-of-the-art methods in our comparative evaluation on benchmark datasets and simulation studies. | accepted-poster-papers | This paper proposes a new kernel learning framework for change-point detection by using a generative model. The reviewers agree that the paper is interesting and useful for the community. One of the reviewers had some issues with the paper, but those were resolved after the rebuttal. The other two reviewers have short reviews and somewhat low confidence, so it is difficult to tell how this paper stands among others that exist in the literature. Overall, given the consistent ratings from all the reviewers, I believe this paper can be accepted. | train | [
"rJgGC4xhAm",
"r1lg5FuhoQ",
"rkeOQQopa7",
"S1lkdzo7T7",
"rJlwQ-j76m",
"HJejUysX6m",
"BJxw5aOa2m",
"SJg2i1Pp3X"
] | [
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This is a good work, with a novel idea and strong experiment results. Besides kernel selection, using RNN is also very interesting to me; in fact, I hardly see a motivation when samples are iid. \n\nI have a few questions:\n\n- problem setting: do you really test if a point is a change-point, or if the sequence ma... | [
-1,
7,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_r1GbfhRqF7",
"iclr_2019_r1GbfhRqF7",
"HJejUysX6m",
"BJxw5aOa2m",
"SJg2i1Pp3X",
"r1lg5FuhoQ",
"iclr_2019_r1GbfhRqF7",
"iclr_2019_r1GbfhRqF7"
] |
iclr_2019_r1My6sR9tX | Unsupervised Learning via Meta-Learning | A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data. Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disentanglement, prediction, and other metrics. Instead, we develop an unsupervised meta-learning method that explicitly optimizes for the ability to learn a variety of tasks from small amounts of data. To do so, we construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks. Surprisingly, we find that, when integrated with meta-learning, relatively simple task construction mechanisms, such as clustering embeddings, lead to good performance on a variety of downstream, human-specified tasks. Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the embedding learned by four prior unsupervised learning methods. | accepted-poster-papers | Reviewers largely agree that the paper proposes a novel and interesting idea for unsupervised learning through meta learning and the empirical evaluation does a convincing job in demonstrating its effectiveness. There were some concerns on clarity/readability of the paper which seem to have been addressed by the authors. I recommend acceptance. | train | [
"Bye9Tru00X",
"HyeT4Mp5Rm",
"BkemmQw9Am",
"HJlNixAFAX",
"SklSFCJyaQ",
"HJldF4T8hQ",
"B1lYZ6Ht37",
"rkxrw3v-6X",
"rkgGnSmOam",
"HJlTH6DZT7",
"Skxii3P-a7",
"rkels4tJp7"
] | [
"author",
"public",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"Code for producing part of the results has been released, but for anonymity reasons, we will not link to it here. We will update the code and add a link to the code in the paper and here after the review process is complete.",
"Unsupervised meta-learning seems interesting and a new setting for few-shot learning ... | [
-1,
-1,
-1,
7,
6,
6,
8,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
3,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"HyeT4Mp5Rm",
"iclr_2019_r1My6sR9tX",
"HJlNixAFAX",
"iclr_2019_r1My6sR9tX",
"iclr_2019_r1My6sR9tX",
"iclr_2019_r1My6sR9tX",
"iclr_2019_r1My6sR9tX",
"HJldF4T8hQ",
"iclr_2019_r1My6sR9tX",
"B1lYZ6Ht37",
"HJldF4T8hQ",
"SklSFCJyaQ"
] |
iclr_2019_r1NJqsRctX | Auxiliary Variational MCMC | We introduce Auxiliary Variational MCMC, a novel framework for learning MCMC kernels that combines recent advances in variational inference with insights drawn from traditional auxiliary variable MCMC methods such as Hamiltonian Monte Carlo. Our framework exploits low dimensional structure in the target distribution in order to learn a more efficient MCMC sampler. The resulting sampler is able to suppress random walk behaviour and mix between modes efficiently, without the need to compute gradients of the target distribution. We test our sampler on a number of challenging distributions, where the underlying structure is known, and on the task of posterior sampling in Bayesian logistic regression. Code to reproduce all experiments is available at https://github.com/AVMCMC/AuxiliaryVariationalMCMC .
| accepted-poster-papers | The reviewers all argued for acceptance citing the novelty and potential of the work as strengths. They all found the experiments a little underwhelming and asked for more exciting empirical evaluation. The authors have addressed this somewhat by including multi-modal experiments in the discussion period. The paper would be more impactful if the authors could demonstrate significant improvements on really challenging problems where MCMC is currently prohibitively expensive, such as improving over HMC for highly parameterized deep neural networks. Overall, however, this is a very nice paper and warrants acceptance to the conference. | train | [
"rkexvaLO0m",
"rJgRC1whTm",
"BklskfP36Q",
"HkxClWDnpX",
"Skgpobvnam",
"rJgjUgvn6X",
"rkeKLLVj67",
"rkg-CmR5aQ",
"SkemMdaOa7",
"S1xI4X5Opm",
"HkenO3Ku6m",
"ryg3s8BzTX",
"rkeocNqY2X",
"B1xDvf7FnX"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like again to thank the reviewers for their suggested improvements. We have updated the paper to try and improve the exposition in line with their comments. The changes we have made are:\n\nExperiments\n1) We include as a baseline the independent proposal distribution (equation (8)) discussed at the start... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iclr_2019_r1NJqsRctX",
"B1xDvf7FnX",
"rkeocNqY2X",
"ryg3s8BzTX",
"ryg3s8BzTX",
"B1xDvf7FnX",
"rkg-CmR5aQ",
"SkemMdaOa7",
"S1xI4X5Opm",
"HkenO3Ku6m",
"iclr_2019_r1NJqsRctX",
"iclr_2019_r1NJqsRctX",
"iclr_2019_r1NJqsRctX",
"iclr_2019_r1NJqsRctX"
] |
iclr_2019_r1e13s05YX | Neural network gradient-based learning of black-box function interfaces | Deep neural networks work well at approximating complicated functions when provided with data and trained by gradient descent methods. At the same time, there is a vast amount of existing functions that programmatically solve different tasks in a precise manner eliminating the need for training. In many cases, it is possible to decompose a task to a series of functions, of which for some we may prefer to use a neural network to learn the functionality, while for others the preferred method would be to use existing black-box functions. We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions. We do so by approximating the black-box functionality with a differentiable neural network in a way that drives the base network to comply with the black-box function interface during the end-to-end optimization process. At inference time, we replace the differentiable estimator with its external black-box non-differentiable counterpart such that the base network output matches the input arguments of the black-box function. Using this ``Estimate and Replace'' paradigm, we train a neural network, end to end, to compute the input to black-box functionality while eliminating the need for intermediate labels. We show that by leveraging the existing precise black-box function during inference, the integrated model generalizes better than a fully differentiable model, and learns more efficiently compared to RL-based methods. | accepted-poster-papers | The paper focuses on hybrid pipelines that contain black-boxes and neural networks, making it difficult to train the neural components due to non-differentiability. 
As a solution, this paper proposes to replace black-box functions with neural modules that approximate them during training, so that end-to-end training can be used, but at test time use the original black box modules. The authors propose a number of variations: offline, online, and hybrid of the two, to train the intermediate auxiliary networks. The proposed model is shown to be effective on a number of synthetic datasets.
The reviewers and AC note the following potential weaknesses: (1) the reviewers found some of the experiment details to be scattered, (2) It was unclear what happens if there is a mismatch between the auxiliary network and the black box function it is approximating, especially if the function is one, like sorting, that is difficult for neural models to approximate, and (3) the text lacked description of real-world tasks for which such a hybrid pipeline would be useful.
The authors provide comments and a revision to address these concerns. They added a section that described the experiment setup to aid reproducibility, and incorporated more details in the results and related work, as suggested by the reviewers. Although these changes go a long way, some of the concerns, especially regarding the mismatch between the neural and black-box functions, still remain.
Overall, the reviewers agreed that the issues had been addressed to a sufficient degree, and the paper should be accepted. | test | [
"r1xKLxl5R7",
"BJx7vWFOCm",
"rJxBpMKOC7",
"SkeZmft_07",
"ByxwCZF_0Q",
"rkgznMOCnX",
"r1xRq4Zah7",
"rkg1n-hLhm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I have read the authors' reply. I am generally happy with the revision and will keep my rating. ",
"Thank you for the very detailed criticism and positive review.\n\nWe have updated the paper to address your concerns in the following way:\n\n(1) We have added an appendix section with experimental details for the... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"BJx7vWFOCm",
"rkgznMOCnX",
"iclr_2019_r1e13s05YX",
"rkg1n-hLhm",
"r1xRq4Zah7",
"iclr_2019_r1e13s05YX",
"iclr_2019_r1e13s05YX",
"iclr_2019_r1e13s05YX"
] |
iclr_2019_r1eEG20qKQ | Self-Tuning Networks: Bilevel Optimization of Hyperparameters using Structured Best-Response Functions | Hyperparameter optimization can be formulated as a bilevel optimization problem, where the optimal parameters on the training set depend on the hyperparameters. We aim to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases. We show how to construct scalable best-response approximations for neural networks by modeling the best-response as a single network whose hidden units are gated conditionally on the regularizer. We justify this approximation by showing the exact best-response for a shallow linear network with L2-regularized Jacobian can be represented by a similar gating mechanism. We fit this model using a gradient-based hyperparameter optimization algorithm which alternates between approximating the best-response around the current hyperparameters and optimizing the hyperparameters using the approximate best-response function. Unlike other gradient-based approaches, we do not require differentiating the training loss with respect to the hyperparameters, allowing us to tune discrete hyperparameters, data augmentation hyperparameters, and dropout probabilities. Because the hyperparameters are adapted online, our approach discovers hyperparameter schedules that can outperform fixed hyperparameter values. Empirically, our approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems. We call our networks, which update their own hyperparameters online during training, Self-Tuning Networks (STNs). | accepted-poster-papers | The paper proposes an approach to hyperparameter tuning based on bilevel optimization, and demonstrates promising empirical results. Reviewer's concerns seem to be addressed well in rebuttals and extended version of the paper. | train | [
"rkem6wHiCX",
"SJesafJKh7",
"rJlurZziCX",
"r1x8vz6F0X",
"Hyl-6laK07",
"B1eDIJ6FCQ",
"SklEgC2YCX",
"HJx1qfxi3Q",
"HkeVPgy5nQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank all the reviewers for their helpful comments. \n\nWe have made the following changes to the paper to address reviewer concerns:\n\n--- Improved clarity: We simplified our notation and included a table of notation in Appendix A. We added an additional figure which clarifies why hyperparameters must be samp... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_r1eEG20qKQ",
"iclr_2019_r1eEG20qKQ",
"B1eDIJ6FCQ",
"Hyl-6laK07",
"HkeVPgy5nQ",
"SJesafJKh7",
"HJx1qfxi3Q",
"iclr_2019_r1eEG20qKQ",
"iclr_2019_r1eEG20qKQ"
] |
iclr_2019_r1eVMnA9K7 | Unsupervised Control Through Non-Parametric Discriminative Rewards | Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research. We present an unsupervised learning algorithm to train agents to achieve perceptually-specified goals using only a stream of observations and actions. Our agent simultaneously learns a goal-conditioned policy and a goal achievement reward function that measures how similar a state is to the goal state. This dual optimization leads to a co-operative game, giving rise to a learned reward function that reflects similarity in controllable aspects of the environment instead of distance in the space of observations. We demonstrate the efficacy of our agent to learn, in an unsupervised manner, to reach a diverse set of goals on three domains -- Atari, the DeepMind Control Suite and DeepMind Lab. | accepted-poster-papers | This paper introduces an unsupervised algorithm to learn a goal-conditioned policy and the reward function by formulating a mutual information maximization problem. The idea is interesting, but the experimental studies seem not rigorous enough. In the final version, I would like to see some more detailed analysis of the results obtained by the baselines (pixel approaches), as well as careful discussion on the relationship with other related work, such as Variational Intrinsic Control. | train | [
"r1eSR828l4",
"HkxChlKQe4",
"HJlYLFbllN",
"SyxhS8Dpy4",
"SJgJEie3AQ",
"SygRw182s7",
"r1loqmzq0X",
"S1xjwi8dCm",
"SJljVd1DAm",
"BJeTg0KBAX",
"H1xQ8iScnX",
"BygCo9LNRQ",
"Hylt_SmER7",
"rylBLB7NAX",
"rJxo_xmE0X",
"SyeJ_RG40X",
"rkxnaAM4AX",
"HJgBJCzNCm",
"SygwYkYK3m",
"B1es20ASnQ"... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_re... | [
"@Reviewer 3, \n\nThis paper is more about a reasonable simplification + CPC-like MI information estimator of Variational Intrinsic Control (VIC). I don't particularly see the need to discuss the relationship with DIAYN given that DIAYN is itself a trivial simplification of VIC while the authors clearly derive thei... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1
] | [
"HkxChlKQe4",
"HJlYLFbllN",
"SyxhS8Dpy4",
"iclr_2019_r1eVMnA9K7",
"S1xjwi8dCm",
"iclr_2019_r1eVMnA9K7",
"iclr_2019_r1eVMnA9K7",
"rJxo_xmE0X",
"iclr_2019_r1eVMnA9K7",
"rylBLB7NAX",
"iclr_2019_r1eVMnA9K7",
"H1xQ8iScnX",
"SygRw182s7",
"SygRw182s7",
"SygwYkYK3m",
"H1xQ8iScnX",
"H1xQ8iScn... |
iclr_2019_r1efr3C9Ym | Interpolation-Prediction Networks for Irregularly Sampled Time Series | In this paper, we present a new deep learning architecture for addressing the problem of supervised learning with sparse and irregularly sampled multivariate time series. The architecture is based on the use of a semi-parametric interpolation network followed by the application of a prediction network. The interpolation network allows for information to be shared across multiple dimensions of a multivariate time series during the interpolation stage, while any standard deep learning model can be used for the prediction network. This work is motivated by the analysis of physiological time series data in electronic health records, which are sparse, irregularly sampled, and multivariate. We investigate the performance of this architecture on both classification and regression tasks, showing that our approach outperforms a range of baseline and recently proposed models.
| accepted-poster-papers | After much discussion, all reviewers agree that this paper should be accepted. Congratulations!! | train | [
"H1lAMFaq2Q",
"SJgvOS-92m",
"rJljJo--1N",
"H1gDJqZW1N",
"Hyx9K4r2C7",
"S1eQkYGnRX",
"rye9fvOkRQ",
"HygepnVy07",
"HJe2vXVJCQ",
"rkefjESCp7",
"HklR8g0p6Q",
"B1eEd30jpQ",
"rJgm6MxxT7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary:\nThe authors propose a framework for making predictions on a sparse, irregularly sampled time-series data. The proposed model consists of an interpolation module, and the prediction module, where the interpolation module models the missing values in using three outputs: smooth interpolation, non-smooth in... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_r1efr3C9Ym",
"iclr_2019_r1efr3C9Ym",
"Hyx9K4r2C7",
"HygepnVy07",
"S1eQkYGnRX",
"HJe2vXVJCQ",
"H1lAMFaq2Q",
"rkefjESCp7",
"rkefjESCp7",
"HklR8g0p6Q",
"rJgm6MxxT7",
"SJgvOS-92m",
"iclr_2019_r1efr3C9Ym"
] |
iclr_2019_r1eiqi09K7 | Riemannian Adaptive Optimization Methods | Several first order stochastic optimization methods commonly used in the Euclidean domain such as stochastic gradient descent (SGD), accelerated gradient descent or variance reduced methods have already been adapted to certain Riemannian settings. However, some of the most popular of these optimization tools - namely Adam, Adagrad and the more recent Amsgrad - remain to be generalized to Riemannian manifolds. We discuss the difficulty of generalizing such adaptive schemes to the most agnostic Riemannian setting, and then provide algorithms and convergence proofs for geodesically convex objectives in the particular case of a product of Riemannian manifolds, in which adaptivity is implemented across manifolds in the cartesian product. Our generalization is tight in the sense that choosing the Euclidean space as Riemannian manifold yields the same algorithms and regret bounds as those that were already known for the standard algorithms. Experimentally, we show faster convergence and to a lower train loss value for Riemannian adaptive methods over their corresponding baselines on the realistic task of embedding the WordNet taxonomy in the Poincare ball. | accepted-poster-papers | Dear authors,
All reviewers agreed that your work sheds new light on a popular class of algorithms and should thus be presented at ICLR.
Please make sure to implement all their comments in the final version. | train | [
"SyebSCK30X",
"SkxSY3VnAX",
"r1x3Gsjl0m",
"B1gnp5olCX",
"rJglV5jxCQ",
"rkx1x9oeAm",
"HkeQ6djp2X",
"SylzCu-S3Q",
"HygZGC2Csm",
"Bkx-VlBpFX",
"ryxWLeM6F7",
"ryxFFxGaYX",
"S1lynLJot7",
"HylTxaesFm"
] | [
"author",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public",
"public",
"public"
] | [
"Our bounds indeed hold for any isometry phi. \nEven with phi(m) = -m. \nWe understand your confusion.\n\nFirst, please note that if phi(m) = -m, although the past gradients in the exponential moving average would be alternated, i.e. multiplied by some (-1)^t, the current gradient is not modified by phi. \n\nThat b... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
-1,
-1,
-1,
-1,
-1
] | [
"SkxSY3VnAX",
"iclr_2019_r1eiqi09K7",
"HygZGC2Csm",
"SylzCu-S3Q",
"HkeQ6djp2X",
"iclr_2019_r1eiqi09K7",
"iclr_2019_r1eiqi09K7",
"iclr_2019_r1eiqi09K7",
"iclr_2019_r1eiqi09K7",
"ryxWLeM6F7",
"S1lynLJot7",
"HylTxaesFm",
"iclr_2019_r1eiqi09K7",
"S1lynLJot7"
] |
iclr_2019_r1f0YiCctm | Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters | While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements. Consequently, model size reduction has become an utmost goal in deep learning. A typical approach is to train a set of deterministic weights, while applying certain techniques such as pruning and quantization, in order that the empirical weight distribution becomes amenable to Shannon-style coding schemes. However, as shown in this paper, relaxing weight determinism and using a full variational distribution over weights allows for more efficient coding schemes and consequently higher compression rates. In particular, following the classical bits-back argument, we encode the network weights using a random sample, requiring only a number of bits corresponding to the Kullback-Leibler divergence between the sampled variational distribution and the encoding distribution. By imposing a constraint on the Kullback-Leibler divergence, we are able to explicitly control the compression rate, while optimizing the expected loss on the training set. The employed encoding scheme can be shown to be close to the optimal information-theoretical lower bound, with respect to the employed variational family. Our method sets new state-of-the-art in neural network compression, as it strictly dominates previous approaches in a Pareto sense: On the benchmarks LeNet-5/MNIST and VGG-16/CIFAR-10, our approach yields the best test performance for a fixed memory budget, and vice versa, it achieves the highest compression rates for a fixed test performance. | accepted-poster-papers | This paper proposes a novel coding scheme for compressing neural network weights using Shannon-style coding and a variational distribution over weights. 
This approach is shown to improve over existing schemes for LeNet-5 on MNIST and VGG-16 on CIFAR-10, strictly dominating them in terms of compression/error rate tradeoffs. Comparing to more baselines would have been helpful. Theoretical analysis based on non-trivial extensions of prior work by Harsha et al. (2010) and Chatterjee & Diaconis (2018) is also presented. Overall, there was consensus among the reviewers that the paper makes a solid contribution and should be published.
| train | [
"BkxPTq2hAX",
"BJgeEAmlCm",
"Ske6QqXeAm",
"HJltv_7g0X",
"ryxkJV7xRX",
"BkxIDtRY6m",
"r1eTYfKA2X",
"BJe0AHhj3Q",
"S1lV-njc2m",
"rkxLkNQ0q7",
"Skea7Z0TcX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you for this additional information. I believe the paper would benefit from including some of it in the camera-ready version.\n\nBased on the discussion, I'm keeping my original score.",
"Yes, in principle one could recover the randomness used to sample from \\tilde{q} analogous to the bits-back argument (... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
-1,
-1
] | [
"HJltv_7g0X",
"BkxIDtRY6m",
"S1lV-njc2m",
"r1eTYfKA2X",
"BJe0AHhj3Q",
"iclr_2019_r1f0YiCctm",
"iclr_2019_r1f0YiCctm",
"iclr_2019_r1f0YiCctm",
"iclr_2019_r1f0YiCctm",
"Skea7Z0TcX",
"iclr_2019_r1f0YiCctm"
] |
iclr_2019_r1g4E3C9t7 | Characterizing Audio Adversarial Examples Using Temporal Dependency | Recent studies have highlighted adversarial examples as a ubiquitous threat to different neural network models and many downstream applications. Nonetheless, as unique data properties have inspired distinct and powerful learning principles, this paper aims to explore their potentials towards mitigating adversarial inputs. In particular, our results reveal the importance of using the temporal dependency in audio data to gain discriminate power against adversarial examples. Tested on the automatic speech recognition (ASR) tasks and three recent audio adversarial attacks, we find that (i) input transformation developed from image adversarial defense provides limited robustness improvement and is subtle to advanced attacks; (ii) temporal dependency can be exploited to gain discriminative power against audio adversarial examples and is resistant to adaptive attacks considered in our experiments. Our results not only show promising means of improving the robustness of ASR systems, but also offer novel insights in exploiting domain-specific data properties to mitigate negative effects of adversarial examples. | accepted-poster-papers | The authors present a study characterizing adversarial examples in the audio domain. They highlight the importance of temporal dependency when designing defenses against adversarial attacks.
Strengths
- The work presents an interesting analysis of properties of audio adversarial examples, and contrasts it with those in vision literature.
- Proposes a novel defense mechanism that is based on the idea of temporal dependency.
Weaknesses
- The technique identifies adversarial examples but is not able to make the correct prediction.
- The reviewers raised issues around clarity, but the authors made the effort to improve the section during the revision process.
The reviewers agree that the contribution is significant and useful for the community. There are still some concerns about clarity, which the authors should consider improving in the final version. Overall, the paper received positive reviews and therefore, is recommended to be accepted to the conference. | train | [
"SJxUvUaK27",
"HyxdLrxcCQ",
"HJeScNx9CX",
"rJejAXe907",
"BJgFHzg90X",
"ryx9MuhU6Q",
"SJgCwLfxa7",
"rke8d0QE3m",
"BJxqwKRJ2m",
"SklHTV9Aim",
"rJe03DXAiX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"This paper investigates adversarial examples for audio data. The standard defense techniques proposed for images are studied in the context of audio. It is shown that these techniques are somewhat robust to adversarial attacks, but fail against adaptive attacks. A method exploiting the temporal dependencies of the... | [
7,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2019_r1g4E3C9t7",
"SJxUvUaK27",
"SJgCwLfxa7",
"ryx9MuhU6Q",
"iclr_2019_r1g4E3C9t7",
"iclr_2019_r1g4E3C9t7",
"iclr_2019_r1g4E3C9t7",
"BJxqwKRJ2m",
"iclr_2019_r1g4E3C9t7",
"rJe03DXAiX",
"iclr_2019_r1g4E3C9t7"
] |
iclr_2019_r1gEqiC9FX | Equi-normalization of Neural Networks | Modern neural networks are over-parametrized. In particular, each rectified linear hidden unit can be modified by a multiplicative factor by adjusting input and output weights, without changing the rest of the network. Inspired by the Sinkhorn-Knopp algorithm, we introduce a fast iterative method for minimizing the l2 norm of the weights, equivalently the weight decay regularizer. It provably converges to a unique solution. Interleaving our algorithm with SGD during training improves the test accuracy. For small batches, our approach offers an alternative to batch- and group-normalization on CIFAR-10 and ImageNet with a ResNet-18. | accepted-poster-papers | The proposed ENorm procedure is a normalization scheme for neural nets whereby the weights are rescaled in a way that minimizes the sum of L_p norms while maintaining functional equivalence. An algorithm is given which provably converges to the globally optimal solution. Experiments show it is complementary to, and perhaps slightly better than, other normalization schemes.
Normalization issues are important for DNN training, and normalization schemes like batch norm, weight norm, etc. have the unsatisfying property that they entangle multiple issues such as normalization, stochastic regularization, and effective learning rates. ENorm is a conceptually cleaner (if more algorithmically complicated) approach. It's a nice addition to the set of normalization schemes, and possibly complementary to the existing ones.
After a revision which included various new experiments, the reviewers are generally happy with the paper. While there's still some controversy over whether it's really better than things like batch norm, I think the paper would be worth publishing even if the results came out negative, since it is a very natural idea which took some algorithmic insight in order to actually execute.
| train | [
"ByguRteu1E",
"Bkgb2RlUyN",
"SkeZ8Jeq2Q",
"HyxnVM8c2m",
"r1e3suRtR7",
"rJge6q_t6X",
"B1gE1MOKpX",
"rJg3oYDKa7",
"HJxqzYDt2Q",
"rkeI_yXZ9X",
"B1xaIdukqm"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"Thanks for your interest in our work!\n\n- We provide the code to reproduce the experiments you are working on as a .zip folder at the anonymous url indicated below. The folder contains a script \"reproduce.sh\" that (1) precisely describes all the hyper parameters that were found to work best on the validation ... | [
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
5,
-1,
-1
] | [
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
3,
-1,
-1
] | [
"Bkgb2RlUyN",
"iclr_2019_r1gEqiC9FX",
"iclr_2019_r1gEqiC9FX",
"iclr_2019_r1gEqiC9FX",
"iclr_2019_r1gEqiC9FX",
"HJxqzYDt2Q",
"SkeZ8Jeq2Q",
"HyxnVM8c2m",
"iclr_2019_r1gEqiC9FX",
"B1xaIdukqm",
"iclr_2019_r1gEqiC9FX"
] |
iclr_2019_r1gNni0qtm | Generalized Tensor Models for Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are very successful at solving challenging problems with sequential data. However, this observed efficiency is not yet entirely explained by theory. It is known that a certain class of multiplicative RNNs enjoys the property of depth efficiency --- a shallow network of exponentially large width is necessary to realize the same score function as computed by such an RNN. Such networks, however, are not very often applied to real life tasks. In this work, we attempt to reduce the gap between theory and practice by extending the theoretical analysis to RNNs which employ various nonlinearities, such as Rectified Linear Unit (ReLU), and show that they also benefit from properties of universality and depth efficiency. Our theoretical results are verified by a series of extensive computational experiments. | accepted-poster-papers | AR1 finds that the extensions of the previously presented ICLR'18 paper are interesting and sufficient due to the provided analysis of universality and depth efficiency. AR2 is concerned with the lack of any concrete toy example linking the proposed architecture and RNNs. Kindly make an effort to add such a basic step-by-step illustration for a simple chosen architecture e.g. in the supplementary material. AR3 is the most critical (the analysis of TT-RNNs based on the product non-linearity was done before, only the particular case of the rectifier non-linearity is used, etc.)
Although the authors cannot guarantee the existence of a corresponding weight tensor W in less trivial cases, the overall analysis is very interesting and is a starting point for further modeling. Thus, the AC advocates acceptance of this paper. The review scores do not indicate this can be an oral paper, e.g. it is currently unlikely to be in the top few percent of accepted papers. Nonetheless, this is valuable and solid work.
Moreover, for the camera-ready paper, kindly refresh your list of citations, as a mere 1 page of citations feels rather too conservative. This makes the background of the paper and related work obscure to the average reader unfamiliar with this topic (tensors, tensor outer products, etc.). There are numerous works on tensor decompositions that can be acknowledged:
- Multilinear Analysis of Image Ensembles: TensorFaces by Vasilescou et al.
- Multilinear Projection for Face Recognition via Canonical Decomposition by Vasilescou et al.
- Tensor decompositions for learning latent variable models by Anandkumar et al.
- Fast and guaranteed tensor decomposition via sketching by Anandkumar et al.
One good example of the use of the outer product (sums over rank-one outer products of higher order) is a paper from 2013. They perform higher-order pooling on encoded feature vectors (although this seems to be the shallow setting), similar to Eq. 2 and 3 in this submission:
- Higher-order occurrence pooling on mid- and low-level features: Visual concept detection by Koniusz et al. (e.g. equations 49 and 50, or 1, 16 and 17, realize Eq. 3 and 13 in this submission)
- Higher-Order Occurrence Pooling for Bags-of-Words: Visual Concept Detection (similar follow-up work)
Other related papers include:
- Long-term Forecasting using Tensor-Train RNNs by Anandkumar et al.
- Tensor Regression Networks with various Low-Rank Tensor Approximations by Cao et al.
Of course, the authors are encouraged to cite even more related works. | val | [
"Byxr4biHJV",
"r1xImMNH1V",
"rJeRns-xy4",
"r1ldXFHyJ4",
"rklZ6VBkyE",
"BJeyD19227",
"Sylvejvq27",
"HklQ01sssm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I thank the Authors for providing their responses to my questions.",
"Thank you for your response. I believe the paper could be improved and would be more interesting if some analysis on general nonlinearities is provided. ",
"Thank you for pointing us to the relevant papers on tensor decompositions for imag... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"r1ldXFHyJ4",
"rklZ6VBkyE",
"Sylvejvq27",
"HklQ01sssm",
"BJeyD19227",
"iclr_2019_r1gNni0qtm",
"iclr_2019_r1gNni0qtm",
"iclr_2019_r1gNni0qtm"
] |
iclr_2019_r1l73iRqKm | Wizard of Wikipedia: Knowledge-Powered Conversational Agents | In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date. The most popular sequence to sequence models typically “generate and hope” generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction. | accepted-poster-papers | The paper proposes a new dataset for studying knowledge grounded conversations, that would be very useful in advancing this field. In addition to the details of the dataset and its collection, the paper also includes a framework for advancing the research in this area, that includes evaluation methods and baselines with a relatively new approach.
The proposed approach for dialogue generation, however, is a simple extension of previous work by Zhang et al. to use transformers, and hence is not very interesting. The proposed approach is also not compared to many previous studies in the experimental results.
One of the reviewers highlighted the weakness of the human evaluation performed in the paper. Moving on, it would be useful if further approaches are considered and included in the task evaluation.
A poster presentation of the work would enable participants to ask detailed questions about the proposed dataset and evaluation, and hence may be more appropriate.
| train | [
"rkx7FNXqC7",
"BJggPNQ907",
"S1gofVmqCQ",
"HklZ0zm5RX",
"S1l1hzXcCm",
"SJliuf75Am",
"HkgWNlm5R7",
"SkgGSxr93m",
"BJl_Z5Nc37",
"S1xbgTO_2Q"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"- In Table 5, human evaluators only measure the likeness of the dialog which seems very naive. Why don’t you measure whether the apprentice gets new knowledge of which s/he didn’t know before, whether the knowledge provided from the model was informative, whether the dialog was fun and engaging or more? The curren... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"BJggPNQ907",
"S1gofVmqCQ",
"BJl_Z5Nc37",
"S1l1hzXcCm",
"SJliuf75Am",
"S1xbgTO_2Q",
"SkgGSxr93m",
"iclr_2019_r1l73iRqKm",
"iclr_2019_r1l73iRqKm",
"iclr_2019_r1l73iRqKm"
] |
iclr_2019_r1lWUoA9FQ | Are adversarial examples inevitable? | A wide range of defenses have been proposed to harden neural networks against adversarial attacks. However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable?
This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. We show that, for certain classes of problems, adversarial examples are inescapable. Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.
| accepted-poster-papers | There's precious little work asking existential questions about adversarial examples, and so this work is most welcome. The work connects with deep results in probability to make simple and transparent claims about the inevitability of adversarial examples under some assumptions. The authors have addressed the key criticisms of the authors around clarity. | train | [
"S1go00H5nm",
"ByxucZuw1E",
"BkexCQTSkV",
"rJgz8FZcRm",
"HylS90l90X",
"B1gKFuqOAm",
"rJlynQY0X",
"BkeiR4hdCm",
"rJeG-d5dRQ",
"BklNjXcORX",
"S1gW94klpX",
"SJgNbBz03Q",
"SJgOUrbC2m",
"r1ghWAnAhX",
"BJgodUZR37",
"Hyxd147fim",
"S1ePuse2qX",
"BygAt3Jjqm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public",
"public",
"author",
"public"
] | [
"This paper explores the inevitability of adversarial examples with concentration inequalities. It is motivated by the difficulties of achieving adversarial robustness in literature. It derives isoperimetric inequalities on a cube, and then discuss the adversarial robustness of data distributed inside the cube, wit... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_r1lWUoA9FQ",
"BkexCQTSkV",
"rJgz8FZcRm",
"B1gKFuqOAm",
"BklNjXcORX",
"S1go00H5nm",
"rJeG-d5dRQ",
"SJgOUrbC2m",
"SJgNbBz03Q",
"S1gW94klpX",
"iclr_2019_r1lWUoA9FQ",
"iclr_2019_r1lWUoA9FQ",
"iclr_2019_r1lWUoA9FQ",
"BJgodUZR37",
"SJgOUrbC2m",
"S1ePuse2qX",
"BygAt3Jjqm",
"icl... |
iclr_2019_r1laEnA5Ym | A Variational Inequality Perspective on Generative Adversarial Networks | Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train. One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods designed for this adversarial training. In this work, we cast GAN optimization problems in the general variational inequality framework. Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization and propose to extend methods designed for variational inequalities to the training of GANs. We apply averaging, extrapolation and a computationally cheaper variant that we call extrapolation from the past to the stochastic gradient method (SGD) and Adam. | accepted-poster-papers | The paper presents a variational inequality perspective on the optimization problem arising in GANs. Convergence of stochastic gradient descent methods (averaging and extragradient variants) is given under monotonicity (or convex) assumptions. In particular, the bilinear saddle point problem is carefully studied with batch and stochastic algorithms. Experiments on CIFAR10 with WGAN etc. show that the proposed averaging and extrapolation techniques improve GAN training in such nonconvex optimization practice.
General convergence results in the context of general non-monotone VIPs remain an open problem for future exploration. The questions raised by the reviewers are well answered. The reviewers unanimously accept the paper for ICLR publication. | train | [
"HygHUpH_g4",
"ryT6uWAa7",
"HJe-3rW0pQ",
"SJeigrZCaX",
"BkgcGClCa7",
"rygmGXa_p7",
"SJlRNFEa3Q",
"Bye1MVvq2m",
"HkekxnDu3m"
] | [
"public",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I have some concerns about this paper:\n\n(1) In the theory side, this paper assumes that the loss function F is \\mu-strongly monotone (or equivalently convex-concave), which seems to be a very unrealistic assumption as GANs are highly non-convex. Besides, many prior works on GANs have analyzed the convergence (w... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2019_r1laEnA5Ym",
"HkekxnDu3m",
"Bye1MVvq2m",
"SJlRNFEa3Q",
"rygmGXa_p7",
"iclr_2019_r1laEnA5Ym",
"iclr_2019_r1laEnA5Ym",
"iclr_2019_r1laEnA5Ym",
"iclr_2019_r1laEnA5Ym"
] |
iclr_2019_r1lohoCqY7 | Learning-Based Frequency Estimation Algorithms | Estimating the frequencies of elements in a data stream is a fundamental task in data analysis and machine learning. The problem is typically addressed using streaming algorithms which can process very large data using limited storage. Today's streaming algorithms, however, cannot exploit patterns in their input to improve performance. We propose a new class of algorithms that automatically learn relevant patterns in the input data and use them to improve its frequency estimates. The proposed algorithms combine the benefits of machine learning with the formal guarantees available through algorithm theory. We prove that our learning-based algorithms have lower estimation errors than their non-learning counterparts. We also evaluate our algorithms on two real-world datasets and demonstrate empirically their performance gains. | accepted-poster-papers | The paper conveys interesting ideas but reviewers are concern about an incremental nature of results, choice of comparators, and in general empirical and analytical novelty. | train | [
"B1g6Ld0gaX",
"HkeXupr4Am",
"BylBDAB5aQ",
"H1gsmAr9aX",
"Skl8UhSqaX",
"SkgtrSEZ67",
"BkeVYzOCnm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Quality/clarity:\n- The problem setting description is neither formal nor intuitive which made it very hard for me to understand exactly the problem you are trying to solve. Starting with S and i: I guess S and i are both simply varying-length sequences in U.\n- In general the intro should focus more on an intuiti... | [
6,
-1,
-1,
-1,
-1,
7,
6
] | [
1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_r1lohoCqY7",
"iclr_2019_r1lohoCqY7",
"SkgtrSEZ67",
"B1g6Ld0gaX",
"BkeVYzOCnm",
"iclr_2019_r1lohoCqY7",
"iclr_2019_r1lohoCqY7"
] |
iclr_2019_r1lq1hRqYQ | From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following | Reinforcement learning is a promising framework for solving control problems, but its use in practical situations is hampered by the fact that reward functions are often difficult to engineer. Specifying goals and tasks for autonomous machines, such as robots, is a significant challenge: conventionally, reward functions and goal states have been used to communicate objectives. But people can communicate objectives to each other simply by describing or demonstrating them. How can we build learning algorithms that will allow us to tell machines what we want them to do? In this work, we investigate the problem of grounding language commands as reward functions using inverse reinforcement learning, and argue that language-conditioned rewards are more transferable than language-conditioned policies to new environments. We propose language-conditioned reward learning (LC-RL), which grounds language commands as a reward function represented by a deep neural network. We demonstrate that our model learns rewards that transfer to novel tasks and environments on realistic, high-dimensional visual environments with natural language commands, whereas directly learning a language-conditioned policy leads to poor performance. | accepted-poster-papers | This paper generated a lot of discussion (not all of it visible to the authors or the public).
R1 initially requested reasonable comparisons, but after the authors provided a response (and new results), R1 continued to recommend rejecting the paper simply because they personally did not find the manuscript insightful. Despite several requests for clarification, we could not converge on a specific problem with the manuscript. Ungrounded gut feelings are not grounds for rejection.
After an extensive discussion, R2 and R3 both recommend accepting the paper and the AC agrees. The paper makes interesting contributions and will be a welcome addition to the literature.
"SkxoIYmzA7",
"ByxaK7Xt2X",
"rJx-ECNJRQ",
"Sygm9a4y0m",
"BkgTwpVkCm",
"SkgNkygonm",
"rkxKz3BtnQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"If BC is being done from *all* states then yes I completely agree that you won't have the usual issues. It is indeed then a very strong baseline.",
"Summary:\n\nThis paper proposes learning reward functions via inverse reinforcement learning (IRL) for vision-based instruction following tasks like \"go to the cup... | [
-1,
9,
-1,
-1,
-1,
5,
5
] | [
-1,
5,
-1,
-1,
-1,
4,
4
] | [
"BkgTwpVkCm",
"iclr_2019_r1lq1hRqYQ",
"SkgNkygonm",
"rkxKz3BtnQ",
"ByxaK7Xt2X",
"iclr_2019_r1lq1hRqYQ",
"iclr_2019_r1lq1hRqYQ"
] |
iclr_2019_r1lrAiA5Ym | Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity | The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity. Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the control of the brain. The resulting self-modifying abilities of the brain play an important role in learning and adaptation, and are a major basis for biological reinforcement learning. Here we show for the first time that artificial neural networks with such neuromodulated plasticity can be trained with gradient descent. Extending previous work on differentiable Hebbian plasticity, we propose a differentiable formulation for the neuromodulation of plasticity. We show that neuromodulated plasticity improves the performance of neural networks on both reinforcement learning and supervised learning tasks. In one task, neuromodulated plastic LSTMs with millions of parameters outperform standard LSTMs on a benchmark language modeling task (controlling for the number of parameters). We conclude that differentiable neuromodulation of plasticity offers a powerful new framework for training neural networks. | accepted-poster-papers | The authors consider the problem of active plasticity in the mammalian brain, seen as being a means to enable lifelong learning. Building on the recent paper on differentiable plasticity, the authors propose a learnt, neuro-modulated differentiable plasticity that can be trained with gradient descent but is more flexible than fixed plasticity. The paper is clearly motivated and written, and the tasks are constructed to validate the method by demonstrating clear cases where non-modulated plasticity fails completely but where the proposed approach succeeds. On a large, general language modeling task (PTB) there is a small but consistent improvement over LSTMS. 
The reviewers were very split on this submission, with two reviewers focusing on the lack of large improvements on large benchmarks, and the other reviewer focusing on the novelty and success of the method on simple tasks. The AC tends to side with the positive review because of the following observations: the method is novel and potentially will have long term impact on the field, the language modeling task seems like a poor fit to demonstrate the advantages of the dynamic plasticity, so focusing on that benchmark overly much is misleading, and the paper is high-quality and interesting to the community. | train | [
"rkg5CK6S1E",
"ryxDQhsSJV",
"Bkgp3oiBk4",
"S1g8Nj6qRX",
"HJeXzvo5CX",
"HJlVxZucAm",
"rJx0WYw9R7",
"S1gcKOwqR7",
"HyxE2PPcRm",
"ryxWDI_Gsm",
"rkg2K06S2m",
"ryeSkAjN37"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I appreciate the careful evaluation on PTB but unfortunately this doesn't address my main concerns, which are that this is still a toy dataset imo, and that there is no visible advantage of using neuromodulation. What are the error bars on these results? Are the changes significant? The modest improvements can hav... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"ryxDQhsSJV",
"S1g8Nj6qRX",
"HJeXzvo5CX",
"rJx0WYw9R7",
"HyxE2PPcRm",
"iclr_2019_r1lrAiA5Ym",
"ryxWDI_Gsm",
"ryeSkAjN37",
"rkg2K06S2m",
"iclr_2019_r1lrAiA5Ym",
"iclr_2019_r1lrAiA5Ym",
"iclr_2019_r1lrAiA5Ym"
] |
iclr_2019_r1lyTjAqYX | Recurrent Experience Replay in Distributed Reinforcement Learning | Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from distributed prioritized experience replay. We study the effects of parameter lag resulting in representational drift and recurrent state staleness and empirically derive an improved training strategy. Using a single network architecture and fixed set of hyper-parameters, the resulting agent, Recurrent Replay Distributed DQN, quadruples the previous state of the art on Atari-57, and matches the state of the art on DMLab-30. It is the first agent to exceed human-level performance in 52 of the 57 Atari games. | accepted-poster-papers | The paper proposes a new distributed DQN algorithm that combines recurrent neural networks with distributed prioritized replay memory. The authors systematically compare three types of initialization strategies for training the recurrent models. The thorough investigation is cited as a valuable contribution by all reviewers, with reviewer 1 noting that the study would be of interest to "anyone using recurrent networks on RL tasks". Empirical results on Atari and DMLab are impressive.
The reviewers noted several weaknesses in their original reviews. These included issues of clarity, a need for more detailed ablation studies, and a need to more carefully document the empirical setup. A further question was raised on whether the empirical results could be complemented with theoretical or conceptual insights.
The authors carefully addressed all concerns raised during the reviewing and rebuttal period. They took exceptional care to clarify their writing, document experiment details, and ran a large set of additional experiments as suggested by the reviewers. The AC feels that the review period for the paper was particularly productive and would like to thank the reviewers and authors.
The reviewers and AC agree that the paper makes a significant contribution to the field and should be accepted. | train | [
"S1x50y_aRm",
"S1xbJJBj0Q",
"ryx6VXJoCm",
"r1e7BZpFAX",
"r1lgURl52X",
"BJxpTUOFRm",
"S1gMv8pH07",
"rkep84aYhQ",
"BJebb8iB07",
"ryesUjTWAX",
"SkxrqYaZAm",
"H1eZZKTZAQ",
"SkxyvPTZ0X",
"Ske9lOYp27",
"SJenv9e497",
"rylmsYeZ9m",
"Hkx94FeZ9X",
"HJgGyvfxqm",
"BJeijL3J5m",
"SJeCT3qk5Q"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"author",
"author",
"public",
"author",
"public",
... | [
"Absolutely, thanks!\n",
"Thanks, very interesting! Could you please just add a short description of R2D2+ in Table 1's caption, for the final version?",
"We thank all reviewers once again for the careful reading of the paper and the helpful comments. \n\nWe have updated our ablations, now including two life-lo... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
2,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"S1xbJJBj0Q",
"ryx6VXJoCm",
"SkxyvPTZ0X",
"S1gMv8pH07",
"iclr_2019_r1lyTjAqYX",
"ryesUjTWAX",
"BJebb8iB07",
"iclr_2019_r1lyTjAqYX",
"H1eZZKTZAQ",
"r1lgURl52X",
"Ske9lOYp27",
"rkep84aYhQ",
"iclr_2019_r1lyTjAqYX",
"iclr_2019_r1lyTjAqYX",
"BJeijL3J5m",
"HJgGyvfxqm",
"SJeCT3qk5Q",
"icl... |
iclr_2019_r1x4BnCqKX | A Generative Model For Electron Paths | Chemical reactions can be described as the stepwise redistribution of electrons in molecules. As such, reactions are often depicted using "arrow-pushing" diagrams which show this movement as a sequence of arrows. We propose an electron path prediction model (ELECTRO) to learn these sequences directly from raw reaction data. Instead of predicting product molecules directly from reactant molecules in one shot, learning a model of electron movement has the benefits of (a) being easy for chemists to interpret, (b) incorporating constraints of chemistry, such as balanced atom counts before and after the reaction, and (c) naturally encoding the sparsity of chemical reactions, which usually involve changes in only a small number of atoms in the reactants. We design a method to extract approximate reaction paths from any dataset of atom-mapped reaction SMILES strings. Our model achieves excellent performance on an important subset of the USPTO reaction dataset, comparing favorably to the strongest baselines. Furthermore, we show that our model recovers a basic knowledge of chemistry without being explicitly trained to do so. | accepted-poster-papers | The paper presents a graph neural network that represents the movements of electrons during chemical reactions, trained from a dataset to predict reactions outcomes.
The paper is clearly written and the comparisons are sensible. There are some concerns by reviewer 3 about the experimental results: in particular, the lack of a simpler baseline and the experimental variance. I think some of the important concerns from reviewer 3 were addressed in the rebuttal, and I hope the authors will update the manuscript accordingly.
Overall, this is fitting for publication at ICLR 2019. | train | [
"r1xA3anNRQ",
"BJgVD33V0X",
"BJl6GhhNRX",
"BkgUnF3V0Q",
"rygRtthVCQ",
"HJeXgB2VCX",
"BkxM4Zo9nQ",
"H1ljey75h7",
"SJlofDCyn7",
"rklaBviq3m",
"Byxatx75nQ"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have responded to this comment on our answer to your review and provide a detailed response there. We hope this addresses your concerns. In summary, we show on ELECTRO-LITE using K-fold cross validation that the variation within a method is on the order of tenths of a percent, whereas the difference between the... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
4,
8,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
-1,
-1
] | [
"Byxatx75nQ",
"BJl6GhhNRX",
"SJlofDCyn7",
"rygRtthVCQ",
"H1ljey75h7",
"BkxM4Zo9nQ",
"iclr_2019_r1x4BnCqKX",
"iclr_2019_r1x4BnCqKX",
"iclr_2019_r1x4BnCqKX",
"H1ljey75h7",
"SJlofDCyn7"
] |
iclr_2019_r1xQQhAqKX | Modeling Uncertainty with Hedged Instance Embeddings | Instance embeddings are an efficient and versatile image representation that facilitates applications like recognition, verification, retrieval, and clustering. Many metric learning methods represent the input as a single point in the embedding space. Often the distance between points is used as a proxy for match confidence. However, this can fail to represent uncertainty which can arise when the input is ambiguous, e.g., due to occlusion or blurriness. This work addresses this issue and explicitly models the uncertainty by “hedging” the location of each input in the embedding space. We introduce the hedged instance embedding (HIB) in which embeddings are modeled as random variables and the model is trained under the variational information bottleneck principle (Alemi et al., 2016; Achille & Soatto, 2018). Empirical results on our new N-digit MNIST dataset show that our method leads to the desired behavior of “hedging its bets” across the embedding space upon encountering ambiguous inputs. This results in improved performance for image matching and classification tasks, more structure in the learned embedding space, and an ability to compute a per-exemplar uncertainty measure which is correlated with downstream performance. | accepted-poster-papers | This work presents a method to model embeddings as distributions, instead of points, to better quantify uncertainty. Evaluations are carried out on a new dataset created from mixtures of MNIST digits, including noise (certain probability of occlusions), that introduce ambiguity, using a small "toy" neural network that is incapable of perfectly fitting the data, because authors mention that performance difference lessens when the network is complex enough to almost perfectly fit the data.
Reviewer assessment is unanimously accept, with the following points:
Pros:
+ "The topic of injecting uncertainty in neural networks should be of broad interest to the ICLR community."
+ "The paper is generally clear."
+ "The qualitative evaluation provides intuitive results."
Cons:
- Requirement of drawing samples may add complexity. Authors reply that alternatives should be studied in future work.
- No comparison to other uncertainty methods, such as dropout. Authors reply that dropout represents model uncertainty and not data uncertainty, but do not carry out an experiment to compare (i.e. sample from model leaving dropout activated during evaluation).
- No evaluation in larger scale/dimensionality datasets. Authors mention method scales linearly, but how practical or effective this method is to use on, say, face recognition datasets, is unclear.
As the general reviewer consensus is accept, the Area Chair is recommending Accept; however, the Area Chair has strong reservations because the method is evaluated on a very limited dataset, with a toy model designed to exaggerate differences between techniques. Essentially, the toy evaluation was designed to get the results the authors were looking for. A more thorough investigation would use more realistically sized network models on real datasets. | train | [
"ByegqBDioX",
"B1x9ZzYmCX",
"SygJLmYLpX",
"S1e2VQtLTQ",
"Byx_z7K86m",
"rkgPgQYLam",
"r1gHkXK86X",
"rJlt3zKLpm",
"HylIjDWunQ",
"SJl9pTb827"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"While most works consider embedding as the problem of mapping an input into a point in an embedding space, paper 1341 considers the problem of mapping an input into a distribution in an embedding space. Computing the matching score of two inputs (e.g. two images) involves the following steps: (i) assuming a Gaussi... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_r1xQQhAqKX",
"S1e2VQtLTQ",
"rkgPgQYLam",
"r1gHkXK86X",
"rJlt3zKLpm",
"ByegqBDioX",
"SJl9pTb827",
"HylIjDWunQ",
"iclr_2019_r1xQQhAqKX",
"iclr_2019_r1xQQhAqKX"
] |
iclr_2019_r1xX42R5Fm | Beyond Greedy Ranking: Slate Optimization via List-CVAE | The conventional approach to solving the recommendation problem greedily ranks individual document candidates by prediction scores. However, this method fails to optimize the slate as a whole, and hence, often struggles to capture biases caused by the page layout and document interdependencies. The slate recommendation problem aims to directly find the optimally ordered subset of documents (i.e. slates) that best serve users’ interests. Solving this problem is hard due to the combinatorial explosion of document candidates and their display positions on the page. Therefore we propose a paradigm shift from the traditional viewpoint of solving a ranking problem to a direct slate generation framework. In this paper, we introduce List Conditional Variational Auto-Encoders (List-CVAE), which learn the joint distribution of documents on the slate conditioned on user responses, and directly generate full slates. Experiments on simulated and real-world data show that List-CVAE outperforms greedy ranking methods consistently on various scales of documents corpora.
The paper presents an interesting and promising novel idea that is expected to motivate follow-up work. Conceptually, the proposed model can learn complex relationships between documents and account for these when generating slates. The paper is clearly written. The empirical results show clear improvements over competitive baselines in synthetic and semi-synthetic experiments (real users and clicks, learned user model).
The reviewers and AC also note several potential shortcomings. The reviewers asked for additional baselines that reflect current state of the art approaches, and for comparisons in terms of prediction times. There are also concerns about the model's ability to generalize to (responses on) slates unseen during training, as well as concerns about the realism of the simulated user model in the evaluation. There were questions regarding the presentation, including model details / formalism.
In the rebuttal phase, the authors addressed the above as follows. They added new baselines that reflect sequential document selection (auto-regressive MLP and LSTM) and demonstrate that these perform on par with greedy approaches. They provide details on an experiment to test generalization, showing both when the model succeeds and where it fails - which is valuable for understanding the advantages and limitations of the proposed approach. The authors clarified modeling and evaluation choices.
Through the rebuttal and discussion phase, the reviewers reached consensus on a borderline / lean to accept decision. The AC suggests accepting the paper, based on the innovative approach and potential directions for follow up work.
| train | [
"BklYOAG62m",
"Hyxrzl_TAm",
"HJxk8hf41N",
"H1g9MpDTAm",
"HJlUh4EU07",
"ryg178Yi6Q",
"SygYUHKopX",
"Sklln4tiam",
"Syx1JutAn7",
"rJg80M9qhX"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The latest revision is a substantially improved version of the paper. The comment about generalization still feels unsatisfying (\"our model requires choosing c* in the support of P(c) seen during training\") but could spur follow-up work attempting a precise characterization.\nI remain wary of using a neural net ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_r1xX42R5Fm",
"SygYUHKopX",
"H1g9MpDTAm",
"HJlUh4EU07",
"Sklln4tiam",
"rJg80M9qhX",
"BklYOAG62m",
"Syx1JutAn7",
"iclr_2019_r1xX42R5Fm",
"iclr_2019_r1xX42R5Fm"
] |
iclr_2019_r1xdH3CcKX | Stochastic Prediction of Multi-Agent Interactions from Partial Observations | We present a method which learns to integrate temporal information, from a learned dynamics model, with ambiguous visual information, from a learned vision model, in the context of interacting agents. Our method is based on a graph-structured variational recurrent neural network, which is trained end-to-end to infer the current state of the (partially observed) world, as well as to forecast future states. We show that our method outperforms various baselines on two sports datasets, one based on real basketball trajectories, and one generated by a soccer game engine. | accepted-poster-papers | This paper proposes a unified approach for performing state estimation and future forecasting for agents interacting within a multi-agent system. The method relies on a graph-structured recurrent neural network trained on temporal and visual (pixel) information.
The paper is well-written, with a convincing motivation and a set of novel ideas.
The reviewers pointed to a few caveats in the methodology, such as the quality of trajectories (AnonReviewer2) and expensive learning of states (AnonReviewer3). However, these issues do not detract much from the paper's quality. Besides, the authors have satisfactorily rebutted some of those comments.
More importantly, all three reviewers were not convinced by the experimental evaluation. AnonReviewer1 believes that the idea has a lot of potential, but is hindered by the insufficient exposition of the experiments. AnonReviewer3 similarly asks for more consistency in the experiments.
Overall, all reviewers agree on a score "marginally above the threshold". While this is not a particularly strong score, the AC weighed all opinions, which indicate that, despite some caveats, the developed model and considered application fit nicely into a coherent and convincing story. The authors are strongly advised to work further on the experimental section (which they already started doing, as is evident from the rebuttal) to further improve their paper. | train | [
"ByxsCxyn3m",
"Byxum4VEkV",
"HJlnDtffyN",
"rJeK9qxY3m",
"ByeIhLLcnX",
"HkejG6vcA7",
"H1xfvqwqCQ",
"BygrEuP907",
"rygJ5BPqC7"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors propose Graph VRNN. The proposed method models the interaction of multiple agents by deploying a VRNN for each agent. The interaction among the agents is modeled by the graph interaction update on the hidden states of the VRNNs. The model predicts the true state (e.g., location) of the agent via superv... | [
6,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2019_r1xdH3CcKX",
"HJlnDtffyN",
"HkejG6vcA7",
"iclr_2019_r1xdH3CcKX",
"iclr_2019_r1xdH3CcKX",
"rJeK9qxY3m",
"ByeIhLLcnX",
"ByxsCxyn3m",
"iclr_2019_r1xdH3CcKX"
] |
iclr_2019_r1xwKoR9Y7 | GamePad: A Learning Environment for Theorem Proving | In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant. Interactive theorem provers such as Coq enable users to construct machine-checkable proofs in a step-by-step manner. Hence, they provide an opportunity to explore theorem proving with human supervision. We use GamePad to synthesize proofs for a simple algebraic rewrite problem and train baseline models for a formalization of the Feit-Thompson theorem. We address position evaluation (i.e., predict the number of proof steps left) and tactic prediction (i.e., predict the next proof step) tasks, which arise naturally in tactic-based theorem proving. | accepted-poster-papers | This paper provides an RL environment defined over Coq, allowing for RL agents and other such systems to be trained to propose tactics during the running of an ITP. I really like this general line of work, and the reviewers broadly speaking did as well. The one holdout is reviewer 3, who raises important concerns about the need for further evaluation. I understand and appreciate their points, and I think the authors should be careful to incorporate their feedback not only in final revisions to the paper, but in deciding what follow-on work to focus on. Nonetheless, and with all due respect to reviewer 3, who provided a review of acceptable quality, I am unsure the substance of their review merits a score as low as they have given. Considering the support the other reviews offer for the paper, I recommend acceptance for what the majority of reviewers believes is a good first step towards one day proving substantial new theorems using ITP-ML hybrids. | train | [
"SklgQanCA7",
"SkxNwHmYCQ",
"rklfHBXFAQ",
"HyxOZB7t0Q",
"Bylyu4mtA7",
"BkxjWxu23Q",
"SylgGgRc2X",
"SygN51E93m"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Reviewer 3, \n\nThank you for your review. Your score is currently the outlier amongst the reviews, which is fine! However, given the substance of the other reviews, the author response below, and the revisions made to the paper, it would be good to get a bit more detail from you. Could you please read the author ... | [
-1,
-1,
-1,
-1,
-1,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"BkxjWxu23Q",
"SygN51E93m",
"SylgGgRc2X",
"BkxjWxu23Q",
"iclr_2019_r1xwKoR9Y7",
"iclr_2019_r1xwKoR9Y7",
"iclr_2019_r1xwKoR9Y7",
"iclr_2019_r1xwKoR9Y7"
] |
iclr_2019_rJ4km2R5t7 | GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding | For natural language understanding (NLU) technology to be maximally useful, it must be able to process language in a way that is not exclusive to a single task, genre, or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation (GLUE) benchmark, a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks. By including tasks with limited training data, GLUE is designed to favor and encourage models that share general linguistic knowledge across tasks. GLUE also includes a hand-crafted diagnostic test suite that enables detailed linguistic analysis of models. We evaluate baselines based on current methods for transfer and representation learning and find that multi-task training on all tasks performs better than training a separate model per task. However, the low absolute performance of our best model indicates the need for improved general NLU systems. | accepted-poster-papers | This paper provides an interesting benchmark for multitask learning in NLP.
I wish the dataset included language generation tasks instead of just classification, but it's still a step in the right direction.
| train | [
"S1g4cw9Ohm",
"HJlblc2e0Q",
"B1xRaF2lR7",
"HklEfYnxR7",
"S1gHMCM0nX",
"ryeyISNYhQ",
"S1eZR9Jhq7",
"BklRvvUFcQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This paper introduces the General Language Understanding Evaluation (GLUE) benchmark and platform, which aims to evaluate representations of language with an emphasis on generalizability. This is a timely contribution and GLUE will be an impactful resource for the NLP community. This is mitigated, perhaps, somewha... | [
8,
-1,
-1,
-1,
7,
5,
-1,
-1
] | [
4,
-1,
-1,
-1,
1,
2,
-1,
-1
] | [
"iclr_2019_rJ4km2R5t7",
"S1g4cw9Ohm",
"ryeyISNYhQ",
"S1gHMCM0nX",
"iclr_2019_rJ4km2R5t7",
"iclr_2019_rJ4km2R5t7",
"BklRvvUFcQ",
"iclr_2019_rJ4km2R5t7"
] |
iclr_2019_rJNH6sAqY7 | On Computation and Generalization of Generative Adversarial Networks under Spectrum Control | Generative Adversarial Networks (GANs), though powerful, are hard to train. Several recent works (Brock et al., 2016; Miyato et al., 2018) suggest that controlling the spectra of weight matrices in the discriminator can significantly improve the training of GANs. Motivated by their discovery, we propose a new framework for training GANs, which allows more flexible spectrum control (e.g., making the weight matrices of the discriminator have slow singular value decays). Specifically, we propose a new reparameterization approach for the weight matrices of the discriminator in GANs, which allows us to directly manipulate the spectra of the weight matrices through various regularizers and constraints, without intensively computing singular value decompositions. Theoretically, we further show that the spectrum control improves the generalization ability of GANs. Our experiments on CIFAR-10, STL-10, and ImageNet datasets confirm that compared to other competitors, our proposed method is capable of generating images with better or equal quality by utilizing spectral normalization and encouraging the slow singular value decay. | accepted-poster-papers | All the reviewers agree that the paper has an interesting idea on regularizing the spectral norm of the weight matrices in GANs, and a generalization bound has been shown. The empirical result shows that indeed regularization improves the performance of the GANs. Based on these, the AC suggested acceptance. | train | [
"SylcXqGF2Q",
"HylToq1YRm",
"Byx5-cyFRX",
"SJlu3tkKAX",
"HyxiDSMOnX",
"HkledSLmnm"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper is a natural extension of [1] which shows the importance of spectral normalization to encourage diversity of the discriminator weights in a GAN. A simple and effective parametrization of the weights similar to SVD is used: W = USV^T is used along with an orthonormal penalty on U and V and spectral penalt... | [
8,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
-1,
2,
4
] | [
"iclr_2019_rJNH6sAqY7",
"HkledSLmnm",
"HyxiDSMOnX",
"SylcXqGF2Q",
"iclr_2019_rJNH6sAqY7",
"iclr_2019_rJNH6sAqY7"
] |
iclr_2019_rJNwDjAqYX | Large-Scale Study of Curiosity-Driven Learning | Reinforcement learning algorithms rely on carefully engineered rewards from the environment that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is difficult and not scalable, motivating the need for developing reward functions that are intrinsic to the agent.
Curiosity is one such intrinsic reward function, which uses prediction error as a reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. {\em without any extrinsic rewards}, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance as well as a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many games. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://doubleblindsupplementary.github.io/large-curiosity/. | accepted-poster-papers | The authors have extended previous publications on curiosity driven, intrinsically motivated RL with this broad empirical study on the effectiveness of the curiosity algorithm on many game environments, the merits of different feature sets, and limitations of the approach. The paper is well-written and should be of interest to the community. The experiments are well conceived and seem to validate the general effectiveness of curiosity. However, the paper does not actually have any novel contribution compared against prior work, and there are no great insights or takeaways from the empirical study. Therefore, the reviewers were somewhat divided on how confident they were that the paper should be accepted. Overall, the AC agrees that it is a valuable paper that should be accepted even though it does not deliver any algorithmic novelty. | train | [
"HJx9J8YQA7",
"r1xr8BYm0m",
"rJxwGHYQRm",
"HyxxyrKQ0m",
"Ske_-TWah7",
"ryg_N1TK2Q",
"HylyhLLF3X"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank you for the detailed and thoughtful review. We are glad that you found the paper well-contextualized and the presentation high-quality. Here we discuss some of your comments.\n\nR2: \"this 'finishes' [Pathak et al., ICML17] to its logical conclusion for game-based environments and should spur interesting... | [
-1,
-1,
-1,
-1,
6,
9,
7
] | [
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"Ske_-TWah7",
"HylyhLLF3X",
"ryg_N1TK2Q",
"iclr_2019_rJNwDjAqYX",
"iclr_2019_rJNwDjAqYX",
"iclr_2019_rJNwDjAqYX",
"iclr_2019_rJNwDjAqYX"
] |
iclr_2019_rJe10iC5K7 | Unsupervised Discovery of Parts, Structure, and Dynamics | Humans easily recognize object parts and their hierarchical structure by watching how they move; they can then predict how each part moves in the future. In this paper, we propose a novel formulation that simultaneously learns a hierarchical, disentangled object representation and a dynamics model for object parts from unlabeled videos. Our Parts, Structure, and Dynamics (PSD) model learns to, first, recognize the object parts via a layered image representation; second, predict hierarchy via a structural descriptor that composes low-level concepts into a hierarchical structure; and third, model the system dynamics by predicting the future. Experiments on multiple real and synthetic datasets demonstrate that our PSD model works well on all three tasks: segmenting object parts, building their hierarchical structure, and capturing their motion distributions. | accepted-poster-papers | The paper proposes a novel method that learns decompositions of an image over parts, their hierarchical structure and their motion dynamics given temporal image pairs. The problem tackled is of great importance for unsupervised learning from videos. One downside of the paper is the simple datasets used to demonstrate the effectiveness of the method. All reviewers though agree on it being a valuable contribution for ICLR.
In the related work section the paper mentions "...Some systems emphasize
learning from pixels but without an explicitly object-based representation (Fragkiadaki et al., 2016 ...". The paper you cite in fact emphasized the importance of having object-centric predictive models and the generalization that comes from this design choice; thus, it may not be the right citation. | train | [
"Hye8zH-GpQ",
"HyxfvG4EJN",
"BklGbM4VkV",
"S1lfKWE4J4",
"ByeN7W4VyE",
"r1g-PiwqC7",
"SJxW2sw5CX",
"SJgp510jaX",
"Hklh_47xC7",
"ryeEFJAja7",
"rkgSvJCi6Q",
"HJewHyCs6m",
"BJl2l10ipQ",
"BylaoAToa7",
"HyxX4NWbam",
"r1gn3jco3Q",
"S1DRG7chm",
"ryesfnW1T7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"==== Review Summary ====\n\nThe paper demonstrates an interesting and potentially useful idea. But much of it is poorely explained, and experimental results are not strongly convincing. The only numerical evaluations are on a simple dataset that the authors made themselves. The most interesting claim - that thi... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
-1
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
-1
] | [
"iclr_2019_rJe10iC5K7",
"HJewHyCs6m",
"BJl2l10ipQ",
"rkgSvJCi6Q",
"SJxW2sw5CX",
"iclr_2019_rJe10iC5K7",
"Hklh_47xC7",
"iclr_2019_rJe10iC5K7",
"ryeEFJAja7",
"Hye8zH-GpQ",
"HyxX4NWbam",
"S1DRG7chm",
"r1gn3jco3Q",
"ryesfnW1T7",
"iclr_2019_rJe10iC5K7",
"iclr_2019_rJe10iC5K7",
"iclr_2019_... |
iclr_2019_rJe4ShAcF7 | Music Transformer: Generating Music with Long-Term Structure | Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). This is impractical for long sequences such as musical compositions since their memory complexity is quadratic in the sequence length. We propose an algorithm that reduces the intermediate memory requirements to linear in the sequence length. This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long (thousands of steps) compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-competition, and obtain state-of-the-art results on the latter. | accepted-poster-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
- improvements to a transformer model originally designed for machine translation
- application of this model to a different task: music generation
- compelling generated samples and user study.
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
- lack of clarity at times (much improved in the revised version)
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
The main contention was novelty. Some reviewers felt that adapting an existing transformer model to music generation and achieving SOTA results and minute-long music sequences was not sufficient novelty. The final decision aligns with the reviewers who felt that the novelty was sufficient.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
A consensus was not reached. The final decision is aligned with the positive reviews for the reason mentioned above.
| train | [
"BJxpLWBLp7",
"rklASVMAJE",
"rkeg8XwNaX",
"SklsDZT8JV",
"BJgstbaU1V",
"B1xRERh814",
"S1lzsOPLyN",
"H1gc4r04yV",
"HygT5k5hCQ",
"HJx0wWs9Rm",
"BkxdMZi90Q",
"r1gelbs5Cm",
"BJlPfxo9AQ",
"rylg1ejqAm",
"rylqizmohQ",
"rJlnOWEc2X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents an implementation trick to reduce the memory footprint of relative attention within a transformer network. Specifically, the paper points out redudant computation and storage in the traditional implementation and re-orders matrix operations and indexing schemes to optimize. As an appllication, ... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_rJe4ShAcF7",
"BJgstbaU1V",
"iclr_2019_rJe4ShAcF7",
"r1gelbs5Cm",
"BkxdMZi90Q",
"S1lzsOPLyN",
"BJlPfxo9AQ",
"HygT5k5hCQ",
"rylg1ejqAm",
"iclr_2019_rJe4ShAcF7",
"BJxpLWBLp7",
"rkeg8XwNaX",
"rylqizmohQ",
"rJlnOWEc2X",
"iclr_2019_rJe4ShAcF7",
"iclr_2019_rJe4ShAcF7"
] |
iclr_2019_rJeXCo0cYX | BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning | Allowing humans to interactively train artificial agents to understand language instructions is desirable for both practical and scientific reasons. Though, given the lack of sample efficiency in current learning methods, reaching this goal may require substantial research efforts. We introduce the BabyAI research platform, with the goal of supporting investigations towards including humans in the loop for grounded language learning. The BabyAI platform comprises an extensible suite of 19 levels of increasing difficulty. Each level gradually leads the agent towards acquiring a combinatorially rich synthetic language, which is a proper subset of English. The platform also provides a hand-crafted bot agent, which simulates a human teacher. We report estimated amount of supervision required for training neural reinforcement and behavioral-cloning agents on some BabyAI levels. We put forward strong evidence that current deep learning methods are not yet sufficiently sample-efficient in the context of learning a language with compositional properties. | accepted-poster-papers | This paper presents "BabyAI", a research platform to support grounded language learning. The platform supports a suite of 19 levels, based on *synthetic* natural language of increasing difficulties. The platform uniquely supports simulated "human-in-the-loop" learning, where a human teacher is simulated as a heuristic expert agent speaking in synthetic language.
Pros:
A new platform to support grounded natural language learning with 19 levels of increasing difficulty. The platform also supports a heuristic expert agent to simulate a human teacher, which aims to mimic "human-in-the-loop" learning. The platform seems to be the result of a substantial amount of engineering, thus nontrivial to develop. While not representing real communication or true natural language, the platform is likely to be useful for DL/RL researchers to perform prototype research on interactive and grounded language learning.
Cons:
Everything in the presented platform is based on synthetic natural language. While the use of synthetic language is not entirely satisfactory, such a limitation is relatively common among the simulation environments available today, and lifting it is not straightforward. The primary contribution of the paper is a new platform (resource). There are no insights or methods.
Verdict:
Potential weak accept. The potential impact of this work is that the platform will likely be useful for DL/RL research on interactive and grounded language learning. | train | [
"H1xW8wbllN",
"SJe8Euj9RX",
"HyliNl09h7",
"rylOdOTB0Q",
"SJl9AvKsam",
"rJgn5DFo6X",
"rylPWDFjpQ",
"S1gl2Itop7",
"ByeCy_PDp7",
"BJlsYhEgp7",
"SkebaWdinm",
"r1eeyarz3m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"Hello,\n\nFollowing internal discussions, we have decided, in agreement with two reviewers, to change the title of the paper. The new title will be: \"BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning\".\n\nKindest regards,\n\n- The BabyAI team",
"Thanks for your responses. \nYes, I... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_rJeXCo0cYX",
"rJgn5DFo6X",
"iclr_2019_rJeXCo0cYX",
"iclr_2019_rJeXCo0cYX",
"r1eeyarz3m",
"HyliNl09h7",
"S1gl2Itop7",
"SkebaWdinm",
"BJlsYhEgp7",
"iclr_2019_rJeXCo0cYX",
"iclr_2019_rJeXCo0cYX",
"iclr_2019_rJeXCo0cYX"
] |
iclr_2019_rJed6j0cKX | Analyzing Inverse Problems with Invertible Neural Networks | For many applications, in particular in natural science, the task is to
determine hidden system parameters from a set of measurements. Often,
the forward process from parameter- to measurement-space is well-defined,
whereas the inverse problem is ambiguous: multiple parameter sets can
result in the same measurement. To fully characterize this ambiguity, the full
posterior parameter distribution, conditioned on an observed measurement,
has to be determined. We argue that a particular class of neural networks
is well suited for this task – so-called Invertible Neural Networks (INNs).
Unlike classical neural networks, which attempt to solve the ambiguous
inverse problem directly, INNs focus on learning the forward process, using
additional latent output variables to capture the information otherwise
lost. Due to invertibility, a model of the corresponding inverse process is
learned implicitly. Given a specific measurement and the distribution of
the latent variables, the inverse pass of the INN provides the full posterior
over parameter space. We prove theoretically and verify experimentally, on
artificial data and real-world problems from medicine and astrophysics, that
INNs are a powerful analysis tool to find multi-modalities in parameter space,
uncover parameter correlations, and identify unrecoverable parameters. | accepted-poster-papers | This paper proposes a framework for using invertible neural networks to study inverse problems, e.g., recover hidden states or parameters of a system from measurements. This is an important and well-motivated topic, and the solution proposed is novel although somewhat incremental. The paper is generally well written. Some theoretical analysis is provided, giving conditions under which the proposed approach recovers the true posterior. Empirically, the approach is tested on synthetic data and real world problems from medicine and astronomy, where it is shown to compared favorably to ABC and conditional VAEs. Adding additional baselines (Bayesian MCMC and Stein methods) would be good. There are some potential issues regarding MMD scalability to high dimensional spaces, but overall the paper makes a solid contribution and all the reviewers agree it should be accepted for publication. | train | [
"rJeG6QIc3X",
"HylH7ZIu0X",
"Byg4H46S0Q",
"H1l9qhhSAm",
"Skx7PaEXCX",
"BJgZjv4c37",
"r1xEx_DbA7",
"HkehGGd36Q",
"H1gn4WuhTX",
"H1ltbW_naQ",
"H1gIl-_n6m",
"H1gxtiJq6m",
"rygRUCn4am",
"SylKBy6Eam",
"HJxI8nnV6m",
"ryeDzcZ867",
"SkgZ2oh4aX",
"HJgl48AlTX",
"rJxS7urch7",
"Hkx3D2TaoX"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public",
"official_reviewer",
"author",
... | [
"1) Summary\n\nThe authors propose to use invertible networks to solve ambiguous inverse problems. This is done by training one group of Real-NVP output variables supervised while training the other group via maximum likelihood under a Gaussian prior as done in the standard Real-NVP. Further, the authors suggest to... | [
7,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1
] | [
"iclr_2019_rJed6j0cKX",
"Byg4H46S0Q",
"H1l9qhhSAm",
"Skx7PaEXCX",
"H1gxtiJq6m",
"iclr_2019_rJed6j0cKX",
"HkehGGd36Q",
"HJgl48AlTX",
"rJeG6QIc3X",
"BJgZjv4c37",
"rJxS7urch7",
"ryeDzcZ867",
"BJgZjv4c37",
"rJxS7urch7",
"SkgZ2oh4aX",
"HJxI8nnV6m",
"rJeG6QIc3X",
"iclr_2019_rJed6j0cKX",
... |
iclr_2019_rJedV3R5tm | RelGAN: Relational Generative Adversarial Networks for Text Generation | Generative adversarial networks (GANs) have achieved great success at generating realistic images. However, the text generation still remains a challenging task for modern GAN architectures. In this work, we propose RelGAN, a new GAN architecture for text generation, consisting of three main components: a relational memory based generator for the long-distance dependency modeling, the Gumbel-Softmax relaxation for training GANs on discrete data, and multiple embedded representations in the discriminator to provide a more informative signal for the generator updates. Our experiments show that RelGAN outperforms current state-of-the-art models in terms of sample quality and diversity, and we also reveal via ablation studies that each component of RelGAN contributes critically to its performance improvements. Moreover, a key advantage of our method, that distinguishes it from other GANs, is the ability to control the trade-off between sample quality and diversity via the use of a single adjustable parameter. Finally, RelGAN is the first architecture that makes GANs with Gumbel-Softmax relaxation succeed in generating realistic text. | accepted-poster-papers |
pros:
- well-written and clear
- good evaluation with convincing ablations
- moderately novel
cons:
- Reviewers 1 and 3 feel the paper is somewhat incremental over previous work, combining previously proposed ideas.
(Reviewer 2 originally had concerns about the testing methodology but feels that the paper has improved in revision)
(Reviewer 3 suggests an additional comparison to related work which was addressed in revision)
I appreciate the authors' revisions and engagement during the discussion period. Overall the paper is good and I'm recommending acceptance. | val | [
"HJxbxO4cn7",
"HkexOxGYa7",
"B1gqWjV5hX",
"HkeOU5ZYa7",
"ryxH-q-FpQ",
"HyeNJtWtaQ",
"S1li5OZtpQ",
"HJxjoDZtTX",
"rygzGj-p2X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"==========================\nI have read the authors' response and other reviewers' comments. Thanks the authors for taking great effort in answering my questions. Generally, I feel satisfied with the repsonse, and prefer an acceptance recommendation. \n==========================\nContributions:\n\nThe main contrib... | [
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_rJedV3R5tm",
"S1li5OZtpQ",
"iclr_2019_rJedV3R5tm",
"ryxH-q-FpQ",
"HJxbxO4cn7",
"S1li5OZtpQ",
"B1gqWjV5hX",
"rygzGj-p2X",
"iclr_2019_rJedV3R5tm"
] |
iclr_2019_rJevYoA9Fm | The Singular Values of Convolutional Layers | We characterize the singular values of the linear transformation associated with a standard 2D multi-channel convolutional layer, enabling their efficient computation. This characterization also leads to an algorithm for projecting a convolutional layer onto an operator-norm ball. We show that this is an effective regularizer; for example, it improves the test error of a deep residual network using batch normalization on CIFAR-10 from 6.2% to 5.3%. | accepted-poster-papers | This paper proposes an efficient method to compute the singular values of the linear map represented by a convolutional layer. It makes use of the special block-matrix form of convolutional layers to construct a more efficient method. Furthermore, it shows that this method can be used to devise new regularization schemes for DNNs. The reviewers did note that the diversity of the experiments could be improved, and R2 raised concerns that the wrong singular values were being computed. The authors should add a section clarifying why the singular values of a convolutional linear map are not found directly by performing SVD on the reshaped kernel - indeed the number of singular values would be wrong. A contrast with the singular values obtained by simple reshaping of the kernel would also be helpful. | train | [
"r1l8srO-eV",
"Syx8I7LcyE",
"HJe-8Pr93m",
"BkeXWRlM0Q",
"rkeiYma-0X",
"SylOsBC3pX",
"S1liyZ6-C7",
"Byldg4f9pm",
"Skev-7xcTm",
"r1gPu4eEaX",
"rJlab0Efam",
"BkggfSfzTX",
"Hkxjbe6VhQ",
"ryxHyswZpm",
"r1efQ83ep7",
"HJlxzP4epm",
"Byeq47sJaQ",
"Syxj6HcJ6Q",
"ryexDS9J6X",
"SJlKQW9JaQ"... | [
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"public",
"public",
"public",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
... | [
"Thank you for your interest. \nIf the wrap-around (hence circulant form) is replaced by zero-padding, the operator matrix becomes a Toeplitz matrix. The analysis of the error for 2D case is investigated in the paper: “ On the Asymptotic Equivalence of Circulant and Toeplitz Matrices”, Zhihui Zhu, and Michael B. W... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"Syx8I7LcyE",
"iclr_2019_rJevYoA9Fm",
"iclr_2019_rJevYoA9Fm",
"Skev-7xcTm",
"S1liyZ6-C7",
"Byldg4f9pm",
"SylOsBC3pX",
"r1gPu4eEaX",
"iclr_2019_rJevYoA9Fm",
"rJlab0Efam",
"ryxHyswZpm",
"HJe-8Pr93m",
"iclr_2019_rJevYoA9Fm",
"r1efQ83ep7",
"Byeq47sJaQ",
"ryexDS9J6X",
"SJlKQW9JaQ",
"rkg... |
iclr_2019_rJfUCoR5KX | An Empirical study of Binary Neural Networks' Optimisation | Binary neural networks using the Straight-Through-Estimator (STE) have been shown to achieve state-of-the-art results, but their training process is not well-founded. This is due to the discrepancy between the evaluated function in the forward path, and the weight updates in the back-propagation, updates which do not correspond to gradients of the forward path. Efficient convergence and accuracy of binary models often rely on careful fine-tuning and various ad-hoc techniques. In this work, we empirically identify and study the effectiveness of the various ad-hoc techniques commonly used in the literature, providing best-practices for efficient training of binary models. We show that adapting learning rates using second moment methods is crucial for the successful use of the STE, and that other optimisers can easily get stuck in local minima. We also find that many of the commonly employed tricks are only effective towards the end of the training, with these methods making early stages of the training considerably slower. Our analysis disambiguates necessary from unnecessary ad-hoc techniques for training of binary neural networks, paving the way for future development of solid theoretical foundations for these. Our newly-found insights further lead to new procedures which make training of existing binary neural networks notably faster. | accepted-poster-papers | The paper summarizes existing work on binary neural network optimization and performs an empirical study across a few datasets and neural network architectures. I agree with the reviewers that this is a valuable study and it can establish a benchmark to help practitioners develop better binary neural network optimization techniques.
PS: How about "An empirical study of binary neural network optimization" as the title?
| train | [
"B1eVJiKyCX",
"Skgim8YkCQ",
"B1xMGsOyR7",
"Bklf3h1anX",
"Bkxxc8rEn7",
"BJe-izKw3X",
"HkxOUX34o7",
"Syx7Ex4c9X"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We would like to thank the reviewer for the constructive comments.\n\nOur aim in this paper is to provide useful empirical observations and generate possible hypotheses that explain them, rather than to make new claims or theoretical analysis. It is true that we have provided some hypotheses about what might be go... | [
-1,
-1,
-1,
8,
4,
6,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
3,
-1,
-1
] | [
"Bkxxc8rEn7",
"BJe-izKw3X",
"Bklf3h1anX",
"iclr_2019_rJfUCoR5KX",
"iclr_2019_rJfUCoR5KX",
"iclr_2019_rJfUCoR5KX",
"Syx7Ex4c9X",
"iclr_2019_rJfUCoR5KX"
] |
iclr_2019_rJfW5oA5KQ | Approximability of Discriminators Implies Diversity in GANs | While Generative Adversarial Networks (GANs) have empirically produced impressive results on learning complex real-world distributions, recent works have shown that they suffer from lack of diversity or mode collapse. The theoretical work of Arora et al. (2017a) suggests a dilemma about GANs’ statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse.
By contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance (or KL-divergence in many cases) with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators). For various generator classes such as mixture of Gaussians, exponential families, and invertible and injective neural networks generators, we design corresponding discriminators (which are often neural nets of specific architectures) such that the Integral Probability Metric (IPM) induced by the discriminators can provably approximate the Wasserstein distance and/or KL-divergence. This implies that if the training is successful, then the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes. Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with KL divergence or the Wasserstein distance, indicating that the lack of diversity in GANs may be caused by the sub-optimality in optimization instead of statistical inefficiency. | accepted-poster-papers | The paper presents an interesting theoretical analysis by deriving polynomial sample complexity bounds for the training of GANs that depend on the approximator properties of the discriminator.
Even if it is not clear whether the theory will help to pick suitable discriminators in practice, it provides new and interesting theoretical insights into the properties of GAN training.
| train | [
"SyeMZLv9AX",
"H1lYQrvcRX",
"HJeQfBw9CX",
"SJeKkSv5CQ",
"BkgcGwG9hQ",
"Hkgc6MlO3m",
"rJgrauuDn7",
"SJxR7LWi9Q",
"rJlVCVTb5X"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We have made a revision to our paper according to the reviewers' comments. Major changes are the following:\n\n--- We have migrated one set of our experiments (previously Appendix F) into the main text (Section 5.1).\n\n--- The concept of restricted approximability is now defined in more generality without assumin... | [
-1,
-1,
-1,
-1,
8,
7,
7,
-1,
-1
] | [
-1,
-1,
-1,
-1,
2,
3,
3,
-1,
-1
] | [
"iclr_2019_rJfW5oA5KQ",
"rJgrauuDn7",
"Hkgc6MlO3m",
"BkgcGwG9hQ",
"iclr_2019_rJfW5oA5KQ",
"iclr_2019_rJfW5oA5KQ",
"iclr_2019_rJfW5oA5KQ",
"rJlVCVTb5X",
"iclr_2019_rJfW5oA5KQ"
] |
iclr_2019_rJg4J3CqFm | Learning Embeddings into Entropic Wasserstein Spaces | Despite their prevalence, Euclidean embeddings of data are fundamentally limited in their ability to capture latent semantic structures, which need not conform to Euclidean spatial assumptions. Here we consider an alternative, which embeds data as discrete probability distributions in a Wasserstein space, endowed with an optimal transport metric. Wasserstein spaces are much larger and more flexible than Euclidean spaces, in that they can successfully embed a wider variety of metric structures. We propose to exploit this flexibility by learning an embedding that captures the semantic information in the Wasserstein distance between embedded distributions. We examine empirically the representational capacity of such learned Wasserstein embeddings, showing that they can embed a wide variety of complex metric structures with smaller distortion than an equivalent Euclidean embedding. We also investigate an application to word embedding, demonstrating a unique advantage of Wasserstein embeddings: we can directly visualize the high-dimensional embedding, as it is a probability distribution on a low-dimensional space. This obviates the need for dimensionality reduction techniques such as t-SNE for visualization. | accepted-poster-papers |
+ An interesting and original idea of embedding words into the (very low dimensional) Wasserstein space, i.e. clouds of points in a low-dimensional space
+ As the space is low-dimensional (2D), it can be directly visualized.
+ I could imagine the technique being useful in social / human science for data visualization; the visualization is more faithful to what the model is doing than t-SNE plots of high-dimensional embeddings
+ Though not the first method to embed words as densities, it is seemingly the first to show that multi-modality / multiple senses are captured (except for models which capture discrete senses)
+ The paper is very well written
- The results are not very convincing but show that embeddings do capture word similarity (even when training the model on a small dataset)
- The approach is not very scalable (hence the evaluation on a 17M corpus)
- The method cannot be used to deal with data sparsity, though (very) interesting for visualization
- This is mostly an empirical paper (i.e. an interesting application of an existing method)
The reviewers are split. One reviewer is negative, as they are unclear on what the technical contribution is (but seems a bit biased against empirical papers). The other two find the paper very interesting.
| test | [
"rJgNxurnR7",
"BkgLLSSjC7",
"rkg2CiWi0Q",
"ryl-6sbsAX",
"SkxUwq-jCX",
"rJg9pF6g0m",
"rkegohvX0X",
"B1lPtGDXCm",
"rkgjrO0Z0Q",
"ByxV7nuupQ",
"r1eUJa_d6m",
"BkxbM0_d6X",
"SkgboCbq37",
"r1gPBWZ8hm",
"B1gutHkNnX"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for responding. We are significantly confused by two of your statements:\n\n“... given that your axis are probability values, plotting the data in an orthogonal axis is probably not the best idea. The correct way of representing the data would be on a simplex.”\n\nThis statement is false. As noted by Rev... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"BkgLLSSjC7",
"rkg2CiWi0Q",
"ryl-6sbsAX",
"B1lPtGDXCm",
"rkegohvX0X",
"iclr_2019_rJg4J3CqFm",
"r1eUJa_d6m",
"BkxbM0_d6X",
"rJg9pF6g0m",
"SkgboCbq37",
"r1gPBWZ8hm",
"B1gutHkNnX",
"iclr_2019_rJg4J3CqFm",
"iclr_2019_rJg4J3CqFm",
"iclr_2019_rJg4J3CqFm"
] |
iclr_2019_rJg6ssC5Y7 | DeepOBS: A Deep Learning Optimizer Benchmark Suite | Because the choice and tuning of the optimizer affects the speed, and ultimately the performance of deep learning, there is significant past and recent research in this area. Yet, perhaps surprisingly, there is no generally agreed-upon protocol for the quantitative and reproducible evaluation of optimization strategies for deep learning. We suggest routines and benchmarks for stochastic optimization, with special focus on the unique aspects of deep learning, such as stochasticity, tunability and generalization. As the primary contribution, we present DeepOBS, a Python package of deep learning optimization benchmarks. The package addresses key challenges in the quantitative assessment of stochastic optimizers, and automates most steps of benchmarking. The library includes a wide and extensible set of ready-to-use realistic optimization problems, such as training Residual Networks for image classification on ImageNet or character-level language prediction models, as well as popular classics like MNIST and CIFAR-10. The package also provides realistic baseline results for the most popular optimizers on these test problems, ensuring a fair comparison to the competition when benchmarking new optimizers, and without having to run costly experiments. It comes with output back-ends that directly produce LaTeX code for inclusion in academic publications. It supports TensorFlow and is available open source. | accepted-poster-papers | The field of deep learning optimization suffers from a lack of standard benchmarks, and every paper reports results on a different set of models and architectures, likely with different protocols for tuning the baselines. This paper takes the useful step of providing a single benchmark suite for neural net optimizers.
The set of benchmarks seems well-designed, and covers the range of baselines with a variety of representative architectures. It seems like a useful contribution that will improve the rigor of neural net optimizer evaluation.
One reviewer had a long back-and-forth with the authors about whether to provide a standard protocol for hyperparameter tuning. I side with the authors on this one: it seems like a bad idea to force a one-size-fits-all protocol here.
As a lesser point, I'm a little concerned about the strength of some of the baselines. As reviewers point out, some of the baseline results are weaker than typical implementations of those methods. One explanation might be the lack of learning rate schedules, something that's critical to get reasonable performance on some of these tasks. I get that using a fixed learning rate simplifies the grid search protocol, but I'm worried it will hurt the baselines enough that effective learning rate schedules and normalization issues come to dominate the comparisons.
Still, the benchmark suite seems well constructed on the whole, and will probably be useful for evaluation of neural net optimizers. I recommend acceptance.
| val | [
"B1xd12Y037",
"rkeqB9k30Q",
"H1lmbe0tC7",
"B1g4zLh-C7",
"SkgW52MsaQ",
"HylEM34GT7",
"Byxn8_ae6Q",
"Syx2vZqgTQ",
"S1xprW9lp7",
"Bye00eclTQ",
"r1l6IbIc2X",
"r1eC9mIc2X"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new benchmark suite to compare optimizer on deep neural networks. It provides a pipeline to help streamlining the analysis of new optimizers which would favor easily reproducible results and fair comparisons.\n\nQuality\n\nThe paper covers well the problems underlying the construction of such... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_rJg6ssC5Y7",
"H1lmbe0tC7",
"B1g4zLh-C7",
"SkgW52MsaQ",
"HylEM34GT7",
"Byxn8_ae6Q",
"Syx2vZqgTQ",
"B1xd12Y037",
"r1eC9mIc2X",
"r1l6IbIc2X",
"iclr_2019_rJg6ssC5Y7",
"iclr_2019_rJg6ssC5Y7"
] |
iclr_2019_rJg8yhAqKm | InfoBot: Transfer and Exploration via the Information Bottleneck | A central challenge in reinforcement learning is discovering effective policies for tasks where rewards are sparsely distributed. We postulate that in the absence of useful reward signals, an effective exploration strategy should seek out {\it decision states}. These states lie at critical junctions in the state space from where the agent can transition to new, potentially unexplored regions. We propose to learn about decision states from prior experience. By training a goal-conditioned model with an information bottleneck, we can identify decision states by examining where the model accesses the goal state through the bottleneck. We find that this simple mechanism effectively identifies decision states, even in partially observed settings. In effect, the model learns the sensory cues that correlate with potential subgoals. In new environments, this model can then identify novel subgoals for further exploration, guiding the agent through a sequence of potential decision states and through new regions of the state space. | accepted-poster-papers | The paper presents the use of information bottlenecks as a way to identify key "decision states" in exploration, in a goal-conditioned model. The concept of "decision states" is actually common in RL, states where exploring can lead to very diverse/new states. The implementation of the "information bottleneck" is done by adding a regularizing term, the conditional mutual information I(A;G|S).
The main weaknesses of the paper were its lack of clarity and the experimental section. It seems to me that the rebuttals, and the additional experiments and details, made the paper worthy of publication. The authors cleared enough of the gray areas and showcased the relative merits of the methods. | train | [
"ryeX1rXrlV",
"SJxWVtZ5pX",
"B1ls9HHb1V",
"SJxUd48kk4",
"SJlJ48UJJE",
"rJg476FORQ",
"SJlbab6rCm",
"Hkg7_bprAm",
"HJlAV5ZPR7",
"ryxhXE9eAQ",
"rJxSNHQMRQ",
"Hyg3Bds96Q",
"BJlVbP2ZCQ",
"SkgQ1PcgCm",
"SJlWokadnQ",
"rJxR90hl0Q",
"rkluo4qgCQ",
"rkgF37cxCQ",
"Bke-kjhq6m",
"Skevj_35pX"... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author... | [
"The authors thank the reviewer for reading the rebuttal, and increasing their score.\n\nThanks for your time! ",
"The authors propose a new regularizer for policy search in a multi-goal RL setting. The objective promotes a more efficient exploration strategy by encouraging the agent to learn policies that depend... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"SJxWVtZ5pX",
"iclr_2019_rJg8yhAqKm",
"iclr_2019_rJg8yhAqKm",
"BJxrL1eLoQ",
"SJxWVtZ5pX",
"BJxrL1eLoQ",
"SJxWVtZ5pX",
"BJxrL1eLoQ",
"SJlWokadnQ",
"Bke-kjhq6m",
"H1xMDas56m",
"iclr_2019_rJg8yhAqKm",
"rJxR90hl0Q",
"iclr_2019_rJg8yhAqKm",
"iclr_2019_rJg8yhAqKm",
"SkeROX29am",
"SJlWokadn... |
iclr_2019_rJgTTjA9tX | The Comparative Power of ReLU Networks and Polynomial Kernels in the Presence of Sparse Latent Structure | There has been a large amount of interest, both in the past and particularly recently, into the relative advantage of different families of universal function approximators, for instance neural networks, polynomials, rational functions, etc. However, current research has focused almost exclusively on understanding this problem in a worst case setting: e.g. characterizing the best L1 or L_{infty} approximation in a box (or sometimes, even under an adversarially constructed data distribution.) In this setting many classical tools from approximation theory can be effectively used.
However, in typical applications we expect data to be high dimensional, but structured -- so, it would only be important to approximate the desired function well on the relevant part of its domain, e.g. a small manifold on which real input data actually lies. Moreover, even within this domain the desired quality of approximation may not be uniform; for instance in classification problems, the approximation needs to be more accurate near the decision boundary. These issues, to the best of our knowledge, have remained unexplored until now.
With this in mind, we analyze the performance of neural networks and polynomial kernels in a natural regression setting where the data enjoys sparse latent structure, and the labels depend in a simple way on the latent variables. We give an almost-tight theoretical analysis of the performance of both neural networks and polynomials for this problem, as well as verify our theory with simulations. Our results both involve new (complex-analytic) techniques, which may be of independent interest, and show substantial qualitative differences with what is known in the worst-case setting. | accepted-poster-papers | This paper makes a substantial contribution to the understanding of the approximation ability of deep networks in comparison to classical approximation classes, such as polynomials. Strong results are given that show fundamental advantages for neural network function approximators in the presence of a natural form of latent structure. The analysis techniques required to achieve these results are novel and worth reporting to the community. The reviewers are uniformly supportive. | val | [
"Hkl1W-vLRm",
"Sylr0yhhhX",
"SyeekjH9h7",
"ByeUreMZhX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for their valuable feedback, which we have incorporated into the new revision of the paper. In particular, in response to AnonReviewer2, we added a discussion of a related work by Zhang et al. on kernel methods simulating neural networks and have added more details to both the proofs and pro... | [
-1,
7,
7,
7
] | [
-1,
3,
3,
3
] | [
"iclr_2019_rJgTTjA9tX",
"iclr_2019_rJgTTjA9tX",
"iclr_2019_rJgTTjA9tX",
"iclr_2019_rJgTTjA9tX"
] |
iclr_2019_rJgYxn09Fm | Learning Implicitly Recurrent CNNs Through Parameter Sharing | We introduce a parameter sharing scheme, in which different layers of a convolutional neural network (CNN) are defined by a learned linear combination of parameter tensors from a global bank of templates. Restricting the number of templates yields a flexible hybridization of traditional CNNs and recurrent networks. Compared to traditional CNNs, we demonstrate substantial parameter savings on standard image classification tasks, while maintaining accuracy.
Our simple parameter sharing scheme, though defined via soft weights, in practice often yields trained networks with near strict recurrent structure; with negligible side effects, they convert into networks with actual loops. Training these networks thus implicitly involves discovery of suitable recurrent architectures. Though considering only the aspect of recurrent links, our trained networks achieve accuracy competitive with those built using state-of-the-art neural architecture search (NAS) procedures.
Our hybridization of recurrent and convolutional networks may also represent a beneficial architectural bias. Specifically, on synthetic tasks which are algorithmic in nature, our hybrid networks both train faster and extrapolate better to test examples outside the span of the training set. | accepted-poster-papers | This paper proposed an interesting approach to weight sharing among CNN layers via shared weight templates to save parameters. It's well written with convincing results. Reviewers have a consensus on accept. | train | [
"H1lWxZDEAm",
"Sye4qrs7Am",
"HJxuxUsQA7",
"SklIGLjQAm",
"BygqPHo70Q",
"HyxiNEjQCQ",
"S1eeeNoXCX",
"BygndrCp3m",
"Hke2lCb92Q",
"S1gMfAgqh7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for addressing all my comments in detail, your replies satisfactorily cover all the issues I raised. I look forward to reading the final version of the manuscript.",
"8 - Sec 4.4, it is unclear to me what can be the contribution of the 1x1 initial convolution, since it will see no context and all the i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"BygqPHo70Q",
"BygndrCp3m",
"BygndrCp3m",
"BygndrCp3m",
"BygndrCp3m",
"S1gMfAgqh7",
"Hke2lCb92Q",
"iclr_2019_rJgYxn09Fm",
"iclr_2019_rJgYxn09Fm",
"iclr_2019_rJgYxn09Fm"
] |
iclr_2019_rJgbSn09Ym | Learning Particle Dynamics for Manipulating Rigid Bodies, Deformable Objects, and Fluids | Real-life control tasks involve matters of various substances---rigid or soft bodies, liquid, gas---each with distinct physical behaviors. This poses challenges to traditional rigid-body physics engines. Particle-based simulators have been developed to model the dynamics of these complex scenes; however, relying on approximation techniques, their simulation often deviates from real-world physics, especially in the long term. In this paper, we propose to learn a particle-based simulator for complex control tasks. Combining learning with particle-based systems brings in two major benefits: first, the learned simulator, just like other particle-based systems, acts widely on objects of different materials; second, the particle-based representation poses strong inductive bias for learning: particles of the same type have the same dynamics within. This enables the model to quickly adapt to new environments of unknown dynamics within a few observations. We demonstrate robots achieving complex manipulation tasks using the learned simulator, such as manipulating fluids and deformable foam, with experiments both in simulation and in the real world. Our study helps lay the foundation for robot learning of dynamic scenes with particle-based representations. | accepted-poster-papers | The paper proposes a particle based framework for learning object dynamics. A scene is represented by a hierarchical graph over particles, edges between particles are established dynamically based on Euclidean distance. The model is used for model predictive control, and there is also one experiment with a particle graph built from a real scene as opposed to simulation.
All reviewers agree that the architectural changes over previous relational networks are worthwhile and merit publication. They also suggest toning down the "dynamic" part of the graph construction by stating that edges are determined based on a radius. In particular, previous works also consider a similar addition of edges during collisions; quoting Mrowca et al.: "Collisions between objects are handled by dynamically defining pairwise collision relations ... between leaf particles..." This suggests that the comparison against a baseline for Mrowca et al. that uses a static graph is not entirely fair. The authors are encouraged to repeat the experiment without disabling such dynamic addition of edges.
| train | [
"BJxjLCJFJV",
"ryehJ1s2n7",
"SJxC3glQyE",
"H1gkmwJmJE",
"Bkey7AnqC7",
"S1gcPWZphX",
"B1eZWv4q2m",
"HJlJ9385Cm",
"rJg_Ehy-C7",
"SkxqaaJbC7",
"BkxztqkZR7",
"HylEhDyWAm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"I went through the author's reply and the updated paper in detail. The major part of the authors' response above mainly argues about the differences of proposed method from Mrowca et.al. 2018. However, this is neither the question I asked nor had any concerns about.\n\nMy main concern is that the approach seems to... | [
-1,
6,
-1,
-1,
-1,
8,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1
] | [
"rJg_Ehy-C7",
"iclr_2019_rJgbSn09Ym",
"ryehJ1s2n7",
"Bkey7AnqC7",
"BkxztqkZR7",
"iclr_2019_rJgbSn09Ym",
"iclr_2019_rJgbSn09Ym",
"iclr_2019_rJgbSn09Ym",
"ryehJ1s2n7",
"B1eZWv4q2m",
"S1gcPWZphX",
"iclr_2019_rJgbSn09Ym"
] |
iclr_2019_rJl0r3R9KX | Regularized Learning for Domain Adaptation under Label Shifts | We propose Regularized Learning under Label shifts (RLLS), a principled and a practical domain-adaptation algorithm to correct for shifts in the label distribution between a source and a target domain. We first estimate importance weights using labeled source data and unlabeled target data, and then train a classifier on the weighted source samples. We derive a generalization bound for the classifier on the target domain which is independent of the (ambient) data dimensions, and instead only depends on the complexity of the function class. To the best of our knowledge, this is the first generalization bound for the label-shift problem where the labels in the target domain are not available. Based on this bound, we propose a regularized estimator for the small-sample regime which accounts for the uncertainty in the estimated weights. Experiments on the CIFAR-10 and MNIST datasets show that RLLS improves classification accuracy, especially in the low sample and large-shift regimes, compared to previous methods. | accepted-poster-papers | The paper gives a novel algorithm for transfer learning under label distribution shift, with provable guarantees. As the reviewers pointed out, the pros include: 1) a solid and well-motivated algorithm for an understudied problem, and 2) the algorithm is implemented empirically and gives good performance. The drawbacks include an incomplete/unclear comparison with previous work. The authors claimed that the code of the previous work could not be run to completion within a reasonable amount of time. The AC decided that the paper could be accepted without such a comparison, but the authors are strongly urged to clarify this point or include the comparison for a smaller dataset in the final revision if possible.
"S1xCYXQGyV",
"BJe5ulInCQ",
"r1ghyrDc2m",
"rygX0-Ehp7",
"BkeG9kNh6X",
"BJg59Q0iT7",
"Hkldv4Aj6X",
"SyxJIBjuhm",
"rkl-PP8d2Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank you for providing the link to the code. Inspired by your suggestion to analyze [1], we deployed the code provided by the authors and ran it for data with dimensionality equal to 700 (similar to MNIST).\nIn our experiments, we found that the largest number of samples for which we could feasib... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"BJe5ulInCQ",
"rygX0-Ehp7",
"iclr_2019_rJl0r3R9KX",
"rkl-PP8d2Q",
"SyxJIBjuhm",
"iclr_2019_rJl0r3R9KX",
"r1ghyrDc2m",
"iclr_2019_rJl0r3R9KX",
"iclr_2019_rJl0r3R9KX"
] |
iclr_2019_rJlDnoA5Y7 | Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs | The Softmax function is used in the final layer of nearly all existing sequence-to-sequence models for language generation. However, it is usually the slowest layer to compute, which limits the vocabulary size to a subset of most frequent types; and it has a large memory footprint. We propose a general technique for replacing the softmax layer with a continuous embedding layer. Our primary innovations are a novel probabilistic loss, and a training and inference procedure in which we generate a probability distribution over pre-trained word embeddings, instead of a multinomial distribution over the vocabulary obtained via softmax. We evaluate this new class of sequence-to-sequence models with continuous outputs on the task of neural machine translation. We show that our models obtain up to 2.5x speed-up in training time while performing on par with the state-of-the-art models in terms of translation quality. These models are capable of handling very large vocabularies without compromising on translation quality. They also produce more meaningful errors than the softmax-based models, as these errors typically lie in a subspace of the vector space of the reference translations. | accepted-poster-papers | This is a meta-review with the recommendation, but I will ultimately leave the final call to the programme chairs, as this submission has a number of valid concerns.
The proposed approach is one of the early, principled ones to use (fixed) dense vectors for computing the predictive probability without resorting to softmax; it scales better than, and works almost as well as, softmax in neural sequence modelling. The reviewers as well as public commentators have noticed some (potentially significant) shortcomings, such as instability of learning due to numerical precision and the inability to use beam search (perhaps due to the sub-optimal calibration of probabilities under vMF). However, I believe these two issues should be addressed as separate follow-up work, not necessarily by the authors themselves but by a broader community who would find this approach appealing for their own work, which would only be possible if the authors presented this work and had a chance to discuss it with the community at the conference. Therefore, I recommend it be accepted.
"BkggPnwY2X",
"rklFQ4CPi7",
"rkx6JoVwpX",
"Hylv394DTm",
"BJlN4K4PTQ",
"r1xMA_VPa7",
"HylClIbC2m",
"S1gObe4q2X",
"S1gy51o03X",
"B1xGjesyoX",
"HkxNHeoR57",
"B1gSLhr45m",
"r1xVvGVZ5m",
"B1eWBOb4cQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"public",
"author",
"public",
"public",
"public"
] | [
"\n\n[clarity]\nThis paper is basically well written. \nThe motivation is clear and reasonable.\nHowever, I have some points that I need to confirm for review (Please see the significance part).\n\n\n[originality]\nThe idea of taking advantage of von Mises-Fisher distributions is not novel in the context of DL/DNN ... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_rJlDnoA5Y7",
"B1xGjesyoX",
"S1gy51o03X",
"BkggPnwY2X",
"S1gObe4q2X",
"HylClIbC2m",
"iclr_2019_rJlDnoA5Y7",
"iclr_2019_rJlDnoA5Y7",
"BkggPnwY2X",
"iclr_2019_rJlDnoA5Y7",
"r1xVvGVZ5m",
"B1eWBOb4cQ",
"iclr_2019_rJlDnoA5Y7",
"r1xVvGVZ5m"
] |
iclr_2019_rJlEojAqFm | Relational Forward Models for Multi-Agent Learning | The behavioral dynamics of multi-agent systems have a rich and orderly structure, which can be leveraged to understand these systems, and to improve how artificial agents learn to operate in them. Here we introduce Relational Forward Models (RFM) for multi-agent learning, networks that can learn to make accurate predictions of agents' future behavior in multi-agent environments. Because these models operate on the discrete entities and relations present in the environment, they produce interpretable intermediate representations which offer insights into what drives agents' behavior, and what events mediate the intensity and valence of social interactions. Furthermore, we show that embedding RFM modules inside agents results in faster learning systems compared to non-augmented baselines.
As more and more of the autonomous systems we develop and interact with become multi-agent in nature, developing richer analysis tools for characterizing how and why agents make decisions is increasingly necessary. Moreover, developing artificial agents that quickly and safely learn to coordinate with one another, and with humans in shared environments, is crucial. | accepted-poster-papers |
pros:
- interesting application of graph networks for relational inference in MARL, allowing interpretability and, as the results show, increasing performance
- better learning curves in several games
- somewhat better forward prediction than baselines
cons:
- perhaps some lingering confusion about the amount of improvement over the LSTM+MLP baseline
Many of the reviewers' other issues have been addressed in revision, and I recommend acceptance. | train | [
"HylpLcxbTm",
"BJgBS8y0nX",
"rke5oY6H0X",
"ryglq4An67",
"rJlz7QR26Q",
"HkxhF8j8TX",
"SygcOsjn67",
"ryeSysshT7",
"rJe9bjs36X",
"ryg9NANi6Q",
"BJxryqVsTQ",
"Byxww9-qT7",
"H1gmKi-56m",
"Syes3cZcp7",
"rygAtEgF6Q",
"HJeL-oyF6Q",
"SkeCbN1KpX",
"rklQ8hTOT7",
"S1l7Eh6d6m",
"SyemfhaO6Q"... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"a... | [
"\nThis paper used graph neural networks to do relational reasoning of multi-agent systems to predict the actions and returns of MARL agents that they call Relational Forward Modeling. They used RFM to analyze and assess the coordination between agents in three different multi-agent environments. They then construc... | [
7,
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_rJlEojAqFm",
"iclr_2019_rJlEojAqFm",
"BJgBS8y0nX",
"ryeSysshT7",
"rJe9bjs36X",
"iclr_2019_rJlEojAqFm",
"ryg9NANi6Q",
"BJxryqVsTQ",
"BJxryqVsTQ",
"H1gmKi-56m",
"Syes3cZcp7",
"SkeCbN1KpX",
"rygAtEgF6Q",
"HJeL-oyF6Q",
"r1xuajaOT7",
"SyemfhaO6Q",
"S1l7Eh6d6m",
"HylpLcxbTm",
... |
iclr_2019_rJlWOj0qF7 | Imposing Category Trees Onto Word-Embeddings Using A Geometric Construction | We present a novel method to precisely impose tree-structured category information onto word-embeddings, resulting in ball embeddings in higher dimensional spaces (N-balls for short). Inclusion relations among N-balls implicitly encode subordinate relations among categories. The similarity measurement in terms of the cosine function is enriched by category information. Using a geometric construction method instead of back-propagation, we create large N-ball embeddings that satisfy two conditions: (1) category trees are precisely imposed onto word embeddings at zero energy cost; (2) pre-trained word embeddings are well preserved. A new benchmark data set is created for validating the category of unknown words. Experiments show that N-ball embeddings, carrying category information, significantly outperform word embeddings in the test of nearest neighborhoods, and demonstrate surprisingly good performance in validating categories of unknown words. Source codes and data-sets are free for public access \url{https://github.com/gnodisnait/nball4tree.git} and \url{https://github.com/gnodisnait/bp94nball.git}. | accepted-poster-papers | The authors provide an interesting method to infuse hierarchical information into existing word vectors. This could help with a variety of tasks that require both knowledge base information and textual co-occurrence counts.
Despite some of the shortcomings that the reviewers point out, I believe this could be one missing puzzle piece in connecting symbolic information/sets/logic/KBs with neural nets, and hence I recommend acceptance of this paper. | train | [
"HklO7Cdt37",
"S1lpBN786Q",
"S1gYrFNQCQ",
"rJxJVYNq2Q",
"rJeKs2E52Q"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes N-ball embedding for taxonomic data. An N-ball is a pair of a centroid vector and the radius from the center, which represents a word.\n\nMajor comments:\n\n- The weakness of this paper is lack of experimental comparisons with other prominent studies. The Poincare embedding and the Lorentz mode... | [
3,
-1,
-1,
4,
4
] | [
4,
-1,
-1,
5,
4
] | [
"iclr_2019_rJlWOj0qF7",
"iclr_2019_rJlWOj0qF7",
"rJxJVYNq2Q",
"iclr_2019_rJlWOj0qF7",
"iclr_2019_rJlWOj0qF7"
] |
iclr_2019_rJleN20qK7 | Two-Timescale Networks for Nonlinear Value Function Approximation | A key component for many reinforcement learning agents is to learn a value function, either for policy evaluation or control. Many of the algorithms for learning values, however, are designed for linear function approximation---with a fixed basis or fixed representation. Though there have been a few sound extensions to nonlinear function approximation, such as nonlinear gradient temporal difference learning, these methods have largely not been adopted, eschewed in favour of simpler but not sound methods like temporal difference learning and Q-learning. In this work, we provide a two-timescale network (TTN) architecture that enables linear methods to be used to learn values, with a nonlinear representation learned at a slower timescale. The approach facilitates the use of algorithms developed for the linear setting, such as data-efficient least-squares methods, eligibility traces and the myriad of recently developed linear policy evaluation algorithms, to provide nonlinear value estimates. We prove convergence for TTNs, with particular care given to ensure convergence of the fast linear component under potentially dependent features provided by the learned representation. We empirically demonstrate the benefits of TTNs, compared to other nonlinear value function approximation algorithms, both for policy evaluation and control. | accepted-poster-papers | The paper proposes a new method to approximate the nonlinear value function by estimating it as a sum of linear and nonlinear terms. The nonlinear term is updated much slower than the linear term, and the paper proposes to use a
fast least-squares algorithm to update the linear term. Convergence results are also discussed and empirical evidence is provided.
As reviewers have pointed out, the novelty of the paper is limited, but the ideas are interesting and could be useful for the community. I strongly recommend taking the reviewers' comments into account for the camera-ready version and also adding a discussion of the relationship to existing work.
Overall, I think this paper is interesting and I recommend acceptance.
| train | [
"rJlwi0bOAQ",
"rJeJQ-ud6m",
"HygVJ4_w0Q",
"B1gw_Gm96m",
"rklRWsfvCX",
"SJep0KGPAX",
"SJx8UYzwAX",
"BJeWyAYr0Q",
"rJlD3X8n3Q",
"ByeNn94ipm",
"HkxZ4Gxf6m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Thank you for answering my questions. I have adjusted my score accordingly.\nI suggest you to add few sentences to clarify the novelty, as you explained to me in your response (especially about the eligibility traces and the target/convergence).\nAlso, I would suggest to move the catcher experiments to the appendi... | [
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
6,
6,
-1
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
4,
4,
-1
] | [
"SJx8UYzwAX",
"iclr_2019_rJleN20qK7",
"SJep0KGPAX",
"iclr_2019_rJleN20qK7",
"ByeNn94ipm",
"B1gw_Gm96m",
"rJeJQ-ud6m",
"HkxZ4Gxf6m",
"iclr_2019_rJleN20qK7",
"iclr_2019_rJleN20qK7",
"rJlD3X8n3Q"
] |
iclr_2019_rJliMh09F7 | Diversity-Sensitive Conditional Generative Adversarial Networks | We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network (cGAN). Although conditional distributions are multi-modal (i.e., having many modes) in practice, most cGAN approaches tend to learn an overly simplified distribution where an input is always mapped to a single output regardless of variations in latent code. To address such issue, we propose to explicitly regularize the generator to produce diverse outputs depending on latent codes. The proposed regularization is simple, general, and can be easily integrated into most conditional GAN objectives. Additionally, explicit regularization on generator allows our method to control a balance between visual quality and diversity. We demonstrate the effectiveness of our method on three conditional generation tasks: image-to-image translation, image inpainting, and future video prediction. We show that simple addition of our regularization to existing models leads to surprisingly diverse generations, substantially outperforming the previous approaches for multi-modal conditional generation specifically designed in each individual task. | accepted-poster-papers | The paper proposes a regularization term on the generator's gradient that increases sensitivity of the generator to the input noise variable in conditional and unconditional Generative Adversarial networks, and results in multimodal predictions. All reviewers agree that this is a simple and useful addition to current GANs. Experiments that demonstrate the trade off between diversity and generation quality would be important to include, as well as the experiment on using the proposed method on unconditional GANs, which was conducted during the discussion period. | train | [
"Hyg2WIvq27",
"HkexNsjFAQ",
"S1lcPdiF07",
"BJgHwssK0m",
"BylPIFoFCX",
"HylZvVLo3m",
"rkxiYmDq2m",
"HJgRk7fChQ",
"BkgYd5Rd3Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"The paper proposes a simple way of addressing the issue of mode-collapse by adding a regularisation to force the outputs to be diverse. Specifically, a loss is added that maximises the l2 loss between the images generated, normalised by the distance between the corresponding latent codes. This method is also used ... | [
6,
-1,
-1,
-1,
-1,
7,
7,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
3,
5,
-1,
-1
] | [
"iclr_2019_rJliMh09F7",
"Hyg2WIvq27",
"HylZvVLo3m",
"HkexNsjFAQ",
"rkxiYmDq2m",
"iclr_2019_rJliMh09F7",
"iclr_2019_rJliMh09F7",
"BkgYd5Rd3Q",
"iclr_2019_rJliMh09F7"
] |
iclr_2019_rJlk6iRqKX | Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach | We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions. This is a very challenging problem since the direct extension of state-of-the-art white-box attacks (e.g., C&W or PGD) to the hard-label black-box setting will require minimizing a non-continuous step function, which is combinatorial and cannot be solved by a gradient-based optimizer. The only two current approaches are based on random walk on the boundary (Brendel et al., 2017) and random trials to evaluate the loss function (Ilyas et al., 2018), which require lots of queries and lack convergence guarantees.
We propose a novel way to formulate the hard-label black-box attack as a real-valued optimization problem which is usually continuous and can be solved by any zeroth order optimization algorithm. For example, using the Randomized Gradient-Free method (Nesterov & Spokoiny, 2017), we are able to bound the number of iterations needed for our algorithm to achieve stationary points under mild assumptions. We demonstrate that our proposed method outperforms the previous stochastic approaches to attacking convolutional neural networks on MNIST, CIFAR, and ImageNet datasets. More interestingly, we show that the proposed algorithm can also be used to attack other discrete and non-continuous machine learning models, such as Gradient Boosting Decision Trees (GBDT). | accepted-poster-papers | The reviewers liked the clarity of the material and agreed the experimental study is convincing. Accept. | train | [
"rkltT1X9pX",
"BkebK1X967",
"Skx3G1mqpQ",
"HJgnVJuGaQ",
"Hyxsb3Ha2m",
"SJefbO63hQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"1. Without additional assumptions, we couldn’t prove g(theta) is continuous for general deep neural networks. It’s true that the g(theta) may not be continuous; for example, we think it might be possible to construct some counter-examples using ReLU activation. However, although the assumption may not hold for DNN... | [
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
3,
5,
4
] | [
"Hyxsb3Ha2m",
"HJgnVJuGaQ",
"SJefbO63hQ",
"iclr_2019_rJlk6iRqKX",
"iclr_2019_rJlk6iRqKX",
"iclr_2019_rJlk6iRqKX"
] |
iclr_2019_rJlnB3C5Ym | Rethinking the Value of Network Pruning | Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned ``important'' weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited ``important'' weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization. | accepted-poster-papers | The paper presents a lot of empirical evidence that fine tuning pruned networks is inferior to training them from scratch. 
These results seem unsurprising in retrospect, but hindsight is 20-20. The reviewers raised a wide range of issues, some of which were addressed and some of which were not. I recommend to the authors that they make sure that any claims they draw from their experiments are sufficiently circumscribed. E.g., the lottery ticket experiments done by Anonymous in response to this paper show that the random initialization performs worse than restarting with the initial weights (other than in resnet, though this seems possibly due to the learning rate). There is something different in their setting, and so your claims should be properly circumscribed. I don't think the "standard" versus "nonstandard" terminology is appropriate until the actual boundary between these two behaviors is identified. I would recommend the authors make guarded claims here. | val | [
"HJxVsKesy4",
"BJggHPloJV",
"Hkgd8UEUyE",
"HJlT5l3Kh7",
"BkeOueaS1E",
"B1esWvSXkV",
"BygbtISQkV",
"B1xSR8r71N",
"SyxGOzHu2Q",
"SkeqNuj5R7",
"rJg7RrcqAX",
"ryxwEIq9AQ",
"H1lQcS95R7",
"H1eWSaXYAm",
"HklvbPrbRX",
"SkgWHPrWRX",
"H1gQ_fLYT7",
"BJlsAZIKpQ",
"HketQGIKTm",
"rylNnUrb0Q"... | [
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",... | [
"Dear reviewer, \n I think the discussion above answers your #5 https://openreview.net/forum?id=rJlnB3C5Ym¬eId=H1gtxtsJAQ¬eId=HklijvuWaX\n In short, no matter how long the model is fine-tuned, the comparison is unfair, since the original ImageNet model is not converged. If the original model is converged, p... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-... | [
"H1eWSaXYAm",
"Hkgd8UEUyE",
"HklvbPrbRX",
"iclr_2019_rJlnB3C5Ym",
"ryxwEIq9AQ",
"H1eWSaXYAm",
"H1eWSaXYAm",
"H1eWSaXYAm",
"iclr_2019_rJlnB3C5Ym",
"iclr_2019_rJlnB3C5Ym",
"HJlT5l3Kh7",
"HJlT5l3Kh7",
"HJlT5l3Kh7",
"BJlsAZIKpQ",
"rJgoFVI52m",
"rJgoFVI52m",
"SyxGOzHu2Q",
"SyxGOzHu2Q",
... |
iclr_2019_rJxHsjRqFQ | Hyperbolic Attention Networks | Recent approaches have successfully demonstrated the benefits of learning the parameters of shallow networks in hyperbolic space. We extend this line of work by imposing hyperbolic geometry on the embeddings used to compute the ubiquitous attention mechanisms for different neural networks architectures. By only changing the geometry of embedding of object representations, we can use the embedding space more efficiently without increasing the number of parameters of the model. Mainly as the number of objects grows exponentially for any semantic distance from the query, hyperbolic geometry --as opposed to Euclidean geometry-- can encode those objects without having any interference. Our method shows improvements in generalization on neural machine translation on WMT'14 (English to German), learning on graphs (both on synthetic and real-world graph tasks) and visual question answering (CLEVR) tasks while keeping the neural representations compact. | accepted-poster-papers | Reviewers all agree that this is a strong submission.
I also believe it is interesting that only by changing the geometry of embeddings, they can use the space more efficiently without increasing the number of parameters. | train | [
"H1lSV55B0m",
"HyeEJ59H0Q",
"S1xA9YqSAQ",
"rJe2zKqBA7",
"B1eVP5WS6m",
"H1g2vUL9nQ",
"Hylg-SL927",
"ryl7AYUH3Q",
"BygxP5uf9Q",
"BylT_-kTY7"
] | [
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you for your remarkable feedback and comments about our paper.\n\n> Question: In Figure 3 (Center), the number of nodes 1000 and 1200 are pretty close. How about the results on 500 nodes and 2000 nodes? It seems the accuracy difference increases as the number of nodes increases. Is this true? \n\nThe differe... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
-1,
-1
] | [
"ryl7AYUH3Q",
"Hylg-SL927",
"H1g2vUL9nQ",
"B1eVP5WS6m",
"iclr_2019_rJxHsjRqFQ",
"iclr_2019_rJxHsjRqFQ",
"iclr_2019_rJxHsjRqFQ",
"iclr_2019_rJxHsjRqFQ",
"BylT_-kTY7",
"iclr_2019_rJxHsjRqFQ"
] |
iclr_2019_rJzLciCqKm | Learning from Positive and Unlabeled Data with a Selection Bias | We consider the problem of learning a binary classifier only from positive data and unlabeled data (PU learning). Recent methods of PU learning commonly assume that the labeled positive data are identically distributed as the unlabeled positive data. However, this assumption is unrealistic in many instances of PU learning because it fails to capture the existence of a selection bias in the labeling process. When the data has a selection bias, it is difficult to learn the Bayes optimal classifier by conventional methods of PU learning. In this paper, we propose a method to partially identify the classifier. The proposed algorithm learns a scoring function that preserves the order induced by the class posterior under mild assumptions, which can be used as a classifier by setting an appropriate threshold. Through experiments, we show that the method outperforms previous methods for PU learning on various real-world datasets. | accepted-poster-papers | This manuscript proposes a new algorithm for learning from positive and unlabeled data. The motivation for this work includes cases of selection bias, where the positive label is correlated with observation. The resulting procedure is shown to learn a scoring function that preserves the class-posterior ordering, and can thus be thresholded to obtain a classifier.
The problem addressed is interesting, and the approach sounds reasonable. The writing seems to be well done, particularly after the rebuttal when the work was better placed in context.
The reviewers and AC note issues with the evaluation of the proposed method. In particular, the authors do not provide a sufficiently convincing empirical evaluation on real data. | test | [
"HkeyUe_96Q",
"BJxy8aDqaX",
"ryxnYnvqp7",
"rJgSzZK33Q",
"H1g4cld937",
"SJxpvEPt3m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your insightful comments. The references you suggested are very helpful and we have included them in the newest manuscript. Our replies are listed below.\n\nQ1. It seems the problem could be cast as an (interesting) special case of learning from instance dependent label noise. The assumption of the s... | [
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
2,
4,
4
] | [
"H1g4cld937",
"rJgSzZK33Q",
"SJxpvEPt3m",
"iclr_2019_rJzLciCqKm",
"iclr_2019_rJzLciCqKm",
"iclr_2019_rJzLciCqKm"
] |
iclr_2019_rk4Qso0cKm | Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network | We present a new algorithm to train a robust neural network against adversarial attacks.
Our algorithm is motivated by the following two ideas. First, although recent work has demonstrated that fusing randomness can improve the robustness of neural networks (Liu 2017), we noticed that adding noise blindly to all the layers is not the optimal way to incorporate randomness.
Instead, we model randomness under the framework of Bayesian Neural Network (BNN) to formally learn the posterior distribution of models in a scalable way. Second, we formulate the mini-max problem in BNN to learn the best model distribution under adversarial attacks, leading to an adversarial-trained Bayesian neural net. Experiment results demonstrate that the proposed algorithm achieves state-of-the-art performance under strong attacks. On CIFAR-10 with VGG network, our model leads to 14% accuracy improvement compared with adversarial training (Madry 2017) and random self-ensemble (Liu, 2017) under PGD attack with 0.035 distortion, and the gap becomes even larger on a subset of ImageNet. | accepted-poster-papers | Reviewers are in a consensus and recommended to accept after engaging with the authors. Please take reviewers' comments into consideration to improve your submission for the camera ready.
| train | [
"SJeDntQ3k4",
"HkgXvWptJN",
"SyeHI_C72X",
"Hye80M2JnX",
"H1xh1O74AQ",
"r1l_tTGfCm",
"Bye2N8dbAX",
"Hylsz71KpQ",
"HyxExoXr6X",
"SklNhuR_67",
"SJeXtbTEpQ",
"SyggL18wpm",
"HJxoBdEI6X",
"r1xcGYhNTX",
"S1x4D9aETX",
"Byxndvbkh7"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"public",
"author",
"author",
"public",
"author",
"author",
"official_reviewer"
] | [
"Thank you very much for introducing your recent paper on this topic! Since the paper is available after the ICLR submission deadline, we were not aware of this work. We will include some discussions and comparisons in our paper: \n- Based on our understanding, although both papers use Bayesian method to defense, ... | [
-1,
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"HkgXvWptJN",
"iclr_2019_rk4Qso0cKm",
"iclr_2019_rk4Qso0cKm",
"iclr_2019_rk4Qso0cKm",
"r1l_tTGfCm",
"Bye2N8dbAX",
"SJeXtbTEpQ",
"SklNhuR_67",
"iclr_2019_rk4Qso0cKm",
"SyggL18wpm",
"Hye80M2JnX",
"HJxoBdEI6X",
"iclr_2019_rk4Qso0cKm",
"SyeHI_C72X",
"Byxndvbkh7",
"iclr_2019_rk4Qso0cKm"
] |
iclr_2019_rkMW1hRqKX | Optimal Completion Distillation for Sequence Learning | We present Optimal Completion Distillation (OCD), a training procedure for optimizing sequence to sequence models based on edit distance. OCD is efficient, has no hyper-parameters of its own, and does not require pre-training or joint optimization with conditional log-likelihood. Given a partial sequence generated by the model, we first identify the set of optimal suffixes that minimize the total edit distance, using an efficient dynamic programming algorithm. Then, for each position of the generated sequence, we use a target distribution which puts equal probability on the first token of all the optimal suffixes. OCD achieves the state-of-the-art performance on end-to-end speech recognition, on both Wall Street Journal and Librispeech datasets, achieving 9.3% WER and 4.5% WER, respectively. | accepted-poster-papers | This paper proposes an algorithm for training sequence-to-sequence models from scratch to optimize edit distance. The algorithm, called optimal completion distillation (OCD), avoids the exposure bias problem inherent in maximum likelihood estimation training, is efficient and easily implemented, and does not have any tunable hyperparameters. Experiments on Librispeech and Wall Street Journal show that OCD improves test performance over both maximum likelihood and scheduled sampling, yielding state-of-the-art results. The primary concerns expressed by the reviewers pertained to the relationship of OCD to methods such as SEARN, DAgger, AggreVaTe, LOLS, and several other papers. The revision addresses the problem with a substantially larger number of references and discussion relating OCD to the previous work. Some issues of clarity were also well addressed by the revision. | train | [
"rJe4U-Bcnm",
"Hyg1e_DIRm",
"rkg1x8vI0m",
"SyxYgSvUC7",
"Syg5VVPUCm",
"ryxxYXwUA7",
"BygnjVilT7",
"Hkx2Wz86h7",
"rkeFupnrhQ",
"SyxAaU5s2X",
"SkgkR9CV2X",
"B1gEfjSwhm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"public",
"public",
"author"
] | [
"The authors propose an alternative approach to training seq2seq models, which addresses concerns about exposure bias and about the typical MLE objective being different from the final evaluation metric. In particular, the authors propose to use a dynamic program to compute the optimal continuations of predicted pr... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1,
-1,
-1
] | [
"iclr_2019_rkMW1hRqKX",
"iclr_2019_rkMW1hRqKX",
"rJe4U-Bcnm",
"Hkx2Wz86h7",
"rkeFupnrhQ",
"BygnjVilT7",
"iclr_2019_rkMW1hRqKX",
"iclr_2019_rkMW1hRqKX",
"iclr_2019_rkMW1hRqKX",
"B1gEfjSwhm",
"iclr_2019_rkMW1hRqKX",
"SkgkR9CV2X"
] |
iclr_2019_rke4HiAcY7 | Caveats for information bottleneck in deterministic scenarios | Information bottleneck (IB) is a method for extracting information from one random variable X that is relevant for predicting another random variable Y. To do so, IB identifies an intermediate "bottleneck" variable T that has low mutual information I(X;T) and high mutual information I(Y;T). The "IB curve" characterizes the set of bottleneck variables that achieve maximal I(Y;T) for a given I(X;T), and is typically explored by maximizing the "IB Lagrangian", I(Y;T) - βI(X;T). In some cases, Y is a deterministic function of X, including many classification problems in supervised learning where the output class Y is a deterministic function of the input X. We demonstrate three caveats when using IB in any situation where Y is a deterministic function of X: (1) the IB curve cannot be recovered by maximizing the IB Lagrangian for different values of β; (2) there are "uninteresting" trivial solutions at all points of the IB curve; and (3) for multi-layer classifiers that achieve low prediction error, different layers cannot exhibit a strict trade-off between compression and prediction, contrary to a recent proposal. We also show that when Y is a small perturbation away from being a deterministic function of X, these three caveats arise in an approximate way. To address problem (1), we propose a functional that, unlike the IB Lagrangian, can recover the IB curve in all cases. We demonstrate the three caveats on the MNIST dataset. | accepted-poster-papers | This paper considers the information bottleneck Lagrangian as a tool for studying deep networks in the common case of supervised learning (predicting label Y from features X) with a deterministic model, and identifies a number of troublesome issues. 
(1) The information bottleneck curve cannot be recovered by optimizing the Lagrangian for different values of β because in the deterministic case, the IB curve is piecewise linear, not strictly concave. (2) Uninteresting representations can lie on the IB curve, so information bottleneck optimality does not imply that a representation is useful. (3) In a multilayer model with a low probability of error, the only tradeoff that successive layers can make between compression and prediction is that deeper layers may compress more. Experiments on MNIST illustrate these issues, and supplementary material shows that these issues also apply to the deterministic information bottleneck and to stochastic models that are nearly deterministic. There was a substantial degree of disagreement between the reviewers of this paper. One reviewer (R3) suggested that all the conclusions of the paper are the consequence of P(X,Y) being degenerate. The authors responded to this criticism in their response and revision quite effectively, in the opinion of the AC. Because R3 failed to participate in the discussion, this review has been discounted in the final decision. The other two reviewers were considerably more positive about the paper, with one (R1) having basically no criticisms and the other (R2) expressing some doubts about the novelty of the observations being made in the paper and their importance for practical machine learning scenarios. Following the revision and discussion, R2 expressed general satisfaction with the paper, so the AC is recommending acceptance. The AC thinks that the final paper would be clearer if the authors were to carefully distinguish between ground-truth labels used in training and the labels estimated by the model for a given input. At the moment, the symbol Y appears to be overloaded, standing for both. Perhaps the authors should place a hat over Y when it is standing for estimated labels? | val | [
"Skx29pAFAX",
"BJlmr_MYCX",
"Skx08mlv0Q",
"r1gp5blvAX",
"H1g6ymeICX",
"HJxsTLJ-AQ",
"SJgf61-tpQ",
"Skg8iyZFam",
"SJl6_1-Kp7",
"rkgnsAeFpQ",
"Byxrc0eKpm",
"B1e0VRlKaQ",
"SyxvmAgtaQ",
"rJlPdpxFp7",
"BJlhNTlKpQ",
"SylivA6b6m",
"S1lfqvH-T7",
"SyxYCicxpX",
"rklj4rCChQ",
"HylXDrkTn7"... | [
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your comment. The caveats discussed in our paper concern theoretical properties of IB-optimal variables and the IB Lagrangian, and hold whenever the output is a deterministic function of the input. In a machine learning context, they apply independently of the training algorithm and how the weights are ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"r1gp5blvAX",
"H1g6ymeICX",
"SylivA6b6m",
"iclr_2019_rke4HiAcY7",
"SJgf61-tpQ",
"rkgnsAeFpQ",
"Skg8iyZFam",
"SJl6_1-Kp7",
"HylIo_XX2X",
"Byxrc0eKpm",
"HylXDrkTn7",
"SyxvmAgtaQ",
"rklj4rCChQ",
"SyxYCicxpX",
"S1lfqvH-T7",
"S1lfqvH-T7",
"iclr_2019_rke4HiAcY7",
"iclr_2019_rke4HiAcY7",
... |
iclr_2019_rkeSiiA5Fm | Deep Learning 3D Shapes Using Alt-az Anisotropic 2-Sphere Convolution | The ground-breaking performance obtained by deep convolutional neural networks (CNNs) for image processing tasks is inspiring research efforts attempting to extend it for 3D geometric tasks. One of the main challenges in applying CNNs to 3D shape analysis is how to define a natural convolution operator on non-euclidean surfaces. In this paper, we present a method for applying deep learning to 3D surfaces using their spherical descriptors and alt-az anisotropic convolution on the 2-sphere. A cascade set of geodesic disk filters rotate on the 2-sphere and collect spherical patterns so as to extract geometric features for various 3D shape analysis tasks. We demonstrate theoretically and experimentally that our proposed method has the potential to bridge the gap between 2D images and 3D shapes with the desired rotation equivariance/invariance, and its effectiveness is evaluated in applications of non-rigid/rigid shape classification and shape retrieval. | accepted-poster-papers | Strengths:
Well written paper on a new kind of spherical convolution for use in spherical CNNs.
Evaluated on rigid and non-rigid 3D shape recognition and retrieval problems.
Paper provides solid strategy for efficient GPU implementation.
Weaknesses: There was some misunderstanding about the properties of the alt-az convolution detected by one of the reviewers along with some points needing clarifications. However, discussion of these issues appears to have led to a resolution of the issues.
Contention: The weaknesses above were discussed in some detail, but the procedure was not particularly contentious and the discussion unfolded well.
All reviewers rate the paper as accept, the paper clearly provides value to the community and therefore should be accepted.
| train | [
"r1lHQJ1cTQ",
"S1x-pTCtTX",
"B1ebDaAFTQ",
"rkl_6hAY6X",
"BkgnVCCK6Q",
"Hyea8SUc0Q",
"H1g9cRrcRm",
"rJesVzvgAm",
"HJegyc6c6m",
"HyV6CRt6m",
"HygPsSJLpQ",
"HylYyr91T7",
"rJeeoDOinQ"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Q5: It would be nice to see a more direct comparison between the three definitions of spherical convolution (general SO3, isotropic S2, and anisotropic S2)\nA5: The two related papers (Cohen et al 2018 for general SO3 and Esteves, and Esteves et al 2018 for isotropic S2) both use lat-lon grid and Fourier domain co... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
"HyV6CRt6m",
"HygPsSJLpQ",
"HygPsSJLpQ",
"HygPsSJLpQ",
"HylYyr91T7",
"HJegyc6c6m",
"iclr_2019_rkeSiiA5Fm",
"HJegyc6c6m",
"r1lHQJ1cTQ",
"rJeeoDOinQ",
"iclr_2019_rkeSiiA5Fm",
"iclr_2019_rkeSiiA5Fm",
"iclr_2019_rkeSiiA5Fm"
] |
iclr_2019_rke_YiRct7 | Small nonlinearities in activation functions create bad local minima in neural networks | We investigate the loss surface of neural networks. We prove that even for one-hidden-layer networks with "slightest" nonlinearity, the empirical risks have spurious local minima in most cases. Our results thus indicate that in general "no spurious local minima" is a property limited to deep linear networks, and insights obtained from linear networks may not be robust. Specifically, for ReLU(-like) networks we constructively prove that for almost all practical datasets there exist infinitely many local minima. We also present a counterexample for more general activations (sigmoid, tanh, arctan, ReLU, etc.), for which there exists a bad local minimum. Our results make the least restrictive assumptions relative to existing results on spurious local optima in neural networks. We complete our discussion by presenting a comprehensive characterization of global optimality for deep linear networks, which unifies other results on this topic. | accepted-poster-papers | This is an interesting paper that develops new techniques for analyzing the loss surface of deep networks, allowing the existence of spurious local minima to be established under fairly general conditions. The reviewers responded with uniformly positive opinions. | train | [
"r1xzVHt71E",
"ryxPjBlZyE",
"SJx3vHAQCm",
"BJl5ES0mRm",
"HJg0brAXCm",
"H1gdFd3epm",
"HkeS3_Q02X",
"Hkgdn1QB3X"
] | [
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your interest in our paper, and also for pointing out a relevant result! The paper you noted is indeed related, but its focus is a bit different from our paper. The authors made assumptions on the loss function, data distribution, network structure, and activation function, and showed that all local ... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"ryxPjBlZyE",
"iclr_2019_rke_YiRct7",
"H1gdFd3epm",
"HkeS3_Q02X",
"Hkgdn1QB3X",
"iclr_2019_rke_YiRct7",
"iclr_2019_rke_YiRct7",
"iclr_2019_rke_YiRct7"
] |
iclr_2019_rkemqsC9Fm | Information Theoretic lower bounds on negative log likelihood | In this article we use rate-distortion theory, a branch of information theory devoted to the problem of lossy compression, to shed light on an important problem in latent variable modeling of data: is there room to improve the model? One way to address this question is to find an upper bound on the probability (equivalently a lower bound on the negative log likelihood) that the model can assign to some data as one varies the prior and/or the likelihood function in a latent variable model. The core of our contribution is to formally show that the problem of optimizing priors in latent variable models is exactly an instance of the variational optimization problem that information theorists solve when computing rate-distortion functions, and then to use this to derive a lower bound on negative log likelihood. Moreover, we will show that if changing the prior can improve the log likelihood, then there is a way to change the likelihood function instead and attain the same log likelihood, and thus rate-distortion theory is of relevance to both optimizing priors as well as optimizing likelihood functions. We will experimentally argue for the usefulness of quantities derived from rate-distortion theory in latent variable modeling by applying them to a problem in image modeling. | accepted-poster-papers | Strengths: This paper gives a detailed treatment of the connections between rate distortion theory and variational lower bounds, culminating in a practical diagnostic tool. The paper is well-written.
Weaknesses: Many of the theoretical results existed in older work.
Points of contention: Most of the discussion was about the novelty of the lower bound.
Consensus: R3 and R2 both appear to recommend acceptance (R2 in a comment), and have both clearly given the paper detailed thought. | val | [
"ByeHEIo92Q",
"SJxTvNU9n7",
"rygt3n6K0m",
"rkxFnXnY0X",
"Hke68fMXAX",
"B1ln_cRNCm",
"BJlAZBGGR7",
"B1gWWp6Pp7",
"B1g_Vh5_Tm",
"SkgpEIKdTX",
"HkgjBeyBnm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"public",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper considers the optimization of the prior in the latent variable model and the selection of the likelihood function. The authors propose criteria for these problems based on a lower-bound on the negative log-likelihood, which is derived from rate-distortion theory.\n\nThere are some interesting points in ... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_rkemqsC9Fm",
"iclr_2019_rkemqsC9Fm",
"B1ln_cRNCm",
"SkgpEIKdTX",
"BJlAZBGGR7",
"B1gWWp6Pp7",
"iclr_2019_rkemqsC9Fm",
"ByeHEIo92Q",
"HkgjBeyBnm",
"SJxTvNU9n7",
"iclr_2019_rkemqsC9Fm"
] |
iclr_2019_rkevMnRqYQ | Preferences Implicit in the State of the World | Reinforcement learning (RL) agents optimize only the features specified in a reward function and are indifferent to anything left out inadvertently. This means that we must not only specify what to do, but also the much larger space of what not to do. It is easy to forget these preferences, since these preferences are already satisfied in our environment. This motivates our key insight: when a robot is deployed in an environment that humans act in, the state of the environment is already optimized for what humans want. We can therefore use this implicit preference information from the state to fill in the blanks. We develop an algorithm based on Maximum Causal Entropy IRL and use it to evaluate the idea in a suite of proof-of-concept environments designed to show its properties. We find that information from the initial state can be used to infer both side effects that should be avoided as well as preferences for how the environment should be organized. Our code can be found at https://github.com/HumanCompatibleAI/rlsp. | accepted-poster-papers | The paper proposes to take advantage of implicit preferential information in a single state, to design auxiliary reward functions that can be combined with the standard RL reward function. The motivation is to use the implicit information to infer signals that might not have been included in the reward function. The paper has some nice ideas and is quite novel. A new algorithm is developed, and is supported by proof-of-concept experiments.
Overall, the paper is a nice and novel contribution. But reviewers point out several limitations. The biggest one seems to be related to the problem setup: how to combine the inferred reward and the given reward, especially when they are in conflict with each other. A discussion of multi-objective RL might be in order. | train | [
"BJliWzYh3m",
"BylXVFpH0X",
"SJgkSEbVCm",
"r1xFA4bEC7",
"Byxpl4-NC7",
"S1lERfWEAX",
"rJlEVG-4RX",
"r1xfyOLGTm",
"Byxr6ix6nX",
"rkl-7cZuo7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to augment the explicitly stated reward function of an RL agent with auxiliary rewards/costs inferred from the initial state and a model of the state dynamics. Intuitively, the fact that a vase precariously placed in the center of the room remains intact suggests that it is a precious object t... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2019_rkevMnRqYQ",
"iclr_2019_rkevMnRqYQ",
"BJliWzYh3m",
"rkl-7cZuo7",
"BJliWzYh3m",
"Byxr6ix6nX",
"r1xfyOLGTm",
"iclr_2019_rkevMnRqYQ",
"iclr_2019_rkevMnRqYQ",
"iclr_2019_rkevMnRqYQ"
] |
iclr_2019_rkgBHoCqYX | A Kernel Random Matrix-Based Approach for Sparse PCA | In this paper, we present a random matrix approach to recover sparse principal components from n p-dimensional vectors. Specifically, considering the large dimensional setting where n, p → ∞ with p/n → c ∈ (0, ∞) and under Gaussian vector observations, we study kernel random matrices of the type f (Ĉ), where f is a three-times continuously differentiable function applied entry-wise to the sample covariance matrix Ĉ of the data. Then, assuming that the principal components are sparse, we show that taking f in such a way that f'(0) = f''(0) = 0 allows for powerful recovery of the principal components, thereby generalizing previous ideas involving more specific f functions such as the soft-thresholding function. | accepted-poster-papers | The manuscript studies a random matrix approach to recover sparse principal components. This work extends prior work using soft thresholding of the sample covariance matrix to enable sparse PCA. In this light, the main contribution of the paper is a study of generalizing soft thresholding to a broader class of functions and showing that this improves performance. The contributions of this paper are primarily theoretical.
The reviewers and AC note that the discussion could be further improved to better illustrate the contributions and place this work in context. In particular, multiple reviewers assumed that "kernel" referred to the covariance matrix. The authors provide a satisfactory rebuttal addressing these issues.
While not unanimous, overall the reviewers and AC have a positive opinion of this paper and recommend acceptance. | train | [
"Byl9Vy9t3m",
"rklLfTAba7",
"BkxY9TRWp7",
"Hkx-zCCbTX",
"ByeGIM9Cnm",
"ByltjVE52X"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an algorithm to approximate kernel matrix based on the Taylor expansion of the element-wise functions. The authors provide a spectral norm based error bound for their method and the corresponding results for the special case, \\epsilon-sparse matrix.\n\nI have some comment as follow.\n\n1. Can ... | [
6,
-1,
-1,
-1,
5,
7
] | [
4,
-1,
-1,
-1,
5,
2
] | [
"iclr_2019_rkgBHoCqYX",
"ByeGIM9Cnm",
"ByltjVE52X",
"Byl9Vy9t3m",
"iclr_2019_rkgBHoCqYX",
"iclr_2019_rkgBHoCqYX"
] |
iclr_2019_rkgK3oC5Fm | Bayesian Prediction of Future Street Scenes using Synthetic Likelihoods | For autonomous agents to successfully operate in the real world, the ability to anticipate future scene states is a key competence. In real-world scenarios, future states become increasingly uncertain and multi-modal, particularly on long time horizons. Dropout based Bayesian inference provides a computationally tractable, theoretically well grounded approach to learn different hypotheses/models to deal with uncertain futures and make predictions that correspond well to observations, i.e., are well calibrated. However, it turns out that such approaches fall short of capturing complex real-world scenes, even falling behind in accuracy when compared to the plain deterministic approaches. This is because the used log-likelihood estimate discourages diversity. In this work, we propose a novel Bayesian formulation for anticipating future scene states which leverages synthetic likelihoods that encourage the learning of diverse models to accurately capture the multi-modal nature of future scene states. We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on the Cityscapes dataset. Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting. | accepted-poster-papers | This paper proposes a method to encourage diversity in Bayesian dropout methods. A discriminator is used to facilitate diversity, which helps the method deal with multi-modality. Empirical results show good improvement over existing methods. This is a good paper and should be accepted.
| train | [
"B1gmsDaUyV",
"HJlkz38N1V",
"ryxnXzpQk4",
"rkehUL_aCm",
"BkgzBPe7Rm",
"r1lJGvgm0X",
"ryxs9IxQRQ",
"SJxRqrgQC7",
"rkxqXnMJTQ",
"ryl4znyKnm",
"BJxAQMND37"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your response. We are glad that our clarifications and changes have made the paper better. Regarding the Oracle Top 5% criterion: It is true that for a single test sequence, the Oracle Top 5% criterion measures if in the \"longer\" future there are some predicted trajectories that are still close to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"ryxnXzpQk4",
"r1lJGvgm0X",
"ryxs9IxQRQ",
"iclr_2019_rkgK3oC5Fm",
"BJxAQMND37",
"ryl4znyKnm",
"SJxRqrgQC7",
"rkxqXnMJTQ",
"iclr_2019_rkgK3oC5Fm",
"iclr_2019_rkgK3oC5Fm",
"iclr_2019_rkgK3oC5Fm"
] |
iclr_2019_rkgKBhA5Y7 | There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average | Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters. To understand consistency regularization, we conceptually explore how loss geometry interacts with training procedures. The consistency loss dramatically improves generalization performance over supervised-only training; however, we show that SGD struggles to converge on the consistency loss and continues to make large steps that lead to changes in predictions on the test data. Motivated by these observations, we propose to train consistency-based methods with Stochastic Weight Averaging (SWA), a recent approach which averages weights along the trajectory of SGD with a modified learning rate schedule. We also propose fast-SWA, which further accelerates convergence by averaging multiple points within each cycle of a cyclical learning rate schedule. With weight averaging, we achieve the best known semi-supervised results on CIFAR-10 and CIFAR-100, over many different quantities of labeled training data. For example, we achieve 5.0% error on CIFAR-10 with only 4000 labels, compared to the previous best result in the literature of 6.3%. | accepted-poster-papers | All reviewers appreciate the empirical analysis and insights provided in the paper. The paper also reports impressive results on SSL. It will be a good addition to the ICLR program. | train | [
"HyemfGLdg4",
"HylUNeePeE",
"ryenTr_IxN",
"SJgj_rd8eN",
"B1e9StjNCQ",
"HygKOhalA7",
"SyeJI36lCX",
"B1g4NjaeAX",
"B1lZLspeAm",
"HJgMPuQNTm",
"rklWfa233Q",
"S1gzzDv5hQ",
"HJlan4ZqnX",
"rkgNXKDCqX",
"SkgwKUS6cX"
] | [
"author",
"author",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Hi, thank you for your comment. In our paper we exactly replicate the experimental setup of [1], which uses 5000 validation images, in order to directly compare our approach with the most relevant existing literature. We note [2] uses a larger validation set of 10000 images, and [3] does not discuss validation. We... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
1,
-1,
-1
] | [
"ryenTr_IxN",
"SJgj_rd8eN",
"iclr_2019_rkgKBhA5Y7",
"rklWfa233Q",
"HJgMPuQNTm",
"HJlan4ZqnX",
"S1gzzDv5hQ",
"rklWfa233Q",
"B1g4NjaeAX",
"iclr_2019_rkgKBhA5Y7",
"iclr_2019_rkgKBhA5Y7",
"iclr_2019_rkgKBhA5Y7",
"iclr_2019_rkgKBhA5Y7",
"SkgwKUS6cX",
"iclr_2019_rkgKBhA5Y7"
] |
iclr_2019_rkgT3jRct7 | Large-Scale Answerer in Questioner's Mind for Visual Dialog Question Generation | Answerer in Questioner's Mind (AQM) is an information-theoretic framework that has been recently proposed for task-oriented dialog systems. AQM benefits from asking a question that would maximize the information gain when it is asked. However, due to its intrinsic nature of explicitly calculating the information gain, AQM has a limitation when the solution space is very large. To address this, we propose AQM+ that can deal with a large-scale problem and ask a question that is more coherent to the current context of the dialog. We evaluate our method on GuessWhich, a challenging task-oriented visual dialog problem, where the number of candidate classes is near 10K. Our experimental results and ablation studies show that AQM+ outperforms the state-of-the-art models by a remarkable margin with a reasonable approximation. In particular, the proposed AQM+ reduces more than 60% of error as the dialog proceeds, while the comparative algorithms diminish the error by less than 6%. Based on our results, we argue that AQM+ is a general task-oriented dialog algorithm that can be applied for non-yes-or-no responses. | accepted-poster-papers | Important problem (visually grounded dialog); incremental (but not in a negative sense of the word) extension of prior work to an important new setting (GuessWhich); well-executed. Paper was reviewed by three experts. Initially there were some concerns but after the author response and reviewer discussion, all three unanimously recommend acceptance.
| train | [
"SkxfDTUqyV",
"ByxXr6Uq1N",
"Skxex6UqJV",
"rke8ETvCA7",
"rJg0MNRKhX",
"BJgpibpoR7",
"rkg3kkhtRm",
"Skl_DESX2X",
"Hkx4xHhap7",
"HJxDa4nTa7",
"r1lvHN2ppX",
"H1xzM43T67",
"SkxX14h6T7",
"BJeqFMhapX",
"rkedtsMYhm"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thank you for your consideration and interest in our paper!\nWe also once again thank you for your comments and valuable suggestions for improving the quality of our paper.\n",
"We would like to thank the reviewer for considering our responses. We hope that our ablation studies may be able to alleviate the conce... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"rkg3kkhtRm",
"BJgpibpoR7",
"rke8ETvCA7",
"SkxX14h6T7",
"iclr_2019_rkgT3jRct7",
"r1lvHN2ppX",
"HJxDa4nTa7",
"iclr_2019_rkgT3jRct7",
"Skl_DESX2X",
"Skl_DESX2X",
"rkedtsMYhm",
"rJg0MNRKhX",
"rJg0MNRKhX",
"iclr_2019_rkgT3jRct7",
"iclr_2019_rkgT3jRct7"
] |
iclr_2019_rkgW0oA9FX | Graph HyperNetworks for Neural Architecture Search | Neural architecture search (NAS) automatically finds the best task-specific neural network topology, outperforming many manual architecture designs. However, it can be prohibitively expensive as the search requires training thousands of different networks, while each training run can last for hours. In this work, we propose the Graph HyperNetwork (GHN) to amortize the search cost: given an architecture, it directly generates the weights by running inference on a graph neural network. GHNs model the topology of an architecture and therefore can predict network performance more accurately than regular hypernetworks and premature early stopping. To perform NAS, we randomly sample architectures and use the validation accuracy of networks with GHN generated weights as the surrogate search signal. GHNs are fast - they can search nearly 10× faster than other random search methods on CIFAR-10 and ImageNet. GHNs can be further extended to the anytime prediction setting, where they have found networks with better speed-accuracy tradeoff than the state-of-the-art manual designs. | accepted-poster-papers | The paper proposes an architecture search method based on graph hypernetworks (GHN). The core idea is that given a candidate architecture, GHN predicts its weights (similar to SMASH), which allows for fast evaluation w/o training the architecture from scratch. Unlike SMASH, GHN can operate on an arbitrary directed acyclic graph. Architecture search using GHN is fast and achieves competitive performance. Overall, this is a relevant contribution backed up by solid experiments, and should be accepted. | train | [
"rkg3cupE14",
"rylzxtxbkN",
"rklXC6R5RQ",
"SkgA5aR5R7",
"S1g3LTC9C7",
"S1lkmT09CQ",
"SJlF5lI52X",
"HklyTNSthQ",
"rJgN1Zfz3m",
"rJgpSxAQ27",
"SkxLWX-e2m",
"rJxZH6PLsX",
"rJl4OSAVo7",
"HygnM4GEs7",
"S1lpn7Pcc7"
] | [
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"Thanks for the further explanation especially on the memory usage. I'm fine with this part. However, the authors seem not adequatly addressed the other two concerns. \n\nFirst, for the LSTM encoding baseline, I'm not quite sure about the validness of \"the number of neighbours has been conventionally fixed for LS... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"S1g3LTC9C7",
"iclr_2019_rkgW0oA9FX",
"iclr_2019_rkgW0oA9FX",
"rJgN1Zfz3m",
"HklyTNSthQ",
"SJlF5lI52X",
"iclr_2019_rkgW0oA9FX",
"iclr_2019_rkgW0oA9FX",
"iclr_2019_rkgW0oA9FX",
"SkxLWX-e2m",
"rJxZH6PLsX",
"rJl4OSAVo7",
"HygnM4GEs7",
"S1lpn7Pcc7",
"iclr_2019_rkgW0oA9FX"
] |
iclr_2019_rkgbwsAcYm | DELTA: DEEP LEARNING TRANSFER USING FEATURE MAP WITH ATTENTION FOR CONVOLUTIONAL NETWORKS | Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references (SPAR), have been studied. In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention. Instead of constraining the weights of the neural network, DELTA aims to preserve the outer layer outputs of the target network. Specifically, in addition to minimizing the empirical loss, DELTA intends to align the outer layer outputs of two networks, through constraining a subset of feature maps that are precisely selected by attention that has been learned in a supervised learning manner. We evaluate DELTA with the state-of-the-art algorithms, including L2 and L2-SP. The experimental results show that our proposed method outperforms these baselines with higher accuracy for new tasks. | accepted-poster-papers | This paper argues that each layer of a network may have some channels useful for and some not useful for transfer learning. The main contribution is an approach which identifies the useful channels through an attention based mechanism. The reviewers agree that this work offers a valuable new approach that offers modest improvements over prior work.
The authors should take care to refine their definition of behavior regularization, including/expanding on the discussion from the rebuttal phase. The authors are also encouraged to experiment with other architecture backbones and report both overall performance as well as run time for learning with the larger models.
| train | [
"H1gy9nqOh7",
"rkxD3nB40X",
"SkxcAhkkAm",
"rJx9CoFoTQ",
"BylowA8YTX",
"BkguWT5PaX",
"HJxStoxV6m",
"SJxVC6xNaQ",
"SylfiAgETX",
"B1xLZTlE6m",
"ryl3-sh937",
"rJxvZDiN3m"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Authors present a new regularisation approach named DELTA (Deep Learning Transfer using feature map with attention). What it does is preserving the outer layer outputs of the target network (in a transfer learning scenario) instead of constraining the weights of the neural network. I am not sure how this approach ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_rkgbwsAcYm",
"iclr_2019_rkgbwsAcYm",
"rJx9CoFoTQ",
"HJxStoxV6m",
"BkguWT5PaX",
"HJxStoxV6m",
"ryl3-sh937",
"rJxvZDiN3m",
"iclr_2019_rkgbwsAcYm",
"H1gy9nqOh7",
"iclr_2019_rkgbwsAcYm",
"iclr_2019_rkgbwsAcYm"
] |
iclr_2019_rkgoyn09KQ | textTOvec: DEEP CONTEXTUALIZED NEURAL AUTOREGRESSIVE TOPIC MODELS OF LANGUAGE WITH DISTRIBUTED COMPOSITIONAL PRIOR | We address two challenges of probabilistic topic modelling in order to better estimate the probability of a word in a given context, i.e., P(word|context): (1) No Language Structure in Context: Probabilistic topic models ignore word order by summarizing a given context as a "bag-of-word" and consequently the semantics of words in the context is lost. In this work, we incorporate language structure by combining a neural autoregressive topic model (TM) with a LSTM-based language model (LSTM-LM) in a single probabilistic framework. The LSTM-LM learns a vector-space representation of each word by accounting for word order in local collocation patterns, while the TM simultaneously learns a latent representation from the entire document. In addition, the LSTM-LM models complex characteristics of language (e.g., syntax and semantics), while the TM discovers the underlying thematic structure in a collection of documents. We unite two complementary paradigms of learning the meaning of word occurrences by combining a topic model and a language model in a unified probabilistic framework, named as ctx-DocNADE. (2) Limited Context and/or Smaller training corpus of documents: In settings with a small number of word occurrences (i.e., lack of context) in short text or data sparsity in a corpus of few documents, the application of TMs is challenging. We address this challenge by incorporating external knowledge into neural autoregressive topic models via a language modelling approach: we use word embeddings as input of a LSTM-LM with the aim to improve the word-topic mapping on a smaller and/or short-text corpus. The proposed DocNADE extension is named as ctx-DocNADEe. We present novel neural autoregressive topic model variants coupled with neural language models and embedding priors that consistently outperform state-of-the-art generative topic models in terms of generalization (perplexity), interpretability (topic coherence) and applicability (retrieval and classification) over 6 long-text and 8 short-text datasets from diverse domains. | accepted-poster-papers | This paper presents an extension of an existing topic model, DocNADE. Compared to DocNADE and other existing bag-of-word topic models, the primary contribution of this work is to integrate neural language models into the topic model in order to address two limitations of the bag-of-word topic models: expressiveness and interpretability. In addition, the paper presents an approach to integrate external knowledge into the neural topic models to address the empirical challenges of the application scenarios where there might be only a small training corpus or limited context available.
Pros:
The paper presents strong and extensive empirical results. The authors went above and beyond to strengthen their paper during the rebuttal and address all the reviewers' questions and suggestions (e.g., the submitted version had 7 baselines, and the revised version has 6 additional baselines per reviewers' requests).
Cons:
The paper builds on an earlier paper that introduced the DocNADE model. Thus, the modeling contribution is relatively marginal. On the other hand, the extended model, albeit based on a relatively simple idea, is still new and demonstrates strong empirical results.
Verdict:
Probably accept. While not groundbreaking, the proposed model is new and the empirical results are strong. | train | [
"S1geWBgPC7",
"H1xTIhGKn7",
"rkxOZ4TU0m",
"S1en6cDSRQ",
"Bkl-lUgBRQ",
"Hkg2dz7Kh7",
"SJl6p4JHR7",
"H1giEMySA7",
"BJgGJZ_e2m",
"SyePgiuz0X",
"rJg9nFA-RX",
"Ske8KU1f07",
"SygdZ8JGR7",
"HJxunB1MCm",
"Hygw_x1MRQ",
"H1l3AiAWCQ",
"Bygx_-FN37",
"BygKj5yM2X",
"HygAK7Te2m",
"Hkgb3c2e2X"... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"author",
"author",
"public",... | [
"We are very happy for your comment about our revised version: \"Wow guys, what a great revision\". \n\nWe appreciate and thank you for raising the overall rating. \n\n",
"DocNADE has great performance so this is a welcome bit of\nresearch extending it.\n\nThere has been a huge amount of activity in combining top... | [
-1,
8,
-1,
-1,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"H1xTIhGKn7",
"iclr_2019_rkgoyn09KQ",
"H1l3AiAWCQ",
"Bkl-lUgBRQ",
"H1giEMySA7",
"iclr_2019_rkgoyn09KQ",
"iclr_2019_rkgoyn09KQ",
"SyePgiuz0X",
"iclr_2019_rkgoyn09KQ",
"Ske8KU1f07",
"iclr_2019_rkgoyn09KQ",
"Hkg2dz7Kh7",
"Hkg2dz7Kh7",
"Hkg2dz7Kh7",
"BJgGJZ_e2m",
"H1xTIhGKn7",
"BygKj5yM2... |
iclr_2019_rkgpy3C5tX | Amortized Bayesian Meta-Learning | Meta-learning, or learning-to-learn, has proven to be a successful strategy in attacking problems in supervised learning and reinforcement learning that involve small amounts of data. State-of-the-art solutions involve learning an initialization and/or learning algorithm using a set of training episodes so that the meta learner can generalize to an evaluation episode quickly. These methods perform well but often lack good quantification of uncertainty, which can be vital to real-world applications when data is lacking. We propose a meta-learning method which efficiently amortizes hierarchical variational inference across tasks, learning a prior distribution over neural network weights so that a few steps of Bayes by Backprop will produce a good task-specific approximate posterior. We show that our method produces good uncertainty estimates on contextual bandit and few-shot learning benchmarks. | accepted-poster-papers | This paper combines two ideas: MAML, and the hierarchical Bayesian inference approach of Amit and Meir (2018). The idea is fairly straightforward but well-motivated, and it seems to work well in practice. The paper is well-written and includes good discussion of the relevant literature. The experiments show improvements on various tests of Bayesian inference, and include some good analysis beyond simply reporting better numbers.
On the whole, the reviewers are fairly positive about the paper. (While the numerical scores are slightly below the cutoff, the reviewers are more positive in the discussion.) The reviewers' main complaint is the lack of comparisons against recently published methods, especially Gordon et al. (2018). The lack of comparison to this paper doesn't strike me as a big problem; the preprint was released only a few months before the deadline, their approach was very different from the proposed one, and the proposed approach has some plausible advantages (simplicity, computational efficiency), so I don't think a direct comparison is required for acceptance.
Overall, I recommend acceptance.
| train | [
"HJe-jdy52Q",
"HylwxJMc07",
"ryxALOrbCX",
"ryeoisr-AQ",
"H1eEZqfb0m",
"ryepGbMbCQ",
"rJx-gyvyTQ",
"HJe5K0lT37"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes an adaptation to MAML-type models that accounts for posterior uncertainty in task specific latent variables. This is achieved via a hierarchical Bayesian view of MAML, employing variational inference for the task-specific parameters. The key intuition of this paper is that one can perform fast a... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_rkgpy3C5tX",
"iclr_2019_rkgpy3C5tX",
"HJe-jdy52Q",
"ryxALOrbCX",
"HJe5K0lT37",
"rJx-gyvyTQ",
"iclr_2019_rkgpy3C5tX",
"iclr_2019_rkgpy3C5tX"
] |
iclr_2019_rkl6As0cF7 | Probabilistic Recursive Reasoning for Multi-Agent Reinforcement Learning | Humans are capable of attributing latent mental contents such as beliefs, or intentions to others. The social skill is critical in everyday life to reason about the potential consequences of their behaviors so as to plan ahead. It is known that humans use this reasoning ability recursively, i.e. considering what others believe about their own beliefs. In this paper, we start from level-1 recursion and introduce a probabilistic recursive reasoning (PR2) framework for multi-agent reinforcement learning. Our hypothesis is that it is beneficial for each agent to account for how the opponents would react to its future behaviors. Under the PR2 framework, we adopt variational Bayes methods to approximate the opponents' conditional policy, to which each agent finds the best response and then improve their own policy. We develop decentralized-training-decentralized-execution algorithms, PR2-Q and PR2-Actor-Critic, that are proved to converge in the self-play scenario when there is one Nash equilibrium. Our methods are tested on both the matrix game and the differential game, which have a non-trivial equilibrium where common gradient-based methods fail to converge. Our experiments show that it is critical to reason about how the opponents believe about what the agent believes. We expect our work to contribute a new idea of modeling the opponents to the multi-agent reinforcement learning community.
| accepted-poster-papers | Pros:
- novel idea of endowing RL agents with recursive reasoning
- clear, well presented paper
- thorough rebuttal and revision with new results
Cons:
- small-scale experiments
The reviewers agree that the paper should be accepted. | train | [
"BJltf7tYk4",
"Syee-B352X",
"S1gKjx-cCQ",
"rJeOPGyda7",
"S1xJ4tgc0m",
"r1g7Ly5V0m",
"rJehIcWt2Q",
"rklp6Elta7",
"SkeFiNgF6X",
"rJe6zkZK6m",
"BygPe4xt67",
"SkxO7Het6Q"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"public",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"Thanks for the comments and more thorough comparison. This addresses most of my concerns.\n\nAs noted by someone in the comment earlier, LOLA does seem to work with opponent modeling. So I would avoid claiming that as an advantage of PR2.",
"\n# Summary:\nThe paper proposes a new approach for fully decentralized... | [
-1,
7,
-1,
8,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1
] | [
"BygPe4xt67",
"iclr_2019_rkl6As0cF7",
"S1xJ4tgc0m",
"iclr_2019_rkl6As0cF7",
"SkxO7Het6Q",
"BygPe4xt67",
"iclr_2019_rkl6As0cF7",
"rJehIcWt2Q",
"rJehIcWt2Q",
"Syee-B352X",
"Syee-B352X",
"rJeOPGyda7"
] |
iclr_2019_rklaWn0qK7 | Learning Neural PDE Solvers with Convergence Guarantees | Partial differential equations (PDEs) are widely used across the physical and computational sciences. Decades of research and engineering went into designing fast iterative solution methods. Existing solvers are general purpose, but may be sub-optimal for specific classes of problems. In contrast to existing hand-crafted solutions, we propose an approach to learn a fast iterative solver tailored to a specific domain. We achieve this goal by learning to modify the updates of an existing solver using a deep neural network. Crucially, our approach is proven to preserve strong correctness and convergence guarantees. After training on a single geometry, our model generalizes to a wide variety of geometries and boundary conditions, and achieves 2-3 times speedup compared to state-of-the-art solvers. | accepted-poster-papers | Quality: The overall quality of the work is high. The main idea and technical choices are well-motivated, and the method is about as simple as it could be while achieving its stated objectives.
Clarity: The writing is clear, with the exception of using alternative scripts for some letters in definitions.
Originality: The biggest weakness of this work is originality, in that there is a lot of closely related work, and similar ideas without convergence guarantees have begun to be explored. For example, the (very natural) U-net architecture was explored in previous work.
Significance: This seems like an example of work that will be of interest both to the machine learning community, and also the numerics community, because it also achieves the properties that the numerics community has historically cared about. It is significant on its own as an improved method, but also as a demonstration that using deep learning doesn't require scrapping existing frameworks but can instead augment them. | train | [
"BJxZsitoa7",
"H1eZ3PYjaX",
"BygG_etiTQ",
"Bkgt-vXc3X",
"rklkZrg8nm",
"BJg5yrONnm",
"H1g2mv84om",
"B1e_-B_MoX"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you for your helpful reviews and suggestions.\n\n1) “The method seems to rely strongly on the linearity of the solver and its deformation (to guarantee the correctness of the solution). The operator H is a matrix of finite dimensions and it is not completely clear to me what is the role of the multi-layer pa... | [
-1,
-1,
-1,
7,
8,
6,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
3,
-1,
-1
] | [
"BJg5yrONnm",
"rklkZrg8nm",
"Bkgt-vXc3X",
"iclr_2019_rklaWn0qK7",
"iclr_2019_rklaWn0qK7",
"iclr_2019_rklaWn0qK7",
"B1e_-B_MoX",
"iclr_2019_rklaWn0qK7"
] |
iclr_2019_rkluJ2R9KQ | A new dog learns old tricks: RL finds classic optimization algorithms | This paper introduces a novel framework for learning algorithms to solve online combinatorial optimization problems. Towards this goal, we introduce a number of key ideas from traditional algorithms and complexity theory. First, we draw a new connection between primal-dual methods and reinforcement learning. Next, we introduce the concept of adversarial distributions (universal and high-entropy training sets), which are distributions that encourage the learner to find algorithms that work well in the worst case. We test our new ideas on a number of optimization problem such as the AdWords problem, the online knapsack problem, and the secretary problem. Our results indicate that the models have learned behaviours that are consistent with the traditional optimal algorithms for these problems. | accepted-poster-papers | This paper is concerned with solving Online Combinatorial Optimization (OCO) problems using reinforcement learning (RL). There is a well-established traditional family of approaches to solving OCO problems, therefore the attempt itself to solve them with RL is very intriguing, as this provides insights about the capabilities of RL in a new but at the same time well understood class of problems.
The reviewers agree that this approach is not entirely new. While past similar efforts take away some of the novelty of this paper, the reviewers and AC believe that the setting considered here still contains novel and interesting elements.
All reviewers were unconvinced that this work can provide strong claims about using RL to learn any primal-dual algorithm. This takes away some of the paper’s impact, but thanks to the discussion the authors managed to clarify some “hand-wavy” claims and toned down the claims that were not convincing. Therefore, it was agreed that the new revision still provides some useful insight into the RL and primal-dual connection, even without a complete formal connection.
| train | [
"BJxQhjRhnm",
"rkgHLHytJE",
"rJl-jfB537",
"HylpeOOPCQ",
"Syem2lmZRQ",
"rJeQSR9sT7",
"SJl1hw6ipX",
"Hyew609sTQ",
"SyxEQaK7pX",
"rkeDv6YXTQ",
"BkgWtntQaQ",
"ByeOxaPza7",
"SJlPfWfMp7",
"SygCVB25im"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"This paper introduces a new framework to solve online combinatorial problems using reinforcement learning. The idea is to encode the current input, the global parameters, and a succinct data structure (to represent current states of the online problem) as MDP states. Such a problem can then be solved by deep RL me... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2019_rkluJ2R9KQ",
"Syem2lmZRQ",
"iclr_2019_rkluJ2R9KQ",
"SJl1hw6ipX",
"BkgWtntQaQ",
"ByeOxaPza7",
"rJeQSR9sT7",
"ByeOxaPza7",
"rJl-jfB537",
"rJl-jfB537",
"BJxQhjRhnm",
"SJlPfWfMp7",
"SygCVB25im",
"iclr_2019_rkluJ2R9KQ"
] |
iclr_2019_rklz9iAcKQ | Deep Graph Infomax | We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs---both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning. | accepted-poster-papers | Because of strong support from two of the reviewers I am recommending accepting this paper. However, I believe reviewer 1's concerns should be taken seriously. Although I disagree with the reviewer that a general "framework" method is a bad thing, I agree with them that additional experiments would be valuable. | test | [
"HkgPLafJkE",
"Byl5M6VzkN",
"rJe0kif1yN",
"SklVLSaOCX",
"B1giOunS0Q",
"HJeiYITZ07",
"rkxb4LT-C7",
"r1gu9BpZAX",
"HyeIUHpZAm",
"HJeDwVp-AQ",
"B1eoGoSQpX",
"BygK5QSXp7",
"Skei4zS7a7",
"BkeOgwpm37",
"HkgHlf-qnX",
"B1lVxghvnm",
"SkxB37p2t7",
"rJl0fT6itQ"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you for following the discussion, and giving us further comments. To address them:\n\nFor the optimal discriminator, the GAN and JSD objectives are directly proportional to each other, so in this limit the two discriminators actually optimize the exact same thing (this was pointed out in original GAN paper b... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
9,
5,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
-1,
-1
] | [
"B1giOunS0Q",
"rJe0kif1yN",
"SklVLSaOCX",
"HJeiYITZ07",
"rkxb4LT-C7",
"BkeOgwpm37",
"B1lVxghvnm",
"HkgHlf-qnX",
"iclr_2019_rklz9iAcKQ",
"Skei4zS7a7",
"BygK5QSXp7",
"iclr_2019_rklz9iAcKQ",
"iclr_2019_rklz9iAcKQ",
"iclr_2019_rklz9iAcKQ",
"iclr_2019_rklz9iAcKQ",
"iclr_2019_rklz9iAcKQ",
... |
iclr_2019_rkxQ-nA9FX | Theoretical Analysis of Auto Rate-Tuning by Batch Normalization | Batch Normalization (BN) has become a cornerstone of deep learning across diverse architectures, appearing to help optimization as well as generalization. While the idea makes intuitive sense, theoretical analysis of its effectiveness has been lacking. Here theoretical support is provided for one of its conjectured properties, namely, the ability to allow gradient descent to succeed with less tuning of learning rates. It is shown that even if we fix the learning rate of scale-invariant parameters (e.g., weights of each layer with BN) to a constant (say, 0.3), gradient descent still approaches a stationary point (i.e., a solution where gradient is zero) in the rate of T^{−1/2} in T iterations, asymptotically matching the best bound for gradient descent with well-tuned learning rates. A similar result with convergence rate T^{−1/4} is also shown for stochastic gradient descent. | accepted-poster-papers | This paper conducts a theoretical analysis of the effect of batch normalization on auto rate-tuning. It provides an explanation for the empirical success of BN. The assumptions for the analysis are also closer to the common practice of batch normalization compared to the related work of Wu et al. (2018).
One of the concerns raised by the reviewer is that the analysis does not immediately apply to practical uses of BN, but the authors already discussed how to fill the gap with a slight change of the activation function. Another concern is about the lack of empirical evaluation of the theory, and the authors provide additional experiments in the revision. R1 also points out a few weaknesses in the theoretical analysis; clarifying and discussing these in the revision would, I think, help improve the paper further.
Overall, it is a good paper that will help improve our theoretical understanding of the powerful tool of batch normalization. | val | [
"S1gZOpsFnQ",
"ByeehnIq0Q",
"HkxDvh89A7",
"SJgUKj8cC7",
"rJgNr6BHRQ",
"SygQ2bBxR7",
"HkgOWxN0T7",
"B1eSf1V0aX",
"H1xdKxiDp7",
"rylog8i62m"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"* Description\n\nThe work is motivated by the empirical performance of Batch Normalization and in particular the observed better robustness of the choice of the learning rate. Authors analyze theoretically the asymptotic convergence rate for objectives involving normalization, not necessarily BN, and show that fo... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2
] | [
"iclr_2019_rkxQ-nA9FX",
"SygQ2bBxR7",
"H1xdKxiDp7",
"iclr_2019_rkxQ-nA9FX",
"HkgOWxN0T7",
"B1eSf1V0aX",
"rylog8i62m",
"S1gZOpsFnQ",
"iclr_2019_rkxQ-nA9FX",
"iclr_2019_rkxQ-nA9FX"
] |
iclr_2019_rkxaNjA9Ym | Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm | The high computational and parameter complexity of neural networks makes their training very slow and difficult to deploy on energy and storage-constrained computing systems. Many network complexity reduction techniques have been proposed including fixed-point implementation. However, a systematic approach for designing full fixed-point training and inference of deep neural networks remains elusive. We describe a precision assignment methodology for neural network training in which all network parameters, i.e., activations and weights in the feedforward path, gradients and weight accumulators in the feedback path, are assigned close to minimal precision. The precision assignment is derived analytically and enables tracking the convergence behavior of the full precision training, known to converge a priori. Thus, our work leads to a systematic methodology of determining suitable precision for fixed-point training. The near optimality (minimality) of the resulting precision assignment is validated empirically for four networks on the CIFAR-10, CIFAR-100, and SVHN datasets. The complexity reduction arising from our approach is compared with other fixed-point neural network designs. | accepted-poster-papers | The paper presents a detailed analysis of reduced precision training for a feedforward network that accounts for both the forward and backward passes in detail. It is shown that precision can be greatly reduced throughout the network computations while largely preserving training quality. The analysis is thorough and carefully executed.
The technical presentation, including the motivation for some of the specific choices, should be made clearer. Also, the requirement that the network first be trained to convergence at full 32-bit precision is a significant limitation of the proposed approach (a weakness that is shared with other work in this area). It would be highly desirable to find ways to bypass or at least mitigate this requirement, which would provide a real breakthrough rather than merely a solid improvement over competing work.
The reviewer disagreement revolves primarily around the clarity of the main technical exposition: there appears to be consensus that the paper is sound and provides a serious contribution to this area.
Although the persistent reviewer disagreement left this paper rated at the borderline, I am recommending acceptance, with the understanding that the authors will not disregard the dissenting review and strive to further improve the clarity of the presentation. | train | [
"Skx8SEogJN",
"HkgDLfbx1E",
"Syg8bl0N6m",
"SJlcUk2VCX",
"Skx0Q12VC7",
"HygF-JhECQ",
"Hkejra-86X",
"HyxDep-ITX",
"rJg8YWWzTm",
"Hye1UClfpQ",
"SkxGkN9c2X",
"BJep_2Kr2X"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear AnonReviewer2,\n\nThank you for going over our paper again and for acknowledging that our contribution is substantive.\n\nFirst, we wish to say that we absolutely agree with you that the main part of the paper should be self-contained and we emphasize that it is given the page limits. For example, Section 2 s... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"HkgDLfbx1E",
"Skx0Q12VC7",
"iclr_2019_rkxaNjA9Ym",
"rJg8YWWzTm",
"Hye1UClfpQ",
"Hkejra-86X",
"HyxDep-ITX",
"Syg8bl0N6m",
"BJep_2Kr2X",
"SkxGkN9c2X",
"iclr_2019_rkxaNjA9Ym",
"iclr_2019_rkxaNjA9Ym"
] |
iclr_2019_rkxacs0qY7 | FUNCTIONAL VARIATIONAL BAYESIAN NEURAL NETWORKS | Variational Bayesian neural networks (BNN) perform variational inference over weights, but it is difficult to specify meaningful priors and approximating posteriors in a high-dimensional weight space. We introduce functional variational Bayesian neural networks (fBNNs), which maximize an Evidence Lower BOund (ELBO) defined directly on stochastic processes, i.e. distributions over functions. We prove that the KL divergence between stochastic processes is equal to the supremum of marginal KL divergences over all finite sets of inputs. Based on this, we introduce a practical training objective which approximates the functional ELBO using finite measurement sets and the spectral Stein gradient estimator. With fBNNs, we can specify priors which entail rich structure, including Gaussian processes and implicit stochastic processes. Empirically, we find that fBNNs extrapolate well using various structured priors, provide reliable uncertainty estimates, and can scale to large datasets. | accepted-poster-papers | This paper shows a promising new variational objective for Bayesian neural networks. The new objective is obtained by effectively considering a functional prior on the parameters. The paper is well-motivated and the mathematics are supported by theoretical justifications.
There has been some discussion regarding the experimental section. On one hand, it contains several real and synthetic datasets which show the good performance of the proposed method. On the other hand, the reviewers requested deeper comparisons with state-of-the-art (deep) GP models and more general problem settings. The AC decided that the paper can be accepted with the experiments contained in the new revision, although the authors would be strongly encouraged to address the reviewers’ comments in a “non-cosmetic” manner (as R2 put it).
| train | [
"HkggsUYK0X",
"rkxkdQ54CX",
"B1gT7Ic0hX",
"r1leNQqy6m",
"ryldJXc4Cm",
"SyeUaWdyaX",
"B1lgR6DAnQ",
"BJxDojhXh7",
"Ske9nqYFhm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I appreciate the addition of the suggested references, but also find their inclusion fairly artificial. In the latest revision, they are simply tacked on to lists of similar references. The presentation has been improved since the orginal submission and is a big step forward towards bringing the paper closer to ac... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"BJxDojhXh7",
"BJxDojhXh7",
"iclr_2019_rkxacs0qY7",
"Ske9nqYFhm",
"B1lgR6DAnQ",
"iclr_2019_rkxacs0qY7",
"iclr_2019_rkxacs0qY7",
"iclr_2019_rkxacs0qY7",
"iclr_2019_rkxacs0qY7"
] |
iclr_2019_rkxciiC9tm | NADPEx: An on-policy temporally consistent exploration method for deep reinforcement learning | Reinforcement learning agents need exploratory behaviors to escape from local optima. These behaviors may include both immediate dithering perturbation and temporally consistent exploration. To achieve these, a stochastic policy model that is inherently consistent through a period of time is desired, especially for tasks with either sparse rewards or long-term information. In this work, we introduce a novel on-policy temporally consistent exploration strategy - Neural Adaptive Dropout Policy Exploration (NADPEx) - for deep reinforcement learning agents. Modeled as a global random variable for conditional distribution, dropout is incorporated into reinforcement learning policies, equipping them with inherent temporal consistency, even when the reward signals are sparse. Two factors, gradients' alignment with the objective and KL constraint in policy space, are discussed to guarantee NADPEx policy's stable improvement. Our experiments demonstrate that NADPEx solves tasks with sparse reward while naive exploration and parameter noise fail. It yields comparable or even faster convergence in the standard MuJoCo benchmark for continuous control. | accepted-poster-papers | The authors have proposed a new method for exploration that is related to parameter noise, but instead uses Gaussian dropout across entire episodes, thus allowing for temporally consistent exploration. The method is evaluated in sparsely rewarded continuous control domains such as half-cheetah and humanoid, and compared against PPO and other variants. The method is novel and does seem to work stably across the tested tasks, and simple exploration methods are important for the RL field. However, the paper is poorly and confusingly written and really needs to be thoroughly edited before the camera-ready deadline.
There are many approaches which are referred to without any summary or description, which makes it difficult to read the paper. The three reviewers all had low confidence in their understanding of the paper, which makes this a very borderline submission even though the reviewers gave relatively high scores. | train | [
"HkxrqOb6nm",
"B1e3qRw3p7",
"B1gaPZunTm",
"HyeGY3w3p7",
"HJlwfVrtpm",
"HJlU1mHKp7",
"Syx5z6ONTm",
"B1xcdgFE67",
"rkxN4ktETm",
"S1g0_Au4pm",
"S1lzrrz637",
"HyeuvtzF2X",
"BJlh3CZ8nQ",
"HyenFPKShQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"The authors propose a new on-policy exploration strategy by using a policy with a hierarchy of stochasticity. The authors use a two-level hierarchical distribution as a policy, where the global variable is used for dropout. This work is interesting since the authors use dropout for policy learning and exploration.... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
-1,
-1
] | [
"iclr_2019_rkxciiC9tm",
"HJlU1mHKp7",
"HJlwfVrtpm",
"Syx5z6ONTm",
"HJlU1mHKp7",
"B1xcdgFE67",
"iclr_2019_rkxciiC9tm",
"HkxrqOb6nm",
"S1lzrrz637",
"HyeuvtzF2X",
"iclr_2019_rkxciiC9tm",
"iclr_2019_rkxciiC9tm",
"HyenFPKShQ",
"iclr_2019_rkxciiC9tm"
] |
iclr_2019_rkxoNnC5FQ | SPIGAN: Privileged Adversarial Learning from Simulation | Deep Learning for Computer Vision depends mainly on the source of supervision. Photo-realistic simulators can generate large-scale automatically labeled synthetic data, but introduce a domain gap negatively impacting performance. We propose a new unsupervised domain adaptation algorithm, called SPIGAN, relying on Simulator Privileged Information (PI) and Generative Adversarial Networks (GAN). We use internal data from the simulator as PI during the training of a target task network. We experimentally evaluate our approach on semantic segmentation. We train the networks on real-world Cityscapes and Vistas datasets, using only unlabeled real-world images and synthetic labeled data with z-buffer (depth) PI from the SYNTHIA dataset. Our method improves over no adaptation and state-of-the-art unsupervised domain adaptation techniques. | accepted-poster-papers | The paper proposes an unsupervised domain adaptation solution applied for semantic segmentation from simulated to real world driving scenes. The main contribution consists of introducing an auxiliary loss based on depth information from the simulator. All reviewers agree that the solution offers a new idea and contribution to the adaptation literature. The ablations provided effectively address the concern that the privileged information does in fact aid in transfer. The additional ablation on the perceptual loss done during rebuttal is also valuable and should be included in the final version.
The work would benefit from application of the method across other sim2real dataset tasks so as to be compared to the recent approaches mentioned by the reviewers, but the current evaluation is sufficient to demonstrate the effectiveness of the approach over baseline solutions. | train | [
"rkgid_JICQ",
"ByeIUukIAQ",
"rJe47he8hX",
"SyglnrZ_6m",
"rylJefZd67",
"rJxTZW-dTm",
"ByeHihmQ67",
"ryedXdc4hm"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We have added the experimental results for SPIGAN-no-PI without perceptual loss, named SPIGAN-base in our latest revised version. As shown in Table 2, we observe that SPIGAN-no-PI (with perceptual loss) outperforms SPIGAN-base (without perceptual loss) in both datasets. This implies that perceptual regularization ... | [
-1,
-1,
7,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
5,
-1,
-1,
-1,
5,
4
] | [
"rylJefZd67",
"SyglnrZ_6m",
"iclr_2019_rkxoNnC5FQ",
"ByeHihmQ67",
"rJe47he8hX",
"ryedXdc4hm",
"iclr_2019_rkxoNnC5FQ",
"iclr_2019_rkxoNnC5FQ"
] |
iclr_2019_rkxw-hAcFQ | Generating Multi-Agent Trajectories using Programmatic Weak Supervision | We study the problem of training sequential generative models for capturing coordinated multi-agent trajectory behavior, such as offensive basketball gameplay. When modeling such settings, it is often beneficial to design hierarchical models that can capture long-term coordination using intermediate variables. Furthermore, these intermediate variables should capture interesting high-level behavioral semantics in an interpretable and manipulable way. We present a hierarchical framework that can effectively learn such sequential generative models. Our approach is inspired by recent work on leveraging programmatically produced weak labels, which we extend to the spatiotemporal regime. In addition to synthetic settings, we show how to instantiate our framework to effectively model complex interactions between basketball players and generate realistic multi-agent trajectories of basketball gameplay over long time periods. We validate our approach using both quantitative and qualitative evaluations, including a user study comparison conducted with professional sports analysts. | accepted-poster-papers | The paper presents generative models to produce multi-agent trajectories. The approach of using a simple heuristic labeling function that labels variables that would otherwise be latent in training data is novel and results in higher quality than the previously proposed baselines.
In response to reviewer suggestions, authors included further results with models that share parameters across agents as well as agent-specific parameters and further clarifications were made for other main comments (i.e., baselines that train the hierarchical model by maximizing an ELBO on the marginal likelihood?). | train | [
"ryeTwe7bAm",
"BygisxQbCm",
"S1lglem-0Q",
"r1lOYyQWRX",
"Bkgtp0yHaX",
"BJgg0sJQpm",
"rygJogZanm"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for reviewing our paper and providing insightful feedback. We respond to your main points below.\n\n> “... how would an intermediate baseline model where a set of parameters are shared and each agent also has an independent set of parameters perform?”\n\nFollowing your suggestion, we trained such a model... | [
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"BJgg0sJQpm",
"rygJogZanm",
"Bkgtp0yHaX",
"iclr_2019_rkxw-hAcFQ",
"iclr_2019_rkxw-hAcFQ",
"iclr_2019_rkxw-hAcFQ",
"iclr_2019_rkxw-hAcFQ"
] |
iclr_2019_rkxwShA9Ym | Label super-resolution networks | We present a deep learning-based method for super-resolving coarse (low-resolution) labels assigned to groups of image pixels into pixel-level (high-resolution) labels, given the joint distribution between those low- and high-resolution labels. This method involves a novel loss function that minimizes the distance between a distribution determined by a set of model outputs and the corresponding distribution given by low-resolution labels over the same set of outputs. This setup does not require that the high-resolution classes match the low-resolution classes and can be used in high-resolution semantic segmentation tasks where high-resolution labeled data is not available. Furthermore, our proposed method is able to utilize both data with low-resolution labels and any available high-resolution labels, which we show improves performance compared to a network trained only with the same amount of high-resolution data.
We test our proposed algorithm in a challenging land cover mapping task to super-resolve labels at a 30m resolution to a separate set of labels at a 1m resolution. We compare our algorithm with models that are trained on high-resolution data and show that 1) we can achieve similar performance using only low-resolution data; and 2) we can achieve better performance when we incorporate a small amount of high-resolution data in our training. We also test our approach on a medical imaging problem, resolving low-resolution probability maps into high-resolution segmentation of lymphocytes with accuracy equal to that of fully supervised models. | accepted-poster-papers | This paper formulates a method for training deep networks to produce high-resolution semantic segmentation output using only low-resolution ground-truth labels. Reviewers agree that this is a useful contribution, but with the limitation that joint distribution between low- and high-resolution labels must be known. Experimental results are convincing. The technique introduced by the paper could be applicable to many semantic segmentation problems and is likely to be of general interest.
| train | [
"B1xSzvYip7",
"SygsHEtspm",
"SJlqORuoTX",
"HJx_z_FYC7",
"HJg9xHKi6X",
"SJeTefhtn7",
"ByguYDKY3X",
"rkln9EjE3X"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"[Modified Nov. 19 to reflect changes in the text.]\n\nThank you for your thoughtful comments and questions. We have taken account of the Minor Concerns you raised.\n\nIn response to the Major Concerns:\n\nWe agree with your comment on P(Y,Z) and will incorporate discussion of both the generality and limitations of... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
9
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"rkln9EjE3X",
"ByguYDKY3X",
"SJeTefhtn7",
"SJeTefhtn7",
"SJeTefhtn7",
"iclr_2019_rkxwShA9Ym",
"iclr_2019_rkxwShA9Ym",
"iclr_2019_rkxwShA9Ym"
] |
iclr_2019_rkzDIiA5YQ | ANYTIME MINIBATCH: EXPLOITING STRAGGLERS IN ONLINE DISTRIBUTED OPTIMIZATION | Distributed optimization is vital in solving large-scale machine learning problems. A widely-shared feature of distributed optimization techniques is the requirement that all nodes complete their assigned tasks in each computational epoch before the system can proceed to the next epoch. In such settings, slow nodes, called stragglers, can greatly slow progress. To mitigate the impact of stragglers, we propose an online distributed optimization method called Anytime Minibatch. In this approach, all nodes are given a fixed time to compute the gradients of as many data samples as possible. The result is a variable per-node minibatch size. Workers then get a fixed communication time to average their minibatch gradients via several rounds of consensus, which are then used to update primal variables via dual averaging. Anytime Minibatch prevents stragglers from holding up the system without wasting the work that stragglers can complete. We present a convergence analysis and analyze the wall time performance. Our numerical results show that our approach is up to 1.5 times faster in Amazon EC2 and it is up to five times faster when there is greater variability in compute node performance. | accepted-poster-papers | The reviewers that provided extensive reviews agree that the paper is well-written and contains solid technical material. The paper however should be edited to address specific concerns regarding theoretical and empirical aspects of this work. | train | [
"r1gL2Y_y0m",
"BkxentruRm",
"rklWmYrdR7",
"r1ewb3O10X",
"BklhOquy0m",
"BygJelUjnm",
"ByeDZyv5n7",
"H1eu6b-q2X"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We briefly summarize the impact of the work as we see it. We have developed a method for distributed optimization that accounts for the real-world non-idealities of cloud computing systems, including variability in compute node throughput and load, network congestion, and hardware failure. This approach allows str... | [
-1,
-1,
-1,
-1,
-1,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"BygJelUjnm",
"r1ewb3O10X",
"r1gL2Y_y0m",
"H1eu6b-q2X",
"ByeDZyv5n7",
"iclr_2019_rkzDIiA5YQ",
"iclr_2019_rkzDIiA5YQ",
"iclr_2019_rkzDIiA5YQ"
] |
iclr_2019_rkzjUoAcFX | Sample Efficient Adaptive Text-to-Speech | We present a meta-learning approach for adaptive text-to-speech (TTS) with few data. During training, we learn a multi-speaker model using a shared conditional WaveNet core and independent learned embeddings for each speaker. The aim of training is not to produce a neural network with fixed weights, which is then deployed as a TTS system. Instead, the aim is to produce a network that requires few data at deployment time to rapidly adapt to new speakers. We introduce and benchmark three strategies:
(i) learning the speaker embedding while keeping the WaveNet core fixed,
(ii) fine-tuning the entire architecture with stochastic gradient descent, and
(iii) predicting the speaker embedding with a trained neural network encoder.
The experiments show that these approaches are successful at adapting the multi-speaker neural network to new speakers, obtaining state-of-the-art results in both sample naturalness and voice similarity with merely a few minutes of audio data from new speakers. | accepted-poster-papers | The paper benchmarks three strategies to adapt an existing TTS system (based on WaveNet) to new speakers.
The paper is clearly written. The models and adaptation strategies are not very novel, but still a scientific contribution. Overall, the experimental results are detailed and convincing. The rebuttals addressed some of the concerns.
This is a welcome contribution to ICLR 2019. | train | [
"B1gXKtGspX",
"HJeX8tzoT7",
"SklAlKMipQ",
"rJgr4cv03X",
"Syxn5T5ch7",
"ByxoJb0w27"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your insightful comments. As discussed in the related work section, our proposed approaches are closely related to the methods in Arik et al. (2018) at a high level. However, this is a situation where the details seem to matter significantly. For instance, we find that the detail of applying few-shot ad... | [
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
4,
4,
5
] | [
"ByxoJb0w27",
"Syxn5T5ch7",
"rJgr4cv03X",
"iclr_2019_rkzjUoAcFX",
"iclr_2019_rkzjUoAcFX",
"iclr_2019_rkzjUoAcFX"
] |
iclr_2019_ryE98iR5tm | Practical lossless compression with latent variables using bits back coding | Deep latent variable models have seen recent success in many data domains. Lossless compression is an application of these models which, despite having the potential to be highly useful, has yet to be implemented in a practical manner. We present 'Bits Back with ANS' (BB-ANS), a scheme to perform lossless compression with latent variable models at a near optimal rate. We demonstrate this scheme by using it to compress the MNIST dataset with a variational auto-encoder model (VAE), achieving compression rates superior to standard methods with only a simple VAE. Given that the scheme is highly amenable to parallelization, we conclude that with a sufficiently high quality generative model this scheme could be used to achieve substantial improvements in compression rate with acceptable running time. We make our implementation available open source at https://github.com/bits-back/bits-back . | accepted-poster-papers | The paper proposes a novel lossless compression scheme that leverages latent-variable models such as VAEs. Its main original contribution is to improve the bits back coding scheme [B. Frey 1997] through the use of asymmetric numeral systems (ANS) instead of arithmetic coding. The developed practical algorithm is also able to use continuous latents. The paper is well written but the reader will benefit from prior familiarity with compression schemes. Resulting message bit-length is shown empirically to be close to ELBO on MNIST. The main weakness pointed out by reviewers is that the empirical evaluation is limited to MNIST and to a simple VAE, while applicability to other models (autoregressive) and data (PixelVAE on ImageNet) is only hinted at and expected bit-length merely extrapolated from previously reported log-likelihood.
The work could be much more convincing if its compression was empirically demonstrated on larger and better models and larger scale data. Nevertheless reviewers agreed that it sufficiently advanced the field to warrant acceptance. | train | [
"ByxFEl-OgN",
"Sklqun3hC7",
"BJlbx6Q2pQ",
"SkgMN6Qnam",
"HJejADENT7",
"Byee3PVEa7",
"rkg4p37267",
"rJxqij6_am",
"r1x2bnMChm",
"B1glUErlTX",
"ryeexzmp37"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"Thank you for your responses to my comments. ",
"Thank you for addressing my concerns. Based on this, I am keeping my original score.",
"> Sec 3.2 mentions finding that around 400 clean bits are required. How does the performance vary as fewer (or more) clean bits are used? More generally, do you have suggesti... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
-1,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1,
5
] | [
"rkg4p37267",
"HJejADENT7",
"rJxqij6_am",
"rJxqij6_am",
"ryeexzmp37",
"ryeexzmp37",
"rJxqij6_am",
"iclr_2019_ryE98iR5tm",
"iclr_2019_ryE98iR5tm",
"r1x2bnMChm",
"iclr_2019_ryE98iR5tm"
] |
iclr_2019_ryGfnoC5KQ | Kernel RNN Learning (KeRNL) | We describe Kernel RNN Learning (KeRNL), a reduced-rank, temporal eligibility trace-based approximation to backpropagation through time (BPTT) for training recurrent neural networks (RNNs) that gives competitive performance to BPTT on long time-dependence tasks. The approximation replaces a rank-4 gradient learning tensor, which describes how past hidden unit activations affect the current state, by a simple reduced-rank product of a sensitivity weight and a temporal eligibility trace. In this structured approximation motivated by node perturbation, the sensitivity weights and eligibility kernel time scales are themselves learned by applying perturbations. The rule represents another step toward biologically plausible or neurally inspired ML, with lower complexity in terms of relaxed architectural requirements (no symmetric return weights), a smaller memory demand (no unfolding and storage of states over time), and a shorter feedback time. | accepted-poster-papers | this submission follows on a line of work on online learning of a recurrent net, which is an important problem both in theory and in practice. it would have been better to see even more realistic experiments, but already with the set of experiments the authors have conducted the merit of the proposed approach shines. | test | [
"B1e45z9n2Q",
"SJxjeIlqCm",
"BJeu0VlqAX",
"ByeT17xqCm",
"B1lH3_Ian7",
"B1lQ16dO2Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a simple method for performing temporal credit assignment in RNN training. While it seems somewhat naive and unlikely to work (in my opinion), the experimental results surprisingly show reasonable performance on several reasonably challenging artificial tasks.\n\nThe core of the approach is ba... | [
7,
-1,
-1,
-1,
5,
6
] | [
4,
-1,
-1,
-1,
1,
4
] | [
"iclr_2019_ryGfnoC5KQ",
"B1lQ16dO2Q",
"B1e45z9n2Q",
"B1lH3_Ian7",
"iclr_2019_ryGfnoC5KQ",
"iclr_2019_ryGfnoC5KQ"
] |
iclr_2019_ryGgSsAcFQ | Deep, Skinny Neural Networks are not Universal Approximators | In order to choose a neural network architecture that will be effective for a particular modeling problem, one must understand the limitations imposed by each of the potential options. These limitations are typically described in terms of information theoretic bounds, or by comparing the relative complexity needed to approximate example functions between different architectures. In this paper, we examine the topological constraints that the architecture of a neural network imposes on the level sets of all the functions that it is able to approximate. This approach is novel for both the nature of the limitations and the fact that they are independent of network depth for a broad family of activation functions. | accepted-poster-papers | The paper shows limitations on the types of functions that can be represented by deep skinny networks for certain classes of activation functions, independently of the number of layers. With many other works discussing capabilities but not limitations, the paper contributes to a relatively underexplored topic.
The settings capture a large family of activation functions but exclude others, such as polynomial activations, for which the considered type of obstruction would not apply. A concern is also raised that it is not clear how this theoretical result can shed insight on the empirical study of neural networks.
The authors have responded to some of the comments of the reviewers, but not to all comments, in particular comments of reviewer 1, whose positive review is conditional on the authors addressing some points.
The reviewers are all confident and are moderately positive, positive, or very positive about this paper. | test | [
"Syl8ctvQeE",
"r1x5UuwvJE",
"r1e3Hvvv1N",
"SJgK8rBp3m",
"r1lx-OTY3Q",
"BklY-9xwnX",
"Bkx5oBB1oQ",
"SJxvceLE9Q"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public"
] | [
"The referenced paper on narrow belief networks uses layers of width n+1 and poses size n as an open problem. A later work by Le Roux and Bengio obtained width n. ",
"It's true that there are many activation functions that the result doesn't apply to, and in fact isn't true for. The selling point isn't the gen... | [
-1,
-1,
-1,
6,
8,
7,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
4,
-1,
-1
] | [
"iclr_2019_ryGgSsAcFQ",
"SJgK8rBp3m",
"Bkx5oBB1oQ",
"iclr_2019_ryGgSsAcFQ",
"iclr_2019_ryGgSsAcFQ",
"iclr_2019_ryGgSsAcFQ",
"iclr_2019_ryGgSsAcFQ",
"iclr_2019_ryGgSsAcFQ"
] |
iclr_2019_ryGkSo0qYm | Large Scale Graph Learning From Smooth Signals | Graphs are a prevalent tool in data science, as they model the inherent structure of the data. Typically they are constructed either by connecting nearest samples, or by learning them from data, solving an optimization problem. While graph learning does achieve a better quality, it also comes with a higher computational cost. In particular, the current state-of-the-art model cost is O(n^2) for n samples.
In this paper, we show how to scale it, obtaining an approximation with leading cost of O(n log(n)), with quality that approaches the exact graph learning model. Our algorithm uses known approximate nearest neighbor techniques to reduce the number of variables, and automatically selects the correct parameters of the model, requiring a single intuitive input: the desired edge density. | accepted-poster-papers | The paper is proposed as probable accept based on current ratings with a majority accept (7,7,5). | train | [
"H1gzJMz5Am",
"rklxB0GinX",
"rJgXinpl0m",
"SkeZBo6g0m",
"Bkxn9caeRQ",
"H1ek1FaxRX",
"HyezC8TlC7",
"SJxmCraxAX",
"HkgeCuI5nQ",
"BkgZllZ52X"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewers,\n\nWe would like to sincerely thank you for your constructive comments. After implementing them, we believe that the current version of the paper is not only stronger, but also clearer.\n\nOur main focus was to strengthen the experimental section. The new version better illustrates the advantages ... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_ryGkSo0qYm",
"iclr_2019_ryGkSo0qYm",
"iclr_2019_ryGkSo0qYm",
"BkgZllZ52X",
"HkgeCuI5nQ",
"HkgeCuI5nQ",
"rklxB0GinX",
"rklxB0GinX",
"iclr_2019_ryGkSo0qYm",
"iclr_2019_ryGkSo0qYm"
] |
iclr_2019_ryGvcoA5YX | Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation | Learning multiple tasks sequentially is important for the development of AI and lifelong learning systems. However, standard neural network architectures suffer from catastrophic forgetting which makes it difficult for them to learn a sequence of tasks. Several continual learning methods have been proposed to address the problem. In this paper, we propose a very different approach, called Parameter Generation and Model Adaptation (PGMA), to dealing with the problem. The proposed approach learns to build a model, called the solver, with two sets of parameters. The first set is shared by all tasks learned so far and the second set is dynamically generated to adapt the solver to suit each test example in order to classify it. Extensive experiments have been carried out to demonstrate the effectiveness of the proposed approach. | accepted-poster-papers | This paper presents a promising model to avoid catastrophic forgetting in continual learning. The model consists of a) a data generator to be used at training time to replay past examples (and removes the need for storage of data or labels), b) a dynamic parameter generator that given a test input produces the parameters of a classification model, and c) a solver (the actual classifier). The advantages of such combination is that no parameter increase or network expansion is needed to learn a new task, and no previous data needs to be stored for memory replay.
There is reviewer disagreement on this paper. AC can confirm that all three reviewers have read the author responses and have significantly contributed to the revision of the manuscript.
All three reviewers and AC note the following potential weaknesses: (1) presentation clarity needed substantial improvement. Notably, the authors revised the paper several times while incorporating the reviewers suggestions regarding presentation clarity. R2 has raised the final rating from 4 to 5 while retaining doubts about clarity.
(2) weak empirical evidence: evaluation with more than three tasks and using more recent/stronger baseline methods would substantially strengthen the evaluation (R2, R3). AC would like to report that the authors added an experiment with five tasks and provided a verbal comparison with "Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence", ECCV-2018, by reporting the authors' results on the MNIST dataset.
(3) as noted by R2, an ablation study of different model components could strengthen the evaluation. The authors included such ablation study in Table 4 of the revised paper.
(4) reproducibility of the model could be difficult (R1). In their response, the authors promised to make the code publicly available.
AC can confirm that all three reviewers have contributed to the final discussion. Given the effort of the reviewers and authors in revising this work and its potential novelty, the AC decided that the paper could be accepted, but the authors are strongly urged to further improve presentation clarity in the final revision if possible.
| train | [
"rkldD8oFkN",
"SJemxth_y4",
"B1gVo5yv1V",
"r1gH2AauTm",
"HJxvFOI90Q",
"HkxxmPIqCQ",
"r1xJTLFtnQ",
"rkguZ_L5CX",
"SJgldY45AX",
"SylEV_-3TQ",
"BkeciPPVaQ",
"BJxh4PwEpm",
"SylAVCLE6m",
"ryeOkVsg67",
"H1eXILQeaX",
"SJxhhsOmhm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
"My questions are answered and thanks for incorporating some in the document.",
"Thank you for taking the time to read our revision and giving the new comments and suggestions. We will improve the related work and organization more in the next version. With your suggestions, we will make the paper very clear. We ... | [
-1,
-1,
-1,
5,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"BJxh4PwEpm",
"B1gVo5yv1V",
"HkxxmPIqCQ",
"iclr_2019_ryGvcoA5YX",
"SJxhhsOmhm",
"SJgldY45AX",
"iclr_2019_ryGvcoA5YX",
"r1xJTLFtnQ",
"SylEV_-3TQ",
"r1gH2AauTm",
"SJxhhsOmhm",
"SJxhhsOmhm",
"r1xJTLFtnQ",
"H1eXILQeaX",
"iclr_2019_ryGvcoA5YX",
"iclr_2019_ryGvcoA5YX"
] |
iclr_2019_ryM_IoAqYX | Analysis of Quantized Models | Deep neural networks are usually huge, which significantly limits the deployment on low-end devices. In recent years, many weight-quantized models have been proposed. They have small storage and fast inference, but training can still be time-consuming. This can be improved with distributed learning. To reduce the high communication cost due to worker-server synchronization, recently gradient quantization has also been proposed to train deep networks with full-precision weights.
In this paper, we theoretically study how the combination of both weight and gradient quantization affects convergence.
We show that (i) weight-quantized models converge to an error related to the weight quantization resolution and weight dimension; (ii) quantizing gradients slows convergence by a factor related to the gradient quantization resolution and dimension; and (iii) clipping the gradient before quantization renders this factor dimension-free, thus allowing the use of fewer bits for gradient quantization. Empirical experiments confirm the theoretical convergence results, and demonstrate that quantized networks can speed up training and have comparable performance as full-precision networks. | accepted-poster-papers | This paper provides the first convergence analysis for convex model distributed training with quantized weights and gradients. It is well written and organized. Extensive experiments are carried out beyond the assumption of convex models in the theoretical study.
Analysis with weight and gradient quantization has been separately studied, and this paper provides a combined analysis, which renders the contribution incremental.
As pointed out by R2 and R3, it is somewhat unclear under which problem setting the proposed quantized training would help improve convergence. The authors provide clarification in the feedback. It is important to include those, together with other explanations in the feedback, in the future revision.
Another limitation pointed out by R3 is that the theoretical analysis applies to convex models only. Nevertheless, it is nice that the experiments show empirically that deep network training benefits from gradient quantization. | train | [
"BJgWl8V_RQ",
"HJx0CdY1hX",
"ByxL06zL0Q",
"rJenGm0BAm",
"HkeHExsB0Q",
"SkxZe09HC7",
"S1gLcxsHCX",
"r1l0SeoHR7",
"r1eE80qH07",
"S1lX2i9S0Q",
"HJxJpq863X",
"rJexvR-c37"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your insightful comment. \n\nGradient quantization (Seide et al., 2014; Wen et al., 2017; Alistarh et al., 2017; Bernstein et al., 2018) and gradient sparsification (Aji & Heafield, 2017; Wangni et al., 2017) are most useful when the communication cost is larger than the computational cost. This is the ... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"ByxL06zL0Q",
"iclr_2019_ryM_IoAqYX",
"rJenGm0BAm",
"S1gLcxsHCX",
"rJexvR-c37",
"S1lX2i9S0Q",
"HJx0CdY1hX",
"HkeHExsB0Q",
"SkxZe09HC7",
"HJxJpq863X",
"iclr_2019_ryM_IoAqYX",
"iclr_2019_ryM_IoAqYX"
] |
iclr_2019_rye4g3AqFm | Deep learning generalizes because the parameter-function map is biased towards simple functions | Deep neural networks (DNNs) generalize remarkably well without explicit regularization even in the strongly over-parametrized regime where classical learning theory would instead predict that they would severely overfit. While many proposals for some kind of implicit regularization have been made to rationalise this success, there is no consensus for the fundamental reason why DNNs do not strongly overfit. In this paper, we provide a new explanation. By applying a very general probability-complexity bound recently derived from algorithmic information theory (AIT), we argue that the parameter-function map of many DNNs should be exponentially biased towards simple functions. We then provide clear evidence for this strong simplicity bias in a model DNN for Boolean functions, as well as in much larger fully connected and convolutional networks trained on CIFAR10 and MNIST.
As the target functions in many real problems are expected to be highly structured, this intrinsic simplicity bias helps explain why deep networks generalize well on real world problems.
This picture also facilitates a novel PAC-Bayes approach where the prior is taken over the DNN input-output function space, rather than the more conventional prior over parameter space. If we assume that the training algorithm samples parameters close to uniformly within the zero-error region then the PAC-Bayes theorem can be used to guarantee good expected generalization for target functions producing high-likelihood training sets. By exploiting recently discovered connections between DNNs and Gaussian processes to estimate the marginal likelihood, we produce relatively tight generalization PAC-Bayes error bounds which correlate well with the true error on realistic datasets such as MNIST and CIFAR10 and for architectures including convolutional and fully connected networks. | accepted-poster-papers | Dear authors,
There was some disagreement among reviewers on the significance of your results, in particular because of the limited experimental section.
Despite this issue, which is not minor, your work adds yet another piece of the generalization puzzle. However, I would encourage the authors to make sure they do not oversell their results, either in the title or in their text, for the final version. | train | [
"ByxxENLl1V",
"rJxEXPUn0X",
"Skxmk-LhAX",
"rJlbFXU20m",
"S1xnqB82Cm",
"H1e4AB830m",
"Hkx48e8nA7",
"SyxXKMG937",
"SkgdeIbqh7",
"BJlNf4D827"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I have read and considered the author's response, and do not wish to change my rating of the paper.",
"__Other papers on SGD that are relevant__\n\nWe also include in the introduction a brief description of a related argument by (Wu et al. 2017 https://arxiv.org/abs/1706.10239 ) who found that normal gradient de... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"H1e4AB830m",
"rJlbFXU20m",
"iclr_2019_rye4g3AqFm",
"SkgdeIbqh7",
"BJlNf4D827",
"S1xnqB82Cm",
"SyxXKMG937",
"iclr_2019_rye4g3AqFm",
"iclr_2019_rye4g3AqFm",
"iclr_2019_rye4g3AqFm"
] |
iclr_2019_rye7knCqK7 | Learning when to Communicate at Scale in Multiagent Cooperative and Competitive Tasks | Learning when to communicate and doing that effectively is essential in multi-agent tasks. Recent works show that continuous communication allows efficient training with back-propagation in multi-agent scenarios, but have been restricted to fully-cooperative tasks. In this paper, we present Individualized Controlled Continuous Communication Model (IC3Net) which has better training efficiency than the simple continuous communication model, and can be applied to semi-cooperative and competitive settings along with the cooperative settings. IC3Net controls continuous communication with a gating mechanism and uses individualized rewards for each agent to gain better performance and scalability while fixing credit assignment issues. Using a variety of tasks including StarCraft BroodWars explore and combat scenarios, we show that our network yields improved performance and convergence rates compared to the baselines as the scale increases. Our results convey that IC3Net agents learn when to communicate based on the scenario and profitability. | accepted-poster-papers | All reviewers agree that the proposed approach is interesting and innovative. One reviewer argues that some additional baseline comparisons could be beneficial and the other two suggest inclusion of additional explanations and discussions of the results. The authors’ rebuttal alleviated most of the concerns. All reviewers are very appreciative of the quality of the work overall and recommend probable acceptance. I agree with this score and recommend this work for poster presentation at ICLR.
"BJeUggdXeE",
"B1eTMQPYkE",
"ryeJRL41J4",
"rJlZELVkyV",
"SJlmcrN1JV",
"HylF2N4kJ4",
"H1en6bXKCX",
"Sye0fmXF07",
"Ske44-mFCm",
"r1gNHf7YCQ",
"H1eylm7KCX",
"r1lbXf7FAm",
"SygAse7OaX",
"BkxEvnt337",
"Skl2aq3q37"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I acknowledge that I have read the authors' response. They clarify some questions I had; however overall my score is unchanged.\n\nI was especially unconvinced by the response to my Concern #1. I don't understand why a policy search method cannot be compared to a Q-learning based method. I'm not suggesting that one ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"SJlmcrN1JV",
"iclr_2019_rye7knCqK7",
"Skl2aq3q37",
"BkxEvnt337",
"SygAse7OaX",
"H1en6bXKCX",
"iclr_2019_rye7knCqK7",
"Skl2aq3q37",
"SygAse7OaX",
"BkxEvnt337",
"Skl2aq3q37",
"BkxEvnt337",
"iclr_2019_rye7knCqK7",
"iclr_2019_rye7knCqK7",
"iclr_2019_rye7knCqK7"
] |
iclr_2019_ryeOSnAqYm | Synthetic Datasets for Neural Program Synthesis | The goal of program synthesis is to automatically generate programs in a particular language from corresponding specifications, e.g. input-output behavior.
Many current approaches achieve impressive results after training on randomly generated I/O examples in limited domain-specific languages (DSLs), as with string transformations in RobustFill.
However, we empirically discover that applying test input generation techniques for languages with control flow and rich input space causes deep networks to generalize poorly to certain data distributions;
to correct this, we propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance. | accepted-poster-papers | This paper analyzes existing approaches to program induction from I/O pairs, and demonstrates that naively generating I/O pairs results in a non-uniform sampling of salient variables, leading to poor performance. The paper convincingly shows, via strong evaluation, that uniform sampling of these variables can result in much better models, for both explicit DSL and implicit neural models. The reviewers feel the observation is an important one, and the paper does a good job providing sufficiently convincing evidence for it.
The reviewers and AC note the following potential weaknesses: (1) the paper does not propose a new model, but instead a different data generation strategy, somewhat limiting the novelty, (2) salient variables that need to be uniformly sampled are still user-specified, (3) there were a number of notation and clarity issues that make it difficult to understand the details of the approach, and finally, (4) there are concerns with the use of rejection sampling.
The authors provided major revisions that address the clarity issues, including the addition of new proofs, cleaner notation, and removal of unnecessary text. The authors also included additional results, such as a KL divergence evaluation to show how uniform the distribution is. The authors also described the need for rejection sampling, especially for the Karel dataset, and clarified why the Calculator domain, even though it is not "program synthesis", still faces similar challenges. The reviewers agreed that not having a new model is not a chief concern, and that using rejection sampling is a reasonable first step, with more efficient techniques left as future work.
Overall, the reviewers agreed that the paper should be accepted. As reviewer 1 said it best, this paper "is a timely contribution and I think it is important for future program synthesis papers to take the results and message here to heart". | train | [
"S1xqDuXihm",
"B1lM1QEc2X",
"r1g7QIKGJN",
"SJeY4OtP0m",
"BJeUtNtPRQ",
"rkxEB4tvA7",
"Bkl7uGKvAX",
"Syl7lJbO67"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper provides a good presentation of a serious problem in evaluating (as well as training!) performance of machine learning models for program synthesis / program induction: considering specifically the problem of learning a program which corresponds to given input/output pairs, since large datasets of \"rea... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
2,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_ryeOSnAqYm",
"iclr_2019_ryeOSnAqYm",
"BJeUtNtPRQ",
"iclr_2019_ryeOSnAqYm",
"B1lM1QEc2X",
"S1xqDuXihm",
"Syl7lJbO67",
"iclr_2019_ryeOSnAqYm"
] |
iclr_2019_ryeYHi0ctQ | DPSNet: End-to-end Deep Plane Sweep Stereo | Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches. Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets. | accepted-poster-papers | A deep neural network pipeline for multiview stereo is presented. After rebuttal and discussion, all reviewers lean toward accepting the paper. Reviewer3 points to good results, but is concerned that the technical aspects are somewhat straightforward, and thus the contribution in this area is limited. The AC concurs with the reviewers. | val | [
"BklzKibF27",
"S1gw0HrzRQ",
"rJeH6BHMC7",
"S1lRoHBM0X",
"rJeqHUBf0X",
"HkeF3NBMRQ",
"SyxNEIBfCQ",
"BJlme8SzAm",
"rJeI9Oz5hQ",
"SkxJToirn7",
"BJgWT3KSq7",
"HygmvWIH5Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Summary\nThis paper proposes an end-to-end learnable multiview stereo depth estimation network, which is basically very similar to the GCNet (Kendall et.al 2017) or PSMNet (Chang et.al 2018) for stereo estimation. The differences are using SPN to warp feature w.r.t RT, adding a multi view averaging cost and a cos... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1
] | [
"iclr_2019_ryeYHi0ctQ",
"BklzKibF27",
"BklzKibF27",
"BklzKibF27",
"rJeI9Oz5hQ",
"rJeI9Oz5hQ",
"rJeI9Oz5hQ",
"SkxJToirn7",
"iclr_2019_ryeYHi0ctQ",
"iclr_2019_ryeYHi0ctQ",
"HygmvWIH5Q",
"iclr_2019_ryeYHi0ctQ"
] |
iclr_2019_ryepUj0qtX | Conditional Network Embeddings | Network Embeddings (NEs) map the nodes of a given network into d-dimensional Euclidean space Rd. Ideally, this mapping is such that 'similar' nodes are mapped onto nearby points, such that the NE can be used for purposes such as link prediction (if 'similar' means being 'more likely to be connected') or classification (if 'similar' means 'being more likely to have the same label'). In recent years various methods for NE have been introduced, all following a similar strategy: defining a notion of similarity between nodes (typically some distance measure within the network), a distance measure in the embedding space, and a loss function that penalizes large distances for similar nodes and small distances for dissimilar nodes.
A difficulty faced by existing methods is that certain networks are fundamentally hard to embed due to their structural properties: (approximate) multipartiteness, certain degree distributions, assortativity, etc. To overcome this, we introduce a conceptual innovation to the NE literature and propose to create Conditional Network Embeddings (CNEs); embeddings that maximally add information with respect to given structural properties (e.g. node degrees, block densities, etc.). We use a simple Bayesian approach to achieve this, and propose a block stochastic gradient descent algorithm for fitting it efficiently.
We demonstrate that CNEs are superior for link prediction and multi-label classification when compared to state-of-the-art methods, and this without adding significant mathematical or computational complexity. Finally, we illustrate the potential of CNE for network visualization. | accepted-poster-papers | The conditional network embedding approach proposed in the paper seems nice and novel, and consistently outperforms state-of-art on variety of datasets; scalability demonstration was added during rebuttals, as well as multiple other improvements; although the reviewers did not respond by changing the scores, this paper with augmentations provided during the rebuttal appears to be a useful contribution worthy of publishing at ICLR. | train | [
"r1l2se3YC7",
"SyxO5eTc3X",
"S1gfn0HFA7",
"Byl45PrH0X",
"BJgraHZmRm",
"ryezm5-70X",
"SkxlQv-m0X",
"HJly3bK2h7",
"rJldfOy-27"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your answer to his and the other reviews, it helped me position the work better as to its practicality and scope. Your comments/rebuttal have been reflected in my review.",
"The authors propose a generative model of networks by learning embeddings and pairing the embeddings with a prior distributio... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"SkxlQv-m0X",
"iclr_2019_ryepUj0qtX",
"iclr_2019_ryepUj0qtX",
"iclr_2019_ryepUj0qtX",
"HJly3bK2h7",
"rJldfOy-27",
"SyxO5eTc3X",
"iclr_2019_ryepUj0qtX",
"iclr_2019_ryepUj0qtX"
] |
iclr_2019_ryetZ20ctX | Defensive Quantization: When Efficiency Meets Robustness | Neural network quantization is becoming an industry standard to efficiently deploy deep learning models on hardware platforms, such as CPU, GPU, TPU, and FPGAs. However, we observe that the conventional quantization approaches are vulnerable to adversarial attacks. This paper aims to raise people's awareness about the security of the quantized models, and we designed a novel quantization methodology to jointly optimize the efficiency and robustness of deep learning models. We first conduct an empirical study to show that vanilla quantization suffers more from adversarial attacks. We observe that the inferior robustness comes from the error amplification effect, where the quantization operation further enlarges the distance caused by amplified noise. Then we propose a novel Defensive Quantization (DQ) method by controlling the Lipschitz constant of the network during quantization, such that the magnitude of the adversarial noise remains non-expansive during inference. Extensive experiments on CIFAR-10 and SVHN datasets demonstrate that our new quantization method can defend neural networks against adversarial examples, and even achieves superior robustness compared to their full-precision counterparts, while maintaining the same hardware efficiency as vanilla quantization approaches. As a by-product, DQ can also improve the accuracy of quantized models without adversarial attack. | accepted-poster-papers | The reviewers agree the paper brings a novel perspective by controlling the conditioning of the model when performing quantization. The experiments are convincing. We encourage the authors to incorporate additional references suggested in the reviews. We recommend acceptance. | val | [
"r1l21JD9RQ",
"rJlCsNCx0X",
"SJlagR_gR7",
"H1lNfHKMTQ",
"BkxPH1-xp7",
"SylvEsdV3m",
"rygxXn_fhQ",
"S1e96wfRoQ",
"BJeTRnNpjX",
"Hyly7UCijQ",
"H1lOYy3joQ",
"S1gHry3ijQ",
"SyxwKb2hcX",
"BkgmUBVncQ",
"rkx4Xm72cm",
"ryly95Z29X",
"ByguRKJhqQ",
"Hkehlty29X",
"ryxW9o4Z9m",
"HklDKcVZqQ"... | [
"author",
"public",
"official_reviewer",
"author",
"public",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"author",
"public",
"public",
"author",
"public",
"author",
"author",
"public",
"public"
] | [
"Thank you so much for the detailed feedback and advice.\n\n1. We conducted an empirical study to find the reason for inferior robustness in Section 3 and Figure 3.\n\n2. We appreciate the reviewer for the advice. The orthogonal regularization is an effective method to regularize the Lipschitz constant of the netwo... | [
-1,
-1,
7,
-1,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
-1,
-1,
3,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"SJlagR_gR7",
"iclr_2019_ryetZ20ctX",
"iclr_2019_ryetZ20ctX",
"SylvEsdV3m",
"rygxXn_fhQ",
"iclr_2019_ryetZ20ctX",
"iclr_2019_ryetZ20ctX",
"BJeTRnNpjX",
"Hyly7UCijQ",
"S1gHry3ijQ",
"SyxwKb2hcX",
"BkgmUBVncQ",
"BkgmUBVncQ",
"rkx4Xm72cm",
"ryly95Z29X",
"Hkehlty29X",
"HklDKcVZqQ",
"ryx... |
iclr_2019_ryf6Fs09YX | GO Gradient for Expectation-Based Objectives | Within many machine learning algorithms, a fundamental problem concerns efficient calculation of an unbiased gradient w.r.t. parameters γ for expectation-based objectives E_{q_γ(y)}[f(y)]. Most existing methods either (i) suffer from high variance, seeking help from (often) complicated variance-reduction techniques; or (ii) they only apply to reparameterizable continuous random variables and employ a reparameterization trick. To address these limitations, we propose a General and One-sample (GO) gradient that (i) applies to many distributions associated with non-reparameterizable continuous or discrete random variables, and (ii) has the same low variance as the reparameterization trick. We find that the GO gradient often works well in practice based on only one Monte Carlo sample (although one can of course use more samples if desired). Alongside the GO gradient, we develop a means of propagating the chain rule through distributions, yielding statistical back-propagation, coupling neural networks to common random variables. | accepted-poster-papers | This clearly written paper develops a novel, sound and comprehensive mathematical framework for computing low variance gradients of expectation-based objectives. The approach generalizes and encompasses several previous approaches for continuous random variables (reparametrization trick, Implicit Rep, pathwise gradients), and conveys novel insights.
Importantly, and originally, it extends to discrete random variables, and to chains of continuous random variables with optionally discrete terminal variables. These contributions are well exposed, and supported by convincing experiments.
Questions from reviewers were well addressed in the rebuttal and helped significantly clarify and improve the paper, in particular for delineating the novel contribution against prior related work.
| train | [
"rJlsvSSchm",
"Skx3km-FpQ",
"S1eUAxbKa7",
"ryxx8ZbFa7",
"ryerVgbtT7",
"HygWY_1c2X",
"rklz9YLKh7",
"B1ef1_Gg57",
"SklUUmJxcX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This paper presents a gradient estimator for expectation-based objectives, which is called Go-gradient. This estimator is unbiased, has low variance and, in contrast to other previous approaches, applies to either continuous and discrete random variables. They also extend this estimator to problems where the gradi... | [
7,
-1,
-1,
-1,
-1,
7,
6,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1
] | [
"iclr_2019_ryf6Fs09YX",
"iclr_2019_ryf6Fs09YX",
"HygWY_1c2X",
"rklz9YLKh7",
"rJlsvSSchm",
"iclr_2019_ryf6Fs09YX",
"iclr_2019_ryf6Fs09YX",
"SklUUmJxcX",
"iclr_2019_ryf6Fs09YX"
] |
iclr_2019_ryf7ioRqFX | h-detach: Modifying the LSTM Gradient Towards Better Optimization | Recurrent neural networks are known for their notorious exploding and vanishing gradient problem (EVGP). This problem becomes more evident in tasks where the information needed to correctly solve them exists over long time scales, because EVGP prevents important gradient components from being back-propagated adequately over a large number of steps. We introduce a simple stochastic algorithm (h-detach) that is specific to LSTM optimization and targeted towards addressing this problem. Specifically, we show that when the LSTM weights are large, the gradient components through the linear path (cell state) in the LSTM computational graph get suppressed. Based on the hypothesis that these components carry information about long term dependencies (which we show empirically), their suppression can prevent LSTMs from capturing them. Our algorithm (code available at https://github.com/bhargav104/h-detach) prevents gradients flowing through this path from getting suppressed, thus allowing the LSTM to capture such dependencies better. We show significant improvements over vanilla LSTM gradient-based training in terms of convergence speed, robustness to seed and learning rate, and generalization using our modification of the LSTM gradient on various benchmark datasets. | accepted-poster-papers | This paper presents a method for preventing exploding and vanishing gradients in LSTMs by stochastically blocking some paths of the information flow (but not others). Experiments show improved training speed and robustness to hyperparameter settings.
I'm concerned about the quality of R2, since (as the authors point out) some of the text is copied verbatim from the paper. The other two reviewers are generally positive about the paper, with scores of 6 and 7, and R1 in particular points out that this work has already had noticeable impact in the field. While the reviewers pointed out some minor concerns with the experiments, there don't seem to be any major flaws. I think the paper is above the bar for acceptance.
| train | [
"Syliw--DnX",
"rkxqBUoHpQ",
"rkexWojBpX",
"B1xTkqsS6Q",
"BygbhKulp7",
"H1g48OS5h7"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The results are intriguing. However, similar methods like BN-LSTM [3] and Variational RNNs [4] achieve arguably the same with very similar mechanisms. We do not think they can be considered as orthogonal. This should be addressed by the authors. Also, hard long-term experiments like sequentially predicting pixels ... | [
7,
-1,
-1,
-1,
5,
6
] | [
5,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_ryf7ioRqFX",
"Syliw--DnX",
"BygbhKulp7",
"H1g48OS5h7",
"iclr_2019_ryf7ioRqFX",
"iclr_2019_ryf7ioRqFX"
] |
iclr_2019_ryfMLoCqtQ | An analytic theory of generalization dynamics and transfer learning in deep linear networks | Much attention has been devoted recently to the generalization puzzle in deep learning: large, deep networks can generalize well, but existing theories bounding generalization error are exceedingly loose, and thus cannot explain this striking performance. Furthermore, a major hope is that knowledge may transfer across tasks, so that multi-task learning can improve generalization on individual tasks. However we lack analytic theories that can quantitatively predict how the degree of knowledge transfer depends on the relationship between the tasks. We develop an analytic theory of the nonlinear dynamics of generalization in deep linear networks, both within and across tasks. In particular, our theory provides analytic solutions to the training and testing error of deep networks as a function of training time, number of examples, network size and initialization, and the task structure and SNR. Our theory reveals that deep networks progressively learn the most important task structure first, so that generalization error at the early stopping time primarily depends on task structure and is independent of network size. This suggests any tight bound on generalization error must take into account task structure, and explains observations about real data being learned faster than random data. Intriguingly our theory also reveals the existence of a learning algorithm that provably outperforms neural network training through gradient descent. Finally, for transfer learning, our theory reveals that knowledge transfer depends sensitively, but computably, on the SNRs and input feature alignments of pairs of tasks. | accepted-poster-papers | The authors provide a new analysis of generalization in deep linear networks, and provide new insight through the role of "task structure". Empirical findings are used to cast light on the general case.
This work seems interesting and worthy of publication. | val | [
"rJeLtcWKC7",
"ryenmhZFAQ",
"HyeOgwWKR7",
"HyxwLbwRn7",
"HJxUoGAa37",
"BkloP5xs3Q",
"Byl6YlFUjX",
"r1xIAyF8jQ",
"ryxA97hY57",
"rkgphDw3YX",
"S1elRPX3KX"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"public"
] | [
"Thank you for the thorough and helpful comments. We appreciate the time and care you took reviewing this paper. We have responded to a few points below.\n\n“One aim of the present work, that appears to be a unique contribution above the prior work is to focus on the role played by task structure, suggesting that c... | [
-1,
-1,
-1,
8,
7,
6,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"BkloP5xs3Q",
"HyxwLbwRn7",
"HJxUoGAa37",
"iclr_2019_ryfMLoCqtQ",
"iclr_2019_ryfMLoCqtQ",
"iclr_2019_ryfMLoCqtQ",
"r1xIAyF8jQ",
"ryxA97hY57",
"iclr_2019_ryfMLoCqtQ",
"S1elRPX3KX",
"iclr_2019_ryfMLoCqtQ"
] |
iclr_2019_ryggIs0cYQ | Differentiable Learning-to-Normalize via Switchable Normalization | We address a learning-to-normalize problem by proposing Switchable Normalization (SN), which learns to select different normalizers for different normalization layers of a deep neural network. SN employs three distinct scopes to compute statistics (means and variances) including a channel, a layer, and a minibatch. SN switches between them by learning their importance weights in an end-to-end manner. It has several good properties. First, it adapts to various network architectures and tasks (see Fig.1). Second, it is robust to a wide range of batch sizes, maintaining high performance even when a small minibatch is presented (e.g. 2 images/GPU). Third, SN does not have a sensitive hyper-parameter, unlike group normalization that searches the number of groups as a hyper-parameter. Without bells and whistles, SN outperforms its counterparts on various challenging benchmarks, such as ImageNet, COCO, CityScapes, ADE20K, and Kinetics. Analyses of SN are also presented. We hope SN will help ease the usage and understanding of normalization techniques in deep learning. The code of SN will be released. | accepted-poster-papers | This paper proposes Switchable Normalization (SN) that learns how to combine three existing normalization techniques for improved performance. There is a general consensus that the paper has good quality and clarity, is well motivated, is sufficiently novel, makes clear contributions for training deep neural networks, and provides convincing experimental results to show the advantages of the proposed SN. | test | [
"rkeynlndxE",
"B1gLdh5uh7",
"SJg4K43k07",
"BygLp1wrpm",
"B1x5xwU-a7",
"r1xiHRoe6Q"
] | [
"public",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"the idea to uniform some of the best normalization techniques is quite interesting. \nQuick question: does it make sense to use separate control parameters for the means and stds?\nadditional experiment request: Could you please provide us with a benchmark on CIFAR-10/100, which is less resource demanding so that ... | [
-1,
7,
-1,
7,
-1,
7
] | [
-1,
5,
-1,
3,
-1,
4
] | [
"iclr_2019_ryggIs0cYQ",
"iclr_2019_ryggIs0cYQ",
"B1gLdh5uh7",
"iclr_2019_ryggIs0cYQ",
"r1xiHRoe6Q",
"iclr_2019_ryggIs0cYQ"
] |
iclr_2019_rygjcsR9Y7 | SOM-VAE: Interpretable Discrete Representation Learning on Time Series | High-dimensional time series are common in many domains. Since human cognition is not optimized to work well in high-dimensional spaces, these areas could benefit from interpretable low-dimensional representations. However, most representation learning algorithms for time series data are difficult to interpret. This is due to non-intuitive mappings from data features to salient properties of the representation and non-smoothness over time.
To address this problem, we propose a new representation learning framework building on ideas from interpretable discrete dimensionality reduction and deep generative modeling. This framework allows us to learn discrete representations of time series, which give rise to smooth and interpretable embeddings with superior clustering performance. We introduce a new way to overcome the non-differentiability in discrete representation learning and present a gradient-based version of the traditional self-organizing map algorithm that is more performant than the original. Furthermore, to allow for a probabilistic interpretation of our method, we integrate a Markov model in the representation space.
This model uncovers the temporal transition structure, improves clustering performance even further and provides additional explanatory insights as well as a natural representation of uncertainty.
We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set. Our learned representations compare favorably with competitor methods and facilitate downstream tasks on the real world data. | accepted-poster-papers | This paper combines probabilistic models, VAEs, and self-organizing maps to learn interpretable representations on time series. The proposed contributions are a novel and interesting combination of existing ideas, in particular, the extension to time-series data by modeling the cluster dynamics. The empirical results show improved unsupervised clustering performance, on both synthetic and real datasets, compared to a number of baselines. The resulting 2D embedding also provides an interpretable visualization.
The reviewers and the AC identified a number of potential weaknesses in the presentation in the original submission: (1) there was insufficient background on SOMs, leaving the readers unable to comprehend the contributions, (2) some of the details about the experiments were missing, such as how the baselines were constructed, (3) additional experiments were needed in regards to the hyper-parameters, such as number of clusters and the weighting in the loss, and (4) Figure 4d required a description of the results.
The revision and the comments by the authors addressed most of these comments, and the reviewers felt that their concerns had been alleviated.
Thus, the reviewers felt the paper should be accepted. | train | [
"H1gXD7Qa2m",
"HkgyBz103m",
"SJeYiFul3X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a deep learning method for representation learning in time series data. The goal is to learn a discrete two-dimensional representation of the time series data in an interpretable manner. The model is constructed on the basis of self-organizing maps (SOM) and involves reconstruction error in the... | [
6,
9,
6
] | [
2,
4,
4
] | [
"iclr_2019_rygjcsR9Y7",
"iclr_2019_rygjcsR9Y7",
"iclr_2019_rygjcsR9Y7"
] |
iclr_2019_rygkk305YQ | Hierarchical Generative Modeling for Controllable Speech Synthesis | This paper proposes a neural end-to-end text-to-speech (TTS) model which can control latent attributes in the generated speech that are rarely annotated in the training data, such as speaking style, accent, background noise, and recording conditions. The model is formulated as a conditional generative model with two levels of hierarchical latent variables. The first level is a categorical variable, which represents attribute groups (e.g. clean/noisy) and provides interpretability. The second level, conditioned on the first, is a multivariate Gaussian variable, which characterizes specific attribute configurations (e.g. noise level, speaking rate) and enables disentangled fine-grained control over these attributes. This amounts to using a Gaussian mixture model (GMM) for the latent distribution. Extensive evaluation demonstrates its ability to control the aforementioned attributes. In particular, it is capable of consistently synthesizing high-quality clean speech regardless of the quality of the training data for the target speaker. | accepted-poster-papers | This is an ambitious paper tackling the important and timely problem of controlling non-annotated attributes in generated speech.
The reviewers had mixed opinions about the results. R1 asks for a more convincing exposition of the results but nevertheless acknowledges that it is difficult to evaluate TTS systems systematically. Meanwhile, R2 and R3 find the results good.
Judging from the reviews and previous work, this paper does not seem to be very novel, although it certainly has intriguing new elements. Furthermore, it constitutes a mature piece of work.
| train | [
"HygDIcyFTm",
"S1e8NIZJeN",
"HJefYZCX1V",
"r1xNW0zmCm",
"S1gIVoyFCQ",
"Byx-PFs1p7",
"HyeevXg4AX",
"HyxBrtvX0Q",
"BJlI-2NgCm",
"SJxl-ip2a7",
"SJgpPspnaQ",
"Bklg8iTn6X",
"HkeAzv1tpm",
"Sklk9EMiaQ",
"BkxEWrFKam",
"rklBfjyYTm",
"BJg-gjyFam",
"B1l-39yFpX",
"H1xiUwyFpQ",
"HJgljrkKpX"... | [
"author",
"author",
"public",
"author",
"author",
"official_reviewer",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the great feedback. Below are the itemized responses regarding each comment. We will also incorporate these into the revised version.\n\nDue to the \"max 5000 characters\" limit, we break down our response into multiple comments.\n\n\nRe:\nThough empirical evidence in the form of numerica... | [
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"Hye08Kwcnm",
"HJefYZCX1V",
"iclr_2019_rygkk305YQ",
"iclr_2019_rygkk305YQ",
"HyeevXg4AX",
"iclr_2019_rygkk305YQ",
"HyxBrtvX0Q",
"BJlI-2NgCm",
"SJgpPspnaQ",
"BkxEWrFKam",
"Bklg8iTn6X",
"SJxl-ip2a7",
"HyxdG2D5h7",
"HkeAzv1tpm",
"iclr_2019_rygkk305YQ",
"BJg-gjyFam",
"B1l-39yFpX",
"Hyg... |
iclr_2019_rygqqsA9KX | Learning Factorized Multimodal Representations | Learning multimodal representations is a fundamentally complex research problem due to the presence of multiple heterogeneous sources of information. Although the presence of multiple modalities provides additional valuable information, there are two key challenges to address when learning from multimodal data: 1) models must learn the complex intra-modal and cross-modal interactions for prediction and 2) models must be robust to unexpected missing or noisy modalities during testing. In this paper, we propose to optimize for a joint generative-discriminative objective across multimodal data and labels. We introduce a model that factorizes representations into two sets of independent factors: multimodal discriminative and modality-specific generative factors. Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction. Modality-specific generative factors are unique for each modality and contain the information required for generating data. Experimental results show that our model is able to learn meaningful multimodal representations that achieve state-of-the-art or competitive performance on six multimodal datasets. Our model demonstrates flexible generative capabilities by conditioning on independent factors and can reconstruct missing modalities without significantly impacting performance. Lastly, we interpret our factorized representations to understand the interactions that influence multimodal learning. | accepted-poster-papers | This paper offers a novel perspective for learning latent multimodal representations. The idea of segmenting the information into multimodal discriminative and modality-specific generating factors is found to be intriguing by all reviewers and the AC. The technical derivations allow for an efficient implementation of this idea.
There have been some concerns regarding the experimental section, but they have all been addressed adequately during the rebuttal period. Therefore the AC recommends this paper for acceptance. Overall, it is a nice and well-thought-out piece of work.
| val | [
"Bkg7bmmvJ4",
"Skxf4Ozwy4",
"H1eAxcFc67",
"rklUotF96Q",
"H1gemYFqp7",
"ryeAL1v927",
"r1ldcd85nQ",
"SklvlX672X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for addressing my confusions.",
"Thanks for addressing some of my feedback.",
"Thank you for your positive comments and suggestions for improvement. We address your comments and questions below.\n\n[Comparison with Hsu & Glass (2018)] We have performed additional experiments between our model and Hsu &... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"H1eAxcFc67",
"H1gemYFqp7",
"SklvlX672X",
"r1ldcd85nQ",
"ryeAL1v927",
"iclr_2019_rygqqsA9KX",
"iclr_2019_rygqqsA9KX",
"iclr_2019_rygqqsA9KX"
] |
iclr_2019_rygrBhC5tQ | Composing Complex Skills by Learning Transition Policies | Humans acquire complex skills by exploiting previously learned skills and making transitions between them. To empower machines with this ability, we propose a method that can learn transition policies which effectively connect primitive skills to perform sequential tasks without handcrafted rewards. To efficiently train our transition policies, we introduce proximity predictors which induce rewards gauging proximity to suitable initial states for the next skill. The proposed method is evaluated on a set of complex continuous control tasks in bipedal locomotion and robotic arm manipulation which traditional policy gradient methods struggle at. We demonstrate that transition policies enable us to effectively compose complex skills with existing primitive skills. The proposed induced rewards computed using the proximity predictor further improve training efficiency by providing more dense information than the sparse rewards from the environments. We make our environments, primitive skills, and code public for further research at https://youngwoon.github.io/transition . | accepted-poster-papers | Strengths: The paper tackles a novel, well-motivated problem related to options & HRL.
The problem is that of learning transition policies, and the paper proposes
a novel and simple solution to that problem, using learned proximity predictors and transition
policies that can leverage those. Solid evaluations are done on simulated locomotion and
manipulation tasks. The paper is well written.
Weaknesses: Limitations were not originally discussed in any depth.
There is related work on sub-goal generation in HRL.
AC: The physics of the 2D walker simulations looks to be unrealistic;
the character seems to move in a low-gravity environment, and can lean
forwards at extreme angles without falling. It would be good to see this explained.
There is a consensus among reviewers and AC that the paper would make an excellent ICLR contribution.
AC: I suggest a poster presentation; it could also be considered for oral presentation based
on the very positive reception by reviewers. | train | [
"r1lb_e98Tm",
"S1gjDqMD0X",
"rygq35VZ0X",
"SyxGmZcUTm",
"Bkgfie58TX",
"BkghwsHAnQ",
"Ske4BidpnQ",
"SJltXWIi3X"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the feedback and address the concerns in detail below.\n\n> Reviewer 3 (R3): “... the choice of exponential (“discounted”) proximity function. Wouldn’t a linear function of “step” be more natural here?”\n\nThe proximity predictor is used to reward the ending state of a transition trajecto... | [
-1,
-1,
-1,
-1,
-1,
7,
9,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"BkghwsHAnQ",
"rygq35VZ0X",
"iclr_2019_rygrBhC5tQ",
"Ske4BidpnQ",
"SJltXWIi3X",
"iclr_2019_rygrBhC5tQ",
"iclr_2019_rygrBhC5tQ",
"iclr_2019_rygrBhC5tQ"
] |
iclr_2019_ryl5khRcKm | Human-level Protein Localization with Convolutional Neural Networks | Localizing a specific protein in a human cell is essential for understanding cellular functions and biological processes of underlying diseases. A promising, low-cost,and time-efficient biotechnology for localizing proteins is high-throughput fluorescence microscopy imaging (HTI). This imaging technique stains the protein of interest in a cell with fluorescent antibodies and subsequently takes a microscopic image. Together with images of other stained proteins or cell organelles and the annotation by the Human Protein Atlas project, these images provide a rich source of information on the protein location which can be utilized by computational methods. It is yet unclear how precise such methods are and whether they can compete with human experts. We here focus on deep learning image analysis methods and, in particular, on Convolutional Neural Networks (CNNs)since they showed overwhelming success across different imaging tasks. We pro-pose a novel CNN architecture “GapNet-PL” that has been designed to tackle the characteristics of HTI data and uses global averages of filters at different abstraction levels. We present the largest comparison of CNN architectures including GapNet-PL for protein localization in HTI images of human cells. GapNet-PL outperforms all other competing methods and reaches close to perfect localization in all 13 tasks with an average AUC of 98% and F1 score of 78%. On a separate test set the performance of GapNet-PL was compared with three human experts and 25 scholars. GapNet-PL achieved an accuracy of 91%, significantly (p-value 1.1e−6) outperforming the best human expert with an accuracy of 72%. | accepted-poster-papers | The reviewers all agreed that the problem application is interesting, and that there is little new methodology, but disagreed as to how that should translate into a score. 
The highest rating seemed to heavily weight the method's importance to the biological application, whereas the lowest rating heavily weighted the lack of technical novelty. However, because the ICLR call for papers explicitly invites applications in biology, all reviewers agreed on the paper's strength in that regard, and it was well written and executed, I would recommend it for acceptance. | val | [
"SJlJ53Vqnm",
"B1eR4oSsRm",
"HkxjXg6tCX",
"Bkx8ennFC7",
"SyeqlUatC7",
"SylHrr7cnQ",
"HyedjVXrim"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper designed a GapNet-PL architecture and applied GapNet-PL, DenseNet, Multi-scale CNN etc. to the protein image (multi-labels) classification dataset.\n\nPros:\n\n1. The proposed method has a good performance on the given task. Compared to the claimed baselines (Liimatainen et al. and human experts), the p... | [
4,
-1,
-1,
-1,
-1,
5,
8
] | [
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_ryl5khRcKm",
"Bkx8ennFC7",
"SylHrr7cnQ",
"SJlJ53Vqnm",
"HyedjVXrim",
"iclr_2019_ryl5khRcKm",
"iclr_2019_ryl5khRcKm"
] |