| paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2019_HJflg30qKX | Gradient descent aligns the layers of deep linear networks | This paper establishes risk convergence and asymptotic weight matrix alignment --- a form of implicit regularization --- of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes):
(i) the risk converges to 0;
(ii) the normalized i-th weight matrix asymptotically equals its rank-1 approximation u_iv_i^T;
(iii) these rank-1 matrices are aligned across layers, meaning |v_{i+1}^T u_i| -> 1.
In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network --- the product of its weight matrices --- converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon. | accepted-poster-papers | This paper studies the behavior of weight parameters for linear networks when trained on separable data with strictly decreasing loss functions. For this setting the paper shows that the gradient descent solution converges to max margin solution and each layer converges to a rank 1 matrix with consequent layers aligned. All reviewers agree that the paper provides novel results for understanding implicit regularization effects of gradient descent for linear networks. Despite the limitations of this paper such as studying networks with linear activation, studying gradient descent not with practical step sizes, assuming data is linearly separable, reviewers find the results useful and a good addition to existing literature. | train | [
"Bkg-LaQ5hQ",
"SkxCoNJ9C7",
"rklqDEk5AQ",
"rJgrymKX0X",
"Hke838NVhm",
"rkxYrG9_T7",
"r1gjyf9ua7",
"SJefSe9OTm",
"rygrfa8shQ"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this work the authors prove several claims regarding the inductive bias of gradient descent and gradient flow trained on deep linear networks with linearly separable data. They show that asymptotically gradient descent minimizes the risk, each weight matrix converges to its rank one approximation and the top si... | [
9,
-1,
-1,
-1,
6,
-1,
-1,
-1,
7
] | [
4,
-1,
-1,
-1,
5,
-1,
-1,
-1,
4
] | [
"iclr_2019_HJflg30qKX",
"iclr_2019_HJflg30qKX",
"rJgrymKX0X",
"rkxYrG9_T7",
"iclr_2019_HJflg30qKX",
"Hke838NVhm",
"Bkg-LaQ5hQ",
"rygrfa8shQ",
"iclr_2019_HJflg30qKX"
] |
iclr_2019_HJfwJ2A5KX | Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds | We present an efficient coresets-based neural network compression algorithm that sparsifies the parameters of a trained fully-connected neural network in a manner that provably approximates the network's output. Our approach is based on an importance sampling scheme that judiciously defines a sampling distribution over the neural network parameters, and as a result, retains parameters of high importance while discarding redundant ones. We leverage a novel, empirical notion of sensitivity and extend traditional coreset constructions to the application of compressing parameters. Our theoretical analysis establishes guarantees on the size and accuracy of the resulting compressed network and gives rise to generalization bounds that may provide new insights into the generalization properties of neural networks. We demonstrate the practical effectiveness of our algorithm on a variety of neural network configurations and real-world data sets. | accepted-poster-papers | The reviewers and AC note that the strengths of the paper include a) an interesting compression algorithm for neural networks with provable guarantees (under some assumptions), and b) a solid experimental comparison with existing *matrix sparsification* algorithms. The AC's main concern about the experimental part of the paper is that it doesn't outperform or match the performance of "vanilla" neural network compression algorithms such as Han et al. '15. The AC decided to suggest acceptance for the paper but also strongly encourages the authors to clarify that the compared algorithms do not include state-of-the-art compression algorithms. | train | [
"Ske32v9VhX",
"B1go__H9hX",
"S1eKa7phpX",
"S1eRFQph6m",
"SJeLVQ63TQ",
"H1gbAMphaX",
"HkeeA0minm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Given an additively decomposable function F(X, Q) = sum_over_x_in_X cost(x, Q), one can approximate it using either random sampling of x in X (unbiased, possibly high variance), or using importance sampling and replace the sum_over_x with a sum_over_coreset importance_of_a_point * cost(x, Q) which if properly defi... | [
6,
7,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_HJfwJ2A5KX",
"iclr_2019_HJfwJ2A5KX",
"Ske32v9VhX",
"B1go__H9hX",
"HkeeA0minm",
"iclr_2019_HJfwJ2A5KX",
"iclr_2019_HJfwJ2A5KX"
] |
iclr_2019_HJgXsjA5tQ | On the loss landscape of a class of deep neural networks with no bad local valleys | We identify a class of over-parameterized deep neural networks with standard activation functions and cross-entropy loss which provably have no bad local valley, in the sense that from any point in parameter space there exists a continuous path on which the cross-entropy loss is non-increasing and gets arbitrarily close to zero. This implies that these networks have no sub-optimal strict local minima. | accepted-poster-papers | This paper introduces a class of deep neural nets that provably have no bad local valleys. By constructing a new class of networks, this paper avoids having to rely on unrealistic assumptions and manages to provide a relatively concise proof that the network family has no sub-optimal strict local minima. Furthermore, it is demonstrated that this type of network yields reasonable experimental results on some benchmarks. The reviewers identified issues such as missing measurements of the training loss, which is the actual quantity studied in the theoretical results, as well as some issues with the presentation of the results. After revisions the reviewers are satisfied that their comments have been addressed. This paper continues an interesting line of theoretical research and brings it closer to practice, and so it should be of interest to the ICLR community. | train | [
"Hkg3N5wTnQ",
"rke_QE95RQ",
"BJx5sWN5hX",
"HylgTfXc0X",
"SkgOtfWD0m",
"SJentufURQ",
"r1erFqJMAQ",
"rylHw4Tx0X",
"rkeg9UyaTX",
"ByeYv_JTaX",
"HkgeJDjn6Q",
"BkeL8Gi26Q",
"Ske7SH5q37"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper shows that a class of deep neural networks have no spurious local valleys –--implying no strict local-minima. The family of neural networks studied includes a wide variety of network structure such as (a variant of) DenseNet. Overall, this paper makes some progress, improving previous results on over-pa... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_HJgXsjA5tQ",
"HylgTfXc0X",
"iclr_2019_HJgXsjA5tQ",
"r1erFqJMAQ",
"SJentufURQ",
"BkeL8Gi26Q",
"rylHw4Tx0X",
"rkeg9UyaTX",
"BJx5sWN5hX",
"BJx5sWN5hX",
"Ske7SH5q37",
"Hkg3N5wTnQ",
"iclr_2019_HJgXsjA5tQ"
] |
iclr_2019_HJgd1nAqFX | DOM-Q-NET: Grounded RL on Structured Language | Building agents to interact with the web would allow for significant improvements in knowledge understanding and representation learning. However, web navigation tasks are difficult for current deep reinforcement learning (RL) models due to the large discrete action space and the varying number of actions between the states. In this work, we introduce DOM-Q-NET, a novel architecture for RL-based web navigation to address both of these problems. It parametrizes Q functions with separate networks for different action categories: clicking a DOM element and typing a string input. Our model utilizes a graph neural network to represent the tree-structured HTML of a standard web page. We demonstrate the capabilities of our model on the MiniWoB environment where we can match or outperform existing work without the use of expert demonstrations. Furthermore, we show 2x improvements in sample efficiency when training in the multi-task setting, allowing our model to transfer learned behaviours across tasks. | accepted-poster-papers | This paper considers the task of web navigation, i.e. given a goal expressed in natural language, the task is to navigate the web by filling in fields and clicking links. The proposed model uses reinforcement learning, introducing a novel extension where the graph embedding of the pages is incorporated into the Q-function. The results are sound, and the paper is overall well-written.
The reviewers and AC note the following potential weaknesses. The primary concern that was raised was the novelty. Since the task could potentially be framed as semantic parsing, reviewer 4 mentioned there may be readily available approaches for baselines that the authors did not consider. The comparison to semantic parsing required a more detailed discussion, pointing out not only the differences but also the similarities, which would encourage the two communities to explore novel approaches to their tasks. Further, reviewer 2 was concerned about the limited novelty, given the extensive work that combines GNN and RL, such as NerveNet.
The authors provided comments and a revision to address these issues. They described why it is not trivial to formulate their setup as a semantic parsing problem, partly due to the fact that the environment is partially observable.
Similarly, the authors described the differences between the proposed approach and methods like NerveNet, such as the use of a dynamic graph and off-policy RL, making the latter not a viable baseline for the task. These changes addressed most of the concerns raised by the reviewers.
The reviewers agreed that this paper should be accepted. | test | [
"rkg8QY8cRX",
"rylDIqT7Tm",
"rkeKXXLqRQ",
"r1lIlRSqA7",
"r1xtL6BqAQ",
"BJgtJsUOAX",
"Bygx-rVqhX",
"B1gKBxIw0Q",
"B1e-onPBAX",
"rJlLUUOBC7",
"BkebhIuBRX",
"S1xqrTDrAX",
"H1xisrtn27"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"A minor point, just in case it's helpful (apologies if you already know this): one of the main goals of writing a related work section like this is to get the authors of that related work interested in what you're doing, to convince them to try your methods. So, e.g., \"we do something similar to the knowledge gr... | [
-1,
7,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
1
] | [
"rkeKXXLqRQ",
"iclr_2019_HJgd1nAqFX",
"r1lIlRSqA7",
"BJgtJsUOAX",
"BJgtJsUOAX",
"B1e-onPBAX",
"iclr_2019_HJgd1nAqFX",
"rJlLUUOBC7",
"rylDIqT7Tm",
"Bygx-rVqhX",
"Bygx-rVqhX",
"H1xisrtn27",
"iclr_2019_HJgd1nAqFX"
] |
iclr_2019_HJgeEh09KQ | Boosting Robustness Certification of Neural Networks | We present a novel approach for the certification of neural networks against adversarial perturbations which combines scalable overapproximation methods with precise (mixed integer) linear programming. This results in significantly better precision than state-of-the-art verifiers on challenging feedforward and convolutional neural networks with piecewise linear activation functions. | accepted-poster-papers | The paper addresses an important problem of neural net robustness verification and presents a novel approach outperforming the state of the art; the authors provided detailed rebuttals which clarified their contributions over the state of the art and highlighted scalability; this work appears to be a solid and useful contribution to the field.
| train | [
"SJxdpsmL0m",
"B1xk_jXURm",
"B1eV9q7URm",
"HyxBSKmURQ",
"S1xULFL9n7",
"H1ep3mjh2Q",
"BJeGXqR9h7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nQ1. My background is more theoretical, but I'm looking for theorems here, considering the complicatedness of the neural network. All I am looking for is probably some high-level explanation. \n\nR1. RefineAI is a new approach for proving the robustness of neural networks: it is more precise than current incomple... | [
-1,
-1,
-1,
-1,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"S1xULFL9n7",
"BJeGXqR9h7",
"H1ep3mjh2Q",
"iclr_2019_HJgeEh09KQ",
"iclr_2019_HJgeEh09KQ",
"iclr_2019_HJgeEh09KQ",
"iclr_2019_HJgeEh09KQ"
] |
iclr_2019_HJgkx2Aqt7 | Learning To Simulate | Simulation is a useful tool in situations where training data for machine learning models is costly to annotate or even hard to acquire. In this work, we propose a reinforcement learning-based method for automatically adjusting the parameters of any (non-differentiable) simulator, thereby controlling the distribution of synthesized data in order to maximize the accuracy of a model trained on that data. In contrast to prior art that hand-crafts these simulation parameters or adjusts only parts of the available parameters, our approach fully controls the simulator with the actual underlying goal of maximizing accuracy, rather than mimicking the real data distribution or randomly generating a large volume of data. We find that our approach (i) quickly converges to the optimal simulation parameters in controlled experiments and (ii) can indeed discover good sets of parameters for an image rendering simulator in actual computer vision applications. | accepted-poster-papers | This paper discusses the promising idea of using RL for optimizing simulators’ parameters.
The theme of this paper was very well received by the reviewers. Initial concerns about insufficient experimentation were justified; however, the amendments made during the rebuttal period ameliorated this issue. The authors argue that, due to the considered domain and the status of the existing literature, extensive comparisons are difficult. The AC sympathizes with this argument; however, it is still advised that the experiments be conducted in a more conclusive way, for example by disentangling the effects of the different choices made by the proposed model. For example, how would different sampling strategies for optimization perform? Are there more natural black-box optimization methods to use?
The reviewers believe that the methodology followed has a lot of room for improvement. However, the paper presents some fresh and intriguing ideas, which overall make it a relevant work for presentation at ICLR. | train | [
"H1e7y2eA3Q",
"r1gzzcZ91E",
"HyxPjJwf2Q",
"rkgPl6OwAX",
"rJlEVhiYhm",
"Byg0vTOv0m",
"r1eN2TuDCX",
"Bkx0ZTdDCm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"Pros:\n* Using RL to choose the simulator parameters is a good idea. It does not sound too novel, but at the same time I am not personally aware of this having been explored in the past (Note that my confidence is 4, so maybe other reviewers might be able to chime in on this point)\n* In theory, you don't need dom... | [
6,
-1,
7,
-1,
6,
-1,
-1,
-1
] | [
4,
-1,
4,
-1,
5,
-1,
-1,
-1
] | [
"iclr_2019_HJgkx2Aqt7",
"HyxPjJwf2Q",
"iclr_2019_HJgkx2Aqt7",
"H1e7y2eA3Q",
"iclr_2019_HJgkx2Aqt7",
"rJlEVhiYhm",
"HyxPjJwf2Q",
"rkgPl6OwAX"
] |
iclr_2019_HJlLKjR9FQ | Towards Understanding Regularization in Batch Normalization | Batch Normalization (BN) improves both convergence and generalization in training neural networks. This work studies these phenomena theoretically. We analyze BN by using a basic block of neural networks, consisting of a kernel layer, a BN layer, and a nonlinear activation function. This basic network helps us understand the impacts of BN in three aspects. First, by viewing BN as an implicit regularizer, BN can be decomposed into population normalization (PN) and gamma decay as an explicit regularization. Second, the learning dynamics of BN and the regularization show that training converges with a large maximum and effective learning rate. Third, the generalization of BN is explored using statistical mechanics. Experiments demonstrate that BN in convolutional neural networks shares the same traits of regularization as the above analyses. | accepted-poster-papers | + the ideas presented in the paper are quite intriguing and draw on a variety of different connections
- the presentation has a lot of room for improvement. In particular, the statement of Theorem 1, in its current form, requires rephrasing and making it more rigorous.
Still, the general consensus is that, once these presentation shortcomings are addressed, this will be an interesting paper.
| val | [
"HJxHQLtF07",
"SkxVzjdYAQ",
"HJeeREuK0X",
"BkglGEYK07",
"rJlRVnUHaX",
"H1efug0i3X",
"HJe6PQtLnQ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"List of changes:\n1.\tThe conditions for the regularization form of BN are made clearer in Sec. 2.1.\n\n2.\tThe extension of BN regularization in deep neural networks has been added in the last paragraph in Sec. 2 and Appendix C.4.\n\n3.\tAnalytical comparisons of the generalization errors of BN, WN+gamma decay, a... | [
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
5,
2
] | [
"iclr_2019_HJlLKjR9FQ",
"H1efug0i3X",
"rJlRVnUHaX",
"HJe6PQtLnQ",
"iclr_2019_HJlLKjR9FQ",
"iclr_2019_HJlLKjR9FQ",
"iclr_2019_HJlLKjR9FQ"
] |
iclr_2019_HJlNpoA5YQ | The Laplacian in RL: Learning Representations with Efficient Approximations | The smallest eigenvectors of the graph Laplacian are well-known to provide a succinct representation of the geometry of a weighted graph. In reinforcement learning (RL), where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approximating the eigenvectors of the Laplacian provides a promising approach to state representation learning. However, existing methods for performing this approximation are ill-suited in general RL settings for two main reasons: First, they are computationally expensive, often requiring operations on large matrices. Second, these methods lack adequate justification beyond simple, tabular, finite-state settings. In this paper, we present a fully general and scalable method for approximating the eigenvectors of the Laplacian in a model-free RL context. We systematically evaluate our approach and empirically show that it generalizes beyond the tabular, finite-state setting. Even in tabular, finite-state settings, its ability to approximate the eigenvectors outperforms previous proposals. Finally, we show the potential benefits of using a Laplacian representation learned using our method in goal-achieving RL tasks, providing evidence that our technique can be used to significantly improve the performance of an RL agent. | accepted-poster-papers | This paper provides a novel and non-trivial method for approximating the eigenvectors of the Laplacian, in large or continuous state environments. Eigenvectors of the Laplacian have been used for proto-value functions and eigenoptions, but it has remained an open problem to extend their use to the non-tabular case. This paper makes an important advance towards this goal, and will be of interest to many that would like to learn state representations based on the geometric information given by the Laplacian.
The paper could be made stronger by including a short discussion of the limitations of this approach. It's an important new direction, but there must still be open questions (e.g., issues with the approach used to approximate the orthogonality constraint). It would be beneficial for readers to understand these issues. | train | [
"Hylrlueq3Q",
"SJe3zTf6p7",
"BJlAT9zapm",
"HkxzFKz6pX",
"ryl6Crxo3m",
"SJxZV3b82m"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary: This paper proposes a method to learn a state representation for RL using the Laplacian. The proposed method aims to generalize previous work, which has only been shown in finite state spaces, to continuous and large state spaces. It goes to approximate the eigenvectors of the Laplacian which is construct... | [
7,
-1,
-1,
-1,
7,
7
] | [
3,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_HJlNpoA5YQ",
"SJxZV3b82m",
"Hylrlueq3Q",
"ryl6Crxo3m",
"iclr_2019_HJlNpoA5YQ",
"iclr_2019_HJlNpoA5YQ"
] |
iclr_2019_HJlQfnCqKX | Predicting the Generalization Gap in Deep Networks with Margin Distributions | As shown in recent research, deep neural networks can perfectly fit randomly labeled data, but with very poor accuracy on held-out data. This phenomenon indicates that loss functions such as cross-entropy are not a reliable indicator of generalization. This leads to the crucial question of how the generalization gap should be predicted from the training data and network parameters. In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap. Our measure is based on the concept of the margin distribution, i.e., the distribution of distances of training points to the decision boundary. We find that it is necessary to use margin distributions at multiple layers of a deep network. On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. In addition, we find the following other factors to be of importance: normalizing margin values for scale independence, using characterizations of the margin distribution rather than just the margin (closest distance to the decision boundary), and working in log space instead of linear space (effectively using a product of margins rather than a sum).
Our measure can be easily applied to feedforward deep networks with any architecture and may point towards new training loss functions that could enable better generalization. | accepted-poster-papers | The paper suggests a new measure based on layer-wise margin distributions for predicting generalization ability. Extensive experiments are conducted, though a solid theory to explain the phenomenon is still lacking. The majority of reviewers suggest acceptance (9, 6, 5). Therefore, acceptance is recommended. | train | [
"Ske1HVNc07",
"HygHuQVq07",
"BylGPEVq07",
"H1xTRJmc3m",
"SylJSZ_STX",
"B1gSCYhmTX",
"S1e5qVsQaQ",
"Hkl7VhOqhX",
"HkgvJXxt2Q",
"BJgDuN3On7"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"Thank you for the review. We address your concerns below.\n\n#What benefit can be acquired when using geometric margin defined in the paper.#\nThe geometric distance is the actual distance between a point “x” and the decision boundary f(x)=0, i.e. d1=min_x ||x|| s.t. f(x)=0.This term is usually used in contrast to... | [
-1,
-1,
-1,
6,
5,
-1,
-1,
9,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
-1,
-1,
4,
-1,
-1
] | [
"SylJSZ_STX",
"iclr_2019_HJlQfnCqKX",
"Ske1HVNc07",
"iclr_2019_HJlQfnCqKX",
"iclr_2019_HJlQfnCqKX",
"H1xTRJmc3m",
"Hkl7VhOqhX",
"iclr_2019_HJlQfnCqKX",
"BJgDuN3On7",
"iclr_2019_HJlQfnCqKX"
] |
iclr_2019_HJlmHoR5tQ | Adversarial Imitation via Variational Inverse Reinforcement Learning | We consider the problem of learning the reward and policy from expert examples under unknown dynamics. Our proposed method builds on the framework of generative adversarial networks and introduces empowerment-regularized maximum-entropy inverse reinforcement learning to learn near-optimal rewards and policies. Empowerment-based regularization prevents the policy from overfitting to expert demonstrations, which advantageously leads to more generalized behaviors that result in learning near-optimal rewards. Our method simultaneously learns empowerment through variational information maximization along with the reward and policy under the adversarial learning formulation. We evaluate our approach on various high-dimensional complex control tasks. We also test our learned rewards in challenging transfer learning problems where training and testing environments are made to be different from each other in terms of dynamics or structure. The results show that our proposed method not only learns near-optimal rewards and policies that match expert behavior but also performs significantly better than state-of-the-art inverse reinforcement learning algorithms. | accepted-poster-papers | This paper proposes a regularization for IRL based on empowerment. The paper has some good results, and is generally well-written. The reviewers raised concerns about how the approach was motivated; these concerns have largely been addressed by reframing the algorithm from the perspective of regularization. Now, all reviewers agree that the paper is somewhat above the bar for acceptance. Hence, I also recommend acceptance. There are several changes that the authors are strongly encouraged to incorporate in the final version of the paper (based on discussion between the reviewers):
- The claim that empowerment acts as a regularizer in the policy update is a fairly complicated interpretation of the effect of the algorithm. It relies on an approximation derived in the appendix that relates the proposed objective to an empowerment-regularized IRL formulation. The new framing makes much more sense. However, the one-sentence reference to this section of the appendix in the main paper is not appropriate given that it is central to the claims of the paper's contribution. More discussion should be included in the main text.
- There are still some parts of the implemented algorithm that could introduce bias (using a target network in the shaping term which differs from the theory in Ng et al. 1999), but this concern could be remedied by a code release. The authors are strongly encouraged to link to the code in the final non-blind submission, especially since IRL implementations tend to be quite difficult to get right.
- The authors said they would change the way they bold their best numbers in their rebuttal. The current paper does not make the promised change, and actually adopts different bolding conventions in different tables which is even more confusing. The numbers should be bolded in a consistent way, bolding the numbers with the best performance up to statistical significance. | train | [
"SJdOg6R3Q",
"H1gjxRaq1V",
"ryxBthEq1N",
"SkeaSLLMyE",
"rJgOYVPcnX",
"HJgv40ZF0m",
"SklJGpL40Q",
"BklSaQRunm",
"rJlvBZqP6X",
"HJeiRvOqTm",
"r1xjTdf_TX",
"BJldKTtvpX",
"BJlixw6ep7",
"rke1hVag6Q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"Summary/Contribution:\nThis paper builds on the AIRL framework (Fu et al., 2017) by combining the empowerment maximization objective for optimizing both the policy and reward function. Algorithmically, the main difference is that this introduces the need to optimize a inverse model (q), an empowerment function (Ph... | [
6,
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HJlmHoR5tQ",
"ryxBthEq1N",
"BJlixw6ep7",
"HJgv40ZF0m",
"iclr_2019_HJlmHoR5tQ",
"BklSaQRunm",
"HJeiRvOqTm",
"iclr_2019_HJlmHoR5tQ",
"iclr_2019_HJlmHoR5tQ",
"r1xjTdf_TX",
"BJldKTtvpX",
"BklSaQRunm",
"SJdOg6R3Q",
"rJgOYVPcnX"
] |
iclr_2019_HJx9EhC9tQ | Reasoning About Physical Interactions with Object-Oriented Prediction and Planning | Object-based factorizations provide a useful level of abstraction for interacting with the world. Building explicit object representations, however, often requires supervisory signals that are difficult to obtain in practice. We present a paradigm for learning object-centric representations for physical scene understanding without direct supervision of object properties. Our model, Object-Oriented Prediction and Planning (O2P2), jointly learns a perception function to map from image observations to object representations, a pairwise physics interaction function to predict the time evolution of a collection of objects, and a rendering function to map objects back to pixels. For evaluation, we consider not only the accuracy of the physical predictions of the model, but also its utility for downstream tasks that require an actionable representation of intuitive physics. After training our model on an image prediction task, we can use its learned representations to build block towers more complicated than those observed during training. | accepted-poster-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
- The problem is interesting and challenging
- The proposed approach is novel and performs well.
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
- The clarity could be improved
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
Many concerns were clarified during the discussion period. One major concern had been the experimental evaluation. In particular, some reviewers felt that experiments on real images (rather than in simulation) were needed.
To strengthen this aspect, the authors added new qualitative and quantitative results on a real-world experiment with a robot arm, under 10 different scenarios, showing good performance on this challenging task. Still, one reviewer was left unconvinced that the experimental evaluation was sufficient.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
Consensus was not reached. The final decision is aligned with the positive reviews as the AC believes that the evaluation was adequate.
| train | [
"Bkgyox9DnQ",
"BJxwg4pjAX",
"H1eh77VKR7",
"SyxLdj-IhX",
"H1x_6tMgAm",
"rygjPOfgRX",
"HJgCWEMxCm",
"rkglZ8xo2Q",
"H1e1DWaMqm",
"B1exCohM97",
"SkeuI3gb5m",
"SJl7bdRy5Q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"public",
"public"
] | [
"Summary:\nThe paper presents a platform for predicting images of objects interacting with each other under the effect of gravitational forces. Given an image describing the initial arrangement of the objects in a scene, the proposed architecture first detects the objects and encode them using a perception module. ... | [
7,
-1,
-1,
9,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
4,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HJx9EhC9tQ",
"iclr_2019_HJx9EhC9tQ",
"HJgCWEMxCm",
"iclr_2019_HJx9EhC9tQ",
"SyxLdj-IhX",
"Bkgyox9DnQ",
"rkglZ8xo2Q",
"iclr_2019_HJx9EhC9tQ",
"SJl7bdRy5Q",
"SkeuI3gb5m",
"iclr_2019_HJx9EhC9tQ",
"iclr_2019_HJx9EhC9tQ"
] |
iclr_2019_HJxB5sRcFQ | LayoutGAN: Generating Graphic Layouts with Wireframe Discriminators | Layout is important for graphic design and scene generation. We propose a novel Generative Adversarial Network, called LayoutGAN, that synthesizes layouts by modeling geometric relations of different types of 2D elements. The generator of LayoutGAN takes as input a set of randomly-placed 2D graphic elements and uses self-attention modules to refine their labels and geometric parameters jointly to produce a realistic layout. Accurate alignment is critical for good layouts. We thus propose a novel differentiable wireframe rendering layer that maps the generated layout to a wireframe image, upon which a CNN-based discriminator is used to optimize the layouts in image space. We validate the effectiveness of LayoutGAN in various experiments including MNIST digit generation, document layout generation, clipart abstract scene generation and tangram graphic design. | accepted-poster-papers | Reviewers agree the paper should be accepted.
See reviews below. | train | [
"Hye5G91CRQ",
"S1eLur3cnQ",
"r1emOpxi0m",
"rJeEMnsXR7",
"B1lzpiiQAX",
"SJlQtjjm0X",
"S1eY99iXAX",
"rJxm6KDi2X",
"HkeCYxhc27"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your detailed rebuttal and for addressing my concerns and responding to my questions. \n- Specifically I found the additional analysis (both human evaluation and showing results from other baselines) on Clip-art scene generation satisfying. \n- I also found it helpful to look at DCGAN results for all... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"B1lzpiiQAX",
"iclr_2019_HJxB5sRcFQ",
"rJeEMnsXR7",
"HkeCYxhc27",
"S1eLur3cnQ",
"S1eLur3cnQ",
"rJxm6KDi2X",
"iclr_2019_HJxB5sRcFQ",
"iclr_2019_HJxB5sRcFQ"
] |
iclr_2019_HJxeWnCcF7 | Learning Mixed-Curvature Representations in Product Spaces | The quality of the representations achieved by embeddings is determined by how well the geometry of the embedding space matches the structure of the data.
Euclidean space has been the workhorse for embeddings; recently hyperbolic and spherical spaces have gained popularity due to their ability to better embed new types of structured data---such as hierarchical data---but most data is not structured so uniformly.
We address this problem by proposing learning embeddings in a product manifold combining multiple copies of these model spaces (spherical, hyperbolic, Euclidean), providing a space of heterogeneous curvature suitable for a wide variety of structures.
We introduce a heuristic to estimate the sectional curvature of graph data and directly determine an appropriate signature---the number of component spaces and their dimensions---of the product manifold.
Empirically, we jointly learn the curvature and the embedding in the product space via Riemannian optimization.
We discuss how to define and compute intrinsic quantities such as means---a challenging notion for product manifolds---and provably learnable optimization functions.
On a range of datasets and reconstruction tasks, our product space embeddings outperform single Euclidean or hyperbolic spaces used in previous works, reducing distortion by 32.55% on a Facebook social network dataset.
We learn word embeddings and find that a product of hyperbolic spaces in 50 dimensions consistently improves on baseline Euclidean and hyperbolic embeddings, by 2.6 points in Spearman rank correlation on similarity tasks and 3.4 points on analogy accuracy.
| accepted-poster-papers | This paper proposes a novel framework for tractably learning non-Euclidean embeddings that are product spaces formed by hyperbolic, spherical, and Euclidean components, providing a heterogeneous mix of curvature properties. On several datasets, these product space embeddings outperform single Euclidean or hyperbolic spaces. The reviewers unanimously recommend acceptance. | train | [
"ryxIIdud37",
"H1exAN8qaQ",
"B1la9NI96Q",
"S1lIwN896m",
"SygOB4LcTX",
"ryljZN8caX",
"r1gDfZ8ca7",
"Byxc9eI9am",
"Hklpflm6h7",
"r1x-tAv3jm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nPage 2: What are p_i, i=1,2,...,n, their set T and \\mathcal{P}?\n\nWhat is | | used to compute distortion between a and b?\n\nPlease fix the definition of the Riemannian manifold, such that M is not just any manifold, but should be a smooth manifold or a particular differentiable manifold. Please update your de... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"iclr_2019_HJxeWnCcF7",
"ryxIIdud37",
"ryxIIdud37",
"ryxIIdud37",
"ryxIIdud37",
"ryxIIdud37",
"r1x-tAv3jm",
"Hklpflm6h7",
"iclr_2019_HJxeWnCcF7",
"iclr_2019_HJxeWnCcF7"
] |
iclr_2019_HJxwDiActX | StrokeNet: A Neural Painting Environment | We have seen tremendous success of image-generating models in recent years. Generating images through a neural network is usually pixel-based, which is fundamentally different from how humans create artwork using brushes. To imitate human drawing, interactions between the environment and the agent are required to allow trials. However, the environment is usually non-differentiable, leading to slow convergence and massive computation. In this paper we try to address the discrete nature of the software environment with an intermediate, differentiable simulation. We present StrokeNet, a novel model where the agent is trained upon a well-crafted neural approximation of the painting environment. With this approach, our agent was able to learn to write characters such as MNIST digits faster than reinforcement learning approaches in an unsupervised manner. Our primary contribution is the neural simulation of a real-world environment. Furthermore, the agent trained with the emulated environment is able to directly transfer its skills to real-world software. | accepted-poster-papers | The paper proposes a novel differentiable way to output brush strokes, taking a few ideas from model-based learning. The method is efficient in that one can train it in an unsupervised manner and it does not require paired data. The strengths of the paper are the qualitative results that demonstrate nice interpolations among other things, on a number of datasets (esp. post-rebuttal).
The weaknesses of the paper are the writing (which I think is relatively easy to improve if the authors make an honest effort) and some of the quantitative evaluation. I would encourage the authors to get in touch with the SPIRAL paper authors in order to get access to the SPIRAL generated MNIST test data and then perhaps the classification metric could be updated.
In summary, from the discussion, the major points of contention were the somewhat lacking initial evaluation (which was fixed to a large extent) and the quality of writing (which could be fixed more). I believe the submission is genuinely novel, interesting (esp. the usage of world model-like techniques) and valuable for the ICLR audience so I recommend acceptance. | train | [
"SkgawVChyE",
"rJlM3uThJE",
"SyxgZc6hkV",
"Hyg0dv2n2Q",
"Syx3a2S507",
"BJg-3v4v3X",
"SJeRjh4SkN",
"SkgLeTScA7",
"B1l5K2rcAQ",
"H1x_SWqKn7",
"HkeU4TB9CQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author"
] | [
"Thank you very much for your kind revision! Again we really appreciate your constructive suggestions that helped us make this a complete work!",
"Thanks again for your constructive advice and kind revision.\n\nRegarding the classification metrics of SPIRAL, we did not have access to SPIRAL generated MNIST test d... | [
-1,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
8,
-1
] | [
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
5,
-1
] | [
"H1x_SWqKn7",
"Hyg0dv2n2Q",
"SJeRjh4SkN",
"iclr_2019_HJxwDiActX",
"Hyg0dv2n2Q",
"iclr_2019_HJxwDiActX",
"HkeU4TB9CQ",
"H1x_SWqKn7",
"iclr_2019_HJxwDiActX",
"iclr_2019_HJxwDiActX",
"BJg-3v4v3X"
] |
iclr_2019_HJxyAjRcFX | Harmonizing Maximum Likelihood with GANs for Multimodal Conditional Generation | Recent advances in conditional image generation tasks, such as image-to-image translation and image inpainting, are largely attributed to the success of conditional GAN models, which are often optimized by the joint use of the GAN loss with the reconstruction loss. However, we reveal that this training recipe shared by almost all existing methods causes one critical side effect: lack of diversity in output samples. In order to accomplish both training stability and multimodal output generation, we propose novel training schemes with a new set of losses named moment reconstruction losses that simply replace the reconstruction loss. We show that our approach is applicable to any conditional generation task by performing thorough experiments on image-to-image translation, super-resolution and image inpainting using the Cityscapes and CelebA datasets. Quantitative evaluations also confirm that our methods achieve a great diversity in outputs while retaining or even improving the visual fidelity of generated samples. | accepted-poster-papers | The paper presents new loss functions (which replace the reconstruction part) for the training of conditional GANs. Theoretical considerations and an empirical analysis show that the proposed loss can better handle multimodality of the target distribution than reconstruction-based losses while being competitive in terms of image quality. | train | [
"HkgbB4TxeV",
"r1gdodFyx4",
"Skg_k4wklV",
"rJxtHml92m",
"Hyxkbojj2Q",
"B1gG_obrAQ",
"B1l6yj-H0m",
"H1ljacZB0X",
"Byer85ZHAQ",
"B1e85wUE67",
"S1gPNubqn7"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
"We are deeply grateful to reviewer3 for a quick reply that reveals the detailed ground for the decision. Now we can understand the review much better to offer more focused answers to the concerns raised by reviewer3.\n\n1. Novelty\n===================================\nAccording to reviewer3’s clarification, per-pi... | [
-1,
-1,
-1,
4,
8,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
5,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"r1gdodFyx4",
"Skg_k4wklV",
"rJxtHml92m",
"iclr_2019_HJxyAjRcFX",
"iclr_2019_HJxyAjRcFX",
"B1e85wUE67",
"rJxtHml92m",
"S1gPNubqn7",
"Hyxkbojj2Q",
"iclr_2019_HJxyAjRcFX",
"iclr_2019_HJxyAjRcFX"
] |
iclr_2019_HJz05o0qK7 | Measuring Compositionality in Representation Learning | Many machine learning algorithms represent input data with vector embeddings or discrete codes. When inputs exhibit compositional structure (e.g. objects built from parts or procedures from subroutines), it is natural to ask whether this compositional structure is reflected in the inputs' learned representations. While the assessment of compositionality in languages has received significant attention in linguistics and adjacent fields, the machine learning literature lacks general-purpose tools for producing graded measurements of compositional structure in more general (e.g. vector-valued) representation spaces. We describe a procedure for evaluating compositionality by measuring how well the true representation-producing model can be approximated by a model that explicitly composes a collection of inferred representational primitives. We use the procedure to provide formal and empirical characterizations of compositional structure in a variety of settings, exploring the relationship between compositionality and learning dynamics, human judgments, representational similarity, and generalization. | accepted-poster-papers | This paper presents a method for measuring the degree to which some representation for a composed object effectively represents the pieces from which it is composed. All three reviewers found this to be an important topic for study, and found the paper to be a limited but original and important step toward studying this topic. However, two reviewers expressed serious concerns about clarity, and were not fully satisfied with the revisions made so far. I'm recommending acceptance, but I ask the authors to further revise the paper (especially the introduction) to make sure it includes a blunt and straightforward presentation of the problem under study and the way TRE addresses it.
I'm also somewhat concerned at R2's mention of a potential confound in one experiment. The paper has been updated with what appears to be a fix, though, and R2 has not yet responded, so I'm presuming that this issue has been resolved.
I also ask the authors to release code shortly upon de-anonymization, as promised. | train | [
"HkepfJbY3m",
"HygfJZNy0m",
"BylIvb4JCm",
"SJekSWNyCQ",
"SJxgMW4JCQ",
"Hye-eZd93m",
"S1xUg9jcnX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper tackles a very interesting problem about representations, especially of the connectionist kind -- how do we know if the learned representations capture the compositional structure present in the inputs, and tries to come up with a systematic framework to answer that question. The framework assumes the pr... | [
7,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_HJz05o0qK7",
"iclr_2019_HJz05o0qK7",
"HkepfJbY3m",
"Hye-eZd93m",
"S1xUg9jcnX",
"iclr_2019_HJz05o0qK7",
"iclr_2019_HJz05o0qK7"
] |
iclr_2019_HJz6tiCqYm | Benchmarking Neural Network Robustness to Common Corruptions and Perturbations | In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations, not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize. | accepted-poster-papers | The reviewers have all recommended accepting this paper, and thus I am recommending acceptance as well. Based on the reviews and the selectivity of the single track for oral presentations, I am only recommending acceptance as a poster. | train | [
"BygwXHNOlV",
"rJxW-nzJyN",
"ryxahIfEgE",
"SJefbjX1e4",
"S1l6EPphkE",
"rylvrBOmkV",
"SJladTB7y4",
"rklBbUGA0m",
"rJxnuikRA7",
"SJgvBRYaR7",
"BkxNgFD9RX",
"B1xEWdwc0Q",
"rye6cKP5Cm",
"Skg0XKPqR7",
"r1xJYOvcR7",
"B1xfnvvc0m",
"SklrTPkU07",
"rkl_eOltTX",
"S1xvCfDD6X",
"HkgLx3DzpQ"... | [
"author",
"author",
"public",
"official_reviewer",
"author",
"public",
"official_reviewer",
"author",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"public",
"official_reviewer"
] | [
"Thank you for your interest! Since this task is not adversarial in nature, we do not intend to continually modify the corruptions to subvert new approaches, much like how CIFAR-10 did not continually change to make classification harder for every new architecture and method. Improved generalization to unseen corru... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
9,
-1,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
-1,
4
] | [
"ryxahIfEgE",
"iclr_2019_HJz6tiCqYm",
"iclr_2019_HJz6tiCqYm",
"rylvrBOmkV",
"SJladTB7y4",
"SJladTB7y4",
"rklBbUGA0m",
"rJxnuikRA7",
"iclr_2019_HJz6tiCqYm",
"B1xEWdwc0Q",
"S1xvCfDD6X",
"SklrTPkU07",
"ryeoWVTch7",
"HkgLx3DzpQ",
"rkl_eOltTX",
"iclr_2019_HJz6tiCqYm",
"iclr_2019_HJz6tiCqY... |
iclr_2019_Hk4dFjR5K7 | ADef: an Iterative Algorithm to Construct Adversarial Deformations | While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image. In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step. We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101. | accepted-poster-papers | The submission proposes a method to construct adversarial attacks based on deforming an input image rather than adding small perturbations. Although a deformation can also be characterized by the difference between the original and deformed images, it is qualitatively and quantitatively different, as a small deformation can result in a large difference.
On the positive side, this paper proposes an interesting form of adversarial attack, whose success can give additional insights into the forms of existing adversarial attacks. The experiments on MNIST and ImageNet are reasonably comprehensive and allow interesting interpretation of how the image deforms to allow the attack. The paper is also praised for its clarity and cleaner formulation compared to Xiao et al. (see below). Additional experiments during the rebuttal phase partially answered reviewer concerns and provided more information, e.g. about the effect of the smoothness of the deformation.
There were some concerns that the paper primarily presents one idea, and perhaps missed an opportunity for deeper analysis (R1). R2 would have appreciated more analysis on how to defend against the attack.
A controversial point is the relation / novelty with respect to Xiao et al., ICLR 2018. As e.g. pointed out by R1: "The paper originates from a document provably written in late 2017, which is before the deposit on arXiv of another article (by different authors, early 2018) which was later accepted to ICLR 2018 [Xiao and al.]. This remark is important in that it changes my rating of the paper (being more indulgent with papers proposing new ideas, as otherwise the novelty is rather low compared to [Xiao and al.])."
On balance, all three reviewers recommended acceptance of the paper. Regarding novelty over Xiao et al., even ignoring the arguable precedence of the current submission, the formulation is cleaner and will likely advance the analysis of adversarial attacks. | val | [
"Bkg6rXVVAm",
"B1xYmy1-Cm",
"SJgaSNc92X",
"BJeklt6saX",
"Byx15QgopQ",
"SkgnsKp9am",
"rkeZVY6cT7",
"r1l_-cA-am",
"SJxk8F0bT7",
"rye0ucE1am",
"Ske8yLon37"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for the reconsideration. We are happy to engage in further discussion, and your question is interesting indeed. Our method focuses on finding an exact solution to Equation (5). This equation has many solutions, and in Equation (7) we choose the one that minimizes the l^2 norm of the vector fiel... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"BJeklt6saX",
"SkgnsKp9am",
"iclr_2019_Hk4dFjR5K7",
"SJxk8F0bT7",
"iclr_2019_Hk4dFjR5K7",
"rkeZVY6cT7",
"rye0ucE1am",
"Ske8yLon37",
"SJgaSNc92X",
"iclr_2019_Hk4dFjR5K7",
"iclr_2019_Hk4dFjR5K7"
] |
iclr_2019_Hk4fpoA5Km | Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning | We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments. | accepted-poster-papers | This work highlights the problem of biased rewards present in common adversarial imitation learning implementations, and proposes adding absorbing states to fix the issue. This is combined with an off-policy training algorithm, yielding significantly improved sample efficiency whose benefits are convincingly shown empirically. The paper is well written and clearly presents the contributions. Questions were satisfactorily answered during discussion, and resulted in an improved submission, a paper that all reviewers now agree is worth presenting at ICLR.
| train | [
"S1gpuXlih7",
"S1gnoccmC7",
"HkeHJf0GRQ",
"SJxux2BMCm",
"H1gEMY4eAQ",
"BkgkROElAm",
"SJeQPruhpX",
"B1gGZfAq6Q",
"rkeAEjMcT7",
"Byebk4m5Tm",
"rJgV29zq67",
"SygDehUmpQ",
"rJePR4gGT7",
"rJgH5Yg0h7",
"S1gzKHFunm",
"BJxqUpTA3Q",
"HyebhMXa2X",
"ByeHNeho37",
"BkxfyEhmhm",
"S1gOUczL3m"... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"public",
"author",
... | [
"The paper suggests to use TD3 to compute an off-policy update instead of the TRPO/PPO updates in GAIL/AIRL in order to increase sample efficiency.\nThe paper further discusses the problem of implicit step penalties and survival bias caused by absorbing states, when using the upper-bounded/lower-bounded reward func... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_Hk4fpoA5Km",
"S1gpuXlih7",
"SJxux2BMCm",
"H1gEMY4eAQ",
"BkgkROElAm",
"SJeQPruhpX",
"B1gGZfAq6Q",
"Byebk4m5Tm",
"S1gzKHFunm",
"S1gpuXlih7",
"rJgH5Yg0h7",
"rJePR4gGT7",
"iclr_2019_Hk4fpoA5Km",
"iclr_2019_Hk4fpoA5Km",
"iclr_2019_Hk4fpoA5Km",
"rJgH5Yg0h7",
"ByeHNeho37",
"icl... |
iclr_2019_HkG3e205K7 | Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives | Deep latent variable models have become a popular model choice due to the scalable learning algorithms introduced by (Kingma & Welling 2013, Rezende et al. 2014). These approaches maximize a variational lower bound on the intractable log likelihood of the observed data. Burda et al. (2015) introduced a multi-sample variational bound, IWAE, that is at least as tight as the standard variational lower bound and becomes increasingly tight as the number of samples increases. Counterintuitively, the typical inference network gradient estimator for the IWAE bound performs poorly as the number of samples increases (Rainforth et al. 2018, Le et al. 2018). Roeder et a. (2017) propose an improved gradient estimator, however, are unable to show it is unbiased. We show that it is in fact biased and that the bias can be estimated efficiently with a second application of the reparameterization trick. The doubly reparameterized gradient (DReG) estimator does not suffer as the number of samples increases, resolving the previously raised issues. The same idea can be used to improve many recently introduced training techniques for latent variable models. In particular, we show that this estimator reduces the variance of the IWAE gradient, the reweighted wake-sleep update (RWS) (Bornschein & Bengio 2014), and the jackknife variational inference (JVI) gradient (Nowozin 2018). Finally, we show that this computationally efficient, drop-in estimator translates to improved performance for all three objectives on several modeling tasks. | accepted-poster-papers | The paper is well written and easy to follow. The experiments are adequate to justify the usefulness of an identity for improving existing multi-Monte-Carlo-sample based gradient estimators for deep generative models. The originality and significance are acceptable, as discussed below.
The proposed doubly reparameterized gradient estimators are built on an important identity shown in Equation (5). This identity appears straightforward to derive by applying both the score-function gradient and the reparameterization gradient to the same objective function, which is expressed as an expectation. The AC suspects that this identity might have already appeared in previous publications or implementations, though without being claimed as an important contribution or explicitly discussed. If that suspicion is true, the identity may not count as an original contribution of the paper; even so, the paper makes another useful contribution in applying that identity to the right problem: improving three distinct training algorithms for deep generative models. The doubly reparameterized versions of IWAE and reweighted wake-sleep (RWS) further show how IWAE and RWS are related to each other and how they can be combined for potentially further improved performance.
The AC believes that the paper makes enough contributions by well presenting the identity in (5) and applying it to the right problems. | train | [
"rJlMEiAnam",
"rygSDcRhaQ",
"SJluWc0npX",
"HyelqKC2pX",
"Hkx5PKC3hX",
"Hyle7iIF27",
"S1gF1q36j7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have updated the manuscript based on reviewer feedback. Apart from clarifying edits, we have rewritten the derivation in Appendix 8.1 and included a plot of variance for several values of K as Appendix Figure 8.",
"Recent work on reparameterizing mixture distributions has shown that the necessary gradients ca... | [
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"iclr_2019_HkG3e205K7",
"S1gF1q36j7",
"Hyle7iIF27",
"Hkx5PKC3hX",
"iclr_2019_HkG3e205K7",
"iclr_2019_HkG3e205K7",
"iclr_2019_HkG3e205K7"
] |
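To make the estimator discussed in the DReG entry above concrete, the doubly reparameterized inference-network gradient for the K-sample IWAE bound is usually written as below. This is a sketch in standard notation and may differ slightly from the notation used in the paper.

```latex
% Importance weights w_i = p(x, z_i) / q_\phi(z_i \mid x), with
% reparameterized samples z_i = z(\epsilon_i, \phi):
\nabla_\phi \mathcal{L}_K
  = \mathbb{E}_{\epsilon_{1:K}}\!\left[
      \sum_{i=1}^{K}
      \left( \frac{w_i}{\sum_{j=1}^{K} w_j} \right)^{2}
      \frac{\partial \log w_i}{\partial z_i}\,
      \frac{\partial z_i}{\partial \phi}
    \right]
```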
iclr_2019_HkNGYjR9FX | Learning Recurrent Binary/Ternary Weights | Recurrent neural networks (RNNs) have shown excellent performance in processing sequence data. However, they are both complex and memory intensive due to their recursive nature. These limitations make RNNs difficult to embed on mobile devices requiring real-time processes with limited hardware resources. To address the above issues, we introduce a method that can learn binary and ternary weights during the training phase to facilitate hardware implementations of RNNs. As a result, using this approach replaces all multiply-accumulate operations by simple accumulations, bringing significant benefits to custom hardware in terms of silicon area and power consumption. On the software side, we evaluate the performance (in terms of accuracy) of our method using long short-term memories (LSTMs) and gated recurrent units (GRUs) on various sequential models including sequence classification and language modeling. We demonstrate that our method achieves competitive results on the aforementioned tasks while using binary/ternary weights during the runtime. On the hardware side, we present custom hardware for accelerating the recurrent computations of LSTMs with binary/ternary weights. Ultimately, we show that LSTMs with binary/ternary weights can achieve up to 12x memory saving and 10x inference speedup compared to the full-precision hardware implementation design. | accepted-poster-papers | This work proposes a simple but useful way to train RNNs with binary/ternary weights for improving memory and power efficiency. The paper presented a sequence of experiments on various benchmarks and demonstrated a significant improvement in memory size with only a minor decrease in accuracy. The authors' rebuttal addressed the reviewers' concerns nicely.
| train | [
"Syxc_KqB1V",
"BkelhUWx1E",
"SJxm71x3CX",
"r1eCUUH0oQ",
"Byx7WRFtAX",
"rkgP1CFYR7",
"BylZ06Ft0X",
"HyxUw6KKC7",
"rye5HTKF0m",
"BkeyGhFFRQ",
"SyxrxntK0X",
"rkgP3jYYAX",
"HJlRfyx_aQ",
"HyxKWlpFh7"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We sincerely thank the reader for careful reading of our manuscript and code. Below we respond to each comment of yours in detail. \n---------------------------------------------------- \nComment: 1. what is the optimizer used for word-level language modeling on PTB data set? The submitted paper does not mention ... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"BkelhUWx1E",
"iclr_2019_HkNGYjR9FX",
"BylZ06Ft0X",
"iclr_2019_HkNGYjR9FX",
"r1eCUUH0oQ",
"r1eCUUH0oQ",
"r1eCUUH0oQ",
"HyxKWlpFh7",
"HyxKWlpFh7",
"HJlRfyx_aQ",
"HJlRfyx_aQ",
"HJlRfyx_aQ",
"iclr_2019_HkNGYjR9FX",
"iclr_2019_HkNGYjR9FX"
] |
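As a rough illustration of the learn-quantized-weights-during-training idea in the entry above, the sketch below binarizes latent full-precision weights in the forward pass and passes gradients straight through in the backward pass. This is a generic straight-through-estimator sketch, not the paper's exact scheme (which also covers ternary weights and RNN-specific details).

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: binarize weights to {-1, +1}. Backward: straight-through
    estimator, passing gradients where the latent weight lies in [-1, 1]."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        return grad_output * (w.abs() <= 1.0).to(grad_output.dtype)

# Keep full-precision latent weights, but run the recurrence with their
# binarized version; only multiplications by +/-1 (i.e. accumulations) remain.
w_latent = torch.randn(128, 128, requires_grad=True)
w_binary = BinarizeSTE.apply(w_latent)
```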
iclr_2019_Hke-JhA9Y7 | Learning concise representations for regression by evolving networks of trees | We propose and study a method for learning interpretable representations for the task of regression. Features are represented as networks of multi-type expression trees comprised of activation functions common in neural networks in addition to other elementary functions. Differentiable features are trained via gradient descent, and the performance of features in a linear model is used to weight the rate of change among subcomponents of each representation. The search process maintains an archive of representations with accuracy-complexity trade-offs to assist in generalization and interpretation. We compare several stochastic optimization approaches within this framework. We benchmark these variants on 100 open-source regression problems in comparison to state-of-the-art machine learning approaches. Our main finding is that this approach produces the highest average test scores across problems while producing representations that are orders of magnitude smaller than the next best performing method (gradient boosting). We also report a negative result in which attempts to directly optimize the disentanglement of the representation result in more highly correlated features. | accepted-poster-papers | The reviewers all feel that the paper should be accepted to the conference. The main strengths that they noted were the quality of writing, the wide applicability of the proposed method and the strength of the empirical evaluation. It's nice to see experiments across a large number of problems (100), with corresponding code, where baselines were hyperparameter tuned as well. This helps to give some assurance that the method will generalize to new problems and datasets. Some weaknesses noted by the reviewers were computational cost (the method is significantly slower than the baselines) and they weren't entirely convinced that having more concise representations would directly lead to the claimed interpretability of the approach. Nevertheless, they found it would make for a solid contribution to the conference. | train | [
"BJgNl0PKnX",
"SyxS9k3-Am",
"ryxU37agA7",
"rkgGkod5pQ",
"S1lEd9Oqa7",
"Skl9kud9p7",
"rJeoSzM9h7",
"SkgGi8lYnX"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a genetic algorithm that maintains an archive of representations that are iteratively evolved and selected by comparing validation error. Each representation is constructed as a syntax tree consists of elements that are common in neural network architectures. The experimental results showed t... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
1,
4
] | [
"iclr_2019_Hke-JhA9Y7",
"ryxU37agA7",
"S1lEd9Oqa7",
"SkgGi8lYnX",
"BJgNl0PKnX",
"rJeoSzM9h7",
"iclr_2019_Hke-JhA9Y7",
"iclr_2019_Hke-JhA9Y7"
] |
iclr_2019_Hke20iA9Y7 | Efficient Training on Very Large Corpora via Gramian Estimation | We study the problem of learning similarity functions over very large corpora using neural network embedding models. These models are typically trained using SGD with random sampling of unobserved pairs, with a sample size that grows quadratically with the corpus size, making it expensive to scale.
We propose new efficient methods to train these models without having to sample unobserved pairs. Inspired by matrix factorization, our approach relies on adding a global quadratic penalty and expressing this term as the inner-product of two generalized Gramians. We show that the gradient of this term can be efficiently computed by maintaining estimates of the Gramians, and develop variance reduction schemes to improve the quality of the estimates. We conduct large-scale experiments that show a significant improvement both in training time and generalization performance compared to sampling methods. | accepted-poster-papers | This paper presents methods to scale learning of embedding models estimated using neural networks. The main idea is to work with Gram matrices whose sizes depend on the length of the embedding. Building upon existing works like the SAG algorithm, the paper proposes two new stochastic methods for learning using stochastic estimates of Gram matrices.
Reviewers find the paper interesting and useful, although they have given many suggestions to improve the presentation and experiments. For this reason, I recommend accepting this paper.
A small note: the SAG algorithm was originally proposed in 2013. The paper only cites the 2017 version. Please include the 2013 version as well.
| train | [
"rJgsqcLt3m",
"SJxICZhjam",
"rkxc_enipX",
"ryltke3iT7",
"BJgu7y3o67",
"ryx3glNinm",
"r1gf2j2w27",
"r1gepzTgnm",
"BygKKCEuiX",
"r1xBJUGgo7",
"H1ejQiWgsX",
"SJl1I_Zgi7",
"rJx8ypbkjQ",
"rkeCV3lK5m",
"rke9t6tEcm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"public",
"public",
"public"
] | [
"This paper proposes an efficient algorithm to learn neural embedding models with a dot-product structure over very large corpora. The main method is to reformulate the objective function in terms of generalized Gramiam matrices, and maintain estimates of those matrices in the training process. The algorithm uses ... | [
8,
-1,
-1,
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
4,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_Hke20iA9Y7",
"iclr_2019_Hke20iA9Y7",
"r1gf2j2w27",
"rJgsqcLt3m",
"ryx3glNinm",
"iclr_2019_Hke20iA9Y7",
"iclr_2019_Hke20iA9Y7",
"BygKKCEuiX",
"r1xBJUGgo7",
"rJx8ypbkjQ",
"rkeCV3lK5m",
"rke9t6tEcm",
"iclr_2019_Hke20iA9Y7",
"iclr_2019_Hke20iA9Y7",
"iclr_2019_Hke20iA9Y7"
] |
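The Gramian reformulation described in the entry above has a compact form: the global penalty over all pairs collapses to a trace of two d x d Gramians, which can then be estimated from minibatches. The sketch below illustrates this identity and a running-estimate update; the function names, step size, and normalization convention are assumptions.

```python
import numpy as np

def gramian_penalty(U, V):
    """Global quadratic penalty sum_{i,j} <u_i, v_j>^2 = tr(Gu @ Gv),
    computed in O((n + m) d^2) instead of O(n m d) over all pairs."""
    Gu = U.T @ U  # d x d Gramian of the left embeddings
    Gv = V.T @ V  # d x d Gramian of the right embeddings
    return np.trace(Gu @ Gv)

def update_gramian_estimate(G_hat, batch_emb, alpha=0.01):
    """Keep a running estimate of a Gramian from minibatch embeddings."""
    G_batch = batch_emb.T @ batch_emb / batch_emb.shape[0]
    return (1.0 - alpha) * G_hat + alpha * G_batch

rng = np.random.default_rng(0)
U, V = rng.normal(size=(1000, 32)), rng.normal(size=(800, 32))
# Sanity check of the identity against the naive all-pairs computation.
assert np.isclose(gramian_penalty(U, V), ((U @ V.T) ** 2).sum())
```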
iclr_2019_Hke4l2AcKQ | MAE: Mutual Posterior-Divergence Regularization for Variational AutoEncoders | Variational Autoencoder (VAE), a simple and effective deep generative model, has led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations. However, recent studies demonstrate that, when equipped with expressive generative distributions (aka. decoders), VAE suffers from learning uninformative latent representations with the observation called KL Vanishing, in which case VAE collapses into an unconditional generative model. In this work, we introduce mutual posterior-divergence regularization, a novel regularization that is able to control the geometry of the latent space to accomplish meaningful representation learning, while achieving comparable or superior capability of density estimation. Experiments on three image benchmark datasets demonstrate that, when equipped with powerful decoders, our model performs well both on density estimation and representation learning. | accepted-poster-papers | This paper proposes a solution for the well-known problem of posterior collapse in VAEs: a phenomenon where the posteriors fail to diverge from the prior, which tends to happen in situations where the decoder is overly flexible.
A downside of the proposed method is the introduction of hyper-parameters controlling the degree of regularization. The empirical results show improvements on various baselines.
The paper proposes the addition of a regularization term that penalizes pairwise similarity of posteriors in latent space. The reviewers agree that the paper is clearly written and that the method is reasonably motivated. The experiments are also sufficiently convincing. | train | [
"HkxNWckD0Q",
"SJec11dj3X",
"ByxYlNP4C7",
"HyeQs7536Q",
"r1eryMXiTX",
"BJgo0o0NnX",
"HklIzEowaQ",
"H1l3IfiwaQ",
"Byl31zowTX",
"rJgA-rlU37",
"BylPAIJZ97",
"BJlD3jFg9X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"Thanks to the authors for the work in addressing my questions and comments.\n\n2. That’s interesting to know, makes sense indeed. I would explicitly indicate this in your “Measure of Smoothness.” section then, as this does not come across in the current text.\nThe new figure in Appendix B.1.3 is interesting to see... | [
-1,
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1
] | [
-1,
5,
-1,
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1
] | [
"Byl31zowTX",
"iclr_2019_Hke4l2AcKQ",
"H1l3IfiwaQ",
"r1eryMXiTX",
"HklIzEowaQ",
"iclr_2019_Hke4l2AcKQ",
"BJgo0o0NnX",
"rJgA-rlU37",
"SJec11dj3X",
"iclr_2019_Hke4l2AcKQ",
"BJlD3jFg9X",
"iclr_2019_Hke4l2AcKQ"
] |
iclr_2019_HkeGhoA5FX | Residual Non-local Attention Networks for Image Restoration | In this paper, we propose a residual non-local attention network for high-quality image restoration. Without considering the uneven distribution of information in the corrupted images, previous methods are restricted by local convolutional operation and equal treatment of spatial- and channel-wise features. To address this issue, we design local and non-local attention blocks to extract features that capture the long-range dependencies between pixels and pay more attention to the challenging parts. Specifically, we design trunk branch and (non-)local mask branch in each (non-)local attention block. The trunk branch is used to extract hierarchical features. Local and non-local mask branches aim to adaptively rescale these hierarchical features with mixed attentions. The local mask branch concentrates on more local structures with convolutional operations, while non-local attention considers more about long-range dependencies in the whole feature map. Furthermore, we propose residual local and non-local attention learning to train the very deep network, which further enhances the representation ability of the network. Our proposed method can be generalized for various image restoration applications, such as image denoising, demosaicing, compression artifacts reduction, and super-resolution. Experiments demonstrate that our method obtains comparable or better results compared with recent leading methods quantitatively and visually. | accepted-poster-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
- strong qualitative and quantitative results
- a good ablative analysis of the proposed method.
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
- clarity could be improved (and was much improved in the revision).
- somewhat limited novelty.
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
No major points of contention.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers reached a consensus that the paper should be accepted.
| train | [
"BJeBKz333m",
"B1lVIRTKAX",
"H1llV0aYAX",
"SJlreA6FCQ",
"BylyOpaKRX",
"B1gPVaaYAQ",
"Bygp3i6YA7",
"B1lKz511pQ",
"BJx1ow5K2X"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a convolutional neural network architecture that includes blocks for local and non-local attention mechanisms, which are claimed to be responsible for achieving excellent results in four image restoration applications.\n\n\n# Results\nThe strongest point of the paper is that the quantitative and... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2019_HkeGhoA5FX",
"BJeBKz333m",
"BJeBKz333m",
"BJeBKz333m",
"BJx1ow5K2X",
"BJx1ow5K2X",
"B1lKz511pQ",
"iclr_2019_HkeGhoA5FX",
"iclr_2019_HkeGhoA5FX"
] |
iclr_2019_HkeoOo09YX | Meta-Learning For Stochastic Gradient MCMC | Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has become increasingly popular for simulating posterior samples in large-scale Bayesian modeling. However, existing SG-MCMC schemes are not tailored to any specific probabilistic model; even a simple modification of the underlying dynamical system requires significant physical intuition. This paper presents the first meta-learning algorithm that allows automated design for the underlying continuous dynamics of an SG-MCMC sampler. The learned sampler generalizes Hamiltonian dynamics with state-dependent drift and diffusion, enabling fast traversal and efficient exploration of energy landscapes. Experiments validate the proposed approach on Bayesian fully connected neural network, Bayesian convolutional neural network and Bayesian recurrent neural network tasks, showing that the learned sampler outperforms generic, hand-designed SG-MCMC algorithms, and generalizes to different datasets and larger architectures. | accepted-poster-papers | This paper proposes to use meta-learning to design MCMC sampling distributions based on Hamiltonian dynamics, aiming to mix faster on a set of problems that are related to the training problems. The reviewers agree that the paper is well-written and the ideas are interesting and novel. The main weaknesses of the paper are that (1) there is not a clear case for using this method over SG-HMC, and (2) there are many design choices that are not validated. The authors revised the paper to address some aspects of the latter concern, but are encouraged to add additional revisions to clarify the points brought up by the reviewers.
Despite the weaknesses, the reviewers all agree that the paper exceeds the bar for acceptance. I also recommend acceptance. | train | [
"BylJFZFmy4",
"SkxGLPbKCX",
"Skl2LzvOA7",
"B1g0UDdF6Q",
"Skgd58dF6Q",
"rJgWaN_KTX",
"BkxAJlS5n7",
"r1xb6mtK3X",
"rkxTfzKFhQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for the elaboration and clarification",
"I have read your response and thank you for your clarifications.\n\nThank you for the precision regarding Stein's; I do agree it is not a significant part of your work but I was wondering if this could be a particularly fragile one. The fact that it do... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"Skgd58dF6Q",
"B1g0UDdF6Q",
"iclr_2019_HkeoOo09YX",
"rkxTfzKFhQ",
"r1xb6mtK3X",
"BkxAJlS5n7",
"iclr_2019_HkeoOo09YX",
"iclr_2019_HkeoOo09YX",
"iclr_2019_HkeoOo09YX"
] |
iclr_2019_HkezXnA9YX | Systematic Generalization: What Is Required and Can It Be Learned? | Numerous models for grounded language understanding have been recently proposed, including (i) generic models that can be easily adapted to any given task and (ii) intuitively appealing modular models that require background knowledge to be instantiated. We compare both types of models in how much they lend themselves to a particular form of systematic generalization. Using a synthetic VQA test, we evaluate which models are capable of reasoning about all possible object pairs after training on only a small subset of them. Our findings show that the generalization of modular models is much more systematic and that it is highly sensitive to the module layout, i.e. to how exactly the modules are connected. We furthermore investigate if modular models that generalize well could be made more end-to-end by learning their layout and parametrization. We find that end-to-end methods from prior work often learn inappropriate layouts or parametrizations that do not facilitate systematic generalization. Our results suggest that, in addition to modularity, systematic generalization in language understanding may require explicit regularizers or priors.
 | accepted-poster-papers | This paper generated a lot of discussion. The paper presents an empirical evaluation of generalization in models for visual reasoning. All reviewers generally agree that it presents a thorough evaluation with a good set of questions. The only remaining concerns of R3 (the sole negative vote) were lack of surprise in findings and lingering questions of whether these results generalize to realistic settings. The former suffers from hindsight bias and tends to be an unreliable indicator of the impact of a paper. The latter is an open question and should be worked on, but in the opinion of the AC, does not preclude publication of this manuscript. These experiments are well done and deserve to be published. If the findings don't generalize to more complex settings, we will let the noisy process of science correct our understanding in the future. | train | [
"BklMFEOy1V",
"Byg9M9Vyy4",
"Hkei5tV1JE",
"B1xq62y0CX",
"Hyere82c2m",
"SylZPzWcA7",
"BJe48Db6Tm",
"rkeU3aB86X",
"H1xbP6B8TX",
"S1xJ92HIaX",
"rJe-UgPqnX",
"rylJdHwn2Q"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the response, and sorry for the slow reply!\n\nAfter reading the response and revised paper, I'm leaving my review score unchanged, because I think my main concerns still stand. I didn't find the results surprising, and I don't see evidence that these results would generalize to more complex tasks. I th... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"S1xJ92HIaX",
"rylJdHwn2Q",
"rJe-UgPqnX",
"rkeU3aB86X",
"iclr_2019_HkezXnA9YX",
"iclr_2019_HkezXnA9YX",
"rJe-UgPqnX",
"Hyere82c2m",
"Hyere82c2m",
"rylJdHwn2Q",
"iclr_2019_HkezXnA9YX",
"iclr_2019_HkezXnA9YX"
] |
iclr_2019_Hkf2_sC5FX | Efficient Lifelong Learning with A-GEM | In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task. In this work, we investigate the efficiency of current lifelong approaches, in terms of sample complexity, computational and memory cost. Towards this end, we first introduce a new and a more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation. Second, we introduce a new metric measuring how quickly a learner acquires a new skill. Third, we propose an improved version of GEM (Lopez-Paz & Ranzato, 2017), dubbed Averaged GEM (A-GEM), which enjoys the same or even better performance as GEM, while being almost as computationally and memory efficient as EWC (Kirkpatrick et al., 2016) and other regularization-based methods. Finally, we show that all algorithms including A-GEM can learn even more quickly if they are provided with task descriptors specifying the classification tasks under consideration. Our experiments on several standard lifelong learning benchmarks demonstrate that A-GEM has the best trade-off between accuracy and efficiency | accepted-poster-papers |
Pros:
- Great work on getting rid of the need for QP and the corresponding proof of the update rule
- Mostly clear writing
- Good experimental results on relevant datasets
- Introduction of a more reasonable evaluation methodology for continual learning
Cons:
- The model is arguably a little incremental over GEM. In the end I think all the reviewers agree though that the practical value of a considerably more efficient and easy-to-implement approach largely outweighs this concern.
I think this is a good contribution in this area and I recommend acceptance. | train | [
"H1ltuen11V",
"rklWHLB9hQ",
"BygF0mRAC7",
"HJe92m0RCm",
"HkeVfvnnhX",
"rkepkT_u07",
"B1lVPuVcpQ",
"rJev5wEq6m",
"r1epCSNc67",
"BylvMvH5hm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thank you for your detailed rebuttal and revisions to the paper. I do agree that you have addressed my primary concerns and clarified some areas of confusion for me about the paper. I have updated my score in favor of acceptance after the revisions. ",
"This paper proposes a variant of GEM called A-GEM that subs... | [
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4
] | [
"HJe92m0RCm",
"iclr_2019_Hkf2_sC5FX",
"rJev5wEq6m",
"B1lVPuVcpQ",
"iclr_2019_Hkf2_sC5FX",
"r1epCSNc67",
"rklWHLB9hQ",
"BylvMvH5hm",
"HkeVfvnnhX",
"iclr_2019_Hkf2_sC5FX"
] |
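The A-GEM update rule praised in the meta-review above ("getting rid of the need for QP") reduces to a single gradient projection. The following is a minimal sketch with illustrative variable names.

```python
import numpy as np

def a_gem_update(g, g_ref):
    """Project the current-task gradient g so it does not increase the
    average loss on the episodic memory, whose gradient is g_ref. Unlike
    GEM's per-task quadratic program, one dot product decides everything."""
    dot = g @ g_ref
    if dot >= 0.0:
        return g  # no interference with past tasks: keep the gradient
    return g - (dot / (g_ref @ g_ref)) * g_ref  # project onto the half-space

g = np.array([1.0, -2.0])        # current-task gradient
g_ref = np.array([0.0, 1.0])     # reference gradient from memory
g_tilde = a_gem_update(g, g_ref)
assert g_tilde @ g_ref >= 0.0    # the projected gradient no longer conflicts
```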
iclr_2019_HkfPSh05K7 | Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering | This paper introduces a new framework for open-domain question answering in which the retriever and the reader \emph{iteratively interact} with each other. The framework is agnostic to the architecture of the machine reading model provided it has \emph{access} to the token-level hidden representations of the reader. The retriever uses fast nearest neighbor search that allows it to scale to corpora containing millions of paragraphs. A gated recurrent unit updates the query at each step conditioned on the \emph{state} of the reader and the \emph{reformulated} query is used to re-rank the paragraphs by the retriever. We conduct analysis and show that iterative interaction helps in retrieving informative paragraphs from the corpus. Finally, we show that our multi-step-reasoning framework brings consistent improvement when applied to two widely used reader architectures (\drqa and \bidaf) on various large open-domain datasets ---\tqau, \quasart, \searchqa, and \squado\footnote{Code and pretrained models are available at \url{https://github.com/rajarshd/Multi-Step-Reasoning}}. | accepted-poster-papers |
pros:
- novel idea for multi-step QA which rewrites the query in embedding space
- good comparison with related work
- reasonable evaluation and improved results
cons:
There were concerns about missing training details, insufficient evaluation, and presentation. These have been largely addressed in the revision and I am recommending acceptance. | train | [
"SkxCvQKjyE",
"H1eqAZ8iJV",
"SJeVzbUoyN",
"rJeN1G8oA7",
"ryliayIs0X",
"BklS0hsYs7",
"Sygq4coqA7",
"SJl8P-Fc07",
"r1xQXGYqA7",
"SyxjKXK90m",
"HJgTj6OcRm",
"BklH_p_qRm",
"H1lQ7Ar52X",
"BylrP5Q92m"
] | [
"public",
"author",
"public",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"That would be really helpful! Thanks for your update!",
"Thanks for your comment!. Right now the link is intentionally anonymized. We will release the code once the decision on the paper is finalized. Thank you for your interest!",
"This paper is very interesting and we're are doing follow-up research. Could t... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"H1eqAZ8iJV",
"SJeVzbUoyN",
"iclr_2019_HkfPSh05K7",
"ryliayIs0X",
"r1xQXGYqA7",
"iclr_2019_HkfPSh05K7",
"H1lQ7Ar52X",
"BklS0hsYs7",
"SJl8P-Fc07",
"iclr_2019_HkfPSh05K7",
"BklH_p_qRm",
"BylrP5Q92m",
"iclr_2019_HkfPSh05K7",
"iclr_2019_HkfPSh05K7"
] |
iclr_2019_HkfYOoCcYX | Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network | Weight pruning has been introduced as an efficient model compression technique. Even though pruning removes a significant amount of weights in a network, memory requirement reduction was limited since conventional sparse matrix formats require a significant amount of memory to store index-related information. Moreover, computations associated with such sparse matrix formats are slow because the sequential sparse matrix decoding process does not utilize highly parallel computing systems efficiently. As an attempt to compress index information while keeping the decoding process parallelizable, Viterbi-based pruning was suggested. Decoding non-zero weights, however, is still sequential in Viterbi-based pruning. In this paper, we propose a new sparse matrix format in order to enable a highly parallel decoding process of the entire sparse matrix. The proposed sparse matrix is constructed by combining pruning and weight quantization. For the latest RNN models on the PTB and WikiText-2 corpora, the LSTM parameter storage requirement is compressed 19x using the proposed sparse matrix format compared to the baseline model. Compressed weights and indices can be quickly reconstructed into a dense matrix using Viterbi encoders. Simulation results show that the proposed scheme can feed parameters to processing elements 20% to 106% faster than the case where the dense matrix values directly come from DRAM. | accepted-poster-papers | The authors propose an efficient scheme for encoding sparse matrices which allows weights to be compressed efficiently. At the same time, the proposed scheme allows for fast parallelizable decompression into a dense matrix using Viterbi-based pruning.
The reviewers noted that the techniques address an important problem relevant to deploying neural networks on resource-constrained platforms, and although the work builds on previous work, it is important from a practical standpoint.
The reviewers noted a number of concerns on the initial draft of this work related to the experimental methodology and the absence of runtime comparison against the baseline, which the reviewers have since fixed in the revised draft. The reviewers were unanimous in recommending that the revision be accepted, and the authors are requested to incorporate the final changes that they said they would make in the camera-ready version.
| train | [
"HJlNr-wCCm",
"SyxPPFpDhm",
"SJlvpFOh0X",
"SJgAEEpDhQ",
"H1gFqAPjAm",
"Byl9RiviCX",
"HJlAUngO2X",
"r1gC_KvsAQ",
"SygQztZjCX",
"Hke8lgOq0m",
"Hygh6aDcA7",
"HygDJA3YRQ",
"B1gIeQnYRQ",
"SJxecAdFRX",
"BJly3hutA7",
"rygt-_dF0X"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Thank you for your kind response. We tried to address your requests as below:\n \n1. As suggested, we will add the information about the comparison between “Multi-bit quantization only” case and “Multi-bit-quantization + Viterbi-based binary code encoding\" case in the manuscript when we are allowed to update the ... | [
-1,
6,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
-1,
2,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"SJlvpFOh0X",
"iclr_2019_HkfYOoCcYX",
"SJxecAdFRX",
"iclr_2019_HkfYOoCcYX",
"Byl9RiviCX",
"r1gC_KvsAQ",
"iclr_2019_HkfYOoCcYX",
"SygQztZjCX",
"Hygh6aDcA7",
"B1gIeQnYRQ",
"HygDJA3YRQ",
"rygt-_dF0X",
"BJly3hutA7",
"SyxPPFpDhm",
"SJgAEEpDhQ",
"HJlAUngO2X"
] |
iclr_2019_Hkg4W2AcFm | Overcoming the Disentanglement vs Reconstruction Trade-off via Jacobian Supervision | A major challenge in learning image representations is the disentangling of the factors of variation underlying the image formation. This is typically achieved with an autoencoder architecture where a subset of the latent variables is constrained to correspond to specific factors, and the rest of them are considered nuisance variables. This approach has an important drawback: as the dimension of the nuisance variables is increased, image reconstruction is improved, but the decoder has the flexibility to ignore the specified factors, thus losing the ability to condition the output on them. In this work, we propose to overcome this trade-off by progressively growing the dimension of the latent code, while constraining the Jacobian of the output image with respect to the disentangled variables to remain the same. As a result, the obtained models are effective at both disentangling and reconstruction. We demonstrate the applicability of this method in both unsupervised and supervised scenarios for learning disentangled representations. In a facial attribute manipulation task, we obtain high quality image generation while smoothly controlling dozens of attributes with a single model. This is an order of magnitude more disentangled factors than state-of-the-art methods, while obtaining visually similar or superior results, and avoiding adversarial training. | accepted-poster-papers | The paper proposes a new way to tackle the trade-off between disentanglement and reconstruction, by training a teacher autoencoder that learns to disentangle, then distilling it into a student model. The distillation is encouraged with a loss term that constrains the Jacobian in an interesting way. The qualitative results with image manipulation are interesting and the general idea seems to be well-liked by the reviewers (and myself).
The main weaknesses of the paper seem to be in the evaluation. Disentanglement is not exactly easy to measure as such. But overall the various ablation studies do show that the Jacobian regularization term improves meaningfully over Fader nets. Given the quality of the results and the fact that this work moves the needle in an important (albeit hard to define) area of learning disentangled representations, I think it would be a good piece of work to present at ICLR, so I recommend acceptance. | train | [
"SyxL9vbI27",
"H1gnj_npCm",
"r1gqJD2u2X",
"SyxiD3BG0Q",
"rJlI1wrzAX",
"HklRMVBzA7",
"S1lLdzSG0X",
"BJxo72kq3X",
"rygfJ69oqm",
"B1xizrM-q7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"This paper proposed a novel approach for learning disentangled representation from supervised data (x as the input image, y as different attributes), by learning an encoder E and a decoder D so that (1) D(E(x)) reconstructs the image, (2) E(D(x)) reconstruct the latent vector, in particular for the vectors that ar... | [
5,
-1,
7,
-1,
-1,
-1,
-1,
7,
-1,
-1
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1
] | [
"iclr_2019_Hkg4W2AcFm",
"rJlI1wrzAX",
"iclr_2019_Hkg4W2AcFm",
"r1gqJD2u2X",
"SyxL9vbI27",
"BJxo72kq3X",
"iclr_2019_Hkg4W2AcFm",
"iclr_2019_Hkg4W2AcFm",
"B1xizrM-q7",
"iclr_2019_Hkg4W2AcFm"
] |
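The Jacobian-matching loss discussed in the entry above can be approximated without forming full Jacobians, for example with finite differences along the disentangled coordinates. The sketch below is an illustration under assumptions (an L2 matching loss, shared leading latent coordinates between teacher and student); the paper's exact formulation may differ.

```python
import torch

def jacobian_matching_loss(teacher_dec, student_dec, z, dz=0.1, k=8):
    """Encourage the student decoder to respond to perturbations of the
    first k (disentangled) latent coordinates like the teacher does, via
    finite-difference directional derivatives of the decoded images."""
    loss = torch.zeros(())
    for i in range(k):
        e = torch.zeros_like(z)
        e[:, i] = dz
        # Directional derivatives along latent coordinate i.
        j_teacher = (teacher_dec(z + e) - teacher_dec(z)) / dz
        j_student = (student_dec(z + e) - student_dec(z)) / dz
        loss = loss + ((j_teacher.detach() - j_student) ** 2).mean()
    return loss / k
```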
iclr_2019_HkgEQnRqYQ | RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space | We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links. The success of such a task heavily relies on the ability of modeling and inferring the patterns of (or between) the relations. In this paper, we present a new approach for knowledge graph embedding called RotatE, which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. In addition, we propose a novel self-adversarial negative sampling technique for efficiently and effectively training the RotatE model. Experimental results on multiple benchmark knowledge graphs show that the proposed RotatE model is not only scalable, but also able to infer and model various relation patterns and significantly outperform existing state-of-the-art models for link prediction. | accepted-poster-papers | This paper proposes a knowledge graph completion approach that represents relations as rotations in a complex space; an idea that the reviewers found quite interesting and novel. The authors provide analysis to show how this model can capture symmetry/antisymmetry, inversions, and composition. The authors also introduce a separate contribution of self-adversarial negative sampling, which, combined with complex rotational embeddings, obtains state-of-the-art results on the benchmarks for this task.
The reviewers and the AC identified a number of potential weaknesses in the initial paper: (1) the evaluation only showed the final performance of the approach, and thus it was not clear how much benefit was obtained from adversarial sampling vs the scoring model, or further, how good the results would be for the baselines if the same sampling was used, (2) missing citation of and comparison to a closely related approach (TorusE), and (3) a number of presentation issues early on in the paper.
The reviewers appreciated the authors' comments and the revision, which addressed all of the concerns by including (1) additional experiments comparing performance with and without self-adversarial sampling, and comparisons to TorusE, and (2) improved presentation.
With the revision, the reviewers agreed that this is a worthy paper to include in the conference.
| test | [
"HyeIocuQxN",
"rJg9ItjCCm",
"ByxtN4yRR7",
"rkgm5j1aAQ",
"H1e_OITis7",
"B1gmLa420X",
"Sye6N04n0m",
"SJluiXkhRm",
"Hkxa6j_oAQ",
"H1lMravo07",
"SJxD-H5t0m",
"r1xF475K0m",
"Bkxn0G5YR7",
"Skl2_GqtAm",
"BJx75b5FCX",
"HJlFFR7167",
"rkgeGrg_Tm",
"HJlYlIhn2X",
"HJlUq8jq3X",
"HkxYw_tn3Q"... | [
"public",
"public",
"public",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",... | [
"This is a great paper with strong empirical performance!!\n\nI suppose you have also tried RotatE without self-adversarial training. Was it still better than all the other baselines (without self-adversarial training)? Or is it the combination of RotatE and self-adversarial that is crucial?\n\nI think it is also n... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HkgEQnRqYQ",
"SJxD-H5t0m",
"Sye6N04n0m",
"Hkxa6j_oAQ",
"iclr_2019_HkgEQnRqYQ",
"SJluiXkhRm",
"rkgeGrg_Tm",
"Skl2_GqtAm",
"H1lMravo07",
"r1xF475K0m",
"HJlFFR7167",
"HkxYw_tn3Q",
"HJlUq8jq3X",
"H1e_OITis7",
"HJlYlIhn2X",
"iclr_2019_HkgEQnRqYQ",
"HJlFFR7167",
"iclr_2019_Hkg... |
iclr_2019_HkgSEnA5KQ | Guiding Policies with Language via Meta-Learning | Behavioral skills or policies for autonomous agents are conventionally learned from reward functions, via reinforcement learning, or from demonstrations, via imitation learning. However, both modes of task specification have their disadvantages: reward functions require manual engineering, while demonstrations require a human expert to be able to actually perform the task in order to generate the demonstration. Instruction following from natural language instructions provides an appealing alternative: in the same way that we can specify goals to other humans simply by speaking or writing, we would like to be able to specify tasks for our machines. However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task. In this work, we propose an interactive formulation of the task specification problem, where iterative language corrections are provided to an autonomous agent, guiding it in acquiring the desired skill. Our proposed language-guided policy learning algorithm can integrate an instruction and a sequence of corrections to acquire new skills very quickly. In our experiments, we show that this method can enable a policy to follow instructions and corrections for simulated navigation and manipulation tasks, substantially outperforming direct, non-interactive instruction following. | accepted-poster-papers | The paper proposes a meta-learning approach to "language guided policy learning" where instructions are provided in the form of natural language instructions, rather than in the form of a reward function or through demonstration. A particularly interesting novel feature of the proposed approach is that it can seamlessly incorporate natural language corrections after an initial attempt to solve the task, opening up the direction towards natural instructions through interactive dialogue. The method is empirically shown to be able to learn to navigate environments and manipulate objects more sample efficiently (on test tasks) than approaches without instructions.
The reviewers noted several potential weaknesses: while the problem setting was considered interesting, the empirical validation was seen to be limited. Reviewers noted that only one (simple) domain was studied, and it was unclear if results would hold up in more complex domains. They also noted a lack of comparison to baselines based on prior work (e.g., pre-training).
The authors provided very detailed replies to the reviewer comments, and added very substantial new experiments, including an entire new domain and newly implemented baselines. Reviewers indicated that they are satisfied with the revisions. The AC reviewed the reviewer suggestions and revisions and notes that the additional experiments significantly improve the contribution of the paper. The resulting consensus is that the paper should be accepted.
The AC would like to note that several figures are very small and unreadable when the paper is printed, e.g., figure 7, and suggests that the authors increase figure size (and font size within figures) to ensure legibility. | train | [
"Syx6sF9JnQ",
"ByejewrEAX",
"Hygq4Iy4R7",
"BklVgmpgCQ",
"S1edmre36m",
"HyxheSl2a7",
"S1gowVln67",
"HJxXmVe3pm",
"r1glU6XCnQ",
"Hkx2E2HphX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\n\nUPDATE: I've increased my rating based on the authors' thorough responses and the updates they've made to the paper. However, I still have a concern over the static nature of the experimental environments.\n\n=====================\n\nThis paper proposes the use of iterative, linguistic corrections to guide (ie... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_HkgSEnA5KQ",
"Hygq4Iy4R7",
"BklVgmpgCQ",
"S1edmre36m",
"HyxheSl2a7",
"Syx6sF9JnQ",
"Hkx2E2HphX",
"r1glU6XCnQ",
"iclr_2019_HkgSEnA5KQ",
"iclr_2019_HkgSEnA5KQ"
] |
iclr_2019_HkgTkhRcKQ | AdaShift: Decorrelation and Convergence of Adaptive Learning Rate Methods | Adam has been shown to be unable to converge to the optimal solution in certain cases. Researchers have recently proposed several algorithms to avoid this non-convergence issue of Adam, but their efficiency turns out to be unsatisfactory in practice. In this paper, we provide a new insight into the non-convergence issue of Adam as well as other adaptive learning rate methods. We argue that there exists an inappropriate correlation between gradient gt and the second moment term vt in Adam (t is the timestep), which results in a large gradient being likely to have a small step size while a small gradient may have a large step size. We demonstrate that such unbalanced step sizes are the fundamental cause of non-convergence of Adam, and we further prove that decorrelating vt and gt will lead to an unbiased step size for each gradient, thus solving the non-convergence problem of Adam. Finally, we propose AdaShift, a novel adaptive learning rate method that decorrelates vt and gt by temporal shifting, i.e., using the temporally shifted gradient gt−n to calculate vt. The experimental results demonstrate that AdaShift is able to address the non-convergence issue of Adam, while still maintaining a competitive performance with Adam in terms of both training speed and generalization. | accepted-poster-papers | This paper proposes a new stochastic optimization scheme similar to Adam. The authors claim that Adam can be improved upon by decorrelating the second-moment estimate v_t from gradient estimates g_t. This is done through the temporal decorrelation scheme, as well as block-wise sharing of estimates v_t.
The reviewers agree that the paper is sufficiently well-written, original and significant to be accepted for ICLR, although some points remain unclear after the reviews. The main disadvantage of the method is an increased computational cost (linear in n, though this might be negligible when v_t is shared across blocks). | train | [
"B1gVyyYq0X",
"BklQd4otAQ",
"SkxDrCgdnX",
"r1gToxwR6X",
"BJe4vgwCaX",
"S1gWBeDCpQ",
"r1gzGyM5hX",
"S1guRoZwhQ",
"H1gOA4Dc57",
"Byg39Xum9Q",
"S1eEIQx-9Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public"
] | [
"Thank you so much for the comments and useful references. We will put our effort into the convergence analysis. Hopefully, we will have some convergence analysis in our final version. ",
"I think the authors have further improved the paper, thus I have increased my score to 6. \n\nHowever, some further theoretic... | [
-1,
-1,
6,
-1,
-1,
-1,
6,
9,
-1,
-1,
-1
] | [
-1,
-1,
4,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1
] | [
"BklQd4otAQ",
"BJe4vgwCaX",
"iclr_2019_HkgTkhRcKQ",
"S1guRoZwhQ",
"SkxDrCgdnX",
"r1gzGyM5hX",
"iclr_2019_HkgTkhRcKQ",
"iclr_2019_HkgTkhRcKQ",
"Byg39Xum9Q",
"S1eEIQx-9Q",
"iclr_2019_HkgTkhRcKQ"
] |
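The temporal-shifting mechanism described in the AdaShift entry above can be illustrated with a toy single-parameter loop: the second-moment term is updated from an n-step-delayed gradient, so the current gradient never appears in its own denominator. This sketch omits the paper's first-moment averaging and block-wise sharing of v_t; names and defaults are illustrative.

```python
from collections import deque
import numpy as np

def adashift_toy(grads, lr=0.01, beta2=0.999, n=10, eps=1e-8):
    """Update a single parameter from a stream of gradients. v is driven
    by the delayed gradient g_{t-n+1}, decorrelating it from the current
    gradient g_t used in the numerator of the update."""
    theta, v, steps = 0.0, 0.0, 0
    buffer = deque(maxlen=n)
    for g in grads:
        buffer.append(g)
        if len(buffer) < n:
            continue                         # wait until a delayed gradient exists
        g_delayed = buffer[0]                # the temporally shifted gradient
        v = beta2 * v + (1.0 - beta2) * g_delayed ** 2
        steps += 1
        v_hat = v / (1.0 - beta2 ** steps)   # standard bias correction
        theta -= lr * g / (np.sqrt(v_hat) + eps)
    return theta
```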
iclr_2019_HkgYmhR9KX | AD-VAT: An Asymmetric Dueling mechanism for learning Visual Active Tracking | Visual Active Tracking (VAT) aims at following a target object by autonomously controlling the motion system of a tracker given visual observations. Previous work has shown that the tracker can be trained in a simulator via reinforcement learning and deployed in real-world scenarios. However, during training, such a method requires manually specifying the moving path of the target object to be tracked, which cannot ensure the tracker’s generalization on the unseen object moving patterns. To learn a robust tracker for VAT, in this paper, we propose a novel adversarial RL method which adopts an Asymmetric Dueling mechanism, referred to as AD-VAT. In AD-VAT, both the tracker and the target are approximated by end-to-end neural networks, and are trained via RL in a dueling/competitive manner: i.e., the tracker intends to lock up the target, while the target tries to escape from the tracker. They are asymmetric in that the target is aware of the tracker, but not vice versa. Specifically, besides its own observation, the target is fed with the tracker’s observation and action, and learns to predict the tracker’s reward as an auxiliary task. We show that such an asymmetric dueling mechanism produces a stronger target, which in turn induces a more robust tracker. To stabilize the training, we also propose a novel partial zero-sum reward for the tracker/target. The experimental results, in both 2D and 3D environments, demonstrate that the proposed method leads to a faster convergence in training and yields more robust tracking behaviors in different testing scenarios. For supplementary videos, see: https://www.youtube.com/playlist?list=PL9rZj4Mea7wOZkdajK1TsprRg8iUf51BS
The code is available at https://github.com/zfw1226/active_tracking_rl | accepted-poster-papers | The paper presents an adversarial learning framework for active visual tracking, a tracking setup where the tracker has camera control in order to follow a target object. The paper builds upon Luo et al. 2018 and proposes jointly learning tracker and target policies (as opposed to the tracker policy alone). This automatically creates a curriculum of target trajectory difficulty, as opposed to the engineer designing the target trajectories. The paper further proposes a method for preventing the target from outperforming the tracker too quickly and thus causing its policy to plateau. The experiments presented justify the problem formulation and design choices, and the method outperforms Luo et al. The task considered is very important; active surveillance with drones is just one use case.
A downside of the paper is that certain sentences have English mistakes, such as this one: "The authors learn a policy that maps raw-pixel observation to control signal straightly with a Conv-LSTM network. Not only can it save
the effort in tuning an extra camera controller, but also does it outperform the..." However, overall the manuscript is well written, well structured, and easy to follow. The authors are encouraged to correct any remaining English mistakes in the manuscript. | train | [
"HkefcSaM6m",
"rJlb9G2fp7",
"ryl__Au-a7",
"r1xYj34yp7",
"Hkg8D92gTm",
"rkxScidq27",
"Hyxga8D52m"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have updated our paper during the rebuttal period, which could be summarized as below:\n\na) To emphasize our major contribution and clarify the non-trivial different with Luo et al. (2018), we've rewritten Abstract and modified the Introduction. \nb) We've modified Section 3.3. The motivation for the tracker-a... | [
-1,
-1,
-1,
-1,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2019_HkgYmhR9KX",
"Hyxga8D52m",
"Hkg8D92gTm",
"rkxScidq27",
"iclr_2019_HkgYmhR9KX",
"iclr_2019_HkgYmhR9KX",
"iclr_2019_HkgYmhR9KX"
] |
iclr_2019_HkgqFiAcFm | Marginal Policy Gradients: A Unified Family of Estimators for Bounded Action Spaces with Applications | Many complex domains, such as robotics control and real-time strategy (RTS) games, require an agent to learn a continuous control. In the former, an agent learns a policy over R^d and in the latter, over a discrete set of actions each of which is parametrized by a continuous parameter. Such problems are naturally solved using policy based reinforcement learning (RL) methods, but unfortunately these often suffer from high variance leading to instability and slow convergence. Unnecessary variance is introduced whenever policies over bounded action spaces are modeled using distributions with unbounded support by applying a transformation T to the sampled action before execution in the environment. Recently, the variance reduced clipped action policy gradient (CAPG) was introduced for actions in bounded intervals, but to date no variance reduced methods exist when the action is a direction, something often seen in RTS games. To this end we introduce the angular policy gradient (APG), a stochastic policy gradient method for directional control. With the marginal policy gradients family of estimators we present a unified analysis of the variance reduction properties of APG and CAPG; our results provide a stronger guarantee than existing analyses for CAPG. Experimental results on a popular RTS game and a navigation task show that the APG estimator offers a substantial improvement over the standard policy gradient. | accepted-poster-papers | The paper introduces a new variance reduced policy gradient method, for directional and clipped action spaces, with provable guarantees that the gradient has lower variance. The paper is clearly written and the theory is an important contribution. The experiments provide some preliminary insights that the algorithm could be beneficial in practice. | train | [
"rkg6-yBhJ4",
"Hyl_lXQF3X",
"H1eabnIj67",
"SJlQqs8spQ",
"S1eUaqUjTQ",
"SJgG0gkChX",
"H1gSPxyC3Q"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"After reading the rebuttal and other reviews, I would keep my original scores and think this paper presents some simple (of clipping action spaces and marginalizing distributions to lower dimensions and to take gradients) but very useful results in reducing variance of RL methods with continuous action spaces. To... | [
-1,
7,
-1,
-1,
-1,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
3,
3
] | [
"S1eUaqUjTQ",
"iclr_2019_HkgqFiAcFm",
"Hyl_lXQF3X",
"H1gSPxyC3Q",
"SJgG0gkChX",
"iclr_2019_HkgqFiAcFm",
"iclr_2019_HkgqFiAcFm"
] |
iclr_2019_Hkl5aoR5tm | On Self Modulation for Generative Adversarial Networks | Training Generative Adversarial Networks (GANs) is notoriously challenging. We propose and study an architectural modification, self-modulation, which improves GAN performance across different data sets, architectures, losses, regularizers, and hyperparameter settings. Intuitively, self-modulation allows the intermediate feature maps of a generator to change as a function of the input noise vector. While reminiscent of other conditioning techniques, it requires no labeled data. In a large-scale empirical study we observe a relative decrease of 5%-35% in FID. Furthermore, all else being equal, adding this modification to the generator leads to improved performance in 124/144 (86%) of the studied settings. Self-modulation is a simple architectural change that requires no additional parameter tuning, which suggests that it can be applied readily to any GAN. | accepted-poster-papers | This manuscript proposes an architectural improvement for generative adversarial networks that allows the intermediate layers of a generator to be modulated by the input noise vector using conditional batch normalization. The reviewers find the paper simple and well-supported by extensive experimental results. There were some concerns about the impact of such an empirical study. However, the strength and simplicity of the technique mean that the method could be of practical interest to the ICLR community. | train | [
"rkef957sRm",
"Byl04qxA2X",
"r1gBxvdwpm",
"Bkgu7f_Dp7",
"rylaP-uP6m",
"HJgjezT1Tm",
"rylkjAtu2m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"It appears that Reviewer 2 and I disagree with Reviewer 3 in terms of submission rating. I feel strongly about the submission being publication-worthy, and I would like to challenge Reviewer 2’s score.\n\nThere is ample room in a research conference for empirical contributions, provided the experimentation is carr... | [
-1,
5,
-1,
-1,
-1,
7,
7
] | [
-1,
5,
-1,
-1,
-1,
4,
4
] | [
"Byl04qxA2X",
"iclr_2019_Hkl5aoR5tm",
"Byl04qxA2X",
"HJgjezT1Tm",
"rylkjAtu2m",
"iclr_2019_Hkl5aoR5tm",
"iclr_2019_Hkl5aoR5tm"
] |
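The self-modulation mechanism summarized in the entry above replaces fixed batch-norm scale and shift with functions of the generator's input noise z. The PyTorch sketch below is an illustration; the MLP sizes and the (1 + gamma) parameterization are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SelfModulatedBN(nn.Module):
    """Batch normalization whose scale gamma(z) and shift beta(z) are
    produced by small MLPs from the input noise z -- no labels required."""

    def __init__(self, num_features, z_dim, hidden=32):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, num_features))
        self.beta = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, num_features))

    def forward(self, h, z):
        g = self.gamma(z)[:, :, None, None]  # broadcast over H and W
        b = self.beta(z)[:, :, None, None]
        return (1.0 + g) * self.bn(h) + b    # near-identity at initialization

layer = SelfModulatedBN(num_features=64, z_dim=128)
out = layer(torch.randn(8, 64, 16, 16), torch.randn(8, 128))
```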
iclr_2019_HklKui0ct7 | Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy | When learning from a batch of logged bandit feedback, the discrepancy between the policy to be learned and the off-policy training data imposes statistical and computational challenges. Unlike classical supervised learning and online learning settings, in batch contextual bandit learning, one only has access to a collection of logged feedback from the actions taken by a historical policy, and expects to learn a policy that takes good actions in possibly unseen contexts. Such a batch learning setting is ubiquitous in online and interactive systems, such as ad platforms and recommendation systems. Existing approaches based on inverse propensity weights, such as Inverse Propensity Scoring (IPS) and Policy Optimizer for Exponential Models (POEM), enjoy unbiasedness but often suffer from large mean squared error. In this work, we introduce a new approach named Maximum Likelihood Inverse Propensity Scoring (MLIPS) for batch learning from logged bandit feedback. Instead of using the given historical policy as the proposal in inverse propensity weights, we estimate a maximum likelihood surrogate policy based on the logged action-context pairs, and then use this surrogate policy as the proposal. We prove that MLIPS is asymptotically unbiased, and moreover, has a smaller nonasymptotic mean squared error than IPS. Such an error reduction phenomenon is somewhat surprising as the estimated surrogate policy is less accurate than the given historical policy. Results on multi-label classification problems and a large-scale ad placement dataset demonstrate the empirical effectiveness of MLIPS. Furthermore, the proposed surrogate policy technique is complementary to existing error reduction techniques, and when combined, is able to consistently boost the performance of several widely used approaches. | accepted-poster-papers | This is an interesting paper that shows how off-policy estimation (and optimization) can be improved by explicitly estimating the data logging policy. It is remarkable that the estimation variance can be reduced over using the original logging policy for IPW, although this result depends on the (somewhat impractical) assumption that the parametric form for the true logging policy is known. The reviewers unanimously recommended the paper be accepted. However, there remain criticisms of the theoretical analysis that the authors should take into account in preparing a final version (namely, motivating the assumptions needed to obtain the results, and providing stronger intuitions behind the reduced variance). | train | [
"ByeAL2huRX",
"SJeq-2ZGRQ",
"H1giFhZMC7",
"BJx1SnZM0m",
"H1eGHjd5hX",
"HJxF1VhYhQ",
"ByxKCFcOhQ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have made a revision of our paper. We included 5-6 pages of extra details and proofs. The major changes are summarized as follows:\n\n(1). We highlighted a key fact (orthogonality between $\\Pi$ and $\\tilde{V} - V - \\Pi$) for a better understanding of the main theorem on page 6, before the interpretation of t... | [
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2019_HklKui0ct7",
"H1eGHjd5hX",
"ByxKCFcOhQ",
"HJxF1VhYhQ",
"iclr_2019_HklKui0ct7",
"iclr_2019_HklKui0ct7",
"iclr_2019_HklKui0ct7"
] |
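The estimators discussed in the HklKui0ct7 record, written out in standard notation: with logged tuples $(x_i, a_i, r_i)$ collected under the historical policy $\pi_0$, IPS and its MLIPS variant (which swaps in a maximum-likelihood surrogate $\hat\pi_0$) value a target policy $\pi$ as

$$\hat V_{\mathrm{IPS}}(\pi) = \frac{1}{n}\sum_{i=1}^{n} \frac{\pi(a_i \mid x_i)}{\pi_0(a_i \mid x_i)}\, r_i, \qquad \hat V_{\mathrm{MLIPS}}(\pi) = \frac{1}{n}\sum_{i=1}^{n} \frac{\pi(a_i \mid x_i)}{\hat\pi_0(a_i \mid x_i)}\, r_i, \qquad \hat\pi_0 = \arg\max_{\pi'} \sum_{i=1}^{n} \log \pi'(a_i \mid x_i).$$

This is a restatement of the abstract rather than the paper's exact notation; the meta-review's caveat is that $\hat\pi_0$ is fit within the (assumed known) parametric family of the true logging policy.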
iclr_2019_HklSf3CqKm | Subgradient Descent Learns Orthogonal Dictionaries | This paper concerns dictionary learning, i.e., sparse coding, a fundamental representation learning problem. We show that a subgradient descent algorithm, with random initialization, can recover orthogonal dictionaries on a natural nonsmooth, nonconvex L1 minimization formulation of the problem, under a mild statistical assumption on the data. This is in contrast to previous provable methods that require either expensive computation or delicate initialization schemes. Our analysis develops several tools for characterizing landscapes of nonsmooth functions, which might be of independent interest for provable training of deep networks with nonsmooth activations (e.g., ReLU), among other applications. Preliminary synthetic and real experiments corroborate our analysis and show that our algorithm works well empirically in recovering orthogonal dictionaries. | accepted-poster-papers | This paper studies nonsmooth and nonconvex optimization and provides a global analysis for orthogonal dictionary learning. The referees indicate that the analysis is highly nontrivial compared with existing work.
The experiments fall a bit short and the relation to the loss landscape of neural networks could be described more clearly.
The reviewers pointed out that the experiments section was too short. The revision included a few more experiments. The paper has a theoretical focus, and scores high ratings there.
The confidence levels of the reviewers are relatively moderate, with only one confident reviewer. However, all five reviewers regard this paper positively, in particular the confident reviewer. | train | [
"r1g5pn1527",
"SJgbLGX96Q",
"rkg-4f75pQ",
"rkgvlzm96Q",
"H1gObZmc6m",
"ByeePKFwaX",
"r1gomAzrpQ",
"Bkxa-CzHpX",
"HJeARTGrpX",
"BklN3TzB6X",
"SkxX-tyrT7",
"Byl9W4UmTQ",
"H1lW-zgChm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a subgradient descent method to learn orthogonal, squared /complete n x n dictionaries under l1 norm regularization. The problem is interesting and relevant, and the paper, or at least the first part, is clear.\n\nThe most interesting property is that the solution does not depend on the diction... | [
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2019_HklSf3CqKm",
"Bkxa-CzHpX",
"r1gomAzrpQ",
"ByeePKFwaX",
"iclr_2019_HklSf3CqKm",
"iclr_2019_HklSf3CqKm",
"SkxX-tyrT7",
"r1g5pn1527",
"H1lW-zgChm",
"Byl9W4UmTQ",
"iclr_2019_HklSf3CqKm",
"iclr_2019_HklSf3CqKm",
"iclr_2019_HklSf3CqKm"
] |
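One concrete instance of the nonsmooth, nonconvex L1 formulation referenced in the HklSf3CqKm record recovers a row of the orthogonal dictionary by seeking a unit-norm direction that sparsifies the data matrix $Y$; this specific form is an assumption consistent with the abstract rather than a quotation from the paper:

$$\min_{q \in \mathbb{R}^n:\; \|q\|_2 = 1}\; \frac{1}{m}\, \|q^\top Y\|_1,$$

solved with (Riemannian) subgradient descent from random initialization.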
iclr_2019_HklY120cYm | ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech | In this work, we propose a new solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (van Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we introduce the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model. | accepted-poster-papers | The authors discuss an improved distillation scheme for parallel WaveNet using a Gaussian inverse autoregressive flow, which can be computed in closed-form, thus simplifying training. The work received favorable comments from the reviewers, along with a number of suggestions for improvement which have improved the draft considerably. The AC agrees with the reviewers that the work is a valuable contribution, particularly in the context of end-to-end neural text-to-speech systems. | val | [
"rJx4An_ZT7",
"r1xoUWp8k4",
"BygO0toz67",
"HyxgTgJH14",
"rygsCGE3CQ",
"r1efUdzcRX",
"HkxgRJx5Rm",
"SJgSu_vK07",
"rklM6VPtA7",
"B1lEY-wcjX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"After reading other reviews and author comments, I have raised my rating to a 6. My main concerns remain (lack of significant contribution and lack of an ablation study with more comprehensive experiments). However, I'm not against the paper as an interesting finding in and of itself. It would be great if the auth... | [
6,
-1,
9,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_HklY120cYm",
"HyxgTgJH14",
"iclr_2019_HklY120cYm",
"rklM6VPtA7",
"r1efUdzcRX",
"B1lEY-wcjX",
"rJx4An_ZT7",
"BygO0toz67",
"BygO0toz67",
"iclr_2019_HklY120cYm"
] |
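The closed-form KL divergence that makes the HklY120cYm distillation efficient is the standard Gaussian-to-Gaussian expression: for a student output distribution $\mathcal{N}(\mu_1, \sigma_1^2)$ and teacher $\mathcal{N}(\mu_2, \sigma_2^2)$,

$$\mathrm{KL}\big(\mathcal{N}(\mu_1,\sigma_1^2)\,\|\,\mathcal{N}(\mu_2,\sigma_2^2)\big) = \log\frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} - \frac{1}{2},$$

which can be evaluated and differentiated without sampling; the regularizer the abstract mentions is applied on top of this term and is not reproduced here.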
iclr_2019_HkljioCcFQ | MARGINALIZED AVERAGE ATTENTIONAL NETWORK FOR WEAKLY-SUPERVISED LEARNING | In weakly-supervised temporal action localization, previous works have failed to locate dense and integral regions for each entire action due to the overestimation of the most salient regions. To alleviate this issue, we propose a marginalized average attentional network (MAAN) to suppress the dominant response of the most salient regions in a principled manner. The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion. MAA samples multiple subsets from the video snippet features according to a set of latent discriminative probabilities and takes the expectation over all the averaged subset features. Theoretically, we prove that the MAA module with learned latent discriminative probabilities successfully reduces the difference in responses between the most salient regions and the others. Therefore, MAAN is able to generate better class activation sequences and identify dense and integral action regions in the videos. Moreover, we propose a fast algorithm to reduce the complexity of constructing MAA from O(2^T) to O(T^2). Extensive experiments on two large-scale video datasets show that our MAAN achieves a superior performance on weakly-supervised temporal action localization.
 | accepted-poster-papers | The paper proposes a new attentional pooling mechanism that potentially addresses the issues of simple attention-based weighted averaging (where discriminative parts/frames might get disproportionately high attentions). A nice contribution of the paper is to propose an alternative mechanism with theoretical proofs, and it also presents a method for fast recurrent computation. The experimental results show that the proposed attention mechanism improves over prior methods (e.g., STPN) on THUMOS14 and ActivityNet1.3 datasets. In terms of weaknesses: (1) the computational cost may be quite significant. (2) the proposed method should be evaluated over several tasks beyond activity recognition, but it’s unclear how it would work.
The authors provided positive proof-of-concept results on the weakly supervised object localization task, improving over CAM-based methods. However, CAM is a reasonable baseline but not the strongest method, and weakly-supervised object recognition/segmentation are much more competitive domains, so it's unclear if the proposed method would achieve the state-of-the-art by simply replacing the weighted-averaging attentional pooling with the proposed attention mechanism. In addition, how to perform attentional pooling over images is not clearly described (it’s not clear how the 1D sequence-based recurrent attention method can be extended to 2-D cases). However, this would not be a reason to reject the paper.
Finally, the paper’s presentation would need improvement. I would suggest that the authors give more intuitive explanations and rationale before going into technical details. The paper starts with Figure 1, which is not really well motivated/explained, so it could be moved to a later part. Overall, there are interesting technical contributions with positive results, but there are issues to be addressed.
| train | [
"H1lBtyH214",
"r1xzqeXtAm",
"r1lyqwftCQ",
"SkgHR0-FAQ",
"HkgzxyE0hm",
"Syec-wnqn7",
"rkxNicb5nm"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I appreciate the updated results on weakly-supervised object localization on images. Overall, I think the paper has reasonable contributions. The improvement in THUMOS14 dataset over STPN is not significant, but the results on ActivityNet look promising and the results on weakly-supervised object localization are ... | [
-1,
-1,
-1,
-1,
5,
6,
3
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"r1lyqwftCQ",
"rkxNicb5nm",
"Syec-wnqn7",
"HkgzxyE0hm",
"iclr_2019_HkljioCcFQ",
"iclr_2019_HkljioCcFQ",
"iclr_2019_HkljioCcFQ"
] |
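The MAA module of the HkljioCcFQ record, restated in formula form (the paper's exact notation may differ): given snippet features $x_1,\dots,x_T$ and latent discriminative probabilities $p_1,\dots,p_T$, sample inclusion indicators $z_t \sim \mathrm{Bernoulli}(p_t)$ and take the expectation of the subset average,

$$\bar{x} = \mathbb{E}_{z_t \sim \mathrm{Bernoulli}(p_t)}\!\left[\frac{\sum_{t=1}^{T} z_t\, x_t}{\sum_{t=1}^{T} z_t}\right],$$

which the paper computes exactly with an $O(T^2)$ recurrence rather than by enumerating all $2^T$ subsets.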
iclr_2019_HkxKH2AcFm | Towards GAN Benchmarks Which Require Generalization | For many evaluation metrics commonly used as benchmarks for unconditional image generation, trivially memorizing the training set attains a better score than models which are considered state-of-the-art; we consider this problematic.
We clarify a necessary condition for an evaluation metric not to behave this way: estimating the function must require a large sample from the model. In search of such a metric, we turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions. The resulting benchmarks cannot be ``won'' by training set memorization, while still being perceptually correlated and computable only from samples. We survey past work on using NNDs for evaluation, implement an example black-box metric based on these ideas, and validate experimentally that it can measure a notion of generalization.
 | accepted-poster-papers | The paper argues for a GAN evaluation metric that needs a sufficiently large number of generated samples to evaluate. The authors propose a metric based on an existing set of divergences computed with neural net representations. R2 and R3 appreciate the motivation behind the proposed method and the discussion in the paper to that end. The proposed NND based metric has some limitations as pointed out by R2/R3 and also acknowledged by the authors -- being biased towards GANs learned with the same NND metric; the challenge in choosing the capacity of the metric neural network; being computationally expensive, etc. However, these points are discussed well in the paper, and R2 and R3 are in favor of accepting the paper (with R3 bumping their score up after the author response).
R1's main concern is the lack of rigorous theoretical analysis of the proposed metric, which the AC agrees with, but is willing to overlook, given that it is nontrivial and most existing evaluation metrics in the literature also lack this.
Overall, this is a borderline paper but falling on the accept side according to the AC. | val | [
"BkxqKQQ527",
"SJxruT-rCm",
"HkgTQTbBAm",
"HJlDkTZHCQ",
"H1gq7h-BAm",
"Bkx0g3Yn27",
"HJl4hUgtnX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThe paper looks at the problem of benchmarking models that unconditionally generate images. In particular they focus on GAN models and discuss the Inception Score (IS) and Fréchet Inception Distance (FID) metrics. The authors argue that a good benchmark should not have a trivial solution (e.g. memorising... | [
7,
-1,
-1,
-1,
-1,
6,
3
] | [
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_HkxKH2AcFm",
"HJl4hUgtnX",
"BkxqKQQ527",
"Bkx0g3Yn27",
"iclr_2019_HkxKH2AcFm",
"iclr_2019_HkxKH2AcFm",
"iclr_2019_HkxKH2AcFm"
] |
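The neural network divergences (NNDs) behind the HkxKH2AcFm record are integral probability metrics restricted to a neural function class $\mathcal{F}$:

$$d_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}}\; \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)],$$

estimated in practice by training a critic network on large samples from the model and from the data, which is why computing the benchmark requires a large sample from the model and cannot be "won" by memorizing the training set.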
iclr_2019_HkxLXnAcFQ | A Closer Look at Few-shot Classification | Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the gap across methods including the baseline, 2) a slightly modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic, cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms. | accepted-poster-papers | This paper provides a number of interesting experiments for few-shot learning using the CUB and miniImagenet datasets. One of the especially intriguing experiments is the analysis of backbone depth in the architecture, as it relates to few-shot performance. The strong performance of the baseline and baseline++ are quite surprising. Overall the reviewers agree that this paper raises a number of questions about current few-shot learning approaches, especially how they relate to architecture and dataset characteristics.
A few minor comments:
- In table 1, matching nets are mistakenly attributed to Ravi and Larochelle. Should be Vinyals et al.
- The notation for cosine similarity in section 3.2 is odd. It looks like you’re computing some cosine function of two vectors which doesn’t make sense. Please clarify this.
- There are a few results that were promised after the revision deadline; please be sure to include these in the final draft.
| train | [
"HklXEE2FeE",
"B1gwuwZLeV",
"S1lH_j94gN",
"S1eOn4vQeV",
"B1lauBoZlE",
"Byl1iLIhRX",
"HylrSpd9RQ",
"S1ehsqeSAX",
"BJlgV0er07",
"rklgcRlSC7",
"H1xDM6erA7",
"B1l0Vper0X",
"HyeAj7bFnX",
"HJlAtk3vhm",
"r1xNrc0Ts7"
] | [
"public",
"author",
"public",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the answers. \nI do appreciate this work. It provides rigorous experiments.\n\n",
"Hi, thanks for your questions! We reply to the three questions below.\n\n1. Did the authors run your learning for matching networks, prototypical networks, maml, and relation networks with episodic training (sampled ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
4
] | [
"B1gwuwZLeV",
"S1lH_j94gN",
"iclr_2019_HkxLXnAcFQ",
"B1lauBoZlE",
"Byl1iLIhRX",
"HylrSpd9RQ",
"H1xDM6erA7",
"HyeAj7bFnX",
"r1xNrc0Ts7",
"r1xNrc0Ts7",
"HJlAtk3vhm",
"HJlAtk3vhm",
"iclr_2019_HkxLXnAcFQ",
"iclr_2019_HkxLXnAcFQ",
"iclr_2019_HkxLXnAcFQ"
] |
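On the meta-review's notation comment: the intended quantity in a cosine-classifier baseline is the cosine similarity between the embedded input $f_\theta(x)$ and a per-class weight vector $w_k$,

$$s_k(x) = \frac{f_\theta(x)^\top w_k}{\|f_\theta(x)\|_2\, \|w_k\|_2},$$

i.e., the cosine of the angle between the two vectors (optionally scaled before the softmax), not a cosine function applied to a pair of vectors.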
iclr_2019_HkxStoC5F7 | Meta-Learning Probabilistic Inference for Prediction | This paper introduces a new framework for data efficient and versatile learning. Specifically:
1) We develop ML-PIP, a general framework for Meta-Learning approximate Probabilistic Inference for Prediction. ML-PIP extends existing probabilistic interpretations of meta-learning to cover a broad class of methods.
2) We introduce Versa, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass. Versa substitutes optimization at test time with forward passes through inference networks, amortizing the cost of inference and relieving the need for second derivatives during training.
3) We evaluate Versa on benchmark datasets where the method sets new state-of-the-art results, and can handle an arbitrary number of shots and, for classification, arbitrary numbers of classes at train and test time. The power of the approach is then demonstrated through a challenging few-shot ShapeNet view reconstruction task. | accepted-poster-papers | The paper proposes a decision-theoretic framework for meta-learning. The ideas and analysis are interesting and well-motivated, and the experiments are thorough. The primary concerns of the reviewers have been addressed in new revisions of the paper. The reviewers all agree that the paper should be accepted. Hence, I recommend acceptance. | train | [
"H1ghXV0kgN",
"ByldnORAJV",
"SygBAooCkE",
"B1gDhDl0y4",
"HyxhLmE6JN",
"r1eb9eOnyN",
"rJe6s67O6m",
"r1xx7y4dTQ",
"Hye5i0QOpQ",
"rkxV4A7_am",
"Byxh3JK9nQ",
"r1llzOxFhX",
"Syewq7hpoX"
] | [
"author",
"public",
"author",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the follow up questions.\n\nThe experiments with 1 task per batch yield the following result on 5-way 5-shot learning with Versa: (66.75 + / - 0.9)%. This is within the error bars of the current numbers in the paper, and above Prototypical networks trained and tested on 5-way (65.77 + / - 0.7)%. Further... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"ByldnORAJV",
"SygBAooCkE",
"B1gDhDl0y4",
"HyxhLmE6JN",
"r1eb9eOnyN",
"iclr_2019_HkxStoC5F7",
"iclr_2019_HkxStoC5F7",
"r1llzOxFhX",
"Byxh3JK9nQ",
"Syewq7hpoX",
"iclr_2019_HkxStoC5F7",
"iclr_2019_HkxStoC5F7",
"iclr_2019_HkxStoC5F7"
] |
iclr_2019_HkxaFoC9KQ | Deep reinforcement learning with relational inductive biases | We introduce an approach for augmenting model-free deep reinforcement learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability. Our architecture encodes an image as a set of vectors, and applies an iterative message-passing procedure to discover and reason about relevant entities and relations in a scene. In six of seven StarCraft II Learning Environment mini-games, our agent achieved state-of-the-art performance, and surpassed human grandmaster-level on four. In a novel navigation and planning task, our agent's performance and learning efficiency far exceeded non-relational baselines, and it was able to generalize to more complex scenes than it had experienced during training. Moreover, when we examined its learned internal representations, they reflected important structure about the problem and the agent's intentions. The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases. Our experiments show this approach can offer advantages in efficiency, generalization, and interpretability, and can scale up to meet some of the most challenging test environments in modern artificial intelligence. | accepted-poster-papers | The paper presents a family of models for relational reasoning over structured representations. The experiments show good results in learning efficiency and generalization, in Box-World (grid world) and StarCraft 2 mini-games, trained through reinforcement (IMPALA/off-policy A2C).
The final version would benefit from more qualitative and/or quantitative details in the experimental section, as noted by all reviewers.
The reviewers all agreed that this is worthy of publication at ICLR 2019. E.g. "The paper clearly demonstrates the utility of relational inductive biases in reinforcement learning." (R3) | test | [
"BJxkrbE5CX",
"ByxqT6b9RQ",
"BJxWAEv0nX",
"ryemxVtKAQ",
"HJeh45lmCm",
"H1gQS1H367",
"B1lNr-4nT7",
"SJg6aYHuaQ",
"Syg-myA9n7",
"Bkee1fmqhX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the response. Most of my concerns are addressed. I think this work is a nice contribution to the community. ",
"I believe the authors have addressed most of my comments and the revision has certainly improved the quality of the paper. I still think the overall contribution of the paper is very limited... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"B1lNr-4nT7",
"SJg6aYHuaQ",
"iclr_2019_HkxaFoC9KQ",
"iclr_2019_HkxaFoC9KQ",
"H1gQS1H367",
"Bkee1fmqhX",
"Syg-myA9n7",
"BJxWAEv0nX",
"iclr_2019_HkxaFoC9KQ",
"iclr_2019_HkxaFoC9KQ"
] |
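The iterative message passing in the HkxaFoC9KQ record operates on the set of entity vectors $E \in \mathbb{R}^{N \times d}$ encoded from the image. Assuming a Transformer-style dot-product self-attention block (consistent with the description above, though head counts and layer details are omitted here):

$$A = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d}}\right) V, \qquad Q = E W_Q,\quad K = E W_K,\quad V = E W_V,$$

with the attended entities passed through a shared MLP and the block applied iteratively to propagate relational information across entities.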
iclr_2019_HkxjYoCqKX | Relaxed Quantization for Discretized Neural Networks | Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that can allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent. We experimentally validate the performance of our method on MNIST, CIFAR 10 and Imagenet classification. | accepted-poster-papers | This paper proposes an effective method to train neural networks with quantized, reduced precision. It's a fairly straightforward idea with good results and solid empirical work. Reviewers have a consensus on acceptance. | train | [
"H1e-rddupm",
"r1lmTPcDp7",
"ByxwE2ROTm",
"S1gcOCLUpX",
"SJeTsd3tpX",
"H1eXz1E_T7",
"SJgl-zVD6m",
"B1lLTW4P6X",
"BylHD-NwTm",
"ryxyLkND6Q",
"rygmk1EDT7",
"rJevdabGpQ",
"BygITb1laQ",
"SJgFk25qhQ",
"rkgjbEeYnm"
] | [
"author",
"public",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewers and commenters,\n\nWe have updated the submission to include all of the discussed points, except for the learning curves for VGG as we are currently rerunning the experiments in order to track them. We will perform another update as soon as that is finished. \n\nPlease also note that we have updated... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2019_HkxjYoCqKX",
"rygmk1EDT7",
"H1eXz1E_T7",
"iclr_2019_HkxjYoCqKX",
"ByxwE2ROTm",
"r1lmTPcDp7",
"rkgjbEeYnm",
"SJgFk25qhQ",
"BygITb1laQ",
"rJevdabGpQ",
"S1gcOCLUpX",
"iclr_2019_HkxjYoCqKX",
"iclr_2019_HkxjYoCqKX",
"iclr_2019_HkxjYoCqKX",
"iclr_2019_HkxjYoCqKX"
] |
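The special case noted in the HkxjYoCqKX abstract — stochastic rounding as a categorical distribution over the grid — is simple to state: for a value $w$ between adjacent grid points $g_i \le w \le g_{i+1}$,

$$p(\hat{w} = g_{i+1}) = \frac{w - g_i}{g_{i+1} - g_i}, \qquad p(\hat{w} = g_i) = 1 - p(\hat{w} = g_{i+1}),$$

so that $\mathbb{E}[\hat{w}] = w$. The paper's relaxation generalizes this to a smoothed categorical over the full grid whose probabilities, and the grid itself, can be optimized by gradient descent.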
iclr_2019_HkzRQhR9YX | Tree-Structured Recurrent Switching Linear Dynamical Systems for Multi-Scale Modeling | Many real-world systems are governed by complex, nonlinear dynamics. By modeling these dynamics, we can gain insight into how these systems work, make predictions about how they will behave, and develop strategies for controlling them. While there are many methods for modeling nonlinear dynamical systems, existing techniques face a trade-off between offering interpretable descriptions and making accurate predictions. Here, we develop a class of models that aims to achieve both simultaneously, smoothly interpolating between simple descriptions and more complex, yet also more accurate models. Our probabilistic model achieves this multi-scale property through a hierarchy of locally linear dynamics that jointly approximate global nonlinear dynamics. We call it the tree-structured recurrent switching linear dynamical system. To fit this model, we present a fully-Bayesian sampling procedure using Polya-Gamma data augmentation to allow for fast and conjugate Gibbs sampling. Through a variety of synthetic and real examples, we show how these models outperform existing methods in both interpretability and predictive capability. | accepted-poster-papers | This paper presents a recurrent tree-structured linear dynamical system to model the dynamics of a complex nonlinear dynamical system. All reviewers agree that the paper is interesting and useful, and is likely to have an impact in the community. Some of the doubts that reviewers had were resolved after the rebuttal period.
Overall, this is a good paper, and I recommend acceptance. | train | [
"Hyx_x1UsyN",
"rygVmgaR3Q",
"SyetRg5a3Q",
"S1emGeRIyE",
"HJeZ36huRX",
"ryxsWdw927",
"r1eVWiHqRQ",
"ryxq_63dRm",
"B1eKkCndA7",
"rke9PnnOCQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Thanks to the authors for the detailed and sufficient contents added to the appendix. I am satisfied with the new proof provided by the author and am willing to support it to be accepted. My score to the paper is also updated accordingly.",
"This paper introduces a probabilistic model to model nonlinear dynamic ... | [
-1,
7,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1
] | [
-1,
2,
2,
-1,
-1,
4,
-1,
-1,
-1,
-1
] | [
"ryxq_63dRm",
"iclr_2019_HkzRQhR9YX",
"iclr_2019_HkzRQhR9YX",
"HJeZ36huRX",
"SyetRg5a3Q",
"iclr_2019_HkzRQhR9YX",
"B1eKkCndA7",
"rygVmgaR3Q",
"ryxsWdw927",
"iclr_2019_HkzRQhR9YX"
] |
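At its core, the HkzRQhR9YX model composes locally linear dynamics: conditioned on a discrete state $z_t$ selected by the (tree-structured, recurrent) switching mechanism, the continuous state evolves linearly. A schematic form, with the tree-structured stick-breaking prior over $z_t$ omitted:

$$x_{t+1} = A_{z_t} x_t + b_{z_t} + \nu_t, \qquad \nu_t \sim \mathcal{N}(0, Q_{z_t}).$$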
iclr_2019_HkzSQhCcK7 | STCN: Stochastic Temporal Convolutional Networks | Convolutional architectures have recently been shown to be competitive on many sequence modelling tasks when compared to the de-facto standard of recurrent neural networks (RNNs), while providing computational and modelling advantages due to inherent parallelism. However, there currently remains a performance gap to more expressive stochastic RNN variants, especially those with several layers of dependent random variables. In this work, we propose stochastic temporal convolutional networks (STCNs), a novel architecture that combines the computational advantages of temporal convolutional networks (TCN) with the representational power and robustness of stochastic latent spaces. In particular, we propose a hierarchy of stochastic latent variables that captures temporal dependencies at different time-scales. The architecture is modular and flexible due to the decoupling of the deterministic and stochastic layers. We show that the proposed architecture achieves state of the art log-likelihoods across several tasks. Finally, the model is capable of predicting high-quality synthetic samples over a long-range temporal horizon in modelling of handwritten text. | accepted-poster-papers | The paper presents a generative model of sequences based on the VAE framework, where the generative model is given by a CNN with causal and dilated connections.
Novelty of the method is limited; it mainly consists of bringing together the ideas of causal and dilated convolutions and the VAE framework. However, knowing how well this performs is valuable to the community.
The proposed method appears to have significant benefits, as shown in experiments. The result on MNIST is, however, so strong that it seems incorrect; more digging into this result, or the source code, would have been better. | test | [
"SkgDcXX-x4",
"Skl3BDFoA7",
"SJghxFJ5hm",
"HyxGRY_Mk4",
"ByxYKYdz1E",
"Syg22hUs2Q",
"SJlBz456pX",
"BJeA9X5TTQ",
"r1lmRf5p6Q",
"S1xUAx5667",
"BkeTXrG6nm"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"To better understand if the experimental improvements shown in our paper only stem from the hierarchical latent space or whether the synergy between the dilated CNNs and latent variable hierarchy is important, we ran additional experiments (as suggested by R1). We replaced the deterministic TCN blocks with LSTM ce... | [
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
5
] | [
"BkeTXrG6nm",
"SJlBz456pX",
"iclr_2019_HkzSQhCcK7",
"Syg22hUs2Q",
"Skl3BDFoA7",
"iclr_2019_HkzSQhCcK7",
"SJghxFJ5hm",
"Syg22hUs2Q",
"BkeTXrG6nm",
"iclr_2019_HkzSQhCcK7",
"iclr_2019_HkzSQhCcK7"
] |
iclr_2019_HyEtjoCqFX | Soft Q-Learning with Mutual-Information Regularization | We propose a reinforcement learning (RL) algorithm that uses mutual-information regularization to optimize a prior action distribution for better performance and exploration. Entropy-based regularization has previously been shown to improve both exploration and robustness in challenging sequential decision-making tasks. It does so by encouraging policies to put probability mass on all actions. However, entropy regularization might be undesirable when actions have significantly different importance. In this paper, we propose a theoretically motivated framework that dynamically weights the importance of actions by using the mutual-information. In particular, we express the RL problem as an inference problem where the prior probability distribution over actions is subject to optimization. We show that the prior optimization introduces a mutual-information regularizer in the RL objective. This regularizer encourages the policy to be close to a non-uniform distribution that assigns higher probability mass to more important actions. We empirically demonstrate that our method significantly improves over entropy regularization methods and unregularized methods. | accepted-poster-papers | The paper proposes a new RL algorithm (MIRL) in the control-as-inference framework that learns a state-independent action prior. A connection is provided to mutual information regularization. Compared to entropic regularization, this approach is expected to work better when actions have significantly different importance. The algorithm is shown to beat baselines in 11 out of 19 Atari games.
The paper is well written. The derivation is novel, and the resulting algorithm is interesting and has good empirical results. A few concerns were raised in initial reviews, including certain questions about experiments and potential negative impacts of the use of nonuniform action priors in MIRL. The author responses and the new version were quite helpful, and all reviewers agree the paper is an interesting contribution.
In a revised version, the authors are encouraged to
(1) include a discussion of when MIRL might fail, and
(2) improve the related work section to compare the proposed method to other entropy regularized RL (sometimes under a different name in the literature), for example the following recent works and the references therein:
https://arxiv.org/abs/1705.07798
http://proceedings.mlr.press/v70/asadi17a.html
http://papers.nips.cc/paper/6870-bridging-the-gap-between-value-and-policy-based-reinforcement-learning
http://proceedings.mlr.press/v80/dai18c.html | train | [
"HJe0z7jRJE",
"Hkg8ZQo0k4",
"ryea6zsRy4",
"rkeC9MsAkE",
"SkeEX3Q6JV",
"ryl__Ymjh7",
"SkgnPit3k4",
"SyejfjKhkN",
"H1l9u2InyN",
"Byx_Yunv37",
"HJepxiJ507",
"SJe43cJcR7",
"BJgtvCRFCQ",
"rJl62p0F0X",
"B1lIeTW537",
"Skgno2oEhX",
"rJx6sNdCom",
"rkxW2SgCo7",
"HJxlECDns7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"We are thankful to the reviewer for noticing the improvements and raising the score. \n",
"We thank the reviewer for appreciating the improvements of the paper. \n\n\nThe attached link indeed shows a different epsilon value for evaluation (and other hyperparameters) used in this particular DQN implementation. An... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1
] | [
"Byx_Yunv37",
"SyejfjKhkN",
"H1l9u2InyN",
"SkeEX3Q6JV",
"rJl62p0F0X",
"iclr_2019_HyEtjoCqFX",
"HJepxiJ507",
"SJe43cJcR7",
"BJgtvCRFCQ",
"iclr_2019_HyEtjoCqFX",
"SJe43cJcR7",
"Byx_Yunv37",
"B1lIeTW537",
"ryl__Ymjh7",
"iclr_2019_HyEtjoCqFX",
"rJx6sNdCom",
"rkxW2SgCo7",
"HJxlECDns7",
... |
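The HyEtjoCqFX objective can be sketched as KL-regularized RL with an optimized, state-independent action prior $\rho$:

$$\max_{\pi,\,\rho}\;\; \mathbb{E}_{\pi}\!\left[\sum_t r_t\right] - \frac{1}{\beta} \sum_t \mathbb{E}_{s_t}\!\left[\mathrm{KL}\big(\pi(\cdot \mid s_t)\,\|\,\rho\big)\right].$$

For fixed $\pi$, the expected KL is minimized by the marginal $\rho(a) = \mathbb{E}_s[\pi(a \mid s)]$, at which point the penalty equals the mutual information $I(S; A)$ — the regularizer described in the abstract; a uniform $\rho$ recovers standard entropy regularization up to a constant.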
iclr_2019_HyGBdo0qFm | On the Turing Completeness of Modern Neural Network Architectures | Alternatives to recurrent neural networks, in particular, architectures based on attention or convolutions, have been gaining momentum for processing input sequences. In spite of their relevance, the computational properties of these alternatives have not yet been fully explored. We study the computational power of two of the most paradigmatic architectures exemplifying these mechanisms: the Transformer (Vaswani et al., 2017) and the Neural GPU (Kaiser & Sutskever, 2016). We show both models to be Turing complete exclusively based on their capacity to compute and access internal dense representations of the data. In particular, neither the Transformer nor the Neural GPU requires access to an external memory to become Turing complete. Our study also reveals some minimal sets of elements needed to obtain these completeness results. | accepted-poster-papers | This paper provides a theoretical analysis of the Turing completeness of popular neural network architectures, specifically Neural Transformers and the Neural GPU. The reviewers agreed that this paper provides a meaningful theoretical contribution and should be accepted to the conference. Work of a theoretical nature is, amongst other types of work, called for by the ICLR CFP, but is not a very popular category for submissions, nor is it an easy one. As such, I am happy to follow the reviewers' recommendation and support this paper. | train | [
"HkxVcSSeg4",
"r1lxKiAJlV",
"H1gfQ37Cy4",
"BygjD2mA1E",
"BJl39ddR37",
"S1lRigk1y4",
"SkxNcv55nm",
"SkllClJky4",
"rkxzZyk1kN",
"SJedZxpFAX",
"SyemDwc_AX",
"SJgnoh3dp7",
"Hkg1123_pX",
"S1xAQn3dpX",
"Syx12inOam",
"r1ldhR0Knm",
"Syla8U8ZcQ",
"rJgJkiz-9X"
] | [
"author",
"public",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"Thanks for your comment. \n\nWe believe that your doubt has been already clarified by the authors of the paper mentioned in your comment (\"Universal Transformers\"), and we thank the authors for their response. We just want to emphasize that our results only hold when unbounded precision is admitted, which is a s... | [
-1,
-1,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1
] | [
-1,
-1,
-1,
-1,
2,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1
] | [
"H1gfQ37Cy4",
"H1gfQ37Cy4",
"iclr_2019_HyGBdo0qFm",
"SkxNcv55nm",
"iclr_2019_HyGBdo0qFm",
"BJl39ddR37",
"iclr_2019_HyGBdo0qFm",
"rkxzZyk1kN",
"SyemDwc_AX",
"SyemDwc_AX",
"Hkg1123_pX",
"Syx12inOam",
"Syx12inOam",
"Syx12inOam",
"iclr_2019_HyGBdo0qFm",
"iclr_2019_HyGBdo0qFm",
"rJgJkiz-9... |
iclr_2019_HyGEM3C9KQ | Improving Differentiable Neural Computers Through Memory Masking, De-allocation, and Link Distribution Sharpness Control | The Differentiable Neural Computer (DNC) can learn algorithmic and question answering tasks. An analysis of its internal activation patterns reveals three problems: Most importantly, the lack of key-value separation makes the address distribution resulting from content-based look-up noisy and flat, since the value influences the score calculation, although only the key should. Second, DNC's de-allocation of memory results in aliasing, which is a problem for content-based look-up. Thirdly, chaining memory reads with the temporal linkage matrix exponentially degrades the quality of the address distribution. Our proposed fixes of these problems yield improved performance on arithmetic tasks, and also improve the mean error rate on the bAbI question answering dataset by 43%. | accepted-poster-papers |
pros:
- Identification of several interesting problems with the original DNC model: masked attention, erasion of de-allocated elements, and sharpened temporal links
- An improved architecture which addresses the issues and shows improved performance on synthetic memory tasks and bAbI over the original model
- Clear writing
cons:
- Does not really show that this modified DNC can solve a task that the original DNC could not, and the bAbI tasks are effectively solved anyway. It is still not clear whether the DNC, even with these improvements, will have much impact beyond these toy tasks.
Overall the reviewers found this to be a solid paper with a useful analysis and I agree. I recommend acceptance.
| train | [
"rJxK_pj6TQ",
"rygRU5E5nm",
"H1epWZcnpQ",
"B1eovy5nam",
"BklGVJ92pX",
"SkxQaAFh6m",
"Hkg0R50bpQ",
"H1g7-dMz3m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for addressing the main concerns of my review, I have updated my score accordingly. ",
"\nOverview: \nThis paper proposes modifications to the original Differentiable Neural Computer architecture in three ways. First by introducing a masked content-based addressing which dynamically induces a key-value se... | [
-1,
7,
-1,
-1,
-1,
-1,
8,
7
] | [
-1,
5,
-1,
-1,
-1,
-1,
5,
5
] | [
"BklGVJ92pX",
"iclr_2019_HyGEM3C9KQ",
"iclr_2019_HyGEM3C9KQ",
"H1g7-dMz3m",
"rygRU5E5nm",
"Hkg0R50bpQ",
"iclr_2019_HyGEM3C9KQ",
"iclr_2019_HyGEM3C9KQ"
] |
iclr_2019_HyGIdiRqtm | Evaluating Robustness of Neural Networks with Mixed Integer Programming | Neural networks trained only to optimize for training accuracy can often be fooled by adversarial examples --- slightly perturbed inputs misclassified with high confidence. Verification of networks enables us to gauge their vulnerability to such adversarial examples. We formulate verification of piecewise-linear neural networks as a mixed integer program. On a representative task of finding minimum adversarial distortions, our verifier is two to three orders of magnitude quicker than the state-of-the-art. We achieve this computational speedup via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available. The computational speedup allows us to verify properties on convolutional and residual networks with over 100,000 ReLUs --- several orders of magnitude more than networks previously verified by any complete verifier. In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded l-∞ norm ε=0.1: for this classifier, we find an adversarial example for 4.38% of samples, and a certificate of robustness to norm-bounded perturbations for the remainder. Across all robust training procedures and network architectures considered, and for both the MNIST and CIFAR-10 datasets, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack. | accepted-poster-papers |
The paper investigates mixed-integer linear programming methods for neural net robustness verification in the presence of adversarial attacks. The paper addresses an important problem, is well-written, presents a novel approach and demonstrates empirical improvements; all reviewers agree that this is a solid contribution to the field. | train | [
"SylMABVPyE",
"H1lIZjvUyE",
"ByxMEVJ90Q",
"HylmeSpd07",
"BkgoiNTOCm",
"B1xhqcM8CQ",
"HJeQNsfI0Q",
"rJxCCcM8CQ",
"rJg-8cfU07",
"SylXnSG8AQ",
"rygywrfIRm",
"Hkl4h-WLRX",
"rJlzefhupX",
"BygGl7Gva7",
"r1eSMishhQ",
"H1egVwcihm",
"S1eVvi_9hm"
] | [
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the explanation and the additional experimental data. \n\nIt seems for the 6x100 undefended networks, the method does not really improve over state of the art, which on top of that is an incomplete verifier (the main benefit of complete verifiers is precision gain for smaller networks). The approach is ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
1
] | [
"H1lIZjvUyE",
"ByxMEVJ90Q",
"HylmeSpd07",
"Hkl4h-WLRX",
"iclr_2019_HyGIdiRqtm",
"r1eSMishhQ",
"rJlzefhupX",
"BygGl7Gva7",
"r1eSMishhQ",
"H1egVwcihm",
"S1eVvi_9hm",
"iclr_2019_HyGIdiRqtm",
"BygGl7Gva7",
"r1eSMishhQ",
"iclr_2019_HyGIdiRqtm",
"iclr_2019_HyGIdiRqtm",
"iclr_2019_HyGIdiRqt... |
iclr_2019_HyGcghRct7 | Random mesh projectors for inverse problems | We propose a new learning-based approach to solve ill-posed inverse problems in imaging. We address the case where ground truth training samples are rare and the problem is severely ill-posed---both because of the underlying physics and because we can only get few measurements. This setting is common in geophysical imaging and remote sensing. We show that in this case the common approach to directly learn the mapping from the measured data to the reconstruction becomes unstable. Instead, we propose to first learn an ensemble of simpler mappings from the data to projections of the unknown image into random piecewise-constant subspaces. We then combine the projections to form a final reconstruction by solving a deconvolution-like problem. We show experimentally that the proposed method is more robust to measurement noise and corruptions not seen during training than a directly learned inverse. | accepted-poster-papers | This paper proposes a novel method of solving inverse problems that avoids direct inversion by first reconstructing various piecewise-constant projections of the unknown image (using a different CNN to learn each) and then combining them via optimization to solve the final inversion.
Two of the reviewers requested more intuitions into why this two stage process would fight the inherent ambiguity.
At the end of the discussion, two of the three reviewers are convinced by the derivations and empirical justification of the paper.
The authors also have significantly improved the clarity of the manuscript throughout the discussion period.
It would be interesting to see if there are any connections between such inversion via optimization and deep component analysis methods, e.g. “Deep Component Analysis via Alternating Direction Neural Networks” of Murdock et al., that train neural architectures to effectively carry out the second step of optimization, as opposed to learning a feedforward mapping.
| train | [
"HyxbtBh0kE",
"B1xEMHn0kV",
"SklYOP5e67",
"B1xxSs2nyE",
"HJx2u7-s14",
"HJxAoTvwCX",
"BkgMpADPCQ",
"Hkgez0wvAQ",
"HJx7u3wPRm",
"HJe46svvRX",
"S1gHWHAjpm",
"rklYC70opQ",
"rkxR0Zh-0X",
"SJg0kX0j6m",
"BkeFsHCiam",
"HygKVSRjTX",
"HyxbyBRi6m",
"Hke127Csa7",
"r1lmQNCjTX",
"BygsLNF62Q"... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for taking the time to read through all our responses. We are glad that you like our work.",
"Thank you for taking the time to read through our responses and for the positive assessment. We definitely intend to add the suggested information to the final version. We were perhaps a bit conservative tryin... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"HJx2u7-s14",
"B1xxSs2nyE",
"iclr_2019_HyGcghRct7",
"rklYC70opQ",
"r1lmQNCjTX",
"rkxR0Zh-0X",
"rkxR0Zh-0X",
"rkxR0Zh-0X",
"rkxR0Zh-0X",
"rkxR0Zh-0X",
"HyxbyBRi6m",
"Hke127Csa7",
"HyxbyBRi6m",
"iclr_2019_HyGcghRct7",
"HygKVSRjTX",
"S1gHWHAjpm",
"Sygyv3zq3X",
"SklYOP5e67",
"BygsLNF... |
iclr_2019_HyGhN2A5tm | Multi-Agent Dual Learning | Dual learning has attracted much attention in machine learning, computer vision and natural language processing communities. The core idea of dual learning is to leverage the duality between the primal task (mapping from domain X to domain Y) and dual task (mapping from domain Y to X) to boost the performances of both tasks. The existing dual learning framework forms a system with two agents (one primal model and one dual model) to utilize such duality. In this paper, we extend this framework by introducing multiple primal and dual models, and propose the multi-agent dual learning framework. Experiments on neural machine translation and image translation tasks demonstrate the effectiveness of the new framework.
In particular, we set a new record on IWSLT 2014 German-to-English translation with a 35.44 BLEU score, achieve a 31.03 BLEU score on WMT 2014 English-to-German translation with over 2.6 BLEU improvement over the strong Transformer baseline, and set a new record of 49.61 BLEU score on the recent WMT 2018 English-to-German translation. | accepted-poster-papers | A paper that studies two tasks: machine translation and image translation. The authors propose a new multi-agent dual learning technique that takes advantage of the symmetry of the problem. The empirical gains over a competitive baseline are quite solid. The reviewers consistently liked the paper but have in some cases fairly low confidence in their assessment. | train | [
"HklXxIdqn7",
"HJg-jI7a1N",
"HkeVjIX9hX",
"ryxsHLHn0X",
"BklUtMfnCX",
"ryePF6wKRm",
"H1l6suNjhX",
"BJgSyP6fC7",
"HklU6LTzRm",
"rJxe6Bpf07",
"HJgc0z6z07",
"BJxU4lAYh7",
"Hkxwf3VKnm",
"Hkxo4B28n7",
"Hylw94tIh7"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"The author's present a dual learning framework that, instead of using a single mapping for each mapping task between two respective domains, the authors learn multiple diverse mappings. These diverse mappings are learned before the two main mappings are trained and are kept constant during the training of the two ... | [
6,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HyGhN2A5tm",
"HklXxIdqn7",
"iclr_2019_HyGhN2A5tm",
"BklUtMfnCX",
"HkeVjIX9hX",
"iclr_2019_HyGhN2A5tm",
"iclr_2019_HyGhN2A5tm",
"HkeVjIX9hX",
"HkeVjIX9hX",
"HklXxIdqn7",
"H1l6suNjhX",
"Hkxwf3VKnm",
"Hkxo4B28n7",
"Hylw94tIh7",
"iclr_2019_HyGhN2A5tm"
] |
iclr_2019_HyM7AiA5YX | Complement Objective Training | Learning with a primary objective, such as softmax cross entropy for classification and sequence generation, has been the norm for training deep neural networks for years. Although it is a widely-adopted approach, using cross entropy as the primary objective exploits mostly the information from the ground-truth class for maximizing data likelihood, and largely ignores information from the complement (incorrect) classes. We argue that, in addition to the primary objective, training also using a complement objective that leverages information from the complement classes can be effective in improving model performance. This motivates us to study a new training paradigm that maximizes the likelihood of the ground-truth class while neutralizing the probabilities of the complement classes. We conduct extensive experiments on multiple tasks ranging from computer vision to natural language understanding. The experimental results confirm that, compared to the conventional training with just one primary objective, training also with the complement objective further improves the performance of the state-of-the-art models across all tasks. In addition to the accuracy improvement, we also show that models trained with both primary and complement objectives are more robust to single-step adversarial attacks.
| accepted-poster-papers | This paper proposes adding a second objective to the training of neural network classifiers that aims to make the distribution over incorrect labels as flat as possible for each training sample. The authors describe this as "maximizing the complement entropy." Rather than adding the cross-entropy objective and the (negative) complement entropy term (since the complement entropy should be maximized while the cross-entropy is minimized), this paper proposes an alternating optimization framework in which first a step is taken to reduce the cross-entropy, then a step is taken to maximize the complement entropy. Extensive experiments on image classification (CIFAR-10, CIFAR-100, SVHN, Tiny Imagenet, and Imagenet), neural machine translation (IWSLT 2015 English-Vietnamese task), and small-vocabulary isolated-word recognition (Google Commands), show that the proposed two-objective approach outperforms training only to minimize cross-entropy. Experiments on CIFAR-10 also show that models trained in this framework have somewhat better resistance to single-step adversarial attacks. Concerns about the presentation of the adversarial attack experiments were raised by anonymous commenters and one of the reviewers, but these concerns were addressed in the revision and discussion. The primary remaining concern is a lack of any theoretical guarantees that the alternating optimization converges, but the strong empirical results compensate for this problem. | test | [
"rJe3xlA2yN",
"r1eubJAnJN",
"BkxPea7hyV",
"HygCehX3yN",
"r1lArwq9Am",
"Syx-XwqcCQ",
"H1lJy06u07",
"rkg-IxqSAQ",
"BkxI3CtBCQ",
"S1ebZ7sRam",
"HJlWte4WR7",
"SyeTbV6eA7",
"BkeB3kugCQ",
"rJeeh8J5pm",
"S1eR7XqFaQ",
"S1eWWmcKaQ",
"rJg8OW9FT7",
"HJlOtAKta7",
"r1ejg15YaQ",
"Syx8_6U63X"... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
"You are totally right. We did negate the complement entropy term (and added it to the primary objective) for maximizing complement entropy. We are sorry about the confusion and we will update the final manuscript to make this more clear: minimizing cross-entropy and maximizing complement entropy (e.g., in Algorith... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"BkxPea7hyV",
"HygCehX3yN",
"HygCehX3yN",
"r1lArwq9Am",
"rkg-IxqSAQ",
"BkxI3CtBCQ",
"iclr_2019_HyM7AiA5YX",
"HJlOtAKta7",
"r1ejg15YaQ",
"rJeeh8J5pm",
"BkeB3kugCQ",
"BkeB3kugCQ",
"S1ebZ7sRam",
"iclr_2019_HyM7AiA5YX",
"r1gc8uIw27",
"r1gc8uIw27",
"Syx8_6U63X",
"B1lv2Bdph7",
"B1lv2Bd... |
iclr_2019_HyN-M2Rctm | Mode Normalization | Normalization methods are a central building block in the deep learning toolbox. They accelerate and stabilize training, while decreasing the dependence on manually tuned learning rate schedules. When learning from multi-modal distributions, the effectiveness of batch normalization (BN), arguably the most prominent normalization method, is reduced. As a remedy, we propose a more flexible approach: by extending the normalization to more than a single mean and variance, we detect modes of data on-the-fly, jointly normalizing samples that share common features. We demonstrate that our method outperforms BN and other widely used normalization techniques in several experiments, including single and multi-task datasets. | accepted-poster-papers | The paper develops an original extension/generalization of standard batchnorm (and group norm) by employing a mixture-of-experts to separate incoming data into several modes and separately normalizing each mode. The paper is well written and technically correct, and the method yields consistent accuracy improvements over basic batchnorm on standard image classification tasks and models.
Reviewers and AC noted the following potential weaknesses: a) while large on artificially mixed data, improvements are relatively small on single standard datasets (<1% on CIFAR10 and CIFAR100); b) the paper could better motivate why multi-modality is important, e.g. by showing histograms of node activations; c) the important interplay between the number of modes and the batch size should be more thoroughly discussed;
d) the closely related approach of Kalayeh & Shah 2018 should be presented and contrasted with in more detail in the paper. Comparing to it in experiments would also enrich the work.
| train | [
"Hygq9xlc27",
"B1gol3FKAX",
"BklkkqFYA7",
"ByeV2TieRX",
"ryghy-BgRX",
"BJgbLwQjam",
"B1x5phA1T7",
"SkeHwZ-q2m",
"SyeC8uLEnm",
"HJx9Mg3gh7",
"H1eOYkPQ9m",
"ryxCU3AxcQ",
"SJeTPa9lqX",
"B1gx9Ady5X"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"Summary:\nBatch Normalization (BN) suffers from 2 flaws: 1) It performs poorly when the batch size is small and 2) computing only one mean and one variance per feature might be a poor approximation for multi-modal features. To alleviate 2), this paper introduces Mode Normalization (MN) a new normalization techniqu... | [
6,
-1,
-1,
-1,
-1,
-1,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HyN-M2Rctm",
"ByeV2TieRX",
"SkeHwZ-q2m",
"BJgbLwQjam",
"B1x5phA1T7",
"Hygq9xlc27",
"iclr_2019_HyN-M2Rctm",
"iclr_2019_HyN-M2Rctm",
"HJx9Mg3gh7",
"iclr_2019_HyN-M2Rctm",
"ryxCU3AxcQ",
"SJeTPa9lqX",
"B1gx9Ady5X",
"iclr_2019_HyN-M2Rctm"
] |
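Mode normalization (HyN-M2Rctm) can be sketched as a gated mixture of $K$ normalizers: a small gating network assigns each sample $x_n$ soft responsibilities $g_k(x_n)$ that both weight the per-mode batch statistics and blend the normalized outputs (estimator details omitted):

$$\hat{x}_n = \sum_{k=1}^{K} g_k(x_n)\, \frac{x_n - \mu_k}{\sqrt{\sigma_k^2 + \epsilon}}, \qquad \mu_k = \frac{\sum_n g_k(x_n)\, x_n}{\sum_n g_k(x_n)},$$

with $\sigma_k^2$ the correspondingly weighted variance; $K = 1$ recovers ordinary batch normalization.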
iclr_2019_HyNA5iRcFQ | Detecting Egregious Responses in Neural Sequence-to-sequence Models | In this work, we attempt to answer a critical question: whether there exists some input sequence that will cause a well-trained discrete-space neural network sequence-to-sequence (seq2seq) model to generate egregious outputs (aggressive, malicious, attacking, etc.). And if such inputs exist, how to find them efficiently. We adopt an empirical methodology, in which we first create lists of egregious output sequences, and then design a discrete optimization algorithm to find input sequences that will cause the model to generate them. Moreover, the optimization algorithm is enhanced for large vocabulary search and constrained to search for input sequences that are likely to be input by real-world users. In our experiments, we apply this approach to dialogue response generation models trained on three real-world dialogue data-sets: Ubuntu, Switchboard and OpenSubtitles, testing whether the model can generate malicious responses. We demonstrate that given the trigger inputs our algorithm finds, a significant number of malicious sentences are assigned large probability by the model, which reveals an undesirable consequence of standard seq2seq training. | accepted-poster-papers | This work examines how to craft adversarial examples that will lead trained seq2seq models to generate undesired outputs (here defined as, assigning higher-than-average probability to undesired outputs). Making a model safe for deployment is an important unsolved problem and this work is looking at it from an interesting angle, and all reviewers agree that the paper is clear, well-presented, and offering useful observations. While the paper does not provide ways to fix the problem of egregious outputs being probable, as pointed out by reviewers, it is still a valuable study of the behavior of trained models and an interesting way to "probe" them, that would likely be of high interest to many people at ICLR. | train | [
"Skx74gyo07",
"BklKQvMAnQ",
"rkeN_ZS5Cm",
"rkgBookDAQ",
"SJx0z4aHRX",
"H1erS8ZYpQ",
"H1epDSZtaX",
"BJgV-LWK6X",
"SygNarWt6X",
"SklZWlF9nm",
"SJgy7QFK37"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Yes, will do. We think it is an interesting and informative investigation(thanks for the suggestion), and we will add these to the final version of the paper(if accepted).\n\nSorry, let us clarify: we first take all the target sentences that are \"hit\" w.r.t io_sample_min_hit in the mal-list(which is about 10% am... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"rkeN_ZS5Cm",
"iclr_2019_HyNA5iRcFQ",
"rkgBookDAQ",
"SJx0z4aHRX",
"SygNarWt6X",
"SJgy7QFK37",
"BklKQvMAnQ",
"SklZWlF9nm",
"H1epDSZtaX",
"iclr_2019_HyNA5iRcFQ",
"iclr_2019_HyNA5iRcFQ"
] |
iclr_2019_Hye9lnCct7 | Learning Actionable Representations with Goal Conditioned Policies | Representation learning is a central challenge across a range of machine learning areas. In reinforcement learning, effective and functional representations have the potential to tremendously accelerate learning progress and solve more challenging problems. Most prior work on representation learning has focused on generative approaches, learning representations that capture all the underlying factors of variation in the observation space in a more disentangled or well-ordered manner. In this paper, we instead aim to learn functionally salient representations: representations that are not necessarily complete in terms of capturing all factors of variation in the observation space, but rather aim to capture those factors of variation that are important for decision making -- that are "actionable". These representations are aware of the dynamics of the environment, and capture only the elements of the observation that are necessary for decision making rather than all factors of variation, eliminating the need for explicit reconstruction. We show how these learned representations can be useful to improve exploration for sparse reward problems, to enable long horizon hierarchical reinforcement learning, and as a state representation for learning policies for downstream tasks. We evaluate our method on a number of simulated environments, and compare it to prior methods for representation learning, exploration, and hierarchical reinforcement learning. | accepted-poster-papers | To borrow the succinct summary from R1, "the paper suggests a method for generating representations that are linked to goals in reinforcement learning. More precisely, it wishes to learn a representation so that two states are similar if the
policies leading to them are similar." The reviewers and AC agree that this is a novel and worthy idea.
Concerns about the paper primarily relate to the following.
(i) the method already requires good solutions as input, i.e., in the form of goal-conditioned policies (GCPs),
and the paper claims that these are easy to learn in any case.
As R3 notes, this then begs the question as to why the actionable representations are needed.
(ii) reviewers had questions regarding the evaluations, i.e., fairness of baselines, additional comparisons, and
additional detail.
After much discussion, there is now a fair degree of consensus. While R1 (the low score) still has a remaining issue with evaluation, particularly hyperparameter evaluation, they are also ok with acceptance. The AC is of the opinion that hyperparameter tuning is of course an important issue, but does not see it as the key issue for this particular paper.
The AC is of the opinion that the key issue is issue (i), raised by R3. In the discussion, the authors reconcile the inherent contradiction in (i) based on the need for additional downstream tasks that can then benefit from the actionable representation, as demonstrated in a number of the evaluation examples (at least in the revised version). The AC believes in this logic, but believes that this should be stated more clearly in the final paper. It should also be explained
to what extent training for auxiliary tasks implicitly solves this problem in any case.
The AC also suggests nominating R3 for a best-reviewer award. | train | [
"SyewvpG7sX",
"r1lO0VLPhm",
"S1lyiF78AX",
"HJeOi_mLRQ",
"H1ghTFkVR7",
"ByeyV10-TX",
"HkexM-C7RQ",
"Bke-uQ1WR7",
"rylLCj0e0Q",
"S1gtlIA6TQ",
"r1x_0rCT6m",
"HJlHiBC6pm",
"B1gKDS0TpQ",
"HkxTyBRTaX",
"r1gAKUmA3Q"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"In this paper, the authors propose a new approach to representation learning in the context of reinforcement learning.\nThe main idea is that two states should be distinguished *functionally* in terms of the actions that are needed to reach them,\nin contrast with generative methods which try to capture all aspect... | [
6,
6,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_Hye9lnCct7",
"iclr_2019_Hye9lnCct7",
"H1ghTFkVR7",
"HkexM-C7RQ",
"r1x_0rCT6m",
"iclr_2019_Hye9lnCct7",
"HkxTyBRTaX",
"r1gAKUmA3Q",
"iclr_2019_Hye9lnCct7",
"SyewvpG7sX",
"SyewvpG7sX",
"r1lO0VLPhm",
"r1lO0VLPhm",
"ByeyV10-TX",
"iclr_2019_Hye9lnCct7"
] |
iclr_2019_HyeFAsRctQ | Verification of Non-Linear Specifications for Neural Networks | Prior work on neural network verification has focused on specifications that are linear functions of the output of the network, e.g., invariance of the classifier output under adversarial perturbations of the input. In this paper, we extend verification algorithms to be able to certify richer properties of neural networks. To do this we introduce the class of convex-relaxable specifications, which constitute nonlinear specifications that can be verified using a convex relaxation. We show that a number of important properties of interest can be modeled within this class, including conservation of energy in a learned dynamics model of a physical system; semantic consistency of a classifier's output labels under adversarial perturbations and bounding errors in a system that predicts the summation of handwritten digits. Our experimental evaluation shows that our method is able to effectively verify these specifications. Moreover, our evaluation exposes the failure modes in models which cannot be verified to satisfy these specifications. Thus, emphasizing the importance of training models not just to fit training data but also to be consistent with specifications. | accepted-poster-papers | This paper proposes verification algorithms for a class of convex-relaxable specifications to evaluate the robustness of neural networks under adversarial examples.
The reviewers were unanimous in their vote to accept the paper. Note: the remaining score of 5 belongs to a reviewer who agreed to acceptance in the discussion. | val | [
"BJe6BOOoRQ",
"BJeG6VW537",
"rye8eGZ5CX",
"rJlto34nam",
"r1l4NRE3p7",
"HJgffAVhpX",
"ryej1AEhT7",
"H1e5Sp42TX",
"BJgDW64npX",
"HygxxT42p7",
"rJeS02Vhpm",
"HkxsOnE2p7",
"ryeLf5EhaX",
"ryepAig167",
"HklfxJG93Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for expanding the explanation on the high level idea of this paper. To me, these high level ideas matter much more than technical derivations or extensive experimental results. I think this paper can be accepted.",
"- Summary: This paper proposes verification algorithms for a class of convex-relaxable spe... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"H1e5Sp42TX",
"iclr_2019_HyeFAsRctQ",
"HkxsOnE2p7",
"ryepAig167",
"BJeG6VW537",
"BJeG6VW537",
"BJeG6VW537",
"HklfxJG93Q",
"ryepAig167",
"ryepAig167",
"ryepAig167",
"ryepAig167",
"iclr_2019_HyeFAsRctQ",
"iclr_2019_HyeFAsRctQ",
"iclr_2019_HyeFAsRctQ"
] |
iclr_2019_HyeGBj09Fm | Generating Liquid Simulations with Deformation-aware Neural Networks | We propose a novel approach for deformation-aware neural networks that learn the weighting and synthesis of dense volumetric deformation fields. Our method specifically targets the space-time representation of physical surfaces from liquid simulations. Liquids exhibit highly complex, non-linear behavior under changing simulation conditions such as different initial conditions. Our algorithm captures these complex phenomena in two stages: a first neural network computes a weighting function for a set of pre-computed deformations, while a second network directly generates a deformation field for refining the surface. Key for successful training runs in this setting is a suitable loss function that encodes the effect of the deformations, and a robust calculation of the corresponding gradients. To demonstrate the effectiveness of our approach, we showcase our method with several complex examples of flowing liquids with topology changes. Our representation makes it possible to rapidly generate the desired implicit surfaces. We have implemented a mobile application to demonstrate that real-time interactions with complex liquid effects are possible with our approach. | accepted-poster-papers | This paper presents a novel method for synthesizing fluid simulations, constrained to a set of parameterized variations,
such as the size and position of a water ball that is dropped. The results are solid; there is little related
work to compare to, in terms of methods that can "compute"/recall simulations at that speed.
The method is 2000x faster than the original simulations. This comes with the caveats that:
(a) the results are specific to the given set of parameterized environments; the method is learning a
compressed version of the original animations; (b) there is a loss of accuracy, and therefore
also a loss of visual plausibility.
The AC notes that the paper should use the ICLR format for citations, i.e., "(foo et al.)" rather than "(19)".
The AC also suggests that limitations should be clearly documented, i.e., as seen from the
perspective of those working in the fluid simulation domain.
The principal (and only?) contentious issue relates to the suitability of the paper for the ICLR audience,
given its focus on the specific domain of fluid simulations. The AC is of two minds on this:
(i) the fluid simulation domain has different characteristics from other domains, and thus
the ICLR audience can benefit from understanding the specific nature of the predictive problems that
come with the fluid simulation domain; new problems can drive new methods. There is a loose connection
between the given work and residual nets, and of course res-nets have also been recently reconceptualized as PDEs.
(ii) it's not clear how much the ICLR audience will get out of the specific solutions being described;
it requires understanding spatial transformer networks and a number of other domain-specific issues.
A problem with this type of paper in terms of graphics/SIGGRAPH is that it can also be seen as "falling short"
there, simply because it is not yet competitive in terms of visual quality or the generality of
fluid simulators; it really fulfills a different niche than classical fluid simulators.
The AC leans slightly in favor of acceptance, but is otherwise on the fence.
| train | [
"ByefRpv5nm",
"SJeEcGl9RX",
"BylpiCech7",
"S1l7TYggCX",
"BklwYtxlCm",
"Skg-NFgeC7",
"rkeUwxkin7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This is an application paper on dense volumetric synthesis of liquids and smoke. Given densely registered 4D implicit surfaces (volumes over time) for a structured scene, a neural-network based model is used to interpolate simulations for novel scene conditions (e.g. position and size of dropped water ball). The i... | [
7,
-1,
5,
-1,
-1,
-1,
7
] | [
4,
-1,
4,
-1,
-1,
-1,
3
] | [
"iclr_2019_HyeGBj09Fm",
"BklwYtxlCm",
"iclr_2019_HyeGBj09Fm",
"BylpiCech7",
"ByefRpv5nm",
"rkeUwxkin7",
"iclr_2019_HyeGBj09Fm"
] |
iclr_2019_HyePrhR5KX | DyRep: Learning Representations over Dynamic Graphs | Representation Learning over graph structured data has received significant attention recently due to its ubiquitous applicability. However, most advancements have been made in static graph settings while efforts for jointly learning dynamic of the graph and dynamic on the graph are still in an infant stage. Two fundamental questions arise in learning over dynamic graphs: (i) How to elegantly model dynamical processes over graphs? (ii) How to leverage such a model to effectively encode evolving graph information into low-dimensional representations? We present DyRep - a novel modeling framework for dynamic graphs that posits representation learning as a latent mediation process bridging two observed processes namely -- dynamics of the network (realized as topological evolution) and dynamics on the network (realized as activities between nodes). Concretely, we propose a two-time scale deep temporal point process model that captures the interleaved dynamics of the observed processes. This model is further parameterized by a temporal-attentive representation network that encodes temporally evolving structural information into node representations which in turn drives the nonlinear evolution of the observed graph dynamics. Our unified framework is trained using an efficient unsupervised procedure and has capability to generalize over unseen nodes. We demonstrate that DyRep outperforms state-of-the-art baselines for dynamic link prediction and time prediction tasks and present extensive qualitative insights into our framework. | accepted-poster-papers | After discussion, all reviewers agree to accept this paper. Congratulations!! | train | [
"rkl_EsCKpX",
"BJeonJJo3X",
"SylfmIbU07",
"S1g3PD-L07",
"ryx61DWL0X",
"SkePF8Z8CQ",
"HygUKS-807",
"S1g5-1zxRm",
"BJgv8BVFTm",
"rygMqV4YTX",
"Sygt67NF67",
"SyeBsVc93m",
"rkx45Fl3jX",
"S1eidr-Eom",
"HJg59orrqQ",
"SkeXykWAYm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"Overall the paper suffers from a lack of clarity in the presentation, especially in algorithm 1, and does not communicate well why the assumption of different dynamical processes should be important in practice. Experiments show some improvement compared to (Trivedi et al. 2017) but are limited to two datasets and... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HyePrhR5KX",
"iclr_2019_HyePrhR5KX",
"rkl_EsCKpX",
"iclr_2019_HyePrhR5KX",
"BJeonJJo3X",
"S1g5-1zxRm",
"rkl_EsCKpX",
"Sygt67NF67",
"SyeBsVc93m",
"BJeonJJo3X",
"BJeonJJo3X",
"iclr_2019_HyePrhR5KX",
"S1eidr-Eom",
"iclr_2019_HyePrhR5KX",
"SkeXykWAYm",
"iclr_2019_HyePrhR5KX"
] |
iclr_2019_HyeVtoRqtQ | Trellis Networks for Sequence Modeling | We present trellis networks, a new architecture for sequence modeling. On the one hand, a trellis network is a temporal convolutional network with special structure, characterized by weight tying across depth and direct injection of the input into deep layers. On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with special sparsity structure in their weight matrices. Thus trellis networks with general weight matrices generalize truncated recurrent networks. We leverage these connections to design high-performing trellis networks that absorb structural and algorithmic elements from both recurrent and convolutional models. Experiments demonstrate that trellis networks outperform the current state of the art methods on a variety of challenging benchmarks, including word-level language modeling and character-level language modeling tasks, and stress tests designed to evaluate long-term memory retention. The code is available at https://github.com/locuslab/trellisnet . | accepted-poster-papers | The paper proposes a novel network architecture for sequential learning, called trellis networks, which generalizes truncated RNNs and also links them to temporal convnets. The advantages of both types of nets are used to design trellis networks, which appear to outperform the state of the art on several datasets. The paper is well-written and the results are convincing. | train | [
"r1esyAwhkE",
"rJxgjpN31E",
"rkenN3O267",
"S1gYPHu26m",
"HkearHd367",
"rJeU7ruhaX",
"BJxMvLjkam",
"BkeXf-gTnm",
"Byll8TOK37"
] | [
"author",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your interest in our paper! \n\nTo obtain the 54.67 ppl on PTB using MoS, we trained for 400 epochs (similar for the 54.19 ppl result). We did not use finetuning step like Yang et al.\n\nIn addition, the code will be made available so that you can run on your own as well :-)",
"In your paper, you r... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"rJxgjpN31E",
"iclr_2019_HyeVtoRqtQ",
"iclr_2019_HyeVtoRqtQ",
"Byll8TOK37",
"BkeXf-gTnm",
"BJxMvLjkam",
"iclr_2019_HyeVtoRqtQ",
"iclr_2019_HyeVtoRqtQ",
"iclr_2019_HyeVtoRqtQ"
] |
iclr_2019_HyexAiA5Fm | Scalable Unbalanced Optimal Transport using Generative Adversarial Networks | Generative adversarial networks (GANs) are an expressive class of neural generative models with tremendous success in modeling high-dimensional continuous measures. In this paper, we present a scalable method for unbalanced optimal transport (OT) based on the generative-adversarial framework. We formulate unbalanced OT as a problem of simultaneously learning a transport map and a scaling factor that push a source measure to a target measure in a cost-optimal manner. We provide theoretical justification for this formulation, showing that it is closely related to an existing static formulation by Liero et al. (2018). We then propose an algorithm for solving this problem based on stochastic alternating gradient updates, similar in practice to GANs, and perform numerical experiments demonstrating how this methodology can be applied to population modeling. | accepted-poster-papers | After revision, all reviewers agree that this paper makes an interesting contribution to ICLR by proposing a new methodology for unbalanced optimal transport using GANs and should be accepted. | train | [
"Byx88RU9kE",
"B1lUURJanX",
"B1xdSvrYy4",
"rklJuu_Spm",
"rygfS6-Qk4",
"BklmCVWU3X",
"Hkenm8WQJV",
"H1xQ6gNIC7",
"B1lLbqXI0m",
"rygDCYmLA7",
"By6VIm8AQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"i have read the revised version. i also support accept. i have revised my score upwards.",
"In this paper the authors consider the unbalanced optimal transport problem between two measures with different total mass. The authors introduce first the now standard Kantorovich-like formulation, which considers a coup... | [
-1,
7,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1
] | [
"B1lLbqXI0m",
"iclr_2019_HyexAiA5Fm",
"By6VIm8AQ",
"iclr_2019_HyexAiA5Fm",
"Hkenm8WQJV",
"iclr_2019_HyexAiA5Fm",
"H1xQ6gNIC7",
"BklmCVWU3X",
"B1lUURJanX",
"B1lUURJanX",
"rklJuu_Spm"
] |
iclr_2019_Hyfn2jCcKm | Solving the Rubik's Cube with Approximate Policy Iteration | Recently, Approximate Policy Iteration (API) algorithms have achieved super-human proficiency in two-player zero-sum games such as Go, Chess, and Shogi without human data. These API algorithms iterate between two policies: a slow policy (tree search), and a fast policy (a neural network). In these two-player games, a reward is always received at the end of the game. However, the Rubik’s Cube has only a single solved state, and episodes are not guaranteed to terminate. This poses a major problem for these API algorithms since they rely on the reward received at the end of the game. We introduce Autodidactic Iteration: an API algorithm that overcomes the problem of sparse rewards by training on a distribution of states that allows the reward to propagate from the goal state to states farther away. Autodidactic Iteration is able to learn how to solve the Rubik’s Cube and the 15-puzzle without relying on human data. Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves — less than or equal to solvers that employ human domain knowledge. | accepted-poster-papers | The paper introduces a version of approximate policy iteration (API), called Autodidactic Iteration (ADI), designed to overcome the problem of sparse rewards. In particular, the policy evaluation step of ADI is trained on a distribution of states that allows the reward to easily propagate from the goal state to states farther away. ADI is applied to successfully solve the Rubik's Cube (together with other existing techniques).
This work is an interesting contribution, and the ADI idea may be useful in other scenarios. A limitation is that the whole empirical study is on the Rubik's Cube; a controlled experiment on other problems (even if simpler) could be useful to understand the pros & cons of ADI compared to other methods.
Minor: please update the bib entry of Bottou (2011). It's now published in MLJ 2014. | train | [
"SJx3iUcc27",
"r1xDiKEcC7",
"Bkgout45CX",
"Skg0UYN5Cm",
"Bye2VTcq27",
"HJlaAp7wn7"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors show how to solve the Rubik cube using reinforcement learning (RL) with Monte-Carlo tree search (MCTS). As common in recent applications like AlphaZero, the RL part learns a deep network for policy and a value function that reduce the breadth (policy) and depth (value function) of the tree searched in ... | [
7,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_Hyfn2jCcKm",
"HJlaAp7wn7",
"SJx3iUcc27",
"Bye2VTcq27",
"iclr_2019_Hyfn2jCcKm",
"iclr_2019_Hyfn2jCcKm"
] |
iclr_2019_Hyg1G2AqtQ | Variance Reduction for Reinforcement Learning in Input-Driven Environments | We consider reinforcement learning in input-driven environments, where an exogenous, stochastic input process affects the dynamics of the system. Input processes arise in many applications, including queuing systems, robotics control with disturbances, and object tracking. Since the state dynamics and rewards depend on the input process, the state alone provides limited information for the expected future returns. Therefore, policy gradient methods with standard state-dependent baselines suffer high variance during training. We derive a bias-free, input-dependent baseline to reduce this variance, and analytically show its benefits over state-dependent baselines. We then propose a meta-learning approach to overcome the complexity of learning a baseline that depends on a long sequence of inputs. Our experimental results show that across environments from queuing systems, computer networks, and MuJoCo robotic locomotion, input-dependent baselines consistently improve training stability and result in better eventual policies. | accepted-poster-papers | This paper proposes an input-dependent baseline function to reduce variance in policy gradient estimation without adding bias. The approach is novel and theoretically validated, and the experimental results are convincing. The authors addressed nearly all of the reviewers' concerns. I recommend acceptance. | train | [
"BJx1GFr92X",
"S1e9N1YX6m",
"HklKcMENAm",
"ByxufjGtaX",
"SylCvSzKaX",
"SylcuqGYp7",
"H1xPbNztp7",
"ryxDVNFc2X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"\n\nSummary: This work considers the problem of learning in input-driven environments -- which are characterized by an addition stochastic variable z that can affect the dynamics of the environment and the associated reward the agent might see. The authors show how the PG theorem still applied for a input-aware cr... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
9
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_Hyg1G2AqtQ",
"iclr_2019_Hyg1G2AqtQ",
"iclr_2019_Hyg1G2AqtQ",
"SylcuqGYp7",
"BJx1GFr92X",
"S1e9N1YX6m",
"ryxDVNFc2X",
"iclr_2019_Hyg1G2AqtQ"
] |
iclr_2019_HygQBn0cYm | Model-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic | Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training. In this work, we propose to train a policy while explicitly penalizing the mismatch between these two distributions over a fixed time horizon. We do this by using a learned model of the environment dynamics which is unrolled for multiple time steps, and training a policy network to minimize a differentiable cost over this rolled-out trajectory. This cost contains two terms: a policy cost which represents the objective the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We propose to measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks. We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction. | accepted-poster-papers | Reviewers are in a consensus and recommended to accept after engaging with the authors. Please take reviewers' comments into consideration to improve your submission for the camera ready.
| train | [
"ByeJ-YFahQ",
"HJlDT4Zq0m",
"HkeR-X8w2m",
"Hkgnb3au07",
"SJgb6s6uR7",
"HJl8WopdRQ",
"Ske7vSaORQ",
"H1g7krauRQ",
"SyeU6MK03Q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper addresses the difficulty of covariate shift in model-based reinforcement learning. Here, the distribution over trajectories during is significantly different for the behaviour or data-collecting policy and the target or optimised policy. As a mean to address this, the authors propose to add an uncertaint... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2019_HygQBn0cYm",
"iclr_2019_HygQBn0cYm",
"iclr_2019_HygQBn0cYm",
"SJgb6s6uR7",
"ByeJ-YFahQ",
"HkeR-X8w2m",
"SyeU6MK03Q",
"iclr_2019_HygQBn0cYm",
"iclr_2019_HygQBn0cYm"
] |
iclr_2019_Hyg_X2C5FX | GAN Dissection: Visualizing and Understanding Generative Adversarial Networks | Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, visualization and understanding of GANs is largely missing. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models.
In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. Finally, we examine the contextual relationship between these units and their surrounding by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene. We provide open source interpretation tools to help peer researchers and practitioners better understand their GAN models. | accepted-poster-papers | The paper proposes an interesting framework for visualizing and understanding GANs, that will be of clear help for understanding existing models and might provide insights for developing new ones. | train | [
"SJlKyI2FA7",
"Syga2ShtR7",
"HJlb9ShFRm",
"BJgk8S3K0m",
"rylRgFDnnQ",
"Bklj6-einQ",
"H1lQioJchm"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments and questions; we have incorporated your suggestions in the revision, and we also answer your questions below.\n\nQ7: apply the author's methods to other architecture, and to other application domains? \n \nA7: We have applied our method to WGAN-GP model with a different generator arch... | [
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"rylRgFDnnQ",
"Bklj6-einQ",
"H1lQioJchm",
"iclr_2019_Hyg_X2C5FX",
"iclr_2019_Hyg_X2C5FX",
"iclr_2019_Hyg_X2C5FX",
"iclr_2019_Hyg_X2C5FX"
] |
iclr_2019_HygjqjR9Km | Improving MMD-GAN Training with Repulsive Loss Function | Generative adversarial nets (GANs) are widely used to learn the data sampling process and their performance may heavily depend on the loss functions, given a limited computational budget. This study revisits MMD-GAN that uses the maximum mean discrepancy (MMD) as the loss function for GAN and makes two contributions. First, we argue that the existing MMD loss function may discourage the learning of fine details in data as it attempts to contract the discriminator outputs of real data. To address this issue, we propose a repulsive loss function to actively learn the difference among the real data by simply rearranging the terms in MMD. Second, inspired by the hinge loss, we propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the repulsive loss function. The proposed methods are applied to the unsupervised image generation tasks on CIFAR-10, STL-10, CelebA, and LSUN bedroom datasets. Results show that the repulsive loss function significantly improves over the MMD loss at no additional computational cost and outperforms other representative loss functions. The proposed methods achieve an FID score of 16.21 on the CIFAR-10 dataset using a single DCGAN network and spectral normalization. | accepted-poster-papers | The submission proposes two new things: a repulsive loss for MMD loss optimization and a bounded RBF kernel that stabilizes training of MMD-GAN. The submission has a number of unsupervised image modeling experiments on standard benchmarks and shows reasonable performance. All in all, this is an interesting piece of work that has a number of interesting ideas (e.g. the PICO method, which is useful to know). I agree with R2 that the RBF kernel seems somewhat hacky in its introduction, despite working well in practice.
That being said, the repulsive loss seems like something the research community would benefit from finding out more about, and I think the experiments and discussion are sufficiently extensive to warrant publication. | train | [
"BJe9UVmbaX",
"rJll-a6hnX",
"rkxQfAW0CX",
"rkxJC6g79Q",
"rylo1DbthX",
"SyxxllUYC7",
"r1gTa0rtRQ",
"Bkx58VUK07",
"HklD4r-bTQ",
"BJxg9N3IcQ"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"public"
] | [
"Thank you for your precious comments. Below we would try to clarify our study and address your concerns. \n\nQ1. What specifically contributed to the improvements in Table 1. Other good-scoring models need to be tested empirically. \nA1: In this study, we focused on comparing the proposed repulsive loss function w... | [
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
6,
-1
] | [
-1,
5,
-1,
-1,
5,
-1,
-1,
-1,
2,
-1
] | [
"HklD4r-bTQ",
"iclr_2019_HygjqjR9Km",
"iclr_2019_HygjqjR9Km",
"iclr_2019_HygjqjR9Km",
"iclr_2019_HygjqjR9Km",
"rylo1DbthX",
"rJll-a6hnX",
"BJxg9N3IcQ",
"iclr_2019_HygjqjR9Km",
"iclr_2019_HygjqjR9Km"
] |
iclr_2019_Hygn2o0qKX | Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience | The ability of overparameterized deep networks to generalize well has been linked to the fact that stochastic gradient descent (SGD) finds solutions that lie in flat, wide minima in the training loss -- minima where the output of the network is resilient to small random noise added to its parameters.
So far this observation has been used to provide generalization guarantees only for neural networks whose parameters are either \textit{stochastic} or \textit{compressed}. In this work, we present a general PAC-Bayesian framework that leverages this observation to provide a bound on the original network learned -- a network that is deterministic and uncompressed. What enables us to do this is a key novelty in our approach: our framework allows us to show that if on training data, the interactions between the weight matrices satisfy certain conditions that imply a wide training loss minimum, these conditions themselves {\em generalize} to the interactions between the matrices on test data, thereby implying a wide test loss minimum. We then apply our general framework in a setup where we assume that the pre-activation values of the network are not too small (although we assume this only on the training data). In this setup, we provide a generalization guarantee for the original (deterministic, uncompressed) network, that does not scale with product of the spectral norms of the weight matrices -- a guarantee that would not have been possible with prior approaches. | accepted-poster-papers | Existing PAC-Bayes analysis gives generalization bounds for stochastic networks/classifiers. This paper develops a new approach to obtain generalization bounds for the original network, by generalizing the noise-resilience property from training data to test data. All reviewers agree that the techniques developed in the paper (namely Theorem 3.1) are novel and interesting. There was disagreement between reviewers on the usefulness of the new generalization bound (Theorem 4.1) shown in this paper using the above techniques. I believe the authors have sufficiently addressed these concerns in their response and updated draft. Hence, despite the concerns of R3 on the limitations of this bound and its dependence on pre-activation values, I agree with R2 and R4 that the techniques developed in the paper are of interest to the community and deserve publication. I suggest the authors keep R3's comments in mind while preparing the final version. | train | [
"BygQ30jpRm",
"HklxW5QoAQ",
"H1eo-DD537",
"HJloFLguCX",
"B1eHBLxOAm",
"rkgv9MsD0Q",
"SylZrfAXC7",
"Byeh4EbxCX",
"ryxUAhWDAm",
"SkxjMyes3m",
"SygikY27Am",
"BygoO_mfAQ",
"ByePH_GzR7",
"Bkgs5rO8TX",
"BJxhNVMz07",
"SygAMq2eCm",
"SJxW0Xje0X",
"SJeNlQZlCQ",
"B1gAw-ZxAX",
"H1eypq036Q"... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
... | [
"We received an email notification from openreview with Reviewer 3's comment but we can't find it here on the website. The following is the comment we received:\n\n=============================\nComment: Thanks for the authors feedbacks. It is great to discuss the problems. \n\nAs I have discussed with the author,... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"rkgv9MsD0Q",
"H1eo-DD537",
"iclr_2019_Hygn2o0qKX",
"rkgv9MsD0Q",
"rkgv9MsD0Q",
"SylZrfAXC7",
"SygikY27Am",
"iclr_2019_Hygn2o0qKX",
"SkxjMyes3m",
"iclr_2019_Hygn2o0qKX",
"H1eo-DD537",
"ByePH_GzR7",
"BJxhNVMz07",
"iclr_2019_Hygn2o0qKX",
"SJxW0Xje0X",
"SJxW0Xje0X",
"Byeh4EbxCX",
"H1e... |
iclr_2019_HygsfnR9Ym | Recall Traces: Backtracking Models for Efficient Reinforcement Learning | In many environments only a tiny subset of all states yield high reward. In these cases, few of the interactions with the environment provide a relevant learning signal. Hence, we may want to preferentially train on those high-reward states and the probable trajectories leading to them.
To this end, we advocate for the use of a \textit{backtracking model} that predicts the preceding states that terminate at a given high-reward state. We can train a model which, starting from a high value state (or one that is estimated to have high value), predicts and samples which (state, action)-tuples may have led to that high value state. These traces of (state, action) pairs, which we refer to as Recall Traces, sampled from this backtracking model starting from a high value state, are informative as they terminate in good states, and hence we can use these traces to improve a policy. We provide a variational interpretation for this idea and a practical algorithm in which the backtracking model samples from an approximate posterior distribution over trajectories which lead to large rewards. Our method improves the sample efficiency of both on- and off-policy RL algorithms across several environments and tasks. | accepted-poster-papers | The paper presents "recall traces", a model-based approach designed to improve reinforcement learning in sparse reward settings. The approach learns a generative model of trajectories leading to high-reward states, which is subsequently used to augment the real experience collected by the agent. This novel take on combining model-based and model-free learning is conceptually well motivated and is empirically shown to improve sample efficiency on several benchmark tasks.
The reviewers noted the following potential weaknesses in their initial reviews: the paper could provide a clearer motivation of why the proposed approach is expected to lead to performance improvements, and how it relates to learning (and uses of) a forward model. Details of the method, e.g., the model parameterization, are unclear, and the effect of hyperparameter choices is not fully evaluated.
The authors provided detailed replies to all reviewer suggestions, and ran extensive new experiments, including experiments to address questions about hyperparameter settings, and an entirely new use of the proposed model in a learning from demonstration setting. The authors also clarified the paper as requested by the reviewers. The reviewers have not responded to the rebuttal, but in the AC's assessment their concerns have been adequately addressed. The reviewers have updated their scores in response to the rebuttal, and the consensus is to accept the paper.
The AC notes that the authors seem unaware of related work by Oh et al. "Self Imitation Learning" which was published at ICML 2018. The paper is based on a similar conceptual motivation but imitates high-value traces directly, instead of using a generative model. The authors should include a discussion of how their paper relates to this earlier work in their camera ready version. | train | [
"HyxOprf0n7",
"Bkg6-MfsAm",
"H1eoDATPRQ",
"SJeB9tx4CX",
"HyxLpF2lC7",
"rye0EHNcpm",
"BkgkxrE567",
"BkxisNV5Tm",
"SkxjYXEq6Q",
"HylIyNNcpQ",
"S1lBH745TX",
"B1lWBJF93Q",
"r1lsbRy5hm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Revision:\nThe authors have thoroughly addressed my review and I have consequently updated my rating accordingly.\n\nSummary:\nModel-free reinforcement learning is inefficient at exploration if rewards are\nsparse / low probability.\nThe paper proposes a variational model for online learning to backtrack\nstate / ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_HygsfnR9Ym",
"iclr_2019_HygsfnR9Ym",
"HyxOprf0n7",
"rye0EHNcpm",
"iclr_2019_HygsfnR9Ym",
"BkgkxrE567",
"BkxisNV5Tm",
"HyxOprf0n7",
"B1lWBJF93Q",
"SkxjYXEq6Q",
"r1lsbRy5hm",
"iclr_2019_HygsfnR9Ym",
"iclr_2019_HygsfnR9Ym"
] |
iclr_2019_Hygxb2CqKm | Stable Recurrent Models | Stability is a fundamental property of dynamical systems, yet to this date it has had little bearing on the practice of recurrent neural networks. In this work, we conduct a thorough investigation of stable recurrent models. Theoretically, we prove stable recurrent neural networks are well approximated by feed-forward networks for the purpose of both inference and training by gradient descent. Empirically, we demonstrate stable recurrent models often perform as well as their unstable counterparts on benchmark sequence tasks. Taken together, these findings shed light on the effective power of recurrent networks and suggest much of sequence learning happens, or can be made to happen, in the stable regime. Moreover, our results help to explain why in many cases practitioners succeed in replacing recurrent models by feed-forward models.
| accepted-poster-papers | The paper presents both theoretical analysis (based upon lambda-stability) and experimental evidence on the stability of recurrent neural networks. The results are convincing but concern a restricted definition of stability. Even with this restriction, acceptance is recommended. | test | [
"Ske9haIsRX",
"ByeBXzI5AQ",
"r1xhUKOThQ",
"r1gN66Jqam",
"rkgdKTUw6X",
"BJeX698vpQ",
"SkgZ4aMPTm",
"SJxBPyJHam",
"SJeu1Uh7aQ",
"rkeKirnQp7",
"HJe5FH37pX",
"HJeGRbFZT7",
"rylX3rB_27"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the clarification and fixing the notations in Theorem 1. I think the discussion of unitary RNN models makes the paper more well-rounded. I hope this work will inspire more research in this direction in the future and help us understand the dynamics of recurrent networks. I would like to keep my rating."... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"SJeu1Uh7aQ",
"SJxBPyJHam",
"iclr_2019_Hygxb2CqKm",
"rkgdKTUw6X",
"BJeX698vpQ",
"SkgZ4aMPTm",
"rkeKirnQp7",
"HJe5FH37pX",
"rylX3rB_27",
"r1xhUKOThQ",
"HJeGRbFZT7",
"iclr_2019_Hygxb2CqKm",
"iclr_2019_Hygxb2CqKm"
] |
iclr_2019_HylTBhA5tQ | The Limitations of Adversarial Training and the Blind-Spot Attack | The adversarial training procedure proposed by Madry et al. (2018) is one of the most effective methods to defend against adversarial examples in deep neural net- works (DNNs). In our paper, we shed some lights on the practicality and the hardness of adversarial training by showing that the effectiveness (robustness on test set) of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network. Test examples that are relatively far away from this manifold are more likely to be vulnerable to adversarial attacks. Consequentially, an adversarial training based defense is susceptible to a new class of attacks, the “blind-spot attack”, where the input images reside in “blind-spots” (low density regions) of the empirical distri- bution of training data but is still on the ground-truth data manifold. For MNIST, we found that these blind-spots can be easily found by simply scaling and shifting image pixel values. Most importantly, for large datasets with high dimensional and complex data manifold (CIFAR, ImageNet, etc), the existence of blind-spots in adversarial training makes defending on any valid test examples difficult due to the curse of dimensionality and the scarcity of training data. Additionally, we find that blind-spots also exist on provable defenses including (Kolter & Wong, 2018) and (Sinha et al., 2018) because these trainable robustness certificates can only be practically optimized on a limited set of training data. | accepted-poster-papers | Reviewers are in a consensus and recommended to accept after engaging with the authors. Please take reviewers' comments into consideration to improve your submission for the camera ready.
| train | [
"SylEXOI9AX",
"HkgrM_e9R7",
"SylN6rJ5Cm",
"H1gVYFVS3X",
"HkeeRxhmAX",
"B1gGnv0Y07",
"BylEr05YAX",
"rkeMtGiyCQ",
"BJlit-i1Rm",
"rkxhfWj1C7",
"BylRCJCM6m",
"Hygsieas2X",
"rJe1e3jj3X"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nWe have addressed all the concerns of AnonReviewer3. During the discussion with AnonReviewer3, we found that there might be some confusions on how we generate adversarial examples from blind-spot images, and how we calculate the $\\ell_p$ distortions for adversarial examples. Thus we slightly revise Section 3.3 ... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"iclr_2019_HylTBhA5tQ",
"SylN6rJ5Cm",
"B1gGnv0Y07",
"iclr_2019_HylTBhA5tQ",
"BylRCJCM6m",
"BylEr05YAX",
"HkeeRxhmAX",
"iclr_2019_HylTBhA5tQ",
"rJe1e3jj3X",
"Hygsieas2X",
"H1gVYFVS3X",
"iclr_2019_HylTBhA5tQ",
"iclr_2019_HylTBhA5tQ"
] |
iclr_2019_HylTXn0qYX | Efficiently testing local optimality and escaping saddles for ReLU networks | We provide a theoretical algorithm for checking local optimality and escaping saddles at nondifferentiable points of empirical risks of two-layer ReLU networks. Our algorithm receives any parameter value and returns: local minimum, second-order stationary point, or a strict descent direction. The presence of M data points on the nondifferentiability of the ReLU divides the parameter space into at most 2^M regions, which makes analysis difficult. By exploiting polyhedral geometry, we reduce the total computation down to one convex quadratic program (QP) for each hidden node, O(M) (in)equality tests, and one (or a few) nonconvex QP. For the last QP, we show that our specific problem can be solved efficiently, in spite of nonconvexity. In the benign case, we solve one equality constrained QP, and we prove that projected gradient descent solves it exponentially fast. In the bad case, we have to solve a few more inequality constrained QPs, but we prove that the time complexity is exponential only in the number of inequality constraints. Our experiments show that either benign case or bad case with very few inequality constraints occurs, implying that our algorithm is efficient in most cases. | accepted-poster-papers | This paper proposes a new method that verifies whether a given point of a two-layer ReLU network is a local minimum or a second-order stationary point and checks for descent directions. All reviewers agree that the algorithm is based on a number of new techniques involving both convex and non-convex QPs, and is novel. The method proposed in the paper has significant limitations, as it is not robust to approximate stationary points. Given these limitations, there is a disagreement between reviewers about the significance of the result. While I share the same concerns as R4, I agree with R3 and believe that the new ideas in the paper will inspire future work to extend the proposed method towards addressing these limitations. Hence I suggest acceptance. | train | [
"BJx4YOG6C7",
"rylPNDchRX",
"H1giRmc30Q",
"HJlqYuFqAX",
"rkl6k5D7am",
"SklaadxYRm",
"H1xBzsAQAQ",
"rJxE090Q0X",
"SyxnsqC7RQ",
"SyxXdq07Rm",
"rJl-Gq0mAQ",
"rkeveQE02m",
"SklEPann2X",
"SkxeZJD5nQ"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Sorry for the confusion. What worries me most is not a practical implementation. \n\nFrom a theoretical point of view, the current version can only test if a point is a real SOSP. Thus this is only a qualitative result. I expect a theoretical machine learning paper in ICLR/ICML/NIPS/COLT to have at least some quan... | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"H1giRmc30Q",
"SklaadxYRm",
"HJlqYuFqAX",
"H1xBzsAQAQ",
"iclr_2019_HylTXn0qYX",
"SyxXdq07Rm",
"rkl6k5D7am",
"rkeveQE02m",
"SklEPann2X",
"SkxeZJD5nQ",
"iclr_2019_HylTXn0qYX",
"iclr_2019_HylTXn0qYX",
"iclr_2019_HylTXn0qYX",
"iclr_2019_HylTXn0qYX"
] |
iclr_2019_HylVB3AqYm | ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware | Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. 10 4 GPU hours) makes it difficult to directly search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a result, they need to utilize proxy tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS that can directly learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6× fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2× faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design. | accepted-poster-papers | This paper integrates a bunch of existing approaches for neural architecture search, including OneShot/DARTS, BinaryConnect, REINFORCE, etc. Although the novelty of the paper may be limited, empirical performance seems impressive. The source code is not available. I think this is a borderline paper but maybe good enough for acceptance.
| train | [
"rJe_QI3wlV",
"rkx-525OJ4",
"Hke4jK9OkE",
"H1eNgOayAX",
"rJe1QITpR7",
"HyxWxsxaCQ",
"BklS-ur9h7",
"HJelAtlcRm",
"rJl4Qn5KAQ",
"S1x0oTiHR7",
"SJlO7KpVp7",
"rJxuErCmTm",
"BkxbEO0XpX",
"S1lC-BtXpX",
"B1lXIuW-CQ",
"SkeDbIW-CX",
"H1x6eyDe0X",
"HyxqxNT5aQ",
"BygWA7OK6X",
"rkle4Pwda7"... | [
"public",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"public",
"author",
"public",
"official_reviewer"... | [
"Dear the authors,\n\nI want to echo with the reviewers/public readers that releasing your detailed training pipeline is quite crucial given the good performances reported in the paper. Furthermore, only evaluation code/model ckpts is definitely not enough since people have various unreasonable ways to obtain a goo... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
-1
] | [
"S1lC-BtXpX",
"S1x0oTiHR7",
"H1x6eyDe0X",
"rkle4Pwda7",
"rJl4Qn5KAQ",
"HJelAtlcRm",
"iclr_2019_HylVB3AqYm",
"rJxuErCmTm",
"SJlO7KpVp7",
"iclr_2019_HylVB3AqYm",
"HkxEMyl33m",
"BklS-ur9h7",
"rJl-2uUshQ",
"HkxRDmY7am",
"S1xBSKTN6m",
"iclr_2019_HylVB3AqYm",
"H1eNgOayAX",
"BygWA7OK6X",
... |
iclr_2019_Hyl_vjC5KQ | Hierarchical Reinforcement Learning via Advantage-Weighted Information Maximization | Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling.
In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks. | accepted-poster-papers | This paper proposes a method for hierarchical reinforcement learning that aims to maximize mutual information between options and state-action pairs. The approach and empirical analysis are interesting. The initial submission had many issues with clarity. However, the new revisions of the paper have significantly improved the clarity, better describing the idea and improving the terminology. The main remaining weakness is the scope of the experimental results.
However, the reviewers agree that the paper exceeds the bar for publication at ICLR with the existing experiments. | train | [
"BylepC6n3m",
"H1gOFYTK3m",
"rJxEm9nX0Q",
"Syegx9hmAX",
"BJlN6Qoy07",
"ryex2YGdpm",
"Hkgrq6jvaQ",
"SJxeXTsD6Q",
"rJx6n5iDa7",
"BklAL5sPT7",
"BJg4ANaphm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors propose an HRL algorithm that attempts to learn options that maximize their mutual information with the state-action density under the optimal policy.\n\nSeveral key terms are used in ways that differ from the rest of the literature. The authors claim options are learned in an \"unsupervised\" manner, ... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_Hyl_vjC5KQ",
"iclr_2019_Hyl_vjC5KQ",
"Hkgrq6jvaQ",
"rJx6n5iDa7",
"ryex2YGdpm",
"SJxeXTsD6Q",
"iclr_2019_Hyl_vjC5KQ",
"BylepC6n3m",
"H1gOFYTK3m",
"BJg4ANaphm",
"iclr_2019_Hyl_vjC5KQ"
] |
iclr_2019_Hyx4knR9Ym | Generalizable Adversarial Training via Spectral Normalization | Deep neural networks (DNNs) have set benchmarks on a wide array of supervised learning tasks. Trained DNNs, however, often lack robustness to minor adversarial perturbations to the input, which undermines their true practicality. Recent works have increased the robustness of DNNs by fitting networks using adversarially-perturbed training samples, but the improved performance can still be far below the performance seen in non-adversarial settings. A significant portion of this gap can be attributed to the decrease in generalization performance due to adversarial training. In this work, we extend the notion of margin loss to adversarial settings and bound the generalization error for DNNs trained under several well-known gradient-based attack schemes, motivating an effective regularization scheme based on spectral normalization of the DNN's weight matrices. We also provide a computationally-efficient method for normalizing the spectral norm of convolutional layers with arbitrary stride and padding schemes in deep convolutional networks. We evaluate the power of spectral normalization extensively on combinations of datasets, network architectures, and adversarial training schemes. | accepted-poster-papers | Adversarial training has quickly become important for training robust neural networks. However, this training generally results in poor generalization behavior. This paper proposes using margin loss with adversarial training for better generalization. The paper provides generalization bounds for this adversarial training setup, motivating the use of spectral regularization. The experimental results using spectral regularization with adversarial training are very promising, and all the reviewers agree that they show non-trivial improvement. Even though spectral regularization techniques have been tried in different settings, and are hence of limited novelty, the experimental results in the paper are encouraging and I believe they will motivate further study on this topic. Reviewers also opined that the writing in the paper is currently not that great, with limited explanation of the theoretical results. More discussion interpreting the theoretical results and their significance can help the readers appreciate the paper better. | train | [
"HygGEVhzAQ",
"H1x3aUom2X",
"Skx_uPmGRQ",
"BJg4GCdkR7",
"ByeNL6dJAQ",
"HyxOkadJAX",
"H1eRd3dyCQ",
"SkeXPX7hhm",
"B1xw3F5KhQ",
"SkljOqLS9X",
"rkgEjrnm9Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you for your reply. I have updated my rating.",
"This paper is well set-up to target the interesting problem of degraded generalisation after adversarial training. The proposal of applying spectral normalisation (SN) is well motivated, and is supported by margin-based bounds. However, the experimental resu... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
5,
-1,
-1
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
3,
-1,
-1
] | [
"H1eRd3dyCQ",
"iclr_2019_Hyx4knR9Ym",
"ByeNL6dJAQ",
"iclr_2019_Hyx4knR9Ym",
"SkeXPX7hhm",
"B1xw3F5KhQ",
"H1x3aUom2X",
"iclr_2019_Hyx4knR9Ym",
"iclr_2019_Hyx4knR9Ym",
"rkgEjrnm9Q",
"iclr_2019_Hyx4knR9Ym"
] |
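To make the spectral-normalization machinery in the record above concrete, here is a minimal sketch of estimating a weight matrix's spectral norm by power iteration, the standard primitive behind spectral normalization. It is an illustration only, assuming a plain dense matrix; the paper's actual contribution includes handling convolutional layers with arbitrary stride and padding, which this sketch does not cover, and the function name is ours.

import numpy as np

def spectral_norm(W, n_iters=50):
    # Power iteration: alternately multiply by W and W^T; the Rayleigh
    # quotient u^T W v converges to the largest singular value sigma_max(W).
    v = np.random.randn(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

W = np.random.randn(64, 128)
# Dividing W by this estimate constrains its largest singular value to ~1.
print(spectral_norm(W), np.linalg.svd(W, compute_uv=False)[0])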
iclr_2019_Hyx6Bi0qYm | Adversarial Domain Adaptation for Stable Brain-Machine Interfaces | Brain-Machine Interfaces (BMIs) have recently emerged as a clinically viable option
to restore voluntary movements after paralysis. These devices are based on the
ability to extract information about movement intent from neural signals recorded
using multi-electrode arrays chronically implanted in the motor cortices of the
brain. However, the inherent loss and turnover of recorded neurons require repeated
recalibrations of the interface, which can potentially alter the day-to-day
user experience. The resulting need for continued user adaptation interferes with
the natural, subconscious use of the BMI. Here, we introduce a new computational
approach that decodes movement intent from a low-dimensional latent representation
of the neural data. We implement various domain adaptation methods
to stabilize the interface over significantly long times. This includes Canonical
Correlation Analysis used to align the latent variables across days; this method
requires prior point-to-point correspondence of the time series across domains.
Alternatively, we match the empirical probability distributions of the latent variables
across days through the minimization of their Kullback-Leibler divergence.
These two methods provide a significant and comparable improvement in the performance
of the interface. However, implementation of an Adversarial Domain
Adaptation Network trained to match the empirical probability distribution of the
residuals of the reconstructed neural signals outperforms the two methods based
on latent variables, while requiring remarkably few data points to solve the domain
adaptation problem. | accepted-poster-papers | BMIs need per-patient and per-session calibration, and this paper seeks to amend that. Using VAEs and RNNs, it relates sEEG to sEMG, in principle a ten-year-old approach, but does so using a novel adversarial approach that seems to work.
The reviewers agree that the approach is nice and that the statements in the paper are too strong, but publication is recommended. Clinical evaluation is an important next step. | val | [
"Byx8cczQRm",
"rkxSLkcxCQ",
"rkgH8gGa6Q",
"HkxTjbBj67",
"Byx8TyLcpX",
"H1gRv1IqpX",
"B1lX3AHqa7",
"B1gJaHNcpm",
"BJlvxzmJaQ",
"S1gZuAoJ3X",
"SJljUupdoQ",
"Hke5lmpc9m",
"SyeDE0WYqX"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Dear reviewer #1,\n \nWe have submitted a response to your review and a revised version of our paper. We hope to have succeeded in answering all questions and comments. If there are any remaining concerns, please let us know so that we can address them before the deadline.\n\nThanks, ",
"Dear reviewers, \n\nWe ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
5,
7,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
-1,
-1
] | [
"S1gZuAoJ3X",
"iclr_2019_Hyx6Bi0qYm",
"SJljUupdoQ",
"BJlvxzmJaQ",
"S1gZuAoJ3X",
"S1gZuAoJ3X",
"S1gZuAoJ3X",
"S1gZuAoJ3X",
"iclr_2019_Hyx6Bi0qYm",
"iclr_2019_Hyx6Bi0qYm",
"iclr_2019_Hyx6Bi0qYm",
"SyeDE0WYqX",
"iclr_2019_Hyx6Bi0qYm"
] |
iclr_2019_HyxAfnA5tm | Deep Online Learning Via Meta-Learning: Continual Adaptation for Model-Based RL | Humans and animals can learn complex predictive models that allow them to accurately and reliably reason about real-world phenomena, and they can adapt such models extremely quickly in the face of unexpected changes. Deep neural network models allow us to represent very complex functions, but lack this capacity for rapid online adaptation. The goal in this paper is to develop a method for continual online learning from an incoming stream of data, using deep neural network models. We formulate an online learning procedure that uses stochastic gradient descent to update model parameters, and an expectation maximization algorithm with a Chinese restaurant process prior to develop and maintain a mixture of models to handle non-stationary task distributions. This allows for all models to be adapted as necessary, with new models instantiated for task changes and old models recalled when previously seen tasks are encountered again. Furthermore, we observe that meta-learning can be used to meta-train a model such that this direct online adaptation with SGD is effective, which is otherwise not the case for large function approximators. We apply our method to model-based reinforcement learning, where adapting the predictive model is critical for control; we demonstrate that our online learning via meta-learning algorithm outperforms alternative prior methods, and enables effective continuous adaptation in non-stationary task distributions such as varying terrains, motor failures, and unexpected disturbances. | accepted-poster-papers | The reviewers appreciated this contribution, particularly its ability to tackle nonstationary domains which are common in real-world tasks.
| train | [
"Syx2MNoERQ",
"HylGj6OV0m",
"HJg0BTu4C7",
"Hkg_xYA3n7",
"rygBgFLi3m",
"BJgYeRFP2X"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review. We added an appendix to the paper that addresses your question, and we have also added this information (as well as illustrative videos) to the project website. To illustrate results with less meta-training data, we have evaluated the test-time performance of models from various meta-tra... | [
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
3,
3,
3
] | [
"Hkg_xYA3n7",
"rygBgFLi3m",
"BJgYeRFP2X",
"iclr_2019_HyxAfnA5tm",
"iclr_2019_HyxAfnA5tm",
"iclr_2019_HyxAfnA5tm"
] |
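The record above relies on an expectation-maximization procedure with a Chinese restaurant process (CRP) prior over a mixture of task models. As a small illustration of the prior itself (not the paper's full EM algorithm), the sketch below computes CRP assignment probabilities; alpha is the usual concentration parameter and the function name is ours.

import numpy as np

def crp_probs(counts, alpha=1.0):
    # Chinese restaurant process prior: incoming data joins existing
    # mixture component k with probability proportional to its count,
    # and opens a brand-new component with weight alpha.
    p = np.append(np.asarray(counts, dtype=float), alpha)
    return p / p.sum()

print(crp_probs([12, 3, 5], alpha=1.0))  # last entry = prob. of a new task model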
iclr_2019_HyxCxhRcY7 | Deep Anomaly Detection with Outlier Exposure | It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance. | accepted-poster-papers | The paper proposes a new fine-tuning method for improving the performance of existing anomaly detectors.
The reviewers and AC note the limited novelty relative to the existing literature.
This is quite a borderline paper, but the AC decided to recommend acceptance, as the comprehensive experimental results (though still based on empirical observation) are interesting. | train | [
"ByehoqpU1E",
"Bye5XYYT37",
"SygTG-v2Am",
"r1xNRARj0X",
"ryg5L0no0Q",
"HkeMqN7qCm",
"H1g7uFXf07",
"HklI1QZGA7",
"Skl4qxWN67",
"rJl4Tt7MC7",
"HJgkrz7_pX",
"H1lavG27nQ",
"B1xOlGHljQ",
"rkgIONKsq7",
"ByxTzKbkqX"
] | [
"public",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public",
"official_reviewer",
"author",
"public",
"author"
] | [
"Thanks! Of course, we will be happy to cite your work on the first occasion.",
"This paper describes how a deep neural network can be fine-tuned to perform outlier detection in addition to its primary objective. For classification, the fine-tuning objective encourages out-of-distribution samples to have a unifor... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
8,
-1,
-1,
-1
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
4,
-1,
-1,
-1
] | [
"ryg5L0no0Q",
"iclr_2019_HyxCxhRcY7",
"r1xNRARj0X",
"HkeMqN7qCm",
"HJgkrz7_pX",
"Bye5XYYT37",
"Bye5XYYT37",
"Skl4qxWN67",
"iclr_2019_HyxCxhRcY7",
"H1lavG27nQ",
"iclr_2019_HyxCxhRcY7",
"iclr_2019_HyxCxhRcY7",
"rkgIONKsq7",
"iclr_2019_HyxCxhRcY7",
"iclr_2019_HyxCxhRcY7"
] |
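For classifiers, the Outlier Exposure objective summarized above is just the usual cross-entropy plus a term pushing predictions on auxiliary outliers toward the uniform distribution. A minimal NumPy sketch of that combined loss follows; the value of lam and the function names are illustrative choices, not the paper's exact code.

import numpy as np

def log_softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    # Standard cross-entropy on in-distribution samples ...
    ce_in = -log_softmax(logits_in)[np.arange(len(labels_in)), labels_in].mean()
    # ... plus cross-entropy to the uniform label distribution on outliers,
    # which encourages maximum-entropy (low-confidence) predictions there.
    ce_to_uniform = -log_softmax(logits_out).mean(axis=1).mean()
    return ce_in + lam * ce_to_uniform

logits_in = np.random.randn(8, 10)
logits_out = np.random.randn(8, 10)
print(oe_loss(logits_in, np.random.randint(0, 10, 8), logits_out))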
iclr_2019_HyxGB2AcY7 | Contingency-Aware Exploration in Reinforcement Learning | This paper investigates whether learning contingency-awareness and controllable aspects of an environment can lead to better exploration in reinforcement learning. To investigate this question, we consider an instantiation of this hypothesis evaluated on the Arcade Learning Environment (ALE). In this study, we develop an attentive dynamics model (ADM) that discovers controllable elements of the observations, which are often associated with the location of the character in Atari games. The ADM is trained in a self-supervised fashion to predict the actions taken by the agent. The learned contingency information is used as a part of the state representation for exploration purposes. We demonstrate that combining an actor-critic algorithm with count-based exploration using our representation achieves impressive results on a set of notoriously challenging Atari games due to sparse rewards. For example, we report a state-of-the-art score of >11,000 points on Montezuma's Revenge without using expert demonstrations, explicit high-level information (e.g., RAM states), or supervisory data. Our experiments confirm that contingency-awareness is indeed an extremely powerful concept for tackling exploration problems in reinforcement learning and opens up interesting research questions for further investigations. | accepted-poster-papers | The paper addresses the challenging and important problem of exploration in sparse-reward settings. The authors propose a novel use of contingency awareness, i.e., the agent's understanding of the environment features that are under its direct control, in combination with a count-based approach to exploration. The model is trained using an inverse dynamics model and attention mechanism and is shown to be able to identify the controllable character. The resulting exploration approach achieves strong empirical results compared to alternative count-based exploration techniques. The reviewers note that the novel approach has the potential to open up fruitful directions for follow-up research. The obtained strong empirical results are another strong indication of the value of the proposed idea.
The reviewers mention several potential weaknesses. First, while the proposed idea is general, the specific implementation seems targeted specifically at Atari games. While Atari is a popular benchmark domain, this raises questions as to whether insights can be more generally applied. Second, several questions were raised regarding the motivation for some of the presented modeling choices (e.g., loss terms) as well as their impact on the empirical results. Ablation studies were recommended as a step to resolving these questions. Reviewer 3 questioned whether the learned state representation could be directly used as an additional input to the agent, and if it would improve performance. Finally, several related works were suggested that should be included in the discussion of related work.
The authors carefully addressed the issues raised by the reviewers, running additional comparisons and adding to the original empirical insights. Several issues of clarity were resolved in the paper and in the discussion. Reviewer 3 engaged with the authors and confirmed that they are satisfied with the resulting submission. The AC judges that the suggestions of reviewer 1 have been addressed to a satisfactory level. A remaining issue regarding results reporting was raised anonymously towards the end of the review period, and the AC encourages the authors to address this issue in their camera-ready version. | train | [
"SygtznNglE",
"HkgYEAahJN",
"H1eXJyTACX",
"rJlI1eLA0Q",
"H1ejPD2tnm",
"BkxNyUCqR7",
"BkeVt5D9Am",
"rJecqfd9Cm",
"Hkxsdfd5Cm",
"HyelEz_qC7",
"SyeF5cv5AQ",
"SygATEK7pQ",
"HJljoJPp27"
] | [
"author",
"public",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your comment. You are correct that the reported performance of DDQN+ is achieved at 25M steps rather than at 50M steps. We will update the table in the final version of the paper. To the best of our knowledge, DDQN+ code is not publicly available and in our experience it was not trivial to ... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"HkgYEAahJN",
"iclr_2019_HyxGB2AcY7",
"rJlI1eLA0Q",
"BkxNyUCqR7",
"iclr_2019_HyxGB2AcY7",
"SyeF5cv5AQ",
"H1ejPD2tnm",
"SygATEK7pQ",
"SygATEK7pQ",
"HJljoJPp27",
"BkeVt5D9Am",
"iclr_2019_HyxGB2AcY7",
"iclr_2019_HyxGB2AcY7"
] |
iclr_2019_HyxKIiAqYQ | Context-adaptive Entropy Model for End-to-end Optimized Image Compression | We propose a context-adaptive entropy model for use in end-to-end optimized image compression. Our model exploits two types of contexts, bit-consuming contexts and bit-free contexts, distinguished based upon whether additional bit
allocation is required. Based on these contexts, we allow the model to more accurately estimate the distribution of each latent representation with a more generalized form of the approximation models, which accordingly leads to an
enhanced compression performance. Based on the experimental results, the proposed method outperforms the traditional image codecs, such as BPG and JPEG2000, as well as other previous artificial-neural-network (ANN) based approaches, in terms of the peak signal-to-noise ratio (PSNR) and multi-scale structural similarity (MS-SSIM) index. The test code is publicly available at https://github.com/JooyoungLeeETRI/CA_Entropy_Model. | accepted-poster-papers | This paper proposes an algorithm for end-to-end image compression outperforming previously proposed ANN-based techniques and typical image compression standards like JPEG.
Strengths
- All reviewers agreed that this is a well-written paper, with careful analysis and results.
Weaknesses
- One of the points raised during the review process was that two very recent publications propose very similar algorithms. Since these works appeared very close to the ICLR paper submission deadline (within 30 days), the program committee decided to treat this as concurrent work.
The authors also clarified the differences and similarities with prior work, and included additional experiments to address some of the concerns raised during the review process. Overall, the paper is a solid contribution towards improving image compression, and is therefore recommended for acceptance.
| train | [
"SkeXD949h7",
"rylzhKZ6hX",
"Hye-vwFXAm",
"SJgeanZnT7",
"SyUH3Znpm",
"HkxhG0ao6m",
"Byg5HXrlpQ",
"BJlVvzHgpX",
"BkeegQoyp7",
"HkxN269Ja7",
"B1xbVRde2Q",
"BJxhpCaC27",
"SyxV-q_3cm",
"SJxyx5u397",
"BkekKKXHq7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public"
] | [
"Update:\nI have updated my review to mention that we should accept this work as being concurrent with the two papers that are discussed below.\n\nOriginal review:\nThis paper is very similar to two previously published papers (as pointed by David Minnen before the review period was opened):\n\"Learning a Code-Spac... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HyxKIiAqYQ",
"iclr_2019_HyxKIiAqYQ",
"BJlVvzHgpX",
"rylzhKZ6hX",
"SkeXD949h7",
"B1xbVRde2Q",
"BkeegQoyp7",
"HkxN269Ja7",
"BJxhpCaC27",
"BJxhpCaC27",
"iclr_2019_HyxKIiAqYQ",
"iclr_2019_HyxKIiAqYQ",
"BkekKKXHq7",
"BkekKKXHq7",
"iclr_2019_HyxKIiAqYQ"
] |
iclr_2019_HyxPx3R9tm | Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow | Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods. | accepted-poster-papers | The paper proposes a simple and general technique based on the information bottleneck to constrain the information flow in the discriminator of adversarial models. It helps training by maintaining informative gradients. While the information bottleneck is not novel, its application in adversarial learning is, to my knowledge, and the empirical evaluation demonstrates impressive performance on a broad range of applications. Therefore, the paper should clearly be accepted.
| train | [
"S1ljRntcT7",
"BJeNdhYq67",
"B1eIr2t5Tm",
"ByxhshHL6Q",
"BJxmQSxU6Q",
"SylvFMGra7",
"Byl41tz9nX",
"Bkx6mnnK3Q",
"rJx9PNrv3X"
] | [
"author",
"author",
"author",
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the insight and feedback. We have included additional experiments to further compare with previous techniques, along with some additional clarifications.\n\nRe: additional citations\nThank you for the pointers, we have included the additional citations.\n\nRe: GP for other task\nWe have conducted add... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
10,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"rJx9PNrv3X",
"Bkx6mnnK3Q",
"Byl41tz9nX",
"BJxmQSxU6Q",
"SylvFMGra7",
"iclr_2019_HyxPx3R9tm",
"iclr_2019_HyxPx3R9tm",
"iclr_2019_HyxPx3R9tm",
"iclr_2019_HyxPx3R9tm"
] |
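For the variational discriminator bottleneck above, the key quantities are the KL term between the discriminator's stochastic encoder and a standard-normal prior, and the dual ascent on the Lagrange multiplier beta that keeps that KL near the information budget Ic. A small sketch under the usual diagonal-Gaussian assumption follows; variable names and the step size are ours, not the paper's.

import numpy as np

def kl_to_standard_normal(mu, log_sigma):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian,
    # summed over latent dimensions and averaged over the batch.
    kl = 0.5 * (mu**2 + np.exp(2 * log_sigma) - 2 * log_sigma - 1.0).sum(axis=1)
    return kl.mean()

# Dual gradient ascent on beta enforces E[KL] <= Ic on average; beta is
# projected back to be non-negative after each update.
beta, step, Ic = 0.0, 1e-5, 0.5
mu, log_sigma = np.random.randn(32, 8), -np.ones((32, 8))
beta = max(0.0, beta + step * (kl_to_standard_normal(mu, log_sigma) - Ic))
print(beta)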
iclr_2019_HyxnZh0ct7 | Meta-learning with differentiable closed-form solvers | Adapting deep networks to new concepts from a few examples is challenging, due to the high computational requirements of standard fine-tuning procedures.
Most work on few-shot learning has thus focused on simple learning techniques for adaptation, such as nearest neighbours or gradient descent.
Nonetheless, the machine learning literature contains a wealth of methods that learn non-deep models very efficiently.
In this paper, we propose to use these fast convergent methods as the main adaptation mechanism for few-shot learning.
The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.
This requires back-propagating errors through the solver steps.
While normally the cost of the matrix operations involved in such a process would be significant, by using the Woodbury identity we can make the small number of examples work to our advantage.
We propose both closed-form and iterative solvers, based on ridge regression and logistic regression components.
Our methods constitute a simple and novel approach to the problem of few-shot learning and achieve performance competitive with or superior to the state of the art on three benchmarks. | accepted-poster-papers | The reviewers disagree strongly on this paper. Reviewer 2 was the most positive, believing it to be an interesting contribution with strong results. Reviewer 3, however, was underwhelmed by the results. Reviewer 1 does not believe that the contribution is sufficiently novel, seeing it as too close to existing multi-task learning approaches.
After considering all of the discussion so far, I have to agree with reviewer 2 on their assessment. Much of the meta learning literature involves changing the base learner *for a fixed architecture* and seeing how it affects performance. There is a temptation to chase performance by changing the architecture, adding new regularizers, etc., and while this is important for practical reasons, it does not help to shed light on the underlying fundamentals. This is best done by considering carefully controlled and well understood experimental settings. Even still, the performance is quite good relative to popular base learners.
Regarding novelty, I agree it is a simple change to the base learner, using a technique that has been tried before in other settings (linear regression as opposed to classification); however, its use in a meta-learning setup is novel in my opinion, and the new experimental comparison of regression on top of pre-trained CNN features helps to demonstrate the utility of its use in meta-learning settings.
While the novelty can certainly be debated, I want to highlight two reasons why I am opting to accept this paper: 1) simple and effective ideas are often some of the most impactful. 2) sometimes taking ideas from one area (e.g., multi-task learning) and demonstrating that they can be effective in other settings (e.g., meta-learning) can itself be a valuable contribution. I believe that the meta-learning community would benefit from reading this paper.
| test | [
"BkgTUolegV",
"SJeA_NNyxE",
"r1l23D05y4",
"H1llepity4",
"SyxWU3iYkE",
"HyeglRQ_JE",
"BJg6awLwk4",
"rygq4xgwkN",
"r1e-G4YL1N",
"r1gGLd-507",
"BkljQHC-T7",
"Syg19STh6Q",
"SklLz3csp7",
"H1e3St5u67",
"SyedD2rXaQ",
"Byg2xy8MpX",
"Hkgvq5AZpm",
"rkevXdCb6m",
"B1lS7rnxT7",
"Syegwm5yaQ"... | [
"author",
"public",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"author",
"official_reviewer",
"public",
"author",
"author",
"public",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",... | [
"1) We wrote: ““[multi-task learning] is different to our work, and in general to all of the previous literature on meta-learning applied to few-shot classification (e.g. Finn et al. 2017, Ravi & Larochelle 2017, Vinyals et al. 2016, etc). Notably, these methods and ours take into account adaptation *already during... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
7,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
-1
] | [
"SJeA_NNyxE",
"r1l23D05y4",
"SyxWU3iYkE",
"Syg19STh6Q",
"H1e3St5u67",
"BJg6awLwk4",
"rygq4xgwkN",
"r1e-G4YL1N",
"BkljQHC-T7",
"iclr_2019_HyxnZh0ct7",
"r1xPm1Kah7",
"SklLz3csp7",
"SyedD2rXaQ",
"SJlghKO937",
"Byg2xy8MpX",
"rkevXdCb6m",
"Syegwm5yaQ",
"r1ggxa85nm",
"SkeX8K6thm",
"r... |
iclr_2019_HyxzRsR9Y7 | Learning Self-Imitating Diverse Policies | The success of popular algorithms for deep reinforcement learning, such as policy-gradients and Q-learning, relies heavily on the availability of an informative reward signal at each timestep of the sequential decision-making process. When rewards are only sparsely available during an episode, or rewarding feedback is provided only after episode termination, these algorithms perform sub-optimally due to the difficulty in credit assignment. Alternatively, trajectory-based policy optimization methods, such as the cross-entropy method and evolution strategies, do not require per-timestep rewards, but have been found to suffer from high sample complexity by completely forgoing the temporal nature of the problem. Improving the efficiency of RL algorithms in real-world problems with sparse or episodic rewards is therefore a pressing need. In this work, we introduce a self-imitation learning algorithm that exploits and explores well in the sparse and episodic reward settings. We view each policy as a state-action visitation distribution and formulate policy optimization as a divergence minimization problem. We show that with the Jensen-Shannon divergence, this divergence minimization problem can be reduced to a policy-gradient algorithm with shaped rewards learned from experience replays. Experimental results indicate that our algorithm performs comparably to existing algorithms in environments with dense rewards, and significantly better in environments with sparse and episodic rewards. We then discuss limitations of self-imitation learning, and propose to solve them by using Stein variational policy gradient descent with the Jensen-Shannon kernel to learn multiple diverse policies. We demonstrate its effectiveness on a challenging variant of continuous-control MuJoCo locomotion tasks. | accepted-poster-papers | This paper proposes a reinforcement learning approach that better handles sparse reward environments, by using previously-experienced roll-outs that achieve high reward. The approach is intuitive, and the results in the paper are convincing. The authors addressed nearly all of the reviewers' concerns. The reviewers all agree that the paper should be accepted. | train | [
"Hkgln7Zw14",
"HJxRqW-v14",
"r1g9FkFUk4",
"B1gAK7lT2m",
"r1lt10yBJE",
"rkerMF4K0X",
"HylcXomtC7",
"rJxAv8QYAQ",
"HkxD8U7tCQ",
"H1xJ2emFA7",
"BkeCkTGFAQ",
"H1e1gh6sTX",
"Skx-JJf62Q",
"H1lUeG3vnQ"
] | [
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"We would merge pieces from the Appendix into the main sections for better coherence. Also, we would make our source code and scripts public. ",
"\n1. Experiments in section 3.1 use a parameterized discriminator since a single network suffices for self-imitation. Experiments in section 3.2 use $\\psi$ networks fo... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"r1lt10yBJE",
"r1g9FkFUk4",
"rkerMF4K0X",
"iclr_2019_HyxzRsR9Y7",
"HylcXomtC7",
"H1e1gh6sTX",
"B1gAK7lT2m",
"HkxD8U7tCQ",
"H1lUeG3vnQ",
"Skx-JJf62Q",
"iclr_2019_HyxzRsR9Y7",
"iclr_2019_HyxzRsR9Y7",
"iclr_2019_HyxzRsR9Y7",
"iclr_2019_HyxzRsR9Y7"
] |
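In the self-imitation record above, minimizing the Jensen-Shannon divergence between the policy's visitation distribution and a replay of its own best trajectories yields a GAIL-style shaped reward from a discriminator. Below is a one-function sketch of a common parameterization of that reward; the paper's exact form may differ, and the clipping constant is ours.

import numpy as np

def shaped_reward(d_prob, eps=1e-8):
    # d_prob = D(s, a): discriminator's probability that (s, a) came from
    # the replay of high-return trajectories rather than the current policy.
    # log D - log(1 - D) is positive where the agent resembles its past
    # successes, giving a dense per-timestep signal for policy gradients.
    d_prob = np.clip(d_prob, eps, 1.0 - eps)
    return np.log(d_prob) - np.log(1.0 - d_prob)

print(shaped_reward(np.array([0.9, 0.5, 0.1])))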
iclr_2019_HyzMyhCcK7 | ProxQuant: Quantized Neural Networks via Proximal Operators | To make deep neural networks feasible in resource-constrained environments (such as mobile devices), it is beneficial to quantize models by using low-precision weights. One common technique for quantizing neural networks is the straight-through gradient method, which enables back-propagation through the quantization mapping. Despite its empirical success, little is understood about why the straight-through gradient method works.
Building upon a novel observation that the straight-through gradient method is in fact identical to the well-known Nesterov's dual-averaging algorithm on a quantization-constrained optimization problem, we propose a more principled alternative approach, called ProxQuant, that formulates quantized network training as a regularized learning problem instead and optimizes it via the prox-gradient method. ProxQuant does back-propagation on the underlying full-precision vector and applies an efficient prox-operator in between stochastic gradient steps to encourage quantizedness. For quantizing ResNets and LSTMs, ProxQuant outperforms state-of-the-art results on binary quantization and is on par with state-of-the-art on multi-bit quantization. We further perform theoretical analyses showing that ProxQuant converges to stationary points under mild smoothness assumptions, whereas variants such as the lazy prox-gradient method can fail to converge in the same setting. | accepted-poster-papers | A novel approach for quantized deep neural nets is proposed, which is more principled than the commonly used straight-through gradient method. A theoretical analysis of the algorithm's convergence is presented, and empirical results show advantages of the proposed approach. | test | [
"HJe54G3sRm",
"BklDCt7c37",
"S1eLOUDJn7",
"rJlGBiktAX",
"HJxhpp6DAm",
"BJxebk0vA7",
"HyxI9lvQAX",
"r1gpSKtvam",
"rygyZPnvpX",
"SJxojtYwpm",
"HJgzFtYvaX",
"HJeb-tr93Q",
"rJe9XiOgq7",
"HJ7xolRtX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"Thank you for the quick response after the rebuttal. We respond to the added comments in the following.\n\n--- “Novelty is limited”\nFirst, we would like to clarify that the difference between our method and BNN are two-fold: our method is a {non-lazy, soft} prox-gradient method whereas BNN (BinaryConnect) is {laz... | [
-1,
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1
] | [
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1
] | [
"S1eLOUDJn7",
"iclr_2019_HyzMyhCcK7",
"iclr_2019_HyzMyhCcK7",
"iclr_2019_HyzMyhCcK7",
"iclr_2019_HyzMyhCcK7",
"HyxI9lvQAX",
"r1gpSKtvam",
"BklDCt7c37",
"rJe9XiOgq7",
"HJeb-tr93Q",
"S1eLOUDJn7",
"iclr_2019_HyzMyhCcK7",
"HJ7xolRtX",
"iclr_2019_HyzMyhCcK7"
] |
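The prox-operator at the heart of the ProxQuant record above has a simple closed form for an L1-type binary regularizer: soft-threshold each weight toward its nearest point in {-1, +1}. The sketch below illustrates this one variant only; the paper also treats other regularizers and multi-bit quantization, and typically grows the regularization strength over training.

import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def prox_binary(theta, lam):
    # Proximal map of r(theta) = lam * |theta - sign(theta)|: pull each
    # weight toward its nearest quantization point in {-1, +1}.
    # (sign(0) = 0 leaves exact zeros unmoved; a tie-break rule would fix
    # that corner case.)
    q = np.sign(theta)
    return q + soft_threshold(theta - q, lam)

theta = np.array([-1.7, -0.2, 0.05, 0.9, 2.3])
print(prox_binary(theta, lam=0.3))   # applied in between stochastic gradient steps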
iclr_2019_HyzdRiR9Y7 | Universal Transformers | Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset. | accepted-poster-papers | This paper presents Universal Transformers, which generalize Transformers with recurrent connections. The goal of Universal Transformers is to combine the strength of feed-forward convolutional architectures (parallelizability and global receptive fields) with the strength of recurrent neural networks (sequential inductive bias). In addition, the paper investigates a dynamic halting scheme (by adapting Adaptive Computation Time (ACT) of Graves 2016) to allow each individual subsequence to stop recurrent computation dynamically.
Pros:
The paper presents a new generalized architecture that brings reasonable novelty over the previous Transformers when combined with the dynamic halting scheme. Empirical results are reasonably comprehensive, and the codebase is publicly available.
Cons:
Unlike RNNs, the network recurs T times over the entire sequence of length M; it is therefore not a literal combination of Transformers with RNNs, but only inspired by RNNs. Thus the proposed architecture does not precisely replicate the sequential inductive bias of RNNs. Furthermore, depending on how one views it, the network architecture is not entirely novel in that it is reminiscent of the previous memory network extensions with multi-hop reasoning (--- a point raised by R1 and R2). While several datasets are covered in the empirical study, the selected datasets may be biased toward simpler/easier tasks (--- R1).
Verdict:
While key ideas might not be entirely novel (R1/R2), the novelty comes from the fact that these ideas have not been combined and experimented with in this exact form of Universal Transformers (with optional dynamic halting/ACT), and that the empirical results are reasonably broad and strong, while not entirely impressive (R1). Sufficient novelty and substance overall, and no issues that are dealbreakers. | train | [
"SylL9Yz1lN",
"rkginvfklN",
"rklvRIQR1N",
"HyxfZDmCk4",
"r1xW6d1jCX",
"rkxMwMsFn7",
"Hyx3t4h5A7",
"Skl0xm35CX",
"SkxrQ435AQ",
"B1luhGh90X",
"ByewREh90m",
"SkeCBTIYCQ",
"SklQ8hSt07",
"BkgUZgHKCX",
"Sye8Myd937",
"ByeMxPX9nm"
] | [
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"This is incorrect. Please see our response to the same comment with the heading \"Potentially wrong claim in this paper\".",
"Thanks for your comment.\n\nThe main point here is that in [1] the authors assume arbitrary-precision arithmetic, as clarified in their responses on OpenReview where they noted \"Our proo... | [
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"HyxfZDmCk4",
"rklvRIQR1N",
"iclr_2019_HyzdRiR9Y7",
"ByeMxPX9nm",
"SkxrQ435AQ",
"iclr_2019_HyzdRiR9Y7",
"rkxMwMsFn7",
"ByeMxPX9nm",
"rkxMwMsFn7",
"Sye8Myd937",
"Sye8Myd937",
"SklQ8hSt07",
"BkgUZgHKCX",
"iclr_2019_HyzdRiR9Y7",
"iclr_2019_HyzdRiR9Y7",
"iclr_2019_HyzdRiR9Y7"
] |
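To make the recurrence in the Universal Transformer record above concrete: one shared block (self-attention plus a position-wise transition) is applied T times to all positions in parallel, with weights tied across steps. Below is a toy single-head sketch that omits layer normalization, multi-head projections, halting, and masking; all sizes and initializations are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d = 16
Wq, Wk, Wv, Wf = (rng.normal(size=(d, d), scale=d**-0.5) for _ in range(4))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ut_step(h):
    # Self-attention over all positions at once (the parallel-in-time part),
    # then a position-wise ReLU transition; both with residual connections.
    att = softmax((h @ Wq) @ (h @ Wk).T / np.sqrt(d)) @ (h @ Wv)
    h = h + att
    return h + np.maximum(h @ Wf, 0.0)

h = rng.normal(size=(7, d))   # a length-7 sequence of d-dim states
for _ in range(4):            # "depth" = number of recurrent steps T, weights tied
    h = ut_step(h)
print(h.shape)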
iclr_2019_HyztsoC5Y7 | Learning to Adapt in Dynamic, Real-World Environments through Meta-Reinforcement Learning | Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time. Given that it is impractical to train separate policies to accommodate all situations the agent may see in the real world, this work proposes to learn how to quickly and effectively adapt online to new tasks. To enable sample-efficient learning, we consider learning online adaptation in the context of model-based reinforcement learning. Our approach uses meta-learning to train a dynamics model prior such that, when combined with recent data, this prior can be rapidly adapted to the local context. Our experiments demonstrate online adaptation for continuous control tasks on both simulated and real-world agents. We first show simulated agents adapting their behavior online to novel terrains, crippled body parts, and highly-dynamic environments. We also illustrate the importance of incorporating online adaptation into autonomous agents that operate in the real world by applying our method to a real dynamic legged millirobot: We demonstrate the agent's learned ability to quickly adapt online to a missing leg, adjust to novel terrains and slopes, account for miscalibration or errors in pose estimation, and compensate for pulling payloads. | accepted-poster-papers | The authors consider the use of MAML with model-based RL and apply this to robotics tasks with very encouraging results. There was definite interest in the paper, but also some concerns over how the results were situated, particularly with respect to the related research in the robotics community. The authors are strongly encouraged to carefully consider this feedback, as they have been doing in their responses, and address this as well as possible in the final version.
| test | [
"ByxoVJewyN",
"S1lzTCLUJN",
"BJxo8zPU67",
"rkl7x9ea3X",
"r1eBKWeYCX",
"SJeiiK_NAm",
"rkxmhBF10Q",
"ByeKOSYJRm",
"H1x4xrK1Rm",
"SJeknkUiTQ",
"Skgo5yLsT7",
"ryx-_D6z67",
"rJlYYIcC2X",
"BJeoWBEghm",
"S1gLnA52oQ"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"public"
] | [
"In the real-robot data collection is expensive, training a single model for the different terrains and conditions allows us to make a more efficient use of the data. Instead, in simulation we have separate experiments to have a more controlled comparison.\n\nThe task distribution during training and testing does i... | [
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1
] | [
-1,
-1,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1
] | [
"S1lzTCLUJN",
"iclr_2019_HyztsoC5Y7",
"iclr_2019_HyztsoC5Y7",
"iclr_2019_HyztsoC5Y7",
"iclr_2019_HyztsoC5Y7",
"ByeKOSYJRm",
"rkl7x9ea3X",
"SJeknkUiTQ",
"BJxo8zPU67",
"Skgo5yLsT7",
"ryx-_D6z67",
"rJlYYIcC2X",
"iclr_2019_HyztsoC5Y7",
"S1gLnA52oQ",
"iclr_2019_HyztsoC5Y7"
] |
iclr_2019_S1E3Ko09F7 | L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data | Instancewise feature scoring is a method for model interpretation, which yields, for each test instance, a vector of importance scores associated with features. Methods based on the Shapley score have been proposed as a fair way of computing feature attributions, but incur an exponential complexity in the number of features. This combinatorial explosion arises from the definition of Shapley value and prevents these methods from being scalable to large data sets and complex models. We focus on settings in which the data have a graph structure, and the contribution of features to the target variable is well-approximated by a graph-structured factorization. In such settings, we develop two algorithms with linear complexity for instancewise feature importance scoring on black-box models. We establish the relationship of our methods to the Shapley value and a closely related concept known as the Myerson value from cooperative game theory. We demonstrate on both language and image data that our algorithms compare favorably with other methods using both quantitative metrics and human evaluation. | accepted-poster-papers | The paper presents two new methods for model-agnostic interpretation of instance-wise feature importance.
Pros:
Unlike previous approaches based on the Shapley value, which had an exponential complexity in the number of features, the proposed methods have linear complexity when the data have a graph structure, which allows an approximation based on graph-structured factorization. The proposed methods present solid technical novelty in studying the important challenge of instance-wise, model-agnostic, linear-complexity interpretation of features.
Cons:
All reviewers wanted to see more extensive experimental results. The authors responded with most of the requested experiments. One issue raised by R3 was the need for comparing the proposed model-agnostic methods to existing model-specific methods. The proposed linear-complexity algorithm relies on the Markov assumption, which some reviewers commented may be an invalid assumption to make, but this does not seem to be a deal breaker since it is a relatively common assumption to make when deriving a polynomial-complexity approximation algorithm.
Verdict:
Accept. Solid technical novelty with convincing empirical results. | train | [
"HyenmOMRoX",
"HygvWsscTX",
"B1xadhicT7",
"rkeAS2s5pm",
"SyloPoi96m",
"Hyep4ji56Q",
"SyxgP3dn2Q",
"SkeChVXt27",
"rylA6HRv57",
"BkgYeky15Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"The paper proposes two approximations to the Shapley value used for generating feature scores for interpretability. Both exploit a graph structure over the features by considering only subsets of neighborhoods of features (rather than all subsets). The authors give some approximation guarantees under certain Marko... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
7,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
2,
-1,
-1
] | [
"iclr_2019_S1E3Ko09F7",
"iclr_2019_S1E3Ko09F7",
"HyenmOMRoX",
"HyenmOMRoX",
"SkeChVXt27",
"SyxgP3dn2Q",
"iclr_2019_S1E3Ko09F7",
"iclr_2019_S1E3Ko09F7",
"BkgYeky15Q",
"iclr_2019_S1E3Ko09F7"
] |
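For reference alongside the record above, the exact Shapley value of feature i under a value function v over feature set N is

\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr),
\]

whose sum over all subsets is what causes the exponential cost. Roughly speaking, L-Shapley restricts the subsets S to lie inside a k-hop graph neighborhood of feature i, and C-Shapley further restricts them to connected subgraphs; this is a paraphrase for orientation only, and the paper should be consulted for the precise definitions and weighting.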
iclr_2019_S1EERs09YQ | Discovery of Natural Language Concepts in Individual Units of CNNs | Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret. In particular, little is known about how they represent language in their intermediate layers. In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns. In order to quantitatively analyze such an intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text. We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language. | accepted-poster-papers | Important problem (making NN more transparent); reasonable approach for identifying which linguistic concepts different neurons are sensitive to; rigorous experiments. Paper was reviewed by three experts. Initially there were some concerns, but after the author response and reviewer discussion, all three unanimously recommend acceptance. | val | [
"rklw7IW-xN",
"rkeFc0bDkV",
"BJeMqyfch7",
"HkxBOdwqAm",
"SJlEFDw9AQ",
"Ske8LPPq0m",
"Bkx4UOPqRQ",
"SyxDYjcq2m",
"rke4auot2Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nWe are deeply grateful to reviewer3 for thoughtful post-rebuttal suggestions. We will clarify terminology, add more analyses and modify the figures accordingly. For example, we will match the detected concepts with those in WordNet (ConceptNet) tree and update Fig 7 and Fig 14 to show which concepts are detected... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"BJeMqyfch7",
"SJlEFDw9AQ",
"iclr_2019_S1EERs09YQ",
"Bkx4UOPqRQ",
"SyxDYjcq2m",
"rke4auot2Q",
"BJeMqyfch7",
"iclr_2019_S1EERs09YQ",
"iclr_2019_S1EERs09YQ"
] |
iclr_2019_S1EHOsC9tX | Towards the first adversarially robust neural network model on MNIST | Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans. We show that even the widely recognized and by far most successful L-inf defense by Madry et al. (1) has lower L0 robustness than undefended networks and is still highly susceptible to L2 perturbations, (2) classifies unrecognizable images with high certainty, (3) performs not much better than simple input binarization and (4) features adversarial perturbations that make little sense to humans. These results suggest that MNIST is far from being solved in terms of adversarial robustness. We present a novel robust classification model that performs analysis by synthesis using learned class-conditional data distributions. We derive bounds on the robustness and go to great lengths to empirically evaluate our model using maximally effective adversarial attacks by (a) applying decision-based, score-based, gradient-based and transfer-based attacks for several different Lp norms, (b) designing a new attack that exploits the structure of our defended model and (c) devising a novel decision-based attack that seeks to minimize the number of perturbed pixels (L0). The results suggest that our approach yields state-of-the-art robustness on MNIST against L0, L2 and L-inf perturbations and we demonstrate that most adversarial examples are strongly perturbed towards the perceptual boundary between the original and the adversarial class. | accepted-poster-papers | The paper presents a technique of training robust classification models that uses the input distribution within each class to achieve high accuracy and robustness against adversarial perturbations.
Strengths:
- The resulting model offers good robustness guarantees for a wide range of norm-bounded perturbations
- The authors put a lot of care into the robustness evaluation
Weaknesses:
- Some of the "shortcomings" attributed to the previous work seem confusing, as the reported vulnerability corresponds to threat models that the previous work did not make claims about
Overall, this looks like a valuable and interesting contribution.
| train | [
"B1eu6wFIyN",
"r1lb0cYI1E",
"S1xZgzdYJE",
"BJx0OiRP1E",
"SylDJKFLk4",
"SyxI1zFUkE",
"rJlyOldp0X",
"HJgw5VPpAQ",
"r1eCatzaRm",
"ByeIeWEi0m",
"SJxNatjURX",
"SJegx_sU0X",
"H1xX7jsIAX",
"r1lfWKsIA7",
"SyeyTVeH0Q",
"H1lxliTMCX",
"BygGXkaqT7",
"SklgCVqq2Q",
"rylr8jLc3X",
"B1laSlHch7"... | [
"public",
"public",
"author",
"public",
"public",
"public",
"author",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author... | [
"I concur. Fashion-MNIST is a necessary datasets, which is similar to MNIST. Why not to choose Fashion-MNIST for analysis. The fact that the method performs well on MNIST is nice, but MNIST should be considered for what it is: a toy dataset. ",
"Why the authors choose these two classes (airplane, automobile) in... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"SyxI1zFUkE",
"SJxNatjURX",
"BJx0OiRP1E",
"iclr_2019_S1EHOsC9tX",
"SJegx_sU0X",
"SJegx_sU0X",
"SyeyTVeH0Q",
"r1eCatzaRm",
"ByeIeWEi0m",
"iclr_2019_S1EHOsC9tX",
"B1laSlHch7",
"SklgCVqq2Q",
"iclr_2019_S1EHOsC9tX",
"rylr8jLc3X",
"H1lxliTMCX",
"BygGXkaqT7",
"iclr_2019_S1EHOsC9tX",
"icl... |
iclr_2019_S1GkToR5tm | Discriminator Rejection Sampling | We propose a rejection sampling scheme using the discriminator of a GAN to
approximately correct errors in the GAN generator distribution. We show that
under quite strict assumptions, this will allow us to recover the data distribution
exactly. We then examine where those strict assumptions break down and design a
practical algorithm—called Discriminator Rejection Sampling (DRS)—that can be
used on real data-sets. Finally, we demonstrate the efficacy of DRS on a mixture of
Gaussians and on the state of the art SAGAN model. On ImageNet, we train an
improved baseline that increases the best published Inception Score from 52.52 to
62.36 and reduces the Frechet Inception Distance from 18.65 to 14.79. We then use
DRS to further improve on this baseline, improving the Inception Score to 76.08
and the FID to 13.75. | accepted-poster-papers | The paper proposes a discriminator-dependent rejection sampling scheme for improving the quality of samples from a trained GAN. The paper is clearly written, presents an interesting idea, and the authors extended and improved the experimental analyses as suggested by the reviewers. | train | [
"BkeHrKU1kE",
"r1lSsOUk1E",
"BkgY088kJV",
"Ske86fcY0Q",
"BJgZuI6m0m",
"SyxH1nd7R7",
"r1e5iSqf6X",
"r1gYHA1-a7",
"SkeXZRk-Tm",
"SylUYa1bpQ",
"SklRqc0yTQ",
"rJgNPOjOnX"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Please see this comment: https://openreview.net/forum?id=S1GkToR5tm¬eId=SyxH1nd7R7 or the updated PDF for experimental results on (what we think is) the simpler rejection scheme you mention. \n\nPlease also let us know if there's anything else you think we can do to improve the paper quality.",
"Thanks very m... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
3,
4
] | [
"SklRqc0yTQ",
"r1e5iSqf6X",
"Ske86fcY0Q",
"SkeXZRk-Tm",
"SyxH1nd7R7",
"iclr_2019_S1GkToR5tm",
"iclr_2019_S1GkToR5tm",
"iclr_2019_S1GkToR5tm",
"rJgNPOjOnX",
"SklRqc0yTQ",
"iclr_2019_S1GkToR5tm",
"iclr_2019_S1GkToR5tm"
] |
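A simplified picture of the rejection rule in the record above: for an ideal discriminator with logit d(x), the density ratio p_data(x)/p_g(x) equals exp(d(x)), so generated samples can be accepted with probability proportional to that ratio. The sketch below omits the paper's numerically stabilized sigmoid form and dynamic gamma tuning, and the variable names are ours.

import numpy as np

def drs_accept(logits, gamma=0.0):
    # Accept generated x with prob. ~ exp(d(x) - d_max - gamma), where d_max
    # is an empirical estimate of the maximum logit; gamma trades acceptance
    # rate against sample quality.
    p_accept = np.clip(np.exp(logits - logits.max() - gamma), 0.0, 1.0)
    return np.random.rand(len(logits)) < p_accept

fake_logits = np.random.randn(1000)       # stand-in discriminator logits on samples
kept = drs_accept(fake_logits, gamma=0.1)
print(kept.mean())                        # fraction of samples surviving rejection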
iclr_2019_S1M6Z2Cctm | Harmonic Unpaired Image-to-image Translation | The recent direction of unpaired image-to-image translation is on one hand very exciting as it alleviates the big burden in obtaining label-intensive pixel-to-pixel supervision, but it is on the other hand not fully satisfactory due to the presence of artifacts and degenerated transformations. In this paper, we take a manifold view of the problem by introducing a smoothness term over the sample graph to attain harmonic functions to enforce consistent mappings during the translation. We develop HarmonicGAN to learn bi-directional translations between the source and the target domains. With the help of similarity-consistency, the inherent self-consistency property of samples can be maintained. Distance metrics defined on two types of features including histogram and CNN are exploited. Under an identical problem setting as CycleGAN, without additional manual inputs and only at a small training-time cost, HarmonicGAN demonstrates a significant qualitative and quantitative improvement over the state of the art, as well as improved interpretability. We show experimental results in a number of applications including medical imaging, object transfiguration, and semantic labeling. We outperform the competing methods in all tasks, and for a medical imaging task in particular our method turns CycleGAN from a failure to a success, halving the mean-squared error, and generating images that radiologists prefer over competing methods in 95% of cases. | accepted-poster-papers | The paper introduces a method for unsupervised image-to-image mapping, adding a new term to the objective function that enforces consistency in similarity between image patches across domains. Reviewers left constructive and detailed comments, which the authors have made substantial efforts to address.
Reviewers have ranked the paper as borderline, and in the Area Chair's opinion, most major issues have been addressed:
- R3&R2: Novelty compared to DistanceGAN/CRF limited: authors have clarified contributions in reference to DistanceGAN/CRF and demonstrated improved performance relative to several datasets.
- R3&R1: Evaluation on additional datasets required: authors added evaluation on 4 more tasks
- R3&R1: Details missing: authors added details.
| train | [
"Sye1L1Bn37",
"r1xN-MahTm",
"S1xNXGahT7",
"HJxUobThTQ",
"SJeW-lphpQ",
"Hkl52g6nam",
"H1xHSb636X",
"rJgUEgahpm",
"H1lGEHeonm",
"SylYVmZKnm",
"B1xpkXvb2Q",
"ryeFogT13Q",
"H1l5el1hjX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public"
] | [
"This paper proposes a method called HarmonicGAN for unpaired image-to-image translation. The key idea is to introduce a regularization term on the basis of CycleGAN, which encourages similar image patches to acquire similar transformations. Two feature domains are explored for evaluating the patch-level similarit... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
-1,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
-1,
-1,
-1
] | [
"iclr_2019_S1M6Z2Cctm",
"SylYVmZKnm",
"r1xN-MahTm",
"H1xHSb636X",
"Sye1L1Bn37",
"H1lGEHeonm",
"Hkl52g6nam",
"SJeW-lphpQ",
"iclr_2019_S1M6Z2Cctm",
"iclr_2019_S1M6Z2Cctm",
"ryeFogT13Q",
"H1l5el1hjX",
"iclr_2019_S1M6Z2Cctm"
] |
iclr_2019_S1VWjiRcKX | Universal Successor Features Approximators | The ability of a reinforcement learning (RL) agent to learn about many reward functions at the same time has many potential benefits, such as the decomposition of complex tasks into simpler ones, the exchange of information between tasks, and the reuse of skills. We focus on one aspect in particular, namely the ability to generalise to unseen tasks. Parametric generalisation relies on the interpolation power of a function approximator that is given the task description as input; among its most common forms are universal value function approximators (UVFAs). Another way to generalise to new tasks is to exploit structure in the RL problem itself. Generalised policy improvement (GPI) combines solutions of previous tasks into a policy for the unseen task; this relies on instantaneous policy evaluation of old policies under the new reward function, which is made possible through successor features (SFs). Our proposed universal successor features approximators (USFAs) combine the advantages of all of these, namely the scalability of UVFAs, the instant inference of SFs, and the strong generalisation of GPI. We discuss the challenges involved in training a USFA, its generalisation properties and demonstrate its practical benefits and transfer abilities on a large-scale domain in which the agent has to navigate in a first-person perspective three-dimensional environment. | accepted-poster-papers | This paper addresses an important and more realistic setting of multi-task RL where the reward function changes; the approach is elegant, and empirical results are convincing. The paper presents an important contribution to the challenging multi-task RL problem. | train | [
"HJllwrS8aQ",
"SJe_R952hm",
"rJxFXMyF2X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The goal here is multi-task learning and generalization, assuming that the expected one-step reward for any member of the task family can be written as $\\phi(s,a,s')^T w$. The authors propose universal successor features (USF) $\\psi$s, such that the action-value functions Q can be written as $Q(s,a,w,z)=\\psi(s,... | [
7,
5,
6
] | [
3,
2,
4
] | [
"iclr_2019_S1VWjiRcKX",
"iclr_2019_S1VWjiRcKX",
"iclr_2019_S1VWjiRcKX"
] |
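The generalised policy improvement step underlying the record above is a pair of maximisations: evaluate every stored policy's successor features under the new task's reward weights w, then act greedily with respect to the best. A minimal sketch follows; all shapes are illustrative assumptions.

import numpy as np

def gpi_action(psi, w):
    # psi[k, a] = successor features of stored policy k for action a at the
    # current state, so psi @ w instantly re-evaluates every old policy
    # under the new task's reward weights w; GPI takes the best of the best.
    q = psi @ w                        # (n_policies, n_actions)
    return int(q.max(axis=0).argmax())

psi = np.random.randn(3, 5, 8)         # 3 stored policies, 5 actions, 8-dim features
w = np.random.randn(8)
print(gpi_action(psi, w))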
iclr_2019_S1eK3i09YQ | Gradient Descent Provably Optimizes Over-parameterized Neural Networks | One of the mysteries in the success of neural networks is that randomly initialized first-order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU-activated neural networks. For a shallow neural network with m hidden nodes, ReLU activation and n training data, we show that, as long as m is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function.
Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum. We believe these insights are also useful in analyzing deep models and other first-order methods. | accepted-poster-papers | This paper proves that gradient descent with random initialization converges to global minima for a squared loss penalty over a two-layer ReLU network and arbitrarily labeled data. The paper has several weaknesses, such as: 1) assuming the top layer is fixed, 2) the large number of hidden units m, 3) the analysis being restricted to the squared loss. Despite these weaknesses, the paper makes a novel contribution to a relatively challenging problem, and is able to show convergence results without strong assumptions on the input data or the model. Reviewers find the results mostly interesting and have some concerns about the \lambda_0 requirement. I believe the authors have sufficiently addressed this issue in their response and I suggest acceptance. | train | [
"HygVLQBmkV",
"HJeK5b6f14",
"HygOMNNqhQ",
"S1es44k-JN",
"ryeydjfWJE",
"BJeygSVrTm",
"SkxPEuIhCQ",
"BkgphZviC7",
"B1lTu7zsC7",
"BylyV4KqRX",
"r1gzcd-5aX",
"H1llXbvcC7",
"rJgDnCeICm",
"Skeo1ukt07",
"B1l4CQpeAm",
"BkxSoWagAm",
"rygEFV6l0m",
"HylJ7E6xAm",
"HJg5_7pe0m",
"HJlyYM6gCQ"... | [
"public",
"author",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"official_reviewer",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",... | [
"Though assuming w fixed and only randomness of w(0), the random event still depends on w. I believe Lemma 3.2 actually proved that Prob[H(w) eigenvalues are lower bounded]>1 -delta, for any fixed w. But what is used in the latter proof seems to be Prob[for any fixed w, H(w) eigenvalues are lower bounded]>1 -delta.... | [
-1,
-1,
8,
-1,
-1,
8,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"ryeydjfWJE",
"ryeydjfWJE",
"iclr_2019_S1eK3i09YQ",
"rJgDnCeICm",
"B1l4CQpeAm",
"iclr_2019_S1eK3i09YQ",
"BkgphZviC7",
"iclr_2019_S1eK3i09YQ",
"r1gzcd-5aX",
"H1llXbvcC7",
"iclr_2019_S1eK3i09YQ",
"Skeo1ukt07",
"iclr_2019_S1eK3i09YQ",
"rJgDnCeICm",
"BJeygSVrTm",
"iclr_2019_S1eK3i09YQ",
... |
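The \lambda_0 that the reviewers debate above is the minimum eigenvalue of a data-dependent Gram matrix induced by the ReLU activation. The sketch below shows the standard form of this matrix, assuming unit-norm inputs; the constants follow the usual presentation of this line of analysis and may simplify the paper's exact setup.

```python
import numpy as np

def lambda_0(X):
    """Minimum eigenvalue of the limiting ReLU Gram matrix
    H_ij = (x_i . x_j) * (pi - arccos(x_i . x_j)) / (2 * pi),
    for unit-norm rows of X. 'No two inputs are parallel' keeps this
    strictly positive, which drives the claimed linear rate
    ||y - u(k)||^2 <= (1 - eta * lambda_0 / 2)**k * ||y - u(0)||^2."""
    G = np.clip(X @ X.T, -1.0, 1.0)           # pairwise inner products
    H = G * (np.pi - np.arccos(G)) / (2 * np.pi)
    return float(np.linalg.eigvalsh(H).min())
```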
iclr_2019_S1eOHo09KX | Opportunistic Learning: Budgeted Cost-Sensitive Learning from Data Streams | In many real-world learning scenarios, features are only acquirable at a cost constrained under a budget. In this paper, we propose a novel approach for cost-sensitive feature acquisition at prediction time. The suggested method acquires features incrementally based on a context-aware feature-value function. We formulate the problem in the reinforcement learning paradigm, and introduce a reward function based on the utility of each feature. Specifically, MC dropout sampling is used to measure expected variations of the model uncertainty, which serves as a feature-value function. Furthermore, we suggest sharing representations between the class predictor and value function estimator networks. The suggested approach is completely online and is readily applicable to stream learning setups. The solution is evaluated on three different datasets, including the well-known MNIST dataset as a benchmark as well as two cost-sensitive datasets: Yahoo Learning to Rank and a dataset in the medical domain for diabetes classification. According to the results, the proposed method is able to efficiently acquire features and make accurate predictions. | accepted-poster-papers | This paper presents a reinforcement learning approach for online cost-aware feature acquisition. The utility of each feature is measured in terms of expected variations of the model uncertainty (using MC dropout sampling as an estimate of certainty), which is subsequently used as a reward function in the reinforcement learning formulation. The empirical evaluations show improvements over prior approaches in terms of accuracy-cost trade-off on three datasets. The AC can confirm that all three reviewers have read the author responses and have significantly contributed to the revision of the manuscript.
Initially, R1 and R2 raised important concerns regarding low technical novelty. R1 requested an ablation study to understand which of the following components gives the most improvement: 1) using proper certainty estimation; 2) using immediate reward; 3) the new policy architecture. The authors addressed the ablation study in their rebuttal and confirmed that MC-dropout certainty plays a crucial role in the performance of the proposed method. R1 subsequently increased the assigned score to 6. R2 raised concerns about related prior work (Contardo et al., 2016), which similarly evaluates the most informative features given budget constraints with a recurrent neural network approach. After a long discussion and a detailed rebuttal, R2 upgraded the rating from below the threshold to 7, albeit acknowledging an incremental technical contribution. R3 raised important concerns regarding presentation clarity that were subsequently addressed by the authors. In conclusion, all three reviewers were convinced by the authors' rebuttal and have upgraded their initial ratings, and the AC recommends acceptance of this paper – congratulations to the authors!
| train | [
"H1g0t0mMnQ",
"rygaWIL0h7",
"SkeepuOYC7",
"Hkeo37zYAm",
"HJeqWMc_0Q",
"rkejR-qOCm",
"HJgr7KZahQ",
"H1g_9mF_07",
"SJlpP-Kd0m",
"BJg5AgFOC7",
"HyeK0TddCm",
"BJxJxn2l0Q",
"HkgGRh3gAQ",
"rJelzRnl0X",
"HyeFo62e07",
"HklsxJ6e07",
"rJexCC2xR7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper presents a novel method for budgeted cost sensitive learning from Data Streams.\nThis paper seems very similar to the work of Contrado’s RADIN algorithm which similarly evaluates sequential datapoints with a recurrent neural network by adaptively “purchasing” the most valuable features for the current d... | [
7,
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_S1eOHo09KX",
"iclr_2019_S1eOHo09KX",
"Hkeo37zYAm",
"HkgGRh3gAQ",
"H1g_9mF_07",
"SJlpP-Kd0m",
"iclr_2019_S1eOHo09KX",
"BJg5AgFOC7",
"HyeFo62e07",
"HyeK0TddCm",
"rJelzRnl0X",
"iclr_2019_S1eOHo09KX",
"rygaWIL0h7",
"HJgr7KZahQ",
"HJgr7KZahQ",
"H1g0t0mMnQ",
"H1g0t0mMnQ"
] |
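The feature-value function in the record above rests on MC dropout as a certainty estimate, the component the ablation study found crucial. A sketch of that estimate follows, assuming `model` is any torch module containing dropout layers; the paper's reward is then built from the expected change in this uncertainty, traded off against each feature's acquisition cost.

```python
import torch

def mc_dropout_uncertainty(model, x, T=30):
    """Certainty estimation via MC dropout: run T stochastic forward
    passes with dropout left active and use the spread of the sampled
    softmax outputs as the uncertainty signal."""
    model.train()                        # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(T)])
    return probs.mean(0), probs.var(0).sum(-1)   # prediction, uncertainty
```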
iclr_2019_S1eYHoC5FX | DARTS: Differentiable Architecture Search | This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. | accepted-poster-papers | This paper introduces a very simple but effective method for the neural architecture search problem. The key idea of the method is a particular continuous relaxation of the architecture representation to enable gradient descent-like differentiable optimization. Results are quite good. Source code is also available. A concern of the approach is the (possibly large) integrality gap between the continuous solution and the discretized architecture. The solution provided in the paper is a heuristic without guarantees. Overall, this is a good paper. I recommend acceptance. | train | [
"HklMs3OuxE",
"H1l5BS-ueV",
"Syed9oKDgV",
"SJesF7dIg4",
"HklGrBN8gE",
"B1l8_f9Qx4",
"H1eNn09uAm",
"HyxF8gidA7",
"H1gA2JsOAQ",
"rkgEU1jdCX",
"r1lmv05d0Q",
"Byea04oQ0Q",
"S1gIGlGbT7",
"rJeh6xB5nQ",
"r1ekErZ53Q",
"HJg9ETOFn7",
"HyeSHfVcnQ",
"B1eNZMVcnX",
"SkxmNjGq27",
"HJek5PHFh7"... | [
"public",
"author",
"public",
"author",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"public",
"author",
"public",
"public",
"author... | [
"At least, you should provide experimental results without the wired strategy. I think it is a big problem for the literature, it will make the future NAS work confuses on whether to use your \"strategy\".",
"Dear Reviewers,\n\nIn response to the negative anonymous comments that we have received, we would like to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"H1l5BS-ueV",
"iclr_2019_S1eYHoC5FX",
"SJesF7dIg4",
"HklGrBN8gE",
"iclr_2019_S1eYHoC5FX",
"iclr_2019_S1eYHoC5FX",
"rJeh6xB5nQ",
"HJg9ETOFn7",
"rkgEU1jdCX",
"r1ekErZ53Q",
"iclr_2019_S1eYHoC5FX",
"iclr_2019_S1eYHoC5FX",
"iclr_2019_S1eYHoC5FX",
"iclr_2019_S1eYHoC5FX",
"iclr_2019_S1eYHoC5FX"... |
iclr_2019_S1ecm2C9K7 | Feature-Wise Bias Amplification | We study the phenomenon of bias amplification in classifiers, wherein a machine learning model learns to predict classes with a greater disparity than the underlying ground truth. We demonstrate that bias amplification can arise via inductive bias in gradient descent methods, resulting in overestimation of the importance of moderately predictive ``weak'' features if insufficient training data is available. This overestimation gives rise to feature-wise bias amplification -- a previously unreported form of bias that can be traced back to the features of a trained model. Through analysis and experiments, we show that while some bias cannot be mitigated without sacrificing accuracy, feature-wise bias amplification can be mitigated through targeted feature selection. We present two new feature selection algorithms for mitigating bias amplification in linear models, and show how they can be adapted to convolutional neural networks efficiently. Our experiments on synthetic and real data demonstrate that these algorithms consistently lead to reduced bias without harming accuracy, in some cases eliminating predictive bias altogether while providing modest gains in accuracy. | accepted-poster-papers | The authors identify a source of bias that occurs when a model overestimates the importance of weak features in the regime where sufficient training data is not available. The bias is characterized theoretically, and demonstrated on synthetic and real datasets. The authors then present two algorithms to mitigate this bias, and demonstrate that they are effective in experimental evaluations.
As noted by the reviewers, the work is well-motivated and clearly presented. Given the generally positive reviews, the AC recommends that the work be accepted. The authors should consider adding additional text describing the details concerning Figure 3 in the appendix.
| test | [
"Hyxcbl9SyV",
"SJlKHfoNJN",
"SkeSFyHzyN",
"B1l3c_5e14",
"rkek5d6R0Q",
"r1ei7VroCX",
"r1xgFKumnQ",
"S1x3UjW7RQ",
"ByelNylQ0m",
"S1xctjvOaQ",
"SJxsEivOaQ",
"S1g-livup7",
"H1lSw16ch7",
"BkldIeZq27"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your continued feedback. We ran the experiment suggested, where \\mu_1 = (1,0,1,0,1,...,0), and this results in no systematic bias (with a setup similar to that of Figure 2(a), but with 200 weak features - 100 per class - and N=1000, the average bias over 100 trials was 0.00031, which would round to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"SkeSFyHzyN",
"B1l3c_5e14",
"S1g-livup7",
"S1xctjvOaQ",
"r1ei7VroCX",
"S1x3UjW7RQ",
"iclr_2019_S1ecm2C9K7",
"ByelNylQ0m",
"SJxsEivOaQ",
"r1xgFKumnQ",
"BkldIeZq27",
"H1lSw16ch7",
"iclr_2019_S1ecm2C9K7",
"iclr_2019_S1ecm2C9K7"
] |
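As a rough illustration of the quantity the bias-amplification record above studies (for binary labels): bias amplification is the amount by which the model's predicted class disparity exceeds the disparity already present in the ground truth. The metric below is a simplified assumption for illustration only; the paper's formal definition is richer.

```python
import numpy as np

def bias_amplification(y_true, y_pred):
    """Gap between the positive-class rate the model predicts and the
    positive-class rate in the ground truth; a positive value means
    the classifier exaggerates the class disparity beyond the data."""
    return float(np.mean(y_pred) - np.mean(y_true))
```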
iclr_2019_S1erHoR5t7 | The relativistic discriminator: a key element missing from standard GAN | In standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real. The generator is trained to increase the probability that fake data is real. We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs.
We show that this property can be induced by using a relativistic discriminator which estimates the probability that the given real data is more realistic than randomly sampled fake data. We also present a variant in which the discriminator estimates the probability that the given real data is more realistic than fake data, on average. We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that IPM-based GANs are a subset of RGANs which use the identity function.
Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher-quality data samples than their non-relativistic counterparts, 2) standard RaGAN with gradient penalty generates data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken for reaching the state of the art by 400%), and 3) RaGANs are able to generate plausible high-resolution images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization. | accepted-poster-papers | All reviewers agree that the relativistic discriminator is an interesting idea, and a useful proposal to improve the stability and sample quality of GANs. In earlier drafts there were some clarity issues and missing details, but those have been fixed to the satisfaction of the reviewers. Both R1 and R3 expressed a desire for a more theoretical justification of why the relativistic discriminator should work better, but the empirical results are strong enough that this can be left for future work. | train | [
The code is freely available on https://github.com/AlexiaJM/RelativisticGAN.
"HyxBUhUonX",
"HJxZFCitTQ",
"BJxS-DnO6m",
"HJej_y3d6Q",
"Bkl-U0WwT7",
"SkxVOxiLpX",
"ryeqtnJonQ",
"SJxhZqNKn7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"The paper describes an interesting tweak of the standard GAN model (inspired by IPM based GANs) where both the generator and the discriminator optimize relative realness (and fakeness) of the (real, fake) image pairs. The authors give some intuition for this tweak and ran experiments with CIFAR10 and CAT datasets.... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
2,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_S1erHoR5t7",
"HyxBUhUonX",
"ryeqtnJonQ",
"SJxhZqNKn7",
"SkxVOxiLpX",
"iclr_2019_S1erHoR5t7",
"iclr_2019_S1erHoR5t7",
"iclr_2019_S1erHoR5t7"
] |
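A sketch of the relativistic average losses described in the abstract above, where C is the discriminator's pre-sigmoid ("critic") output on batches of real and fake samples. This follows the standard log-sigmoid RaGAN (RaSGAN) instantiation and is written against PyTorch for concreteness.

```python
import torch
import torch.nn.functional as F

def rasgan_losses(c_real, c_fake):
    """D estimates the probability that real data is more realistic
    than fake data on average (and vice versa for G), by comparing
    each critic score against the mean score of the opposite batch."""
    bce = F.binary_cross_entropy_with_logits
    d_loss = bce(c_real - c_fake.mean(), torch.ones_like(c_real)) + \
             bce(c_fake - c_real.mean(), torch.zeros_like(c_fake))
    g_loss = bce(c_fake - c_real.mean(), torch.ones_like(c_fake)) + \
             bce(c_real - c_fake.mean(), torch.zeros_like(c_real))
    return d_loss, g_loss
```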
iclr_2019_S1fQSiCcYm | Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer | Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can "interpolate": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations. | accepted-poster-papers | The reviewers have reached a consensus that this paper is very interesting and adds insights into interpolation in autoencoders. | train | [
"H1egXXwd14",
"HJeuQMG5n7",
"H1l1iK8OyE",
"B1gHm_rw14",
"SyefNNz8kN",
"S1eavUPSJE",
"rJlGBqmBy4",
"H1g-QchEJE",
"H1gUty5Ey4",
"BJgfoaTGkE",
"SJgQLPtKTm",
"r1g4OITYpX",
"HkxOaxgtp7",
"BJl0L1xt6m",
"S1e8zyxY67",
"ryx6nAJFaQ",
"rJlQxJGchX",
"B1gmB4Kv2m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for engaging in discussion with us, suggesting additional experiments, and being open to updating your review.",
"Main idea:\nThis paper investigates the desiderata for a successful interpolation:\n1) Interpolation looks realistic;\n2) The interpolation path is semantically smooth. \nAn adversarial regula... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
9
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"H1l1iK8OyE",
"iclr_2019_S1fQSiCcYm",
"B1gHm_rw14",
"SyefNNz8kN",
"S1eavUPSJE",
"rJlGBqmBy4",
"H1g-QchEJE",
"H1gUty5Ey4",
"ryx6nAJFaQ",
"S1e8zyxY67",
"iclr_2019_S1fQSiCcYm",
"SJgQLPtKTm",
"iclr_2019_S1fQSiCcYm",
"B1gmB4Kv2m",
"rJlQxJGchX",
"HJeuQMG5n7",
"iclr_2019_S1fQSiCcYm",
"icl... |
iclr_2019_S1fUpoR5FQ | Quasi-hyperbolic momentum and Adam for deep learning | Momentum-based acceleration of stochastic gradient descent (SGD) is widely used in deep learning. We propose the quasi-hyperbolic momentum algorithm (QHM) as an extremely simple alteration of momentum SGD, averaging a plain SGD step with a momentum step. We describe numerous connections to and identities with other algorithms, and we characterize the set of two-state optimization algorithms that QHM can recover. Finally, we propose a QH variant of Adam called QHAdam, and we empirically demonstrate that our algorithms lead to significantly improved training in a variety of settings, including a new state-of-the-art result on WMT16 EN-DE. We hope that these empirical results, combined with the conceptual and practical simplicity of QHM and QHAdam, will spur interest from both practitioners and researchers. Code is immediately available. | accepted-poster-papers | This paper presents quasi-hyperbolic momentum, a generalization of Nesterov Accelerated Gradient. The method can be seen as adding an additional hyperparameter to NAG corresponding to the weighting of the direct gradient term in the update. The contribution is pretty simple, but the paper has good discussion of the relationships with other momentum methods, careful theoretical analysis, and fairly strong experimental results. All the reviewers believe this is a strong paper and should be accepted, and I concur.
| train | [
"Bklzj4_9n7",
"BygWhwctA7",
"ryxZv8qKAm",
"ByeKaCR-07",
"rygjk-aiaX",
"H1eadSvGpX",
"Bket2ivu6Q",
"H1lwIdcuTQ",
"BylCh-9OTm",
"Hyxyo6TDT7",
"B1x95wpPaQ",
"ryg5hkUDTX",
"rJe2cWND67",
"rkxTldzvT7",
"HkebiJLLT7",
"SyeWFkLL67",
"rJlL-18UTX",
"SkezCLH-6m",
"SklIQuS-Tm",
"HklAFD5lpX"... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"public",
"author",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
"Update after the author response: I am changing my rating from 6 to 7. The authors did a good job at clarifying where the gain might be coming from, and even though I maintain that decoupling the two variables is a simple modification, it leads to some valuable insights and good results which would of interest to ... | [
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_S1fUpoR5FQ",
"ByeKaCR-07",
"rygjk-aiaX",
"rJlL-18UTX",
"HkebiJLLT7",
"iclr_2019_S1fUpoR5FQ",
"Hyxyo6TDT7",
"BylCh-9OTm",
"Bket2ivu6Q",
"B1x95wpPaQ",
"ryg5hkUDTX",
"rJe2cWND67",
"rkxTldzvT7",
"iclr_2019_S1fUpoR5FQ",
"SyeWFkLL67",
"H1eadSvGpX",
"BJxvbVG9h7",
"Bklzj4_9n7",
... |
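Finally, the "extremely simple alteration" in the QHM record above is concrete enough to state in two lines: maintain an exponential moving average of gradients, then step along a nu-weighted average of the plain gradient and that buffer. A sketch with the paper's recommended rule-of-thumb defaults (nu = 0.7, beta = 0.999); the learning rate here is a placeholder.

```python
def qhm_step(theta, g_buf, grad, lr=0.1, nu=0.7, beta=0.999):
    """Quasi-hyperbolic momentum update:
    g_buf <- beta * g_buf + (1 - beta) * grad          (momentum buffer)
    theta <- theta - lr * ((1 - nu) * grad + nu * g_buf)
    nu = 0 recovers plain SGD; nu = 1 recovers (normalised) momentum.
    Works on floats or numpy arrays alike."""
    g_buf = beta * g_buf + (1 - beta) * grad
    theta = theta - lr * ((1 - nu) * grad + nu * g_buf)
    return theta, g_buf
```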