paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2019_HJM4rsRqFX | Neural Variational Inference For Embedding Knowledge Graphs | Recent advances in Neural Variational Inference allowed for a renaissance in latent variable models in a variety of domains involving high-dimensional data. In this paper, we introduce two generic Variational Inference frameworks for generative models of Knowledge Graphs; Latent Fact Model and Latent Information Model. While traditional variational methods derive an analytical approximation for the intractable distribution over the latent variables, here we construct an inference network conditioned on the symbolic representation of entities and relation types in the Knowledge Graph, to provide the variational distributions. The new framework can create models able to discover underlying probabilistic semantics for the symbolic representation by utilising parameterisable distributions which permit training by back-propagation in the context of neural variational inference, resulting in a highly-scalable method. Under a Bernoulli sampling framework, we provide an alternative justification for commonly used techniques in large-scale stochastic variational inference, which drastically reduces training time at a cost of an additional approximation to the variational lower bound. The generative frameworks are flexible enough to allow training under any prior distribution that permits a re-parametrisation trick, as well as under any scoring function that permits maximum likelihood estimation of the parameters. Experiment results display the potential and efficiency of this framework by improving upon multiple benchmarks with Gaussian prior representations. Code publicly available on Github. | rejected-papers | The paper proposes a novel variational inference framework for knowledge graphs which is evaluated on link prediction benchmark sets and is competitive to previous generative approaches.
While the idea is interesting and technically correct, the originality of the contribution is limited,
and the paper would be clearly improved by providing a clearer motivation for using generative models instead of standard methods and an experimental demonstration of the benefits of using a generative instead of a discriminative model, especially since the standard methods perform slightly better in the experiments. Overall, the work is slightly under the acceptance threshold. | train | [
| train | [
"BJgcOJ2BRX",
"HyeOCKDBRQ",
"HyekC8PrAm",
"r1g2CcuVAX",
"S1efAIW7Am",
"SyeWaXbXRX",
"Hkg787bXRX",
"H1l7Wh3hnX",
"SkljN1ID2m",
"Byl2jPWGjX"
] | [
"author",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer, \n\nWe thank you for the promising feedback. We have responded to the anonymous comment recently around missing comparisons and stressed the missing references are not crucial to the papers story —- as we focus on different tasks (uni vs multi relational data), they are instead only additional relat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"HyeOCKDBRQ",
"S1efAIW7Am",
"r1g2CcuVAX",
"S1efAIW7Am",
"Byl2jPWGjX",
"SkljN1ID2m",
"H1l7Wh3hnX",
"iclr_2019_HJM4rsRqFX",
"iclr_2019_HJM4rsRqFX",
"iclr_2019_HJM4rsRqFX"
] |
iclr_2019_HJMCdsC5tX | A fully automated periodicity detection in time series | This paper presents a method to autonomously find periodicities in a signal. It is based on the same idea of using the Fourier Transform and the autocorrelation function presented in Vlachos et al. 2005. While showing interesting results, that method does not perform well on noisy signals or signals with multiple periodicities. Thus, our method adds several new extra steps (hints clustering, filtering and detrending) to fix these issues. Experimental results show that the proposed method outperforms state-of-the-art algorithms. | rejected-papers | This paper presents a heuristic method to detect periodicity in a time series such that it can handle noise and multiple periods.
All reviewers agreed that this paper falls outside the scope of ICLR since it does not discuss any learning-related question. Moreover, the authors did not provide any response or updated manuscript addressing the reviewers' remarks. The AC thus recommends rejection. | train | [
"Hye3POxspm",
"SylwnL-I6m",
"B1lxdWw927"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors present a heuristic method to detect periodicity in time series. It extends a previous approach in dealing with noise and the setting of multiple periodicities.\n\nThe topic does not match the scope of ICLR and would be better suited for a different venue.\n\nThe method is demonstrated in a purely expe... | [
3,
5,
3
] | [
3,
2,
2
] | [
"iclr_2019_HJMCdsC5tX",
"iclr_2019_HJMCdsC5tX",
"iclr_2019_HJMCdsC5tX"
] |
iclr_2019_HJMRvsAcK7 | Dynamic Pricing on E-commerce Platform with Deep Reinforcement Learning | In this paper we develop an approach based on deep reinforcement learning (DRL) to address the dynamic pricing problem on an E-commerce platform. We model the real-world E-commerce dynamic pricing problem as a Markov Decision Process. Environment states are defined with four groups of different business data. We make several main improvements on the state-of-the-art DRL-based dynamic pricing approaches: 1. We first extend the application of dynamic pricing to a continuous pricing action space. 2. We solve the unknown demand function problem by designing different reward functions. 3. The cold-start problem is addressed by introducing pre-training and evaluation using the historical sales data. Field experiments are designed and conducted on a real-world E-commerce platform, pricing thousands of SKUs of products and lasting for months. The experimental results show that, on an E-commerce platform, the difference of the revenue conversion rates (DRCR) is a more suitable reward function than the revenue only, which is different from the conclusions of previous research. Meanwhile, the proposed continuous action model performs better than the discrete one. | rejected-papers |
This is an interesting topic, but the reviewers had substantial concerns about the clarity and significance of the contribution.
| test | [
"HJgwUkc607",
"HJemiY_sRm",
"ByxhHtOiRX",
"r1lHlYds0X",
"HJgUtyV6nm",
"rye65HF52m",
"HklnqASCoX",
"rJxWopi3om",
"rkxsVD65jX"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thanks for the detailed response, and the updated version.",
"We would like to thank the reviewer for the valuable feedback and suggestions to improve this draft. We address the reviewer’s concerns:\n\nQ1: The proposed methodology of applying specific RL techniques such as DDPG to pricing appears novel. However,... | [
-1,
-1,
-1,
-1,
4,
4,
4,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
5,
3,
-1,
-1
] | [
"ByxhHtOiRX",
"HJgUtyV6nm",
"HklnqASCoX",
"rye65HF52m",
"iclr_2019_HJMRvsAcK7",
"iclr_2019_HJMRvsAcK7",
"iclr_2019_HJMRvsAcK7",
"rkxsVD65jX",
"iclr_2019_HJMRvsAcK7"
] |
iclr_2019_HJMXTsCqYQ | Constrained Bayesian Optimization for Automatic Chemical Design | Automatic Chemical Design provides a framework for generating novel molecules with optimized molecular properties. The current model suffers from the pathology that it tends to produce invalid molecular structures. By reformulating the search procedure as a constrained Bayesian optimization problem, we showcase improvements in both the validity and quality of the generated molecules. We demonstrate that the model consistently produces novel molecules ranking above the 90th percentile of the distribution over training set scores across a range of objective functions. Importantly, our method suffers no degradation in the complexity or the diversity of the generated molecules. | rejected-papers | This paper proposes to use constrained Bayesian optimization to improve chemical compound generation. Unfortunately, the reviewers raise a range of critical issues which are not addressed by the authors' rebuttal. | train | [
"S1gUeQYchm",
"rJlRry6x2Q",
"Syl_iuMYom"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThis paper proposes a novel method for generating novel molecules with some targeted properties. Many studies on how to generate chemically valid molecular graphs have been done, but it is still an open problem due to the essential difficulty of generating discrete structures from any continuous latent s... | [
3,
4,
5
] | [
4,
3,
4
] | [
"iclr_2019_HJMXTsCqYQ",
"iclr_2019_HJMXTsCqYQ",
"iclr_2019_HJMXTsCqYQ"
] |
iclr_2019_HJMXus0ct7 | iRDA Method for Sparse Convolutional Neural Networks | We propose a new approach, known as the iterative regularized dual averaging (iRDA), to improve the efficiency of convolutional neural networks (CNN) by significantly reducing the redundancy of the model without reducing its accuracy. The method has been tested for various data sets, and proven to be significantly more efficient than most existing compressing techniques in the deep learning literature. For many popular data sets such as MNIST and CIFAR-10, more than 95% of the weights can be zeroed out without losing accuracy. In particular, we are able to make ResNet18 with 95% sparsity to have an accuracy that is comparable to that of a much larger model ResNet50 with the best 60% sparsity as reported in the literature. | rejected-papers | This paper proposes an “iterative” regularized dual averaging method to sparsify CNN weights during learning. The main contribution seems to be in an iterative procedure where the weights are pruned out greedily by observing the sparsity of the averaged gradients. The reviewers agree that the idea seems straightforward and novelty is limited. For this reason, I recommend to reject this paper.
| train | [
"HJxBByxuaX",
"SklGAR9ah7",
"HJetJMY9hQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"iRDA Method for sparse convolutional neural networks \n\nThis paper considers the problem of training a sparse neural network. The main motivation is that usually all state of the art neural network’s size or the number of weights is enormous and saving them in memory is costly. So it would be of great interest to... | [
3,
3,
3
] | [
5,
4,
5
] | [
"iclr_2019_HJMXus0ct7",
"iclr_2019_HJMXus0ct7",
"iclr_2019_HJMXus0ct7"
] |
iclr_2019_HJMghjA9YX | Model Comparison for Semantic Grouping | We introduce a probabilistic framework for quantifying the semantic similarity between two groups of embeddings. We formulate the task of semantic similarity as a model comparison task in which we contrast a generative model which jointly models two sentences versus one that does not. We illustrate how this framework can be used for the Semantic Textual Similarity tasks using clear assumptions about how the embeddings of words are generated. We apply information criteria based model comparison to overcome the shortcomings of Bayesian model comparison, whilst still penalising model complexity. We achieve competitive results by applying the proposed framework with an appropriate choice of likelihood on the STS datasets. | rejected-papers | This paper presents a novel family of probabilistic approaches to computing the similarities between two sentences using bag-of-embeddings representations, and presents evaluations on a standard benchmark to demonstrate the effectiveness of the approach. While there seem to be no substantial disputes about the soundness of the paper in its current form, the reviewers were not convinced by the broad motivation for the approach, and did not find the empirical results compelling enough to serve as a motivation on its own. Given that, no reviewer was willing to argue that this paper makes an important enough contribution to be accepted.
It is unfortunate that one of the assigned reviewers—by their own admission—was not well qualified to review it and that a second reviewer did not submit a review at all, necessitating a late fill-in review (thank you, anonymous emergency reviewer!). However, the paper was considered seriously: I can attest that both of the two higher-confidence reviewers are well qualified to review work on problems and methods like these. | train | [
"ryx0pDkayE",
"HJev9Da014",
"rye5KHf6J4",
"rkx405TKAX",
"Bkx7YziO67",
"r1eamKOupm",
"SJeTGbYup7",
"S1e1AGj_pm",
"HyI7safM6X",
"rJlFVJbo3X",
"S1gnfMKc3Q"
] | [
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I think using uSIF (Ethayarajh, 2018) would be a better comparison for your method than SIF. uSIF fixes some of the problems with SIF and as a result does much better on the STS tasks, achieving state-of-the-art.\n\nThe GloVe version of SIF does better than your approach for STS'12, STS'13, STS'14, and STS'15 (STS... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"rkx405TKAX",
"ryx0pDkayE",
"ryx0pDkayE",
"iclr_2019_HJMghjA9YX",
"HyI7safM6X",
"S1gnfMKc3Q",
"rJlFVJbo3X",
"HyI7safM6X",
"iclr_2019_HJMghjA9YX",
"iclr_2019_HJMghjA9YX",
"iclr_2019_HJMghjA9YX"
] |
iclr_2019_HJMjW3RqtX | One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL | Humans are experts at high-fidelity imitation -- closely mimicking a demonstration, often in one attempt. Humans use this ability to quickly solve a task instance, and to bootstrap learning of new tasks. Achieving these abilities in autonomous agents is an open problem. In this paper, we introduce an off-policy RL algorithm (MetaMimic) to narrow this gap. MetaMimic can learn both (i) policies for high-fidelity one-shot imitation of diverse novel skills, and (ii) policies that enable the agent to solve tasks more efficiently than the demonstrators. MetaMimic relies on the principle of storing all experiences in a memory and replaying these to learn massive deep neural network policies by off-policy RL. This paper introduces, to the best of our knowledge, the largest existing neural networks for deep RL and shows that larger networks with normalization are needed to achieve one-shot high-fidelity imitation on a challenging manipulation task.
The results also show that both types of policy can be learned from vision, in spite of the task rewards being sparse, and without access to demonstrator actions. | rejected-papers | The paper introduces a setting called high-fidelity imitation where the goal is one-shot generalization to new trajectories in a given environment. The authors contrast this with more standard one-shot imitation approaches where one-shot generalization is to a task rather than a precise trajectory. The authors propose a technique that works off of only state information, which is coupled with an RL algorithm that learns from a replay buffer that is populated by the imitator. The authors emphasize that their approach can leverage very large deep learning models, and demonstrate strong empirical performance in a (simulated) robotics setting.
A key weakness of the paper is its clarity. All reviewers were unclear about the precise setting as well as relation to prior work in one-shot imitation learning. As a result, there were substantial challenges in assessing the technical contribution of the paper. There were many requests for clarification, including for the motivation, difference between the present setting and those addressed in previous work, algorithmic details, and experiment details.
I believe that a further concern was the lack of a wide range of baselines. The authors construct several baselines that are relevant in the given setting, but did not consider "naive baseline" approaches proposed by the reviewers. For example, behavior cloning is mentioned as a potential baseline several times. The authors argue that this is not applicable as it would require expert actions. Instead of considering it a baseline, BC could be used as an "oracle" - performance that could be achieved if demonstration actions were known. As long as the access to additional information is clearly marked, such a comparison with a privileged oracle can be properly placed by the reader. Without including such commonly known reference approaches, it is very challenging to assess the proposed method's performance in the context of the difficulty of the task. Generally, whenever a paper introduces both a new task and a new approach, a lot of care needs to be taken to build up insights into whether the task appropriately reflects the domain / challenge the paper claims to address, how challenging the task is in comparison to those addressed in prior work, and to place the performance of the novel proposed method in the context of prior work. In the present paper, on top of the task and approach being novel, the pure RL baseline D4PG is not yet widely known in the community and its performance relative to common approaches is not well understood. Including commonly known RL approaches would help put all these results in context.
The authors took great care to respond to the reviewer comments, providing thorough discussion of related work and clarifications of the task and approach, and these were very helpful to the AC in understanding the paper. The AC believes that the paper has excellent potential. At the same time, a much more thorough empirical evaluation is needed to demonstrate the value of the proposed approach in this novel setting, as well as to provide additional conceptual insights into why and in what kinds of settings the algorithm performs well, or where its limitations are.
| train | [
"HJlj99OpCX",
"Skl4vd35R7",
"SJglgopFRm",
"SJg865aKRm",
"HkxSq9TFAX",
"B1gEqF6tRm",
"H1gJp1zPCX",
"S1x8ZgMwCX",
"BJl6Z6bDCQ",
"S1e__pZvRX",
"BylDJkZ_Tm",
"H1xY8ktPT7",
"SyeAUmWHTQ",
"r1lU9VybTX"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"A few reviewers have asked us for more motivation for high-fidelity imitation. As pointed out in the paper, we were inspired by the phenomenon of over imitation in developmental psychology. In addition to the highly cited references in our manuscript, the youtube video at (https://www.youtube.com/watch?time_conti... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"iclr_2019_HJMjW3RqtX",
"H1gJp1zPCX",
"BylDJkZ_Tm",
"BylDJkZ_Tm",
"BylDJkZ_Tm",
"BylDJkZ_Tm",
"r1lU9VybTX",
"r1lU9VybTX",
"SyeAUmWHTQ",
"H1xY8ktPT7",
"iclr_2019_HJMjW3RqtX",
"iclr_2019_HJMjW3RqtX",
"iclr_2019_HJMjW3RqtX",
"iclr_2019_HJMjW3RqtX"
] |
iclr_2019_HJMsiiRctX | Probabilistic Program Induction for Intuitive Physics Game Play | Recent findings suggest that humans deploy cognitive mechanism of physics simulation engines to simulate the physics of objects. We propose a framework for bots to deploy similar tools for interacting with intuitive physics environments. The framework employs a physics simulation in a probabilistic way to infer about moves performed by an agent in a setting governed by Newtonian laws of motion. However, methods of probabilistic programs can be slow in such setting due to their need to generate many samples. We complement the model with a model-free approach to aid the sampling procedures in becoming more efficient through learning from experience during game playing. We present an approach where a myriad of model-free approaches (a convolutional neural network in our model) and model-based approaches (probabilistic physics simulation) is able to achieve what neither could alone. This way the model outperforms an all model-free or all model-based approach. We discuss a case study showing empirical results of the performance of the model on the game of Flappy Bird. | rejected-papers | The paper presents the combination of a model-based (probabilistic program representing the physics) and model-free (CNN trained with DQN) to play Flappy Bird.
The approach is interesting, but the paper is hard to follow at times, and the solution seems too specific to the Flappy Bird game. This feels more like a tech report on what was done to get this score on Flappy Bird than a scientific paper with good comparisons on this environment (in terms of models, algorithms, approaches) and/or other environments to evaluate the method. We encourage the authors to do this additional work.
"BJe77yJc27",
"Skg3AIdYn7",
"SJehimfIj7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors present an algorithm that incorporates deep learning and physics simulation, and apply this algorithm to the game Flappy Bird. The algorithm uses a convolutional network trained on agent play to predict the agent’s own actions given a sequence of frames. Using this action estimator output as a prior ... | [
3,
4,
2
] | [
4,
2,
4
] | [
"iclr_2019_HJMsiiRctX",
"iclr_2019_HJMsiiRctX",
"iclr_2019_HJMsiiRctX"
] |
iclr_2019_HJNJws0cF7 | Convolutional Neural Networks combined with Runge-Kutta Methods | A convolutional neural network for image classification can be constructed mathematically since it can be regarded as a multi-period dynamical system. In this paper, a novel approach is proposed to construct network models from the dynamical systems view. Since a pre-activation residual network can be deemed an approximation of a time-dependent dynamical system using the forward Euler method, higher order Runge-Kutta methods (RK methods) can be utilized to build network models in order to achieve higher accuracy. The model constructed in such a way is referred to as the Runge-Kutta Convolutional Neural Network (RKNet). RK methods also provide an interpretation of Dense Convolutional Networks (DenseNets) and Convolutional Neural Networks with Alternately Updated Clique (CliqueNets) from the dynamical systems view. The proposed methods are evaluated on benchmark datasets: CIFAR-10/100, SVHN and ImageNet. The experimental results are consistent with the theoretical properties of RK methods and support the dynamical systems interpretation. Moreover, the experimental results show that the RKNets are superior to the state-of-the-art network models on CIFAR-10 and on par on CIFAR-100, SVHN and ImageNet. | rejected-papers | The paper proposes a novel approach to neural net construction using a dynamical systems approach, such as higher-order Runge-Kutta methods; this approach also allows a dynamical systems interpretation of DenseNets and CliqueNets. While all reviewers agree that this is an interesting and novel approach, along the lines of recent developments in the field on dynamical systems approaches to deep nets, they also suggest further improving the writing/clarity of the paper and strengthening the empirical results (currently, the method only provided an advantage on CIFAR-10, while being somewhat suboptimal on other datasets, and more evidence for empirical advantages of the proposed approach would be great). Overall, this is a very interesting and promising work, and with a few more empirical demonstrations of the method's superiority as well as more polished writing, the paper would make a nice contribution to the ML community.
"B1gHGbttRm",
"rylPPC_t0m",
"HJeQbQKYA7",
"SygKRB7XaX",
"HJgNkKNshQ",
"BJlfMsXjhm"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your constructive feedback.\n\n-->\"The presentation though is a bit convoluted and unclear and I would strongly encourage the writers to break it down and make it more readable\"\n\nWe have updated the paper following your advice.\n\n-->\"RK methods can be pretty sensitive to Chaos and other non lin... | [
-1,
-1,
-1,
4,
5,
6
] | [
-1,
-1,
-1,
3,
4,
3
] | [
"BJlfMsXjhm",
"HJgNkKNshQ",
"SygKRB7XaX",
"iclr_2019_HJNJws0cF7",
"iclr_2019_HJNJws0cF7",
"iclr_2019_HJNJws0cF7"
] |
iclr_2019_HJe3TsR5K7 | Learning Joint Wasserstein Auto-Encoders for Joint Distribution Matching | We study the joint distribution matching problem which aims at learning bidirectional mappings to match the joint distribution of two domains. This problem occurs in unsupervised image-to-image translation and video-to-video synthesis tasks, which, however, has two critical challenges: (i) it is difficult to exploit sufficient information from the joint distribution; (ii) how to theoretically and experimentally evaluate the generalization performance remains an open question. To address the above challenges, we propose a new optimization problem and design a novel Joint Wasserstein Auto-Encoders (JWAE) to minimize the Wasserstein distance of the joint distributions in two domains. We theoretically prove that the generalization ability of the proposed method can be guaranteed by minimizing the Wasserstein distance of joint distributions. To verify the generalization ability, we apply our method to unsupervised video-to-video synthesis by performing video frame interpolation and producing visually smooth videos in two domains, simultaneously. Both qualitative and quantitative comparisons demonstrate the superiority of our method over several state-of-the-arts. | rejected-papers | This paper proposes a new image to image translation technique, presenting a theoretical extension of Wasserstein GANs to the bidirectional mapping case.
Although the work shows promise, the extent of miscommunication and errors in the original presentation was too great to confidently assess the contribution of this work.
The authors have already included extensive edits and comments in response to the reviews to improve the clarity of method, experiments and statement of contribution. We encourage the authors to further incorporate the suggestions and seek to clarify points of confusion from other reviewers and submit a revised version to a future conference. | val | [
"B1xVUu45hm",
"rkeFnzYeyE",
"Bklj1wCJC7",
"HylYuK6JAm",
"rJlwcXAkCm",
"Hygakz2dh7",
"B1xfWwGqnm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the joint distribution matching problem where given data samples in two different domains, one is interested in learning a bi-directional mapping between unpaired data elements in these domains. The paper proposes a joint Wasserstein auto-encoder (JWAE) to solve this problem. The paper shows tha... | [
6,
-1,
-1,
-1,
-1,
4,
5
] | [
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_HJe3TsR5K7",
"HylYuK6JAm",
"Hygakz2dh7",
"B1xVUu45hm",
"B1xfWwGqnm",
"iclr_2019_HJe3TsR5K7",
"iclr_2019_HJe3TsR5K7"
] |
iclr_2019_HJeABnCqKQ | Generative Adversarial Self-Imitation Learning | This paper explores a simple regularizer for reinforcement learning by proposing Generative Adversarial Self-Imitation Learning (GASIL), which encourages the agent to imitate past good trajectories via a generative adversarial imitation learning framework. Instead of directly maximizing rewards, GASIL focuses on reproducing past good trajectories, which can potentially make long-term credit assignment easier when rewards are sparse and delayed. GASIL can be easily combined with any policy gradient objective by using GASIL as a learned reward shaping function. Our experimental results show that GASIL improves the performance of proximal policy optimization on 2D Point Mass and MuJoCo environments with delayed reward and stochastic dynamics. | rejected-papers | The paper proposes an extension to reinforcement learning with self-imitation (SIL) [Oh et al., 2018]. It is based on the idea of leveraging previously encountered high-reward trajectories for reward shaping. This shaping is learned automatically using an adversarial setup, similar to GAIL [Ho & Ermon, 2016]. The paper clearly presents the proposed approach and relation to previous work. Empirical evaluation shows strong performance on a 2D point mass problem designed to examine the algorithm's behavior. Of particular note are the insightful visualizations in Figures 2 and 3, which shed light on the algorithm's learning behavior.
The reviewers and AC note the following potential weaknesses: The paper presents an empirical validation showing improvements over PPO, in particular in Mujoco tasks with delayed rewards and with noisy observations. However, given the close relation to SIL, a direct comparison with the latter algorithm seems more appropriate. Reviewers 2 and 3 pointed out that the empirical validation of SIL was more extensive, including results on a wide range of Atari games. The authors provided results on several hard-exploration Atari games in the rebuttal period, but the results of the comparison to SIL were inconclusive. Given that the main contribution of the paper is empirical, the reviewers and the AC consider the contribution incremental.
The reviewers noted that the proposed method was presented with little theoretical justification, which limits the contribution of the paper. During the rebuttal phase, the authors sketched a theoretical argument, but noted that they are not able to provide a guarantee that trajectories in the replay buffer constitute an unbiased sample from the optimal policy, and that policy gradient methods in general are not guaranteed to converge to a globally optimal policy. The AC notes that conceptual insights can also be provided by motivating algorithmic or modeling choices, or through detailed analysis of the obtained results with the goal of furthering understanding of the observed behavior. Any such form of developing further insights would strengthen the contribution of the submission.
"BkgjnbxFyV",
"rJxYb-Ad1E",
"rJl86nOFCX",
"r1lx8KKlkE",
"Syelg-g92X",
"Sygjk6uFCm",
"Hyl3K2dYA7",
"ryeEwh_t0Q",
"SJgYyP5hhm",
"H1elPtU53m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The results provided on the ATARI games are not apples-to-apples with SIL[Oh et.al.], the baseline uses A2C and this paper uses PPO. Moreover, even in these comparisons, SIL[Oh et.al.] performs better on 4/6 games.\n\nUpon reviewing the author's responses and the update paper, I decided to keep my score the same. ... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
5,
5
] | [
"SJgYyP5hhm",
"Sygjk6uFCm",
"H1elPtU53m",
"rJl86nOFCX",
"iclr_2019_HJeABnCqKQ",
"Syelg-g92X",
"SJgYyP5hhm",
"iclr_2019_HJeABnCqKQ",
"iclr_2019_HJeABnCqKQ",
"iclr_2019_HJeABnCqKQ"
] |
iclr_2019_HJeB0sC9Fm | Detecting Memorization in ReLU Networks | We propose a new notion of 'non-linearity' of a network layer with respect to an input batch that is based on its proximity to a linear system, which is reflected in the non-negative rank of the activation matrix.
We measure this non-linearity by applying non-negative factorization to the activation matrix.
Considering batches of similar samples, we find that high non-linearity in deep layers is indicative of memorization. Furthermore, by applying our approach layer-by-layer, we find that the mechanism for memorization consists of distinct phases. We perform experiments on fully-connected and convolutional neural networks trained on several image and audio datasets. Our results demonstrate that as an indicator for memorization, our technique can be used to perform early stopping. | rejected-papers | This paper proposes a new measure to detect memorization based on how well the activations of the network are approximated by a low-rank decomposition. They compare decompositions and find that non-negative matrix factorization provides the best results. They evaluate on several datasets and show that the measure is well correlated with generalization and can be used for early stopping. All reviewers found the work novel, but there were concerns about the usefulness of the method, the experimental setup and the assumptions made. Some of these concerns were addressed by the revisions but concerns about usefulness and insights remained. These issues need to be properly addressed before acceptance. | val | [
"rJeP-hYZyN",
"rJgi5LI6AQ",
"Hkxm8rUa0Q",
"ByxMYy1j0X",
"ByxVZCnKRX",
"SygfhEWq3m",
"S1eP3MSQR7",
"SkgQvGrQA7",
"rkxAEfHXAQ",
"HJlzuZH7C7",
"Byg4UbSXRQ",
"ByxNZRV7AQ",
"ByePaSxs2X",
"SkgNroyY2m"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I comment using the numbering we have in the thread above:\n\n1) I find the explanation and the additional experiment convincing.\n\n2) I find the new experiment interesting and *partially* supporting the conjecture about the phases.\n\n3) Now I understand and this explains the behavior. However, the text in the m... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
9
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"rJgi5LI6AQ",
"ByxMYy1j0X",
"ByxVZCnKRX",
"Byg4UbSXRQ",
"rkxAEfHXAQ",
"iclr_2019_HJeB0sC9Fm",
"SkgNroyY2m",
"SygfhEWq3m",
"SygfhEWq3m",
"ByePaSxs2X",
"ByePaSxs2X",
"iclr_2019_HJeB0sC9Fm",
"iclr_2019_HJeB0sC9Fm",
"iclr_2019_HJeB0sC9Fm"
] |
iclr_2019_HJeKCi0qYX | MILE: A Multi-Level Framework for Scalable Graph Embedding | Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarsest graph and refines the embeddings to the original graph through a novel graph convolution neural network that it learns. The proposed MILE framework is agnostic to the underlying graph embedding techniques and can be applied to many existing graph embedding methods without modifying them. We employ our framework on several popular graph embedding techniques and conduct embedding for real-world graphs. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (order of magnitude) of graph embedding while also often generating embeddings of better quality for the task of node classification. MILE can comfortably scale to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation. | rejected-papers | Significant spread of scores across the reviewers and unfortunately not much discussion despite prompts from the area chair and the authors. The most positive reviewer is the least confident one. Very close to the decision boundary but after careful consideration by the senior PCs just below the acceptance threshold. There is significant literature already on this topic. The "thought delta" created by this paper and the empirical results are also not sufficient for acceptance. | train | [
"rkldvjVFA7",
"Hkll3Drtp7",
"S1xdcdBKam",
"S1eTTt4tp7",
"HkxLe0BtT7",
"H1g_pHZi27",
"BJldyjav2X",
"ByxAk8WMnX"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for providing insightful reviews. We present below the main changes to the document. Every change is a response to the detailed reviews we received. Specifically we:\n* Replaced Table 2 (results on selected coarsening levels, previously in the main body) with Figure 3 (results with all varyi... | [
-1,
-1,
-1,
-1,
-1,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"iclr_2019_HJeKCi0qYX",
"BJldyjav2X",
"BJldyjav2X",
"H1g_pHZi27",
"ByxAk8WMnX",
"iclr_2019_HJeKCi0qYX",
"iclr_2019_HJeKCi0qYX",
"iclr_2019_HJeKCi0qYX"
] |
iclr_2019_HJeNIjA5Y7 | Image Score: how to select useful samples | There have long been debates on how we could interpret neural networks and understand the decisions our models make. Specifically, why deep neural networks tend to be error-prone when dealing with samples that receive low softmax scores. We present an efficient approach to measure the confidence of decision-making steps by statistically investigating each unit's contribution to that decision. Instead of focusing on how the models react to datasets, we study the datasets themselves given a pre-trained model. Our approach is capable of assigning a score to each sample within a dataset that measures the frequency of occurrence of that sample's chain of activation. We demonstrate with experiments that our method could select useful samples to improve deep neural networks in a semi-supervised learning setting. | rejected-papers | Reviewers are in full agreement for rejection. | val | [
"SJexKHwa3Q",
"BklebLM2h7",
"B1gpZNU9nQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The idea of calculating a score to indicate the usefulness of a sample for training deep networks by analyzing the neural activations in semi-supervised learning is interesting.\n\nHowever, the effectiveness of the proposed method is not validated. In the cifar-10 semi-supervised image classification experiment, o... | [
4,
4,
3
] | [
3,
4,
3
] | [
"iclr_2019_HJeNIjA5Y7",
"iclr_2019_HJeNIjA5Y7",
"iclr_2019_HJeNIjA5Y7"
] |
iclr_2019_HJeOMhA5K7 | Human-Guided Column Networks: Augmenting Deep Learning with Advice | While extremely successful in several applications, especially with low-level representations; sparse, noisy samples and structured domains (with multiple objects and interactions) are some of the open challenges in most deep models. Column Networks, a deep architecture, can succinctly capture such domain structure and interactions, but may still be prone to sub-optimal learning from sparse and noisy samples. Inspired by the success of human-advice guided learning in AI, especially in data-scarce domains, we propose Knowledge-augmented Column Networks that leverage human advice/knowledge for better learning with noisy/sparse samples. Our experiments demonstrate how our approach leads to either superior overall performance or faster convergence. | rejected-papers | The paper considers the task of incorporating knowledge expressed as rules into column networks. The reviewers acknowledge the need for such techniques, like the flexibility of the proposed approach, and appreciate the improvements to convergence speed and accuracy afforded by the proposed work.
The reviewers and the AC note the following as the primary concerns of the paper:
(1) The primary concern raised by the reviewers was that the evaluation is focused on whether KCLN can beat the model without the knowledge, instead of measuring the efficacy of incorporating the knowledge itself (e.g. by comparing with other forms of incorporating knowledge, or by varying the quality of the rules that were introduced), (2) Even otherwise, the empirical results are not significant, offering slight improvements over the vanilla CLN (reviewer 1), (3) There are concerns that the rule-based gates are introduced but gradients are only computed on the final layer, which might lead to instability, and (4) There are a number of issues in the presentation, where the space is used on redundant information and descriptions of datasets, instead of focusing on the proposed model.
The comments by the authors address some of these concerns, in particular clarifying that the forms of knowledge/rules are not limited; however, they focused on simple rules in the paper. Still, the primary concerns in the evaluation remain: (1) it seems to focus on comparing against Vanilla-CLN, instead of focusing on the source of the knowledge, or on the efficacy in incorporating it (see earlier work for examples of how to evaluate these), and (2) the results are not considerably better with the proposed work, making the reviewers doubtful about its significance.
The reviewers agree that the paper is not ready for publication. | train | [
"HJg7IAvk07",
"SylpM0wk07",
"Syxn0aP1CX",
"rygtp4OAnX",
"BylRe8iFhQ",
"SkgQ8ZIdh7"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We understand the reviewer's perspective about stronger motivation. However, justification behind our usage of Column networks is as follows - (1) Human advice/knowledge/guidance has been proven to extremely effective in cases of systematic noise in data (Odom et al. 2018). Systematic noise can be attributed to tw... | [
-1,
-1,
-1,
6,
4,
5
] | [
-1,
-1,
-1,
3,
4,
5
] | [
"SkgQ8ZIdh7",
"BylRe8iFhQ",
"rygtp4OAnX",
"iclr_2019_HJeOMhA5K7",
"iclr_2019_HJeOMhA5K7",
"iclr_2019_HJeOMhA5K7"
] |
iclr_2019_HJePRoAct7 | Graph U-Net | We consider the problem of representation learning for graph data. Convolutional neural networks can naturally operate on images, but have significant challenges in dealing with graph data. Given that images are special cases of graphs with nodes lying on 2D lattices, graph embedding tasks have a natural correspondence with image pixel-wise prediction tasks such as segmentation. While encoder-decoder architectures like U-Net have been successfully applied on many image pixel-wise prediction tasks, similar methods are lacking for graph data. This is due to the fact that pooling and up-sampling operations are not natural on graph data. To address these challenges, we propose novel graph pooling (gPool) and unpooling (gUnpool) operations in this work. The gPool layer adaptively selects some nodes to form a smaller graph based on their scalar projection values on a trainable projection vector. We further propose the gUnpool layer as the inverse operation of the gPool layer. The gUnpool layer restores the graph into its original structure using the position information of nodes selected in the corresponding gPool layer. Based on our proposed gPool and gUnpool layers, we develop an encoder-decoder model on graphs, known as the graph U-Net. Our experimental results on node classification tasks demonstrate that our methods achieve consistently better performance than previous models. | rejected-papers | The authors supplied an updated paper resolving the most important reviewer concerns after the deadline for revisions. In part, this was due to reviewers requesting new experiments that take substantial time to complete.
After discussion with the reviewers, I believe that if the revised manuscript had arrived earlier, it should have been accepted. Without the new results, I would recommend rejecting, since I believe the original submission lacked important experiments to justify the approach (inductive setting experiments are very useful).
The community has an interest in uniform application of the rules surrounding the revision process. It is not fair to other authors to consider revisions past the deadline and we do not want to encourage late revisions. Better to submit a finished piece of work initially and not assume it will be possible to use up a lot of reviewer time and fix during the review process.
We also don't want to encourage shoddy, rushed experimental work. However, the way we typically handle requests from reviewers that require a lot of work to complete is by rejecting papers and encouraging them to be resubmitted sometime in the future, typically to another similar conference.
Thus I am recommending rejecting this paper on policy grounds, not on the merits of the latest draft. I believe that we should base the decision on the state of the paper at the same deadline that applies to all other authors.
However, I am asking the program chairs to review this case since ultimately they will be the final arbiters of policy questions like this. | train | [
"Hyl445AfeN",
"HJlgrVbtkV",
"S1eOKzDfA7",
"Syx3R8y5Cm",
"BkenHKxQAQ",
"r1enduxQRQ",
"rkglzX79nX",
"rJxkbxPGAQ",
"ryeNpxwMRX",
"rkgzTzwzCX",
"BJxG0Xg9h7",
"HJxUcVxrhm",
"BJxCB1HUs7",
"HJxRweG0FQ",
"rJesYj2atX",
"SJgK6c3ptX",
"Bygr-fsTYX",
"rklHQEEpYX"
] | [
"public",
"public",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"I found the proposed idea interesting, but there are a few issues in the experiments that should be addressed. \n\n1. Graph augmentation seems to be important to get state of the art results. Without it, this work is better than GAT only on one dataset (Cora) in node classification tasks. It is also not clear whic... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
4,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HJePRoAct7",
"iclr_2019_HJePRoAct7",
"BJxG0Xg9h7",
"ryeNpxwMRX",
"r1enduxQRQ",
"rJxkbxPGAQ",
"iclr_2019_HJePRoAct7",
"rkglzX79nX",
"HJxUcVxrhm",
"BJxG0Xg9h7",
"iclr_2019_HJePRoAct7",
"iclr_2019_HJePRoAct7",
"HJxRweG0FQ",
"iclr_2019_HJePRoAct7",
"SJgK6c3ptX",
"Bygr-fsTYX",
... |
iclr_2019_HJePno0cYm | Transformer-XL: Language Modeling with Longer-Term Dependency | We propose a novel neural architecture, Transformer-XL, for modeling longer-term dependency. To address the limitation of fixed-length contexts, we introduce a notion of recurrence by reusing the representations from the history. Empirically, we show state-of-the-art (SoTA) results on both word-level and character-level language modeling datasets, including WikiText-103, One Billion Word, Penn Treebank, and enwiki8. Notably, we improve the SoTA results from 1.06 to 0.99 in bpc on enwiki8, from 33.0 to 18.9 in perplexity on WikiText-103, and from 28.0 to 23.5 in perplexity on One Billion Word. Performance improves when the attention length increases during evaluation, and our best model attends to up to 1,600 words and 3,800 characters. To quantify the effective length of dependency, we devise a new metric and show that on WikiText-103 Transformer-XL manages to model dependency that is about 80% longer than recurrent networks and 450% longer than Transformer. Moreover, Transformer-XL is up to 1,800+ times faster than vanilla Transformer during evaluation. | rejected-papers | despite the (significant) improvement in language modelling, it has always been a thorny issue whether better language models (at this level) lead to better performance in the downstream task or whether such a technique could be used to build a better conditional language model which often focuses on the aspect of generation. in this context, the reviewers found it difficult to see the merit of the proposed approach, as the technique itself may be considered a rather trivial application of earlier approaches such as truncated backprop. it would be good to apply this technique to e.g. document-level generation and see if the proposed approach can strike an amazing balance between computational efficiency and generation performance. | train | [
"HJe5zCaEeV",
"rJeP_0nEx4",
"S1xnzg5cyN",
"rJx2-lcqk4",
"Syxqex55JN",
"Hkla0-dp27",
"SyeITxE1JN",
"S1g-IItiRQ",
"Hylxe2VcA7",
"HkxXKjV5RX",
"SyeGCoV5RX",
"rJeQ2jV50X",
"rkgHT-tq3m",
"SJgjpKAko7"
] | [
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your questions. We will publish our code along with our hyper-parameters on all the datasets very soon!",
"Very impressive results! For the billion-word benchmark, you are getting better perplexity numbers (23.5) than we have for models of comparable size (see https://arxiv.org/pdf/1811.02084.pdf)... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"rJeP_0nEx4",
"iclr_2019_HJePno0cYm",
"Hkla0-dp27",
"rkgHT-tq3m",
"SJgjpKAko7",
"iclr_2019_HJePno0cYm",
"S1g-IItiRQ",
"iclr_2019_HJePno0cYm",
"SJgjpKAko7",
"iclr_2019_HJePno0cYm",
"rkgHT-tq3m",
"Hkla0-dp27",
"iclr_2019_HJePno0cYm",
"iclr_2019_HJePno0cYm"
] |
iclr_2019_HJePy3RcF7 | Rethinking learning rate schedules for stochastic optimization | There is a stark disparity between the learning rate schedules used in the practice of large scale machine learning and what are considered admissible learning rate schedules prescribed in the theory of stochastic approximation. Recent results, such as in the 'super-convergence' methods which use oscillating learning rates, serve to emphasize this point even more.
One plausible explanation is that non-convex neural network training procedures are better suited to the use of fundamentally different learning rate schedules, such as the ``cut the learning rate every constant number of epochs'' method (which more closely resembles an exponentially decaying learning rate schedule); note that this widely used schedule is in stark contrast to the polynomial decay schemes prescribed in the stochastic approximation literature, which are indeed shown to be (worst case) optimal for classes of convex optimization problems.
The main contribution of this work shows that the picture is far more nuanced, where we do not even need to move to non-convex optimization to show other learning rate schemes can be far more effective. In fact, even for the simple case of stochastic linear regression with a fixed time horizon, the rate achieved by any polynomial decay scheme is sub-optimal compared to the statistical minimax rate (by a factor of the condition number); in contrast, the ``cut the learning rate every constant number of epochs'' scheme provides an exponential improvement (depending only logarithmically on the condition number) compared to any polynomial decay scheme. Finally, it is important to ask if our theoretical insights are somehow fundamentally tied to quadratic loss minimization (where we have circumvented minimax lower bounds for more general convex optimization problems). Here, we conjecture that recent results which make the gradient norm small at a near optimal rate, for both convex and non-convex optimization, may also provide more insights into learning rate schedules used in practice.
| rejected-papers | R4 recommends acceptance while R2 is lukewarm and R1 argues for rejection to revise the presentation of the paper. As we unfortunately need to reject borderline papers given the space constraints, the AC recommends "revise and resubmit". | train | [
"rkgX6TUEy4",
"BylBHVDuT7",
"S1xgWYOWy4",
"Skx95ZPZ1V",
"H1xu26zBAm",
"BkxTg0MHCX",
"BJei2rzrRm",
"SyxfBhbBRQ",
"Byg6z7GqhX",
"Hke6PaYLnQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the answers, I have upgraded my rating accordingly.",
"This paper presents a theoretical study of different learning rate schedules. Its main result are statistical minimax lower bounds for both polynomial and constant-and-cut schemes.\n\nI enjoyed reading the paper and I think the contributions in it... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"SyxfBhbBRQ",
"iclr_2019_HJePy3RcF7",
"Skx95ZPZ1V",
"BkxTg0MHCX",
"Byg6z7GqhX",
"H1xu26zBAm",
"Hke6PaYLnQ",
"BylBHVDuT7",
"iclr_2019_HJePy3RcF7",
"iclr_2019_HJePy3RcF7"
] |
iclr_2019_HJeQToAqKQ | TherML: The Thermodynamics of Machine Learning | In this work we offer an information-theoretic framework for representation learning that connects with a wide class of existing objectives in machine learning. We develop a formal correspondence between this work and thermodynamics and discuss its implications. | rejected-papers | Connecting different fields and bringing new insights to machine learning are always appreciated. But since it is challenging to do, it needs to be done well. This paper falls short here. | test | [
"HJlM26N9AX",
"rklxcTVcAQ",
"HkxVBTEqAX",
"H1lj4p7GaQ",
"BklzybW0hQ",
"ByeW2hxR3m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the review.\n\nWe agree Section 4 is rather terse. Given space constraints we weren't able to describe things in much detail and currently leave too much unsaid. We thought the analogy was interesting enough to discuss, even if not in detail.\n\nDo you think the paper would be improved if Section 4... | [
-1,
-1,
-1,
7,
3,
5
] | [
-1,
-1,
-1,
3,
4,
3
] | [
"ByeW2hxR3m",
"BklzybW0hQ",
"H1lj4p7GaQ",
"iclr_2019_HJeQToAqKQ",
"iclr_2019_HJeQToAqKQ",
"iclr_2019_HJeQToAqKQ"
] |
iclr_2019_HJeQbnA5tm | Noisy Information Bottlenecks for Generalization | We propose Noisy Information Bottlenecks (NIB) to limit mutual information between learned parameters and the data through noise. We show why this benefits generalization and allows mitigation of model overfitting both for supervised and unsupervised learning, even for arbitrarily complex architectures. We reinterpret methods including the Variational Autoencoder, beta-VAE, network weight uncertainty and a variant of dropout combined with weight decay as special cases of our approach, explaining and quantifying regularizing properties and vulnerabilities within information theory. | rejected-papers | The paper proposes a regularization method that introduces an information bottleneck between parameters and predictions.
The reviewers agree that the paper proposes some interesting ideas, but those ideas need to be clarified. The paper lacks clarity. The reviewers also doubt whether the paper will have a significant impact in the field.
"HkxD7LKWxE",
"BygQkyDWx4",
"rJxxMZL-x4",
"rygrcAAAyE",
"ryxS38C_am",
"SkEt8Cdp7",
"rkx7sBROTX",
"r1xY4JK62Q",
"SJllCbbahQ",
"BkgYfE5o3Q"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you so much for your prompt clarification.\n\nThe meaning of \\theta depends on the context, for supervised learning it would be the parameters of the model (e.g. a neural network), for a latent variable model in unsupervised learning it would also include the latents (which we have denoted as Y in the paper... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"BygQkyDWx4",
"rJxxMZL-x4",
"rygrcAAAyE",
"rkx7sBROTX",
"r1xY4JK62Q",
"SJllCbbahQ",
"BkgYfE5o3Q",
"iclr_2019_HJeQbnA5tm",
"iclr_2019_HJeQbnA5tm",
"iclr_2019_HJeQbnA5tm"
] |
iclr_2019_HJeRm3Aqt7 | GenEval: A Benchmark Suite for Evaluating Generative Models | Generative models are important for several practical applications, from low level image processing tasks, to model-based planning in robotics. More generally,
the study of generative models is motivated by the long-standing endeavor to model uncertainty and to discover structure by leveraging unlabeled data.
Unfortunately, the lack of an ultimate task of interest has hindered progress in the field, as there is no established way to
compare models and, often times, evaluation is based on mere visual inspection of samples drawn from such models.
In this work, we aim at addressing this problem by introducing a new benchmark evaluation suite, dubbed \textit{GenEval}.
GenEval hosts a large array of distributions capturing many important
properties of real datasets, yet in a controlled setting, such as lower intrinsic dimensionality, multi-modality, compositionality,
independence and causal structure. Any model can be easily plugged for evaluation, provided it can generate samples.
Our extensive evaluation suggests that different models have different strengths, and that GenEval is a great tool to gain insights about how models and metrics work.
We offer GenEval to the community~\footnote{Available at: \it{coming soon}.} and believe that this benchmark will facilitate comparison and development of
new generative models. | rejected-papers | The paper introduces a benchmark suite providing a series of synthetic distributions and metrics for the evaluation of generative models. While providing such a toolkit is interesting and helpful, and it extends existing approaches for evaluating generative models on simple distributions, it does not seem to allow for substantially different conclusions or insights. This limits the paper's significance. Adding more problems and metrics to the benchmark suite would make it more convincing. | train | [
"BJe5bbvFyN",
"H1ejlDZKJN",
"HJe6ONKE14",
"S1x64itm1V",
"BkefNpuA2Q",
"S1esM6qiCX",
"H1xrM48i0m",
"ByeYmJdF0X",
"SkgfBIGYCm",
"H1xNzHZFRQ",
"BkepyS-FAX",
"SkltHnlFRm",
"H1gCUtMchQ",
"B1xEK15_nX"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Note that we do have \"image-like\" distributions in the set (the \"shifted bumps\" distribution). Moreover, all the distributions we show results for have parameterized difficulty: for example, with the shifted bumps, it is the size of the image (equivalently, the number of bumps), the random scalings to the he... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"H1ejlDZKJN",
"HJe6ONKE14",
"S1x64itm1V",
"SkgfBIGYCm",
"iclr_2019_HJeRm3Aqt7",
"H1xrM48i0m",
"H1xNzHZFRQ",
"SkltHnlFRm",
"BkefNpuA2Q",
"H1gCUtMchQ",
"H1gCUtMchQ",
"B1xEK15_nX",
"iclr_2019_HJeRm3Aqt7",
"iclr_2019_HJeRm3Aqt7"
] |
iclr_2019_HJedho0qFX | Using Word Embeddings to Explore the Learned Representations of Convolutional Neural Networks | As deep neural net architectures minimize loss, they build up information in a hierarchy of learned representations that ultimately serve their final goal. Different architectures tackle this problem in slightly different ways, but all models aim to create representational spaces that accumulate information through the depth of the network. Here we build on previous work that indicated that two very different model classes trained on two very different tasks actually build knowledge representations that have similar underlying representations. Namely, we compare word embeddings from SkipGram (trained to predict co-occurring words) to several CNN architectures (trained for image classification) in order to understand how this accumulation of knowledge behaves in CNNs. We improve upon previous work by including 5 times more ImageNet classes in our experiments, and further expand the scope of the analyses to include a network trained on CIFAR-100. We characterize network behavior in pretrained models, and also during training, misclassification, and adversarial attack. Our work illustrates the power of using one model to explore another, gives new insights for CNN models, and provides a framework for others to perform similar analyses when developing new architectures. | rejected-papers | The paper aims to study what is learned in the word representations by comparing SkipGram embeddings trained from a text corpus and CNNs trained from ImageNet.
Pros:
The paper tries to be comprehensive, including analysis of text representations and image representations, and the cases of misclassification and adversarial examples.
Cons:
The clarity of the paper is a major concern, as noted by all reviewers, and the authors did not come back with a rebuttal to address the reviewers' questions. Also, as R1 and R2 pointed out, the novelty over recent relevant papers such as (Dharmaretnam & Fyshe, 2018) is not clear.
Verdict:
Reject due to weak novelty and major clarity issues. | train | [
"Ske922ec3m",
"ryeMJgOpnm",
"H1g9GJoN37"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors apply an existing method (mainly 2 vs 2 test) to explore the representations learned by CNNs both during/after training. \n\n## Strength\n\nThe analysis of misclassification and adversarial examples is interesting. The authors also propose potential ways of improving the robustness of DNNs for adversar... | [
4,
3,
4
] | [
4,
4,
2
] | [
"iclr_2019_HJedho0qFX",
"iclr_2019_HJedho0qFX",
"iclr_2019_HJedho0qFX"
] |
iclr_2019_HJehSnCcFX | Inference of unobserved event streams with neural Hawkes particle smoothing | Events that we observe in the world may be caused by other, unobserved events. We consider sequences of discrete events in continuous time. When only some of the events are observed, we propose particle smoothing to infer the missing events. Particle smoothing is an extension of particle filtering in which proposed events are conditioned on the future as well as the past. For our setting, we develop a novel proposal distribution that is a type of continuous-time bidirectional LSTM. We use the sampled particles in an approximate minimum Bayes risk decoder that outputs a single low-risk prediction of the missing events. We experiment in multiple synthetic and real domains, modeling the complete sequences in each domain with a neural Hawkes process (Mei & Eisner, 2017). On held-out incomplete sequences, our method is effective at inferring the ground-truth unobserved events. In particular, particle smoothing consistently improves upon particle filtering, showing the benefit of training a bidirectional proposal distribution. | rejected-papers | All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance. | train | [
"HygtSCgqAQ",
"H1lDaiOgAm",
"BJexOsOl0m",
"SJloisdeC7",
"HyxZ9oOeAX",
"ryeByi_e0X",
"r1l6a9ueCm",
"HygEs9de0X",
"HJeOuculCX",
"rJlf45ulRm",
"B1g0JtueRX",
"SyxMi84Z6Q",
"H1x57ne627",
"Hyxi0YA_nQ"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We wrote:\n> We can also add experiments with c_k = 0.5 to cement the expository point.\n\nWe have now added these experiments, which constitute new Appendix G in the supplementary material. \nIn these experiments, events are missing stochastically rather than deterministically.\nWe find that the method still wor... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"BJexOsOl0m",
"Hyxi0YA_nQ",
"Hyxi0YA_nQ",
"Hyxi0YA_nQ",
"Hyxi0YA_nQ",
"H1x57ne627",
"H1x57ne627",
"H1x57ne627",
"H1x57ne627",
"H1x57ne627",
"SyxMi84Z6Q",
"iclr_2019_HJehSnCcFX",
"iclr_2019_HJehSnCcFX",
"iclr_2019_HJehSnCcFX"
] |
iclr_2019_HJei-2RcK7 | Graph Transformer | Graph neural networks (GNNs) have gained increasing research interest as a means toward the challenging goal of robust and universal graph learning. Previous GNNs have assumed a single pre-fixed graph structure and permitted only local context encoding. This paper proposes a novel Graph Transformer (GTR) architecture that captures long-range dependency with global attention, and enables dynamic graph structures. In particular, GTR propagates features within the same graph structure via intra-graph message passing, and transforms dynamic semantics across multi-domain graph-structured data (e.g. images, sequences, knowledge graphs) for multi-modal learning via inter-graph message passing. Furthermore, GTR enables effective incorporation of any prior graph structure by weighted averaging of the prior and learned edges, which can be crucially useful for scenarios where prior knowledge is desired. The proposed GTR achieves new state-of-the-art results across three benchmark tasks, including few-shot learning, medical abnormality and disease classification, and graph classification. Experiments show that GTR is superior in learning robust graph representations, transforming high-level semantics across domains, and bridging prior graph structure with automatic structure learning. | rejected-papers | The reviewers all agree that the work is interesting, but none has stood out to champion the paper as exceptional. The reviewers note that the paper is well-written, contributes a methodological innovation, and provides compelling experiments. However, given the reviewers' positive but unenthusiastic scores, and after discussion with the PCs, this paper does not meet the bar for acceptance into ICLR. | test | [
"Syxt-SZJ14",
"H1lBD_EnC7",
"ryg4gQZ5Rm",
"HJeQ2ghbRQ",
"H1gLOlnb0X",
"r1lf7xnbCQ",
"Byl9qJhW0Q",
"BJehhTIgRQ",
"HkevBVyAh7",
"HygJ6sB52X",
"HkeG9tpuhX",
"HJgco12NnX",
"Skg-973qoQ",
"BkgjwgsViQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thanks for the clarifications in the comment, but I did not dispute that extra information is not available. I asked for clarifications to be added to the manuscript for your claims of being state-of-the-art on miniImageNet 5-way 1-shot which are simply not correct imho, since extra information was used. This extr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
-1,
-1,
-1
] | [
"r1lf7xnbCQ",
"H1gLOlnb0X",
"iclr_2019_HJei-2RcK7",
"HygJ6sB52X",
"HkevBVyAh7",
"HkeG9tpuhX",
"BJehhTIgRQ",
"iclr_2019_HJei-2RcK7",
"iclr_2019_HJei-2RcK7",
"iclr_2019_HJei-2RcK7",
"iclr_2019_HJei-2RcK7",
"iclr_2019_HJei-2RcK7",
"BkgjwgsViQ",
"iclr_2019_HJei-2RcK7"
] |
iclr_2019_HJej3s09Km | On the effect of the activation function on the distribution of hidden nodes in a deep network | We analyze the joint probability distribution on the lengths of the
vectors of hidden variables in different layers of a fully connected
deep network, when the weights and biases are chosen randomly according to
Gaussian distributions, and the input is binary-valued. We show
that, if the activation function satisfies a minimal set of
assumptions, satisfied by all activation functions that we know that
are used in practice, then, as the width of the network gets large,
the ``length process'' converges in probability to a length map
that is determined as a simple function of the variances of the
random weights and biases, and the activation function.
We also show that this convergence may fail for activation functions
that violate our assumptions. | rejected-papers | I appreciate that the authors are refuting a technical claim in Poole et al.; however, the paper has garnered zero enthusiasm as it is currently written. I suggest that the authors rewrite the paper as a refutation of Poole et al., and name it as such. | train | [
"HyeYsYKPh7",
"BJxmsF5kp7",
"HJlETOcJTX",
"rygxqdckaX",
"BklpNmRoh7",
"S1gSYsX52Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary: the paper proves the convergence of empirical length map (length process) in NN to the length map for a permissible activation functions in a wide-network limit. The authors also show why the assumptions on the permissible functions can not be relaxed.\n\nQuality: the paper seems to be technically correct... | [
4,
-1,
-1,
-1,
4,
5
] | [
3,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_HJej3s09Km",
"HyeYsYKPh7",
"S1gSYsX52Q",
"BklpNmRoh7",
"iclr_2019_HJej3s09Km",
"iclr_2019_HJej3s09Km"
] |
iclr_2019_HJej6jR5Fm | Meta-Learning to Guide Segmentation | There are myriad kinds of segmentation, and ultimately the "right" segmentation of a given scene is in the eye of the annotator. Standard approaches require large amounts of labeled data to learn just one particular kind of segmentation. As a first step towards relieving this annotation burden, we propose the problem of guided segmentation: given varying amounts of pixel-wise labels, segment unannotated pixels by propagating supervision locally (within an image) and non-locally (across images). We propose guided networks, which extract a latent task representation---guidance---from variable amounts and classes (categories, instances, etc.) of pixel supervision and optimize our architecture end-to-end for fast, accurate, and data-efficient segmentation by meta-learning. To span the few-shot and many-shot learning regimes, we examine guidance from as little as one pixel per concept to as much as 1000+ images, and compare to full gradient optimization at both extremes. To explore generalization, we analyze guidance as a bridge between different levels of supervision to segment classes as the union of instances. Our segmentor concentrates different amounts of supervision of different types of classes into an efficient latent representation, non-locally propagates this supervision across images, and can be updated quickly and cumulatively when given more supervision. | rejected-papers | The paper proposes a meta-learning approach to interactive segmentation. After the author response, R2 and R3 recommend rejecting this paper, citing concerns of limited novelty and insufficient experimental evaluation (given the popularity of this topic in computer vision). R1 does not seem to be familiar with the extensive literature on interactive segmentation, and their positive recommendation has been discounted. The AC finds no basis for accepting this paper.
| train | [
"rJemqdpaRm",
"rkx0CjV96Q",
"HJlEPLVqpQ",
"H1g_Df45aX",
"SygL0b496Q",
"HkxNQtn937",
"HkxrUnYc2Q",
"S1lxeZrq37"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the detailed explanation about the experimental setting and clarification.\nHowever, I'm still not convinced whether proposed model could learn diverse user's intent in case of interactive image segmentation.\nI think there should be significant amount of ambiguity given few sparse guidance for segme... | [
-1,
-1,
-1,
-1,
-1,
7,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"rkx0CjV96Q",
"S1lxeZrq37",
"H1g_Df45aX",
"HkxrUnYc2Q",
"HkxNQtn937",
"iclr_2019_HJej6jR5Fm",
"iclr_2019_HJej6jR5Fm",
"iclr_2019_HJej6jR5Fm"
] |
iclr_2019_HJepJh0qKX | Empirical Study of Easy and Hard Examples in CNN Training | Deep Neural Networks (DNNs) generalize well despite their massive size and capability of memorizing all examples.
There is a hypothesis that DNNs start learning from simple patterns, based on the observation that some examples are consistently well-classified at early epochs (i.e., easy examples) while others are consistently misclassified (i.e., hard examples).
However, despite the importance of understanding the learning dynamics of DNNs, the properties of easy and hard examples have not been fully investigated.
In this paper, we study the similarities of easy and hard examples, respectively, among different CNNs, assessing those examples’ contributions to generalization.
Our results show that most easy examples are identical among different CNNs, as they share similar dataset-dependent patterns (e.g., colors, structures, and superficial high-frequency cues).
Moreover, while hard examples tend to contribute more to generalization than easy examples, removing a large number of easy examples leads to poor generalization, and we find that most misclassified examples in the validation dataset are hard examples.
By analyzing intriguing properties of easy and hard examples, we discover that these properties can be explained by biases in the dataset and by Stochastic Gradient Descent (SGD). | rejected-papers | There is no author response for this paper. The paper formulates a definition of easy and hard examples for training a neural network (NN) in terms of their frequency of being classified correctly over several repeats. One repeat corresponds to training the NN from scratch. The top 10% and bottom 10% of the samples with the highest and the lowest frequency define easy and hard instances for training. The authors also compare easy and hard examples across different architectures of NNs.
On the positive side, all the reviewers acknowledge the potential usefulness of quantifying easy and hard examples in training NNs, and R1 was ready to improve his/her initial rating if the authors revisited the paper.
On the other hand, all the reviewers and AC agreed that the paper requires (1) major improvement in presentation clarity -- see detailed comments of R1 on how to improve as well as comments/questions from R3 and R2; try to avoid confusing terminology such as ‘contradicted patterns’.
R1 raised the important concern that the proposed notion of easiness is drawn from the experiment in Fig. 1 of Arpit et al. (2017), which is not properly attributed. R3 and R2 agreed that, in its current state, the experimental results are not conclusive and often uninformative. To strengthen the paper, the reviewers suggested including more experiments on different datasets and proposing a better metric for defining easy and hard samples (see R3’s suggestions).
We hope the reviews are useful for improving the paper.
| val | [
"SkeAUrZt3Q",
"B1gTG1cw3X",
"S1xnL1sXnQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper formulates a definition of easy and hard examples and studies the properties and the training implications of such examples. The paper does not attempt to present insights that change training for the better (although suggests this could be future work), so the primary value it claims to add is our under... | [
3,
4,
3
] | [
4,
5,
4
] | [
"iclr_2019_HJepJh0qKX",
"iclr_2019_HJepJh0qKX",
"iclr_2019_HJepJh0qKX"
] |
iclr_2019_HJerDj05tQ | Optimization on Multiple Manifolds | Optimization on manifolds has been widely used in machine learning to handle optimization problems with constraints. Most previous works focus on the case with a single manifold. However, in practice it is quite common that the optimization problem involves more than one constraint (each constraint corresponding to one manifold). It is not clear in general how to optimize on multiple manifolds effectively and provably, especially when the intersection of multiple manifolds is not a manifold or cannot be easily calculated. We propose a unified algorithmic framework to handle optimization on multiple manifolds. Specifically, we integrate information from multiple manifolds and move along an ensemble direction by viewing the information from each manifold as a drift and adding them together. We prove convergence properties of the proposed algorithms. We also apply the algorithms to training neural networks with batch normalization layers and achieve preferable empirical results. | rejected-papers | The paper describes a constrained optimization strategy for optimizing on an intersection of two manifolds. Unfortunately, the paper suffers from generally weak presentation quality, with the technical exposition seriously criticized by two out of the three reviewers. (The single positive review is too short and devoid of content to be taken seriously. Even there, concerns are expressed.) This paper requires substantial improvement before it could be considered for publication. | train | [
"rJgEyG_ZjX",
"ByxNEz2p27",
"Hye2YPOu37",
"B1l_Mpupo7",
"HklxSrfT5X"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Thanks for your attention to this paper. For the manifold optimization with a single manifold, to ensure that the iterative point x_{k+1} always moves on manifold, we need the operator $Retr_{x}$: $Retr_{x}(v)$ is a map from $T_{x}\\mathcal{M}$ to manifold $\\mathcal{M}$, where $T_{x}\\mathcal{M}$ is the tangent s... | [
-1,
7,
1,
3,
-1
] | [
-1,
3,
5,
4,
-1
] | [
"HklxSrfT5X",
"iclr_2019_HJerDj05tQ",
"iclr_2019_HJerDj05tQ",
"iclr_2019_HJerDj05tQ",
"iclr_2019_HJerDj05tQ"
] |
iclr_2019_HJeuOiRqKQ | Pooling Is Neither Necessary nor Sufficient for Appropriate Deformation Stability in CNNs | Many of our core assumptions about how neural networks operate remain empirically untested. One common assumption is that convolutional neural networks need to be stable to small translations and deformations to solve image recognition tasks. For many years, this stability was baked into CNN architectures by incorporating interleaved pooling layers. Recently, however, interleaved pooling has largely been abandoned. This raises a number of questions: Are our intuitions about deformation stability right at all? Is it important? Is pooling necessary for deformation invariance? If not, how is deformation invariance achieved in its absence? In this work, we rigorously test these questions, and find that deformation stability in convolutional networks is more nuanced than it first appears: (1) Deformation invariance is not a binary property, but rather that different tasks require different degrees of deformation stability at different layers. (2) Deformation stability is not a fixed property of a network and is heavily adjusted over the course of training, largely through the smoothness of the convolutional filters. (3) Interleaved pooling layers are neither necessary nor sufficient for achieving the optimal form of deformation stability for natural image classification. (4) Pooling confers \emph{too much} deformation stability for image classification at initialization, and during training, networks have to learn to \emph{counteract} this inductive bias. Together, these findings provide new insights into the role of interleaved pooling and deformation invariance in CNNs, and demonstrate the importance of rigorous empirical testing of even our most basic assumptions about the working of neural networks. | rejected-papers | This paper studies the role of pooling in the success underpinning CNNs. Through several experiments, the authors conclude that pooling is neither necessary nor sufficient to achieve deformation stability, and that its inductive bias can be mostly recovered after training.
All reviewers agreed that this is a paper asking an important question, and that it is well-written and reproducible. On the other hand, they also agreed that, in its current form, this paper lacks a 'punchline' that can drive further research. In the words of R6, "the paper does not discuss the links between pooling and aliasing", and in the words of R4, "it seems to very readily jump to unwarranted conclusions". In summary, the AC recommends rejection at this time, and encourages the authors to pursue this line of attack by exploring the reviewers' suggestions and to resubmit.
"ryeCY26qAQ",
"SJlJYrBKA7",
"HJg6gEBY0Q",
"HygrNXrF0Q",
"HJeFxf4oa7",
"SJlP_2N-67",
"H1xY5EwIpm",
"S1eNZj4bpX"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\"Perhaps we should have made this clearer in our writing, but our claim is not that pooling is *never* necessary. Instead, our claim is that pooling is not *always* necessary and that there is an alternative mechanism that can lead to stability to deformation, namely smooth filters.\" \n\nThanks for clarifying yo... | [
-1,
-1,
-1,
-1,
5,
5,
4,
5
] | [
-1,
-1,
-1,
-1,
5,
2,
4,
2
] | [
"SJlJYrBKA7",
"SJlP_2N-67",
"H1xY5EwIpm",
"HJeFxf4oa7",
"iclr_2019_HJeuOiRqKQ",
"iclr_2019_HJeuOiRqKQ",
"iclr_2019_HJeuOiRqKQ",
"iclr_2019_HJeuOiRqKQ"
] |
iclr_2019_HJex0o05F7 | UaiNets: From Unsupervised to Active Deep Anomaly Detection | This work presents a method for active anomaly detection which can be built upon existing deep learning solutions for unsupervised anomaly detection. We show that a prior needs to be assumed on what the anomalies are, in order to have performance guarantees in unsupervised anomaly detection. We argue that active anomaly detection has, in practice, the same cost as unsupervised anomaly detection, but with the possibility of much better results. To solve this problem, we present a new layer that can be attached to any deep learning model designed for unsupervised anomaly detection to transform it into an active method, presenting results on both synthetic and real anomaly detection datasets. | rejected-papers | Following the unanimous vote of the reviewers, this paper is not ready for publication at ICLR. The most significant concern raised is that there does not seem to be an adequate research contribution. Moreover, the claims of novelty are unsubstantiated and are not accompanied by adequate discussion of or comparison to past work. | train | [
"Sye1KuXPR7",
"rklEM4DVRX",
"B1xA8-vVCQ",
"Sygui0INCX",
"B1gpbIRl6Q",
"S1ghJ5FqnX",
"ryg9Strt2m",
"BklLwmi_n7",
"rkl6GQj_3X",
"rkx-wWBacQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public"
] | [
"Dear Authors, I appreciate your addressing some of the review comments. However, some major issues with the paper remain:\n\n1. Simply plugging deep-learning with active learning (for anomaly detection) is not a significant contribution.\n\n2. The theory in the paper is completely redundant and its implications ar... | [
-1,
-1,
-1,
-1,
4,
5,
3,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
2,
4,
-1,
-1,
-1
] | [
"rklEM4DVRX",
"ryg9Strt2m",
"S1ghJ5FqnX",
"B1gpbIRl6Q",
"iclr_2019_HJex0o05F7",
"iclr_2019_HJex0o05F7",
"iclr_2019_HJex0o05F7",
"rkl6GQj_3X",
"rkx-wWBacQ",
"iclr_2019_HJex0o05F7"
] |
iclr_2019_HJf7ts0cFm | State-Regularized Recurrent Networks | Recurrent networks are a widely used class of neural architectures. They have, however, two shortcomings. First, it is difficult to understand what exactly they learn. Second, they tend to work poorly on sequences requiring long-term memorization, despite having this capacity in principle. We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications. This mechanism, which we term state-regularization, makes RNNs transition between a finite set of learnable states. We show that state-regularization (a) simplifies the extraction of finite state automata modeling an RNN's state transition dynamics, and (b) forces RNNs to operate more like automata with external memory and less like finite state machines. | rejected-papers | The authors propose to incorporate an additional layer between the consecutive steps in an LSTM by introducing a radial basis function layer (with a dot-product kernel and softmax) followed by a linear layer, making the LSTM similar to, or better at (by being more explicit), capturing DFA-like transitions. The motivation is relatively straightforward, but it does not really resolve the issue of whether existing formulations of RNNs cannot capture such transitions. Since this was not shown theoretically or intuitively, it is important for the empirical evaluations to be thorough and to clearly show that the proposed approach does indeed outperform the vanilla LSTM (with peepholes) when the capacity (e.g., the number of parameters) matches. Unfortunately, it has been the consensus among the reviewers that more thorough comparisons on more conventional benchmarks are needed to convince them of the merit of the proposed approach. | train | [
"rJxDU40zC7",
"SJlckk2taX",
"H1xGBniFp7",
"HJg-saGEpQ",
"SJeM_VnbaX",
"Hyel1qtb67",
"HJxlpD793m"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewers, \n\nThe period during which we can address your comments and make changes to the submission is coming to an end. We wanted to make sure that we have addressed your main concerns in a sufficient manner. Please let us know if you have additional suggestions for improvements. We'll be happy to incorpo... | [
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"iclr_2019_HJf7ts0cFm",
"SJeM_VnbaX",
"Hyel1qtb67",
"HJxlpD793m",
"iclr_2019_HJf7ts0cFm",
"iclr_2019_HJf7ts0cFm",
"iclr_2019_HJf7ts0cFm"
] |
iclr_2019_HJfQrs0qt7 | Convergence Properties of Deep Neural Networks on Separable Data | While a lot of progress has been made in recent years, the dynamics of learning in deep nonlinear neural networks remain to this day largely misunderstood. In this work, we study the case of binary classification and prove various properties of learning in such networks under strong assumptions such as linear separability of the data. Extending existing results from the linear case, we confirm empirical observations by proving that the classification error also follows a sigmoidal shape in nonlinear architectures. We show that given proper initialization, learning expounds parallel independent modes and that certain regions of parameter space might lead to failed training. We also demonstrate that input norm and features' frequency in the dataset lead to distinct convergence speeds which might shed some light on the generalization capabilities of deep neural networks. We provide a comparison between the dynamics of learning with cross-entropy and hinge losses, which could prove useful to understand recent progress in the training of generative adversarial networks. Finally, we identify a phenomenon that we baptize gradient starvation where the most frequent features in a dataset prevent the learning of other less frequent but equally informative features. | rejected-papers | The manuscript proposes to analyze the learning dynamics of deep networks with separable data. A variety of results are provided under various assumptions.
The reviewers and AC note the assumptions required for the analysis are quite strong, and perhaps too strong to provide useful insight into real problems. Reviewers also cite issues with writing and the breadth of the title (this was much improved after rebuttal). | train | [
"ByesVG2V0Q",
"HyeLIvs40Q",
"H1xjF_PVCX",
"SyeGJ1qEA7",
"SklEdDP4CQ",
"rJl4x6wqnX",
"HJlqXnE5nQ",
"HJl5RsCF3m"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review. Below we attempt to answer your concerns. We also want to point out that we have added some insights/results relaxing one of our main assumptions in Section 3.4 of the latest version of the paper. For more details, please see the comment above entitled: “Relaxing Assumption (H2)”.\n\nTh... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"HJl5RsCF3m",
"iclr_2019_HJfQrs0qt7",
"rJl4x6wqnX",
"HJlqXnE5nQ",
"iclr_2019_HJfQrs0qt7",
"iclr_2019_HJfQrs0qt7",
"iclr_2019_HJfQrs0qt7",
"iclr_2019_HJfQrs0qt7"
] |
iclr_2019_HJfxbhR9KQ | Mimicking actions is a good strategy for beginners: Fast Reinforcement Learning with Expert Action Sequences | Imitation Learning is the task of mimicking the behavior of an expert player in a Reinforcement Learning (RL) environment to enhance the training of a fresh agent (called the novice) beginning from scratch. Most Reinforcement Learning environments are stochastic in nature, i.e., the state sequences that an agent may encounter usually follow a Markov Decision Process (MDP). This makes the task of mimicking difficult, as it is very unlikely that a new agent will encounter the same or similar state sequences as an expert. Prior research in Imitation Learning proposes various ways to learn a mapping between the states encountered and the respective actions taken by the expert, while mostly being agnostic to the order in which these were performed. Most of these methods need a considerable number of state-action pairs to achieve good results. We propose a simple alternative to Imitation Learning by augmenting the novice’s action space with the frequent short action sequences that the expert has taken. This simple modification surprisingly improves exploration and significantly outperforms alternative approaches like Dataset Aggregation. We experiment with several popular Atari games and show significant and consistent growth in the score that the new agents achieve using just a few expert action sequences. | rejected-papers | The paper proposes an interesting idea for more effective imitation learning. The idea is to include short action sequences as labels (in addition to the basic actions) in imitation learning. Results on a few Atari games demonstrate the potential of this approach.
Reviewers generally like the idea, think it is simple, and are encouraged by its empirical support. That said, the work still appears somewhat preliminary at the current stage: (1) one reviewer still has doubts about the chosen baseline; (2) the empirical evidence comes entirely from a similar set of Atari games --- how broadly is this approach applicable?
"ryeDrw8qkN",
"HygO5LHq0Q",
"SJljw8S90X",
"B1g14LSq07",
"BkxqR35J07",
"BJgTCSrEp7",
"Hkgw1asmhX",
"ByxMU9pvjm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear authors,\n\nThank you for your clarifications and an additional comparison with a baseline using a random subset of action pairs.\n\nThe main idea of this paper is interesting. However, my major concern is still in the experimental part, as it remains unclear to me how we should use the method. Many of the hy... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"B1g14LSq07",
"ByxMU9pvjm",
"Hkgw1asmhX",
"BJgTCSrEp7",
"iclr_2019_HJfxbhR9KQ",
"iclr_2019_HJfxbhR9KQ",
"iclr_2019_HJfxbhR9KQ",
"iclr_2019_HJfxbhR9KQ"
] |
iclr_2019_HJg3rjA5tQ | Spread Divergences | For distributions p and q with different support, the divergence $D(p\|q)$ generally will not exist. We define a spread divergence $\tilde{D}(p\|q)$ on modified p and q and describe sufficient conditions for the existence of such a divergence. We give examples of using a spread divergence to train implicit generative models, including linear models (Principal Components Analysis and Independent Components Analysis) and non-linear models (Deep Generative Networks). | rejected-papers | This manuscript proposes spread divergences as a technique for extending f-divergences to distributions with different supports. This is achieved by convolving with a noise distribution. This is an important topic worth further study in the community, particularly as it relates to training generative models.
The reviewers' and AC's opinions were mixed, with reviewers either being unconvinced about the novelty of the proposed work or expressing issues with the clarity of the presentation. Further improvement of the clarity, combined with additional convincing experiments, would significantly strengthen this submission.
"rJxQy_kV0X",
"BkeWUvyNRQ",
"Hyx9nLkN0m",
"SkxupTwbTX",
"SJe4feWY37",
"B1ehvuxYh7"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"* We agree that the proposed experiments will be useful and we will include them.\n\n* The intuition for maxmising the divergence is similar to that in MMD and Wasserstein distance, namely that we wish to consider mappings which maximally enable us to discern the difference between two distrubutions (whilst retain... | [
-1,
-1,
-1,
5,
4,
6
] | [
-1,
-1,
-1,
4,
4,
4
] | [
"B1ehvuxYh7",
"SJe4feWY37",
"SkxupTwbTX",
"iclr_2019_HJg3rjA5tQ",
"iclr_2019_HJg3rjA5tQ",
"iclr_2019_HJg3rjA5tQ"
] |
iclr_2019_HJg6e2CcK7 | Clean-Label Backdoor Attacks | Deep neural networks have been recently demonstrated to be vulnerable to backdoor attacks. Specifically, by altering a small set of training examples, an adversary is able to install a backdoor that can be used during inference to fully control the model’s behavior. While the attack is very powerful, it crucially relies on the adversary being able to introduce arbitrary, often clearly mislabeled, inputs to the training set and can thus be detected even by fairly rudimentary data filtering. In this paper, we introduce a new approach to executing backdoor attacks, utilizing adversarial examples and GAN-generated data. The key feature is that the resulting poisoned inputs appear to be consistent with their label and thus seem benign even upon human inspection. | rejected-papers | The present work proposes to improve backdoor poisoning attacks by only using "clean-label" images (images whose label would be judged correct by a human), with the motivation that this would make them harder to detect. It considers two approaches to this, one based on GANs and one based on adversarial examples, and shows that the latter works better (and is in general quite effective). It also identifies an interesting phenomenon---that simply using existing back-door attacks with clean labels is substantially less effective than with incorrect labels, because the network does not need to modify itself to accommodate these additional correctly-labeled examples.
The strengths of this paper are that it has a detailed empirical evaluation with multiple interesting insights (described above). It also considers efficacy against some basic defense measures based on random pre-processing.
A weakness of the paper is that the justification for clean-label attacks is somewhat heuristic, based on the claim that dirty-label attacks can be recognized by hand. There is additional justification that dirty labels tend to be correlated with low confidence, but this correlation (as shown in Figure 2) is actually quite weak. On the other hand, natural defense strategies against the adversarial examples based attack (such as detecting and removing points with large loss at intermediate stages of training) are not considered. This might be fine, as we often assume that the attacker can react to the defender, but it is unclear why we should reject dirty-label attacks on the basis that they can be recognized by one detection mechanism but not give the defender the benefit of other simple detection mechanisms for clean-label attacks.
A separate concern was brought up that the attack is too similar to that of Guo et al., and that the method was not run on large-scale datasets. The Guo et al. paper does somewhat diminish the novelty of the present work, but not in a way that I consider problematic; there are definitely new results in this paper, especially the interesting empirical finding that the Guo et al. attack crucially relies on dirty labels. I do not agree with the criticism about large-scale datasets; in general, not all authors have the resources to test on ImageNet, and it is not clear why this should be required unless there is a specific hypothesis that running on ImageNet would test. It is true that the GAN-based method might work more poorly on ImageNet than on CIFAR, but the adversarial attack method (which is in any case the stronger method) seems unlikely to run into scaling issues.
Overall, this paper is right on the borderline of acceptance. There are interesting results, and none of the weaknesses are critical. It was unfortunately the case that there wasn't room in the program this year, so the paper was ultimately rejected. However, I think this could be a strong piece of work (and a clear accept) with some additional development. Here are some ideas that might help:
(1) Further investigate the phenomenon that adding data points that are too easy to fit does not succeed in data poisoning. This is a fairly interesting point but is not emphasized in the paper.
(2) Investigate natural defense mechanisms in the clean-label setting (such as filtering by loss or other such strategies). I do not think it is crucial that the clean-label attack bypasses every simple defense, but considering such defenses can provide more insight into how the attack works--e.g., does it in fact lead to substantially higher loss during training? And if so, at what stage does this occur? If not, how does it succeed in altering the model without inducing high loss? | train | [
"B1eHKRmEe4",
"BJedSbMEgE",
"BkeQgHN0RQ",
"Hke0YdA3CX",
"HJgnjDA2Rm",
"B1gjWlMoA7",
"HkgVGbfqCm",
"rkeRA_KuAX",
"HJlyuOh1RX",
"H1xefd2J0m",
"r1xZxO3J0X",
"Bke-NGHlTQ",
"rkgXH4Xwn7",
"SJgU__OUn7"
] | [
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"official_reviewer",
"public",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear AnonReviewer2,\n\nI am writing to bring your attention to my comments below, per the Area Chair's request. In my opinion, this paper's overarching idea is very interesting and yet the implementation could be improved. In particular, I had three major concerns in my initial review. \n\n1. This paper does not p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
4
] | [
"rkeRA_KuAX",
"HJgnjDA2Rm",
"Hke0YdA3CX",
"HkgVGbfqCm",
"B1gjWlMoA7",
"HJlyuOh1RX",
"iclr_2019_HJg6e2CcK7",
"H1xefd2J0m",
"SJgU__OUn7",
"rkgXH4Xwn7",
"Bke-NGHlTQ",
"iclr_2019_HJg6e2CcK7",
"iclr_2019_HJg6e2CcK7",
"iclr_2019_HJg6e2CcK7"
] |
iclr_2019_HJgJS30qtm | REVISTING NEGATIVE TRANSFER USING ADVERSARIAL LEARNING | An unintended consequence of feature sharing is the model fitting to correlated tasks within the dataset, termed negative transfer. In this paper, we revisit the problem of negative transfer in the multitask setting and find that its corrosive effects are applicable to a wide range of linear and non-linear models, including neural networks. We first study the effects of negative transfer in a principled way and show that previously proposed counter-measures are insufficient, particularly for trainable features. We propose an adversarial training approach to mitigate the effects of negative transfer by viewing the problem in a domain adaptation setting. Finally, empirical results on multi-task attribute prediction on the AWA and CUB datasets further validate the need for correcting negative sharing in an end-to-end manner. | rejected-papers | This paper proposes reducing so-called "negative transfer" through adversarial feature learning. The application of DANN for this task is new. However, the problem setting and particular assumptions are not sufficiently justified. As commented by the reviewers and acknowledged by the authors, there is miscommunication about the basic premise of negative transfer, and the main assumptions about the target distribution and its label distribution need further justification. The authors are advised to restructure their manuscript so as to clarify the main contribution, assumptions, and motivation for their problem statement.
In addition, the paper in its current form lacks sufficient experimental evidence to conclude that the proposed approach is preferable to prior work (such as Li 2018 and Zhang 2018), and it lacks the proper ablation to conclude that the elimination of negative transfer is the main source of improvements.
We encourage the authors to improve these aspects of the work and resubmit to a future venue. | train | [
"rylr-32cA7",
"HyelBNg92m",
"rklt7bDY2X",
"SJlOAEjunQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for the constructive remarks on our idea and pointing out some relevant literature. Based on your comments, we will revamp the presentation and articulation of the paper for a future venue. \n\nThe concerns of the reviewers can be summarized as follows with our brief response (to foster a di... | [
-1,
4,
2,
6
] | [
-1,
4,
4,
4
] | [
"iclr_2019_HJgJS30qtm",
"iclr_2019_HJgJS30qtm",
"iclr_2019_HJgJS30qtm",
"iclr_2019_HJgJS30qtm"
] |
iclr_2019_HJgODj05KX | A preconditioned accelerated stochastic gradient descent algorithm | We propose a preconditioned accelerated stochastic gradient method suitable for large scale optimization. We derive sufficient convergence conditions for the minimization of convex functions using a generic class of diagonal preconditioners and provide a formal convergence proof based on a framework originally used for on-line learning. Inspired by recent popular adaptive per-feature algorithms, we propose a specific preconditioner based on the second moment of the gradient. The sufficient convergence conditions motivate a critical adaptation of the per-feature updates in order to ensure convergence. We show empirical results for the minimization of convex and non-convex cost functions, in the context of neural network training. The method compares favorably with respect to current, first order, stochastic optimization methods. | rejected-papers | Dear authors,
The reviewers pointed out a number of concerns about this work. It is thus not ready for publication. Should you decide to resubmit it to another venue, please address these concerns. | val | [
"SkgUtvKC27",
"rJem4FHcnQ",
"ryx5svN5hm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Authors propose combining Adam-like per-feature adaptative and Nesterov's momentum. Even though Nesterov's momentum is implemented in major frameworks, it is rarely used, so there's an obvious question of practical relevance of proposed method.\n\nSignificant part of the paper is dedicated to proof of convergence,... | [
4,
4,
5
] | [
3,
5,
3
] | [
"iclr_2019_HJgODj05KX",
"iclr_2019_HJgODj05KX",
"iclr_2019_HJgODj05KX"
] |
iclr_2019_HJgOl3AqY7 | Modulated Variational Auto-Encoders for Many-to-Many Musical Timbre Transfer | Generative models have been successfully applied to image style transfer and domain translation. However, there is still a wide gap in the quality of results when learning such tasks on musical audio. Furthermore, most translation models only enable one-to-one or one-to-many transfer by relying on separate encoders or decoders and complex, computationally-heavy models. In this paper, we introduce the Modulated Variational auto-Encoders (MoVE) to perform musical timbre transfer. First, we define timbre transfer as applying parts of the auditory properties of a musical instrument onto another. We show that we can achieve and improve this task by conditioning existing domain translation techniques with Feature-wise Linear Modulation (FiLM). Then, by replacing the usual adversarial translation criterion by a Maximum Mean Discrepancy (MMD) objective, we alleviate the need for an auxiliary pair of discriminative networks. This allows a faster and more stable training, along with a controllable latent space encoder. By further conditioning our system on several different instruments, we can generalize to many-to-many transfer within a single variational architecture able to perform multi-domain transfers. Our models map inputs to 3-dimensional representations, successfully translating timbre from one instrument to another and supporting sound synthesis on a reduced set of control parameters. We evaluate our method in reconstruction and generation tasks while analyzing the auditory descriptor distributions across transferred domains. We show that this architecture incorporates generative controls in multi-domain transfer, yet remaining rather light, fast to train and effective on small datasets. | rejected-papers | This paper proposes a VAE-based model which is able to perform musical timbre transfer.
The reviewers generally find the approach well-motivated. The idea to perform many-to-many transfer within a single architecture is found to be promising. However, there have been some unaddressed concerns, as detailed below.
R3 has some methodological concerns regarding negative transfer and asks for a more extended experimental section. R1 and R2 ask for more interpretable results and, ultimately, a more conclusive study. R2 specifically finds the results to be insufficient.
The authors have agreed with some of the reviewers' feedback but have left most of it unaddressed in a new revision. That could be because some of the recommendations require significant extra work.
Given the above, it seems that this paper needs more work before it can be accepted at ICLR.
"SyxRA2-1JV",
"r1gzlZCTCm",
"SkgfU-gw0m",
"rklb2xgmAX",
"BJeCRTJ707",
"SyltxKsTpm",
"ryxdk0Yaa7",
"SJxnP8_eam",
"HJgaRiJcnm",
"Ske_2A9d3m",
"HyeLrre8jQ",
"rJxESNauq7",
"BJlgYBaQ9m",
"rkg6uDbMqQ",
"rJll5K8W97"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"public"
] | [
"thank you for the constructive remarks\n\naccording to it, we are reconsidering how we compute scores and particularly scaling transfer objectives (MMD,KNN,Energy Statistics) with scores on different partitions of source/target domains against reference batches of the target domain\n\nwe may also consider some sam... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"r1gzlZCTCm",
"SyltxKsTpm",
"BJeCRTJ707",
"SJxnP8_eam",
"ryxdk0Yaa7",
"HJgaRiJcnm",
"Ske_2A9d3m",
"iclr_2019_HJgOl3AqY7",
"iclr_2019_HJgOl3AqY7",
"iclr_2019_HJgOl3AqY7",
"iclr_2019_HJgOl3AqY7",
"BJlgYBaQ9m",
"iclr_2019_HJgOl3AqY7",
"rJll5K8W97",
"iclr_2019_HJgOl3AqY7"
] |
iclr_2019_HJgTHnActQ | Local Image-to-Image Translation via Pixel-wise Highway Adaptive Instance Normalization | Recently, image-to-image translation has seen significant success. Among many approaches, image translation based on an exemplar image, which contains the target style information, has been popular, owing to its capability to handle multimodality as well as its suitability for practical use. However, most of the existing methods extract the style information from an entire exemplar and apply it to the entire input image, which introduces excessive image translation in irrelevant image regions. In response, this paper proposes a novel approach that jointly extracts the local masks of the input image and the exemplar as targeted regions to be involved in image translation. In particular, the main novelty of our model lies in (1) co-segmentation networks for local mask generation and (2) the local mask-based highway adaptive instance normalization technique. We present quantitative and qualitative evaluation results to show the advantages of our proposed approach. Finally, the code is available at https://github.com/AnonymousIclrAuthor/Highway-Adaptive-Instance-Normalization | rejected-papers | The paper received mixed ratings. The proposed idea is quite reasonable but also sounds somewhat incremental. While the idea of separating foreground/background is reasonable, it also limits the applicability of the proposed method (i.e., the method is only demonstrated on aligned face images). In addition, combining AdaIN with a foreground mask is a reasonable idea but doesn’t sound groundbreakingly novel. The comparison against StarGAN looks quite anecdotal, and the proposed method seems to cause only hairstyle changes (transfer of other attributes is not obvious). In addition, please refer to the detailed reviewers’ comments for other concerns. Overall, it sounds like a good engineering paper that might be a better fit for a computer vision venue, but the experimental validation seems somewhat preliminary and it’s unclear how much novel insight and general technical contribution this work provides.
| train | [
"Syg6sFuweN",
"HkxMU5dPeE",
"S1l0L62kkE",
"S1eex_jDhm",
"Bke2XVeJy4",
"HyxSg-Lc0Q",
"Byx_lTB5AQ",
"SyxsQhH9RQ",
"BygNzp6j3X",
"SJxpmdAlnQ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"‘Updates on Fig. 4’\nReflecting the reviewer’s comments, we significantly updated Fig. 4, as shown in http://123.108.168.4:5000/figure/page/4\nFirst, we included the resulting masks after user edits. Regarding makeup-lipstick example, the reason for a small difference is because the `lipstick’ and `makeup’ attribu... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
5
] | [
"S1l0L62kkE",
"Bke2XVeJy4",
"HyxSg-Lc0Q",
"iclr_2019_HJgTHnActQ",
"SyxsQhH9RQ",
"S1eex_jDhm",
"BygNzp6j3X",
"SJxpmdAlnQ",
"iclr_2019_HJgTHnActQ",
"iclr_2019_HJgTHnActQ"
] |
iclr_2019_HJgVisRqtX | SEGEN: SAMPLE-ENSEMBLE GENETIC EVOLUTIONARY NETWORK MODEL | Deep learning, a rebranding of deep neural network research, has achieved remarkable success in recent years. With multiple hidden layers, deep learning models aim at computing the hierarchical feature representations of the observational data. Meanwhile, due to its severe disadvantages in data consumption, computational resources, parameter tuning costs and the lack of result explainability, deep learning has also suffered a great deal of criticism. In this paper, we will introduce a new representation learning model, namely “Sample-Ensemble Genetic Evolutionary Network” (SEGEN), which can serve as an alternative approach to deep learning models. Instead of building one single deep model, based on a set of sampled sub-instances, SEGEN adopts a genetic-evolutionary learning strategy to build a group of unit models generation by generation. The unit models incorporated in SEGEN can be either traditional machine learning models or recent deep learning models with a much “narrower” and “shallower” architecture. The learning results of each instance at the final generation will be effectively combined from each unit model via diffusive propagation and ensemble learning strategies. From the computational perspective, SEGEN requires far less data, fewer computational resources and less parameter tuning effort, but has sound theoretical interpretability of the learning process and results. Extensive experiments have been done on several different real-world benchmark datasets, and the experimental results obtained by SEGEN have demonstrated its advantages over the state-of-the-art representation learning models. | rejected-papers | This paper endeavors to combine genetic evolutionary algorithms with subsampling techniques. As noted by reviewers, this is an interesting topic and the paper is intriguing, but more work is required to make it convincing (fairer baselines, more detailed / clearer presentation, ablation studies to justify the claims made in the paper). The authors are encouraged to strengthen the paper by following the reviewers' suggestions. | train | [
"ryejOUP7RQ",
"rkgRexJdAX",
"SJlE9Qm32m",
"B1x636G56Q",
"SkeQ8n-caQ",
"SJxPI_zcp7",
"HkgIko9h2m",
"rkxb8KF2hQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the response. Some of the concerns are resolved through the rebuttal. Here are some important issues: \n- Point <2> needs to be clarified in the manuscript.\n- Point <3> is not raised or mentioned in the manuscript! Needs clear clarification, and then ablation studies to show how they help.\n- With resp... | [
-1,
-1,
4,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
2,
5
] | [
"B1x636G56Q",
"SJxPI_zcp7",
"iclr_2019_HJgVisRqtX",
"SJlE9Qm32m",
"HkgIko9h2m",
"rkxb8KF2hQ",
"iclr_2019_HJgVisRqtX",
"iclr_2019_HJgVisRqtX"
] |
iclr_2019_HJgZrsC5t7 | Improving On-policy Learning with Statistical Reward Accumulation | Deep reinforcement learning has obtained significant breakthroughs in recent years. Most methods in deep-RL achieve good results via the maximization of the reward signal provided by the environment, typically in the form of discounted cumulative returns. Such reward signals represent the immediate feedback of a particular action performed by an agent. However, tasks with sparse reward signals are still challenging to on-policy methods. In this paper, we introduce an effective characterization of past reward statistics (which can be seen as long-term feedback signals) to supplement this immediate reward feedback. In particular, value functions are learned with multi-critics supervision, enabling complex value functions to be more easily approximated in on-policy learning, even when the reward signals are sparse. We also introduce a novel exploration mechanism called ``hot-wiring'' that can give a boost to seemingly trapped agents. We demonstrate the effectiveness of our advantage actor multi-critic (A2MC) method across the discrete domains in Atari games as well as continuous domains in the MuJoCo environments. A video demo is provided at https://youtu.be/zBmpf3Yz8tc and source codes will be made available upon paper acceptance. | rejected-papers | The paper proposes an interesting idea for efficient exploration of on-policy learning in sparse reward RL problems. The empirical results are promising, which is the main strength of the paper. On the other hand, reviewers generally feel that the proposed algorithm is rather ad hoc, sometimes with not-so-transparent algorithmic choices. As a result, it is really unclear whether the idea works only on the test problems, or applies to a broader set of problems. The author responses and new results are helpful and appreciated by all reviewers, but do not change the reviewers' concerns. | train | [
"S1x9ngK8R7",
"rkgxJT5lp7",
"SylK9vaA27",
"S1ebLyslTQ",
"SJxVP3qepm",
"r1gSOgY16X",
"BJxLw4N5n7"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We want to thank the reviewers for their kind and helpful feedback. We've followed all reviewers' comments and addressed the related concerns in our revision, outlined as follows:\n\n1) We revised *Related Work* to clarify the scope of our paper while explicitly mentioning reward shaping;\n2) We revised *Section 4... | [
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_HJgZrsC5t7",
"SJxVP3qepm",
"BJxLw4N5n7",
"rkgxJT5lp7",
"r1gSOgY16X",
"iclr_2019_HJgZrsC5t7",
"iclr_2019_HJgZrsC5t7"
] |
iclr_2019_HJglg2A9FX | Iteratively Learning from the Best | We study a simple generic framework to address the issue of bad training data; both bad labels in supervised problems, and bad samples in unsupervised ones. Our approach starts by fitting a model to the whole training dataset, but then iteratively improves it by alternating between (a) revisiting the training data to select samples with lowest current loss, and (b) re-training the model on only these selected samples. It can be applied to any existing model training setting which provides a loss measure for samples, and a way to refit on new ones. We show the merit of this approach in both theory and practice. We first prove statistical consistency, and linear convergence to the ground truth and global optimum, for two simpler model settings: mixed linear regression, and Gaussian mixture models. We then demonstrate its success empirically in (a) saving the accuracy of existing deep image classifiers when there are errors in the labels of training images, and (b) improving the quality of samples generated by existing DC-GAN models, when it is given training data that contains a fraction of the images from a different and unintended dataset. The experimental results show significant improvement over the baseline methods that ignore the existence of bad labels/samples. | rejected-papers | This paper addresses the problem of learning with outliers, which many reviewers agree is an important direction. However, reviewers point to issues with the experiments (missing baselines, ablations, etc.) and are concerned that the assumptions in the theoretical analysis are too strong. These were potentially addressed in a revised version of the paper, but the revisions are so major that I do not think it is appropriate to consider them in the review process (and it is hard to assess to what extent they address the issues without asking reviewers to do a thorough re-appraisal, which goes beyond the scope of their duties). I encourage the authors to take reviewer comments into account and prepare a more polished version of the manuscript for future submission. | train | [
"Hke_ZBQBp7",
"B1gAK0CrRQ",
"SJlFk0RHAX",
"ryg2OnAr07",
"BJe1_7BF2m",
"BkeEAjqS3Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose an iterative method for discarding outlying training data: first, learn a model on the entire training dataset; second, identify the training examples that have high loss under the learned model; and then alternate between re-learning the model on the training examples that do not have high los... | [
6,
-1,
-1,
-1,
3,
6
] | [
3,
-1,
-1,
-1,
5,
4
] | [
"iclr_2019_HJglg2A9FX",
"BkeEAjqS3Q",
"Hke_ZBQBp7",
"BJe1_7BF2m",
"iclr_2019_HJglg2A9FX",
"iclr_2019_HJglg2A9FX"
] |
iclr_2019_HJguLo0cKQ | Strength in Numbers: Trading-off Robustness and Computation via Adversarially-Trained Ensembles | While deep learning has led to remarkable results on a number of challenging problems, researchers have discovered a vulnerability of neural networks in adversarial settings, where small but carefully chosen perturbations to the input can make the models produce extremely inaccurate outputs. This makes these models particularly unsuitable for safety-critical application domains (e.g. self-driving cars) where robustness is extremely important. Recent work has shown that augmenting training with adversarially generated data provides some degree of robustness against test-time attacks. In this paper we investigate how this approach scales as we increase the computational budget given to the defender. We show that increasing the number of parameters in adversarially-trained models increases their robustness, and in particular that ensembling smaller models while adversarially training the entire ensemble as a single model is a more efficient way of spending said budget than simply using a larger single model. Crucially, we show that it is the adversarial training of the ensemble, rather than the ensembling of adversarially trained models, which provides robustness. | rejected-papers | The work brings little novelty compared to existing literature. | train | [
"ryxEWZhOh7",
"HJe-jclP37",
"HklJi6Ox37",
"HyxOTCUk6Q",
"rJxto0Ly6X",
"r1e6I_Ly6m",
"BkeLky8ypX",
"Skgx0Q8ypQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper presents a new adversarial training defense whereby an ensemble of models is trained against both benign and adversarial examples. The authors demonstrate on the CIFAR-10 dataset that the ensemble has improved robustness against a wide variety of white-box and transfer-based black-box attacks compared t... | [
5,
6,
4,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HJguLo0cKQ",
"iclr_2019_HJguLo0cKQ",
"iclr_2019_HJguLo0cKQ",
"HklJi6Ox37",
"HklJi6Ox37",
"HklJi6Ox37",
"ryxEWZhOh7",
"HJe-jclP37"
] |
iclr_2019_HJgyAoRqFQ | State-Denoised Recurrent Neural Networks | Recurrent neural networks (RNNs) are difficult to train on sequence processing tasks, not only because input noise may be amplified through feedback, but also because any inaccuracy in the weights has similar consequences as input noise. We describe a method for denoising the hidden state during training to achieve more robust representations thereby improving generalization performance. Attractor dynamics are incorporated into the hidden state to `clean up' representations at each step of a sequence. The attractor dynamics are trained through an auxiliary denoising loss to recover previously experienced hidden states from noisy versions of those states. This state-denoised recurrent neural network (SDRNN) performs multiple steps of internal processing for each external sequence step. On a range of tasks, we show that the SDRNN outperforms a generic RNN as well as a variant of the SDRNN with attractor dynamics on the hidden state but without the auxiliary loss. We argue that attractor dynamics---and corresponding connectivity constraints---are an essential component of the deep learning arsenal and should be invoked not only for recurrent networks but also for improving deep feedforward nets and intertask transfer. | rejected-papers | The paper is well written and develops a novel and original architecture and technique for RNNs to learn attractors for their hidden states (based on an auxiliary denoising training of an attractor network). All reviewers and AC found the idea very interesting and a promising direction of research for RNNs. However, all also agreed that the experimental validation was currently too limited, in type and size of task and data, as well as in scope. Reviewers demand experimental comparisons with other (simpler) denoising / regularization techniques; more in-depth experimental validation and analysis of the state-denoising behaviour; as well as experiments on larger datasets and more ambitious tasks. | train | [
"rkgctaSiCQ",
"Ske6I8SiAQ",
"HJx7C-SoCQ",
"Bye5F_Hb6Q",
"BJgkzBas37",
"S1gsJSsy2m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Your major point is appreciated, but we worry we have misled readers by using the term 'noise' in a fast and loose manner. Certainly corruption to the hidden state due to untrained or poorly trained weights is _not_ anything close to Gaussian. We see that we have been misleading in suggesting that our denoising tr... | [
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"S1gsJSsy2m",
"BJgkzBas37",
"Bye5F_Hb6Q",
"iclr_2019_HJgyAoRqFQ",
"iclr_2019_HJgyAoRqFQ",
"iclr_2019_HJgyAoRqFQ"
] |
iclr_2019_HJl0jiRqtX | EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE | Making decisions requires information relevant to the task at hand. Many real-life decision-making situations allow acquiring further relevant information at a specific cost. For example, in assessing the health status of a patient we may decide to take additional measurements such as diagnostic tests or imaging scans before making a final assessment. More information that is relevant allows for better decisions but it may be costly to acquire all of this information. How can we trade off the desire to make good decisions with the option to acquire further information at a cost? To this end, we propose a principled framework, named EDDI (Efficient Dynamic Discovery of high-value Information), based on the theory of Bayesian experimental design. In EDDI we propose a novel partial variational autoencoder (Partial VAE), to efficiently handle missing data over varying subsets of known information. EDDI combines this Partial VAE with an acquisition function that maximizes expected information gain on a set of target variables. EDDI is efficient and demonstrates that dynamic discovery of high-value information is possible; we show cost reduction at the same decision quality and improved decision quality at the same cost in benchmarks and in two health-care applications. We believe there is great potential for realizing these gains in real-world decision support systems. | rejected-papers | This paper develops an active variable selection framework that couples a partial variational autoencoder capable of handling missing data with an information acquisition criterion derived from Bayesian experimental design. The paper is generally well written and the formulation appears to be natural, with a compelling real world healthcare application. The topic is relatively under-explored in deep learning and the paper appears to attempt to set a valuable baseline. However, the AC cannot recommend acceptance based on the fact that reviewer 2 has brought up concerns about the competitiveness of the approach relative to alternative methods reported in the experimental section, and all reviewers have found various parts of the paper to have room for improvement with regards to technical clarity. As such the paper would benefit from a revision and a stronger resubmission. | train | [
"HJxgYLy-pX",
"H1lvv_feA7",
"S1xhkTNxAX",
"rk-Q2zgA7",
"rJlodiMx07",
"rke_8oGgCm",
"B1lI6Wnah7",
"HJgQ25LF27"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"----I acknowledge that the authors have made improvements to the paper and have increased my score to 6\n\nThis is still definitely not my area of expertise and so I am leaving my confidence score low. \n---\n\nThe paper presents an algorithm EDDI that uses a a partial VAE and does active feature selection. The au... | [
6,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
2,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_HJl0jiRqtX",
"HJxgYLy-pX",
"iclr_2019_HJl0jiRqtX",
"HJgQ25LF27",
"B1lI6Wnah7",
"B1lI6Wnah7",
"iclr_2019_HJl0jiRqtX",
"iclr_2019_HJl0jiRqtX"
] |
iclr_2019_HJl1ujCct7 | A Multi-modal one-class generative adversarial network for anomaly detection in manufacturing | One class anomaly detection on high-dimensional data is one of the critical issues in both the fundamental machine learning research area and manufacturing applications. A good anomaly detection method should accurately discriminate anomalies from normal data. Although most previous anomaly detection methods achieve good performances, they do not perform well on high-dimensional imbalanced datasets with 1) a limited amount of data; 2) a multi-modal distribution; 3) few anomaly data. In this paper, we develop a multi-modal one-class generative adversarial network-based detector (MMOC-GAN) to distinguish anomalies from normal data (products). Apart from a domain-specific feature extractor, our model leverages a generative adversarial network (GAN). The generator takes in a modified noise vector using a pseudo latent prior and generates samples at the low-density area of the given normal data to simulate the anomalies. The discriminator is then trained to distinguish the generated samples from the normal samples. Since the generated samples simulate the low density area for each mode, the discriminator could directly detect anomalies from normal data. Experiments demonstrate that our model outperforms the state-of-the-art one-class classification models and other anomaly detection methods on both normal data and anomaly accuracy, as well as the F1 score. Also, the generated samples can fully capture the low density area of different types of products.
| rejected-papers | The authors propose a GAN-based anomaly detection method based on simulating anomalies (low density regions of the data space) in order to train an anomaly classifier.
While the paper addresses an interesting take on an important problem, there are many concerns raised by reviewers including novelty, clarity, attribution, reproducibility, the use of exclusively proprietary data, and a multitude of textual mistakes. Overall, the paper shows promise but does not seem to be a mature and polished piece of work. As there has been no rebuttal or update to the paper I have no choice but to concur with the reviewers' initial assessments and reject. | train | [
"H1lbU7e-T7",
"Sye4vhvq37",
"S1xwFTpO3Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents an anomaly detection method called MMOCGAN which is claimed to work well on high-dimensional datasets with limited, multimodal data. The proposed idea is to train a GAN generator to simulate anomalies in the data in order to provide the one-class classifier with more negative examples. Overall I... | [
3,
4,
5
] | [
4,
5,
4
] | [
"iclr_2019_HJl1ujCct7",
"iclr_2019_HJl1ujCct7",
"iclr_2019_HJl1ujCct7"
] |
iclr_2019_HJl2Ns0qKX | Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourages convex latent distributions | We present a neural network architecture based upon the Autoencoder (AE) and Generative Adversarial Network (GAN) that promotes a convex latent distribution by training adversarially on latent space interpolations. By using an AE as both the generator and discriminator of a GAN, we pass a pixel-wise error function across the discriminator, yielding an AE which produces sharp samples that match both high- and low-level features of the original images. Samples generated from interpolations between data in latent space remain within the distribution of real data as trained by the discriminator, and therefore preserve realistic resemblances to the network inputs. | rejected-papers | The idea of the paper -- imposing a GAN type loss on the latent interpolations of an autoencoder -- is interesting. However there are strong concerns from R2 and R3 about limited experimental evaluation of the proposed method which falls short of demonstrating its advantages over latent spaces learned by existing GANs. Another point of concern was the use of only one real dataset (CelebA). Authors made substantial revisions to the paper in addressing many of the reviewers' points but these core concerns still persist with the current draft and it's not ready for publication at ICLR. Authors are encouraged to address these concerns and resubmit to another venue. | train | [
"SJgxU3FO2X",
"Sketw9Uc2X",
"S1ekTvuoam",
"SylWGL8ipm",
"B1xbbI8oaQ",
"HyxDaHIspQ",
"ByeTySIjpX",
"H1go7HIiTX",
"B1xFMBLiTm",
"HylSRfLjTX",
"HygDQVIj6X",
"SkxyGEUo6X",
"HJemNF4-2X",
"BJey_AW82Q",
"B1gUB0YrnX",
"BkxKgktfi7",
"rJgK-QXRtQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public",
"public",
"public"
] | [
"\nUpdate:\n\nI’d like to thank the authors for their thoroughness in responding to the issues I raised. I will echo my fellow reviewers in saying that I would encourage the authors to submit to another venue, given the substantial modifications made to the original submission.\n\nThe updated version provides a cle... | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HJl2Ns0qKX",
"iclr_2019_HJl2Ns0qKX",
"H1go7HIiTX",
"B1xbbI8oaQ",
"HyxDaHIspQ",
"HJemNF4-2X",
"SJgxU3FO2X",
"B1xFMBLiTm",
"ByeTySIjpX",
"Sketw9Uc2X",
"SkxyGEUo6X",
"HylSRfLjTX",
"iclr_2019_HJl2Ns0qKX",
"B1gUB0YrnX",
"iclr_2019_HJl2Ns0qKX",
"rJgK-QXRtQ",
"iclr_2019_HJl2Ns0qK... |
iclr_2019_HJlEUoR9Km | Improved resistance of neural networks to adversarial images through generative pre-training | We train a feed forward neural network with increased robustness against adversarial attacks compared to conventional training approaches. This is achieved using a novel pre-trained building block based on a mean field description of a Boltzmann machine. On the MNIST dataset the method achieves strong adversarial resistance without data augmentation or adversarial training. We show that the increased adversarial resistance is correlated with the generative performance of the underlying Boltzmann machine. | rejected-papers | No reviewer has made a strong case for accepting this paper or championed it so I am recommending rejecting it. The unfavorable reviewers, although they mention real issues, have not highlighted some of the most important barriers to accepting this work.
One major, but not necessarily dispositive, concern is that the paper only presents results on MNIST. However, even if we put aside this concern, there are several issues with the motivation and approach of this paper. If this technique is actually good at improving the model outside the clean image distribution, then the paper should show that and not just L2 worst case perturbations. To quote the intro of the paper: "How can deep learning systems successfully generalise and at the same time be extremely vulnerable to minute changes in the input?" The answer is: they don't generalize and this work does not show us improved generalization. Even a small amount of test error in the data distribution suggests that the closest test error to a given point will often be quite close to the starting point, although this is easier to see with linear models. The best way to fix this work would be to study (average case) error on noisy distributions (as in the concurrent submission https://openreview.net/forum?id=S1xoy3CcYX ). | train | [
"SklveLIAy4",
"HylWAmthJN",
"B1lA6qBKC7",
"BygvqqHKCQ",
"BygEO5SY0m",
"rylr-5StAX",
"Skl1-09T2m",
"Syl0HVschm",
"S1lNOhec27"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We give a quantitative analysis of the effectiveness of our method in Tab. 1 and more details in the histograms in Fig. 5. We give a comparison to the, to our knowledge, strongest defense against adversarial attacks on MNIST [VI]. In the same article, [IV], some more model evaluations can be found. This should be ... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"HylWAmthJN",
"BygEO5SY0m",
"S1lNOhec27",
"Syl0HVschm",
"Skl1-09T2m",
"iclr_2019_HJlEUoR9Km",
"iclr_2019_HJlEUoR9Km",
"iclr_2019_HJlEUoR9Km",
"iclr_2019_HJlEUoR9Km"
] |
iclr_2019_HJlWXhC5Km | Learning to Control Visual Abstractions for Structured Exploration in Deep Reinforcement Learning | Exploration in environments with sparse rewards is a key challenge for reinforcement learning. How do we design agents with generic inductive biases so that they can explore in a consistent manner instead of just using local exploration schemes like epsilon-greedy? We propose an unsupervised reinforcement learning agent which learns a discrete pixel grouping model that preserves spatial geometry of the sensors and implicitly of the environment as well. We use this representation to derive geometric intrinsic reward functions, like centroid coordinates and area, and learn policies to control each one of them with off-policy learning. These policies form a basis set of behaviors (options) which allows us explore in a consistent way and use them in a hierarchical reinforcement learning setup to solve for extrinsically defined rewards. We show that our approach can scale to a variety of domains with competitive performance, including navigation in 3D environments and Atari games with sparse rewards. | rejected-papers | The paper presents an unsupervised visual abstraction model, used for reinforcement learning tasks. It is trained through intrinsic rewards, generated from temporal differences of inputs. This is similar to "learning to control pixels". The method is tested in DM Lab (3D environment, 2D navigation tasks) and Atari (Montezuma's Revenge).
The paper is at times hard to follow, and it seems the improvements accompanying the rebuttals did not convince reviewers to change their scores significantly. The experiments do not contain enough comparisons to other models, baselines, or ablations to sustain the claims.
In its current form, this is not acceptable for publication at ICLR. | train | [
"r1g7AgzD0X",
"rkg-OWpO2Q",
"ByePRcoH07",
"HJgpllc4RX",
"r1x_6y94RX",
"Bkl0LJ9N0X",
"rklGJycVCm",
"rkg7LLJ6h7",
"S1eCQpbq3X"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for this question. The options in our setting maximize or minimize entity attributes, so there isn't a natural goal success criteria (e.g. sometimes there could be obstacles in an entity's path or none at all). In some cases it might be possible to make statements about goal achievement, for instance if the... | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"ByePRcoH07",
"iclr_2019_HJlWXhC5Km",
"r1x_6y94RX",
"rkg-OWpO2Q",
"S1eCQpbq3X",
"rkg7LLJ6h7",
"iclr_2019_HJlWXhC5Km",
"iclr_2019_HJlWXhC5Km",
"iclr_2019_HJlWXhC5Km"
] |
iclr_2019_HJlY0jA5F7 | Improving Sample-based Evaluation for Generative Adversarial Networks | In this paper, we propose an improved quantitative evaluation framework for Generative Adversarial Networks (GANs) on generating domain-specific images, where we improve conventional evaluation methods on two levels: the feature representation and the evaluation metric. Unlike most existing evaluation frameworks which transfer the representation of ImageNet inception model to map images onto the feature space, our framework uses a specialized encoder to acquire fine-grained domain-specific representation. Moreover, for datasets with multiple classes, we propose Class-Aware Frechet Distance (CAFD), which employs a Gaussian mixture model on the feature space to better fit the multi-manifold feature distribution. Experiments and analysis on both the feature level and the image level were conducted to demonstrate improvements of our proposed framework over the recently proposed state-of-the-art FID method. To our best knowledge, we are the first to provide counter examples where FID gives inconsistent results with human judgments. It is shown in the experiments that our framework is able to overcome the shortness of FID and improves robustness. Code will be made available. | rejected-papers | The paper proposes a novel sample based evaluation metric which extends the idea of FID by replacing the latent features of the inception network by those of a data-set specific (V)AE and the FID by the mean FID of the class-conditional distributions. Furthermore, the paper presents interesting examples for which FID fails to match the human judgment while the new metric does not. All reviewers agree, that while these ideas are interesting, they are not convinced about the originality and significance of the contribution and believe that the work could be improved by a deeper analysis and experimental investigation.
| test | [
"BJl1qdsKC7",
"SJgrhvWFhQ",
"ryliKWJBnX",
"BkeEmrOm2Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks much for your constructive comments and suggestions. \n\n- For the necessity of addressing the sample-based evaluation\n\nFID is a widely used metric for evaluating generative models. However, in our experiments we found that it appeared to be inconsistent with human judgements in some cases. Moreover, we f... | [
-1,
5,
5,
3
] | [
-1,
3,
4,
5
] | [
"iclr_2019_HJlY0jA5F7",
"iclr_2019_HJlY0jA5F7",
"iclr_2019_HJlY0jA5F7",
"iclr_2019_HJlY0jA5F7"
] |
iclr_2019_HJlYzhR9tm | Language Modeling with Graph Temporal Convolutional Networks | Recently, there have been some attempts to use non-recurrent neural models for language modeling.
However, a noticeable performance gap still remains.
We propose a non-recurrent neural language model, dubbed graph temporal convolutional network (GTCN), that relies on graph neural network blocks and convolution operations. While the standard recurrent neural network language models encode sentences sequentially without modeling higher-level structural information, our model regards sentences as graphs and processes input words within a message propagation framework, aiming to learn better syntactic information by inferring skip-word connections. Specifically, the graph network blocks operate in parallel and learn the underlying graph structures in sentences without any additional annotation pertaining to structure knowledge. Experiments demonstrate that the model without recurrence can achieve comparable perplexity results in language modeling tasks and successfully learn syntactic information. | rejected-papers | Though the overall direction is interesting, the reviewers are in consensus that the work is not ready for publication (better / larger scale evaluation is needed, comparison with other non-autoregressive architectures should be provided, esp Transformer as there is a close relation between the methods). | train | [
"HJlY7hKORX",
"S1lPc9F_AQ",
"BkgE6dYO0Q",
"Syl2RaaYnm",
"SyemNLcO2X",
"BJxgClFO27"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the insightful comments. CNNs process sentences by gathering larger and larger contexts in each layer, but do not explicitly model the relations among different words. That is why we said they are not easily interpretable and do not explicitly learn the structures of sentences. Our model explicitly lear... | [
-1,
-1,
-1,
4,
4,
4
] | [
-1,
-1,
-1,
5,
3,
5
] | [
"BJxgClFO27",
"SyemNLcO2X",
"Syl2RaaYnm",
"iclr_2019_HJlYzhR9tm",
"iclr_2019_HJlYzhR9tm",
"iclr_2019_HJlYzhR9tm"
] |
iclr_2019_HJldzhA5tQ | Learning powerful policies and better dynamics models by encouraging consistency | Model-based reinforcement learning approaches have the promise of being sample efficient. Much of the progress in learning dynamics models in RL has been made by learning models via supervised learning. There is enough evidence that humans build a model of the environment, not only by observing the environment but also by interacting with the environment. Interaction with the environment allows humans to carry out "experiments": taking actions that help uncover true causal relationships which can be used for building better dynamics models. Analogously, we would expect such interaction to be helpful for a learning agent while learning to model the environment dynamics. In this paper, we build upon this intuition, by using an auxiliary cost function to ensure consistency between what the agent observes (by acting in the real world) and what it imagines (by acting in the ``learned'' world). Our empirical analysis shows that the proposed approach helps to train powerful policies as well as better dynamics models. | rejected-papers | The paper proposes an approach for model-based reinforcement learning that adds a constraint to encourage the predictions from the model to be consistent with the observations from the environment. The reviewers had substantial concerns about the clarity of the initial submission, which has been significantly improved in revisions of the paper. The experiments have also been improved.
Strengths: The method is simple, the performance is competitive with state-of-the-art approaches, and the experiments are thorough including comparisons on seven different environments.
Weaknesses: The main concern of the reviewers is the lack of concrete discussion about how the method compares to prior work. While the paper cites many different prior methods, the paper would be significantly improved by explicitly comparing and contrasting the ideas presented in this paper and those presented in prior work. A secondary weakness is that, while the results appear to be statistically significant, the improvement over prior methods is still relatively small.
I do not think that this paper meets the bar for publication without an improved discussion of how this work is placed among the existing literature and without more convincing results.
As a side note, the authors should consider comparing to the below NeurIPS '18 paper, which significantly exceeds the performance of Nagabandi et al '17: https://arxiv.org/abs/1805.12114 | train | [
"S1xc36r03Q",
"rJeeY-HykN",
"HJxTwbH1J4",
"rJxFPeB114",
"BkePylr1yV",
"SkejX7IipQ",
"rJegedZ-AQ",
"r1x0avWWRQ",
"S1gajw-ZRm",
"HyeCIcqxA7",
"ryxVBatgA7",
"HJgB1w6Ynm",
"B1xXVSKxC7",
"B1lD3tUspX",
"rkeKFYUj6Q",
"BJgXUFLjaQ",
"B1lRzFUiTm",
"B1gtdSIspX",
"S1e3zRripX",
"H1lBnfLi67"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"offici... | [
"---Below is based on the original paper---\nThis paper presents a framework that allows the agent to learn from its observations, but never follows through on the motivation of experimentation---taking actions mainly for the purpose of learning an improved dynamics model. All of their experiments merely take actio... | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2019_HJldzhA5tQ",
"S1xc36r03Q",
"SJlm8XRxpX",
"iclr_2019_HJldzhA5tQ",
"iclr_2019_HJldzhA5tQ",
"S1xc36r03Q",
"HJgB1w6Ynm",
"S1xc36r03Q",
"SJlm8XRxpX",
"S1xc36r03Q",
"B1xXVSKxC7",
"iclr_2019_HJldzhA5tQ",
"B1gtdSIspX",
"HJgB1w6Ynm",
"HJgB1w6Ynm",
"HJgB1w6Ynm",
"HJgB1w6Ynm",
"HJg... |
iclr_2019_HJlfAo09KX | Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy | We study model recovery for data classification, where the training labels are generated from a one-hidden-layer fully-connected neural network with sigmoid activations, and the goal is to recover the weight vectors of the neural network. We prove that under Gaussian inputs, the empirical risk function using cross entropy exhibits strong convexity and smoothness uniformly in a local neighborhood of the ground truth, as soon as the sample complexity is sufficiently large. This implies that if initialized in this neighborhood, which can be achieved via the tensor method, gradient descent converges linearly to a critical point that is provably close to the ground truth without requiring a fresh set of samples at each iteration. To the best of our knowledge, this is the first global convergence guarantee established for the empirical risk minimization using cross entropy via gradient descent for learning one-hidden-layer neural networks, at the near-optimal sample and computational complexity with respect to the network input dimension. | rejected-papers | This paper shows local convergence results for gradient descent on a one-hidden-layer network with Gaussian inputs and sigmoid activations. Later it shows global convergence by using spectral initialization. All the reviewers agree that the results are similar to existing work in the literature with little novelty. There are also some concerns about the correctness of the statements expressed by some reviewers. | train | [
"rygK6cLo3m",
"rklsJtp93m",
"HylHS_0L37"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents theoretical analysis for recovering one-hidden-layer neural networks using logistic loss function. I have the following major concerns:\n\n(1.a) The paper does not mention identifiability at all. As has been known, neural networks with even only one hidden layer are not identifiable. The authors... | [
3,
4,
5
] | [
4,
4,
4
] | [
"iclr_2019_HJlfAo09KX",
"iclr_2019_HJlfAo09KX",
"iclr_2019_HJlfAo09KX"
] |
iclr_2019_HJlmhs05tm | EnGAN: Latent Space MCMC and Maximum Entropy Generators for Energy-based Models | Unsupervised learning is about capturing dependencies between variables and is driven by the contrast between the probable vs improbable configurations of these variables, often either via a generative model which only samples probable ones or with an energy function (unnormalized log-density) which is low for probable ones and high for improbable ones. Here we consider learning both an energy function and an efficient approximate sampling mechanism for the corresponding distribution. Whereas the critic (or discriminator) in generative adversarial networks (GANs) learns to separate data and generator samples, introducing an entropy maximization regularizer on the generator can turn the interpretation of the critic into an energy function, which separates the training distribution from everything else, and thus can be used for tasks like anomaly or novelty detection.
This paper is motivated by the older idea of sampling in latent space rather than data space because running a Monte-Carlo Markov Chain (MCMC) in latent space has been found to be easier and more efficient, and because a GAN-like generator can convert latent space samples to data space samples. For this purpose, we show how a Markov chain can be run in latent space whose samples can be mapped to data space, producing better samples. These samples are also used for the negative phase gradient required to estimate the log-likelihood gradient of the data space energy function. To maximize entropy at the output of the generator, we take advantage of recently introduced neural estimators of mutual information. We find that in addition to producing a useful scoring function for anomaly detection, the resulting approach produces sharp samples (like GANs) while covering the modes well, leading to high Inception and Fréchet scores.
| rejected-papers | The proposed method is an extension of Kim & Bengio (2016)'s energy-based GAN. The novel contributions are to approximate the entropy regularizer using a mutual information estimator, and to try to clean up the model samples using some Langevin steps. Experiments include mode dropping experiments on toy data, samples from the model on CelebA, and measures of inception score and FID.
The paper is well-written, and the proposal seems sensible. But as various reviewers point out, the work is a fairly incremental extension of Kim and Bengio (2016). Most of the new elements, such as Langevin sampling and the gradient penalty, have also been well-explored in the deep generative modeling literature. It's not clear there is a particular contribution here that really stands out.
The experimental evidence for improvement is also fairly limited. Generated samples, inception scores, and FID are pretty weak measures for generative models, though I'm willing to go with them since they seem to be standard in the field. But even by these measures, there doesn't seem to be much improvement. I wouldn't expect SOTA results because of computational limitations, but the generated samples and quantitative evaluations seem worse than the WGAN-GP, even though the proposed method includes the gradient penalty and hence should be able to at least match WGAN-GP. The MCMC sampling doesn't appear to have helped, as far as I can tell.
Overall, the proposal seems promising, but I don't think this paper is ready for publication at ICLR.
| train | [
"SJxxa341JN",
"SJl43Qz60Q",
"rylBNM1p0Q",
"B1e3y6R9CQ",
"ryg6_Ob8CQ",
"Sygm8_bUAX",
"S1g9XO-8Rm",
"SJe1WpEj6m",
"B1xh8rOKpQ",
"Hklnlcmtp7",
"SkxyPcXtTX",
"Hke-_uQKpX",
"BJe6b_7Y6m",
"r1xZNcnCh7",
"rklO0eJC27",
"H1xdBXjg3Q",
"rkeYbXIjtX",
"SyxC8mgZ5Q",
"BJeKYRtx97",
"rJgXAUSCtm"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author"
] | [
"> \"This further obscures the significance of section 4. And not good to state this in the appendix. Moreover, what is the evaluation result for the latent-space MCMC sampling?\"\n\nWe have conducted additional experiments to address the concerns raised by the reviewer.Your feedback has already been very helpful i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
-1,
-1,
-1,
-1
] | [
"SJl43Qz60Q",
"rylBNM1p0Q",
"B1e3y6R9CQ",
"rklO0eJC27",
"H1xdBXjg3Q",
"rklO0eJC27",
"r1xZNcnCh7",
"B1xh8rOKpQ",
"SkxyPcXtTX",
"H1xdBXjg3Q",
"H1xdBXjg3Q",
"rklO0eJC27",
"r1xZNcnCh7",
"iclr_2019_HJlmhs05tm",
"iclr_2019_HJlmhs05tm",
"iclr_2019_HJlmhs05tm",
"iclr_2019_HJlmhs05tm",
"BJe... |
iclr_2019_HJlt7209Km | Theoretical and Empirical Study of Adversarial Examples | Many techniques are developed to defend against adversarial examples at scale. So far, the most successful defenses generate adversarial examples during each training step and add them to the training data. Yet, this brings significant computational overhead. In this paper, we investigate defenses against adversarial attacks. First, we propose feature smoothing, a simple data augmentation method with little computational overhead. Essentially, feature smoothing trains a neural network on virtual training data as an interpolation of features from a pair of samples, with the new label remaining the same as the dominant data point. The intuition behind feature smoothing is to generate virtual data points as close as adversarial examples, and to avoid the computational burden of generating data during training. Our experiments on MNIST and CIFAR10 datasets explore different combinations of known regularization and data augmentation methods and show that feature smoothing with logit squeezing performs best for both adversarial and clean accuracy. Second, we propose an unified framework to understand the connections and differences among different efficient methods by analyzing the biases and variances of decision boundary. We show that under some symmetrical assumptions, label smoothing, logit squeezing, weight decay, mix up and feature smoothing all produce an unbiased estimation of the decision boundary with smaller estimated variance. All of those methods except weight decay are also stable when the assumptions no longer hold. | rejected-papers | The paper proposes a feature smoothing technique as a new and "cheaper" technique for training adversarially robust models.
Pros:
* the paper is generally well written and the claimed results seem quite promising
* the theory contribution are interesting
Cons:
* the main technique is fairly incremental
* there were concerns regarding the comprehensiveness of evaluations and baselines used | train | [
"HylC0e-C2m",
"BJgRZ_Yd3Q",
"B1eWYFtDnQ",
"BJgewee0hQ",
"BkxDE0KTnX",
"BklBpKu6tm",
"r1lVAt8TtQ",
"rJgUHOmTtX",
"Hkl3Eh-TFm",
"Byxwh9zpYQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public",
"public",
"public",
"public",
"public"
] | [
"In this paper the authors introduce a novel method to defend against adversarial attacks that they call feature smoothing. The authors then discuss feature smoothing and related “cheap” data augmentation-based defenses against adversarial attacks in a nice general discussion. Next, the authors present empirical da... | [
5,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_HJlt7209Km",
"iclr_2019_HJlt7209Km",
"iclr_2019_HJlt7209Km",
"BkxDE0KTnX",
"iclr_2019_HJlt7209Km",
"r1lVAt8TtQ",
"rJgUHOmTtX",
"Byxwh9zpYQ",
"iclr_2019_HJlt7209Km",
"Hkl3Eh-TFm"
] |
iclr_2019_HJx38iC5KX | Domain Generalization via Invariant Representation under Domain-Class Dependency | Learning domain-invariant representation is a dominant approach for domain generalization, where we need to build a classifier that is robust toward domain shifts induced by change of users, acoustic or lighting conditions, etc. However, prior domain-invariance-based methods overlooked the underlying dependency of classes (target variable) on source domains during optimization, which causes the trade-off between classification accuracy and domain-invariance, and often interferes with the domain generalization performance. This study first provides the notion of domain generalization under domain-class dependency and elaborates on the importance of considering the dependency by expanding the analysis of Xie et al. (2017). We then propose a method, invariant feature learning under optimal classifier constraints (IFLOC), which explicitly considers the dependency and maintains accuracy while improving domain-invariance. Specifically, the proposed method regularizes the representation so that it has as much domain information as the class labels, unlike prior methods that remove all domain information. Empirical validations show the superior performance of IFLOC to baseline methods, supporting the importance of the domain-class dependency in domain generalization and the efficacy of the proposed method for overcoming the issue. | rejected-papers | This paper proposes a new solution to the problem of domain generalization where the label distribution may differ across domains. The authors argue that prior work which ignores this observation suffers from an accuracy-vs-invariance trade-off while their work does not.
The main contribution of the work is to 1) consider the case of different label distributions across domains and 2) to propose a regularizer extension to Xie 2017 to handle this.
There was disagreement between the reviewers on whether or not this contribution is significant enough to warrant publication. Two reviewers expressed concern of whether 1) naturally occurring data sources suffer substantially from this label distribution mismatch and 2) whether label distribution mismatch in practice results in significant performance loss for existing domain generalization techniques. Based on the experiments and discussions available now the answer to the above two points remains unclear. These key questions should be clarified and further justified before publication. | train | [
"rkglUrfjCm",
"HyxnMJwKT7",
"ryg7wJvK67",
"SJg4xkvKaQ",
"Hkei6C8FpX",
"HkxHep8Y67",
"ByeXo2UY6m",
"H1lur28F6m",
"SygAAIO3n7",
"SkeHjnHch7",
"SylAH62FhQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I wanted to thank the authors for the discussions about VAE and CrossGrad. It would be better to include the same discussions for future work in the manuscript. I have also read the other two reviewers' comments along with the authors' responses. My rating remains positive.",
"\n### Reply to “it is desirable to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"HkxHep8Y67",
"SJg4xkvKaQ",
"HyxnMJwKT7",
"Hkei6C8FpX",
"SylAH62FhQ",
"SkeHjnHch7",
"H1lur28F6m",
"SygAAIO3n7",
"iclr_2019_HJx38iC5KX",
"iclr_2019_HJx38iC5KX",
"iclr_2019_HJx38iC5KX"
] |
iclr_2019_HJx4KjRqYQ | Ergodic Measure Preserving Flows | Training probabilistic models with neural network components is intractable in most cases and requires the use of approximations such as Markov chain Monte Carlo (MCMC), which is not scalable and requires significant hyper-parameter tuning, or mean-field variational inference (VI), which is biased. While there have been attempts at combining both approaches, the resulting methods have some important limitations in theory and in practice. As an alternative, we propose a novel method which is scalable, like mean-field VI, and, due to its theoretical foundation in ergodic theory, is also asymptotically accurate, like MCMC. We test our method on popular benchmark problems with deep generative models and Bayesian neural networks. Our results show that we can outperform existing approximate inference methods. | rejected-papers | This paper proposes a simple method for tuning the parameters of HMC by maximizing the log density under the final sample of the MCMC, and applies it to training a VAE. The reviews and discussion raise some critical concerns and questions which, unfortunately, are not adequately addressed. | train | [
"BJl8M8MApQ",
"HJg6p8EApX",
"SkeLDkmKCX",
"rJg_ZaIYCX",
"H1lXQXNFCX",
"BygzGtbt0Q",
"SkeaTWNR6X",
"ryxu5rERam",
"r1lFKbVCaX",
"SklSPtNATQ",
"HJeijVAJ67",
"Skxf6fdt2Q",
"ByggukyLhQ",
"BJe2k9yPhX",
"BJxb-WkPhm",
"B1e6_h-Un7",
"HklKJS-UhX",
"SyeA3Y-L27",
"H1e3Pug8h7",
"r1g4I4TrhQ"... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
... | [
"First, we would like to thank the reviewer's effort.\n\"Here’s a concrete example of how I could imagine this procedure going wrong: make q(z0) a delta at the latent vector z* that maximizes the log-joint, and set the step size of the Hamiltonian simulation to 0. This will make the entropy term (which is ignored) ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HJeijVAJ67",
"ryxu5rERam",
"BygzGtbt0Q",
"H1lXQXNFCX",
"SklSPtNATQ",
"SkeaTWNR6X",
"r1lFKbVCaX",
"ByggukyLhQ",
"Skxf6fdt2Q",
"ryxu5rERam",
"iclr_2019_HJx4KjRqYQ",
"iclr_2019_HJx4KjRqYQ",
"iclr_2019_HJx4KjRqYQ",
"BJxb-WkPhm",
"B1e6_h-Un7",
"SyeA3Y-L27",
"H1e3Pug8h7",
"HklKJS-UhX",
... |
iclr_2019_HJx7l309Fm | Actor-Attention-Critic for Multi-Agent Reinforcement Learning | Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings. We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep. This attention mechanism enables more effective and scalable learning in complex multi-agent environments, when compared to recent approaches. Our approach is applicable not only to cooperative settings with shared rewards, but also individualized reward settings, including adversarial settings, and it makes no assumptions about the action spaces of the agents. As such, it is flexible enough to be applied to most multi-agent learning problems | rejected-papers | The authors propose an approach for a learnt attention mechanism to be used for selecting agents in a multi agent RL setting. The attention mechanism is learnt by a central critic, and it scales linearly with the number of agents rather than quadratically. There is some novelty in the proposed method, and the authors clearly explain and motivate the approach. However the empirical evaluation feels quite limited and does not show conclusively that the method is superior to the others. Moreover, the simple empirical results don't give any evidence how the attention mechanism is working or whether it is truly the attention that is affecting the results. The reviewers were split on their recommendation and did not come to a consensus. The AC feels that the paper is not quite strong enough and encourages the authors to broaden the work with additional experiments and analysis. | val | [
"rkl9CLxv27",
"S1gqkJrKTX",
"SkxC3RNF67",
"SygCO0Nt6X",
"S1xzqlrChm",
"B1xTnUUMhm",
"HkeukF12cQ",
"HkgCyD3c9Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This paper introduces a new method for multi-agent reinforcement learning. The proposed algorithm -- which uses shared critics at training time but individual policies at test time -- makes use of a specialised attention mechanism. The benefits include better scalability (as the dependency of the inputs is linear ... | [
7,
-1,
-1,
-1,
6,
4,
-1,
-1
] | [
3,
-1,
-1,
-1,
3,
4,
-1,
-1
] | [
"iclr_2019_HJx7l309Fm",
"B1xTnUUMhm",
"rkl9CLxv27",
"S1xzqlrChm",
"iclr_2019_HJx7l309Fm",
"iclr_2019_HJx7l309Fm",
"HkgCyD3c9Q",
"iclr_2019_HJx7l309Fm"
] |
iclr_2019_HJxFrs09YQ | GENERALIZED ADAPTIVE MOMENT ESTIMATION | Adaptive gradient methods have experienced great success in training deep neural networks (DNNs). The basic idea of the methods is to track and properly make use of the first and/or second moments of the gradient for model-parameter updates over iterations for the purpose of removing the need for manual interference. In this work, we propose a new adaptive gradient method, referred to as generalized adaptive moment estimation (Game). From a high level perspective, the new method introduces two more parameters w.r.t. AMSGrad (S. J. Reddi & Kumar (2018)) and one more parameter w.r.t. PAdam (Chen & Gu (2018)) to enlarge the parameter-selection space for performance enhancement while reducing the memory cost per iteration compared to AMSGrad and PAdam. The saved memory space amounts to the number of model parameters, which is significant for large-scale DNNs. Our motivation for introducing additional parameters in Game is to provide algorithmic flexibility to facilitate a reduction of the performance gap between training and validation datasets when training a DNN. Convergence analysis is provided for applying Game to solve both convex optimization and smooth nonconvex optimization. Empirical studies for training four convolutional neural networks over MNIST and CIFAR10 show that under proper parameter selection, Game produces promising validation performance as compared to AMSGrad and PAdam. | rejected-papers | The reviewers find the paper difficult to read. Reviewers also had concerns regarding the correctness of various claims in the paper. The paper was also found lacking in experimental analysis, as it only tested on relatively small datasets, and only on a CNN architecture. Overall, the paper appears to be lacking in quality and clarity, and questionable in correctness and originality. | train | [
"HkxzFqFMJE",
"Hyl3vWjjRm",
"rkgyF-wjRQ",
"H1x2XrtWpQ",
"HklBuIvoRm",
"H1lmMCZ92Q",
"ByeOrl6FhX",
"SyxOce_LhQ",
"B1g079Mhhm"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"We have finished refining our paper based on all the comments of the reviewers and the author of PAdam. \n\n* Firstly, we notice that the new version of Game includes AMSGrad and PAdam as special cases by setting (p,q) = (0.5, 2) and q = 2, respectively. The introduction of the additional parameter q in Game is fo... | [
-1,
-1,
-1,
-1,
-1,
3,
4,
7,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
-1
] | [
"iclr_2019_HJxFrs09YQ",
"H1lmMCZ92Q",
"ByeOrl6FhX",
"B1g079Mhhm",
"H1x2XrtWpQ",
"iclr_2019_HJxFrs09YQ",
"iclr_2019_HJxFrs09YQ",
"iclr_2019_HJxFrs09YQ",
"iclr_2019_HJxFrs09YQ"
] |
iclr_2019_HJxXynC9t7 | Expressiveness in Deep Reinforcement Learning | Representation learning in reinforcement learning (RL) algorithms focuses on extracting useful features for choosing good actions. Expressive representations are essential for learning well-performed policies. In this paper, we study the relationship between the state representation assigned by the state extractor and the performance of the RL agent. We observe that representations assigned by the better state extractor are more scattered than which assigned by the worse one. Moreover, RL agents achieving high performances always have high rank matrices which are composed by their representations. Based on our observations, we formally define expressiveness of the state extractor as the rank of the matrix composed by representations. Therefore, we propose to promote expressiveness so as to improve algorithm performances, and we call it Expressiveness Promoted DRL. We apply our method on both policy gradient and value-based algorithms, and experimental results on 55 Atari games show the superiority of our proposed method. | rejected-papers | The authors propose to define 'Expressiveness' in deep RL by the rank of a matrix comprising a number of feature vectors from propagating observations through the learnt representation, and show a correlation between higher rank and higher performance. They try 3 regularizers to increase rank and show that they improve the final score on Atari games compared to A3C or DQN. The AC and reviewers agree that the paper is interesting and novel and could have general significance for the RL field. Also, the authors were very responsive to the reviewers and added more details, plus several experiments and analyses to support their claims. However, the reviewers were concerned about a number of aspects and have recommended that the authors clean up their presentation and analysis a bit more. In particular, the fact that the regularization coefficient is tuned for each Atari game makes it very hard to compare to DQN/A3C which are very careful to keep the same hyperparameters across every game. | train | [
"B1ekBnl3yV",
"ryg53mQiyE",
"B1lTlCcvJV",
"ByggbPawR7",
"r1eT38av07",
"H1eKr8pP0Q",
"HkedDP6D07",
"HyeaeHQihX",
"rJlPHDfq2Q",
"Bylzc0V_n7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for addressing my questions. The revised paper seems to provide a bit more evidence about the relationship between expressiveness and performance in RL. One thing that I realized is that the authors used different hyperparameters (rank regularization term, alpha) for each Atari game, which makes the propose... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"H1eKr8pP0Q",
"r1eT38av07",
"ByggbPawR7",
"Bylzc0V_n7",
"rJlPHDfq2Q",
"HyeaeHQihX",
"iclr_2019_HJxXynC9t7",
"iclr_2019_HJxXynC9t7",
"iclr_2019_HJxXynC9t7",
"iclr_2019_HJxXynC9t7"
] |
iclr_2019_HJxYwiC5tm | Why do deep convolutional networks generalize so poorly to small image transformations? | Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations. Furthermore, we see these failures to generalize more frequently in more modern networks. We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed. We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations. Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans. | rejected-papers | This paper attempts to answer its suggestive title by arguing that this generic lack of invariance in large CNN architectures is due to aliasing introduced during the downsampling stages.
This paper received mixed reviews. Positive aspects include the clarity and exhaustive empirical setups, whereas negative aspects focused on the lack of substance behind some of the claims. Ultimately, the AC took these considerations into account and made his/her own assessment, summarized here.
The main claim of this paper implies the following: modern CNNs are unable to build invariance to small shifts, but somehow are able to learn far more complex invariances involving lighting, pose, texture, etc. This must be empirically verified beyond reasonable doubt, and the AC thinks that the current experimental setup does not achieve this threshold. As mentioned by reviewers and by public comments, the preprocessing pipeline is a key factor that may be confounding the analysis, and this should be better analysed. For example, as mentioned in the reviews below, the shift in the image can be either done by inpainting, cropping, or using a fixed background. The authors claim that there are no qualitative differences between those preprocessing choices, but by inspecting Figures 2B and 8C, the AC notices a severe change in 'jaggedness'; in other words, the choice of preprocessing *does* affect the quantitative measures of (un)stability, even though the qualitative assessment (unstable in all setups) is the same. In particular, using non-centered crops should be the default setup, since it requires no preprocessing. It is confusing that it appears in the appendix instead of the inpainting version of figure 2b. This is important, since it implies that the analysis is mixing two perturbations: the actual action of the translation group and the choice of preprocessing, and that the latter is by no means negligible. I would suggest the authors to perform the following experiment to disentangle the effect of translation by the effect of preprocessing. Since the translation forms a group, for any shift applied to the image, one can 'undo' it by applying the inverse shift. Say one applies a shift to image x of d pixels and obtains x'=T(x,+d) as a result (by using whatever border handling procedure). If border effects were negligible, then x''=T(x',-d) should give us back x, so a good measure of how unstable the network is is to measure the difference in prediction between x,x' and x''. If predicting x'' is as unstable as predicting x', it follows that the network is actually unstable to the border effect introduced by T.
Given this, the AC recommends rejection at this time, and encourages the authors to resubmit their work after addressing the above point. | test | [
"Sye2D_EhaX",
"B1glF1VkJV",
"H1epCO9zAX",
"BJlhBuqGRQ",
"HJlraD5MAQ",
"SJxCFsEbA7",
"HJgzBmsyAm",
"B1lUFfAjpQ",
"B1lXL9e32Q",
"H1g7rqLdnm",
"BJeBZeoN27"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comment. Indeed we had a correspondence about this issue following a posting of a previous version of our paper on ArXiv. \n\nThe reviewer is referring to one particular result shown at the bottom of figure 1. Contrary to the title of the reviewer’s comment, he/she acknowledged in our correspond... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"B1lUFfAjpQ",
"HJlraD5MAQ",
"BJeBZeoN27",
"H1g7rqLdnm",
"B1lXL9e32Q",
"HJgzBmsyAm",
"Sye2D_EhaX",
"iclr_2019_HJxYwiC5tm",
"iclr_2019_HJxYwiC5tm",
"iclr_2019_HJxYwiC5tm",
"iclr_2019_HJxYwiC5tm"
] |
iclr_2019_HJxdAoCcYX | Characterizing Malicious Edges targeting on Graph Neural Networks | Deep neural networks on graph-structured data have shown increasing success in various applications. However, due to recent studies about vulnerabilities of machine learning models, researchers are encouraged to explore the robustness of graph neural networks (GNNs). So far there are two works that attack GNNs by adding/deleting edges to fool graph-based classification tasks. Such attacks are challenging to detect since the manipulation is very subtle compared with traditional graph attacks. In this paper we propose the first detection mechanism against these two proposed attacks. Given a perturbed graph, we propose a novel graph generation method together with link prediction as preprocessing to detect potential malicious edges. We also propose novel features which can be leveraged to perform outlier detection when the number of added malicious edges is large. Different detection components are proposed and tested, and we also evaluate the performance of the final detection pipeline. Extensive experiments are conducted to show that the proposed detection mechanism can achieve AUC above 90% against the two attack strategies on both the Cora and Citeseer datasets. We also provide an in-depth analysis of different attack strategies and the corresponding suitable detection methods. Our results shed light on several principles for detecting different types of attacks. | rejected-papers | All reviewers recommended rejecting this submission so I will as well. However, I do not believe it is fundamentally misguided or anything of that nature.
Unfortunately, reviewers did not participate as much in discussions with the authors as I believe they should have. However, this paper concerns a relatively niche problem of modest interest to the ICLR community. I believe a stronger version of this work would be a more application-focused paper that delved into practical details about a specific case study where this work provides a clear benefit. | train | [
"rJgbbWRtC7",
"rye7sxAKC7",
"B1ee9e0KAQ",
"SklFh1AtAQ",
"BkloN1RFAm",
"B1luW-jf6X",
"rygr5DrMpm",
"HJg5k3ZMTQ"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the valuable comments. Here are our responses to the concerns:\n\nQ1:[Unclear relationship with GCN attacks] To what degree do these methods address the particulars of GCN attacks? This could possibly be addressed by better recapping the GCN attacks and explaining how these methods direc... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"HJg5k3ZMTQ",
"B1ee9e0KAQ",
"rygr5DrMpm",
"B1luW-jf6X",
"iclr_2019_HJxdAoCcYX",
"iclr_2019_HJxdAoCcYX",
"iclr_2019_HJxdAoCcYX",
"iclr_2019_HJxdAoCcYX"
] |
iclr_2019_HJxfm2CqKm | Discovering General-Purpose Active Learning Strategies | We propose a general-purpose approach to discovering active learning (AL) strategies from data. These strategies are transferable from one domain to another and can be used in conjunction with many machine learning models. To this end, we formalize the annotation process as a Markov decision process, design universal state and action spaces, and introduce a new reward function that precisely reflects the AL objective of minimizing the annotation cost. We seek to find an optimal (non-myopic) AL strategy using reinforcement learning. We evaluate the learned strategies on multiple unrelated domains and show that they consistently outperform state-of-the-art baselines. | rejected-papers | This paper provides further insight into using RL for active learning, particularly by formulating AL as an MDP and then using RL methods for that MDP. Though the paper has a few insights, it does not sufficiently place itself amongst the many other similar strategies using an MDP formulation. I recommend better highlighting what is novel in this work (e.g., more focus on the reward function, if that is key). Additionally, avoid general statements like “To this end, we formalize the annotation process as a Markov decision process”, which suggests that this is part of the contribution even though, as highlighted by reviewers, it has been a standard approach. | train | [
"HyecsNFYJV",
"BylT2JLYA7",
"S1l6yTrtAX",
"BkxIFhSKRX",
"HkeE1fkTpQ",
"rJxMVWxQTQ",
"SJgx1F79hX",
"SJxio9y9hX"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks to the authors for their clarifications.\n\nIn general my feeling is still \"not quite good enough\" on novelty grounds.\n\nGiven the similarities in the current approach and various recent papers, the analysis and evaluation should be top notch to quality for a top tier ICLR publication. \n\nVarious issues... | [
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"SJgx1F79hX",
"S1l6yTrtAX",
"BkxIFhSKRX",
"iclr_2019_HJxfm2CqKm",
"iclr_2019_HJxfm2CqKm",
"iclr_2019_HJxfm2CqKm",
"iclr_2019_HJxfm2CqKm",
"iclr_2019_HJxfm2CqKm"
] |
iclr_2019_HJxpDiC5tX | Large-Scale Visual Speech Recognition | This work presents a scalable solution to continuous visual speech recognition. To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video). In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a production-level speech decoder that outputs sequences of words. The proposed system achieves a word error rate (WER) of 40.9% as measured on a held-out set. In comparison, professional lipreaders achieve either 86.4% or 92.9% WER on the same dataset when having access to additional types of contextual information. Our approach significantly improves on previous lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell (WAS), which are only capable of 89.8% and 76.8% WER respectively. | rejected-papers | This paper describes the development of a large-scale continuous visual speech recognition (lipreading) system, including an audiovisual processing pipeline that is used to extract stabilized videos of lips and corresponding phone sequences from YouTube videos, a deep network architecture trained with CTC loss that maps video sequences to sequences of distributions over phones, and an FST-based decoder that produces word sequences from the phone score sequences. A performance evaluation shows that the proposed system outperforms other models described in the literature, as well as professional lipreaders. A number of ablation experiments compare the performance of the proposed architecture to the previously proposed LipNet and "Watch, Attend, and Spell" architectures, explore the performance differences caused by using phone- or character-based CTC models, and some variations on the proposed architecture. This paper was extremely controversial and received a robust discussion between the authors and reviewers, with the primary point of contention being the suitability of the paper for ICLR. All reviewers agree that the quality of the work in the paper is excellent and that the reported results are impressive, but there was strong disagreement on whether or not this was sufficient for an ICLR paper. One reviewer thought so, while the other two reviewers argued that this is insufficient, and that to appear in ICLR the paper either (1) should have focused more on the preparation of the dataset, included public release of the data so other researchers could build on the work, and put forth the V2P model as a (very) strong baseline for the task; or (2) done a more in-depth exploration of the representation learning aspects of the work by comparing phoneme and viseme units and providing more (admittedly costly) ablation experiments to shed more light on what aspects of the V2P architecture lead to the reported improvements in performance. The AC finds the arguments of the two negative reviewers to be persuasive. 
It is quite clear at this point that many supervised classification tasks (even structured classification tasks like lipreading) can be effectively tackled by a combination of a sufficiently flexible learning architecture and collection of a massive, annotated dataset, and the modeling techniques used in this paper are not new, per se, even if their application to lipreading is. Moreover, if the dataset is not publicly available, it is impossible for anyone else to build on this work. The paper, as it currently stands, would be appropriate in a more applications-oriented venue. | train | [
"S1eGOIsj0Q",
"ryl0WeoiRQ",
"SJlCpxcjAQ",
"Hkg6z_OFAX",
"Hygtu_HMCQ",
"B1eY8B3_3X",
"S1xa2n4j6X",
"SygJnn4oaX",
"BJx0PnNoa7",
"HkeVIhvVa7",
"ByewPrEEpX",
"SJeECW-Mp7",
"B1lIVJ5Z6X",
"HJltpiAgTQ",
"Bkx42sCgpm",
"B1xeN85yam",
"r1gTwoOJ67",
"B1lEYoOyT7",
"ryldzq_kTQ",
"rylyGAF53m"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"public",
"author",
"public",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thanks to the authors for the revision and detailed response to all the concerns/comments raised by the reviewers. As I said, I enjoyed reading this paper which I think is a really good piece of work with a great impact to the community. Personally, I think it is a fit to this conference and deserves to get in. ... | [
-1,
-1,
-1,
-1,
-1,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
5
] | [
"Hkg6z_OFAX",
"Hkg6z_OFAX",
"Hkg6z_OFAX",
"iclr_2019_HJxpDiC5tX",
"HJltpiAgTQ",
"iclr_2019_HJxpDiC5tX",
"ByewPrEEpX",
"ByewPrEEpX",
"HkeVIhvVa7",
"iclr_2019_HJxpDiC5tX",
"SJeECW-Mp7",
"B1lIVJ5Z6X",
"iclr_2019_HJxpDiC5tX",
"B1xeN85yam",
"B1xeN85yam",
"iclr_2019_HJxpDiC5tX",
"rylyGAF53... |
iclr_2019_HJxqMhC5YQ | End-to-End Multi-Lingual Multi-Speaker Speech Recognition | The expressive power of end-to-end automatic speech recognition (ASR) systems enables direct estimation of the character or word label sequence from a sequence of acoustic features. Direct optimization of the whole system is advantageous because it not only eliminates the internal linkage necessary for hybrid systems, but also extends the scope of potential application use cases by training the model for multiple objectives. Several multi-lingual ASR systems were recently proposed based on a monolithic neural network architecture without language-dependent modules, showing that modeling of multiple languages is well within the capabilities of an end-to-end framework. There has also been growing interest in multi-speaker speech recognition, which enables generation of multiple label sequences from single-channel mixed speech. In particular, a multi-speaker end-to-end ASR system that can directly model one-to-many mappings without additional auxiliary clues was recently proposed. In this paper, we propose an all-in-one end-to-end multi-lingual multi-speaker ASR system that integrates the capabilities of these two systems. The proposed model is evaluated using mixtures of two speakers generated by using 10 languages, including mixed-language utterances. | rejected-papers | The authors present a system for end-to-end multi-lingual and multi-speaker speech recognition. The presented method is based on multiple prior works that propose end-to-end models for multi-lingual ASR and multi-speaker ASR; the work combines these techniques and shows that a single system can do both with minimal changes.
The main critique from the reviewers is that the paper lacks novelty. It builds heavily on existing work and does not make enough of a contribution to be accepted at ICLR. Furthermore, training and evaluation are performed entirely on simulated test sets that are not very realistic. So it is unclear how well the techniques would generalize to real use-cases. For these reasons, the recommendation is to reject the paper. | train | [
"HkeUS1ulpm",
"H1gjdIl93m",
"rklQVnJK3X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents an end-to-end system that can recognize single-channel multiple-speaker speech with multiple languages.\n\nPros:\n- The paper is well written.\n- It shows the existing end-to-end multi-lingual ASR (Seki et al., 2018b) and end-to-end multi-speaker ASR (Seki et al., 2018a) techniques can be combi... | [
3,
3,
3
] | [
4,
5,
4
] | [
"iclr_2019_HJxqMhC5YQ",
"iclr_2019_HJxqMhC5YQ",
"iclr_2019_HJxqMhC5YQ"
] |
iclr_2019_HJxwAo09KQ | Learned optimizers that outperform on wall-clock and validation loss | Deep learning has shown that learned functions can dramatically outperform hand-designed functions on perceptual tasks. Analogously, this suggests that learned update functions may similarly outperform current hand-designed optimizers, especially for specific tasks. However, learned optimizers are notoriously difficult to train and have yet to demonstrate wall-clock speedups over hand-designed optimizers, and thus are rarely used in practice. Typically, learned optimizers are trained by truncated backpropagation through an unrolled optimization process. The resulting gradients are either strongly biased (for short truncations) or have exploding norm (for long truncations). In this work we propose a training scheme which overcomes both of these difficulties, by dynamically weighting two unbiased gradient estimators for a variational loss on optimizer performance. This allows us to train neural networks to perform optimization faster than well-tuned first-order methods. Moreover, by training the optimizer against validation loss, as opposed to training loss, we are able to use it to train models which generalize better than those trained by first-order methods. We demonstrate these results on problems where our learned optimizer trains convolutional networks in a fifth of the wall-clock time compared to tuned first-order methods, and with an improvement | rejected-papers | The paper conveys an interesting idea but needs more work in terms of a fair empirical study and also improvement of the writing. The AC based her summary only on the technical argumentation presented by reviewers and authors. | train | [
"ryeUzvVKkN",
"B1gnWF_FkV",
"B1ljRQC4yN",
"rkl4p-n6sm",
"S1eAW5pfCm",
"BJgDdU6GAm",
"S1x__R5z0m",
"r1gjUikY6Q",
"r1ewyjyFpX",
"SJlea51tTQ",
"Byetapx9nQ",
"Byl8hoXzom"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I just looked into the version of this work accepted at the NeurIPS workshop on MetaLearning (http://metalearning.ml/2018/papers/metalearn2018_paper38.pdf -- warning to other reviewers: clicking this link will reveal the authors' identity), and I am disappointed to see that the issues with the experiments are not ... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"SJlea51tTQ",
"ryeUzvVKkN",
"SJlea51tTQ",
"iclr_2019_HJxwAo09KQ",
"r1ewyjyFpX",
"r1gjUikY6Q",
"r1ewyjyFpX",
"Byl8hoXzom",
"rkl4p-n6sm",
"Byetapx9nQ",
"iclr_2019_HJxwAo09KQ",
"iclr_2019_HJxwAo09KQ"
] |
iclr_2019_HJzLdjR9FX | DeepTwist: Learning Model Compression via Occasional Weight Distortion | Model compression has been introduced to reduce the required hardware resources while maintaining the model accuracy. Lots of techniques for model compression, such as pruning, quantization, and low-rank approximation, have been suggested along with different inference implementation characteristics. Adopting model compression is, however, still challenging because the design complexity of model compression is rapidly increasing due to additional hyper-parameters and computation overhead in order to achieve a high compression ratio. In this paper, we propose a simple and efficient model compression framework called DeepTwist which distorts weights in an occasional manner without modifying the underlying training algorithms. The ideas of designing weight distortion functions are intuitive and straightforward given formats of compressed weights. We show that our proposed framework improves compression rate significantly for pruning, quantization, and low-rank approximation techniques while the efforts of additional retraining and/or hyper-parameter search are highly reduced. Regularization effects of DeepTwist are also reported. | rejected-papers | The authors propose a framework for compressing neural network models which involves applying a weight distortion function periodically as part of training. The proposed approach is relatively simple to implement, and is shown to work for weight pruning, low-rank compression and quantization, without sacrificing accuracy.
However, the reviewers had a number of concerns about the work. Broadly, the reviewers felt that the work was incremental. Further, if the proposed techniques are important to get the approach to work well in practice, then the paper would be significantly strengthened by further analyses. Finally, the reviewers noted that the paper does not consider whether the specific weight pruning strategies result in a reduction of computational resources beyond potential storage savings, which would be important if this method is to be used in practice.
Overall, the AC tends to agree with the reviewers' criticisms. The authors are encouraged to address some of these issues in future revisions of the work.
| train | [
"rklYzwH3yV",
"rkg4iIN2yV",
"BklKbsaskV",
"HyxWKEhskE",
"rJxN1yBik4",
"ryxfxjEjkE",
"H1xKRwGskE",
"rJgUwKHDhX",
"HJlozZE9pX",
"SJgFV7N9a7",
"r1leNn2YpQ",
"B1emOH0BaX",
"SkgGUdiV6Q",
"H1l7nBc4pQ",
"BJedNc5VaQ",
"rygI3jtN6X",
"HJe2kdKNTm",
"B1gCk1tNpm",
"HJe0Kq69h7",
"HJlhGPM9hm"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
... | [
"Thank you for the reply.\nWe believe that this paper can be a motivation for the model compression community to rethink model compression.\n\nRecently, increasing number of model compression papers involve not only 'hard' compression for every mini-batch but also more hyper-parameters and ask modifications to the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"rkg4iIN2yV",
"H1l7nBc4pQ",
"HyxWKEhskE",
"HJlozZE9pX",
"ryxfxjEjkE",
"H1xKRwGskE",
"HJe2kdKNTm",
"iclr_2019_HJzLdjR9FX",
"iclr_2019_HJzLdjR9FX",
"r1leNn2YpQ",
"B1emOH0BaX",
"SkgGUdiV6Q",
"BJedNc5VaQ",
"HJe0Kq69h7",
"rygI3jtN6X",
"B1gCk1tNpm",
"HJlhGPM9hm",
"rJgUwKHDhX",
"iclr_20... |
iclr_2019_Hk41X2AqtQ | Hierarchically-Structured Variational Autoencoders for Long Text Generation | Variational autoencoders (VAEs) have received much attention recently as an end-to-end architecture for text generation. Existing methods primarily focus on synthesizing relatively short sentences (with less than twenty words). In this paper, we propose a novel framework, hierarchically-structured variational autoencoder (hier-VAE), for generating long and coherent units of text. To enhance the model’s plan-ahead ability, intermediate sentence representations are introduced into the generative networks to guide the word-level predictions. To alleviate the typical optimization challenges associated with textual VAEs, we further employ a hierarchy of stochastic layers between the encoder and decoder networks. Extensive experiments are conducted to evaluate the proposed method, where hier-VAE is shown to make effective use of the latent codes and achieve lower perplexity relative to language models. Moreover, the generated samples from hier-VAE also exhibit superior quality according to both automatic and human evaluations. | rejected-papers | Strengths: Interesting work on using latent variables for generating long text sequences.
The paper shows convincing results on perplexity, N-gram-based metrics, and human qualitative evaluation.
Weaknesses: More extensive comparisons with hierarchical VAEs and the approach in Serban et al. in terms of language generation quality and perplexity would have been helpful. Another point of reference for which additional comparisons were desired was: "A Hierarchical Latent Structure for Variational Conversation Modeling" by Park et al. Some additional substantive experiments were added during the discussion period.
Contention: Authors differentiated their work from Park et al. and the reviewer bringing this work up ended up upgrading their score to a 7. The other reviewers kept their scores at 5.
Consensus: The positive reviewer raised their score to a 7 through the author rebuttal and discussion period. One negative reviewer was not responsive, but the other reviewer giving a 5 asserts that they maintain their position. The AC recommends rejection. Situating this work with respect to other prior work and properly comparing with it seems to be the contentious issue. Authors are encouraged to revise and re-submit elsewhere. | train | [
"SJxiPwaapm",
"HJlvgTYkpm",
"H1lZieB2n7",
"HkeE8jS_RX",
"HJxrbUCURX",
"HyehKP0uhX",
"B1goPoCSCX",
"rJxbKjRrCm",
"BJxL95kZAQ",
"BJgPXD66am",
"SklM0La66m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author"
] | [
"We would like to thank the reviewer for the detailed comments and suggestions for the manuscript.\n\n> The paper is well written though some parts are confusing. For example, equation 4 refers to q as the prior distribution but this seems like it's the posterior distribution as it is described just below equation ... | [
-1,
5,
5,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1
] | [
"H1lZieB2n7",
"iclr_2019_Hk41X2AqtQ",
"iclr_2019_Hk41X2AqtQ",
"HJxrbUCURX",
"rJxbKjRrCm",
"iclr_2019_Hk41X2AqtQ",
"BJxL95kZAQ",
"B1goPoCSCX",
"BJgPXD66am",
"SklM0La66m",
"HyehKP0uhX"
] |
iclr_2019_HkElFj0qYQ | PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning | Deep neural networks have demonstrated cutting-edge performance on various tasks including classification. However, it is well known that adversarially designed imperceptible perturbation of the input can mislead advanced classifiers. In this paper, Permutation Phase Defense (PPD) is proposed as a novel method to resist adversarial attacks. PPD combines random permutation of the image with the phase component of its Fourier transform. The basic idea behind this approach is to turn adversarial defense problems analogously into symmetric cryptography, which relies solely on safekeeping of the keys for security. In PPD, safekeeping of the selected permutation ensures effectiveness against adversarial attacks. Testing PPD on MNIST and CIFAR-10 datasets yielded state-of-the-art robustness against the most powerful adversarial attacks currently available. | rejected-papers | This paper presents a new defense against adversarial examples using random permutations and a Fourier transform. The technique is clearly novel, and the paper is clearly written.
However, as the reviewers and commenters pointed out, there is a significant degradation in natural accuracy, which does not seem to be easily recoverable. This degradation is due to the random permutation of the images, which effectively disallows the use of convolutions.
Furthermore, Reviewer 1 points out that the baselines are insufficient, as the authors do not explore (a) learning the transformation, or (b) using expectation over transformation to attack the model.
This concern is further validated by the fact that Black-box attacks are often the best-performing, which is a sign of gradient masking. The authors try to address this by performing an attack against an ensemble of models, and against a substitute model attack. However, attacking an ensemble is not equivalent to optimizing the expectation, which would require sampling a new permutation at each step.
The paper thus requires significantly stronger baselines and attacks. | val | [
"Ske-mUxFCX",
"BJezPa1FCX",
"B1eu-5JYR7",
"r1eZktyYCX",
"ByxUfwyKR7",
"B1xMkk49am",
"r1gIr-d0hX",
"HJedRiS5hX",
"HJlPpLn_2Q",
"r1gvH3hZim",
"HyxIf9wC9Q",
"ryxazwD0qQ",
"HklMPJPC5X",
"rklxM4To9Q",
"Hyeb7Rhi57",
"r1xTlHni57",
"SJgdGN2i97",
"r1g-LRjic7",
"rJlwltEs9X",
"B1xKozVj9X"... | [
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"public",
"public",
"public",
"public",
"public",
"author",
"author",
"public",
"public",
"public",
"author... | [
"Thanks for your detailed comments and sorry for the late reply. We had to perform some experiments to answer your questions.\n\n- The accuracy reported for CIFAR10 is for a simple 3 layer dense network. We agree that this is far from desirable, but we believe that if future work can reach higher accuracy on clean ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"r1g-LRjic7",
"B1xMkk49am",
"HJlPpLn_2Q",
"HJedRiS5hX",
"r1gIr-d0hX",
"iclr_2019_HkElFj0qYQ",
"iclr_2019_HkElFj0qYQ",
"iclr_2019_HkElFj0qYQ",
"iclr_2019_HkElFj0qYQ",
"ryxazwD0qQ",
"rklxM4To9Q",
"Hyeb7Rhi57",
"SJgdGN2i97",
"iclr_2019_HkElFj0qYQ",
"ryxP9XMicQ",
"B1xKozVj9X",
"rJlwltEs9... |
iclr_2019_HkGGfhC5Y7 | Towards a better understanding of Vector Quantized Autoencoders | Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models; however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match that of their continuous counterparts. Recent work on vector quantized autoencoders (VQ-VAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete autoencoder with EM and combining it with sequence-level knowledge distillation allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.
| rejected-papers | Strengths:
- well-written
- strong results for non-autoregressive NMT
- a novel soft EM version of VQ-VAE
Weaknesses:
- as pointed out by reviewers, the improvements are mostly not due to the VQ-VAE modification but rather due to orthogonal (and not interesting) changes, e.g., knowledge distillation. If there is a genuine contribution of VQ-VAE, it is small and required extensive parameter selection
- the explanations provided in the paper do not match the empirical results
Two reviewers criticize the experimental section, both its rigour and its discussion. Overall, there is nothing wrong with the method, but the experiments do not show that the modification is particularly beneficial. Given these results, and also given that the method is not particularly novel (switching from EM to Soft EM in VQ-VAE), it is hard for me to argue for accepting the paper. | train | [
"rkxyFEXDyE",
"ByxiE4XPJ4",
"ryl0iUEU14",
"r1lCtgTx27",
"H1xKgVEHkN",
"r1gBDs-Xk4",
"BJxg8jZ71V",
"SyeQNj-714",
"H1xD-S0dh7",
"BygRDlrs6Q",
"rkeQ2ZXspm",
"HkeSbl7j6X",
"Byearq9zpQ",
"ryxRaw5zpm",
"SylKiwqf6m",
"Byg4DS-Zp7",
"BJeUvaXU2Q"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"We thank the reviewer for reading our updated manuscript and for their feedback. We acknowledge that we missed this sentence in the introduction, since we focused on updating the experimental section and the writing therein as per the comments of R3. We will definitely go over the manuscript carefully and make the... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
5,
-1,
-1,
-1,
-1,
3
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4
] | [
"H1xKgVEHkN",
"ryl0iUEU14",
"r1gBDs-Xk4",
"iclr_2019_HkGGfhC5Y7",
"SyeQNj-714",
"Byearq9zpQ",
"ryxRaw5zpm",
"BygRDlrs6Q",
"iclr_2019_HkGGfhC5Y7",
"rkeQ2ZXspm",
"HkeSbl7j6X",
"iclr_2019_HkGGfhC5Y7",
"r1lCtgTx27",
"SylKiwqf6m",
"BJeUvaXU2Q",
"H1xD-S0dh7",
"iclr_2019_HkGGfhC5Y7"
] |
iclr_2019_HkGSniC9FQ | An Analysis of Composite Neural Network Performance from Function Composition Perspective | This work investigates the performance of a composite neural network, which is composed of pre-trained neural network models and non-instantiated neural network models, connected to form a rooted directed graph. A pre-trained neural network model is generally a well-trained neural network model targeted at a specific function. The advantages of adopting such a pre-trained model in a composite neural network are twofold. One is to benefit from others' intelligence and diligence, and the other is to save the effort of data preparation as well as the resources and time needed for training. However, the overall performance of a composite neural network is still not clear. In this work, we prove that a composite neural network, with high probability, performs better than any of its pre-trained components under certain assumptions. In addition, if an extra pre-trained component is added to a composite network, with high probability the overall performance will be improved. In the empirical evaluations, distinctly different applications support the above findings. | rejected-papers | Dear authors,
All reviewers pointed out that your result is about the expressivity of the big network rather than its accuracy, a result which is already known in the literature.
I encourage you to carefully read all reviews should you wish to resubmit this work to a future conference. | train | [
"rJe7YK4gaX",
"SyeedwNlpX",
"r1e5AB4xaX",
"r1lt6QF02X",
"r1xSK7vq2Q",
"Hkeeh48IhX"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"1. Thank you for comments. Actually, we considered one or more pre-trained neural network in the paper.\n\n2. Please pardon our non- scientific/mathematical tone that we just tried to emphasize of arrival of pre-trained neural network. \n\n3. Yes, in simple wording, it is the main claim of this paper. Many people ... | [
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
2,
3,
4
] | [
"Hkeeh48IhX",
"r1xSK7vq2Q",
"r1lt6QF02X",
"iclr_2019_HkGSniC9FQ",
"iclr_2019_HkGSniC9FQ",
"iclr_2019_HkGSniC9FQ"
] |
iclr_2019_HkGTwjCctm | Pyramid Recurrent Neural Networks for Multi-Scale Change-Point Detection | Many real-world time series, such as in activity recognition, finance, or climate science, have changepoints where the system's structure or parameters change. Detecting changes is important as they may indicate critical events. However, existing methods for changepoint detection face challenges when (1) the patterns of change cannot be modeled using simple and predefined metrics, and (2) changes can occur gradually, at multiple time-scales. To address this, we show how changepoint detection can be treated as a supervised learning problem, and propose a new deep neural network architecture that can efficiently identify both abrupt and gradual changes at multiple scales. Our proposed method, pyramid recurrent neural network (PRNN), is designed to be scale-invariant, by incorporating wavelets and pyramid analysis techniques from multi-scale signal processing. Through experiments on synthetic and real-world datasets, we show that PRNN can detect abrupt and gradual changes with higher accuracy than the state of the art and can extrapolate to detect changepoints at novel timescales that have not been seen in training. | rejected-papers | This paper studies change-point detection in time series using a multiscale neural network architecture which contains recurrent connections across different time scales.
Reviewers were mixed on this submission. They found the paper generally clear and well-written, and the idea of adding a multiscale component to the model interesting. However, they also pointed out weaknesses in the related work section and found the experimental setup somewhat limited. In particular, the paper provides little to no analysis of the learnt features. Taking these assessments into consideration, the AC concludes this submission cannot be accepted at this time.
"BylaQscd2m",
"rkxouFFipm",
"S1lJwOFi67",
"SkgZkuYiTm",
"BylefLP02X",
"Sylmkuj_2m"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"1. This papers leverages the concept of wavelet transform within a deep architecture to solve the classic problem (especially for wavelet analysis) of change point detection. The authors do a reasonably comprehensive job of demonstrating the efficacy of the proposed framework using various synthetic and real data ... | [
7,
-1,
-1,
-1,
4,
6
] | [
4,
-1,
-1,
-1,
5,
3
] | [
"iclr_2019_HkGTwjCctm",
"BylaQscd2m",
"Sylmkuj_2m",
"BylefLP02X",
"iclr_2019_HkGTwjCctm",
"iclr_2019_HkGTwjCctm"
] |
iclr_2019_HkGmDsR9YQ | Generalization and Regularization in DQN | Deep reinforcement learning (RL) algorithms have shown an impressive ability to learn complex control policies in high-dimensional environments. However, despite the ever-increasing performance on popular benchmarks like the Arcade Learning Environment (ALE), policies learned by deep RL algorithms can struggle to generalize when evaluated in remarkably similar environments. These results are unexpected given the fact that, in supervised learning, deep neural networks often learn robust features that generalize across tasks. In this paper, we study the generalization capabilities of DQN in order to aid in understanding this mismatch between generalization in deep RL and supervised learning methods. We provide evidence suggesting that DQN overspecializes to the domain it is trained on. We then comprehensively evaluate the impact of traditional methods of regularization from supervised learning, ℓ2 and dropout, and of reusing learned representations to improve the generalization capabilities of DQN. We perform this study using different game modes of Atari 2600 games, a recently introduced modification for the ALE which supports slight variations of the Atari 2600 games used for benchmarking in the field. Despite regularization being largely underutilized in deep RL, we show that it can, in fact, help DQN learn more general features. These features can then be reused and fine-tuned on similar tasks, considerably improving the sample efficiency of DQN. | rejected-papers | The authors have presented an empirical study of generalization and regularization in DQN. They evaluate generalization on different variants of Atari games and show that dropout and L2 regularization are beneficial. The paper does not contain any major revelations, nor does it propose new algorithms or approaches, but it is a well-written and clear demonstration, and it would be interesting to the deep RL community. However, the reviewers did not feel that the paper met the bar for publication at ICLR because the experiments were not as comprehensive as would be expected for an empirical study. The AC will side with the reviewers but hopes that the authors will expand their study and resubmit to another venue in the future.
"S1gj1M3N0m",
"SylbPbhEAm",
"SJg2yZ24A7",
"BklXppaonm",
"BylM33v92X",
"rkeceWxK3m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewer for feedback on our work. We are pleased to see that the reviewer thinks our paper is “well written”, that its “experimental methodology is clear & sound”. We agree with the assessment that it is “good methodological paper that can inform others on taking regularization more ser... | [
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
3,
5,
5
] | [
"rkeceWxK3m",
"BylM33v92X",
"BklXppaonm",
"iclr_2019_HkGmDsR9YQ",
"iclr_2019_HkGmDsR9YQ",
"iclr_2019_HkGmDsR9YQ"
] |
iclr_2019_HkGsHj05tQ | Effective and Efficient Batch Normalization Using Few Uncorrelated Data for Statistics' Estimation | Deep Neural Networks (DNNs) have thrived in recent years, in which Batch Normalization (BN) plays an indispensable role. However, it has been observed that BN is costly due to the reduction operations. In this paper, we propose alleviating BN’s cost by using only a small fraction of data for mean & variance estimation at each iteration. The key challenge to reach this goal is how to achieve a satisfactory balance between normalization effectiveness and execution efficiency. We identify that the effectiveness expects less data correlation while the efficiency expects a regular execution pattern. To this end, we propose two categories of approach: sampling or creating a few uncorrelated data for statistics’ estimation under certain strategy constraints. The former includes “Batch Sampling (BS)”, which randomly selects a few samples from each batch, and “Feature Sampling (FS)”, which randomly selects a small patch from each feature map of all samples; the latter is “Virtual Dataset Normalization (VDN)”, which generates a few synthetic random samples. Accordingly, multi-way strategies are designed to reduce the data correlation for accurate estimation and, at the same time, optimize the execution pattern for running acceleration. All the proposed methods are comprehensively evaluated on various DNN models, where an overall training speedup of up to 21.7% on modern GPUs can be practically achieved without the support of any specialized libraries, and the losses in model accuracy and convergence rate are negligible. Furthermore, our methods demonstrate powerful performance when solving the well-known “micro-batch normalization” problem in the case of tiny batch sizes. | rejected-papers | This paper proposes a faster approximation to batch norm, which avoids summing over the entire batch by subsampling either random examples or random image locations. It analyzes some of the tradeoffs of computation time vs. statistical efficiency of gradient estimation, and proposes schemes for decorrelating the samples to make good use of smaller numbers of samples.
The proposal is a reasonable one, and seems to give a noticeable improvement in efficiency. However, it's not clear there is a substantial enough contribution for an ICLR paper. The idea of subsampling is fairly obvious, and various other methods have already been proposed which decouple the computation of BN statistics from the training batch. From a practical standpoint, it's not clear that the observed benefit is large enough to justify the considerable complexity of an efficient implementation.
| train | [
"BJxx3U3U3Q",
"Hkg4TwQG0m",
"SJlPcO7f0Q",
"r1geNO7fCQ",
"HJgxJDXzAQ",
"r1xx5UQMCm",
"B1eIx741Tm",
"rkeKwjvY27"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to use subsampling to reduce the computation cost of BN, which buys around 20% of the computational cost. \n- In normal BN, the gradient is propagated through the normalization factor as well, how would that change in the case of subsampled BN?\n- The minimum amount of gains makes it less appea... | [
5,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_HkGsHj05tQ",
"rkeKwjvY27",
"BJxx3U3U3Q",
"Hkg4TwQG0m",
"r1xx5UQMCm",
"B1eIx741Tm",
"iclr_2019_HkGsHj05tQ",
"iclr_2019_HkGsHj05tQ"
] |
iclr_2019_HkGzUjR5tQ | DATNet: Dual Adversarial Transfer for Low-resource Named Entity Recognition | We propose a new architecture termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER). Specifically, two variants of DATNet, i.e., DATNet-F and DATNet-P, are proposed to explore effective feature fusion between high and low resource. To address the noisy and imbalanced training data, we propose a novel Generalized Resource-Adversarial Discriminator (GRAD). Additionally, adversarial training is adopted to boost model generalization. We examine the effects of different components in DATNet across domains and languages and show that significant improvement can be obtained especially for low-resource data. Without augmenting any additional hand-crafted features, we achieve new state-of-the-art performances on CoNLL and Twitter NER---88.16% F1 for Spanish, 53.43% F1 for WNUT-2016, and 42.83% F1 for WNUT-2017. | rejected-papers | A focused contribution that is clearly presented. That being said, the task of low-resource named entity recognition is fairly narrow and it is hard to tell how significant the empirical results are. The paper could be much stronger if it evaluated on a second task (and third task). Right now it is unclear whether the technique would generalize to other tasks. | train | [
"Skg5diLZCm",
"SJl90tU-07",
"HylBY88Z0X",
"SklaLn3rTX",
"rklsdRj5h7",
"rJeJabBc27"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nComment 1: “...the technical novelty of each component is limited. GRAD, for example, is rather a minor modification of Language Adversarial Discriminator (Kim et al., 2017) with a scalar weight parameter on the loss. … for this authors need to report quantitative results of the (Base + AT + F/P-transfer with AD... | [
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
4,
5,
4
] | [
"rklsdRj5h7",
"rJeJabBc27",
"SklaLn3rTX",
"iclr_2019_HkGzUjR5tQ",
"iclr_2019_HkGzUjR5tQ",
"iclr_2019_HkGzUjR5tQ"
] |
iclr_2019_HkM3vjCcF7 | Multi-Scale Stacked Hourglass Network for Human Pose Estimation | The stacked hourglass network has become an important model for human pose estimation. The estimation of human body posture depends on global information about keypoint types and local information about keypoint locations. The consistent processing of inputs and constraints makes it difficult to form differentiated and determined collaboration mechanisms for each stacked hourglass network. In this paper, we propose a Multi-Scale Stacked Hourglass (MSSH) network to highlight the differentiation capabilities of each hourglass network for human pose estimation. The pre-processing network forms feature maps of different scales, and dispatches them to various locations of the stacked hourglass network, where the small-scale features reach the front of the stacked hourglass network and the large-scale features reach the rear. A new loss function is also proposed for the multi-scale stacked hourglass network. Different keypoints have different weight coefficients in the loss function at different scales, and the keypoint weight coefficients are dynamically adjusted from the top-level hourglass network to the bottom-level hourglass network. Experimental results show that the proposed method is competitive with respect to the comparison algorithms on the MPII and LSP datasets. | rejected-papers | The paper presents a multi-scale extension of the hourglass network. As the reviewers point out, the paper is below ICLR publication standard due to low novelty (i.e., multi-scale extension is not a new idea) and significance (i.e., not a significant performance gain against the state-of-the-art method or other baselines).
"SkeJiMvq2m",
"SJgLtT493m",
"ryxRUPb83X",
"S1geiETm37",
"ryliR3nci7",
"rkgw4W8a5Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public"
] | [
"In this paper a method for pose estimation is proposed, which is based on the well known neural model “stacked hourglass networks”. The novelty in the proposed paper is a multi-scale formulation, which creates multiple scales from the input image and feeds them into different hourglass modules. The different scale... | [
3,
4,
3,
-1,
-1,
-1
] | [
5,
4,
5,
-1,
-1,
-1
] | [
"iclr_2019_HkM3vjCcF7",
"iclr_2019_HkM3vjCcF7",
"iclr_2019_HkM3vjCcF7",
"ryliR3nci7",
"rkgw4W8a5Q",
"iclr_2019_HkM3vjCcF7"
] |
iclr_2019_HkMlGnC9KQ | On Regularization and Robustness of Deep Neural Networks | In this work, we study the connection between regularization and robustness of deep neural networks by viewing them as elements of a reproducing kernel Hilbert space (RKHS) of functions and by regularizing them using the RKHS norm. Even though this norm cannot be computed, we consider various approximations based on upper and lower bounds. These approximations lead to new strategies for regularization, but also to existing ones such as spectral norm penalties or constraints, gradient penalties, or adversarial training. Besides, the kernel framework allows us to obtain margin-based bounds on adversarial generalization. We show that our new algorithms lead to empirical benefits for learning on small datasets and learning adversarially robust models. We also discuss implications of our regularization framework for learning implicit generative models. | rejected-papers | Reviewers generally found the RKHS perspective interesting, but did not feel that the results in the work (many of which were already known or follow easily from known theory) are sufficient to form a complete paper. Authors are encouraged to read the detailed reviewer comments which contain a number of critiques and suggestions for improvement. | train | [
"rkloNE79C7",
"SyevlN2F0X",
"Skx8HkyGT7",
"HJxIJl1za7",
"SJeWhgkfpQ",
"S1eid00WaQ",
"S1x3WD8ChQ",
"SylRu6annQ",
"rkxTYDf2i7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Paper1233 Authors,\n\nAfter reading the response carefully, I still feel like this paper is not ready to publish. Part of the reason is the organization of this paper does not highlight its main contributions, and also the paper lacks in-depth original contribution.\n\nI encourage the authors to continue thei... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"Skx8HkyGT7",
"iclr_2019_HkMlGnC9KQ",
"S1x3WD8ChQ",
"SylRu6annQ",
"rkxTYDf2i7",
"iclr_2019_HkMlGnC9KQ",
"iclr_2019_HkMlGnC9KQ",
"iclr_2019_HkMlGnC9KQ",
"iclr_2019_HkMlGnC9KQ"
] |
iclr_2019_HkMwHsCctm | Principled Deep Neural Network Training through Linear Programming | Deep Learning has received significant attention due to its impressive performance in many state-of-the-art learning tasks. Unfortunately, while very powerful, Deep Learning is not well understood theoretically, and in particular results on the complexity of training deep neural networks have only recently been obtained. In this work we show that large classes of deep neural networks with various architectures (e.g., DNNs, CNNs, Binary Neural Networks, and ResNets), activation functions (e.g., ReLUs and leaky ReLUs), and loss functions (e.g., Hinge loss, Euclidean loss, etc.) can be trained to near optimality with desired target accuracy using linear programming in time that is exponential in the input data and parameter space dimension and polynomial in the size of the data set; improvements of the dependence on the input dimension are known to be unlikely assuming P≠NP, and improving the dependence on the parameter space dimension remains open. In particular, we obtain polynomial time algorithms for training for a given fixed network architecture. Our work applies more broadly to empirical risk minimization problems, which allows us to generalize various previous results and obtain new complexity results for previously unstudied architectures in the proper learning setting. | rejected-papers | The strength of the paper is that it designs an LP-based algorithm for training neural networks with runtime exponential in the number of parameters and linear in the size of the dataset, and that the algorithm works for worst-case datasets. As reviewer 2 and reviewer 3 pointed out, the cons include a) it is not clear how the algorithm provides any theoretical insight into designing a polynomial-time algorithm in the future --- it seems that the algorithms are inherently exponential-time --- and b) it is not clear whether the algorithm is at all practically relevant. The AC also noted that a brute-force search algorithm is likewise exponential in the number of parameters and linear in the size of the dataset, and the authors agreed. This leaves the main contribution of the paper as the fact that it works for worst-case datasets. However, theoretically speaking, it is not clear this should be counted as a feature for algorithm design, because we cannot go beyond the intractability without making assumptions on the data, and in the AC's opinion the big open question is how to make additional assumptions on the data (instead of removing them). In summary, drawback b) makes this a purely theoretical paper, and the theoretical significance of the paper is unclear due to a). Therefore, based on a), the AC decided to recommend rejection, although the AC suggested that the authors re-submit to other top theory or ML theory conferences, which may better evaluate the theoretical significance of the paper. | test | [
"H1xmEVha0Q",
"SJxrd356Am",
"SyejCe6hRm",
"ByeXDEy3hm",
"HklY84IQCm",
"BJgHZon_aQ",
"SyeUXih_TX",
"BJxYsSnOpQ",
"HygOw7hdpX",
"SJeK-G2dTm",
"HJeC4Fx52Q",
"SkxC5Jptnm"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We are glad our replies addressed most concerns of the reviewer and we are thankful for the score increase. We agree with the reviewer in that the current title can be misleading, as it is not reflecting the theoretical nature of the paper. We will change the title as suggested.\n\nBest regards.",
"The AC is rig... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"ByeXDEy3hm",
"SyejCe6hRm",
"SJeK-G2dTm",
"iclr_2019_HkMwHsCctm",
"ByeXDEy3hm",
"ByeXDEy3hm",
"ByeXDEy3hm",
"HJeC4Fx52Q",
"SkxC5Jptnm",
"iclr_2019_HkMwHsCctm",
"iclr_2019_HkMwHsCctm",
"iclr_2019_HkMwHsCctm"
] |
iclr_2019_Hke8Do0cF7 | Deep processing of structured data | We construct a general unified framework for learning representations of structured data, i.e. data which cannot be represented as fixed-length vectors (e.g. sets, graphs, texts or images of varying sizes). The key role is played by an intermediate network called SAN (Set Aggregating Network), which maps a structured object to a fixed-length vector in a high-dimensional latent space. Our main theoretical result shows that for a sufficiently large dimension of the latent space, SAN is capable of learning a unique representation for every input example. Experiments demonstrate that replacing the pooling operation with SAN in convolutional networks leads to better results in classifying images of different sizes. Moreover, its direct application to text and graph data allows us to obtain results close to SOTA with simpler networks that have a smaller number of parameters than competitive models. | rejected-papers | The reviewers agree this paper is not good enough for ICLR. | test | [
"B1lI6n7_a7",
"Byg3dPot37",
"Hylth5Zwhm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n\nThe paper argues that plain (fully connected) neural networks cannot represent structured data, e.g. sequences, graphs, etc. Specialized architectures have instead been invented for each such case, e.g. recurrent neural networks, graph networks etc. The paper then proposes to treat these structured dat... | [
4,
4,
4
] | [
3,
4,
3
] | [
"iclr_2019_Hke8Do0cF7",
"iclr_2019_Hke8Do0cF7",
"iclr_2019_Hke8Do0cF7"
] |
iclr_2019_HkeILsRqFQ | An experimental study of layer-level training speed and its impact on generalization | How optimization influences the generalization ability of a DNN is still an active area of research. This work aims to unveil and study a factor of influence: the speed at which each layer trains. In our preliminary work, we develop a visualization technique and an optimization algorithm to monitor and control the layer rotation rate, a tentative measure of layer-level training speed, and show that it has a remarkably consistent and substantial impact on generalization. Our experiments further suggest that weight decay's and adaptive gradients methods' impact on both generalization performance and speed of convergence are solely due to layer rotation rate changes compared to vanilla SGD, offering a novel interpretation of these widely used techniques, and providing supplementary evidence that layer-level training speed indeed impacts generalization. Besides these fundamental findings, we also expect that on a practical level, the tools we introduce will reduce the meta-parameter tuning required to get the best generalization out of a deep network. | rejected-papers | Dear authors,
The reviewers all appreciated the question you are asking, and the study of the impact of each layer is definitely an interesting one.
They were, however, uncertain about the actual metrics you used to emphasize your points. Further, as you noted, there were quite a few presentation issues that led to skepticism among the reviewers, despite their spending quite a bit of time reading the paper and engaging in discussion.
Hence, I regret to inform you that your work is not yet ready for publication. A more focused analysis would be a great addition to the questions you raise. | train | [
"SyxwJJXq14",
"Bkx3wj7bJN",
"SJefljmW14",
"HklF6583CX",
"H1gbIzInRX",
"BJgSGzI2C7",
"SygkgfI20Q",
"rygUO-Ln0m",
"SJehpxIh07",
"HkeXqyLh07",
"HkxopZrq2m",
"HkxU-cNthQ",
"S1gKct_DoX"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the detailed and clarifying response. I have taken some time to reread the paper in light of these comments and hope to provide an adequate response here.\n\nFirst, thank you for your comments with regards to flattening the weight vector/manifold optimization. I think that we are in agreement on most... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"SJefljmW14",
"HklF6583CX",
"HklF6583CX",
"SygkgfI20Q",
"S1gKct_DoX",
"HkxU-cNthQ",
"HkxU-cNthQ",
"HkxopZrq2m",
"iclr_2019_HkeILsRqFQ",
"iclr_2019_HkeILsRqFQ",
"iclr_2019_HkeILsRqFQ",
"iclr_2019_HkeILsRqFQ",
"iclr_2019_HkeILsRqFQ"
] |
iclr_2019_HkeKVh05Fm | Multi-Grained Entity Proposal Network for Named Entity Recognition | In this paper, we focus on a new Named Entity Recognition (NER) task, i.e., the Multi-grained NER task. This task aims to simultaneously detect both fine-grained and coarse-grained entities in sentences. Correspondingly, we develop a novel Multi-grained Entity Proposal Network (MGEPN). Different from traditional NER models which regard NER as a sequential labeling task, MGEPN provides a new method that proposes entity candidates in the Proposal Network and classifies entities into different categories in the Classification Network. All possible entity candidates including fine-grained ones and coarse-grained ones are proposed in the Proposal Network, which enables the MGEPN model to identify multi-grained entities. In order to better identify named entities and determine their categories, context information is utilized and transferred from the Proposal Network to the Classification Network during the learning process. A novel Entity-Context attention mechanism is also introduced to help the model focus on entity-related context information. Experiments show that our model can obtain state-of-the-art performance on two real-world datasets for both the Multi-grained NER task and the traditional NER task. | rejected-papers | The authors present a method for fine grained entity tagging, which could be useful in certain practical scenarios.
I found the labeling of the CoNLL data with the fine-grained entities a bit confusing. The authors did not discuss the details of how the coarse-grained labels were changed to fine-grained ones. This detail is important and is missing from the paper. Moreover, there are concerns about the novelty of the work, both in terms of the task definition and the model (see the review of Reviewer 1, e.g.).
There is consensus amongst the reviewers in that their feedback on the paper is lukewarm.
| train | [
"Hyl4pH5tC7",
"ryekgAcKA7",
"BklfWVcFA7",
"HJx9i5BTh7",
"ryx2YYTqh7",
"Sye6lx5K37"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks a lot for your review.\n\nWe’ve updated our table with baselines [1,2,3] in the revised paper.\n\nThis is a brief version of Multi-grained NER F1 Performance on the test sets of three datasets:\n\n\t\t\t\t\t\t MG CoNLL2003 \tMG OntoNotes 5.0 \t\t ACE 2005\nLample et al. (2016)\t | 78.52\t... | [
-1,
-1,
-1,
5,
5,
4
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"Sye6lx5K37",
"ryx2YYTqh7",
"HJx9i5BTh7",
"iclr_2019_HkeKVh05Fm",
"iclr_2019_HkeKVh05Fm",
"iclr_2019_HkeKVh05Fm"
] |
iclr_2019_HkeWSnR5Y7 | Provable Defenses against Spatially Transformed Adversarial Inputs: Impossibility and Possibility Results | One intriguing property of neural networks is their inherent vulnerability to adversarial inputs, which are maliciously crafted samples to trigger target networks to misbehave. The state-of-the-art attacks generate adversarial inputs using either pixel perturbation or spatial transformation. Thus far, several provable defenses have been proposed against pixel perturbation-based attacks; yet, little is known about whether such solutions exist for spatial transformation-based attacks. This paper bridges this striking gap by conducting the first systematic study on provable defenses against spatially transformed adversarial inputs. Our findings convey mixed messages. On the impossibility side, we show that such defenses may not exist in practice: for any given networks, it is possible to find legitimate inputs and imperceptible transformations to generate adversarial inputs that force arbitrarily large errors. On the possibility side, we show that it is still feasible to construct adversarial training methods to significantly improve the resilience of networks against adversarial inputs over empirical datasets. We believe our findings provide insights for designing more effective defenses against spatially transformed adversarial inputs. | rejected-papers | This paper conducts a study on provable defenses to spatially transformed adversarial examples. In general, the paper pursues an interesting direction, but reviewers had many concerns regarding the clarity of the presentation and the depth of the experimental results, which the authors did not address in a rebuttal. | train | [
"S1lrfwa327",
"H1xG3oOtnX",
"r1xiO7ruhX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a defense against spatially transformed adversarial inputs and give the two main results on possibility (still possible to construct adversarial training methods to improve robustness) and impossibility (always exist spatially-transformed adversarial examples for any given networks and thus no ... | [
5,
3,
5
] | [
3,
4,
3
] | [
"iclr_2019_HkeWSnR5Y7",
"iclr_2019_HkeWSnR5Y7",
"iclr_2019_HkeWSnR5Y7"
] |
iclr_2019_HkekMnR5Ym | Meta-Learning Neural Bloom Filters | There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence. In many applications this expensive initialization is not practical, for example streaming algorithms --- where inputs are ephemeral and can only be inspected a small number of times. In this paper we explore the learning of approximate set membership over a stream of data in one-shot via meta-learning. We propose a novel memory architecture, the Neural Bloom Filter, which we show to be more compressive than Bloom Filters and several existing memory-augmented neural networks in scenarios of skewed data or structured sets. | rejected-papers | This work proposes an interesting approach to learn approximate set membership. While the proposed architecture is rather closely related to existing work, it is still interesting, as recognized by reviewers. The authors' substantial rewrites have also helped make the paper clearer. However, the empirical merits of the approach are still a bit limited; when combined with the narrow novelty compared to existing work, this makes the overall contribution a bit too thin for ICLR. Authors are encouraged to strengthen their work by showing more convincing practical benefit of their approach. | train | [
"HJlfcMvI3m",
"Hkl9AlNPA7",
"ByxXh6ZPAQ",
"H1lg3Mjdn7",
"ryeKu7-wCX",
"S1lXv_J707",
"B1llq3T-0Q",
"rJeha6qa6m",
"BylFBY5aa7",
"BklHAWcapX",
"HyeHtaK6pm",
"S1gCw_-T6X",
"BkxdWIvphQ"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a learnable bloom filter architecture. While the details of the architecture seemed a bit too complicated for me to grasp (see more on this later), via experiments the authors show that the learned bloom filters are more compact that regular bloom filters and can outperform other neural architec... | [
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1
] | [
"iclr_2019_HkekMnR5Ym",
"iclr_2019_HkekMnR5Ym",
"ryeKu7-wCX",
"iclr_2019_HkekMnR5Ym",
"BklHAWcapX",
"iclr_2019_HkekMnR5Ym",
"BylFBY5aa7",
"BkxdWIvphQ",
"HJlfcMvI3m",
"H1lg3Mjdn7",
"S1gCw_-T6X",
"iclr_2019_HkekMnR5Ym",
"iclr_2019_HkekMnR5Ym"
] |
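
For the Meta-Learning Neural Bloom Filters record above: the classical Bloom filter that the proposed architecture is benchmarked against can be sketched in a few lines. This is only the hand-crafted baseline, not the paper's neural model; the bit-array size, number of hashes, and SHA-256-based hashing below are illustrative choices.

```python
import hashlib

class BloomFilter:
    """Minimal classical Bloom filter: approximate set membership with
    false positives but never false negatives."""
    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for word in ["cat", "dog", "fish"]:
    bf.add(word)
assert "cat" in bf            # stored items are always reported present
print("bird" in bf)           # usually False; True would be a false positive
```
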
iclr_2019_Hkemdj09YQ | Rectified Gradient: Layer-wise Thresholding for Sharp and Coherent Attribution Maps | Saliency map, or the gradient of the score function with respect to the input, is the most basic means of interpreting deep neural network decisions. However, saliency maps are often visually noisy. Although several hypotheses were proposed to account for this phenomenon, there is no work that provides a rigorous analysis of noisy saliency maps. This may be a problem as numerous advanced attribution methods were proposed under the assumption that the existing hypotheses are true. In this paper, we identify the cause of noisy saliency maps. Then, we propose Rectified Gradient, a simple method that significantly improves saliency maps by alleviating that cause. Experiments showed effectiveness of our method and its superiority to other attribution methods. Codes and examples for the experiments will be released in public. | rejected-papers | The main goal of the submission is to figure out a way to produce less "noisy" saliency maps. The RectGrad method uses some thresholding during backprop, like Guided Backprop. The visuals of the proposed method are good, but the reviewers rightfully point out that evaluating whether the proposed method is any good is not obvious. The ROAR/KAR results are perhaps not telling the whole story (and the authors claim that RectGrad is not expected to get a high ROAR score, but I would like to see this developed more in a further version of this work).
Generally, I feel like there was a healthy back and forth between authors and R3 on the main concerns of this work. I agree that the mathematical justification for RectGrad seems not fully developed. Given all of these concerns, at this point I cannot support acceptance of this work at ICLR. | train | [
"SkxkhRKJy4",
"BklsGzERC7",
"BJxuD67CR7",
"Hke7H_a3AQ",
"ryx6j31n6Q",
"BJxYhsk367",
"Hkeud9k2TX",
"BkxmQ5JnTX",
"Hyx_l8y267",
"H1ldbTknTX",
"Sylw8nkhpX",
"B1xU8sk3Tm",
"S1e82cJ3a7",
"SkgYnI1n6Q",
"SJlHOIJnTm",
"SJlgP_d5h7",
"BJlWRgW53Q",
"BJxGAKO6oQ"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We give both intuitive and rigorous explanations as to why we should use a * R instead of |a * R|.\n\nIntuitive explanation: Since |a * R| does not work for even the simplest examples, it is highly likely that this will not work DNNs which are constructed by composing multiple affine layers. On the other hand, for... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"BklsGzERC7",
"BJxuD67CR7",
"Hke7H_a3AQ",
"H1ldbTknTX",
"Sylw8nkhpX",
"B1xU8sk3Tm",
"BkxmQ5JnTX",
"SJlgP_d5h7",
"iclr_2019_Hkemdj09YQ",
"ryx6j31n6Q",
"BJxGAKO6oQ",
"BJlWRgW53Q",
"Hkeud9k2TX",
"SJlHOIJnTm",
"iclr_2019_Hkemdj09YQ",
"iclr_2019_Hkemdj09YQ",
"iclr_2019_Hkemdj09YQ",
"icl... |
iclr_2019_Hkes0iR9KX | DEEP GEOMETRICAL GRAPH CLASSIFICATION | Most of the existing Graph Neural Networks (GNNs) are the mere extension of the Convolutional Neural Networks (CNNs) to graphs. Generally, they consist of several steps of message passing between the nodes followed by a global indiscriminate feature pooling function. In many data-sets, however, the nodes are unlabeled or their labels provide no information about the similarity between the nodes and the locations of the nodes in the graph. Accordingly, message passing may not propagate helpful information throughout the graph. We show that this conventional approach can fail to learn to perform even simple graph classification tasks. We alleviate this serious shortcoming of the GNNs by making them a two step method. In the first of the proposed approach, a graph embedding algorithm is utilized to obtain a continuous feature vector for each node of the graph. The embedding algorithm represents the graph as a point-cloud in the embedding space. In the second step, the GNN is applied to the point-cloud representation of the graph provided by the embedding method. The GNN learns to perform the given task by inferring the topological structure of the graph encoded in the spatial distribution of the embedded vectors. In addition, we extend the proposed approach to the graph clustering problem and a new architecture for graph clustering is proposed. Moreover, the spatial representation of the graph is utilized to design a graph pooling algorithm. We turn the problem of graph down-sampling into a column sampling problem, i.e., the sampling algorithm selects a subset of the nodes whose feature vectors preserve the spatial distribution of all the feature vectors. We apply the proposed approach to several popular benchmark data-sets and it is shown that the proposed geometrical approach strongly improves the state-of-the-art result for several data-sets. For instance, for the PTC data-set, we improve the state-of-the-art result for more than 22 %. | rejected-papers | The extension of convnets to non-Euclidean data is a major theme of research in computer vision and signal processing. This paper is concerned with Graph structured datasets. The main idea seems to be interesting: to improve graph neural nets by first embedding the graph in a Euclidean space reducing it to a point cloud, and then exploiting the induced topological structure implicit in the point cloud.
However, all reviewers found this paper hard to read and improperly motivated due to poor writing quality. The experimental results are somewhat promising but not completely convincing, and the proposed framework lacks a solid theoretical footing. Hence, the AC cannot recommend acceptance at ICLR-2019. | train | [
"rylNypVmyN",
"ryl_hKczJV",
"SylL8nCdA7",
"H1lBLg1FR7",
"B1lJaWuTCX",
"HkexO4FN3X",
"H1x4fS1KC7",
"Syxu1oN9nX",
"HJg5UASP37"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"1 - Thanks for the comment. The paper has been thoroughly edited. \n\n2- The novel contributions are explicitly explained in abstract and in Section 1. The main contribution is to translate the graph analysis task into a point-cloud analysis task. \n\n3 - The proposed pooling method is compared with existing pooli... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
4,
3
] | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
4,
4
] | [
"ryl_hKczJV",
"H1lBLg1FR7",
"Syxu1oN9nX",
"HJg5UASP37",
"H1x4fS1KC7",
"iclr_2019_Hkes0iR9KX",
"HkexO4FN3X",
"iclr_2019_Hkes0iR9KX",
"iclr_2019_Hkes0iR9KX"
] |
iclr_2019_Hkesr205t7 | Learning shared manifold representation of images and attributes for generalized zero-shot learning | Many of the zero-shot learning methods have realized predicting labels of unseen images by learning the relations between images and pre-defined class-attributes. However, recent studies show that, under the more realistic generalized zero-shot learning (GZSL) scenarios, these approaches severely suffer from the issue of biased prediction, i.e., their classifier tends to predict all the examples from both seen and unseen classes as one of the seen classes. The cause of this problem is that they cannot properly learn a mapping to the representation space generalized to the unseen classes since the training set does not include any unseen class information. To solve this, we propose a concept to learn a mapping that embeds both images and attributes to the shared representation space that can be generalized even for unseen classes by interpolating from the information of seen classes, which we refer to shared manifold learning. Furthermore, we propose modality invariant variational autoencoders, which can perform shared manifold learning by training variational autoencoders with both images and attributes as inputs. The empirical validation of well-known datasets in GZSL shows that our method achieves the significantly superior performances to the existing relation-based studies. | rejected-papers | The paper addresses generalized zero shot learning (test data contains examples from both seen as well as unseen classes) and proposes to learn a shared representation of images and attributes via multimodal variational autoencoders.
The reviewers and AC note the following potential weaknesses: (1) low technical contribution, i.e. the proposed multimodal VAE model is very similar to Vedantam et al. (2017), as noted by R2, and to the JMVAE model by Suzuki et al. (2016), as noted by R1. The authors clarified in their response that the VAE in Vedantam et al. (2017) is indeed similar, but it has been used for image synthesis and not classification/GZSL. (2) Empirical evaluations and setup are not convincing (R2) and not clear -- R3 has provided a very detailed review and a follow-up discussion raising several important concerns such as (i) absence of a validation set to test generalization, (ii) the hyperparameter setup; (iii) unclear advantages of learning a joint model as opposed to unidirectional mappings (R1 also supports this claim). The authors partially addressed some of these concerns in their response; however, more in-depth analysis and a major revision are required to assess the benefits and feasibility of the proposed approach.
| val | [
"S1g_k-jDhX",
"ryx31P9PyV",
"r1xKsv3F2Q",
"r1xcP-e0Am",
"SJgxOVxoCm",
"rkeR7Nxi07",
"r1ge1qmYAm",
"ryxUSkCdRm",
"SklW-AE5nQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper considers the problem of (Generalized) Zero-Shot Learning. Most zero-shot learning methods embed images and text/attribute representations into a common space. The main difference here seems to be that Variational AutoEncoder (VAEs) are used to learn the mappings that take different sources as input (ima... | [
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2019_Hkesr205t7",
"r1xcP-e0Am",
"iclr_2019_Hkesr205t7",
"SJgxOVxoCm",
"rkeR7Nxi07",
"r1xKsv3F2Q",
"SklW-AE5nQ",
"S1g_k-jDhX",
"iclr_2019_Hkesr205t7"
] |
iclr_2019_HketHo0qFm | Hybrid Policies Using Inverse Rewards for Reinforcement Learning | This paper puts forward a broad-spectrum improvement for reinforcement learning algorithms, which combines the policies using original rewards and inverse (negative) rewards. The policies using inverse rewards are competitive with the original policies, and help the original policies correct their mis-actions. We have proved the convergence of the inverse policies. The experiments for some games in OpenAI gym show that the hybrid policies based on deep Q-learning, double Q-learning, and on-policy actor-critic obtain the rewards up to 63.8%, 97.8%, and 54.7% more than the original algorithms. The improved policies are more stable than the original policies as well. | rejected-papers | Pros:
- an original idea: learn an additional inverse policy (that minimizes reward) to help find actions that should be avoided.
Cons:
- not clearly presented
- conclusions are not validated
- empirical evidence is weak
- no rebuttal
The three reviewers reached consensus that the paper should be rejected in its current form, but made numerous suggestions for improving it for a future submission.
| train | [
"SkltanUIpX",
"SJeT9IXc2Q",
"HJgUh1it3m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a method for improving the stability of reinforcement learning with value function approximation, e.g., deep Q-learning. The key idea is fitting a Q function to rewards, fitting another Q function to negative rewards, then estimating Q values using a linear combination of the two Q functions. Th... | [
3,
2,
4
] | [
4,
5,
5
] | [
"iclr_2019_HketHo0qFm",
"iclr_2019_HketHo0qFm",
"iclr_2019_HketHo0qFm"
] |
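
For the Hybrid Policies Using Inverse Rewards record above: as summarized in the first review, the core idea is to fit one Q function to the original rewards, another to negated rewards, and act on a linear combination of the two. The sketch below illustrates only that combination step; the random Q tables and the mixing weight beta stand in for learned estimators and a tuned hyperparameter, and are not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3

# Stand-ins for a Q function learned on the original rewards and an
# "inverse" Q function learned on negated rewards.
Q = rng.normal(size=(n_states, n_actions))
Q_inv = rng.normal(size=(n_states, n_actions))

def hybrid_action(state, beta=0.5):
    """Act greedily on a linear combination of the two estimates.

    A high Q_inv value means the inverse policy (which maximises the
    negated reward) finds the action attractive, i.e. it is likely a bad
    action under the original reward, so it is subtracted.
    """
    combined = Q[state] - beta * Q_inv[state]
    return int(np.argmax(combined))

print(hybrid_action(state=2))
```
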
iclr_2019_HkeyZhC9F7 | Learning Heuristics for Automated Reasoning through Reinforcement Learning | We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We focus on backtracking search algorithms for quantified Boolean logics, which already can solve formulas of impressive size - up to 100s of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For challenging problems, the heuristic learned through our approach reduces execution time by a factor of 10 compared to the existing handwritten heuristics. | rejected-papers | The paper proposes the use of reinforcement learning to learn heuristics in backtracking search algorithm for quantified boolean formulas, using a neural network to learn a suitable representation of literals and clauses to predict actions. The writing and the description of the method and results are generally clear. The main novelty lies in finding a good architecture/representation of the input, and demonstrating the use of RL in a new domain. While there is no theoretical justification for why this heuristic should work better than existing ones, the experimental results look convincing, although they are somewhat limited and the improvements are dataset dependent. In practice, the overhead of the proposed method could be an issue. There was some disagreement among the reviewers as to whether the improvements and the results are significant enough for publication. | train | [
"BJlrwT-cR7",
"Hke2JfcoT7",
"ryxELVAF6m",
"Skx240VFpQ",
"r1ggEpNKaX",
"Bye14hEKpQ",
"SklWGYIq2X",
"BklL0lZq3X",
"ryxJA8gF2Q"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"As promised, we added additional experiments to the appendix. Please find them in Appendix E and Appendix F.",
"We were not aware of this work, and will discuss it in our related work section. There are several key differences compared to our work: Khalil et al. present an approach to learn to predict an existin... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"Skx240VFpQ",
"ryxELVAF6m",
"Skx240VFpQ",
"ryxJA8gF2Q",
"BklL0lZq3X",
"SklWGYIq2X",
"iclr_2019_HkeyZhC9F7",
"iclr_2019_HkeyZhC9F7",
"iclr_2019_HkeyZhC9F7"
] |
iclr_2019_HkezfhA5Y7 | A Rate-Distortion Theory of Adversarial Examples | The generalization ability of deep neural networks (DNNs) is intertwined with model complexity, robustness, and capacity. Through establishing an equivalence between a DNN and a noisy communication channel, we characterize generalization and fault tolerance for unbounded adversarial attacks in terms of information-theoretic quantities. Invoking rate-distortion theory, we suggest that excess capacity is a significant cause of vulnerability to adversarial examples. | rejected-papers | Both authors and reviewers agree that the ideas in the paper were not presented clearly enough. | val | [
"r1gLCc78J4",
"rJlBx_662m",
"SyeNJUn52Q",
"HkgXexY9nX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for their honest feedback. We agree that the ideas were not presented clearly and will work on improving this.",
"The paper discusses on a rate distortion interpretation of adversarial examples by building the equivalence of DNN and a noisy channel. The proposed topic is very interesting. ... | [
-1,
4,
3,
2
] | [
-1,
4,
3,
3
] | [
"iclr_2019_HkezfhA5Y7",
"iclr_2019_HkezfhA5Y7",
"iclr_2019_HkezfhA5Y7",
"iclr_2019_HkezfhA5Y7"
] |
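
For the Rate-Distortion Theory of Adversarial Examples record above: the abstract invokes rate-distortion theory without restating it, so for reference the textbook rate-distortion function for a source X, reconstruction X-hat, distortion measure d, and distortion budget D is

```latex
R(D) \;=\; \min_{\,p(\hat{x}\mid x)\;:\;\mathbb{E}[d(X,\hat{X})]\le D}\; I(X;\hat{X}),
```

the minimum number of bits per symbol needed to describe the source within expected distortion D. How the paper maps network capacity and adversarial vulnerability onto this quantity is not reproduced here.
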
iclr_2019_HkfwpiA9KX | Automata Guided Skill Composition | Skills learned through (deep) reinforcement learning often generalize poorly
across tasks and re-training is necessary when presented with a new task. We
present a framework that combines techniques in formal methods with reinforcement
learning (RL) that allows for the convenient specification of complex temporal
dependent tasks with logical expressions and construction of new skills from existing
ones with no additional exploration. We provide theoretical results for our
composition technique and evaluate on a simple grid world simulation as well as
a robotic manipulation task. | rejected-papers | The authors present an interesting approach for combining finite state automata to compose new policies using temporal logic. The reviewers found this contribution interesting but had several questions that suggest that the current paper's presentation could be significantly clarified and better situated with respect to other literature. Given the strong pool of papers, this paper was borderline and the authors are encouraged to revise their paper to address the reviewers’ feedback.
| train | [
"BJgNUg4wl4",
"BJx8nZ85JN",
"BylKx8NIpX",
"Sylz7XQ5CQ",
"H1gEQFfc0m",
"BJxpVMfqCQ",
"SylO40-9RQ",
"BJxCCT-90X",
"SygGIu4n2X",
"ryeWe3Dc3Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the additional comments and adjustment to the score. We acknowledge that comparison with a good number of state-of-the-art methods would better situate our work in the field. Our work presented here is a combination of both reward engineering (using TL) and skill composition, along with the hierarchi... | [
-1,
-1,
5,
-1,
7,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
2,
-1,
3,
-1,
-1,
-1,
4,
2
] | [
"BJx8nZ85JN",
"BJxpVMfqCQ",
"iclr_2019_HkfwpiA9KX",
"H1gEQFfc0m",
"iclr_2019_HkfwpiA9KX",
"BylKx8NIpX",
"SygGIu4n2X",
"ryeWe3Dc3Q",
"iclr_2019_HkfwpiA9KX",
"iclr_2019_HkfwpiA9KX"
] |
iclr_2019_Hkg1YiAcK7 | Learning Implicit Generative Models by Teaching Explicit Ones | Implicit generative models are difficult to train as no explicit probability density functions are defined. Generative adversarial nets (GANs) propose a minimax framework to train such models, which suffer from mode collapse in practice due to the nature of the JS-divergence. In contrast, we propose a learning by teaching (LBT) framework to learn implicit models, which intrinsically avoid the mode collapse problem because of using the KL-divergence. In LBT, an auxiliary explicit model is introduced to learn the distribution defined by the implicit model while the later one's goal is to teach the explicit model to match the data distribution. LBT is formulated as a bilevel optimization problem, whose optimum implies that we obtain the maximum likelihood estimation of the implicit model. We adopt an unrolling approach to solve the challenging learning problem. Experimental results demonstrate the effectiveness of our method. | rejected-papers | The paper proposes a learning by teaching (LBT) framework to train an implicit generative model via an explicit one. It is shown experimentally, that the framework can help to avoid mode collapse. The reviewers commonly raised the question why this is the case, which was answered in the rebuttal by pointing to the differences between the KL- and the JS-divergence and by showing a toy problem for which the JS-divergence has local minima while the KL-divergence has not. However, it still remains unclear why this should be generally and for explicit models with insufficient capacity the case, and if the model will be scalable to larger settings, therefore the paper can not be accepted in the current form. | val | [
"Sye4Dr15nm",
"ByxP511mRm",
"rJgoI11mRm",
"BJe67kJmAm",
"rkx9UARG07",
"Hye7NRRfAX",
"BJxlWA0zCQ",
"SyeaT3YihX",
"H1gLMCrs3X"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work introduces a framework for learning implicit models that is robust to mode collapse. It consists in learning an explicit model of the implicit model through maximum likelihood while the later is used to teach the explicit model to better match the data distribution. The resulting bi-level optimization is... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_Hkg1YiAcK7",
"Sye4Dr15nm",
"H1gLMCrs3X",
"H1gLMCrs3X",
"SyeaT3YihX",
"iclr_2019_Hkg1YiAcK7",
"iclr_2019_Hkg1YiAcK7",
"iclr_2019_Hkg1YiAcK7",
"iclr_2019_Hkg1YiAcK7"
] |
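
For the Learning Implicit Generative Models by Teaching Explicit Ones record above: the discussion turns on the difference between KL- and JS-based objectives under mode collapse. The toy computation below makes that difference concrete for two discrete distributions; the specific probabilities are arbitrary and the example is not taken from the paper.

```python
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def js(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two-mode data distribution and a "mode-collapsed" model that puts almost
# all of its mass on the first mode (the small tail mass keeps logs finite).
p = np.array([0.5, 0.5])
q = np.array([0.999, 0.001])

print("KL(p||q):", kl(p, q))   # large: dropping a mode is punished heavily
print("JS(p,q): ", js(p, q))   # bounded by log 2, so the penalty saturates
```
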
iclr_2019_Hkg1csA5Y7 | A fast quasi-Newton-type method for large-scale stochastic optimisation | During recent years there has been an increased interest in stochastic adaptations of limited memory quasi-Newton methods, which compared to pure gradient-based routines can improve the convergence by incorporating second order information. In this work we propose a direct least-squares approach conceptually similar to the limited memory quasi-Newton methods, but that computes the search direction in a slightly different way. This is achieved in a fast and numerically robust manner by maintaining a Cholesky factor of low dimension. This is combined with a stochastic line search relying upon fulfilment of the Wolfe condition in a backtracking manner, where the step length is adaptively modified with respect to the optimisation progress. We support our new algorithm by providing several theoretical results guaranteeing its performance. The performance is demonstrated on real-world benchmark problems which shows improved results in comparison with already established methods. | rejected-papers | The paper investigates a novel formulation of a stochastic, quasi-Newton optimization strategy based on the natural idea of relaxing the secant conditions. This is an interesting and promising idea, but unfortunately none of the reviewers recommended acceptance. The reviewers unanimously fixated on weaknesses in the paper's technical presentation. In particular, the reviewers expressed some dissatisfaction with many aspects, including:
- Key details of the experimental evaluation were omitted (particularly concerning configuration of the baseline competitors), which is an essential aspect of reproducibility. One consequence is that the reviewers were not confident in the veracity of the experimental comparison.
- The reviewers struggled with a lack of clarity and accurate rendering of some key technical details. An example is dissatisfaction with the non-symmetry of the inverse Hessian approximation, which was not fully alleviated by the author responses.
- The proposed approach does not appear to possess any intrinsic advantage over standard methods from a computational complexity perspective.
I think this is promising work, but a careful revision that strengthens the underlying technical claims appears necessary to make this a solid contribution.
"rklAzwpVhm",
"Syla5N-qAm",
"HyeoUMuUAm",
"HJli9j4HRQ",
"H1gArsVH0Q",
"S1l41oVBRQ",
"HyeT9q4BCX",
"BkgBV1acnm",
"B1lbltnL3X"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new quasi-Newton method for stochastic optimization that solves a regularized least-squares problem to approximate curvature information that relaxes both the symmetry and secant conditions typically ensured in quasi-Newton methods. In addition to this, the authors propose a stochastic Armijo... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"iclr_2019_Hkg1csA5Y7",
"HyeoUMuUAm",
"HJli9j4HRQ",
"rklAzwpVhm",
"B1lbltnL3X",
"BkgBV1acnm",
"iclr_2019_Hkg1csA5Y7",
"iclr_2019_Hkg1csA5Y7",
"iclr_2019_Hkg1csA5Y7"
] |
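
For the fast quasi-Newton record above: one ingredient of the method is a backtracking line search that accepts a step once the Wolfe conditions hold. A generic version of that subroutine is sketched below with the usual textbook constants c1 and c2; the paper's stochastic, adaptively modified variant is not reproduced.

```python
import numpy as np

def backtracking_wolfe(f, grad, x, p, alpha0=1.0, c1=1e-4, c2=0.9,
                       shrink=0.5, max_iter=50):
    """Shrink the step length until both the sufficient-decrease (Armijo)
    and curvature parts of the Wolfe conditions are satisfied."""
    fx, slope = f(x), float(grad(x) @ p)   # slope should be negative
    alpha = alpha0
    for _ in range(max_iter):
        x_new = x + alpha * p
        armijo = f(x_new) <= fx + c1 * alpha * slope
        curvature = float(grad(x_new) @ p) >= c2 * slope
        if armijo and curvature:
            return alpha
        alpha *= shrink
    return alpha

# Toy quadratic f(x) = 0.5 * ||x||^2, searched along the negative gradient.
f = lambda x: 0.5 * float(x @ x)
grad = lambda x: x
x0 = np.array([3.0, -4.0])
print(backtracking_wolfe(f, grad, x0, p=-grad(x0)))
```
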
iclr_2019_Hkg313AcFX | Metropolis-Hastings view on variational inference and adversarial training | In this paper we propose to view the acceptance rate of the Metropolis-Hastings algorithm as a universal objective for learning to sample from target distribution -- given either as a set of samples or in the form of unnormalized density. This point of view unifies the goals of such approaches as Markov Chain Monte Carlo (MCMC), Generative Adversarial Networks (GANs), variational inference. To reveal the connection we derive the lower bound on the acceptance rate and treat it as the objective for learning explicit and implicit samplers. The form of the lower bound allows for doubly stochastic gradient optimization in case the target distribution factorizes (i.e. over data points). We empirically validate our approach on Bayesian inference for neural networks and generative models for images. | rejected-papers | This paper provides a good finding that maximizing a lower bound of the M-H acceptance rate is equivalent to minimizing the symmetric KL divergence between the target and the proposal. This lower bound is then used to learn samplers in both the density-based and sample-based settings. It also nicely connects GANs with MCMC by providing a novel loss function to train the discriminator. The experiment on the MNIST dataset in Sec 4.2 shows that training the proposal with the symmetric KL is better than variational inference that optimizes KL(q||p).
However, there are a few concerns raised in both the reviews and other comments that should be further clarified.
1. Training an independent proposal may reduce the rate of convergence.
2. In the density-based setting experiments, the learnt independent proposal is only used to provide an initial point and a random-walk kernel is actually used for sampling. This is different from the algorithm proposed in Section 3.
3. The proposed algorithm is only compared with VI in the density-based setting, and there is no comparison with other baselines in the sample-based setting, despite the close connections of the proposed method with other models. Stochastic gradient MCMC methods, A-NICE-MC, and GANs would be good baselines for empirical comparisons. Also, the dataset in Sec 4.2 is a subset of the standard MNIST, which makes comparison with other work in the literature difficult.
For the first concern, the authors provided new experiments for low-dimensional synthetic distributions. It is very helpful to show comparable performance with A-NICE-MC in this case, but the real challenge of high-dimensional distributions remains unexamined. For the second concern, the authors consider the use of a random-walk kernel as a heuristic that allows them to obtain better samples from the posterior, but that significantly changes the transition kernel proposed in Alg. 1.
This paper would be significantly stronger and make a very good contribution to this area by addressing the problems above. | train | [
"S1xgAXoWJ4",
"H1xaz7obJN",
"BJllVttk1N",
"BklWnDOk14",
"SJgcTjApRQ",
"SJx-1SnF3Q",
"Sylt4u5nRm",
"HJggW5cFhQ",
"BklgioCFCX",
"BklQxnRYCX",
"BJeZxjRFAQ",
"rkgCV90K0X",
"BkgWy94p2m"
] | [
"author",
"author",
"public",
"public",
"author",
"official_reviewer",
"public",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"\nWe are very pleased with your interest in our paper!\nWe also were enjoyed by reading yours! :)\n\nIt is very important for us to provide the reader with a comprehensive overview of different sampling methods, so we will glad to cite both papers in the camera-ready version of our paper.\n",
"\nThank you for re... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
9,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
3
] | [
"Sylt4u5nRm",
"BJllVttk1N",
"iclr_2019_Hkg313AcFX",
"Sylt4u5nRm",
"SJx-1SnF3Q",
"iclr_2019_Hkg313AcFX",
"iclr_2019_Hkg313AcFX",
"iclr_2019_Hkg313AcFX",
"SJx-1SnF3Q",
"HJggW5cFhQ",
"BkgWy94p2m",
"iclr_2019_Hkg313AcFX",
"iclr_2019_Hkg313AcFX"
] |
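
For the Metropolis-Hastings record above: the two quantities the meta-review refers to are, in standard notation, the acceptance probability of an independence proposal q for a target p and the symmetric KL divergence,

```latex
\alpha(x, x') = \min\!\left(1,\; \frac{p(x')\,q(x)}{p(x)\,q(x')}\right),
\qquad
\mathrm{KL}_{\mathrm{sym}}(p, q) = \mathrm{KL}(p \,\|\, q) + \mathrm{KL}(q \,\|\, p).
```

The paper's lower bound connecting the expected acceptance rate to the symmetric KL is derived in the paper itself and not reproduced here.
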
iclr_2019_HkgDTiCctQ | Knowledge Distillation from Few Samples | Current knowledge distillation methods require full training data to distill knowledge from a large "teacher" network to a compact "student" network by matching certain statistics between "teacher" and "student" such as softmax outputs and feature responses. This is not only time-consuming but also inconsistent with human cognition in which children can learn knowledge from adults with few examples. This paper proposes a novel and simple method for knowledge distillation from few samples. Taking the assumption that both "teacher" and "student" have the same feature map sizes at each corresponding block, we add a 1×1 conv-layer at the end of each block in the student-net, and align the block-level outputs between "teacher" and "student" by estimating the parameters of the added layer with limited samples. We prove that the added layer can be absorbed/merged into the previous conv-layer to formulate a new conv-layer with the same size of parameters and computation cost as the previous one. Experiments verify that the proposed method is very efficient and effective at distilling knowledge from the teacher-net to student-nets constructed in different ways on various datasets. | rejected-papers | The paper considers the problem of knowledge distillation from a few samples. The proposed solution is to align feature representations of the student network with the teacher by adding 1x1 convolutions to each student block, and learning only the parameters of those layers. As noted by Reviewers 1 and 2, the performance of the proposed method is rather poor in absolute terms, and the use case considered (distillation from a few samples) is not motivated well enough. Reviewers also note the method is quite simplistic and incremental. | train | [
"rye7rQUc1V",
"r1ldl_Nqy4",
"HJlp2q150Q",
"SJxjSu1cRm",
"HylT0D1qAm",
"rye8Ow1cAQ",
"S1gj3tk50X",
"S1lp283D6Q",
"B1e5MapjhQ",
"HkxK78ZK27",
"SkgVZoJ7h7",
"BJgfkfRUhQ",
"H1x7gjxLnm",
"BJg7ZjdQh7",
"HJgOAKO7nQ",
"SkgvhosGhQ",
"rJlpXbFb27"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"public",
"public"
] | [
"Thanks for the valuable comments and suggestions. Below further response your concerns. \n\n### Hong et al focuses on convex optimization problem\nWe agree that CNN optimization problem ins non-convex. However, our problem is not a standard CNN optimization problem. The loss function in eq(2) contains multiple dis... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"r1ldl_Nqy4",
"rye8Ow1cAQ",
"iclr_2019_HkgDTiCctQ",
"SkgVZoJ7h7",
"HkxK78ZK27",
"B1e5MapjhQ",
"S1lp283D6Q",
"iclr_2019_HkgDTiCctQ",
"iclr_2019_HkgDTiCctQ",
"iclr_2019_HkgDTiCctQ",
"iclr_2019_HkgDTiCctQ",
"H1x7gjxLnm",
"iclr_2019_HkgDTiCctQ",
"SkgvhosGhQ",
"rJlpXbFb27",
"iclr_2019_HkgDT... |
iclr_2019_HkgHk3RctX | Seq2Slate: Re-ranking and Slate Optimization with RNNs | Ranking is a central task in machine learning and information retrieval. In this task, it is especially important to present the user with a slate of items that is appealing as a whole. This in turn requires taking into account interactions between items, since intuitively, placing an item on the slate affects the decision of which other items should be chosen alongside it.
In this work, we propose a sequence-to-sequence model for ranking called seq2slate. At each step, the model predicts the next item to place on the slate given the items already chosen. The recurrent nature of the model allows complex dependencies between items to be captured directly in a flexible and scalable way. We show how to learn the model end-to-end from weak supervision in the form of easily obtained click-through data. We further demonstrate the usefulness of our approach in experiments on standard ranking benchmarks as well as in a real-world recommendation system. | rejected-papers | The paper addresses the problem of learning to (re)rank slates of search results while optimizing some performance metric across the entire list of results (the slate). The work builds on a wealth of prior work on slate optimization from the information retrieval community, and proposes a novel approach to this problem, an extension of pointer networks, previously used in sequence learning tasks.
The paper is motivated by an important real world application, and has potential for significant practical impact. Reviewers noted in particular the valuable evaluation in an A/B test against a strong production system - showing that the work has practical impact. Reviewers positively noted the discussion of practical issues related to applying the work at scale. The paper was found to be clearly written, and demonstrating a thorough understanding of related work.
The authors and AC also note several potential weaknesses. Several of these were addressed by the authors, as follows. R3 asked for more breadth on metrics, and additional clarifications - the authors provided the requested information. Several questions were raised regarding the diverse-clicks setting and choice of hyperparameter \eta - both were discussed in the rebuttal. Further analysis / discussion of computational and performance trade-offs are requested and discussed.
Overall, the main drawback of the paper, raised by all three reviewers, is the size of the contribution. The paper extends an approach called "pointer networks" to the model application setting considered here. The reviewers and AC agree that, while practically relevant and interesting, the research contribution of the resulting approach limited. As a result, the recommendation is to not accept the paper for publication at ICLR in its current form.
| train | [
"ryg7Gjan2m",
"SylT5YJK6X",
"HyerwFkY6X",
"BJeFxK1YaQ",
"r1ghFuJYaX",
"rkl_UuyYp7",
"HJeMKMiBpX",
"Hyg0a4MCnX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"If the stated revisions are incorporated into the paper, it will be a substantially stronger version. I'm leaning towards accepting the revised version -- all my concerns are addressed by the authors' comments.\n---\nThe paper uses a Seq2Seq network to re-rank candidate items in an information retrieval task so as... | [
7,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_HkgHk3RctX",
"iclr_2019_HkgHk3RctX",
"HJeMKMiBpX",
"Hyg0a4MCnX",
"rkl_UuyYp7",
"ryg7Gjan2m",
"iclr_2019_HkgHk3RctX",
"iclr_2019_HkgHk3RctX"
] |
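
For the Seq2Slate record above: the model builds a slate one position at a time, conditioning each choice on the items already placed and masking them out of the candidate pool. The greedy decoder below illustrates only that loop; the mean-pooled state and dot-product scores are stand-ins for the recurrent pointer-network attention actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d = 5, 8
item_vecs = rng.normal(size=(n_items, d))     # encoded candidate items

def greedy_slate(item_vecs, slate_size=3):
    chosen = []
    mask = np.zeros(len(item_vecs), dtype=bool)
    state = np.zeros(item_vecs.shape[1])      # summary of items placed so far
    for _ in range(slate_size):
        scores = item_vecs @ state + 0.01 * item_vecs.sum(axis=1)
        scores[mask] = -np.inf                # an item can appear only once
        nxt = int(np.argmax(scores))
        chosen.append(nxt)
        mask[nxt] = True
        state = item_vecs[chosen].mean(axis=0)
    return chosen

print(greedy_slate(item_vecs))
```
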
iclr_2019_HkgSk2A9Y7 | Stochastic Gradient Push for Distributed Deep Learning | Large mini-batch parallel SGD is commonly used for distributed training of deep networks. Approaches that use tightly-coupled exact distributed averaging based on AllReduce are sensitive to slow nodes and high-latency communication. In this work we show the applicability of Stochastic Gradient Push (SGP) for distributed training. SGP uses a gossip algorithm called PushSum for approximate distributed averaging, allowing for much more loosely coupled communications which can be beneficial in high-latency or high-variability scenarios. The tradeoff is that approximate distributed averaging injects additional noise in the gradient which can affect the train and test accuracies. We prove that SGP converges to a stationary point of smooth, non-convex objective functions. Furthermore, we validate empirically the potential of SGP. For example, using 32 nodes with 8 GPUs per node to train ResNet-50 on ImageNet, where nodes communicate over 10Gbps Ethernet, SGP completes 90 epochs in around 1.5 hours while AllReduce SGD takes over 5 hours, and the top-1 validation accuracy of SGP remains within 1.2% of that obtained using AllReduce SGD. | rejected-papers | The reviewers liked the paper in general but the empirical evaluation lacks studies on a wider range of different data sets. | val | [
"HkerLplOlE",
"rygxaxOHlE",
"ByllISH3y4",
"H1lWrrS2k4",
"rkehmHHh14",
"BylL6dXfAQ",
"BJxxhfIj67",
"HkeUrGUjTm",
"ryxuXzUiam",
"BJgRaZ8opX",
"SJe42bLjpm",
"Hkl3RCOAnQ",
"rkxdTUvqhX",
"BkgKBPEc2Q"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
">> Looking at your updated manuscript I don't see mention of the active/passive time differences you note in the comment below, and the timing reported in Table 2 indicates that the AD-PSGD time is actually faster than SGP across each of the node counts measured (not slower as you mention in your comment).\n\nBetw... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"rygxaxOHlE",
"ByllISH3y4",
"BkgKBPEc2Q",
"rkxdTUvqhX",
"Hkl3RCOAnQ",
"iclr_2019_HkgSk2A9Y7",
"BkgKBPEc2Q",
"ryxuXzUiam",
"rkxdTUvqhX",
"SJe42bLjpm",
"Hkl3RCOAnQ",
"iclr_2019_HkgSk2A9Y7",
"iclr_2019_HkgSk2A9Y7",
"iclr_2019_HkgSk2A9Y7"
] |
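
For the Stochastic Gradient Push record above: the approximate distributed averaging it relies on is the PushSum gossip protocol, which the sketch below runs on a small directed ring. Each node pushes half of its value mass and weight mass to one out-neighbour; the de-biased ratio x/w converges to the network-wide average. The graph and mixing weights are illustrative, and the interleaving with SGD steps is omitted.

```python
import numpy as np

# Column-stochastic mixing matrix for a directed 4-node ring: each node keeps
# half of its mass and pushes the other half to its out-neighbour.
P = np.array([[0.5, 0.0, 0.0, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5]])

x = np.array([1.0, 2.0, 3.0, 10.0])   # each node's local value
w = np.ones(4)                        # PushSum weights, initialised to one

for _ in range(50):
    x = P @ x                         # push value mass along out-edges
    w = P @ w                         # push weight mass the same way

print(x / w)                          # every node's estimate -> the mean, 4.0
```
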
iclr_2019_HkghV209tm | Optimistic Acceleration for Optimization | We consider new variants of optimization algorithms. Our algorithms are based on the observation that mini-batch of stochastic gradients in consecutive iterations do not change drastically and consequently may be predictable. Inspired by the similar setting in online learning literature called Optimistic Online learning, we propose two new optimistic algorithms for AMSGrad and Adam, respectively, by exploiting the predictability of gradients. The new algorithms combine the idea of momentum method, adaptive gradient method, and algorithms in Optimistic Online learning, which leads to speed up in training deep neural nets in practice. | rejected-papers | The reviewers expressed some interest in this paper, but overall were lukewarm about its contributions. R4 raises a fundamental issue with the presentation of the analysis (see the D_infty assumption). The AC thus goes for a "revise and resubmit". | train | [
"BkxG7ywE1E",
"H1eizHCITm",
"HJlG728ZJE",
"HkgiB4shAX",
"S1eV7r_9Cm",
"rklDgYP907",
"rJxli_wcAQ",
"SJg5TdwqAm",
"BJehLc1R3m",
"HJe5D4oj27",
"Hye3jHqf2Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I think that assuming that $D_\\infty$ is bounded is a lack of rigor. It is not a property of the problem but a property of the algorithm you use. The goal is to show the appealing properties of your algorithm.\n\nLet me give you an example. if we consider the simple sequence $x_{t+1} = x_t - \\eta x_t$ (which is... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
4
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"rJxli_wcAQ",
"iclr_2019_HkghV209tm",
"S1eV7r_9Cm",
"rklDgYP907",
"HJe5D4oj27",
"Hye3jHqf2Q",
"H1eizHCITm",
"BJehLc1R3m",
"iclr_2019_HkghV209tm",
"iclr_2019_HkghV209tm",
"iclr_2019_HkghV209tm"
] |
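
For the Optimistic Acceleration record above: the "optimistic" idea is to treat the previous gradient as a prediction of the next one, which is reasonable exactly when consecutive mini-batch gradients change slowly. The sketch below grafts that correction onto plain gradient descent; the paper instead builds it into AMSGrad and Adam, which is not reproduced here.

```python
import numpy as np

def optimistic_gd(grad, w, lr=0.1, steps=100):
    """Gradient descent where the previous gradient serves as a guess for
    the next one, giving the update w - lr * (2*g_t - g_{t-1})."""
    g_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        w = w - lr * (2 * g - g_prev)
        g_prev = g
    return w

# Toy quadratic whose gradients change slowly between iterations, which is
# the regime where the prediction g_{t+1} ~ g_t is accurate.
grad = lambda w: w - np.array([1.0, -2.0])
print(optimistic_gd(grad, w=np.zeros(2)))    # converges to ~[1, -2]
```
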
iclr_2019_HkgmzhC5F7 | A Modern Take on the Bias-Variance Tradeoff in Neural Networks | We revisit the bias-variance tradeoff for neural networks in light of modern empirical findings. The traditional bias-variance tradeoff in machine learning suggests that as model complexity grows, variance increases. Classical bounds in statistical learning theory point to the number of parameters in a model as a measure of model complexity, which means the tradeoff would indicate that variance increases with the size of neural networks. However, we empirically find that variance due to training set sampling is roughly constant (with both width and depth) in practice. Variance caused by the non-convexity of the loss landscape is different. We find that it decreases with width and increases with depth, in our setting. We provide theoretical analysis, in a simplified setting inspired by linear models, that is consistent with our empirical findings for width. We view bias-variance as a useful lens to study generalization through and encourage further theoretical explanation from this perspective. | rejected-papers | The paper revisits the traditional bias-variance trade-off for the case
of large capacity neural networks. Reviewers requested several clarifications
on the experimental setting and underlying results. Authors provided some,
but these were deemed insufficient for the paper to be strong enough to be accepted.
Reviewers discussed among themselves but think that, given the paper is mostly
experimental, it needs more experimental evidence to be acceptable.
Overall, I found the paper borderline but concur with the reviewers to reject
it in its current form. | train | [
"r1lJbSzikV",
"Hylv7-RqyV",
"r1lx0v-c1V",
"Skl9QvYE1N",
"BkxIGRh7T7",
"BkgEWaIy14",
"H1lnV2UyJV",
"r1es552m67",
"SkxYWjhXTm",
"rygWfqh767",
"SJlSHgE03Q",
"rkecRSD23m",
"S1gMIHts37"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your time.",
"Thank you for your reply. Unfortunately, without more convincing experiments (along the lines suggested in my previous comment), this work appears incomplete.\n\nI appreciate the fact that you may not have the necessary compute resources to carry out these experiments but that does no... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"Hylv7-RqyV",
"r1lx0v-c1V",
"Skl9QvYE1N",
"r1es552m67",
"S1gMIHts37",
"S1gMIHts37",
"SJlSHgE03Q",
"SJlSHgE03Q",
"rkecRSD23m",
"iclr_2019_HkgmzhC5F7",
"iclr_2019_HkgmzhC5F7",
"iclr_2019_HkgmzhC5F7",
"iclr_2019_HkgmzhC5F7"
] |
iclr_2019_Hkgnii09Ym | Set Transformer | Many machine learning tasks such as multiple instance learning, 3D shape recognition and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the permutation of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating increased performance compared to recent methods for set-structured data. | rejected-papers | This paper introduces set transformer for set inputs. The idea is built upon the transformer and introduces the attention mechanism. Major concerns on novelty were raised by the reviewers. | train | [
"HkepSRl1JN",
"rkx3ROgchX",
"ryl1nBJ0pQ",
"SklNCMkRp7",
"rkxjwGJRTQ",
"B1lmlfyRa7",
"BJec0UPRhm",
"B1eH_an927"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for raising the score. Sorry for not updating Table 1, we were aware of it but forgot to update it when we uploaded our revision. We will correct it upon our acceptance. We will also try to discuss more about permutation equivariant layers as you suggested.",
"This paper looks at stacking att... | [
-1,
6,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
5,
-1,
-1,
-1,
-1,
4,
3
] | [
"rkx3ROgchX",
"iclr_2019_Hkgnii09Ym",
"iclr_2019_Hkgnii09Ym",
"rkx3ROgchX",
"B1eH_an927",
"BJec0UPRhm",
"iclr_2019_Hkgnii09Ym",
"iclr_2019_Hkgnii09Ym"
] |
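
For the Set Transformer record above: the inducing-point trick replaces full self-attention over n set elements with two attention passes through m learnable inducing points, reducing the cost from O(n^2) to O(nm). The single-head NumPy sketch below shows only that data flow; residual connections, layer normalisation, and the feed-forward blocks of the real module are omitted, and the random inducing points stand in for learned parameters.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Single-head scaled dot-product attention."""
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

rng = np.random.default_rng(0)
n, m, d = 1000, 16, 32
X = rng.normal(size=(n, d))     # the input set
I = rng.normal(size=(m, d))     # inducing points (learned in the real model)

H = attention(I, X, X)          # inducing points summarise the set: (m, d)
Y = attention(X, H, H)          # the set attends back to the summary: (n, d)
print(Y.shape)
```
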
iclr_2019_HkgnpiR9Y7 | Recycling the discriminator for improving the inference mapping of GAN | Generative adversarial networks (GANs) have achieved outstanding success in generating high-quality data. Focusing on the generation process, existing GANs learn a unidirectional mapping from the latent vector to the data. Later, various studies point out that the latent space of GANs is semantically meaningful and can be utilized in advanced data analysis and manipulation. In order to analyze the real data in the latent space of GANs, it is necessary to investigate the inverse generation mapping from the data to the latent vector. To tackle this problem, the bidirectional generative models introduce an encoder to establish the inverse path of the generation process. Unfortunately, this effort leads to the degradation of generation quality because the imperfect generator interferes with the encoder training and vice versa.
In this paper, we propose an effective algorithm to infer the latent vector based on existing unidirectional GANs by preserving their generation quality.
It is important to note that we focus on increasing the accuracy and efficiency of the inference mapping but not influencing the GAN performance (i.e., the quality or the diversity of the generated sample).
Furthermore, utilizing the proposed inference mapping algorithm, we suggest a new metric for evaluating the GAN models by measuring the reconstruction error of unseen real data.
The experimental analysis demonstrates that the proposed algorithm achieves more accurate inference mapping than the existing method and provides the robust metric for evaluating GAN performance. | rejected-papers | The paper presents a method to learn inference mapping for GANs by reusing the learned discriminator's features and fitting a model over these features to reconstruct the original latent code z. R1 pointed out the connection to InfoGAN which the authors have addressed. R2 is concerned about limited novelty of the proposed method, which the AC agrees with, and lack of comparison to a related iGAN work by Zhu et al. (2016). The authors have provided the comparison in the revised version but the proposed method seems to be worse than iGAN in terms of the metrics used (PSNR and SSIM), though more efficient. The benefits of using the proposed metrics for evaluating GAN quality are also not established well, particularly in the context of other recent metrics such as FID and GILBO.
| val | [
"BygIKlaj6X",
"SyldBAQtaQ",
"ByeRQAmKam",
"HJes5EG46X",
"S1xZsyw16Q",
"ByxcyXr53m",
"rylzg9Tu2Q"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your comments,\nWe thank the Reviewer1 for constructive feedback. Reviewer1 suggests the comparison among the results of various unidirectional GANs to strengthen the experimental evaluation. We agree that the comparison mentioned by Reviewer1 helps improving the quality of the paper. To reflect this co... | [
-1,
-1,
-1,
-1,
3,
3,
7
] | [
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"rylzg9Tu2Q",
"ByxcyXr53m",
"ByxcyXr53m",
"S1xZsyw16Q",
"iclr_2019_HkgnpiR9Y7",
"iclr_2019_HkgnpiR9Y7",
"iclr_2019_HkgnpiR9Y7"
] |