paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2019_S1GcHsAqtm | Adaptive Pruning of Neural Language Models for Mobile Devices | Neural language models (NLMs) exist in an accuracy-efficiency tradeoff space where better perplexity typically comes at the cost of greater computational complexity. In a software keyboard application on mobile devices, this translates into higher power consumption and shorter battery life. This paper represents the first attempt, to our knowledge, at exploring accuracy-efficiency tradeoffs for NLMs. Building on quasi-recurrent neural networks (QRNNs), we apply pruning techniques to provide a "knob" to select different operating points. In addition, we propose a simple technique to recover some perplexity using a negligible amount of memory. Our empirical evaluations consider both perplexity as well as energy consumption on a Raspberry Pi, where we demonstrate which methods provide the best perplexity-power consumption operating point. At one operating point, one of the techniques is able to provide energy savings of 40% over the state of the art with only a 17% relative increase in perplexity. | rejected-papers | The area chair agrees with the authors and the reviewers that the topic of this work is relevant and important. The area chair however shares the concerns of the reviewers about the setup and the empirical evaluation:
- Having one model that can be pruned to varying sizes at run-time is convenient, but in practice it is likely to be OK to do the pruning at training time. In light of this, the empirical results are not so impressive.
- Without quantization, distillation and fused ops, the value of the empirical results seems questionable as these are important and well-known techniques that are often used in practice. A more thorough evaluation that includes these techniques would make the paper much stronger. | train | [
"rkgJIIoPn7",
"BygJ8-PKhX",
"S1xKdMa_A7",
"Hkxkrb6_07",
"HkgAwXOvAX",
"BkxxgxuPAm",
"HylJTQ3b6X",
"Byeurdx0hm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
"In this paper, the authors investigate the accuracy-efficiency tradeoff for neural language models. In particular, they explore how different compression strategies impact the accuracy (and flops), and more interestingly, also how it impacts the power use for a RaspberryPi. The authors consider the QRNNs and SRUs ... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_S1GcHsAqtm",
"iclr_2019_S1GcHsAqtm",
"HylJTQ3b6X",
"Byeurdx0hm",
"BygJ8-PKhX",
"rkgJIIoPn7",
"iclr_2019_S1GcHsAqtm",
"iclr_2019_S1GcHsAqtm"
] |
iclr_2019_S1MAriC5F7 | Massively Parallel Hyperparameter Tuning | Modern learning models are characterized by large hyperparameter spaces. In order to adequately explore these large spaces, we must evaluate a large number of configurations, typically orders of magnitude more configurations than available parallel workers. Given the growing costs of model training, we would ideally like to perform this search in roughly the same wall-clock time needed to train a single model. In this work, we tackle this challenge by introducing ASHA, a simple and robust hyperparameter tuning algorithm with solid theoretical underpinnings that exploits parallelism and aggressive early-stopping. Our extensive empirical results show that ASHA outperforms state-of-the-art hyperparameter tuning methods; scales linearly with the number of workers in distributed settings; converges to a high quality configuration in half the time taken by Vizier (Google's internal hyperparameter tuning service) in an experiment with 500 workers; and beats the published result for a near state-of-the-art LSTM architecture in under 2× the time to train a single model. | rejected-papers | The paper proposes and evaluates an asynchronous hyperparameter optimization algorithm.
Strengths: The experiments are certainly thorough, and I appreciated the discussion of the algorithm and side-experiments demonstrating its operation in different settings. Overall the paper is pretty clear. It's a good thing when a proposed method is a simple variant of an existing method.
Weaknesses: The first page could have been half the length, and it's not clear why we should care about the stated goal of this work. Isn't the real goal just to get good test performance in a small amount of time? The title is also a bit obnoxious and land-grabby - it could have been used for almost any of the comparison methods. The proposed method is a minor change to SHA. The proposed change is kind of obvious, and the resulting method does have a number of hyper-hyperparameters.
Consensus: Ultimately I agree with the reviewers that it is just below the bar of acceptance. This does seem like a valid contribution to the hyperparameter-tuning literature, but more of an engineering contribution than a research contribution. It's also getting a little bit away from the subject of machine learning, and might be more appropriate for, say, SysML. | train | [
"rkxyoWkqnQ",
"ryep9aQaAm",
"H1xM-nkKRm",
"r1xROVMZR7",
"rylHU4M-0Q",
"Bke6GNzbAm",
"SklmhCr927",
"HJeFvOLDhX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper describes a simple, asynchronous way to parallelize successive halving. In a nutshell, this method, dubbed ASHA, promotes a hyperparameter configuration to the next rung of successive halving when ever possible, instead of waiting that all configurations of the current rung have finished. ASHA can easily... | [
5,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_S1MAriC5F7",
"H1xM-nkKRm",
"iclr_2019_S1MAriC5F7",
"iclr_2019_S1MAriC5F7",
"iclr_2019_S1MAriC5F7",
"iclr_2019_S1MAriC5F7",
"iclr_2019_S1MAriC5F7",
"iclr_2019_S1MAriC5F7"
] |
iclr_2019_S1MB-3RcF7 | Multi-objective training of Generative Adversarial Networks with multiple discriminators | Recent literature has demonstrated promising results on the training of Generative Adversarial Networks by employing a set of discriminators, as opposed to the traditional game involving one generator against a single adversary. Those methods perform single-objective optimization on some simple consolidation of the losses, e.g. an average. In this work, we revisit the multiple-discriminator approach by framing the simultaneous minimization of losses provided by different models as a multi-objective optimization problem. Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets. Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction computation can be done efficiently. Our results indicate that hypervolume maximization presents a better compromise between sample quality and diversity, and computational cost than previous methods. | rejected-papers | The reviewers found that paper is well written, clear and that the authors did a good job placing the work in the relevant literature. The proposed method for using multiple discriminators in a multi-objective setting to train GANs seems interesting and compelling. However, all the reviewers found the paper to be on the borderline. The main concern was the significance of the work in the context of existing literature. Specifically, the reviewers did not find the experimental results significant enough to be convinced that this work presents a major advance in GAN training. | val | [
"BkxqMAxkl4",
"r1eIdLBA14",
"ByeCEZOlJE",
"BklSxbdxkE",
"rJgbpOntnX",
"HJxRbLlRa7",
"H1gifE_paQ",
"Byg-vvVt6X",
"Ske4BPVYTX",
"rygTVw4KTQ",
"r1elJcCVpX",
"HJxOmF0ET7",
"B1gaR7CNpX",
"HyeH5yCEaX",
"rJeo4xjp37",
"rkxn9hTqhX"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We first thank the reviewer for her/his reply.\n\nThe more discriminators one can use the better. This is inline with findings in [1 - Theorem A.2]. We found 24 discriminators to work very well across different datasets and models.\n\nAs we have previously mentioned, the comparison between 1 and 24 discriminators ... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"r1eIdLBA14",
"B1gaR7CNpX",
"rkxn9hTqhX",
"rJeo4xjp37",
"iclr_2019_S1MB-3RcF7",
"H1gifE_paQ",
"HyeH5yCEaX",
"iclr_2019_S1MB-3RcF7",
"B1gaR7CNpX",
"HJxOmF0ET7",
"iclr_2019_S1MB-3RcF7",
"rJeo4xjp37",
"rkxn9hTqhX",
"rJgbpOntnX",
"iclr_2019_S1MB-3RcF7",
"iclr_2019_S1MB-3RcF7"
] |
iclr_2019_S1MQ6jCcK7 | ChoiceNet: Robust Learning by Revealing Output Correlations | In this paper, we focus on the supervised learning problem with corrupt training data. We assume that the training dataset is generated from a mixture of a target distribution and other unknown distributions. We estimate the quality of each data point by revealing the correlation between the generated distribution and the target distribution. To this end, we present a novel framework referred to here as ChoiceNet that can robustly infer the target distribution in the presence of inconsistent data. We demonstrate that the proposed framework is applicable to both classification and regression tasks. Particularly, ChoiceNet is evaluated in comprehensive experiments, where we show that it constantly outperforms existing baseline methods in the handling of noisy data in synthetic regression tasks as well as behavior cloning problems. In the classification tasks, we apply the proposed method to the MNIST and CIFAR-10 datasets and it shows superior performance in terms of robustness to different types of noisy labels. | rejected-papers | The paper addresses an interesting problem (learning in the presence of noisy labels) and provides extensive experiments. However, while the experiments in some sense cover a good deal of ground, reviewers raised issues with their quality, especially concerning baselines and depth (in terms of realism of the data). The authors provided many additional experiments during the rebuttal, but the reviewers did not find them sufficiently convincing.
"B1eAxJHGa7",
"B1eqVtIKRm",
"BkxWU_UtR7",
"Syxsn_8KRQ",
"BylM8YLYRQ",
"B1xx0YLtAm",
"H1lLgc8YCm",
"rklBiO8YAQ",
"ByeZ3mlo3m",
"SyxB2-Bd3m"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents an apparently original method targeted toward models training in the presence of low-quality or corrupted data. To accomplish this they introduce a \"mixture of correlated density network\" (MCDN), which processes representations from a backbone network, and the MCDN models the corrupted data g... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2019_S1MQ6jCcK7",
"ByeZ3mlo3m",
"B1eAxJHGa7",
"B1eAxJHGa7",
"ByeZ3mlo3m",
"SyxB2-Bd3m",
"SyxB2-Bd3m",
"B1eAxJHGa7",
"iclr_2019_S1MQ6jCcK7",
"iclr_2019_S1MQ6jCcK7"
] |
iclr_2019_S1MeM2RcFm | BlackMarks: Black-box Multi-bit Watermarking for Deep Neural Networks | Deep Neural Networks (DNNs) are increasingly deployed in cloud servers and autonomous agents due to their superior performance. The deployed DNN is either leveraged in a white-box setting (model internals are publicly known) or a black-box setting (only model outputs are known) depending on the application. A practical concern in the rush to adopt DNNs is protecting the models against Intellectual Property (IP) infringement. We propose BlackMarks, the first end-to-end multi-bit watermarking framework that is applicable in the black-box scenario. BlackMarks takes the pre-trained unmarked model and the owner’s binary signature as inputs. The output is the corresponding marked model with specific keys that can be later used to trigger the embedded watermark. To do so, BlackMarks first designs a model-dependent encoding scheme that maps all possible classes in the task to bit ‘0’ and bit ‘1’. Given the owner’s watermark signature (a binary string), a set of key image and label pairs is designed using targeted adversarial attacks. The watermark (WM) is then encoded in the distribution of output activations of the DNN by fine-tuning the model with a WM-specific regularized loss. To extract the WM, BlackMarks queries the model with the WM key images and decodes the owner’s signature from the corresponding predictions using the designed encoding scheme. We perform a comprehensive evaluation of BlackMarks’ performance on MNIST, CIFAR-10, ImageNet datasets and corroborate its effectiveness and robustness. BlackMarks preserves the functionality of the original DNN and incurs negligible WM embedding overhead as low as 2.054%. | rejected-papers | The reviews agree the paper is not ready for publication at ICLR. | val | [
"BJg3dt1yTX",
"ryxrFSh62m",
"r1gtuXeYh7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"A method for multi-bit watermarking of neural networks in a black-box setting is proposed. In particular, the authors demonstrate that the predictions of existing models can carry a multi-bit string that can later be used to verify ownership.\nExperiments on MNIST, CIFAR-10 and ImageNet are presented in addition t... | [
5,
4,
4
] | [
3,
4,
4
] | [
"iclr_2019_S1MeM2RcFm",
"iclr_2019_S1MeM2RcFm",
"iclr_2019_S1MeM2RcFm"
] |
iclr_2019_S1eB3sRqtm | Exploring Curvature Noise in Large-Batch Stochastic Optimization | Using stochastic gradient descent (SGD) with large batch-sizes to train deep neural networks is an increasingly popular technique. By doing so, one can improve parallelization by scaling to multiple workers (GPUs) and hence leading to significant reductions in training time. Unfortunately, a major drawback is the so-called generalization gap: large-batch training typically leads to a degradation in generalization performance of the model as compared to small-batch training. In this paper, we propose to correct this generalization gap by adding diagonal Fisher curvature noise to large-batch gradient updates. We provide a theoretical analysis of our method in the convex quadratic setting. Our empirical study with state-of-the-art deep learning models shows that our method not only improves the generalization performance in large-batch training but furthermore, does so in a way where the training convergence remains desirable and the training duration is not elongated. We additionally connect our method to recent works on loss surface landscape in the experimental section. | rejected-papers | Dear authors,
Your proposition of adding a noise scaling with the diagonal of the gradient covariance to the updates as a middle-ground between the identity and the full covariance is interesting and tackles the timely question of the links between optimization and generalization.
However, the reviewers had concerns about the experiments, which did not reveal to what extent each trick had an influence.
I would like to add that, even though the term Fisher is used for both the true Fisher and the empirical one, these two matrices encode very different kinds of information. In particular, the latter is only defined when there is a dataset. Hence, your case study (Section 3.2), which uses the true Fisher, does not apply to the empirical Fisher.
I encourage the authors to pursue this direction but to update the experimental section in order to highlight the impact of each technique used.
"BkeBSvna1E",
"B1lHKVmqCQ",
"SJgEZScp14",
"BJedjk56yE",
"HyeuKhg814",
"BylA5I6SyE",
"SyeuC9hH1N",
"BJxR7rTV1V",
"SylgkRc4J4",
"HJlZqUPEkV",
"HJlD8AIEyE",
"HJeyrOZyyV",
"SkeavTvnRX",
"rke-BB7qAX",
"rklLxHQcAQ",
"SyxV0EQqAQ",
"SJehhkWGpX",
"rJlPbhPypm",
"B1gCwCJzaQ",
"HJlX9S0bpm"... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_r... | [
"We agree that the optimization can't be used alone to choose the step-size. All we want to say is optimization is one of the criterions to choose the step size especially in the case that the difference in the training loss can be observed.\n\nBut how to choose the best step-size is orthogonal to our paper. Our ma... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"SJgEZScp14",
"iclr_2019_S1eB3sRqtm",
"HyeuKhg814",
"HJxOq24cnX",
"BylA5I6SyE",
"SyeuC9hH1N",
"BJxR7rTV1V",
"SylgkRc4J4",
"HJlD8AIEyE",
"SJehhkWGpX",
"HJeyrOZyyV",
"SkeavTvnRX",
"SyxV0EQqAQ",
"HJx0hZqPnm",
"ryg_FtoF2m",
"HJxOq24cnX",
"B1gCwCJzaQ",
"HJx0hZqPnm",
"HJlX9S0bpm",
"r... |
iclr_2019_S1eBzhRqK7 | Evolutionary-Neural Hybrid Agents for Architecture Search | Neural Architecture Search has recently shown potential to automate the design of Neural Networks. The use of Neural Network agents trained with Reinforcement Learning can offer the possibility to learn complex patterns, as well as the ability to explore a vast and compositional search space. On the other hand, evolutionary algorithms offer the greediness and sample efficiency needed for such an application, as each sample requires a considerable amount of resources. We propose a class of Evolutionary-Neural hybrid agents (Evo-NAS), that retain the best qualities of the two approaches. We show that the Evo-NAS agent can outperform both Neural and Evolutionary agents, both on a synthetic task, and on architecture search for a suite of text classification datasets. | rejected-papers | Reviewers are in a consensus and recommended to reject after engaging with the authors. Please take reviewers' comments into consideration to improve your submission should you decide to resubmit.
| train | [
"B1lCfqNOh7",
"SygXI27VTQ",
"BklviPgZ6Q",
"Syxf0zQch7"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a class of Evolutionary-Neural hybrid agents (Evo-NAS) to take advantage of both evolutionary algorithms and reinforcement learning algorithms for efficient neural architecture search. \n\n1. Doesn't explain how exactly the mutation action is learned, and missing the explanation of how RL acts o... | [
4,
-1,
5,
4
] | [
4,
-1,
2,
4
] | [
"iclr_2019_S1eBzhRqK7",
"B1lCfqNOh7",
"iclr_2019_S1eBzhRqK7",
"iclr_2019_S1eBzhRqK7"
] |
iclr_2019_S1eEdj0cK7 | On the Relationship between Neural Machine Translation and Word Alignment | Prior research suggests that attentional neural machine translation (NMT) is able to capture word alignment by attention; however, to our surprise, this almost fails for NMT models with multiple attentional layers, holding only for those with a single layer. This paper introduces two methods to induce word alignment from general neural machine translation models. Experiments verify that both methods obtain much better word alignment than the attention-based method. Furthermore, based on one of the proposed methods, we design a criterion to divide target words into two categories (i.e., those mostly contributed from source words, "CFS", and those mostly contributed from target words, "CFT"), and analyze word alignment under these two categories in depth. We find that although NMT models have difficulty capturing word alignment for CFT words, these words do not sacrifice translation quality significantly, which provides an explanation of why NMT is more successful for translation yet worse for word alignment compared to statistical machine translation. We further demonstrate that word alignment errors for CFS words are responsible for translation errors to some extent by measuring the correlation between word alignment and translation for several NMT systems. | rejected-papers | This paper examines the relationship between attention and alignment in NMT. The reviewers all agreed that this is a valuable topic that is worth thinking about.
However, there were concerns both about the clarity of the paper and the framing with respect to previous work. First, it was hard for some reviewers to understand exactly what the paper was trying to do, due to issues with the paper's structure. Second, there are a number of previous works that also examine similar concepts, and the description of how the proposed method differs seemed lacking.
Due to these issues, I cannot recommend it for acceptance in its current form. | train | [
"rygE92FMA7",
"B1l-APxhTm",
"B1gxs8AZR7",
"B1ez_-YeC7",
"HkxSe15qTQ",
"BJxkBrOipm",
"rylCpWWiaX",
"rkeFI1596X",
"HJgr2AK96X",
"rkefoTRt2X",
"rylolAUKn7",
"SkxwFxnu2Q"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your comment! We have added AER of the Transformer attention to the table3 in the new manuscript.",
"Thank you for the constructive suggestion.\n\nWe have added the experiments to compare the alignment performance on different\ntranslation models in section 4.3. Because training multiple-layers RNN is... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"B1gxs8AZR7",
"BJxkBrOipm",
"B1l-APxhTm",
"iclr_2019_S1eEdj0cK7",
"rylolAUKn7",
"HJgr2AK96X",
"rkeFI1596X",
"SkxwFxnu2Q",
"rkefoTRt2X",
"iclr_2019_S1eEdj0cK7",
"iclr_2019_S1eEdj0cK7",
"iclr_2019_S1eEdj0cK7"
] |
iclr_2019_S1eEmn05tQ | Uncertainty in Multitask Transfer Learning | Using variational Bayes neural networks, we develop an algorithm capable of accumulating knowledge into a prior from multiple different tasks. This results in a rich prior capable of few-shot learning on new tasks. The posterior can go beyond the mean field approximation and yields good uncertainty on the performed experiments. Analysis on toy tasks shows that it can learn from significantly different tasks while finding similarities among them. Experiments on Mini-Imagenet reach state of the art with 74.5% accuracy on 5 shot learning. Finally, we provide two new benchmarks, each showing a failure mode of existing meta learning algorithms such as MAML and Prototypical Networks. | rejected-papers | This paper presents a meta-learning approach which relies on a learned prior over neural networks for different tasks.
The reviewers found this work to be well-motivated and timely. While there are some concerns regarding experiments, the miniImageNet results seem to have impressed some reviewers.
However, all reviewers found the presentation to be inaccurate in more than one points. R1 points out to "issues with presentation" for the hierarchical Bayes motivation, R2 mentions that the motivation and derivation in Section 2 is "misleading" and R3 talks about "short presentation shortcomings".
R3 also raises important concerns about the correctness of the derivation. The authors replied to the correctness critique by explaining that the paper has been proofread by strong mathematicians; however, they do not specifically rebut R3's points. The authors requested that R3 point more specifically to the location of the error; however, it seems that R3 had already explained the source of the concern in a very detailed manner, including detailed equations.
There have been other raised issues, such as concerns about experimental evaluation. However, the reviewers' almost complete agreement in the presentation issue is a clear signal that this paper needs to be substantially re-worked. | train | [
"HkxmQr9ACm",
"HJxoLQm5AQ",
"BylIl6z5C7",
"HygDbeg50X",
"SklZMRiEjQ",
"HJlrvD5Gp7",
"H1eeyvnq2Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Following and the other reviewers I would indeed suggest the whole rewrite of section 2.2 and not talking about Hierarchical Bayes, but rather the VI framework that you actually use in practice. I do feel this is currently a major issue and needs significant addressing to make the paper publication ready. \n\n\"Be... | [
-1,
-1,
-1,
-1,
4,
3,
2
] | [
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"HJxoLQm5AQ",
"SklZMRiEjQ",
"HJlrvD5Gp7",
"H1eeyvnq2Q",
"iclr_2019_S1eEmn05tQ",
"iclr_2019_S1eEmn05tQ",
"iclr_2019_S1eEmn05tQ"
] |
iclr_2019_S1eFtj0cKQ | Generative Models from the perspective of Continual Learning | Which generative model is the most suitable for Continual Learning? This paper aims at evaluating and comparing generative models on disjoint sequential image generation tasks. We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay and fine-tuning. We used two quantitative metrics to estimate the generation quality and memory ability. We experiment with sequential tasks on three commonly used benchmarks for Continual Learning (MNIST, Fashion MNIST and CIFAR10). We found that among all models, the original GAN performs best and among Continual Learning strategies, generative replay outperforms all other methods. Even if we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly instable, and remains a challenge. | rejected-papers | This paper presents empirical evaluation and comparison of different generative models (such as GANs and VAE) in the continual learning setting.
To avoid catastrophic forgetting, the following strategies are considered: rehearsal, regularization, generative replay and fine-tuning. The empirical evaluations are carried out using three datasets (MNIST, Fashion MNIST and CIFAR).
While all reviewers and AC acknowledge the importance and potential usefulness of studying and comparing different generative models in continual learning, they raised several important concerns that place this paper below the acceptance bar: (1) in an empirical study paper, an in-depth analysis and more insightful evaluations are required to better understand the benefits and shortcomings of the available models (R1 and R2), e.g. analyzing why generative replay fails to improve VAE, why rehearsal is better for likelihood models, and in general why certain combinations are more effective than others – see more suggestions in R1’s and R2’s comments. The authors discussed some of these questions in their response to the reviews, but a more detailed analysis is required to fully understand the benefits of this empirical study. (2) The evaluation is geared towards quality metrics for the generative models and lacks evaluation for catastrophic forgetting in continual learning (hence it favours GAN models) -- see R3’s suggestion for how to improve.
To conclude, the reviewers and AC suggest that in its current state the manuscript is not ready for a publication. We hope the reviews are useful for improving and revising the paper.
| train | [
"r1eXzFuF0Q",
"HkgkhddtAQ",
"S1g1x_OYA7",
"ryxp0IOYRX",
"SkxPAq452m",
"SJg-oxTunm",
"Hyx3Jj1qnQ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer, \nThanks for the thorough review, the comments are particularly helpful. \n- \"Is the reason GANs are better than likelihood models with generative replay purely because of sample quality? Or is it sufficient for the generator to learn some key characteristics for a class that leads to sufficient di... | [
-1,
-1,
-1,
-1,
4,
4,
5
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"SJg-oxTunm",
"Hyx3Jj1qnQ",
"SkxPAq452m",
"iclr_2019_S1eFtj0cKQ",
"iclr_2019_S1eFtj0cKQ",
"iclr_2019_S1eFtj0cKQ",
"iclr_2019_S1eFtj0cKQ"
] |
iclr_2019_S1eVe2AqKX | PCNN: Environment Adaptive Model Without Finetuning | Convolutional Neural Networks (CNNs) have achieved tremendous success for many computer vision tasks, which shows a promising perspective of deploying CNNs on mobile platforms. An obstacle to this promising perspective is the tension between intensive resource consumption of CNNs and limited resource budget on mobile platforms. Existing works generally utilize a simpler architecture with lower accuracy for a higher energy-efficiency, \textit{i.e.}, trading accuracy for resource consumption. An emerging opportunity for both increasing accuracy and decreasing resource consumption is \textbf{class skew}, \textit{i.e.}, the strong temporal and spatial locality of the appearance of classes. However, it is challenging to efficiently utilize the class skew due to both the frequent switches and the huge number of class skews. Existing works use transfer learning to adapt the model towards the class skew during runtime, which consumes resources intensively. In this paper, we propose the \textbf{probability layer}, an \textit{easily-implemented and highly flexible add-on module} to adapt the model efficiently during runtime \textit{without any fine-tuning}, achieving an \textit{equivalent or better} performance than transfer learning. Further, both \textit{increasing accuracy} and \textit{decreasing resource consumption} can be achieved during runtime through the combination of the probability layer and pruning methods. | rejected-papers | All reviewers rate the paper as below threshold. While the authors responded to an earlier request for clarification, there is no rebuttal to the actual reviews. Thus, there is no basis by which the paper can be accepted.
"HylkPTJ0hX",
"S1glW4P5hX",
"SkgZr2nwh7",
"HJl1rrQdnX",
"r1eLxATDhm",
"Syen8Xawn7",
"SkgDx7nD3X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"The paper proposes a simple idea to calibrate probabilities outputted by a CNN model to adapt easily to environments where class distributions change with space and time (and are often skewed). The paper shows that such a simple approach is sufficient to get good accuracies without requiring any costly retraining ... | [
4,
3,
4,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2019_S1eVe2AqKX",
"iclr_2019_S1eVe2AqKX",
"iclr_2019_S1eVe2AqKX",
"r1eLxATDhm",
"Syen8Xawn7",
"SkgDx7nD3X",
"iclr_2019_S1eVe2AqKX"
] |
iclr_2019_S1eX-nA5KX | VHEGAN: Variational Hetero-Encoder Randomized GAN for Zero-Shot Learning | To extract and relate visual and linguistic concepts from images and textual descriptions for text-based zero-shot learning (ZSL), we develop a variational hetero-encoder (VHE) that decodes text via a deep probabilistic topic model, the variational posterior of whose local latent variables is encoded from an image via a Weibull distribution based inference network. To further improve VHE and add an image generator, we propose the VHE randomized generative adversarial net (VHEGAN) that exploits the synergy between VHE and GAN through their shared latent space. After training with a hybrid stochastic-gradient MCMC/variational inference/stochastic gradient descent inference algorithm, VHEGAN can be used in a variety of settings, such as text generation/retrieval conditioning on an image, image generation/retrieval conditioning on a document/image, and generation of text-image pairs. The efficacy of VHEGAN is demonstrated quantitatively with experiments on both conventional and generalized ZSL tasks, and qualitatively on (conditional) image and/or text generation/retrieval. | rejected-papers | The paper received borderline ratings due to concerns regarding novelty and experimental results/settings (e.g. zero-shot learning). On my side, I believe that the proposed method would need more evaluations on other benchmarks (e.g., SUN, AWA1 and AWA2) for both ZSL and GZSL settings to make the results more convincing. Overall, none of the reviewers championed this paper and I would recommend weak rejection.
"HkxBMZfiAX",
"SJed3mcuC7",
"HkgzAN5u0m",
"BkgfOBcOAQ",
"Hyg7NsYJaX",
"rkeaSSZc3X",
"Syl6BrmO2Q",
"S1lGVauM9X"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Dear Reviewers,\n\nThank you for your constructive feedback. We have added more discussions in our revision to explain why we choose certain components to construct the proposed VHEGAN.\n\nAs also noted in your reviews, VHEGAN is not limited to the ZSL application. We choose to focus on ZSL mainly because 1) it is... | [
-1,
-1,
-1,
-1,
5,
5,
5,
-1
] | [
-1,
-1,
-1,
-1,
4,
5,
5,
-1
] | [
"iclr_2019_S1eX-nA5KX",
"Hyg7NsYJaX",
"Syl6BrmO2Q",
"rkeaSSZc3X",
"iclr_2019_S1eX-nA5KX",
"iclr_2019_S1eX-nA5KX",
"iclr_2019_S1eX-nA5KX",
"iclr_2019_S1eX-nA5KX"
] |
iclr_2019_S1e_H3AqYQ | Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification | Text classification must sometimes be applied in situations with no training data in a target language. However, training data may be available in a related language. We introduce a cross-lingual document classification framework CACO between related language pairs. To best use limited training data, our transfer learning scheme exploits cross-lingual subword similarity by jointly training a character-based embedder and a word-based classifier. The embedder derives vector representations for input words from their written forms, and the classifier makes predictions based on the word vectors. We use a joint character representation for both the source language and the target language, which allows the embedder to generalize knowledge about source language words to target language words with similar forms. We propose a multi-task objective that can further improve the model if additional cross-lingual or monolingual resources are available. CACO models trained under low-resource settings rival cross-lingual word embedding models trained under high-resource settings on related language pairs.
| rejected-papers | The paper's contribution lies in using cross-lingual sharing of subword representations for improving document classification. The paper presents interesting models and results.
While the paper is good (two out of three reviewers are happy about it), I do agree with the reviewer who suggests the experimentation with relatively dissimilar languages and showing whether or not the approach works for those cases. I am also not very happy with the author response to the reviewer. Moreover, I think the paper could improve further if the authors presented experiments on more tasks apart from document classification. | train | [
"S1gw8ImdhX",
"H1xySl3F0X",
"Bygm3enFCQ",
"Hyl7wenF0m",
"Sklq012KAX",
"rJe977-j2Q",
"Hylf35yihX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Overview:\n\nThis paper proposes an approach to document classification in a low-resource language using transfer learning from a related higher-resource language. For the case where limited resources are available in the target low-resource language (e.g. a dictionary, pretrained embeddings, parallel text), multi... | [
6,
-1,
-1,
-1,
-1,
4,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_S1e_H3AqYQ",
"Hylf35yihX",
"iclr_2019_S1e_H3AqYQ",
"rJe977-j2Q",
"S1gw8ImdhX",
"iclr_2019_S1e_H3AqYQ",
"iclr_2019_S1e_H3AqYQ"
] |
iclr_2019_S1e_ssC5F7 | Hyper-Regularization: An Adaptive Choice for the Learning Rate in Gradient Descent | We present a novel approach for adaptively selecting the learning rate in gradient descent methods. Specifically, we impose a regularization term on the learning rate via a generalized distance, and cast the joint updating process of the parameter and the learning rate into a maxmin problem. Some existing schemes such as AdaGrad (diagonal version) and WNGrad can be rederived from our approach. Based on our approach, the updating rules for the learning rate do not rely on the smoothness constant of optimization problems and are robust to the initial learning rate. We theoretically analyze our approach in full batch and online learning settings, which achieves comparable performance to other first-order gradient-based algorithms in terms of accuracy as well as convergence rate. | rejected-papers | All three reviewers found that the motivation for the proposed method was lacking and recommend rejection. The AC thus recommends the authors to take these comments into consideration when revising their manuscript. | val | [
"rygyfBVelV",
"S1gxz6cfkN",
"ryleH4UBA7",
"BJeQlWUHRX",
"HJgg6z8H0m",
"H1g51qT92m",
"S1lb8qKMnm",
"Syeq6mf-3m"
] | [
"public",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Hello,\n\nDuring the implementation of the 2d neural network on MNIST using the proposed algorithm, I got a problem. The initial value of gradient is big (since I put the random values in Ws and they are not probably close to the optimum) so when I use this new learning rate, it doesn't converge. I want to ask if ... | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iclr_2019_S1e_ssC5F7",
"ryleH4UBA7",
"Syeq6mf-3m",
"H1g51qT92m",
"S1lb8qKMnm",
"iclr_2019_S1e_ssC5F7",
"iclr_2019_S1e_ssC5F7",
"iclr_2019_S1e_ssC5F7"
] |
iclr_2019_S1ej8o05tm | Object detection deep learning networks for Optical Character Recognition | In this article, we show how we applied a simple approach coming from deep learning networks for object detection to the task of optical character recognition in order to build image features tailored for documents. In contrast to scene text reading in natural images using networks pretrained on ImageNet, our document reading is performed with small networks inspired by the MNIST digit recognition challenge, at a small computational budget and a small stride. Modern object detection frameworks allow a direct end-to-end training, with no other algorithm than the deep learning and the non-max-suppression algorithm to filter the duplicate predictions. The trained weights can be used for higher level models, such as, for example, document classification, or document segmentation.
| rejected-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
The paper tackles an interesting and relevant problem for ICLR: optical character recognition in document images.
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
- The authors propose to use small networks to localize text in document images, claiming that for document images smaller networks work better than standard SOTA networks for scene text. As pointed out in the reviews, the authors didn't make any comparisons to SOTA object detection networks (trained either on scene text or on document images) so their central claim has not been experimentally verified.
- The reviewers were unanimous that the work lacks novelty as object detection pipelines have already been used for OCR so a contribution of considering smaller detection networks is minor.
- There were serious issues with formatting and clarity.
These three issues all informed the final decision.
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
There were no major points of contention and no author feedback.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers reached a consensus that the paper should be rejected.
| val | [
"BkxgKFy5hQ",
"rJgd4DB4TQ",
"Hkg0A3UJ67",
"rkg9q-Wc3m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper applied an object detection network, like SSD, for optical character detection and recognition. This paper doesn't give any new contributions and has no potential values.\n\nweakness:\n1. the paper is lack of novelty and the motivation is weak. I even can't find any contribution to OCR or object detecti... | [
2,
1,
2,
1
] | [
5,
5,
5,
5
] | [
"iclr_2019_S1ej8o05tm",
"iclr_2019_S1ej8o05tm",
"iclr_2019_S1ej8o05tm",
"iclr_2019_S1ej8o05tm"
] |
iclr_2019_S1en0sRqKm | On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent | Increasing the mini-batch size for stochastic gradient descent offers significant opportunities to reduce wall-clock training time, but there are a variety of theoretical and systems challenges that impede the widespread success of this technique (Das et al., 2016; Keskar et al., 2016). We investigate these issues, with an emphasis on time to convergence and total computational cost, through an extensive empirical analysis of network training across several architectures and problem domains, including image classification, image segmentation, and language modeling. Although it is common practice to increase the batch size in order to fully exploit available computational resources, we find a substantially more nuanced picture. Our main finding is that across a wide range of network architectures and problem domains, increasing the batch size beyond a certain point yields no decrease in wall-clock time to convergence for either train or test loss. This batch size is usually substantially below the capacity of current systems. We show that popular training strategies for large batch size optimization begin to fail before we can populate all available compute resources, and we show that the point at which these methods break down depends more on attributes like model architecture and data complexity than it does directly on the size of the dataset. | rejected-papers | The paper presents an interesting empirical analysis showing that increasing the batch size beyond a certain point yields no decrease in time to convergence. This is an interesting finding, since it indicates that parallelisation approaches might have their limits. On the other hand, the study does not allow the practitioners to tune their hyperparameters since the optimal batch size is dependent on the model architecture and the dataset. 
Furthermore, as also pointed out in an anonymous comment, the batch size is VERY large compared to the size of the benchmark sets. Therefore, it would be nice to see if the observation carries over to large-scale data sets, where the number of samples in the mini-batch is still small compared to the total number of samples. | test | [
"rygQP9i_0Q",
"rJg-T_iO0X",
"SygQyusu0X",
"BygHDPou0X",
"BJepmDsdRQ",
"SyeA0WaOT7",
"SylNiW91T7",
"SJxGhB28hX",
"rJgLKxFVhX"
] | [
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments. As you point out, Ma et al. (2017) have already shown that increasing the batch size indefinitely eventually stops yielding any improvement in convergence speed. Missing from this theoretical analysis is a prediction of what exact batch size is too large, rendering their results of li... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"SylNiW91T7",
"rJgLKxFVhX",
"SJxGhB28hX",
"SyeA0WaOT7",
"iclr_2019_S1en0sRqKm",
"iclr_2019_S1en0sRqKm",
"iclr_2019_S1en0sRqKm",
"iclr_2019_S1en0sRqKm",
"iclr_2019_S1en0sRqKm"
] |
iclr_2019_S1ey2sRcYQ | Direct Optimization through argmax for Discrete Variational Auto-Encoder | Reparameterization of variational auto-encoders is an effective method for reducing the variance of their gradient estimates. However, when the latent variables are discrete, a reparameterization is problematic due to discontinuities in the discrete space. In this work, we extend the direct loss minimization technique to discrete variational auto-encoders. We first reparameterize a discrete random variable using the argmax function of the Gumbel-Max perturbation model. We then use direct optimization to propagate gradients through the non-differentiable argmax using two perturbed argmax operations.
| rejected-papers | The paper presents a novel gradient estimator for optimizing VAEs with discrete latents that is based on using a Direct Loss Minimization approach (as initially developed for structured prediction) on top of the Gumbel-max trick. This is an interesting and original alternative to the use of REINFORCE or Gumbel-Softmax. The approach is mathematically well detailed, but the exposition could be easier to follow if it used a more standard notation. After clarifications by the authors, reviewers agreed that the main theorem is correct. The proposed method is shown empirically to converge faster than Gumbel-softmax, REBAR, and RELAX baselines in number of epochs. However, as questioned by one reviewer, the proposed method appears to require many more forward passes (evaluations) of the decoder for each example. Authors replied by highlighting that an argmax can be more computationally efficient than softmax (in cases when the discrete latent space is structured), and also clarified in the paper their use of an essential computational approximation they make for discrete product spaces. These are important aspects that affect computational complexity. But they do not address the question raised about using significantly more decoder evaluations for each example. A fair comparison for sampling-based gradient estimation methods should rest on the actual number of decoder evaluations and on the resulting timing. The paper currently does not sufficiently discuss the computational complexity of the proposed estimator against alternatives, nor take this essential aspect into account in the empirical comparisons it reports.
We encourage the authors to refocus the paper and fully develop and showcase a use case where the approach could yield a clear computational advantage, like the structured encoder setting they mentioned in the rebuttal.
| train | [
"rylSgPw367",
"BJxSXov1RQ",
"S1gDwY7Na7",
"rJefZTGLnQ",
"SJeixZ5NJ4",
"BJedFgq41V",
"BkewQsl62Q"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank you for your thoughtful comments. Here are answers to the concerns raised by the review. We complement these answers with a revised submission (appendix and paper)\n\nWall-clock time for the CelebA with $10$ binary attributes takes 0.13 seconds for Gumbel-Softmax and 0.06 seconds for Gumbel-Max, when the ... | [
-1,
-1,
7,
5,
-1,
-1,
7
] | [
-1,
-1,
4,
4,
-1,
-1,
4
] | [
"rJefZTGLnQ",
"S1gDwY7Na7",
"iclr_2019_S1ey2sRcYQ",
"iclr_2019_S1ey2sRcYQ",
"BJedFgq41V",
"BJxSXov1RQ",
"iclr_2019_S1ey2sRcYQ"
] |
iclr_2019_S1fDssA5Y7 | Distributionally Robust Optimization Leads to Better Generalization: on SGD and Beyond | In this paper, we adopt distributionally robust optimization (DRO) (Ben-Tal et al., 2013) in hope to achieve a better generalization in deep learning tasks. We establish the generalization guarantees and analyze the localized Rademacher complexity for DRO, and conduct experiments to show that DRO obtains a better performance. We reveal the profound connection between SGD and DRO, i.e., selecting a batch can be viewed as choosing a distribution over the training set. From this perspective, we prove that SGD is prone to escape from bad stationary points and small batch SGD outperforms large batch SGD. We give an upper bound for the robust loss when SGD converges and keeps stable. We propose a novel Weighted SGD (WSGD) algorithm framework, which assigns high-variance weights to the data of the current batch. We devise a practical implementation of WSGD that can directly optimize the robust loss. We test our algorithm on CIFAR-10 and CIFAR-100, and WSGD achieves significant improvements over the conventional SGD. | rejected-papers | This paper received high quality reviews, which highlighted numerous issues with the paper. A common criticism was that the results in the paper seemed disconnected. Numerous technical concerns were raised. Reading the responses, it seems that some of these issues are non-issues, but it seems also that the writing was not sufficiently up to the standard required of this type of technical work. I suggest the authors produce a rewrite and resubmit to the next ML conference, taking the criticisms they've received here very seriously. | train | [
"B1xf4Gbf0Q",
"HJgjIKqF0Q",
"SkxeTTP8Am",
"S1gcqpwICm",
"HylnGTP8AQ",
"SkeZ7hDL07",
"r1gREiBIa7",
"SkxDmWLphX",
"BJg0BCNW9Q",
"HJlGy6ogqQ",
"Skg4rIslqm",
"SkgPhovg57",
"Skgpnjql9X",
"r1eMXmYl57",
"B1lZOdyx5Q",
"S1lr7gmJcX",
"rJlmXAkAY7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"author",
"public",
"public",
"author",
"public"
] | [
"The paper aims to connect \"distributionally robust optimization\" (DRO) with stochastic gradient descent. The paper purports to explain how SGD escapes from bad local optima and purports to use (local) Rademacher averages (actually, a generalization defined for the robust loss) to explain the generalization perf... | [
3,
-1,
-1,
-1,
-1,
-1,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_S1fDssA5Y7",
"SkxDmWLphX",
"r1gREiBIa7",
"r1gREiBIa7",
"r1gREiBIa7",
"B1xf4Gbf0Q",
"iclr_2019_S1fDssA5Y7",
"iclr_2019_S1fDssA5Y7",
"HJlGy6ogqQ",
"Skg4rIslqm",
"Skgpnjql9X",
"B1lZOdyx5Q",
"r1eMXmYl57",
"SkgPhovg57",
"S1lr7gmJcX",
"rJlmXAkAY7",
"iclr_2019_S1fDssA5Y7"
] |
iclr_2019_S1fcnoR9K7 | Learning with Random Learning Rates. | Hyperparameter tuning is a bothersome step in the training of deep learning models. One of the most sensitive hyperparameters is the learning rate of the gradient descent. We present the All Learning Rates At Once (Alrao) optimization method for neural networks: each unit or feature in the network gets its own learning rate sampled from a random distribution spanning several orders of magnitude. This comes at practically no computational cost. Perhaps surprisingly, stochastic gradient descent (SGD) with Alrao performs close to SGD with an optimally tuned learning rate, for various architectures and problems. Alrao could save time when testing deep learning models: a range of models could be quickly assessed with Alrao, and the most promising models could then be trained more extensively. This text comes with a PyTorch implementation of the method, which can be plugged on an existing PyTorch model. | rejected-papers | The paper proposes a new optimization approach for neural nets where, instead of a fixed learning rate (often hard to tune), there is one learning rate per unit, randomly sampled from a distribution. Reviewers think the idea is novel, original and simple. Overall, reviewers found the experiments not convincing enough in practice. I found the paper really borderline, and decided to side with the reviewers in rejecting the paper. | train | [
"HyeBGxI52Q",
"SJe7y-kmyE",
"Hyly32Fqhm",
"r1lrajxM07",
"SJgycsgMAQ",
"rklxDjxz07",
"HyleC5eMR7",
"rkg4HYxL27"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"POST REBUTTAL: I think the paper is decent, there are some significant downsides to the method but it could constitute a first step towards a more mature learning-rate-free method. However, in its current state the paper is left with some gaping holes in its experiment section. The authors tried to add experiments... | [
4,
-1,
6,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_S1fcnoR9K7",
"rklxDjxz07",
"iclr_2019_S1fcnoR9K7",
"rkg4HYxL27",
"HyeBGxI52Q",
"Hyly32Fqhm",
"iclr_2019_S1fcnoR9K7",
"iclr_2019_S1fcnoR9K7"
] |
iclr_2019_S1g2V3Cct7 | Experience replay for continual learning | Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions. This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills. We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence. Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience. While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution - that of using experience replay buffers for all past events - with a mixture of on- and off-policy learning, leveraging behavioral cloning. We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities. When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one. | rejected-papers | This paper and revisions have some interesting insights into using ER for catastrophic forgetting, and comparisons to other methods for reducing catastrophic forgetting. 
However, the paper is currently pitched as the first to notice that ER can be used for this purpose, whereas it was well explored in the cited paper "Selective Experience Replay for Lifelong Learning", 2018. For example, the abstract says "While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution – that of using experience replay buffers for all past events". It seems unnecessary to claim this as a main contribution in this work. Rather, the main contributions seem to be to include behavioural cloning, and do provide further empirical evidence that selective ER can be effective for catastrophic forgetting.
Further, to make the paper even stronger, it would be interesting to better understand even smaller replay buffers. A buffer size of 5 million is still quite large. What is a realistic size for continual learning? Hypothesizing how ER can be part of a real continual learning solution, which will likely have more than 3 tasks, is important to understand how to properly restrict the buffer size.
Finally, it is recommended to reconsider the strong stance on catastrophic interference and forgetting. Catastrophic interference has been considered for incremental training, where recent updates can interfere with estimates for older (or other values). This definition does not precisely match the provided definition in the paper. Further, it is true that forgetting has often been used explicitly for multiple tasks, trained in sequence; however, the issues are similar (new learning overriding older learning). These two definitions need not be so separate, and further it is not clear that the provided definitions are congruent with older literature on interference.
Overall, there is most definitely useful ideas and experiments in this paper, but it is as yet a bit preliminary. Improvements on placement, motivation and experimental choices would make this work much stronger, and provide needed clarity on the use of ER for forgetting. | train | [
"S1e7Vx12JV",
"rklWJVN5Rm",
"B1WC7GV9C7",
"BJxnm07qC7",
"Skg0ier92Q",
"Hyg6qmSY27",
"Hklq4yMYn7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We are grateful to all reviewers for their time and helpful suggestions. Our goal in this paper was to demonstrate that a surprisingly simple replay-based approach (CLEAR) dramatically reduces catastrophic forgetting in reinforcement learning tasks, improving upon existing methods while not requiring information a... | [
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"iclr_2019_S1g2V3Cct7",
"Hklq4yMYn7",
"Hyg6qmSY27",
"Skg0ier92Q",
"iclr_2019_S1g2V3Cct7",
"iclr_2019_S1g2V3Cct7",
"iclr_2019_S1g2V3Cct7"
] |
iclr_2019_S1g9N2A5FX | Interpretable Continual Learning | We present a framework for interpretable continual learning (ICL). We show that explanations of previously performed tasks can be used to improve performance on future tasks. ICL generates a good explanation of a finished task, then uses this to focus attention on what is important when facing a new task. The ICL idea is general and may be applied to many continual learning approaches. Here we focus on the variational continual learning framework to take advantage of its flexibility and efficacy in overcoming catastrophic forgetting. We use saliency maps to provide explanations of performed tasks and propose a new metric to assess their quality. Experiments show that ICL achieves state-of-the-art results in terms of overall continual learning performance as measured by average classification accuracy, and also in terms of its explanations, which are assessed qualitatively and quantitatively using the proposed metric. | rejected-papers | The presented method proposes to use saliency maps as a component for an additional metric of forgetting in continual learning, and as a tool as additional information to improve learning on new tasks.
Pros:
+ R2 & R3: Clearly written and easy to follow.
+ R3: New metric to compare saliency masks
+ R3: Interesting idea to utilize previously learned saliency masks to augment learning new tasks.
+ R1: Performance improvements observed.
Cons:
- R1 & R2: Novelty is limited in the context of prior works in this field. Unanswered by authors.
- R2: Concerns around method's ability to use salient but disconnected components. Unanswered by authors.
- R2: Experiments needed on more realistic datasets, such as ImageNet. Unanswered by authors.
- R3: Performance gains are small.
- R1 & R2: Literature review is insufficient.
Reviewers are leaning reject, and R2's concerns have not been answered by the authors at all. The idea seems interesting; the authors are encouraged to take into careful consideration the feedback from the reviewers and continue their research. | train | [
"B1xXNM8qpX",
"Bkggfq0VTm",
"HkeYvOCVTm",
"HJeXDu9h2X",
"ryg4Fgr9hX"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Authors propose an incremental continual learning framework which is based on saliency maps on the learned tasks(i.e., explanations) with the ultimate goal of learning new tasks, while avoiding catastrophic forgetting. To this end, authors employ an attention mechanism based on average saliency masks computed on t... | [
4,
-1,
-1,
5,
6
] | [
4,
-1,
-1,
3,
3
] | [
"iclr_2019_S1g9N2A5FX",
"ryg4Fgr9hX",
"HJeXDu9h2X",
"iclr_2019_S1g9N2A5FX",
"iclr_2019_S1g9N2A5FX"
] |
iclr_2019_S1gARiAcFm | Modeling Dynamics of Biological Systems with Deep Generative Neural Networks | Biological data often contains measurements of dynamic entities such as cells or organisms in various states of progression. However, biological systems are notoriously difficult to describe analytically due to their many interacting components, and in many cases, the technical challenge of taking longitudinal measurements. This leads to difficulties in studying the features of the dynamics, for example, the drivers of the transition. To address this problem, we present a deep neural network framework we call Dynamics Modeling Network or DyMoN. DyMoN is a neural network framework trained as a deep generative Markov model whose next state is a probability distribution based on the current state. DyMoN is well-suited to the idiosyncrasies of biological data, including noise, sparsity, and the lack of longitudinal measurements in many types of systems. Thus, DyMoN can be trained using probability distributions derived from the data in any way, such as trajectories derived via dimensionality reduction methods, and does not require longitudinal measurements. We show the advantage of learning deep models over shallow models such as Kalman filters and hidden Markov models that do not learn representations of the data, both in terms of learning embeddings of the data and also in terms of training efficiency, accuracy and ability to multitask. We perform three case studies of applying DyMoN to different types of biological systems and extracting features of the dynamics in each case by examining the learned model. | rejected-papers | The paper tackles an interesting problem, which is effectively modeling biological time-series data. 
The advantages of deep neural networks over structured models like HMMs are their ability to learn features from the data, whereas probabilistic graphical models suffer from "model mismatch", where the available data must be carefully processed in order to fit the assumptions of the PGM. Any work advancing this topic would be extremely welcome in the world of machine learning in biology.
However, the reviewers each raised individual concerns about the paper regarding its clarity and quality, and the authors did not respond. Thus, the reviewers scores remain unchanged, and the rough consensus is a rejection. | train | [
"HJgeVhWa3Q",
"SyehoaKh3m",
"rJl9Nkl5h7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper tackles the important challenge of making sense of temporal measurements made in biological systems. Among other, those have the peculiarity that they are not independent but with a dependency structure, which can be encoded as a graph or a network. The authors claim that their approach, DyMoN is adapte... | [
6,
4,
3
] | [
2,
5,
5
] | [
"iclr_2019_S1gARiAcFm",
"iclr_2019_S1gARiAcFm",
"iclr_2019_S1gARiAcFm"
] |
iclr_2019_S1gBgnR9Y7 | End-to-end learning of pharmacological assays from high-resolution microscopy images | Predicting the outcome of pharmacological assays based on high-resolution microscopy
images of treated cells is a crucial task in drug discovery which tremendously
increases discovery rates. However, end-to-end learning on these images
with convolutional neural networks (CNNs) has not been ventured for this task
because it has been considered infeasible and overly complex. On the largest
available public dataset, we compare several state-of-the-art CNNs trained in an
end-to-end fashion with models based on a cell-centric approach involving segmentation.
We found that CNNs operating on full images containing hundreds
of cells perform significantly better at assay prediction than networks operating
on a single-cell level. Surprisingly, we could predict 29% of the 209 pharmacological
assays at high predictive performance (AUC > 0.9). We compared a
novel CNN architecture called “GapNet” against four competing CNN architectures
and found that it performs on par with the best methods and at the same time
has the lowest training time. Our results demonstrate that end-to-end learning on
high-resolution imaging data is not only possible but even outperforms cell-centric
and segmentation-dependent approaches. Hence, the costly cell segmentation and
feature extraction steps are not necessary, in fact they even hamper predictive performance.
Our work further suggests that many pharmacological assays could
be replaced by high-resolution microscopy imaging together with convolutional
neural networks. | rejected-papers | This work studies the performance of several end-to-end CNN architectures for the prediction of biomedical assays in microscopy images. One of the architectures, GAPnet, is a minor modification of existing global average pooling (GAP) networks, involving skip connections and concatenations. The technical novelty is low, as outlined by several reviewers and confirmed by the authors, since most of the value of the work lies in the empirical evaluation of existing methods, or minor variants thereof.
Given the low technical novelty and reviewer consensus, the recommendation is to reject; however, the area chair recognizes that the discovered utility may be of value to the biomedical community. The authors are encouraged to use reviewer feedback to improve the work and submit to a biomedical imaging venue for dissemination to the appropriate communities.
| train | [
"B1xmKZJcRm",
"H1gG1xkqRX",
"SklS6DpF0Q",
"rye6wDTFRm",
"BkxIVj-92m",
"rJgQ_qzq3Q",
"SJeOBN-qh7"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I appreciate the authors' responses to my comments; however, they do not really address my concerns about the contribution of the empirical comparison. I believe a revised version of the paper which addresses some of the questions which are still open (in particular, 3 and 4) would significantly improve the contri... | [
-1,
-1,
-1,
-1,
3,
6,
5
] | [
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"H1gG1xkqRX",
"BkxIVj-92m",
"SJeOBN-qh7",
"rJgQ_qzq3Q",
"iclr_2019_S1gBgnR9Y7",
"iclr_2019_S1gBgnR9Y7",
"iclr_2019_S1gBgnR9Y7"
] |
iclr_2019_S1gBz2C9tX | Importance Resampling for Off-policy Policy Evaluation | Importance sampling is a common approach to off-policy learning in reinforcement learning. While it is consistent and unbiased, it can result in high-variance updates to the parameters for the value function. Weighted importance sampling (WIS) has been explored to reduce variance for off-policy policy evaluation, but only for linear value function approximation. In this work, we explore a resampling strategy to reduce variance, rather than a reweighting strategy. We propose Importance Resampling (IR) for off-policy learning, which resamples experience from the replay buffer and applies a standard on-policy update. The approach avoids using importance sampling ratios directly in the update, instead correcting the distribution over transitions before the update. We characterize the bias and consistency of our estimator, particularly compared to WIS. We then demonstrate in several toy domains that IR has improved sample efficiency and parameter sensitivity, as compared to several baseline WIS estimators and to IS. We conclude with a demonstration showing IR improves over IS for learning a value function from images in a racing car simulator. | rejected-papers | The paper proposes to use importance resampling (IR) as an alternative to the more popular importance sampling (IS) approach to off-policy RL. The hope is to reduce variance, as shown in experiments. However, there is no analysis of why/when IR will be better than IS for variance reduction, and a few baselines were suggested by reviewers. While the authors' rebuttal was helpful in clarifying several issues, the overall contribution does not seem strong enough for ICLR, on both the theoretical and empirical sides.
The high variance of IS is known, and the following work may be referenced for better 1st order updates when IS weights are used: Karampatziakis & Langford (UAI'11).
In section 3, the paper says that most off-policy work uses d_mu, instead of d_pi, to weigh states. This is true, but in the current context (infinite-horizon RL), there are more recent works that should probably be referenced:
http://proceedings.mlr.press/v70/hallak17a.html
https://papers.nips.cc/paper/7781-breaking-the-curse-of-horizon-infinite-horizon-off-policy-estimation | test | [
"r1gTd5V214",
"BkeiWN7t0X",
"H1lq6XQFRX",
"S1lNgEC1CQ",
"H1edRZXkR7",
"Hkxm5TG1RX",
"ryeGVsf1RX",
"ryekoEmc6m",
"SkeKENrLTX",
"B1xhJIFQpm",
"BJeFeZSG6m",
"B1gXCgBfp7",
"BJek6RxA3X"
] | [
"public",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
"I have gone through the reviews and found the explanations reasonable. I also agree that theoretical comparisons between the variance of IR, IS (and/or WIS) is non-trivial, and there seems to be no known analysis on this. \n\nRe comparisons, although the above explanations (on ABQ, Vtrace, Impala) are very reasona... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
-1,
-1,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1,
-1,
3
] | [
"S1lNgEC1CQ",
"H1edRZXkR7",
"iclr_2019_S1gBz2C9tX",
"SkeKENrLTX",
"Hkxm5TG1RX",
"ryeGVsf1RX",
"ryekoEmc6m",
"B1xhJIFQpm",
"iclr_2019_S1gBz2C9tX",
"iclr_2019_S1gBz2C9tX",
"B1gXCgBfp7",
"BJek6RxA3X",
"iclr_2019_S1gBz2C9tX"
] |
iclr_2019_S1gDCiCqtQ | Learning Representations in Model-Free Hierarchical Reinforcement Learning | Common approaches to Reinforcement Learning (RL) are seriously challenged by large-scale applications involving huge state spaces and sparse delayed reward feedback. Hierarchical Reinforcement Learning (HRL) methods attempt to address this scalability issue by learning action selection policies at multiple levels of temporal abstraction. Abstraction can be had by identifying a relatively small set of states that are likely to be useful as subgoals, in concert with the learning of corresponding skill policies to achieve those subgoals. Many approaches to subgoal discovery in HRL depend on the analysis of a model of the environment, but the need to learn such a model introduces its own problems of scale. Once subgoals are identified, skills may be learned through intrinsic motivation, introducing an internal reward signal marking subgoal attainment. In this paper, we present a novel model-free method for subgoal discovery using incremental unsupervised learning over a small memory of the most recent experiences of the agent. When combined with an intrinsic motivation learning mechanism, this method learns subgoals and skills together, based on experiences in the environment. Thus, we offer an original approach to HRL that does not require the acquisition of a model of the environment, suitable for large-scale applications. We demonstrate the efficiency of our method on two RL problems with sparse delayed feedback: a variant of the rooms environment and the ATARI 2600 game called Montezuma's Revenge.
| rejected-papers | Pros:
- good results on Montezuma
Cons:
- moderate novelty
- questionable generalization
- lack of ablations and analysis
- lack of stronger baselines
- no rebuttal
The reviewers agree that the paper should be rejected in its current form, and the authors have not bothered revising it to take into account the detailed reviews. | train | [
"BJexixtqhX",
"SkxOFb1937",
"S1eU-a3K3X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a model-free HRL method, which is combined with unsupervised learning methods, including abnormality discovery and clustering for subgoal discovery. In all, this paper studies a very important problem in RL and is easy to follow. The technique is sound. Although the novelty is not that signific... | [
5,
4,
3
] | [
4,
4,
5
] | [
"iclr_2019_S1gDCiCqtQ",
"iclr_2019_S1gDCiCqtQ",
"iclr_2019_S1gDCiCqtQ"
] |
iclr_2019_S1gQ5sRcFm | Consistent Jumpy Predictions for Videos and Scenes | Stochastic video prediction models take in a sequence of image frames, and generate a sequence of consecutive future image frames. These models typically generate future frames in an autoregressive fashion, which is slow and requires the input and output frames to be consecutive. We introduce a model that overcomes these drawbacks by generating a latent representation from an arbitrary set of frames that can then be used to simultaneously and efficiently sample temporally consistent frames at arbitrary time-points. For example, our model can "jump" and directly sample frames at the end of the video, without sampling intermediate frames. Synthetic video evaluations confirm substantial gains in speed and functionality without loss in fidelity. We also apply our framework to a 3D scene reconstruction dataset. Here, our model is conditioned on camera location and can sample consistent sets of images for what an occluded region of a 3D scene might look like, even if there are multiple possibilities for what that region might contain. Reconstructions and videos are available at https://bit.ly/2O4Pc4R.
| rejected-papers | This paper proposes a probabilistic model for data indexed by an observed parameter (such as time in video frames, or camera locations in 3d scenes), which enables a global encoding of all available frames and is able to sample consistently at arbitrary indexes. Experiments are reported on several synthetic datasets.
Reviewers acknowledged the significance of the proposed model, noted that the paper is well-written, and that the design choices are sound. However, they also expressed concerns about the experimental setup, which only includes synthetic examples. Although the authors acknowledged during the response phase that this is indeed a current limitation, they argued it is not specific to their particular architecture, but to the task itself. Another concern raised by R1 is the lack of clarity in some experimental setups (for instance, where only a subset of the best runs is used to compute error bars, and this subset appears to be of a different size depending on the experiment, cf. Fig. 5), and the fact that the datasets used in this paper to compare against GQNs are specifically designed.
Overall, this is a really borderline submission, with several strengths and weaknesses. After taking the reviewer discussion into account and making his/her own assessment, the AC recommends rejection at this time, but strongly encourages the authors to resubmit their work after improving their experimental setup, which will make the paper much stronger. | train | [
"HygZkkpxJ4",
"Skglc7CETQ",
"BkxEUuHc0Q",
"HygNlzDcRX",
"Hyx4Qz4vCX",
"BJld2nsjTQ",
"rklTmyDcRm",
"HJevqCBqAX",
"rkxOTfRan7",
"BygeFmRs37"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear R1 and R2,\n\nThank you for reviewing our paper. We believe there were some initial misunderstandings about our contributions, but hope we addressed them in our response. If we have addressed these we hope you consider re-evaluating your judgement, and if not giving us feedback on how we can improve our work.... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
4,
4
] | [
"iclr_2019_S1gQ5sRcFm",
"rkxOTfRan7",
"BygeFmRs37",
"rkxOTfRan7",
"BygeFmRs37",
"iclr_2019_S1gQ5sRcFm",
"BJld2nsjTQ",
"BJld2nsjTQ",
"iclr_2019_S1gQ5sRcFm",
"iclr_2019_S1gQ5sRcFm"
] |
iclr_2019_S1gUVjCqKm | Unsupervised classification into unknown number of classes | We propose a novel unsupervised classification method based on the graph Laplacian. Unlike widely used classification methods, this architecture does not require data labels or the number of classes. Our key idea is to introduce an approximate linear map and spectral clustering theory on the dimension-reduced spaces into generative adversarial networks. Inspired by the human visual recognition system, the proposed framework can classify and also generate images as the human brain does. We build an approximate linear connector network C analogous to the cerebral cortex, between the discriminator D and the generator G. The connector network allows us to estimate the unknown number of classes. Estimating the number of classes is one of the challenging research problems in unsupervised learning, especially in spectral clustering. The proposed method can also classify images using the estimated number of classes. Therefore, we define our method as an unsupervised classification method. | rejected-papers | Following the unanimous vote of the submitted reviews, this paper is not ready for publication at ICLR. Among other concerns raised, the experiments need significant work, and the exposition needs clarification. | train | [
"SJlxFlSeJV",
"S1luPQSlkN",
"HJlpmmSe14",
"SkxC8eHeyN",
"HkeP5C4lkE",
"B1xflDDDT7",
"SkxbehICn7",
"Bkx8Oznq27"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"4. section 3 is confusing\nAfter the first submission, we find out the result by footprint mask is somewhat unstable, and the theoretical meaning is lack as you mentioned. We find another classification criterion which shares the same theoretical background.\n\nwe want to find a disentangled property among each cl... | [
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"B1xflDDDT7",
"SkxbehICn7",
"SkxbehICn7",
"B1xflDDDT7",
"Bkx8Oznq27",
"iclr_2019_S1gUVjCqKm",
"iclr_2019_S1gUVjCqKm",
"iclr_2019_S1gUVjCqKm"
] |
iclr_2019_S1gWz2CcKX | Neural MMO: A massively multiplayer game environment for intelligent agents | We present an artificial intelligence research platform inspired by the human game genre of MMORPGs (Massively Multiplayer Online Role-Playing Games, a.k.a. MMOs). We demonstrate how this platform can be used to study behavior and learning in large populations of neural agents. Unlike currently popular game environments, our platform supports persistent environments, with a variable number of agents, and open-ended task descriptions. The emergence of complex life on Earth is often attributed to the arms race that ensued from a huge number of organisms all competing for finite resources. Our platform aims to simulate this setting in microcosm: we conduct a series of experiments to test how large-scale multiagent competition can incentivize the development of skillful behavior. We find that population size magnifies the complexity of the behaviors that emerge and results in agents that out-compete agents trained in smaller populations. | rejected-papers | The reviewers raise a number of concerns including limited methodological novelty, limited experimental evaluation (comparisons), and poor readability. Although the authors did address some of the concerns, the paper as is needs a lot of polishing and rewriting. Hence, I cannot recommend this work for presentation at ICLR. | train | [
"BylsB35O6m",
"SygiCyqi1V",
"r1lZUCMvyN",
"rkggOp9Y37",
"rJeSmPOthm",
"SJgIcAKL07",
"rkl9_0FL07",
"SyxEQRY8CX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper proposes a multi agent life simulators as an environment for RL. The environment is procedurally generated, with possibly many different game dynamics including foraging, and combat. They train deep RL agents in this environment and show various emergent behaviors such as exploration, and niche developm... | [
6,
-1,
-1,
5,
7,
-1,
-1,
-1
] | [
4,
-1,
-1,
2,
5,
-1,
-1,
-1
] | [
"iclr_2019_S1gWz2CcKX",
"r1lZUCMvyN",
"BylsB35O6m",
"iclr_2019_S1gWz2CcKX",
"iclr_2019_S1gWz2CcKX",
"rJeSmPOthm",
"rkggOp9Y37",
"BylsB35O6m"
] |
iclr_2019_S1g_EsActm | ATTENTION INCORPORATE NETWORK: A NETWORK CAN ADAPT VARIOUS DATA SIZE | In traditional neural networks for image processing, the inputs of the neural networks must all have the same size, such as 224×224×3. But how can we train a neural network model with inputs of different sizes? A common approach is image deformation, which is accompanied by information loss (e.g. image cropping or warping). In this paper we propose a new network structure called Attention Incorporate Network (AIN). It solves the problem of differently sized input images and extracts the key features of the inputs via an attention mechanism, paying attention according to the importance of the features rather than relying on the data size. Experimentally, AIN achieves higher accuracy and better convergence compared to other network structures of the same size. | rejected-papers | All reviewers agree that the paper should be rejected and there is no rebuttal. | train | [
"BkeITkMjnQ",
"BJlj-O9v2X",
"Skg-gEMl37"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a strategy to overcome the limitation of fixed input image sizes in CNN classifiers. To this end, the authors incorporate some local and local attention modules, which fit inputs of arbitrary size to the fixed-size fully connected layer of a CNN. The method is evaluated on three public classif... | [
3,
4,
2
] | [
5,
4,
4
] | [
"iclr_2019_S1g_EsActm",
"iclr_2019_S1g_EsActm",
"iclr_2019_S1g_EsActm"
] |
iclr_2019_S1gd7nCcF7 | Self-Supervised Generalisation with Meta Auxiliary Learning | Auxiliary learning has been shown to improve the generalisation performance of a principal task. But typically, this requires manually-defined auxiliary tasks based on domain knowledge. In this paper, we consider that it may be possible to automatically learn these auxiliary tasks to best suit the principal task, towards optimum auxiliary tasks without any human knowledge. We propose a novel method, Meta Auxiliary Learning (MAXL), which we design for the task of image classification, where the auxiliary task is hierarchical sub-class image classification. The role of the meta learner is to determine sub-class target labels to train a multi-task evaluator, such that these labels improve the generalisation performance on the principal task. Experiments on three different CIFAR datasets show that MAXL outperforms baseline auxiliary learning methods, and is competitive even with a method which uses human-defined sub-class hierarchies. MAXL is self-supervised and general, and therefore offers a promising new direction towards automated generalisation. | rejected-papers | This paper proposes a framework for generating auxiliary tasks as a means to regularize learning. The idea is interesting, and the method is simple. Two of the three reviewers found the paper to be well-written. The experiments include a promising result on the CIFAR dataset. The reviewers brought up several concerns regarding the description of the method, the generality of the method (e.g. the requirement for a class hierarchy), the validity and description of the comparisons, and the lack of experiments on domains with much more complex hierarchies. None of these concerns were addressed in revisions to the paper. Hence, the paper in its current state does not meet the bar for publication. | train | [
"B1esP4YtAm",
"HkesnzFKCQ",
"BygTJNKtRQ",
"rJl-0ufypm",
"Bkxpr4aq3m",
"rylvbv_93Q"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank for the reviewer for their positive comments on our work, and we share our responses below.\n\nThe purpose of our work is not to achieve state-of-the-art performance simply by incorporating the latest network architectures and optimisers. Instead, we provide a novel general framework for automating genera... | [
-1,
-1,
-1,
4,
4,
6
] | [
-1,
-1,
-1,
3,
4,
4
] | [
"rylvbv_93Q",
"rJl-0ufypm",
"Bkxpr4aq3m",
"iclr_2019_S1gd7nCcF7",
"iclr_2019_S1gd7nCcF7",
"iclr_2019_S1gd7nCcF7"
] |
iclr_2019_S1geJhC9Km | Feature quantization for parsimonious and interpretable predictive models | For regulatory and interpretability reasons, logistic regression is still widely used by financial institutions to learn the refunding probability of a loan from an applicant's historical data. To improve prediction accuracy and interpretability, a preprocessing step quantizing both continuous and categorical data is usually performed: continuous features are discretized by assigning factor levels to intervals and, if numerous, levels of categorical features are grouped. However, a better predictive accuracy can be reached by embedding this quantization estimation step directly into the predictive estimation step itself. By doing so, the predictive loss has to be optimized on a huge and intractable discontinuous quantization set. To overcome this difficulty, we introduce a specific two-step optimization strategy: first, the optimization problem is relaxed by approximating discontinuous quantization functions by smooth functions; second, the resulting relaxed optimization problem is solved via a particular neural network and stochastic gradient descent. The strategy then gives access to good candidates for the original optimization problem after a straightforward maximum a posteriori procedure to obtain cutpoints. The good performance of this approach, which we call glmdisc, is illustrated on simulated and real data from the UCI library and Crédit Agricole Consumer Finance (a major European historic player in the consumer credit market). The results show that practitioners finally have an automatic all-in-one tool that answers their recurring need for quantization for predictive tasks. | rejected-papers | All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance. | train | [
"B1ehjp9SC7",
"rJe3snqBRQ",
"Sygx9ccS0Q",
"Byl9x2qB0Q",
"HyxFMmvtn7",
"S1eWiCYE3m",
"Bkl6ds5G2m"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review. We are aware that the first submitted version has typos and mistakes and have updated the paper accordingly. We apologize for the inconvenience and hope you might see the revised paper with new eyes. Moreover, we have addressed your following remarks:\n\n- The state of the art consists i... | [
-1,
-1,
-1,
-1,
2,
3,
4
] | [
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"Bkl6ds5G2m",
"S1eWiCYE3m",
"iclr_2019_S1geJhC9Km",
"HyxFMmvtn7",
"iclr_2019_S1geJhC9Km",
"iclr_2019_S1geJhC9Km",
"iclr_2019_S1geJhC9Km"
] |
iclr_2019_S1giVsRcYm | Count-Based Exploration with the Successor Representation | The problem of exploration in reinforcement learning is well-understood in the tabular case and many sample-efficient algorithms are known. Nevertheless, it is often unclear how the algorithms in the tabular setting can be extended to tasks with large state-spaces where generalization is required. Recent promising developments generally depend on problem-specific density models or handcrafted features. In this paper we introduce a simple approach for exploration that allows us to develop theoretically justified algorithms in the tabular case but that also give us intuitions for new algorithms applicable to settings where function approximation is required. Our approach and its underlying theory is based on the substochastic successor representation, a concept we develop here. While the traditional successor representation is a representation that defines state generalization by the similarity of successor states, the substochastic successor representation is also able to implicitly count the number of times each state (or feature) has been observed. This extension connects two until now disjoint areas of research. We show in traditional tabular domains (RiverSwim and SixArms) that our algorithm empirically performs as well as other sample-efficient algorithms. We then describe a deep reinforcement learning algorithm inspired by these ideas and show that it matches the performance of recent pseudo-count-based methods in hard exploration Atari 2600 games. | rejected-papers | This paper was on the borderline. I am sympathetic to the authors' point about computational resources. It is helpful to demonstrate performance gains that offer "jump start" performance benefits, as the authors argue. However, the empirical results even on this part are still somewhat mixed-- for example, the proposed approach struggles on Private Eye (doing far worse than DQN) in Table 2. 
In addition, while it is beneficial to remove the need for training a density model, it would be good to show a place where a density model fails (perhaps because it is so hard to find a good one) compared to their proposed approach. | train | [
"H1xZ8z8D1E",
"Bkg1bKHv14",
"SkeIs_vLJE",
"SJlWDCQQAm",
"H1eOmAQQ0m",
"rygPJAQ70X",
"rJg0jT7XR7",
"Hyl3ZDxqh7",
"HyxypQychm",
"Skx_9r3_2X"
] | [
"author",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for responding our rebuttal. It was not our intention to suggest that the low scores \"come from the fact that we trained our agents for 100M frames\". Our sentence was just pushing back on the comment that our results were not convincing. We do think our results are convincing when you compare the perfo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"Bkg1bKHv14",
"rygPJAQ70X",
"rygPJAQ70X",
"Skx_9r3_2X",
"HyxypQychm",
"Hyl3ZDxqh7",
"Hyl3ZDxqh7",
"iclr_2019_S1giVsRcYm",
"iclr_2019_S1giVsRcYm",
"iclr_2019_S1giVsRcYm"
] |
iclr_2019_S1giro05t7 | Reducing Overconfident Errors outside the Known Distribution | Intuitively, unfamiliarity should lead to lack of confidence. In reality, current algorithms often make highly confident yet wrong predictions when faced with unexpected test samples from an unknown distribution different from training. Unlike domain adaptation methods, we cannot gather an "unexpected dataset" prior to test, and unlike novelty detection methods, a best-effort original task prediction is still expected. We compare a number of methods from related fields such as calibration and epistemic uncertainty modeling, as well as two proposed methods that reduce overconfident errors on samples from an unknown novel distribution without drastically increasing evaluation time: (1) G-distillation, training an ensemble of classifiers and then distilling it into a single model using both labeled and unlabeled examples, or (2) NCR, reducing prediction confidence based on its novelty detection score. Experimentally, we investigate the overconfidence problem and evaluate our solution by creating "familiar" and "novel" test splits, where "familiar" samples are identically distributed with training and "novel" samples are not. We discover that calibrating using temperature scaling on familiar data is the best single-model method for improving novel confidence, followed by our proposed methods. In addition, some methods' NLL performance is roughly equivalent to a regularly trained model with a certain degree of smoothing. Calibrating can also reduce confident errors, for example, in gender recognition by 95% on demographic groups different from the training data. | rejected-papers | The paper proposes methods to deal with estimating classification confidence on unseen data distributions.
The reviewers and AC note the following potential weaknesses: (1) limited novelty, and (2) the authors' new comparison with Guo et al. (2017), requested by Reviewer 2, is not convincing enough.
The AC thinks the proposed method has potential and is interesting, but has decided that the authors need new ideas to meet the high standard of ICLR. | train | [
"SklwB8IgJV",
"r1lbDawJJE",
"rklzWzMKAm",
"BJe2RZfK0X",
"ryl3iZGtR7",
"HylxAeMYRm",
"rJlvrgfFAX",
"rkglwmfVam",
"ryeR0Ya537",
"SJeIrcAK3Q",
"Hklw-QDGnm"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Regarding uniformity, that makes sense. We may have misunderstood your original comment. The main idea behind using the unlabeled data for G-distillation is that it enables the distilled classifier to mimic the ensembles predictions in a larger domain than the training set, which may (as experiments support) impr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"r1lbDawJJE",
"ryl3iZGtR7",
"rkglwmfVam",
"iclr_2019_S1giro05t7",
"Hklw-QDGnm",
"SJeIrcAK3Q",
"ryeR0Ya537",
"iclr_2019_S1giro05t7",
"iclr_2019_S1giro05t7",
"iclr_2019_S1giro05t7",
"iclr_2019_S1giro05t7"
] |
iclr_2019_S1grRoR9tQ | Bayesian Deep Learning via Stochastic Gradient MCMC with a Stochastic Approximation Adaptation | We propose a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables. Inspired by dropout, a popular tool for regularization and model ensembling, we assign sparse priors to the weights in deep neural networks (DNNs) in order to achieve automatic "dropout" and avoid over-fitting. By alternately sampling from the posterior distribution through stochastic gradient Markov Chain Monte Carlo (SG-MCMC) and optimizing latent variables via stochastic approximation (SA), the trajectory of the target weights is proved to converge to the true posterior distribution conditioned on optimal latent variables. This ensures a stronger regularization on the over-fitted parameter space and more accurate uncertainty quantification on the decisive variables. Simulations from large-p-small-n regressions showcase the robustness of this method when applied to models with latent variables. Additionally, its application to convolutional neural networks (CNNs) leads to state-of-the-art performance on the MNIST and Fashion MNIST datasets and improved resistance to adversarial attacks. | rejected-papers | This paper proposes a Bayesian alternative to dropout for deep networks by extending the EM-based variable selection method with SG-MCMC for sampling weights and stochastic approximation for tuning hyper-parameters. The method is well presented with a clear motivation. The combination of EMVS, SG-MCMC, and SA as a mixed optimization-sampling approach is technically sound.
The main concern raised by the reviewers is the limited originality. SG-MCMC has been studied extensively for Bayesian deep networks, and applying the spike-and-slab prior as an alternative to dropout is a straightforward idea. The main contribution of the paper appears to be extending EMVS to deep networks with commonly used sampling techniques for Bayesian networks.
Another concern is the lack of experimental justification for the advantage of the proposed method. While the authors promise to include more experiment results in the camera-ready version, it requires a considerable amount of effort and the decision unfortunately has to be made based on the current revision. | train | [
"BJeOIPB9nQ",
"ByguuVqt3X",
"HyxiMQrDh7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors describe a new method of posterior sampling with latent variables based on SG-MCMC and stochastic approximation (SA). The new method uses a spike and slab prior on the weights of the deep neural networks to encourage sparsity. Experiments on toy regressions, classification and adversarial attacks demon... | [
5,
4,
6
] | [
4,
2,
5
] | [
"iclr_2019_S1grRoR9tQ",
"iclr_2019_S1grRoR9tQ",
"iclr_2019_S1grRoR9tQ"
] |
iclr_2019_S1lKSjRcY7 | Improved Gradient Estimators for Stochastic Discrete Variables | In many applications we seek to optimize an expectation with respect to a distribution over discrete variables. Estimating gradients of such objectives with respect to the distribution parameters is a challenging problem. We analyze existing solutions including finite-difference (FD) estimators and continuous relaxation (CR) estimators in terms of bias and variance. We show that the commonly used Gumbel-Softmax estimator is biased and propose a simple method to reduce it. We also derive a simpler piece-wise linear continuous relaxation that also possesses reduced bias. We demonstrate empirically that reduced bias leads to a better performance in variational inference and on binary optimization tasks. | rejected-papers | Strengths: This paper provides a useful review of some of the recent work on gradient estimators for discrete variables, and proposes both a computationally more efficient variant of one, and a new estimator based on piecewise linear functions.
Weaknesses: Many new ideas are scattered throughout the paper. The notation is a bit dense. Comparisons to RELAX, which had better results than REBAR, are missing. Finally, it seems that REBAR was trained with a fixed temperature, instead of optimizing it during training, which is one of the main benefits of the method.
Points of contention: Only R1 mentioned the omission of REBAR and RELAX. A discussion and a few comparisons to REBAR were added to the paper, but only in a few experiments.
Consensus: This paper is borderline. I agree with R1: quality 6, clarity 8, originality 6, significance 4. All reviewers agreed that this was a decent paper but I think that R2 and R3 were relatively unfamiliar with the existing literature.
Update for clarification:
=====================
This section has been added to clarify the reasons for rejection. The abstract of the paper states:
"We show that the commonly used Gumbel-Softmax estimator is biased and propose a simple method to reduce it. We also derive a simpler piece-wise linear continuous relaxation that also possesses reduced bias. We demonstrate empirically that reduced bias leads to a better
performance in variational inference and on binary optimization tasks."
The fact that Gumbel-Softmax is biased is well-known. Reducing its bias was the motivation for developing the _exactly_ unbiased REBAR method, which already has similar asymptotic complexity. A major side-benefit of using an exactly unbiased estimator is that the estimator's hyperparameters can be automatically tuned to reduce variance, as in REBAR and RELAX.
This paper focuses on methods for reducing bias and variance, but hardly discusses related methods that already achieved its stated aims. This is a major weakness of the paper. The experiments only compared with REBAR, and did not even tune the temperature to reduce variance (removing one of its major advantages).
This rejection decision is not based mainly on a lack of experiments or state-of-the-art results. It's because the idea of reducing the bias of continuous-relaxation-based gradient estimators has already been fruitfully explored, and zero-bias CR estimators have been developed, but this work mostly ignores them. However, thorough experiments are always going to be necessary for a paper proposing biased estimators, because there are already many such estimators, and little theory to say which ones will work well in which situations.
Suggestions to improve the paper: Run experiments on all methods that directly measure bias and variance. Incorporate discussion of REBAR throughout, not just in an appendix. Run comparisons against REBAR and RELAX without crippling their ability to reduce variance. Do more to characterize when different estimators will be expected to be effective. | train | [
"r1lMeCKcnm",
"SklhF5_oaQ",
"HklqFsdiTX",
"HyeW6cuoaX",
"SkesQF453Q",
"HJg9OkW_hQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"After revision:\nThe authors have addressed all points in my review. Although I will not be increasing the score, these fixes certainly increase the confidence of my evaluation and I think it deserves to be accepted.\n\n====================\n\nSummary: This paper analyzes finite-difference and continuous relaxatio... | [
7,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_S1lKSjRcY7",
"r1lMeCKcnm",
"HJg9OkW_hQ",
"SkesQF453Q",
"iclr_2019_S1lKSjRcY7",
"iclr_2019_S1lKSjRcY7"
] |
iclr_2019_S1lPShAqFm | Empirically Characterizing Overparameterization Impact on Convergence | A long-held conventional wisdom states that larger models train more slowly when using gradient descent. This work challenges this widely-held belief, showing that larger models can potentially train faster despite the increasing computational requirements of each training step. In particular, we study the effect of network structure (depth and width) on halting time and show that larger models---wider models in particular---take fewer training steps to converge.
We design simple experiments to quantitatively characterize the effect of overparametrization on weight space traversal. Results show that halting time improves when growing the model's width for three different applications, and the improvement comes from three factors: the distance from initialized weights to converged weights shrinks with a power-law-like relationship, the average step size grows with a power-law-like relationship, and gradient vectors become more aligned with each other during traversal.
| rejected-papers | This paper studies the behavior of training of over parametrized models. All the reviewers agree that the questions studied in this paper are important. However the experiments in the paper are fairly preliminary and the paper does not offer any answers to the questions it studies. Further the writing is very loose and the paper is not ready for publication. I advise authors to take the reviews seriously into account before submitting the paper again. | train | [
"BylWTKda27",
"B1l-_TA237",
"Sye-0-Kt37"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper discusses the effect of increasing the widths in deep neural networks on the convergence of optimization. To this end, the paper focuses on RNNs and applications to NLP and speech recognition, and designs several groups of experiments/measurements to show that wider RNNs improve the convergence speed in... | [
5,
4,
3
] | [
3,
5,
4
] | [
"iclr_2019_S1lPShAqFm",
"iclr_2019_S1lPShAqFm",
"iclr_2019_S1lPShAqFm"
] |
iclr_2019_S1lTg3RcFm | Perception-Aware Point-Based Value Iteration for Partially Observable Markov Decision Processes | Partially observable Markov decision processes (POMDPs) are a widely-used framework to model decision-making with uncertainty about the environment and under stochastic outcomes. In conventional POMDP models, the observations that the agent receives originate from a fixed known distribution. However, in a variety of real-world scenarios the agent has an active role in its perception by selecting which observations to receive. Due to the combinatorial nature of such a selection process, it is computationally intractable to integrate the perception decision with the planning decision. To prevent such expansion of the action space, we propose a greedy strategy for observation selection that aims to minimize the uncertainty in state.
We develop a novel point-based value iteration algorithm that incorporates the greedy strategy to achieve near-optimal uncertainty reduction for sampled belief points. This in turn enables the solver to efficiently approximate the reachable subspace of the belief simplex by essentially separating computations related to perception from planning.
Lastly, we implement the proposed solver and demonstrate its performance and computational advantage in a range of robotic scenarios where the robot simultaneously performs active perception and planning. | rejected-papers | This was a borderline paper and a very difficult decision to make.
The paper addresses a potentially interesting problem in approximate POMDP planning, based on simplifying assumptions that perception can be decoupled from action and that a set of sensors exhibits certain conditional independence structure. As a result, a simple approach can be devised that incorporates a simple greedy perception method within a point-based value iteration scheme.
Unfortunately, the assumptions the paper makes are so strong and seemingly artificial that they appear reverse-engineered to fit the use of a simple perception heuristic. In principle, such a simplification might not be a problem if the resulting formulation captured practically important scenarios, but that was not convincingly achieved in this paper---indeed, another major limitation of the paper is its weak motivation. In more detail, the proposed approach relies on decoupling of perception and action, which is a restrictive assumption that bypasses the core issue of exploration versus exploitation in POMDPs. As a model of active perception, the proposal is simplistic and somewhat artificial; the motivation for the particular cost model (cardinality of the sensor set) is particularly weak---a point that was not convincingly defended in the discussion. Perhaps the biggest underlying weakness is the experimental evaluation, which is inadequate to support a claim that the proposed methods show meaningful advantages over state-of-the-art approaches in important scenarios. A reviewer also raised legitimate questions about the strength of the theoretical analysis.
In the end, the reviewers did not disagree on any substantive technical matter, but nevertheless did disagree in their assessments of the significance of the contribution. This is clearly a borderline paper, which on the positive side, was competently executed, but on the negative side, is pursuing an artificial scenario that enables a particularly simple algorithmic approach.
Despite the lack of consensus, a difficult decision has to be made nonetheless. In the end, my judgement is that the paper is not yet strong enough for publication. I would recommend the authors significantly strengthen the experimental evaluation to cover off at least two of the major shortcomings of the current paper: (1) The true utility of the proposed method needs to be better established against stronger baselines in more realistic scenarios. (2) The relevance of the restrictive assumptions needs to be more convincingly established by providing concrete, realistic and more challenging case studies where the proposed techniques are still applicable. The paper would also be improved if the theoretical analysis could be strengthened to better address the criticisms of Reviewer 4. | train | [
"HkeUQ089RQ",
"ryxh9685Am",
"BklEFRIqAQ",
"BylVdUNZR7",
"r1enwzJYn7",
"SyeBQi3d3X"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the thoughtful and constructive feedback. Please find the authors’ response below .\n\n-----Clarifying motivation\n\nThe cost of measurement acquisition and processing in terms of power, communication, and processing computations define the physical constraints on the agent’s perception. ... | [
-1,
-1,
-1,
6,
4,
7
] | [
-1,
-1,
-1,
4,
4,
2
] | [
"r1enwzJYn7",
"BylVdUNZR7",
"SyeBQi3d3X",
"iclr_2019_S1lTg3RcFm",
"iclr_2019_S1lTg3RcFm",
"iclr_2019_S1lTg3RcFm"
] |
iclr_2019_S1lVniC5Y7 | From Nodes to Networks: Evolving Recurrent Neural Networks | Gated recurrent networks such as those composed of Long Short-Term Memory
(LSTM) nodes have recently been used to improve state of the art in many sequential
processing tasks such as speech recognition and machine translation. However,
the basic structure of the LSTM node is essentially the same as when it was
first conceived 25 years ago. Recently, evolutionary and reinforcement learning
mechanisms have been employed to create new variations of this structure. This
paper proposes a new method, evolution of a tree-based encoding of the gated
memory nodes, and shows that it makes it possible to explore new variations more
effectively than other methods. The method discovers nodes with multiple recurrent
paths and multiple memory cells, which lead to significant improvement in the
standard language modeling benchmark task. Remarkably, this node did not perform
well in another task, music modeling, but it was possible to evolve a different
node that did, demonstrating that the approach discovers customized structure for
each task. The paper also shows how the search process can be sped up by
training an LSTM network to estimate performance of candidate structures, and
by encouraging exploration of novel solutions. Thus, evolutionary design of complex
neural network structures promises to improve performance of deep learning
architectures beyond human ability to do so. | rejected-papers | In this work, the authors explore using genetic programming to search over network architectures. The reviewers noted that the proposed approach is simple and fast. However, the reviewers expressed concerns about the experimental validation (e.g., experiments were conducted on small tasks; issues with comparisons (cf. feedback from Reviewer2)), and the fact that the method were not compared against various baseline methods related to architecture search. | val | [
"HkgjYAaj3Q",
"HylpXeaq3Q",
"S1ge9p1bhX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"A genetic algorithm is used to do an evolutionary architecture search to find better tree-like architectures with multiple memory cells and recurrent paths. To speed up search, an LSTM based seq2seq framework is also developed that can predict the final performance of the child model based on partial training resu... | [
5,
4,
4
] | [
4,
4,
4
] | [
"iclr_2019_S1lVniC5Y7",
"iclr_2019_S1lVniC5Y7",
"iclr_2019_S1lVniC5Y7"
] |
iclr_2019_S1llBiR5YX | Accidental exploration through value predictors | Infinite length of trajectories is an almost universal assumption in the theoretical foundations of reinforcement learning. In practice learning occurs on finite trajectories. In this paper we examine a specific result of this disparity, namely a strong bias of the time-bounded Every-visit Monte Carlo value estimator. This manifests as a vastly different learning dynamic for algorithms that use value predictors, including encouraging or discouraging exploration.
We investigate these claims theoretically for a one-dimensional random walk, and empirically on a number of simple environments. We use GAE as an algorithm involving a value predictor and evolution strategies as a reference point. | rejected-papers | The paper studies the mismatch between value estimation in RL from finite vs. infinite trajectories. This is an interesting problem, but the reviewers raised concerns regarding (1) the consistency and coherence of the story, (2) the significance of the theoretical analysis, and (3) the significance of the results. I appreciate that the authors made significant changes to the paper to address the comments. However, given the extent of changes, I think another review cycle is needed to check the details of the paper again. | train | [
"S1gcp4rAAQ",
"SygD5bm2RX",
"rkgn8XzboX",
"BJlYSahViQ",
"SkxaDU2OAX",
"S1gFNcyU0Q",
"HJx1-qyURQ",
"rJgv5FJU07",
"BJgXLXSp3m"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thanks for the reply.\nA couple of remarks:\n1. The definition of the MDP formalism doesn't explicitly say what the *computational cost* of a transition is, only that the next state is sampled conditioned on the previous one. You can think of settings where the computational cost cost depends on the state. Now, if... | [
-1,
-1,
3,
5,
-1,
-1,
-1,
-1,
4
] | [
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
4
] | [
"SygD5bm2RX",
"SkxaDU2OAX",
"iclr_2019_S1llBiR5YX",
"iclr_2019_S1llBiR5YX",
"HJx1-qyURQ",
"rkgn8XzboX",
"BJlYSahViQ",
"BJgXLXSp3m",
"iclr_2019_S1llBiR5YX"
] |
iclr_2019_S1lwRjR9YX | Stability of Stochastic Gradient Method with Momentum for Strongly Convex Loss Functions | While momentum-based methods, in conjunction with stochastic gradient descent, are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods. In practice, the momentum parameter is often chosen in a heuristic fashion with little theoretical guidance. In this work, we use the framework of algorithmic stability to provide an upper-bound on the generalization error for the class of strongly convex loss functions, under mild technical assumptions. Our bound decays to zero inversely with the size of the training set, and increases as the momentum parameter is increased. We also develop an upper-bound on the expected true risk, in terms of the number of training steps, the size of the training set, and the momentum parameter. | rejected-papers | According to the reviewers, the paper needs more work and significantly more clarification before publication. The reviewers were not convinced even after an intensive discussion, which the AC read in full. The AC recommends further improvements to the paper to better address the reviewers' concerns. | test | [
"rJxdSh34yV",
"HJlGj7UR0Q",
"Skl6ykATA7",
"HkgEmbtaA7",
"SJet7-5TC7",
"B1e21iAhA7",
"HJlL1YjoCm",
"ryemHZssRm",
"HyeCCsdoRm",
"rJxsFK_iRQ",
"ryeWFpGiCX",
"HkxhFKzs07",
"SkglZ4I90Q",
"SkewoU49R7",
"HJxr0SV90m",
"SyenUHNcA7",
"r1lvWTHc2X",
"Skl1MikYnX",
"SJx-bmHbnm"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"... | [
"Dear reviewer,\n\nRegarding the comparison with (Jain et al., 2018), we should clarify that only linear regression problem with quadratic loss function has been studied in (Jain et al., 2018), while we consider a general strongly-convex loss function. In addition, our generalization bound is based on uniform stabi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"SJet7-5TC7",
"Skl6ykATA7",
"HkgEmbtaA7",
"Skl1MikYnX",
"SyenUHNcA7",
"HJlL1YjoCm",
"ryemHZssRm",
"HyeCCsdoRm",
"rJxsFK_iRQ",
"ryeWFpGiCX",
"HkxhFKzs07",
"SkglZ4I90Q",
"SkewoU49R7",
"SJx-bmHbnm",
"Skl1MikYnX",
"r1lvWTHc2X",
"iclr_2019_S1lwRjR9YX",
"iclr_2019_S1lwRjR9YX",
"iclr_20... |
iclr_2019_S1x2aiRqFX | Differentiable Expected BLEU for Text Generation | Neural text generation models such as recurrent networks are typically trained by maximizing data log-likelihood based on cross entropy. Such training objective shows a discrepancy from test criteria like the BLEU metric. Recent work optimizes expected BLEU under the model distribution using policy gradient, while such algorithm can suffer from high variance and become impractical. In this paper, we propose a new Differentiable Expected BLEU (DEBLEU) objective that permits direct optimization of neural generation models with gradient descent. We leverage the decomposability and sparsity of BLEU, and reformulate it with moderate approximations, making the evaluation of the objective and its gradient efficient, comparable to common cross-entropy loss. We further devise a simple training procedure with ground-truth masking and annealing for stable optimization. Experiments on neural machine translation and image captioning show our method significantly improves over both cross-entropy and policy gradient training. | rejected-papers | The paper presents a differentiable approximation of BLEU score, which can be directly optimized using SGD. The reviewers raised concerns about (1) direct evaluation of the quality of the approximation and (2) the significance of the experimental results. There is also a concern (3) regarding the significance of BLEU score in the first place, and whether BLEU is the right metric that one needs to directly optimize. The authors did not provide a response, and based on the concerns above (especially 1-2) I believe that the paper does not pass the bar for acceptance at ICLR. | train | [
"SyxopJ_537",
"rJluy5-qn7",
"S1lpbM__nX",
"rygg9AB-hm",
"BJe1ZVt3iQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This paper proposed a differentiable metric for text generation tasks inspired by BLEU and a random training method by the Gumbel-softmax trick to utilize the proposed metric. Experiments showed that the proposed method improves BLEU compared with simple cross entropy training and policy gradient training.\n\nPros... | [
4,
4,
6,
-1,
-1
] | [
4,
5,
4,
-1,
-1
] | [
"iclr_2019_S1x2aiRqFX",
"iclr_2019_S1x2aiRqFX",
"iclr_2019_S1x2aiRqFX",
"BJe1ZVt3iQ",
"iclr_2019_S1x2aiRqFX"
] |
iclr_2019_S1x8WnA5Ym | Learning Diverse Generations using Determinantal Point Processes | Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic looking images. A fundamental characteristic of generative models is their ability to produce multi-modal outputs. However, while training, they are often susceptible to mode collapse, which means that the model is limited in mapping the input noise to only a few modes of the true data distribution. In this paper, we draw inspiration from Determinantal Point Process (DPP) to devise a generative model that alleviates mode collapse while producing higher quality samples. DPP is an elegant probabilistic measure used to model negative correlations within a subset and hence quantify its diversity. We use DPP kernel to model the diversity in real data as well as in synthetic data. Then, we devise a generation penalty term that encourages the generator to synthesize data with a similar diversity to real data. In contrast to previous state-of-the-art generative models that tend to use additional trainable parameters or complex training paradigms, our method does not change the original training scheme. Embedded in an adversarial training and variational autoencoder, our Generative DPP approach shows a consistent resistance to mode-collapse on a wide-variety of synthetic data and natural image datasets including MNIST, CIFAR10, and CelebA, while outperforming state-of-the-art methods for data-efficiency, convergence-time, and generation quality. Our code will be made publicly available. | rejected-papers | The paper proposes GAN regularized by Determinantal Point Process to learn diverse data samples.
The reviewers and the AC both note the limited novelty of this paper. The authors pointed out:
"To the best of our knowledge, we are the first to introduce modeling data diversity using a Point process kernel that we embed within a generative model. "
The AC does not think this is convincing enough to meet the high standard of ICLR.
The AC decided that the paper might not be ready for publication in its current form. | train | [
"rJeWmZCRJV",
"S1xHtzNJyE",
"HJgrdf4yy4",
"S1laIMVJJN",
"SkgACBULRm",
"Hklz3wLUAQ",
"BJx4OI8IAX",
"rkgO-i9H6Q",
"Skx_mWw7TX",
"S1lVMpOypX",
"SyxGMY50h7"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"We wish all the reviewers happy holidays and we understand that this is a busy time. Motivated by the ICLR spirit to interact more during the review process, we aim to interact more with the reviewers. We worked hard to address the comments and make the paper stronger based on the reviewer's feedback that we reall... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
-1,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
-1,
4
] | [
"iclr_2019_S1x8WnA5Ym",
"S1lVMpOypX",
"Hklz3wLUAQ",
"BJx4OI8IAX",
"iclr_2019_S1x8WnA5Ym",
"Skx_mWw7TX",
"rkgO-i9H6Q",
"iclr_2019_S1x8WnA5Ym",
"iclr_2019_S1x8WnA5Ym",
"SyxGMY50h7",
"iclr_2019_S1x8WnA5Ym"
] |
iclr_2019_S1xBioR5KX | Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization | Modern deep neural networks are highly overparameterized, and often of huge sizes. A number of post-training model compression techniques, such as distillation, pruning and quantization, can reduce the size of network parameters by a substantial fraction with little loss in performance. However, training a small network of the post-compression size de novo typically fails to reach the same level of accuracy achieved by compression of a large network, leading to a widely-held belief that gross overparameterization is essential to effective learning. In this work, we argue that this is not necessarily true. We describe a dynamic sparse reparameterization technique that closed the performance gap between a model compressed through iterative pruning and a model of the post-compression size trained de novo. We applied our method to training deep residual networks and showed that it outperformed existing reparameterization techniques, yielding the best accuracy for a given parameter budget for training. Compared to existing dynamic reparameterization methods that reallocate non-zero parameters during training, our approach achieved better performance at lower computational cost. Our method is not only of practical value for training under stringent memory constraints, but also potentially informative to theoretical understanding of generalization properties of overparameterized deep neural networks.
| rejected-papers |
The authors present a technique for training neural networks through dynamic sparse reparameterization. The work builds on previous work, notably SET (Mocanu et al., 2018), but the authors propose an adaptive threshold and a heuristic for determining how to reparameterize weights across layers.
The reviewers raised a number of concerns about the original manuscript, most notably that the work 1) lacked comparisons against existing dynamic reparameterization schemes, 2) lacked an analysis of the computational complexity of the proposed method relative to other works, and 3) is an incremental improvement over SET.
In the revised version, the authors addressed the various concerns raised by the reviewers. To address weakness 1), the authors ran experiments comparing the proposed approach to SET and DeepR, and demonstrated that the proposed method performs at least as well as, or better than, either approach. While the new draft is in the AC's view a significant improvement over the initial version, the reviewers still had concerns about the fact that the work appears to be incremental relative to SET, and that the differences in performance between the two models were not very large (although the authors note that the differences are statistically significant). The reviewers were not entirely unanimous in their decision, which meant that the scores that this work received placed it at the borderline for acceptance. As such, the AC ultimately decided to recommend rejection, though the authors are encouraged to resubmit the revised version of the paper to a future venue.
| train | [
"BygrZof1eV",
"r1ec05zJgN",
"Hkg_eTIICQ",
"SJe6PrRM1E",
"rJgKJhLI0m",
"B1lQ5wWRAQ",
"ByxlFw-CCQ",
"HygbBOiaCQ",
"HJeW2aMhAm",
"rkgM53LLA7",
"Hyl7OhLI0m",
"H1g-TRNAn7",
"HyldHxls2X",
"Hkl9XKWc37",
"Byg8iX-V2m",
"HyeBcckEnm",
"rketmW5fnQ",
"ByepGM9GnX",
"HJgPaBbb2X",
"SJgRILw0jm"... | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"public",
"author",
"public",
"author",... | [
"\nThank you again for your useful comments, which were common concerns raised by all 3 reviewers, including (1) lack of comparison to prior work, and (2) inaccurate claims of contributions. \n\nIn the revision submitted together with a point-by-point rebuttal to your review on Nov. 23, we have fully addressed thes... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"SJe6PrRM1E",
"rJgKJhLI0m",
"Hkl9XKWc37",
"ByxlFw-CCQ",
"H1g-TRNAn7",
"HygbBOiaCQ",
"HygbBOiaCQ",
"Hyl7OhLI0m",
"Hkg_eTIICQ",
"HyldHxls2X",
"HyldHxls2X",
"iclr_2019_S1xBioR5KX",
"iclr_2019_S1xBioR5KX",
"iclr_2019_S1xBioR5KX",
"HyeBcckEnm",
"ByepGM9GnX",
"HJgPaBbb2X",
"rketmW5fnQ",
... |
iclr_2019_S1xLZ2R5KQ | Maximum a Posteriori on a Submanifold: a General Image Restoration Method with GAN | We propose a general method for various image restoration problems, such as denoising, deblurring, super-resolution and inpainting. The problem is formulated as a constrained optimization problem. Its objective is to maximize a posteriori probability of latent variables, and its constraint is that the image generated by these latent variables must be the same as the degraded image. We use a Generative Adversarial Network (GAN) as our density estimation model. Convincing results are obtained on MNIST dataset. | rejected-papers | This paper proposes a framework of image restoration by searching for a MAP in a trained GAN subject to a degradation constraint. Experiments on MNIST show good performance in restoring the images under different types of degradation.
The main problem, as pointed out by R1 and R3, is that there is a rich literature of image restoration methods, as well as several recent works that also utilized GANs, but the authors failed to compare against any of those baselines in the experiments. Additional experiments on natural images would provide more convincing evidence for the proposed algorithm.
The authors argue that the restoration tasks in the experiments are too difficult for TV to work. It would be great to provide actual experiments to verify the claim. | test | [
"Hkge4MIj0m",
"HklKrfHc0X",
"HyghIbr50m",
"Bye3WZB907",
"S1gQAUDB27",
"HygHWpHH2m",
"BJe2djKE3X"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Follow-up comments after authors' response. \n\nFor Q1, I mentioned in the previous comments that the missing reference is Yeh et al., ICASSP 2018, not the CVPR 2017 paper. \n\nYeh, Raymond A., et al. \"Image Restoration with Deep Generative Models.\" 2018 IEEE International Conference on Acoustics, Speech and Sig... | [
-1,
-1,
-1,
-1,
4,
4,
6
] | [
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"HyghIbr50m",
"BJe2djKE3X",
"HygHWpHH2m",
"S1gQAUDB27",
"iclr_2019_S1xLZ2R5KQ",
"iclr_2019_S1xLZ2R5KQ",
"iclr_2019_S1xLZ2R5KQ"
] |
iclr_2019_S1xiOjC9F7 | Graph Matching Networks for Learning the Similarity of Graph Structured Objects | This paper addresses the challenging problem of retrieval and matching of graph structured objects, and makes two key contributions. First, we demonstrate how Graph Neural Networks (GNN), which have emerged as an effective model for various supervised prediction problems defined on structured data, can be trained to produce embedding of graphs in vector spaces that enables efficient similarity reasoning. Second, we propose a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning on the pair through a new cross-graph attention-based matching mechanism. We demonstrate the effectiveness of our models on different domains including the challenging problem of control-flow-graph based function similarity search that plays an important role in the detection of vulnerabilities in software systems. The experimental analysis demonstrates that our models are not only able to exploit structure in the context of similarity learning but they can also outperform domain-specific baseline systems that have been carefully hand-engineered for these problems. | rejected-papers | This is a tough choice as it is a reasonably strong paper.
Like another reviewer, I am quite confused how this graph matching can "only focus on important nodes in the graph".
This seems counter-intuitive, and the only reason given in the rebuttal is that other people have done it also.
Relatedly: "In graph matching, we not only care about the overall similarity of two graphs but also are interested in finding the correspondence between the nodes of two graphs"
I am sorry for the authors and hope they will get it accepted at the next conference. | train | [
"ryxIsgyc27",
"HJlK8IVhTX",
"rklATS4npm",
"r1lkzfN3TQ",
"BkercW6an7",
"r1g2n75E3Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors introduce a Graph Matching Network for retrieval and matching of graph structured objects. The proposed methods demonstrates improvements compared to baseline methods. However, I have have three main concerns: \n1) Unconvining experiments.\n\ta) Experiments in Sec4.1. The experiments seem not convincin... | [
6,
-1,
-1,
-1,
5,
6
] | [
4,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_S1xiOjC9F7",
"r1g2n75E3Q",
"ryxIsgyc27",
"BkercW6an7",
"iclr_2019_S1xiOjC9F7",
"iclr_2019_S1xiOjC9F7"
] |
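The Graph Matching Network record above hinges on a cross-graph attention mechanism: each node in one graph attends over all nodes of the other graph and records how far it is from its soft match. As a hedged illustration of that general idea (not the paper's actual model; the function name and dot-product similarity choice are mine), a minimal NumPy sketch:

```python
import numpy as np

def cross_graph_attention(H1, H2):
    """One cross-graph attention step: every node embedding in H1 attends
    over all node embeddings in H2 (dot-product similarity + row softmax)
    and returns the difference to its soft correspondence in H2."""
    scores = H1 @ H2.T                               # (n1, n2) similarities
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=1, keepdims=True)    # rows sum to 1
    match = attn @ H2                                # soft match in graph 2
    return H1 - match                                # per-node mismatch features

# Two identical graphs with well-separated embeddings: mismatch is ~zero.
H = np.eye(3) * 50.0
assert np.abs(cross_graph_attention(H, H)).max() < 1e-6
```

For genuinely different graphs the returned mismatch vectors are non-zero, which is the signal a matching model can aggregate into a similarity score.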
iclr_2019_S1xjdoC9Fm | Offline Deep models calibration with bayesian neural networks | We apply Bayesian Neural Networks to improve calibration of state-of-the-art deep
neural networks. We show that, even with the most basic amortized approximate
posterior distribution, and a fast fully connected neural network for the likelihood,
the Bayesian framework clearly outperforms other simple maximum likelihood
based solutions that have recently shown very good performance, such as temperature
scaling. As an example, we reduce the Expected Calibration
Error (ECE) from 0.52 to 0.24 on CIFAR-10 and from 4.28 to 2.456 on CIFAR-100
on two Wide ResNets with 96.13% and 80.39% accuracy respectively, which are
among the best results published for this task. We demonstrate our robustness and
performance with experiments on a wide set of state-of-the-art computer vision
models. Moreover, our approach acts off-line, and thus can be applied to any
probabilistic model regardless of the limitations that the model may present during
training. This makes it suitable for calibrating systems that make use of pre-trained
deep neural networks that are expensive to train for a specific task, or to directly
train a calibrated deep convolutional model with Monte Carlo Dropout approximations, among others. However,
our method is still complementary to any Bayesian Neural Network for further
improvement. | rejected-papers | Reviewers are in consensus and recommended rejection after engaging with the authors. Please take the reviewers' comments into consideration to improve your submission should you decide to resubmit.
| train | [
"HygG0Ptt0X",
"BkxanPtFCm",
"S1xp1GSKR7",
"HJxdLtC70Q",
"Bkl1r-CUTX",
"H1gLHzCI67",
"Hkg1efC8pX",
"H1giCW0ITm",
"BklBaWRLTQ",
"r1lNub0U6X",
"SklEwZR8pQ",
"BklwwA692m",
"BkgLAU2QnQ",
"ByxWj_pFnQ",
"rkexLzBch7",
"SJe1V7f-qm"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author"
] | [
"We have significantly changed our submission. We hope that our contribution seems more clear in its new form. We have:\n\n-rewrite most of the paper\n-give an insight on why Bayesian models are a good choice for calibration\n-discuss our method more clearly towards other approaches\n",
"We have significantly cha... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
-1,
-1
] | [
"S1xp1GSKR7",
"HJxdLtC70Q",
"BkgLAU2QnQ",
"H1giCW0ITm",
"iclr_2019_S1xjdoC9Fm",
"rkexLzBch7",
"BkgLAU2QnQ",
"ByxWj_pFnQ",
"BklwwA692m",
"iclr_2019_S1xjdoC9Fm",
"iclr_2019_S1xjdoC9Fm",
"iclr_2019_S1xjdoC9Fm",
"iclr_2019_S1xjdoC9Fm",
"iclr_2019_S1xjdoC9Fm",
"iclr_2019_S1xjdoC9Fm",
"iclr_... |
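The calibration record above is built around two standard ingredients: the Expected Calibration Error (ECE) metric and the temperature-scaling baseline it compares against. A hedged, minimal NumPy sketch of both (illustrative code, not the paper's implementation; function names are mine):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average |accuracy - confidence|
    per bin, weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

def temperature_scale(logits, T):
    """Temperature scaling: divide logits by a scalar T > 0 before softmax;
    T > 1 softens overconfident predictions without changing the argmax."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Softening overconfident logits lowers the maximum probability.
logits = np.array([[4.0, 0.0, 0.0]])
assert temperature_scale(logits, 2.0).max() < temperature_scale(logits, 1.0).max()
```

In the temperature-scaling baseline, T is fit on a held-out set by minimizing negative log-likelihood; the Bayesian approach in the record instead learns a full posterior over a small calibration network.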
iclr_2019_S1xoy3CcYX | Adversarial Examples Are a Natural Consequence of Test Error in Noise | Over the last few years, the phenomenon of adversarial examples --- maliciously constructed inputs that fool trained machine learning models --- has captured the attention of the research community, especially when the adversary is restricted to making small modifications of a correctly handled input. At the same time, less surprisingly, image classifiers lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this work, we show that these are two manifestations of the same underlying phenomenon. We establish this connection in several ways. First, we find that adversarial examples exist at the same distance scales we would expect from a linear model with the same performance on corrupted images. Next, we show that Gaussian data augmentation during training improves robustness to small adversarial perturbations and that adversarial training improves robustness to several types of image corruptions. Finally, we present a model-independent upper bound on the distance from a corrupted image to its nearest error given test performance and show that in practice we already come close to achieving the bound, so that improving robustness further for the corrupted image distribution requires significantly reducing test error. All of this suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. This yields a computationally tractable evaluation metric for defenses to consider: test error in noisy image distributions. | rejected-papers | In light of the reviews and the rebuttal, it seems that the paper needs to be rewritten to head off some of the confusions and criticisms that the reviewers have made. 
That said, the main argument seems to contradict some of the lower bounds recently established by Madry and colleagues, showing the existence of distributions where the sample complexity for finding robust classifiers is arbitrarily larger than that for finding low-risk classifiers. I recommend the authors take a closer look at this apparent contradiction when revising. | train | [
"BkgC_UJ0CX",
"rygto--BAQ",
"r1xOqb-HR7",
"BygMs4cPTX",
"HyxZTVo9hQ",
"HkxpwrWcn7",
"S1gVOsyuh7",
"B1ezN5f9nQ",
"Ske_pCTe2Q",
"BklKxJBF57",
"HyevjK2U5X",
"Hkl2ae_r5X",
"HJeOIqrr57",
"HJxH79BB97"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"public",
"author",
"public",
"public"
] | [
"Thanks for your comments. However, we feel you haven’t addressed the main point of our paper in your review. \n\nOur results draw a close connection between the adversarial defense literature and the robustness literature [1,2,3,4]. This is important and significant because it suggests that methods useful for impr... | [
-1,
-1,
-1,
-1,
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HyxZTVo9hQ",
"r1xOqb-HR7",
"HkxpwrWcn7",
"iclr_2019_S1xoy3CcYX",
"iclr_2019_S1xoy3CcYX",
"iclr_2019_S1xoy3CcYX",
"iclr_2019_S1xoy3CcYX",
"Ske_pCTe2Q",
"HyevjK2U5X",
"HyevjK2U5X",
"Hkl2ae_r5X",
"HJeOIqrr57",
"HJxH79BB97",
"iclr_2019_S1xoy3CcYX"
] |
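The abstract above argues that, for a linear model, test error under additive Gaussian noise and the distance to the nearest error are two views of the same quantity. That linear-model relationship is concrete enough to sketch: a point at margin m from the boundary is misclassified under N(0, σ²I) noise with probability Φ(-m/σ), and the map can be inverted to predict adversarial distance from noisy test error. The following stdlib-only sketch is an illustration of that argument (function names and the bisection inversion are mine, not the paper's code):

```python
import math

def noise_error_rate_linear(margin, sigma):
    """Error rate of a correctly classified point under N(0, sigma^2 I)
    noise, for a linear classifier: Phi(-margin / sigma)."""
    return 0.5 * math.erfc(margin / (sigma * math.sqrt(2.0)))

def margin_from_error_rate(err, sigma):
    """Invert the monotone map above by bisection: given the error rate in
    noise, recover the implied distance to the nearest error."""
    lo, hi = 0.0, 100.0 * sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if noise_error_rate_linear(mid, sigma) > err:
            lo = mid          # error rate too high => margin is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A point exactly on the boundary errs half the time under symmetric noise.
assert abs(noise_error_rate_linear(0.0, 1.0) - 0.5) < 1e-12
```

The paper's observation is that real networks already sit close to this bound, so shrinking adversarial distances further requires reducing error on noisy inputs.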
iclr_2019_S1xzyhR9Y7 | Improving Sentence Representations with Multi-view Frameworks | Multi-view learning can provide self-supervision when different views are available of the same data. Distributional hypothesis provides another form of useful self-supervision from adjacent sentences which are plentiful in large unlabelled corpora. Motivated by the asymmetry in the two hemispheres of the human brain as well as the observation that different learning architectures tend to emphasise different aspects of sentence meaning, we present two multi-view frameworks for learning sentence representations in an unsupervised fashion. One framework uses a generative objective and the other a discriminative one. In both frameworks, the final representation is an ensemble of two views, in which, one view encodes the input sentence with a Recurrent Neural Network (RNN), and the other view encodes it with a simple linear model. We show that, after learning, the vectors produced by our multi-view frameworks provide improved representations over their single-view learnt counterparts, and the combination of different views gives representational improvement over each view and demonstrates solid transferability on standard downstream tasks. | rejected-papers | This paper offers a new method for sentence representation learning, fitting loosely into the multi-view learning framework, with fairly strong results. The paper is clearly borderline, with one reviewer arguing for acceptance and another arguing for rejection. While it is a tough decision, I have to argue for rejection in this case.
There was a robust discussion and the authors revised the paper, so none of the remaining technical issues strike me as fatal. My primary concern is simply that the reviewers could not reach a consensus in favor of the paper. In particular, two reviewers expressed concerns that this paper makes too small an advance in NLP to be of interest to non-NLP researchers. I think it should be possible to broaden the scope of the paper and resubmit it to another general ML venue, and (as one reviewer suggested explicitly), this paper may have a better chance at an NLP-specific venue.
While neither of these factors was crucial in the decision, I'd encourage the authors (i) to put more effort into comparing properly with the Subramanian and Radford baselines, and (ii) to clarify the points about the human brain. For the second point: While none of the claims about the brain are false *or misleading*, as far as I know, the authors do not make a convincing case that the claims about the brain are actually relevant to the work being done here. | train | [
"rJe4cI5H1N",
"SklVxIqBJN",
"S1eOrBb9RX",
"B1ggt-L93m",
"Skex_lDyCm",
"S1lcRhycnX",
"rkeakQq46Q",
"rke0Pfc4aX",
"Hye46WcE6Q",
"SygDgAxi3Q"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"We note that none of the examples in [1] create new views de novo but rely on natural complementary views (e.g. from audio and video; or from “links to the page” and “words on the page”). Here we are constructing two views in feature space (representing the processing resulting from a complex RNN and a simple line... | [
-1,
-1,
-1,
6,
-1,
5,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
5,
-1,
4,
-1,
-1,
-1,
4
] | [
"rke0Pfc4aX",
"rkeakQq46Q",
"iclr_2019_S1xzyhR9Y7",
"iclr_2019_S1xzyhR9Y7",
"iclr_2019_S1xzyhR9Y7",
"iclr_2019_S1xzyhR9Y7",
"S1lcRhycnX",
"B1ggt-L93m",
"SygDgAxi3Q",
"iclr_2019_S1xzyhR9Y7"
] |
iclr_2019_S1z9ehAqYX | Shrinkage-based Bias-Variance Trade-off for Deep Reinforcement Learning | Deep reinforcement learning has achieved remarkable successes in solving various challenging artificial intelligence tasks. A variety of different algorithms have been introduced and improved towards human-level performance. Although technical advances have been developed for each individual algorithm, there has been strong evidence showing that further substantial improvements can be achieved by properly combining multiple approaches with different biases and variances. In this work, we propose to use the James-Stein (JS) shrinkage estimator to combine on-policy policy gradient estimators which have low bias but high variance, with low-variance high-bias gradient estimates such as those constructed based on model-based methods or temporally smoothed averaging of historical gradients. Empirical results show that our simple shrinkage approach is very effective in practice and substantially improves the sample efficiency of the state-of-the-art on-policy methods on various continuous control tasks.
| rejected-papers | The paper introduces the use of the J-S shrinkage estimator in policy optimization, which is new and promising. The results also show the potential. That said, reviewers are not fully convinced that in its current stage the paper is ready for publication. The approach taken here is essentially a combination of existing techniques. While it is useful, more work is probably needed to strengthen the contribution. A few directions have been suggested by reviewers, including theoretical guarantees and stronger empirical support. | train | [
"ryevzrzchm",
"S1xJU7v5A7",
"ryes-mv50X",
"BkeuBMPcCm",
"SklWpVwT2m",
"rylMEf6qn7"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper suggest a shrinkage-based estimator (James-Stein estimator) to compute policy gradients in reinforcement learning to reduce the variance by trading some bias. Two versions are suggested: The on-policy gradients is shrinked either towards (i) model based gradient, or towards (ii) a delayed average of prev... | [
5,
-1,
-1,
-1,
4,
4
] | [
3,
-1,
-1,
-1,
2,
4
] | [
"iclr_2019_S1z9ehAqYX",
"ryevzrzchm",
"rylMEf6qn7",
"SklWpVwT2m",
"iclr_2019_S1z9ehAqYX",
"iclr_2019_S1z9ehAqYX"
] |
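The James-Stein shrinkage idea in the abstract above is a concrete formula: shrink a high-variance unbiased estimate toward a low-variance (possibly biased) target, with the shrinkage amount set by the estimate's variance and the distance between the two. A hedged, minimal sketch of the positive-part version applied to two gradient vectors (illustrative only; the paper's actual estimator and variance handling may differ):

```python
import numpy as np

def js_combine(g_onpolicy, g_target, sigma2):
    """Positive-part James-Stein shrinkage of a high-variance estimate
    (e.g. an on-policy policy gradient) toward a low-variance, possibly
    biased target (e.g. a model-based or historically averaged gradient):
    g_js = b + max(0, 1 - (d - 2) * sigma2 / ||g - b||^2) * (g - b)."""
    g = np.asarray(g_onpolicy, dtype=float)
    b = np.asarray(g_target, dtype=float)
    d = g.size
    diff = g - b
    denom = np.dot(diff, diff)
    if denom == 0.0 or d <= 2:
        return g.copy()
    shrink = max(0.0, 1.0 - (d - 2) * sigma2 / denom)
    return b + shrink * diff
```

With sigma2 = 0 the noisy estimate is trusted fully; as sigma2 grows, the combined gradient slides toward the low-variance target, which is exactly the bias-variance knob the paper exploits.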
iclr_2019_S1zlmnA5K7 | Where Off-Policy Deep Reinforcement Learning Fails | This work examines batch reinforcement learning--the task of maximally exploiting a given batch of off-policy data, without further data collection. We demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are only capable of learning with data correlated to their current policy, making them ineffective for most off-policy applications. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space to force the agent towards behaving on-policy with respect to a subset of the given data. We extend this notion to deep reinforcement learning, and to the best of our knowledge, present the first continuous control deep reinforcement learning algorithm which can learn effectively from uncorrelated off-policy data. | rejected-papers | The paper proposes a batch-constrained approach to batch RL, where the policy is optimized under the constraint that, at a state, only actions appearing in the training data are allowed. An extension to continuous cases is given.
While the paper has some interesting ideas and the problem of dealing with extrapolation in RL is important, the approach appears somewhat ad hoc and the contributions limited.
For example, the constraint is based on whether (s,a) is in B, but this condition can be quite delicate in a stochastic problem (seeing a in s *once* may still allow large extrapolation error if the single observed transition is not representative). Section 4.1 gives some nice insights for the special finite MDP case, but those results are a little weak (requiring strong assumptions that may not hold in practice) --- an example being the requirement that s' be included in data if (s,a) is in data and P(s'|s,a)>0 [beginning of section 4.1].
In contrast, there are other more robust and principled ways, such as counterfactual risk minimization (CRM) for contextual bandits (http://www.jmlr.org/papers/v16/swaminathan15a.html). For MDPs, the Bayesian version of DQN (the cited Azizzadenesheli et al., as well as Lipton et al. at AAAI'18) can be used to constrain the learned policy as well, with a simple modification of using the CRM idea for bandits. Would these algorithms be reasonable baselines? | train | [
"HJe89GvsyE",
"r1eYNjE9k4",
"BkeBv0rmC7",
"Hygun0mzAQ",
"BJlW8CJGCm",
"r1eG-1RlCQ",
"BJeevRYjpX",
"Hyglr0YiTm",
"Bye92aKsTQ",
"H1lXqTYiaQ",
"SJxX5LHp3X",
"rylyPcQ9h7",
"HJeQ-p0F2Q",
"SkgTBMocc7",
"r1lpP08c5Q",
"H1lHSV89q7",
"rJxRRCmcqX",
"Skxqda6YqX"
] | [
"author",
"public",
"author",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"public",
"author",
"public"
] | [
"It is true that specific/adversarial counter-examples exist for most forms of function approximation. However, in this work we examine environments where deep Q-learning algorithms have already been shown to perform well on. We show that given the same dataset, deep off-policy algorithms can perform very different... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"r1eYNjE9k4",
"iclr_2019_S1zlmnA5K7",
"Hygun0mzAQ",
"BJlW8CJGCm",
"r1eG-1RlCQ",
"iclr_2019_S1zlmnA5K7",
"iclr_2019_S1zlmnA5K7",
"HJeQ-p0F2Q",
"rylyPcQ9h7",
"SJxX5LHp3X",
"iclr_2019_S1zlmnA5K7",
"iclr_2019_S1zlmnA5K7",
"iclr_2019_S1zlmnA5K7",
"r1lpP08c5Q",
"iclr_2019_S1zlmnA5K7",
"rJxRR... |
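The batch-constrained idea discussed in this record has a simple tabular form: run Q-learning over the fixed batch, but let the bootstrapped max at s' range only over actions that actually appear with s' in the data, so the backup never extrapolates to unseen (state, action) pairs. The following is a hedged tabular sketch of that idea (my own simplification for illustration, not the paper's deep continuous-control algorithm):

```python
import numpy as np
from collections import defaultdict

def batch_constrained_q(batch, n_states, n_actions, gamma=0.99,
                        lr=0.5, sweeps=1000):
    """Tabular batch-constrained Q-learning over a fixed batch of
    (s, a, r, s', done) tuples: the max at s' is restricted to actions
    observed at s' in the batch."""
    seen = defaultdict(set)
    for s, a, _, _, _ in batch:
        seen[s].add(a)
    Q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        for s, a, r, s2, done in batch:
            if done or not seen[s2]:
                target = r
            else:
                target = r + gamma * max(Q[s2, a2] for a2 in seen[s2])
            Q[s, a] += lr * (target - Q[s, a])
    return Q
```

Unseen actions keep their initial value and never enter a backup, which is the mechanism that suppresses the extrapolation error the meta-review and abstract discuss.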
iclr_2019_SJ4Z72Rctm | Composing Entropic Policies using Divergence Correction | Deep reinforcement learning (RL) algorithms have made great strides in recent years. An important remaining challenge is the ability to quickly transfer existing skills to novel tasks, and to combine existing skills with newly acquired ones. In domains where tasks are solved by composing skills this capacity holds the promise of dramatically reducing the data requirements of deep RL algorithms, and hence increasing their applicability. Recent work has studied ways of composing behaviors represented in the form of action-value functions. We analyze these methods to highlight their strengths and weaknesses, and point out situations where each of them is susceptible to poor performance. To perform this analysis we extend generalized policy improvement to the max-entropy framework and introduce a method for the practical implementation of successor features in continuous action spaces. Then we propose a novel approach which, in principle, recovers the optimal policy during transfer. This method works by explicitly learning the (discounted, future) divergence between policies. We study this approach in the tabular case and propose a scalable variant that is applicable in multi-dimensional continuous action spaces.
We compare our approach with existing ones on a range of non-trivial continuous control problems with compositional structure, and demonstrate qualitatively better performance despite not requiring simultaneous observation of all task rewards. | rejected-papers | Multiple reviewers had concerns about the clarity of the presentation and the significance of the results.
| train | [
"B1lKIlfJlE",
"S1xE-pEnkV",
"ByglHlE31N",
"ByeCGb_vRX",
"Byl3o8kR6Q",
"rJgcmIkRaQ",
"B1lgxSkATX",
"BJgmOxy0am",
"SyeAsTCaam",
"BkgRW5786Q",
"r1l45Kv92X",
"rkxl5ZFc2m"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the updates and clarifications. My rating remains the same. I consider this paper to have enough novelty to be interesting. A limiting factor is that ICLR may not be the best venue for this work.",
"Thank you for your comment and the opportunity for further clarification.\n\nWe cannot change the su... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"SyeAsTCaam",
"ByglHlE31N",
"B1lgxSkATX",
"iclr_2019_SJ4Z72Rctm",
"BkgRW5786Q",
"BkgRW5786Q",
"BkgRW5786Q",
"r1l45Kv92X",
"rkxl5ZFc2m",
"iclr_2019_SJ4Z72Rctm",
"iclr_2019_SJ4Z72Rctm",
"iclr_2019_SJ4Z72Rctm"
] |
iclr_2019_SJ4vTjRqtQ | Dynamic Planning Networks | We introduce Dynamic Planning Networks (DPN), a novel architecture for deep reinforcement learning, that combines model-based and model-free aspects for online planning. Our architecture learns to dynamically construct plans using a learned state-transition model by selecting and traversing between simulated states and actions to maximize valuable information before acting. In contrast to model-free methods, model-based planning lets the agent efficiently test action hypotheses without performing costly trial-and-error in the environment. DPN learns to efficiently form plans by expanding a single action-conditional state transition at a time instead of exhaustively evaluating each action, reducing the required number of state-transitions during planning by up to 96%. We observe various emergent planning patterns used to solve environments, including classical search methods such as breadth-first and depth-first search. Learning To Plan shows improved data efficiency, performance, and generalization to new and unseen domains in comparison to several baselines. | rejected-papers |
pros:
- Good quantitative results showing clear improvement over other model-based methods in sample efficiency and computational cost (though see Reviewer 2's concerns about the need for more experiments on computational cost).
- Cool qualitative results showing discovery of BFS and DFS
- Potentially novel approach (see cons)
cons:
- Lack of clarity especially concerning equation (1). Both Reviewers 1 and 3 were unsure of the rationale for this equation which lies at the heart of the method. It looks to me like a combination of surprise and value but the motivation is not clear. There are a number of other such places pointed out by the reviewers where model choices were made that seem ad hoc or not well motivated.
- In general it's hard to understand which factors are important in driving the results you report. As Reviewer 3 points out, more ablation studies and analysis would help here. Providing more motivation, explanation and analysis would help the reader understand better the reasons for the performance of the model.
The results are nice and the method is intriguing. I think this is potentially a very nice paper if you can address the above concerns, but it isn't quite up to the acceptance bar for ICLR this year.
| train | [
"Syx2OuTP0X",
"H1xMm35n14",
"Hyx9HvDG2X",
"S1ecNr53kN",
"HJxxDN2h07",
"ByxHmN3nAm",
"B1gs36av0m",
"SygCmp6wA7",
"r1e5UKTwC7",
"SygUDjunhX",
"HygvTCvLn7",
"S1lAPB0g2m"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author"
] | [
"We would like to thank each reviewer for taking the time to review our paper, providing insightful comments, and the thoughtful questions asked. We feel that our paper has been strengthened as a result and are grateful for this outcome.\n\nFor the convenience of the reviewers and AC below we summarize the changes ... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
-1
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
-1
] | [
"iclr_2019_SJ4vTjRqtQ",
"S1ecNr53kN",
"iclr_2019_SJ4vTjRqtQ",
"B1gs36av0m",
"r1e5UKTwC7",
"r1e5UKTwC7",
"Hyx9HvDG2X",
"HygvTCvLn7",
"SygUDjunhX",
"iclr_2019_SJ4vTjRqtQ",
"iclr_2019_SJ4vTjRqtQ",
"iclr_2019_SJ4vTjRqtQ"
] |
iclr_2019_SJG1wjRqFQ | Discrete Structural Planning for Generating Diverse Translations | Planning is important for humans when producing complex languages, which is a missing part in current language generation models. In this work, we add a planning phase in neural machine translation to control the global sentence structure ahead of translation. Our approach learns discrete structural representations to encode syntactic information of target sentences. During translation, we can either let beam search choose the structural codes automatically or specify the codes manually. The word generation is then conditioned on the selected discrete codes. Experiments show that the translation performance remains intact by learning the codes to capture pure structural variations. Through structural planning, we are able to control the global sentence structure by manipulating the codes. By evaluating with a proposed structural diversity metric, we found that the sentences sampled using different codes have much higher diversity scores. In qualitative analysis, we demonstrate that the sampled paraphrase translations have drastically different structures. | rejected-papers | This paper introduces a planning phase for NMT. It first generates a discrete set of tags at decoding time, and then the actual words are generated conditioned on those tags. The idea in the paper is interesting.
However, the paper's experimental settings could be improved by comparing on larger datasets and also by using stronger baselines. The writing could also be improved -- why were only the few coarse POS tags used? Have the authors tried a larger set? Without such controlled comparisons, it is hard to understand why only those coarse tags are used.
The reviewers express concern about some of the above issues and there is consensus that the paper should be improved for acceptance at a venue like ICLR. | train | [
"ryg9daF53m",
"HyxXDrx70m",
"BylTbuUGAQ",
"BkexY2vOTX",
"BkgXLmTIaQ",
"ryx7VpEw3Q",
"SyxRv1PUaX",
"Hyx4SKX-6m",
"BJgt-vklam",
"BkxQsCrAn7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors consider the problem of generating diverse translations from a neural machine translation model. This is a very interesting problem and indeed, even the best models lack meaningful diversity when generating with beam-search. The method proposed by the authors relies on prefixing the generation with dis... | [
4,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
2
] | [
5,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
5
] | [
"iclr_2019_SJG1wjRqFQ",
"BylTbuUGAQ",
"BkgXLmTIaQ",
"BkgXLmTIaQ",
"SyxRv1PUaX",
"iclr_2019_SJG1wjRqFQ",
"ryx7VpEw3Q",
"ryg9daF53m",
"BkxQsCrAn7",
"iclr_2019_SJG1wjRqFQ"
] |
iclr_2019_SJGyFiRqK7 | Decoupling Gating from Linearity | The gap between the empirical success of deep learning and the lack of strong theoretical guarantees calls for studying simpler models. By observing that a ReLU neuron is a product of a linear function with a gate (the latter determines whether the neuron is active or not), where both share a jointly trained weight vector, we propose to decouple the two. We introduce GaLU networks — networks in which each neuron is a product of a Linear Unit, defined by a weight vector which is being trained, with a Gate, defined by a different weight vector which is not being trained. Generally speaking, given a base model and a simpler version of it, the two parameters that determine the quality of the simpler version are whether its practical performance is close enough to the base model and whether it is easier to analyze it theoretically. We show that GaLU networks perform similarly to ReLU networks on standard datasets and we initiate a study of their theoretical properties, demonstrating that they are indeed easier to analyze. We believe that further research of GaLU networks may be fruitful for the development of a theory of deep learning. | rejected-papers | The reviewers reached a consensus that the paper is not ready for publication in ICLR. (see more details in the reviews below. ) | train | [
"Sygl2gUT3Q",
"r1gaTaqY3X",
"rkge4jSvnX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces a GaLU activation function, which is the product of a random gate function and a learnable linear function. The authors argue that empirically, neural networks with the GaLU activation is as effective as that with the ReLU activation, but theoretically, the GaLU activation is easier to underst... | [
3,
2,
3
] | [
4,
5,
5
] | [
"iclr_2019_SJGyFiRqK7",
"iclr_2019_SJGyFiRqK7",
"iclr_2019_SJGyFiRqK7"
] |
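The GaLU unit described in the abstract above is concrete: each neuron multiplies a trained linear response by a gate computed from a separate, fixed weight vector, and ReLU is recovered when the two weight vectors coincide. A hedged NumPy sketch of a single GaLU layer (illustrative rendering of the definition, not the authors' code):

```python
import numpy as np

def galu_layer(x, W_linear, W_gate):
    """GaLU: each unit is (W_linear . x) gated by whether (W_gate . x) > 0.
    Only W_linear is trained; W_gate stays fixed after initialization,
    decoupling the gating pattern from the linear part."""
    pre = x @ W_linear
    gate = (x @ W_gate) > 0          # fixed, untrained gating pattern
    return pre * gate

# Sanity check: with shared weights, GaLU reduces exactly to ReLU.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 5))
assert np.allclose(galu_layer(x, W, W), np.maximum(x @ W, 0.0))
```

Because the gates are fixed, each training example selects a fixed linear sub-network, which is what makes the model easier to analyze than a ReLU network.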
iclr_2019_SJLhxnRqFQ | Adversarially Learned Mixture Model | The Adversarially Learned Mixture Model (AMM) is a generative model for unsupervised or semi-supervised data clustering. The AMM is the first adversarially optimized method to model the conditional dependence between inferred continuous and categorical latent variables. Experiments on the MNIST and SVHN datasets show that the AMM allows for semantic separation of complex data when little or no labeled data is available. The AMM achieves unsupervised clustering error rates of 3.32% and 20.4% on the MNIST and SVHN datasets, respectively. A semi-supervised extension of the AMM achieves a classification error rate of 5.60% on the SVHN dataset. | rejected-papers | The paper presents a method for unsupervised/semi-supervised clustering, combining adversarial learning and the Mixture of Gaussians model. The authors follow the methodology of ALI, extending the Q and P models with discrete variables, in such a way that the latent space in the P model comprises a mixture-of-Gaussians model.
Generative modeling and semi-supervised learning are interesting topics for the ICLR community.
The reviewers think that the novelty of the method is unclear. The technique appears to be a mix of various pre-existing techniques, combined with a novel choice of model. The experimental results are somewhat promising, and it is encouraging to see that good generative model results are consistent with improved semi-supervised classification results. The paper seems to rely heavily on empirical results, but they are difficult to verify without published source code. The datasets chosen for experimental validation are also quite limited, making it difficult to assess the strengths of the proposed method. | train | [
"H1lPUJ5iyE",
"r1gBO2w_3m",
"rJgFb-iMhQ",
"S1lt0AiWnQ"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This is an interesting paper. We tried to implement it but can not achieve the same performance in the paper. The margin is large. Could the author help to provide some implementation details or share the code? ",
"The paper uses Generative Adversarial Networks (GAN) for unsupervised and semi-supervised clusteri... | [
-1,
6,
5,
6
] | [
-1,
1,
4,
2
] | [
"iclr_2019_SJLhxnRqFQ",
"iclr_2019_SJLhxnRqFQ",
"iclr_2019_SJLhxnRqFQ",
"iclr_2019_SJLhxnRqFQ"
] |
iclr_2019_SJMBM2RqKQ | Uncertainty-guided Lifelong Learning in Bayesian Networks | Sequentially learning of tasks arriving in a continuous stream is a complex problem and becomes more challenging when the model has a fixed capacity. Lifelong learning aims at learning new tasks without forgetting previously learnt ones as well as freeing up capacity for learning future tasks. We argue that identifying the most influential parameters in a representation learned for one task plays a critical role to decide on \textit{what to remember} for continual learning. Motivated by the statistically-grounded uncertainty defined in Bayesian neural networks, we propose to formulate a Bayesian lifelong learning framework, \texttt{BLLL}, that addresses two lifelong learning directions: 1) completely eliminating catastrophic forgetting using weight pruning, where a hard selection mask freezes the most certain parameters (\texttt{BLLL-PRN}) and 2) reducing catastrophic forgetting by adaptively regularizing the learning rates using the parameter uncertainty (\texttt{BLLL-REG}). While \texttt{BLLL-PRN} is by definition a zero-forgetting guaranteed method, \texttt{BLLL-REG}, despite exhibiting some small forgetting, is a task-agnostic lifelong learner, which does not require to know when a new task arrives. This feature makes \texttt{BLLL-REG} a more convenient candidate for applications such as robotics or on-line learning in which such information is not available. We evaluate our Bayesian learning approaches extensively on diverse object classification datasets in short and long sequences of tasks and perform superior or marginally better than the existing approaches. | rejected-papers | Reviewers are in a consensus and recommended to reject. However, the reviewers did not engage at all with the authors, and did not acknowledge whether their concerns have been answered. I therefore lean to reject, and would recommend the authors to resubmit. 
Please take reviewers' comments into consideration to improve your submission should you decide to resubmit.
| train | [
"HkgRrpYcAX",
"SylXWhF5CX",
"SklZQcuqR7",
"B1e-kOu90X",
"Syl6wT6qnm",
"BJx-HPCY3m",
"B1l2REPYh7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank you again for your time and your feedback. Please see first our meta response to major comments shared between reviewers. Here we refer to separate comments/questions we received from you.\n\nWe also found it behaving inconsistent through the new experimental setting we adapted per reviewers... | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"B1l2REPYh7",
"BJx-HPCY3m",
"Syl6wT6qnm",
"iclr_2019_SJMBM2RqKQ",
"iclr_2019_SJMBM2RqKQ",
"iclr_2019_SJMBM2RqKQ",
"iclr_2019_SJMBM2RqKQ"
] |
iclr_2019_SJMO2iCct7 | A NOVEL VARIATIONAL FAMILY FOR HIDDEN NON-LINEAR MARKOV MODELS | Latent variable models have been widely applied for the analysis and visualization of large datasets. In the case of sequential data, closed-form inference is possible when the transition and observation functions are linear. However, approximate inference techniques are usually necessary when dealing with nonlinear evolution and observations. Here, we propose a novel variational inference framework for the explicit modeling of time series, Variational Inference for Nonlinear Dynamics (VIND), that is able to uncover nonlinear observation and latent dynamics from sequential data. The framework includes a structured approximate posterior, and an algorithm that relies on the fixed-point iteration method to find the best estimate for latent trajectories. We apply the method to several datasets and show that it is able to accurately infer the underlying dynamics of these systems, in some cases substantially outperforming state-of-the-art methods. | rejected-papers | The reviewers in general like the paper but have serious reservations regarding its relation to other work (novelty) and clarity of presentation. Given that non-linear state-space models are a crowded field, it is perhaps better that these points are dealt with first and the paper then submitted elsewhere. | train | [
"Sk1hV4YRQ",
"rye_9NEY0Q",
"HkxxIgNtAm",
"BkgfAJVKAQ",
"Skl3Q5xYAQ",
"rJgBQzFjnX",
"Bygvx6sc2X",
"rJe98VxNhm"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"5.- \"...exhaustive search is used for finding dimension of latent variable. ... non-parametric approaches to find best latent dimension, .... same technique could be adopted .....\"\n \nThis is a very interesting idea. In this paper the datasets considered were small enough that performing a simple exhaustive se... | [
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"rJgBQzFjnX",
"rJgBQzFjnX",
"Bygvx6sc2X",
"Bygvx6sc2X",
"rJe98VxNhm",
"iclr_2019_SJMO2iCct7",
"iclr_2019_SJMO2iCct7",
"iclr_2019_SJMO2iCct7"
] |
iclr_2019_SJMZRsC9Y7 | A NON-LINEAR THEORY FOR SENTENCE EMBEDDING | This paper revisits the Random Walk model for sentence embedding in the context of non-extensive statistics. We propose a non-extensive algebra to compute the discourse vector. We argue that by doing so we are taking into account high non-linearity in the semantic space. Furthermore, we show that by considering a non-extensive algebra, the compounding effect of the vector length is mitigated. Overall, we show that the proposed model leads to good sentence embedding. We evaluate the embedding method on textual similarity tasks. | rejected-papers | The paper is poorly written and below the bar of ICLR. The paper could be improved with better exposition and stronger experimental results (or clearer exposition of the experimental results).
| train | [
"HkxvJlI1aX",
"rJgV3Kf_2Q",
"B1gXRA-u37"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: this paper discussed an incremental improvement over the Random Walk based model for sentence embedding.\n\nConclusion: this paper is not ready for publication, very poor written and well below the bar of ICLR-caliber papers. \n\nMore:\nThis paper spent the majority of its content explaining background (t... | [
3,
3,
3
] | [
3,
3,
4
] | [
"iclr_2019_SJMZRsC9Y7",
"iclr_2019_SJMZRsC9Y7",
"iclr_2019_SJMZRsC9Y7"
] |
iclr_2019_SJMeTo09YQ | Guided Exploration in Deep Reinforcement Learning | This paper proposes a new method to drastically speed up deep reinforcement learning (deep RL) training for problems that have the property of \textit{state-action permissibility} (SAP). Two types of permissibility are defined under SAP. The first type says that after an action a_t is performed in a state s_t and the agent reaches the new state s_{t+1}, the agent can decide whether the action a_t is \textit{permissible} or \textit{not permissible} in state s_t. The second type says that even without performing the action a_t in state s_t, the agent can already decide whether a_t is permissible or not in s_t. An action is not permissible in a state if the action can never lead to an optimal solution and thus should not be tried. We incorporate the proposed SAP property into two state-of-the-art deep RL algorithms to guide their state-action exploration. Results show that the SAP guidance can markedly speed up training. | rejected-papers | The paper presents a simple and interesting idea to improve exploration efficiency, using the notion of action permissibility. Experiments in two problems (lane keeping, and flappy bird) show that exploration can be improved over baselines like DQN and DDPG. However, action permissibility appears to be very strong domain knowledge that limits the use in complex problems.
Rephrasing one of the reviewers: action permissibility essentially implies that some one-step information can be used to rule out suboptimal actions, while a defining challenge in RL is that the agent needs to learn/plan/reason over multiple steps to decide whether an action is suboptimal or not. Indeed, the two problems in the experiments have the property that a myopic agent can solve the tasks pretty well. The paper would be stronger if the AP function could be defined for more common RL benchmarks, with similar benefits demonstrated.
"rJlX0bI5CX",
"SyxUxFjuAQ",
"HJgayKo_Am",
"Skg0LCPkRQ",
"BJgm5hW6nm",
"HJefUYx9nQ",
"HyeHZTbdn7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have revised our paper following your comments and addressed your concerns in the revised version. Please consider the revised version as a reference to our responses. \n\nWe thank you for reviewing our work and providing valuable feedbacks.\n",
"We thank you for your valuable comments. Please find our respon... | [
-1,
-1,
-1,
-1,
7,
5,
3
] | [
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"iclr_2019_SJMeTo09YQ",
"HJefUYx9nQ",
"BJgm5hW6nm",
"HyeHZTbdn7",
"iclr_2019_SJMeTo09YQ",
"iclr_2019_SJMeTo09YQ",
"iclr_2019_SJMeTo09YQ"
] |
iclr_2019_SJMnG2C9YX | Complementary-label learning for arbitrary losses and models | In contrast to the standard classification paradigm where the true (or possibly noisy) class is given to each training pattern, complementary-label learning only uses training patterns each equipped with a complementary label. This only specifies one of the classes that the pattern does not belong to. The seminal paper on complementary-label learning proposed an unbiased estimator of the classification risk that can be computed only from complementarily labeled data. However, it required a restrictive condition on the loss functions, making it impossible to use popular losses such as the softmax cross-entropy loss. Recently, another formulation with the softmax cross-entropy loss was proposed with consistency guarantee. However, this formulation does not explicitly involve a risk estimator. Thus model/hyper-parameter selection is not possible by cross-validation— we may need additional ordinarily labeled data for validation purposes, which is not available in the current setup. In this paper, we give a novel general framework of complementary-label learning, and derive an unbiased risk estimator for arbitrary losses and models. We further improve the risk estimator by non-negative correction and demonstrate its superiority through experiments. | rejected-papers | The paper studies learning from complementary labels – the setting in which an example comes with the label information about one of the classes that the example does not belong to. The paper's core contribution is an unbiased risk estimator for arbitrary losses and models under this learning scenario, which is an improvement over the previous work, as rightly acknowledged by R1 and R2.
The reviewers and AC note the following potential weaknesses: (1) R3 raised an important concern that the core technical contribution is a special case of a previously published, more general framework that is not cited in the paper; the authors agree with R3 on this matter; (2) the proposed unbiased estimator is not practical, e.g., it leads to overfitting when the cross-entropy loss is used and is unbounded from below, as pointed out by R1; (3) the two proposed modifications of the unbiased estimator are biased estimators, which defeats the motivation of the work and limits its main technical contributions; (4) R2 rightly pointed out that the assumption that the complementary label is selected uniformly at random is unrealistic – see R2’s suggestions on how to address this issue.
While all the reviewers acknowledged that the proposed biased estimators show advantageous performance in practice, the AC decides that in its current state the paper does not present significant contributions over the prior work, given (1)-(3), and needs major revision before submitting for another round of reviews.
| val | [
"HyeaZMtYRX",
"SJe6JGKF0Q",
"HJxJsWYtCX",
"SkeJ7R_m6X",
"B1gDnQhq3X",
"rJedhpHq3X"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for the insightful reviews!\n\nQ) Theorem 1 and derived loss are special cases of more general framework published in prior work.\nA) Thank you very much for pointing this out and explaining the relationship between our paper. We would like to clarify our contributions carefully and explain th... | [
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
3,
4,
4
] | [
"rJedhpHq3X",
"B1gDnQhq3X",
"SkeJ7R_m6X",
"iclr_2019_SJMnG2C9YX",
"iclr_2019_SJMnG2C9YX",
"iclr_2019_SJMnG2C9YX"
] |
iclr_2019_SJNRHiAcYX | Boosting Trust Region Policy Optimization by Normalizing flows Policy | We propose to improve trust region policy search with normalizing flows policy. We illustrate that when the trust region is constructed by KL divergence constraint, normalizing flows policy can generate samples far from the 'center' of the previous policy iterate, which potentially enables better exploration and helps avoid bad local optima. We show that normalizing flows policy significantly improves upon factorized Gaussian policy baseline, with both TRPO and ACKTR, especially on tasks with complex dynamics such as Humanoid. | rejected-papers | This work proposes to improve trust region policy search (TRPO) by using normalizing flow policies. This idea is a straightforward combination of two existing techniques and is not particularly surprising in terms of novelty. In this case, really strong experiments are needed to support the work; this, unfortunately, is not the case. For example, it was noticed by the reviewers that the MuJoCo TRPO experiments do not use the best implementation of TRPO, which makes it difficult to judge the strength of the work compared with the state of the art.
"SJxd2NNKhX",
"HklshWdUp7",
"rkxmRGW86X",
"HJe1jGZ86m",
"rkeZNVXihX",
"B1eWSaTuhX"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper generalizes basic policy gradient methods by replacing the original Gaussian or Gaussian mixture policy with a normalizing flow policy, which is defined by a sequence of invertible transformations from a base policy.\n\nAlthough the concept of normalizing flow is simple, and it has been applied to other... | [
4,
-1,
-1,
-1,
6,
4
] | [
4,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_SJNRHiAcYX",
"rkeZNVXihX",
"B1eWSaTuhX",
"SJxd2NNKhX",
"iclr_2019_SJNRHiAcYX",
"iclr_2019_SJNRHiAcYX"
] |
iclr_2019_SJNceh0cFX | A RECURRENT NEURAL CASCADE-BASED MODEL FOR CONTINUOUS-TIME DIFFUSION PROCESS | Many works have been proposed in the literature to capture the dynamics of diffusion in networks. While some of them define graphical markovian models to extract temporal relationships between node infections in networks, others consider diffusion episodes as sequences of infections via recurrent neural models. In this paper we propose a model at the crossroads of these two extremes, which embeds the history of diffusion in infected nodes as hidden continuous states. Depending on the trajectory followed by the content before reaching a given node, the distribution of influence probabilities may vary. However, content trajectories are usually hidden in the data, which induces challenging learning problems. We propose a topological recurrent neural model which exhibits good experimental performances for diffusion modelling and prediction. | rejected-papers | This paper introduces a recurrent neural network approach for learning diffusion dynamics in networks. The main advantage is that it embeds the history of diffusion and incorporates the structure of independent cascades for diffusion modeling and prediction. This is an important problem, and the proposed approach is novel and provides some empirical improvements.
However, there is a lack of theoretical analysis, and in particular modeling choices and consequences of these choices should be emphasized more clearly. While there wasn't a consensus, a majority of the reviewers believe the paper is not ready for publication.
| train | [
"HJl6vZ-EJV",
"Skx3-KvOC7",
"B1xiMN_dAm",
"B1gDcrvdCm",
"HJgQO5qAnQ",
"SylvLhvph7",
"SkeyBm0VnQ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"To summary, the main changes in the revision are the following: \n\n - A new evaluation measure on artificial datasets which considers the rate of correct choices of infectors. This highlights the good ability of our method to discover the paths of diffusion;\n\n - New experiments on an additional artificial... | [
-1,
-1,
-1,
-1,
7,
4,
4
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2019_SJNceh0cFX",
"SylvLhvph7",
"SkeyBm0VnQ",
"HJgQO5qAnQ",
"iclr_2019_SJNceh0cFX",
"iclr_2019_SJNceh0cFX",
"iclr_2019_SJNceh0cFX"
] |
iclr_2019_SJe2so0qF7 | Learning data-derived privacy preserving representations from information metrics | It is clear that users should own and control their data and privacy. Utility providers are also becoming more interested in guaranteeing data privacy. Therefore, users and providers can and should collaborate in privacy protecting challenges, and this paper addresses this new paradigm. We propose a framework where the user controls what characteristics of the data they want to share (utility) and what they want to keep private (secret), without necessarily asking the utility provider to change its existing machine learning algorithms. We first analyze the space of privacy-preserving representations and derive natural information-theoretic bounds on the utility-privacy trade-off when disclosing a sanitized version of the data X. We present explicit learning architectures to learn privacy-preserving representations that approach this bound in a data-driven fashion. We describe important use-case scenarios where the utility providers are willing to collaborate with the sanitization process. We study space-preserving transformations where the utility provider can use the same algorithm on original and sanitized data, a critical and novel attribute to help service providers accommodate varying privacy requirements with a single set of utility algorithms. We illustrate this framework through the implementation of three use cases; subject-within-subject, where we tackle the problem of having a face identity detector that works only on a consenting subset of users, an important application, for example, for mobile devices activated by face recognition; gender-and-subject, where we preserve facial verification while hiding the gender attribute for users who choose to do so; and emotion-and-gender, where we hide independent variables, as is the case of hiding gender while preserving emotion detection. 
| rejected-papers | This paper addresses data sanitization, using a KL-divergence-based notion of privacy. While an interesting goal, the use of average-case as opposed to worst-case privacy misses the point of privacy guarantees, which must protect all individuals. (Otherwise, individuals with truly anomalous private values may be the only ones who opt for the highest levels of privacy, yet this situation will itself leak some information about their private values).
| train | [
"Skg-3CDukE",
"Syxle0BY3Q",
"SJlJvdaSCm",
"rJlYm_aSA7",
"Hyl2M7THCX",
"BylZF46rCm",
"H1xAaMTSR7",
"HJlxjQO6nQ",
"BJgbp4wT2Q"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewer for their comments on the updated manuscript.\n\nWe addressed the typo in Equation 20.\n\nRegarding the comments on stronger privacy guarantees:\n\nIndeed we would like to get tighter and stronger guarantees, this is a starting point. We do feel that the type of privacy concerns... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"Syxle0BY3Q",
"iclr_2019_SJe2so0qF7",
"rJlYm_aSA7",
"Syxle0BY3Q",
"H1xAaMTSR7",
"BJgbp4wT2Q",
"HJlxjQO6nQ",
"iclr_2019_SJe2so0qF7",
"iclr_2019_SJe2so0qF7"
] |
iclr_2019_SJe8DsR9tm | Dynamic Early Terminating of Multiply Accumulate Operations for Saving Computation Cost in Convolutional Neural Networks | Deep learning has been attracting enormous attention from academia as well as industry due to its great success in many artificial intelligence applications. As more applications are developed, the need for implementing a complex neural network model on an energy-limited edge device becomes more critical. To this end, this paper proposes a new optimization method to reduce the computation efforts of convolutional neural networks. The method takes advantage of the fact that some convolutional operations are actually wasteful since their outputs are pruned by the following activation or pooling layers. Basically, a convolutional filter conducts a series of multiply-accumulate (MAC) operations. We propose to set a checkpoint in the MAC process to determine whether a filter could terminate early based on the intermediate result. Furthermore, a fine-tuning process is conducted to recover the accuracy drop due to the applied checkpoints. The experimental results show that the proposed method can save approximately 50% MAC operations with less than 1% accuracy drop for CIFAR-10 example model and Network in Network on the CIFAR-10 and CIFAR-100 datasets. Additionally, compared with the state-of-the-art method, the proposed method is more effective on the CIFAR-10 dataset and is competitive on the CIFAR-100 dataset. | rejected-papers | This paper proposes a new method for speeding up convolutional neural networks. It uses the idea of early terminating the computation of convolutional layers. It saves FLOPs, but the reviewers raised a critical concern that it doesn't save wall-clock time. The time overhead is about 4 to 5 times that of the original model; rather than reducing execution time, the method makes it much longer. The authors agreed that "the overhead on the inference time is certainly an issue of our method". 
The work is not mature or practical. Recommend rejection. | val | [
"SkeBAKJryE",
"H1x3X_mupX",
"SyxBwHgxl4",
"SJgC-tiFR7",
"S1ebtiJZC7",
"ryeuPukWRm",
"SJleS7y-0Q",
"ryezAoybCX",
"SkxRHIDl0X",
"rylgW2I53m",
"ryxtgdYQnQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the explanation and extra experiments.\n\nFrom your time measurement, it seems that the time overhead is about 4 or 5 times of the original model. I am not sure if I understand the results correctly. But from my understanding, it seems that there is not any reduced execution time but much longer. So it ... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"SJleS7y-0Q",
"iclr_2019_SJe8DsR9tm",
"SkeBAKJryE",
"iclr_2019_SJe8DsR9tm",
"rylgW2I53m",
"rylgW2I53m",
"H1x3X_mupX",
"rylgW2I53m",
"ryxtgdYQnQ",
"iclr_2019_SJe8DsR9tm",
"iclr_2019_SJe8DsR9tm"
] |
iclr_2019_SJeFNoRcFQ | Traditional and Heavy Tailed Self Regularization in Neural Network Models | Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet. Empirical and theoretical results clearly indicate that the empirical spectral density (ESD) of DNN layer matrices displays signatures of traditionally-regularized statistical models, even in the absence of exogenously specifying traditional forms of regularization, such as Dropout or Weight Norm constraints. Building on recent results in RMT, most notably its extension to Universality classes of Heavy-Tailed matrices, we develop a theory to identify 5+1 Phases of Training, corresponding to increasing amounts of Implicit Self-Regularization. For smaller and/or older DNNs, this Implicit Self-Regularization is like traditional Tikhonov regularization, in that there is a "size scale" separating signal from noise. For state-of-the-art DNNs, however, we identify a novel form of Heavy-Tailed Self-Regularization, similar to the self-organization seen in the statistical physics of disordered systems. This implicit Self-Regularization can depend strongly on the many knobs of the training process. By exploiting the generalization gap phenomena, we demonstrate that we can cause a small model to exhibit all 5+1 phases of training simply by changing the batch size. | rejected-papers | While it appears that the authors have done significant amount of work to investigate this topic, there are concerns that the theorems are not rigorously/precisely presented, and it is unclear how they can guide the design and training of neural network models in practice. The response and revision of the authors do not provide sufficient materials to address these concerns. | train | [
"SyxRrIzX0X",
"H1gn4x1MaX",
"Syx_tA0-6X",
"r1lZZNN5nQ",
"rkgsDBCZsQ",
"S1gvacTwiX",
"r1xqJr5IjQ"
] | [
"author",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"public",
"public"
] | [
" \nThanks to the two positive reviewers who were willing to write their names.\n\nRegarding AnonReviewer3, we will simply point to the rebuke from anonymous \"Response to AnonReviewer3\". We could not have said it better ourselves.\n\nActually, some of these comments also are relevant for AnonReviewer4.\n\nAnonRe... | [
-1,
4,
-1,
4,
6,
-1,
-1
] | [
-1,
4,
-1,
1,
5,
-1,
-1
] | [
"iclr_2019_SJeFNoRcFQ",
"iclr_2019_SJeFNoRcFQ",
"r1lZZNN5nQ",
"iclr_2019_SJeFNoRcFQ",
"iclr_2019_SJeFNoRcFQ",
"iclr_2019_SJeFNoRcFQ",
"iclr_2019_SJeFNoRcFQ"
] |
iclr_2019_SJeT_oRcY7 | Localized random projections challenge benchmarks for bio-plausible deep learning | Similar to models of brain-like computation, artificial deep neural networks rely on distributed coding, parallel processing and plastic synaptic weights. Training deep neural networks with the error-backpropagation algorithm, however, is considered bio-implausible. An appealing alternative to training deep neural networks is to use one or a few hidden layers with fixed random weights or trained with an unsupervised, local learning rule and train a single readout layer with a supervised, local learning rule. We find that a network of leaky-integrate-and-fire neurons with fixed random, localized receptive fields in the hidden layer and spike timing dependent plasticity to train the readout layer achieves 98.1% test accuracy on MNIST, which is close to the optimal result achievable with error-backpropagation in non-convolutional networks of rate neurons with one hidden layer. To support the design choices of the spiking network, we systematically compare the classification performance of rate networks with a single hidden layer, where the weights of this layer are either random and fixed, trained with unsupervised Principal Component Analysis or Sparse Coding, or trained with the backpropagation algorithm. This comparison revealed, first, that unsupervised learning does not lead to better performance than fixed random projections for large hidden layers on digit classification (MNIST) and object recognition (CIFAR10); second, networks with random projections and localized receptive fields perform significantly better than networks with all-to-all connectivity and almost reach the performance of networks trained with the backpropagation algorithm. The performance of these simple random projection networks is comparable to most current models of bio-plausible deep learning and thus provides an interesting benchmark for future approaches. | rejected-papers | This paper presents a biologically plausible architecture and learning algorithm for deep neural networks. The authors then go on to show that the proposed approach achieves competitive results on the MNIST dataset. In general, the reviewers found that the paper was well written and the motivation compelling. However, they were not convinced by the experiments, analysis or comparison to existing literature. In particular, they did not find MNIST to be a particularly interesting problem and had questions about the novelty of this approach over past literature. Perhaps the paper would be more impactful and convincing if the authors demonstrated competitive performance on a more challenging problem (e.g. machine translation, speech recognition or imagenet) using a biologically plausible approach. | train | [
"SJgNksTdA7",
"HJg-gdzNAm",
"rkgphPzER7",
"HJxddwGN0m",
"ByeBD7ugTX",
"rkgtvIeoh7",
"B1eOmToK37"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"1. \" MNIST is not suitable for evaluating deep learning capabilities since it can be solved by our simple system.\"\nAgreed, but I think this was already clear before your benchmark and I think the paper does not elaborate enough on this point if this is supposed to be an important finding of the paper. \n\n3. \n... | [
-1,
-1,
-1,
-1,
5,
3,
3
] | [
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"rkgphPzER7",
"ByeBD7ugTX",
"B1eOmToK37",
"rkgtvIeoh7",
"iclr_2019_SJeT_oRcY7",
"iclr_2019_SJeT_oRcY7",
"iclr_2019_SJeT_oRcY7"
] |
iclr_2019_SJeUAj05tQ | DADAM: A consensus-based distributed adaptive gradient method for online optimization | Online and stochastic optimization methods such as SGD, ADAGRAD and ADAM are key algorithms in solving large-scale machine learning problems including deep learning. A number of schemes that are based on communications of nodes with a central server have been recently proposed in the literature to parallelize them. A bottleneck of such centralized algorithms lies in the high communication cost incurred by the central node. In this paper, we present a new consensus-based distributed adaptive moment estimation method (DADAM) for online optimization over a decentralized network that enables data parallelization, as well as decentralized computation. Such a framework not only can be extremely useful for learning agents with access to only local data in a communication-constrained environment, but, as shown in this work, can also outperform centralized adaptive algorithms such as ADAM for certain realistic classes of loss functions. We analyze the convergence properties of the proposed algorithm and provide a \textit{dynamic regret} bound on the convergence rate of adaptive moment estimation methods in both stochastic and deterministic settings. Empirical results demonstrate that DADAM works well in practice and compares favorably to competing online optimization methods. | rejected-papers | The paper provides a distributed optimization method, applicable to decentralized computation while retaining provable guarantees. This was a borderline paper and a difficult decision.
The proposed algorithm is straightforward (a compliment), showing how adaptive optimization algorithms can still be coordinated in a distributed fashion. The theoretical analysis is interesting, but additional assumptions about the mixing are needed to reach clear conclusions: for example, such assumptions are required to demonstrate potential advantages over non-distributed adaptive optimization algorithms.
The initial version of the paper was unfortunately sloppy, with numerous typographical errors. More importantly, some key relevant literature was not cited:
- Duchi, John C., Alekh Agarwal, and Martin J. Wainwright. "Dual averaging for distributed optimization: Convergence analysis and network scaling." IEEE Transactions on Automatic control 57.3 (2012): 592-606.
In addition to citing this work, this and the other related works need to be discussed in relation to the proposed approach earlier in the paper, as suggested by Reviewer 3.
There was disagreement between the reviewers in the assessment of this paper. Generally the dissenting reviewer produced the highest quality assessment. This paper is on the borderline, however given the criticisms raised it would benefit from additional theoretical strengthening, improved experimental reporting, and better framing with respect to the existing literature. | train | [
"Ske0nAkFe4",
"H1x6qqZtx4",
"HylSyjA0A7",
"rkxtNFr90X",
"BJggSWZaTQ",
"HJxViZZa6X",
"HygPwzZapm",
"B1lJXkO8CX",
"ryxfxIFE0X",
"Skl0E4qGCQ",
"ryeU9QY627",
"H1e-5sxy0Q",
"r1xPATVAam",
"HJldj5nM2m",
"HJgQoTS-2m"
] | [
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"official_reviewer",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"Again, thank you for your valuable feedback. \n\n\nComments 1-1, 1-2 and 1-3) [Design of mixing matrix $W$]\n\nThere are several designs for network matrix $W$ in [BPX04], [TLR12] and [SLWY15]. In earlier papers [TLR12,N015], the role of network constraints on the consensus-based distributed optimization has been... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
4,
4
] | [
"ryxfxIFE0X",
"Ske0nAkFe4",
"B1lJXkO8CX",
"ryeU9QY627",
"HJldj5nM2m",
"HJldj5nM2m",
"HJgQoTS-2m",
"Skl0E4qGCQ",
"BJggSWZaTQ",
"H1e-5sxy0Q",
"iclr_2019_SJeUAj05tQ",
"r1xPATVAam",
"iclr_2019_SJeUAj05tQ",
"iclr_2019_SJeUAj05tQ",
"iclr_2019_SJeUAj05tQ"
] |
iclr_2019_SJekyhCctQ | Detecting Adversarial Examples Via Neural Fingerprinting | Deep neural networks are vulnerable to adversarial examples: input data that has been manipulated to cause dramatic model output errors. To defend against such attacks, we propose NeuralFingerprinting: a simple, yet effective method to detect adversarial examples that verifies whether model behavior is consistent with a set of fingerprints. These fingerprints are encoded into the model response during training and are inspired by the use of biometric and cryptographic signatures. In contrast to previous defenses, our method does not rely on knowledge of the adversary and can scale to large networks and input data. The benefits of our method are that 1) it is fast, 2) it is prohibitively expensive for an attacker to reverse-engineer which fingerprints were used, and 3) it does not assume knowledge of the adversary. In this work, we 1) theoretically analyze NeuralFingerprinting for linear models and 2) show that NeuralFingerprinting significantly improves on state-of-the-art detection mechanisms for deep neural networks, by detecting the strongest known adversarial attacks with 98-100% AUC-ROC scores on the MNIST, CIFAR-10 and MiniImagenet (20 classes) datasets. In particular, we consider several threat models, including the most conservative one in which the attacker has full knowledge of the defender's strategy. In all settings, the detection accuracy of NeuralFingerprinting generalizes well to unseen test-data and is robust over a wide range of hyperparameters. | rejected-papers | * Strengths
The paper proposes a novel and interesting method for detecting adversarial examples, which has the advantage of being based on general “fingerprint statistics” of a model and is not restricted to any specific threat model (in contrast to much of the work in the area which is restricted to adversarial examples in some L_p norm ball). The writing is clear and the experiments are extensive.
* Weaknesses
The experiments are thorough. However, they contain a subtle but important flaw. During discussion it was revealed that the attacks used to evaluate the method fail to reduce accuracy even at large values of epsilon where there are simple adversarial attacks that should reduce the accuracy to zero. This casts doubt on whether the attacks at small values of epsilon really are providing a good measure of the method’s robustness.
* Discussion
There was substantial disagreement about the paper, with R1 feeling that the evaluation issues were serious enough to merit rejection and R3 feeling that they were not a large issue. In discussion with me, both R1 and R3 agreed that if an attack were demonstrated to break the method, that would be grounds for rejection. They also both agreed that there probably is an attack that breaks the method. A potential key difference is that R3 thinks this might be quite difficult to find and so merits publishing the paper to motivate stronger attacks.
I ultimately agree with R1 that the evaluation issues are indeed serious. One reason for this is that there is by now a long record of adversarial defense papers posting impressive numbers that are often invalidated within a short period (often less than a month or so) of the paper being published. The “Obfuscated Gradients” paper of Athalye, Carlini, and Wagner suggests several basic sanity checks to help avoid this. One of the sanity checks (which the present paper fails) is to test that attacks work when epsilon is large. This is not an arbitrary test but gets at a key issue---any given attack provides only an *upper bound* on the worst-case accuracy of a method. For instance, if an attack only brings the accuracy of a method down to 80% at epsilon=1 (when we know the true accuracy should be 0%), then at epsilon=0.01 we know that the measured accuracy of the attack comes 80% from the over-optimistic accuracy at epsilon=1 and at most 20% from the true accuracy at epsilon=0.01. If the measured accuracy at epsilon=1 is close to 100%, then accuracy at lower values of epsilon provides basically no information. This means that the experiments as currently performed give no information about the true accuracy of the method, which is a serious issue that the authors should address before the paper can be accepted. | train | [
"HJxNopwee4",
"ryxY5IDex4",
"ByeYgX16JN",
"S1xVUS0hJ4",
"SyxblBd2yV",
"BkgSEnvn1V",
"Hyg3sqeSyE",
"H1xcUXJNkE",
"Hyeh8SgMy4",
"HygINeIsRm",
"ryxlKRJ60X",
"SkxcMTJ60Q",
"r1xCwpgqAQ",
"rJxPKmksCQ",
"B1gGtTl9RX",
"BkljP8wFRQ",
"rylIKYfn2m",
"HkgIwaDrRX",
"ByeJoMCmCX",
"Hyx_7PKyT7"... | [
"author",
"public",
"author",
"public",
"author",
"public",
"author",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"official_reviewer",
"author",
"officia... | [
"The SPSA, FGSM, PGD, BIM attacks are untargeted. This is already mentioned in the paper in the section on experiments. Please refer to the paper (end of page 7) and code for details. \n",
"I found that there's no mention about if these attack are targeted or untargeted but I guess they're all targeted. I wonder ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"ryxY5IDex4",
"iclr_2019_SJekyhCctQ",
"S1xVUS0hJ4",
"SyxblBd2yV",
"BkgSEnvn1V",
"iclr_2019_SJekyhCctQ",
"H1xcUXJNkE",
"BkljP8wFRQ",
"rylIKYfn2m",
"iclr_2019_SJekyhCctQ",
"SkxcMTJ60Q",
"iclr_2019_SJekyhCctQ",
"iclr_2019_SJekyhCctQ",
"S1gxPy_ham",
"r1xCwpgqAQ",
"Skldi0wrhm",
"iclr_2019... |
iclr_2019_SJequsCqKQ | Cautious Deep Learning | Most classifiers operate by selecting the maximum of an estimate of the conditional distribution p(y|x) where x stands for the features of the instance to be classified and y denotes its label. This often results in a hubristic bias: overconfidence in the assignment of a definite label. Usually, the observations are concentrated on a small volume but the classifier provides definite predictions for the entire space. We propose constructing conformal prediction sets which contain a set of labels rather than a single label. These conformal prediction sets contain the true label with probability 1−α. Our construction is based on p(x|y) rather than p(y|x) which results in a classifier that is very cautious: it outputs the null set --- meaning ``I don't know'' --- when the object does not resemble the training examples. An important property of our approach is that classes can be added or removed without having to retrain the classifier. We demonstrate the performance on the ImageNet ILSVRC dataset and the CelebA and IMDB-Wiki facial datasets using high dimensional features obtained from state of the art convolutional neural networks. | rejected-papers | The paper presents a conformal prediction approach to supervised classification, with the goal of reducing the overconfidence of standard soft-max learning techniques. The proposal is based on previously published methods, which are extended for use with deep learning predictors. Empirical evaluation suggests the proposal results in competitive performance. This work seems to be timely, and the topic is of interest to the community.
The reviewers' and the AC's opinions were mixed, with reviewers either being unconvinced about the novelty of the proposed work or expressing concerns about the strength of the empirical evidence supporting the claims. Additional experiments would significantly strengthen this submission. | train | [
"S1x15ydTAm",
"H1e6pi52am",
"H1lrct9hTQ",
"SyxlLFc3T7",
"r1l-TOq2pm",
"HJewgjWRnm",
"HkgpF_ehhX",
"BJelopbHn7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
">> We believe current experiments better exemplify the method contribution. \n\nThis is precisely the problem I am having. The experiments are anecdotal (except the gender recognition experiment) and competing methods are working on top of different feature sets. This creates a question in my mind as to whether th... | [
-1,
-1,
-1,
-1,
-1,
4,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
2,
5
] | [
"SyxlLFc3T7",
"HkgpF_ehhX",
"BJelopbHn7",
"HJewgjWRnm",
"iclr_2019_SJequsCqKQ",
"iclr_2019_SJequsCqKQ",
"iclr_2019_SJequsCqKQ",
"iclr_2019_SJequsCqKQ"
] |
iclr_2019_SJerEhR5Km | Novel positional encodings to enable tree-structured transformers | With interest in program synthesis and similarly flavored problems rapidly increasing, neural models optimized for tree-domain problems are of great value. In the sequence domain, transformers can learn relationships across arbitrary pairs of positions with less bias than recurrent models. Under the intuition that a similar property would be beneficial in the tree domain, we propose a method to extend transformers to tree-structured inputs and/or outputs. Our approach abstracts transformer's default sinusoidal positional encodings, allowing us to substitute in a novel custom positional encoding scheme that represents node positions within a tree. We evaluated our model in tree-to-tree program translation and sequence-to-tree semantic parsing settings, achieving superior performance over the vanilla transformer model on several tasks.
 | rejected-papers | This paper extends the transformer model of Vaswani et al. by replacing the sine/cosine positional encodings with information reflecting the tree structure of appropriately parsed data. According to the reviews, the paper, while interesting, does not make the cut. My concern here is that the quality of the reviews, in particular those of reviewers 2 and 3, is very subpar. They lack detail (or, in the case of R2, did so until 05 Dec(!!)), and the reviewers did not engage much (or at all) in the subsequent discussion period despite repeated reminders. Infuriatingly, this puts a lot of work squarely in the lap of the AC: if the review process fails the authors, I cannot make a decision on the basis of shoddy reviews and nonexistent discussion! Clearly, as this is not the fault of the authors, the best I can offer is to properly read through the paper and reviews, and attempt to make a fair assessment.
Having done so, I conclude that while interesting, I agree with the sentiment expressed in the reviews that the paper is very incremental. In particular, the points of comparison are quite limited and it would have been good to see a more thorough comparison across a wider range of tasks with some more contemporary baselines. Papers like Melis et al. 2017 have shown us that an endemic issue throughout language modelling (and certainly also other evaluation areas) is that complex model improvements are offered without comparison against properly tuned baselines and benchmarks, failing to offer assurances that the baselines would not match performance of the proposed model with proper regularisation. As some of the reviewers noted, the scope of comparison to prior art in this paper is extremely limited, as is the bibliography, which opens up the concern I've just outlined: that it's difficult to take the results with the confidence they require. In short, my assessment, on the basis of reading the paper and reviews, is that the main failing of this paper is the lack of breadth and depth of evaluation, not that it is incremental (as many good ideas are). I'm afraid this paper is not ready for publication at this time, and am sorry the authors will have had a sub-par review process, but I believe it's in the best interest of this work to encourage the authors to further evaluate their approach before publishing it in conference proceedings. | train | [
"SyehfLgq37",
"rygwHC_c0m",
"r1eRq6OcCm",
"HklUr6u50Q",
"SkgZrsYu2X",
"HJgu4gdM6Q",
"SJe_mn_x67",
"B1xJTjdxpX",
"Ske48oug67",
"BJeiRhrpn7"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper describes an interesting idea for using Vashwani's transformer with tree-structured data, where nodes' positions in the tree are encoded using unique affine transformations. They test the idea in several program translation tasks, and find small-to-medium improvements in performance. \n\nOverall the idea... | [
4,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
5
] | [
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_SJerEhR5Km",
"SkgZrsYu2X",
"SyehfLgq37",
"HJgu4gdM6Q",
"iclr_2019_SJerEhR5Km",
"Ske48oug67",
"SkgZrsYu2X",
"SyehfLgq37",
"BJeiRhrpn7",
"iclr_2019_SJerEhR5Km"
] |
iclr_2019_SJf6BhAqK7 | Variadic Learning by Bayesian Nonparametric Deep Embedding | Learning at small or large scales of data is addressed by two strong but divided frontiers: few-shot learning and standard supervised learning. Few-shot learning focuses on sample efficiency at small scale, while supervised learning focuses on accuracy at large scale. Ideally they could be reconciled for effective learning at any number of data points (shot) and number of classes (way). To span the full spectrum of shot and way, we frame the variadic learning regime of learning from any number of inputs. We approach variadic learning by meta-learning a novel multi-modal clustering model that connects bayesian nonparametrics and deep metric learning. Our bayesian nonparametric deep embedding (BANDE) method is optimized end-to-end with a single objective, and adaptively adjusts capacity to learn from variable amounts of supervision. We show that multi-modality is critical for learning complex classes such as Omniglot alphabets and carrying out unsupervised clustering. We explore variadic learning by measuring generalization across shot and way between meta-train and meta-test, show the first results for scaling from few-way, few-shot tasks to 1692-way Omniglot classification and 5k-shot CIFAR-10 classification, and find that nonparametric methods generalize better than parametric methods. On the standard few-shot learning benchmarks of Omniglot and mini-ImageNet, BANDE equals or improves on the state-of-the-art for semi-supervised classification. | rejected-papers | All reviewers wrote strong and long reviews with good feedback but do not believe the work is currently ready for publication.
I encourage the authors to update and resubmit.
| train | [
"B1lb42diAX",
"H1xqRlNo0m",
"BkgRXxNoR7",
"ByeUA1EsA7",
"ryx26SYORm",
"S1xCeCI9nm",
"rylxGxO_A7",
"BkxOEjwOA7",
"HyxmaXDd0Q",
"rJgo9Hiuam",
"BJxN_rsOT7",
"SyltxBsuTm",
"HJxdpNodTX",
"B1eAoVj_6m",
"S1xc8-Uchm",
"rJeDOoul2m"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you to all reviewers for useful feedback on the submission. We have posted a revision with the following changes:\n\nMethod:\nOverall, we edited the method section to make the algorithm more clear, give a clearer introduction to meta-learning and episodic optimization, and better delineate our contributions ... | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"iclr_2019_SJf6BhAqK7",
"ryx26SYORm",
"BkxOEjwOA7",
"rylxGxO_A7",
"rJgo9Hiuam",
"iclr_2019_SJf6BhAqK7",
"B1eAoVj_6m",
"HJxdpNodTX",
"SyltxBsuTm",
"BJxN_rsOT7",
"rJeDOoul2m",
"S1xc8-Uchm",
"B1eAoVj_6m",
"S1xCeCI9nm",
"iclr_2019_SJf6BhAqK7",
"iclr_2019_SJf6BhAqK7"
] |
iclr_2019_SJfFTjA5KQ | Unification of Recurrent Neural Network Architectures and Quantum Inspired Stable Design | Various architectural advancements in the design of recurrent neural networks~(RNN) have been focusing on improving the empirical stability and representability by sacrificing the complexity of the architecture. However, more remains to be done to fully understand the fundamental trade-off between these conflicting requirements. Towards answering this question, we forsake the purely bottom-up approach of data-driven machine learning to understand, instead, the physical origin and dynamical properties of existing RNN architectures. This facilitates designing new RNNs with smaller complexity overhead and provable stability guarantee. First, we define a family of deep recurrent neural networks, n-t-ORNN, according to the order of nonlinearity n and the range of temporal memory scale t in their underlying dynamics embodied in the form of discretized ordinary differential equations. We show that most of the existing proposals of RNN architectures belong to different orders of n-t-ORNNs. We then propose a new RNN ansatz, namely the Quantum-inspired Universal computing Neural Network~(QUNN), to leverage the reversibility, stability, and universality of quantum computation for stable and universal RNN. QUNN provides a complexity reduction in the number of training parameters from being polynomial in both data and correlation time to only linear in correlation time. Compared to Long-Short-Term Memory (LSTM), QUNN of the same number of hidden layers facilitates higher nonlinearity and longer memory span with provable stability. Our work opens new directions in designing minimal RNNs based on additional knowledge about the dynamical nature of both the data and different training architectures. 
 | rejected-papers | Although the way in which the authors characterize existing RNN variants and how they derive a new type of RNN are interesting, the submission lacks justification (either empirical or theoretical) that supports whether and how the proposed RNNs behave in a "learning" setting differently from the existing RNN variants. | train | [
"Syxt_zF4pm",
"B1xSrPE7pQ",
"HJgqWQUq2m",
"BJxJkz4cn7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new framework to describe and understand the dynamics of RNNs inspired by quantum physics. The authors also propose a novel RNN architecture derived by their analysis. \n \nAlthough I found the idea quite interesting, my main concern is that the jargon used in the paper makes it hard to under... | [
5,
4,
4,
5
] | [
2,
2,
3,
2
] | [
"iclr_2019_SJfFTjA5KQ",
"iclr_2019_SJfFTjA5KQ",
"iclr_2019_SJfFTjA5KQ",
"iclr_2019_SJfFTjA5KQ"
] |
iclr_2019_SJfHg2A5tQ | BNN+: Improved Binary Network Training | Deep neural networks (DNN) are widely used in many applications. However, their deployment on edge devices has been difficult because they are resource hungry. Binary neural networks (BNN) help to alleviate the prohibitive resource requirements of DNN, where both activations and weights are limited to 1-bit. We propose an improved binary training method (BNN+), by introducing a regularization function that encourages training weights around binary values. In addition to this, to enhance model performance we add trainable scaling factors to our regularization functions. Furthermore, we use an improved approximation of the derivative of the sign activation function in the backward computation. These additions are based on linear operations that are easily implementable into the binary training framework. We show experimental results on CIFAR-10 obtaining an accuracy of 86.5%, on AlexNet and 91.3% with VGG network. On ImageNet, our method also outperforms the traditional BNN method and XNOR-net, using AlexNet by a margin of 4% and 2% top-1 accuracy respectively. | rejected-papers | The paper makes two fairly incremental contributions regarding training binarized neural networks: (1) the swish-based STE, and (2) a regularization that pushes weights to take on values in {-1, +1}. Reviewer1 and reviewer2 both pointed out concerns about the incremental contribution, the thoroughness of the evaluation, the poor clarity and consistency of the writing. Reviewer3 was muted during the discussion. Given the valid concerns from reviewer1/2, this paper is recommended for rejection. | train | [
"H1lk7FFT1E",
"SJefvMNKyN",
"HklbhU-50X",
"rJlij_ptR7",
"SJeHx3hKRQ",
"BkgIvK4Lhm",
"HJecywK_A7",
"B1gCOQY_Rm",
"HyeeWC2jh7",
"B1xXsPPZ0X",
"H1xXePv-AQ",
"S1esUOwb0X",
"rklWMOvZCm",
"SJePcEvWRm",
"SkldLNDZRX",
"S1lWV4P-CQ",
"rJl3OQvWCX",
"SygwCr4qh7",
"rkeWAp24j7",
"H1xlsA5Vjm"... | [
"author",
"public",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public",
"author",
... | [
"-----------------------------------------------------------------------------------------------------------------\n*comment: \"Since the paper (Tang, AAAI 2017, how to train a compact binary neural network with high accuracy) introduced regularization function 1-w^2, we have tried this technique to improve our per... | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"SJefvMNKyN",
"iclr_2019_SJfHg2A5tQ",
"rJlij_ptR7",
"SJeHx3hKRQ",
"BkgIvK4Lhm",
"iclr_2019_SJfHg2A5tQ",
"B1xXsPPZ0X",
"S1esUOwb0X",
"iclr_2019_SJfHg2A5tQ",
"BkgIvK4Lhm",
"BkgIvK4Lhm",
"BkgIvK4Lhm",
"BkgIvK4Lhm",
"SygwCr4qh7",
"SygwCr4qh7",
"SygwCr4qh7",
"HyeeWC2jh7",
"iclr_2019_SJf... |
iclr_2019_SJf_XhCqKm | Open Loop Hyperparameter Optimization and Determinantal Point Processes | Driven by the need for parallelizable hyperparameter optimization methods, this paper studies open loop search methods: sequences that are predetermined and can be generated before a single configuration is evaluated. Examples include grid search, uniform random search, low discrepancy sequences, and other sampling distributions.
In particular, we propose the use of k-determinantal point processes in hyperparameter optimization via random search. Compared to conventional uniform random search where hyperparameter settings are sampled independently, a k-DPP promotes diversity. We describe an approach that transforms hyperparameter search spaces for efficient use with a k-DPP. In addition, we introduce a novel Metropolis-Hastings algorithm which can sample from k-DPPs defined over any space from which uniform samples can be drawn, including spaces with a mixture of discrete and continuous dimensions or tree structure. Our experiments show significant benefits in realistic scenarios with a limited budget for training supervised learners, whether in serial or parallel. | rejected-papers | This is a very clearly written, well composed paper that does a good job of placing the proposed contribution in the scope of hyperparameter optimization techniques. This paper certainly appears to have been improved over the version submitted to the previous ICLR. In particular, the writing is much clearer and easy to follow and the methodology and experiments have been improved. The ideas are well motivated and it's exciting to see that sampling from a k-DPP can give better low discrepancy sequences than e.g. Sobol. However, the reviewers still seem to have two major concerns, namely novelty of the approach (DPPs have been used for Bayesian optimization before) and the empirical evaluation.
Empirical evaluation: As Reviewer1 notes, there are much more recent approaches for Bayesian optimization that have improved significantly over the TPE method, also for conditional parameters. There are also more recent approaches proposing variants of random search such as hyperband.
Novelty: There is some work on using determinantal point processes for Bayesian optimization and related work in optimal experimental design. Optimal design has a significant amount of literature dedicated to designing a set of experiments according to the determinant of their covariance matrix - i.e. D-Optimal Design. This work may add some interesting contributions to that literature, including fast sampling from k-DPPs, etc. It would be useful, however, to add some discussion of that literature in the paper. Jegelka and Sra's tutorial at NeurIPS on negative dependence had a nice overview of some of this literature.
Unfortunately, two of the three reviewers thought the paper was just below the borderline and none of the reviewers were willing to champion it. There are very promising and interesting ideas in the paper, however, that have a lot of potential. In the opinion of the AC, one of the most powerful aspects of DPPs over e.g. low discrepancy sequences, random search, etc. is the ability to learn a distance over a space under which samples will be diverse. This can make a search *much* more efficient since (as the authors note when discussing random search vs. grid search) the DPP can sample more densely in areas and dimensions that have higher sensitivity. It would be exciting to learn kernels specifically for hyperparameter optimization problems (e.g. a kernel specifically for learning rates that can capture e.g. logarithmic scaling). Taking the objective into account through the quality score, as proposed for future work, also seems very sensible and could significantly improve results as well. | train | [
"rklkvgsL27",
"BklWLzRl07",
"BkeUrUReAX",
"rkl41X0xC7",
"SylYFraThX",
"Bke7SBzq2m"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to use k-DPP to select a set of diverse parameters and use them to search for a good a hyperparameter setting. \n\nThis paper covers the related work nicely, with details on both closed loop and open loop methods. The rest of the paper are also clearly written. However, I have some concerns abo... | [
5,
-1,
-1,
-1,
5,
6
] | [
3,
-1,
-1,
-1,
5,
4
] | [
"iclr_2019_SJf_XhCqKm",
"SylYFraThX",
"rklkvgsL27",
"Bke7SBzq2m",
"iclr_2019_SJf_XhCqKm",
"iclr_2019_SJf_XhCqKm"
] |
iclr_2019_SJg013C5KX | Teaching to Teach by Structured Dark Knowledge | To educate hyper deep learners, \emph{Curriculum Learnings} (CLs) require either human heuristic participation or self-deciding the difficulties of training instances. These coaching manners are blind to the coherent structures among examples, categories, and tasks, which are pregnant with more knowledgeable curriculum-routed teachers. In this paper, we propose a general methodology \emph{Teaching to Teach} (T2T). T2T is facilitated by \emph{Structured Dark Knowledge} (SDK) that constitutes a communication protocol between structured knowledge prior and teaching strategies. On one hand, SDK adaptively extracts structured knowledge by selecting a training subset consistent with the previous teaching decisions. On the other hand, SDK teaches curriculum-agnostic teachers by transferring this knowledge to update their teaching policy. This virtuous cycle can be flexibly-deployed in most existing CL platforms and more importantly, very generic across various structured knowledge characteristics, e.g., diversity, complementarity, and causality. We evaluate T2T across different learners, teachers, and tasks, which significantly demonstrates that structured knowledge can be inherited by the teachers to further benefit learners' training.
 | rejected-papers | This work proposes a "teaching to teach" (T2T) method to incorporate structured prior knowledge into the teaching model for machine learning tasks. This is an interesting and timely topic. Unfortunately, among many other issues, this paper is fairly poorly written and would benefit from a significant rewrite. The authors did not provide a rebuttal, and hence we recommend rejection.
| train | [
"B1x3kuCDa7",
"H1gvqF3T3m",
"B1xgqUztnQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary\n==============\nThis paper proposes \"teaching to teach\". A drawback of existing curriculum learning approaches is that it requires a human to manually and heuristically define a curriculum. Teaching to teach avoids this using a teacher model to perform subset selection, and reweighting the data points b... | [
4,
3,
6
] | [
1,
4,
5
] | [
"iclr_2019_SJg013C5KX",
"iclr_2019_SJg013C5KX",
"iclr_2019_SJg013C5KX"
] |
iclr_2019_SJg6nj09F7 | NEURAL MALWARE CONTROL WITH DEEP REINFORCEMENT LEARNING | Antimalware products are a key component in detecting malware attacks, and their engines typically execute unknown programs in a sandbox prior to running them on the native operating system. Files cannot be scanned indefinitely so the engine employs heuristics to determine when to halt execution. Previous research has investigated analyzing the sequence of system calls generated during this emulation process to predict if an unknown file is malicious, but these models require the emulation to be stopped after executing a fixed number of events from the beginning of the file. Also, these classifiers are not accurate enough to halt emulation in the middle of the file on their own. In this paper, we propose a novel algorithm which overcomes this limitation and learns the best time to halt the file's execution based on deep reinforcement learning (DRL). Because the new DRL-based system continues to emulate the unknown file until it can make a confident decision to stop, it prevents attackers from avoiding detection by initiating malicious activity after a fixed number of system calls. Results show that the proposed malware execution control model automatically halts emulation for 91.3\% of the files earlier than heuristics employed by the engine. Furthermore, classifying the files at that time improves the true positive rate by 61.5%, at a false positive rate of 1%, compared to a baseline classifier. | rejected-papers | The paper trains a classifier to decide if a program is a malware and when to halt its execution. The malware classifier is mostly composed of an RNN acting on featurized API calls (events). The presentation could be improved. The results are encouraging, but the experiments lack solid baselines, comparisons, and grounding of the task usefulness, as this is not done on an established benchmark. | train | [
"HklnZlx90m",
"SJglUyl5RQ",
"SJlG7yecRX",
"r1lI1xpkam",
"HkeV9i9227",
"SyeeaLc52X"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for reviewing the paper and your helpful feedback.\n1. We believe this paper is important because it shows another “real-world” application of DRL on a very important problem in the security field. To the best of our knowledge, it is also the first study in security field to detect the malware ... | [
-1,
-1,
-1,
5,
4,
5
] | [
-1,
-1,
-1,
2,
3,
2
] | [
"SyeeaLc52X",
"HkeV9i9227",
"r1lI1xpkam",
"iclr_2019_SJg6nj09F7",
"iclr_2019_SJg6nj09F7",
"iclr_2019_SJg6nj09F7"
] |
iclr_2019_SJg7IsC5KQ | On the Convergence and Robustness of Batch Normalization | Despite its empirical success, the theoretical underpinnings of the stability, convergence and acceleration properties of batch normalization (BN) remain elusive. In this paper, we attack this problem from a modelling approach, where we perform thorough theoretical analysis on BN applied to simplified model: ordinary least squares (OLS). We discover that gradient descent on OLS with BN has interesting properties, including a scaling law, convergence for arbitrary learning rates for the weights, asymptotic acceleration effects, as well as insensitivity to choice of learning rates. We then demonstrate numerically that these findings are not specific to the OLS problem and hold qualitatively for more complex supervised learning problems. This points to a new direction towards uncovering the mathematical principles that underlies batch normalization. | rejected-papers | The reviewers agree that providing more insights on why batch normalization works is an important topic of investigation, but they all raised several problems with the current submission which need to be addressed before publication. The AC thus proposes "revise and resubmit". | test | [
"r1gpa53uJV",
"S1x_FLZ5nQ",
"S1eO241e1N",
"BJehplJJyN",
"S1guTMyC0m",
"rkxGw1BXT7",
"ryxOLlVX67",
"Hkge1xEQam",
"S1gGgammaQ",
"rylDkM7mTQ",
"BJl9bn0q2m",
"HkgUrzR927"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the comments.\n\n1. The convergence rate. \nLet us recall the convergence of GD on OLS: the convergence is linear and the rate constant $\\rho$ depends on the step size $\\varepsilon$ (suppose it is positive and less than $2/\\lambda_{max}$). By contrast, we find the rate constant of BNGD... | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"S1eO241e1N",
"iclr_2019_SJg7IsC5KQ",
"BJehplJJyN",
"S1guTMyC0m",
"S1gGgammaQ",
"HkgUrzR927",
"S1x_FLZ5nQ",
"S1x_FLZ5nQ",
"S1x_FLZ5nQ",
"BJl9bn0q2m",
"iclr_2019_SJg7IsC5KQ",
"iclr_2019_SJg7IsC5KQ"
] |
iclr_2019_SJgTps0qtQ | Exploiting Environmental Variation to Improve Policy Robustness in Reinforcement Learning | Conventional reinforcement learning rarely considers how the physical variations in the environment (e.g. mass, drag, etc.) affect the policy learned by the agent. In this paper, we explore how changes in the environment affect policy generalization. We observe experimentally that, for each task we considered, there exists an optimal environment setting that results in the most robust policy that generalizes well to future environments. We propose a novel method to exploit this observation to develop robust actor policies, by automatically developing a sampling curriculum over environment settings to use in training. Ours is a model-free approach and experiments demonstrate that the performance of our method is on par with the best policies found by an exhaustive grid search, while bearing a significantly lower computational cost. | rejected-papers | The paper presents a strategy for randomizing the underlying physical hyper-parameters of RL environments to improve the policy's robustness. The paper has a simple and effective idea; however, the machine learning content is minimal. I agree with the reviewers that in order for the paper to pass the bar at ICLR, either the proposed ideas need to be extended theoretically or they should be backed with much more convincing results. Please take the reviewers' feedback into account and improve the paper. | train | [
"H1ezADy9CQ",
"rygMev1q0m",
"r1lZIB19CX",
"HyeiUJ1laQ",
"SklnoXrk6m",
"BJgPWcE0hQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review.",
"Thank you for your review.\n\n--------------------------------\nI) Response to your questions:\n\n1. K and M are unrelated. 'M' is the total number of environment settings that can be changed. K is the number of 'tasks' initialized under a specific environment configuration - i.e. after ... | [
-1,
-1,
-1,
5,
3,
6
] | [
-1,
-1,
-1,
3,
4,
4
] | [
"BJgPWcE0hQ",
"SklnoXrk6m",
"HyeiUJ1laQ",
"iclr_2019_SJgTps0qtQ",
"iclr_2019_SJgTps0qtQ",
"iclr_2019_SJgTps0qtQ"
] |
iclr_2019_SJgiNo0cKX | Multiple Encoder-Decoders Net for Lane Detection | For semantic image segmentation and lane detection, nets with a single spatial pyramid structure or encoder-decoder structure are usually exploited. Convolutional neural networks (CNNs) show great results on both high-level and low-level feature representations; however, this capability has not been fully embodied for the lane detection task. In particular, it is still a challenge for model-based lane detection to combine multi-scale context with pixel-level accuracy because of the weak visual appearance and strong prior information. In this paper, we propose a novel network for lane detection; the three main contributions are as follows. First, we employ a multiple encoder-decoders module in an end-to-end way and show promising results for lane detection. Second, we analyze different configurations of multiple encoder-decoders nets. Third, we attempt to rethink the evaluation methods of lane detection, given the limitations of the popular IoU-based methods. | rejected-papers | As the reviewers point out, the paper is below the acceptance standard of ICLR due to low novelty, unclear presentation, and lack of experimental comparison against the state-of-the-art baselines. | train | [
"Hkgb3hJ53X",
"r1gn90G_3X",
"H1ePzKpP27"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a neural network architecture for lane detection in on-road driving. The architecture consists of multiple encoder-decoder stages. This is motivated by a need to overcome limitations of traditional CNNs with respect to considering high-level context while providing pixel-level accuracy. The pap... | [
2,
2,
4
] | [
4,
5,
4
] | [
"iclr_2019_SJgiNo0cKX",
"iclr_2019_SJgiNo0cKX",
"iclr_2019_SJgiNo0cKX"
] |
iclr_2019_SJgs1n05YQ | Learning and Planning with a Semantic Model | Building deep reinforcement learning agents that can generalize and adapt to unseen environments remains a fundamental challenge for AI. This paper describes progresses on this challenge in the context of man-made environments, which are visually diverse but contain intrinsic semantic regularities. We propose a hybrid model-based and model-free approach, LEArning and Planning with Semantics (LEAPS), consisting of a multi-target sub-policy that acts on visual inputs, and a Bayesian model over semantic structures. When placed in an unseen environment, the agent plans with the semantic model to make high-level decisions, proposes the next sub-target for the sub-policy to execute, and updates the semantic model based on new observations. We perform experiments in visual navigation tasks using House3D, a 3D environment that contains diverse human-designed indoor scenes with real-world objects. LEAPS outperforms strong baselines that do not explicitly plan using the semantic content. | rejected-papers | The paper presents LEAPS, a hybrid model-based and model-free algorithm that uses a Bayesian approach to reason/plan over semantic features, while low level behavior is learned in a model-free manner. The approach is designed for human-made environments with semantic similarity, such as indoor navigation, and is empirically validated in a virtual indoor navigation task, House3D. Reviewers and AC note the interesting approach to this challenging problem. The presented approach can provide an elegant way to incorporate domain knowledge into RL approaches.
The reviewers and AC note several potential weaknesses. The reviewers are concerned about the very low success rate, and critiqued the use of success rate as a key metric itself, given that random search with a sufficiently high cut-off could solve the task. The authors added additional results in a metric that incorporates path length, and provided clarifying details. However, key concerns remained given the low success rates. The AC notes that, e.g., results in the top and middle row of figure 4 show very similar results for LEAPS and the reported baselines. Further, "figure 5" shows no confidence / error bars, and it is not possible to assess whether any differences are statistically significant. Overall, the question of whether something substantial has been learned should be addressed with a detailed error analysis of the proposed approach and the baselines, to provide insight into whether and how the approaches solve the task. At the moment, the paper presents a potentially valuable approach, but does not provide convincing evidence and conceptual insights into the approach's effectiveness. | train | [
"BkxW6t-4jm",
"SkgUxkoe07",
"HyghQ49gCm",
"r1xmEqG4p7",
"B1gCIsfV67",
"HJxKpsMETm",
"SygmrKz4T7",
"HyeBO63p3X",
"B1xSyCP53m"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a hybrid model-free and model-based RL agent for the task of navigation. Reaching the target is decomposed into a set of sub-goals, and the plan is updated as the agent explores the environment. The method has been tested in the House3D environment for the task of RoomNav, where the goal is to n... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_SJgs1n05YQ",
"HyghQ49gCm",
"B1gCIsfV67",
"HyeBO63p3X",
"BkxW6t-4jm",
"B1xSyCP53m",
"iclr_2019_SJgs1n05YQ",
"iclr_2019_SJgs1n05YQ",
"iclr_2019_SJgs1n05YQ"
] |
iclr_2019_SJl2ps0qKQ | Learning to Decompose Compound Questions with Reinforcement Learning | As for knowledge-based question answering, a fundamental problem is to relax the assumption of answerable questions from simple questions to compound questions. Traditional approaches firstly detect topic entity mentioned in questions, then traverse the knowledge graph to find relations as a multi-hop path to answers, while we propose a novel approach to leverage simple-question answerers to answer compound questions. Our model consists of two parts: (i) a novel learning-to-decompose agent that learns a policy to decompose a compound question into simple questions and (ii) three independent simple-question answerers that classify the corresponding relations for each simple question. Experiments demonstrate that our model learns complex rules of compositionality as stochastic policy, which benefits simple neural networks to achieve state-of-the-art results on WebQuestions and MetaQA. We analyze the interpretable decomposition process as well as generated partitions. | rejected-papers | + an interesting task -- learning to decompose questions without supervision
- reviewers are not convinced by the evaluation. The method was initially evaluated on MetaQA only; relation classification on WebQuestions was added later. It is not really clear that the approach is indeed beneficial for WebQuestions relation classification (no analysis / ablations), and MetaQA is not a very standard dataset.
- Reviewers have concerns about comparison to previous work / the lack of state-of-the-art baselines. Some of these issues have been addressed though (e.g., discussion of Iyyer et al. 2016)
| train | [
"ryeHUaMqRQ",
"Bkl2pJE5Am",
"rkgD8DMcC7",
"BJln2XG5AX",
"BJe28_qT27",
"BJlhURN5nm",
"BJesG858hQ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your valuable review! We have updated our paper with additional experiments! We will provide detailed explanation for your concerns. \n\nPlease refer to global comments for brief version of model improvement and paper refinement!\n\nQ1: Does this mean that the model can have <=3 partitions,... | [
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"BJlhURN5nm",
"BJesG858hQ",
"BJe28_qT27",
"iclr_2019_SJl2ps0qKQ",
"iclr_2019_SJl2ps0qKQ",
"iclr_2019_SJl2ps0qKQ",
"iclr_2019_SJl2ps0qKQ"
] |
iclr_2019_SJl7DsR5YQ | ReNeg and Backseat Driver: Learning from demonstration with continuous human feedback | Reinforcement learning (RL) is a powerful framework for solving problems by exploring and learning from mistakes. However, in the context of autonomous vehicle (AV) control, requiring an agent to make mistakes, or even allowing mistakes, can be quite dangerous and costly in the real world. For this reason, AV RL is generally only viable in simulation. Because these simulations have imperfect representations, particularly with respect to graphics, physics, and human interaction, we find motivation for a framework similar to RL, suitable to the real world. To this end, we formulate a learning framework that learns from restricted exploration by having a human demonstrator do the exploration. Existing work on learning from demonstration typically either assumes the collected data is performed by an optimal expert, or requires potentially dangerous exploration to find the optimal policy. We propose an alternative framework that learns continuous control from only safe behavior. One of our key insights is that the problem becomes tractable if the feedback score that rates the demonstration applies to the atomic action, as opposed to the entire sequence of actions. We use human experts to collect driving data as well as to label the driving data through a framework we call ``Backseat Driver'', giving us state-action pairs matched with scalar values representing the score for the action. We call the more general learning framework ReNeg, since it learns a regression from states to actions given negative as well as positive examples. We empirically validate several models in the ReNeg framework, testing on lane-following with limited data. We find that the best solution in this context outperforms behavioral cloning and has strong connections to stochastic policy gradient approaches.
| rejected-papers | The authors consider the interesting and important problem of how to train a robust driving policy without allowing unsafe exploration, an important challenge for real-world training scenarios. They suggest that both good and intentionally bad human demonstrations could be used, with the intuition being that humans can readily produce unsafe exploration such as swerving which can then be learnt using both positive and negative regressions. The reviewers all agree that the paper would not appeal to or have relevance for the wider community. The reviewers also agree that the main ideas are not well presented, that some of the claims are confusing, and that the writing is not technical enough. They also question the thoroughness of the empirical validation. | train | [
"SyemnkrcRX",
"Hyl0TehY0m",
"BklF-yntC7",
"H1eXcwIYR7",
"H1xgQsPuAX",
"SkeiCDv1RQ",
"ByxaYwPy0m",
"BJxJGwPJRm",
"ryg5ySvRTX",
"Sylk8NxXpX",
"rJe94evchm",
"BylsKnbq37",
"H1erWeb5hm",
"HyxAk38o2X"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"3) If we view our feedback f as Q* values (as we intended), then sure, our FNet already is a Q-Net. If we view our feedback f as reward, r, then try to do Q learning, we need to calculate the TD error. That is, the target value would need to be r+max_a'[Q(s',a')]. This max becomes very tricky in continuous space. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
-1
] | [
"Hyl0TehY0m",
"BklF-yntC7",
"H1eXcwIYR7",
"ByxaYwPy0m",
"BJxJGwPJRm",
"Sylk8NxXpX",
"ryg5ySvRTX",
"H1erWeb5hm",
"BylsKnbq37",
"rJe94evchm",
"iclr_2019_SJl7DsR5YQ",
"iclr_2019_SJl7DsR5YQ",
"iclr_2019_SJl7DsR5YQ",
"iclr_2019_SJl7DsR5YQ"
] |
iclr_2019_SJl8J30qFX | Learning Global Additive Explanations for Neural Nets Using Model Distillation | Interpretability has largely focused on local explanations, i.e. explaining why a model made a particular prediction for a sample. These explanations are appealing due to their simplicity and local fidelity. However, they do not provide information about the general behavior of the model. We propose to leverage model distillation to learn global additive explanations that describe the relationship between input features and model predictions. These global explanations take the form of feature shapes, which are more expressive than feature attributions. Through careful experimentation, we show qualitatively and quantitatively that global additive explanations are able to describe model behavior and yield insights about models such as neural nets. A visualization of our approach applied to a neural net as it is trained is available at https://youtu.be/ErQYwNqzEdc | rejected-papers | This paper introduces a distillation approach for black-box classifiers that trains generalized additive models (GAM), an additive model over feature shapes, thus providing global explanations for the model. Given the importance of interpretability, the reviewers appreciated the focus of this work. The reviewers also found the experiments, both on real and synthetic datasets, extremely thorough and were impressed by the results. Finally, they also mentioned that the paper was clearly well-written.
The reviewers and AC note the following potential weaknesses:
(1) The primary concern, raised by all of the reviewers, is the lack of novelty; the proposed approach is a straightforward application of GAMs to model distillation, where the black-box output is the training data of the GAM. (2) The reviewers are also concerned that the proposed approach is limited in scope to tabular datasets, and would not work for more interesting, complex domains like text or images. (3) The reviewers are concerned that the interpretability of GAMs is assumed, without describing the limitations; for example, if there are correlated features, the shapes would affect each other in uninterpretable ways. Amongst other concerns, the reviewers were concerned about the formatting of the plots and tables in the paper, which made it difficult to read them, and the lack of a user study to verify the interpretability claims.
In response to these criticisms, the authors provided comments and a substantial revision to the papers, heavily restructuring the paper to fit extra experiments (comparison to other global explanation techniques, including a user study) and make the figures and tables readable. While the paper was much improved by these changes, and two of the reviewers increased their scores accordingly, concerns about the limited novelty and scope still remained.
Ultimately, the reviewers did not reach a conclusion, but the concerns of novelty and scope overwhelmed the clear benefits of the approach and the strong results. This paper was very close to getting accepted, and we strongly urge the authors to submit it to other premier ML conferences. | train | [
"HJxB2mDv6m",
"SygYf2TKAX",
"HkxPrKTY07",
"BJlxmK6tCQ",
"SyxxeGTYCX",
"ByeWsH6YR7",
"H1e-_ZYUpX",
"rJgzAykST7",
"Hylmxde_h7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"Summary:\nThis paper incorporates Generalized Additive Models (GAMs) with model distillation to provide global explanations of neural nets (fully-connected nets as black-box in the paper). It is well written with detailed experiments of synthetic and real tabular data, and makes some contribution towards the inter... | [
6,
-1,
-1,
-1,
-1,
-1,
4,
-1,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
5,
-1,
4
] | [
"iclr_2019_SJl8J30qFX",
"Hylmxde_h7",
"H1e-_ZYUpX",
"HJxB2mDv6m",
"iclr_2019_SJl8J30qFX",
"iclr_2019_SJl8J30qFX",
"iclr_2019_SJl8J30qFX",
"Hylmxde_h7",
"iclr_2019_SJl8J30qFX"
] |
iclr_2019_SJl8gnAqtX | Prob2Vec: Mathematical Semantic Embedding for Problem Retrieval in Adaptive Tutoring | We propose a new application of embedding techniques to problem retrieval in adaptive tutoring. The objective is to retrieve problems similar in mathematical concepts. There are two challenges: First, like sentences, problems helpful to tutoring are never exactly the same in terms of the underlying concepts. Instead, good problems mix concepts in innovative ways, while still displaying continuity in their relationships. Second, it is difficult for humans to determine a similarity score consistent across a large enough training set. We propose a hierarchical problem embedding algorithm, called Prob2Vec, that consists of an abstraction and an embedding step. Prob2Vec achieves 96.88\% accuracy on a problem similarity test, in contrast to 75\% from directly applying state-of-the-art sentence embedding methods. It is surprising that Prob2Vec is able to distinguish very fine-grained differences among problems, an ability humans need time and effort to acquire. In addition, the sub-problem of concept labeling with imbalanced training data set is interesting in its own right. It is a multi-label problem suffering from dimensionality explosion, which we propose ways to ameliorate. We propose the novel negative pre-training algorithm that dramatically reduces false negative and positive ratios for classification, using an imbalanced training data set. | rejected-papers | I tend to agree with reviewers. This is a bit more of an applied type of work and does not lead to new insights in learning representations.
Lack of technical novelty
Dataset too small | train | [
"B1l3_3TepQ",
"r1eU8lgW67",
"S1etFWjga7",
"BkeXRoYqhm",
"Syxz3gT_2Q",
"Skgof-pjiX",
"Skxyj9mRom",
"r1lDqoAiiX",
"HkeZu1ho9Q",
"B1gv_Bz_cX"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"\n1- The idea of using concepts to represent a problem is simple, but using it along with neural network based embedding gives us the opportunity to gain concept continuity as discussed on the last paragraph on page 7 and table 2, which is an active field of research in education.\n\nThe focus of this work is on p... | [
-1,
-1,
-1,
3,
5,
4,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
3,
3,
4,
-1,
-1,
-1,
-1
] | [
"Syxz3gT_2Q",
"Skgof-pjiX",
"BkeXRoYqhm",
"iclr_2019_SJl8gnAqtX",
"iclr_2019_SJl8gnAqtX",
"iclr_2019_SJl8gnAqtX",
"r1lDqoAiiX",
"iclr_2019_SJl8gnAqtX",
"B1gv_Bz_cX",
"iclr_2019_SJl8gnAqtX"
] |
iclr_2019_SJl98sR5tX | Interactive Agent Modeling by Learning to Probe | The ability of modeling the other agents, such as understanding their intentions and skills, is essential to an agent's interactions with other agents. Conventional agent modeling relies on passive observation from demonstrations. In this work, we propose an interactive agent modeling scheme enabled by encouraging an agent to learn to probe. In particular, the probing agent (i.e. a learner) learns to interact with the environment and with a target agent (i.e., a demonstrator) to maximize the change in the observed behaviors of that agent. Through probing, rich behaviors can be observed and are used for enhancing the agent modeling to learn a more accurate mind model of the target agent. Our framework consists of two learning processes: i) imitation learning for an approximated agent model and ii) pure curiosity-driven reinforcement learning for an efficient probing policy to discover new behaviors that otherwise can not be observed. We have validated our approach in four different tasks. The experimental results suggest that the agent model learned by our approach i) generalizes better in novel scenarios than the ones learned by passive observation, random probing, and other curiosity-driven approaches do, and ii) can be used for enhancing performance in multiple applications including distilling optimal planning to a policy net, collaboration, and competition. A video demo is available at https://www.dropbox.com/s/8mz6rd3349tso67/Probing_Demo.mov?dl=0 | rejected-papers | The submission proposes a setting of two agents, one of them probing the other (the latter being the "demonstrator"). The probing is done in a way that learns to imitate the expert's behavior, with some curiosity-driven reward that maximizes the chance that the probing agent has the expert do trajectories that the probing agent hasn't seen before.
All the reviewers found the idea and experiments interesting. The major concern is whether the setup and the environments are too contrived. At least 2 reviewers commented on the fact that the environments/dataset seemed engineered for success of the given method, which is a concern about how this method would generalize to something other than the proposed setup.
I also share the concern with R3 regarding the practicality of the proposed method: it is not obvious to me what problems this would actually be *useful* for, given that the method requires online interaction with an expert agent in order to succeed. The space of such scenarios where we can continuously probe an expert agent many many times for free/cheap is very small and frankly I'm not entirely sure why you would need to do imitation learning in that case at all (if the method was shown to work using only a state, rather than requiring a state/action pair from the expert, then maybe it'd be more useful).
It's a tough call, but despite the nice results and interesting ideas, I think the method lacks generality and practical utility/significance and thus at this point I cannot recommend acceptance in its current form. | train | [
"S1xy89IwJV",
"SkeZ007v14",
"B1egtgpd37",
"SJehFf9EyE",
"BkxDMO4fkV",
"r1eaIH8raQ",
"SJexqnAvC7",
"SygN8oCDAm",
"rkgV950DCm",
"ryxlCO0vC7",
"SylqUvAPCQ",
"BkgDQecb6Q",
"Hyxuz9BAnQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your reply. In Figure 10, each dot represents a specific latent vector m^t. Basically we plot all m^t that appeared in multiple episodes. Since we have qualitatively shown in the demo video that probing can incite new behaviors, the fact that we do observe new latent vectors m^t with probin... | [
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"SkeZ007v14",
"rkgV950DCm",
"iclr_2019_SJl98sR5tX",
"ryxlCO0vC7",
"SJexqnAvC7",
"iclr_2019_SJl98sR5tX",
"r1eaIH8raQ",
"BkgDQecb6Q",
"Hyxuz9BAnQ",
"B1egtgpd37",
"iclr_2019_SJl98sR5tX",
"iclr_2019_SJl98sR5tX",
"iclr_2019_SJl98sR5tX"
] |
iclr_2019_SJldZ2RqFX | D-GAN: Divergent generative adversarial network for positive unlabeled learning and counter-examples generation | Positive Unlabeled (PU) learning consists in learning to distinguish samples of our class of interest, the positive class, from the counter-examples, the negative class, by using positive labeled and unlabeled samples during the training. Recent approaches exploit the GANs abilities to address the PU learning problem by generating relevant counter-examples. In this paper, we propose a new GAN-based PU learning approach named Divergent-GAN (D-GAN). The key idea is to incorporate a standard Positive Unlabeled learning risk inside the GAN discriminator loss function. In this way, the discriminator can ask the generator to converge towards the unlabeled samples distribution while diverging from the positive samples distribution. This enables the generator convergence towards the unlabeled counter-examples distribution without using prior knowledge, while keeping the standard adversarial GAN architecture. In addition, we discuss normalization techniques in the context of the proposed framework. Experimental results show that the proposed approach overcomes previous GAN-based PU learning methods issues, and it globally outperforms two-stage state of the art PU learning performances in terms of stability and prediction on both simple and complex image datasets. | rejected-papers | With positive unlabeled learning the paper targets an interesting problem and proposes a new GAN based method to tackle it. All reviewers however agree that the write-up and the motivation behind the method could be made more clear and that novelty compared to other GAN based methods is limited. Also the experimental analysis does not show a strong clear performance advantage over existing models. | train | [
"HkeDSYqSkE",
"rJeLfUqHJ4",
"HkxbRJNCRm",
"BkePN_Q0Am",
"SyxHcFfl0m",
"ryxSFRFuCm",
"r1xj3iKdCX",
"Bkej0zBdAX",
"Bkgh7MSu0Q",
"S1giigBd07",
"HkeOL7GeAX",
"B1lmrzQ937",
"BkxY2b-5hX",
"SygDGpKthQ",
"SJlKFsUy3m"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThe presented section “Without discriminator batch normalization” discussing normalization techniques is also relevant for the GenPU method.\n\nSince very recently, the spectral normalization (SN) is a GAN state of the art normalization technique. For example, recent interesting GAN models like the SAGAN (“Self-... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
5,
-1
] | [
"BkePN_Q0Am",
"HkxbRJNCRm",
"S1giigBd07",
"Bkej0zBdAX",
"HkeOL7GeAX",
"r1xj3iKdCX",
"BkxY2b-5hX",
"Bkgh7MSu0Q",
"S1giigBd07",
"SygDGpKthQ",
"B1lmrzQ937",
"iclr_2019_SJldZ2RqFX",
"iclr_2019_SJldZ2RqFX",
"iclr_2019_SJldZ2RqFX",
"iclr_2019_SJldZ2RqFX"
] |
iclr_2019_SJlgOjAqYQ | A quantifiable testing of global translational invariance in Convolutional and Capsule Networks | We design simple and quantifiable testing of global translation-invariance in deep learning models trained on the MNIST dataset. Experiments on convolutional and capsules neural networks show that both models have poor performance in dealing with global translation-invariance; however, the performance improved by using data augmentation. Although the capsule network is better on the MNIST testing dataset, the convolutional neural network generally has better performance on the translation-invariance. | rejected-papers | The paper presents an empirical comparison of the translation invariance property in CNNs and capsule networks. As the reviewers point out, the paper is not of acceptable quality at ICLR due to low novelty and significance. | train | [
"SJeSpVJt3Q",
"SJl-99jQ2Q",
"ryxaa-_1hX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Authors present a study to compare global translation invariance capabilities of CNNs and CapsuleNets.\nThe paper doesn't introduce any novel concept or technique but it simply compares two established techniques on MNIST dataset. The interest on this paper is rather limited. Besides many technical concepts are no... | [
3,
4,
3
] | [
5,
4,
5
] | [
"iclr_2019_SJlgOjAqYQ",
"iclr_2019_SJlgOjAqYQ",
"iclr_2019_SJlgOjAqYQ"
] |
iclr_2019_SJlh2jR9FX | Learning with Reflective Likelihoods | Models parameterized by deep neural networks have achieved state-of-the-art results in many domains. These models are usually trained using the maximum likelihood principle with a finite set of observations. However, training deep probabilistic models with maximum likelihood can lead to the issue we refer to as input forgetting. In deep generative latent-variable models, input forgetting corresponds to posterior collapse---a phenomenon in which the latent variables are driven independent from the observations. However input forgetting can happen even in the absence of latent variables. We attribute input forgetting in deep probabilistic models to the finite sample dilemma of maximum likelihood. We formalize this problem and propose a learning criterion---termed reflective likelihood---that explicitly prevents input forgetting. We empirically observe that the proposed criterion significantly outperforms the maximum likelihood objective when used in classification under a skewed class distribution. Furthermore, the reflective likelihood objective prevents posterior collapse when used to train stochastic auto-encoders with amortized inference. For example in a neural topic modeling experiment, the reflective likelihood objective leads to better quantitative and qualitative results than the variational auto-encoder and the importance-weighted auto-encoder. | rejected-papers | The proposed “input forgetting” problem is interesting, and the reflective likelihood can come to be seen as a natural solution, however the reviewers overall are concerned about the rigor of the paper. Reviewer 2 pointed out a technical flaw and this was addressed, however the reviewers remain unconvinced about the theoretical justification for the approach. One suggestion made by reviewer 1 is to focus on simpler models that can be studied more rigorously. Alternatively, it could be useful to focus on stronger empirical results. 
The method works in the experiments given, but for example in the imbalanced data experiments, only MLE is compared to as a baseline. I think it would be more convincing to compare against stronger baselines from the literature. If they are orthogonal to the choice of estimator, then it would be even better to show that these baselines + RLL outperforms the baselines + MLE. Alternatively, you mention some challenging tasks like seq2seq, where a convincing demonstration would greatly strengthen the paper. While the paper is not yet ready in its current form, it seems like a promising approach that is worth further exploration. | train | [
"HkebgVQXk4",
"rJxOVW7714",
"BkeaOAeg1E",
"BJlhz2qAAQ",
"SJlHDE5ARQ",
"S1xoTX5ARQ",
"Hyg41uYRRQ",
"r1lJnt8AAQ",
"S1eOz28sCm",
"BkeqBtejAm",
"Sye3Xh1sR7",
"SJlhsHJsAX",
"rJeJhKMcRX",
"rJlp3F0d2Q",
"rJl5OkRLnm",
"HklxkbeU3X",
"H1e43-v127",
"r1xoJGzy3X"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"Dear Reviewer 3,\n\nThanks again for your review. We were wondering if we have addressed all your concerns and if you have further comments.\n\n",
"Thank you Reviewer 1 for your insightful comments. We answer your questions below.\n\n1--\"I don't find the motivations and logic behind the derivations to be rigoro... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
-1,
-1
] | [
"Sye3Xh1sR7",
"BkeaOAeg1E",
"BkeqBtejAm",
"SJlHDE5ARQ",
"S1xoTX5ARQ",
"Hyg41uYRRQ",
"r1lJnt8AAQ",
"SJlhsHJsAX",
"iclr_2019_SJlh2jR9FX",
"rJlp3F0d2Q",
"HklxkbeU3X",
"rJl5OkRLnm",
"iclr_2019_SJlh2jR9FX",
"iclr_2019_SJlh2jR9FX",
"iclr_2019_SJlh2jR9FX",
"iclr_2019_SJlh2jR9FX",
"r1xoJGzy3... |
iclr_2019_SJlpM3RqKQ | Expanding the Reach of Federated Learning by Reducing Client Resource Requirements | Communication on heterogeneous edge networks is a fundamental bottleneck in Federated Learning (FL), restricting both model capacity and user participation. To address this issue, we introduce two novel strategies to reduce communication costs: (1) the use of lossy compression on the global model sent server-to-client; and (2) Federated Dropout, which allows users to efficiently train locally on smaller subsets of the global model and also provides a reduction in both client-to-server communication and local computation. We empirically show that these strategies, combined with existing compression approaches for client-to-server communication, collectively provide up to a 9.6x reduction in server-to-client communication, a 1.5x reduction in local computation, and a 24x reduction in upload communication, all without degrading the quality of the final model. We thus comprehensively reduce FL's impact on client device resources, allowing higher capacity models to be trained, and a more diverse set of users to be reached. | rejected-papers | This paper focuses on communication-efficient Federated Learning (FL) and proposes an approach for training large models on heterogeneous edge devices. The paper is well-written and the approach is promising, but all reviewers pointed out that both the novelty of the approach and the empirical evaluation, including comparison with the state of the art, are somewhat limited. We hope that the suggestions provided by the reviewers will be helpful for extending and improving this work. | test | [
"H1gklIwgAX",
"HJxbjnnJAQ",
"HkgOK2nk0X",
"SJxeXnnJAX",
"HJl3ehnkAX",
"ryg0Jo2kRQ",
"Bke_mOmC2m",
"ByeY8Xq63Q",
"SJek5lVo3m"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all reviewers for their suggestions and helping us see how the paper can be improved.\n\nWe think the common misunderstanding among the reviews is that they don’t fully recognize some aspects and challenges of Federated Learning (FL). We provide individual responses of why some of the reviewers’ suggestio... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"iclr_2019_SJlpM3RqKQ",
"SJek5lVo3m",
"SJek5lVo3m",
"Bke_mOmC2m",
"Bke_mOmC2m",
"ByeY8Xq63Q",
"iclr_2019_SJlpM3RqKQ",
"iclr_2019_SJlpM3RqKQ",
"iclr_2019_SJlpM3RqKQ"
] |
iclr_2019_SJlt6oA9Fm | Selective Convolutional Units: Improving CNNs via Channel Selectivity | Bottleneck structures with identity (e.g., residual) connection are now emerging popular paradigms for designing deep convolutional neural networks (CNN), for processing large-scale features efficiently. In this paper, we focus on the information-preserving nature of identity connection and utilize this to enable a convolutional layer to have a new functionality of channel-selectivity, i.e., re-distributing its computations to important channels. In particular, we propose Selective Convolutional Unit (SCU), a widely-applicable architectural unit that improves parameter efficiency of various modern CNNs with bottlenecks. During training, SCU gradually learns the channel-selectivity on-the-fly via the alternative usage of (a) pruning unimportant channels, and (b) rewiring the pruned parameters to important channels. The rewired parameters emphasize the target channel in a way that selectively enlarges the convolutional kernels corresponding to it. Our experimental results demonstrate that the SCU-based models without any postprocessing generally achieve both model compression and accuracy improvement compared to the baselines, consistently for all tested architectures. | rejected-papers | This paper proposed Selective Convolutional Unit (SCU) for improving the 1x1 convolutions used in the bottleneck of a ResNet block. The main idea is to remove channels of low “importance” and replace them by other ones which are in a similar fashion found to be important. To this end the authors propose the so-called expected channel damage score (ECDS) which is used for channel selection. The authors also show the effectiveness of SCU on CIFAR-10, CIFAR-100 and Imagenet.
The major concerns from the reviewers are that the design seems over-complicated and that the experiments are not state-of-the-art. In response, the authors added some explanations of the design idea and new experiments with DenseNet-BC-190 on CIFAR-10/100, but the reviewers' major concerns remain and they did not change their ratings (6, 5, 5). Based on the current results, the paper is a borderline lean reject.
| train | [
"rkesW6WlyN",
"ryemIJH_0Q",
"SygKcAVOCQ",
"SkezmpVOCm",
"Skeoah4OCX",
"HJe_GbTYpQ",
"BkgdeU8phQ",
"r1e0SGTY2Q"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewers and AC,\n\nWe hope that all of you checked our rebuttal/revision and we would be very happy to answer any remaining questions/concerns.\n\nThanks for your contribution to ICLR 2019,\nAuthors",
"We sincerely thank all the reviewers for their valuable comments and effort to improve our manuscript. M... | [
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"ryemIJH_0Q",
"iclr_2019_SJlt6oA9Fm",
"HJe_GbTYpQ",
"BkgdeU8phQ",
"r1e0SGTY2Q",
"iclr_2019_SJlt6oA9Fm",
"iclr_2019_SJlt6oA9Fm",
"iclr_2019_SJlt6oA9Fm"
] |
iclr_2019_SJx5kn0cK7 | HAPPIER: Hierarchical Polyphonic Music Generative RNN | Generating polyphonic music with coherent global structure is a major challenge for automatic composition algorithms. The primary difficulty arises due to the inefficiency of models to recognize underlying patterns beneath music notes across different levels of time scales and remain long-term consistency while composing. Hierarchical architectures can capture and represent learned patterns in different temporal scales and maintain consistency over long time spans, and this corresponds to the hierarchical structure in music. Motivated by this, focusing on leveraging the idea of hierarchical models and improve them to fit the sequence modeling problem, our paper proposes HAPPIER: a novel HierArchical PolyPhonic musIc gEnerative RNN. In HAPPIER, A higher `measure level' learns correlations across measures and patterns for chord progressions, and a lower `note level' learns a conditional distribution over the notes to generate within a measure. The two hierarchies operate at different clock rates: the higher one operates on a longer timescale and updates every measure, while the lower one operates on a shorter timescale and updates every unit duration. The two levels communicate with each other, and thus the entire architecture is trained jointly end-to-end by back-propagation. HAPPIER, profited from the strength of the hierarchical structure, generates polyphonic music with long-term dependencies compared to the state-of-the-art methods. | rejected-papers | Although all the reviewers find the problem and the approach of using hierarchical models important and interesting, how it has been executed in this submission has not been found favourable by the reviewers. | train | [
"ByeiePV93Q",
"BJgk-7VF3Q",
"H1gdsfJY27",
"HJlXsN4Xq7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"This paper proposes a hierarchical RNN, where the first layer is note-level and the second level is measure-level. In an experiment on the Nottingham MIDI dataset, they show slight improvements in log-likelihood.\n\nOverall:\n\nThis is an interesting application of hierarchical RNNs. However, hierarchical RNNs are... | [
2,
3,
3,
-1
] | [
4,
4,
5,
-1
] | [
"iclr_2019_SJx5kn0cK7",
"iclr_2019_SJx5kn0cK7",
"iclr_2019_SJx5kn0cK7",
"iclr_2019_SJx5kn0cK7"
] |
iclr_2019_SJx94o0qYX | Precision Highway for Ultra Low-precision Quantization | Quantization of a neural network has an inherent problem called accumulated quantization error, which is the key obstacle towards ultra-low precision, e.g., 2- or 3-bit precision. To resolve this problem, we propose precision highway, which forms an end-to-end high-precision information flow while performing the ultra-low-precision computation. First, we describe how the precision highway reduce the accumulated quantization error in both convolutional and recurrent neural networks. We also provide the quantitative analysis of the benefit of precision highway and evaluate the overhead on the state-of-the-art hardware accelerator. In the experiments, our proposed method outperforms the best existing quantization methods while offering 3-bit weight/activation quantization with no accuracy loss and 2-bit quantization with a 2.45 % top-1 accuracy loss in ResNet-50. We also report that the proposed method significantly outperforms the existing method in the 2-bit quantization of an LSTM for language modeling. | rejected-papers | The submission proposes a strategy for quantization of neural networks with skip connections that quantizes only the convolution paths, while leaving the skip paths at full precision. The approach can save computation through compressing the convolution kernels, while spending more on the skip connections.
Empirical results show improved performance at 2-bit quantization compared to a handful of competing methods. Figure 5 provides some interpretation of why the method might be working in terms of "smoothness" of the loss surface (term not used in the traditional mathematical sense).
The paper seems to focus too much on selling the name "precision highway" rather than providing proper definitions of their strategy (a definition block would be a good first step), and there is little mathematical analysis of the consequences of the chosen approach.
There are concerns about the novelty of the method, specifically compared to Liu et al. (2018) and Choi et al. (2018b), which propose approximately the same strategy. Footnote 1 claims that these works were conducted in parallel with the current submission, but it is unambiguously the case that Choi et al. appeared on arXiv in May, and Liu et al. appeared in ECCV 2018 and on arXiv more than 30 days before the ICLR deadline, and can fairly be considered prior work (https://iclr.cc/Conferences/2019/Reviewer_Guidelines).
The reviewer scores were on aggregate borderline for the ICLR acceptance threshold. On the balance, the paper seems to fall under the threshold due to insufficient novelty and analysis of the method.
| train | [
"BklpLk1EJV",
"Byx2RgOZ1E",
"r1g8RtfxyE",
"Hkx1PUS6AQ",
"HkxdNLHaCm",
"r1gEj8y3AQ",
"HkeWxbrxR7",
"rkeJ-QBl07",
"Bkl1SxreR7",
"BJeYeCw-aQ",
"BylJ0G-03X",
"Syego60FhQ"
] | [
"public",
"author",
"public",
"author",
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThanks to authors for the responses to my question. Here's my rebuttal to your answers.\n\n- It seems to be clear that the concept of Precision Highway for LSTM (at least for the LSTM cells) is very similar to the conventional quantization approaches, if not identical. In fact, the first answer of the authors al... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"iclr_2019_SJx94o0qYX",
"r1g8RtfxyE",
"HkxdNLHaCm",
"HkxdNLHaCm",
"r1gEj8y3AQ",
"iclr_2019_SJx94o0qYX",
"BylJ0G-03X",
"Syego60FhQ",
"BJeYeCw-aQ",
"iclr_2019_SJx94o0qYX",
"iclr_2019_SJx94o0qYX",
"iclr_2019_SJx94o0qYX"
] |
iclr_2019_SJxCsj0qYX | Stackelberg GAN: Towards Provable Minimax Equilibrium via Multi-Generator Architectures | We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design. The discrepancy between the minimax and maximin objective values could serve as a proxy for the difficulties that the alternating gradient descent encounters in the optimization of GANs. In this work, we give new results on the benefits of multi-generator architecture of GANs. We show that the minimax gap shrinks to \epsilon as the number of generators increases with rate O(1/\epsilon). This improves over the best-known result of O(1/\epsilon^2). At the core of our techniques is a novel application of Shapley-Folkman lemma to the generic minimax problem, where in the literature the technique was only known to work when the objective function is restricted to the Lagrangian function of a constraint optimization problem. Our proposed Stackelberg GAN performs well experimentally in both synthetic and real-world datasets, improving Frechet Inception Distance by 14.61% over the previous multi-generator GANs on the benchmark datasets. | rejected-papers | This paper proposes a new GAN training method with a multi-generator architecture inspired by Stackelberg competition in game theory. The paper has theoretical results showing that the minimax gap shrinks to \eps with O(1/\eps) generators, improving over previous bounds. The paper also has some experimental results on the Fashion-MNIST and CIFAR-10 datasets.
Reviewers find the theoretical results of the paper interesting. However, they have multiple concerns about the comparison with other multi-generator architectures, the optimization dynamics of the new objective, and the clarity of writing of the original submission. While the authors have addressed some of these concerns in their response, the reviewers remain skeptical of the contributions. Perhaps more experiments on ImageNet-quality datasets with detailed comparisons could help make the contributions of the paper clearer.
"HygcVFrRyV",
"rkeRfHNu1E",
"SkxUTyZxyE",
"HJxvQznKn7",
"Hkx_aMpapQ",
"rylVJXBmaX",
"ByghlSHQTQ",
"rkxGpTi9hX",
"r1ekBWtq3X",
"HyebUgh0nQ",
"Hkl61NqR2m",
"SyxWuDuj3X",
"BygSR2UR37"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Thank the authors for their detailed reply.\n\nQ1. \"On one hand, the population form of Stackelberg GAN includes some forms of previous multi-generators GANs as special cases.\" They are definitely not your special cases, but your formulation is a simplified version of theirs. \"On the other hand, the empirical l... | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
5,
7,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1
] | [
"rylVJXBmaX",
"SkxUTyZxyE",
"HyebUgh0nQ",
"iclr_2019_SJxCsj0qYX",
"iclr_2019_SJxCsj0qYX",
"rkxGpTi9hX",
"HJxvQznKn7",
"iclr_2019_SJxCsj0qYX",
"iclr_2019_SJxCsj0qYX",
"Hkl61NqR2m",
"BygSR2UR37",
"r1ekBWtq3X",
"SyxWuDuj3X"
] |
iclr_2019_SJxFN3RcFX | Functional Bayesian Neural Networks for Model Uncertainty Quantification | In this paper, we extend the Bayesian neural network to functional Bayesian neural network with functional Monte Carlo methods that use the samples of functionals instead of samples of networks' parameters for inference to overcome the curse of dimensionality for uncertainty quantification. Based on the previous work on Riemannian Langevin dynamics, we propose the stochastic gradient functional Riemannian dynamics for training functional Bayesian neural network. We show the effectiveness and efficiency of our proposed approach with various experiments. | rejected-papers | This paper addresses a promising and challenging idea in Bayesian deep learning, namely thinking about distributions over functions rather than distributions over parameters. This is formulated by doing MCMC in a functional space rather than directly in the parameter space. The reviewers were unfortunately not convinced by the approach, citing a variety of technical flaws, a lack of clarity in the exposition, and missing critical experiments. In general, it seems that the motivation of the paper is compelling and the idea promising, but perhaps the paper was hastily written before the ideas were fully developed and comprehensive experiments could be run. Hopefully the reviewer feedback will be helpful to further develop the work and lead to a future submission.
Note: unfortunately, one review was too short to be informative; however, the other two reviews were sufficiently thorough to provide enough signal.
"HyeDT0djn7",
"SygA0zT93m",
"SylyBDEqhX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose an approximate MCMC method for sampling a posterior distribution of weights in a Bayesian neural network. They claim that existing MCMC methods are limited by poor scaling with dimensionality of the weights, and they propose a method inspired by HMC on finite-dimensional approximations of meas... | [
3,
4,
5
] | [
3,
4,
2
] | [
"iclr_2019_SJxFN3RcFX",
"iclr_2019_SJxFN3RcFX",
"iclr_2019_SJxFN3RcFX"
] |
iclr_2019_SJxJtiRqt7 | Generating Images from Sounds Using Multimodal Features and GANs | Although generative adversarial networks (GANs) have enabled us to convert images from one domain to another similar one, converting between different sensory modalities, such as images and sounds, has been difficult. This study aims to propose a network that reconstructs images from sounds. First, video data with both images and sounds are labeled with pre-trained classifiers. Second, image and sound features are extracted from the data using pre-trained classifiers. Third, multimodal layers are introduced to extract features that are common to both the images and sounds. These layers are trained to extract similar features regardless of the input modality, such as images only, sounds only, and both images and sounds. Once the multimodal layers have been trained, features are extracted from input sounds and converted into image features using a feature-to-feature GAN. Finally, the generated image features are used to reconstruct images. Experimental results show that this method can successfully convert from the sound domain into the image domain. When we applied a pre-trained classifier to both the generated and original images, 31.9% of the examples had at least one of their top 10 labels in common, suggesting reasonably good image generation. Our results suggest that common representations can be learned for different modalities, and that proposed method can be applied not only to sound-to-image conversion but also to other conversions, such as from images to sounds. | rejected-papers | The work presents a new way to generate images from sounds. The reviewers found the problem ill-defined, the method not well-motivated and the results not compelling. There are a number of missing references and things to compare to, which the authors should change in a follow-up. | train | [
"Skgbfz3Z6Q",
"SkgazwrThm",
"ryxapKiBhX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"PROS:\n* The paper was well-written and explained the method and the experiments well\n\nCONS:\n* The problem seems ill-posed to me. Sound is temporal and the problem should probably be sound-to-video conversion not sound-to-image. \n* A link to generated images from sounds where one could actually evaluate the ge... | [
3,
4,
4
] | [
4,
4,
5
] | [
"iclr_2019_SJxJtiRqt7",
"iclr_2019_SJxJtiRqt7",
"iclr_2019_SJxJtiRqt7"
] |
iclr_2019_SJxfxnA9K7 | Structured Prediction using cGANs with Fusion Discriminator | We propose a novel method for incorporating conditional information into a generative adversarial network (GAN) for structured prediction tasks. This method is based on fusing features from the generated and conditional information in feature space and allows the discriminator to better capture higher-order statistics from the data. This method also increases the strength of the signals passed through the network where the real or generated data and the conditional data agree. The proposed method is conceptually simpler than the joint convolutional neural network - conditional Markov random field (CNN-CRF) models and enforces higher-order consistency without being limited to a very specific class of high-order potentials. Experimental results demonstrate that this method leads to improvement on a variety of different structured prediction tasks including image synthesis, semantic segmentation, and depth estimation. | rejected-papers | All three reviewers argue for rejection on the basis that this paper does not make a sufficiently novel and substantial contribution to warrant publication. The AC follows their recommendation. | val | [
"SkgLoB3p3Q",
"SkgtYOLq0Q",
"rJehRQM9nQ",
"HJeZKeJqh7"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new method for incorporating conditional information into a GAN for structured prediction tasks (image conditioned GAN problems). The proposed method is based on fusing features from the generated and conditional information in feature space and allows the discriminator to better capture high... | [
5,
-1,
3,
3
] | [
3,
-1,
4,
4
] | [
"iclr_2019_SJxfxnA9K7",
"SkgLoB3p3Q",
"iclr_2019_SJxfxnA9K7",
"iclr_2019_SJxfxnA9K7"
] |
iclr_2019_SJxzPsAqFQ | Multi-turn Dialogue Response Generation in an Adversarial Learning Framework | We propose an adversarial learning approach to the generation of multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embedding with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows major advantages over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This superiority is demonstrated on the Movie triples and Ubuntu dialogue datasets with both the automatic and human evaluations. | rejected-papers |
This paper proposes an adversarial learning framework for dialogue generation. The generator is based on the previously proposed hierarchical recurrent encoder-decoder (HRED) network by Serban et al., and the discriminator is a bidirectional RNN. Noise is introduced in the generator for response generation.
The approach is evaluated on two commonly used corpora, the Movie Triples dataset and the Ubuntu dialogue corpus.
In the original version of the paper, human evaluation was missing, an issue raised by all reviewers; however, it has been added in the revisions. These evaluations supplement the previous automated measures in demonstrating the benefits and significant gains from the proposed approach.
All reviewers raise the issue of the work being incremental and not novel enough given the previous work on HRED/VHRED and the use of hierarchical approaches to model dialogue context. Furthermore, noise generation seems new, but it is not well motivated, justified, or analyzed.
| val | [
"r1gtwL1sCQ",
"SyxLQr8e07",
"HylO8ndG6Q",
"HJgW29uM6Q",
"BJl8atOfpX",
"SklBxSyDpm",
"Byxl-o003Q",
"Syx-L-5C3m",
"BkeBRkMT3X"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your reviews. We have conducted human evaluation of the models presented in the paper and added the results with some discussion to the updated paper. We also expanded the previous work section with additional citations and added more detailed results to the ablation studies to show the impact of loc... | [
-1,
-1,
-1,
-1,
-1,
4,
4,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"iclr_2019_SJxzPsAqFQ",
"SklBxSyDpm",
"BkeBRkMT3X",
"Syx-L-5C3m",
"Byxl-o003Q",
"iclr_2019_SJxzPsAqFQ",
"iclr_2019_SJxzPsAqFQ",
"iclr_2019_SJxzPsAqFQ",
"iclr_2019_SJxzPsAqFQ"
] |
iclr_2019_SJz6MnC5YQ | DEEP GRAPH TRANSLATION | The tremendous success of deep generative models on generating continuous data like image and audio has been achieved; however, few deep graph generative models have been proposed to generate discrete data such as graphs. The recently proposed approaches are typically unconditioned generative models which have no control over modes of the graphs being generated. Differently, in this paper, we are interested in a new problem named Deep Graph Translation: given an input graph, the goal is to infer a target graph by learning their underlying translation mapping. Graph translation could be highly desirable in many applications such as disaster management and rare event forecasting, where the rare and abnormal graph patterns (e.g., traffic congestions and terrorism events) will be inferred prior to their occurrence even without historical data on the abnormal patterns for this specific graph (e.g., a road network or human contact network). To this end, we propose a novel Graph-Translation-Generative Adversarial Networks (GT-GAN) which translates one mode of the input graphs to its target mode. GT-GAN consists of a graph translator where we propose new graph convolution and deconvolution layers to learn the global and local translation mapping. A new conditional graph discriminator has also been proposed to classify target graphs by conditioning on input graphs. Extensive experiments on multiple synthetic and real-world datasets demonstrate the effectiveness and scalability of the proposed GT-GAN. | rejected-papers | Although one reviewer recommended accepting this paper, they were not willing to champion it during the discussion phase and did not seem to truly believe it is currently ready for publication. Thus I am recommending rejecting this submission.
"r1ek3L5VkV",
"Bkez3clqRX",
"ByliURT_Rm",
"HJeWPj6eA7",
"SyeX3DBwpm",
"ryly9wHvTQ",
"Hkex58HP6Q",
"SJgG2EBDpQ",
"SyeBGNHD6m",
"SJlugpiVaX",
"Hyeo_6oEaQ",
"H1lkUZCQp7",
"SJlERxAm67",
"SkemSsMfpQ",
"ryl8Oq-53X",
"S1l4P2HuhX",
"Hyx-b0Vtnm"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the comments. I still think the work is interesting and the comments and improvements to the paper help. I'm unfortunately not convinced that it's yet good enough to go up to the next category.",
"Dear Reviewer,\n\nThank you very much for your new and previous comments. We have revised our paper again... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
-1
] | [
"SyeBGNHD6m",
"SkemSsMfpQ",
"HJeWPj6eA7",
"SJlERxAm67",
"S1l4P2HuhX",
"S1l4P2HuhX",
"S1l4P2HuhX",
"S1l4P2HuhX",
"S1l4P2HuhX",
"SkemSsMfpQ",
"SkemSsMfpQ",
"ryl8Oq-53X",
"ryl8Oq-53X",
"iclr_2019_SJz6MnC5YQ",
"iclr_2019_SJz6MnC5YQ",
"iclr_2019_SJz6MnC5YQ",
"iclr_2019_SJz6MnC5YQ"
] |
iclr_2019_SJzYdsAqY7 | Spatial-Winograd Pruning Enabling Sparse Winograd Convolution | Deep convolutional neural networks (CNNs) are deployed in various applications but demand immense computational requirements. Pruning techniques and Winograd convolution are two typical methods to reduce the CNN computation. However, they cannot be directly combined because Winograd transformation fills in the sparsity resulting from pruning. Li et al. (2017) propose sparse Winograd convolution in which weights are directly pruned in the Winograd domain, but this technique is not very practical because Winograd-domain retraining requires low learning rates and hence significantly longer training time. Besides, Liu et al. (2018) move the ReLU function into the Winograd domain, which can help increase the weight sparsity but requires changes in the network structure. To achieve a high Winograd-domain weight sparsity without changing network structures, we propose a new pruning method, spatial-Winograd pruning. As the first step, spatial-domain weights are pruned in a structured way, which efficiently transfers the spatial-domain sparsity into the Winograd domain and avoids Winograd-domain retraining. For the next step, we also perform pruning and retraining directly in the Winograd domain but propose to use an importance factor matrix to adjust weight importance and weight gradients. This adjustment makes it possible to effectively retrain the pruned Winograd-domain network without changing the network structure. For the three models on the datasets of CIFAR-10, CIFAR-100, and ImageNet, our proposed method can achieve the Winograd-domain sparsities of 63%, 50%, and 74%, respectively. | rejected-papers | Reviewer scores straddle the decision boundary, but overall this work does not meet the bar yet. Even after discussion with the authors, the reviewers reconfirmed their 'reject' recommendation, and the area chair agrees with that assessment.
"HkepknPC0Q",
"ryeevww007",
"rJlppRjF0Q",
"rJxqlOulAX",
"SJlqPc8KTm",
"HklodPUF6Q",
"SJxFpDUtp7",
"B1g1k7LKpm",
"B1x8ypt0nQ",
"Syl7K_eS37",
"Bygy2gmWhX"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the response.\n\n(1) For the practical considerations, we agree with the reviewer that real speedups verified on hardware platforms are important. We will include more related results in the future version.\n\n(2) For the generality of the result, this is actually what we want to improve ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"rJlppRjF0Q",
"rJxqlOulAX",
"SJxFpDUtp7",
"SJlqPc8KTm",
"Bygy2gmWhX",
"Syl7K_eS37",
"Syl7K_eS37",
"B1x8ypt0nQ",
"iclr_2019_SJzYdsAqY7",
"iclr_2019_SJzYdsAqY7",
"iclr_2019_SJzYdsAqY7"
] |
iclr_2019_SJzuHiA9tQ | Generative Adversarial Network Training is a Continual Learning Problem | Generative Adversarial Networks (GANs) have proven to be a powerful framework for learning to draw samples from complex distributions. However, GANs are also notoriously difficult to train, with mode collapse and oscillations a common problem. We hypothesize that this is at least in part due to the evolution of the generator distribution and the catastrophic forgetting tendency of neural networks, which leads to the discriminator losing the ability to remember synthesized samples from previous instantiations of the generator. Recognizing this, our contributions are twofold. First, we show that GAN training makes for a more interesting and realistic benchmark for continual learning methods evaluation than some of the more canonical datasets. Second, we propose leveraging continual learning techniques to augment the discriminator, preserving its ability to recognize previous generator samples. We show that the resulting methods add only a light amount of computation, involve minimal changes to the model, and result in better overall performance on the examined image and text generation tasks. | rejected-papers | This paper studies training of the generative adversarial networks (GANs), specifically the discriminator, as a continual learning problem, where the discriminator does not forget previously generated samples. This model can be potentially used for improving GANs training and for generating synthetic datasets for evaluating continual learning methods. All the reviewers and AC agree that showing how continual learning techniques applied to the discriminator can alleviate mode collapse in GANs training is an important direction to study.
There is reviewer disagreement on this paper. AC can confirm that all three reviewers have read the author responses and have contributed to the final discussion.
While acknowledging that the continual learning setting is potentially useful, the reviewers have raised several important concerns: (1) low technical novelty in light of the EWC++ and online EWC methods (R1 and R3) -- a methodological and empirical comparison to these baselines is required to assess the difference and benefits of the proposed approach; the authors' response to these concerns (and also R2's comments in the discussion) was insufficient to assess the scope of the contribution. (2) More diverse/convincing empirical findings would strengthen the evaluation (e.g. assessing whether or not the generator could help to overcome forgetting; showing that a memory replay strategy that stores sufficient fake examples from previously generated samples cannot prevent mode collapse in GAN training – see R3's comment; showing the benefits of the generated samples for evaluating continual learning methods). (3) R1 remained unconvinced that GAN training can be improved via continual learning, as the relation between the proposed view and the minimax optimization difficulties in GANs is not addressed – see R1's comment about this. The authors briefly discussed in their response that the proposed approach is orthogonal to these works. However, a better (possibly theoretical) analysis of GAN training and continual learning would indeed help to evaluate the scope of the contribution of this work.
Regarding the available datasets that exhibit a coherent time evolution -- see the Continuous Manifold Based Adaptation for Evolving Visual Domains by Hoffman et al, CVPR 2014.
Among (1)-(3): (2) and (3) did not have a substantial impact on the decision, but would be helpful to address in a subsequent revision. However, (1) makes it very difficult to assess the benefits of the proposed approach, and was viewed by AC as a critical issue.
AC suggests that in its current state the paper can be considered for a workshop, and recommends preparing a major revision before resubmitting it for a second round of reviews.
| train | [
"SkxfQsC0yE",
"SJg91x9L27",
"r1xG7Aqfy4",
"BJeYmAAe1N",
"HJlG9xXe1V",
"rJgmj-eeJ4",
"B1xk7V3hA7",
"BJlwgA-YaQ",
"HJeK6eFqaX",
"BJeKoOd5pX",
"rJgUjwzKam",
"Hyl1ZkfFp7",
"rJg3P0WKpm",
"BylKc9Hp27",
"Bylvu4Qc2Q",
"rylR2IXZoQ",
"SygFQ_B2cX",
"BJxnMBno5m",
"S1x3jYZscm",
"rke1vMC9c7"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
... | [
"This was also discussed and presented in Fig 2 of unrolled GAN: https://arxiv.org/abs/1611.02163",
"The authors argue that catastrophic forgetting may cause mode collapse and oscillation, and propose a novel plug-and-play regularizer that can be applied to a variety of GANs' training process to counter catastro... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HJlG9xXe1V",
"iclr_2019_SJzuHiA9tQ",
"rJg3P0WKpm",
"HJlG9xXe1V",
"rJgmj-eeJ4",
"HJeK6eFqaX",
"HJeK6eFqaX",
"BylKc9Hp27",
"BJeKoOd5pX",
"rJgUjwzKam",
"Hyl1ZkfFp7",
"SJg91x9L27",
"Bylvu4Qc2Q",
"iclr_2019_SJzuHiA9tQ",
"iclr_2019_SJzuHiA9tQ",
"SygFQ_B2cX",
"BJxnMBno5m",
"S1x3jYZscm",
... |