paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2019_S1g2JnRcFX | Local SGD Converges Fast and Communicates Little | Mini-batch stochastic gradient descent (SGD) is state of the art in large scale distributed training. The scheme can reach a linear speed-up with respect to the number of workers, but this is rarely seen in practice as the scheme often suffers from large network delays and bandwidth limits. To overcome this communication bottleneck, recent works propose to reduce the communication frequency. An algorithm of this type is local SGD, which runs SGD independently in parallel on different workers and averages the sequences only once in a while. This scheme shows promising results in practice, but has eluded thorough theoretical analysis.
We prove concise convergence rates for local SGD on convex problems and show that it converges at the same rate as mini-batch SGD in terms of number of evaluated gradients, that is, the scheme achieves linear speed-up in the number of workers and mini-batch size. The number of communication rounds can be reduced up to a factor of T^{1/2}---where T denotes the number of total steps---compared to mini-batch SGD. This also holds for asynchronous implementations.
Local SGD can also be used for large scale training of deep learning models. The results shown here aim to serve as a guideline to further explore the theoretical and practical aspects of local SGD in these applications. | accepted-poster-papers | This paper analyzes local SGD optimization for strongly convex functions, and proves that local SGD enjoys a linear speedup (in the number of workers and minibatch size) over vanilla SGD, while also communicating less than distributed mini-batch SGD. A similar analysis is also provided for the asynchronous case, and limited empirical confirmation of the theory is provided. The main weakness of the current revision is that it does not yet properly relate this work to two prior publications: Dekel et al., 2012 (https://arxiv.org/pdf/1012.1367.pdf) and Jain et al., 2016 (https://arxiv.org/abs/1610.03774). It is critical that these references and suitable discussion be added in the camera-ready paper, since this issue was the subject of considerable discussion and the authors promised to include the references and discussion in the final paper. | train | [
"ByxMDz1fk4",
"r1xRplIR0X",
"H1ed8V0TA7",
"Sklci2Dg07",
"S1gTViwlRQ",
"r1ePa5DeR7",
"SJxvQJ4WT7",
"HkxcmSCa2m",
"Byl1lBwchX",
"BJeqkEXq3m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Please excuse our negligence in not having included these references in the current revision. We agree that both algorithms work at “the end of the local SGD spectrum” (H=1 for mini-batch SGD (Dekel et al.), and H=T for the model averaging discussed in (Jain et al.)) and thus merit discussion. \n\nWe will certain... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"r1xRplIR0X",
"Sklci2Dg07",
"r1ePa5DeR7",
"Byl1lBwchX",
"HkxcmSCa2m",
"BJeqkEXq3m",
"iclr_2019_S1g2JnRcFX",
"iclr_2019_S1g2JnRcFX",
"iclr_2019_S1g2JnRcFX",
"iclr_2019_S1g2JnRcFX"
] |
iclr_2019_S1gOpsCctm | Learning Finite State Representations of Recurrent Policy Networks | Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems. RNN policies, however, are particularly difficult to explain, understand, and analyze due to their use of continuous-valued memory vectors and observation features. In this paper, we introduce a new technique, Quantized Bottleneck Insertion, to learn finite representations of these vectors and features. The result is a quantized representation of the RNN that can be analyzed to improve our understanding of memory use and general behavior. We present results of this approach on synthetic environments and six Atari games. The resulting finite representations are surprisingly small in some cases, using as few as 3 discrete memory states and 10 observations for a perfect Pong policy. We also show that these finite policy representations lead to improved interpretability. | accepted-poster-papers | The paper addresses the problem of interpreting recurrent neural networks by quantizing their states and mapping them onto a Moore Machine. The paper presents some interesting results on reinforcement learning and other tasks. I believe the experiments could have been more informative if the proposed technique was compared against a simple quantization baseline (e.g. based on k-means) so that one can get a better understanding of the difficulty of these tasks. | train | [
This paper is clearly above the acceptance threshold at ICLR. | train | [
"HJgliFSL3Q",
"Skx4XkYBA7",
"Bkl17B4Yp7",
"rkeFP2BDp7",
"BJglR0elhm",
"S1xiusCLT7",
"B1lSGQhLaQ",
"HyxpmBrBTX",
"HkgFJXeEp7",
"H1ea2Me4pX",
"r1g9dMeNa7",
"Skg3Cr8_3m"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a method to learn a quantization of both observations and hidden states in an RNN. Its findings suggest that many problems can be reduced to relatively simple Moore Machines, even for complex environments such as Atari games.\n\nThe method works by pretraining an RNN to learn a policy (e.g. throu... | [
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_S1gOpsCctm",
"iclr_2019_S1gOpsCctm",
"rkeFP2BDp7",
"H1ea2Me4pX",
"iclr_2019_S1gOpsCctm",
"B1lSGQhLaQ",
"HyxpmBrBTX",
"HkgFJXeEp7",
"BJglR0elhm",
"HJgliFSL3Q",
"Skg3Cr8_3m",
"iclr_2019_S1gOpsCctm"
] |
iclr_2019_S1gUsoR9YX | Multilingual Neural Machine Translation with Knowledge Distillation | Multilingual machine translation, which translates multiple languages with a single model, has attracted much attention due to its efficiency of offline training and online serving. However, traditional multilingual translation usually yields inferior accuracy compared with the counterpart using individual models for each language pair, due to language diversity and model capacity limitations. In this paper, we propose a distillation-based approach to boost the accuracy of multilingual machine translation. Specifically, individual models are first trained and regarded as teachers, and then the multilingual model is trained to fit the training data and match the outputs of individual models simultaneously through knowledge distillation. Experiments on IWSLT, WMT and Ted talk translation datasets demonstrate the effectiveness of our method. Particularly, we show that one model is enough to handle multiple languages (up to 44 languages in our experiment), with comparable or even better accuracy than individual models. | accepted-poster-papers | This paper presents good empirical results on an important and interesting task (translation between several language pairs with a single model). There was solid communication between the authors and the reviewers leading to an improved updated version and consensus among the reviewers about the merits of the paper. | train | [
"B1l6eTYLxN",
"r1eGErKF2m",
"HkxYyeX5n7",
"Hkg1r9LyeE",
"rkxYUy8c0m",
"B1eviDZ90Q",
"r1eMyJtFAX",
"H1xAh5EE0Q",
"SJl9EyzYCX",
"r1xF8KtDAQ",
"HklZyU6H0X",
"BJl2FgvHAQ",
"Byl2y81BA7",
"Syl_hCHNCX",
"rylLCTSV0Q",
"B1lkPgr40m",
"Skgw0644AQ",
"rJxh264VAX",
"H1egc944A7",
"rkeY6_gchm"... | [
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_r... | [
"Based upon Algorithm 1, the ALL loss and NLL loss can oscillate around the threshold point, but in Section 3.3, in the discussion about early stopping, you seem to imply that once the student improves beyond the threshold the NLL loss is always used. Which scenario is correct? Will there be an oscillation?",
... | [
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_S1gUsoR9YX",
"iclr_2019_S1gUsoR9YX",
"iclr_2019_S1gUsoR9YX",
"r1xF8KtDAQ",
"B1eviDZ90Q",
"SJl9EyzYCX",
"HklZyU6H0X",
"rkeY6_gchm",
"B1lkPgr40m",
"BJl2FgvHAQ",
"Byl2y81BA7",
"Skgw0644AQ",
"rylLCTSV0Q",
"H1egc944A7",
"H1xAh5EE0Q",
"HkxYyeX5n7",
"rJxh264VAX",
"r1eGErKF2m",
... |
iclr_2019_S1lDV3RcKm | MisGAN: Learning from Incomplete Data with Generative Adversarial Networks | Generative adversarial networks (GANs) have been shown to provide an effective way to model complex distributions and have obtained impressive results on various challenging tasks. However, typical GANs require fully-observed data during training. In this paper, we present a GAN-based framework for learning from complex, high-dimensional incomplete data. The proposed framework learns a complete data generator along with a mask generator that models the missing data distribution. We further demonstrate how to impute missing data by equipping our framework with an adversarially trained imputer. We evaluate the proposed framework using a series of experiments with several types of missing data processes under the missing completely at random assumption. | accepted-poster-papers | The paper proposes an adversarial framework that learns a generative model along with a mask generator to model missing data, thereby enabling a GAN to learn from incomplete data.
The method builds on AmbientGAN, but it is a novel and clever adjustment to the specific problem setting of learning from incomplete data, which is of high practical interest. | test | [
"BkxEh-d4Rm",
"SJlTFNdNCm",
"BklU-g_VCm",
"S1xWJ2UW6Q",
"Hkgdo5Da2m",
"SylB2O2q27"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the constructive comments, which we address below.\n\nAs stated in the introduction, MisGAN is designed for learning the distribution from high-dimensional data in the presence of a potentially large amount of missing values. However, the five benchmark datasets that GAIN (Yoon et al., 2018) is evalua... | [
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
4,
5,
4
] | [
"Hkgdo5Da2m",
"SylB2O2q27",
"S1xWJ2UW6Q",
"iclr_2019_S1lDV3RcKm",
"iclr_2019_S1lDV3RcKm",
"iclr_2019_S1lDV3RcKm"
] |
iclr_2019_S1lIMn05F7 | A Direct Approach to Robust Deep Learning Using Adversarial Networks | Deep neural networks have been shown to perform well in many classical machine learning problems, especially in image classification tasks. However, researchers have found that neural networks can be easily fooled, and they are surprisingly sensitive to small perturbations imperceptible to humans. Carefully crafted input images (adversarial examples) can force a well-trained neural network to provide arbitrary outputs. Including adversarial examples during training is a popular defense mechanism against adversarial attacks. In this paper we propose a new defensive mechanism under the generative adversarial network (GAN) framework. We model the adversarial noise using a generative network, trained jointly with a classification discriminative network as a minimax game. We show empirically that our adversarial network approach works well against black box attacks, with performance on par with state-of-the-art methods such as ensemble adversarial training and adversarial training with projected gradient descent.
| accepted-poster-papers | The paper proposes a GAN approach to robust learning against adversarial examples, where a generator produces adversarial examples as perturbations and a discriminator is used to distinguish between adversarial and raw images. The performance on MNIST, SVHN, and CIFAR10 demonstrates the effectiveness of the approach, and in general, the performance is on par with carefully crafted algorithms for such tasks.
The architecture of GANs used in the paper is standard, yet the defensive performance seems good. The reviewers wondered about the reason behind this good performance and about its novelty compared with other works in a similar spirit. In response, the authors added some insights into the mechanism as well as comparisons with the other works mentioned by the reviewers.
The reviewers all think that the paper presents a simple scheme for robust deep learning based on GANs, which shows its effectiveness in experiments. The understanding of why it works may need further exploration. Thus the paper is proposed as a borderline lean accept.
| train | [
"BJli_EaR07",
"B1x2VA5jCQ",
"SygmUMgI07",
"Bye9qmg80m",
"rylMb7gLRX",
"BkxjO3X5a7",
"HJlkCrUa3Q",
"BJeFNgN5nm",
"rylwmWGq2X",
"B1xHSDgVo7",
"SJlj7BhF5X",
"SJlz9Ez2tm"
] | [
"author",
"public",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"public"
] | [
"The results for the wide resnet are in the Appendix due to space limitations. \n\nThe tables in the paper itself are also updated after tuning of the weight decay parameter. Most of the numbers are close to the original. ",
"\" We will update the results table accordingly. Using a deeper or wider network can push... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
-1,
-1,
-1
] | [
"B1x2VA5jCQ",
"B1xHSDgVo7",
"HJlkCrUa3Q",
"rylwmWGq2X",
"BJeFNgN5nm",
"iclr_2019_S1lIMn05F7",
"iclr_2019_S1lIMn05F7",
"iclr_2019_S1lIMn05F7",
"iclr_2019_S1lIMn05F7",
"SJlz9Ez2tm",
"iclr_2019_S1lIMn05F7",
"iclr_2019_S1lIMn05F7"
] |
iclr_2019_S1lTEh09FQ | Combinatorial Attacks on Binarized Neural Networks | Binarized Neural Networks (BNNs) have recently attracted significant interest due to their computational efficiency. Concurrently, it has been shown that neural networks may be overly sensitive to "attacks" -- tiny adversarial changes in the input -- which may be detrimental to their use in safety-critical domains. Designing attack algorithms that effectively fool trained models is a key step towards learning robust neural networks.
The discrete, non-differentiable nature of BNNs, which distinguishes them from their full-precision counterparts, poses a challenge to gradient-based attacks. In this work, we study the problem of attacking a BNN through the lens of combinatorial and integer optimization. We propose a Mixed Integer Linear Programming (MILP) formulation of the problem. While exact and flexible, the MILP quickly becomes intractable as the network and perturbation space grow. To address this issue, we propose IProp, a decomposition-based algorithm that solves a sequence of much smaller MILP problems. Experimentally, we evaluate both proposed methods against the standard gradient-based attack (PGD) on MNIST and Fashion-MNIST, and show that IProp performs favorably compared to PGD, while scaling beyond the limits of the MILP. | accepted-poster-papers | The paper provides a novel attack method and contributes to evaluating the robustness of neural networks with recently proposed defenses. The evaluation is convincing overall and the authors have answered most questions from the reviewers. We recommend acceptance. | train | [
"HJgVhR3R1V",
"SJxUbuV5yN",
"HJeKfyftCQ",
"rJejIzN5Tm",
"H1l0z6X9a7",
"S1eiXngK6Q",
"SyxipoxF6m",
"Bklt8KeKpm",
"B1lKEYgtaX",
"HkxFLc32nX",
"H1xCvGx3hX",
"H1gJQ_S53m",
"r1lwasqC37"
] | [
"author",
"official_reviewer",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Dear reviewer, thanks for taking the time to read our revised paper.\n\nRegarding 1: The sample of 1,000 test points that we used shows clear trends. We report standard deviation/quantiles whenever possible to give a full view of the results. Given that we lay out the results clearly and discuss regimes where our ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
-1
] | [
"SJxUbuV5yN",
"B1lKEYgtaX",
"iclr_2019_S1lTEh09FQ",
"H1l0z6X9a7",
"S1eiXngK6Q",
"r1lwasqC37",
"H1xCvGx3hX",
"H1gJQ_S53m",
"HkxFLc32nX",
"iclr_2019_S1lTEh09FQ",
"iclr_2019_S1lTEh09FQ",
"iclr_2019_S1lTEh09FQ",
"iclr_2019_S1lTEh09FQ"
] |
iclr_2019_S1lTg3RqYQ | Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency | Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT not only translates the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process. | accepted-poster-papers | This paper proposes an image-to-image translation technique which decomposes translation into style and content transfer, using a semantic consistency loss to encourage corresponding semantics (via feature masks) before and after translation. Performance is evaluated on a set of MNIST variants as well as on simulated to real-world driving imagery.
All reviewers found this paper well written, with a clear contribution compared to related work through its focus on the setting where one-to-one mappings are not available across two domains that also have multimodal content or sub-styles.
The main weakness discussed by the reviewers relates to the experiments and whether or not the provided set effectively validates the proposed approach. The authors defend their use of MNIST as a toy problem that nonetheless offers full control to clearly validate their approach. Their semantic segmentation experiment shows modest performance improvement. Based on the experiments as they stand and the relative novelty of the proposed approach, the AC recommends poster and encourages the authors to extend their analysis of the current results in a final version. | train | [
"SJguY0hSCX",
"HJlpBp2rCX",
"S1e9uT3BAX",
"S1lxxj3BA7",
"S1g7d5hHRm",
"Skg7ixp5nX",
"rJgsuogq2Q",
"HkxhpFFSn7"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank Reviewer 3 for the constructive review and detailed comments. \n\n1. Ablation study \nIn our paper, we present the ablation study on the MNIST-Single dataset because it is a more controlled setting where we can generate ground truth for comparisons. Furthermore, as mentioned in a previous answer, we belie... | [
-1,
-1,
-1,
-1,
-1,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"HkxhpFFSn7",
"rJgsuogq2Q",
"rJgsuogq2Q",
"Skg7ixp5nX",
"iclr_2019_S1lTg3RqYQ",
"iclr_2019_S1lTg3RqYQ",
"iclr_2019_S1lTg3RqYQ",
"iclr_2019_S1lTg3RqYQ"
] |
iclr_2019_S1lg0jAcYm | ARM: Augment-REINFORCE-Merge Gradient for Stochastic Binary Networks | To backpropagate the gradients through stochastic binary layers, we propose the augment-REINFORCE-merge (ARM) estimator that is unbiased, exhibits low variance, and has low computational complexity. Exploiting variable augmentation, REINFORCE, and reparameterization, the ARM estimator achieves adaptive variance reduction for Monte Carlo integration by merging two expectations via common random numbers. The variance-reduction mechanism of the ARM estimator can also be attributed to either antithetic sampling in an augmented space, or the use of an optimal anti-symmetric "self-control" baseline function together with the REINFORCE estimator in that augmented space. Experimental results show the ARM estimator provides state-of-the-art performance in auto-encoding variational inference and maximum likelihood estimation, for discrete latent variable models with one or multiple stochastic binary layers. Python code for reproducible research is publicly available. | accepted-poster-papers | This paper introduces a new way to estimate gradients of expectations of discrete random variables by introducing antithetic noise samples for use in a control variate.
Quality: The experiments are mostly appropriate, although I disagree with the choice to present validation and test-set results instead of training-time results. If the goal of the method is to reduce variance, then checking whether optimization is improved (training loss) is the most direct measure. However reasonable people can disagree about this.
I also think the toy experiment (copied from the REBAR and RELAX papers) is a bit too easy for this method, since it relies on taking two antithetic samples. I would have liked to see a categorical extension of the same experiment.
Clarity: I think that this method will not have the impact it otherwise could because of the authors' fearless use of long equations and heavy notation throughout. This is unavoidable to some degree, but
1) The title of the paper isn't very descriptive
2) Why not follow previous work and use \theta instead of \phi for the parameters being optimized?
The presentation has come a long way, but I fear that few besides our intrepid reviewers will have the stomach. I recommend providing more intuition throughout.
Originality: The use of antithetic samples to reduce variance is old, but this seems like a well-thought-through and non-trivial application of the idea to this setting.
Significance: Ultimately I think this is a new direction in gradient estimators for discrete RVs. I don't think this is the last word in this direction but it's both an empirical improvement, and will inspire further work. | train | [
"BkewAbI90m",
"S1e-Q1U9RQ",
"rkl9cK9Uam",
"Hyxt0RfFAX",
"rygqdW892X",
"rJxEaPoRn7",
"rklw0lkOA7",
"r1gn81e_aX",
"r1eg1BCR3X",
"rkgyPSRRhX",
"HJgQoB0An7",
"BkgaNi6vRX",
"S1xuO4yqn7"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"We greatly appreciate that you have taken our revision and response into consideration and moved your rating upwards.\n\nWe agree with your suggestion on using \"variable augmentation\" when describing the augmentation of a random variable. ",
"We greatly appreciate your additional comments and suggestions. Belo... | [
-1,
-1,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"rygqdW892X",
"Hyxt0RfFAX",
"iclr_2019_S1lg0jAcYm",
"rklw0lkOA7",
"iclr_2019_S1lg0jAcYm",
"S1xuO4yqn7",
"r1gn81e_aX",
"rkl9cK9Uam",
"rygqdW892X",
"r1eg1BCR3X",
"rkgyPSRRhX",
"iclr_2019_S1lg0jAcYm",
"iclr_2019_S1lg0jAcYm"
] |
iclr_2019_S1lhbnRqF7 | Building Dynamic Knowledge Graphs from Text using Machine Reading Comprehension | We propose a neural machine-reading model that constructs dynamic knowledge graphs from procedural text. It builds these graphs recurrently for each step of the described procedure, and uses them to track the evolving states of participant entities. We harness and extend a recently proposed machine reading comprehension(MRC) model to query for entity states, since these states are generally communicated in spans of text and MRC models perform well in extracting entity-centric spans. The explicit, structured, and evolving knowledge graph representations that our model constructs can be used in downstream question answering tasks to improve machine comprehension of text, as we demonstrate empirically. On two comprehension tasks from the recently proposed ProPara dataset, our model achieves state-of-the-art results. We further show that our model is competitive on the Recipes dataset, suggesting it may be generally applicable. | accepted-poster-papers | This paper investigates a new approach to machine reading for procedural text, where the task of reading comprehension is formulated as dynamic construction of a procedural knowledge graph. The proposed model constructs a recurrent knowledge graph (as a bipartite graph between entities and location nodes) and tracks the entity states for two domains: scientific processes and recipes.
Pros:
The idea of formulating reading comprehension as dynamic construction of a knowledge graph is novel and interesting. The proposed model is tested on two different domains: scientific processes (ProPara) and cooking recipes.
Cons:
The initial submission didn't have the experimental results on the full recipe dataset and also had several clarity issues, all of which have been resolved through the rebuttal.
Verdict:
Accept. An interesting task & models with solid empirical results.
| train | [
"rkxUMYW6hX",
"S1e-ClGZkV",
"H1e6Tr4X0Q",
"r1e8lyS7RQ",
"r1gfdkBXAm",
"S1xtKI4mAX",
"HkeB75RZRQ",
"H1gOvMYT37",
"Sklcn-_c3m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a recurrent knowledge graph (bipartite graph between entities and location nodes) construction & updating mechanism for entity state tracking datasets such as (two) ProPara tasks and Recipes. The model goes through the following three steps: 1) it reads a sentence at each time step t and identif... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_S1lhbnRqF7",
"H1e6Tr4X0Q",
"rkxUMYW6hX",
"H1gOvMYT37",
"r1e8lyS7RQ",
"Sklcn-_c3m",
"iclr_2019_S1lhbnRqF7",
"iclr_2019_S1lhbnRqF7",
"iclr_2019_S1lhbnRqF7"
] |
iclr_2019_S1lqMn05Ym | Information asymmetry in KL-regularized RL | Many real world tasks exhibit rich structure that is repeated across different parts of the state space or in time. In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning. We start from the KL regularized expected reward objective which introduces an additional component, a default policy. Instead of relying on a fixed default policy, we learn it from data. But crucially, we restrict the amount of information the default policy receives, forcing it to learn reusable behaviors that help the policy learn faster. We formalize this strategy and discuss connections to information bottleneck approaches and to the variational EM algorithm. We present empirical results in both discrete and continuous action domains and demonstrate that, for certain tasks, learning a default policy alongside the policy can significantly speed up and improve learning.
Please watch the video demonstrating learned experts and default policies on several continuous control tasks ( https://youtu.be/U2qA3llzus8 ). | accepted-poster-papers | Strengths
The paper introduces a promising and novel idea, i.e., regularizing RL via an informationally asymmetric default policy
The paper is well written. It has solid and extensive experimental results.
Weaknesses
There is a lack of benefit on dense-reward problems, which the authors acknowledge as a limitation. There are also some similarities to HRL approaches.
A lack of theoretical results is also suggested. To be fair, the paper makes a number of connections
with various bits of theory, although it perhaps does not directly result in any new theoretical analysis.
One reviewer was concerned about the need for extensive compute, and asked for comparisons to stronger (maxent) baselines.
The authors provide a convincing reply on these issues.
Points of Contention
While the scores are non-uniform (7,7,5), the most critical review, R1(5), is in fact quite positive on many
aspects of the paper, i.e., "this paper would have good impact in coming up with new
learning algorithms which are inspired from cognitive science literature as well as mathematically grounded."
The specific critiques of R1 were covered in detail by the authors.
Overall
The paper presents a novel and fairly intuitive idea, with very solid experimental results.
While the method has theoretical grounding, the results themselves are more experimental than theoretical.
The reviewers are largely enthused about the paper. The AC recommends acceptance as a poster.
| train | [
"rJxZQQzMCX",
"BJxuzZMMCm",
"B1gITyzGCQ",
"rygVCXQWR7",
"HJxrOYAJpX",
"S1gXUjKKhm",
"Bye57sxdhQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We appreciate the reviewer's positive feedback and the insightful comments. Thank you. Below we provide replies to the three concerns raised by the reviewer.\n\nComment: My understanding is that this \"informationally asymmetric\" KL-regularization approach is a general approach and can be combined with many policy... | [
-1,
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"Bye57sxdhQ",
"S1gXUjKKhm",
"HJxrOYAJpX",
"iclr_2019_S1lqMn05Ym",
"iclr_2019_S1lqMn05Ym",
"iclr_2019_S1lqMn05Ym",
"iclr_2019_S1lqMn05Ym"
] |
iclr_2019_S1lvm305YQ | TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) Pipeline for Musical Timbre Transfer | In this work, we address the problem of musical timbre transfer, where the goal is to manipulate the timbre of a sound sample from one instrument to match another instrument while preserving other musical content, such as pitch, rhythm, and loudness. In principle, one could apply image-based style transfer techniques to a time-frequency representation of an audio signal, but this depends on having a representation that allows independent manipulation of timbre as well as high-quality waveform generation. We introduce TimbreTron, a method for musical timbre transfer which applies “image” domain style transfer to a time-frequency representation of the audio signal, and then produces a high-quality waveform using a conditional WaveNet synthesizer. We show that the Constant Q Transform (CQT) representation is particularly well-suited to convolutional architectures due to its approximate pitch equivariance. Based on human perceptual evaluations, we confirmed that TimbreTron recognizably transferred the timbre while otherwise preserving the musical content, for both monophonic and polyphonic samples. We made an accompanying demo video here: https://www.cs.toronto.edu/~huang/TimbreTron/index.html which we strongly encourage you to watch before reading the paper. | accepted-poster-papers | Strengths: This paper is "thorough and well written", exploring the timbre transfer problem in a novel way. There is a video accompanying the work and some reviewers assessed the quality of the results as being good relative to other approaches. Two of the reviewers were quite positive about the work.
Weaknesses: Reviewer 2 (the lowest scoring reviewer) felt that the paper was a little too far from solving the problem to be of high significance and that there was:
- too much focus on STFT vs. CQT
- too little focus on getting WaveNet synthesis right
- too limited experimental validation (too restricted choice of instruments)
- poor resulting audio quality
- feels too much of combining black boxes
AMT listening tests were performed, but better baselines could have been used.
The author response addressed some of these points.
Contention:
An anonymous commenter noted that the revised manuscript added some names in the acknowledgements, thereby violating double blind review guidelines. However, the aggregated initial scores for this work were past the threshold for acceptance. Reviewer 2 was the most critical of the work but did not engage in dialog or comment on the author response.
Consensus:
The two positive reviewers felt that this work is worthy of presentation at ICLR. The AC recommends accept as poster unless the PC feels that the issue of names in the Acknowledgements in an updated draft is too serious.
| train | [
"HJg6f75xe4",
"HyxuHufle4",
"HylL4LaKCX",
"Bkx0mGhxRX",
"HyeAAbiq57",
"H1x8JZ2lA7",
"Skx38gheCm",
"H1eiwWnxRm",
"rJeIRWhgAX",
"H1g0p_-ChX",
"Skx4CYgj3X",
"r1gf_Og53X",
"SJgsgqXK2X",
"HkgRLhKU97"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thanks for your comments! \n\n- Sorry about the broken YouTube link, that link stopped working and please use this link instead: https://youtu.be/2ypcAZRYZJg We checked that this one is working. \n\n- We agree with your point that, at least in certain contexts, a research system should be better than commercially... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
8,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
-1,
-1
] | [
"HyxuHufle4",
"HylL4LaKCX",
"H1x8JZ2lA7",
"iclr_2019_S1lvm305YQ",
"HkgRLhKU97",
"Skx38gheCm",
"H1g0p_-ChX",
"Skx4CYgj3X",
"r1gf_Og53X",
"iclr_2019_S1lvm305YQ",
"iclr_2019_S1lvm305YQ",
"iclr_2019_S1lvm305YQ",
"iclr_2019_S1lvm305YQ",
"iclr_2019_S1lvm305YQ"
] |
iclr_2019_S1x2Fj0qKQ | Whitening and Coloring Batch Transform for GANs | Batch Normalization (BN) is a common technique used to speed-up and stabilize training. On the other hand, the learnable parameters of BN are commonly used in conditional Generative Adversarial Networks (cGANs) for representing class-specific information using conditional Batch Normalization (cBN). In this paper we propose to generalize both BN and cBN using a Whitening and Coloring based batch normalization. We show that our conditional Coloring can represent categorical conditioning information which largely helps the cGAN qualitative results. Moreover, we show that full-feature whitening is important in a general GAN scenario in which the training process is known to be highly unstable. We test our approach on different datasets and using different GAN networks and training protocols, showing a consistent improvement in all the tested frameworks. Our CIFAR-10 conditioned results are higher than all previous works on this dataset. | accepted-poster-papers | The paper addresses normalisation and conditioning of GANs. The authors propose to replace class-conditional batch norm with whitening and class-conditional coloring. Evaluation demonstrates that the method performs very well, and the ablation studies confirm the design choices. After extensive discussion, all reviewers agreed that this is a solid contribution, and the paper should be accepted. | train | [
"SygwEwj7hQ",
"BJlOMdwL1V",
"S1e0RerUJN",
"HJe2HPNEyN",
"HkxNyV7kJN",
"H1lZnaW1yN",
"HkxgtGiTC7",
"BylerOc6Rm",
"Byg2gu5TCX",
"SJe3354y6m",
"BJg7KNitCQ",
"BJl274iK07",
"rylaMQjKRQ",
"B1xUUyjKAX",
"r1locJstAX",
"Bkxhog2O2Q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposed Whitening and Coloring (WC) transform to replace batch normalization (BN) in generators for GAN. WC generalize BN by normalizing features with decorrelating (whitening) matrix, and then denormalizing (coloring) features by learnable weights. The main advantage of WC is that it exploits the full... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_S1x2Fj0qKQ",
"S1e0RerUJN",
"HJe2HPNEyN",
"HkxNyV7kJN",
"H1lZnaW1yN",
"HkxgtGiTC7",
"BylerOc6Rm",
"Byg2gu5TCX",
"SygwEwj7hQ",
"iclr_2019_S1x2Fj0qKQ",
"BJl274iK07",
"SygwEwj7hQ",
"Bkxhog2O2Q",
"SJe3354y6m",
"B1xUUyjKAX",
"iclr_2019_S1x2Fj0qKQ"
] |
iclr_2019_S1xLN3C9YX | Learnable Embedding Space for Efficient Neural Architecture Compression | We propose a method to incrementally learn an embedding space over the domain of network architectures, to enable the careful selection of architectures for evaluation during compressed architecture search. Given a teacher network, we search for a compressed network architecture by using Bayesian Optimization (BO) with a kernel function defined over our proposed embedding space to select architectures for evaluation. We demonstrate that our search algorithm can significantly outperform various baseline methods, such as random search and reinforcement learning (Ashok et al., 2018). The compressed architectures found by our method are also better than the state-of-the-art manually-designed compact architecture ShuffleNet (Zhang et al., 2018). We also demonstrate that the learned embedding space can be transferred to new settings for architecture search, such as a larger teacher network or a teacher network in a different architecture family, without any training. | accepted-poster-papers | The authors propose a method to learn a neural network architecture which achieves the same accuracy as a reference network, with fewer parameters, through Bayesian Optimization. The search is carried out on embeddings of the neural network architecture using a trained bi-directional LSTM. The reviewers generally found the work to be clearly written, and well motivated, with thorough experimentation, particularly in the revised version. Given the generally positive reviews, the AC recommends that the paper be accepted.
| val | [
"B1xjh9VUx4",
"r1lSlEa4x4",
"HJgs_Ol51V",
"BkePNnddJN",
"B1e700_3sQ",
"rJeIV9wpRX",
"Bygrx6CthX",
"rygTLwXI07",
"r1e9ywl7A7",
"Skl77Dx7A7",
"S1eGpLg70m",
"HJx0P9-xAQ",
"BygS45WxAm",
"ryglzcbg0m",
"BJei_UPcTX",
"SygyWQPqT7",
"BylTSOvc6X",
"S1ectNwqam",
"S1eeX4vcTX",
"BJxRNWmx6Q"... | [
"author",
"public",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_r... | [
"Thanks for the valuable comment. This paper “Neural Architecture Optimization” (NAO) was publicly available on Arxiv at the end of August 2018, about one month before the submission deadline for ICLR. We will add a discussion of NAO in the related work section in the final version of this paper.\n\nNAO and our wor... | [
-1,
-1,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1
] | [
-1,
-1,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1
] | [
"r1lSlEa4x4",
"iclr_2019_S1xLN3C9YX",
"BkePNnddJN",
"BylTSOvc6X",
"iclr_2019_S1xLN3C9YX",
"BJei_UPcTX",
"iclr_2019_S1xLN3C9YX",
"ryglzcbg0m",
"BygS45WxAm",
"HJx0P9-xAQ",
"ryglzcbg0m",
"SygyWQPqT7",
"S1eeX4vcTX",
"S1ectNwqam",
"B1e700_3sQ",
"Bygrx6CthX",
"BJxRNWmx6Q",
"Bygrx6CthX",
... |
iclr_2019_S1xNEhR9KX | On the Sensitivity of Adversarial Robustness to Input Data Distributions | Neural networks are vulnerable to small adversarial perturbations. Existing literature largely focused on understanding and mitigating the vulnerability of learned models. In this paper, we demonstrate an intriguing phenomenon about the most popular robust training method in the literature, adversarial training: Adversarial robustness, unlike clean accuracy, is sensitive to the input data distribution. Even a semantics-preserving transformations on the input data distribution can cause a significantly different robustness for the adversarial trained model that is both trained and evaluated on the new distribution. Our discovery of such sensitivity on data distribution is based on a study which disentangles the behaviors of clean accuracy and robust accuracy of the Bayes classifier. Empirical investigations further confirm our finding. We construct semantically-identical variants for MNIST and CIFAR10 respectively, and show that standardly trained models achieve comparable clean accuracies on them, but adversarially trained models achieve significantly different robustness accuracies. This counter-intuitive phenomenon indicates that input data distribution alone can affect the adversarial robustness of trained neural networks, not necessarily the tasks themselves. Lastly, we discuss the practical implications on evaluating adversarial robustness, and make initial attempts to understand this complex phenomenon. | accepted-poster-papers | This paper studies an interesting phenomenon related to adversarial training -- that adversarial robustness is quite sensitive to semantically lossless shifts in input data distribution.
Strengths
- Characterizes a previously unobserved phenomenon in adversarial training, which is quite relevant to ongoing research in the area.
- Interesting and novel theoretical analysis that motivates the relationship between adversarial robustness and the shape of input distribution.
Weaknesses
- Reviewers pointed out some shortcomings in experiments, and analysis of causes and remedies to adversarial robustness. The authors agree that given the current state of understanding, these are hard questions to pose good answers for. The result and observations by themselves are interesting and useful for the community.
The weakness that the paper does not propose a solution for the observed phenomenon remains, but all reviewers agree that the observation in itself is interesting. Therefore, I recommend that the paper be accepted.
| train | [
"ByeXyX2j67",
"SkeFL72oTX",
"S1ey4m3jTm",
"Bkxs4_XJaQ",
"HJgnyglkaQ",
"rke7hBji2m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for their time and efforts. We especially appreciate that all reviewers find that the problem being investigated is interesting. \nWe summarize our main contributions to address common concerns in this post, and provide more details in the responses to each reviewer.\n\nTo avoid clutter, we ... | [
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
3,
4,
2
] | [
"iclr_2019_S1xNEhR9KX",
"rke7hBji2m",
"HJgnyglkaQ",
"iclr_2019_S1xNEhR9KX",
"iclr_2019_S1xNEhR9KX",
"iclr_2019_S1xNEhR9KX"
] |
iclr_2019_S1xNb2A9YX | Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images | The human ability to recognize objects is impaired when the object is not shown in full. "Minimal images" are the smallest regions of an image that remain recognizable for humans. Ullman et al. (2016) show that a slight modification of the location and size of the visible region of the minimal image produces a sharp drop in human recognition accuracy. In this paper, we demonstrate that such drops in accuracy due to changes of the visible region are a common phenomenon between humans and existing state-of-the-art deep neural networks (DNNs), and are much more prominent in DNNs. We found many cases where DNNs classified one region correctly and the other incorrectly, though they only differed by one row or column of pixels, and were often bigger than the average human minimal image size. We show that this phenomenon is independent from previous works that have reported lack of invariance to minor modifications in object location in DNNs. Our results thus reveal a new failure mode of DNNs that also affects humans to a much lesser degree. They expose how fragile DNN recognition ability is in natural images even without adversarial patterns being introduced. Bringing the robustness of DNNs in natural images to the human level remains an open challenge for the community. | accepted-poster-papers | This paper characterizes a particular kind of fragility in the image classification ability of deep networks: minimal image regions which are classified correctly, but for which neighboring regions shifted by one row or column of pixels are classified incorrectly. Comparisons are made to human vision. All three reviewers recommend acceptance. AnonReviewer1 places the paper marginally above threshold, due to limited originality over Ullman et al. 2016, and concerns about overall significance.
| train | [
"SyxRF01G1V",
"rJl_o4iG6Q",
"BygMhU1Gy4",
"Hyl7zKrTnm",
"r1lABXCFAX",
"SylXYGAtAm",
"Hyx5WGAtRX",
"ByxajLCdhQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Note that evaluating human FRIs on DNNs (Ullman et al.) is quite different from extracting FRIs from DNNs (our paper). These two experiments investigate different things: \n\n*evaluating human FRIs on DNNs (Ullman et al.): Are DNNs affected by human FRIs? The answer was no.\n*extracting FRIs on DNNs (our paper): d... | [
-1,
7,
-1,
7,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
4
] | [
"BygMhU1Gy4",
"iclr_2019_S1xNb2A9YX",
"r1lABXCFAX",
"iclr_2019_S1xNb2A9YX",
"ByxajLCdhQ",
"Hyl7zKrTnm",
"rJl_o4iG6Q",
"iclr_2019_S1xNb2A9YX"
] |
iclr_2019_S1xcx3C5FX | A Statistical Approach to Assessing Neural Network Robustness | We present a new approach to assessing the robustness of neural networks based on estimating the proportion of inputs for which a property is violated. Specifically, we estimate the probability of the event that the property is violated under an input model. Our approach critically varies from the formal verification framework in that when the property can be violated, it provides an informative notion of how robust the network is, rather than just the conventional assertion that the network is not verifiable. Furthermore, it provides an ability to scale to larger networks than formal verification approaches. Though the framework still provides a formal guarantee of satisfiability whenever it successfully finds one or more violations, these advantages do come at the cost of only providing a statistical estimate of unsatisfiability whenever no violation is found. Key to the practical success of our approach is an adaptation of multi-level splitting, a Monte Carlo approach for estimating the probability of rare events, to our statistical robustness framework. We demonstrate that our approach is able to emulate formal verification procedures on benchmark problems, while scaling to larger networks and providing reliable additional information in the form of accurate estimates of the violation probability. | accepted-poster-papers | * Strengths
The paper addresses an important topic: how to bound the probability that a given “bad” event occurs for a neural network under some distribution of inputs. This could be relevant, for instance, in autonomous robotics settings where there is some environment model and we would like to bound the probability of an adverse outcome (e.g. for an autonomous aircraft, the time to crash under a given turbulence model). The desired failure probabilities are often low enough that direct Monte Carlo simulation is too expensive. The present work provides some preliminary but meaningful progress towards better methods of estimating such low-probability events, and provides some evidence that the methods can scale up to larger networks. It is well-written and of high technical quality.
* Weaknesses
In the initial submission, one reviewer was concerned that the term “verification” was misleading, as the methods had no formal guarantees that the estimated probability was correct. The authors proposed to revise the paper to remove reference to verification in the title and the text, and afterwards all reviewers agreed the work should be accepted. The paper also may slightly overstate the generality of the method. For instance, the claim that this can be used to show that adversarial examples do not exist is probably wrong---adversarial examples often occupy a negligibly small portion of the input space. There was also concern that most comparisons were limited to naive Monte Carlo.
* Discussion
While there was initial disagreement among reviewers, after the discussion all reviewers agree the paper should be accepted. However, we remind the authors to implement the changes promised during the discussion period. | train | [
"HkeQCAq6hQ",
"HJgESA-i0m",
"r1lDreZjAX",
"BylSNxbjAm",
"S1gKOXHc0Q",
"SkxO_sApa7",
"rkgd_qCpTQ",
"B1xZ6uCaT7",
"S1lNxt0aTQ",
"Hke5mvR6p7",
"rJxoeNq92X",
"HkxcI2vu37"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Verifying the properties of neural networks can be very difficult. Instead of\nfinding a formal proof for a property that gives a True/False answer, this\npaper proposes to take a sufficiently large number of samples around the input\npoint point and estimate the probability that a violation can be found. Naive\... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_S1xcx3C5FX",
"r1lDreZjAX",
"BylSNxbjAm",
"S1gKOXHc0Q",
"S1lNxt0aTQ",
"HkxcI2vu37",
"rJxoeNq92X",
"HkeQCAq6hQ",
"HkeQCAq6hQ",
"iclr_2019_S1xcx3C5FX",
"iclr_2019_S1xcx3C5FX",
"iclr_2019_S1xcx3C5FX"
] |
iclr_2019_S1xtAjR5tX | Improving Sequence-to-Sequence Learning via Optimal Transport | Sequence-to-sequence models are commonly trained via maximum likelihood estimation (MLE). However, standard MLE training considers a word-level objective, predicting the next word given the previous ground-truth partial sentence. This procedure focuses on modeling local syntactic patterns, and may fail to capture long-range semantic structure. We present a novel solution to alleviate these issues. Our approach imposes global sequence-level guidance via new supervision based on optimal transport, enabling the overall characterization and preservation of semantic features. We further show that this method can be understood as a Wasserstein gradient flow trying to match our model to the ground truth sequence distribution. Extensive experiments are conducted to validate the utility of the proposed approach, showing consistent improvements over a wide variety of NLP tasks, including machine translation, abstractive text summarization, and image captioning. | accepted-poster-papers | The paper proposes the idea of using optimal transport to evaluate the semantic correspondence between two sets of words predicted by the model and ground truth sequences. Strong empirical results are presented which support the use of optimal transport in conjunction with log-likelihood for training sequence models. I appreciate the improvements to the manuscript during the review process, and I encourage the authors to address the rest of the comments in the final version. | test | [
"SkxsRsBBeV",
"S1gsH3rHxN",
"HJlcs_vT3X",
"B1x9vC6K27",
"ryluNc6Y67",
"Bkg6P5TKaQ",
"B1lXhKaKam",
"SJl5T5TYaQ",
"BJxfQjpFpQ",
"rJekQDxD2m"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Dear AnonReviewer3:\n\nThanks for your updated comments, we will continue revising our draft to make sure our method is well-justified.\n\nThanks again for your valuable time.\n\nBest,\nAuthors",
"Dear AnonReviewer1:\n\nThanks very much for the updated review and your valuable time.\n\nBest,\nAuthors",
"======... | [
-1,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"HJlcs_vT3X",
"B1x9vC6K27",
"iclr_2019_S1xtAjR5tX",
"iclr_2019_S1xtAjR5tX",
"HJlcs_vT3X",
"ryluNc6Y67",
"iclr_2019_S1xtAjR5tX",
"B1x9vC6K27",
"rJekQDxD2m",
"iclr_2019_S1xtAjR5tX"
] |
iclr_2019_S1zk9iRqF7 | PATE-GAN: Generating Synthetic Data with Differential Privacy Guarantees | Machine learning has the potential to assist many communities in using the large datasets that are becoming more and more available. Unfortunately, much of that potential is not being realized because it would require sharing data in a way that compromises privacy. In this paper, we investigate a method for ensuring (differential) privacy of the generator of the Generative Adversarial Nets (GAN) framework. The resulting model can be used for generating synthetic data on which algorithms can be trained and validated, and on which competitions can be conducted, without compromising the privacy of the original dataset. Our method modifies the Private Aggregation of Teacher Ensembles (PATE) framework and applies it to GANs. Our modified framework (which we call PATE-GAN) allows us to tightly bound the influence of any individual sample on the model, resulting in tight differential privacy guarantees and thus an improved performance over models with the same guarantees. We also look at measuring the quality of synthetic data from a new angle; we assert that for the synthetic data to be useful for machine learning researchers, the relative performance of two algorithms (trained and tested) on the synthetic dataset should be the same as their relative performance (when trained and tested) on the original dataset. Our experiments, on various datasets, demonstrate that PATE-GAN consistently outperforms the state-of-the-art method with respect to this and other notions of synthetic data quality. | accepted-poster-papers | This paper improves upon the PATE-GAN framework for differentially-private synthetic data generation. They eliminate the need for public data samples for training the GAN, by providing a distribution which can be sampled from instead.
The reviewers were unanimous in their vote to accept. | train | [
"ryxVTXd62X",
"r1g49cO_CX",
"B1glPc_dRQ",
"B1x60Mr0aQ",
"S1gY5Z8saQ",
"r1gVLn1qaQ",
"rylIwo19aQ",
"HJeqCskq6m",
"r1gKFckcaX",
"Hyg0gt1q6Q",
"Bkgo4YlMaQ",
"rylkWx4-aQ",
"HyxXmnTC27",
"HygVgKtjn7"
] | [
"official_reviewer",
"author",
"author",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer"
] | [
"[Post revision update] The authors' comments addressed my concerns, especially on the experiment side. I changed the score.\n\nThis paper applies the PATE framework to GAN, and evaluates the quality of the generated data with some predictive tasks. The experimental results on some real datasets show that the propo... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_S1zk9iRqF7",
"B1x60Mr0aQ",
"S1gY5Z8saQ",
"Hyg0gt1q6Q",
"r1gKFckcaX",
"HyxXmnTC27",
"ryxVTXd62X",
"HygVgKtjn7",
"Bkgo4YlMaQ",
"rylkWx4-aQ",
"iclr_2019_S1zk9iRqF7",
"iclr_2019_S1zk9iRqF7",
"iclr_2019_S1zk9iRqF7",
"iclr_2019_S1zk9iRqF7"
] |
iclr_2019_S1zz2i0cY7 | Integer Networks for Data Compression with Latent-Variable Models | We consider the problem of using variational latent-variable models for data compression. For such models to produce a compressed binary sequence, which is the universal data representation in a digital world, the latent representation needs to be subjected to entropy coding. Range coding as an entropy coding technique is optimal, but it can fail catastrophically if the computation of the prior differs even slightly between the sending and the receiving side. Unfortunately, this is a common scenario when floating point math is used and the sender and receiver operate on different hardware or software platforms, as numerical round-off is often platform dependent. We propose using integer networks as a universal solution to this problem, and demonstrate that they enable reliable cross-platform encoding and decoding of images using variational models. | accepted-poster-papers | This paper addresses the issue of numerical rounding-off errors that can arise when using latent variable models for data compression, e.g., because of differences in floating point arithmetic across different platforms (sender and receiver). The authors propose using neural networks that perform integer arithmetic (integer networks) to mitigate this issue. The problem statement is well described, and the presentation is generally OK, although it could be improved in certain aspects as pointed out by the reviewers. The experiments are properly carried out, and the experimental results are good.
Thank you for addressing the questions raised by the reviewers. After taking into account the authors' responses, there is consensus that the paper is worthy of publication. I therefore recommend acceptance. | test | [
"rJgzoY72kV",
"Syx712aCn7",
"SJeUDvnYkE",
"HJeiMdGdyN",
"B1lVsHYh07",
"S1gRGcFcRm",
"BJe8Typu3Q",
"BylRxUzXC7",
"rJeSLLtxAm",
"r1x3GUFeAX",
"rJguaBteCQ",
"ByxKt0Z8pX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thank you for updating your score.\n\nNote that Huffman coding with a conditional probability model would have equivalent issues, because the design of the Huffman code would again be very sensitive to fluctuations in the probabilities. It's a fundamental issue with entropy coding methods in general.\n\nWe'll thin... | [
-1,
7,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3
] | [
"SJeUDvnYkE",
"iclr_2019_S1zz2i0cY7",
"B1lVsHYh07",
"S1gRGcFcRm",
"S1gRGcFcRm",
"r1x3GUFeAX",
"iclr_2019_S1zz2i0cY7",
"rJeSLLtxAm",
"BJe8Typu3Q",
"Syx712aCn7",
"ByxKt0Z8pX",
"iclr_2019_S1zz2i0cY7"
] |
iclr_2019_SJG6G2RqtX | Value Propagation Networks | We present Value Propagation (VProp), a set of parameter-efficient differentiable planning modules built on Value Iteration which can successfully be trained using reinforcement learning to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments. We show that the modules enable learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems. We evaluate on static and dynamic configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes, and on a StarCraft navigation scenario, with more complex dynamics, and pixels as input. | accepted-poster-papers |
Interesting idea; reviewers were positive and indicated that the presentation should be improved.
| train | [
"rkexJb4i2X",
"B1lJJiUc27",
"S1gEGt6KC7",
"rklgnOpKA7",
"BJxWBITYAX",
"B1lNYtd03m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Update:\nI thank the authors for their clarifications. I have raised my rating, however I believe the exposition of the paper should be improved and some of their responses should be integrated to the main text.\n\nThe paper proposes two new modules to overcome some limitations of VIN, but the additional or altern... | [
6,
7,
-1,
-1,
-1,
7
] | [
3,
3,
-1,
-1,
-1,
3
] | [
"iclr_2019_SJG6G2RqtX",
"iclr_2019_SJG6G2RqtX",
"B1lJJiUc27",
"rkexJb4i2X",
"B1lNYtd03m",
"iclr_2019_SJG6G2RqtX"
] |
iclr_2019_SJGvns0qK7 | Bayesian Policy Optimization for Model Uncertainty | Addressing uncertainty is critical for autonomous systems to robustly adapt to the real world. We formulate the problem of model uncertainty as a continuous Bayes-Adaptive Markov Decision Process (BAMDP), where an agent maintains a posterior distribution over latent model parameters given a history of observations and maximizes its expected long-term reward with respect to this belief distribution. Our algorithm, Bayesian Policy Optimization, builds on recent policy optimization algorithms to learn a universal policy that navigates the exploration-exploitation trade-off to maximize the Bayesian value function. To address challenges from discretizing the continuous latent parameter space, we propose a new policy network architecture that encodes the belief distribution independently from the observable state. Our method significantly outperforms algorithms that address model uncertainty without explicitly reasoning about belief distributions and is competitive with state-of-the-art Partially Observable Markov Decision Process solvers. | accepted-poster-papers | The paper proposed a deep, Bayesian optimization approach to RL with model uncertainty (BAMDP). The algorithm is a variant of policy gradient, which in each iteration uses a Bayes filter on sampled MDPs to update the posterior belief distribution of the parameters. An extension is also made to POMDPs.
The work is a combination of existing techniques, and the algorithmic novelty is a bit low. Initial reviews suggested the empirical study could be improved with better baselines, and the main idea of the proposed method could be expanded. The revised version moves in this direction, and the author responses were helpful. Overall, the paper is a useful contribution.
"S1eTGXOBgE",
"S1l-AjjgxE",
"BkeVWK4MyV",
"BJe0P_VMJV",
"Byl3qfTT0m",
"Hklm9vec3m",
"HJxJYMhcA7",
"H1xPAYSdAm",
"SyxoCPr_0Q",
"B1lyqwS_C7",
"ryl2kYHdRX",
"rkgKO-c637",
"ryg0dfCYnX"
] | [
"author",
"public",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your feedback. \n\nBPO vs. Peng et al. [1]: \nThe posterior belief distribution compactly summarizes the history of observations, and LSTMs can be interpreted similarly. The key difference between BPO and [1] is that BPO explicitly utilizes the belief distribution, while in [1] the LSTM must implicit... | [
-1,
-1,
-1,
-1,
5,
7,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"S1l-AjjgxE",
"iclr_2019_SJGvns0qK7",
"HJxJYMhcA7",
"Byl3qfTT0m",
"iclr_2019_SJGvns0qK7",
"iclr_2019_SJGvns0qK7",
"H1xPAYSdAm",
"ryg0dfCYnX",
"rkgKO-c637",
"iclr_2019_SJGvns0qK7",
"Hklm9vec3m",
"iclr_2019_SJGvns0qK7",
"iclr_2019_SJGvns0qK7"
] |
iclr_2019_SJVmjjR9FX | Variational Bayesian Phylogenetic Inference | Bayesian phylogenetic inference is currently done via Markov chain Monte Carlo with simple mechanisms for proposing new states, which hinders exploration efficiency and often requires long runs to deliver accurate posterior estimates. In this paper we present an alternative approach: a variational framework for Bayesian phylogenetic analysis. We approximate the true posterior using an expressive graphical model for tree distributions, called a subsplit Bayesian network, together with appropriate branch length distributions. We train the variational approximation via stochastic gradient ascent and adopt multi-sample based gradient estimators for different latent variables separately to handle the composite latent space of phylogenetic models. We show that our structured variational approximations are flexible enough to provide comparable posterior estimation to MCMC, while requiring less computation due to a more efficient tree exploration mechanism enabled by variational inference. Moreover, the variational approximations can be readily used for further statistical analysis such as marginal likelihood estimation for model comparison via importance sampling. Experiments on both synthetic data and real data Bayesian phylogenetic inference problems demonstrate the effectiveness and efficiency of our methods. | accepted-poster-papers | The reviewers lean to accept, and the authors clearly put a significant amount of time into their response. I will also lean to accept. However, the comments of reviewer 2 should be taken seriously, and addressed if possible, including an attempt to cut the paper length down. | train | [
"r1x5JtLm0X",
"rylqTcJF67",
"Bkl2HqJYTm",
"SkgNStowpX",
"rkeE3OswTX",
"HklnmwjDaX",
"BkgUn-Vwpm",
"rke2eXKkTm",
"SJeInxZA3m"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all reviewers for the constructive feedback. We have revised the paper, and have incorporated their suggestions with the following major changes:\n\n- We reorganized the SBN section and added more detailed discussion on SBN implementations and parameter sharing to better explain the subsplit Bayesian netw... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"iclr_2019_SJVmjjR9FX",
"BkgUn-Vwpm",
"BkgUn-Vwpm",
"iclr_2019_SJVmjjR9FX",
"rke2eXKkTm",
"SJeInxZA3m",
"iclr_2019_SJVmjjR9FX",
"iclr_2019_SJVmjjR9FX",
"iclr_2019_SJVmjjR9FX"
] |
iclr_2019_SJe3HiC5KX | LEARNING FACTORIZED REPRESENTATIONS FOR OPEN-SET DOMAIN ADAPTATION | Domain adaptation for visual recognition has undergone great progress in the past few years. Nevertheless, most existing methods work in the so-called closed-set scenario, assuming that the classes depicted by the target images are exactly the same as those of the source domain. In this paper, we tackle the more challenging, yet more realistic case of open-set domain adaptation, where new, unknown classes can be present in the target data. While, in the unsupervised scenario, one cannot expect to be able to identify each specific new class, we aim to automatically detect which samples belong to these new classes and discard them from the recognition process. To this end, we rely on the intuition that the source and target samples depicting the known classes can be generated by a shared subspace, whereas the target samples from unknown classes come from a different, private subspace. We therefore introduce a framework that factorizes the data into shared and private parts, while encouraging the shared representation to be discriminative. Our experiments on standard benchmarks evidence that our approach significantly outperforms the state-of-the-art in open-set domain adaptation. | accepted-poster-papers | This paper proposes a new approach to domain adaptation based on sub-spacing, such that outliers are filtered out. While similar ideas have been used e.g. in multi-view learning, their application to domain adaptation makes it a novel and interesting approach.
While the above is considered by the AC an adequate contribution to ICLR, the authors are encouraged to investigate further the implications of the assumptions made, so that the derived criteria seem less heuristic, as R1 pointed out.
There had been some concerns regarding the experiments, but the authors have been very active in the rebuttal period and addressed these concerns satisfactorily.
| train | [
"S1e2Kwm41V",
"B1xndmvrhm",
"rylFaU-Q1E",
"Hklg2V-Yhm",
"HJew2UzZ1N",
"HkxVfJFnCm",
"rkxO1KVBRQ",
"S1gMYY4SRm",
"HJgM4Y4BCm",
"rJx5jO4BAX",
"HylQS64NAm",
"rJgehkAj67",
"HylgmsCnnm"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"Thank you for your response.\n\nWe re-ran the experiments on truly-unknown classes by further removing the Phone class in the Office experiment and compared the accuracy of our approach against the SVM baseline and the open-set ATI method of (Busto & Gall, 2017) using remaining 4 truly-unknown classes. The conclus... | [
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"rylFaU-Q1E",
"iclr_2019_SJe3HiC5KX",
"HJew2UzZ1N",
"iclr_2019_SJe3HiC5KX",
"HkxVfJFnCm",
"S1gMYY4SRm",
"HylgmsCnnm",
"B1xndmvrhm",
"Hklg2V-Yhm",
"iclr_2019_SJe3HiC5KX",
"rJgehkAj67",
"Hklg2V-Yhm",
"iclr_2019_SJe3HiC5KX"
] |
iclr_2019_SJe9rh0cFX | On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks | Compression is a key step to deploy large neural networks on resource-constrained platforms. As a popular compression technique, quantization constrains the number of distinct weight values and thus reducing the number of bits required to represent and store each weight. In this paper, we study the representation power of quantized neural networks. First, we prove the universal approximability of quantized ReLU networks on a wide class of functions. Then we provide upper bounds on the number of weights and the memory size for a given approximation error bound and the bit-width of weights for function-independent and function-dependent structures. Our results reveal that, to attain an approximation error bound of ϵ, the number of weights needed by a quantized network is no more than O(log5(1/ϵ)) times that of an unquantized network. This overhead is of much lower order than the lower bound of the number of weights needed for the error bound, supporting the empirical success of various quantization techniques. To the best of our knowledge, this is the first in-depth study on the complexity bounds of quantized neural networks. | accepted-poster-papers | This paper addresses a well motivated problem and provides new insight on the theoretical analysis of representational power in quantized networks. The results contribute towards a better understanding of quantized networks in a way that has not been treated in the past.
The most moderate rating (marginally above acceptance threshold) explains that while the paper is technically quite simple, it gives an interesting study and blends well into recent literature on an important topic.
A criticism is that the approach uses modules to approximate the basic operations of non-quantized networks. As such, it is not compatible with quantizing the weights of a given network structure, but rather with choosing the network structure under a given level of quantization. However, reviewers consider that this issue is discussed directly and clearly in the paper.
The reviewers report to be only fairly confident about their assessment, but they all give a positive or very positive evaluation of the paper. | train | [
"rklKqbep3X",
"SJgusz_wpX",
"Byx9YluPp7",
"rylbKJuw67",
"rygt00vwpm",
"BkeWLpq_37",
"HyeAz3FNnQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the expressive power of quantized ReLU networks from a theoretical point of view. This is well-motivated by the recent success of using quantized neural networks as a compression technique. This paper considers both linear quantization and non-linear quantization, both function independent netwo... | [
7,
-1,
-1,
-1,
-1,
6,
8
] | [
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_SJe9rh0cFX",
"iclr_2019_SJe9rh0cFX",
"HyeAz3FNnQ",
"BkeWLpq_37",
"rklKqbep3X",
"iclr_2019_SJe9rh0cFX",
"iclr_2019_SJe9rh0cFX"
] |
iclr_2019_SJeXSo09FQ | Learning Localized Generative Models for 3D Point Clouds via Graph Convolution | Point clouds are an important type of geometric data and have widespread use in computer graphics and vision. However, learning representations for point clouds is particularly challenging due to their nature as being an unordered collection of points irregularly distributed in 3D space. Graph convolution, a generalization of the convolution operation for data defined over graphs, has been recently shown to be very successful at extracting localized features from point clouds in supervised or semi-supervised tasks such as classification or segmentation. This paper studies the unsupervised problem of a generative model exploiting graph convolution. We focus on the generator of a GAN and define methods for graph convolution when the graph is not known in advance as it is the very output of the generator. The proposed architecture learns to generate localized features that approximate graph embeddings of the output geometry. We also study the problem of defining an upsampling layer in the graph-convolutional generator, such that it learns to exploit a self-similarity prior on the data distribution to sample more effectively. | accepted-poster-papers | All reviewers gave an accept rating: 9, 7 & 6.
A clear accept -- just not strong enough reviewer support for an oral. | test | [
"B1gurrtaR7",
"rJeZKPJ90m",
"Byx9eV8FCQ",
"HylKKUQB3m",
"BJeBIrSE07",
"B1g6gDPFTQ",
"rJxTaLDKpX",
"HJxK98vY6m",
"HygP3M9rnm",
"rJeoI89NnQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your detailed response. My rating remains unchanged.",
"Thanks for the response. I am keeping my rating.",
"We thank the reviewer for the reply and updating the rating. \n\nConcerning feature learning, we agree that discriminator features can be used in prediction tasks. However, in our statements w... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
9,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
3
] | [
"B1g6gDPFTQ",
"Byx9eV8FCQ",
"BJeBIrSE07",
"iclr_2019_SJeXSo09FQ",
"rJxTaLDKpX",
"rJeoI89NnQ",
"HylKKUQB3m",
"HygP3M9rnm",
"iclr_2019_SJeXSo09FQ",
"iclr_2019_SJeXSo09FQ"
] |
iclr_2019_SJfPFjA9Fm | ACCELERATING NONCONVEX LEARNING VIA REPLICA EXCHANGE LANGEVIN DIFFUSION | Langevin diffusion is a powerful method for nonconvex optimization, which enables the escape from local minima by injecting noise into the gradient. In particular, the temperature parameter controlling the noise level gives rise to a tradeoff between ``global exploration'' and ``local exploitation'', which correspond to high and low temperatures. To attain the advantages of both regimes, we propose to use replica exchange, which swaps between two Langevin diffusions with different temperatures. We theoretically analyze the acceleration effect of replica exchange from two perspectives: (i) the convergence in χ2-divergence, and (ii) the large deviation principle. Such an acceleration effect allows us to faster approach the global minima. Furthermore, by discretizing the replica exchange Langevin diffusion, we obtain a discrete-time algorithm. For such an algorithm, we quantify its discretization error in theory and demonstrate its acceleration effect in practice. | accepted-poster-papers | The main criticisms were around novelty: that the analysis is rather standard. Given that all the reviewers agreed the paper is well written, I'm inclined to think the paper will be a useful contribution to the literature. The authors also highlight the analysis of the discretization, which seems to have been missed by the most critical reviewer. I would suggest that the authors use the criticisms to rework the paper's introduction, to better explain which parts of the work are novel and which parts are standard. I would also suggest that standard background be moved to the appendix so that it is there for the nonexpert, while making the body of the work more focused on the novel aspects. | train | [
"rJlr5Z-t0Q",
"Syx9fKn7RX",
"SJgu9sxfAQ",
"Sygv6hHya7",
"S1eEnYWC3Q",
"SkxMd1K32Q"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We appreciate your valuable comments. As you have said, these methods are popular in practice and achieve good performance. However, most of them are done in the setting of MCMC, and people rarely use them in nonconvex optimization. One contribution of this paper is to apply these techniques to optimization proble... | [
-1,
-1,
-1,
4,
7,
6
] | [
-1,
-1,
-1,
4,
4,
4
] | [
"Sygv6hHya7",
"S1eEnYWC3Q",
"SkxMd1K32Q",
"iclr_2019_SJfPFjA9Fm",
"iclr_2019_SJfPFjA9Fm",
"iclr_2019_SJfPFjA9Fm"
] |
iclr_2019_SJfZKiC5FX | Dynamically Unfolding Recurrent Restorer: A Moving Endpoint Control Method for Image Restoration | In this paper, we propose a new control framework called the moving endpoint control to restore images corrupted by different degradation levels in one model. The proposed control problem contains a restoration dynamics which is modeled by an RNN. The moving endpoint, which is essentially the terminal time of the associated dynamics, is determined by a policy network. We call the proposed model the dynamically unfolding recurrent restorer (DURR). Numerical experiments show that DURR is able to achieve state-of-the-art performances on blind image denoising and JPEG image deblocking. Furthermore, DURR can well generalize to images with higher degradation levels that are not included in the training stage. | accepted-poster-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
- The approach is novel
- The experimental results are convincing.
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
- The authors didn't show results with non-Gaussian noise
- Some details that could help the understanding of the method are missing.
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
No major points of contention.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers reached a consensus that the paper should be accepted.
| train | [
"BJl2pJ8_JV",
"H1xU19HrR7",
"HJlCZ9SBCm",
"HkxJZkLBRX",
"BylKLKBS0m",
"SyeNxz5NaX",
"HyxZJPrEaX",
"HJgmIonkp7",
"HJg-BVqJTX",
"HJgtlrWj3m",
"rygjryLgn7"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"public",
"public",
"official_reviewer",
"official_reviewer"
] | [
"We have done our experiments on color image gaussian denoising, the quantitative results are reported as follows:\n\n\\sigma 25 35 45 55 65* 75*\nCBM3D 30.71 28.89 27.83 26.97 26.29 25.74\nCDnCNN 31.22 29.57 28.40 27.46 26.40 24.47\nCDURR 31.25 29.63 28.48... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
5,
4
] | [
"iclr_2019_SJfZKiC5FX",
"HJgtlrWj3m",
"rygjryLgn7",
"SyeNxz5NaX",
"HJgmIonkp7",
"iclr_2019_SJfZKiC5FX",
"HJg-BVqJTX",
"iclr_2019_SJfZKiC5FX",
"iclr_2019_SJfZKiC5FX",
"iclr_2019_SJfZKiC5FX",
"iclr_2019_SJfZKiC5FX"
] |
iclr_2019_SJfb5jCqKm | Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers | We consider the problem of uncertainty estimation in the context of (non-Bayesian) deep neural classification. In this context, all known methods are based on extracting uncertainty signals from a trained network optimized to solve the classification problem at hand. We demonstrate that such techniques tend to introduce biased estimates for instances whose predictions are supposed to be highly confident. We argue that this deficiency is an artifact of the dynamics of training with SGD-like optimizers, and it has some properties similar to overfitting. Based on this observation, we develop an uncertainty estimation algorithm that selectively estimates the uncertainty of highly confident points, using earlier snapshots of the trained model, before their estimates are jittered (and way before they are ready for actual classification). We present extensive experiments indicating that the proposed algorithm provides uncertainty estimates that are consistently better than all known methods. | accepted-poster-papers | The paper proposes an improved method for uncertainty estimation in deep neural networks.
Reviewer 2 and the AC note that the paper is a bit isolated in terms of comparison to the literature.
However, as all of the reviewers and the AC found, the paper is well written and the proposed idea is clearly new and interesting.
"Hyed9FrMg4",
"SJliMPf3JN",
"B1lSBhHOh7",
"SyejFK4tR7",
"rketdetd0X",
"Syx6Ultu0m",
"SyeGHeYORm",
"Hkg9ojuuAm",
"SklM8yHlC7",
"SygCe6Ee0m",
"SJeUqaVgCX",
"H1lwmXbbTm",
"HyxLcIjwh7"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your comment. Your are correct. Indeed, when inference cost is a concern, we recommend using the PES algorithm which is more expensive to train, but much cheaper at test time. An open direction is distillation [1] of AES to a single, fast model (see concluding remarks in our paper).\n\n[1] - Geoffrey Hi... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"SJliMPf3JN",
"iclr_2019_SJfb5jCqKm",
"iclr_2019_SJfb5jCqKm",
"B1lSBhHOh7",
"HyxLcIjwh7",
"B1lSBhHOh7",
"H1lwmXbbTm",
"iclr_2019_SJfb5jCqKm",
"B1lSBhHOh7",
"H1lwmXbbTm",
"HyxLcIjwh7",
"iclr_2019_SJfb5jCqKm",
"iclr_2019_SJfb5jCqKm"
] |
iclr_2019_SJgEl3A5tm | CAMOU: Learning Physical Vehicle Camouflages to Adversarially Attack Detectors in the Wild | In this paper, we conduct an intriguing experimental study about the physical adversarial attack on object detectors in the wild. In particular, we learn a camouflage pattern to hide vehicles from being detected by state-of-the-art convolutional neural network based detectors. Our approach alternates between two threads. In the first, we train a neural approximation function to imitate how a simulator applies a camouflage to vehicles and how a vehicle detector performs given images of the camouflaged vehicles. In the second, we minimize the approximated detection score by searching for the optimal camouflage. Experiments show that the learned camouflage can not only hide a vehicle from the image-based detectors under many test cases but also generalizes to different environments, vehicles, and object detectors. | accepted-poster-papers | This work develops a method for learning camouflage patterns that could be painted onto a 3d object in order to reliably fool an image-based object detector. Experiments are conducted in a simulated environment.
All reviewers agree that the problem and approach are interesting. Reviewers 1 and 3 are highly positive, while Reviewer 2 believes that real-world experiments are necessary to substantiate the claims of the paper. While such experiments would certainly enhance the impact of the work, I agree with Reviewers 1 and 3 that the current approach is sufficiently interesting and well-developed on its own. | train | [
"Bkl5sVtIAQ",
"H1g6Z8FURX",
"BJg7t_Y8AX",
"HyxNHfKU0m",
"HylF5fZc37",
"SJgCAUuK3X",
"B1gpWORE2m"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewer 2,\n\nThank you for your detailed reviews. We will address your concerns one by one.\n\nQ: the fact that this is done in the simulation is understandable but to some degree negates the authors’ point that physicality matters because it is harder than images. How effective is learned camouflage in rea... | [
-1,
-1,
-1,
-1,
4,
8,
7
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"HylF5fZc37",
"SJgCAUuK3X",
"B1gpWORE2m",
"iclr_2019_SJgEl3A5tm",
"iclr_2019_SJgEl3A5tm",
"iclr_2019_SJgEl3A5tm",
"iclr_2019_SJgEl3A5tm"
] |
iclr_2019_SJgNwi09Km | Learning Latent Superstructures in Variational Autoencoders for Deep Multidimensional Clustering | We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features. In general, our superstructure is a tree structure of multiple super latent variables and it is automatically learned from data. When there is only one latent variable in the superstructure, our model reduces to one that assumes the latent features to be generated from a Gaussian mixture model. We call our model the latent tree variational autoencoder (LTVAE). Whereas previous deep learning methods for clustering produce only one partition of data, LTVAE produces multiple partitions of data, each being given by one super latent variable. This is desirable because high dimensional data usually have many different natural facets and can be meaningfully partitioned in multiple ways. | accepted-poster-papers | A well-written paper that proposes an original approach for learning a structured prior for VAEs, as a latent tree model whose structure and parameters are simultaneously learned. It describes a well-principled approach to learning a multifaceted clustering, and is shown empirically to be competitive with other unsupervised clustering models.
Reviewers noted that the approach reached a worse log-likelihood than a regular VAE (which it should be able to find as a special case), hinting towards potential optimization difficulties (local minimum?). This would benefit from a more in-depth analysis.
But reviewers appreciated the gain in interpretability and insights from the model, and unanimously agreed that the paper was an interesting novel contribution worth publishing.
| train | [
"BJgTFT7FAX",
"B1xAvgXQs7",
"B1lZLuXKAm",
"BJlD5qmOCm",
"ByxITWxvC7",
"H1l3m3kUhm",
"ryetQThr0X",
"BkeKT09rR7",
"BkxQvp5HCX",
"HJgqmTcSRX",
"Bke3mO223X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thank you for the speedy further revision - I appreciate that I have been a little demanding! Though I think the paper could still be refined a bit further on the points I and the other reviewers have raised for its final version (in particular I think there are still some open questions relating to the encoder), ... | [
-1,
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
8
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4
] | [
"BJlD5qmOCm",
"iclr_2019_SJgNwi09Km",
"BkeKT09rR7",
"ryetQThr0X",
"BkxQvp5HCX",
"iclr_2019_SJgNwi09Km",
"HJgqmTcSRX",
"B1xAvgXQs7",
"H1l3m3kUhm",
"Bke3mO223X",
"iclr_2019_SJgNwi09Km"
] |
iclr_2019_SJggZnRcFQ | Learning Programmatically Structured Representations with Perceptor Gradients | We present the perceptor gradients algorithm -- a novel approach to learning symbolic representations based on the idea of decomposing an agent's policy into i) a perceptor network extracting symbols from raw observation data and ii) a task encoding program which maps the input symbols to output actions. We show that the proposed algorithm is able to learn representations that can be directly fed into a Linear-Quadratic Regulator (LQR) or a general purpose A* planner. Our experimental results confirm that the perceptor gradients algorithm is able to efficiently learn transferable symbolic representations as well as generate new observations according to a semantically meaningful specification.
 | accepted-poster-papers | This paper considers the problem of learning symbolic representations from raw data. The reviewers are split on the importance of the paper. The main argument in favor of acceptance is that it bridges neural and symbolic approaches in the reinforcement learning problem domain, whereas most previous work that has attempted to bridge this gap has been in inverse graphics or physical dynamics settings. Hence, it makes for a contribution that is relevant to the ICLR community. The main downside is that the paper does not provide particularly surprising insights, and could become much stronger with more complex experimental domains.
It seems like the benefits slightly outweigh the weaknesses. Hence, I recommend accept. | train | [
"SJl6pksG6X",
"rye-jhH507",
"SJxnyJB_0m",
"Hyxed34OCQ",
"Bkxrj2N_CX",
"Hyg6yj_qnX",
"HJxKv6VUaQ",
"BJgi_JbSaQ",
"Bkl4fe-r6X",
"r1xETQsMTX",
"r1xbCrZgpQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The high-level problem this paper tackles is that of learning symbolic representations from raw noisy data, based on the hypothesis that symbolic representations that are grounded in the semantic content of the environment are less susceptible to overfitting.\n\nThe authors propose the perceptor gradients algorith... | [
7,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
6
] | [
5,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
1
] | [
"iclr_2019_SJggZnRcFQ",
"Bkxrj2N_CX",
"iclr_2019_SJggZnRcFQ",
"SJl6pksG6X",
"Hyxed34OCQ",
"iclr_2019_SJggZnRcFQ",
"BJgi_JbSaQ",
"Hyg6yj_qnX",
"r1xETQsMTX",
"Hyg6yj_qnX",
"iclr_2019_SJggZnRcFQ"
] |
iclr_2019_SJgsCjCqt7 | Variational Autoencoders with Jointly Optimized Latent Dependency Structure | We propose a method for learning the dependency structure between latent variables in deep latent variable models. Our general modeling and inference framework combines the complementary strengths of deep generative models and probabilistic graphical models. In particular, we express the latent variable space of a variational autoencoder (VAE) in terms of a Bayesian network with a learned, flexible dependency structure. The network parameters, variational parameters as well as the latent topology are optimized simultaneously with a single objective. Inference is formulated via a sampling procedure that produces expectations over latent variable structures and incorporates top-down and bottom-up reasoning over latent variable values. We validate our framework in extensive experiments on MNIST, Omniglot, and CIFAR-10. Comparisons to state-of-the-art structured variational autoencoder baselines show improvements in terms of the expressiveness of the learned model. | accepted-poster-papers | Strengths:
This paper develops a method for learning the structure of discrete latent variables in a VAE. The overall approach is well-explained and reasonable.
Weaknesses:
Ultimately, this is done using the usual style of discrete relaxations, which come with tradeoffs and inconsistencies.
Consensus:
The reviewers all agreed that the paper is above the bar. | train | [
"BygZQun5nQ",
"BJg0X5SnRX",
"H1eOmiKiom",
"ByxGUZqcCQ",
"BkeRKhQ5Rm",
"rJgSaaLv3m",
"B1l6cBxDAX",
"BJxeiNEXR7",
"HyeaxR-fAX",
"SklYoT-G0X",
"HyxKzFZfAm",
"HkeyruWzAm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Often in a deep generative model with multiple latent variables, the structure amongst the latent variables is pre-specified before parameter estimation. This work aims to learn the structure as part of the parameters. To do so, this work represents all possible dependencies amongst the latent random variables via... | [
7,
-1,
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
5,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_SJgsCjCqt7",
"HkeyruWzAm",
"iclr_2019_SJgsCjCqt7",
"BkeRKhQ5Rm",
"BJxeiNEXR7",
"iclr_2019_SJgsCjCqt7",
"HyxKzFZfAm",
"SklYoT-G0X",
"iclr_2019_SJgsCjCqt7",
"H1eOmiKiom",
"rJgSaaLv3m",
"BygZQun5nQ"
] |
iclr_2019_SJgw_sRqFQ | The Unusual Effectiveness of Averaging in GAN Training | We examine two different techniques for parameter averaging in GAN training. Moving Average (MA) computes the time-average of parameters, whereas Exponential Moving Average (EMA) computes an exponentially discounted sum. Whilst MA is known to lead to convergence in bilinear settings, we provide the -- to our knowledge -- first theoretical arguments in support of EMA. We show that EMA converges to limit cycles around the equilibrium with vanishing amplitude as the discount parameter approaches one for simple bilinear games and also enhances the stability of general GAN training. We establish experimentally that both techniques are strikingly effective in the non-convex-concave GAN setting as well. Both improve inception and FID scores on different architectures and for different GAN objectives. We provide comprehensive experimental results across a range of datasets -- mixture of Gaussians, CIFAR-10, STL-10, CelebA and ImageNet -- to demonstrate its effectiveness. We achieve state-of-the-art results on CIFAR-10 and produce clean CelebA face images.\footnote{~The code is available at \url{https://github.com/yasinyazici/EMA_GAN}} | accepted-poster-papers | This work analyses the use of parameter averaging in GANs. It can mainly be seen as an empirical study (while a convergence analysis of EMA for a concrete example also provides a minor theoretical result), but the experimental results are very convincing and could promote the use of parameter averaging in the GAN community. Therefore, even if the technical novelty is limited, the insights brought by the paper are interesting. | val | [
"BJe2qgPNyE",
"r1l0ZexSnm",
"rJlm_Y8E14",
"B1gylLeFCQ",
"rkltTVlYCm",
"rke6SVxtRX",
"ByxU5fxYA7",
"BkxOZLU93Q",
"SklOVdg5hQ",
"SJg686LscQ",
"SkgPuRKs9X",
"HJlhxSOxcm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"public"
] | [
"I have adjusted my rating accordingly.",
"The submission analyzes parameter averaging in GAN training, positing that using the exponential moving average (EMA) leads to more well-behaved solutions than using moving averages (MA) or no averaging (None). \n\nWhile reading the submission, the intuitively given expl... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
6,
-1,
-1,
-1
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
2,
4,
-1,
-1,
-1
] | [
"rke6SVxtRX",
"iclr_2019_SJgw_sRqFQ",
"B1gylLeFCQ",
"SklOVdg5hQ",
"iclr_2019_SJgw_sRqFQ",
"r1l0ZexSnm",
"BkxOZLU93Q",
"iclr_2019_SJgw_sRqFQ",
"iclr_2019_SJgw_sRqFQ",
"HJlhxSOxcm",
"SJg686LscQ",
"iclr_2019_SJgw_sRqFQ"
] |
iclr_2019_SJl2niR9KQ | Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer | Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm. This evaluation, however, has limited practical utility since perturbations in the pixel space do not correspond to underlying real-world phenomena of image formation that lead to them and has no security motivation attached. Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene. As such, we propose a novel evaluation measure, parametric norm-balls, by directly perturbing physical parameters that underly image formation. One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry. Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow. | accepted-poster-papers | The paper describes the use of differentiable physics based rendering schemes to generate adversarial perturbations that are constrained by physics of image formation.
The paper puts forth a fairly novel approach to tackle an interesting question. However, some of the claims made regarding the "believability" of the adversarial examples produced by existing techniques are not fully supported. Also, the adversarial examples produced by the proposed techniques are not fully "physical" at least compared to how "physical" adversarial examples presented in some of the prior work were.
Overall, though, this paper constitutes a valuable contribution.
"HJxMD1N20Q",
"S1eSUQsK0X",
"BygeiOOKA7",
"rkl80bNdCX",
"Hylylvwcn7",
"H1gc6aNcaQ",
"S1xI5a4cTQ",
"ryeo7aN56X",
"SklLzoVcaX",
"B1lBHma23X",
"HyekoBIv37"
] | [
"author",
"public",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Indeed, current pixel-based attacks can fool classifiers with imperceivable perturbations. The magnitude of a perturbation is not the only factor that determines how realistic or plausible it is to occur in the real world. Figure 1 demonstrates, reductio ad absurdum, that very large pixel perturbations can be real... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"S1eSUQsK0X",
"iclr_2019_SJl2niR9KQ",
"H1gc6aNcaQ",
"SklLzoVcaX",
"iclr_2019_SJl2niR9KQ",
"HyekoBIv37",
"Hylylvwcn7",
"B1lBHma23X",
"iclr_2019_SJl2niR9KQ",
"iclr_2019_SJl2niR9KQ",
"iclr_2019_SJl2niR9KQ"
] |
iclr_2019_SJx63jRqFm | Diversity is All You Need: Learning Skills without a Reward Function | Intelligent creatures can explore their environments and learn useful skills without supervision.
In this paper, we propose ``Diversity is All You Need''(DIAYN), a method for learning useful skills without a reward function. Our proposed method learns skills by maximizing an information theoretic objective using a maximum entropy policy. On a variety of simulated robotic tasks, we show that this simple objective results in the unsupervised emergence of diverse skills, such as walking and jumping. In a number of reinforcement learning benchmark environments, our method is able to learn a skill that solves the benchmark task despite never receiving the true task reward. We show how pretrained skills can provide a good parameter initialization for downstream tasks, and can be composed hierarchically to solve complex, sparse reward tasks. Our results suggest that unsupervised discovery of skills can serve as an effective pretraining mechanism for overcoming challenges of exploration and data efficiency in reinforcement learning. | accepted-poster-papers | There is consensus among the reviewer that this is a good paper. It is a bit incremental compared to Gregor et al 2016. This paper show quite better empirical results. | train | [
"SkgRuY_lCQ",
"r1xu8tdlR7",
"rJxPVKde0X",
"BkhzbQF1a7",
"HJeAJln337",
"HJxA5jvqnQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for all the feedback!\n\nLearning p(z): We would like that emphasize that prior work (VIC [Gregor et al]) also requires the user to choose the number of skills. Choosing this parameter is analogous to choosing K in K-means. While we can propose various heuristics, the right choice ultimately depends on the ... | [
-1,
-1,
-1,
8,
7,
7
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"HJxA5jvqnQ",
"HJeAJln337",
"BkhzbQF1a7",
"iclr_2019_SJx63jRqFm",
"iclr_2019_SJx63jRqFm",
"iclr_2019_SJx63jRqFm"
] |
iclr_2019_SJxTroR9F7 | Supervised Policy Update for Deep Reinforcement Learning | We propose a new sample-efficient methodology, called Supervised Policy Update (SPU), for deep reinforcement learning. Starting with data generated by the current policy, SPU formulates and solves a constrained optimization problem in the non-parameterized proximal policy space. Using supervised regression, it then converts the optimal non-parameterized policy to a parameterized policy, from which it draws new samples. The methodology is general in that it applies to both discrete and continuous action spaces, and can handle a wide variety of proximity constraints for the non-parameterized optimization problem. We show how the Natural Policy Gradient and Trust Region Policy Optimization (NPG/TRPO) problems, and the Proximal Policy Optimization (PPO) problem can be addressed by this methodology. The SPU implementation is much simpler than TRPO. In terms of sample efficiency, our extensive experiments show SPU outperforms TRPO in Mujoco simulated robotic tasks and outperforms PPO in Atari video game tasks. | accepted-poster-papers | The paper presents an interesting technique for constrained policy optimization, which is applicable to existing RL algorithms such as TRPO and PPO. All of the reviewers agree that the paper is above the bar and the authors have improved the exposition during the review process. I encourage the authors to address all of the comments in the final version. | train | [
"Skx4C43dnQ",
"SkedlYQK0X",
"SJgZokYpRQ",
"r1esEGuNAm",
"S1e9_RO70m",
"H1l6PdyfAX",
"rkxoOFRWA7",
"Syg6SrRbR7",
"BkldmS0WAX",
"S1g0iDsNp7",
"BJlXkBAbAX",
"S1x3hIAlRX",
"rJlSEZ6gC7",
"BJlGckTeAm",
"B1lMx0hgAm",
"HJxWSOjNaQ",
"Syl9f8jVTm",
"HJl0FDoEpX",
"BkerOwoV6m",
"BJldMPoV6Q"... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
"The authors formulate policy optimization as a two step iterative procedure: 1) solving a constrained optimization problem in the non-parameterized policy space, 2) using supervised regression to project this onto a parameterized policy. This approach generally applies to both continuous and discrete action spaces... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_SJxTroR9F7",
"iclr_2019_SJxTroR9F7",
"Skx4C43dnQ",
"S1e9_RO70m",
"H1l6PdyfAX",
"rkxoOFRWA7",
"Syg6SrRbR7",
"S1x3hIAlRX",
"BJlGckTeAm",
"Syl9f8jVTm",
"B1lMx0hgAm",
"BJldMPoV6Q",
"B1lNNDiVaQ",
"BkerOwoV6m",
"Syl9f8jVTm",
"Sylkwyddn7",
"Skx4C43dnQ",
"Syl9f8jVTm",
"Syl9f8j... |
iclr_2019_SJxsV2R5FQ | Learning sparse relational transition models | We present a representation for describing transition models in complex uncertain domains using relational rules. For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state. An iterative greedy algorithm is used to construct a set of deictic references that determine which objects are relevant in any given state. Feed-forward neural networks are used to learn the transition distribution on the relevant objects' properties. This strategy is demonstrated to be both more versatile and more sample efficient than learning a monolithic transition model in a simulated domain in which a robot pushes stacks of objects on a cluttered table. | accepted-poster-papers | pros:
- the paper is well-written and precise
- the proposed method is novel
- valuable for real-world problems
cons:
- Reviewer 2 expresses some concern about the organization of the paper and over-generality in the exposition
- There could be more discussion of scalability | test | [
"Byluqeapam",
"ByePXT9anX",
"HJgA6WL5nm",
"SyelB6J93m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for their constructive feedback and address individual questions below. \n\nAR2:\n\nQ: \"the authors could have gotten to the loss function sooner.\" \nA: We will add an explanation near Eq. (1) and emphasize that the transition will be learned.\n\nQ: ... \"it was unclear if more than a few... | [
-1,
6,
7,
8
] | [
-1,
4,
2,
3
] | [
"iclr_2019_SJxsV2R5FQ",
"iclr_2019_SJxsV2R5FQ",
"iclr_2019_SJxsV2R5FQ",
"iclr_2019_SJxsV2R5FQ"
] |
iclr_2019_SJxu5iR9KQ | Learning to Schedule Communication in Multi-agent Reinforcement Learning | Many real-world reinforcement learning tasks require multiple agents to make sequential decisions under the agents’ interaction, where well-coordinated actions among the agents are crucial to achieve the target goal better at these tasks. One way to accelerate the coordination effect is to enable multiple agents to communicate with each other in a distributed manner and behave as a group. In this paper, we study a practical scenario when (i) the communication bandwidth is limited and (ii) the agents share the communication medium so that only a restricted number of agents are able to simultaneously use the medium, as in the state-of-the-art wireless networking standards. This calls for a certain form of communication scheduling. In that regard, we propose a multi-agent deep reinforcement learning framework, called SchedNet, in which agents learn how to schedule themselves, how to encode the messages, and how to select actions based on received messages. SchedNet is capable of deciding which agents should be entitled to broadcasting their (encoded) messages, by learning the importance of each agent’s partially observed information. We evaluate SchedNet against multiple baselines under two different applications, namely, cooperative communication and navigation, and predator-prey. Our experiments show a non-negligible performance gap between SchedNet and other mechanisms such as the ones without communication and with vanilla scheduling methods, e.g., round robin, ranging from 32% to 43%. | accepted-poster-papers | The authors present a learnt scheduling mechanism for managing communications in bandwidth-constrained, contentious multi-agent RL domains. This is well-positioned in the rapidly advancing field of MARL and the contribution of the paper is both novel, interesting, and effective. 
The agents learn how to schedule themselves, how to encode messages, and how to select actions. The approach is evaluated against several other methods and achieves a good performance increase. The reviewers had concerns regarding the difficulty of evaluating the overall performance and also about how it would fare in more real-world scenarios, but all agree that this paper should be accepted. | val | [
"r1ePGFoonQ",
"SJxC36iFkE",
"S1gQoIpqhX",
"BJe2OZVW0X",
"ByxY2Jzg0Q",
"r1l5FjajTm",
"SyxSKqpsTX",
"HklktK6i6X",
"BJxKmdbi2Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors present a study on scheduling multi-agent communication. Specifically, the authors look into cases where agents share the same reward and they are in a partially observable environment, each of them with different observations. The main contribution of this work is that authors provide a model for comm... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
-1,
2,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2019_SJxu5iR9KQ",
"r1l5FjajTm",
"iclr_2019_SJxu5iR9KQ",
"ByxY2Jzg0Q",
"SyxSKqpsTX",
"S1gQoIpqhX",
"BJxKmdbi2Q",
"r1ePGFoonQ",
"iclr_2019_SJxu5iR9KQ"
] |
iclr_2019_SJz1x20cFQ | Hierarchical RL Using an Ensemble of Proprioceptive Periodic Policies | In this paper we introduce a simple, robust approach to hierarchically training an agent in the setting of sparse reward tasks.
The agent is split into a low-level and a high-level policy. The low-level policy only accesses internal, proprioceptive dimensions of the state observation. The low-level policies are trained with a simple reward that encourages changing the values of the non-proprioceptive dimensions. Furthermore, it is induced to be periodic with the use of a ``phase function.'' The high-level policy is trained using a sparse, task-dependent reward, and operates by choosing which of the low-level policies to run at any given time. Using this approach, we solve difficult maze and navigation tasks with sparse rewards using the Mujoco Ant and Humanoid agents and show improvement over recent hierarchical methods. | accepted-poster-papers | Strengths
The paper presents a method of training two-level hierarchies that is based on relatively intuitive ideas and that performs well.
The challenges of hierarchical RL make this an important problem. Periodicity and the
separation of internal state from external state are clean principles that can potentially be broadly employed.
The method does well in outperforming the alternative baselines.
Weaknesses
There is no video of the results. There is related work: [Peng et al. 2016] (rev 4) uses
a policy ensemble; phase info is used in DeepLoco/DeepMimic; methods such as "Virtual Windup Toys for Animation"
exploited periodicity (25 years ago). More comparisons with prior work such as Florensa et al. would help.
The separation of internal and external state is an assumption that may not hold in many cases.
The results are locomotion focussed. There are only two timescales.
Decision
The reviewers are largely in agreement to accept the paper.
There are fairly-simple-but-useful lessons to be found in the paper
for those working on HRL problems, particularly those for movement and locomotion.
The AC sees the novelty with respect to different pieces of related work as the weakest point of the paper.
The reviews contain good suggestions for revisions and improvements; the latest version (uploaded after
the last reviewer comments) may take care of these. Overall, the paper will make a good contribution
to ICLR 2019.
| train | [
"HJxrb0bXRQ",
"SJgSt3agRX",
"BkeWvatlRQ",
"Syxli5rPnm",
"SylIOSTYTm",
"Byxy_ETYa7",
"HyxIZNptpm",
"Byx307aKaX",
"HkgHBApm6X",
"SkeJoQQI37"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We went ahead and uploaded a revised version making the couple minor changes we promised in our responses.",
"Thanks to everyone for their discussions thus far.\nDo the authors wish to submit a revised version of the paper?\nDo the reviewers have comments to the current author responses?\nThanks... Area Chair"... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
4,
3
] | [
"SJgSt3agRX",
"iclr_2019_SJz1x20cFQ",
"HyxIZNptpm",
"iclr_2019_SJz1x20cFQ",
"HkgHBApm6X",
"SkeJoQQI37",
"Byx307aKaX",
"Syxli5rPnm",
"iclr_2019_SJz1x20cFQ",
"iclr_2019_SJz1x20cFQ"
] |
iclr_2019_SJzR2iRcK7 | Multi-class classification without multi-class labels | This work presents a new strategy for multi-class classification that requires no class-specific labels, but instead leverages pairwise similarity between examples, which is a weaker form of annotation. The proposed method, meta classification learning, optimizes a binary classifier for pairwise similarity prediction and through this process learns a multi-class classifier as a submodule. We formulate this approach, present a probabilistic graphical model for it, and derive a surprisingly simple loss function that can be used to learn neural network-based models. We then demonstrate that this same framework generalizes to the supervised, unsupervised cross-task, and semi-supervised settings. Our method is evaluated against state of the art in all three learning paradigms and shows a superior or comparable accuracy, providing evidence that learning multi-class classification without multi-class labels is a viable learning option. | accepted-poster-papers | This paper provides a technique to learn multi-class classifiers without multi-class labels, by modeling the multi-class labels as hidden variables and optimizing the likelihood of the input variables and the binary similarity labels.
The majority of reviewers voted to accept. | test | [
"HJxzjXAVy4",
"H1esn7VCA7",
"SJe-pjP7C7",
"SyxKXsPQ0X",
"BygCV9vXAm",
"HJg98_4jn7",
"r1g7C-lP3m",
"S1eTrxCosm"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for raising the nice discussion; yes the cluster assumption is related. In our opinion, the cluster assumption implies separability but additionally assumes that the data distribution has a higher density in a semantic category and a lower density between categories. Since our method is driven by the constr... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"H1esn7VCA7",
"SyxKXsPQ0X",
"S1eTrxCosm",
"r1g7C-lP3m",
"HJg98_4jn7",
"iclr_2019_SJzR2iRcK7",
"iclr_2019_SJzR2iRcK7",
"iclr_2019_SJzR2iRcK7"
] |
iclr_2019_SJzSgnRcKX | What do you learn from context? Probing for sentence structure in contextualized word representations | Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline. | accepted-poster-papers | Pros
- Thorough analysis on a large number of diverse tasks
- Extending the probing technique typically applied to individual encoder states to testing for the presence of certain (linguistic) information based on pairs of encoder states (corresponding to pairs of words)
- The comparison can be useful when deciding which representations to use for a given task
Cons
- Nothing serious; it is a solid and important empirical study
The reviewers are in consensus. | test | [
"SkxaqkptCX",
"Skey6mqlA7",
"SJlErExtT7",
"HklxVExYpQ",
"SJekzNxtaX",
"rylJ-ovR2X",
"SJgb3cQF2Q",
"rJx0hgbKnX"
] | [
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This is definitely related; we'll be sure to add a citation!",
"Just wanted to mention a related work: Yonatan Belinkov's thesis ( http://people.csail.mit.edu/belinkov/assets/pdf/thesis2018.pdf ) has some prior experiments with the edge probing task design outlined in this paper. See Chapter 4, \"Sentence Struct... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Skey6mqlA7",
"iclr_2019_SJzSgnRcKX",
"rJx0hgbKnX",
"SJgb3cQF2Q",
"rylJ-ovR2X",
"iclr_2019_SJzSgnRcKX",
"iclr_2019_SJzSgnRcKX",
"iclr_2019_SJzSgnRcKX"
] |
iclr_2019_SJzqpj09YQ | Spectral Inference Networks: Unifying Deep and Spectral Learning | We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization. Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics. As such, they can be a powerful tool for unsupervised representation learning from video or graph-structured data. We cast training Spectral Inference Networks as a bilevel optimization problem, which allows for online learning of multiple eigenfunctions. We show results of training Spectral Inference Networks on problems in quantum mechanics and feature learning for videos on synthetic datasets. Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators and can discover interpretable representations from video in a fully unsupervised manner. | accepted-poster-papers | The paper proposes a deep learning framework to solve large-scale spectral decomposition.
The reviewers and AC note that the paper is quite weak in terms of presentation. However, the proposed ideas are technically sound, as Reviewer 1 and Reviewer 2 mentioned. In particular, as Reviewer 1 pointed out, the paper has high practical value, as it aims to solve the problem at a scale larger than any existing method. Reviewer 3 pointed out the lack of comparison with existing algorithms, but this is understandable given the new goal.
Overall, the AC thinks this is quite a borderline paper, but tends to suggest acceptance since the paper can be of interest to a broad range of readers if the presentation is improved.
"r1g65MYbA7",
"SkgJEMFbCm",
"SyxHlzFbRQ",
"SkeVIGb037",
"HJxVNdzj2Q",
"Hke1_V2qnm"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Response: We thank the reviewer for their comments. We are glad that they found the technical contribution strong, and hope we can address their issues with the presentation. To the specific points raised:\n\n(1) We use the term “network” in spectral inference networks in the sense of “neural network”, similar to ... | [
-1,
-1,
-1,
5,
7,
5
] | [
-1,
-1,
-1,
3,
3,
3
] | [
"Hke1_V2qnm",
"HJxVNdzj2Q",
"SkeVIGb037",
"iclr_2019_SJzqpj09YQ",
"iclr_2019_SJzqpj09YQ",
"iclr_2019_SJzqpj09YQ"
] |
iclr_2019_Sk4jFoA9K7 | PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks | Deep learning systems have become ubiquitous in many aspects of our lives. Unfortunately, it has been shown that such systems are vulnerable to adversarial attacks, making them prone to potential unlawful uses.
Designing deep neural networks that are robust to adversarial attacks is a fundamental step in making such systems safer and deployable in a broader variety of applications (e.g. autonomous driving), but more importantly is a necessary step to design novel and more advanced architectures built on new computational paradigms rather than marginally building on the existing ones.
In this paper we introduce PeerNets, a novel family of convolutional networks alternating classical Euclidean convolutions with graph convolutions to harness information from a graph of peer samples. This results in a form of non-local forward propagation in the model, where latent features are conditioned on the global structure induced by the graph, that is up to 3 times more robust to a variety of white- and black-box adversarial attacks compared to conventional architectures with almost no drop in accuracy. | accepted-poster-papers | The paper presents a novel approach with compelling experiments. Good paper, accept.
| train | [
"rJxejaXmhQ",
"HylOU1LpAQ",
"SJeLIho2C7",
"Byg8yHtZRX",
"HklyONYZAX",
"Bkecy4F-0Q",
"B1lhSkeq67",
"HkgMKRcth7",
"BkxmCEYpnX",
"HygvKNtThX",
"H1lwBrml27",
"H1x6i6fgnm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"public",
"official_reviewer",
"author",
"author",
"public",
"public"
] | [
"After reading the authors' response, I'm revising my score upwards from 5 to 6.\n\nThe authors propose a defense against adversarial examples, that is inspired by \"non local means filtering\". The underlying assumption seems to be that, at feature level, adversarial examples manifest as IID noise in feature maps,... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2019_Sk4jFoA9K7",
"SJeLIho2C7",
"Bkecy4F-0Q",
"B1lhSkeq67",
"rJxejaXmhQ",
"HkgMKRcth7",
"iclr_2019_Sk4jFoA9K7",
"iclr_2019_Sk4jFoA9K7",
"H1x6i6fgnm",
"H1lwBrml27",
"iclr_2019_Sk4jFoA9K7",
"iclr_2019_Sk4jFoA9K7"
] |
iclr_2019_SkE6PjC9KX | Attentive Neural Processes | Neural Processes (NPs) (Garnelo et al., 2018) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions. Each function models the distribution of the output given an input, conditioned on the context. NPs have the benefit of fitting observed data efficiently with linear complexity in the number of context input-output pairs, and can learn a wide family of conditional distributions; they learn predictive distributions conditioned on context sets of arbitrary size. Nonetheless, we show that NPs suffer a fundamental drawback of underfitting, giving inaccurate predictions at the inputs of the observed data they condition on. We address this issue by incorporating attention into NPs, allowing each input location to attend to the relevant context points for the prediction. We show that this greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled. | accepted-poster-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
- The paper is clear and well-motivated.
- The experimental results indicate that the proposed method outperforms the SOTA
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
- The novelty is somewhat minor.
- An interesting (but not essential) ablation study is missing (but the authors promised to include it in the final version).
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
There were no major points of contention.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers reached a consensus that the paper should be accepted.
| train | [
"ByxU_jfkyE",
"S1lyaR6DA7",
"S1x1unoFTm",
"HklNVhjK67",
"Hylne3oKaX",
"SylBJkja3X",
"S1lKdp2qnm",
"rkgOpjMKnX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear authors,\n\nI wonder whether the authors added the experimental results of the models having self-attention without the cross-attention. If so, could you point out where to look? I could not find it.\n\nThanks,",
"I appreciate the authors' response. All of my questions are resolved. I am looking forward to ... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"S1x1unoFTm",
"S1x1unoFTm",
"rkgOpjMKnX",
"S1lKdp2qnm",
"SylBJkja3X",
"iclr_2019_SkE6PjC9KX",
"iclr_2019_SkE6PjC9KX",
"iclr_2019_SkE6PjC9KX"
] |
iclr_2019_SkEYojRqtm | Representation Degeneration Problem in Training Natural Language Generation Models | We study an interesting problem in training neural network-based models for natural language generation tasks, which we call the \emph{representation degeneration problem}. We observe that when training a model for natural language generation tasks through likelihood maximization with the weight tying trick, especially with big training datasets, most of the learnt word embeddings tend to degenerate and be distributed into a narrow cone, which largely limits the representation power of word embeddings. We analyze the conditions and causes of this problem and propose a novel regularization method to address it. Experiments on language modeling and machine translation show that our method can largely mitigate the representation degeneration problem and achieve better performance than baseline algorithms. | accepted-poster-papers | although i (ac) believe the contribution is fairly limited (e.g., (1) only looking at the word embedding which goes through many nonlinear layers, in which case it's not even clear whether how word vectors are distributed matters much, (2) only considering the case of tied embeddings, which is not necessarily the most common setting, ...), all the reviewers found the execution of the submission (motivation, analysis and experimentation) to be done well, and i'll go with the reviewers' opinion. | train | [
"SygJPRi_yN",
"rJgdnzqDnX",
"Sye8DWvvJN",
"ryeWCxFd3X",
"r1gmPQReRQ",
"BkxfbJmpTm",
"B1xVDem6TQ",
"HJgSCjMYh7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"I just updated my scores. Thanks for your clarification and update.",
"The paper presents and discusses a new phenomenon that infrequent words tend to learn degenerate embeddings. A cosine regularization term is proposed to address this issue.\n\nPros\n1. The degenerate embedding problem is novel and interesting... | [
-1,
7,
-1,
7,
-1,
-1,
-1,
7
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
3
] | [
"Sye8DWvvJN",
"iclr_2019_SkEYojRqtm",
"r1gmPQReRQ",
"iclr_2019_SkEYojRqtm",
"rJgdnzqDnX",
"HJgSCjMYh7",
"ryeWCxFd3X",
"iclr_2019_SkEYojRqtm"
] |
iclr_2019_SkEqro0ctQ | Hierarchical interpretations for neural network predictions | Deep neural networks (DNNs) have achieved impressive predictive performance due to their ability to learn complex, non-linear relationships between variables. However, the inability to effectively visualize these relationships has led to DNNs being characterized as black boxes and consequently limited their applications. To ameliorate this problem, we introduce the use of hierarchical interpretations to explain DNN predictions through our proposed method: agglomerative contextual decomposition (ACD). Given a prediction from a trained DNN, ACD produces a hierarchical clustering of the input features, along with the contribution of each cluster to the final prediction. This hierarchy is optimized to identify clusters of features that the DNN learned are predictive. We introduce ACD using examples from Stanford Sentiment Treebank and ImageNet, in order to diagnose incorrect predictions, identify dataset bias, and extract polarizing phrases of varying lengths. Through human experiments, we demonstrate that ACD enables users both to identify the more accurate of two DNNs and to better trust a DNN's outputs. We also find that ACD's hierarchy is largely robust to adversarial perturbations, implying that it captures fundamental aspects of the input and ignores spurious noise. | accepted-poster-papers | The paper receives a unanimous accept over reviewers, though some concerns on novelty exist. So it is suggested to be a probable accept. | train | [
"BJgqJWpijm",
"SJgRJN1chm",
"HJeLz1ht0m",
"Byxr3jst0Q",
"S1gxYcrUCQ",
"SylmiHLXAX",
"Syxrlyfzpm",
"r1lzrabz6Q",
"SygJ7JMzp7",
"ryeseAZMaQ",
"r1lU3TWMaQ",
"Hyx_i_0h3Q",
"S1xxnzLo2m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public"
] | [
"This paper proposes a novel approach to explain neural network predictions by learning hierarchical representations of groups of input features and their contribution to the final prediction. The proposed method is a straightforward extension of the contextual decomposition work by (Murdoch et. al. 2018) which est... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1
] | [
"iclr_2019_SkEqro0ctQ",
"iclr_2019_SkEqro0ctQ",
"Byxr3jst0Q",
"SJgRJN1chm",
"SylmiHLXAX",
"Syxrlyfzpm",
"BJgqJWpijm",
"iclr_2019_SkEqro0ctQ",
"S1xxnzLo2m",
"SJgRJN1chm",
"Hyx_i_0h3Q",
"iclr_2019_SkEqro0ctQ",
"iclr_2019_SkEqro0ctQ"
] |
iclr_2019_SkGuG2R5tm | Spreading vectors for similarity search | Discretizing floating-point vectors is a fundamental step of modern indexing methods. State-of-the-art techniques learn parameters of the quantizers on training data for optimal performance, thus adapting quantizers to the data. In this work, we propose to reverse this paradigm and adapt the data to the quantizer: we train a neural net whose last layers form a fixed parameter-free quantizer, such as pre-defined points of a sphere. As a proxy objective, we design and train a neural network that favors uniformity in the spherical latent space, while preserving the neighborhood structure after the mapping. For this purpose, we propose a new regularizer derived from the Kozachenko-Leonenko differential entropy estimator and combine it with a locality-aware triplet loss.
Experiments show that our end-to-end approach outperforms most learned quantization methods, and is competitive with the state of the art on widely adopted benchmarks. Furthermore, we show that training without the quantization step results in almost no difference in accuracy, but yields a generic catalyser that can be applied with any subsequent quantization technique.
| accepted-poster-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
- The proposed method is novel and effective
- The paper is clear and the experiments and literature review are sufficient (especially after revision).
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
The original weaknesses (mainly clarity and missing details) were adequately addressed in the revisions.
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
No major points of contention.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers reached a consensus that the paper should be accepted. | train | [
"BkgZlICzyN",
"HkeDt3GOsQ",
"rygYNIkTRm",
"Ske30l-iAm",
"S1g23fYDRX",
"BJeQGL2HCQ",
"S1epFsSOpm",
"SJlQccS_TQ",
"BygLXqBd6m",
"Bkxs4_VP2m",
"B1x0Ka5M2X"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the feedback. The method \"Catalyst + Lattice + end2end\" refers to using a quantization layer during training with the straight-through estimator described in Section 4.2. In contrast, the version \"Catalyst + Lattice\" also optimizes Eqn (4) but without including the quantization layer during train... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"rygYNIkTRm",
"iclr_2019_SkGuG2R5tm",
"Ske30l-iAm",
"S1g23fYDRX",
"SJlQccS_TQ",
"iclr_2019_SkGuG2R5tm",
"HkeDt3GOsQ",
"B1x0Ka5M2X",
"Bkxs4_VP2m",
"iclr_2019_SkGuG2R5tm",
"iclr_2019_SkGuG2R5tm"
] |
iclr_2019_SkMQg3C5K7 | A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks | We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network by minimizing the L2 loss over whitened data. Convergence at a linear rate is guaranteed when the following hold: (i) dimensions of hidden layers are at least the minimum of the input and output dimensions; (ii) weight matrices at initialization are approximately balanced; and (iii) the initial loss is smaller than the loss of any rank-deficient solution. The assumptions on initialization (conditions (ii) and (iii)) are necessary, in the sense that violating any one of them may lead to convergence failure. Moreover, in the important case of output dimension 1, i.e. scalar regression, they are met, and thus convergence to global optimum holds, with constant probability under a random initialization scheme. Our results significantly extend previous analyses, e.g., of deep linear residual networks (Bartlett et al., 2018). | accepted-poster-papers | This is a well written paper that contributes a clear advance to the understanding of how gradient descent behaves when training deep linear models. Reviewers were unanimously supportive. | train | [
"H1e_FvysRm",
"r1lBZQJZnm",
"Byxc7V1sC7",
"rklG12aqCX",
"r1eIQn65A7",
"rJlJC5aqAm",
"HygW8mcwaX",
"rklm2LHvpQ",
"H1lFEi7Dam",
"ByeG0q7wTm",
"B1laO57wam",
"ryx_BxxhhQ",
"SyeSyuqq3m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the swift response and positive feedback!",
"Summary: \n \nThe paper provides the convergence analysis at linear rate of gradient descent to global minima for deep linear neural networks – the fully-connected neural networks with linear activation with l2 loss. The convergence only works under two ... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"Byxc7V1sC7",
"iclr_2019_SkMQg3C5K7",
"r1eIQn65A7",
"HygW8mcwaX",
"rklm2LHvpQ",
"iclr_2019_SkMQg3C5K7",
"rklm2LHvpQ",
"B1laO57wam",
"SyeSyuqq3m",
"ryx_BxxhhQ",
"r1lBZQJZnm",
"iclr_2019_SkMQg3C5K7",
"iclr_2019_SkMQg3C5K7"
] |
iclr_2019_SkMuPjRcKQ | Feed-forward Propagation in Probabilistic Neural Networks with Categorical and Max Layers | Probabilistic Neural Networks deal with various sources of stochasticity: input noise, dropout, stochastic neurons, parameter uncertainties modeled as random variables, etc.
In this paper we revisit a feed-forward propagation approach that allows one to estimate for each neuron its mean and variance w.r.t. all mentioned sources of stochasticity. In contrast, standard NNs propagate only point estimates, discarding the uncertainty.
Methods that also propagate the variance have been proposed by several authors in different contexts. The view presented here attempts to clarify the assumptions and derivations behind such methods, relate them to classical NNs, and broaden their scope of applicability.
The main technical contributions are new approximations for the distributions of argmax and max-related transforms, which allow for fully analytic uncertainty propagation in networks with softmax and max-pooling layers as well as leaky ReLU activations.
We evaluate the accuracy of the approximation and suggest a simple calibration. Applying the method to networks with dropout allows for faster training and gives improved test likelihoods without the need for sampling. | accepted-poster-papers | Reviewers are in consensus and recommend acceptance after engaging with the authors. Please take the reviewers' comments into consideration to improve your submission for the camera-ready.
| train | [
"rJlwqywr1E",
"rJgQkXZ5Cm",
"rkla8-4tRm",
"HyxIiujrCX",
"r1xl0Njl6Q",
"rJxcjNogTX",
"Hyls8EixaQ",
"r1gp_wDw3m",
"HklGBGu8h7",
"HyeBAMSynQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your response. It addressed some of my concerns. However, I still have concerns on the lack of rigor when mentioning the term posterior distribution and the notation. The explanation/clarification provided does not convince me unfortunately. That said, I am still happy if the authors could be more caref... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"rJxcjNogTX",
"rkla8-4tRm",
"r1xl0Njl6Q",
"iclr_2019_SkMuPjRcKQ",
"HyeBAMSynQ",
"HklGBGu8h7",
"r1gp_wDw3m",
"iclr_2019_SkMuPjRcKQ",
"iclr_2019_SkMuPjRcKQ",
"iclr_2019_SkMuPjRcKQ"
] |
iclr_2019_SkMwpiR9Y7 | Measuring and regularizing networks in function space | To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs. Since a change in the parameters might serve as a poor proxy for the change in the function, it is of some concern that primacy is given to parameters but that the correspondence has not been tested. Here, we show that it is simple and computationally feasible to calculate distances between functions in an L2 Hilbert space. We examine how typical networks behave in this space, and compare parameter ℓ2 distances to function L2 distances between various points of an optimization trajectory. We find that the two distances are nontrivially related. In particular, the L2/ℓ2 ratio decreases throughout optimization, reaching a steady value around when test error plateaus. We then investigate how the L2 distance could be applied directly to optimization. We first propose that in multitask learning, one can avoid catastrophic forgetting by directly limiting how much the input/output function changes between tasks. Secondly, we propose a new learning rule that constrains the distance a network can travel through L2-space in any one update. This allows new examples to be learned in a way that minimally interferes with what has previously been learned. These applications demonstrate how one can measure and regularize function distances directly, without relying on parameters or local approximations like loss curvature. | accepted-poster-papers | This paper proposes to regularize neural networks in function space rather than in parameter space, a proposal which makes sense and is also different from the natural gradient approach.
After discussion and considering the rebuttal, all reviewers argue for acceptance. The AC agrees that this direction of research is an important one for deep learning; while the paper could benefit from revision, a tighter story, and stronger experiments, these issues do not preclude publication in its current state.
Side comment: the visualization of neural networks in function space was explored extensively when the effect of unsupervised pre-training on neural networks was investigated (among others). See e.g. Figure 7 in Erhan et al., AISTATS 2009, "The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training". This literature should be cited (and it seems that t-SNE might be a more appropriate visualization technique for non-linear functions than MDS). | train | [
"HJlmxzl52X",
"rkewpFkoyE",
"Sylzu7J6nX",
"SJxn7ibY3X",
"HyeDAbOQCX",
"ByeOl39lAQ",
"HkeQ3Ocl0Q",
"S1eC1wclCm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Summary:\nThis paper proposes first to measure distances, in a L2 space, between functions computed by neural networks. It then compares those distances with the parameter l2 distances of those networks, and empirically shows that the l2 parameter distance is a poor proxy for distances in the function space. Follo... | [
6,
-1,
6,
6,
-1,
-1,
-1,
-1
] | [
4,
-1,
3,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2019_SkMwpiR9Y7",
"HkeQ3Ocl0Q",
"iclr_2019_SkMwpiR9Y7",
"iclr_2019_SkMwpiR9Y7",
"ByeOl39lAQ",
"SJxn7ibY3X",
"HJlmxzl52X",
"Sylzu7J6nX"
] |
iclr_2019_SkNksoRctQ | Fluctuation-dissipation relations for stochastic gradient descent | The notion of the stationary equilibrium ensemble has played a central role in statistical mechanics. In machine learning as well, training serves as generalized equilibration that drives the probability distribution of model parameters toward stationarity. Here, we derive stationary fluctuation-dissipation relations that link measurable quantities and hyperparameters in the stochastic gradient descent algorithm. These relations hold exactly for any stationary state and can in particular be used to adaptively set the training schedule. We can further use the relations to efficiently extract information pertaining to a loss-function landscape such as the magnitudes of its Hessian and anharmonicity. Our claims are empirically verified. | accepted-poster-papers | The paper presents an interesting idea, but the reviewers ask for further improvements to the paper's clarity; this includes, but is not limited to, providing an in-depth explanation of the assumptions and improving the writing, which is too dense and difficult to understand.
"SkxIvGxF6X",
"BJxciglKpm",
"r1eky6kYTX",
"Bkxa5zr52Q",
"SylFlge92Q",
"rJlcCDd_2m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your clarifying comments! Please see below for our responses to your comments (and note our responses to other reviewers as well, especially the discussion with the Reviewer 1 which led to additional experiments).\n\n\n(1) We fully agree that the equation (6) was very confusing. Please see ... | [
-1,
-1,
-1,
8,
5,
6
] | [
-1,
-1,
-1,
5,
4,
3
] | [
"rJlcCDd_2m",
"SylFlge92Q",
"Bkxa5zr52Q",
"iclr_2019_SkNksoRctQ",
"iclr_2019_SkNksoRctQ",
"iclr_2019_SkNksoRctQ"
] |
iclr_2019_Ske5r3AqK7 | Poincare Glove: Hyperbolic Word Embeddings | Words are not created equal. In fact, they form an aristocratic graph with a latent hierarchical structure that the next generation of unsupervised learned word embeddings should reveal. In this paper, justified by the notion of delta-hyperbolicity or tree-likeliness of a space, we propose to embed words in a Cartesian product of hyperbolic spaces which we theoretically connect to the Gaussian word embeddings and their Fisher geometry. This connection allows us to introduce a novel principled hypernymy score for word embeddings. Moreover, we adapt the well-known Glove algorithm to learn unsupervised word embeddings in this type of Riemannian manifolds. We further explain how to solve the analogy task using the Riemannian parallel transport that generalizes vector arithmetics to this new type of geometry. Empirically, based on extensive experiments, we prove that our embeddings, trained unsupervised, are the first to simultaneously outperform strong and popular baselines on the tasks of similarity, analogy and hypernymy detection. In particular, for word hypernymy, we obtain new state-of-the-art on fully unsupervised WBLESS classification accuracy. | accepted-poster-papers | Word vectors are well studied but this paper adds yet another interesting dimension to the field. | val | [
"SygdlyBS2Q",
"SklyIE7ZhX",
"rklr8qcvpX",
"rkxUJnqPT7",
"BJxjvncPpX",
"BJl8XsqPaQ",
"HyehC-uA2m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper adapts the Glove word embedding (Pennington et al 2014) to a hyperbolic space given by the Poincare half-plane model. The embedding objective function is given by equation (3), where h=cosh^2 so that it corresponds to a hyperbolic geometry. The author(s) showed that their hyperbolic version of Glove is... | [
6,
7,
-1,
-1,
-1,
-1,
6
] | [
4,
3,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_Ske5r3AqK7",
"iclr_2019_Ske5r3AqK7",
"SklyIE7ZhX",
"SygdlyBS2Q",
"HyehC-uA2m",
"iclr_2019_Ske5r3AqK7",
"iclr_2019_Ske5r3AqK7"
] |
iclr_2019_SkeK3s0qKQ | Episodic Curiosity through Reachability | Rewards are sparse in the real world and most of today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward - making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory - which incorporates rich information about environment dynamics. This allows us to overcome the known "couch-potato" issues of prior work - when the agent finds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns locomotion out of the first-person-view curiosity only. The code is available at https://github.com/google-research/episodic-curiosity/. | accepted-poster-papers |
The authors present a novel method for tackling exploration and exploitation that yields promising results on some hard navigation-like domains. The reviewers were impressed by the contribution and had some suggestions for improvement that should be addressed in the camera ready version.
| train | [
"rJeSKyuo3Q",
"SygYM7vnCm",
"B1g0V-P20X",
"HygF5-whA7",
"Skg-kmPhRm",
"BkxDl6b9R7",
"HyeN9kl9CQ",
"HkeNioTSR7",
"Bylb69aHRQ",
"BkegueeB0m",
"HJeOyO8XRX",
"SkefzFUvpm",
"rJgSKGHcnm",
"S1gFJ_4Npm",
"SygAOUV4pm",
"H1e1YGI9h7",
"B1eCNi44T7",
"Skl0AOEVa7",
"r1lgD_E46Q",
"rJgv2u0kTX"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
... | [
"The main idea of this paper is to propose a heuristic method for exploration in deep reinforcement learning. The work is fairly innovative in its approach, where an episodic memory is used to store agent’s observations while rewarding the agent for reaching novel observations not yet stored in memory. The novelty ... | [
8,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_SkeK3s0qKQ",
"Skg-kmPhRm",
"iclr_2019_SkeK3s0qKQ",
"BkxDl6b9R7",
"HyeN9kl9CQ",
"HJeOyO8XRX",
"iclr_2019_SkeK3s0qKQ",
"iclr_2019_SkeK3s0qKQ",
"BkegueeB0m",
"B1eCNi44T7",
"SkefzFUvpm",
"r1lgD_E46Q",
"iclr_2019_SkeK3s0qKQ",
"rJeSKyuo3Q",
"iclr_2019_SkeK3s0qKQ",
"iclr_2019_SkeK3... |
iclr_2019_SkeRTsAcYm | Phase-Aware Speech Enhancement with Deep Complex U-Net | Most deep learning-based models for speech enhancement have mainly focused on estimating the magnitude of spectrogram while reusing the phase from noisy speech for reconstruction. This is due to the difficulty of estimating the phase of clean speech. To improve speech enhancement performance, we tackle the phase estimation problem in three ways. First, we propose Deep Complex U-Net, an advanced U-Net structured model incorporating well-defined complex-valued building blocks to deal with complex-valued spectrograms. Second, we propose a polar coordinate-wise complex-valued masking method to reflect the distribution of complex ideal ratio masks. Third, we define a novel loss function, weighted source-to-distortion ratio (wSDR) loss, which is designed to directly correlate with a quantitative evaluation measure. Our model was evaluated on a mixture of the Voice Bank corpus and DEMAND database, which has been widely used by many deep learning models for speech enhancement. Ablation experiments were conducted on the mixed dataset showing that all three proposed approaches are empirically valid. Experimental results show that the proposed method achieves state-of-the-art performance in all metrics, outperforming previous approaches by a large margin. | accepted-poster-papers | The authors propose an algorithm for enhancing noisy speech by also accounting for the phase information. This is done by adapting UNets to handle features defined in the complex space, and by adapting the loss function to improve an appropriate evaluation metric.
Strengths
- Modifies existing techniques well to better suit the domain for which the algorithm is being proposed. Modifications like extending UNet to complex Unet to deal with phase, redefining the mask and loss are all interesting improvements.
- Extensive results and analysis.
Weaknesses
- The work is centered around speech enhancement, and hence has limited focus.
Even though the paper is limited to speech enhancement, the reviewers agreed that the contributions made by the paper are significant and can help improve related applications like ASR. The paper is well written with interesting results and analysis. Therefore, it is recommended that the paper be accepted.
| val | [
"r1lmk_KH0X",
"SygPU_KB0Q",
"Hke4DDVN6m",
"rylC1oN4p7",
"rklUSYVEpX",
"SyxRGSbR37",
"HylyWCnnnm",
"S1lX3gYqhm"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank all the reviewers for their fruitful comments and suggestions that help make our paper more complete and comprehensive. \nWe have uploaded a newly revised paper reflecting almost all the comments, concerns and suggestions. \nWe mainly focused on revising the Introduction and Conclusion secti... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2019_SkeRTsAcYm",
"rylC1oN4p7",
"SyxRGSbR37",
"S1lX3gYqhm",
"HylyWCnnnm",
"iclr_2019_SkeRTsAcYm",
"iclr_2019_SkeRTsAcYm",
"iclr_2019_SkeRTsAcYm"
] |
iclr_2019_SkeVsiAcYm | Generative predecessor models for sample-efficient imitation learning | We propose Generative Predecessor Models for Imitation Learning (GPRIL), a novel imitation learning algorithm that matches the state-action distribution to the distribution observed in expert demonstrations, using generative models to reason probabilistically about alternative histories of demonstrated states. We show that this approach allows an agent to learn robust policies using only a small number of expert demonstrations and self-supervised interactions with the environment. We derive this approach from first principles and compare it empirically to a state-of-the-art imitation learning method, showing that it outperforms or matches its performance on two simulated robot manipulation tasks and demonstrate significantly higher sample efficiency by applying the algorithm on a real robot. | accepted-poster-papers | This paper proposes to estimate the predecessor state dynamics for more sample-efficient imitation learning. While backward models have been used in the past in reinforcement learning, the application to imitation learning has not been previously studied. The paper is well-written and the results are good, showing clear improvements over GAIL in the presented experiments. The primary weakness of the paper is the lack of comparisons to the baselines suggested by reviewer 1 (a jumpy forward model and a single step predecessor model) to fully evaluate the contribution, and to SAIL and AIRL. Despite these weaknesses, the paper slightly exceeds the bar for acceptance at ICLR.
The authors are strongly encouraged to include these comparisons in the final version. | train | [
"SygFjbDuAQ",
"SkxG5hUuA7",
"S1g5jJWU0m",
"BJguJBJIRm",
"BJxza4aHRm",
"SJlFLGnSRX",
"r1gSfAsSRX",
"B1x9M-TO37",
"rJedUOWm0Q",
"S1xOiCceCQ",
"Hyg1Q9I7pQ",
"rJlWIFUQpX",
"H1lGouImTm",
"BkeXEPYKnQ",
"Skl8Fy36hm"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"To clarify based on my understanding:\n* In the backward view, we train a model to generate trajectories that end at a demonstrated state and thus train the agent to recover. The model is factorized split into B(s|s') and B(a|s,s'), the latter corresponds to a policy that has been given additional information whi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"SkxG5hUuA7",
"S1g5jJWU0m",
"BJguJBJIRm",
"BJxza4aHRm",
"SJlFLGnSRX",
"r1gSfAsSRX",
"S1xOiCceCQ",
"iclr_2019_SkeVsiAcYm",
"Hyg1Q9I7pQ",
"rJlWIFUQpX",
"B1x9M-TO37",
"BkeXEPYKnQ",
"Skl8Fy36hm",
"iclr_2019_SkeVsiAcYm",
"iclr_2019_SkeVsiAcYm"
] |
iclr_2019_SkeZisA5t7 | Adaptive Estimators Show Information Compression in Deep Neural Networks | To improve how neural networks function it is crucial to understand their learning process. The information bottleneck theory of deep learning proposes that neural networks achieve good generalization by compressing their representations to disregard information that is not relevant to the task. However, empirical evidence for this theory is conflicting, as compression was only observed when networks used saturating activation functions. In contrast, networks with non-saturating activation functions achieved comparable levels of task performance but did not show compression. In this paper we developed more robust mutual information estimation techniques that adapt to the hidden activity of neural networks and produce more sensitive measurements of activations from all functions, especially unbounded functions. Using these adaptive estimation techniques, we explored compression in networks with a range of different activation functions. With two improved methods of estimation, firstly, we show that saturation of the activation function is not required for compression, and the amount of compression varies between different activation functions. We also find that there is a large amount of variation in compression between different network initializations. Secondly, we see that L2 regularization leads to significantly increased compression, while preventing overfitting. Finally, we show that only compression of the last layer is positively correlated with generalization. | accepted-poster-papers | This paper suggests that noise-regularized estimators of mutual information in deep neural networks should be adaptive, in the sense that the variance of the regularization noise should be proportional to the range of the hidden activity.
Two adaptive estimators are proposed: (1) an entropy-based adaptive binning (EBAB) estimator that chooses the bin boundaries such that each bin contains the same number of unique observed activation levels, and (2) an adaptive kernel density estimator (aKDE) that adds isotropic Gaussian noise, where the variance of the noise is proportional to the maximum activity value in a given layer. These estimators are then used to show that (1) ReLU networks can compress, but that compression may or may not occur depending on the specific weight initialization; (2) different nonsaturating noninearities exhibit different information plane behaviors over the course of training; and (3) L2 regularization in ReLU networks encourages compression. The paper also finds that only compression in the last (softmax) layer correlates with generalization performance. The reviewers liked the range of experiments and found the observations in the paper interesting, but had reservations about the lack of rigor in the paper (no theoretical analysis of the convergence of the proposed estimator), were worried that post-hoc addition of noise distorts the function of the network, and felt that there wasn't much insight provided on the cause of compression in deep neural networks. The AC shares these concerns, and considers them to be more significant than the reviewers do, but doesn't wish to override the reviewers' recommendation that the paper be accepted. | train | [
"HkeSOAARCX",
"ByldLTe9nm",
"HkxxUIYERQ",
"Hkxrz7YV07",
"S1etTbY4CX",
"Bkxk6xFN0Q",
"BkeLvlKNRX",
"ryeconnthX",
"Hklpw8gLnQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the rebuttal. The modifications proposed do address my concerns. Also, I do agree that the scale of experiments should not be the only factor for evaluating the quality of an article. I am moving my score to 7 hoping to see more grounded results and extensions of the presented work.",
"The authors ... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"Hkxrz7YV07",
"iclr_2019_SkeZisA5t7",
"iclr_2019_SkeZisA5t7",
"ByldLTe9nm",
"ryeconnthX",
"Hklpw8gLnQ",
"Hklpw8gLnQ",
"iclr_2019_SkeZisA5t7",
"iclr_2019_SkeZisA5t7"
] |
iclr_2019_Skeke3C5Fm | Multilingual Neural Machine Translation With Soft Decoupled Encoding | Multilingual training of neural machine translation (NMT) systems has led to impressive accuracy improvements on low-resource languages. However, there are still significant challenges in efficiently learning word representations in the face of paucity of data. In this paper, we propose Soft Decoupled Encoding (SDE), a multilingual lexicon encoding framework specifically designed to share lexical-level information intelligently without requiring heuristic preprocessing such as pre-segmenting the data. SDE represents a word by its spelling through a character encoding, and its semantic meaning through a latent embedding space shared by all languages. Experiments on a standard dataset of four low-resource languages show consistent improvements over strong multilingual NMT baselines, with gains of up to 2 BLEU on one of the tested languages, achieving the new state-of-the-art on all four language pairs. | accepted-poster-papers | although some may find the proposed approach as incremental over e.g. gu et al. (2018) and kiela et al. (2018), i believe the authors' clear motivation, formulation, experimentation and analysis are solid enough to warrant the presentation at the conference. the relative simplicity and successful empirical result show that the proposed approach could be one of the standard toolkits in deep learning for multilingual processing.
J Gu, H Hassan, J Devlin, VOK Li. Universal Neural Machine Translation for Extremely Low Resource Languages. NAACL 2018.
D Kiela, C Wang, K Cho. Context-Attentive Embeddings for Improved Sentence Representations. EMNLP 2018. | train | [
"Skx2miyxxN",
"r1lzq5FJlE",
"B1lkOSEX0X",
"BylOnrwnT7",
"Hkxv-rD2p7",
"rkg0svOopQ",
"ryeWrj7ETQ",
"HJecB8iba7",
"S1xR3Ss-pQ",
"SJe-3psYnQ",
"H1lOhvAt27"
] | [
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"First, thank you for the comments! We’d like to note that we definitely didn't mean to mis-represent or do an unfair comparison, and we apologize if it came across this way. After receiving this comment we do realize that this might not have been clear enough in the paper, so in order to remedy this, we will remov... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
4,
4
] | [
"r1lzq5FJlE",
"H1lOhvAt27",
"rkg0svOopQ",
"ryeWrj7ETQ",
"iclr_2019_Skeke3C5Fm",
"S1xR3Ss-pQ",
"iclr_2019_Skeke3C5Fm",
"SJe-3psYnQ",
"H1lOhvAt27",
"iclr_2019_Skeke3C5Fm",
"iclr_2019_Skeke3C5Fm"
] |
iclr_2019_SkfMWhAqYQ | Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet | Deep Neural Networks (DNNs) excel on many complex perceptual tasks but it has proven notoriously difficult to understand how they reach their decisions. We here introduce a high-performance DNN architecture on ImageNet whose decisions are considerably easier to explain. Our model, a simple variant of the ResNet-50 architecture called BagNet, classifies an image based on the occurrences of small local image features without taking into account their spatial ordering. This strategy is closely related to the bag-of-feature (BoF) models popular before the onset of deep learning and reaches a surprisingly high accuracy on ImageNet (87.6% top-5 for 32 x 32 px features and Alexnet performance for 16 x 16 px features). The constraint on local features makes it straightforward to analyse how exactly each part of the image influences the classification. Furthermore, the BagNets behave similarly to state-of-the-art deep neural networks such as VGG-16, ResNet-152 or DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts. This suggests that the improvements of DNNs over previous bag-of-feature classifiers in the last few years are mostly achieved by better fine-tuning rather than by qualitatively different decision strategies. | accepted-poster-papers | This paper presents an approach that relies on DNNs and bags of features that are fed into them, towards object recognition. The strengths of the paper lie in the strong performance of these simple and interpretable models compared to more complex architectures. The authors stress the interpretability of the results, which is indeed a strength of this paper.
There is plenty of discussion between the first reviewer and the authors regarding the novelty of the work, as the former points to several related papers; however, the authors provide a relatively convincing rebuttal of these concerns.
Overall, after the long discussion, there is enough consensus for this paper to be accepted to the conference. | train | [
"B1g56mREeE",
"rklqfQFACQ",
"rJeVUiaFRQ",
"rkeeML9EAX",
"Hyg_lE5NRX",
"Skg7JULVRm",
"SyxHQ7UVRQ",
"Skgkfb8NCX",
"rkewfwbVCQ",
"Bkeb0FIa2Q",
"SkxzS7El6Q",
"BylpZJNeTQ",
"B1eZr6Qxam",
"HJew0I7lpX",
"rken1AAkpm",
"rkerSzKq2X",
"HJxDPF1t37",
"BJgUOUvkpX",
"rkxKqUBka7",
"BkxqZHV1pm"... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"... | [
"Dear author,\n\nI guess I missed this answer. I'm not sure it is fair to claim this CNN is more interpretable, in the sense that this work opens more questions than it closes. \"a transparent and interpretable spatial aggregation mechanism\", is a bit of an overkill in my humble opinion. Do not worry, this does not... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"SkxzS7El6Q",
"Hyg_lE5NRX",
"iclr_2019_SkfMWhAqYQ",
"rken1AAkpm",
"SyxHQ7UVRQ",
"rkerSzKq2X",
"Skgkfb8NCX",
"HJxDPF1t37",
"BkxLk4fp3X",
"iclr_2019_SkfMWhAqYQ",
"B1eZr6Qxam",
"rken1AAkpm",
"HJew0I7lpX",
"rken1AAkpm",
"BkxqZHV1pm",
"iclr_2019_SkfMWhAqYQ",
"iclr_2019_SkfMWhAqYQ",
"Bke... |
iclr_2019_SkfrvsA9FX | Reward Constrained Policy Optimization | Solving tasks in Reinforcement Learning is no easy feat. As the goal of the agent is to maximize the accumulated reward, it often learns to exploit loopholes and misspecifications in the reward signal, resulting in unwanted behavior. While constraints may solve this issue, there is no closed-form solution for general constraints. In this work we present a novel multi-timescale approach for constrained policy optimization, called `Reward Constrained Policy Optimization' (RCPO), which uses an alternative penalty signal to guide the policy towards a constraint-satisfying one. We prove the convergence of our approach and provide empirical evidence of its ability to train constraint-satisfying policies. | accepted-poster-papers | This work is novel and reasonably clearly written, with a thorough literature survey. The proposed approach also empirically seems promising. The paper could be improved with a bit more discussion of the method's sensitivity, particularly as a two-timescale approach can be more difficult to tune.
"HJeE9ewdaQ",
"Skgy209NCX",
"SJgFtQgmnQ",
"SJx3lfwuTm",
"H1eJQbv_6X",
"r1eEFRcAnm",
"S1elwWzshQ",
"HyxcGFra3m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public"
] | [
"We thank the reviewer for his/her helpful comments and feedback. \n\nMultiple timescales: \nAs the reviewer correctly pointed out, there are certain requirements placed on the step-sizes. It is important to note though, that these requirements are standard assumptions that are used in numerous works [e.g.,1, 2] to... | [
-1,
-1,
6,
-1,
-1,
6,
7,
-1
] | [
-1,
-1,
2,
-1,
-1,
4,
2,
-1
] | [
"r1eEFRcAnm",
"SJx3lfwuTm",
"iclr_2019_SkfrvsA9FX",
"SJgFtQgmnQ",
"S1elwWzshQ",
"iclr_2019_SkfrvsA9FX",
"iclr_2019_SkfrvsA9FX",
"iclr_2019_SkfrvsA9FX"
] |
iclr_2019_SkgEaj05t7 | On the Relation Between the Sharpest Directions of DNN Loss and the SGD Step Length | The training of deep neural networks with Stochastic Gradient Descent (SGD) with a large learning rate or a small batch-size typically ends in flat regions of the weight space, as indicated by small eigenvalues of the Hessian of the training loss. This was found to correlate with a good final generalization performance. In this paper we extend previous work by investigating the curvature of the loss surface along the whole training trajectory, rather than only at the endpoint. We find that initially SGD visits increasingly sharp regions, reaching a maximum sharpness determined by both the learning rate and the batch-size of SGD. At this peak value SGD starts to fail to minimize the loss along directions in the loss surface corresponding to the largest curvature (sharpest directions). To further investigate the effect of these dynamics in the training process, we study a variant of SGD using a reduced learning rate along the sharpest directions, which we show can improve training speed while finding both sharper and better-generalizing solutions, compared to vanilla SGD. Overall, our results show that the SGD dynamics in the subspace of the sharpest directions influence the regions that SGD steers to (where larger learning rate or smaller batch size result in wider regions visited), the overall training speed, and the generalization ability of the final model. | accepted-poster-papers | The reviewers found the paper insightful and the explanations provided by the authors helpful. However, the paper would benefit from more systematic empirical evaluation and corresponding theoretical intuition.
"HylZPuIoi7",
"S1gZjQ_5nQ",
"BkxM5OQS0X",
"HkeS0ToOTm",
"ByxZp6oupm",
"BJe6UaoOam",
"BkenqdEc27"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Updated rating after author response from 8 to 7 because I agree that Figure 1 and some discussions were confusing in the original manuscript.\n--------------------------------------------------------------------------\n\nThis paper investigates the relationship between the eigenvectors of the Hessian. This paper ... | [
7,
6,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_SkgEaj05t7",
"iclr_2019_SkgEaj05t7",
"iclr_2019_SkgEaj05t7",
"HylZPuIoi7",
"BkenqdEc27",
"S1gZjQ_5nQ",
"iclr_2019_SkgEaj05t7"
] |
iclr_2019_SkgQBn0cF7 | Modeling the Long Term Future in Model-Based Reinforcement Learning | In model-based reinforcement learning, the agent interleaves between model learning and planning. These two components are inextricably intertwined. If the model is not able to provide sensible long-term predictions, the executed planner would exploit model flaws, which can yield catastrophic failures. This paper focuses on building a model that reasons about the long-term future and demonstrates how to use this for efficient planning and exploration. To this end, we build a latent-variable autoregressive model by leveraging recent ideas in variational inference. We argue that forcing latent variables to carry future information through an auxiliary task substantially improves long-term predictions. Moreover, by planning in the latent space, the planner's solution is ensured to be within regions where the model is valid. An exploration strategy can be devised by searching for unlikely trajectories under the model. Our method achieves higher reward faster than baselines on a variety of tasks and environments in both the imitation learning and model-based reinforcement learning settings. | accepted-poster-papers | This paper explores the use of multi-step latent variable models of the dynamics in imitation learning, planning, and finding sub-goals. The reviewers found the approach to be interesting. The experiments were the main weak point of the initial submission. However, the authors updated the experimental results to address these concerns to a significant degree. The reviewers all agree that the paper is above the bar for acceptance. I recommend acceptance.
"ryln2JpOk4",
"r1eK_NeLi7",
"SkxokekkyE",
"r1gl889CAX",
"BkgihzhTR7",
"HJlGMQh6C7",
"BJlOcMnT07",
"HJlR5FsT2Q",
"HJl6NdMsR7",
"rkeeGBQ5AQ",
"Ske-CdpK0X",
"rkxdPmhYCm",
"B1xorU9XCm",
"SkeIssotCX",
"BkgyCu_uRQ",
"SklnLOduCX",
"Hyxyb_OdRX",
"SkgclFyU07",
"SyxYMYcXCQ",
"B1eIKu97Rm"... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
"I want to thank the authors for the thorough engagement with the reviewers and the additional effort for improving the original submission.",
"After the rebuttal and the authors providing newer experimental results, I've increased my score. They have addressed both the issue with the phrasing of the auxiliary lo... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"BJlOcMnT07",
"iclr_2019_SkgQBn0cF7",
"r1gl889CAX",
"SklnLOduCX",
"BkgyCu_uRQ",
"Hyxyb_OdRX",
"SklnLOduCX",
"iclr_2019_SkgQBn0cF7",
"iclr_2019_SkgQBn0cF7",
"Ske-CdpK0X",
"rkxdPmhYCm",
"SkeIssotCX",
"iclr_2019_SkgQBn0cF7",
"B1xorU9XCm",
"Skg9apeK2X",
"r1eK_NeLi7",
"HJlR5FsT2Q",
"Syx... |
iclr_2019_Skh4jRcKQ | Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets | Training activation quantized neural networks involves minimizing a piecewise constant training loss whose gradient vanishes almost everywhere, which is undesirable for the standard back-propagation or chain rule. An empirical way around this issue is to use a straight-through estimator (STE) (Bengio et al., 2013) in the backward pass only, so that the "gradient" through the modified chain rule becomes non-trivial. Since this unusual "gradient" is certainly not the gradient of the loss function, the following question arises: why does searching in its negative direction minimize the training loss? In this paper, we provide the theoretical justification of the concept of STE by answering this question. We consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data. We shall refer to the unusual "gradient" given by the STE-modified chain rule as the coarse gradient. The choice of STE is not unique. We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available for training), and its negation is a descent direction for minimizing the population loss. We further show that the associated coarse gradient descent algorithm converges to a critical point of the population loss minimization problem. Moreover, we show that a poor choice of STE leads to instability of the training algorithm near certain local minima, which is verified with CIFAR-10 experiments. | accepted-poster-papers | The paper contributes to the understanding of straight-through estimation for single-hidden-layer neural networks, revealing advantages for ReLU and clipped ReLU over identity activations. A thorough and convincing theoretical analysis is provided to support these findings.
After resolving various issues during the response period, the reviewers concluded with a unanimous recommendation of acceptance. Valid criticisms of the presentation quality were raised during the review and response period, and the authors would be well served by continuing to improve the paper's clarity. | train | [
"Hkl9yq0h0Q",
"HyleRgVE2Q",
"H1x_VE6nAQ",
"SJlSnDK20m",
"rkey8cX9hm",
"S1li8XtnCm",
"Hklnz95R67",
"S1xGGzEi0Q",
"rJg9up5R67",
"B1eJB_xK2m",
"rye0Bh9RpX",
"SJewEjcCam",
"H1gwDocR6X",
"Hyg6EmKe5Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"public"
] | [
"We thank the reviewer for the response and suggestion. We'll further work on combination with weight quantization in the future.",
"This paper provides a theoretical analysis for two kinds of straight-through estimation (STE) for activation binarized neural networks. It is theoretically shown that the ReLU STE ha...
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1
] | [
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1
] | [
"H1x_VE6nAQ",
"iclr_2019_Skh4jRcKQ",
"rye0Bh9RpX",
"S1li8XtnCm",
"iclr_2019_Skh4jRcKQ",
"Hklnz95R67",
"rkey8cX9hm",
"B1eJB_xK2m",
"iclr_2019_Skh4jRcKQ",
"iclr_2019_Skh4jRcKQ",
"HyleRgVE2Q",
"B1eJB_xK2m",
"SJewEjcCam",
"iclr_2019_Skh4jRcKQ"
] |
iclr_2019_SklEEnC5tQ | DISTRIBUTIONAL CONCAVITY REGULARIZATION FOR GANS | We propose Distributional Concavity (DC) regularization for Generative Adversarial Networks (GANs), a functional gradient-based method that promotes the entropy of the generator distribution and works against mode collapse.
Our DC regularization is an easy-to-implement method that can be used in combination with the current state of the art methods like Spectral Normalization and Wasserstein GAN with gradient penalty to further improve the performance.
We will not only show that our DC regularization can achieve highly competitive results on ILSVRC2012 and CIFAR datasets in terms of Inception score and Fr\'echet inception distance, but also provide a mathematical guarantee that our method can always increase the entropy of the generator distribution. We will also show an intimate theoretical connection between our method and the theory of optimal transport. | accepted-poster-papers | This paper proposes distributional concavity regularization for GANs which encourages producing generator distributions with higher entropy.
The reviewers found the contribution interesting for the ICLR community. R3 initially found that the paper lacked clarity, but the authors took the feedback into consideration and made significant improvements in their revision. The reviewers all agreed that the updated paper should be accepted.
"Hyx3bbVc27",
"HJlnEwUMRQ",
"rkxoIlOOaQ",
"Skgk8OrDTm",
"rJllM_RzaQ",
"rkl84uRMaQ",
"HJxRuUCz6Q",
"ryxaWURfTX",
"HJeZ1SFh3Q",
"rJgOIkIKnm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"GANs (generative adversarial network) represent a recently introduced min-max generative modelling scheme with several successful applications. Unfortunately, GANs often show unstable behaviour during the training phase. The authors of the submission propose a functional-gradient type entropy-promoting approach to... | [
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
8,
7
] | [
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
1,
1
] | [
"iclr_2019_SklEEnC5tQ",
"rkxoIlOOaQ",
"Skgk8OrDTm",
"iclr_2019_SklEEnC5tQ",
"Hyx3bbVc27",
"rJllM_RzaQ",
"rJgOIkIKnm",
"HJeZ1SFh3Q",
"iclr_2019_SklEEnC5tQ",
"iclr_2019_SklEEnC5tQ"
] |
iclr_2019_SkloDjAqYm | LeMoNADe: Learned Motif and Neuronal Assembly Detection in calcium imaging videos | Neuronal assemblies, loosely defined as subsets of neurons with reoccurring spatio-temporally coordinated activation patterns, or "motifs", are thought to be building blocks of neural representations and information processing. We here propose LeMoNADe, a new exploratory data analysis method that facilitates hunting for motifs in calcium imaging videos, the dominant microscopic functional imaging modality in neurophysiology. Our nonparametric method extracts motifs directly from videos, bypassing the difficult intermediate step of spike extraction. Our technique augments variational autoencoders with a discrete stochastic node, and we show in detail how a differentiable reparametrization and relaxation can be used. An evaluation on simulated data, with available ground truth, reveals excellent quantitative performance. In real video data acquired from brain slices, with no ground truth available, LeMoNADe uncovers nontrivial candidate motifs that can help generate hypotheses for more focused biological investigations. | accepted-poster-papers | This paper is about representation learning for calcium imaging and is thus a bit different in scope than most ICLR submissions. But the paper is well executed, with good choices for the various parts of the model, making it relevant for other similar domains.
"Sye_xEmt2X",
"SJgD2xKHA7",
"BJef6TNVAQ",
"SkluQYSgRX",
"SJlbVaTcTQ",
"H1lDvaPu6Q",
"r1xhWlnDTX",
"Hyxv_pZwa7",
"B1l-SROznQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"The paper proposes a VAE-style model for identifying motifs from calcium imaging videos. As opposed to a standard VAE with Gaussian latent variables it relies on Bernoulli variables and hence, requires the Gumbel-softmax trick for inference. Compared to methods based on matrix factorization, the proposed method has the a...
5,
-1,
-1,
-1,
8,
-1,
-1,
-1,
8
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
5
] | [
"iclr_2019_SkloDjAqYm",
"BJef6TNVAQ",
"SkluQYSgRX",
"SJlbVaTcTQ",
"iclr_2019_SkloDjAqYm",
"Sye_xEmt2X",
"Hyxv_pZwa7",
"B1l-SROznQ",
"iclr_2019_SkloDjAqYm"
] |
iclr_2019_Sklsm20ctX | Competitive experience replay | Deep learning has achieved remarkable successes in solving challenging reinforcement learning (RL) problems when a dense reward function is provided. However, in sparse reward environments it still often suffers from the need to carefully shape the reward function to guide policy optimization. This limits the applicability of RL in the real world since both reinforcement learning and domain-specific knowledge are required. It is therefore of great practical importance to develop algorithms which can learn from a binary signal indicating successful task completion or other unshaped, sparse reward signals. We propose a novel method called competitive experience replay, which efficiently supplements a sparse reward by placing learning in the context of an exploration competition between a pair of agents. Our method complements the recently proposed hindsight experience replay (HER) by inducing an automatic exploratory curriculum. We evaluate our approach on the tasks of reaching various goal locations in an ant maze and manipulating objects with a robotic arm. Each task provides only binary rewards indicating whether or not the goal is achieved. Our method asymmetrically augments these sparse rewards for a pair of agents each learning the same task, creating a competitive game designed to drive exploration. Extensive experiments demonstrate that this method leads to faster convergence and improved task performance. | accepted-poster-papers | The paper proposes a new method to improve exploration in sparse reward problems, by having two agents compete with each other to generate a shaping reward that relies on how novel a newly visited state is.
The idea is nice and simple, and the results are promising. The authors implemented more baselines suggested in the initial reviews, which was also helpful. On the other hand, the approach appears somewhat ad hoc. It is not always clear why (and when) the method works, although some intuitions are given. One reviewer made a nice suggestion to obtain further insights by running experiments in less complex environments. Overall, this work is an interesting contribution.
"ryeSkJO1xV",
"BkgA6ZKc14",
"Bkltsc1uC7",
"BkedtqyuCX",
"r1lUvqk_Rm",
"rkxn75k_AX",
"BJefLzCXaX",
"SygdrZr53Q",
"BygCiBx9nm",
"HJeEbx9t27"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your reply!\nThe following are the necessary details to get HER+ICM to work on those tasks:\nWe adopt code accompanying the paper 'Large-Scale Study of Curiosity-Driven Learning' to implement HER+ICM for fair comparison with HER+CER. An intrinsic reward is computed via ICM for each transiti... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"BkgA6ZKc14",
"rkxn75k_AX",
"HJeEbx9t27",
"BygCiBx9nm",
"SygdrZr53Q",
"BJefLzCXaX",
"iclr_2019_Sklsm20ctX",
"iclr_2019_Sklsm20ctX",
"iclr_2019_Sklsm20ctX",
"iclr_2019_Sklsm20ctX"
] |
iclr_2019_Sklv5iRqYX | Multi-Domain Adversarial Learning | Multi-domain learning (MDL) aims at obtaining a model with minimal average risk across multiple domains. Our empirical motivation is automated microscopy data, where cultured cells are imaged after being exposed to known and unknown chemical perturbations, and each dataset displays significant experimental bias. This paper presents a multi-domain adversarial learning approach, MuLANN, to leverage multiple datasets with overlapping but distinct class sets, in a semi-supervised setting. Our contributions include: i) a bound on the average- and worst-domain risk in MDL, obtained using the H-divergence; ii) a new loss to accommodate semi-supervised multi-domain learning and domain adaptation; iii) the experimental validation of the approach, improving on the state of the art on two standard image benchmarks, and a novel bioimage dataset, Cell. | accepted-poster-papers | This paper extends the single source H-divergence theory for domain adaptation to the case of multiple domains. Thus, drawing on the known connection between H-divergence and learning the domain classifier for adversarial adaptation, the authors propose a multi-domain adversarial learning algorithm. The approach builds upon the gradient reversal version of adversarial adaptation proposed by Ganin et al 2016.
Overall, multi-domain learning and limiting the worst-case performance on any single domain is an interesting problem which has been relatively underexplored. Though this work does not have the highest performance on all datasets across competing methods, as noted by reviewers, it proposes a useful theoretical result which future research may build on. I would encourage the authors to compare against and discuss the missing prior work cited by Rev 3.
"SygyoFtapX",
"S1giTvK6am",
"ryxYrEt66X",
"HylcynXA2X",
"SkeOpvO227",
"SkgS2HI5hQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their insightful comments.\n\nQ1 \"all the experiments except the last row of Table 2 concern adaptation between two domains. Given the paper title, the reviewer would have expected more experiments in a multiple domain context.\" \n\nA1 A main difference between domain adaptation and MDL... | [
-1,
-1,
-1,
5,
8,
6
] | [
-1,
-1,
-1,
4,
5,
5
] | [
"SkgS2HI5hQ",
"HylcynXA2X",
"SkeOpvO227",
"iclr_2019_Sklv5iRqYX",
"iclr_2019_Sklv5iRqYX",
"iclr_2019_Sklv5iRqYX"
] |
iclr_2019_SkxXCi0qFX | ProMP: Proximal Meta-Policy Search | Credit assignment in Meta-reinforcement learning (Meta-RL) is still poorly understood. Existing methods either neglect credit assignment to pre-adaptation behavior or implement it naively. This leads to poor sample-efficiency during meta-training as well as ineffective task identification strategies.
This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL. Building on the gained insights, we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients. By controlling the statistical distance of both pre-adaptation and adapted policies during meta-policy search, the proposed algorithm enables efficient and stable meta-learning. Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance. | accepted-poster-papers | The paper studies the credit assignment problem in meta-RL, proposes a new algorithm that computes the right gradient, and demonstrates its superior empirical performance over others. The paper is well written, and all reviewers agree the work is a solid contribution to an important problem.
"BylH5tG_gE",
"H1eJ4OsokE",
"r1lq0oIiyV",
"HyginiQ60m",
"ByxeiCr3C7",
"r1e_2SCDAQ",
"SyeydrCDAX",
"HJlP-mCDAm",
"B1lMrTR7Am",
"SyewFM2lRX",
"B1xzeWBWp7",
"SJgxb2j16m",
"HklUqmw637",
"r1gzs8Eth7",
"H1lnbxMbn7",
"rklCmE7ai7",
"HylBEICnom",
"BJgVCjFGo7",
"SkeoPJlncX"
] | [
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"public"
] | [
"Hi authors, \n\nI used PyTorch to implement LVC on TRPO, and compared it with MAML+TRPO. It turns out that LVC has a lower variance and a worse average reward. When implementing LVC on PPO, which is named ProMP in your paper, it has a lower variance than MAML+TRPO and a higher average reward. \n\nThis indicates that...
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
7,
9,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1
] | [
"HyginiQ60m",
"r1lq0oIiyV",
"iclr_2019_SkxXCi0qFX",
"iclr_2019_SkxXCi0qFX",
"SJgxb2j16m",
"r1gzs8Eth7",
"HklUqmw637",
"B1xzeWBWp7",
"SyewFM2lRX",
"iclr_2019_SkxXCi0qFX",
"iclr_2019_SkxXCi0qFX",
"iclr_2019_SkxXCi0qFX",
"iclr_2019_SkxXCi0qFX",
"iclr_2019_SkxXCi0qFX",
"rklCmE7ai7",
"HylBE... |
iclr_2019_SkxXg2C5FX | Don't Settle for Average, Go for the Max: Fuzzy Sets and Max-Pooled Word Vectors | Recent literature suggests that averaged word vectors followed by simple post-processing outperform many deep learning methods on semantic textual similarity tasks. Furthermore, when averaged word vectors are trained supervised on large corpora of paraphrases, they achieve state-of-the-art results on standard STS benchmarks. Inspired by these insights, we push the limits of word embeddings even further. We propose a novel fuzzy bag-of-words (FBoW) representation for text that contains all the words in the vocabulary simultaneously but with different degrees of membership, which are derived from similarities between word vectors. We show that max-pooled word vectors are only a special case of fuzzy BoW and should be compared via fuzzy Jaccard index rather than cosine similarity. Finally, we propose DynaMax, a completely unsupervised and non-parametric similarity measure that dynamically extracts and max-pools good features depending on the sentence pair. This method is both efficient and easy to implement, yet outperforms current baselines on STS tasks by a large margin and is even competitive with supervised word vectors trained to directly optimise cosine similarity. | accepted-poster-papers | This paper presents new generalized methods for representing sentences and measuring their similarities based on word vectors. More specifically, the paper presents Fuzzy Bag-of-Words (FBoW), a generalized approach to composing sentence embeddings by combining word embeddings with different degrees of membership, which generalize more commonly used average or max-pooled vector representations. In addition, the paper presents DynaMax, an unsupervised and non-parametric similarity measure that can dynamically extract and max-pool features from a sentence pair.
Pros:
The proposed methods are natural generalizations of existing average and max-pooled vectors. They are elegant, simple, easy to implement, and demonstrate strong performance on STS tasks.
Cons:
The paper is solid; there is no significant con other than that the proposed methods are not groundbreaking innovations per se.
Verdict:
The simplicity is what makes the proposed methods elegant. The empirical results are strong. The paper is worthy of acceptance. | train | [
"ryl7fx3iyV",
"BklqRMCkeE",
"Bkg3mlksRm",
"HkxPI1yi0X",
"BkxkUCC9CQ",
"BJlNKpAcCm",
"SkxAJTC5R7",
"SylUlhAqCX",
"rke4o1cOTX",
"ByeCpELDTX",
"HJx3gDVvpX",
"BygtQLlm6Q",
"rylBW7K1pX",
"S1lC2zgqhQ",
"SyeN7H9S3m",
"rygwTXlCiX",
"SJgsHga3i7",
"r1e6Hyfos7"
] | [
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public"
] | [
"Very interesting paper! \n\nI was wondering why you left out the results from uSIF (Ethayarajh, 2018) in your Table 2, despite briefly citing it earlier on. avg-uSIF+PCA -- which the original paper denotes as UP -- looks like it gets much better results on the STS tasks than DynaMax-SIF (see Table 1 in (Ethayarajh... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
5,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
-1,
-1,
-1
] | [
"iclr_2019_SkxXg2C5FX",
"ryl7fx3iyV",
"iclr_2019_SkxXg2C5FX",
"rygwTXlCiX",
"r1e6Hyfos7",
"SyeN7H9S3m",
"S1lC2zgqhQ",
"rylBW7K1pX",
"rylBW7K1pX",
"SyeN7H9S3m",
"S1lC2zgqhQ",
"iclr_2019_SkxXg2C5FX",
"iclr_2019_SkxXg2C5FX",
"iclr_2019_SkxXg2C5FX",
"iclr_2019_SkxXg2C5FX",
"iclr_2019_SkxXg... |
iclr_2019_SyGjjsC5tQ | Stable Opponent Shaping in Differentiable Games | A growing number of learning methods are actually differentiable games whose players optimise multiple, interdependent objectives in parallel – from GANs and intrinsic curiosity to multi-agent RL. Opponent shaping is a powerful approach to improve learning dynamics in these games, accounting for player influence on others’ updates. Learning with Opponent-Learning Awareness (LOLA) is a recent algorithm that exploits this response and leads to cooperation in settings like the Iterated Prisoner’s Dilemma. Although experimentally successful, we show that LOLA agents can exhibit ‘arrogant’ behaviour directly at odds with convergence. In fact, remarkably few algorithms have theoretical guarantees applying across all (n-player, non-convex) games. In this paper we present Stable Opponent Shaping (SOS), a new method that interpolates between LOLA and a stable variant named LookAhead. We prove that LookAhead converges locally to equilibria and avoids strict saddles in all differentiable games. SOS inherits these essential guarantees, while also shaping the learning of opponents and consistently either matching or outperforming LOLA experimentally. | accepted-poster-papers | This paper provides interesting results on convergence and stability in general differentiable games. The theory appears to be correct, and the paper is reasonably well written. The main concern is the omission of connections to a related area of prior work, together with overly strong statements in the paper that there has been little prior work on general game dynamics. This is a serious omission, since it calls into question some of the novelty of the results, which have not been adequately placed relative to that prior work. The authors should incorporate a thorough discussion of relations to this work, and adjust claims about novelty (and potentially even results) based on that literature.
"SJg1HSY30m",
"S1lFlwbjR7",
"S1g8jRe9AX",
"r1gygsCu07",
"HkeiQRSTnQ",
"S1l_xkOXa7",
"SkgljTPmTQ",
"Ske-EpPQTm",
"H1l2AeXanm",
"BkeyQHEcnQ"
] | [
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for these important references. Unfortunately we were not aware of this literature, especially the monograph of Facchinei and Kanzow and the older work mentioned. Thanks also for linking to the preprint on general games with continuous action sets. This is a great starting point to explore further in thi... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
2,
1
] | [
"S1g8jRe9AX",
"r1gygsCu07",
"iclr_2019_SyGjjsC5tQ",
"Ske-EpPQTm",
"iclr_2019_SyGjjsC5tQ",
"BkeyQHEcnQ",
"H1l2AeXanm",
"HkeiQRSTnQ",
"iclr_2019_SyGjjsC5tQ",
"iclr_2019_SyGjjsC5tQ"
] |
iclr_2019_SyMDXnCcF7 | A Mean Field Theory of Batch Normalization | We develop a mean field theory for batch normalization in fully-connected feedforward neural networks. In so doing, we provide a precise characterization of signal propagation and gradient backpropagation in wide batch-normalized networks at initialization. Our theory shows that gradient signals grow exponentially in depth and that these exploding gradients cannot be eliminated by tuning the initial weight variances or by adjusting the nonlinear activation function. Indeed, batch normalization itself is the cause of gradient explosion. As a result, vanilla batch-normalized networks without skip connections are not trainable at large depths for common initialization schemes, a prediction that we verify with a variety of empirical simulations. While gradient explosion cannot be eliminated, it can be reduced by tuning the network close to the linear regime, which improves the trainability of deep batch-normalized networks without residual connections. Finally, we investigate the learning dynamics of batch-normalized networks and observe that after a single step of optimization the networks achieve a relatively stable equilibrium in which gradients have dramatically smaller dynamic range. Our theory leverages Laplace, Fourier, and Gegenbauer transforms and we derive new identities that may be of independent interest. | accepted-poster-papers | This paper provides a mean-field-theory analysis of batch normalization. First there is a negative result as to the necessity of gradient explosion when using batch normalization in a fully connected network. They then provide further insights as to what can be done about this, along with experiments to confirm their theoretical predictions.
The reviewers (and random commenters) found this paper very interesting. The reviewers were unanimous in their vote to accept. | test | [
"HJeOL_au6Q",
"BJlt5Odvam",
"rkx3VuuPpQ",
"H1gQCvdvTm",
"SkxSjPODaQ",
"Sklr8el-6Q",
"HkgmPcrZpX",
"r1lORJDq3m",
"BJxndjHwnm",
"HJgsnvp59X",
"rygMio2X57"
] | [
"public",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thanks for the detailed reply!",
"Thank you for your careful review and useful comments! Overall, in response to your review and that of referee 3 we will include a more intuitive discussion of our results in the next revision of our text.\n\nTo reply to your other specific comments,\n\n1) The intuition for batc... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3,
-1,
-1
] | [
"rkx3VuuPpQ",
"HkgmPcrZpX",
"Sklr8el-6Q",
"r1lORJDq3m",
"BJxndjHwnm",
"iclr_2019_SyMDXnCcF7",
"iclr_2019_SyMDXnCcF7",
"iclr_2019_SyMDXnCcF7",
"iclr_2019_SyMDXnCcF7",
"rygMio2X57",
"iclr_2019_SyMDXnCcF7"
] |
iclr_2019_SyMWn05F7 | Learning Exploration Policies for Navigation | Numerous past works have tackled the problem of task-driven navigation. But, how to effectively explore a new environment to enable a variety of down-stream tasks has received much less attention. In this work, we study how agents can autonomously explore realistic and complex 3D environments without the context of task-rewards. We propose a learning-based approach and investigate different policy architectures, reward functions, and training paradigms. We find that use of policies with spatial memory that are bootstrapped with imitation learning and finally finetuned with coverage rewards derived purely from on-board sensors can be effective at exploring novel environments. We show that our learned exploration policies can explore better than classical approaches based on geometry alone and generic learning-based exploration techniques. Finally, we also show how such task-agnostic exploration can be used for down-stream tasks. Videos are available at https://sites.google.com/view/exploration-for-nav/. | accepted-poster-papers | The authors have proposed an approach for directly learning a spatial exploration policy which is effective in unseen environments. Rather than use external task rewards, the proposed approach uses an internally computed coverage reward derived from on-board sensors. The authors use imitation learning to bootstrap the training and then fine-tune using the intrinsic coverage reward. Multiple experiments and ablations are given to support and understand the approach. The paper is well-written and interesting. The experiments are appropriate, although further evaluations in real-world settings really ought to be done to fully explore the significance of the approach. The reviewers were divided, with one reviewer finding fault with the paper in terms of the claims made, the positioning against prior art, and the chosen baselines. 
The other two reviewers supported publication even after considering the opposition of R1, noting that they believe that the baselines are sufficient, and the contribution is novel. After reviewing the long exchange and discussion, the AC sides with accepting the paper. Although R1 raises some valid concerns, the authors defend themselves convincingly and the arguments do not, in any case, detract substantially from what is a solid submission. | train | [
"B1e5Q2Pn1N",
"HJgsJ_y2kV",
"Hygjz8oskN",
"HyxMeLoi1N",
"rye2cHjokE",
"rkgtpyX5y4",
"rkgGSK0Mhm",
"ByeP-Bzc1E",
"HJl1dciYkV",
"HyxtU5AVJE",
"rJeZGq04y4",
"H1gw_OAN14",
"rkgpAvVGkV",
"Bken9w4zJ4",
"SygDhGjoT7",
"BkgYymsoTm",
"rJxWrMsipX",
"rkgdffoi6m",
"Skx8ACqjp7",
"S1epUX832m"... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"... | [
"After a full discussion on a proper SLAM baseline and explaining its difference with an exploration policy (such as frontier) and clarifying that authors had not been correct about arguing that R1 has missed the frontier method, now authors argue that the reviewer has flipped their arguments. That is not correct. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"HJgsJ_y2kV",
"Hygjz8oskN",
"HyxMeLoi1N",
"rye2cHjokE",
"H1gw_OAN14",
"HJl1dciYkV",
"iclr_2019_SyMWn05F7",
"HJl1dciYkV",
"rkgGSK0Mhm",
"rJeZGq04y4",
"rkgpAvVGkV",
"iclr_2019_SyMWn05F7",
"Bken9w4zJ4",
"Skx8ACqjp7",
"S1epUX832m",
"SygDhGjoT7",
"rkgGSK0Mhm",
"S1eK3wbjnX",
"iclr_2019... |
iclr_2019_SyMhLo0qKQ | Distribution-Interpolation Trade off in Generative Models | We investigate the properties of multidimensional probability distributions in the context of latent space prior distributions of implicit generative models. Our work revolves around the phenomena arising while decoding linear interpolations between two random latent vectors -- regions of latent space in close proximity to the origin of the space are oversampled, which restricts the usability of linear interpolations as a tool to analyse the latent space. We show that the distribution mismatch can be eliminated completely by a proper choice of the latent probability distribution or by using non-linear interpolations. We prove that there is a trade-off between the interpolation being linear and the latent distribution having even the most basic properties required for stable training, such as finite mean. We use the multidimensional Cauchy distribution as an example of the prior distribution, and also provide a general method of creating non-linear interpolations that is easily applicable to a large family of commonly used latent distributions. | accepted-poster-papers | All the reviewers and the AC agree that the main strength of the paper is that it studies a rather important question: the validity of using linear interpolation in evaluating GANs. The paper gives concrete examples and theoretical and empirical analysis showing that linear interpolation is not a great idea. The potential weakness is that the paper doesn't provide a very convincing new evaluation to replace linear interpolation. However, given that it is largely unclear what the right evaluations for GANs are, the AC thinks the "negative result" about linear interpolation already deserves an ICLR paper.
"r1eKxxYp0Q",
"BJeOoc8DnX",
"rJl3MOIpR7",
"rkejQ1yIR7",
"ryxVnRCrC7",
"HyxpuAKVnQ",
"Sylr6rFB07",
"SJgENyurRQ",
"rygAu1K_6X",
"SklYmyYOTQ",
"H1gsJkFuTm",
"rJg1r3XDh7"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"We thank the reviewer for the comments and assure that we will do our best to highlight the issues presented in our work to the community.",
"The paper discusses linear interpolations in the latent space, which is one of the common ways used nowadays to evaluate a quality of implicit generative models. More pre... | [
-1,
6,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"rJl3MOIpR7",
"iclr_2019_SyMhLo0qKQ",
"H1gsJkFuTm",
"Sylr6rFB07",
"SJgENyurRQ",
"iclr_2019_SyMhLo0qKQ",
"rygAu1K_6X",
"SklYmyYOTQ",
"HyxpuAKVnQ",
"rJg1r3XDh7",
"BJeOoc8DnX",
"iclr_2019_SyMhLo0qKQ"
] |
iclr_2019_SyNPk2R9K7 | Learning to Describe Scenes with Programs | Human scene perception goes beyond recognizing a collection of objects and their pairwise relations. We understand higher-level, abstract regularities within the scene such as symmetry and repetition. Current vision recognition modules and scene representations fall short in this dimension. In this paper, we present scene programs, representing a scene via a symbolic program for its objects, attributes, and their relations. We also propose a model that infers such scene programs by exploiting a hierarchical, object-based scene representation. Experiments demonstrate that our model works well on synthetic data and transfers to real images with such compositional structure. The use of scene programs has enabled a number of applications, such as complex visual analogy-making and scene extrapolation. | accepted-poster-papers | This paper presents a dataset and method for training a model to infer, from a visual scene, the program that would generate/describe it. In doing so, it produces abstract disentangled representations of the scene which could be used by agents, models, and other ML methods to reason about the scene.
This is yet another paper where the reviewers disappointingly did not interact. The first round of reviews was mediocre-to-acceptable. The authors, I think, did a good job of responding to the concerns raised by the reviewers and edited their paper accordingly. Unfortunately, not one of the reviewers took the time to consider the author responses.
In light of my reading of the responses and the revisions in the paper, I am leaning towards treating this as a paper where the review process has failed the authors, and recommending acceptance. The paper presents a novel method and dataset, and the experiments are reasonably convincing. The paper has flaws and the authors are advised to carefully take into account the concerns flagged by reviewers—many of which they have responded to—in producing their final manuscript. | val | [
"r1lOGdVV1E",
"SkgTlO4N1N",
"H1g-kdE4yN",
"Bkem1z60Cm",
"rygrxnzECX",
"HJlMBWP9R7",
"HJePX-DcCm",
"Byxj--DqRX",
"SJgwjgP5R7",
"rkegVxPc07",
"BJeQhpCZC7",
"Hkl3eaCWAm",
"r1xvq3AZRQ",
"Skg0MUM6nm",
"SylhmVThhQ",
"rJgijGzFnX"
] | [
"author",
"author",
"author",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer 1,\n\nThanks again for your constructive review, which has helped us improved the quality and clarity of the paper. In addition to our response above, in the revision, we have included comparisons with additional baselines and increased the complexity of the scene.\n\nAs the discussion period is abou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"r1xvq3AZRQ",
"Hkl3eaCWAm",
"BJeQhpCZC7",
"iclr_2019_SyNPk2R9K7",
"iclr_2019_SyNPk2R9K7",
"BJeQhpCZC7",
"Hkl3eaCWAm",
"r1xvq3AZRQ",
"rygrxnzECX",
"iclr_2019_SyNPk2R9K7",
"rJgijGzFnX",
"SylhmVThhQ",
"Skg0MUM6nm",
"iclr_2019_SyNPk2R9K7",
"iclr_2019_SyNPk2R9K7",
"iclr_2019_SyNPk2R9K7"
] |
iclr_2019_SyNvti09KQ | Visceral Machines: Risk-Aversion in Reinforcement Learning with Intrinsic Physiological Rewards | As people learn to navigate the world, autonomic nervous system (e.g., "fight or flight") responses provide intrinsic feedback about the potential consequences of action choices (e.g., becoming nervous when close to a cliff edge or driving fast around a bend). Physiological changes are correlated with these biological preparations to protect oneself from danger. We present a novel approach to reinforcement learning that leverages a task-independent intrinsic reward function trained on peripheral pulse measurements that are correlated with human autonomic nervous system responses. Our hypothesis is that such reward functions can circumvent the challenges associated with sparse and skewed rewards in reinforcement learning settings and can help improve sample efficiency. We test this in a simulated driving environment and show that it can increase the speed of learning and reduce the number of collisions during the learning stage. | accepted-poster-papers | The paper considers the problem of incorporating human physiological feedback into an autonomous driving system, where minimization of a predicted arousal response is used as an additional source of reward signal, with the intuition that this could be used as a proxy for training a policy that is risk-averse.
Reviewers were generally positive about the novelty and relevance of the approach but had methodological concerns, in particular about the weighting of the intrinsic vs. extrinsic reward (e.g., why the optimal trade-off parameter differed across settings, and how the optimal policy is affected if the influence of the intrinsic reward is not decreased over time). Additional baseline experiments were requested and performed, and the paper was significantly modified to incorporate other feedback, such as drawing connections to imitation learning. A title change was proposed and accepted to reflect the focus on the application of risk aversion (I'd ask that the authors update the paper's OpenReview metadata to reflect this).
At a high level, I believe this is an original and interesting contribution to the literature. I have not heard from two of three reviewers regarding whether their concerns were addressed, but given that their concerns appear to me to have been addressed (and their initial scores indicated that the work met the bar for acceptance, if only marginally), I am inclined to recommend acceptance. | train | [
"B1l7Gno_67",
"rye3RNPGp7",
"BJlZQHDMp7",
"ryx-NNvGaQ",
"SylwTdykpQ",
"Hye3ipRChX",
"rylqztph2m"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewers again for their constructive and insightful comments. We have uploaded a revised version of our manuscript with the changes described below in the \"initial response and clarifications\". We highlight that we have added additional related work, the experiments and results wit... | [
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"iclr_2019_SyNvti09KQ",
"SylwTdykpQ",
"rylqztph2m",
"Hye3ipRChX",
"iclr_2019_SyNvti09KQ",
"iclr_2019_SyNvti09KQ",
"iclr_2019_SyNvti09KQ"
] |
iclr_2019_SyVU6s05K7 | Deep Frank-Wolfe For Neural Network Optimization | Learning a deep neural network requires solving a challenging optimization problem: it is a high-dimensional, non-convex and non-smooth minimization problem with a large number of terms. The current practice in neural network optimization is to rely on the stochastic gradient descent (SGD) algorithm or its adaptive variants. However, SGD requires a hand-designed schedule for the learning rate. In addition, its adaptive variants tend to produce solutions that generalize less well on unseen data than SGD with a hand-designed schedule. We present an optimization method that offers empirically the best of both worlds: our algorithm yields good generalization performance while requiring only one hyper-parameter. Our approach is based on a composite proximal framework, which exploits the compositional nature of deep neural networks and can leverage powerful convex optimization algorithms by design. Specifically, we employ the Frank-Wolfe (FW) algorithm for SVM, which computes an optimal step-size in closed-form at each time-step. We further show that the descent direction is given by a simple backward pass in the network, yielding the same computational cost per iteration as SGD. We present experiments on the CIFAR and SNLI data sets, where we demonstrate the significant superiority of our method over Adam and Adagrad, as well as the recently proposed BPGrad and AMSGrad. Furthermore, we compare our algorithm to SGD with a hand-designed learning rate schedule, and show that it provides similar generalization while often converging faster. The code is publicly available at https://github.com/oval-group/dfw. | accepted-poster-papers | The paper was judged by the reviewers as providing interesting ideas, being well written, and potentially having an impact on future research on NN optimization. The authors are asked to make sure they address the reviewers' comments clearly in the paper.
"Bkg1zXr01N",
"SJebCbtF67",
"S1ekiBNT0m",
"Hkg1XJI3AX",
"SJlpdCRsCX",
"SkeVgPnFRX",
"HklQyb9637",
"SygOKWFYTm",
"Syei-gKKTm",
"r1g2QHW0h7",
"BJlCnxODhX"
] | [
"author",
"author",
"author",
"public",
"author",
"public",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We have performed additional experiments on the CIFAR data sets using data augmentation. We summarise here our findings, and we will provide more details in future versions of the paper. \n\nIn order to account for the additional variance introduced by the data augmentation, we allow the batch size to be chosen as... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
5,
4
] | [
"S1ekiBNT0m",
"BJlCnxODhX",
"Hkg1XJI3AX",
"SJlpdCRsCX",
"SkeVgPnFRX",
"iclr_2019_SyVU6s05K7",
"iclr_2019_SyVU6s05K7",
"HklQyb9637",
"r1g2QHW0h7",
"iclr_2019_SyVU6s05K7",
"iclr_2019_SyVU6s05K7"
] |
iclr_2019_SyVuRiC5K7 | LEARNING TO PROPAGATE LABELS: TRANSDUCTIVE PROPAGATION NETWORK FOR FEW-SHOT LEARNING | The goal of few-shot learning is to learn a classifier that generalizes well even when trained with a limited number of training instances per class. The recently introduced meta-learning approaches tackle this problem by learning a generic classifier across a large number of multiclass classification tasks and generalizing the model to a new task. Yet, even with such meta-learning, the low-data problem in the novel classification task still remains. In this paper, we propose Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem. Specifically, we propose to learn to propagate labels from labeled instances to unlabeled test instances, by learning a graph construction module that exploits the manifold structure in the data. TPN jointly learns both the parameters of feature embedding and the graph construction in an end-to-end manner. We validate TPN on multiple benchmark datasets, on which it largely outperforms existing few-shot learning approaches and achieves state-of-the-art results. | accepted-poster-papers | As far as I know, this is the first paper to combine transductive learning with few-shot classification. The proposed algorithm, TPN, combines label propagation with episodic training, as well as learning an adaptive kernel bandwidth in order to determine the label propagation graph. The reviewers liked the idea; however, there were concerns about novelty and clarity. I think the contributions of the paper and the strong empirical results are sufficient to merit acceptance; however, the paper has not undergone a revision since September. It is therefore recommended that the authors improve the clarity based on the reviewer feedback. In particular, they should clarify the details around learning \sigma_i and graph construction.
It would also be useful to include the discussion of timing complexity in the final draft. | train | [
"HkgdqB_y1V",
"rkg_emdoA7",
"ryek8HjfyN",
"r1lRhOusR7",
"rkxcWJf_Am",
"r1xOHPbuAm",
"Hye0RIZ_RX",
"r1xNOL-uCX",
"S1lqdSZuRX",
"rkxfh9RTTQ",
"rJxOdw3sTX",
"BJgohCDj6X",
"Skx7vDii3X",
"S1x4ca-chQ",
"HJx6UQbfhX",
"H1l4b54ijm",
"SklAweC0qX",
"Bkli_tYd5Q",
"ryekRHiecQ",
"ryeo4LvecQ"... | [
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"public",
"public"
] | [
"Hi\n\nIn section 3.2.4, it was written that cross-entropy loss is computed between F* and query labels, however, https://github.com/anonymisedsupplemental/TPN/blob/master/models.py#L145 the loss is computed between F* and the UNION of support labels and query labels. In fact, I changed the loss computation in your... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"r1lRhOusR7",
"rkxcWJf_Am",
"HkgdqB_y1V",
"rkg_emdoA7",
"rkxfh9RTTQ",
"HJx6UQbfhX",
"S1x4ca-chQ",
"Skx7vDii3X",
"iclr_2019_SyVuRiC5K7",
"rJxOdw3sTX",
"BJgohCDj6X",
"iclr_2019_SyVuRiC5K7",
"iclr_2019_SyVuRiC5K7",
"iclr_2019_SyVuRiC5K7",
"iclr_2019_SyVuRiC5K7",
"SklAweC0qX",
"iclr_2019... |
iclr_2019_SyfIfnC5Ym | Improving the Generalization of Adversarial Training with Domain Adaptation | By injecting adversarial examples into training data, adversarial training is promising for improving the robustness of deep learning models. However, most existing adversarial training approaches are based on a specific type of adversarial attack. It may not provide sufficiently representative samples from the adversarial domain, leading to a weak generalization ability on adversarial examples from other attacks. Moreover, during adversarial training, adversarial perturbations on inputs are usually crafted by fast single-step adversaries so as to scale to large datasets. This work mainly focuses on adversarial training with the efficient FGSM adversary. In this scenario, it is difficult to train a model with good generalization due to the lack of representative adversarial samples, i.e., the samples are unable to accurately reflect the adversarial domain. To alleviate this problem, we propose a novel Adversarial Training with Domain Adaptation (ATDA) method. Our intuition is to regard adversarial training on the FGSM adversary as a domain adaptation task with a limited number of target domain samples. The main idea is to learn a representation that is semantically meaningful and domain invariant on the clean domain as well as the adversarial domain. Empirical evaluations on Fashion-MNIST, SVHN, CIFAR-10 and CIFAR-100 demonstrate that ATDA can greatly improve the generalization of adversarial training and the smoothness of the learned models, and outperforms state-of-the-art methods on standard benchmark datasets. To show the transfer ability of our method, we also extend ATDA to adversarial training on iterative attacks such as PGD-Adversarial Training (PAT), and the defense performance is improved considerably.
| accepted-poster-papers | The paper presents an interesting idea for increasing the robustness of adversarial defenses by combining with existing domain adaptation approaches. All reviewers agree that the paper is well written and clearly articulates the approach and contribution.
The main area of weakness is that the experiments focus on small datasets, namely CIFAR and MNIST. That being said, the algorithm is reasonably ablated on the datasets explored, and the authors provided valuable new experimental evidence during the rebuttal phase and in response to the public comment. | train | [
"BJen5fN40X",
"rJgbbg44AQ",
"ryl2YCmVR7",
"HkeAenmERX",
"SJehLY7VAm",
"BJx8VSaIhQ",
"B1ln4ILcaX",
"SkgT7hy9a7",
"SyeI-MS1TX",
"Hygu-dLc37"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"public",
"official_reviewer",
"official_reviewer"
] | [
"\nThanks for your interest in our paper.\n\nSince (the noisy) PGD can samples more sufficient adversarial examples in adversarial domain, adversarial training on it yields more robust models than adversarial training on FGSM. However, PGD-Adversarial Training (PAT) [1] is challenging to scale to deep or wide neura... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
3,
4
] | [
"SkgT7hy9a7",
"BJx8VSaIhQ",
"SyeI-MS1TX",
"Hygu-dLc37",
"iclr_2019_SyfIfnC5Ym",
"iclr_2019_SyfIfnC5Ym",
"SkgT7hy9a7",
"iclr_2019_SyfIfnC5Ym",
"iclr_2019_SyfIfnC5Ym",
"iclr_2019_SyfIfnC5Ym"
] |
iclr_2019_SygD-hCcF7 | Dimensionality Reduction for Representing the Knowledge of Probabilistic Models | Most deep learning models rely on expressive high-dimensional representations to achieve good performance on tasks such as classification. However, the high dimensionality of these representations makes them difficult to interpret and prone to over-fitting. We propose a simple, intuitive and scalable dimension reduction framework that takes into account the soft probabilistic interpretation of standard deep models for classification. When applying our framework to visualization, our representations more accurately reflect inter-class distances than standard visualization techniques such as t-SNE. We show experimentally that our framework improves generalization performance to unseen categories in zero-shot learning. We also provide a finite sample error upper bound guarantee for the method. | accepted-poster-papers | This paper introduces an approach for reducing the dimensionality of training data examples in a way that preserves information about soft target probabilistic representations provided by a teacher model, with applications such as zero-shot learning and distillation. The authors provide an extensive theoretical and empirical analysis, showing performance improvements in zero shot learning and finite sample error upper bounds. The reviewers generally agree this is a good paper that should be published. | train | [
"HylnV5jchQ",
"SklYQlZsR7",
"HkgXAnlcRQ",
"Syxw33eqRQ",
"B1gOV3x90Q",
"ryxpkAks3m",
"H1gTSCXcnX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Authors propose a method of embedding training data examples into low-dimensional spaces such that mixture probabilities from a mixture model on these points are close to probability predictions from the original model in terms of KL divergence. Authors suggest two use-cases of such an approach: 1) data visualizat... | [
7,
-1,
-1,
-1,
-1,
6,
9
] | [
4,
-1,
-1,
-1,
-1,
1,
3
] | [
"iclr_2019_SygD-hCcF7",
"Syxw33eqRQ",
"H1gTSCXcnX",
"HylnV5jchQ",
"ryxpkAks3m",
"iclr_2019_SygD-hCcF7",
"iclr_2019_SygD-hCcF7"
] |
iclr_2019_SygLehCqtm | Learning protein sequence embeddings using information from structure | Inferring the structural properties of a protein from its amino acid sequence is a challenging yet important problem in biology. Structures are not known for the vast majority of protein sequences, but structure is critical for understanding function. Existing approaches for detecting structural similarity between proteins from sequence are unable to recognize and exploit structural patterns when sequences have diverged too far, limiting our ability to transfer knowledge between structurally related proteins. We newly approach this problem through the lens of representation learning. We introduce a framework that maps any protein sequence to a sequence of vector embeddings --- one per amino acid position --- that encode structural information. We train bidirectional long short-term memory (LSTM) models on protein sequences with a two-part feedback mechanism that incorporates information from (i) global structural similarity between proteins and (ii) pairwise residue contact maps for individual proteins. To enable learning from structural similarity information, we define a novel similarity measure between arbitrary-length sequences of vector embeddings based on a soft symmetric alignment (SSA) between them. Our method is able to learn useful position-specific embeddings despite lacking direct observations of position-level correspondence between sequences. We show empirically that our multi-task framework outperforms other sequence-based methods and even a top-performing structure-based alignment method when predicting structural similarity, our goal. Finally, we demonstrate that our learned embeddings can be transferred to other protein sequence problems, improving the state-of-the-art in transmembrane domain prediction. | accepted-poster-papers | The reviewers and authors had a productive conversation, leading to an improvement in the paper quality. 
The strengths of the paper highlighted by reviewers are a novel learning set-up and new loss functions that seem to help in the task of protein contact prediction and protein structural similarity prediction. The reviewers characterize the work as constituting an advance in an exciting application space, as well as containing a new configuration of methods to address the problem.
Overall, it is clear the paper should be accepted, based on reviewer comments, which unanimously agreed on the quality of the work. | train | [
"BkeQG-Te3X",
"rJxHUVu7JE",
"Byxe0vrhA7",
"rygB3rus3Q",
"BJeiRUs_0m",
"HkeFnFy3nX",
"ryl0Ewj_C7",
"HygYQvsOR7",
"SylE1Li_0m",
"BkgcoSjdAQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"General comment\n==============\nThe authors describe two loss functions for learning embeddings of protein amino acids based on i) predicting the global structural similarity of two proteins, and ii) predicting amino acid contacts within proteins. As far as I know, these loss functions are novel and the authors s... | [
8,
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2019_SygLehCqtm",
"Byxe0vrhA7",
"ryl0Ewj_C7",
"iclr_2019_SygLehCqtm",
"BkeQG-Te3X",
"iclr_2019_SygLehCqtm",
"HygYQvsOR7",
"BJeiRUs_0m",
"rygB3rus3Q",
"HkeFnFy3nX"
] |
iclr_2019_SygQvs0cFQ | Variational Smoothing in Recurrent Neural Network Language Models | We present a new theoretical perspective of data noising in recurrent neural network language models (Xie et al., 2017). We show that each variant of data noising is an instance of Bayesian recurrent neural networks with a particular variational distribution (i.e., a mixture of Gaussians whose weights depend on statistics derived from the corpus such as the unigram distribution). We use this insight to propose a more principled method to apply at prediction time and propose natural extensions to data noising under the variational framework. In particular, we propose variational smoothing with tied input and output embedding matrices and an element-wise variational smoothing method. We empirically verify our analysis on two benchmark language modeling datasets and demonstrate performance improvements over existing data noising methods. | accepted-poster-papers | As R1 and R2 have pointed out, this work presents an interesting and potentially more generalizable extension of the earlier work on introducing noise as regularization in autoregressive language modelling. Although it would have been better with more extensive evaluation that goes beyond unsupervised language modelling and toward conditional language modelling, I believe it is fine for this further work to be left as a follow-up.
R3's concern is definitely valid, but I believe the existing evaluation set as well as the exposition merit presentation and discussion at the conference, a view shared by the other reviewers as well as a programme chair. | train | [
"HygjX7N5TQ",
"HkeQtbV5aX",
"BygBmb1caQ",
"H1x17T9eTX",
"H1giSlCF37",
"HyeRjVPknm"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the thoughtful comments. \n\nWe totally agree with the reviewer that a comparison on larger dataset (billion word benchmark Chelba et al. 2013) will make the results stronger. It is an interesting question on itself to the data nosing method. If we have much larger data, will such smoothi... | [
-1,
-1,
-1,
7,
6,
2
] | [
-1,
-1,
-1,
4,
4,
5
] | [
"H1x17T9eTX",
"HyeRjVPknm",
"H1giSlCF37",
"iclr_2019_SygQvs0cFQ",
"iclr_2019_SygQvs0cFQ",
"iclr_2019_SygQvs0cFQ"
] |
iclr_2019_SygvZ209F7 | Biologically-Plausible Learning Algorithms Can Scale to Large Datasets | The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two biologically-plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study by Bartunov et al. (2018) finds that although feedback alignment (FA) and some variants of target-propagation (TP) perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry (SS) algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights do not share magnitudes but share signs. We examined the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet; RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018) and establish a new benchmark for future biologically-plausible learning algorithms on more difficult datasets and more complex architectures. | accepted-poster-papers | This heavily disputed paper discusses a biologically motivated alternative to back-propagation learning. In particular, methods focussing on sign-symmetry rather than weight-symmetry are investigated and, importantly, scaled to large problems. The paper demonstrates the viability of the approach. If nothing else, it instigates a wonderful platform for debate.
The results are convincing and the paper is well-presented. But the biological plausibility of the methods needed for these algorithms can be disputed. In my opinion, these are best tackled in a poster session, following the good practice at neuroscience meetings.
On a side note, the application of the approach to ResNet should be questioned. The skip-connections in ResNet may be anything but biologically relevant. | train | [
"rygygUR5Am",
"Sye4Oht90Q",
"BJxkToUNA7",
"HyegvkufCm",
"r1xjm1dzAX",
"SJexARPG0X",
"BkxHwKZA6Q",
"S1ekcnvWp7",
"Bkxf7H_W6X",
"r1lbbE_W67",
"HJx1pgdbTQ",
"HJg3Sz_WTm",
"HJels3U-67",
"SkguetB9h7",
"BJg34EAt27",
"S1eEJYfCsX",
"BkeAYwE03X"
] | [
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"There are two issues here. First is whether the performance of XNOR-Net predicts the performance of SS. Saying “gradient computation in XNOR-Net is exact in form” means that because symmetrical (binary) weights are used in the forward and backward pass in XNOR-Net, credit assignment on the weights is still accurat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
9,
4,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
-1
] | [
"Sye4Oht90Q",
"BkxHwKZA6Q",
"BkeAYwE03X",
"SkguetB9h7",
"BJg34EAt27",
"S1eEJYfCsX",
"HJels3U-67",
"HJels3U-67",
"BkeAYwE03X",
"S1eEJYfCsX",
"SkguetB9h7",
"BJg34EAt27",
"iclr_2019_SygvZ209F7",
"iclr_2019_SygvZ209F7",
"iclr_2019_SygvZ209F7",
"iclr_2019_SygvZ209F7",
"iclr_2019_SygvZ209F... |
iclr_2019_Syl7OsRqY7 | Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering | End-to-end neural models have made significant progress in question answering, however recent studies show that these models implicitly assume that the answer and evidence appear close together in a single document. In this work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents. The CFC consists of a coarse-grain module that interprets documents with respect to the query then finds a relevant answer, and a fine-grain module which scores each candidate answer by comparing its occurrences across all of the documents with the query. We design these modules using hierarchies of coattention and self-attention, which learn to emphasize different parts of the input. On the Qangaroo WikiHop multi-evidence question answering task, the CFC obtains a new state-of-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% accuracy despite not using pretrained contextual encoders. | accepted-poster-papers | The paper presents a method for coarse and fine inference for question answering. It originally measured performance only on WikiHop and then later added experiments on TriviaQA. The results are good.
One of the concerns regarding the paper was the novelty of the work, and lack of enough experiments. However, the addition of TriviaQA results allays some of that concern. I'd suggest citing the paper by Swayamdipta et al from last year that attempted coarse to fine inference for TriviaQA:
Multi-Mention Learning for Reading Comprehension with Neural Cascades.
Swabha Swayamdipta, Ankur P. Parikh and Tom Kwiatkowski.
Proceedings of ICLR 2018.
Overall, there is relative consensus that the paper is good with a new method and some strong results. | test | [
"Hkgaw5-jA7",
"HyleCv-jCX",
"Syg_aZWsC7",
"Hkll9I3KCX",
"HJeH8M5Y0Q",
"Byxm3fiRpm",
"ByeIepE667",
"HJeMGh4p6m",
"H1xqtoV6p7",
"Hyxq-pVap7",
"HkxVLCtKnm",
"S1xSi3WtnQ",
"Sklnu2iP3X"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your prompt reviews and responses!",
"Thanks for posting the new result. That is very helpful. It makes sense that the coarse-only model does not help on this task but the fine-grain model is much more useful. I will discuss with the other reviewers asap to make my final decision.",
"We agree wit... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"HyleCv-jCX",
"Syg_aZWsC7",
"HJeH8M5Y0Q",
"HJeMGh4p6m",
"ByeIepE667",
"iclr_2019_Syl7OsRqY7",
"Sklnu2iP3X",
"S1xSi3WtnQ",
"HkxVLCtKnm",
"ByeIepE667",
"iclr_2019_Syl7OsRqY7",
"iclr_2019_Syl7OsRqY7",
"iclr_2019_Syl7OsRqY7"
] |
iclr_2019_Syl8Sn0cK7 | Learning a Meta-Solver for Syntax-Guided Program Synthesis | We study a general formulation of program synthesis called syntax-guided synthesis (SyGuS) that concerns synthesizing a program that follows a given grammar and satisfies a given logical specification. Both the logical specification and the grammar have complex structures and can vary from task to task, posing significant challenges for learning across different tasks. Furthermore, training data is often unavailable for domain specific synthesis tasks. To address these challenges, we propose a meta-learning framework that learns a transferable policy from only weak supervision. Our framework consists of three components: 1) an encoder, which embeds both the logical specification and grammar at the same time using a graph neural network; 2) a grammar adaptive policy network which enables learning a transferable policy; and 3) a reinforcement learning algorithm that jointly trains the embedding and adaptive policy. We evaluate the framework on 214 cryptographic circuit synthesis tasks. It solves 141 of them in the out-of-box solver setting, significantly outperforming a similar search-based approach without learning, which solves only 31. The result is comparable to two state-of-the-art classical synthesis engines, which solve 129 and 153 respectively. In the meta-solver setting, the framework can efficiently adapt to unseen tasks and achieves speedup ranging from 2x up to 100x. | accepted-poster-papers | This paper presents an RL agent which progressively synthesizes programs according to syntactic constraints, and can learn to solve problems with different DSLs, demonstrating some degree of transfer across program synthesis problems. Reviewers agreed that this was an exciting and important development in program synthesis and meta-learning (if that word still has any meaning to it), and were impressed with both the clarity of the paper and its evaluation. 
There were some concerns about missing baselines and benchmarks, some of which were resolved during the discussion period, although it would still be good to compare to out-of-the-box MCTS.
Overall, everyone agrees this is a strong paper and that it belongs in the conference, so I have no hesitation in recommending it. | train | [
"HkgUk7O5hm",
"SJgqg7ARRm",
"Hye08GRCCX",
"rJlT6Ucq27",
"SkeU1J2CCQ",
"rkxb8RoCAX",
"H1ge36JK0m",
"rJl81oyFC7",
"BJlJVz9dA7",
"S1xZNmeeCX",
"Hklw2Tke07",
"BygXD6yl0X",
"rkgf_2Je0Q",
"SJltos1eAQ",
"HyeXDiav3X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper presents a reinforcement learning based approach to learn a search strategy to search for programs in the generic syntax-guided synthesis (SyGuS) formulation. Unlike previous neural program synthesis approaches, where the DSL grammar is fixed or the specification is in the form of input-output examples ... | [
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"iclr_2019_Syl8Sn0cK7",
"H1ge36JK0m",
"SkeU1J2CCQ",
"iclr_2019_Syl8Sn0cK7",
"rJlT6Ucq27",
"HkgUk7O5hm",
"BJlJVz9dA7",
"iclr_2019_Syl8Sn0cK7",
"rkgf_2Je0Q",
"iclr_2019_Syl8Sn0cK7",
"HyeXDiav3X",
"HkgUk7O5hm",
"SJltos1eAQ",
"rJlT6Ucq27",
"iclr_2019_Syl8Sn0cK7"
] |
iclr_2019_SylCrnCcFX | Towards Robust, Locally Linear Deep Networks | Deep networks realize complex mappings that are often understood by their locally linear behavior at or around points of interest. For example, we use the derivative of the mapping with respect to its inputs for sensitivity analysis, or to explain (obtain coordinate relevance for) a prediction. One key challenge is that such derivatives are themselves inherently unstable. In this paper, we propose a new learning problem to encourage deep networks to have stable derivatives over larger regions. While the problem is challenging in general, we focus on networks with piecewise linear activation functions. Our algorithm consists of an inference step that identifies a region around a point where linear approximation is provably stable, and an optimization step to expand such regions. We propose a novel relaxation to scale the algorithm to realistic models. We illustrate our method with residual and recurrent networks on image and sequence datasets. | accepted-poster-papers | The paper aims to encourage deep networks to have stable derivatives over larger regions for networks with piecewise linear activation functions.
All reviewers and the AC note the significance of the paper. The AC also thinks this is a very timely work, potentially of broader interest to the ICLR audience. | train | [
"BJxlHFwdlV",
"SkguX-b90m",
"H1ebgb-9CQ",
"S1g2tfHURX",
"HkeVYZOO67",
"HyeqaWu_67",
"H1eDBfaunm",
"SygRbZudpQ",
"ByeYOl_OT7",
"SJenv7uOpm",
"HJlUOxtgTX",
"rkgbycE5hQ",
"HylATdMS27"
] | [
"public",
"author",
"author",
"public",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the enlightening paper. I believe there is one missed relevant work: \"Deep Defense: Training DNNs with Improved Adversarial Robustness (arXiv:1803.00404, NeurIPS 2018)\" which also aims at enlarging the l_p margin.",
"The manageable size of MNIST was beneficial for parameter analysis and to illustrat... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_SylCrnCcFX",
"S1g2tfHURX",
"HJlUOxtgTX",
"iclr_2019_SylCrnCcFX",
"rkgbycE5hQ",
"HylATdMS27",
"iclr_2019_SylCrnCcFX",
"iclr_2019_SylCrnCcFX",
"iclr_2019_SylCrnCcFX",
"H1eDBfaunm",
"iclr_2019_SylCrnCcFX",
"iclr_2019_SylCrnCcFX",
"iclr_2019_SylCrnCcFX"
] |
iclr_2019_SylKoo0cKm | How Important is a Neuron | The problem of attributing a deep network’s prediction to its input/base features is
well-studied (cf. Simonyan et al. (2013)). We introduce the notion of conductance
to extend the notion of attribution to understanding the importance of hidden units.
Informally, the conductance of a hidden unit of a deep network is the flow of attribution
via this hidden unit. We can use conductance to understand the importance of
a hidden unit to the prediction for a specific input, or over a set of inputs. We justify
conductance in multiple ways via a qualitative comparison with other methods,
via some axiomatic results, and via an empirical evaluation based on a feature
selection task. The empirical evaluations are done using the Inception network
over ImageNet data, and a convolutional network over text data. In both cases, we
demonstrate the effectiveness of conductance in identifying interesting insights
about the internal workings of these networks. | accepted-poster-papers | This paper proposes a new measure to quantify the contribution of an individual neuron within a deep neural network. Interpretability and better understanding of the inner workings of neural networks are important questions, and all reviewers agree that this work is contributing an interesting approach and results. | train | [
"HyeEiW9FR7",
"Hye5Y6SQR7",
"SJeVPpHQA7",
"SyxMr6SmRQ",
"r1eKZaBmAX",
"H1xmAiWo6Q",
"H1lipsyqp7",
"BkeBNXOJT7",
"HJxwToa927",
"BkxH95gchQ",
"SJe-HDDIn7",
"BJgv0O6Snm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"I confirm that section 5.2 and 6.2 both have quantitative analyses. I missed the table 2 while responding to the anonymous comment.",
"We thank the reviewer for their review. The reviewer notes the need to emphasize how and why to use this approach. In the new revision, we have added a discussion section to make... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
5,
-1,
-1
] | [
"r1eKZaBmAX",
"BkxH95gchQ",
"HJxwToa927",
"BkeBNXOJT7",
"H1xmAiWo6Q",
"H1lipsyqp7",
"iclr_2019_SylKoo0cKm",
"iclr_2019_SylKoo0cKm",
"iclr_2019_SylKoo0cKm",
"iclr_2019_SylKoo0cKm",
"BJgv0O6Snm",
"iclr_2019_SylKoo0cKm"
] |
iclr_2019_SylLYsCcFm | Learning to Make Analogies by Contrasting Abstract Relational Structure | Analogical reasoning has been a principal focus of various waves of AI research. Analogy is particularly challenging for machines because it requires relational structures to be represented such that they can be flexibly applied across diverse domains of experience. Here, we study how analogical reasoning can be induced in neural networks that learn to perceive and reason about raw visual data. We find that the critical factor for inducing such a capacity is not an elaborate architecture, but rather, careful attention to the choice of data and the manner in which it is presented to the model. The most robust capacity for analogical reasoning is induced when networks learn analogies by contrasting abstract relational structures in their input domains, a training method that uses only the input data to force models to learn about important abstract features. Using this technique we demonstrate capacities for complex, visual and symbolic analogy making and generalisation in even the simplest neural network architectures. | accepted-poster-papers |
pros:
- The paper is well-written and includes a lot of interesting connections to cog sci (though see specific clarity concerns)
- The tasks considered (visual and symbolic) provide a nice opportunity to study analogy making in different settings.
cons:
- There were some concerns about baselines and novelty that I think the authors have largely addressed in revision
This is an intriguing paper and an exciting direction and I think it merits acceptance. | train | [
"SJx77lip2Q",
"SJgoIpD9R7",
"S1gWz6NqhQ",
"BygdZ9AvnQ",
"HJerbEXqR7",
"BJxmrAEOA7",
"Hkl3eCN_C7",
"rkgM56N_RQ",
"rkgGYJ4Z0X",
"H1l7cB-sa7",
"HJgeOMGM6m",
"H1xPezMMTm",
"SyenWffTX",
"HJeoWFkec7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This work investigates the ability of a neural network to learn analogy. They showed that a simple neural network is able to solve analogy problems with image or abstract input, given that the training data is selected to contrast abstract relational structures. \n\nThe paper is relatively well-written with rich d... | [
6,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
-1,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_SylLYsCcFm",
"rkgM56N_RQ",
"iclr_2019_SylLYsCcFm",
"iclr_2019_SylLYsCcFm",
"BJxmrAEOA7",
"BygdZ9AvnQ",
"S1gWz6NqhQ",
"SJx77lip2Q",
"H1xPezMMTm",
"HJgeOMGM6m",
"BygdZ9AvnQ",
"S1gWz6NqhQ",
"SJx77lip2Q",
"iclr_2019_SylLYsCcFm"
] |
iclr_2019_SylPMnR9Ym | Learning what you can do before doing anything | Intelligent agents can learn to represent the action spaces of other agents simply by observing them act. Such representations help agents quickly learn to predict the effects of their own actions on the environment and to plan complex action sequences. In this work, we address the problem of learning an agent’s action space purely from visual observation. We use stochastic video prediction to learn a latent variable that captures the scene's dynamics while being minimally sensitive to the scene's static content. We introduce a loss term that encourages the network to capture the composability of visual sequences and show that it leads to representations that disentangle the structure of actions. We call the full model with composable action representations Composable Learned Action Space Predictor (CLASP). We show the applicability of our method to synthetic settings and its potential to capture action spaces in complex, realistic visual settings. When used in a semi-supervised setting, our learned representations perform comparably to existing fully supervised methods on tasks such as action-conditioned video prediction and planning in the learned action space, while requiring orders of magnitude fewer action labels. Project website: https://daniilidis-group.github.io/learned_action_spaces | accepted-poster-papers | The reviewers had some concerns regarding clarity and evaluation but in general liked various aspects of the paper. The authors did a good job of addressing the reviewers' concerns so acceptance is recommended. | train | [
"B1lyw7K02X",
"HkxGHBNokN",
"rJetUCDx67",
"S1lxT5Z9Am",
"HJxc1L8GTQ",
"HJlMwVQt67",
"B1gUrEmKTQ",
"SJeMWONjnQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors propose a way to learn models that predict what will happen next in scenarios where action-labels are not available in abundance. The agents extend previous work by proposing a compositional latent-variable model. Results are shown on BAIR (robot pushing objects) and simulated reacher datasets. The res... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_SylPMnR9Ym",
"iclr_2019_SylPMnR9Ym",
"iclr_2019_SylPMnR9Ym",
"HJlMwVQt67",
"SJeMWONjnQ",
"rJetUCDx67",
"B1lyw7K02X",
"iclr_2019_SylPMnR9Ym"
] |
iclr_2019_Syx0Mh05YQ | Learning Grid Cells as Vector Representation of Self-Position Coupled with Matrix Representation of Self-Motion | This paper proposes a representational model for grid cells. In this model, the 2D self-position of the agent is represented by a high-dimensional vector, and the 2D self-motion or displacement of the agent is represented by a matrix that transforms the vector. Each component of the vector is a unit or a cell. The model consists of the following three sub-models. (1) Vector-matrix multiplication. The movement from the current position to the next position is modeled by matrix-vector multiplication, i.e., the vector of the next position is obtained by multiplying the matrix of the motion to the vector of the current position. (2) Magnified local isometry. The angle between two nearby vectors equals the Euclidean distance between the two corresponding positions multiplied by a magnifying factor. (3) Global adjacency kernel. The inner product between two vectors measures the adjacency between the two corresponding positions, which is defined by a kernel function of the Euclidean distance between the two positions. Our representational model has explicit algebra and geometry. It can learn hexagon patterns of grid cells, and it is capable of error correction, path integral and path planning. | accepted-poster-papers | The authors have presented a simple yet elegant model to learn grid-like responses to encode spatial position, relying only on relative Euclidean distances to train the model, and achieving a good path integration accuracy. The model is simpler than recent related work and uses a structure of 'disentangled blocks' to achieve multi-scale grids rather than requiring dropout or injected noise. The paper is clearly written and it is intriguing to get down to the fundamentals of the grid code. 
On the negative side, the section on planning does not hold up as well and makes unverifiable claims, and one reviewer suggests that this section be replaced altogether by additional analysis of the grid model. Another reviewer points out that the authors have missed an opportunity to give a theoretical perspective on their model. Although there are aspects of the work which could be improved, the AC and all reviewers are in favor of acceptance of this paper. | train | [
"rklaoMVFCm",
"S1ls-yVtnm",
"SkeifEYIhX",
"B1xoKkC_0Q",
"r1xO0OpuAX",
"HJe9Ndau0m",
"SyxSaDadCQ",
"B1xEovpuRm",
"Hke2XPpdC7",
"B1l3yDTuCm",
"Hkgw09yBaQ",
"rJxh001bTX",
"rJeFrS4J6X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The response is thorough and my concerns are addressed, I have updated the score accordingly.",
"Updated score from 6 to 7 after the authors addressed my comments below.\n\nPrevious review:\n\nThis paper builds upon the recent work on computational models of grid cells that rely on trainable (parametric) models ... | [
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"S1ls-yVtnm",
"iclr_2019_Syx0Mh05YQ",
"iclr_2019_Syx0Mh05YQ",
"r1xO0OpuAX",
"HJe9Ndau0m",
"SkeifEYIhX",
"B1xEovpuRm",
"S1ls-yVtnm",
"rJeFrS4J6X",
"iclr_2019_Syx0Mh05YQ",
"iclr_2019_Syx0Mh05YQ",
"iclr_2019_Syx0Mh05YQ",
"iclr_2019_Syx0Mh05YQ"
] |
iclr_2019_Syx5V2CcFm | Universal Stagewise Learning for Non-Convex Problems with Convergence on Averaged Solutions | Although stochastic gradient descent (SGD) method and its variants (e.g., stochastic momentum methods, AdaGrad) are algorithms of choice for solving non-convex problems (especially deep learning), big gaps still remain between the theory and the practice with many questions unresolved. For example, there is still a lack of theories of convergence for SGD and its variants that use stagewise step size and return an averaged solution in practice. In addition, theoretical insights of why adaptive step size of AdaGrad could improve non-adaptive step size of SGD is still missing for non-convex optimization. This paper aims to address these questions and fill the gap between theory and practice. We propose a universal stagewise optimization framework for a broad family of non-smooth non-convex problems with the following key features: (i) at each stage any suitable stochastic convex optimization algorithms (e.g., SGD or AdaGrad) that return an averaged solution can be employed for minimizing a regularized convex problem; (ii) the step size is decreased in a stagewise manner; (iii) an averaged solution is returned as the final solution.
Our theoretical results of stagewise AdaGrad exhibit its adaptive convergence, therefore shed insights on its faster convergence than stagewise SGD for problems with slowly growing cumulative stochastic gradients. To the best of our knowledge, these new results are the first of their kind for addressing the unresolved issues of existing theories mentioned earlier. Besides theoretical contributions, our empirical studies show that our stagewise variants of SGD, AdaGrad improve the generalization performance of existing variants/implementations of SGD and AdaGrad. | accepted-poster-papers | This paper develops a stagewise optimization framework for solving non smooth and non convex problems. The idea is to use standard convex solvers to iteratively optimize a regularized objective with penalty centered at previous iterates - which is standard in many proximal methods. The paper combines this with the analysis for non-smooth functions giving a more general convergence results. Reviewers agree on the usefulness and novelty of the contribution. Initially there were concerns about lack of comparison with current results, but updated version have addressed this issue. The main weakness is that the results only holds for \mu weekly convex functions and the algorithm depends on the knowledge of \mu. Despite this limitations, reviewers believe that the paper has enough new material and I suggest for publication. I suggest authors to address these issues in the final version. | train | [
"HJxhO0mfR7",
"SJlOL-OBj7",
"H1gh7N1GRX",
"BJxi_yj_67",
"HJe3T0ADa7",
"BJx2sTRDp7",
"r1eJ7CRDaX",
"SJgxuy1OpQ",
"rJlhyXZ5nX",
"S1gZiSu_3m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors' response clarify the difference between this work and the Natasha paper. My concern is addressed.",
"In the paper, the authors try to analyze the convergence of stochastic gradient descent based method with stagewise learning rate and average solution in practice. The paper is very easy to follow, a... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"r1eJ7CRDaX",
"iclr_2019_Syx5V2CcFm",
"BJxi_yj_67",
"SJlOL-OBj7",
"SJlOL-OBj7",
"rJlhyXZ5nX",
"S1gZiSu_3m",
"SJlOL-OBj7",
"iclr_2019_Syx5V2CcFm",
"iclr_2019_Syx5V2CcFm"
] |
iclr_2019_Syx72jC9tm | Invariant and Equivariant Graph Networks | Invariant and equivariant networks have been successfully used for learning images, sets, point clouds, and graphs. A basic challenge in developing such networks is finding the maximal collection of invariant and equivariant \emph{linear} layers. Although this question is answered for the first three examples (for popular transformations, at least), a full characterization of invariant and equivariant linear layers for graphs is not known.
In this paper we provide a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and show that their dimension, in case of edge-value graph data, is 2 and 15, respectively. More generally, for graph data defined on k-tuples of nodes, the dimension is the k-th and 2k-th Bell numbers. Orthogonal bases for the layers are computed, including generalization to multi-graph data. The constant number of basis elements and their characteristics allow successfully applying the networks to different size graphs. From the theoretical point of view, our results generalize and unify recent advancement in equivariant deep learning. In particular, we show that our model is capable of approximating any message passing neural network.
Applying these new linear layers in a simple deep neural network framework is shown to achieve comparable results to state-of-the-art and to have better expressivity than previous invariant and equivariant bases.
| accepted-poster-papers | The paper provides a comprehensive study and generalisations of previous results on linear permutation invariant and equivariant operators / layers for the case of hypergraph data on multiple node sets. Reviewers indicate that the paper makes a particularly interesting and important contribution, with applications to graphs and hyper-graphs, as demonstrated in experiments.
A concern was raised that the paper could be overstating its scope. A point is that the model might not actually give a complete characterization, since the analysis considers permutation action only. The authors have rephrased the claim. Following comments of the reviewer, the authors have also revised the paper to include a discussion of how the model is capable of approximating message passing networks.
Two referees give the paper a strong support. One referee considers the paper ok, but not good enough. The authors have made convincing efforts to improve issues and address the concerns.
| train | [
"H1lppkObRm",
"HylTKNLx0m",
"ByeAJ28Opm",
"r1ehj9U_p7",
"HJegE7RUTQ",
"HkxIqcKZa7",
"ByxBaTGy67",
"SyeASuDp27",
"rJgbJ0v52m",
"SkgCXcb16X",
"S1lQhtby6m"
] | [
"author",
"public",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author"
] | [
"Thank you for bringing to our attention these two recent works. We uploaded a revision. These two works construct graph features that seem to be very useful for graph classification but are not directly related to our approach. We have added them to our table and updated the text accordingly. Indeed these methods... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
4,
9,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
-1,
-1
] | [
"HylTKNLx0m",
"iclr_2019_Syx72jC9tm",
"HJegE7RUTQ",
"iclr_2019_Syx72jC9tm",
"S1lQhtby6m",
"ByxBaTGy67",
"iclr_2019_Syx72jC9tm",
"iclr_2019_Syx72jC9tm",
"iclr_2019_Syx72jC9tm",
"rJgbJ0v52m",
"SyeASuDp27"
] |
iclr_2019_SyxAb30cY7 | Robustness May Be at Odds with Accuracy | We show that there exists an inherent tension between the goal of adversarial robustness and that of standard generalization.
Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy. We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists even in a fairly simple and natural setting. These findings also corroborate a similar phenomenon observed in practice. Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers. These differences, in particular, seem to result in unexpected benefits: the features learned by robust models tend to align better with salient data characteristics and human perception. | accepted-poster-papers | This paper provides interesting discussions on the trade-off between model accuracy and robustness to adversarial examples. All reviewers found that both empirical studies and theoretical results are solid. The paper is very well written. The visualization results are very intuitive. I recommend acceptance.
| train | [
"S1gqQEuNCX",
"HklCQsZ4CX",
"BJlcXUnnhQ",
"Bkl044W9a7",
"Bye3G4Zc6X",
"r1l8pmZc6m",
"B1gCuge92Q",
"rkgtSKNtn7"
] | [
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your interest in our paper. In Theorem 2.1 we are proving upper bounds on the *robust accuracy* for a given *standard accuracy* (e.g. standard accuracy >95% implies robust accuracy <45%). One can consider the contrapositive to obtain bounds on the *standard accuracy* for a given *robust accuracy*. Th... | [
-1,
-1,
8,
-1,
-1,
-1,
7,
8
] | [
-1,
-1,
3,
-1,
-1,
-1,
4,
2
] | [
"HklCQsZ4CX",
"iclr_2019_SyxAb30cY7",
"iclr_2019_SyxAb30cY7",
"rkgtSKNtn7",
"B1gCuge92Q",
"BJlcXUnnhQ",
"iclr_2019_SyxAb30cY7",
"iclr_2019_SyxAb30cY7"
] |
iclr_2019_SyxZJn05YX | Feature Intertwiner for Object Detection | A well-trained model should classify objects with unanimous score for every category. This requires that the high-level semantic features be alike among samples, despite a wide span in resolution, texture, deformation, etc. Previous works focus on re-designing the loss function or proposing new regularization constraints on the loss. In this paper, we address this problem via a new perspective. For each category, it is assumed that there are two sets in the feature space: one with more reliable information and the other with less reliable source. We argue that the reliable set could guide the feature learning of the less reliable set during training - in the spirit of student mimicking teacher’s behavior and thus pushing towards a more compact class centroid in the high-dimensional space. Such a scheme also benefits the reliable set since samples become closer within the same category - implying that it is easier for the classifier to identify. We refer to this mutual learning process as feature intertwiner and embed the spirit into object detection. It is well-known that objects of low resolution are more difficult to detect due to the loss of detailed information during the network forward pass. We thus regard objects of high resolution as the reliable set and objects of low resolution as the less reliable set. Specifically, an intertwiner is achieved by minimizing the distribution divergence between two sets. We design a historical buffer to represent all previous samples in the reliable set and utilize them to guide the feature learning of the less reliable set. The design of obtaining an effective feature representation for the reliable set is further investigated, where we introduce the optimal transport (OT) algorithm into the framework. Samples in the less reliable set are better aligned with the reliable set with the aid of the OT metric. 
Incorporated with such a plug-and-play intertwiner, we achieve an evident improvement over previous state-of-the-arts on the COCO object detection benchmark. | accepted-poster-papers | The paper proposes an interesting idea (using "reliable" samples to guide the learning of "less reliable" samples). The experimental results and detailed analysis show clear improvement in object detection, especially small objects.
On the weak side, the paper seems to focus quite heavily on the object detection problem, and how to divide the data into reliable/less-reliable samples is domain-specific (it makes sense for object detection tasks, but it's unclear how to do this for general scenarios). As the authors promise, it will make more sense to change the title to "Feature Intertwiner for Object Detection" to alleviate such criticisms.
Given this said, I think this paper is over the acceptance threshold and would be of interest to many researchers. | train | [
"HJet_rzUA7",
"B1g9R3EL0X",
"S1l-I2ELCX",
"rJlJzswYnX",
"r1xxzwg3hQ",
"SJehrj89hm"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the positive and helpful comments!!! We really appreciate it and have uploaded a newer version of the manuscript (fonts marked as blue where there is a major change).\n\n>>> The method of obtaining the representative in buffer is not clearly presented; f_critic^j may be the j-th element of F_critic, ... | [
-1,
-1,
-1,
7,
5,
9
] | [
-1,
-1,
-1,
3,
4,
4
] | [
"rJlJzswYnX",
"SJehrj89hm",
"r1xxzwg3hQ",
"iclr_2019_SyxZJn05YX",
"iclr_2019_SyxZJn05YX",
"iclr_2019_SyxZJn05YX"
] |
iclr_2019_Syx_Ss05tm | Adversarial Reprogramming of Neural Networks | Deep neural networks are susceptible to adversarial attacks. In computer vision, well-crafted perturbations to images can cause neural networks to make mistakes such as confusing a cat with a computer. Previous adversarial attacks have been designed to degrade performance of models or cause machine learning models to produce specific outputs chosen ahead of time by the attacker. We introduce attacks that instead reprogram the target model to perform a task chosen by the attacker without the attacker needing to specify or compute the desired output for each test-time input. This attack finds a single adversarial perturbation that can be added to all test-time inputs to a machine learning model in order to cause the model to perform a task chosen by the adversary—even if the model was not trained to do this task. These perturbations can thus be considered a program for the new task. We demonstrate adversarial reprogramming on six ImageNet classification models, repurposing these models to perform a counting task, as well as classification tasks: classification of MNIST and CIFAR-10 examples presented as inputs to the ImageNet model. | accepted-poster-papers | Reviewers mostly recommended to accept after engaging with the authors. I have decided to reduce the weight of AnonReviewer3 because of the short review. Please take reviewers' comments into consideration to improve your submission for the camera ready.
| train | [
"Hkej4WWMlV",
"BJeV8gZzl4",
"HJgEdaTAyE",
"HJeA4SgFhX",
"r1x4QeAzyN",
"HJgmC0-WCX",
"ryxReqq5hX",
"rJg7kZx-Am",
"H1le-ggZRX",
"S1xa6RJZAX",
"Hkl3sL522m",
"r1xgzcQW9X",
"HkeWHGIe57"
] | [
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"public",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"Note, in the new Section 4.5 and Figure 3 we show that adversarial programs may be limited to a small fraction of the pixels or even made largely imperceptible by restricting magnitude.",
"Yes, the adversarial program can be thought of as parameters for a particularly bizarre neural net architecture. We note in ... | [
-1,
-1,
-1,
8,
-1,
-1,
6,
-1,
-1,
-1,
4,
-1,
-1
] | [
-1,
-1,
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
3,
-1,
-1
] | [
"HkeWHGIe57",
"HJgEdaTAyE",
"iclr_2019_Syx_Ss05tm",
"iclr_2019_Syx_Ss05tm",
"rJg7kZx-Am",
"ryxReqq5hX",
"iclr_2019_Syx_Ss05tm",
"HJeA4SgFhX",
"ryxReqq5hX",
"Hkl3sL522m",
"iclr_2019_Syx_Ss05tm",
"HkeWHGIe57",
"iclr_2019_Syx_Ss05tm"
] |
iclr_2019_SyxfEn09Y7 | G-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space | It is well known that neural networks with rectified linear units (ReLU) activation functions are positively scale-invariant. Conventional algorithms like stochastic gradient descent optimize the neural networks in the vector space of weights, which is, however, not positively scale-invariant. This mismatch may lead to problems during the optimization process. Then, a natural question is: \emph{can we construct a new vector space that is positively scale-invariant and sufficient to represent ReLU neural networks so as to better facilitate the optimization process }? In this paper, we provide our positive answer to this question. First, we conduct a formal study on the positive scaling operators, which form a transformation group, denoted as G. We prove that the value of a path (i.e. the product of the weights along the path) in the neural network is invariant to positive scaling and the value vector of all the paths is sufficient to represent the neural networks under mild conditions. Second, we show that one can identify some basis paths out of all the paths and prove that the linear span of their value vectors (denoted as G-space) is an invariant space with lower dimension under the positive scaling group. Finally, we design a stochastic gradient descent algorithm in G-space (abbreviated as G-SGD) to optimize the value vector of the basis paths of neural networks with little extra cost by leveraging back-propagation. Our experiments show that G-SGD significantly outperforms the conventional SGD algorithm in optimizing ReLU networks on benchmark datasets. | accepted-poster-papers | This paper proposes a new optimization method for ReLU networks that optimizes in a scale-invariant vector space in the hopes of facilitating learning. The proposed method is novel and is validated by some experiments on CIFAR-10 and CIFAR-100. 
The reviewers find the analysis of the invariance group informative but have raised questions about the computational cost of the method. These concerns were addressed by the authors in the revision. The method could be of practical interest to the community and so acceptance is recommended. | train | [
"ryefknZIlN",
"ryeWPub4R7",
"B1gKT5yq37",
"S1lkR4gjpX",
"ByeKq4YqTm",
"ByxLrNtqTm",
"rkejI7K5pX",
"ryesj55ZaX",
"SJgCHmagaQ",
"ryevvN85hQ",
"Hylomw-qn7"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"We have finished the experiments on 110-layer ResNet (He, et. al. 2016) using the same training strategies with Table1 in the paper. The test error rates are shown below:\n-------------------------------------------------------------------------- \n CIFAR-10 CIFAR-100\nSGD ... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"B1gKT5yq37",
"B1gKT5yq37",
"iclr_2019_SyxfEn09Y7",
"iclr_2019_SyxfEn09Y7",
"B1gKT5yq37",
"Hylomw-qn7",
"ryevvN85hQ",
"SJgCHmagaQ",
"iclr_2019_SyxfEn09Y7",
"iclr_2019_SyxfEn09Y7",
"iclr_2019_SyxfEn09Y7"
] |
iclr_2019_Syxt2jC5FX | From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference | Nonlinearity is crucial to the performance of a deep (neural) network (DN).
To date there has been little progress understanding the menagerie of available nonlinearities, but recently progress has been made on understanding the r\^{o}le played by piecewise affine and convex nonlinearities like the ReLU and absolute value activation functions and max-pooling.
In particular, DN layers constructed from these operations can be interpreted as {\em max-affine spline operators} (MASOs) that have an elegant link to vector quantization (VQ) and K-means.
While this is good theoretical progress, the entire MASO approach is predicated on the requirement that the nonlinearities be piecewise affine and convex, which precludes important activation functions like the sigmoid, hyperbolic tangent, and softmax.
{\em This paper extends the MASO framework to these and an infinitely large class of new nonlinearities by linking deterministic MASOs with probabilistic Gaussian Mixture Models (GMMs).}
We show that, under a GMM, piecewise affine, convex nonlinearities like ReLU, absolute value, and max-pooling can be interpreted as solutions to certain natural ``hard'' VQ inference problems, while sigmoid, hyperbolic tangent, and softmax can be interpreted as solutions to corresponding ``soft'' VQ inference problems.
We further extend the framework by hybridizing the hard and soft VQ optimizations to create a β-VQ inference that interpolates between hard, soft, and linear VQ inference.
A prime example of a β-VQ DN nonlinearity is the {\em swish} nonlinearity, which offers state-of-the-art performance in a range of computer vision tasks but was developed ad hoc by experimentation.
Finally, we validate with experiments an important assertion of our theory, namely that DN performance can be significantly improved by enforcing orthogonality in its linear filters.
| accepted-poster-papers | Dear authors,
All reviewers liked your work. However, they also noted that the paper was hard to read, whether because of the notation or the lack of visualization.
I strongly encourage you to spend the extra effort making your work more accessible for the final version. | train | [
"HylVtNN767",
"HkxSzkX767",
"SJetq0GQ67",
"HygJDAvzp7",
"SJxaGCDMTX",
"HyebQg_c2m",
"SylBlmHqhX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their constructive comments. We agree that our soft-VQ extension is an important piece of the puzzle that is necessary to ensure a solid foundation of the 'MASO-view' of deep neural networks. \n\nRegarding the clarity of presentation, we agree that our streamlined treatment of the MASO ba... | [
-1,
-1,
6,
-1,
-1,
6,
7
] | [
-1,
-1,
3,
-1,
-1,
4,
5
] | [
"SJetq0GQ67",
"SJetq0GQ67",
"iclr_2019_Syxt2jC5FX",
"SylBlmHqhX",
"HyebQg_c2m",
"iclr_2019_Syxt2jC5FX",
"iclr_2019_Syxt2jC5FX"
] |
iclr_2019_Syxt5oC5YQ | Aggregated Momentum: Stability Through Passive Damping | Momentum is a simple and widely used trick which allows gradient-based optimizers to pick up speed along low curvature directions. Its performance depends crucially on a damping coefficient. Large damping coefficients can potentially deliver much larger speedups, but are prone to oscillations and instability; hence one typically resorts to small values such as 0.5 or 0.9. We propose Aggregated Momentum (AggMo), a variant of momentum which combines multiple velocity vectors with different damping coefficients. AggMo is trivial to implement, but significantly dampens oscillations, enabling it to remain stable even for aggressive damping coefficients such as 0.999. We reinterpret Nesterov's accelerated gradient descent as a special case of AggMo and analyze rates of convergence for quadratic objectives. Empirically, we find that AggMo is a suitable drop-in replacement for other momentum methods, and frequently delivers faster convergence with little to no tuning. | accepted-poster-papers | Dear authors,
Reviewers liked the idea of your new optimizer and found the experiments convincing. However, they also would have liked to get better insights on the place of AggMo in the existing optimization literature. Given that the related work section is quite small, I encourage you to expand it based on the works mentioned in the reviews. | train | [
"H1e-CR4sC7",
"Sylt77fo0X",
"B1gMc8f5AQ",
"HylZgSRTTX",
"Hyxbhr0a6m",
"HkeJFV0TTX",
"rJlsENApTX",
"Skg2m3o92X",
"BJlejEkq3m",
"rJeNuHAQhm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\"Using multiple velocity vectors seems interesting but not surprising\": I am not aware of works using a similar technique, despite momentum dating back to 1964. As a result, I am not sure I understand your comment. Could you please explain?",
"Thank you for your response and for the interesting references.\n\n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"BJlejEkq3m",
"B1gMc8f5AQ",
"Hyxbhr0a6m",
"BJlejEkq3m",
"rJeNuHAQhm",
"Skg2m3o92X",
"iclr_2019_Syxt5oC5YQ",
"iclr_2019_Syxt5oC5YQ",
"iclr_2019_Syxt5oC5YQ",
"iclr_2019_Syxt5oC5YQ"
] |
iclr_2019_SyxtJh0qYm | Variational Autoencoder with Arbitrary Conditioning | We propose a single neural probabilistic model based on variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in "one shot". The features may be both real-valued and categorical. Training of the model is performed by stochastic variational Bayes. The experimental evaluation on synthetic data, as well as feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and diversity of the generated samples. | accepted-poster-papers | This paper proposes a VAE model with arbitrary conditioning. It is a novel idea, and the model derivation and training approach are technically sound. Experiments are thoughtfully designed and include comparison with latest related works.
R1 and R3 suggested the original version of the paper lacked comparison with relevant work, and the authors provided new experiments in the revision. The rebuttal also addressed a few other concerns about the novelty and clarity raised by R3.
Based on the novel contribution in handling missing feature imputation with VAE, I would recommend to accept the paper. It is worth noticing that there is another submission to ICLR (https://openreview.net/forum?id=ByxLl309Ym) that shares a similar idea of constructing the inference network with binary masking, although it is designed for a pre-trained VAE model.
There are still two weaknesses pointed out by R3 that would help improve the paper by addressing them:
1. The paper does not handle different kinds of missingness beyond missing at random.
2. VAE model makes the trade-off between computational complexity and accuracy.
Point 1 would be a good direction for future research, and point 2 is a common problem for all VAE approaches. While the latter should not become a reason to reject the paper, I encourage the authors to take MCMC methods into account in the evaluation section.
| train | [
"BJgGYt2i2X",
"Hkg0ZEfNJV",
"HkxzKKxoCm",
"ByxM-yCY07",
"SylNszXY0Q",
"SygTsHQY0Q",
"H1e4bQmYCQ",
"SyxzO-XY0m",
"S1xrqJ7FCQ",
"ryeVlLKR37",
"H1lYezPC3Q"
] | [
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The goal of this paper is to use deep generative models for missing data imputation. This paper proposes learning a latent variable deep generative model over every randomly sampled subset of observed features. First, a masking variable is sampled from a chosen prior distribution. The mask determines which feature... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_SyxtJh0qYm",
"H1e4bQmYCQ",
"ByxM-yCY07",
"SygTsHQY0Q",
"BJgGYt2i2X",
"iclr_2019_SyxtJh0qYm",
"SylNszXY0Q",
"H1lYezPC3Q",
"ryeVlLKR37",
"iclr_2019_SyxtJh0qYm",
"iclr_2019_SyxtJh0qYm"
] |
iclr_2019_SyzVb3CcFX | Time-Agnostic Prediction: Predicting Predictable Video Frames | Prediction is arguably one of the most basic functions of an intelligent system. In general, the problem of predicting events in the future or between two waypoints is exceedingly difficult. However, most phenomena naturally pass through relatively predictable bottlenecks---while we cannot predict the precise trajectory of a robot arm between being at rest and holding an object up, we can be certain that it must have picked the object up. To exploit this, we decouple visual prediction from a rigid notion of time. While conventional approaches predict frames at regularly spaced temporal intervals, our time-agnostic predictors (TAP) are not tied to specific times so that they may instead discover predictable "bottleneck" frames no matter when they occur. We evaluate our approach for future and intermediate frame prediction across three robotic manipulation tasks. Our predictions are not only of higher visual quality, but also correspond to coherent semantic subgoals in temporally extended tasks. | accepted-poster-papers | The paper introduces a new and convincing method for video frame prediction, by adding prediction uncertainty through VAEs. The results are convincing, and the reviewers are convinced.
It's unfortunate, however, that the method is only evaluated on simulated data. Letting it loose on real data would cement the results and merit oral presentation; in the current form, poster presentation is recommended. | train | [
"H1eLJv-cn7",
"BJlRH2cjp7",
"SJgSfKuoTX",
"SJxRR70saQ",
"Skl_eodiT7",
"BJxcsIMF6m",
"Hkl7KkHj6X",
"Byxjv_mHam",
"B1g2eZUqnm",
"BkevZnkq2Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer"
] | [
"Revision\n----------\nThanks for taking the comments on board. I like the paper, before and after, and so do the other reviewers. Some video results might prove more valuable to follow than the tiny figures in the paper and supplementary. Adding notes on limitations is helpful to understand future extensions.\n\n-... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2019_SyzVb3CcFX",
"H1eLJv-cn7",
"BkevZnkq2Q",
"Hkl7KkHj6X",
"Byxjv_mHam",
"B1g2eZUqnm",
"iclr_2019_SyzVb3CcFX",
"iclr_2019_SyzVb3CcFX",
"iclr_2019_SyzVb3CcFX",
"iclr_2019_SyzVb3CcFX"
] |
iclr_2019_r14EOsCqKX | A Closer Look at Deep Learning Heuristics: Learning rate restarts, Warmup and Distillation | The convergence rate and final performance of common deep learning models have significantly benefited from recently proposed heuristics such as learning rate schedules, knowledge distillation, skip connections and normalization layers. In the absence of theoretical underpinnings, controlled experiments aimed at explaining the efficacy of these strategies can aid our understanding of deep learning landscapes and the training dynamics. Existing approaches for empirical analysis rely on tools of linear interpolation and visualizations with dimensionality reduction, each with their limitations. Instead, we revisit the empirical analysis of heuristics through the lens of recently proposed methods for loss surface and representation analysis, viz. mode connectivity and canonical correlation analysis (CCA), and hypothesize reasons why the heuristics succeed. In particular, we explore knowledge distillation and learning rate heuristics of (cosine) restarts and warmup using mode connectivity and CCA. Our empirical analysis suggests that: (a) the reasons often quoted for the success of cosine annealing are not evidenced in practice; (b) that the effect of learning rate warmup is to prevent the deeper layers from creating training instability; and (c) that the latent knowledge shared by the teacher is primarily disbursed in the deeper layers. | accepted-poster-papers | The presented method uses mode connectivity to help illustrate the surfaces of parameter space between various selections of models (either through changes of parameters, learning methods, or epochs), and canonical correlation analysis (CCA) to visualize the similarity of model layers across two different selected models. These analyses are then used to study 3 forms of learning heuristics: stochastic gradient descent with restart (SGDR), warmup, and distillation.
Reviews lean toward acceptance.
Pros:
+ R1: Well-written
+ R1: Papers that analyze learning strategies are generally informative to the larger community. These experiments haven't been previously performed.
+ R1: Thorough experiments
+ R3: Results brought into context of prior hypotheses
Cons:
- R3: Batch normalization not studied, but authors have added experiments in response.
- R3 & R2: Practical implications not clear, but authors have added a discussion.
| train | [
"BkxSkNQ0TX",
"HkgcIVQRTm",
"HJgJGEXR67",
"ByxVaQQApX",
"SJxA1dN9h7",
"rJe_KxWY2X",
"SkgCTcjSh7",
"rylNuSB1Tm"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Thank you for your comment and clarifying the contradiction being discussed. We agree with the points you raised, and are glad that you found our work useful in clarifying some aspects of SGDR.\n",
"We thank the reviewer for the feedback.\n\nOur responses to the two weaknesses pointed out in the review:\n\n(1) W... | [
-1,
-1,
-1,
-1,
4,
7,
6,
-1
] | [
-1,
-1,
-1,
-1,
4,
5,
4,
-1
] | [
"rylNuSB1Tm",
"SkgCTcjSh7",
"rJe_KxWY2X",
"SJxA1dN9h7",
"iclr_2019_r14EOsCqKX",
"iclr_2019_r14EOsCqKX",
"iclr_2019_r14EOsCqKX",
"SJxA1dN9h7"
] |
iclr_2019_r1GAsjC5Fm | Self-Monitoring Navigation Agent via Auxiliary Progress Estimation | The Vision-and-Language Navigation (VLN) task entails an agent following navigational instruction in photo-realistic unknown environments. This challenging task demands that the agent be aware of which instruction was completed, which instruction is needed next, which way to go, and its navigation progress towards the goal. In this paper, we introduce a self-monitoring agent with two complementary components: (1) visual-textual co-grounding module to locate the instruction completed in the past, the instruction required for the next action, and the next moving direction from surrounding images and (2) progress monitor to ensure the grounded instruction correctly reflects the navigation progress. We test our self-monitoring agent on a standard benchmark and analyze our proposed approach through a series of ablation studies that elucidate the contributions of the primary components. Using our proposed method, we set the new state of the art by a significant margin (8% absolute increase in success rate on the unseen test set). Code is available at https://github.com/chihyaoma/selfmonitoring-agent. | accepted-poster-papers | The authors have described a navigation method that uses co-grounding between language and vision as well as an explicit self-assessment of progress. The method is used for Room-to-Room navigation and is tested in unseen environments. On the positive side, the approach is well-analyzed, with multiple ablations and baseline comparisons. The method is interesting and could be a good starting point for a more ambitious grounded language-vision agent. The approach seems to work well and achieves a high score using the metric of successful goal acquisition. 
On the negative side, the method relies on beam search, which is certainly unrealistic for real-world navigation, the evaluation metric is very simple and may be misleading, and the architecture is quite complex, may not scale or survive the test of time, and has little relevance for the greater ML community. There was a long discussion between the authors and the reviewers and other members of the public that resolved many of these points, with the authors being extremely responsive in giving additional results and details, and the reviewers' conclusion is that the paper should be accepted. | train | [
"HJeZ0QS9nm",
"S1eCDpX114",
"HJliJgopAX",
"S1g2c3eVn7",
"Bklhwhtm07",
"BklgYjvXRm",
"S1gQk2P70X",
"BkelrYdXCQ",
"Bkxytl_7CQ",
"SJxOQedmC7",
"SkxO6muXC7",
"H1x9gg_70m",
"Bkg3NyOmAm",
"rkgCnbU-6Q",
"SJeQkmIR3m",
"BJee1Q403m",
"SygyyCBKh7",
"HyeQF5h_2Q",
"Skgq_VrU37",
"HygR_tZW2m"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"public",
"author",
"public",
"author",
"public",
"author",
"public",... | [
"This submission introduces a new method for vision+language navigation which tracks progress on the instruction using a progress monitor and a visual-textual co-grounding module. The method is shown to perform well on a standard benchmark. Ablation tests indicate the importance of each component of the model. Qual... | [
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_r1GAsjC5Fm",
"Bkg3NyOmAm",
"Bkxytl_7CQ",
"iclr_2019_r1GAsjC5Fm",
"BJee1Q403m",
"HJeZ0QS9nm",
"BklgYjvXRm",
"rkgCnbU-6Q",
"SJxOQedmC7",
"H1x9gg_70m",
"SJeQkmIR3m",
"S1g2c3eVn7",
"S1gQk2P70X",
"r1lxWit7nX",
"iclr_2019_r1GAsjC5Fm",
"Skgq_VrU37",
"HyeQF5h_2Q",
"r1lxWit7nX",
... |