paper_id
stringlengths
19
21
paper_title
stringlengths
8
170
paper_abstract
stringlengths
8
5.01k
paper_acceptance
stringclasses
18 values
meta_review
stringlengths
29
10k
label
stringclasses
3 values
review_ids
list
review_writers
list
review_contents
list
review_ratings
list
review_confidences
list
review_reply_tos
list
iclr_2019_BJf_YjCqYX
Identifying Bias in AI using Simulation
Machine learned models exhibit bias, often because the datasets used to train them are biased. This presents a serious problem for the deployment of such technology, as the resulting models might perform poorly on populations that are minorities within the training set and ultimately present higher risks to them. We propose to use high-fidelity computer simulations to interrogate and diagnose biases within ML classifiers. We present a framework that leverages Bayesian parameter search to efficiently characterize the high dimensional feature space and more quickly identify weakness in performance. We apply our approach to an example domain, face detection, and show that it can be used to help identify demographic biases in commercial face application programming interfaces (APIs).
rejected-papers
The paper addresses the important problem of detecting biases in classifiers (e.g., in face detection) using simulation tools with Bayesian parameter search. While the direction of research and the presented approach seem practically useful, the reviewers raised several concerns about strengthening the results (e.g., going beyond a single avatar), and suggested that a more applied conference might be a better venue. While thorough rebuttals by the authors addressed some of these concerns, which increased some ratings, the paper overall remained in the borderline range. We hope the suggestions and comments of the reviewers can help to improve the paper.
train
[ "S1lrNjitRm", "SylorfEKnQ", "rkxX4UUeAX", "SyxeH6GRp7", "HyeeFYm2p7", "SyeHqipnnX", "BygCUy9dTX", "rklDgYwfTm", "BJlcWuDzaQ", "SJxMTUwMT7", "BJeJDrMI3m" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "We appreciate your reviews and the reconsideration of our manuscript.", "\nSummary:\n=========\nThe paper uses a proof-of-concept Bayesian parameter search-based simulation in virtual environment to probe biases of an already trained model towards specific categories that may have been sparsely represented in th...
[ -1, 7, -1, -1, -1, 5, -1, -1, -1, -1, 6 ]
[ -1, 5, -1, -1, -1, 4, -1, -1, -1, -1, 2 ]
[ "HyeeFYm2p7", "iclr_2019_BJf_YjCqYX", "SyxeH6GRp7", "SJxMTUwMT7", "iclr_2019_BJf_YjCqYX", "iclr_2019_BJf_YjCqYX", "iclr_2019_BJf_YjCqYX", "BJeJDrMI3m", "SyeHqipnnX", "SylorfEKnQ", "iclr_2019_BJf_YjCqYX" ]
iclr_2019_BJfguoAcFm
Learning Kolmogorov Models for Binary Random Variables
We propose a framework for learning a Kolmogorov model, for a collection of binary random variables. More specifically, we derive conditions that link (in the sense of implications in mathematical logic) outcomes of specific random variables and extract valuable relations from the data. We also propose an efficient algorithm for computing the model and show its first-order optimality, despite the combinatorial nature of the learning problem. We exemplify our general framework to recommendation systems and gene expression data. We believe that the work is a significant step toward interpretable machine learning.
rejected-papers
This work proposes a method for learning a Kolmogorov model, a binary random variable model that is very similar (or identical) to a matrix factorization model. The work proposes an alternative optimization approach that is again similar to matrix factorization approaches. Unfortunately, no discussion or experiments are provided to compare the proposed problem and method with standard matrix factorization; without such a comparison, it is unclear whether the proposal is substantially new or a reformulation of a standard problem. The authors are encouraged to improve the draft to clarify the connection to matrix factorization and standard factor models.
train
[ "BylpOy0BCQ", "SkgloHhH0m", "H1g4-2YY0m", "BygcX_eV0Q", "Sygzsu9MAm", "BJgzmBk-p7", "r1xXqpp6n7", "HJgr4APDhQ" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "2) We thank the reviewer for pointing out this issue, which was also raised by AnonReviewer1 (comment 3). We have restarted our response here, for your convenience. \nIn the current version, we stated that p_u,i is obtained from the rating matrix for recommendation systems (sec 2.2), and the gene expression matrix...
[ -1, -1, -1, -1, -1, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, 4, 2, 4 ]
[ "SkgloHhH0m", "HJgr4APDhQ", "BygcX_eV0Q", "r1xXqpp6n7", "BJgzmBk-p7", "iclr_2019_BJfguoAcFm", "iclr_2019_BJfguoAcFm", "iclr_2019_BJfguoAcFm" ]
iclr_2019_BJfvAoC9YQ
Feature Transformers: A Unified Representation Learning Framework for Lifelong Learning
Despite the recent advances in representation learning, lifelong learning continues to be one of the most challenging and unconquered problems. Catastrophic forgetting and data privacy constitute two of the important challenges for a successful lifelong learner. Further, existing techniques are designed to handle only specific manifestations of lifelong learning, whereas a practical lifelong learner is expected to switch and adapt seamlessly to different scenarios. In this paper, we present a single, unified mathematical framework for handling the myriad variants of lifelong learning, while alleviating these two challenges. We utilize an external memory to store only the features representing past data and learn richer and newer representations incrementally through transformation neural networks - feature transformers. We define, simulate and demonstrate exemplary performance on a realistic lifelong experimental setting using the MNIST rotations dataset, paving the way for practical lifelong learners. To illustrate the applicability of our method in data sensitive domains like healthcare, we study the pneumothorax classification problem from X-ray images, achieving near gold standard performance. We also benchmark our approach with a number of state-of-the art methods on MNIST rotations and iCIFAR100 datasets demonstrating superior performance.
rejected-papers
The paper proposes a framework for continual/lifelong learning that has the potential to overcome the problems of catastrophic forgetting and data privacy. R1, R2, and the AC agree that the proposed method is not suitable for lifelong learning in its current state, as it linearly increases memory and computational cost over time (storing features of all past data points and increasing model capacity with new tasks) without accounting for budget constraints. The authors responded in their rebuttal that the data is not stored in its original form but as feature representations (which is important for privacy). The main concern, however, was that one has to store information about all previous data points, which is not feasible in lifelong learning. In the revision the authors have tried to address some of R1's and R2's suggestions about taking budget constraints into account; however, a more in-depth analysis is required to assess the feasibility and advantages of the proposed approach. The authors motivate some of the key elements of their model by the need to protect privacy, but no actual study was conducted to show that this has been achieved. The comments from R3 were too brief and did not have a substantial impact on the decision. In conclusion, the AC suggests that the authors prepare a major revision addressing the suitability of the proposed approach for continual learning under budget constraints and for privacy preservation, and resubmit for another round of reviews.
train
[ "SJeZEzz20m", "HJemXlf30Q", "rkl0J2bhCX", "HJeejpL5RX", "rJenDhy5nX", "rylQtqdYnm", "HyxmqYlF3m" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewer for their time and patience. \n\nThe reviewer has made 2 important remarks on the paper:\n1. Non compelling results\n2. An engineering solution\n\n1. Non compelling results\nTo the best of our knowledge, we have achieved state-of-the-art results on multiple variants of continual...
[ -1, -1, -1, -1, 4, 3, 4 ]
[ -1, -1, -1, -1, 3, 4, 5 ]
[ "HyxmqYlF3m", "rylQtqdYnm", "rJenDhy5nX", "iclr_2019_BJfvAoC9YQ", "iclr_2019_BJfvAoC9YQ", "iclr_2019_BJfvAoC9YQ", "iclr_2019_BJfvAoC9YQ" ]
iclr_2019_BJfvknCqFQ
A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
We show that simple spatial transformations, namely translations and rotations alone, suffice to fool neural networks on a significant fraction of their inputs in multiple image classification tasks. Our results are in sharp contrast to previous work in adversarial robustness that relied on more complicated optimization approaches unlikely to appear outside a truly adversarial context. Moreover, the misclassifying rotations and translations are easy to find and require only a few black-box queries to the target model. Overall, our findings emphasize the need to design robust classifiers even for natural input transformations in benign settings.
rejected-papers
This paper demonstrates the interesting observation that simple transformations, such as a rotation and a translation, are enough to fool CNNs. The major concern about the paper is novelty: similar ideas have been proposed before by many previous researchers, and other networks trying to address this issue, such as rotation-invariant neural networks, have also been proposed. The grid-search attack used in the experiments may not be convincing. Overall, this paper is not ready for publication.
train
[ "SJgNXNd7kV", "Bkl7nB8oCQ", "HkedC4pFAQ", "HJlVTVaY0X", "BkgPM4aF0Q", "Sylry4at0X", "B1lYTQkR3X", "rJgdkwO6nQ", "BJeJep8gn7" ]
[ "author", "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the anonymous commenter for bringing this work to our attention. There are several differences between this and our work. We believe that their measure of robustness is less intuitive than ours, but this is a subjective point.\n\nIn terms of methodological differences, the way that the authors calculate r...
[ -1, -1, -1, -1, -1, -1, 8, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "Bkl7nB8oCQ", "iclr_2019_BJfvknCqFQ", "HJlVTVaY0X", "BJeJep8gn7", "rJgdkwO6nQ", "B1lYTQkR3X", "iclr_2019_BJfvknCqFQ", "iclr_2019_BJfvknCqFQ", "iclr_2019_BJfvknCqFQ" ]
iclr_2019_BJgEjiRqYX
A Case for Object Compositionality in Deep Generative Models of Images
Deep generative models seek to recover the process with which the observed data was generated. They may be used to synthesize new samples or to subsequently extract representations. Successful approaches in the domain of images are driven by several core inductive biases. However, a bias to account for the compositional way in which humans structure a visual scene in terms of objects has frequently been overlooked. In this work we propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition. This provides a way to efficiently learn a more accurate generative model of real-world images, and serves as an initial step towards learning corresponding object representations. We evaluate our approach on several multi-object image datasets, and find that the generator learns to identify and disentangle information corresponding to different objects at a representational level. A human study reveals that the resulting generative model is better at generating images that are more faithful to the reference distribution.
rejected-papers
The paper proposes a generative model that generates one object at a time and uses a relational network to encode cross-object relationships. Similar object-centric generation with an object-object relational network is proposed in "Sequential Attend, Infer, Repeat" of Kosiorek et al. for video generation, which first appeared on arXiv on June 5th, 2018 and was officially accepted at NIPS 2018 before the submission deadline for ICLR 2019. Moreover, several recent generative models have been proposed that consider object-centric biases, which the current paper references but does not compare against, e.g., "Attend, Infer, Repeat" of Eslami et al. or "DRAW: A Recurrent Neural Network For Image Generation" of Gregor et al. Though the CLEVR dataset considered contains real images, its intrinsic image complexity is low because it features a small number of objects against a table background. As a result, the novelty of the proposed work may not be sufficient in light of recent literature, despite the fact that the paper presents a reasonable and interesting approach for image generation.
train
[ "S1eK2QInaX", "BJlXQQ83am", "HyeRkQI26m", "H1x6ZGI3aX", "SJll1M8h6Q", "ryxjYb8hpX", "BkeddZo23m", "Skx5Nrw5hm", "H1g2wJN93Q" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your consideration and feedback.\n\nThe primary motivation of this work is to argue for object compositionality in deep generative models (and in particularly GANs), which originates from two key observations. First, real-world images are to a large degree compositional, and a generative model that i...
[ -1, -1, -1, -1, -1, -1, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "H1g2wJN93Q", "HyeRkQI26m", "BkeddZo23m", "SJll1M8h6Q", "ryxjYb8hpX", "Skx5Nrw5hm", "iclr_2019_BJgEjiRqYX", "iclr_2019_BJgEjiRqYX", "iclr_2019_BJgEjiRqYX" ]
iclr_2019_BJgGhiR5KX
Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model
A significant roadblock in multilingual neural language modeling is the lack of labeled non-English data. One potential method for overcoming this issue is learning cross-lingual text representations that can be used to transfer the performance from training on English tasks to non-English tasks, despite little to no task-specific non-English data. In this paper, we explore a natural setup for learning crosslingual sentence representations: the dual-encoder. We provide a comprehensive evaluation of our cross-lingual representations on a number of monolingual, crosslingual, and zero-shot/few-shot learning tasks, and also give an analysis of different learned cross-lingual embedding spaces.
rejected-papers
Pros:
- A new framework for learning sentence representations
- Solid experiments and analyses
- En-Zh / XNLI dataset was added, addressing the comment that no distant languages were considered; also ablation tests

Cons:
- The considered components are not novel, and their combination is straightforward
- The set of downstream tasks is not very diverse (see R2)
- Only high-resource languages are considered (it would be interesting to see the method applied to real low-resource languages)

All reviewers agree that there is no modeling contribution. Overall, it is a solid paper but I do not believe that the contribution is sufficient.
train
[ "SkeILk8t07", "SJlXDD81pX", "H1xSxOazaQ", "rJe1bDpfp7", "SkeB-rpzTX", "r1erqlaM6m", "HkxR4Ojt3m", "BkgYgVIIh7" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Hi all, we have updated our paper to include evaluations on XNLI in the main body, as per the suggestions of the reviewers. We have also included further ablation tests in the supplementary material. Thank you all again for your comments!", "This paper proposes a novel cross-lingual multi-tasking framework based...
[ -1, 7, -1, -1, -1, -1, 4, 6 ]
[ -1, 4, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2019_BJgGhiR5KX", "iclr_2019_BJgGhiR5KX", "iclr_2019_BJgGhiR5KX", "BkgYgVIIh7", "HkxR4Ojt3m", "SJlXDD81pX", "iclr_2019_BJgGhiR5KX", "iclr_2019_BJgGhiR5KX" ]
iclr_2019_BJgQB20qFQ
Learning to Progressively Plan
For problem solving, making reactive decisions based on problem description is fast but inaccurate, while search-based planning using heuristics gives better solutions but could be exponentially slow. In this paper, we propose a new approach that improves an existing solution by iteratively picking and rewriting its local components until convergence. The rewriting policy employs a neural network trained with reinforcement learning. We evaluate our approach in two domains: job scheduling and expression simplification. Compared to common effective heuristics, baseline deep models and search algorithms, our approach efficiently gives solutions with higher quality.
rejected-papers
This paper provides a new approach for progressive planning on discrete state and action spaces. The authors use LSTM architectures to iteratively select and improve local segments of an existing plan. They formulate the rewriting task as a reinforcement learning problem where the action space is the application of a set of possible rewriting rules. These models are then evaluated on a simulated job scheduling dataset and on Halide expression simplification. This is an interesting paper dealing with an important problem, and the proposed solution, based on combining several existing pieces, is novel. On the negative side, the reviewers thought the writing could be improved and that the main ideas are not explained clearly. Furthermore, the experimental evaluation is weak.
test
[ "ryxL6cPq0m", "S1xk3HP5AQ", "BJQjn8P9Am", "Bye7t4LXCQ", "BylgMrxITQ", "rkgCNnNB6Q", "SJeXthVBa7", "H1g_BV_c2m", "r1eZ-Hw8nX" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for their comments! We have revised the paper with the following major changes to incorporate the comments:\n\n- We have added an ablation study to demonstrate that our approach is not heavily biased by the initial solutions.\n\n- For expression simplification, we have added an evaluation on...
[ -1, -1, -1, -1, 5, -1, -1, 5, 5 ]
[ -1, -1, -1, -1, 3, -1, -1, 3, 3 ]
[ "iclr_2019_BJgQB20qFQ", "BylgMrxITQ", "Bye7t4LXCQ", "SJeXthVBa7", "iclr_2019_BJgQB20qFQ", "r1eZ-Hw8nX", "H1g_BV_c2m", "iclr_2019_BJgQB20qFQ", "iclr_2019_BJgQB20qFQ" ]
iclr_2019_BJgTZ3C5FX
Generative model based on minimizing exact empirical Wasserstein distance
Generative Adversarial Networks (GANs) are a very powerful framework for generative modeling. However, they are often hard to train, and learning of GANs often becomes unstable. Wasserstein GAN (WGAN) is a promising framework to deal with the instability problem as it has a good convergence property. One drawback of the WGAN is that it evaluates the Wasserstein distance in the dual domain, which requires some approximation, so that it may fail to optimize the true Wasserstein distance. In this paper, we propose evaluating the exact empirical optimal transport cost efficiently in the primal domain and performing gradient descent with respect to its derivative to train the generator network. Experiments on the MNIST dataset show that our method is significantly stable to converge, and achieves the lowest Wasserstein distance among the WGAN variants at the cost of some sharpness of generated images. Experiments on the 8-Gaussian toy dataset show that better gradients for the generator are obtained in our method. In addition, the proposed method enables more flexible generative modeling than WGAN.
rejected-papers
This paper proposes a primal approach to minimizing the Wasserstein distance for generative models, estimating it by computing the exact Wasserstein distance between empirical distributions. As the reviewers point out, the primal approach has been studied in other papers (which this submission does not cite, even in the revision) and suffers from a well-known problem of high variance. The authors have not responded to key criticisms of the reviewers. I do not think this work is ready for publication at ICLR.
train
[ "rygLK5fqnX", "BklBblb52Q", "rylIWjW827" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposed to use the exact empirical Wasserstein distance to supervise the training of generative model. To this end, the authors formulated the optimal transport cost as a linear programming problem. The quantitative results-- empirical Wasserstein distance show the superiority of the proposed methods.\n...
[ 5, 2, 3 ]
[ 2, 5, 4 ]
[ "iclr_2019_BJgTZ3C5FX", "iclr_2019_BJgTZ3C5FX", "iclr_2019_BJgTZ3C5FX" ]
iclr_2019_BJgYl205tQ
Quality Evaluation of GANs Using Cross Local Intrinsic Dimensionality
Generative Adversarial Networks (GANs) are an elegant mechanism for data generation. However, a key challenge when using GANs is how to best measure their ability to generate realistic data. In this paper, we demonstrate that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality. In particular, we propose a new evaluation measure, CrossLID, that assesses the local intrinsic dimensionality (LID) of input data with respect to neighborhoods within GAN-generated samples. In experiments on 3 benchmark image datasets, we compare our proposed measure to several state-of-the-art evaluation metrics. Our experiments show that CrossLID is strongly correlated with sample quality, is sensitive to mode collapse, is robust to small-scale noise and image transformations, and can be applied in a model-free manner. Furthermore, we show how CrossLID can be used within the GAN training process to improve generation quality.
rejected-papers
The paper proposes a new metric for the evaluation of generative models, called CrossLID, which assesses the local intrinsic dimensionality (LID) of input data with respect to neighborhoods within generated samples, i.e., which is based on nearest-neighbor distances between samples from the real data distribution and the generator. The paper is clearly written and provides an extensive experimental analysis, which shows that LID is an interesting metric to use in addition to existing metrics such as FID, at least for not-too-complex image distributions. The paper would be strengthened by showing that the metric can also be applied in those more complex settings.
train
[ "rkxxo9DX27", "BklQaxijTm", "S1xLVKn5aX", "HJe-tKh9am", "SkxnJIn5a7", "Bkltr83q6m", "Sylcwt3cTm", "r1g4u825TX", "Bygbx_hqam", "rJlszM39aX", "BJxNftyy6X", "SJgEnknh2m" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Statistics based on KNN distances are ubiquitous in machine learning. In this paper the authors propose to apply the existing LID metric to GANs. The metric can be decomposed as follows: (1) Given a point x in X, compute the k-nearest neighbors KNN(x, X) and let those distances be R1, R2, …, Rk. Now, rewrite LID(x...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2019_BJgYl205tQ", "iclr_2019_BJgYl205tQ", "rkxxo9DX27", "rkxxo9DX27", "BJxNftyy6X", "BJxNftyy6X", "rkxxo9DX27", "BJxNftyy6X", "SJgEnknh2m", "iclr_2019_BJgYl205tQ", "iclr_2019_BJgYl205tQ", "iclr_2019_BJgYl205tQ" ]
iclr_2019_BJgbzhC5Ym
NECST: Neural Joint Source-Channel Coding
For reliable transmission across a noisy communication channel, classical results from information theory show that it is asymptotically optimal to separate out the source and channel coding processes. However, this decomposition can fall short in the finite bit-length regime, as it requires non-trivial tuning of hand-crafted codes and assumes infinite computational power for decoding. In this work, we propose Neural Error Correcting and Source Trimming (NECST) codes to jointly learn the encoding and decoding processes in an end-to-end fashion. By adding noise into the latent codes to simulate the channel during training, we learn to both compress and error-correct given a fixed bit-length and computational budget. We obtain codes that are not only competitive against several capacity-approaching channel codes, but also learn useful robust representations of the data for downstream tasks such as classification. Finally, we learn an extremely fast neural decoder, yielding almost an order of magnitude in speedup compared to standard decoding methods based on iterative belief propagation.
rejected-papers
This paper proposes a principled solution to the problem of joint source-channel coding. The reviewers find the perspectives put forward in the paper refreshing and the paper well written; the background and motivation are explained really well. However, reviewers found the paper limited in terms of modeling choices and evaluation methodology. One major flaw is that the experiments are limited to unrealistic datasets and do not evaluate the method on realistic benchmarks. It is also questioned whether the error-correcting aspect is practically relevant.
test
[ "ryedu97hyN", "ByxKs1laAX", "BJg-hw220Q", "BkgxULTlR7", "BJlknyukRX", "rkxojKYRTQ", "H1xHRxl86Q", "S1xMN4UWa7", "HkgDn_1T3X", "SklUDGzq3m" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "Sorry for the delay –\n\nThank you, qualifying the statement with the datasets like you suggested is acceptable.", "Thank you for the feedback. We agree that we have demonstrated our results on MNIST, CelebA, Omniglot, and SVHN, which are datasets where generative models such as variational autoencoders have bee...
[ -1, -1, -1, -1, -1, -1, 6, -1, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, 5, -1, 3, 4 ]
[ "ByxKs1laAX", "BJg-hw220Q", "BJlknyukRX", "rkxojKYRTQ", "H1xHRxl86Q", "SklUDGzq3m", "iclr_2019_BJgbzhC5Ym", "HkgDn_1T3X", "iclr_2019_BJgbzhC5Ym", "iclr_2019_BJgbzhC5Ym" ]
iclr_2019_BJgnmhA5KQ
Diverse Machine Translation with a Single Multinomial Latent Variable
There are many ways to translate a sentence into another language. Explicit modeling of such uncertainty may enable better model fitting to the data and it may enable users to express a preference for how to translate a piece of content. Latent variable models are a natural way to represent uncertainty. Prior work investigated the use of multivariate continuous and discrete latent variables, but their interpretation and use for generating a diverse set of hypotheses have been elusive. In this work, we drastically simplify the model, using just a single multinomial latent variable. The resulting mixture of experts model can be trained efficiently via hard-EM and can generate a diverse set of hypotheses by parallel greedy decoding. We perform extensive experiments on three WMT benchmark datasets that have multiple human references, and we show that our model provides a better trade-off between quality and diversity of generations compared to all baseline methods.\footnote{Code to reproduce this work is available at: anonymized URL.}
rejected-papers
Pros:
+ A simple method
+ Producing diverse translations is an important problem

Cons:
- Technical contribution is limited / work is incremental
- R1 finds the writing not precise and the claims not supported; discussion of related work is also considered weak by R3
- Claims of modeling uncertainty are not well supported

There is no consensus among reviewers. R4 provides detailed arguments why (at the very least) certain aspects of the presentation are misleading (e.g., claiming that a uniform prior promotes diversity). R1 is also negative; his main concerns are the limited contribution, and he also questions the task (from his perspective producing diverse translations is not a valid task; I would disagree with this). R2 likes the paper, believes it is interesting and simple to use, and thinks it should be accepted. R3 is more lukewarm.
train
[ "BklSfmKSJE", "r1xTHmYBkE", "BkeOw7YMyN", "HyeWvWIc6X", "SkxgQb8cpm", "H1gWie8cTQ", "H1lEMxL9TQ", "Bye_FWf3nX", "H1x1dumo3Q", "Byxs0v_937" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Answers to specific questions:\n1. Modeling uncertainty.\nBy uncertainty in the output distribution we mean the fact that there are multiple plausible translations of the same sentence (hence, uncertainty in what to predict). \nThe baseline model leaves all the uncertainty in the decoder distribution p(y|x) and i...
[ -1, -1, 3, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, 4, 4, 4 ]
[ "BkeOw7YMyN", "BkeOw7YMyN", "iclr_2019_BJgnmhA5KQ", "Byxs0v_937", "H1x1dumo3Q", "Bye_FWf3nX", "Bye_FWf3nX", "iclr_2019_BJgnmhA5KQ", "iclr_2019_BJgnmhA5KQ", "iclr_2019_BJgnmhA5KQ" ]
iclr_2019_BJgolhR9Km
Neural Networks with Structural Resistance to Adversarial Attacks
In adversarial attacks to machine-learning classifiers, small perturbations are added to input that is correctly classified. The perturbations yield adversarial examples, which are virtually indistinguishable from the unperturbed input, and yet are misclassified. In standard neural networks used for deep learning, attackers can craft adversarial examples from most input to cause a misclassification of their choice. We introduce a new type of network units, called RBFI units, whose non-linear structure makes them inherently resistant to adversarial attacks. On permutation-invariant MNIST, in the absence of adversarial attacks, networks using RBFI units match the performance of networks using sigmoid units, and are slightly below the accuracy of networks with ReLU units. When subjected to adversarial attacks based on projected gradient descent or fast gradient-sign methods, networks with RBFI units retain accuracies above 75%, while networks with ReLU or sigmoid units see their accuracies reduced to below 1%. Further, RBFI networks trained on regular input either exceed or closely match the accuracy of sigmoid and ReLU networks trained with the help of adversarial examples. The non-linear structure of RBFI units makes them difficult to train using standard gradient descent. We show that networks of RBFI units can be efficiently trained to high accuracies using pseudogradients, computed using functions especially crafted to facilitate learning instead of their true derivatives.
rejected-papers
The paper presents a novel unit that makes networks intrinsically more robust to gradient-based adversarial attacks. The authors have addressed some concerns of the reviewers (e.g., regarding pseudo-gradient attacks), but the experimental section could benefit from a larger-scale evaluation (e.g., on ImageNet).
train
[ "HJeGSgFK37", "BkgelgBDp7", "SkxBghB537", "SJewMf7q2m", "BJl-4nFP2m", "HJx7kKdP27" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Summary: The paper proposes a new architecture to defend against adversarial examples. The authors propose a network with new type of hidden units (RBFI units). They also provide a training algorithm to train such networks and evaluate the robustness of these models against different attacks in the literature. \n\...
[ 7, -1, 5, 5, -1, -1 ]
[ 4, -1, 3, 3, -1, -1 ]
[ "iclr_2019_BJgolhR9Km", "iclr_2019_BJgolhR9Km", "iclr_2019_BJgolhR9Km", "iclr_2019_BJgolhR9Km", "HJx7kKdP27", "iclr_2019_BJgolhR9Km" ]
iclr_2019_BJgsN3R9Km
AntMan: Sparse Low-Rank Compression To Accelerate RNN Inference
Wide adoption of complex RNN based models is hindered by their inference performance, cost and memory requirements. To address this issue, we develop AntMan, combining structured sparsity with low-rank decomposition synergistically, to reduce model computation, size and execution time of RNNs while attaining desired accuracy. AntMan extends knowledge distillation based training to learn the compressed models efficiently. Our evaluation shows that AntMan offers up to 100x computation reduction with less than 1pt accuracy drop for language and machine reading comprehension models. Our evaluation also shows that for a given accuracy target, AntMan produces 5x smaller models than the state-of-art. Lastly, we show that AntMan offers super-linear speed gains compared to theoretical speedup, demonstrating its practical value on commodity hardware.
rejected-papers
The submission proposes a method that combines sparsification and low-rank projections to compress a neural network. This is in line with nearly all state-of-the-art methods. The specific combination proposed in this instance is SVD for low rank and localized group projections (LGP) for sparsity. The main concern about the paper is the lack of stronger comparison to state-of-the-art compression techniques. The authors justify their choice in the rebuttal, but ultimately only compare to relatively straightforward baselines. The additional comparison with e.g. Table 6 of the appendix does not give sufficient information to replicate or to know how the reduction in parameters was achieved. The scores for this paper were borderline, and the reviewers were largely in consensus with their scores and the points raised in the reviews. Given the highly selective nature of ICLR, the overall evaluations and remaining questions about the paper and its comparison to baselines indicate that it does not pass the threshold for acceptance.
test
[ "S1gAXzz7RQ", "rJxe6mWXCX", "HkloA--QRQ", "HyeuzZGX0X", "ryguS-zXAm", "Hyx1FWMQAm", "Byg72bzX07", "SyeVNxFq27", "rkxi2FMq2m", "B1e1FpE42m" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the constructive feedback. We respond to the 4 comments separately, as OpenReview does not allow us to post a single long response.", "Thank you for the constructive feedback. \n\n1: Is the training done on CPU or GPUs? How is the training time?\n\nCPU or GPU: While we trained the AntMan models ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 2, 4 ]
[ "B1e1FpE42m", "rkxi2FMq2m", "SyeVNxFq27", "B1e1FpE42m", "B1e1FpE42m", "B1e1FpE42m", "B1e1FpE42m", "iclr_2019_BJgsN3R9Km", "iclr_2019_BJgsN3R9Km", "iclr_2019_BJgsN3R9Km" ]
iclr_2019_BJgvg30ctX
Information Regularized Neural Networks
We formulate an information-based optimization problem for supervised classification. For invertible neural networks, the control of these information terms is passed down to the latent features and the parameter matrix in the last fully connected layer, given that mutual information is invariant under invertible maps. We propose an objective function and prove that it solves the optimization problem. Our framework allows us to learn latent features in a more interpretable form while improving the classification performance. We perform extensive quantitative and qualitative experiments in comparison with the existing state-of-the-art classification models.
rejected-papers
This paper proposes an approach to regularizing classifiers based on invertible networks using concepts from the information bottleneck theory. Because mutual information is invariant under invertible maps, the regularizer only considers the latent representation produced by the last hidden layer in the network and the network parameters that transform that representation into a classification decision. This leads to a combined ℓ1 regularization on the final weights, W, and ℓ2 regularization on W^{T} F(x), where F(x) is the latent representation produced by the last hidden layer. Experiments on CIFAR-100 image classification show that the proposed regularization can improve test performance. The reviewers liked the theoretical analysis, especially proposition 2.1 and its proof, but even after discussion and revision wanted a more careful empirical comparison to established forms of regularization to establish that the proposed approach has practical merit. The authors are encouraged to continue this line of research, building on the fruitful discussions they had with the reviewers.
train
[ "SJe2kr7ykE", "SJgNfs_S1V", "B1l4KIw9nX", "HJlaLcd4kE", "BylGzSm11E", "BygrQ9H8hQ", "rJx5A-7d2Q", "BJeP6YONRQ", "r1xz-muEAQ", "SJgsIed4Rm" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "1.1. Similarity of feature spaces\n\nWe calculate the principle component of 1000 features of each digit. To measure the similarity among the subspace generated by the top 5 principle components of each digit, we using the following \"metric\":\nlet U and V be 100*5 matrices storing the principle component of feat...
[ -1, -1, 6, -1, -1, 5, 6, -1, -1, -1 ]
[ -1, -1, 4, -1, -1, 3, 3, -1, -1, -1 ]
[ "rJx5A-7d2Q", "HJlaLcd4kE", "iclr_2019_BJgvg30ctX", "SJgsIed4Rm", "rJx5A-7d2Q", "iclr_2019_BJgvg30ctX", "iclr_2019_BJgvg30ctX", "BygrQ9H8hQ", "rJx5A-7d2Q", "B1l4KIw9nX" ]
iclr_2019_BJgy-n0cK7
Inter-BMV: Interpolation with Block Motion Vectors for Fast Semantic Segmentation on Video
Models optimized for accuracy on single images are often prohibitively slow to run on each frame in a video, especially on challenging dense prediction tasks, such as semantic segmentation. Recent work exploits the use of optical flow to warp image features forward from select keyframes, as a means to conserve computation on video. This approach, however, achieves only limited speedup, even when optimized, due to the accuracy degradation introduced by repeated forward warping, and the inference cost of optical flow estimation. To address these problems, we propose a new scheme that propagates features using the block motion vectors (BMV) present in compressed video (e.g. H.264 codecs), instead of optical flow, and bi-directionally warps and fuses features from enclosing keyframes to capture scene context on each video frame. Our technique, interpolation-BMV, enables us to accurately estimate the features of intermediate frames, while keeping inference costs low. We evaluate our system on the CamVid and Cityscapes datasets, comparing to both a strong single-frame baseline and related work. We find that we are able to substantially accelerate segmentation on video, achieving near real-time frame rates (20+ frames per second) on large images (e.g. 960 x 720 pixels), while maintaining competitive accuracy. This represents an improvement of almost 6x over the single-frame baseline and 2.5x over the fastest prior work.
rejected-papers
Strengths: The paper uses an efficient inference procedure, cutting inference time on intermediate frames by 53%, and yields better accuracy and IOU compared to the one recent closely related work. The ablation study seems sufficient and well-designed. The paper presents two feature propagation strategies and three feature fusion methods. The experiments compare these different settings, and show that interpolation-BMV is indeed a better feature propagation. Weaknesses: Reviewers believed the work to be of limited novelty. The algorithm is close to the optical-flow based models of Shelhamer et al. (2016) and Zhu et al. (2017). A reviewer asserts that the main difference is that the optical flow is replaced with BMV, which is a byproduct of modern cameras. R3 felt that there was insufficient experimental comparison with other baselines and that technical details were not clear enough. Contention: The authors assert that Shelhamer et al. (2016) does not use optical flow, and instead simply copies features from frame to frame (and schedules this copying). Zhu et al. (2017) then proposes an improvement to this scheme, forward feature warping with optical flow. In general, both these techniques fail to achieve speedups beyond small multiples of the baseline (< 3x) without impacting accuracy. Consensus: It was disappointing that some of the reviewers did not engage after the author rebuttal (perhaps initial impressions were just too low). However, after the rebuttal R1 did respond and held to the position that the work should not be accepted, justified by the assertion that other modern architectures are lighter weight and able to produce fast predictions.
train
[ "H1gbgaS3Cm", "r1lbaHCXT7", "HygcnN07pm", "B1lRMlA7p7", "SkgpZ7NJaQ", "rJgToa76nQ", "BygndE8qhX" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear all,\n\nI went through the answers from the authors and the opinions of the other reviewers. The authors provided an elaborated rebuttal with additional clarifications and experiments. The authors position themselves well w.r.t. to MPEG-flow and provide additional baselines.\n\nHowever, my concern regarding t...
[ -1, -1, -1, -1, 5, 3, 5 ]
[ -1, -1, -1, -1, 4, 5, 4 ]
[ "B1lRMlA7p7", "SkgpZ7NJaQ", "rJgToa76nQ", "BygndE8qhX", "iclr_2019_BJgy-n0cK7", "iclr_2019_BJgy-n0cK7", "iclr_2019_BJgy-n0cK7" ]
iclr_2019_BJl4f2A5tQ
Surprising Negative Results for Generative Adversarial Tree Search
While many recent advances in deep reinforcement learning rely on model-free methods, model-based approaches remain an alluring prospect for their potential to exploit unsupervised data to learn environment dynamics. One prospect is to pursue hybrid approaches, as in AlphaGo, which combines Monte-Carlo Tree Search (MCTS)—a model-based method—with deep-Q networks (DQNs)—a model-free method. MCTS requires generating rollouts, which is computationally expensive. In this paper, we propose to simulate rollouts, exploiting the latest breakthroughs in image-to-image transduction, namely Pix2Pix GANs, to predict the dynamics of the environment. Our proposed algorithm, generative adversarial tree search (GATS), simulates rollouts up to a specified depth using both a GAN-based dynamics model and a reward predictor. GATS employs MCTS for planning over the simulated samples and uses DQN to estimate the Q-function at the leaf states. Our theoretical analysis establishes some favorable properties of GATS vis-a-vis the bias-variance trade-off and empirical results show that on 5 popular Atari games, the dynamics and reward predictors converge quickly to accurate solutions. However, GATS fails to outperform DQNs in 4 out of 5 games. Notably, in these experiments, MCTS has only short rollouts (up to tree depth 4), while previous successes of MCTS have involved tree depth in the hundreds. We present a hypothesis for why tree search with short rollouts can fail even given perfect modeling.
rejected-papers
The paper addresses questions on the relationship between model-free and model-based reinforcement learning, in particular focusing on planning using learned generative models. The proposed approach, GATS, uses learned generative models for rollouts in MCTS, and the authors provide theoretical insights that show a favorable bias-variance tradeoff. Despite this theoretical advantage, and high-quality models, the proposed approach fails to perform well empirically. This surprising negative result motivates the paper, and providing insight into it is the main contribution. Based on the initial submitted version, the reviewers positively emphasized the need to understand and publish important negative results. All reviewers and the AC appreciate the important role that such a contribution can play in the research community. Reviewers also note the careful discussion of modeling choices for the generative models. The reviewers also noted several potential weaknesses. Central was the need to better motivate and investigate the hypothesis proposed to explain the negative results. Several avenues towards a better understanding were proposed, and many of these were picked up by the authors in the revision and rebuttal. A novel toy domain, "goldfish and gold bucket", was introduced for empirical analysis, and experiments there show that GATS can outperform DQN when a longer planning horizon is used. The introduced toy domain provides additional insights into the relationship between planning horizon and GATS/MCTS performance. However, it does not address key questions about why the negative result persists. The authors hypothesize that the Q-value is less accurate in the GATS setting; this is something that can be empirically evaluated, but specific evidence for this hypothesis is not clearly shown. Other forms of analysis that could shed further light on why the specific negative result occurs could be to inspect model errors. For example, if generated frames are sorted by the magnitude of prediction errors, what are the largest mistakes? Could these cause learning performance to deteriorate? The reviewers also raised several issues around the theoretical analysis, clarity (especially of captions) and structure; these were largely addressed by the revision. The concern that most strongly affected the final evaluation is the limited insight into (and evidence of) the factors that influence performance of the proposed approach. Due to this, the consensus is to not accept the paper for publication at ICLR at this stage.
train
[ "rJg4Vgg8CX", "SJlNNbeUCQ", "ryxT8-lUAQ", "Sylua1lIRX", "r1lZ5v7I67", "HylvhTth27", "ByludhFs2X" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the detailed review and thoughtful suggestions. First, inspired by your suggestion to develop a synthetic example to demonstrate our negative results, we devised the “Goldfish and gold bucket” environment (described below). Additionally, in the latest version, we have addressed your concerns about th...
[ -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, 3, 4, 2 ]
[ "r1lZ5v7I67", "HylvhTth27", "ByludhFs2X", "iclr_2019_BJl4f2A5tQ", "iclr_2019_BJl4f2A5tQ", "iclr_2019_BJl4f2A5tQ", "iclr_2019_BJl4f2A5tQ" ]
iclr_2019_BJl65sA9tm
Improving Generative Adversarial Imitation Learning with Non-expert Demonstrations
Imitation learning aims to learn an optimal policy from expert demonstrations and its recent combination with deep learning has shown impressive performance. However, collecting a large number of expert demonstrations for deep learning is time-consuming and requires much expert effort. In this paper, we propose a method to improve generative adversarial imitation learning by using additional information from non-expert demonstrations which are easier to obtain. The key idea of our method is to perform multiclass classification to learn discriminator functions where non-expert demonstrations are regarded as being drawn from an extra class. Experiments in continuous control tasks demonstrate that our method learns better policies than the generative adversarial imitation learning baseline when the number of expert demonstrations is small.
rejected-papers
This paper proposes a variant of GAIL that can learn from both expert and non-expert demonstrations. The paper is generally well-written, and the general topic is of interest to the ICLR community. Further, the empirical comparisons provide some interesting insights. However, the reviewers are concerned that the conceptual contribution is quite small, and that the relatively small conceptual contribution also does not lead to large empirical gains. As such, the paper does not meet the bar for publication at ICLR.
train
[ "HygVmDfS14", "HylZUHEvAQ", "BygodKhSAQ", "rJeGdLhHC7", "r1eMyU2rR7", "rklDzShHRm", "rklOMN2SRm", "HJl1UZlN6Q", "rJxYAkx46m", "BygZOLGGaQ", "S1g02-Kc3X", "rklN7J19hQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Addressing 1)\nThe submission assumes a huge amount of non-expert data (2-3 orders of magnitude more than expert data). I think that it is not realistic to assume that such amounts of data are collected while familiarizing with kinesthetic teaching. Furthermore, in practice, we often need to adjust data collection...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "BygodKhSAQ", "r1eMyU2rR7", "rklN7J19hQ", "S1g02-Kc3X", "BygZOLGGaQ", "HJl1UZlN6Q", "iclr_2019_BJl65sA9tm", "rJxYAkx46m", "iclr_2019_BJl65sA9tm", "iclr_2019_BJl65sA9tm", "iclr_2019_BJl65sA9tm", "iclr_2019_BJl65sA9tm" ]
iclr_2019_BJlMcjC5K7
Neural Random Projections for Language Modelling
Neural network-based language models deal with data sparsity problems by mapping the large discrete space of words into a smaller continuous space of real-valued vectors. By learning distributed vector representations for words, each training sample informs the neural network model about a combinatorial number of other patterns. In this paper, we exploit the sparsity in natural language even further by encoding each unique input word using a fixed sparse random representation. These sparse codes are then projected onto a smaller embedding space, which allows for the encoding of word occurrences from a possibly unknown vocabulary, along with the creation of more compact language models using a reduced number of parameters. We investigate the properties of our encoding mechanism empirically, by evaluating its performance on the widely used Penn Treebank corpus. We show that guaranteeing approximately equidistant vector representations for unique discrete inputs is enough to provide the neural network model with enough information to learn --and make use of-- distributed representations for these inputs.
rejected-papers
There is a clear reviewer consensus to reject this paper, so I am also recommending rejecting it. The paper is about an interesting and underused technique. However, ultimately the issue here is that the paper does not do a good enough job of explaining the contribution. I hope the reviews have given the authors some ideas on how to frame and sell this work better in the future. For instance, from my own reading of the abstract, I do not understand what this paper is trying to do and why it is valuable. Phrases such as "we exploit the sparsity" do not tell me why the paper is important to read or what it accomplishes, only how it accomplishes the seemingly elided contribution. I am forced to make assumptions about the goals and motivation that might not be correct. It is certainly true that the implicit one-hot representation of words most common in neural language models is not the only possibility and that random sparse vectors for words will also work reasonably well. I have even tried techniques like this myself, personally, in language modeling experiments and I believe others have as well, although I do not have a nice reference close to hand (some of the various Mikolov models use random hashing of n-grams and I believe related ideas are common in the maxent LM literature and elsewhere). So when the abstract says things like "We show that guaranteeing approximately equidistant vector representations for unique discrete inputs is enough to provide the neural network model with enough information to learn", my immediate reaction is to ask why this would be surprising or why it would matter. Based on the reviews, I believe these sorts of issues affect other parts of the manuscript as well. There needs to be a sharper argument that either presents a problem and its solution or presents a scientific question and its answer. In the first case, the problem should be well motivated, and in the second case, the question should not yet have been adequately answered by previous work and should be non-obvious. I should not have to read beyond the abstract to understand the accomplishments of this work. Moving to the conclusion and future work section, I can see the appeal of the future work in the second paragraph, but this work has not been done. The first paragraph is about how it is possible to use random projections to represent words, which is not something I think most researchers would question. Missing is a clear demonstration of the potential advantages of doing so.
train
[ "HylSNPqSJE", "HJefI0nNkN", "HkxC23loAX", "HJx-sqxiC7", "ryx8fMZiC7", "Byxbvrls07", "BklvEZwc3m", "rkgirHIOhm", "HyevkcjS2Q" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "What we mean is that while language modelling is the setting, the absolute best perplexity scores are secondary to our exploration. It's tricky to convey this idea because the narrative is that progress is measured in terms of SOTA scores, and this is (sometimes) counter-productive to learning anything about a spe...
[ -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "HJefI0nNkN", "ryx8fMZiC7", "HyevkcjS2Q", "rkgirHIOhm", "BklvEZwc3m", "iclr_2019_BJlMcjC5K7", "iclr_2019_BJlMcjC5K7", "iclr_2019_BJlMcjC5K7", "iclr_2019_BJlMcjC5K7" ]
iclr_2019_BJlSHsAcK7
Overcoming catastrophic forgetting through weight consolidation and long-term memory
Sequential learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby previously learned knowledge is erased during learning of new, disjoint knowledge. Here, we propose a new approach to sequential learning which leverages the recent discovery of adversarial examples. We use adversarial subspaces from previous tasks to enable learning of new tasks with less interference. We apply our method to sequentially learning to classify digits 0, 1, 2 (task 1), 4, 5, 6 (task 2), and 7, 8, 9 (task 3) in MNIST (disjoint MNIST task). We compare and combine our Adversarial Direction (AD) method with the recently proposed Elastic Weight Consolidation (EWC) method for sequential learning. We train each task for 20 epochs, which yields good initial performance (99.24% correct task 1 performance). After training task 2, and then task 3, both plain gradient descent (PGD) and EWC largely forget task 1 (task 1 accuracy 32.95% for PGD and 41.02% for EWC), while our combined approach (AD+EWC) still achieves 94.53% correct on task 1. We obtain similar results with a much more difficult disjoint CIFAR10 task (70.10% initial task 1 performance, 67.73% after learning tasks 2 and 3 for AD+EWC, while PGD and EWC both fall to chance level). We confirm qualitatively similar results for EMNIST with 5 tasks and under 3 variants of our approach. Our results suggest that AD+EWC can provide better sequential learning performance than either PGD or EWC.
rejected-papers
The authors propose an approach for continual learning of a sequence of tasks which augments the network with task-specific neurons which encode 'adversarial subspaces' and prevent interference and forgetting when new tasks are being learnt. The approach is novel and seems to work relatively well on a simple sequence of MNIST or CIFAR10 classes, and has certain advantages, such as not requiring any stored data. However, the reviewers agreed that the presentation of the method is quite confusing and that the paper does not provide adequate intuition, visualisation, or explanation of the claim that they are preventing forgetting through the intersection of adversarial subspaces. Moreover, there was a concern that the baselines were not strong enough to validate the approach.
train
[ "ByxNJHr52m", "SyxrL3Jc3m", "Hkx1mVZEn7", "rkg95oqhtm", "rJxz28U2FX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "The paper is about a new method for training neural networks in the continual learning setting, where tasks are presented in a sequential manner (and data from the previous task cannot be revisited). The method proposes a new architecture that adds task-parameters parameters to prevent catastrophic forgetting.\n\n...
[ 4, 4, 4, -1, -1 ]
[ 4, 4, 5, -1, -1 ]
[ "iclr_2019_BJlSHsAcK7", "iclr_2019_BJlSHsAcK7", "iclr_2019_BJlSHsAcK7", "rJxz28U2FX", "iclr_2019_BJlSHsAcK7" ]
iclr_2019_BJlVhsA5KX
Sequenced-Replacement Sampling for Deep Learning
We propose sequenced-replacement sampling (SRS) for training deep neural networks. The basic idea is to assign a fixed sequence index to each sample in the dataset. Once a mini-batch is randomly drawn in each training iteration, we refill the original dataset by successively adding samples according to their sequence index. Thus we carry out replacement sampling, but in a batched and sequenced way. In a sense, SRS could be viewed as a way of performing "mini-batch augmentation". It is particularly useful for a task with relatively few images per class, such as CIFAR-100. Together with a longer period of initial large learning rate, it significantly improves the classification accuracy in CIFAR-100 over the current state-of-the-art results. Our experiments indicate that training deeper networks with SRS is less prone to over-fitting. In the best case, we achieve an error rate as low as 10.10%.
rejected-papers
This paper proposes a new batching strategy for training deep nets. The idea is to have the properties of sampling with replacement while reducing the chance of not touching an example in a given epoch. Experimental results show that this can improve performance on one of the tasks considered. However, the reviewers consistently agree that the experimental validation of this work is much too limited. Furthermore, the motivation for the approach should be more clearly established.
train
[ "H1egDgd2hm", "S1lYj1Hcn7", "r1lhSo5w3m" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper suggests a new way of sampling mini-batches for training deep neural nets. The idea is to first index the samples then select the batches during training in a sequential way. The proposed method is tested on the CIFAR dataset and some improvement on the classification accuracy is reported.\n\nI find the ...
[ 3, 5, 4 ]
[ 4, 5, 4 ]
[ "iclr_2019_BJlVhsA5KX", "iclr_2019_BJlVhsA5KX", "iclr_2019_BJlVhsA5KX" ]
iclr_2019_BJlXUsR5KQ
Learning Neuron Non-Linearities with Kernel-Based Deep Neural Networks
The effectiveness of deep neural architectures has been widely supported in terms of both experimental and foundational principles. There is also clear evidence that the activation function (e.g. the rectifier and the LSTM units) plays a crucial role in the complexity of learning. Based on this remark, this paper discusses an optimal selection of the neuron non-linearity in a functional framework that is inspired from classic regularization arguments. A representation theorem is given which indicates that the best activation function is a kernel expansion in the training set, that can be effectively approximated over an opportune set of points modeling 1-D clusters. The idea can be naturally extended to recurrent networks, where the expressiveness of kernel-based activation functions turns out to be a crucial ingredient to capture long-term dependencies. We give experimental evidence of this property by a set of challenging experiments, where we compare the results with neural architectures based on state of the art LSTM cells.
rejected-papers
This paper proposes to automatically learn the form of the non-linearities in deep neural networks, which the reviewers noted to be an interesting albeit significantly studied direction. Overall, this paper falls just below the bar, with no reviewer really willing to champion it for acceptance. Reviewer 3 found the paper to be marginally above the acceptance threshold and found the insights provided in the paper (in Section 2) to be a neat and strong contribution. Reviewers 1-2, however, found the paper marginally below the bar and seemed confused by the presentation of the paper. They seemed to believe in the motivation and idea, but they found the paper hard to follow and not particularly clearly written. It would seem that the paper could significantly benefit from careful editing and restructuring to disambiguate contributions from motivation and existing literature. Also, the authors should provide clear justification for their design choices and modeling assumptions.
train
[ "S1xA6on62m", "rJl3u3O5nX", "rkgv-Z-I2X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper investigates the problem of designing the activation functions of neural networks with focus on recurrent architectures. The authors frame such problem as learning the activation functions in the space of square integrable functions by adding a regularization term penalizing the differential properties o...
[ 5, 4, 6 ]
[ 3, 3, 4 ]
[ "iclr_2019_BJlXUsR5KQ", "iclr_2019_BJlXUsR5KQ", "iclr_2019_BJlXUsR5KQ" ]
iclr_2019_BJl_VnR9Km
A Model Cortical Network for Spatiotemporal Sequence Learning and Prediction
In this paper we developed a hierarchical network model, called Hierarchical Prediction Network (HPNet) to understand how spatiotemporal memories might be learned and encoded in a representational hierarchy for predicting future video frames. The model is inspired by the feedforward, feedback and lateral recurrent circuits in the mammalian hierarchical visual system. It assumes that spatiotemporal memories are encoded in the recurrent connections within each level and between different levels of the hierarchy. The model contains a feed-forward path that computes and encodes spatiotemporal features of successive complexity and a feedback path that projects interpretation from a higher level to the level below. Within each level, the feed-forward path and the feedback path intersect in a recurrent gated circuit that integrates their signals as well as the circuit's internal memory states to generate a prediction of the incoming signals. The network learns by comparing the incoming signals with its prediction, updating its internal model of the world by minimizing the prediction errors at each level of the hierarchy in the style of {\em predictive self-supervised learning}. The network processes data in blocks of video frames rather than a frame-to-frame basis. This allows it to learn relationships among movement patterns, yielding state-of-the-art performance in long range video sequence predictions in benchmark datasets. We observed that hierarchical interaction in the network introduces sensitivity to memories of global movement patterns even in the population representation of the units in the earliest level. Finally, we provided neurophysiological evidence, showing that neurons in the early visual cortex of awake monkeys exhibit very similar sensitivity and behaviors. These findings suggest that predictive self-supervised learning might be an important principle for representational learning in the visual cortex.
rejected-papers
There was major disagreement between reviewers on this paper. Two reviewers recommend acceptance, and one firmly recommends rejection. The initial version of the manuscript was of poor quality in terms of exposition, as noted by all reviewers. However, the authors responded carefully and thoroughly to reviewer comments, and the major clarity and technical issues were resolved. I ask the PCs to note that the paper, as originally submitted, was not fit for acceptance, and reviewers noted major changes during the review process. I do believe this behavior should be discouraged, since it effectively requires reviewers to examine the paper twice. Regardless, the final overall score of the paper does not meet the bar for acceptance into ICLR.
train
[ "HJlgtsjCJV", "rkxQNsjRyN", "Syer38msk4", "rJx0ap4c1N", "SyxwDou63X", "HJgbiy9tAm", "r1emuJqFCQ", "SylbH1cKCm", "rJl2cTKtRm", "S1xdd6tY0X", "S1g4KnKYAQ", "HyxuQhYYAX", "HJloz_KK0X", "rygqq3Tc3X", "HkxSQjyS3Q" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We tried our best to submit the best version of the paper by the deadline, but we did have our limitations. We were blind to our own imperfection at the time. We did not submit the paper by the deadline with the intent of “finishing the paper during the rebuttal period”. We appreciated the reviewers’ suggestions a...
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "rkxQNsjRyN", "HkxSQjyS3Q", "rJx0ap4c1N", "S1g4KnKYAQ", "iclr_2019_BJl_VnR9Km", "r1emuJqFCQ", "SylbH1cKCm", "HkxSQjyS3Q", "S1xdd6tY0X", "rygqq3Tc3X", "HyxuQhYYAX", "SyxwDou63X", "iclr_2019_BJl_VnR9Km", "iclr_2019_BJl_VnR9Km", "iclr_2019_BJl_VnR9Km" ]
iclr_2019_BJlc6iA5YX
ACE: Artificial Checkerboard Enhancer to Induce and Evade Adversarial Attacks
The checkerboard phenomenon is one of the well-known visual artifacts in the computer vision field. The origins and solutions of checkerboard artifacts in the pixel space have been studied for a long time, but their effects on the gradient space have rarely been investigated. In this paper, we revisit the checkerboard artifacts in the gradient space, which turn out to be a weak point of a network architecture. We explore the image-agnostic property of gradient checkerboard artifacts and propose a simple yet effective defense method by utilizing the artifacts. We introduce our defense module, dubbed Artificial Checkerboard Enhancer (ACE), which induces adversarial attacks on designated pixels. This enables the model to deflect attacks by shifting only a single pixel in the image, with a remarkable defense rate. We provide extensive experiments to support the effectiveness of our work for various attack scenarios using state-of-the-art attack methods. Furthermore, we show that ACE is even applicable to large-scale datasets, including the ImageNet dataset, and can be easily transferred to various pretrained networks.
rejected-papers
The reviewers agreed that this work is not ready for publication at ICLR.
train
[ "SylhtNBFkN", "rylYb5iqCQ", "SyePSFiSAQ", "BkgTgd4Tam", "HJl5PX0-T7", "rkeyuETr3Q", "r1gqDA65jX", "r1ec3QQz3Q", "S1gYmGQAsX" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Finding the constraint on which our model is robust is crucial. Thank you for the suggestion.\n\nFor now, from our experiments we find that our model is robust against L0-based attacks. Our method works well for attacks that are not bounded within an epsilon ball, but are bounded in terms of the number of pixels ...
[ -1, -1, -1, 4, -1, 4, 6, -1, -1 ]
[ -1, -1, -1, 2, -1, 3, 1, -1, -1 ]
[ "rylYb5iqCQ", "HJl5PX0-T7", "BkgTgd4Tam", "iclr_2019_BJlc6iA5YX", "rkeyuETr3Q", "iclr_2019_BJlc6iA5YX", "iclr_2019_BJlc6iA5YX", "S1gYmGQAsX", "iclr_2019_BJlc6iA5YX" ]
iclr_2019_BJleciCcKQ
EXPLORATION OF EFFICIENT ON-DEVICE ACOUSTIC MODELING WITH NEURAL NETWORKS
Real-time speech recognition on mobile and embedded devices is an important application of neural networks. Acoustic modeling is the fundamental part of speech recognition and is usually implemented with long short-term memory (LSTM)-based recurrent neural networks (RNNs). However, the single-thread execution of an LSTM RNN is extremely slow on most embedded devices because the algorithm needs to fetch a large number of parameters from DRAM to compute each output sample. We explore a few acoustic modeling algorithms that can be executed very efficiently on embedded devices. These algorithms reduce the overhead of memory accesses using multi-timestep parallelization, which computes multiple output samples at a time by reading the parameters only once from DRAM. The algorithms considered are the quasi-RNNs (QRNNs), Gated ConvNets, and diagonalized LSTMs. In addition, we explore neural networks that add one-dimensional (1-D) convolution at each layer of these algorithms, which yields a very large performance increase for the QRNNs and Gated ConvNets. The experiments were conducted on two tasks: connectionist temporal classification (CTC)-based end-to-end speech recognition on the WSJ corpus, and phoneme classification on the TIMIT dataset. We not only significantly increase the execution speed but also obtain much higher accuracy, compared to LSTM RNN-based modeling. Thus, this work is applicable not only to embedded system-based implementations but also to server-based ones.
rejected-papers
In this work, the authors conduct experiments using variants of RNNs and Gated CNNs on a speech recognition task, motivated by the goal of reducing the computational requirements when deploying these models on mobile devices. While this is an important concern for practical deployment of ASR systems, the main concern expressed by the reviewers is that the work lacks novelty. Further, the authors chose to investigate CTC-based systems that predict characters. These models are not state-of-the-art for ASR, and as such it is hard to judge the impact of this work on a state-of-the-art embedded ASR system. Finally, it would be beneficial to replicate the results on a much larger corpus such as Librispeech or Switchboard. Based on the unanimous decision from the reviewers, the AC agrees that the work, in its present form, should be rejected.
train
[ "BJexv4Djn7", "ryxESBsKhX", "HklSyeodh7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper discusses applications of variants of RNNs and Gated CNN to acoustic modeling in embedded speech recognition systems, and the main focus of the paper is computational (memory) efficiency when we deploy the system. The paper well describes the problem of the current LSTM, especially focusing on the recur...
[ 4, 4, 4 ]
[ 5, 4, 4 ]
[ "iclr_2019_BJleciCcKQ", "iclr_2019_BJleciCcKQ", "iclr_2019_BJleciCcKQ" ]
iclr_2019_BJll6o09tm
Padam: Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks
Adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, despite their nice property of fast convergence, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum in training deep neural networks. This leaves how to close the generalization gap of adaptive gradient methods an open problem. In this work, we show that adaptive gradient methods such as Adam and Amsgrad are sometimes "over-adapted". We design a new algorithm, called the Partially adaptive momentum estimation method (Padam), which unifies Adam/Amsgrad with SGD by introducing a partial adaptive parameter p, to achieve the best of both worlds. Experiments on standard benchmarks show that Padam can maintain a convergence rate as fast as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks. These results suggest that practitioners should pick up adaptive gradient methods once again for faster training of deep neural networks.
rejected-papers
This paper proposes a simple modification of the Adam optimizer, introducing a hyper-parameter 'p' (with value in the range [0,1/2]) parameterizing the parameter update: theta_new = theta_old - lr * m / v^p, where p=1/2 falls back to the standard Adam/Amsgrad optimizer, and p=0 falls back to a variant of SGD with momentum. The authors motivate the method by pointing out that: - Through the value of 'p', one can interpolate between SGD with momentum and Adam/Amsgrad. By choosing a value of 'p' smaller than 0.5, one can therefore perform optimization that is 'partially adaptive'. - The method shows good empirical performance. The paper contains an inaccuracy, which we hope will be resolved before the final version. The authors argue that the 1/sqrt(v) term in Adam results in a lower learning rate, that the effective learning rate "easily explodes" (section 3) because of this term, and that a "more aggressive" learning rate is more appropriate. This last point is false; the value of 1/sqrt(v) can be smaller or larger than 1 depending on the value of 'v', and a decrease in the value of 'p' can result in either an increase or a decrease in the effective learning rate, depending on the value of v. The value of 'v' is a function of the scale of the loss function, which can really be arbitrary. (In the case of very high-dimensional predictions, for example, the scale of the loss function is often proportional to the dimensionality of the variable to be modeled, which can be arbitrarily large; e.g., in image or video modeling the loss function tends to be of a much larger scale than in classification.) The authors promise to include a comparison to AdamW [Loshchilov, 2017] that includes tuning of the weight decay parameter. The lack of these experiments makes it more difficult to draw a conclusion regarding the performance relative to AdamW. However, the methods offer potentially orthogonal (and combinable) advantages. [Loshchilov, 2017] https://arxiv.org/pdf/1711.05101.pdf
test
[ "SJg-A0PUAm", "SJxoiG0SAX", "HkeE6748nm", "r1g-Hi6SAX", "HJl_biarAm", "BkeO0q6r0X", "B1l_Ztqc3X", "Syl10Luwi7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your suggestion and increasing your score. We will try using your suggested learning rate schedule. As for weight decay factor for AdamW, we used the parameter suggested in the original paper/github repository. You are right, it could be better since the test environment is not exactly the ...
[ -1, -1, 6, -1, -1, -1, 6, 9 ]
[ -1, -1, 4, -1, -1, -1, 4, 3 ]
[ "SJxoiG0SAX", "HJl_biarAm", "iclr_2019_BJll6o09tm", "Syl10Luwi7", "HkeE6748nm", "B1l_Ztqc3X", "iclr_2019_BJll6o09tm", "iclr_2019_BJll6o09tm" ]
iclr_2019_BJlpCsC5Km
Learning Gibbs-regularized GANs with variational discriminator reparameterization
We propose a novel approach to regularizing generative adversarial networks (GANs) leveraging learned {\em structured Gibbs distributions}. Our method consists of reparameterizing the discriminator to be an explicit function of two densities: the generator PDF q and a structured Gibbs distribution ν. Leveraging recent work on invertible pushforward density estimators, this reparameterization is made possible by assuming the generator is invertible, which enables the analytic evaluation of the generator PDF q. We further propose optimizing the Jeffrey divergence, which balances mode coverage with sample quality. The combination of this loss and reparameterization allows us to effectively regularize the generator by imposing structure from domain knowledge on ν, as in classical graphical models. Applying our method to a vehicle trajectory forecasting task, we observe that we are able to obtain quantitatively superior mode coverage as well as better-quality samples compared to traditional methods.
rejected-papers
The paper proposes to define the GAN discriminator as an explicit function of an invertible generator density and a structured Gibbs distribution to tackle the problems of spurious modes and mode collapse. The resulting model is similar to R2P2, i.e., it can be seen as adding an adversarial component to R2P2, and shows competitive (but not better) performance. Reviewers agree that this limits the novelty of the contribution, and that the paper would be improved by a more extensive empirical evaluation.
train
[ "Byluja-qAm", "rJl5v6b9Am", "H1l_R2b9R7", "rJera-pdRQ", "HklfE4KKh7", "SyeCe585nm", "SyxS1DdK37" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. Please note that we have revised\nthe manuscript to address them.\n\n1. typo\n=======\n\nYes, the extra sup is a typo. Thanks for pointing it out.\n\n2. Section 2.3 is confusing\n===========================\n\nWe apologize for the confusion. We have tweaked this section in the revis...
[ -1, -1, -1, -1, 5, 5, 4 ]
[ -1, -1, -1, -1, 5, 3, 4 ]
[ "SyxS1DdK37", "rJera-pdRQ", "SyeCe585nm", "HklfE4KKh7", "iclr_2019_BJlpCsC5Km", "iclr_2019_BJlpCsC5Km", "iclr_2019_BJlpCsC5Km" ]
iclr_2019_BJlyznAcFm
Advocacy Learning
We introduce advocacy learning, a novel supervised training scheme for classification problems. This training scheme applies to a framework consisting of two connected networks: 1) the Advocates, composed of one subnetwork per class, which take the input and provide a convincing class-conditional argument in the form of an attention map, and 2) a Judge, which predicts the input's class label based on these arguments. Each Advocate aims to convince the Judge that the input example belongs to their corresponding class. In contrast to a standard network, in which all subnetworks are trained to jointly cooperate, we train the Advocates to competitively argue for their class, even when the input belongs to a different class. We also explore a variant, honest advocacy learning, where the Advocates are only trained on data corresponding to their class. Applied to several different classification tasks, we show that advocacy learning can lead to small improvements in classification accuracy over an identical supervised baseline. Through a series of follow-up experiments, we analyze when and how Advocates improve discriminative performance. Though it may seem counter-intuitive, a framework in which subnetworks are trained to competitively provide evidence in support of their class shows promise, performing as well as or better than standard approaches. This provides a foundation for further exploration into the effect of competition and class-conditional representations.
rejected-papers
The paper presents a novel architecture, reminiscent of mixtures-of-experts, composed of a set of advocate networks providing an attention map to a separate "judge" network. Reviewers raised several concerns, including lack of theoretical justification, potential scaling limitations, and weak experimental results. The authors responded to several of the concerns, but this did not convince the reviewers. The reviewer with the highest score was also the least confident, so overall I recommend rejecting the paper.
train
[ "Byezma9tC7", "rkgZR25t0m", "rJxw5sqFAQ", "HylFBoqFC7", "BkxmyeY92X", "BJe82CNjim", "r1e-mhjfim" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank you for your feedback, and are glad you liked the paper. To answer some of your comments:\n1. Why the Honest Advocate outperformed the standard Advocate on MIMIC\n\nAnswering this question was our main motivation for including the Imbalanced and Binary MNIST problems. MIMIC differs from MNIST and FMNIST i...
[ -1, -1, -1, -1, 4, 4, 8 ]
[ -1, -1, -1, -1, 4, 4, 2 ]
[ "r1e-mhjfim", "BJe82CNjim", "BkxmyeY92X", "BkxmyeY92X", "iclr_2019_BJlyznAcFm", "iclr_2019_BJlyznAcFm", "iclr_2019_BJlyznAcFm" ]
iclr_2019_BJx1SsAcYQ
Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference
To realize the promise of ubiquitous embedded deep network inference, it is essential to seek the limits of energy and area efficiency. To this end, low-precision networks offer tremendous promise because both energy and area scale down quadratically with the reduction in precision. Here, for the first time, we demonstrate ResNet-18, ResNet-34, ResNet-50, ResNet-152, Inception-v3, densenet-161, and VGG-16bn networks on the ImageNet classification benchmark that, at 8-bit precision, exceed the accuracy of the full-precision baseline networks after one epoch of finetuning, thereby leveraging the availability of pretrained models. We also demonstrate ResNet-18, ResNet-34, and ResNet-50 4-bit models that match the accuracy of the full-precision baseline networks -- the highest scores to date. Surprisingly, the weights of the low-precision networks are very close (in cosine similarity) to the weights of the corresponding baseline networks, making training from scratch unnecessary. We find that gradient noise due to quantization during training increases with reduced precision, and seek ways to overcome this noise. The number of iterations required by stochastic gradient descent to achieve a given training error is related to the square of (a) the distance of the initial solution from the final solution plus (b) the maximum variance of the gradient estimates. Drawing inspiration from this observation, we (a) reduce the solution distance by starting with pretrained fp32-precision baseline networks and fine-tuning, and (b) combat the noise introduced by quantizing weights and activations during training by using larger batches along with matched learning rate annealing. Sensitivity analysis indicates that these techniques, coupled with proper activation function range calibration, offer a promising heuristic to discover low-precision networks, if they exist, close to fp32-precision baseline networks.
rejected-papers
This paper proposes methods to improve the performance of low-precision neural networks. The reviewers raised concerns about lack of novelty. Due to insufficient technical contribution, the recommendation is rejection.
train
[ "Byeo_Qm_lE", "BkgjGIRvAX", "HJxvf8_d1N", "BkefzsGI14", "rygkvG2jn7", "rkxJpxdMh7", "S1lDfBqCpm", "r1xeDeoCam", "BkxQV2KCTQ", "SJgMvEDa3Q" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Finding low precision networks for efficient inference was an important open problem in deep learning at both 8- (Jacob et al., 2018) and 4-bit precision (Choi et al, 2018), until now. \n\nA paper published at CVPR earlier this year by researchers at Google (http://openaccess.thecvf.com/content_cvpr_2018/papers/Ja...
[ -1, -1, -1, -1, 4, 6, -1, -1, -1, 5 ]
[ -1, -1, -1, -1, 5, 3, -1, -1, -1, 4 ]
[ "iclr_2019_BJx1SsAcYQ", "BkxQV2KCTQ", "BkefzsGI14", "S1lDfBqCpm", "iclr_2019_BJx1SsAcYQ", "iclr_2019_BJx1SsAcYQ", "rygkvG2jn7", "rkxJpxdMh7", "SJgMvEDa3Q", "iclr_2019_BJx1SsAcYQ" ]
iclr_2019_BJx9f305t7
W2GAN: RECOVERING AN OPTIMAL TRANSPORT MAP WITH A GAN
Understanding and improving Generative Adversarial Networks (GANs) using notions from Optimal Transport (OT) theory has been a successful area of study, originally established by the introduction of the Wasserstein GAN (WGAN). An increasing number of GANs incorporate OT for improving their discriminators, but that is so far the sole way for the two domains to cross-fertilize. In this work we address the converse question: is it possible to recover an optimal map in a GAN fashion? To achieve this, we build a new model relying on the second Wasserstein distance. This choice enables the use of many results from the OT community. In particular, we may completely describe the dynamics of the generator during training. In addition, experiments show that practical uses of our model abide by the rule of evolution we describe. As an application, our generator may be considered a new way of computing an optimal transport map. It is competitive in low dimensions with standard and deterministic ways of approaching the same problem. In high dimensions, the fact that it is a GAN-style method makes it more powerful than other methods.
rejected-papers
The paper introduces W2GAN, a method for training GANs by minimizing the 2-Wasserstein distance, computing an optimal transport (OT) map between distributions. However, the difference from previous works is not significant or clearly clarified, as pointed out by some of the reviewers. The advantage of W2GAN over the standard WGAN is also only superficially explained and not supported by strong empirical evidence.
train
[ "HygCeZ17J4", "Bk7EJWJXkE", "HkgYTx1myE", "Syluu219R7", "SJgFuC0NRX", "BJgkfCRVRQ", "H1gpnpREAX", "rkxEW30V0X", "BJxm4oA4R7", "HJla6JS52m", "SylozPPPnQ", "BkefZGUDnm" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewer, \n\nWe would appreciate that you reconsider your decision given the updated version of the paper. \n\nThanks a lot.", "Dear reviewer, \n\nWe would appreciate that you reconsider your decision given the updated version of the paper. \n\nThanks a lot.", "Dear reviewer, \n\nWe would appreciate that...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "BkefZGUDnm", "SylozPPPnQ", "HJla6JS52m", "iclr_2019_BJx9f305t7", "BkefZGUDnm", "SylozPPPnQ", "SylozPPPnQ", "HJla6JS52m", "iclr_2019_BJx9f305t7", "iclr_2019_BJx9f305t7", "iclr_2019_BJx9f305t7", "iclr_2019_BJx9f305t7" ]
iclr_2019_BJxLH2AcYX
Unsupervised Multi-Target Domain Adaptation: An Information Theoretic Approach
Unsupervised domain adaptation (uDA) models focus on pairwise adaptation settings where there is a single, labeled, source and a single target domain. However, in many real-world settings one seeks to adapt to multiple, but somewhat similar, target domains. Applying pairwise adaptation approaches to this setting may be suboptimal, as they would fail to leverage shared information among the multiple domains. In this work we propose an information theoretic approach for domain adaptation in the novel context of multiple target domains with unlabeled instances and one source domain with labeled instances. Our model aims to find a shared latent space common to all domains, while simultaneously accounting for the remaining private, domain-specific factors. Disentanglement of shared and private information is accomplished using a unified information-theoretic approach, which also serves to provide a stronger link between the latent representations and the observed data. The resulting single model, accompanied by an efficient optimization algorithm, allows simultaneous adaptation from a single source to multiple target domains. We test our approach on three publicly-available datasets, showing that it outperforms several popular domain adaptation methods.
rejected-papers
The paper proposes the unique setting of adapting to multiple target domains. The idea is that their approach may leverage commonality across domains to improve adaptation while maintaining domain-specific parameters where needed. This idea and general approach are interesting and worth exploring. The authors' rebuttal and paper edits significantly improved the draft and clarified some details missing from the original presentation. There is an ablation study showing that each part of the model contributes to the overall performance. However, the approach provides only modest improvements over comparison methods, which were not designed to learn from multiple target domains. In addition, comparison against the latest approaches is missing, so it is likely that the performance reported here is below the state of the art. Overall, given the modest experimental gains combined with the incremental improvement over single-source information-theoretic methods, this paper is not yet ready for publication.
train
[ "Byeo_L0dCQ", "B1elng0OCQ", "S1gfdmR_0m", "B1eo5rRORm", "S1exkKEc2X", "SkeiWEs52X", "rJeOjiPF3X" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the review! To improve the quality of the paper, we have made several adjustments to our paper in accordance with your review.\n\n\"The contribution is limited since the techniques involved are very common in the domain adaptation\".\n\nWe clarify our contribution bellow:\n- We propose a novel inform...
[ -1, -1, -1, -1, 4, 6, 5 ]
[ -1, -1, -1, -1, 4, 5, 4 ]
[ "rJeOjiPF3X", "iclr_2019_BJxLH2AcYX", "SkeiWEs52X", "S1exkKEc2X", "iclr_2019_BJxLH2AcYX", "iclr_2019_BJxLH2AcYX", "iclr_2019_BJxLH2AcYX" ]
iclr_2019_BJxOHs0cKm
Identifying Generalization Properties in Neural Networks
While it has not yet been proven, empirical evidence suggests that model generalization is related to local properties of the optima which can be described via the Hessian. We connect model generalization with the local property of a solution under the PAC-Bayes paradigm. In particular, we prove that model generalization ability is related to the Hessian, the higher-order "smoothness" terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters. Guided by the proof, we propose a metric to score the generalization capability of the model, as well as an algorithm that optimizes the perturbed model accordingly.
rejected-papers
This paper proposes a generalization metric that depends on the Lipschitz constant of the Hessian. Pros: the paper has some nice experiments correlating its Hessian-based generalization metric with the generalization gap. Cons: the paper does not compare its results with existing generalization bounds, despite the substantial work now in the area. It is not clear whether existing generalization bounds fail to capture this phenomenon with different batch sizes/learning rates, or whether the explicit dependence on the Lipschitz constant of the Hessian is necessary. The bound by itself is also weak because of its dependence on the number of parameters 'm'. The paper is poorly written, and all reviewers complain about its readability. I suggest the authors address the concerns of the reviewers before submitting again.
train
[ "Hkxn41YnAm", "Hkl3fxjisX", "B1xHWLbjAX", "rJgJOxhOC7", "SJxLkG9OAQ", "rkeEh0KuC7", "HyelTxG53m", "rJlxUUUsj7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Q: My claim that you have an implicit log-uniform prior comes from ...The width of the interval in log-space, and hence the KL-divergence from posterior to prior, is then independent of the size of w.\n\nA: Thanks for the detailed explanation. We agree, if the posterior is also log-uniform then dependency on w get...
[ -1, 5, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, 4, 4 ]
[ "B1xHWLbjAX", "iclr_2019_BJxOHs0cKm", "rJgJOxhOC7", "rJlxUUUsj7", "Hkl3fxjisX", "HyelTxG53m", "iclr_2019_BJxOHs0cKm", "iclr_2019_BJxOHs0cKm" ]
iclr_2019_BJxPk2A9Km
Learning What to Remember: Long-term Episodic Memory Networks for Learning from Streaming Data
The current generation of memory-augmented neural networks has limited scalability, as they cannot efficiently process data that are too large to fit in the external memory storage. One example of this is the lifelong learning scenario, where the model receives an unlimited-length data stream as input which contains a vast majority of uninformative entries. We tackle this problem by proposing a memory network fit for the long-term lifelong learning scenario, which we refer to as Long-term Episodic Memory Networks (LEMN), featuring an RNN-based retention agent that learns to replace less important memory entries based on a retention probability generated for each entry; the agent learns to identify data instances of generic importance relative to other memory entries, as well as their historical importance. Such learning of the retention agent allows our long-term episodic memory network to retain memory entries of generic importance for a given task. We validate our model on a path-finding task as well as synthetic and real question answering tasks, on which our model achieves significant improvements over memory-augmented networks with rule-based memory scheduling, as well as an RL-based baseline that does not consider the relative or historical importance of the memory.
rejected-papers
Pros: - This is an interesting and relevant topic - It is well motivated and mostly clear Cons: - The motivation, large amounts of data such as occur in lifelong learning, is not well examined in the evaluation which focuses on quite small problems. For an example of work which addresses the lifelong memory management issue (though does not learn a memory management policy) see [1]. - In general the evaluation is not adequate to the claims. - Reviewer 2 is concerned with the use of a bi-directional RNN for the comparison of memory entries since it may overfit to order. - Reviewer 1 is somewhat concerned with novelty over other memory management schemes. [1] Scalable Recollections for Continual Lifelong Learning. https://arxiv.org/pdf/1711.06761.pdf
train
[ "SylVaKxFC7", "BkgKLulYAX", "H1eTt_gY0X", "Bye_Z58cnQ", "BkeQQ3N7nQ", "SkgHfWgLoQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your thoughtful comments regarding our paper.\n\n“However, as we know RNN models including GRU are suitable for those data that have sequence order. More specifically, bidirectional RNN models are used when we want to obtain not only the impact from beginning to end but also the impact from the end t...
[ -1, -1, -1, 5, 4, 4 ]
[ -1, -1, -1, 5, 4, 4 ]
[ "BkeQQ3N7nQ", "Bye_Z58cnQ", "SkgHfWgLoQ", "iclr_2019_BJxPk2A9Km", "iclr_2019_BJxPk2A9Km", "iclr_2019_BJxPk2A9Km" ]
iclr_2019_BJxRVnC5Fm
Mean Replacement Pruning
Pruning units in a deep network can help speed up inference and training as well as reduce the size of the model. We show that bias propagation is a pruning technique which consistently outperforms the common approach of merely removing units, regardless of the architecture and the dataset. We also show how a simple adaptation to an existing scoring function allows us to select the best units to prune. Finally, we show that the units selected by the best-performing scoring functions are somewhat consistent over the course of training, implying that the dead parts of the network emerge during training.
rejected-papers
This paper proposes an approach to pruning units in a deep neural network while training is in progress. The idea is to (1) use a specific "scoring function" (the absolute-valued Taylor expansion of the loss) to identify the best units to prune, (2) computing the mean activations of the units to be pruned on a small sample of training data, (3) adding the mean activations multiplied by the outgoing weights into the biases of the next layer's units, and (4) removing the pruned units from the network. Extensive experiments show that this approach to pruning does less immediate damage than the more common zero-replacement approach, that this advantage remains (but is much smaller) after fine-tuning, and that the importance of units tends not to change much during training. The reviewers liked the quality of the writing and the extensive experimentation, but even after discussion and revision had concerns about the limited novelty of the approach, the fact that the proposed approach is incompatible with batch normalization (which severely limits the range of architectures to which the method may be applied), and were concerned that the proposed method has limited impact after fine-tuning.
train
[ "HklPw2PaJE", "r1xlzq9inm", "HyeTYxm3yN", "rJxMPRvD14", "BJxNc_wv14", "H1eEkaWDJE", "Hkggp3CrkE", "SkxaUZ0rA7", "SygQNZASAQ", "S1xub-0H0m", "SJxKlwK6hQ", "rylsv4pjhX" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comment, although we believe one should not underestimate the impact that reducing the number of required fine-tuning steps might have.\n\nIndeed, in the case of incremental pruning, fine-tuning steps can take a significant portion of the total training time and reducing them is desirable.\n\nAn...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "HyeTYxm3yN", "iclr_2019_BJxRVnC5Fm", "H1eEkaWDJE", "BJxNc_wv14", "S1xub-0H0m", "Hkggp3CrkE", "SkxaUZ0rA7", "r1xlzq9inm", "rylsv4pjhX", "SJxKlwK6hQ", "iclr_2019_BJxRVnC5Fm", "iclr_2019_BJxRVnC5Fm" ]
iclr_2019_BJxYEsAqY7
FEED: Feature-level Ensemble Effect for knowledge Distillation
This paper proposes a versatile and powerful training algorithm named Feature-level Ensemble Effect for knowledge Distillation (FEED), which is inspired by the work on factor transfer. Factor transfer is one of the knowledge transfer methods that improve the performance of a student network with a strong teacher network. It transfers the knowledge of a teacher at the feature-map level using a high-capacity teacher network, and our training algorithm FEED is an extension of it. FEED aims to transfer ensemble knowledge, using either multiple teachers in parallel or multiple training sequences. Adapting the peer-teaching framework, we introduce a couple of training algorithms that transfer ensemble knowledge to the student at the feature-map level, both of which help the student network find more generalized solutions in the parameter space. Experimental results on CIFAR-100 and ImageNet show that our method, FEED, has clear performance enhancements, without introducing any additional parameters or computations at test time.
rejected-papers
The paper describes knowledge distillation methods. As noted by all reviewers, the methods are very similar to the prior art, so there is not enough novelty for the paper to be accepted. The reviewers' opinion didn't change after the rebuttal.
test
[ "rJeOGv5pCX", "r1xhmXmKRQ", "rJgCQfmY0m", "HJlQjZQYAX", "rJgiuVx-6Q", "BJg-eYsw3m", "HkehgnZ8nQ" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your reply.\n\nSo as far as I can tell, pFeed is an ensemble of activations from teachers (or some function thereof) . This is effectively FitNets (https://arxiv.org/abs/1412.6550)/attention transfer with an ensemble of teachers.\n\nIt is interesting to know that the BAN results aren't reproducible. ...
[ -1, -1, -1, -1, 5, 4, 4 ]
[ -1, -1, -1, -1, 3, 3, 4 ]
[ "r1xhmXmKRQ", "HkehgnZ8nQ", "BJg-eYsw3m", "rJgiuVx-6Q", "iclr_2019_BJxYEsAqY7", "iclr_2019_BJxYEsAqY7", "iclr_2019_BJxYEsAqY7" ]
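The feature-level ensemble transfer described in the FEED abstract above can be caricatured as a task loss plus one feature-matching term per teacher. A minimal numpy sketch, assuming an L2 matching loss as a stand-in for the paper's factor-transfer-style loss (the function name and the weighting `beta` are hypothetical):

```python
import numpy as np

def feed_loss(task_loss, student_feat, teacher_feats, beta=1.0):
    """Task loss plus a feature-level distillation term per teacher
    (parallel-teacher variant; the L2 form is an assumed stand-in)."""
    distill = sum(np.mean((student_feat - t) ** 2) for t in teacher_feats)
    return task_loss + beta * distill

# Toy feature maps flattened to vectors.
student = np.zeros(8)
teachers = [np.ones(8), -np.ones(8)]

loss = feed_loss(0.5, student, teachers)   # 0.5 + 1.0 + 1.0 = 2.5
```

At test time only the student is kept, which is why no extra parameters or computation are introduced there.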
iclr_2019_BJxbYoC9FQ
Classifier-agnostic saliency map extraction
Extracting saliency maps, which indicate parts of the image important to classification, requires many tricks to achieve satisfactory performance when using classifier-dependent methods. Instead, we propose classifier-agnostic saliency map extraction, which finds all parts of the image that any classifier could use, not just one given in advance. We observe that the proposed approach extracts higher quality saliency maps and outperforms existing weakly-supervised localization techniques, setting a new state-of-the-art result on the ImageNet dataset.
rejected-papers
{Paper 418: Classifier-agnostic saliency map extraction; average rating 4.33} Strengths: The paper is well-written and the method is simple, effective, and well-justified. Weaknesses (points 1 and 2 were particularly salient for the decision): 1. The introduction, in particular the last row of pg 1, implies that this work is the first to show that a class-agnostic saliency estimation method can produce higher-quality saliency maps than class-dependent ones. However, Fan et al. have already shown this. For this reason, AR1 recommended that the authors reword the introduction to reflect prior work on this aspect, but the authors declined to do so. The AC would have liked to see a discussion of how the different points of view of the two works (robustness to corruption vs class-agnosticism) both address the same issue (poor segmentation of the salient image regions). 2. The work of Fan et al. takes a very similar approach and a deeper comparison is needed. While the authors dedicated two paragraphs of discussion to this work, they should have gone further. For example, the work of Fan et al. uses a very simple saliency map extraction network and it's unclear how much this impacts their performance when compared to the proposed method, which uses ResNet50. The AC agrees with the authors that re-implementing the method of Fan et al. is asking a lot, but a discussion of the potential impact would have sufficed. 3. The authors didn't mention at all the vast body of work on salient object detection (for a somewhat recent review see Borji et al. "Salient object detection: A benchmark." IEEE TIP). The differences to this line of work should have been discussed.
Major points of contention: (1) the discussion of differences between the proposed method and the method of Fan et al., and (2) the fairness of the comparison to Fan et al. AR1 felt that the paper was deficient on both counts (AR2 had similar concerns) and the authors disagreed, arguing that the discussion was complete and the quantitative comparison fair. The AC was sympathetic to these concerns and found the authors' responses to be dismissive of them. In particular, the AC agrees that the paper, as currently organized, minimizes the degree to which the work is derived from Fan et al. Consensus: the reviewers reached a consensus that the paper should be rejected.
train
[ "S1lZ8pPWgV", "Hye0fX7qJE", "rkgzKlD7kE", "B1lkOIev0Q", "B1elk8lvA7", "S1xp-Hxv07", "HyxqVGlT37", "Sklh9zM237", "r1gvfkii27" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the clarifications.\n\nI am left with the impression that the improvements on Fan et al. are more like \"tricks\" than something fundamental that next algorithms will build on.\n\nAlso, I would indicate in the paper that the results from previous works are not reproduced but the numbers are copied from ...
[ -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "B1elk8lvA7", "rkgzKlD7kE", "B1lkOIev0Q", "r1gvfkii27", "Sklh9zM237", "HyxqVGlT37", "iclr_2019_BJxbYoC9FQ", "iclr_2019_BJxbYoC9FQ", "iclr_2019_BJxbYoC9FQ" ]
iclr_2019_BJxmXhRcK7
TENSOR RING NETS ADAPTED DEEP MULTI-TASK LEARNING
Recent deep multi-task learning (MTL) has seen success in alleviating data scarcity for some tasks by utilizing domain-specific knowledge from related tasks. Nonetheless, several major issues of deep MTL, including the effectiveness of sharing mechanisms, the efficiency of model complexity and the flexibility of network architectures, still remain largely unaddressed. To this end, we propose a novel generalized latent-subspace based knowledge sharing mechanism for linking task-specific models, namely tensor ring multi-task learning (TRMTL). TRMTL has a highly compact representation, and it is very effective in transferring task-invariant knowledge while being highly flexible in learning task-specific features, successfully mitigating the dilemma of both negative transfer in lower layers and under-transfer in higher layers. Under TRMTL, it is feasible for each task to have heterogeneous input data dimensionality or distinct feature sizes at different hidden layers. Experiments on a variety of datasets demonstrate our model is capable of significantly improving each single task's performance, and is particularly favourable in scenarios where some of the tasks have insufficient data.
rejected-papers
AR1 is concerned about the poor organisation of this paper. AR2 is concerned about the similarity between TRL and TR. The authors show some empirical results to support their intuition; however, no theoretical guarantees are provided regarding TRL's superiority. Moreover, experiments on the Taskonomy dataset as well as on RNNs have not been demonstrated, thus AR2 did not increase his/her score. AR3 is the most critical and finds the clarity and explanations not ready for publication. The AC agrees with the reviewers that the proposed idea has some merit, e.g. the reduction in the number of parameters is a good point of this idea. However, the AC urges the authors to seek a non-trivial theoretical analysis for this method. Otherwise, it is just an intelligent application paper and, as such, cannot be accepted to ICLR.
train
[ "SJg6gJbN1V", "H1er7SL92Q", "rJxhhaVQk4", "r1x1qqWYCX", "r1eCaY-Y0X", "rkly-fbYAQ", "rJgp8MmuCX", "HJe-_2fOAX", "BJeJCol32X", "r1xm4z8cnQ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your response. We provide totally different perspective of our contribution and novelty. Please give us one more opportunity by reading the following explanation:\n\n1 ''(1) Novelty is somewhat limited and incremental.''\nThe most significant novelty of our paper is the information sharing mechanisi...
[ -1, 5, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "rJxhhaVQk4", "iclr_2019_BJxmXhRcK7", "rJgp8MmuCX", "r1xm4z8cnQ", "r1xm4z8cnQ", "iclr_2019_BJxmXhRcK7", "H1er7SL92Q", "BJeJCol32X", "iclr_2019_BJxmXhRcK7", "iclr_2019_BJxmXhRcK7" ]
iclr_2019_BJzmzn0ctX
Scalable Neural Theorem Proving on Knowledge Bases and Natural Language
Reasoning over text and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. Transducing text to logical forms which can be operated on is a brittle and error-prone process. Operating directly on text by jointly learning representations and transformations thereof by means of neural architectures that lack the ability to learn and exploit general rules can be very data-inefficient and not generalise correctly. These issues are addressed by Neural Theorem Provers (NTPs) (Rocktäschel & Riedel, 2017), neuro-symbolic systems based on a continuous relaxation of Prolog’s backward chaining algorithm, where symbolic unification between atoms is replaced by a differentiable operator computing the similarity between their embedding representations. In this paper, we first propose Neighbourhood-approximated Neural Theorem Provers (NaNTPs), consisting of two extensions to NTPs, namely a) a method for drastically reducing the previously prohibitive time and space complexity during inference and learning, and b) an attention mechanism for improving the rule learning process, making them usable on real-world datasets. Then, we propose a novel approach for jointly reasoning over KB facts and textual mentions, by embedding them in a shared embedding space. The proposed method is able to extract rules and provide explanations—involving both textual patterns and KB relations—from large KBs and text corpora. We show that NaNTPs perform on par with NTPs at a fraction of the cost, and can achieve competitive link prediction results on challenging large-scale datasets, including WN18, WN18RR, and FB15k-237 (with and without textual mentions), while being able to provide explanations for each prediction and extract interpretable rules.
rejected-papers
This paper focuses on scaling up neural theorem provers, a link prediction system that combines backward chaining with neural embedding of facts, but does not scale to most real-world knowledge bases. The authors introduce a nearest-neighbor-search-based method to reduce the time/space complexity, along with an attention mechanism that improves the training. With these extensions, they scale NTP to modern benchmarks for the task, including ones that combine text and knowledge bases, thus providing explanations for such models. The reviewers and the AC note the following as the primary concerns of the paper: (1) the novelty of the contributions is somewhat limited, as nearest neighbor search and attention are both well-known strategies, as is embedding text+facts jointly, (2) there are several issues in the evaluation, in particular around analysis of the benefits of the proposed work on new datasets. There were a number of other potential weaknesses, such as the performance on some benchmarks (FB15k) and the clarity and writing quality of a few sections. The authors provided significant revisions to the paper that addressed many of the clarity and evaluation concerns, along with providing sufficient comments to better contextualize some of the concerns. However, the concerns with novelty and analysis of the results still hold. Reviewer 3 mentions that it is still unclear in the discussion why the accuracy of the proposed approach matches/outperforms that of NTP, i.e. why there is not a tradeoff. Reviewer 4 also finds the analysis lacking, and feels that the differences between the proposed work and the single-link approaches, in terms of where each excels, are described in insufficient detail. Reviewer 4 focused more on the simplicity of the text encoding, which restricts the novelty, as more sophisticated text embedding approaches are commonplace.
Overall, the reviewers raised different concerns, and although all of them appreciated the need for this work and the revisions provided by the authors, ultimately feel that the paper did not quite meet the bar.
train
[ "BkeJxH_myE", "S1x5WfY_27", "HJxcj8ryJE", "HkxqI2-oAm", "B1eBEhWs0m", "BJly-o6tRQ", "rJg7NahKAm", "HyggagnY0Q", "ByeCDPImCm", "r1gk0W7XA7", "Hkglsbm7RQ", "SJl9uZQXRX", "BklwIJXm0m", "SJlb454867", "HkllwPvs2X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I appreciate the authors' effort in revising the paper and including additional experimental results. However, I feel that some of my concerns remain. There are obvious questions that beg for insights yet are simply ignored by the authors. For example, the computational speedup due to restricting the search space ...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "SJl9uZQXRX", "iclr_2019_BJzmzn0ctX", "SJlb454867", "BJly-o6tRQ", "BJly-o6tRQ", "rJg7NahKAm", "ByeCDPImCm", "iclr_2019_BJzmzn0ctX", "BklwIJXm0m", "S1x5WfY_27", "HkllwPvs2X", "HkllwPvs2X", "SJlb454867", "iclr_2019_BJzmzn0ctX", "iclr_2019_BJzmzn0ctX" ]
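The complexity reduction described in the NaNTP abstract above comes from restricting differentiable unification to the nearest neighbours of the goal embedding rather than scoring all facts. A toy numpy sketch of that retrieval step (exact top-k shown for clarity; in practice an approximate nearest-neighbour index would be used, and the function name here is hypothetical):

```python
import numpy as np

def topk_unification_candidates(goal, fact_embeddings, k):
    """Return indices of the k facts whose embeddings are closest to
    the goal, using negative squared distance as the similarity."""
    scores = -np.sum((fact_embeddings - goal) ** 2, axis=1)
    return np.argsort(scores)[-k:][::-1]          # best first

# Four fact embeddings and one goal (illustrative 2-D values).
facts = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [0.9, 0.1]])
goal = np.array([1.0, 0.0])

idx = topk_unification_candidates(goal, facts, k=2)   # -> [1, 3]
```

Only the retrieved candidates then participate in the (differentiable) unification step, replacing a comparison against the full fact set.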
iclr_2019_BJzuKiC9KX
Revisiting Reweighted Wake-Sleep
Discrete latent-variable models, while applicable in a variety of settings, can often be difficult to learn. Sampling discrete latent variables can result in high-variance gradient estimators for two primary reasons: 1) branching on the samples within the model, and 2) the lack of a pathwise derivative for the samples. While current state-of-the-art methods employ control-variate schemes for the former and continuous-relaxation methods for the latter, their utility is limited by the complexities of implementing and training effective control-variate schemes and the necessity of evaluating (potentially exponentially) many branch paths in the model. Here, we revisit the Reweighted Wake Sleep (RWS; Bornschein and Bengio, 2015) algorithm, and through extensive evaluations, show that it circumvents both these issues, outperforming current state-of-the-art methods in learning discrete latent-variable models. Moreover, we observe that, unlike the Importance-weighted Autoencoder, RWS learns better models and inference networks with increasing numbers of particles, and that its benefits extend to continuous latent-variable models as well. Our results suggest that RWS is a competitive, often preferable, alternative for learning deep generative models.
rejected-papers
The paper presents a well conducted empirical study of the Reweighted Wake Sleep (RWS) algorithm (Bornschein and Bengio, 2015). It shows that it performs consistently better than alternatives such as Importance Weighted Autoencoder (IWAE) for the hard problem of learning deep generative models with discrete latent variables acting as a stochastic control flow. The work is well-written and extracts valuable insights supported by empirical observations: in particular the fact that increasing the number of particles improves learning in RWS but hurts in IWAE, and the fact that RWS can also be successfully applied to continuous variables. The reviewers and AC note the following weaknesses of the work as it currently stands: a) it is almost exclusively empirical and while reasonable explanations are argued, it does not provide a formal theoretical analysis justifying the observed behaviour b) experiments are limited to MNIST and synthetic data, confirmation of the findings on larger-scale real-world data and model would provide a more complete and convincing evidence. The paper should be made stronger on at least one (and ideally both) of these accounts.
train
[ "SJxOX9_p14", "SyeZgH_T1E", "Skx2_YQ6kV", "HJxxyGwinm", "r1gDQE7nJ4", "SylgzBTohX", "rJxb9BFYaQ", "BJeBGBtFaQ", "BylkLVYKa7", "Syg9R7FF6m", "BJepNja0hQ" ]
[ "author", "public", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Note that our claim is not based only on the GMM experiment. It is also backed up by results from training (i) a VAE with continuous latent variable on MNIST data (compared against IWAE since VIMCO is not needed) and (ii) the AIR model on moving MNIST data (compared against VIMCO; VQ-VAE not applicable).", "I se...
[ -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, 5 ]
[ -1, -1, -1, 4, -1, 3, -1, -1, -1, -1, 3 ]
[ "SyeZgH_T1E", "Skx2_YQ6kV", "r1gDQE7nJ4", "iclr_2019_BJzuKiC9KX", "rJxb9BFYaQ", "iclr_2019_BJzuKiC9KX", "HJxxyGwinm", "SylgzBTohX", "BJepNja0hQ", "iclr_2019_BJzuKiC9KX", "iclr_2019_BJzuKiC9KX" ]
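The wake-phase update in RWS, as summarized in the abstract above, weights per-particle gradients by self-normalized importance weights over the K particles. A toy numpy sketch of computing those weights from log joint and log proposal densities (illustrative values, not the paper's code):

```python
import numpy as np

def normalized_importance_weights(log_p_joint, log_q):
    """w_k proportional to p(x, z_k) / q(z_k | x), normalized over
    the K particles; computed in log space for numerical stability."""
    log_w = log_p_joint - log_q
    log_w -= log_w.max()          # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

# K = 4 particles with illustrative log densities.
log_p = np.array([-3.0, -1.0, -2.0, -5.0])
log_q = np.array([-2.0, -2.0, -2.0, -2.0])

w = normalized_importance_weights(log_p, log_q)
# In RWS, both the model and inference-network gradients are then
# estimated as weighted sums, e.g. sum_k w_k * grad log p(x, z_k).
```

Increasing K tightens this estimate, which is consistent with the paper's observation that more particles help RWS.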
iclr_2019_BkE8NjCqYm
(Unconstrained) Beam Search is Sensitive to Large Search Discrepancies
Beam search is the most popular inference algorithm for decoding neural sequence models. Unlike greedy search, beam search allows for non-greedy local decisions that can potentially lead to a sequence with a higher overall probability. However, previous work found that the performance of beam search tends to degrade with large beam widths. In this work, we perform an empirical study of the behavior of the beam search algorithm across three sequence synthesis tasks. We find that increasing the beam width leads to sequences that are disproportionately based on early and highly non-greedy decisions. These sequences typically include a very low probability token that is followed by a sequence of tokens with higher (conditional) probability, leading to an overall higher probability sequence. However, as beam width increases, such sequences are more likely to have a lower evaluation score. Based on our empirical analysis, we propose to constrain the beam search from taking highly non-greedy decisions early in the search. We evaluate two methods to constrain the search and show that constrained beam search effectively eliminates the problem of beam search degradation and in some cases even leads to higher evaluation scores. Our results generalize and improve upon previous observations on copies and training set predictions.
rejected-papers
This paper examines a concept (also coined by the paper) of "search discrepancies", where the search algorithm behaves differently with large beam sizes. It then proposes heuristics to help prevent the model from performing worse when the size of the beam is increased. I think there are some interesting insights in this paper with respect to how search works in modern neural models, but most reviewers (and I) were concerned by the heuristic approach taken to fix these errors. I still think that within a search paper, a clear separation between modeling errors and search errors is useful, and adding heuristics on top has the potential to make things more complicated down the road when, for example, we change our model or our training algorithm. It would be nice if the insights in the paper could be turned into a more theoretically clean framework that could be re-submitted to a future conference.
train
[ "B1l5OApA14", "Bkgfzvd6kN", "rJxYtH-m07", "HJeT62PxRX", "BkevCeCNp7", "HygNil0Vpm", "SylvelA4pQ", "S1leOha4aQ", "BkllHKTV6m", "HygoNFt7a7", "HyeSeu4jhQ", "S1lH_kUq2Q", "S1gTFivu3m" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your comment, we are glad that we have largely resolved some of your concerns.\n\nFor your remaining concern, we think the dichotomy you propose (\"principled and well-motivated\" vs. \"heuristic\") is a false one. Our approach is both. Through extensive experiments, we've provided an understanding of o...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "Bkgfzvd6kN", "S1leOha4aQ", "HJeT62PxRX", "SylvelA4pQ", "S1lH_kUq2Q", "S1lH_kUq2Q", "HygoNFt7a7", "HyeSeu4jhQ", "S1gTFivu3m", "iclr_2019_BkE8NjCqYm", "iclr_2019_BkE8NjCqYm", "iclr_2019_BkE8NjCqYm", "iclr_2019_BkE8NjCqYm" ]
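The constraint the beam-search abstract above proposes can be illustrated with a simple pruning rule: at each decoding step, discard any candidate token whose log-probability falls more than a threshold below the locally greedy (best) token. The paper evaluates two concrete constraint variants; this hypothetical sketch shows only the thresholding idea:

```python
import math

def prune_discrepancies(log_probs, max_discrepancy):
    """Keep token ids whose log-prob is within `max_discrepancy`
    of the locally best token's log-prob."""
    best = max(log_probs)
    return [i for i, lp in enumerate(log_probs)
            if best - lp <= max_discrepancy]

# Next-token distribution at one decoding step (illustrative).
log_probs = [math.log(p) for p in [0.70, 0.20, 0.06, 0.04]]

# With a tight threshold, only near-greedy tokens survive, so the
# beam cannot commit early to a very low-probability token.
surviving = prune_discrepancies(log_probs, max_discrepancy=1.5)  # [0, 1]
```

Applying such a rule (especially early in the search) is what prevents the large-discrepancy sequences that the paper links to degradation at large beam widths.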
iclr_2019_BkGiPoC5FX
Efficient Convolutional Neural Network Training with Direct Feedback Alignment
Many algorithms have been proposed to substitute for back-propagation (BP) in deep neural network (DNN) training. However, they have not become popular because their training accuracy and computational efficiency were worse than BP's. One of them, direct feedback alignment (DFA), shows low training performance, especially for convolutional neural networks (CNNs). In this paper, we overcome this limitation of DFA by combining it with conventional BP during CNN training. To improve training stability, we also suggest a feedback-weight initialization method based on analyzing the patterns of the fixed random matrices in DFA. Finally, we propose a new training algorithm, binary direct feedback alignment (BDFA), that minimizes computational cost while maintaining training accuracy compared with DFA. In our experiments, we use the CIFAR-10 and CIFAR-100 datasets to simulate CNN learning from scratch, and apply BDFA to an online-learning-based object tracking application to examine training in a small-dataset environment. Our proposed algorithms show better performance than conventional BP on both training tasks, especially when the dataset is small.
rejected-papers
This paper proposes a training algorithm for ConvNet architectures in which the final few layers are fully connected. The main idea is to use direct feedback alignment with carefully chosen binarized (±1) weights to train the fully connected layers and backpropagation to train the convolutional layers. The binarization reduces the memory footprint and computational cost of direct feedback alignment, while the careful selection of feedback weights improves convergence. Experiments on CIFAR-10, CIFAR-100, and an object tracking task are provided to show that the proposed algorithm outperforms backpropagation, especially when the amount of training data is small. The reviewers felt that the paper does a terrific job of introducing the various training algorithms --- backpropagation, feedback alignment, and direct feedback alignment --- and that the paper clearly explained what the novel contributions were. However, the reviewers felt the paper had limited novelty because it combines ideas that were already known, that it has limited applicability because it will not work with fully convolutional architectures, that the baselines in the experiments were somewhat weak, and that the paper provided no insights on why the proposed algorithm might be better than backpropagation in some cases. Regrettably, only one reviewer (R2) participated in the discussion, though this was the reviewer who provided the most constructive review. The AC read the revised paper, and agrees with R2's concerns about the limited applicability of the proposed algorithm and lack of insight or analysis explaining why the proposed training algorithm would improve over backpropagation.
train
[ "rklrp6fry4", "S1efJssO6X", "rkeKcIiO6Q", "BklM51jOT7", "rJgvU8g3h7", "HklxaibB2m", "BJxID8GNh7" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I am impressed by how quickly the authors addressed some of the issues in the paper.\n\nDespite this, I feel that the method has limited applicability (only on networks with a combination of dense and conv layers).\nAs mentioned by one of the other reviewers, the insight into why this approach would work better is...
[ -1, -1, -1, -1, 4, 4, 5 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "rkeKcIiO6Q", "BJxID8GNh7", "HklxaibB2m", "rJgvU8g3h7", "iclr_2019_BkGiPoC5FX", "iclr_2019_BkGiPoC5FX", "iclr_2019_BkGiPoC5FX" ]
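The core mechanism in the DFA abstract above is that the output error is projected to each hidden layer through a fixed random feedback matrix instead of being backpropagated through the transposed forward weights; BDFA additionally makes that matrix binary (+1/-1). A hypothetical numpy sketch of one such update for a single hidden layer (sizes, learning rate, and loss are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: x -> h -> y (hypothetical sizes).
d_in, d_hid, d_out = 4, 8, 3
W1 = rng.normal(0, 0.1, (d_hid, d_in))
W2 = rng.normal(0, 0.1, (d_out, d_hid))

# Fixed binary (+1/-1) feedback matrix, as in BDFA; never trained.
B1 = rng.choice([-1.0, 1.0], size=(d_hid, d_out))

x = rng.normal(size=d_in)
target = np.array([1.0, 0.0, 0.0])

# Forward pass with ReLU hidden layer.
a1 = W1 @ x
h = np.maximum(a1, 0.0)
y = W2 @ h

# Output error (gradient of squared loss w.r.t. y).
e = y - target

# DFA: hidden error = B1 @ e, NOT W2.T @ e as in backprop.
dh = (B1 @ e) * (a1 > 0)

lr = 0.1
W2 -= lr * np.outer(e, h)
W1 -= lr * np.outer(dh, x)
```

Because B1 is binary, the feedback projection reduces to additions and subtractions, which is the source of BDFA's computational savings.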
iclr_2019_BkMWx309FX
Reinforcement Learning with Perturbed Rewards
Recent studies have shown the vulnerability of reinforcement learning (RL) models in noisy settings. The sources of noise differ across scenarios. For instance, in practice, the observed reward channel is often subject to noise (e.g., when observed rewards are collected through sensors), and thus observed rewards may not be credible as a result. Also, in applications such as robotics, a deep reinforcement learning (DRL) algorithm can be manipulated to produce arbitrary errors. In this paper, we consider noisy RL problems where the rewards observed by RL agents are generated with a reward confusion matrix. We call such observed rewards perturbed rewards. We develop a robust RL framework, aided by an unbiased reward estimator, that enables RL agents to learn in noisy environments while observing only perturbed rewards. Our framework draws upon approaches for supervised learning with noisy data. The core ideas of our solution include estimating a reward confusion matrix and defining a set of unbiased surrogate rewards. We prove the convergence and sample complexity of our approach. Extensive experiments on different DRL platforms show that policies based on our estimated surrogate rewards can achieve higher expected rewards and converge faster than existing baselines. For instance, the state-of-the-art PPO algorithm obtains 67.5% and 46.7% improvements on average on five Atari games, when the error rates are 10% and 30% respectively.
rejected-papers
This paper studies RL with perturbed rewards, where a technical challenge is to revert the perturbation process so that the right policy is learned. Some experiments are used to support the algorithm, which involves learning the reward perturbation process (the confusion matrix) using existing techniques from the supervised learning (and crowdsourcing) literature. Reviewers found the problem setting new and worth investigating, but had concerns over the scope/significance of this work, mostly about how the confusion matrix is learned. If this matrix is known, correcting reward perturbation is easy, and standard RL can be applied to the corrected rewards. Specifically, the work seems to be limited in two substantial ways, both related to how the confusion matrix is learned. * The reward function needs to be deterministic. * Majority voting requires the number of states to be finite. The significance of this work is therefore mostly limited to finite-state problems with deterministic rewards, which is quite restricted. As the authors pointed out, the paper uses discretization to turn a continuous state space into a finite one, which is how the experiments were done. But discretization is likely not robust or efficient in many high-dimensional problems. It should be noted that the setting studied here, together with a thorough treatment of an (even restricted) case, could make an interesting paper that inspires future work. However, the exact problem setting is not completely clear in the paper, and the limitations of the technical contributions are also somewhat unclear. The authors are strongly advised to revise the paper accordingly to make their contributions clearer. Minor questions: - In Lemma 2, what if C is not invertible? - The sampling oracle assumed in Def. 1 is not very practical, as opposed to what the paper claims. - There is more recent work at NIPS and STOC on attacking RL (including bandits) algorithms by manipulating the reward signals.
The authors may want to cite and discuss it.
train
[ "H1xorCQWp7", "r1lp2VU_CX", "r1eshzTBCm", "Bkl75YQ92Q", "B1l1hShZRm", "H1gAUgp-AQ", "HylRDDhbAX", "HklMEvnbRm", "HkewCIhb0X", "rJeoiP3-A7", "HJxmW5NZ67", "BJliWdLsh7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "\nThis paper investigates reinforcement learning with a perturbed reward signal. In particular, the paper proposes a particular model for adding noise to the reward function via a confusion matrix, which offers a nuanced notion of reward-noise that is not too complicated so-as to make learning impossible. I take t...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2019_BkMWx309FX", "HkewCIhb0X", "HylRDDhbAX", "iclr_2019_BkMWx309FX", "BJliWdLsh7", "iclr_2019_BkMWx309FX", "HklMEvnbRm", "Bkl75YQ92Q", "H1xorCQWp7", "HJxmW5NZ67", "iclr_2019_BkMWx309FX", "iclr_2019_BkMWx309FX" ]
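As the meta-review above notes, once the confusion matrix is known, correcting the reward perturbation is easy. A minimal numpy sketch of the surrogate-reward construction the abstract describes, assuming a known and invertible 2x2 confusion matrix with hypothetical flip rates (the non-invertible case, raised in the minor questions, is exactly where this fails):

```python
import numpy as np

# True reward values an agent can receive (binary rewards here).
R = np.array([-1.0, 1.0])

# Confusion matrix: C[i, j] = P(observe R[j] | true reward is R[i]).
# 10% flip rate in each direction (illustrative values).
C = np.array([[0.9, 0.1],
              [0.1, 0.9]])

# Surrogate rewards: solve C @ R_hat = R, i.e. R_hat = C^{-1} R.
# Then the expected surrogate reward under the noise model equals
# the true reward (unbiasedness), so standard RL can be run on R_hat.
R_hat = np.linalg.solve(C, R)   # -> [-1.25, 1.25]
```

In the paper's setting C is not given and must itself be estimated (e.g. via majority voting over repeated observations), which is where the deterministic-reward and finite-state limitations come in.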
iclr_2019_BkMXkhA5Fm
Learning State Representations in Complex Systems with Multimodal Data
Representation learning becomes especially important for complex systems with multimodal data sources such as cameras or sensors. Recent advances in reinforcement learning and optimal control make it possible to design control algorithms on these latent representations, but the field still lacks a large-scale standard dataset for unified comparison. In this work, we present a large-scale dataset and evaluation framework for representation learning for the complex task of landing an airplane. We implement and compare several approaches to representation learning on this dataset in terms of the quality of simple supervised learning tasks and disentanglement scores. The resulting representations can be used for further tasks such as anomaly detection, optimal control, model-based reinforcement learning, and other applications.
rejected-papers
The paper introduces a new dataset containing multiple landings from the X-Plane simulator, each including readings from multiple sensors for aircraft landing. The paper also trains a set of self-supervised methods presented in previous works to learn sensory representations, and evaluates the learnt representations in terms of disentanglement and re-purposing to a discriminative task. Though the evaluations presented are interesting, they are not convincingly useful, as noted by the reviewers. Overall, it is not clear why this dataset is particularly well suited for representation learning. Furthermore, it is difficult to evaluate representation learning methods without relating them to an end task, e.g., that of landing the aircraft. The paper's writing would also benefit from restructuring and improved English; in particular, the conclusion section contains half-finished sentences.
train
[ "H1gg9vJo37", "HkeZIwfwJN", "BkgC4fstRm", "S1giJMsYRQ", "Hyg-ebjK0Q", "r1x_kgot0m", "S1xwcda53m", "r1ggr0FY3Q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Overview and contributions:\nThe authors present a newly collected dataset and evaluation framework for learning representations for landing an airplane. The dataset is collected from the X-Plane simulation environment and consists of 8011 landings, each landing consists of time series data from 1090 sensors. Thei...
[ 6, -1, -1, -1, -1, -1, 6, 5 ]
[ 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2019_BkMXkhA5Fm", "r1x_kgot0m", "iclr_2019_BkMXkhA5Fm", "r1ggr0FY3Q", "S1xwcda53m", "H1gg9vJo37", "iclr_2019_BkMXkhA5Fm", "iclr_2019_BkMXkhA5Fm" ]
iclr_2019_BkMn9jAcYQ
Countering Language Drift via Grounding
While reinforcement learning (RL) shows a lot of promise for natural language processing—e.g. when fine-tuning natural language systems for optimizing a certain objective—there has been little investigation into potential language drift: when an external reward is used to train a system, the agents’ communication protocol may easily and radically diverge from natural language. By re-casting translation as a communication game, we show that language drift indeed happens when pre-trained agents are fine-tuned with policy gradient methods. We contend that simply adding a "naturalness" constraint to the reward, e.g. by using language model log likelihood, does not fully address the issue, and argue that (perceptual) grounding is required. That is, while language model constraints impose syntactic conformity, they do not lead to semantic correspondence. Our experiments show that grounded models give the best communication performance, while retaining English syntax along with the ability to convey the intended semantics.
rejected-papers
This paper proposes a method to resolve "language drift," where a pre-trained X->language model trained in an X->language->Y pipeline drifts away from natural language. In particular, it proposes to add an auxiliary training objective that performs grounding with multimodal input to fix this problem. Results are good on a task where translation is done between two languages. The main concern raised with this paper by most of the reviewers is the validity of the proposed task itself. Even after extensive discussion with the authors, it is not clear that there is a convincing scenario where we both have a pre-trained X->language model, care about the intermediate results, and have some sort of grounded input to fix this drift. While I do understand that the MT task is supposed to be a testbed for the true objective, I feel it is necessary to additionally have one convincing use case where this is a real problem and not just artificially contrived. This use case could either be of practical use (e.g. potentially useful in an application), or of interest from the point of view of cognitive plausibility (e.g. similar to how children actually learn, and inspired by the cognitive science literature). A concern that follows from this is that because the underlying idea is compelling (some sort of grounding to inform language learning), a paper at a high-profile conference such as ICLR may help re-popularize this line of research, which has been a niche for a while. Normally I would say this is definitely a good thing; I think considering grounding in language learning is an important research direction, and have been a fan of this line of work since reading Roy's seminal work on it from 15 years ago. However, if the task used in this paper, which is of questionable value and realism, becomes the benchmark for this line of work, I think this might lead follow-up work in the wrong direction.
I feel that this is a critical issue, and the paper will be much stronger after a more realistic task setting is added. Thus, I am not recommending acceptance at this time, but would definitely like the authors to think hard and carefully about a good and realistic benchmark for the task, and follow up with a revised version of the paper in the future.
train
[ "S1xwWOYNxV", "HyxXo14rxV", "SJeHeYFll4", "Syg_oAdglE", "H1e-YqdglN", "Bkec4PdxgE", "BkxSYCvggN", "SJeamsDelV", "rkx_UwDggV", "ryluBEl5RX", "rygWVeG7CQ", "rkguNtBMR7", "H1g4VRjeRQ", "rygcGRjlRm", "rkeBW0olCm", "r1gYCKweAQ", "SJlXH6cm6m", "SkgsYLlzT7", "SJlxDKzanX", "SJezHMdwnQ"...
[ "public", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_r...
[ "Hi Authors, \n\nThis is a very interesting setting and line of work, and I am working on reproducing this. Before I do, there are a couple of questions in my mind:\n1. For simple PG baseline, the reward for Agent A is log p(De_i | \\bar{En_i}), which if I understand correctly, the log-likelihood of all sentence. B...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2019_BkMn9jAcYQ", "S1xwWOYNxV", "Syg_oAdglE", "H1e-YqdglN", "Bkec4PdxgE", "BkxSYCvggN", "rkx_UwDggV", "rkeBW0olCm", "r1gYCKweAQ", "rygWVeG7CQ", "H1g4VRjeRQ", "r1gYCKweAQ", "SJezHMdwnQ", "S1l0qNZYhm", "SJlxDKzanX", "SJlXH6cm6m", "SkgsYLlzT7", "SJezHMdwnQ", "iclr_2019_BkMn9jA...
iclr_2019_BkMq0oRqFQ
Normalization Gradients are Least-squares Residuals
Batch Normalization (BN) and its variants have seen widespread adoption in the deep learning community because they improve the training of deep neural networks. Discussions of why this normalization works so well remain unsettled. We make explicit the relationship between ordinary least squares and partial derivatives computed when back-propagating through BN. We recast the back-propagation of BN as a least squares fit, which zero-centers and decorrelates partial derivatives from normalized activations. This view, which we term {\em gradient-least-squares}, is an extensible and arithmetically accurate description of BN. To further explore this perspective, we motivate, interpret, and evaluate two adjustments to BN.
rejected-papers
This paper interprets batch norm in terms of normalizing the backpropagated gradients. All of the reviewers believe this interpretation is novel and potentially interesting, but that the paper doesn't make the case that this helps explain batch norm, or provide useful insights into how to improve it. The authors have responded to the original set of reviews by toning down some of the claims in the original paper, but haven't addressed the reviewers' more substantive concerns. There may potentially be interesting ideas here, but I don't think it's ready for publication at ICLR.
train
[ "BkglmsZY0Q", "B1eSC5-K0X", "HyxHscZKA7", "BJxGyPqla7", "ByghVRki27", "SJg5feZqnQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Paper923 AnonReviewer3,\nThank you for your thoughtful criticism, and especially for your kind comments. We have toned down our language broadly in our revision, and removed all mentions of a \"unified view.\" We agree deeply with your note on needing more focus in the experiments; we would have liked to foll...
[ -1, -1, -1, 4, 4, 3 ]
[ -1, -1, -1, 5, 4, 4 ]
[ "BJxGyPqla7", "ByghVRki27", "SJg5feZqnQ", "iclr_2019_BkMq0oRqFQ", "iclr_2019_BkMq0oRqFQ", "iclr_2019_BkMq0oRqFQ" ]
iclr_2019_BkNUFjR5KQ
Learning Internal Dense But External Sparse Structures of Deep Neural Network
Recent years have witnessed two seemingly opposite developments of deep convolutional neural networks (CNNs). On one hand, increasing the density of CNNs by adding cross-layer connections achieves higher accuracy. On the other hand, creating sparse structures through regularization and pruning methods enjoys lower computational costs. In this paper, we bridge these two by proposing a new network structure with locally dense yet externally sparse connections. This new structure uses dense modules as basic building blocks and then sparsely connects these modules via a novel algorithm during the training process. Experimental results demonstrate that the locally dense yet externally sparse structure can achieve competitive performance on benchmark tasks (CIFAR10, CIFAR100, and ImageNet) while keeping the network structure slim.
rejected-papers
This paper proposes a genetic algorithm to search for neural network architectures with locally dense and globally sparse connections. A population-based genetic algorithm is used to find the sparse connections between dense module units. The locally dense but globally sparse architecture is an interesting idea, yet it is not well studied in the current version, e.g., with respect to overfitting and connections with other similar architecture search methods. Based on the reviewers' ratings (5, 5, 6), the current version of the paper is a borderline lean reject.
train
[ "BJgK3rS80X", "Hyl9rfr8Am", "SyloJ8NLCm", "BJg8FrE80m", "B1liWFj7Tm", "SygozEUAnX", "rkgXDnxCn7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks to all reviewers, we take a lot of time on proofreading and update a new version. Detailed modifications are separately listed under each review.\nThank you very much.", "We are very excited about the positive and enthusiastic support of our core idea. Thank you for your feedback about our strong part. We...
[ -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, 3, 3, 2 ]
[ "iclr_2019_BkNUFjR5KQ", "SygozEUAnX", "BJg8FrE80m", "B1liWFj7Tm", "iclr_2019_BkNUFjR5KQ", "iclr_2019_BkNUFjR5KQ", "iclr_2019_BkNUFjR5KQ" ]
iclr_2019_BkVVOi0cFX
Denoise while Aggregating: Collaborative Learning in Open-Domain Question Answering
The open-domain question answering (OpenQA) task aims to extract answers that match specific questions from a distantly supervised corpus. Unlike supervised reading comprehension (RC) datasets where questions are designed for particular paragraphs, background sentences in OpenQA datasets are more prone to noise. We observe that most existing OpenQA approaches are vulnerable to noise since they simply regard those sentences that contain the answer span as ground truths and ignore the plausible correlation between the sentences and the question. To address this deficiency, we introduce a unified and collaborative model that leverages alignment information from query-sentence pairs in a small-scale supervised RC dataset and aggregates relevant evidence from a distantly supervised corpus to answer open-domain questions. We evaluate our model on several real-world OpenQA datasets, and experimental results show that our collaborative learning methods outperform the existing baselines significantly.
rejected-papers
This paper presents a model for question answering, where the idea is to have a collaborative model that aligns queries and sentences on a small supervised dataset and also uses semi-supervised information from a weakly supervised corpus to answer open-domain questions, resulting in short answer spans. The main criticism of the paper concerns its novelty; reviewers cite similarities with prior work such as Chen et al. and Min et al. There is relative consensus among the reviewers that further work using the semi-supervised outlook with stronger results could strengthen the paper further.
train
[ "BJl-Vc9Yn7", "Hyx1U1jwhX", "S1lq-2jFim" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper shows that a sentence selection / evidence scoring model for QA trained on SQuAD helps for QA datasets where such explicit per-evidence annotation is not available.\n\nQuality:\nPros: The paper is mostly well-written, and suggested models are sensible. Comparisons to the state of the art are appropriate...
[ 4, 6, 5 ]
[ 4, 4, 4 ]
[ "iclr_2019_BkVVOi0cFX", "iclr_2019_BkVVOi0cFX", "iclr_2019_BkVVOi0cFX" ]
iclr_2019_Bke0rjR5F7
Stochastic Learning of Additive Second-Order Penalties with Applications to Fairness
Many notions of fairness may be expressed as linear constraints, and the resulting constrained objective is often optimized by transforming the problem into its Lagrangian dual with additive linear penalties. In non-convex settings, the resulting problem may be difficult to solve as the Lagrangian is not guaranteed to have a deterministic saddle-point equilibrium. In this paper, we propose to modify the linear penalties to second-order ones, and we argue that this results in a more practical training procedure in non-convex, large-data settings. For one, the use of second-order penalties allows training the penalized objective with a fixed value of the penalty coefficient, thus avoiding the instability and potential lack of convergence associated with two-player min-max games. Secondly, we derive a method for efficiently computing the gradients associated with the second-order penalties in stochastic mini-batch settings. Our resulting algorithm performs well empirically, learning an appropriately fair classifier on a number of standard benchmarks.
rejected-papers
The paper presents a method to stochastically optimize second-order penalties and shows how this could be applied to training fairness-aware classifiers, where the linear penalties associated with common fairness criteria are expressed as second-order penalties. While the reviewers acknowledged the potential usefulness of the proposed approach, all of them agreed that the paper requires: (1) major improvement in clarifying important points related to the approach (see R3's detailed comments; R2's concern on using the double sampling method to train non-convex models; R1's and R3's concerns regarding the double summation/integral terms and how this affects runtime), and (2) major improvement in justifying its application to fairness; as noted by R2, "there is no sufficient evidence why non-convex models are actually useful in the experiments". Given that fairness problems are currently studied on small-scale datasets (which is not this paper's fault), a comparison to simpler methods for fairness or other applications could substantially strengthen the contribution and evaluation of this work. We hope the reviews are useful for improving and revising the paper.
train
[ "rye5zA87Cm", "HyeHATL7R7", "rJeGNTLXR7", "rJg_LYQk6Q", "HJgihpDanm", "HJe0uZ753X" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their feedback. Our responses to the questions are below:\n\n1. In the case of linear penalties, the lambdas are coefficients on the constraints <d, c>. While lambda=0 corresponds to an unconstrained loss, lambda->infty does not correspond to a hard constraint. Instead, it corresponds ...
[ -1, -1, -1, 5, 5, 4 ]
[ -1, -1, -1, 3, 4, 3 ]
[ "HJe0uZ753X", "HJgihpDanm", "rJg_LYQk6Q", "iclr_2019_Bke0rjR5F7", "iclr_2019_Bke0rjR5F7", "iclr_2019_Bke0rjR5F7" ]
iclr_2019_Bke96sC5tm
SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning
Model-based reinforcement learning (RL) methods can be broadly categorized as global model methods, which depend on learning models that provide sensible predictions in a wide range of states, or local model methods, which iteratively refit simple models that are used for policy improvement. While predicting future states that will result from the current actions is difficult, local model methods only attempt to understand system dynamics in the neighborhood of the current policy, making it possible to produce local improvements without ever learning to predict accurately far into the future. The main idea in this paper is that we can learn representations that make it easy to retrospectively infer simple dynamics given the data from the current policy, thus enabling local models to be used for policy learning in complex systems. We evaluate our approach against other model-based and model-free RL methods on a suite of robotics tasks, including manipulation tasks on a real Sawyer robotic arm directly from camera images.
rejected-papers
This paper proposes a method to learn representations to infer simple local models that can be used for policy improvement. All the reviewers agree that the paper has interesting ideas, but they found the main contribution to be a bit weak and the experiments to be insufficient. Post rebuttal, the reviewers discussed extensively with each other and agreed that, given that more work is done on a clearer presentation and improved experiments, this paper could be accepted. In its current form, however, the paper is not ready to be accepted. I have recommended rejecting this paper, but I encourage the authors to resubmit after improving the work.
train
[ "S1ekABFkgE", "S1gFTfz537", "H1eNN3ma14", "H1xc-4vsCQ", "BylfnXvi0m", "HJgPZ8lHRQ", "S1gVBIlrRm", "BkeI2zwr0m", "BJemlBxrRm", "B1g3stit6Q", "HygFzNbFam", "r1eTCmWFTm", "SklP9FFR3Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "We appreciate your feedback, however we disagree with several points in your assessment.\n\nWe find your point on model-based RL in general to be unreasonable in judging the merit of our work. As our paper addresses model-based RL, our method of course will exhibit some of the limitations of the current state of t...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, 3 ]
[ "H1eNN3ma14", "iclr_2019_Bke96sC5tm", "BylfnXvi0m", "BkeI2zwr0m", "B1g3stit6Q", "SklP9FFR3Q", "S1gFTfz537", "HygFzNbFam", "B1g3stit6Q", "iclr_2019_Bke96sC5tm", "S1gFTfz537", "SklP9FFR3Q", "iclr_2019_Bke96sC5tm" ]
iclr_2019_BkeDEoCctQ
Deep Curiosity Search: Intra-Life Exploration Can Improve Performance on Challenging Deep Reinforcement Learning Problems
Traditional exploration methods in reinforcement learning (RL) require agents to perform random actions to find rewards. But these approaches struggle on sparse-reward domains like Montezuma’s Revenge where the probability that any random action sequence leads to reward is extremely low. Recent algorithms have performed well on such tasks by encouraging agents to visit new states or perform new actions in relation to all prior training episodes (which we call across-training novelty). But such algorithms do not consider whether an agent exhibits intra-life novelty: doing something new within the current episode, regardless of whether those behaviors have been performed in previous episodes. We hypothesize that across-training novelty might discourage agents from revisiting initially non-rewarding states that could become important stepping stones later in training—a problem remedied by encouraging intra-life novelty. We introduce Curiosity Search for deep reinforcement learning, or Deep Curiosity Search (DeepCS), which encourages intra-life exploration by rewarding agents for visiting as many different states as possible within each episode, and show that DeepCS matches the performance of current state-of-the-art methods on Montezuma’s Revenge. We further show that DeepCS improves exploration on Amidar, Freeway, Gravitar, and Tutankham (many of which are hard exploration games). Surprisingly, DeepCS also doubles A2C performance on Seaquest, a game we would not have expected to benefit from intra-life exploration because the arena is small and already easily navigated by naive exploration techniques. In one run, DeepCS achieves a maximum training score of 80,000 points on Seaquest—higher than any method other than Ape-X.
The strong performance of DeepCS on these sparse- and dense-reward tasks suggests that encouraging intra-life novelty is an interesting, new approach for improving performance in Deep RL and motivates further research into hybridizing across-training and intra-life exploration methods.
rejected-papers
Pros:
- novel idea of intra-life curiosity that encourages diverse behavior within each episode rather than across episodes.
Cons:
- privileged/ad-hoc information (RAM state, distinguishing rooms)
- lack of sufficient ablations/analysis
- insufficient revision/rebuttal
The reviewers reached consensus that the paper should be rejected in its current form.
train
[ "BklZ3SQ5nX", "S1ldJZzch7", "rkgfBiYbyN", "ByxoQiKZyN", "SyeVGit-kE", "H1lYCLftRm", "BJlxMu4a37" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\nThe authors look at the problem of exploration in deep RL. They propose a “curiosity grid” which is a virtual grid laid out on top of the current level/area that an Atari agent is in. Once an agent enters a new cell of the grid, it obtains a small reward, encouraging the agent to explore all parts of the...
[ 5, 5, -1, -1, -1, -1, 5 ]
[ 3, 1, -1, -1, -1, -1, 3 ]
[ "iclr_2019_BkeDEoCctQ", "iclr_2019_BkeDEoCctQ", "S1ldJZzch7", "BklZ3SQ5nX", "BJlxMu4a37", "iclr_2019_BkeDEoCctQ", "iclr_2019_BkeDEoCctQ" ]
iclr_2019_BkeK-nRcFX
The Nonlinearity Coefficient - Predicting Generalization in Deep Neural Networks
For a long time, designing neural architectures that exhibit high performance was considered a dark art that required expert hand-tuning. One of the few well-known guidelines for architecture design is the avoidance of exploding or vanishing gradients. However, even this guideline has remained relatively vague and circumstantial, because there exists no well-defined, gradient-based metric that can be computed {\it before} training begins and can robustly predict the performance of the network {\it after} training is complete. We introduce what is, to the best of our knowledge, the first such metric: the nonlinearity coefficient (NLC). Via an extensive empirical study, we show that the NLC, computed in the network's randomly initialized state, is a powerful predictor of test error and that attaining a right-sized NLC is essential for attaining an optimal test error, at least in fully-connected feedforward networks. The NLC is also conceptually simple, cheap to compute, and is robust to a range of confounders and architectural design choices that comparable metrics are not necessarily robust to. Hence, we argue the NLC is an important tool for architecture search and design, as it can robustly predict poor training outcomes before training even begins.
rejected-papers
This paper proposes the NonLinearity Coefficient (NLC), a metric which aims to predict the test-time performance of neural networks at initialization. The idea is interesting and novel, and has clear practical implications. Reviewers unanimously agreed that the direction is a worthwhile one to pursue. However, several reviewers also raised concerns about how well-justified the method is: in particular, Reviewer 3 believes that a quantitative comparison to the related work is necessary, and takes issue with the motivation for being ad hoc. Reviewer 2 is also concerned about the soundness of the coefficient in truly measuring nonlinearity. These concerns make it clear that the paper needs more work before it can be published. In particular, addressing the reviewers' concerns and providing a proper comparison to related work will go a long way in that direction.
train
[ "S1g5gqvyAQ", "SyluC942aQ", "S1x4fvjv67", "HygEe93_3Q", "BJl6VAZUpQ", "SyeSD4KGTQ", "ByxulRy-aQ", "SygRpTy-T7", "HygxJskbpm", "rJlBqc1bTQ", "S1xAuckZa7", "BkgENCgC3X", "Bkx3LR333Q", "B1eUOvyX57" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "Thank you for your comments.\n\nWe agree that some of the other metrics have been demonstrated to be linked to certain quantities. For example, Lipschitz constant has been linked to adversarial robustness and depth to the efficient representation of certain function classes. However, throughout the paper, we demo...
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 5, 7, -1 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, -1 ]
[ "SyluC942aQ", "HygxJskbpm", "BJl6VAZUpQ", "iclr_2019_BkeK-nRcFX", "SyeSD4KGTQ", "S1xAuckZa7", "HygEe93_3Q", "HygEe93_3Q", "Bkx3LR333Q", "BkgENCgC3X", "BkgENCgC3X", "iclr_2019_BkeK-nRcFX", "iclr_2019_BkeK-nRcFX", "iclr_2019_BkeK-nRcFX" ]
iclr_2019_BkeUasA5YQ
LIT: Block-wise Intermediate Representation Training for Model Compression
Knowledge distillation (KD) is a popular method for reducing the computational overhead of deep network inference, in which the output of a teacher model is used to train a smaller, faster student model. Hint training (i.e., FitNets) extends KD by regressing a student model’s intermediate representation to a teacher model’s intermediate representation. In this work, we introduce bLock-wise Intermediate representation Training (LIT), a novel model compression technique that extends the use of intermediate representations in deep network compression, outperforming KD and hint training. LIT has two key ideas: 1) LIT trains a student of the same width (but shallower depth) as the teacher by directly comparing the intermediate representations, and 2) LIT uses the intermediate representation from the previous block in the teacher model as an input to the current student block during training, avoiding unstable intermediate representations in the student network. We show that LIT provides substantial reductions in network depth without loss in accuracy — for example, LIT can compress a ResNeXt-110 to a ResNeXt-20 (5.5×) on CIFAR10 and a VDCNN-29 to a VDCNN-9 (3.2×) on Amazon Reviews without loss in accuracy, outperforming KD and hint training in network size at a given accuracy. We also show that applying LIT to identical student/teacher architectures increases the accuracy of the student model above the teacher model, outperforming the recently-proposed Born Again Networks procedure on ResNet, ResNeXt, and VDCNN. Finally, we show that LIT can effectively compress GAN generators.
rejected-papers
The authors propose a method for distilling a student network from a teacher network and while additionally constraining the intermediate representations from the student to match those of the teacher, where the student has the same width, but less depth than the teacher. The main novelty of the work is to use the intermediate representation from the teacher as an input to the student network, and the experimental comparison of the approach against previous work. The reviewers noted that the method is simple to implement, and the paper is clearly written and easy to follow. The reviewers raised some concerns, most notably that the authors were using validation accuracy to measure performance, and were thus potentially overfitting to the test data, and regarding the novelty of the work. Some of the criticisms were subsequently amended in the revised version where results were reported on a test set (the conclusions are as before). Overall, the scores for this paper were close to the threshold for acceptance, and while it was a tough decision, the AC ultimately felt that the overall novelty of the work was slightly below the acceptance bar.
train
[ "S1g1Bku7JN", "ryerNNfWam", "S1e_k8xc3Q", "Byxs1hHsnm", "rkgK05M9A7", "H1ldq-LwRm", "B1g01-Lw07", "SJeoktdZp7", "rylNtOuWTm", "Hyg9X__b67" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Thanks for the update. The revised paper reads much better than the original submission. I wish there could be more analyses as to why/how LIT works. I have revised my score accordingly.", "This paper proposes a new approach to compress neural networks by training the student's intermediate representation to mat...
[ -1, 5, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ "H1ldq-LwRm", "iclr_2019_BkeUasA5YQ", "iclr_2019_BkeUasA5YQ", "iclr_2019_BkeUasA5YQ", "B1g01-Lw07", "SJeoktdZp7", "iclr_2019_BkeUasA5YQ", "ryerNNfWam", "Byxs1hHsnm", "S1e_k8xc3Q" ]
iclr_2019_BkedwoC5t7
Formal Limitations on the Measurement of Mutual Information
Motivated by applications to unsupervised learning, we consider the problem of measuring mutual information. Recent analysis has shown that naive kNN estimators of mutual information have serious statistical limitations motivating more refined methods. In this paper we prove that serious statistical limitations are inherent to any measurement method. More specifically, we show that any distribution-free high-confidence lower bound on mutual information cannot be larger than O(ln N) where N is the size of the data sample. We also analyze the Donsker-Varadhan lower bound on KL divergence in particular and show that, when simple statistical considerations are taken into account, this bound can never produce a high-confidence value larger than ln N. While large high-confidence lower bounds are impossible, in practice one can use estimators without formal guarantees. We suggest expressing mutual information as a difference of entropies and using cross entropy as an entropy estimator. We observe that, although cross entropy is only an upper bound on entropy, cross-entropy estimates converge to the true cross entropy at the rate of 1/sqrt(N).
rejected-papers
The paper proves that the Donsker-Varadhan lower bound on KL divergence cannot be used to estimate KL divergences of more than tens of bits, and that more generally any distribution-free high-confidence lower bound on mutual information cannot be larger than O(ln N) where N is the size of the data sample. As an alternative for applications such as maximum mutual information predictive coding, a form of representation learning, the paper proposes using the cross-entropy upper bound on entropy and estimating mutual information as a difference of two cross-entropies. These cross-entropy bounds converge to the true entropy as 1/\sqrt(N), but at the cost of providing neither an upper nor a lower bound on mutual information. There was a divergence of opinion between the reviewers on this paper. The most negative reviewer (R3) thought there should be experiments confirming that the DV bound fails when mutual information is high, was concerned that the theory applied only in the case of discrete distributions, and was concerned that the proposed optimization problem in Section 6 would be challenging due to its adversarial (max-inf) structure. The authors responded that they felt the theory could stand on its own without empirical tests (a point with which R1 agreed); that although their exposition was for discrete variables, the analysis applies to the continuous case as well; and that they agreed with the point about the difficulty of the optimization, but that GANs face similar difficulties. Because R3 did not participate in the discussion and the AC believes that the authors adequately addressed most of R3's issues in their response and revision, this review has been discounted. 
The next most negative reviewer (R2) wanted a discussion relating the ideas in this paper to kNN and kernel-based estimators of mutual information, wanted an empirical evaluation (like R3), and was concerned about whether the difference of cross-entropies provides an upper or lower bound on mutual information. In their response and revision the authors added some discussion of kNN methods (but not enough to make R2 happy) and clarified that the difference of cross-entropies provides neither an upper nor a lower bound. The most positive reviewer (R1) thinks the theoretical contribution of the paper is significant enough to justify publication in ICLR. The AC likes the theoretical work and feels that it raises important concerns about MINE, but concurs with R2 and R3 that some empirical validation of the theory is needed for the paper to appear in ICLR. The authors are strongly encouraged to perform an empirical validation of the theory and to submit this work to another machine learning venue.
train
[ "rygbHvdNA7", "Syl6cQcsnX", "H1ejTLFX07", "Hke7leEw6X", "BJlXECzvTm", "H1lxFreb6X", "SkemzTGwp7", "S1gOZ5fPT7", "S1xpzQF4pQ", "r1e0tmqypX", "Syeg-uY1pm", "rJlF2U6MoX", "BJemYcd7jm", "rklrxl9msQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\"The question is what is its relation to actual mutual information quantity. In general machine learning settings, mutual information between high dimensional variables is low.\"\n\nNote this is irrelevant to the theory; regardless of the value of I(;), the lower bound implies that you need log N to be at least a...
[ -1, 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ -1, 3, -1, -1, -1, 5, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ "H1ejTLFX07", "iclr_2019_BkedwoC5t7", "BJlXECzvTm", "rJlF2U6MoX", "Syl6cQcsnX", "iclr_2019_BkedwoC5t7", "S1gOZ5fPT7", "H1lxFreb6X", "iclr_2019_BkedwoC5t7", "Syl6cQcsnX", "rJlF2U6MoX", "iclr_2019_BkedwoC5t7", "rJlF2U6MoX", "BJemYcd7jm" ]
iclr_2019_BkesGnCcFX
Learning Goal-Conditioned Value Functions with one-step Path rewards rather than Goal-Rewards
Multi-goal reinforcement learning (MGRL) addresses tasks where the desired goal state can change for every trial. State-of-the-art algorithms model these problems such that the reward formulation depends on the goals, to associate them with high reward. This dependence introduces additional goal-reward resampling steps in algorithms like Hindsight Experience Replay (HER) that reuse trials in which the agent fails to reach the goal by recomputing rewards as if reached states were pseudo-desired goals. We propose a reformulation of goal-conditioned value functions for MGRL that yields a similar algorithm, while removing the dependence of reward functions on the goal. Our formulation thus obviates the requirement of reward recomputation that is needed by HER and its extensions. We also extend a closely related algorithm, Floyd-Warshall Reinforcement Learning, from tabular domains to deep neural networks for use as a baseline. Our results are competitive with HER while substantially improving sampling efficiency in terms of reward computation.
rejected-papers
This manuscript presents a reinterpretation of hindsight experience replay which aims to avoid recomputing the reward function, and investigates Floyd-Warshall RL in the function approximation setting. The paper was judged as relatively clear. The authors report a slight improvement in computational cost, which some reviewers called into question. However, all of the reviewers pointed out that the experimental evidence for the method's superiority is weak. Two reviewers additionally raised that this wasn't significantly different than the standard formulation of Hindsight Experience Replay, which doesn't require the computation of rewards for relabeled goals. Ultimately, reviewers were in agreement that the novelty of the method and quality of the obtained results rendered the work insufficient for publication. The Area Chair concurs, and urges the authors to consider the reviewers' pointers to the existing literature in order to clarify their contribution for subsequent submission.
train
[ "Syeog5bvg4", "HJgCgQUm1E", "H1lj7OnYAQ", "HJgCIDZICX", "rkxw606xAQ", "SkgRQQbMa7", "rkxW9NfW6m", "SkgVLVMZTm", "S1loBx0K27", "r1l2zvNt3X", "BJl_G5df5m", "H1gzrfOGcQ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We address your comments point by point.\n\n> I maintain that they key idea behind this paper is not new. On top of that, the way it is presented obfuscates what is really going on. What is the justification for adding the 1-step loss to Q-learning with a constant reward for all transitions? Why will this converge...
[ -1, -1, -1, -1, -1, 4, -1, -1, 1, 3, -1, -1 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, 4, 4, -1, -1 ]
[ "HJgCgQUm1E", "H1lj7OnYAQ", "HJgCIDZICX", "SkgVLVMZTm", "SkgRQQbMa7", "iclr_2019_BkesGnCcFX", "r1l2zvNt3X", "S1loBx0K27", "iclr_2019_BkesGnCcFX", "iclr_2019_BkesGnCcFX", "H1gzrfOGcQ", "iclr_2019_BkesGnCcFX" ]
iclr_2019_BkesJ3R9YX
Where and when to look? Spatial-temporal attention for action recognition in videos
Inspired by the observation that humans are able to process videos efficiently by only paying attention when and where it is needed, we propose a novel spatial-temporal attention mechanism for video-based action recognition. For spatial attention, we learn a saliency mask to allow the model to focus on the most salient parts of the feature maps. For temporal attention, we employ a soft temporal attention mechanism to identify the most relevant frames from an input video. Further, we propose a set of regularizers that ensure that our attention mechanism attends to coherent regions in space and time. Our model is efficient, as it proposes a separable spatio-temporal mechanism for video attention, while being able to identify important parts of the video both spatially and temporally. We demonstrate the efficacy of our approach on three public video action recognition datasets. The proposed approach leads to state-of-the-art performance on all of them, including the new large-scale Moments in Time dataset. Furthermore, we quantitatively and qualitatively evaluate our model's ability to accurately localize discriminative regions spatially and critical frames temporally. This is despite our model only being trained with per video classification labels.
rejected-papers
Strengths: The paper presentation was assessed as being of high quality. Experiments were diverse in terms of datasets and tasks. Weaknesses: Multiple reviewers commented that the paper does not present substantial novelty compared to previous work. Contention: One reviewer held out on giving the paper a stronger rating due to the issue of novelty. Consensus: Final scores were two 6s and one 3. This work has merit, but the degree of concern over the level of novelty leads to an aggregate rating that is too low to justify acceptance. Authors are encouraged to re-submit to another venue.
train
[ "HJgOFN3fk4", "rye3e-Ai0X", "HJlu-QRORX", "rJxzXT-iCQ", "r1eVOIOt2m", "SJeA67VtRm", "HJxW7zCORQ", "rkxrKQAOAQ", "rJgKgN0n2X", "S1gJRUfs3X" ]
[ "public", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "If someone just reads your paper from section 3, you never mention how $M_i$ is computed. For example whether a sigmoid is applied or not.\nHere I assume a sigmoid is applied.\n\nNow regarding the contrast loss:\nThere is no gradient flowing from $B_i$ back in your implementation. Am I right?\n\nAlso have you cons...
[ -1, -1, -1, -1, 6, -1, -1, -1, 6, 3 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, 4, 5 ]
[ "iclr_2019_BkesJ3R9YX", "HJxW7zCORQ", "S1gJRUfs3X", "rkxrKQAOAQ", "iclr_2019_BkesJ3R9YX", "HJlu-QRORX", "rJgKgN0n2X", "r1eVOIOt2m", "iclr_2019_BkesJ3R9YX", "iclr_2019_BkesJ3R9YX" ]
iclr_2019_Bkeuz20cYm
Double Neural Counterfactual Regret Minimization
Counterfactual regret minimization (CFR) is a fundamental and effective technique for solving imperfect information games. However, the original CFR algorithm only works for discrete state and action spaces, and the resulting strategy is maintained as a tabular representation. Such tabular representation limits the method from being directly applied to large games and continuing to improve from a poor strategy profile. In this paper, we propose a double neural representation for the Imperfect Information Games, where one neural network represents the cumulative regret, and the other represents the average strategy. Furthermore, we adopt the counterfactual regret minimization algorithm to optimize this double neural representation. To make neural learning efficient, we also developed several novel techniques including a robust sampling method, mini-batch Monte Carlo counterfactual regret minimization (MCCFR) and Monte Carlo counterfactual regret minimization plus (MCCFR+) which may be of independent interest. Experimentally, we demonstrate that the proposed double neural algorithm converges significantly better than the reinforcement learning counterpart.
rejected-papers
The reviewers agreed that there are some promising ideas in this work, and useful empirical analysis to motivate the approach. The main concern is in the soundness of the approach (for example, comments about cumulative learning and negative samples). The authors provided some justification about using previous networks as initialization, but this is an insufficient discussion to understand the soundness of the strategy. The paper should better discuss this more, even if it is not possible to provide theory. The paper could also be improved with the addition of a baseline (though not necessarily something like DeepStack, which is not publicly available and potentially onerous to reimplement).
train
[ "BkldrwZYAQ", "BJgrV-lPCX", "rJeJ5EIBA7", "H1gYO-8SC7", "rJx3ZW8HRm", "rJghPlLrRQ", "Syg2MlIBAQ", "BJx98CBB0Q", "Hklezu2Z6Q", "BJllmKgjn7", "BkxiXMw927", "B1lHz-hstQ", "B1ginbdjY7", "SJeKcZdsYX", "SkgWej4oYX", "H1g5hL4jK7" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "public", "public" ]
[ "The authors have provided a welcome new analysis in Fig. 5, in which performance in larger games was investigated (up to stack of size 15) and the compression/generalization ability of the neural net is displayed.\n\nWhile the ablation analyses and empirical investigations of the proposed method itself are quite t...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 5, -1, -1, -1, -1, -1 ]
[ "rJx3ZW8HRm", "iclr_2019_Bkeuz20cYm", "H1g5hL4jK7", "Hklezu2Z6Q", "BJllmKgjn7", "BkxiXMw927", "BkxiXMw927", "iclr_2019_Bkeuz20cYm", "iclr_2019_Bkeuz20cYm", "iclr_2019_Bkeuz20cYm", "iclr_2019_Bkeuz20cYm", "SJeKcZdsYX", "H1g5hL4jK7", "SkgWej4oYX", "iclr_2019_Bkeuz20cYm", "iclr_2019_Bkeuz...
iclr_2019_BkewX2C9tX
Analyzing Federated Learning through an Adversarial Lens
Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only model parameter updates, for iterative aggregation at the server. In this work, we explore the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent where the adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence. We explore a number of strategies to carry out this attack, starting with simple boosting of the malicious agent's update to overcome the effects of other agents' updates. To increase attack stealth, we propose an alternating minimization strategy, which alternately optimizes for the training loss and the adversarial objective. We follow up by using parameter estimation for the benign agents' updates to improve on attack success. Finally, we use a suite of interpretability techniques to generate visual explanations of model decisions for both benign and malicious models and show that the explanations are nearly visually indistinguishable. Our results indicate that even a highly constrained adversary can carry out model poisoning attacks while simultaneously maintaining stealth, thus highlighting the vulnerability of the federated learning setting and the need to develop effective defense strategies.
rejected-papers
This paper proposes model poisoning (poisoned parameter updates in a federated setting) in contrast to data poisoning (poisoned training data). It proposes an attack method and compares to baselines that are also proposed in the paper (there are no external baselines). While model poisoning is indeed an interesting direction to consider, I agree with reviewer concerns that the relation to data poisoning is not clearly addressed. In particular, any data poisoning attack could be used as a model poisoning attack (just provide whatever updates would be induced by the poisoned data), so there is no good excuse to not compare to the existing strong data poisoning attacks. One reviewer raised concerns about lack of theoretical guarantees but I do not agree with these concerns (the authors correctly point out in the rebuttal that this is not necessary for an attack-focused paper). I do feel there is room to improve the overall clarity/motivation (for instance, equation (1) is presented without any explanation and it is still not clear to me why this is the right formulation).
train
[ "HyeoRt-iCm", "B1eRXHbjAX", "SJxKFSbiAX", "rkeDRZZi07", "SJledo6Rnm", "HkxgZYYT2Q", "B1eu77MS27" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have uploaded a revised version of the paper. In particular, the revised version contains the following changes:\n\n1. The ‘related work’ paragraph has been updated to include the papers suggested by Reviewer 1 and to better place our work with respect to those papers.\n2. The Experimental Setup (Section 2.3) n...
[ -1, -1, -1, -1, 5, 4, 6 ]
[ -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2019_BkewX2C9tX", "HkxgZYYT2Q", "SJledo6Rnm", "B1eu77MS27", "iclr_2019_BkewX2C9tX", "iclr_2019_BkewX2C9tX", "iclr_2019_BkewX2C9tX" ]
iclr_2019_Bkf1tjR9KQ
DVOLVER: Efficient Pareto-Optimal Neural Network Architecture Search
Automatic search of neural network architectures is a standing research topic. In addition to the fact that it presents a faster alternative to hand-designed architectures, it can improve their efficiency and for instance generate Convolutional Neural Networks (CNN) adapted for mobile devices. In this paper, we present a multi-objective neural architecture search method to find a family of CNN models with the best accuracy and computational resources tradeoffs, in a search space inspired by the state-of-the-art findings in neural search. Our work, called Dvolver, evolves a population of architectures and iteratively improves an approximation of the optimal Pareto front. Applying Dvolver on the model accuracy and on the number of floating points operations as objective functions, we are able to find, in only 2.5 days, a set of competitive mobile models on ImageNet. Amongst these models, one architecture has the same Top-1 accuracy on ImageNet as NASNet-A mobile with 8% less floating point operations and another one has a Top-1 accuracy of 75.28% on ImageNet exceeding by 0.28% the best MobileNetV2 model for the same computational resources.
rejected-papers
The paper describes an architecture search method which optimises multiple objectives using a genetic algorithm. All reviewers agree on rejection due to limited novelty compared to the prior art; while the results are solid, they are not ground-breaking enough to justify acceptance based on results alone.
train
[ "rJeMDMpTT7", "rkgrzzaT67", "ryeNabaaa7", "ryeL_QZ9hm", "HJxO1b4Yhm", "r1xaACsOhm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your time and comments. Below we pull quotes from the review followed by responses.\n\n\"The authors admit that their work is incremental and a combination of existing work. Furthermore, they admit that Dong et al. (2018) is the closest related work, however, they do not compare to them in the experimen...
[ -1, -1, -1, 4, 5, 4 ]
[ -1, -1, -1, 4, 3, 4 ]
[ "r1xaACsOhm", "HJxO1b4Yhm", "ryeL_QZ9hm", "iclr_2019_Bkf1tjR9KQ", "iclr_2019_Bkf1tjR9KQ", "iclr_2019_Bkf1tjR9KQ" ]
iclr_2019_BkfPnoActQ
Towards Consistent Performance on Atari using Expert Demonstrations
Despite significant advances in the field of deep Reinforcement Learning (RL), today's algorithms still fail to learn human-level policies consistently over a set of diverse tasks such as Atari 2600 games. We identify three key challenges that any algorithm needs to master in order to perform well on all games: processing diverse reward distributions, reasoning over long time horizons, and exploring efficiently. In this paper, we propose an algorithm that addresses each of these challenges and is able to learn human-level policies on nearly all Atari games. A new transformed Bellman operator allows our algorithm to process rewards of varying densities and scales; an auxiliary temporal consistency loss allows us to train stably using a discount factor of 0.999 (instead of 0.99) extending the effective planning horizon by an order of magnitude; and we ease the exploration problem by using human demonstrations that guide the agent towards rewarding states. When tested on a set of 42 Atari games, our algorithm exceeds the performance of an average human on 40 games using a common set of hyperparameters.
rejected-papers
This paper proposes a combination of three techniques to improve the learning performance of Atari games. Good performance was shown in the paper with all three techniques together applied to DQN. However, it is hard to justify the integration of these techniques. It is also not clear why the specific decisions were made when combining them. More comprehensive experiments, such as a more systematic ablation study, are required to convince the benefits of individual components. Furthermore, it seems very hard to tell whether the improvement of existing approaches, such as Ape-X DQN, was from using the proposed techniques or a deeper architecture (Tables 1&2&4&5). Overall, this paper is not ready for publication.
test
[ "S1xQQlO2Tm", "BJezQ93FTQ", "rJeR7yRw6Q", "S1lsrHMvTQ", "BkeJsysI6X", "rkgcVWA-6X", "SylXBw1O27" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thanks the reviewer for their comments.\n\nConcerning the Bellman operator\nWe experimented with linear reward scaling extensively. Because there is no universal scaling constant that stabilizes training on all games, we did try various methods of deriving a scaling factor from demonstrations (mea...
[ -1, -1, 6, -1, 5, 7, 7 ]
[ -1, -1, 4, -1, 4, 1, 4 ]
[ "rJeR7yRw6Q", "BkeJsysI6X", "iclr_2019_BkfPnoActQ", "SylXBw1O27", "iclr_2019_BkfPnoActQ", "iclr_2019_BkfPnoActQ", "iclr_2019_BkfPnoActQ" ]
iclr_2019_BkfhZnC9t7
Zero-shot Learning for Speech Recognition with Universal Phonetic Model
There are more than 7,000 languages in the world, but due to the lack of training sets, only a small number of them have speech recognition systems. Multilingual speech recognition provides a solution if at least some audio training data is available. Often, however, phoneme inventories differ between the training languages and the target language, making this approach infeasible. In this work, we address the problem of building an acoustic model for languages with zero audio resources. Our model is able to recognize unseen phonemes in the target language, if only a small text corpus is available. We adopt the idea of zero-shot learning, and decompose phonemes into corresponding phonetic attributes such as vowel and consonant. Instead of predicting phonemes directly, we first predict distributions over phonetic attributes, and then compute phoneme distributions with a customized acoustic model. We extensively evaluate our English-trained model on 20 unseen languages, and find that on average, it achieves 9.9% better phone error rate over a traditional CTC based acoustic model trained on English.
rejected-papers
This paper studies the really hard problem of zero-shot learning in acoustic modeling for languages with limited resources, using data from English. Using a novel universal phonetic model, the authors show improvements compared to using an English model for 20 other languages in phone recognition quality. Strengths - Reviewers agree that the problem is an important one, and the presented ideas are novel. - Universal phonetic model to represent phones in any language is interesting. Weaknesses - The results are really weak, to the point that it is unclear how effective or general the techniques are. The work is an interesting first step, but is not developed enough to be accepted at this point. - The universal phonetic model being trained only in English might affect generalizability to languages that do not share phonetic characteristics. The authors agree partly, and argue that the method already addresses some issues since the model can already represent unseen phones. But, coupled with the high phone error rates, it is still unclear how appropriate the technique will be in addressing this issue. - Novelty: Although the idea of mapping phones to attributes, and using those for ASR is not novel (e.g., using articulatory features), application for zero-shot learning is. The work assumes availability of a small text corpus to learn phone-sequence distribution, so it is similar to other zero-resource approaches that assume some data (audio, as opposed to text) is available in the new language. This paper presents interesting first steps, but lacks sufficient experimental validation at this point. Therefore, the AE's recommendation is to reject the paper. I encourage the authors to improve and resubmit in the future.
train
[ "H1e4b3x9hQ", "BJgHrFCT0X", "HJlUHCjo0m", "Bkx9UfE5Am", "BylKaWEqAm", "BkgCYZV5Cm", "ByeYvTNOhQ", "HkgGv8a7A7", "SkxEjxzm0X", "rke7KlGmCQ", "rkxpSgGXAQ", "r1eK6OJKnQ" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper presents an approach to address the task on zero-shot learning for speech recognition, which consist of learning an acoustic model without any resources for a given language. The universal phonetic model is proposed, which learns phone attributes (instead of phone label), which allows to do prediction o...
[ 7, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 4 ]
[ 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "iclr_2019_BkfhZnC9t7", "HJlUHCjo0m", "SkxEjxzm0X", "ByeYvTNOhQ", "HkgGv8a7A7", "H1e4b3x9hQ", "iclr_2019_BkfhZnC9t7", "rkxpSgGXAQ", "ByeYvTNOhQ", "r1eK6OJKnQ", "H1e4b3x9hQ", "iclr_2019_BkfhZnC9t7" ]
iclr_2019_BkfxKj09Km
DiffraNet: Automatic Classification of Serial Crystallography Diffraction Patterns
Serial crystallography is the field of science that studies the structure and properties of crystals via diffraction patterns. In this paper, we introduce a new serial crystallography dataset generated through the use of a simulator; the synthetic images are labeled and they are both scalable and accurate. The resulting synthetic dataset is called DiffraNet, and it is composed of 25,000 512x512 grayscale labeled images. We explore several computer vision approaches for classification on DiffraNet such as standard feature extraction algorithms associated with Random Forests and Support Vector Machines but also an end-to-end CNN topology dubbed DeepFreak tailored to work on this new dataset. All implementations are publicly available and have been fine-tuned using off-the-shelf AutoML optimization tools for a fair comparison. Our best model achieves 98.5% accuracy. We believe that the DiffraNet dataset and its classification methods will have in the long term a positive impact in accelerating discoveries in many disciplines, including chemistry, geology, biology, materials science, metallurgy, and physics.
rejected-papers
Reviewer ratings varied radically (from a 3 to an 8). However, the reviewer rating the paper as 8 provided extremely little justification for their rating. The reviewers providing lower ratings gave more detailed reviews, and also engaged in discussion with the authors. Ultimately neither decided to champion the paper, and therefore, I cannot recommend acceptance.
train
[ "HJx4poD1y4", "ryg1gewcA7", "HylxLpXfR7", "BylGTpQMR7", "rkgT40XGR7", "SJghvpQf0Q", "r1gFixco3Q", "BkeHnOh53m", "SJeqFsAajm", "BJlULFh9n7" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the additional comments and suggestions.\n\nCrystallography experiments do not combine different structures, data is always collected and analyzed one structure at a time. We maintain this field-specific common practice in our dataset, hence, our synthetic and real images were generated u...
[ -1, -1, -1, -1, -1, -1, 5, 3, 8, -1 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4, -1 ]
[ "ryg1gewcA7", "BylGTpQMR7", "r1gFixco3Q", "BkeHnOh53m", "BJlULFh9n7", "HylxLpXfR7", "iclr_2019_BkfxKj09Km", "iclr_2019_BkfxKj09Km", "iclr_2019_BkfxKj09Km", "SJeqFsAajm" ]
iclr_2019_Bkg5aoAqKm
Fast Binary Functional Search on Graph
Large-scale search is an essential task in modern information systems. Numerous learning based models are proposed to capture semantic level similarity measures for searching or ranking. However, these measures are usually complicated and beyond metric distances. As Approximate Nearest Neighbor Search (ANNS) techniques have specifications on metric distances, efficient searching by advanced measures is still an open question. In this paper, we formulate large-scale search as a general task, Optimal Binary Functional Search (OBFS), which contains ANNS as special cases. We analyze existing OBFS methods' limitations and explain they are not applicable for complicated searching measures. We propose a flexible graph-based solution for OBFS, Search on L2 Graph (SL2G). SL2G approximates gradient descent in Euclidean space, with accessible conditions. Experiments demonstrate SL2G's efficiency in searching by advanced matching measures (i.e., Neural Network based measures).
rejected-papers
This paper proposes an Optimal Binary Functional Search (OBFS) algorithm for searching with general score functions, which generalizes the standard similarity measures based on Euclidean distances. This yields an extension of the classical approximate nearest neighbor search (ANNS). As observed by the reviewers, this work targets an important research direction. Unfortunately, the reviewers raised several concerns regarding the clarity and significance of the work. The authors provided a good rebuttal and addressed some concerns, but not to the degree that reviewers think it passes the bar of ICLR. We encourage the authors to further improve the work to address the key concerns.
train
[ "BklgOiC2jQ", "Bkxw766hR7", "H1giXF6hCX", "BkebSdD9aX", "S1gakuPcTX", "HkehwwDqTX", "SJxBPmP9pm", "Byx3R3E03m" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Post-rebuttal\n------------------\nI have read the rebuttal and I better understand the paper. Given that, I am going to raise my rating by one point for the following reason:\n- The manuscript presents a novel solution to a general problem and it is a valid solution. However, the solution is somewhat obvious, whi...
[ 5, -1, -1, -1, -1, -1, -1, 4 ]
[ 5, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_Bkg5aoAqKm", "BklgOiC2jQ", "SJxBPmP9pm", "BklgOiC2jQ", "BklgOiC2jQ", "BklgOiC2jQ", "Byx3R3E03m", "iclr_2019_Bkg5aoAqKm" ]
iclr_2019_Bkg93jC5YX
BLISS in Non-Isometric Embedding Spaces
Recent work on bilingual lexicon induction (BLI) has frequently depended either on aligned bilingual lexicons or on distribution matching, often with an assumption about the isometry of the two spaces. We propose a technique to quantitatively estimate this assumption of the isometry between two embedding spaces and empirically show that this assumption weakens as the languages in question become increasingly etymologically distant. We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS) --- a novel semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique. Our proposed method improves over strong baselines for 11 of 14 on the MUSE dataset, particularly for languages whose embedding spaces do not appear to be isometric. In addition, we also show that adding supervision stabilizes the learning procedure, and is effective even with minimal supervision.
rejected-papers
This paper is very close to the decision boundary and the reviewers were split about whether it should be accepted or not. The authors updated the paper with additional experiments as request by the reviewers. The area chair acknowledges that there is some novelty that leads to (moderate) empirical gains but does not see these as sufficient to push the paper over the very competitive acceptance threshold.
train
[ "ByxN1pkJkV", "HJxnJu56A7", "rJevx4SYR7", "HklcW7SKAm", "HJxo-KHKRQ", "S1lrtXHt0X", "SkeL2mEshQ", "SkxWL2xK2m", "SJeJN7oHhm" ]
[ "author", "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your question.\n\nAlthough optimizing over the CSLS loss (BLISS(R)) improves over optimizing over the cosine loss (BLISS(M)), we show that both these instantiations of our proposed semi-supervised framework outperform their supervised and unsupervised counterparts. The SotA results of BLISS(R) thus c...
[ -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "HJxnJu56A7", "HJxo-KHKRQ", "SJeJN7oHhm", "SkeL2mEshQ", "iclr_2019_Bkg93jC5YX", "SkxWL2xK2m", "iclr_2019_Bkg93jC5YX", "iclr_2019_Bkg93jC5YX", "iclr_2019_Bkg93jC5YX" ]
iclr_2019_BkgFqiAqFX
Recovering the Lowest Layer of Deep Networks with High Threshold Activations
Giving provable guarantees for learning neural networks is a core challenge of machine learning theory. Most prior work gives parameter recovery guarantees for one hidden layer networks, however, the networks used in practice have multiple non-linear layers. In this work, we show how we can strengthen such results to deeper networks -- we address the problem of uncovering the lowest layer in a deep neural network under the assumption that the lowest layer uses a high threshold before applying the activation, the upper network can be modeled as a well-behaved polynomial and the input distribution is gaussian.
rejected-papers
The reviewers reached a consensus that the paper is not quite ready for publication at ICLR. The main potential drawbacks include a) the exposition of the paper can be improved; b) it's not entirely clear that some of the assumptions (such as the threshold for the first layer, the polynomial approximation of higher layers) are meaningful, and it seems that the proof technique exploits heavily some of these assumptions and some of the key intermediate steps won't hold in practice. (see reviewer 3's comment for more details.) The authors clarify the writing and intuitions in the response, but overall the AC decided that the paper is not quite ready for publication at the moment.
val
[ "HJeVEP-WCm", "BygHGPWW0Q", "HJxIkwbbAQ", "BJed2wl9Tm", "S1gTl83L6X", "SJe9BFUY2m" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewer for their feedback. We would like to answer for each point:\n1) a) We agree that the threshold seems high however we note that this might be necessary for the following reason. Our work can be viewed in the SIGN activation case as learning a union of half spaces. Conditioning on...
[ -1, -1, -1, 4, 5, 4 ]
[ -1, -1, -1, 4, 3, 4 ]
[ "BJed2wl9Tm", "SJe9BFUY2m", "S1gTl83L6X", "iclr_2019_BkgFqiAqFX", "iclr_2019_BkgFqiAqFX", "iclr_2019_BkgFqiAqFX" ]
iclr_2019_BkgGmh09FQ
Understanding Opportunities for Efficiency in Single-image Super Resolution Networks
A successful application of convolutional architectures is to increase the resolution of single low-resolution images -- an image restoration task called super-resolution (SR). Naturally, SR is of value to resource constrained devices like mobile phones, electronic photograph frames and televisions to enhance image quality. However, SR demands perhaps the most extreme amounts of memory and compute operations of any mainstream vision task known today, preventing SR from being deployed to devices that require them. In this paper, we perform an early systematic study of system resource efficiency for SR, within the context of a variety of architectural and low-precision approaches originally developed for discriminative neural networks. We present a rich set of insights, representative SR architectures, and efficiency trade-offs; for example, the prioritization of ways to compress models to reach a specific memory and computation target and techniques to compact SR models so that they are suitable for DSPs and FPGAs. As a result of doing so, we manage to achieve performance better than or comparable to previous models in the existing literature, highlighting the practicality of using existing efficiency techniques in SR tasks. Collectively, we believe these results provide the foundation for further research into the little explored area of resource efficiency for SR.
rejected-papers
This paper targets improving the computational efficiency of the super-resolution task. Reviewers agree that this paper lacks technical contribution and therefore do not recommend acceptance.
train
[ "BJl1kj0FCQ", "HklazuAY0X", "H1lUmvRFRX", "ryeByNe1aX", "HJlpdIWA37", "BJliGqDS2Q" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for identifying the missing gaps from the paper. We have revised the paper to include more visual comparisons and make our objectives and writing clearer. \n\nThe focus of the paper is to understand the empirical effects of applying and comparing existing techniques that are popular in image discriminative ...
[ -1, -1, -1, 4, 5, 3 ]
[ -1, -1, -1, 5, 4, 5 ]
[ "BJliGqDS2Q", "HJlpdIWA37", "ryeByNe1aX", "iclr_2019_BkgGmh09FQ", "iclr_2019_BkgGmh09FQ", "iclr_2019_BkgGmh09FQ" ]
iclr_2019_BkgVx3A9Km
A More Globally Accurate Dimensionality Reduction Method Using Triplets
We first show that the commonly used dimensionality reduction (DR) methods such as t-SNE and LargeVis poorly capture the global structure of the data in the low dimensional embedding. We show this via a number of tests for the DR methods that can be easily applied by any practitioner to the dataset at hand. Surprisingly enough, t-SNE performs the best w.r.t. the commonly used measures that reward the local neighborhood accuracy such as precision-recall while having the worst performance in our tests for global structure. We then contrast the performance of these two DR methods against our new method called TriMap. The main idea behind TriMap is to capture higher orders of structure with triplet information (instead of pairwise information used by t-SNE and LargeVis), and to minimize a robust loss function for satisfying the chosen triplets. We provide compelling experimental evidence on large natural datasets for the clear advantage of the TriMap DR results. Like LargeVis, TriMap is fast and provides comparable runtime on large datasets.
rejected-papers
Dear authors, The reviewers all appreciated your goal of improving dimensionality reduction techniques. This is a field which does not enjoy the popularity it once did but remains nonetheless important. They also appreciated the novel loss and the use of triplets to get the global structure. However, the paper lacks some guidance. In particular, it oscillates between showing qualitative results (robustness to outliers, "nice" visualizations) and quantitative ones (running time, classification performance). I agree with the reviewers that the quantitative ones should have used the same preprocessing for t-SNE and TriMap (either PCA or no PCA), regardless of the current implementation in software tools. Given that the quantitative results are not that impressive, may I suggest focusing on the qualitative ones for a resubmission? The robustness of the embeddings to the addition or removal of a few points is definitely interesting and worth further investigation, optionally with a corresponding metric.
train
[ "HylVQ3-Gy4", "SJluAqgMkN", "ryeAsHY5hQ", "SkgJdDNjRQ", "B1e323mPR7", "rJxzobq32m", "HkgnsCWD07", "HygXUL-R6m", "rJgNfUWCpm", "SkxHCrZAam", "HklpbWfWhX" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Please note that we carefully addressed all the concerns raised in your initial review. In the revised version, we showed that other DR methods (PHATE, UMAP, and STE) fail at least some of the global tests discussed in our paper. Note that we did not provide thorough comparisons with PHATE and UMAP because these t...
[ -1, -1, 5, -1, -1, 6, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, -1, 5, -1, -1, -1, -1, 3 ]
[ "SJluAqgMkN", "iclr_2019_BkgVx3A9Km", "iclr_2019_BkgVx3A9Km", "iclr_2019_BkgVx3A9Km", "HkgnsCWD07", "iclr_2019_BkgVx3A9Km", "HygXUL-R6m", "iclr_2019_BkgVx3A9Km", "iclr_2019_BkgVx3A9Km", "iclr_2019_BkgVx3A9Km", "iclr_2019_BkgVx3A9Km" ]
iclr_2019_BkgYIiAcFQ
DecayNet: A Study on the Cell States of Long Short Term Memories
It is unclear whether the extensively applied long short-term memory (LSTM) is an optimised architecture for recurrent neural networks. Its complicated design makes the network hard to analyse, and its utility on real-world data is not immediately clear. This paper studies LSTMs as systems of difference equations, and takes a theoretical mathematical approach to study consecutive transitions in network variables. Our study shows that the cell state propagation is predominantly controlled by the forget gate. Hence, we introduce DecayNets, LSTMs with monotonically decreasing forget gates, to calibrate cell state dynamics. With recurrent batch normalisation, DecayNet outperforms the previous state of the art for permuted sequential MNIST. The Decay mechanism is also beneficial for LSTM-based optimisers, and decreases optimisee neural network losses more rapidly. Edit status: Revised paper.
rejected-papers
There is a disagreement among the reviewers, and I am siding with the two reviewers (R1 and R3). I agree with R3 that it is rather unconventional to pick learning-to-learn to experiment with modelling variable-length sequences (it is not as though no other task has this characteristic, e.g., language modelling, translation, ...).
train
[ "Byg8526MgE", "SkgXg26fgV", "HyetijHnCQ", "SklBChmu0m", "H1xuhhQOC7", "BkxB5nmO0m", "ByxYO5XuA7", "HygFxYm_0X", "r1lmr_XO0m", "HkgwgV2q2X", "H1xL3Xuv27", "SyeE1XoaiX", "HJlLQ-0Tnm", "rJgXoShc27" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "As I said, figure 2 is good for providing conceptual observations. But it is also important to see what happens in the real cases. The authors mentioned the figure 6 in the appendix has shown LSTM in real cases have chaotic behaviors in terms of the forget gates, which leads to more motivation of a figure showing ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 4, 4, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, -1, -1 ]
[ "H1xuhhQOC7", "SklBChmu0m", "ByxYO5XuA7", "SyeE1XoaiX", "SyeE1XoaiX", "SyeE1XoaiX", "H1xL3Xuv27", "HkgwgV2q2X", "iclr_2019_BkgYIiAcFQ", "iclr_2019_BkgYIiAcFQ", "iclr_2019_BkgYIiAcFQ", "iclr_2019_BkgYIiAcFQ", "rJgXoShc27", "H1xL3Xuv27" ]
iclr_2019_BkgiM20cYX
A Self-Supervised Method for Mapping Human Instructions to Robot Policies
In this paper, we propose a modular approach which separates the instruction-to-action mapping procedure into two separate stages. The two stages are bridged via an intermediate representation called a goal, which stands for the result after a robot performs a specific task. The first stage maps an input instruction to a goal, while the second stage maps the goal to an appropriate policy selected from a set of robot policies. The policy is selected with an aim to guide the robot to reach the goal as close as possible. We implement the above two stages as a framework consisting of two distinct modules: an instruction-goal mapping module and a goal-policy mapping module. Given a human instruction in the evaluation phase, the instruction-goal mapping module first translates the instruction to a robot-interpretable goal. Once a goal is derived by the instruction-goal mapping module, the goal-policy mapping module then follows up to search through the goal-policy pairs to look for policy to be mapped by the instruction. Our experimental results show that the proposed method is able to learn an effective instruction-to-action mapping procedure in an environment with a given instruction set more efficiently than the baselines. In addition to the impressive data-efficiency, the results also show that our method can be adapted to a new instruction set and a new robot action space much faster than the baselines. The evidence suggests that our modular approach does lead to better adaptability and efficiency.
rejected-papers
The paper proposes a novel approach to interfacing robots with humans, or rather vice versa: by mapping instructions to goals, and goals to robot actions. A possibly nice idea, and possibly good for more efficient learning. But the technical realisation is less strong than the initial idea. The original idea merits a good evaluation, and the authors are strongly encouraged to follow up on this idea and realise it, towards a stronger publication. It should be noted that the authors refrained from using the rebuttal phase.
train
[ "r1xT4mNRnm", "HJlkE0Bqh7", "rJg57h2UhQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a modular approach to the problem of mapping instructions to robot actions. The first of two modules is responsible for learning a goal embedding of a given instruction using a learned distance function. The second module is responsible for mapping goals from this embedding space to control poli...
[ 4, 3, 2 ]
[ 5, 5, 4 ]
[ "iclr_2019_BkgiM20cYX", "iclr_2019_BkgiM20cYX", "iclr_2019_BkgiM20cYX" ]
iclr_2019_BkgosiRcKm
Deep Recurrent Gaussian Process with Variational Sparse Spectrum Approximation
Modeling sequential data has become more and more important in practice. Some applications are autonomous driving, virtual sensors and weather forecasting. To model such systems, so-called recurrent models are frequently used. In this paper we introduce several new Deep Recurrent Gaussian Process (DRGP) models based on the Sparse Spectrum Gaussian Process (SSGP) and the improved version, called Variational Sparse Spectrum Gaussian Process (VSSGP). We follow the recurrent structure given by an existing DRGP based on a specific variational sparse Nyström approximation, the recurrent Gaussian Process (RGP). Similar to previous work, we also variationally integrate out the input-space and hence can propagate uncertainty through the Gaussian Process (GP) layers. Our approach can deal with a larger class of covariance functions than the RGP, because its spectral nature allows variational integration in all stationary cases. Furthermore, we combine the (Variational) Sparse Spectrum ((V)SS) approximations with a well-known inducing-input regularization framework. For the DRGP extension of these combined approximations and the simple (V)SS approximations an optimal variational distribution exists. We improve over current state-of-the-art methods in prediction accuracy for the experimental data-sets used for their evaluation and introduce a new data-set for engine control, named Emission.
rejected-papers
This paper is concerned with combining past approximation methods to obtain a variant of Deep Recurrent GPs. While this variant is new, 2/3 reviewers make very overlapping points about this extension being obtained from a straightforward combination of previous ideas. Furthermore, R3 is not convinced that the approach is well motivated, beyond “filling the gap” in the literature. All reviewers also pointed out that the paper is very hard to read. The authors have improved the manuscript during the rebuttal, but the AC believes that the paper is still written in an unnecessarily complicated way. Overall the AC believes that this paper needs some more work, specifically in (a) improving its presentation (b) providing more technical insights about the methods (as suggested by R2 and R3), which could be a means of boosting the novelty.
train
[ "SJgR7X0th7", "rJg6mhJi07", "ryeukb2_07", "Hyxwdyn_C7", "S1lc23s_Am", "BJgt7uoO07", "rJgI340tn7", "HJeVgHjqh7" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Overall Score: 7/10.\nConfidence Score: 7/10.\n\nDetailed Comments: This paper introduces various Deep Recurrent Gaussian Process (DRGP) models based on the Sparse Spectrum Gaussian Process (SSGP) models and the Variational Sparse Spectrum Gaussian Process (VSSGP) models. This is a good paper and proposed models a...
[ 7, -1, -1, -1, -1, -1, 5, 5 ]
[ 2, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_BkgosiRcKm", "iclr_2019_BkgosiRcKm", "SJgR7X0th7", "rJgI340tn7", "HJeVgHjqh7", "iclr_2019_BkgosiRcKm", "iclr_2019_BkgosiRcKm", "iclr_2019_BkgosiRcKm" ]
iclr_2019_Bkl2SjCcKQ
TequilaGAN: How To Easily Identify GAN Samples
In this paper we show strategies to easily identify fake samples generated with the Generative Adversarial Network framework. One strategy is based on the statistical analysis and comparison of raw pixel values and features extracted from them. The other strategy learns formal specifications from the real data and shows that fake samples violate the specifications of the real data. We show that fake samples produced with GANs have a universal signature that can be used to identify fake samples. We provide results on MNIST, CIFAR10, music and speech data.
rejected-papers
The paper points out statistical properties of GAN samples which allow their identification as synthetic. The paper was praised by one reviewer as well-written, easy to follow, and addressing an interesting topic. Another added that the authors did an excellent job of "probing into different statistical perspectives", and praised the fact that they did not confine their investigation to images. Two reviewers levelled the criticism that various properties discovered are not surprising given the loss functions and associated metrics as well as the inductive biases of continuous-valued generator networks. Tests employed were criticized as ad hoc, and reviewers felt that their generality was limited given their reduced sensitivity on certain modalities. (While Figure 10b is raised by the authors several times in the discussion, and the test statistics of samples are noted to be closer to the test data than to the random baseline, the test falsely rejects the null [p-value ~= 0.0] for non-synthetic test data.) I would encourage the authors to continue this line of inquiry, as it is overall agreed to be an interesting topic of relevance and increasing importance; however, based on the criticisms of the reviewers and the content of the ensuing discussion, I do not recommend acceptance at this time.
train
[ "Hkl96tvtn7", "BJlPs8Ru0m", "r1e2wCPeAm", "rkeWSRDxRQ", "H1x3bCPx0m", "HygqtR1O6m", "HkgFFTLA3m", "r1lHsWW92m" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The primary purpose of this paper, from what I understand, is to show that fake samples created with common generative adversarial network (GAN) implementations are easily identified using various statistical techniques. This can potentially be useful in helping to identify artificial samples in the real world.\n\...
[ 5, -1, -1, -1, -1, -1, 4, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2019_Bkl2SjCcKQ", "H1x3bCPx0m", "r1lHsWW92m", "Hkl96tvtn7", "HkgFFTLA3m", "iclr_2019_Bkl2SjCcKQ", "iclr_2019_Bkl2SjCcKQ", "iclr_2019_Bkl2SjCcKQ" ]
iclr_2019_Bkl87h09FX
Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling
Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018). This paper contributes the first large-scale systematic study comparing different pretraining tasks in this context, both as complements to language modeling and as potential alternatives. The primary results of the study support the use of language modeling as a pretraining task and set a new state of the art among comparable models using multitask learning with language models. However, a closer look at these results reveals worryingly strong baselines and strikingly varied results across target tasks, suggesting that the widely-used paradigm of pretraining and freezing sentence encoders may not be an ideal platform for further work.
rejected-papers
This paper presents an extensive empirical study of sentence-level pre-training. The paper compares pre-trained language models to other potential alternative pre-training options, and concludes that while pre-trained language models are generally stronger than other alternatives, the robustness and generality of the currently available method is less than ideal, at least with respect to ELMo-based pretraining. Pros: The paper presents an extensive empirical study that offers new insights on pre-trained language models with respect to a variety of sentence-level tasks. Cons: The primary contribution of this paper is empirical, and its technical novelty is relatively weak. Also, the insights are based solely on ELMo, which may limit their empirical impact. The reviews were generally positive but only marginally so, which reflects that the insights are interesting but not overwhelmingly interesting. None of these is a deal-breaker per se, but the paper does not provide sufficiently strong novelty, whether based on insights or otherwise, relative to other papers being considered for acceptance. Verdict: Leaning toward reject due to relatively weak novelty and empirical impact. Additional note on the final decision: The insights provided by the paper are valuable, thus the paper was originally recommended for an accept. However, during the calibration process across all areas, it became evident that we cannot accept all valuable papers, each presenting different types of hard work and novel contributions. Consequently, some papers with mostly positive (but marginally positive) reviews could not be included in the final cut, despite their unique values, hard work, and novel contributions.
train
[ "r1xYl8rzaQ", "BJebTrSGT7", "S1lCFrBzpm", "r1eDs0UkaQ", "SJeRLFcth7", "HJlVdg5FhX" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks! We agree with your overall assessment. (I guess that’s not surprising...)\n\nThe single-task baselines are a bit confusing, and we’ll clarify that point in an update shortly. \n\nAs you describe, we pretrain a model on the same single task that we later evaluate it on. The tricky point here is that we foll...
[ -1, -1, -1, 5, 7, 8 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "HJlVdg5FhX", "SJeRLFcth7", "r1eDs0UkaQ", "iclr_2019_Bkl87h09FX", "iclr_2019_Bkl87h09FX", "iclr_2019_Bkl87h09FX" ]
iclr_2019_BklACjAqFm
Successor Uncertainties: exploration and uncertainty in temporal difference learning
We consider the problem of balancing exploration and exploitation in sequential decision making problems. This trade-off naturally lends itself to probabilistic modelling. For a probabilistic approach to be effective, considering uncertainty about all immediate and long-term consequences of agent's actions is vital. An estimate of such uncertainty can be leveraged to guide exploration even in situations where the agent needs to perform a potentially long sequence of actions before reaching an under-explored area of the environment. This observation was made by the authors of the Uncertainty Bellman Equation model (O'Donoghue et al., 2018), which explicitly considers full marginal uncertainty for each decision the agent faces. However, their model still considers a fully factorised posterior over the consequences of each action, meaning that dependencies vital for correlated long-term exploration are ignored. We go a step beyond and develop Successor Uncertainties, a probabilistic model for the state-action value function of a Markov Decision Process with a non-factorised covariance. We demonstrate how this leads to greatly improved performance on classic tabular exploration benchmarks and show strong performance of our method on a subset of Atari baselines. Overall, Successor Uncertainties provides a better probabilistic model for temporal difference learning at a similar computational cost to its predecessors.
rejected-papers
Pros: - interesting algorithmic idea for using successor features to propagate uncertainty for use in exploration - clarity Cons: - moderate novelty - initially only simplistic experiments (later complemented with Atari results) - initially missing baseline comparisons - no regret-based analysis - questionable soundness because uncertainty is not guaranteed to go down All the reviewers found the initial submission to be insufficient for acceptance, and the one reviewer who read the rebuttal/revision did not change their mind, despite the addition of some large-scale results (Atari).
train
[ "S1x0anUu14", "B1l-shUuyV", "H1xKFh8uyN", "rylgFG35A7", "B1gk_znqA7", "rJe0UMn9RQ", "Hke6BznqAQ", "BylrGMh5RQ", "rJlwFp0g6X", "HkxMSaBJ67", "rke8bVNonm" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear AnonReviewer2,\n\nWe’ve updated the paper, incorporating a great deal of your feedback and improving the experiments. To make it easy to check the changes, our rebuttal contains references to the sections where we address each point. When you have a moment to look over those changes, could you please let us k...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "rke8bVNonm", "HkxMSaBJ67", "rJlwFp0g6X", "BylrGMh5RQ", "BylrGMh5RQ", "BylrGMh5RQ", "BylrGMh5RQ", "iclr_2019_BklACjAqFm", "iclr_2019_BklACjAqFm", "iclr_2019_BklACjAqFm", "iclr_2019_BklACjAqFm" ]
iclr_2019_BklAEsR5t7
Large-scale classification of structured objects using a CRF with deep class embedding
This paper presents a novel deep learning architecture for classifying structured objects in ultrafine-grained datasets, where classes may not be clearly distinguishable by their appearance but rather by their context. We model sequences of images as linear-chain CRFs, and jointly learn the parameters from both local-visual features and neighboring class information. The visual features are learned by convolutional layers, whereas class-structure information is reparametrized by factorizing the CRF pairwise potential matrix. This forms a context-based semantic similarity space, learned alongside the visual similarities, and dramatically increases the learning capacity of contextual information. This new parametrization, however, forms a highly nonlinear objective function which is challenging to optimize. To overcome this, we develop a novel surrogate likelihood which allows for a local likelihood approximation of the original CRF with integrated batch-normalization. This model overcomes the difficulties of existing CRF methods to learn the contextual relationships thoroughly when there is a large number of classes and the data is sparse. The performance of the proposed method is illustrated on a huge dataset that contains images of retail-store product displays, and shows significantly improved results compared to linear CRF parametrization, unnormalized likelihood optimization, and RNN modeling.
rejected-papers
The paper addresses the problem of large scale fine-grained classification by estimating pairwise potentials in a CRF model. The reviewers believe that the paper has some weaknesses including (1) the motivation for approximate learning is not clear (2) the approximate objective is not well studied and (3) the experiments are not convincing. The authors did not submit a rebuttal. I encourage the authors to take the feedback into account to improve the paper.
val
[ "Hkxa4xxWhQ", "Skx1bH993X", "S1xUwr7LhX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper introduces a new dataset consisting of images of various objects placed on store shelves that are labeled with object boundaries and what are described as “ultrafine-grained” class labels. The accompanying task is to predict the labels of each object given the individual images as well as thei...
[ 4, 3, 3 ]
[ 4, 3, 5 ]
[ "iclr_2019_BklAEsR5t7", "iclr_2019_BklAEsR5t7", "iclr_2019_BklAEsR5t7" ]
iclr_2019_BklKFo09YX
Mol-CycleGAN - a generative model for molecular optimization
Designing a molecule with desired properties is one of the biggest challenges in drug development, as it requires optimization of chemical compound structures with respect to many complex properties. To augment the compound design process we introduce Mol-CycleGAN -- a CycleGAN-based model that generates optimized compounds with a chemical scaffold of interest. Namely, given a molecule our model generates a structurally similar one with an optimized value of the considered property. We evaluate the performance of the model on selected optimization objectives related to structural properties (presence of halogen groups, number of aromatic rings) and to a physicochemical property (penalized logP). In the task of optimization of penalized logP of drug-like molecules our model significantly outperforms previous results.
rejected-papers
This paper introduces a variant of the CycleGAN designed to optimize molecular graphs to achieve a desired quality. The work is reasonably clear and sensible; however, it is of limited technical novelty, since it mainly combines two existing techniques. Overall, its specificity and incremental nature mean it does not meet the bar.
train
[ "Hkxd3QyWTQ", "Byg3Zy8cn7", "HJe11ceq2X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents an approach for optimizing molecular properties, based on the application of CycleGANs to variational autoencoders for molecules. A recently proposed domain-specific VAE called Junction Tree VAE (JT-VAE) is employed. The optimization of molecules is an important problem for example in drug disco...
[ 4, 4, 4 ]
[ 5, 4, 3 ]
[ "iclr_2019_BklKFo09YX", "iclr_2019_BklKFo09YX", "iclr_2019_BklKFo09YX" ]
iclr_2019_BklMYjC9FQ
microGAN: Promoting Variety through Microbatch Discrimination
We propose to tackle the mode collapse problem in generative adversarial networks (GANs) by using multiple discriminators and assigning a different portion of each minibatch, called microbatch, to each discriminator. We gradually change each discriminator's task from distinguishing between real and fake samples to discriminating samples coming from inside or outside its assigned microbatch by using a diversity parameter α. The generator is then forced to promote variety in each minibatch to make the microbatch discrimination harder to achieve by each discriminator. Thus, all models in our framework benefit from having variety in the generated set to reduce their respective losses. We show evidence that our solution promotes sample diversity since early training stages on multiple datasets.
rejected-papers
The paper proposes an approach to remedying mode collapse problem in GANs. This approach relies on using multiple discriminators and assigning a different portion of each minibatch to each discriminator. + preventing mode collapse in GAN training is an important problem - the exact motivation for the proposed techniques is not fully fleshed out - the evaluation and baselines used are lacking
val
[ "HyxXYh5T2Q", "H1etimRThX", "BketXjer2Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a multi-discriminator based extension to GAN training. Specifically, it proposes to split a minibatch of samples into further smaller minibatches (microbatches) and train different discriminators on each. The authors state that \"since each D is trained with different fake and real samples, we e...
[ 3, 3, 6 ]
[ 3, 3, 3 ]
[ "iclr_2019_BklMYjC9FQ", "iclr_2019_BklMYjC9FQ", "iclr_2019_BklMYjC9FQ" ]
iclr_2019_BklUAoAcY7
Unsupervised Learning of Sentence Representations Using Sequence Consistency
Computing universal distributed representations of sentences is a fundamental task in natural language processing. We propose ConsSent, a simple yet surprisingly powerful unsupervised method to learn such representations by enforcing consistency constraints on sequences of tokens. We consider two classes of such constraints – sequences that form a sentence and between two sequences that form a sentence when merged. We learn sentence encoders by training them to distinguish between consistent and inconsistent examples, the latter being generated by randomly perturbing consistent examples in six different ways. Extensive evaluation on several transfer learning and linguistic probing tasks shows improved performance over strong unsupervised and supervised baselines, substantially surpassing them in several cases. Our best results are achieved by training sentence encoders in a multitask setting and by an ensemble of encoders trained on the individual tasks.
rejected-papers
The overall view of the reviewers is that the paper is not quite good enough as it stands. The reviewers also appreciate the contributions, so taking the comments into account and resubmitting elsewhere is encouraged.
train
[ "rJg7d-s_R7", "SkxJVv1UC7", "BJg5aByUR7", "ByxnKY1IR7", "SJgqvKJ8CQ", "S1gZCvJLAQ", "BklExIJ8RX", "ByeA6nrc3Q", "H1gNUpxc2X", "rylUx2jt27" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have uploaded a revised and improved version of the paper, incorporating most of the suggestions made by the reviewers. We have already given detailed comments below, but to recap, the revised version contains the following changes\n\n1. Results for a multitask trained encoder, which gives the best average perf...
[ -1, -1, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2019_BklUAoAcY7", "H1gNUpxc2X", "ByeA6nrc3Q", "SJgqvKJ8CQ", "rylUx2jt27", "SkxJVv1UC7", "BJg5aByUR7", "iclr_2019_BklUAoAcY7", "iclr_2019_BklUAoAcY7", "iclr_2019_BklUAoAcY7" ]
iclr_2019_BklpOo09tQ
EFFICIENT TWO-STEP ADVERSARIAL DEFENSE FOR DEEP NEURAL NETWORKS
In recent years, deep neural networks have demonstrated outstanding performance in many machine learning tasks. However, researchers have discovered that these state-of-the-art models are vulnerable to adversarial examples: legitimate examples added by small perturbations which are unnoticeable to human eyes. Adversarial training, which augments the training data with adversarial examples during the training process, is a well known defense to improve the robustness of the model against adversarial attacks. However, this robustness is only effective to the same attack method used for adversarial training. Madry et al. (2017) suggest that effectiveness of iterative multi-step adversarial attacks and particularly that projected gradient descent (PGD) may be considered the universal first order adversary and applying the adversarial training with PGD implies resistance against many other first order attacks. However, the computational cost of the adversarial training with PGD and other multi-step adversarial examples is much higher than that of the adversarial training with other simpler attack techniques. In this paper, we show how strong adversarial examples can be generated only at a cost similar to that of two runs of the fast gradient sign method (FGSM), allowing defense against adversarial attacks with a robustness level comparable to that of the adversarial training with multi-step adversarial examples. We empirically demonstrate the effectiveness of the proposed two-step defense approach against different attack methods and its improvements over existing defense strategies.
rejected-papers
While the proposed method is novel, the evaluation is not convincing. In particular, the datasets and models used are small. Susceptibility to adversarial examples is tightly related to dimensionality. The study could benefit from larger datasets (e.g., ImageNet).
train
[ "ByeVD2kjam", "B1l7bFwG6X", "HkeoqNhshQ", "Syey_U8927", "Hygv00zHnm", "H1lXfGukhQ" ]
[ "official_reviewer", "public", "official_reviewer", "official_reviewer", "public", "public" ]
[ "Summary. The authors propose a novel adversarial training method, e2SAD, that relies on a two-step process for generating sets of two training adversarial samples for each clean training sample. The first step is a classical FGSM that yields the first adversarial sample. The second adversarial sample is calculated...
[ 5, -1, 6, 7, -1, -1 ]
[ 4, -1, 3, 3, -1, -1 ]
[ "iclr_2019_BklpOo09tQ", "Syey_U8927", "iclr_2019_BklpOo09tQ", "iclr_2019_BklpOo09tQ", "iclr_2019_BklpOo09tQ", "iclr_2019_BklpOo09tQ" ]
iclr_2019_Bklzkh0qFm
Relational Graph Attention Networks
We investigate Relational Graph Attention Networks, a class of models that extends non-relational graph attention mechanisms to incorporate relational information, opening up these methods to a wider variety of problems. A thorough evaluation of these models is performed, and comparisons are made against established benchmarks. To provide a meaningful comparison, we retrain Relational Graph Convolutional Networks, the spectral counterpart of Relational Graph Attention Networks, and evaluate them under the same conditions. We find that Relational Graph Attention Networks perform worse than anticipated, although some configurations are marginally beneficial for modelling molecular properties. We provide insights as to why this may be, and suggest both modifications to evaluation strategies, as well as directions to investigate for future work.
rejected-papers
The authors propose an architecture for learning and predicting graphs with relations between nodes. The approach is a combination of recent research efforts into Graph Attention Networks and Relational Graph Convolutional Networks. The authors are commended for their clear and direct writing and presentation, their honest claims, and their empirical setup. However, the paper simply doesn't have much to offer to the community, since the algorithmic contributions are marginal and the results unimpressive. While the authors justify the submission in terms of the difficult implementation and the extensive experiments, this is not enough to support its publication at a top conference. Rather, this could be a technical report.
test
[ "S1eX1lnFCm", "B1lK2M8jnX", "HJes1ogi67", "ryxiFneiam", "Bkx-xQ1qpX", "BkemLDvAn7", "rkx6mDnd37", "B1en9VfK57", "rygYzASP5m", "rylu9f7W97", "S1e2bT5k57", "rkxFodcJ5X", "HJxMHja3Y7" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public" ]
[ "Here is a summary of the modifications we have made to the paper: \n- We have performed additional experiments using multiplicative attention. On the RDF tasks, we find that ARGAT is best paired with multiplicative attention, whereas WIRGAT is best paired with additive attention. On the Tox21 task we observe a muc...
[ -1, 4, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ -1, 5, -1, -1, -1, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_Bklzkh0qFm", "iclr_2019_Bklzkh0qFm", "B1lK2M8jnX", "rkx6mDnd37", "BkemLDvAn7", "iclr_2019_Bklzkh0qFm", "iclr_2019_Bklzkh0qFm", "rygYzASP5m", "iclr_2019_Bklzkh0qFm", "S1e2bT5k57", "iclr_2019_Bklzkh0qFm", "HJxMHja3Y7", "iclr_2019_Bklzkh0qFm" ]
iclr_2019_Bkx8OiRcYX
Countdown Regression: Sharp and Calibrated Survival Predictions
Personalized probabilistic forecasts of time to event (such as mortality) can be crucial in decision making, especially in the clinical setting. Inspired by ideas from the meteorology literature, we approach this problem through the paradigm of maximizing sharpness of prediction distributions, subject to calibration. In regression problems, it has been shown that optimizing the continuous ranked probability score (CRPS) instead of maximum likelihood leads to sharper prediction distributions while maintaining calibration. We introduce the Survival-CRPS, a generalization of the CRPS to the time to event setting, and present right-censored and interval-censored variants. To holistically evaluate the quality of predicted distributions over time to event, we present the scale agnostic Survival-AUPRC evaluation metric, an analog to area under the precision-recall curve. We apply these ideas by building a recurrent neural network for mortality prediction, using an Electronic Health Record dataset covering millions of patients. We demonstrate significant benefits in models trained by the Survival-CRPS objective instead of maximum likelihood.
rejected-papers
All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance.
test
[ "ByxE359_A7", "rkgN55c_Rm", "H1g8HqcOCm", "Syxtf99d0Q", "HJlkck0i6m", "rygHwa3up7", "Sklb5XIqhX", "HJl8ZqBRhm" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear AnonReviewer1,\n\nThank you for taking the time to review our work and offer constructive feedback. We hope to address some of your concerns in the following reply.\n\nYou mention that it’s unclear how the RNN taking EHR patient records is related to the Survival CRPS scoring rule. To clarify, we use the same...
[ -1, -1, -1, -1, 4, 4, 5, 4 ]
[ -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "Sklb5XIqhX", "HJl8ZqBRhm", "rygHwa3up7", "HJlkck0i6m", "iclr_2019_Bkx8OiRcYX", "iclr_2019_Bkx8OiRcYX", "iclr_2019_Bkx8OiRcYX", "iclr_2019_Bkx8OiRcYX" ]
iclr_2019_BkxAUjRqY7
An Information-Theoretic Metric of Transferability for Task Transfer Learning
An important question in task transfer learning is to determine task transferability, i.e. given a common input domain, estimating to what extent representations learned from a source task can help in learning a target task. Typically, transferability is either measured experimentally or inferred through task relatedness, which is often defined without a clear operational meaning. In this paper, we present a novel metric, H-score, an easily-computable evaluation function that estimates the performance of transferred representations from one task to another in classification problems. Inspired by a principled information theoretic approach, H-score has a direct connection to the asymptotic error probability of the decision function based on the transferred feature. This formulation of transferability can further be used to select a suitable set of source tasks in task transfer learning problems or to devise efficient transfer learning policies. Experiments using both synthetic and real image data show that not only is our formulation of transferability meaningful in practice, but it can also generalize to inference problems beyond classification, such as recognition tasks for 3D indoor-scene understanding.
rejected-papers
The paper proposes an information theoretic quantity to measure the performance of transferred representations, with an operational appeal, easier computation, and empirical validation. The relation of the proposed measure to test accuracy is not considered. The operational meaning holds exactly only in the special case of linear fine tuning layers. The paper seems to import heavily from previous works. Reviewers found it difficult to understand whether the proposed method makes sense, noted that the computation of the relevant quantities might be difficult in general, and observed that the comparison with mutual information was not clear. The revision addresses these points, adding experiments and explanations. Yet, none of the reviewers gives the paper a rating beyond marginally above acceptance threshold. All reviewers found the paper interesting and relevant, but none of them found the paper particularly strong. This is a borderline case of a sound and promising paper, which nonetheless seems to be missing a clear selling point. I would suggest that developing the program laid out in the conclusions could make the contributions more convincing, in particular the development of more scalable algorithms and the application of the proposed measure to the design of hierarchies for transfer learning.
train
[ "rJxSvK_i0m", "ByllmBZrAX", "HJxuLd-SR7", "ryeF6vZBRm", "HklexDWrR7", "rJl5qONCnX", "BklF6M1c2X", "BygTt4TK2Q" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have recently performed new experiments to validate H-score as a transferability metric in NLP tasks. In particular, we tested the performance of 3 existing unsupervised word/sentence embeddings of the same dimension(fastText BOW, Globe BOW, InferSent) for 2 sentence classification tasks (CR and SUBJ) presented...
[ -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2019_BkxAUjRqY7", "iclr_2019_BkxAUjRqY7", "BygTt4TK2Q", "BklF6M1c2X", "rJl5qONCnX", "iclr_2019_BkxAUjRqY7", "iclr_2019_BkxAUjRqY7", "iclr_2019_BkxAUjRqY7" ]
iclr_2019_BkxSHsC5FQ
SupportNet: solving catastrophic forgetting in class incremental learning with support data
A plain well-trained deep learning model often does not have the ability to learn new knowledge without forgetting the previously learned knowledge, which is known as catastrophic forgetting. Here we propose a novel method, SupportNet, to efficiently and effectively solve the catastrophic forgetting problem in the class incremental learning scenario. SupportNet combines the strength of deep learning and support vector machine (SVM), where SVM is used to identify the support data from the old data, which are fed to the deep learning model together with the new data for further training so that the model can review the essential information of the old data when learning the new information. Two powerful consolidation regularizers are applied to stabilize the learned representation and ensure the robustness of the learned model. We validate our method with comprehensive experiments on various tasks, which show that SupportNet drastically outperforms the state-of-the-art incremental learning methods and even reaches similar performance as the deep learning model trained from scratch on both old and new data.
rejected-papers
The authors propose using an SVM, trained as a last layer of a neural network, to identify exemplars (support vectors) to save and use to prevent forgetting as the model is trained on further tasks. The method is effective on several supervised benchmarks and is compared to several other methods, including VCL, iCARL, and GEM. The reviewers had various objections to the initial paper that centered around comparisons to other methods and reporting of detailed performance numbers, which the authors resolved convincingly in their revised paper. However, the AC and 2 of the reviewers were unconvinced of the contribution of the approach. Although no one has used this particular strategy of using support vectors to prevent forgetting, the approach is a simplistic composition of the NN and the SVM which is heuristic, at least in how the authors present it. Most importantly, the approach is limited to supervised classification problems, yet catastrophic forgetting is not commonly considered to be a problem for the supervised classifier setting; rather it is a problem for inherently sequential learning environments such as RL (MNIST and CIFAR are just commonly used in the literature for ease of evaluation).
train
[ "SJlJEl0Y3Q", "HyxhpVJOJN", "Syg7lS_wkV", "B1x4g6hhpX", "H1liEi7vC7", "HkgTqy6na7", "B1eUQCnh6Q", "SJga663h67", "Byxw9n3hTm", "HJgOtj2267", "BJgtai2hpX", "rkejv4Cs2X", "BJxfdaTt3X" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a hybrid concept of deep neural network and support vector machine (SVM) for preventing catastrophic forgetting. The authors consider the last layer and the softmax function as SVM, and obtain support vectors, which are used as important samples of the old dataset. Merging the support vector da...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_BkxSHsC5FQ", "Syg7lS_wkV", "SJga663h67", "Byxw9n3hTm", "HkgTqy6na7", "iclr_2019_BkxSHsC5FQ", "SJga663h67", "BJxfdaTt3X", "SJlJEl0Y3Q", "rkejv4Cs2X", "HJgOtj2267", "iclr_2019_BkxSHsC5FQ", "iclr_2019_BkxSHsC5FQ" ]
iclr_2019_Bkx_Dj09tQ
Causal importance of orientation selectivity for generalization in image recognition
Although both our brain and deep neural networks (DNNs) can perform high-level sensory-perception tasks such as image or speech recognition, the inner mechanism of these hierarchical information-processing systems is poorly understood in both neuroscience and machine learning. Recently, Morcos et al. (2018) examined the effect of class-selective units in DNNs, i.e., units with high-level selectivity, on network generalization, concluding that hidden units that are selectively activated by specific input patterns may harm the network's performance. In this study, we revisit their hypothesis, considering units with selectivity for lower-level features, and argue that selective units are not always harmful to the network performance. Specifically, by using DNNs trained for image classification (7-layer CNNs and VGG16 trained on CIFAR-10 and ImageNet, respectively), we analyzed the orientation selectivity of individual units. Orientation selectivity is a low-level selectivity widely studied in visual neuroscience, in which, when images of bars with several orientations are presented to the eye, many neurons in the visual cortex respond selectively to a specific orientation. We found that orientation-selective units exist in both lower and higher layers of these DNNs, as in our brain. In particular, units in the lower layers become more orientation-selective as the generalization performance improves during the course of training of the DNNs. Consistently, networks that generalize better are more orientation-selective in the lower layers. We finally reveal that ablating these selective units in the lower layers substantially degrades the generalization performance, at least by disrupting the shift-invariance of the higher layers. 
These results suggest to the machine-learning community that, contrary to the triviality of units with high-level selectivity, lower-layer units with selectivity for low-level features can be indispensable for generalization, and for neuroscientists, orientation selectivity can play a causally important role in object recognition.
rejected-papers
The authors conduct experiments to study orientation selectivity in neural networks. The reviewers generally agreed that the paper was clearly written and easy to follow. Further, the experimental analysis demonstrates that, contrary to what was claimed in some previous work, the learned orientation selectivity can be useful for generalization. However, the reviewers also raised a number of concerns: 1) that the conclusions are drawn on the basis of a couple of neural network architectures; the authors attempted to add results using a Resnet50 model, but this analysis was ultimately removed when the authors discovered a bug; 2) that in the context of the contributions in neuroscience it was not clear that the limited results on the two artificial networks are sufficient to help draw such conclusions; and 3) that since the network is trained to recognize objects, it would seem natural that the model would learn neurons that are sensitive to orientation, and it is not clear how the authors' observations might lead to better trained models. While the reviewers were not completely unanimous in their scores, the AC agrees with a majority of the reviewers that the work, while interesting, could be strengthened by additional experiments on other architectures.
train
[ "rJla7L7KlE", "SJlwMNeRhQ", "SkggGK2KhQ", "S1gK85LS0m", "S1x7EqUS0m", "Byegf98HCQ", "rJgOKKIBCm", "Bkec4IPNTX" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "First, thank you for considering the additional analyses as positive. Here we would like to comment on your concerns.\n\n- it's a stretch to generalize this to conclusions concerning functional importance in neuroscience.\n\nResponse: we stated the 2nd argument by regarding DNN as a model of the visual cortex in t...
[ -1, 5, 4, -1, -1, -1, -1, 7 ]
[ -1, 4, 4, -1, -1, -1, -1, 2 ]
[ "SJlwMNeRhQ", "iclr_2019_Bkx_Dj09tQ", "iclr_2019_Bkx_Dj09tQ", "Bkec4IPNTX", "SJlwMNeRhQ", "SkggGK2KhQ", "iclr_2019_Bkx_Dj09tQ", "iclr_2019_Bkx_Dj09tQ" ]
iclr_2019_Bkxdqj0cFQ
Calibration of neural network logit vectors to combat adversarial attacks
Adversarial examples remain an issue for contemporary neural networks. This paper draws on Background Check (Perello-Nieto et al., 2016), a technique in model calibration, to assist two-class neural networks in detecting adversarial examples, using the one dimensional difference between logit values as the underlying measure. This method interestingly tends to achieve the highest average recall on image sets that are generated with large perturbation vectors, which is unlike the existing literature on adversarial attacks (Cubuk et al., 2017). The proposed method does not need knowledge of the attack parameters or methods at training time, unlike a great deal of the literature that uses deep learning based methods to detect adversarial examples, such as Metzen et al. (2017), imbuing the proposed method with additional flexibility.
rejected-papers
The reviewers and AC note the potential weaknesses of the paper in various aspects, and decided that the work needs further development before publication.
train
[ "ByxKo_ry07", "r1l1jdryAX", "r1lfYOBJRX", "r1xbeoD627", "Syx3aoWDnm", "Hkgp682Ihm", "HyltlbNMhX", "ryluik7Cim" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for your comments. It is a simple technique and I appreciate the highlighted shortcomings mentioned. I agree with all of the points mentioned.", "Thank you for your comments. It is a simple technique and I appreciate the highlighted shortcomings mentioned. I agree with all of the points mentioned.", ...
[ -1, -1, -1, 3, 2, 4, -1, -1 ]
[ -1, -1, -1, 4, 4, 5, -1, -1 ]
[ "Hkgp682Ihm", "Syx3aoWDnm", "r1xbeoD627", "iclr_2019_Bkxdqj0cFQ", "iclr_2019_Bkxdqj0cFQ", "iclr_2019_Bkxdqj0cFQ", "ryluik7Cim", "iclr_2019_Bkxdqj0cFQ" ]
iclr_2019_BkxgbhCqtQ
Predictive Uncertainty through Quantization
High-risk domains require reliable confidence estimates from predictive models. Deep latent variable models provide these, but suffer from the rigid variational distributions used for tractable inference, which err on the side of overconfidence. We propose Stochastic Quantized Activation Distributions (SQUAD), which imposes a flexible yet tractable distribution over discretized latent variables. The proposed method is scalable, self-normalizing and sample efficient. We demonstrate that the model fully utilizes the flexible distribution, learns interesting non-linearities, and provides predictive uncertainty of competitive quality.
rejected-papers
The reviewers agree this paper is not good enough for ICLR.
test
[ "BJengAhS0Q", "S1gCAa2SR7", "B1xq262SRm", "HJl5Z4Kf67", "Sye-PQNA2m", "S1g8up0dhX" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer 2,\n\nThank you for your time, comments and suggestions! We think that you'll find your concerns appropriately addressed, and we are looking forward to hear your thoughts. I'll address them line-by-line below:\n\n\"1. The paper proposes a generic discrete distribution as the variational distribution...
[ -1, -1, -1, 5, 4, 5 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "S1g8up0dhX", "Sye-PQNA2m", "HJl5Z4Kf67", "iclr_2019_BkxgbhCqtQ", "iclr_2019_BkxgbhCqtQ", "iclr_2019_BkxgbhCqtQ" ]
iclr_2019_BkxkH30cFm
Object-Oriented Model Learning through Multi-Level Abstraction
Object-based approaches for learning action-conditioned dynamics have demonstrated promise for generalization and interpretability. However, existing approaches suffer from structural limitations and optimization difficulties for common environments with multiple dynamic objects. In this paper, we present a novel self-supervised learning framework, called Multi-level Abstraction Object-oriented Predictor (MAOP), for learning object-based dynamics models from raw visual observations. MAOP employs a three-level learning architecture that enables efficient dynamics learning for complex environments with a dynamic background. We also design a spatial-temporal relational reasoning mechanism to support instance-level dynamics learning and handle partial observability. Empirical results show that MAOP significantly outperforms previous methods in terms of sample efficiency and generalization over novel environments that have multiple controllable and uncontrollable dynamic objects and different static object layouts. In addition, MAOP learns semantically and visually interpretable disentangled representations.
rejected-papers
This paper tackles a very valuable problem of learning object detection and object dynamics from video sequences, and builds upon the method of Zhu et al. 2018. The reviewers point out that there are a lot of engineering steps in the object proposal stage, which takes into account background subtraction to propose objects. In its current form, the writing of the paper is not clear enough on the object instantiation part, which is also the novel part over Zhu et al., potentially due to the complexity of using motion to guide object proposals. A limitation of the proposed formulation is that it works for moving cameras but only in 2D environments. Experiments on 3D environments would make this paper a much stronger submission.
train
[ "SJegxbwTyE", "HkebUj-MkE", "SJentx5y1E", "HyeMx0niRQ", "BygyoahiC7", "B1ga323iRm", "ryge8nniRX", "rkl_Lshj0Q", "HJx-4O3oA7", "S1gwJXrihQ", "SyeBo445hQ", "HJg7RcTPn7" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your constructive suggestions. The entire architecture of our multi-level abstraction framework can be summarized as follows:\n \nStep 1: Initialization. Initialize the parameters of all neural networks with random weights respectively.\n \nStep 2: Motion Detection Level. Perform foreground detection...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "HkebUj-MkE", "rkl_Lshj0Q", "iclr_2019_BkxkH30cFm", "HJg7RcTPn7", "HJg7RcTPn7", "SyeBo445hQ", "SyeBo445hQ", "S1gwJXrihQ", "iclr_2019_BkxkH30cFm", "iclr_2019_BkxkH30cFm", "iclr_2019_BkxkH30cFm", "iclr_2019_BkxkH30cFm" ]
iclr_2019_By40DoAqtX
Learning Discriminators as Energy Networks in Adversarial Learning
We propose a novel adversarial learning framework in this work. Existing adversarial learning methods involve two separate networks, i.e., the structured prediction models and the discriminative models, in the training. The information captured by discriminative models complements that in the structured prediction models, but few existing works have studied utilizing such information to improve structured prediction models at the inference stage. In this work, we propose to refine the predictions of structured prediction models by effectively integrating discriminative models into the prediction. Discriminative models are treated as energy-based models. Similar to adversarial learning, discriminative models are trained to estimate scores which measure the quality of predicted outputs, while structured prediction models are trained to predict contrastive outputs with maximal energy scores. In this way, the gradient vanishing problem is ameliorated, and thus we are able to perform inference by following the ascent gradient directions of discriminative models to refine structured prediction models. The proposed method is able to handle a range of tasks, \emph{e.g.}, multi-label classification and image segmentation. Empirical results on these two tasks validate the effectiveness of our learning method.
rejected-papers
All three reviewers expressed concerns about the writing of the paper. The AC thus recommends "revise and resubmit".
train
[ "ryeGl4s_nX", "Syl4I9Uq3X", "SJxpw5XiJN", "rJl91mm9yN", "B1xC2G4Ihm", "rkl7ND6uAX", "S1xxNYadCX", "HklMIdpOCQ", "rJx5Pv0k27", "S1eycNwkhm", "S1eHsdAAim", "BJlfxvBAsQ", "B1xPQoAycX" ]
[ "official_reviewer", "official_reviewer", "author", "public", "official_reviewer", "author", "author", "author", "author", "public", "author", "public", "public" ]
[ "- The writing and structure of the paper can be improved. It is difficult to read without first reading Gygli et al. 2017, and this paper should be more self-contained. There are also many parts that are not clear: \n 1. What is the model structure of G? Is it another neural network, or other structured predictio...
[ 5, 5, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_By40DoAqtX", "iclr_2019_By40DoAqtX", "rJl91mm9yN", "rkl7ND6uAX", "iclr_2019_By40DoAqtX", "Syl4I9Uq3X", "B1xC2G4Ihm", "ryeGl4s_nX", "S1eycNwkhm", "S1eHsdAAim", "BJlfxvBAsQ", "iclr_2019_By40DoAqtX", "iclr_2019_By40DoAqtX" ]
iclr_2019_By41BjA9YQ
Laplacian Smoothing Gradient Descent
We propose a class of very simple modifications of gradient descent and stochastic gradient descent. We show that when applied to a large variety of machine learning problems, ranging from softmax regression to deep neural nets, the proposed surrogates can dramatically reduce the variance and improve the generalization accuracy. The methods only involve multiplying the usual (stochastic) gradient by the inverse of a positive definite matrix coming from the discrete Laplacian or its high order generalizations. The theory of Hamilton-Jacobi partial differential equations demonstrates that the implicit version of the new algorithm is almost the same as doing gradient descent on a new function which (i) has the same global minima as the original function and (ii) is ``more convex". We show that optimization algorithms with these surrogates converge uniformly in the discrete Sobolev $H_\sigma^p$ sense and reduce the optimality gap for convex optimization problems. We implement our algorithm in both the PyTorch and TensorFlow platforms, which only involves changing a few lines of code. The code will be available on GitHub.
rejected-papers
Dear authors, The topic of variance reduction in optimization is timely and the reviewers appreciated your attempt at circumventing the issues faced with the current popular methods. They however had a concern about the significance of the results, which I echo: - First, there have been previous attempts at variance reduction which share some similarity with yours, for instance "No more pesky learning rate", "Topmoumoute online natural gradient algorithm" or even Adam (which does variance reduction without mentioning it). - The fact that previous similar methods exist is a non-issue should yours perform better. However, the absence of stepsize tuning in the experimental evaluation is a big issue as the performance of an iterative algorithm is highly sensitive to it. Finally, the link between flatness of the minimum and generalization is dubious, as mentioned for instance by Dinh et al. (2017). As a consequence, I cannot accept this work for publication to ICLR but I encourage you to address the points of the reviewers should you wish to resubmit it to a future conference.
train
[ "rJl7G8N5RX", "BJlit2xYRm", "SylUDrGCaX", "SklTJBz0p7", "HJeQUNfA67", "BJgKdk3vh7", "S1ehQaOKnm", "S1lXrRHt3m", "rklyJ1tt2m", "B1xMP0uF3X", "SJxrQedYim", "SJxb8OPYim", "SJlEyFsi5Q", "HJxAJbi997" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "We appreciate the reviewer's effort in evaluating our work and propose lots of helpful comments. We will revise our paper follow the above comments.", "I've read the authors response. It's a reasonable response, but I don't think my evaluation changes much. I still find the connection with the proximal point ver...
[ -1, -1, -1, -1, -1, 5, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "BJlit2xYRm", "HJeQUNfA67", "BJgKdk3vh7", "S1lXrRHt3m", "S1ehQaOKnm", "iclr_2019_By41BjA9YQ", "iclr_2019_By41BjA9YQ", "iclr_2019_By41BjA9YQ", "SJxb8OPYim", "SJlEyFsi5Q", "SJxb8OPYim", "iclr_2019_By41BjA9YQ", "HJxAJbi997", "iclr_2019_By41BjA9YQ" ]
iclr_2019_ByEtPiAcY7
Characterizing the Accuracy/Complexity Landscape of Explanations of Deep Networks through Knowledge Extraction
Knowledge extraction techniques are used to convert neural networks into symbolic descriptions with the objective of producing more comprehensible learning models. The central challenge is to find an explanation which is more comprehensible than the original model while still representing that model faithfully. The distributed nature of deep networks has led many to believe that the hidden features of a neural network cannot be explained by logical descriptions simple enough to be understood by humans, and that decompositional knowledge extraction should be abandoned in favour of other methods. In this paper we examine this question systematically by proposing a knowledge extraction method using \textit{M-of-N} rules which allows us to map the complexity/accuracy landscape of rules describing hidden features in a Convolutional Neural Network (CNN). Experiments reported in this paper show that the shape of this landscape reveals an optimal trade-off between comprehensibility and accuracy, showing that each latent variable has an optimal \textit{M-of-N} rule to describe its behaviour. We find that the rules with optimal trade-off in the first and final layer have a high degree of explainability whereas the rules with the optimal trade-off in the second and third layer are less explainable. The results shed light on the feasibility of rule extraction from deep networks, and point to the value of decompositional knowledge extraction as a method of explainability.
rejected-papers
The presented paper introduces a method to represent neural networks as logical rules of varying complexity, and demonstrates a tradeoff between complexity and error. Reviews yield a unanimous reject, with insufficient responses by the authors. Pros: + Paper well written Cons: - R1 stated the inadequacy of baselines, which the authors did not address. - R3&4 raised issues about the novelty of the idea. - R2&4 raised issues about the limited scope of evaluation, and asked for additional experiments on at least 2 datasets, which the authors did not provide. Area chair notes the similarity of this work to other works on network compression, i.e. compression of bits to represent weights and activations. By converting neurons to logical clauses, this is essentially a similar method. Authors should familiarize themselves with this field and use it as a baseline comparison, e.g.: https://arxiv.org/pdf/1609.07061.pdf
train
[ "BJeU3XZPRm", "Skldi-ZwCX", "SkeHFqxvCm", "H1eqBPgDAX", "BkeTdJNv6m", "HkgINMZPp7", "SyeC_idL67", "r1xgWbPq2Q" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments,\n\nIt's true that a rule based explanation may not be interpretable to a general audience. However, the aim of the paper is to explore to what extent a rule based explanation is even feasible in the first place. The results show that in certain layers of a CNN even rules which are very...
[ -1, -1, -1, -1, 4, 4, 4, 5 ]
[ -1, -1, -1, -1, 2, 5, 4, 3 ]
[ "BkeTdJNv6m", "SyeC_idL67", "r1xgWbPq2Q", "HkgINMZPp7", "iclr_2019_ByEtPiAcY7", "iclr_2019_ByEtPiAcY7", "iclr_2019_ByEtPiAcY7", "iclr_2019_ByEtPiAcY7" ]
iclr_2019_ByGOuo0cYm
Meta-Learning with Domain Adaptation for Few-Shot Learning under Domain Shift
Few-Shot Learning (learning with limited labeled data) aims to overcome the limitations of traditional machine learning approaches which require thousands of labeled examples to train an effective model. Considered a hallmark of human intelligence, the community has recently witnessed several contributions on this topic, in particular through meta-learning, where a model learns how to learn an effective model for few-shot learning. The main idea is to acquire prior knowledge from a set of training tasks, which is then used to perform (few-shot) test tasks. Most existing work assumes that both training and test tasks are drawn from the same distribution, and a large amount of labeled data is available in the training tasks. This is a very strong assumption which restricts the usage of meta-learning strategies in the real world where ample training tasks following the same distribution as test tasks may not be available. In this paper, we propose a novel meta-learning paradigm wherein a few-shot learning model is learnt, which simultaneously overcomes domain shift between the train and test tasks via adversarial domain adaptation. We demonstrate the efficacy of the proposed method through extensive experiments.
rejected-papers
This paper addresses the problem of few shot learning and then domain transfer. The proposed approach consists of combining a known few shot learning model, prototypical nets, together with image to image translation via CycleGAN for domain adaptation. Thus the algorithmic novelty is minor and amounts to combining two techniques to address a different problem statement. In addition, as mentioned by Reviewer 2, though meta learning could be a solution to learn with few examples, the solution being used in this work is not meta learning and so should not be in the title to avoid confusion. As this is a new problem statement the authors apply multiple existing works from few shot learning (and now adaptation) to their setting. The proposed approach does outperform prior work, however this is not surprising as the prior work was not designed for this task. Despite improvements during the rebuttal to address clarity the specific experimental setting is still unclear -- especially the setup of meta test data vs unsupervised domain adaptation data. This paper is borderline. However, since the main contribution consists of proposing a new problem statement and suggesting a combination of prior techniques as a first solution, the paper needs a more thorough ablation of other possible combinations of techniques as well as a clearly defined experimental setup before it is ready for publication.
train
[ "SkxZy6xby4", "BygGS54qnQ", "SygmDPUFCX", "BJxb9tLFC7", "BkehyvUFR7", "BkesLOIt07", "S1xGt0Rq3m", "r1x9Ih1c2Q", "BJl8X0iRsQ", "S1ekZ0sAs7" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author" ]
[ "We thank you for considering our rebuttal and updating the score. We are grateful for your time and advice, and would appreciate if we could further extend the discussion. \n\n\nWe appreciate the concern in the updated comments, but would like to point out that the novelty in our work should be viewed from two ang...
[ -1, 5, -1, -1, -1, -1, 6, 6, -1, -1 ]
[ -1, 3, -1, -1, -1, -1, 3, 3, -1, -1 ]
[ "BygGS54qnQ", "iclr_2019_ByGOuo0cYm", "BygGS54qnQ", "iclr_2019_ByGOuo0cYm", "r1x9Ih1c2Q", "S1xGt0Rq3m", "iclr_2019_ByGOuo0cYm", "iclr_2019_ByGOuo0cYm", "S1ekZ0sAs7", "iclr_2019_ByGOuo0cYm" ]
iclr_2019_ByGUFsAqYm
Downsampling leads to Image Memorization in Convolutional Autoencoders
Memorization of data in deep neural networks has become a subject of significant research interest. In this paper, we link memorization of images in deep convolutional autoencoders to downsampling through strided convolution. To analyze this mechanism in a simpler setting, we train linear convolutional autoencoders and show that linear combinations of training data are stored as eigenvectors in the linear operator corresponding to the network when downsampling is used. On the other hand, networks without downsampling do not memorize training data. We provide further evidence that the same effect happens in nonlinear networks. Moreover, downsampling in nonlinear networks causes the model to not only memorize just linear combinations of images, but individual training images. Since convolutional autoencoder components are building blocks of deep convolutional networks, we envision that our findings will shed light on the important phenomenon of memorization in over-parameterized deep networks.
rejected-papers
This paper studies the question of memorization within overparametrised neural networks. Specifically, the authors conjecture that memorization is linked to the downsampling operators present in many convolutional autoencoders. All reviewers agreed that this is an interesting question that deserves further analysis. However, they also agreed that in its current form, the paper lacks mathematical and experimental rigor. In particular, the paper does not follow the basic mathematical standards of proving any stated proposition/theorem, instead mixing empirical with mathematical proofs. The AC fully agrees with the points raised by reviewers, and therefore recommends rejection at this point, encouraging the authors to address these important points before resubmitting their work.
train
[ "SJlBa66O0X", "HJgoBxRdRQ", "rylzukC_C7", "SJeknR6_R7", "Skljdy0vpX", "HklwQZvDa7", "H1xy_sQJaX" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the three reviewers for their comments. We have provided a point-by-point response under each of the reviews.", "We answer the reviewer questions in the points below:\n\n\" - Please elaborate on how different initializations influence memorization effect. Currently the paper only mentions initializatio...
[ -1, -1, -1, -1, 3, 5, 5 ]
[ -1, -1, -1, -1, 3, 2, 2 ]
[ "iclr_2019_ByGUFsAqYm", "H1xy_sQJaX", "HklwQZvDa7", "Skljdy0vpX", "iclr_2019_ByGUFsAqYm", "iclr_2019_ByGUFsAqYm", "iclr_2019_ByGUFsAqYm" ]
iclr_2019_ByGVui0ctm
Three continual learning scenarios and a case for generative replay
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning problematic. Recently, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more meaningful comparisons, we identified three distinct continual learning scenarios based on whether task identity is known and, if it is not, whether it needs to be inferred. Performing the split and permuted MNIST task protocols according to each of these scenarios, we found that regularization-based approaches (e.g., elastic weight consolidation) failed when task identity needed to be inferred. In contrast, generative replay combined with distillation (i.e., using class probabilities as “soft targets”) achieved superior performance in all three scenarios. In addition, we reduced the computational cost of generative replay by integrating the generative model into the main model.
rejected-papers
The authors have proposed 3 continual learning variants which are all based on MNIST and which vary in terms of whether task ids are given and what the classification task is, and they have proposed a method which incorporates a symmetric VAE for generative replay with a class discriminator. The proposed method does work well on the continual learning scenarios and the incorporation of the generative model with the classifier is more efficient than keeping them separate. The discussion of the different CL scenarios and of related work is nice to read. However, the authors imply that these scenarios cover the space of important CL variants, yet they do not consider many other settings, such as when tasks continually change rather than having sharp boundaries. The authors have also only focused on the catastrophic forgetting aspect of continual learning, without considering scenarios where, e.g., strong forward transfer (or backwards transfer) is very important. Regarding the proposed architecture that combines a VAE with a softmax classifier for efficiency, the reviewers all felt that this was not novel enough to recommend publication.
train
[ "r1xlfgHugE", "HylBEpGrlN", "Bkxu3s1oyV", "HkgxzCoK1V", "r1gOZ1IDAm", "HJlk11LvAQ", "ByxIiAHwRm", "r1xKLRBvCQ", "rJe-qMJA37", "B1xLojt927", "ryx-aMn_hQ" ]
[ "author", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the comment and for pointing us to these two papers, both of which indeed also discuss the need for defining different learning scenarios to evaluate continual learning algorithms.\n\nCompared with these two studies, our treatment of this important problem is more general. The mentioned papers both f...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "HylBEpGrlN", "r1xKLRBvCQ", "r1gOZ1IDAm", "ByxIiAHwRm", "ryx-aMn_hQ", "B1xLojt927", "rJe-qMJA37", "iclr_2019_ByGVui0ctm", "iclr_2019_ByGVui0ctm", "iclr_2019_ByGVui0ctm", "iclr_2019_ByGVui0ctm" ]
iclr_2019_ByG_3s09KX
Dopamine: A Research Framework for Deep Reinforcement Learning
Deep reinforcement learning (deep RL) research has grown significantly in recent years. A number of software offerings now exist that provide stable, comprehensive implementations for benchmarking. At the same time, recent deep RL research has become more diverse in its goals. In this paper we introduce Dopamine, a new research framework for deep RL that aims to support some of that diversity. Dopamine is open-source, TensorFlow-based, and provides compact yet reliable implementations of some state-of-the-art deep RL agents. We complement this offering with a taxonomy of the different research objectives in deep RL research. While by no means exhaustive, our analysis highlights the heterogeneity of research in the field, and the value of frameworks such as ours.
rejected-papers
The paper presents Dopamine, an open-source implementation of plenty of DRL methods. It presents a case study of DQN and experiments on Atari. The paper is clear and easy to follow. While I believe Dopamine is a very welcomed contribution to the DRL software landscape, it seems there is not enough scientific content in this paper to warrant publication at ICLR. Regarding specifically the ELF and RLlib papers, I think that the ELF paper had a novelty component, and presented RL baselines to a new environment (miniRTS), while the RLlib paper had a stronger "systems research" contribution. This says nothing about the future impact of Dopamine, ELF, and RLlib – the respective software.
train
[ "H1ll2ehxTX", "Hkx2TztkTX", "rJgSH8Io37", "SygEf9Dvhm", "B1lGa1pUn7" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "My feeling towards this is that I think it's perfectly reasonable for ICLR to publish new frameworks. But my view is that the contribution needs to entail a novel capability (i.e. it lets us do something that we couldn't do before, or that would be very hard to do before) as opposed to a well-executed framework t...
[ -1, -1, 3, 3, 3 ]
[ -1, -1, 4, 2, 3 ]
[ "Hkx2TztkTX", "iclr_2019_ByG_3s09KX", "iclr_2019_ByG_3s09KX", "iclr_2019_ByG_3s09KX", "iclr_2019_ByG_3s09KX" ]
iclr_2019_ByGq7hRqKX
Cross-Task Knowledge Transfer for Visually-Grounded Navigation
Recent efforts on training visual navigation agents conditioned on language using deep reinforcement learning have been successful in learning policies for two different tasks: learning to follow navigational instructions and embodied question answering. In this paper, we aim to learn a multitask model capable of jointly learning both tasks, and transferring knowledge of words and their grounding in visual objects across tasks. The proposed model uses a novel Dual-Attention unit to disentangle the knowledge of words in the textual representations and visual objects in the visual representations, and align them with each other. This disentangled task-invariant alignment of representations facilitates grounding and knowledge transfer across both tasks. We show that the proposed model outperforms a range of baselines on both tasks in simulated 3D environments. We also show that this disentanglement of representations makes our model modular, interpretable, and allows for zero-shot transfer to instructions containing new words by leveraging object detectors.
rejected-papers
The authors have proposed a language+vision 'dual' attention architecture, trained in a multitask setting across SGN and EQA in vizDoom, to allow for knowledge grounding. The paper is interesting to read. The complex architecture is very clearly described and motivated, and the knowledge grounding problem is ambitious and relevant. However, the actual proposed solution does not make a novel contribution and the reviewers were unconvinced that the approach would be at all scalable to natural language or more complex tasks. In addition, the question was raised as to whether the 'knowledge grounding' claims by the authors are actually much more shallow associations of color and shape that are beneficial in cluttered environments. This is a borderline case, but the AC agrees that the paper falls a bit short of its goals.
train
[ "S1giw29tk4", "Syexze6aCX", "B1x9nUP5RX", "r1ekwQDcRQ", "H1ezHTU9CQ", "r1xCIjSA2Q", "H1efLyrc3X", "BJxzPorY3Q" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Hi authors — thanks for your answers and updates to the paper. While the gated attention mechanism designed in this paper seems to yield nice interpretable representations (thanks for Fig 9!), I still can't see how this gating mechanism can scale to anything like natural language — take the more complex sentences ...
[ -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "r1ekwQDcRQ", "iclr_2019_ByGq7hRqKX", "BJxzPorY3Q", "H1efLyrc3X", "r1xCIjSA2Q", "iclr_2019_ByGq7hRqKX", "iclr_2019_ByGq7hRqKX", "iclr_2019_ByGq7hRqKX" ]
iclr_2019_ByN7Yo05YX
Adaptive Neural Trees
Deep neural networks and decision trees operate on largely separate paradigms; typically, the former performs representation learning with pre-specified architectures, while the latter is characterised by learning hierarchies over pre-specified features with data-driven architectures. We unite the two via adaptive neural trees (ANTs), a model that incorporates representation learning into edges, routing functions and leaf nodes of a decision tree, along with a backpropagation-based training algorithm that adaptively grows the architecture from primitive modules (e.g., convolutional layers). ANTs allow increased interpretability via hierarchical clustering, e.g., learning meaningful class associations, such as separating natural vs. man-made objects. We demonstrate this on classification and regression tasks, achieving over 99% and 90% accuracy on the MNIST and CIFAR-10 datasets, and outperforming standard neural networks, random forests and gradient boosted trees on the SARCOS dataset. Furthermore, ANT optimisation naturally adapts the architecture to the size and complexity of the training data.
rejected-papers
This paper proposes adaptive neural trees (ANT), a combination of deep networks and decision trees. Reviewer 1 leans toward rejecting the paper, pointing out several flaws. Reviewer 3 also raises concerns, despite later increasing the rating to marginally above threshold. Of particular note is the weak experimental validation. The paper reports results only on MNIST and CIFAR-10. MNIST performance is too easily saturated to be meaningful. The CIFAR-10 results show ANT models to have far greater error than the state-of-the-art deep neural network models. As Reviewer 1 states, "performance of the proposed method is also not the best on either of the tested datasets. Please clearly elaborate on why and how to address this issue. It would be more interesting and meaningful to work with a more recent large datasets, such as ImageNet or MS COCO." The rebuttal fails to offer the type of additional results that would remedy this situation. Without a convincing experimental story, it is not possible to recommend acceptance of this paper.
train
[ "BylyaVvZl4", "ryx1aQDWx4", "Hyl3849tA7", "H1epcjMThm", "HJxPbVzH27", "HklgPmcF0Q", "SklzsP6ph7", "S1gpo4fdam", "S1epB2RNT7", "rJlpbKUzT7", "rJgPf9xzaX", "rklqdugMTQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "public", "author", "author", "author" ]
[ "\nWe would like to let you know that we have updated our work by including the following results: \n\n1. A full set of results on the SARCOS robot inverse dynamics dataset for multivariate regression in Supp. Sec. H. (page 19) to show the wider applicability of ANTs.\n\n2. Results from ensembling ANTs (Supp. Sec. ...
[ -1, -1, -1, 6, 6, -1, 4, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, 3, -1, 4, -1, -1, -1, -1, -1 ]
[ "HJxPbVzH27", "H1epcjMThm", "SklzsP6ph7", "iclr_2019_ByN7Yo05YX", "iclr_2019_ByN7Yo05YX", "H1epcjMThm", "iclr_2019_ByN7Yo05YX", "S1epB2RNT7", "iclr_2019_ByN7Yo05YX", "H1epcjMThm", "SklzsP6ph7", "HJxPbVzH27" ]
iclr_2019_Bye5OiR5F7
Wasserstein proximal of GANs
We introduce a new method for training GANs by applying the Wasserstein-2 metric proximal on the generators. The approach is based on the gradient operator induced by optimal transport, which connects the geometry of sample space and parameter space in implicit deep generative models. From this theory, we obtain an easy-to-implement regularizer for the parameter updates. Our experiments demonstrate that this method improves the speed and stability in training GANs in terms of wall-clock time and Fréchet Inception Distance (FID) learning curves.
rejected-papers
Both R3 and R1 argue for rejection, while R2 argues for a weak accept. Given that we have to reject borderline papers, the AC concludes with "revise and resubmit".
val
[ "r1xrnt76aX", "rye0fY76p7", "BJx2fwQpTX", "SyeU6ImapX", "r1lL5EbTnm", "ByxW6xld2X", "B1xtf8_wh7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "3. Q:*** the \"semi-backward Euler method\" is introduced without any context. The fact that it is presented as a proposition using qualitative qualifiers such as \"sufficient regularity\" is suspicious. \n\nAnswer: The semi-backward Euler method is proposed as an effective approximation to the Wasserstein proxima...
[ -1, -1, -1, -1, 3, 6, 4 ]
[ -1, -1, -1, -1, 5, 3, 3 ]
[ "r1lL5EbTnm", "r1lL5EbTnm", "ByxW6xld2X", "B1xtf8_wh7", "iclr_2019_Bye5OiR5F7", "iclr_2019_Bye5OiR5F7", "iclr_2019_Bye5OiR5F7" ]
iclr_2019_Bye9LiR9YX
Remember and Forget for Experience Replay
Experience replay (ER) is crucial for attaining high data-efficiency in off-policy deep reinforcement learning (RL). ER entails the recall of experiences obtained in past iterations to compute gradient estimates for the current policy. However, the accuracy of such updates may deteriorate when the policy diverges from past behaviors, possibly undermining the effectiveness of ER. Previous off-policy RL algorithms mitigated this issue by tuning their hyper-parameters in order to abate policy changes. We propose ReF-ER, a method for active management of experiences in the Replay Memory (RM). ReF-ER forgets experiences that would be too unlikely with the current policy and constrains policy changes within a trust region of the behaviors in the RM. We couple ReF-ER with Q-learning, deterministic policy gradient and off-policy gradient methods to show that ReF-ER reliably improves the performance of continuous-action off-policy RL. We complement ReF-ER with a novel off-policy actor-critic algorithm (RACER) for continuous-action control. RACER employs a computationally efficient closed-form approximation of the action values and is shown to be highly competitive with state-of-the-art algorithms on benchmark problems, while being robust to large hyper-parameter variations.
rejected-papers
This paper introduces a novel idea, and demonstrates its utility in several simulated domains. The key parts of the algorithm are (a) to prefer keeping and using samples in the ER buffer where the corresponding rho_t, using the current policy pi_t, are not too big or small and (b) preventing the policy from changing too quickly, so that samples in the ER buffer are more on-policy. The key weakness is not better investigating the idea of making the ER buffer more on-policy, and the effect of doing so. The experiments compare to other algorithms, but do not sufficiently investigate the use of both Point 1 and Point 3. Further, the appendix contains an investigation into parameter sensitivity and gives some confidence intervals. However, the presentation of this is difficult to follow, and so it is difficult to gauge the sensitivity of Ref-ER. With a more thorough experimental section, better demonstrating the results (not necessarily running more things), the paper would be much stronger. For more context, the authors rightly mention "It is commonly believed that off-policy methods (e.g. Q-learning) can handle the dissimilarity between off-policy and on-policy outcomes. We provide ample evidence that training from highly similar-policy experiences is essential to the success of off-policy continuous-action deep RL." Q-learning can significantly suffer from changing the state-sampling distribution. However, adjusting sampling in the ER buffer using rho_t does not change the state-sampling distribution, and so that mismatch remains a problem. Changing the policy more slowly (Point 3) could help with this more. In general, however, these play two different roles that need to be better understood. The introduction more strongly focuses on classifying samples as more on or off-policy, to solve this problem, rather than the strategy used in Point 3. So, from the current pitch, it's not clear which component is solving the issues claimed with off-policy updates.
Overall, this paper has some interesting results and is well-written. With more clarity on the roles of the two components of Ref-ER and what they mean for making the ER buffer more on-policy, in terms of both action selection and state distribution, this paper would be a very useful contribution to stable control.
train
[ "Hkxpqdrz1V", "HyeOLv-iA7", "ryeO5Fgbam", "rkgcDmZjaX", "B1xun4Xqhm", "rkg5a-8wRm", "rJlmVW-saQ", "Hkg1sqbm6m", "Bklm4Oxqh7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "I appreciate the authors' response, from which some of my concerns have been addressed. I still hold my view regarding the significance and theoretical novelty of this work, while I do believe that the empirical results could potentially benefit certain researchers in this field.", "I've updated my review above,...
[ -1, -1, 7, -1, 6, -1, -1, -1, 6 ]
[ -1, -1, 3, -1, 3, -1, -1, -1, 3 ]
[ "rJlmVW-saQ", "rkgcDmZjaX", "iclr_2019_Bye9LiR9YX", "ryeO5Fgbam", "iclr_2019_Bye9LiR9YX", "Hkg1sqbm6m", "Bklm4Oxqh7", "B1xun4Xqhm", "iclr_2019_Bye9LiR9YX" ]
iclr_2019_ByeDojRcYQ
COLLABORATIVE MULTIAGENT REINFORCEMENT LEARNING IN HOMOGENEOUS SWARMS
A deep reinforcement learning solution is developed for a collaborative multiagent system. Individual agents choose actions in response to the state of the environment, their own state, and possibly partial information about the state of other agents. Actions are chosen to maximize a collaborative long term discounted reward that encompasses the individual rewards collected by each agent. The paper focuses on developing a scalable approach that applies to large swarms of homogeneous agents. This is accomplished by forcing the policies of all agents to be the same resulting in a constrained formulation in which the experiences of each agent inform the learning process of the whole team, thereby enhancing the sample efficiency of the learning process. A projected coordinate policy gradient descent algorithm is derived to solve the constrained reinforcement learning problem. Experimental evaluations in collaborative navigation, a multi-predator-multi-prey game, and a multiagent survival game show marked improvements relative to methods that do not exploit the policy equivalence that naturally arises in homogeneous swarms.
rejected-papers
Pros:
- interesting novel formulation of policy learning in homogeneous swarms
- multi-stage learning process that trades off diversity and consistency (fig 1)
Cons:
- implausible mechanisms like averaging weights of multiple networks
- minor novelty
- missing ablations of which aspect is crucial
- dubious baseline results
- no rebuttal
One reviewer out of three would have accepted the paper; the other two have major concerns. Unfortunately the authors did not revise the paper or engage with the reviewers to clear up these points, so as it stands the paper should be rejected.
train
[ "SkeAXaEjhX", "r1x9C-Bq2X", "HkeINOFOn7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "## Summary\nThe authors present an approach to training collaborative swarms of agents based around giving all agents identical (or near identical) policies. The training regime involves individual agents rolling out trajectories based on slight perturbations of an agent of focus keeping the policy of other agents...
[ 6, 4, 5 ]
[ 3, 4, 4 ]
[ "iclr_2019_ByeDojRcYQ", "iclr_2019_ByeDojRcYQ", "iclr_2019_ByeDojRcYQ" ]
iclr_2019_ByeLBj0qFQ
Unsupervised Image to Sequence Translation with Canvas-Drawer Networks
Encoding images as a series of high-level constructs, such as brush strokes or discrete shapes, can often be key to both human and machine understanding. In many cases, however, data is only available in pixel form. We present a method for generating images directly in a high-level domain (e.g. brush strokes), without the need for real pairwise data. Specifically, we train a ”canvas” network to imitate the mapping of high-level constructs to pixels, followed by a high-level ”drawing” network which is optimized through this mapping towards solving a desired image recreation or translation task. We successfully discover sequential vector representations of symbols, large sketches, and 3D objects, utilizing only pixel data. We display applications of our method in image segmentation, and present several ablation studies comparing various configurations.
rejected-papers
This paper was reviewed by three experts. After the author response, R2 and R3 recommend rejecting this paper citing concerns of experimental evaluation and poor quality of the manuscript. All three reviewers continue to have questions for the authors, which the authors have not responded to. The AC finds no basis for accepting this paper in this state.
train
[ "rJlOnF8R07", "S1xQKrfcCm", "S1x0DGZc2X", "rJed2el5Rm", "r1gzGnYyCX", "SyxZKiFJCm", "H1xJrit1RX", "BygofiKyRm", "H1gbxsKyAQ", "rJlNNSUKaQ", "HJgte88Knm", "ByxIZdYF3Q" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "1. You only mentioned training/test sets in Appendix 7.1. How do you find the hyper-parameters you used?\n2. In Table 1, how about to compare with pixel-based generative methods? And, could you report standard deviations to see the significance? \n3. Do you believe that `Average Pixelwise Loss` alone is sufficient...
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "SyxZKiFJCm", "rJed2el5Rm", "iclr_2019_ByeLBj0qFQ", "BygofiKyRm", "iclr_2019_ByeLBj0qFQ", "HJgte88Knm", "ByxIZdYF3Q", "S1x0DGZc2X", "rJlNNSUKaQ", "iclr_2019_ByeLBj0qFQ", "iclr_2019_ByeLBj0qFQ", "iclr_2019_ByeLBj0qFQ" ]
iclr_2019_ByeLmn0qtX
Variational Domain Adaptation
This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference. Unlike the existing methods on domain transfer through deep generative models, such as StarGAN (Choi et al., 2017) and UFDN (Liu et al., 2018), variational domain adaptation has three advantages. Firstly, samples from the target are not required. Instead, the framework requires one known source as a prior p(x) and binary discriminators, p(Di|x), discriminating the target domain Di from others. Consequently, the framework regards a target as a posterior that can be explicitly formulated through Bayesian inference, p(x|Di)∝p(Di|x)p(x), as exhibited by a further proposed model of dual variational autoencoder (DualVAE). Secondly, the framework is scalable to large-scale domains. Just as a VAE encodes a sample x as a mode on a latent space, μ(x)∈Z, DualVAE encodes a domain Di as a mode on the dual latent space, μ∗(Di)∈Z∗, named domain embedding. It reformulates the posterior with a natural pairing ⟨,⟩:Z×Z∗→ℝ, which can be extended to uncountably infinite domains such as continuous domains, as well as interpolation. Thirdly, DualVAE converges quickly without sophisticated automatic/manual hyperparameter search in comparison to GANs, as it requires only one additional parameter over VAE. Through numerical experiments, we demonstrate the three benefits on a multi-domain image generation task on CelebA with up to 60 domains, and show that DualVAE achieves state-of-the-art performance, outperforming StarGAN and UFDN.
rejected-papers
This paper proposes using conditional VAEs for multi-domain transfer and presents results on CelebA and SCUT. As mentioned by reviewers, the presentation and clarity of the work could be improved. It is quite difficult to determine the new/proposed aspects of the work from a first read through. Though we recognize and appreciate that the authors updated their manuscript to improve its clarity, another edit pass with particular focus on clarifying prior work on conditional VAEs and their proposed new application to domain transfer would be beneficial. In addition, as DIS is the main metric for comparison to prior work and for evaluation of the final approach, the conclusions about the effectiveness of this method would be easier to see if a more detailed description of the metric and analysis of the results were provided. Given the limited technical novelty and discussion amongst reviewers of the desire for more experimental evidence, this work is not quite ready for publication.
train
[ "rJxkNsrnTm", "HJgiq9rh67", "ryl2qtS2p7", "HJeoKdH267", "BJxbaNB2aX", "Byedwza5nQ", "HJlwXuL527", "r1lGHlGd2m", "rJlblAwah7" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Thanks for your feedback.\n\n> (2) In the abstract and introduction, you state that a source domain is regarded as a prior, and the target domain is regarded as a posterior. From the Method section, I am not sure whether this is a valid statement. In my understanding, equation (1) is the KL summation of all the do...
[ -1, -1, -1, -1, -1, 4, 4, 5, -1 ]
[ -1, -1, -1, -1, -1, 3, 5, 3, -1 ]
[ "r1lGHlGd2m", "HJlwXuL527", "Byedwza5nQ", "rJlblAwah7", "iclr_2019_ByeLmn0qtX", "iclr_2019_ByeLmn0qtX", "iclr_2019_ByeLmn0qtX", "iclr_2019_ByeLmn0qtX", "iclr_2019_ByeLmn0qtX" ]
iclr_2019_ByeNFoRcK7
PA-GAN: Improving GAN Training by Progressive Augmentation
Despite recent progress, Generative Adversarial Networks (GANs) still suffer from training instability, requiring careful consideration of architecture design choices and hyper-parameter tuning. The reason for this fragile training behaviour is partially due to the discriminator performing well very quickly; its loss converges to zero, providing no reliable backpropagation signal to the generator. In this work we introduce a new technique - progressive augmentation of GANs (PA-GAN) - that helps to overcome this fundamental limitation and improve the overall stability of GAN training. The key idea is to gradually increase the task difficulty of the discriminator by progressively augmenting its input space, thus enabling continuous learning of the generator. We show that the proposed progressive augmentation preserves the original GAN objective, does not bias the optimality of the discriminator and encourages the healthy competition between the generator and discriminator, leading to a better-performing generator. We experimentally demonstrate the effectiveness of the proposed approach on multiple benchmarks (MNIST, Fashion-MNIST, CIFAR10, CELEBA) for the image generation task.
rejected-papers
The submission hypothesizes that in typical GAN training the discriminator is too strong, too fast, and thus suggests a modification by which they gradually increase the task difficulty of the discriminator. This is done by introducing (effectively) a new random variable -- which has an effect on the label -- and which prevents the discriminator from solving its task too quickly. There was a healthy amount of back-and-forth between the authors and the reviewers which allowed for a number of important clarifications to be made (esp. with regards to proofs, comparison with baselines, etc). My judgment of this paper is that it provides a neat way to overcome a particular difficulty of training GANs, but that there is a lot of confusion about the similarities (or lack thereof) with various potentially simpler alternatives such as input dropout, adding noise to the input etc. I was sometimes confused by the author response as well (they at once suggest that the proposed method reduces overfitting of the discriminator but also state that "We believe our method does not even try to “regularize” the discriminator"). Because of all this, the significance of this work is unclear and thus I do not recommend acceptance.
train
[ "Bkxsudlv14", "HJgC9I9H14", "B1lgHRnr2Q", "SJejbYHd0m", "B1eexKHdCQ", "HkgT_jrOC7", "r1l4-oHuCm", "SkevCtHOC7", "S1xNidrdAX", "HJxEdurdC7", "BJg7dMla27", "B1eXZ7a43Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "First of all, we thank R1 for acknowledging the revision of the paper and correctness of the proof. In the following, we answer further questions and concerns of R1.\n\n1. \n\n- The theory presented in the paper is itself agnostic to the network architecture and means to show that with the progressive augmentation...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "HJgC9I9H14", "SkevCtHOC7", "iclr_2019_ByeNFoRcK7", "B1eexKHdCQ", "S1xNidrdAX", "BJg7dMla27", "B1eXZ7a43Q", "B1lgHRnr2Q", "HJxEdurdC7", "iclr_2019_ByeNFoRcK7", "iclr_2019_ByeNFoRcK7", "iclr_2019_ByeNFoRcK7" ]
iclr_2019_ByePUo05K7
What a difference a pixel makes: An empirical examination of features used by CNNs for categorisation
Convolutional neural networks (CNNs) were inspired by human vision and, in some settings, achieve a performance comparable to human object recognition. This has led to the speculation that both systems use similar mechanisms to perform recognition. In this study, we conducted a series of simulations that indicate that there is a fundamental difference between human vision and CNNs: while object recognition in humans relies on analysing shape, CNNs do not have such a shape-bias. We teased apart the type of features selected by the model by modifying the CIFAR-10 dataset so that, in addition to containing objects with shape, the images concurrently contained non-shape features, such as a noise-like mask. When trained on this modified set of images, the model did not show any bias towards selecting shapes as features. Instead, it relied on whichever feature allowed it to perform the best prediction -- even when this feature was a noise-like mask or a single predictive pixel amongst 50176 pixels. We also found that regularisation methods, such as batch normalisation or Dropout, did not change this behaviour and neither did past or concurrent experience with images from other datasets.
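The "single predictive pixel" manipulation the abstract describes can be sketched concretely. This is an illustrative assumption, not the paper's exact protocol: one pixel whose column position encodes the class label is planted in each image, giving a non-shape shortcut that a CNN could latch onto. The function name `add_predictive_pixel` and the pixel-placement rule are hypothetical.

```python
import numpy as np

def add_predictive_pixel(images, labels, num_classes=10):
    """Plant a single max-intensity pixel whose location encodes the class
    label, mimicking the modified-CIFAR-10 setup described in the abstract
    (sketch; the paper's exact placement scheme may differ)."""
    out = images.copy()
    for i, y in enumerate(labels):
        # One distinct column per class, in the top row, all channels on.
        out[i, 0, int(y) % num_classes, :] = 255
    return out
```

A model trained on such images can reach perfect accuracy by reading the planted pixel alone, which is the diagnostic the abstract uses to probe whether shape is preferred over simpler predictive features.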
rejected-papers
This paper claims to demonstrate that CNNs, unlike human vision, do not have a bias towards reliance on shape for object recognition. Both AnonReviewer1 and AnonReviewer2 point to fundamental flaws in the paper's argument, which the rebuttal fails to resolve. (AnonReviewer1's criticisms are unfortunately conflated with AnonReviewer1's reluctance to view neuroscience or biological vision as an appropriate topic for ICLR; nonetheless AnonReviewer1's technical criticism stands). These observations are: AnonReviewer2: "Authors have carefully designed a set of experiments which shows CNNs will [overfit] to non-shape features that they added to training images. However, this outcome is not surprising." AnonReviewer1: "The experiments don't seem to effectively demonstrate the main claim of the paper that categorization CNNs do not have inductive shape bias" "The best way to demonstrate this would have been to subject a trained image-categorization CNN to test data with object shapes in a way that the appearance information couldn’t be used to predict the object label. The paper doesn’t do this. None of the experiments logically imply that with an unaltered training regime, a trained network would not be predictive of the category label if shapes corresponding to that category are presented." The AC agrees with both of these observations. CNN behavior is partially a product of the training regime. To examine the scientific question of whether CNNs have similar biases as human vision, the training regimes should be similar. Conversely, if human vision evolved in an environment in which shortcut recognition cues were available via indicator pixels, perhaps it would not have a shape bias. This paper appears fundamentally flawed in its approach. The results are not informative about differences between human vision and CNNs, nor are they surprising to machine learning practitioners.
val
[ "HygmozznRQ", "r1lNe7MhC7", "H1x1wzf2AQ", "H1ebFhuRnQ", "rJlK-uJFRm", "BJglwmOZA7", "SJe3jCqg0m", "Bked7pEg07", "BkgGpsVlCX", "BklELiNxCX", "Byeoli4gCQ", "S1gzNLVxRm", "HkebvSNxAQ", "H1xUCxgEpm", "BygClzecn7", "rkex459YhQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "Cadieu, C. F., Hong, H., Yamins, D. L., Pinto, N., Ardila, D., Solomon, E. A., ... & DiCarlo, J. J. (2014). Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLoS computational biology, 10(12), e1003963.\n\nCichy, R. M., Khosla, A., Pantazis, D., Torralba, A., &...
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, -1 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, -1 ]
[ "H1x1wzf2AQ", "H1x1wzf2AQ", "rJlK-uJFRm", "iclr_2019_ByePUo05K7", "BklELiNxCX", "SJe3jCqg0m", "Byeoli4gCQ", "rkex459YhQ", "BygClzecn7", "Byeoli4gCQ", "H1ebFhuRnQ", "H1xUCxgEpm", "iclr_2019_ByePUo05K7", "iclr_2019_ByePUo05K7", "iclr_2019_ByePUo05K7", "iclr_2019_ByePUo05K7" ]