paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2019_rJl8BhRqF7 | Improving machine classification using human uncertainty measurements | As deep CNN classifier performance using ground-truth labels has begun to asymptote at near-perfect levels, a key aim for the field is to extend training paradigms to capture further useful structure in natural image data and improve model robustness and generalization. In this paper, we present a novel natural image benchmark for making this extension, which we call CIFAR10H. This new dataset comprises a human-derived, full distribution over labels for each image of the CIFAR10 test set, offering the ability to assess the generalization of state-of-the-art CIFAR10 models, as well as investigate the effects of including this information in model training. We show that classification models trained on CIFAR10 do not generalize as well to our dataset as they do to traditional extensions, and that models fine-tuned using our label information are able to generalize better to related datasets, complement popular data augmentation schemes, and provide robustness to adversarial attacks. We explain these improvements in terms of better empirical approximations to the expected loss function over natural images and their categories in the visual world. | rejected-papers | The paper presents a new annotation of the CIFAR-10 dataset (the test set) as a distribution over labels as opposed to one-hot annotations. This dataset forms a testbed for assessing the generalization abilities of state-of-the-art models and their robustness to adversarial attacks.
All the reviewers and AC acknowledge the contribution of dataset annotation and that the idea of using label distribution for training the models is sound and should improve the generalization performance of the models.
However, the reviewers and AC note the following potential weaknesses: (1) the paper requires major improvement in presentation clarity and in-depth investigation and evidence of the benefits of the proposed framework – see the detailed comments of R3 on what to address in a subsequent revision; see the suggestions of R2 for improving the scope of the empirical evaluations (e.g. distortions of the images, incorporating time limits for doing the classifications) and the requests of R1 for clarifications; (2) the related work is inadequate and should be substantially extended – see the related references suggested by R2; also R1 rightly pointed out that two out of four future extensions of this framework have been addressed already, which questions the significance of the findings in this submission.
(3) R2 raised concerns that the current evaluation is missing comparisons to (a) calibration approaches and (b) cheaper/easier ways of getting soft labels -- see R2’s suggestion to use the Brier score for model calibration and to use a cost matrix capturing how critical a misclassification is (cat <-> dog, versus cat <-> car) as soft labels.
Among these, (2) did not have a substantial impact on the decision, but would be helpful to address in a subsequent revision. However, (1) and (3) make it very difficult to assess the benefits of the proposed approach, and were viewed by the AC as critical issues.
There is no author response for this paper. The reviewer with a positive view on the manuscript (R3) was reluctant to champion the paper as the authors had not addressed the concerns of the other reviewers (no rebuttal).
| train | [
"SJgtofrqnm",
"r1gzsT493Q",
"HJl5F7-53m",
"Syg4O1pYcm",
"rkeWn1Q4qX",
"HJgVn7UX9Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public"
] | [
"The authors create a new dataset with label distributions (rather than one-hot annotations) for the CIFAR-10 test set. They then study the effect of fine-tuning using this dataset on the generalization performance of SOTA deep networks. They also study the effects on adversarial robustness.\n\nI think that dataset... | [
6,
3,
3,
-1,
-1,
-1
] | [
4,
5,
2,
-1,
-1,
-1
] | [
"iclr_2019_rJl8BhRqF7",
"iclr_2019_rJl8BhRqF7",
"iclr_2019_rJl8BhRqF7",
"rkeWn1Q4qX",
"HJgVn7UX9Q",
"iclr_2019_rJl8BhRqF7"
] |
iclr_2019_rJl8FoRcY7 | Deep Generative Models for learning Coherent Latent Representations from Multi-Modal Data | The application of multi-modal generative models by means of a Variational Auto Encoder (VAE) is an upcoming research topic for sensor fusion and bi-directional modality exchange.
This contribution gives insights into the learned joint latent representation and shows that expressiveness and coherence are decisive properties for multi-modal datasets.
Furthermore, we propose a multi-modal VAE derived from the full joint marginal log-likelihood that is able to learn the most meaningful representation for ambiguous observations.
Since the properties of multi-modal sensor setups are essential for our approach but hardly available, we also propose a technique to generate correlated datasets from uni-modal ones.
| rejected-papers | This paper suggests a problem with the standard ELBO for the multi-modal case, and proposes a new objective to address this problem. However, I (and some of the reviewers) disagree with the motivation. First of all, there's no reason one can't train a separate encoder for every combination of modalities available, at least when there are only 2 or 3. Failing that, one could simply optimize per-example approximate posteriors without using an encoder.
Second, once you stop optimizing the ELBO, you've lost the motivating principle for training VAEs, and must justify your new objective empirically. Almost all of the results are (in my opinion) ambiguous plots of latent encodings.
Finally, a point made throughout the paper and discussions was that different modalities should give the same encodings, which is plainly false. One of the reviewers made this point: "The fact that z_a != z_b != z_{a,b} should be expected if a and b provide different information. I don't see the problem with this.", which you dismiss. Additionally, the encoder's job is to approximate the true posterior. The true posteriors will in general be different for different modalities.
I would recommend focusing on ways to train the original ELBO in the presence of different modalities, instead of modifying it based on these intuitions.
| train | [
"SyggX2426X",
"Hyx27qIj6X",
"rylXgI9-am",
"S1gTGBq-67",
"BklFYf9ZTQ",
"Hyxzcy50nX",
"ryxgAFHq27",
"rJxI6Mqd2m"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the discussion and want to explain the following concerns as follows:\n\nFirst of all, we like to give some definitions to avoid confusion:\nA: Is the first dataset (e.g., MNIST) from which we can draw a sample \"a\" (e.g., a picture showing the letter \"0\")\nB: Is the second dataset (e.... | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"Hyx27qIj6X",
"S1gTGBq-67",
"ryxgAFHq27",
"rJxI6Mqd2m",
"Hyxzcy50nX",
"iclr_2019_rJl8FoRcY7",
"iclr_2019_rJl8FoRcY7",
"iclr_2019_rJl8FoRcY7"
] |
iclr_2019_rJl8viCqKQ | Low Latency Privacy Preserving Inference | When applying machine learning to sensitive data one has to balance between accuracy, information leakage, and computational complexity. Recent studies have shown that Homomorphic Encryption (HE) can be used for protecting against information leakage while applying neural networks. However, this comes with the cost of limiting the kind of neural networks that can be used (and hence the accuracy) and with latency of the order of several minutes even for relatively simple networks. In this study we improve on previous results both in the kind of networks that can be applied and in terms of the latency. Most of the improvement is achieved by novel ways to represent the data to make better use of the capabilities of the encryption scheme. | rejected-papers | The paper proposes improvements in the area of neural network inference with homomorphically encrypted data. Existing applications typically have high computational cost, and this paper provides some solutions to these problems. Some of the improvements are due to better "engineering" (the use of the faster SEAL 3.2.1 over CryptoNet). The idea of using pre-trained AlexNet features is new in this setting, but pretty standard practice elsewhere. The presentation has been greatly improved in the updated version; however, the paper could benefit from additional discussions and experiments. For example, when a practitioner wants to solve a new problem with some design need (e.g. accuracy, latency vs. bandwidth trade-off), what network modules should be used and how should they be represented? To summarize, the problem considered is important; however, as pointed out by the reviewers, both the empirical and the theoretical results appear to be incremental with respect to the existing literature.
| train | [
"SyxpMdlha7",
"ryeI2Mz4RQ",
"rkerQMfNRm",
"SylO0WzNAm",
"ryeEj-zNRm",
"HklEFpAa2Q",
"Hkgb8VNGhm",
"H1xqJa8knX",
"HJxnfS7Jhm",
"rke0USG1hm",
"Syl2KCLsi7",
"S1lLBKLssQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"public",
"public"
] | [
"This paper proposes several improvements to cryptonet proposed in Dowlin et al, 2016. The contributions include: \n1. Better implementation to improve speed and throughput. \n2. Modified architecture (LoLa) to reduce latency. \n3. Using features from a deep network rather than raw input. \nContribution 1 is better... | [
5,
-1,
-1,
-1,
-1,
6,
5,
-1,
-1,
-1,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
2,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_rJl8viCqKQ",
"H1xqJa8knX",
"Hkgb8VNGhm",
"HklEFpAa2Q",
"SyxpMdlha7",
"iclr_2019_rJl8viCqKQ",
"iclr_2019_rJl8viCqKQ",
"HJxnfS7Jhm",
"Syl2KCLsi7",
"S1lLBKLssQ",
"iclr_2019_rJl8viCqKQ",
"iclr_2019_rJl8viCqKQ"
] |
iclr_2019_rJlHIo09KQ | Gradient-based Training of Slow Feature Analysis by Differentiable Approximate Whitening | We propose Power Slow Feature Analysis, a gradient-based method to extract temporally slow features from a high-dimensional input stream that varies on a faster time-scale, as a variant of Slow Feature Analysis (SFA). While displaying performance comparable to hierarchical extensions to the SFA algorithm, such as Hierarchical Slow Feature Analysis, for a small number of output-features, our algorithm allows fully differentiable end-to-end training of arbitrary differentiable approximators (e.g., deep neural networks). We provide experimental evidence that PowerSFA is able to extract meaningful and informative low-dimensional features in the case of (a) synthetic low-dimensional data, (b) visual data, and also for (c) a general dataset for which symmetric non-temporal relations between points can be defined. | rejected-papers | This paper proposes to unroll power iterations within a Slow-Feature-Analysis learning objective in order to obtain a fully differentiable slow feature learning system. Experiments on several datasets are reported.
This is a borderline submission, with reviewers torn between acceptance and rejection. They were generally positive about the clarity and simplicity of the presentation, whereas they raised concerns about the relative lack of novelty (especially related to the recent SpIN model), as well as the current limitations of the approach on large-scale problems. Reviewers also found the authors to be responsive and diligent during the rebuttal phase. The AC agrees with this assessment, and therefore recommends rejection at this time, encouraging the authors to resubmit to the next conference cycle after addressing the above points. | train | [
"rJeY8RpcRm",
"SJgtgP7nCm",
"HygF9JDDT7",
"BJlEOF32n7",
"HkgJebWSnQ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer, \n\nwe are very thankful for this detailed and actionable review, it gave us a lot to work with to, hopefully, improve our paper for the current revision.\n\nOn Empirical Evaluation & Quality:\nWe included an experiment on the synthetic dataset investigating the progression of average correlation an... | [
-1,
-1,
5,
6,
6
] | [
-1,
-1,
2,
4,
4
] | [
"HkgJebWSnQ",
"BJlEOF32n7",
"iclr_2019_rJlHIo09KQ",
"iclr_2019_rJlHIo09KQ",
"iclr_2019_rJlHIo09KQ"
] |
iclr_2019_rJlJ-2CqtX | Success at any cost: value constrained model-free continuous control | Naively applying Reinforcement Learning algorithms to continuous control problems -- such as locomotion and robot control -- to maximize task reward often results in policies which rely on high-amplitude, high-frequency control signals, known colloquially as bang-bang control. While such policies can implement the optimal solution, particularly in simulated systems, they are often not desirable for real world systems since bang-bang control can lead to increased wear and tear and energy consumption and tends to excite undesired second-order dynamics. To counteract this issue, multi-objective optimization can be used to simultaneously optimize both the reward and some auxiliary cost that discourages undesired (e.g. high-amplitude) control. In principle, such an approach can yield the sought after, smooth, control policies. It can, however, be hard to find the correct trade-off between cost and return that results in the desired behavior. In this paper we propose a new constraint-based approach which defines a lower bound on the return while minimizing one or more costs (such as control effort). We employ Lagrangian relaxation to learn both (a) the parameters of a control policy that satisfies the desired constraints and (b) the Lagrangian multipliers for the optimization. Moreover, we demonstrate policy optimization which satisfies constraints either in expectation or in a per-step fashion, and we learn a single conditional policy that is able to dynamically change the trade-off between return and cost. We demonstrate the efficiency of our approach using a number of continuous control benchmark tasks as well as a realistic, energy-optimized quadruped locomotion task. | rejected-papers | Strengths: The paper introduces a novel constrained-optimization method for RL problems.
A lower-bound constraint can be imposed on the return (cumulative reward),
while optimizing one or more other costs, such as control effort.
The method learns multiple trade-offs between return and cost within a single conditional policy.
The paper is clearly written. Results are shown on the cart-and-pole, a humanoid, and a realistic Minitaur
quadruped model. AC: Being able to learn conditional constraints is an interesting direction.
Weaknesses: There are often simpler ways to solve the problem of high-amplitude, high-frequency
controls in the setting of robotics.
The paper removes one hyperparameter (lambda) but then introduces another (beta), although beta
is likely easier to tune. The ideas have some strong connections to existing work in
safe reinforcement learning.
AC: Video results for the humanoid and cart-and-pole examples would have been useful to see.
Summary: The paper makes progress on ideas that are fairly involved to explore and use
(perhaps limiting their use in the short term), but that have potential,
i.e., learning state-dependent Lagrange multipliers for constrained RL. The paper is perfectly fine
technically, and does break some new ground in putting a particular set of pieces together.
As articulated by two of the reviewers, from a pragmatic perspective, the results are not
yet entirely compelling. I do believe there is value in a better understanding of working with constrained RL,
in ways that are somewhat different from those used in Safe RL work.
Given the remaining muted enthusiasm of two of the reviewers, and in the absence of further
calibration, the AC leans marginally towards a reject. Current scores: 5,6,7.
Again, the paper does have novelty, although it's a pretty intricate setup.
The AC would be happy to revisit upon global recalibration.
| train | [
"rJlwa-f0JE",
"ryeVSh2pkE",
"B1g4z20Vs7",
"r1e9cU3tyV",
"B1lIiZ19Am",
"BkxWG9hlTQ",
"HkgBKL6FA7",
"SJgudF2KCX",
"HkgX7K2FA7",
"rkeKAunKAX",
"ryxZT_2YAX",
"HJxn1OnFAQ",
"B1ed4mXbCm",
"SkggB79t2X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I have read the author response and my opinion remains the same.\n\n2) Hyperparameter selection\n\nThe authors did not remove the hyperparameter completely, but introduced a new hyperparameter and stated that hyperparameter could make the learning more robust. The robustness of the new hyperparameter has not been ... | [
-1,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"HkgX7K2FA7",
"SJgudF2KCX",
"iclr_2019_rJlJ-2CqtX",
"iclr_2019_rJlJ-2CqtX",
"HkgBKL6FA7",
"iclr_2019_rJlJ-2CqtX",
"ryxZT_2YAX",
"B1g4z20Vs7",
"SkggB79t2X",
"BkxWG9hlTQ",
"BkxWG9hlTQ",
"iclr_2019_rJlJ-2CqtX",
"iclr_2019_rJlJ-2CqtX",
"iclr_2019_rJlJ-2CqtX"
] |
iclr_2019_rJlMBjAcYX | Optimizing for Generalization in Machine Learning with Cross-Validation Gradients | Cross-validation is the workhorse of modern applied statistics and machine learning, as it provides a principled framework for selecting the model that maximizes generalization performance. In this paper, we show that the cross-validation risk is differentiable with respect to the hyperparameters and training data for many common machine learning algorithms, including logistic regression, elastic-net regression, and support vector machines. Leveraging this property of differentiability, we propose a cross-validation gradient method (CVGM) for hyperparameter optimization. Our method enables efficient optimization in high-dimensional hyperparameter spaces of the cross-validation risk, the best surrogate of the true generalization ability of our learning algorithm. | rejected-papers | This paper gives explicit hyperparameter gradients for several models with convex losses. The idea is well-motivated and clearly presented, but because it's relatively incremental, it needs a more systematic experimental section, or at least a stronger characterization of its scope and limitations. I would also recommend an investigation of more expressive hyperparameterizations (like in Maclaurin et al 2015) and/or an investigation of overfitting on the validation set. | train | [
"HkgzyGoV2Q",
"HJeaamKV3Q",
"Skl1tO4ZnX",
"r1xKLdD_9Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"This paper proposes the so-called cross-validation gradient method (CVGM).\nThis is idea is to express the CV score as a differentiable function\nof the hyperparameters and then to update hyperparameters with gradient\ndescent. Derivations are provided with Logistic regression and Elastic-Net\nthanks to the sign s... | [
5,
2,
4,
-1
] | [
5,
4,
4,
-1
] | [
"iclr_2019_rJlMBjAcYX",
"iclr_2019_rJlMBjAcYX",
"iclr_2019_rJlMBjAcYX",
"iclr_2019_rJlMBjAcYX"
] |
iclr_2019_rJlRKjActQ | Manifold Mixup: Learning Better Representations by Interpolating Hidden States | Deep networks often perform well on the data distribution on which they are trained, yet give incorrect (and often very confident) answers when evaluated on points from off of the training distribution. This is exemplified by the adversarial examples phenomenon but can also be seen in terms of model generalization and domain shift. Ideally, a model would assign lower confidence to points unlike those from the training distribution. We propose a regularizer which addresses this issue by training with interpolated hidden states and encouraging the classifier to be less confident at these points. Because the hidden states are learned, this has an important effect of encouraging the hidden states for a class to be concentrated in such a way so that interpolations within the same class or between two different classes do not intersect with the real data points from other classes. This has a major advantage in that it avoids the underfitting which can result from interpolating in the input space. We prove that the exact condition for this problem of underfitting to be avoided by Manifold Mixup is that the dimensionality of the hidden states exceeds the number of classes, which is often the case in practice. Additionally, this concentration can be seen as making the features in earlier layers more discriminative. We show that despite requiring no significant additional computation, Manifold Mixup achieves large improvements over strong baselines in supervised learning, robustness to single-step adversarial attacks, semi-supervised learning, and Negative Log-Likelihood on held out samples. | rejected-papers | The paper contains useful information and shows relative improvements compared to mixup. However, some of the main claims are not substantiated enough to be fully convincing. 
For example, the claim that manifold mixup can prevent the manifold collision issue, where the interpolation between two samples collides with a sample from another class, is incorrect. The authors are encouraged to incorporate the remarks of the reviewers. | train | [
"SyexuodbeN",
"r1lPNB0RJN",
"SkxG9-lf14",
"BkeJVJ6Zk4",
"BJlMBdjZy4",
"B1g7zHf-y4",
"Skgiansey4",
"Bylw_ZnKRQ",
"S1ldh9QeJN",
"rJx6E57xk4",
"SJg3Fs01y4",
"SyxsI4RyJE",
"SyxHCKQA0m",
"HkljggY207",
"H1gr9yt2A7",
"SylMhAdnRQ",
"Bye9qJi4hQ",
"S1etENZK0m",
"SJxvtyIPCX",
"B1lmEyIwCQ"... | [
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer... | [
"The baseline of ResNet-50 on ImageNet is lower than that current reproduced one, ~ 76.5% top-1. Eg. mixup: Beyond Empirical Risk Minimization. It is OK that the baseline is a bit lower. However, it would be convincible (for me) if the proposed approach could reach or exceed 76.5% + 0.1% (std).\n\n",
"Hello, \n\n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"SygudipOAm",
"Hkegs4KP2m",
"SJeIIMu9hX",
"SyxHCKQA0m",
"Skgiansey4",
"SyxHCKQA0m",
"SJg3Fs01y4",
"iclr_2019_rJlRKjActQ",
"SyxHCKQA0m",
"SyxHCKQA0m",
"SyxsI4RyJE",
"iclr_2019_rJlRKjActQ",
"Bkg5lrwV0X",
"Bye9qJi4hQ",
"SJeIIMu9hX",
"Hkegs4KP2m",
"iclr_2019_rJlRKjActQ",
"Bkg5lrwV0X",
... |
iclr_2019_rJl_NhR9K7 | ISA-VAE: Independent Subspace Analysis with Variational Autoencoders | Recent work has shown increased interest in using the Variational Autoencoder (VAE) framework to discover interpretable representations of data in an unsupervised way. These methods have focussed largely on modifying the variational cost function to achieve this goal. However, we show that methods like beta-VAE simplify the tendency of variational inference to underfit causing pathological over-pruning and over-orthogonalization of learned components. In this paper we take a complementary approach: to modify the probabilistic model to encourage structured latent variable representations to be discovered. Specifically, the standard VAE probabilistic model is unidentifiable: the likelihood of the parameters is invariant under rotations of the latent space. This means there is no pressure to identify each true factor of variation with a latent variable.
We therefore employ a rich prior distribution, akin to the ICA model, that breaks the rotational symmetry.
Extensive quantitative and qualitative experiments demonstrate that the proposed prior mitigates the trade-off introduced by modified cost functions like beta-VAE and TCVAE between reconstruction loss and disentanglement. The proposed prior allows to improve these approaches with respect to both disentanglement and reconstruction quality significantly over the state of the art. | rejected-papers | The paper proposes to improve VAE by using a prior distribution that has been previously proposed for independent subspace analysis (ISA). The clarity of the paper could be improved by more clearly describing the proposed method and its implementation details. The originality is not that high, as the main change to VAE is replacing the usual isotropic Gaussian prior with an ISA prior. Moreover, the paper does not provide comparison to VAEs with other more sophisticated priors, such as the VampPrior, and it is unclear whether using the ISA prior makes it difficult to scale to high-dimensional observations. Therefore, it is difficult to evaluate the significance of ISA-VAE. The authors are encouraged to carefully revise their paper to address these concerns. | train | [
"Skxp0qAYRQ",
"HyxTpuAtAX",
"rklh9ApFC7",
"HJlHXsVrTm",
"rylHP6da3m",
"SJxusGwwnQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We are glad that R3 appreciates the \"competitive quantitative experiment results and promising qualitative results\" and finds an important contribution to the state-of-the art in our publication.\nWe also thank for the constructive feedback which we address in the following:\n\n1) Since we only modify the prior ... | [
-1,
-1,
-1,
4,
7,
4
] | [
-1,
-1,
-1,
3,
4,
5
] | [
"rylHP6da3m",
"SJxusGwwnQ",
"HJlHXsVrTm",
"iclr_2019_rJl_NhR9K7",
"iclr_2019_rJl_NhR9K7",
"iclr_2019_rJl_NhR9K7"
] |
iclr_2019_rJlcV2Actm | MahiNet: A Neural Network for Many-Class Few-Shot Learning with Class Hierarchy | We study many-class few-shot (MCFS) problem in both supervised learning and meta-learning scenarios. Compared to the well-studied many-class many-shot and few-class few-shot problems, MCFS problem commonly occurs in practical applications but is rarely studied. MCFS brings new challenges because it needs to distinguish between many classes, but only a few samples per class are available for training. In this paper, we propose ``memory-augmented hierarchical-classification network (MahiNet)'' for MCFS learning. It addresses the ``many-class'' problem by exploring the class hierarchy, e.g., the coarse-class label that covers a subset of fine classes, which helps to narrow down the candidates for the fine class and is cheaper to obtain. MahiNet uses a convolutional neural network (CNN) to extract features, and integrates a memory-augmented attention module with a multi-layer perceptron (MLP) to produce the probabilities over coarse and fine classes. While the MLP extends the linear classifier, the attention module extends a KNN classifier, both together targeting the ''`few-shot'' problem. We design different training strategies of MahiNet for supervised learning and meta-learning. Moreover, we propose two novel benchmark datasets ''mcfsImageNet'' (as a subset of ImageNet) and ''mcfsOmniglot'' (re-splitted Omniglot) specifically for MCFS problem. In experiments, we show that MahiNet outperforms several state-of-the-art models on MCFS classification tasks in both supervised learning and meta-learning scenarios. | rejected-papers | The reviewers raised a number of concerns including low readability/ clarity of the presented work and methodology, insufficient and at times unconvincing experimental evaluation of the proposed, and lack of discussion on pros and cons of the presented. 
The authors’ rebuttal addressed some of the reviewers’ comments but failed to address all concerns, and confirmed that relatively large changes are still needed for the paper to be useful to readers. Hence, although I believe this could be a very interesting paper, I cannot recommend it at this stage for presentation at ICLR. | train | [
"rJxHQ4-KAQ",
"Hyg6ZDZYRm",
"BJxFU-ltA7",
"rklk3pJt0X",
"BJlDageK0Q",
"BJeoaTJYRQ",
"B1xSrU1YC7",
"BJgsHqI8pm",
"HyleW0Hj3Q",
"BJgxSoFu3X"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Q4: The hierarchy information provides the guidance to fine-grained classification, which not only can be added to MahiNet but also the other models. Therefore, to prove its effectiveness, it is better to add hierarchy information to other models for comparison.\n\nR4: In the new experiments shown in Table 6 in Ap... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"BJgxSoFu3X",
"BJgxSoFu3X",
"BJgsHqI8pm",
"HyleW0Hj3Q",
"BJgsHqI8pm",
"HyleW0Hj3Q",
"iclr_2019_rJlcV2Actm",
"iclr_2019_rJlcV2Actm",
"iclr_2019_rJlcV2Actm",
"iclr_2019_rJlcV2Actm"
] |
iclr_2019_rJlg1n05YX | Penetrating the Fog: the Path to Efficient CNN Models | With the increasing demand to deploy convolutional neural networks (CNNs) on mobile platforms, the sparse kernel approach was proposed, which could save more parameters than the standard convolution while maintaining accuracy. However, despite the great potential, no prior research has pointed out how to craft a sparse kernel design with such potential (i.e., an effective design), and all prior works just adopt simple combinations of existing sparse kernels such as group convolution. Meanwhile, due to the large design space it is also impossible to try all combinations of existing sparse kernels. In this paper, we are the first in the field to consider how to craft an effective sparse kernel design by eliminating the large design space. Specifically, we present a sparse kernel scheme to illustrate how to reduce the space from three aspects. First, in terms of composition we remove designs composed of repeated layers. Second, to remove designs with large accuracy degradation, we find a unified property named~\emph{information field} behind various sparse kernel designs, which could directly indicate the final accuracy. Last, we remove designs in two cases where a better parameter efficiency could be achieved. Additionally, we provide detailed efficiency analysis on the final 4 designs in our scheme. Experimental results validate the idea of our scheme by showing that our scheme is able to find designs which are more efficient in using parameters and computation with similar or higher accuracy. | rejected-papers | This paper points out methods to obtain sparse convolutional operators. The reviewers have a consensus on rejection due to clarity issues and lack of support for the claims.
"B1x1mxD5h7",
"BylleIdbaX",
"Syejycz037"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers sparse kernel design in order to reduce the space complexity of a convolutional neural network. In specifics, the proposed procedure is composed of following steps: 1) remove repeated layers, 2) remove designs with large degradation design, and 3) further remove design for better parameter eff... | [
5,
5,
4
] | [
3,
3,
3
] | [
"iclr_2019_rJlg1n05YX",
"iclr_2019_rJlg1n05YX",
"iclr_2019_rJlg1n05YX"
] |
iclr_2019_rJlpUiAcYX | Holographic and other Point Set Distances for Machine Learning | We introduce an analytic distance function for moderately sized point sets of known cardinality that is shown to have very desirable properties, both as a loss function as well as a regularizer for machine learning applications. We compare our novel construction to other point set distance functions and show proof of concept experiments for training neural networks end-to-end on point set prediction tasks such as object detection. | rejected-papers | This paper proposes permutation-invariant loss functions that depend on distances between point sets. The construction has an interesting interpretation in which the points are the roots of a polynomial, and potentially leads to a more efficient method.
It is not clear, however, whether the method works well in practice for multiple reasons: (i) the experiments are performed in a limited setting, and the rebuttal specifically declined to consider more realistic datasets, (ii) there is an open question about the stability of the resulting gradients, which has been pointed out both in the paper and the reviews.
There was initially a majority vote for rejection. After author response, the only reviewer recommending acceptance wrote "As the other reviews (and my original review) say, the experimental results are not totally convincing. So I would not champion the paper in its present form." | train | [
"rye5ejzT2X",
"BJgQvDCcn7",
"BJej8Z_1aX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new loss for points registration (aligning two point sets) with preferable permutation invariant property. For a 2D point set, the idea is to define a complex polynomial with the points (interpreted as complex numbers) as roots. To compare two point sets, the loss eventually boils down to eva... | [
3,
7,
4
] | [
4,
3,
3
] | [
"iclr_2019_rJlpUiAcYX",
"iclr_2019_rJlpUiAcYX",
"iclr_2019_rJlpUiAcYX"
] |
iclr_2019_rJxF73R9tX | Knows When it Doesn’t Know: Deep Abstaining Classifiers | We introduce the deep abstaining classifier -- a deep neural network trained with a novel loss function that provides an abstention option during training. This allows the DNN to abstain on confusing or difficult-to-learn examples while improving performance on the non-abstained samples. We show that such deep abstaining classifiers can: (i) learn representations for structured noise -- where noisy training labels or confusing examples are correlated with underlying features -- and then learn to abstain based on such features; (ii) enable robust learning in the presence of arbitrary or unstructured noise by identifying noisy samples; and (iii) be used as an effective out-of-category detector that learns to reliably abstain when presented with samples from unknown classes. We provide analytical results on loss function behavior that enable automatic tuning of accuracy and coverage, and demonstrate the utility of the deep abstaining classifier using multiple image benchmarks. Results indicate significant improvement in learning in the presence of label noise. | rejected-papers | The reviewers felt that the method was natural and the writing was mostly clear (although it could be improved by providing better signposting and fixing typos). However, there was also general agreement that the comparison to other methods was weak; one reviewer also points out that the reported numbers compare the methods on different sets of data, which might be an inaccurate measure of performance (this is more minor than the overall issue of lack of comparisons). While the authors provided more comparison experiments during the author response, it is in general the responsibility of authors to have close-to-final work at the time of submission. | train | [
"ryxJqsOcAQ",
"H1ga8q_cCX",
"rkgyC8D9Am",
"rkxlX7u9Am",
"SklpLawcAm",
"B1l87E4-CQ",
"BklSo4p9nm",
"Hklv2HpdhX",
"rJlVL2PuhX"
] | [
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your suggestions.\n\nPlease see updated Section 3.1 and 3.3 for risk coverage curves involving softmax thresholds and the selective guaranteed risk method described in [1]. We use the authors' implementation in [2]. \n\nThe updated results in Section 4 on CIFAR_10 and CIFAR-100 report the performance... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"B1l87E4-CQ",
"rJlVL2PuhX",
"iclr_2019_rJxF73R9tX",
"Hklv2HpdhX",
"BklSo4p9nm",
"iclr_2019_rJxF73R9tX",
"iclr_2019_rJxF73R9tX",
"iclr_2019_rJxF73R9tX",
"iclr_2019_rJxF73R9tX"
] |
iclr_2019_rJxMM2C5K7 | Nested Dithered Quantization for Communication Reduction in Distributed Training | In distributed training, the communication cost due to the transmission of gradients
or the parameters of the deep model is a major bottleneck in scaling up the number
of processing nodes. To address this issue, we propose dithered quantization for
the transmission of the stochastic gradients and show that training with Dithered
Quantized Stochastic Gradients (DQSG) is similar to the training with unquantized
SGs perturbed by an independent bounded uniform noise, in contrast to the other
quantization methods where the perturbation depends on the gradients and hence
complicates the convergence analysis. We study the convergence of training
algorithms using DQSG and the trade-off between the number of quantization
levels and the training time. Next, we observe that there is a correlation among the
SGs computed by workers that can be utilized to further reduce the communication
overhead without any performance loss. Hence, we develop a simple yet effective
quantization scheme, nested dithered quantized SG (NDQSG), that can reduce the
communication significantly without requiring the workers to communicate extra
information to each other. We prove that although NDQSG requires significantly
less bits, it can achieve the same quantization variance bound as DQSG. Our
simulation results confirm the effectiveness of training using DQSG and NDQSG
in reducing the communication bits or the convergence time compared to the
existing methods without sacrificing the accuracy of the trained model. | rejected-papers | The reviewers found that the paper needs more compelling empirical study. | train | [
"B1gs7jQ0AQ",
"rJeqaFp30m",
"SygVWg7BCX",
"BJgVE2fr0X",
"ryxWehzSC7",
"rJlDh7B0h7",
"r1gMEiLihm",
"BylmK1YB3m"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the feedback.\n\nI would like to mention that the complexity of the dithered quantization at the workers is similar to the other stochastic quantization methods such as TernGrad and QSGD. Hence, at the worker side, the complexity of the algorithm would be the same.\nHowever, for dequantization, our meth... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"rJeqaFp30m",
"ryxWehzSC7",
"BylmK1YB3m",
"r1gMEiLihm",
"rJlDh7B0h7",
"iclr_2019_rJxMM2C5K7",
"iclr_2019_rJxMM2C5K7",
"iclr_2019_rJxMM2C5K7"
] |
iclr_2019_rJxNAjC5F7 | Learning Hash Codes via Hamming Distance Targets | We present a powerful new loss function and training scheme for learning binary hash codes with any differentiable model and similarity function.
Our loss function improves over prior methods by using log likelihood loss on top of an accurate approximation for the probability that two inputs fall within a Hamming distance target.
Our novel training scheme obtains a good estimate of the true gradient by better sampling inputs and evaluating loss terms between all pairs of inputs in each minibatch.
To fully leverage the resulting hashes, we use multi-indexing.
We demonstrate that these techniques provide large improvements on similarity search tasks.
We report the best results to date on competitive information retrieval tasks for Imagenet and SIFT 1M, improving recall from 73% to 85% and reducing query cost by a factor of 2-8, respectively. | rejected-papers | The paper proposes learning a hash function that maps high dimensional data to binary codes, and uses multi-index hashing for efficient retrieval. The paper discusses similar results to "Similarity estimation techniques from rounding algorithms, M Charikar, 2002" without citing this paper. The proposed learning idea is also similar to "Binary Reconstructive Embedding, B. Kulis, T. Darrell, NIPS'09" without citation. Please study the learning to hash literature and discuss the similarities and differences with your approach.
Due to missing citations and lack of novelty, I believe the paper does not pass the bar for acceptance at ICLR.
PS: PQ and its better variants (optimized PQ and cartesian k-means) are from a different family of quantization techniques as pointed out by R3 and multi-index hashing is not directly applicable to such techniques. Regardless, I am also surprised that your technique just using hamming distance is able to outperform PQ using lookup table distance.
| train | [
"SJeZZiNtAm",
"rkxP4RvF3X",
"Hyl1p4zYRX",
"S1e4CbbYR7",
"r1gyNgoyTX",
"H1ljGMwA3Q",
"BJlMlHIJ67",
"ByxcPXfOim",
"S1l19KUA27",
"rJxr2gvHom",
"HkxzzDtL9Q",
"Sye0dUFAKX",
"rJgytH0atQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"public"
] | [
"You are absolutely correct that PQ is typically used as an unsupervised method. In this context of learning to hash, supervised is often used to mean that the model learned from the same dataset that queries are made against, whereas unsupervised is used to mean that the model learned from a different dataset. In ... | [
-1,
6,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
-1,
-1,
-1,
-1,
3,
5,
-1,
-1,
-1,
-1,
-1
] | [
"S1e4CbbYR7",
"iclr_2019_rJxNAjC5F7",
"H1ljGMwA3Q",
"r1gyNgoyTX",
"BJlMlHIJ67",
"rkxP4RvF3X",
"iclr_2019_rJxNAjC5F7",
"iclr_2019_rJxNAjC5F7",
"ByxcPXfOim",
"HkxzzDtL9Q",
"Sye0dUFAKX",
"rJgytH0atQ",
"iclr_2019_rJxNAjC5F7"
] |
iclr_2019_rJxXDsCqYX | Sentence Encoding with Tree-Constrained Relation Networks | The meaning of a sentence is a function of the relations that hold between its words. We instantiate this relational view of semantics in a series of neural models based on variants of relation networks (RNs) which represent a set of objects (for us, words forming a sentence) in terms of representations of pairs of objects. We propose two extensions to the basic RN model for natural language. First, building on the intuition that not all word pairs are equally informative about the meaning of a sentence, we use constraints based on both supervised and unsupervised dependency syntax to control which relations influence the representation. Second, since higher-order relations are poorly captured by a sum of pairwise relations, we use a recurrent extension of RNs to propagate information so as to form representations of higher order relations. Experiments on sentence classification, sentence pair classification, and machine translation reveal that, while basic RNs are only modestly effective for sentence representation, recurrent RNs with latent syntax are a reliably powerful representational device. | rejected-papers | This paper presents two extensions of Relation Networks (RNs) to represent a sentence as a set of relations between words: (1) dependency-based constraints to control the influence of different relations within a sentence and (2) recurrent extension of RNs to propagate information through the tree structure of relations.
Pros:
The notion of relation networks for sentence representation is potentially interesting.
Cons:
The significance of the proposed methods compared to existing variants of TreeRNNs is not clear (R1). R1 requested empirical comparisons against TreeRNNs (since the proposed methods are also of tree shape), but the authors argued back that such experiments were not necessary beyond the BiLSTM baselines.
Verdict:
Reject. The proposed methods build on relatively incremental ideas and the empirical results are rather inconclusive. | test | [
"BJerp7cc2m",
"rJlapuqFn7",
"HyekwSaYRX",
"HkeFmraYCm",
"rylOKNaK0Q",
"S1ljwRnK0m",
"Hyls9029hm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"\n[Summary]\nThe main purpose of this paper is to propose an extension of relation networks.\nThe proposal consists of two parts: 1) to integrate constraints of dependency syntax to control which relations influence the representation, and 2) utilize a recurrent computation to capture higher-order relations.\n\n[c... | [
5,
5,
-1,
-1,
-1,
-1,
3
] | [
4,
4,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_rJxXDsCqYX",
"iclr_2019_rJxXDsCqYX",
"rJlapuqFn7",
"BJerp7cc2m",
"Hyls9029hm",
"iclr_2019_rJxXDsCqYX",
"iclr_2019_rJxXDsCqYX"
] |
iclr_2019_rJx_b3RqY7 | AIM: Adversarial Inference by Matching Priors and Conditionals | Effective inference for a generative adversarial model remains an important and challenging problem. We propose a novel approach, Adversarial Inference by Matching priors and conditionals (AIM), which explicitly matches prior and conditional distributions in both data and code spaces, and puts a direct constraint on the dependency structure of the generative model. We derive an equivalent form of the prior and conditional matching objective that can be optimized efficiently without any parametric assumption on the data. We validate the effectiveness of AIM on the MNIST, CIFAR-10, and CelebA datasets by conducting quantitative and qualitative evaluations. Results demonstrate that AIM significantly improves both reconstruction and generation as compared to other adversarial inference models. | rejected-papers | The paper proposes a method that aims to combine the strengths of VAEs and GANs.
The paper establishes an interesting bridge between GANs and VAEs. The experimental results are encouraging, even though only relatively small datasets were used. It is encouraging that the method results in better reconstructions than ALI, a related method.
Some reviewers think that the paper contains limited novelty compared to the wealth of recent work on this topic (e.g. ALI/BiGAN). The paper's contribution is seen as incremental; e.g. the training is very similar to InfoGAN. Also, the claims of better sample quality over ALI seem insufficiently supported by the data. | train | [
"ByevvbGUxN",
"HJlfIT1ZlN",
"BygpS0C527",
"S1lGhOQlC7",
"H1gK50mgAX",
"HygMkrMgAm",
"HkxoDveR2m",
"r1l9TGz637"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer 1,\n\nWe have added another high-dimensional experiment in which we compared the KL-divergence achieved by different models. The result has been posted in another reply.\n\nIn the previous update, we have tried to address some of your concerns. Furthermore, we have added another section 4.3 to explai... | [
-1,
-1,
6,
-1,
-1,
-1,
7,
4
] | [
-1,
-1,
4,
-1,
-1,
-1,
4,
5
] | [
"r1l9TGz637",
"BygpS0C527",
"iclr_2019_rJx_b3RqY7",
"BygpS0C527",
"HkxoDveR2m",
"r1l9TGz637",
"iclr_2019_rJx_b3RqY7",
"iclr_2019_rJx_b3RqY7"
] |
iclr_2019_rJxcHnRqYQ | Local Binary Pattern Networks for Character Recognition | Memory- and computation-efficient deep learning architectures are crucial to the continued proliferation of machine learning capabilities to new platforms and systems. Binarization of operations in convolutional neural networks has shown promising results in reducing the model size and improving computing efficiency.
In this paper, we tackle the character recognition problem using a strategy different from the existing literature by proposing local binary pattern networks or LBPNet that can learn and perform bit-wise operations in an end-to-end fashion. LBPNet uses local binary comparisons and random projection in place of conventional convolution (or approximation of convolution) operations, providing important means to improve memory and speed efficiency that is particularly suited for small footprint devices and hardware accelerators. These operations can be implemented efficiently on different platforms including direct hardware implementation. LBPNet demonstrates its particular advantage on the character classification task where the content is composed of strokes. We applied LBPNet to benchmark datasets like MNIST, SVHN, DHCD, ICDAR, and Chars74K and observed encouraging results. | rejected-papers | This paper proposes LBPNet for character recognition, which introduces LBP feature extraction into deep learning. Reviewers were confused about the implementation and not convinced by the experiments. The only score-6 reviewer was also concerned: "Empirically weak, practical advantage wrt to literature unclear". Evaluating only on MNIST/SVHN etc. is not convincing enough to demonstrate the effectiveness of the proposed method. | train | [
"rkl6xVHcam",
"BklHrzScTm",
"BygIVXH5pX",
"r1xw-56i2m",
"rJxFat292X",
"rklNFUxr2m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"[from authors:]\nQ: \"A list of additional datasets is provided in Table 5, but only the performance metric is listed, which is meaningless if it is not accompanied by figures for size, latency, and speedup.\"\nA: We have updated figure 1 to indicate typical images from different datasets used in the paper. We hav... | [
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
5,
4,
4
] | [
"rJxFat292X",
"r1xw-56i2m",
"rklNFUxr2m",
"iclr_2019_rJxcHnRqYQ",
"iclr_2019_rJxcHnRqYQ",
"iclr_2019_rJxcHnRqYQ"
] |
iclr_2019_rJxpuoCqtQ | Likelihood-based Permutation Invariant Loss Function for Probability Distributions | We propose a permutation-invariant loss function designed for neural networks reconstructing a set of elements without considering the order within its vector representation. Unlike popular approaches for encoding and decoding a set, our work does not rely on a carefully engineered network topology nor on any additional sequential algorithm. The proposed method, Set Cross Entropy, has a natural information-theoretic interpretation and is related to the metrics defined for sets. We evaluate the proposed approach in two object reconstruction tasks and a rule learning task. | rejected-papers | This paper proposes a new permutation-invariant loss (where the order doesn't matter), motivated by set autoencoding settings. This is an important problem, and the authors' solution is interesting. The reviewers, however, found the exposition to be unclear; in particular, the explanation of how the loss function is derived was confusing to two of the reviewers. Reviewers also found the experimental results unconvincing, even after the revision. This is a borderline paper: the idea is valuable and I'd encourage the authors to develop it further, improving the exposition and including additional experiments as suggested by the reviewers.
| train | [
"rJe3Vj9tA7",
"rkxXX1XYRm",
"S1xEuWr4AX",
"H1ezDoAkCQ",
"BkgusNjaam",
"BJlQWHaL67",
"r1g0z4pU6m",
"BygnoGpIpX",
"rkxJ7-a8a7",
"HJglfr-CiQ",
"SklYVU2hs7",
"S1lS16Ohsm"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Please check the updated paper for the additional results.",
" > I still think that evaluating the quality of representations learnt\n with an SCE loss is not a very surprising thing to ask for.\n\n To address the reviewer's request, we are running a new experiment. We\n modified Latplan (Asai, Fukunaga AA... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"S1xEuWr4AX",
"S1xEuWr4AX",
"BygnoGpIpX",
"BkgusNjaam",
"BJlQWHaL67",
"S1lS16Ohsm",
"SklYVU2hs7",
"HJglfr-CiQ",
"iclr_2019_rJxpuoCqtQ",
"iclr_2019_rJxpuoCqtQ",
"iclr_2019_rJxpuoCqtQ",
"iclr_2019_rJxpuoCqtQ"
] |
iclr_2019_rJxug2R9Km | Meta-Learning for Contextual Bandit Exploration | We describe MÊLÉE, a meta-learning algorithm for learning a good exploration policy in the interactive contextual bandit setting. Here, an algorithm must take actions based on contexts, and learn based only on a reward signal from the action taken, thereby generating an exploration/exploitation trade-off. MÊLÉE addresses this trade-off by learning a good exploration strategy based on offline synthetic tasks, on which it can simulate the contextual bandit setting. Based on these simulations, MÊLÉE uses an imitation learning strategy to learn a good exploration policy that can then be applied to true contextual bandit tasks at test time. We compare MÊLÉE to seven strong baseline contextual bandit algorithms on a set of three hundred real-world datasets, on which it outperforms alternatives in most settings, especially when differences in rewards are large. Finally, we demonstrate the importance of having a rich feature representation for learning how to explore.
| rejected-papers | This paper provides an interesting strategy for learning to explore, by first training on fully supervised data before deploying that policy to an online setting. There are some concerns, however, on the realism and utility of this setting that should be further discussed. If the offline data is not related to the contextual bandit problem, it would be surprising for this to have much benefit, and this should be better motivated and discussed. Because there are no theoretical guarantees for exploration, a discussion is needed and as suggested by a reviewer the learned exploration policies could be qualitatively examined. For example, the paper says "While these approaches are effective if the distribution of tasks is very similar and the state space is shared among different tasks, they fail to generalize when the tasks are different. Our approach targets an easier problem than exploration in full reinforcement learning environments, and can generalize well across a wide range of different tasks with completely unrelated features spaces." This is a pretty surprising statement, that your idea would not work well in an RL setting, but does work well in a contextual bandit setting.
There should also be a bit more discussion comparing to previous approaches to learning how to explore, including in active learning. It is true that active learning is a different setting, but in both the goal is to become optimal as quickly as possible. Similarly, the ideas used for RL could be used here as well, essentially by setting gamma to 0.
Overall, the ideas here are interesting and well presented, but need a bit more development on previous work, and motivation for why this approach will be effective. | train | [
"Sklm6yPP67",
"SJeF96IDpX",
"HkgsDh8wp7",
"rkeSHBI937",
"S1xpvoeO3Q",
"HJluMESy2X"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the provided feedback. Please find our response below:\n\n1) Relevance to Real Problems:\n===========================\n\nWe believe that there is a fundamental misunderstanding in this point of the review regarding the experimental setup we study on our paper. \n\nWe want to stress that... | [
-1,
-1,
-1,
7,
6,
3
] | [
-1,
-1,
-1,
4,
4,
4
] | [
"HJluMESy2X",
"S1xpvoeO3Q",
"rkeSHBI937",
"iclr_2019_rJxug2R9Km",
"iclr_2019_rJxug2R9Km",
"iclr_2019_rJxug2R9Km"
] |
iclr_2019_rJzoujRct7 | A Solution to China Competitive Poker Using Deep Learning | Recently, deep neural networks have achieved superhuman performance in various games such as Go, chess and Shogi. Compared to Go, China Competitive Poker, also known as Dou dizhu, is a type of imperfect information game, including hidden information, randomness, multi-agent cooperation and competition. It has become widespread and is now a national game in China. We introduce an approach to play China Competitive Poker using Convolutional Neural Network (CNN) to predict actions. This network is trained by supervised learning from human game records. Without any search, the network already beats the best AI program by a large margin, and also beats the best human amateur players in duplicate mode. | rejected-papers | The paper presents a CNN that is trained from human games to predict which actions to take for China Competitive Poker (Dou dizhu).
The paper is poorly written, not because of the English, but because it is hard to understand the details of the proposed solution: it is not straightforward to reimplement the solution from the presentation in the paper, which lacks explanations for several design decisions. This is unfortunate, as the authors point out in the rebuttal that they actually did far more experiments than are presented in the paper. Moreover, the experimental results lack the comparisons to baselines and ablations that would allow the proposed solution to be evaluated fairly.
In its current state, this paper cannot be accepted for presentation at ICLR 2019. | train | [
"HJehAIhjyE",
"S1le0-8h6X",
"r1eKD--3aQ",
"S1empmJnTm",
"HygUgVsfam",
"B1lI3xdG6Q",
"SJlZVO_Gam",
"BJeKTW7zpQ",
"BygupeAAhX",
"HJgfur5Inm",
"rJgEq1qrn7",
"BylSiKSBnm",
"SyeZVvHV3m",
"SkgYrSeV2X",
"BkeBfg-mnX",
"r1xYI4DhjQ",
"SJeVf_S2iQ",
"rkg_dernjX",
"SyxKWCEhiQ",
"Syg6kcNhi7"... | [
"public",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"public",
"public",
"author",
"author",
"author",
"author",
"public",
"public",
"author"... | [
"Can you give more contacts, with whom could be discussed the delails of the topic and future cooperation?",
"(3) The authors' \"yes\" answer, to me, does not contain enough scientific proof. CNN has lots of difference to, say, a fully-connected network. Without a careful scientific study, it is not logical to at... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_rJzoujRct7",
"S1empmJnTm",
"BJeKTW7zpQ",
"HygUgVsfam",
"B1lI3xdG6Q",
"BygupeAAhX",
"SkgYrSeV2X",
"iclr_2019_rJzoujRct7",
"iclr_2019_rJzoujRct7",
"rJgEq1qrn7",
"BylSiKSBnm",
"SyeZVvHV3m",
"iclr_2019_rJzoujRct7",
"BkeBfg-mnX",
"iclr_2019_rJzoujRct7",
"Syg6kcNhi7",
"S1xe6Ub2j... |
iclr_2019_rk4Wf30qKQ | Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks | Recent work has introduced attacks that extract the architecture information of deep neural networks (DNN), as this knowledge enhances an adversary’s capability to conduct attacks on black-box networks. This paper presents the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels. First, we define the threat model for these attacks: our adversary does not need the ability to query the victim model; instead, she runs a co-located process on the host machine on which the victim’s deep learning (DL) system is running, and passively monitors the accesses of the target functions in the shared framework. Second, we introduce DeepRecon, an attack that reconstructs the architecture of the victim network by using the internal information extracted via Flush+Reload, a cache side-channel technique. Once the attacker observes function invocations that map directly to architecture attributes of the victim network, the attacker can reconstruct the victim’s entire network architecture. In our evaluation, we demonstrate that an attacker can accurately reconstruct two complex networks (VGG19 and ResNet50) having only observed one forward propagation. Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pre-trained model in a transfer learning setting. From this meta-model, we evaluate the importance of the observed attributes in the fingerprinting process. Third, we propose and evaluate new framework-level defense techniques that obfuscate our attacker’s observations. Our empirical security analysis represents a step toward understanding the DNNs’ vulnerability to cache side-channel attacks.
| rejected-papers | The reviewers generally had concerns that the goal of recovering only the model architecture was unmotivated (given that knowing the architecture is not a large threat on its own, and there are existing attacks that work without knowledge of the model architecture). Moreover, given the strength of the assumed attack model, recovering model architecture is a fairly unambitious goal (again, more serious attacks have already been demonstrated under weaker attack models). Finally, though less seriously, the analysis is fairly preliminary, e.g. it is unclear if the attack can generalize to nearby architectures that were outside the training set. | val | [
"HJeelb-5C7",
"rylt1I8ZpQ",
"rJguAoCtRQ",
"B1l2z9wgTm",
"HylS7nAK0Q",
"S1gLF5l5am",
"BJx5zS5d6m",
"SygLtq7ZpQ",
"rygfvh7h2X"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank our reviewers for taking the time to read, evaluate our work, and provide constructive feedback. We have uploaded a revised version of our paper, with edits to address the concerns raised. We summarize our updates below:\n\n1. We update the content that made our contributions confusing in Sec. 1.\n (e.g... | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"iclr_2019_rk4Wf30qKQ",
"iclr_2019_rk4Wf30qKQ",
"B1l2z9wgTm",
"rygfvh7h2X",
"S1gLF5l5am",
"rylt1I8ZpQ",
"SygLtq7ZpQ",
"iclr_2019_rk4Wf30qKQ",
"iclr_2019_rk4Wf30qKQ"
] |
iclr_2019_rkGG6s0qKQ | The GAN Landscape: Losses, Architectures, Regularization, and Normalization | Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of ``tricks". The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We reproduce the current state of the art and go beyond, fairly exploring the GAN landscape. We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub. | rejected-papers | The paper presents a large-scale empirical comparison between different prominent losses, regularization and normalization schemes, and neural architectures frequently used in GAN training. Large-scale comparisons in this field are rare and important, and the outcome of the experimental analysis is clearly of interest for practitioners. However, as two of the reviewers point out, the significance of the new insights is limited, and after rebuttal all reviewers agree that the paper would profit from a clearer write-up and presentation of the main findings. I therefore see the paper as lying slightly under the acceptance threshold. | train | [
"ryezN9__TQ",
"H1e5hKOdTm",
"Hyeml__dT7",
"Hye9PEPg6m",
"BJxVfiNqhm",
"H1gxi8FY2X"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the time. We would like to take this opportunity to correct some factually incorrect statements below. \n\n[Q] The results of the paper do not give major insights into what are the preferred techniques for training GANs, and certainly not why and under what circumstances they'll work. \n[A] We resp... | [
-1,
-1,
-1,
4,
4,
7
] | [
-1,
-1,
-1,
3,
2,
4
] | [
"Hye9PEPg6m",
"BJxVfiNqhm",
"H1gxi8FY2X",
"iclr_2019_rkGG6s0qKQ",
"iclr_2019_rkGG6s0qKQ",
"iclr_2019_rkGG6s0qKQ"
] |
iclr_2019_rkGcYi09Km | NUTS: Network for Unsupervised Telegraphic Summarization | Extractive summarization methods operate by ranking and selecting the sentences which best encapsulate the theme of a given document. They do not fare well in domains like fictional narratives where there is no central theme and core information is not encapsulated by a small set of sentences. For the purpose of reducing the size of the document while conveying the idea expressed by each sentence, we need more sentence specific methods. Telegraphic summarization, which selects short segments across several sentences, is better suited for such domains. Telegraphic summarization captures the plot better by retaining shorter versions of each sentence while not really concerning itself with grammatically linking these segments. In this paper, we propose an unsupervised deep learning network (NUTS) to generate telegraphic summaries.
We use multiple encoder-decoder networks and learn to drop portions of the text that are inferable from the chosen segments. The model is agnostic to both sentence length and style. We demonstrate that the summaries produced by our model show significant quantitative and qualitative improvement over those produced by existing methods and baselines. | rejected-papers | This paper presents methods for telegraphic summarization, a task that generates extremely short summaries.
There are concerns about the utility of the task in general, and also the novelty of the modeling framework.
There is overall consensus among the reviewers regarding the paper's assessment, and the feedback is lukewarm. | train | [
"rJlcW07_07",
"rkgmooX_0Q",
"SklBB_QdRm",
"SJxGwdnqhQ",
"BkeiJu-q27",
"rJxYbry4hm"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"R2 : Concerns about novelty of the work\n\nWhile LSTMs and autoencoders find several applications in the domain of summarization and NLP in general, the model (composed of LSTMs) proposed in this paper is in fact novel. The architecture is intuitively motivated to solve the problem at hand and the objective for ea... | [
-1,
-1,
-1,
4,
4,
4
] | [
-1,
-1,
-1,
4,
4,
4
] | [
"rJxYbry4hm",
"BkeiJu-q27",
"SJxGwdnqhQ",
"iclr_2019_rkGcYi09Km",
"iclr_2019_rkGcYi09Km",
"iclr_2019_rkGcYi09Km"
] |
iclr_2019_rkMD73A5FX | Can I trust you more? Model-Agnostic Hierarchical Explanations | Interactions such as double negation in sentences and scene interactions in images are common forms of complex dependencies captured by state-of-the-art machine learning models. We propose Mahé, a novel approach to provide Model-Agnostic Hierarchical Explanations of how powerful machine learning models, such as deep neural networks, capture these interactions as either dependent on or free of the context of data instances. Specifically, Mahé provides context-dependent explanations by a novel local interpretation algorithm that effectively captures any-order interactions, and obtains context-free explanations through generalizing context-dependent interactions to explain global behaviors. Experimental results show that Mahé obtains improved local interaction interpretations over state-of-the-art methods and successfully provides explanations of interactions that are context-free. | rejected-papers | This paper introduces Mahe, a model-agnostic hierarchical explanation technique, that constructs a hierarchy of explanations, from local, context-dependent ones (like LIME) to global, context-free ones. The reviewers found the proposed work to be a quite interesting application of the neural interaction detection (NID) framework, and overall found the results to be quite extensive and promising.
The reviewers and the AC note the following as the primary concerns with the paper: (1) the clarity of the writing, and (2) the computational expense of the proposed work, as exhaustive search over local interactions is needed.
The reviewers appreciated the detailed comments and the revision, and felt the revised manuscript was much improved by the additional editing, details in the paper, and the additional experiments. However, both reviewers 1 and 3 have strong reservations about the computational complexity of the approach, and the additional experiments did not alleviate this concern. Further, reviewer 1 is still concerned about the clarity of the work, finding much of the proposed work to be unclear, and recommends further revisions.
Given these considerations, everyone felt that the idea is strong and most of the experiments are quite promising. However, without further editing and some efficiency strategies, the paper falls just short of the bar for acceptance.
| train | [
"S1gB29f3sm",
"B1xF8StW14",
"r1ly9Li8Rm",
"rJebRE2507",
"HyxeaRB5AQ",
"BylQzfHL0m",
"SJxzhNaPRQ",
"H1g7jxSURQ",
"H1e41g4PTm",
"HkxEuTNzaX",
"Syg0NUh6nm",
"BkxG9ejcnm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary\n=======\nThe authors extended the linear local attribution method LIME for interpreting black box models by non-linear functions to more accurately approximate black box models locally and identifying interactions between model input variables using the previously published neural interaction detection (N... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_rkMD73A5FX",
"rJebRE2507",
"Syg0NUh6nm",
"HyxeaRB5AQ",
"H1g7jxSURQ",
"S1gB29f3sm",
"BkxG9ejcnm",
"S1gB29f3sm",
"HkxEuTNzaX",
"S1gB29f3sm",
"iclr_2019_rkMD73A5FX",
"iclr_2019_rkMD73A5FX"
] |
iclr_2019_rkMhusC5Y7 | Learning to Coordinate Multiple Reinforcement Learning Agents for Diverse Query Reformulation | We propose a method to efficiently learn diverse strategies in reinforcement learning for query reformulation in the tasks of document retrieval and question answering. In the proposed framework an agent consists of multiple specialized sub-agents and a meta-agent that learns to aggregate the answers from sub-agents to produce a final answer. Sub-agents are trained on disjoint partitions of the training data, while the meta-agent is trained on the full training set. Our method makes learning faster, because it is highly parallelizable, and has better generalization performance than strong baselines, such as an ensemble of agents trained on the full data. We show that the improved performance is due to the increased diversity of reformulation strategies. | rejected-papers |
pros:
- The paper is clear and easy to read
- Both Reviewer 1 and Reviewer 2 found the empirical evaluation to be good
cons:
- Some of the reviewers felt that the proposed approach lacked novelty (e.g. with respect to Nogueira and Cho)
- Some of the architecture choices seem complicated and it was not fully clear to the reviewers (even after the rebuttal) how and why things were working better in this approach than in other similar ones.
I think this is a good paper but it doesn't quite meet the bar for acceptance at this time.
| train | [
"ByeZ-WSV0Q",
"HJgFRREEAm",
"SkxZf6VN0X",
"BJltzGNERm",
"rJxJYI8T37",
"SkloQyKK3X",
"ryeOhUMtn7",
"SkxftzzHhm",
"BJxNrTCQn7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thanks for your comments and questions!\n\nQuestion 1: It is counter-intuitive, e.g., why sub-agents trained on full training dataset obtain worse results than on its subset. \n\nAnswer: This question is similar to questions 1 and 2 from AnonReviewer1, and we hope to have answered it there.\n\n\nQuestion 2: Regard... | [
-1,
-1,
-1,
-1,
4,
7,
5,
-1,
-1
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
-1,
-1
] | [
"ryeOhUMtn7",
"SkloQyKK3X",
"rJxJYI8T37",
"SkxftzzHhm",
"iclr_2019_rkMhusC5Y7",
"iclr_2019_rkMhusC5Y7",
"iclr_2019_rkMhusC5Y7",
"BJxNrTCQn7",
"iclr_2019_rkMhusC5Y7"
] |
iclr_2019_rkMnHjC5YQ | Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps | We propose a new algorithm to learn a one-hidden-layer convolutional neural network where both the convolutional weights and the output weights are parameters to be learned. Our algorithm works for a general class of (potentially overlapping) patches, including commonly used structures for computer vision tasks. Our algorithm draws ideas from (1) isotonic regression for learning neural networks and (2) landscape analysis of non-convex matrix factorization problems. We believe these findings may inspire further development in designing provable algorithms for learning neural networks and other complex models. While our focus is theoretical, we also present experiments that illustrate our theoretical findings. | rejected-papers | The reviewers seem to reach a consensus that the contribution of the paper is somewhat incremental given the prior work of Goel et al. and that a main drawback of the paper is that it is not clear that a similar technique can be applied to multiple **convolutional filters**. The authors mentioned in the response that some of the techniques can be heuristically applied to multiple layers, but the AC is skeptical about this because, with multiple layers and multiple convolutional filters, one has to deal with the permutation invariance caused by the multiple convolutional filters. (It is unclear to the AC how one could have a meaningful setting with multiple layers but a single convolutional filter.) | train | [
"BJgAcYWW0X",
"rkl-S4D067",
"Hkx5HVxhTX",
"SyltPVgnam",
"S1e1lgWoh7",
"SJlH_D8F37"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank for your thoughtful review.\n\nFirst, we want to emphasize that the generalization of Convotron [1] to handle approximately known weights in the second layer is highly non-trivial. It requires an analysis of the new patch matrix that is weighted by the coefficients. We use a substantially different techni... | [
-1,
6,
-1,
-1,
5,
6
] | [
-1,
3,
-1,
-1,
1,
4
] | [
"rkl-S4D067",
"iclr_2019_rkMnHjC5YQ",
"S1e1lgWoh7",
"SJlH_D8F37",
"iclr_2019_rkMnHjC5YQ",
"iclr_2019_rkMnHjC5YQ"
] |
iclr_2019_rkVOXhAqY7 | The Conditional Entropy Bottleneck | We present a new family of objective functions, which we term the Conditional Entropy Bottleneck (CEB). These objectives are motivated by the Minimum Necessary Information (MNI) criterion. We demonstrate the application of CEB to classification tasks. We show that CEB gives: well-calibrated predictions; strong detection of challenging out-of-distribution examples and powerful whitebox adversarial examples; and substantial robustness to those adversaries. Finally, we report that CEB fails to learn from information-free datasets, providing a possible resolution to the problem of generalization observed in Zhang et al. (2016). | rejected-papers | This paper proposes a criterion for representation learning, minimum necessary information, which states that for a task defined by some joint probability distribution P(X,Y) and the goal of (for example) predicting Y from X, a learned representation of X, denoted Z, should satisfy the equality I(X;Y) = I(X;Z) = I(Y;Z). The authors then propose an objective function, the conditional entropy bottleneck (CEB), to ensure that a learned representation satisfies the minimum necessary information criterion, and a variational approximation to the conditional entropy bottleneck that can be parameterized using deep networks and optimized with standard methods such as stochastic gradient descent. The authors also relate the conditional entropy bottleneck to the information bottleneck Lagrangian proposed by Tishby, showing that the CEB corresponds to the information bottleneck with β = 0.5. An important contribution of this work is that it gives a theoretical justification for selecting a specific value of β rather than testing multiple values. 
Experiments on Fashion-MNIST show that, in comparison to a deterministic classifier and to variational information bottleneck models with β in {0.01, 0.1, 0.5}, the CEB model achieves good accuracy and calibration, is competitive at detecting out-of-distribution inputs, and is more resistant to white-box adversarial attacks. Another experiment demonstrates that a model trained with the CEB criterion is *unable* to memorize a randomly labeled version of Fashion-MNIST. There was a strong difference of opinion between the reviewers on this paper. One reviewer (R1) dismissed the work as trivial. The authors rebutted this claim in their response and revision, and R1 failed to participate in the discussion, so the AC strongly discounted this review. The other two reviewers had some concerns about the paper, most of which were addressed by the revision. But, crucially, some concerns still remain. R4 would like more theoretical rigor in the paper, while R2 would like a direct comparison against MINE and CPC. In the end, the AC thinks that this paper needs just a bit more work to address these concerns. The authors are encouraged to revise this work and submit it to another machine learning venue. | train | [
"Skx2S8ZEyE",
"HJeLz4ey1E",
"BklfFeO56Q",
"HkeB8Ry92X",
"BJeYXIm907",
"Hkx3Y3f5R7",
"S1g2EolPCQ",
"rJx2ypTTT7",
"Bkg6uTmaTQ",
"S1eAXuma6m",
"ryx7fSXaaQ",
"BkgjjVm6p7",
"S1e2IVQppX",
"BJewrEQTTX",
"Hyx7DifTpX",
"H1lTT5fp6m",
"rJgNX5MT6m",
"ryx8V2-6Tm",
"Sye2s1-paX",
"rJxskIxpaQ"... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
... | [
"Thank you for reading our revisions, and for adjusting your score. You are correct that we focus on the body of the paper on supervised representation learning. This is representation learning as presented in Tishby et al. (2000).\n\nHowever, we explicitly state in the main body of the paper that we are placing no... | [
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1
] | [
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1
] | [
"HJeLz4ey1E",
"BJewrEQTTX",
"iclr_2019_rkVOXhAqY7",
"iclr_2019_rkVOXhAqY7",
"S1g2EolPCQ",
"iclr_2019_rkVOXhAqY7",
"rJxskIxpaQ",
"S1eAXuma6m",
"ryx8V2-6Tm",
"rJxskIxpaQ",
"H1ehV4-62X",
"Syxd_Ta2n7",
"BJewrEQTTX",
"HkeB8Ry92X",
"H1lTT5fp6m",
"rJgNX5MT6m",
"ByxNXe-Chm",
"Sye2s1-paX",
... |
iclr_2019_rke41hC5Km | Generating Realistic Stock Market Order Streams | We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks.
We model the order stream as a stochastic process with finite history dependence, and employ a conditional Wasserstein GAN to capture history dependence of orders in a stock market.
We test our approach with actual market and synthetic data on a number of different statistics, and find the generated data to be close to real data. | rejected-papers | The reviewers raised a number of major concerns, including the incremental novelty of the proposed approach (WGANs applied to a new domain), and, most importantly, the insufficient and unconvincing experimental evaluation presented (including the lack of comparative studies). The authors’ rebuttal failed to fully alleviate reviewers’ concerns. Hence, I cannot suggest this paper for presentation at ICLR. | train | [
"ryeg7nJ9AX",
"Bkxgu1nm67",
"BJx6X1n76m",
"rygGCAjmpX",
"B1e29JKRn7",
"BygYHZ3F3m",
"BJgoOGtxi7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In response to the reviewers comments, we have updated our submission with the following information:\n1) Appendix B: presented results with a recurrent VAE, which performs worse in certain aspects compared to our approach.\n2) Appendix C: posted code snippets presenting our neural network,loss functions, and opti... | [
-1,
-1,
-1,
-1,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"iclr_2019_rke41hC5Km",
"BJgoOGtxi7",
"BygYHZ3F3m",
"B1e29JKRn7",
"iclr_2019_rke41hC5Km",
"iclr_2019_rke41hC5Km",
"iclr_2019_rke41hC5Km"
] |
iclr_2019_rke8ZhCcFQ | ATTACK GRAPH CONVOLUTIONAL NETWORKS BY ADDING FAKE NODES | Graph convolutional networks (GCNs) have been widely used for classifying graph nodes in the semi-supervised setting.
Previous works have shown that GCNs are vulnerable to perturbations of the adjacency and feature matrices of existing nodes. However, it is unrealistic to change the connections of existing nodes in many applications, such as existing users in social networks. In this paper, we investigate methods attacking GCNs by adding fake nodes. A greedy algorithm is proposed to generate adjacency and feature matrices of fake nodes, aiming to minimize the classification accuracy on the existing ones. In addition, we introduce a discriminator to classify fake nodes from real nodes, and propose a Greedy-GAN algorithm to simultaneously update the discriminator and the attacker, to make fake nodes indistinguishable from the real ones. Our non-targeted attack decreases the accuracy of GCN down to 0.10, and our targeted attack reaches a success rate of 0.99 for attacking the whole dataset, and 0.94 on average for attacking a single node. | rejected-papers | While the main idea of the paper is nice, the reviewers are not satisfied with the clarity of the material and the execution. | val | [
"ryeL9Aae6X",
"Hke6uV79nm",
"HJl0oTyFhX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents an idea of adding fake nodes to attack a graph network model, by a GAN style trainning procedure.\n\nHowever I concern about the experimental parts, which are only evaluated on small settings. \n\nPlus, the notations are inconsistant, whereas the objective function in (3) has nothing to do with... | [
4,
3,
3
] | [
3,
4,
2
] | [
"iclr_2019_rke8ZhCcFQ",
"iclr_2019_rke8ZhCcFQ",
"iclr_2019_rke8ZhCcFQ"
] |
iclr_2019_rkeMHjR9Ym | Stochastic Gradient Descent Learns State Equations with Nonlinear Activations | We study discrete time dynamical systems governed by the state equation h_{t+1} = ϕ(A h_t + B u_t). Here A, B are weight matrices, ϕ is an activation function, and u_t is the input data. This relation is the backbone of recurrent neural networks (e.g. LSTMs) which have broad applications in sequential learning tasks. We utilize stochastic gradient descent to learn the weight matrices from a finite input/state trajectory (u_t, h_t)_{t=0}^{N}. We prove that the SGD estimate linearly converges to the ground truth weights while using a near-optimal sample size. Our results apply to increasing activations whose derivatives are bounded away from zero. The analysis is based on i) an SGD convergence result with nonlinear activations and ii) careful statistical characterization of the state vector. Numerical experiments verify the fast convergence of SGD on ReLU and leaky ReLU, consistent with our theory. | rejected-papers | This paper shows convergence of stochastic gradient descent for the problem of learning weight matrices for a linear dynamical system with non-linear activation. Reviewers agree that the problem considered is both interesting and challenging. However, the paper makes many simplifying assumptions: 1) both input and hidden state are observed, a very non-standard assumption; 2) the analysis requires increasing activation functions and cannot handle ReLU functions. I agree with R2 and think these assumptions make the results significantly weaker. R1 and R3 are more optimistic, but the authors' response does not give an insight into how one might extend this analysis to the setting where the hidden state is not observed. Relaxing these assumptions will make the paper more interesting. | train | [
"Sygf9aE927",
"SJx2X4Rpn7",
"BJxDKvltR7",
"HJlkw8gYA7",
"HJeIAZlYCX",
"HJxxv9Bp37"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper studies discrete time dynamical systems with a non-linear state equation. They assume the non-linear function is assumed to be \\beta-increasing like leaky ReLU. Under this setting, the authors prove that for the given state equation for stable systems with random gaussian input at each time step, runni... | [
7,
7,
-1,
-1,
-1,
5
] | [
3,
3,
-1,
-1,
-1,
5
] | [
"iclr_2019_rkeMHjR9Ym",
"iclr_2019_rkeMHjR9Ym",
"Sygf9aE927",
"HJxxv9Bp37",
"SJx2X4Rpn7",
"iclr_2019_rkeMHjR9Ym"
] |
iclr_2019_rkeT8iR9Y7 | Directional Analysis of Stochastic Gradient Descent via von Mises-Fisher Distributions in Deep Learning | Although stochastic gradient descent (SGD) is a driving force behind the recent success of deep learning, our understanding of its dynamics in a high-dimensional parameter space is limited. In recent years, some researchers have used the stochasticity of minibatch gradients, or the signal-to-noise ratio, to better characterize the learning dynamics of SGD. Inspired by these works, we analyze SGD from a geometrical perspective by inspecting the stochasticity of the norms and directions of minibatch gradients. We propose a model of the directional concentration of minibatch gradients through the von Mises-Fisher (VMF) distribution, and show that the directional uniformity of minibatch gradients increases over the course of SGD. We empirically verify our result using deep convolutional networks and observe a higher correlation between the gradient stochasticity and the proposed directional uniformity than that against the gradient norm stochasticity, suggesting that the directional statistics of minibatch gradients are a major factor behind SGD. | rejected-papers | The paper presents a careful analysis of SGD by characterizing the stochastic gradient via von Mises-Fisher distributions. While the paper has good quality and clarity, and the authors' detailed response has further clarified several raised issues, some important concerns remain: Reviewer 1 would like to see careful discussion of related observations by other work in the literature, such as low-rank Hessians in the over-parameterized regime; Reviewer 2 is concerned about the significance of the presented analysis and observations; and Reviewers 2 and 4 both would like to see how the presented theoretical analysis could be used to design improved algorithms. 
In the AC's opinion, while solid theoretical analysis of SGD is definitely valuable, it is highly desirable to demonstrate its practical value (considering that it does not provide clearly new insights into the learning dynamics of SGD). | train | [
"Byxlpm08yE",
"SJlXBVN71V",
"BkgdqbSFAm",
"H1lDUA_jaQ",
"SylwdgtPpX",
"SJlhIp_DaQ",
"SJxhAiOPTX",
"S1lAhxfZam",
"ByxlCvF92m",
"SJeiEX3YnX"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"“A high-d uniform random unit vector will, with high probability, be orthogonal to vectors living in a subspace. The observations made in the paper may not be capturing such sub-space concentration correctly or at least the claims need to be reconciled with existing observations regarding the Hessian being low ran... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"SJlXBVN71V",
"H1lDUA_jaQ",
"iclr_2019_rkeT8iR9Y7",
"SJlhIp_DaQ",
"SJeiEX3YnX",
"ByxlCvF92m",
"S1lAhxfZam",
"iclr_2019_rkeT8iR9Y7",
"iclr_2019_rkeT8iR9Y7",
"iclr_2019_rkeT8iR9Y7"
] |
iclr_2019_rkeUrjCcYQ | Monge-Ampère Flow for Generative Modeling | We present a deep generative model, named Monge-Ampère flow, which builds on continuous-time gradient flow arising from the Monge-Ampère equation in optimal transport theory. The generative map from the latent space to the data space follows a dynamical system, where a learnable potential function guides a compressible fluid to flow towards the target density distribution. Training of the model amounts to solving an optimal control problem. The Monge-Ampère flow has tractable likelihoods and supports efficient sampling and inference. One can easily impose symmetry constraints in the generative model by designing suitable scalar potential functions. We apply the approach to unsupervised density estimation of the MNIST dataset and variational calculation of the two-dimensional Ising model at the critical point. This approach brings insights and techniques from the Monge-Ampère equation, optimal transport, and fluid dynamics into reversible flow-based generative models. | rejected-papers | This paper develops a generative density model based on continuous-time flows on a potential field.
Strengths: The paper contains interesting ideas and connections to physics, in particular the enforcement of symmetry in a computationally cheap way.
Weaknesses: The main quantitative results of this paper are undercut by the numerical error introduced by the approximate, fixed-step integrator used. In the paper, the authors did not check the degree of numerical error (or to what extent their reported likelihoods do not normalize) as a function of the step size. This was partially addressed in a comment below.
There does seem to be some novelty but the lack of concrete experiments is a letdown. One could e.g. verify that the samples have similar properties (e.g. moments) to the ground truth, which are known for the Ising model. Regarding clarity, the symmetry constraints are never clearly specified.
This paper contains many ideas that would have been novel, but were scooped by [1] which was put on arXiv 3 months before the ICLR submission date. The authors have added appropriate references to this paper, but this still undercuts the originality of the contribution.
The explanation of how and which symmetries are enforced is a little bit buried and unclear.
Points of contention: Two of the reviewers didn't seem to be aware that the main mathematical results of the model are special cases of results from [1].
Consensus: All reviewers agreed that there were interesting ideas in the paper, and that it was close to the bar.
[1] Chen, Tian Qi, et al. "Neural Ordinary Differential Equations." | train | [
"S1lVyzH1gN",
"rygostgRy4",
"BJlJ2e-GJN",
"Bkg3dNuh0m",
"SkgDRoNs0m",
"BJgwkdVoR7",
"rkxAz5z9hQ",
"HylWFOk5A7",
"rJekJlednX",
"HklT8WB927",
"BylSHFrzRm",
"rJgNSZHGRm",
"H1gpYYJMC7",
"HJlPc7kzA7",
"B1geQk2Ah7"
] | [
"author",
"official_reviewer",
"author",
"public",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public"
] | [
"> I'm concerned about the numerical error introduced by the approximate, fixed-step integrator used. In the paper, the authors did not check the degree of numerical error (or to what extend their reported likelihoods do not normalize) as a function of the step size. In the comments below, you state: \"We have ch... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1
] | [
"rygostgRy4",
"iclr_2019_rkeUrjCcYQ",
"Bkg3dNuh0m",
"iclr_2019_rkeUrjCcYQ",
"rJekJlednX",
"HylWFOk5A7",
"iclr_2019_rkeUrjCcYQ",
"rJgNSZHGRm",
"iclr_2019_rkeUrjCcYQ",
"iclr_2019_rkeUrjCcYQ",
"rJekJlednX",
"rkxAz5z9hQ",
"HklT8WB927",
"B1geQk2Ah7",
"iclr_2019_rkeUrjCcYQ"
] |
iclr_2019_rkeX-3Rqtm | Training Hard-Threshold Networks with Combinatorial Search in a Discrete Target Propagation Setting | Learning deep neural networks with hard-threshold activation has recently become an important problem due to the proliferation of resource-constrained computing devices. In order to circumvent the inability to train with backpropagation in the present of hard-threshold activations, \cite{friesen2017} introduced a discrete target propagation framework for training hard-threshold networks in a layer-by-layer fashion. Rather than using a gradient-based target heuristic, we explore the use of search methods for solving the target setting problem. Building on both traditional combinatorial optimization algorithms and gradient-based techniques, we develop a novel search algorithm Guided Random Local Search (GRLS). We demonstrate the effectiveness of our algorithm in training small networks on several datasets and evaluate our target-setting algorithm compared to simpler search methods and gradient-based techniques. Our results indicate that combinatorial optimization is a viable method for training hard-threshold networks that may have the potential to eventually surpass gradient-based methods in many settings. | rejected-papers | The paper proposes a novel local combinatorial search algorithm for the discrete target propagation framework of Friesen & Domingos 2018, and shows a few promising empirical results.
Reviewers found the paper well written and clear, and two of them were enthusiastic about the direction of this research.
But all reviewers agreed that the paper is too preliminary, particularly in its empirical coverage. More extensive experiments are needed to compare with competitive approaches from the literature for the task of training hard-threshold networks. Experiments would need to evaluate the algorithms on larger models and data more representative of the field, to measure how the approach can scale, and to convincingly demonstrate the superiority or advantage of the proposed method. | train | [
"H1eV7W9nT7",
"SJgYmJCF6X",
"rJlwCC6Kp7",
"HylcG0aYaX",
"SkxCpX2yaQ",
"B1eTCTj9n7",
"rylmu9RdnX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I look forward to the next version of this work.",
"Great, I wish you the best of luck! ",
"1. We began with the most basic and general approach to local search for the target setting algorithm, which we present as the “naive” approach in the paper. We went through many possible improvements to the method and ... | [
-1,
-1,
-1,
-1,
3,
4,
5
] | [
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"rJlwCC6Kp7",
"HylcG0aYaX",
"rylmu9RdnX",
"SkxCpX2yaQ",
"iclr_2019_rkeX-3Rqtm",
"iclr_2019_rkeX-3Rqtm",
"iclr_2019_rkeX-3Rqtm"
] |
iclr_2019_rkeYUsRqKQ | An Adversarial Learning Framework for a Persona-based Multi-turn Dialogue Model | In this paper, we extend the persona-based sequence-to-sequence (Seq2Seq) neural network conversation model to a multi-turn dialogue scenario by modifying the state-of-the-art hredGAN architecture to simultaneously capture utterance attributes such as speaker identity, dialogue topic, speaker sentiments and so on. The proposed system, phredGAN, has a persona-based HRED generator (PHRED) and a conditional discriminator. We also explore two approaches to accomplish the conditional discriminator: (1) phredGANa, a system that passes the attribute representation as an additional input into a traditional adversarial discriminator, and (2) phredGANd, a dual discriminator system which, in addition to the adversarial discriminator, collaboratively predicts the attribute(s) that generated the input utterance. To demonstrate the superior performance of phredGAN over the persona Seq2Seq model, we experiment with two conversational datasets, the Ubuntu Dialogue Corpus (UDC) and TV series transcripts from the Big Bang Theory and Friends. Performance comparison is made with respect to a variety of quantitative measures as well as crowd-sourced human evaluation. We also explore the trade-offs from using either variant of phredGAN on datasets with many but weak attribute modalities (such as with Big Bang Theory and Friends) and ones with few but strong attribute modalities (customer-agent interactions in the Ubuntu dataset). | rejected-papers | This work presents extensions of dialogue systems to simultaneously capture speakers' "personas" (in the framing of Li et al.'s work) and adapt to them. While the ideas are interesting, reviewers note that the incremental contribution compared to previous work is a bit too limited for ICLR's expectations, without being offset by strongly convincing experimental results. 
Authors are encouraged to incorporate their ideas into future submissions after having combined them with other insights to provide a stronger overall contribution. | train | [
"HyxbMDyF3Q",
"HJgLhRp0Am",
"BkxiAAOY27",
"SJeTEV0a0m",
"B1lEtNkjRX",
"rJlObhFf6m",
"Syg9jTOz6Q",
"HyxVn2dGTQ",
"Bkeg36TFnQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"===============================\nI have read the authors' response and other reviewers' comments carefully. Thank you for taking great efforts to improve the paper, including providing additional results on human evaluation. (Btw, Table 1 and Table 2 are also much nicer now.)\n\nHowever, from the reviews it seems ... | [
5,
-1,
4,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_rkeYUsRqKQ",
"SJeTEV0a0m",
"iclr_2019_rkeYUsRqKQ",
"Syg9jTOz6Q",
"iclr_2019_rkeYUsRqKQ",
"HyxbMDyF3Q",
"BkxiAAOY27",
"Bkeg36TFnQ",
"iclr_2019_rkeYUsRqKQ"
] |
iclr_2019_rkeqCoA5tX | LEARNING GENERATIVE MODELS FOR DEMIXING OF STRUCTURED SIGNALS FROM THEIR SUPERPOSITION USING GANS | Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high dimensional distributions. Most of the existing works implicitly assume that the clean samples from the target distribution are easily available. However, in many applications, this assumption is violated. In this paper, we consider the problem of learning GANs under the observation setting when the samples from target distribution are given by the superposition of two structured components. We propose two novel frameworks: denoising-GAN and demixing-GAN. The denoising-GAN assumes access to clean samples from the second component and try to learn the other distribution, whereas demixing-GAN learns the distribution of the components at the same time. Through comprehensive numerical experiments, we demonstrate that proposed frameworks can generate clean samples from unknown distributions, and provide competitive performance in tasks such as denoising, demixing, and compressive sensing. | rejected-papers | The paper proposes two simple generator architecture variants enabling the use of GAN training for the tasks of denoising (from known noise types) and demixing (of two added sources). While the denoising approach is very similar to AmbientGAN and could thus be considered somewhat incremental, all reviewers and the AC agree that the developed use of GANs for demixing is an interesting novel direction. The paper is well written, and the approach is supported by encouraging experimental results on MNIST and Fashion-MNIST.
Reviewers and AC noted the following weaknesses of the paper: a) no theoretical support or analysis is provided for the approach; this makes it primarily an empirical study of a nice idea.
b) For an empirical study, the experimental evaluation is very limited, both in terms of the datasets/problems it is tested on and in terms of the demixing/source-separation algorithms it is compared against.
Following these reviews, the authors added experiments on Fashion-MNIST and a comparison with ICA, which are steps in the right direction. This improvement moved one reviewer to raise their score, but not the others.
Taking everything into account, the AC judges that it is a very promising direction, but that more extensive experiments on additional benchmark tasks for demixing and comparison with other demixing algorithms are needed to make this work a more complete contribution.
| train | [
"HJlZFxF_3m",
"SylumfyqRX",
"rJg9EyCtAQ",
"Byx-dJJqCQ",
"SJgIjPdhhm",
"B1ew6SL9nX"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Quality is good, just a handful of typos.\nClaritys above average in explaining the problem setting.\nOriginality: scan refs...\nSignificance: medium\nPros: the authors develop a novel GAN-based approach to denoising, demixing, and in the process train generators for the various components (not just inference). Fu... | [
7,
-1,
-1,
-1,
4,
5
] | [
4,
-1,
-1,
-1,
5,
4
] | [
"iclr_2019_rkeqCoA5tX",
"HJlZFxF_3m",
"B1ew6SL9nX",
"SJgIjPdhhm",
"iclr_2019_rkeqCoA5tX",
"iclr_2019_rkeqCoA5tX"
] |
iclr_2019_rket4i0qtX | The meaning of "most" for visual question answering models | The correct interpretation of quantifier statements in the context of a visual scene requires non-trivial inference mechanisms. For the example of "most", we discuss two strategies which rely on fundamentally different cognitive concepts. Our aim is to identify what strategy deep learning models for visual question answering learn when trained on such questions. To this end, we carefully design data to replicate experiments from psycholinguistics where the same question was investigated for humans. Focusing on the FiLM visual question answering model, our experiments indicate that a form of approximate number system emerges whose performance declines with more difficult scenes as predicted by Weber's law. Moreover, we identify confounding factors, like spatial arrangement of the scene, which impede the effectiveness of this system. | rejected-papers | The paper studies an narrowly focused but interesting problem -- if the Visual Question answering model “FILM” from Perez et al (2018) is able to decide if “most” of the objects have a certain attribute or color. While the work itself is appreciate by the reviewers, concerns remain about the conclusion being limited in scope due to the synthetic nature of the data, and the analysis fairly narrow (a single model with a single very specific task). We encourage the authors to use reviewer feedback to make the manuscript stronger for a future deadline. | train | [
"rke3eLqpJV",
"BJg2S_InpQ",
"HyxpQdUhp7",
"Ske5xOUn6m",
"S1eF1dL3TQ",
"SygP4mws3Q",
"B1eHFc49nm",
"rJlfD0wcom"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"On the positive side:\n+ The paper improved in the revision, improving mainly discussion and increasing clarity.\n\nRemaining weaknesses:\n- I still think for an analysis paper it is important to have a comparison of more than a single model. Even when proposing a new model we expect papers to compare to prior wor... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"Ske5xOUn6m",
"HyxpQdUhp7",
"rJlfD0wcom",
"B1eHFc49nm",
"SygP4mws3Q",
"iclr_2019_rket4i0qtX",
"iclr_2019_rket4i0qtX",
"iclr_2019_rket4i0qtX"
] |
iclr_2019_rkg5fh0ctQ | Transferring SLU Models in Novel Domains | Spoken language understanding (SLU) is a critical component in building dialogue systems. When building models for novel natural language domains, a major challenge is the lack of data in the new domains, no matter whether the data is annotated or not. Recognizing and annotating ``intent'' and ``slot'' of natural languages is a time-consuming process. Therefore, spoken language understanding in low resource domains remains a crucial problem to address. In this paper, we address this problem by proposing a transfer-learning method, whereby a SLU model is transferred to a novel but data-poor domain via a deep neural network framework. We also introduce meta-learning in our work to bridge the semantic relations between seen and unseen data, allowing new intents to be recognized and new slots to be filled with much lower new training effort. We show the performance improvement with extensive experimental results for spoken language understanding in low resource domains. We show that our method can also handle novel intent recognition and slot-filling tasks. Our methodology provides a feasible solution for alleviating data shortages in spoken language understanding. | rejected-papers | This paper proposes a transfer learning approach based on previous work in this area to build language understanding models for new domains. Experimental results show improved performance in comparison to previous studies in terms of slot and intent accuracies in multiple setups.
The work is interesting and useful, but is not novel given the previous work.
The paper's organization is also not great; for example, the introduction should present the approach rather than just mentioning transfer learning and meta-learning.
The improvements over the baselines look good, but the baselines themselves are quite simple. It'd be better to include comparisons with other state-of-the-art methods. Also, the improvements over the DNN are not consistent; it would be good to analyze this and come up with suggestions on when to use which approach.
"SyxTD03x1N",
"B1luOHqK07",
"H1g8yS5YCQ",
"SylXI4cKRX",
"HJg3RTKFC7",
"HJghT0Dqnm",
"BJxtUzpFhQ",
"Skx-_1juhQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks a lot for elaborating the difference and adding the discussions. I think that the score 6 (above the threshold) is reasonable for this paper.",
"We want to thank you for your kind and helpful feedback. We've followed your comments and addressed the related concerns in our revision as follows:\n1) \"Hard t... | [
-1,
-1,
-1,
-1,
-1,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"SylXI4cKRX",
"Skx-_1juhQ",
"BJxtUzpFhQ",
"HJghT0Dqnm",
"iclr_2019_rkg5fh0ctQ",
"iclr_2019_rkg5fh0ctQ",
"iclr_2019_rkg5fh0ctQ",
"iclr_2019_rkg5fh0ctQ"
] |
iclr_2019_rkgMNnC9YQ | ATTENTIVE EXPLAINABILITY FOR PATIENT TEMPORAL EMBEDDING | Learning explainable patient temporal embeddings from observational data has mostly ignored the use of RNN architecture that excel in capturing temporal data dependencies but at the expense of explainability. This paper addresses this problem by introducing and applying an information theoretic approach to estimate the degree of explainability of such architectures. Using a communication paradigm, we formalize metrics of explainability by estimating the amount of information that an AI model needs to convey to a human end user to explain and rationalize its outputs. A key aspect of this work is to model human prior knowledge at the receiving end and measure the lack of explainability as a deviation from human prior knowledge. We apply this paradigm to medical concept representation problems by regularizing loss functions of temporal autoencoders according to the derived explainability metrics to guide the learning process towards models producing explainable outputs. We illustrate the approach with convincing experimental results for the generation of explainable temporal embeddings for critical care patient data. | rejected-papers | The paper proposes an approach to define an "interpretable representation",
in particular for the case of patient condition monitoring. Reviewers point
to several concerns, including even the definition of explainability and
limited significance. The authors tried to address the concerns but reviewers
think the paper is not ready for acceptance. I concur with them in rejecting it. | train | [
"HyxW_aE9AQ",
"SyeM1qV5CQ",
"ByeXN_N5AQ",
"B1lSpR6n2m",
"ByeabU2c3X",
"rygGb4XW2X"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Regarding issue 1) raised in your review, we would like to point out that we do not measure interpretability or explainability with a \"number of model parameters\". The use of the external observer treats the model as a blackbox. Our approach is in fact based on the premise that we do not want to inspect models t... | [
-1,
-1,
-1,
4,
3,
2
] | [
-1,
-1,
-1,
4,
4,
3
] | [
"rygGb4XW2X",
"B1lSpR6n2m",
"ByeabU2c3X",
"iclr_2019_rkgMNnC9YQ",
"iclr_2019_rkgMNnC9YQ",
"iclr_2019_rkgMNnC9YQ"
] |
iclr_2019_rkgZ3oR9FX | Learning to Refer to 3D Objects with Natural Language | Human world knowledge is both structured and flexible. When people see an object, they represent it not as a pixel array but as a meaningful arrangement of semantic parts. Moreover, when people refer to an object, they provide descriptions that are not merely true but also relevant in the current context. Here, we combine these two observations in order to learn fine-grained correspondences between language and contextually relevant geometric properties of 3D objects. To do this, we employed an interactive communication task with human participants to construct a large dataset containing natural utterances referring to 3D objects from ShapeNet in a wide variety of contexts. Using this dataset, we developed neural listener and speaker models with strong capacity for generalization. By performing targeted lesions of visual and linguistic input, we discovered that the neural listener depends heavily on part-related words and associates these words correctly with the corresponding geometric properties of objects, suggesting that it has learned task-relevant structure linking the two input modalities. We further show that a neural speaker that is `listener-aware' --- that plans its utterances according to how an imagined listener would interpret its words in context --- produces more discriminative referring expressions than an `listener-unaware' speaker, as measured by human performance in identifying the correct object. | rejected-papers | Paper develops a dataset and model for learning to refer to 3D objects. Reviewers raised concerns about lack of novelty. Fundamentally, it seems unclear what (if any) the take-away for an ML-audience would be after reading this paper. We encourage the authors to incorporate reviewer feedback and submit a stronger manuscript at a future (perhaps a more applied) venue. | val | [
"rkgk-fZ63X",
"Ske3XSpdhm",
"BygDtFc9nQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Update: I have read author's response (sorry for being super late). The response better indicates and brings out the contributions made in the paper, and in my opinion is a strong application paper. But as before, and in agreement with R1 I still do not see technical novelty in the paper. For an application driven... | [
6,
4,
6
] | [
4,
4,
3
] | [
"iclr_2019_rkgZ3oR9FX",
"iclr_2019_rkgZ3oR9FX",
"iclr_2019_rkgZ3oR9FX"
] |
iclr_2019_rkgd0iA9FQ | Convergence Guarantees for RMSProp and ADAM in Non-Convex Optimization and an Empirical Comparison to Nesterov Acceleration | RMSProp and ADAM continue to be extremely popular algorithms for training neural nets but their theoretical convergence properties have remained unclear. Further, recent work has seemed to suggest that these algorithms have worse generalization properties when compared to carefully tuned stochastic gradient descent or its momentum variants. In this work, we make progress towards a deeper understanding of ADAM and RMSProp in two ways. First, we provide proofs that these adaptive gradient algorithms are guaranteed to reach criticality for smooth non-convex objectives, and we give bounds on the running time.
Next we design experiments to empirically study the convergence and generalization properties of RMSProp and ADAM against Nesterov's Accelerated Gradient method on a variety of common autoencoder setups and on VGG-9 with CIFAR-10. Through these experiments we demonstrate the interesting sensitivity that ADAM has to its momentum parameter \beta_1. We show that at very high values of the momentum parameter (\beta_1 = 0.99) ADAM outperforms a carefully tuned NAG on most of our experiments, in terms of getting lower training and test losses. On the other hand, NAG can sometimes do better when ADAM's \beta_1 is set to the most commonly used value: \beta_1 = 0.9, indicating the importance of tuning the hyperparameters of ADAM to get better generalization performance.
We also report experiments on different autoencoders to demonstrate that NAG has better abilities in terms of reducing the gradient norms, and it also produces iterates which exhibit an increasing trend for the minimum eigenvalue of the Hessian of the loss function at the iterates. | rejected-papers | The reviewers and ACs acknowledge that the paper has a solid theoretical contribution because it gives a convergence guarantee (to critical points) for the ADAM and RMSprop algorithms, and also shows that NAG can be tuned to match or outperform SGD in test errors. However, reviewers and the AC also note the following potential improvements for the paper: a) the exposition/notations can be improved; b) better comparison to the prior work could be made; c) the theoretical and empirical parts of the paper are somewhat disconnected; d) the proof has an error (which is fixed by the authors with additional assumptions). Therefore, the paper is not quite ready for publication right now, but the AC encourages the authors to submit revisions to other top ML venues.
| val | [
"Sklzfh9GyV",
"H1x2CXtlCm",
"SkloKxKgAX",
"HJe00ROlRm",
"r1gImA_gAX",
"H1l4kAugAm",
"SkgBbJ9waQ",
"H1gQqQbc2m",
"SJxNpokU27",
"Hke2uNGZhm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I thank the authors for their response.\nNevertheless, in face of the [Li and Orabona], I think that their contribution is incremental.\n\n- Indeed, only when \\sigma ->0, then [Li and Orabona] enable fast rate of 1/T. \n This is relevant to stochastic settings where we use large batch sizes which decrease the v... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"SkloKxKgAX",
"H1gQqQbc2m",
"Hke2uNGZhm",
"SJxNpokU27",
"SkgBbJ9waQ",
"iclr_2019_rkgd0iA9FQ",
"iclr_2019_rkgd0iA9FQ",
"iclr_2019_rkgd0iA9FQ",
"iclr_2019_rkgd0iA9FQ",
"iclr_2019_rkgd0iA9FQ"
] |
iclr_2019_rkgfWh0qKX | Do Language Models Have Common Sense? | It has been argued that current machine learning models do not have commonsense, and therefore must be hard-coded with prior knowledge (Marcus, 2018). Here we show surprising evidence that language models can already learn to capture certain common sense knowledge. Our key observation is that a language model can compute the probability of any statement, and this probability can be used to evaluate the truthfulness of that statement. On the Winograd Schema Challenge (Levesque et al., 2011), language models are 11% higher in accuracy than previous state-of-the-art supervised methods. Language models can also be fine-tuned for the task of Mining Commonsense Knowledge on ConceptNet to achieve an F1 score of 0.912 and 0.824, outperforming previous best results (Jastrzebski et al., 2018). Further analysis demonstrates that language models can discover unique features of Winograd Schema contexts that decide the correct answers without explicit supervision. | rejected-papers | This paper adapts language models (LMs), recurrent models trained on a large corpus to produce the next word in English, to two commonsense reasoning tasks: the Winograd schema challenge and commonsense knowledge extraction. For the former, the language model score itself is used to obtain substantial gains over existing approaches for this challenging task, while a slightly more involved training procedure adapts the LMs to commonsense extraction. The reviewers appreciated the simplicity of the changes to existing LMs and the impressive results (especially on the WSC).
The reviewers point out the following potential weaknesses: (1) clarity issues in the writing and the presentation, (2) a lack of novelty in the proposed approach, given that a number of recent works have shown the ability of language models to perform commonsense reasoning, and (3) critical methodological issues in the evaluation that raise questions about the significance of the results. A lack of response from the authors meant that no further discussion took place, and the reviewers encourage the authors to take the feedback into account to improve future versions of the paper. | train | [
"H1e8tLW9hm",
"BklPaakus7",
"rkx4ukiFnm",
"SJlmKFV4nX",
"rJeWt75Qsm",
"HyetCdcY5X",
"HJedsCKt5m",
"SJeCbxuFqX",
"ryxyxGDt5Q",
"BJxXFkLtqm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"public",
"author",
"public",
"author",
"public"
] | [
"This paper evaluates language models for tasks that involve \"commonsense knowledge\" such as the Winograd Schema Challenge (WSC), Pronoun Disambiguation Problems (PDP), and commonsense knowledge base completion (KBC). \n\nPros:\n\nThe approach is relatively simple in that it boils down to just applying language m... | [
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_rkgfWh0qKX",
"iclr_2019_rkgfWh0qKX",
"iclr_2019_rkgfWh0qKX",
"rJeWt75Qsm",
"iclr_2019_rkgfWh0qKX",
"HJedsCKt5m",
"SJeCbxuFqX",
"ryxyxGDt5Q",
"BJxXFkLtqm",
"iclr_2019_rkgfWh0qKX"
] |
iclr_2019_rkglvsC9Ym | Log Hyperbolic Cosine Loss Improves Variational Auto-Encoder | In Variational Auto-Encoder (VAE), the default choice of reconstruction loss function between the decoded sample and the input is the squared L2. We propose to replace it with the log hyperbolic cosine (log-cosh) loss, which behaves as L2 at small values and as L1 at large values, and differentiable everywhere. Compared with L2, the log-cosh loss improves the reconstruction without damaging the latent space optimization, thus automatically keeping a balance between the reconstruction and the generation. Extensive experiments on MNIST and CelebA datasets show that the log-cosh reconstruction loss significantly improves the performance of VAE and its variants in output quality, measured by sharpness and FID score. In addition, the gradient of the log-cosh is a simple tanh function, which makes the implementation of gradient descent as simple as adding one sentence in coding. | rejected-papers | The reviewers agree that the paper is well-written, and they all seem to like the general idea. One of the earlier criticisms was that you did not compare against other robust loss functions, but you have partially rectified that by comparing to L1 in the appendix. As per the request of reviewer 2 I would also compare to the Huber loss.
One remaining concern is the lack of theoretical justification; such a justification could also help address reviewer 3's comment regarding blurry images arising from location uncertainty. The other concern is that you should compare your method using FID scores from a standard implementation so that your numbers are comparable to other papers. Some of the reviewers were impressed, but confused by your relatively low scores.
"HyxhGw8rRX",
"H1gOXTikCm",
"ryl8ijVj2X",
"SklQIfSq27",
"rylfViud3Q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Based on the latest authors' reply I need to conclude that I am not changing my score.\nInstead of reporting the FID scores following a standard implementation widely accepted these days,\nit turns out the authors were using their own classifier trained on CelebA to embed the pictures. It is\nmy fault that initial... | [
-1,
-1,
4,
4,
5
] | [
-1,
-1,
4,
4,
4
] | [
"rylfViud3Q",
"iclr_2019_rkglvsC9Ym",
"iclr_2019_rkglvsC9Ym",
"iclr_2019_rkglvsC9Ym",
"iclr_2019_rkglvsC9Ym"
] |
iclr_2019_rkgpCoRctm | Detecting Out-Of-Distribution Samples Using Low-Order Deep Features Statistics | The ability to detect when an input sample was not drawn from the training distribution is an important desirable property of deep neural networks. In this paper, we show that a simple ensembling of first and second order deep feature statistics can be exploited to effectively differentiate in-distribution and out-of-distribution samples. Specifically, we observe that the mean and standard deviation within feature maps differs greatly between in-distribution and out-of-distribution samples. Based on this observation, we propose a simple and efficient plug-and-play detection procedure that does not require re-training, pre-processing or changes to the model. The proposed method outperforms the state-of-the-art by a large margin in all standard benchmarking tasks, while being much simpler to implement and execute. Notably, our method improves the true negative rate from 39.6% to 95.3% when 95% of in-distribution (CIFAR-100) are correctly detected using a DenseNet and the out-of-distribution dataset is TinyImageNet resize. The source code of our method will be made publicly available. | rejected-papers | The paper proposes a simple method for detecting out-of-distribution samples. The authors' major finding is that mean and standard deviation within feature maps can be used as an input for classifying out-of-distribution (OOD) samples. The proposed method is simple and practical.
The reviewers and AC note the following potential weaknesses: (1) limited novelty and a somewhat ad-hoc approach, i.e., it is not too surprising that such statistics can be useful for this purpose; some theoretical justification might help. (2) questionable experimental settings, i.e., the performance varies greatly depending on validation (even in the revised draft) and is sometimes irrationally good. It also depends on the choice of classifier.
For (2), I think the whole evaluation should be done assuming that we don't know how the OOD set looks. Under this setting, the authors should compare the proposed method and existing ones for fair comparisons. The AC understands that the authors follow the same experimental settings as some previous work addressing this problem, but it's time that this is changed. Indeed, a recent paper by Lee et al. 2018 considers such a setting for detecting more general types of abnormal samples including OOD.
Overall, the proposed idea is simple and easy to use. However, the AC decided that the authors need to do more significant work to publish this research.
| train | [
"S1eoz66I3m",
"SJeA0Zo2im",
"ryx10nht07",
"H1x6jshKCX",
"Hygkgj8KnX",
"HkgKWJiwCX",
"HkefCUYU0X",
"rkl0GLLxCQ",
"Hkxf3d8xRQ",
"HkxY0dIxC7",
"HJe0p8Ue0Q",
"S1gILLLlRQ",
"H1eObN8phQ",
"BJg6GbL5n7",
"S1lpfbL03m",
"SyeiJ-U0h7",
"B1xRapg92Q",
"SyeF2dmRi7"
] | [
"public",
"public",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author"
] | [
"Dear Authors,\n\nThank you for your reply. And I am sorry for lack of information message.\n\nAbout the issue above, I did the experiment in different architecture of network containing simply 7 layers of CNN and ended with global average pooling. This network produces 85% accuracy on Cifar-10 test dataset, where... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1
] | [
"SyeF2dmRi7",
"iclr_2019_rkgpCoRctm",
"HkgKWJiwCX",
"HkefCUYU0X",
"iclr_2019_rkgpCoRctm",
"HkefCUYU0X",
"HkxY0dIxC7",
"iclr_2019_rkgpCoRctm",
"H1eObN8phQ",
"H1eObN8phQ",
"BJg6GbL5n7",
"Hygkgj8KnX",
"iclr_2019_rkgpCoRctm",
"iclr_2019_rkgpCoRctm",
"B1xRapg92Q",
"S1eoz66I3m",
"iclr_2019... |
iclr_2019_rkgqCiRqKQ | Inferring Reward Functions from Demonstrators with Unknown Biases | Our goal is to infer reward functions from demonstrations. In order to infer the correct reward function, we must account for the systematic ways in which the demonstrator is suboptimal. Prior work in inverse reinforcement learning can account for specific, known biases, but cannot handle demonstrators with unknown biases. In this work, we explore the idea of learning the demonstrator's planning algorithm (including their unknown biases), along with their reward function. What makes this challenging is that any demonstration could be explained either by positing a term in the reward function, or by positing a particular systematic bias. We explore what assumptions are sufficient for avoiding this impossibility result: either access to tasks with known rewards which enable estimating the planner separately, or that the demonstrator is sufficiently close to optimal that this can serve as a regularizer. In our exploration with synthetic models of human biases, we find that it is possible to adapt to different biases and perform better than assuming a fixed model of the demonstrator, such as Boltzmann rationality. | rejected-papers | The authors study an inverse reinforcement learning problem where the goal is to infer an underlying reward function from demonstration with bias. To achieve this, the authors learn the planners and the reward functions from demonstrations. As this is in general impossible, the authors consider two special cases in which either the reward function is observed on a subset of tasks or in which the observations are assumed to be close to optimal. They propose algorithms for both cases and evaluate these in basic experiments. The problem considered is important and challenging. One issue is that in order to make progress the authors need to make strong and restrictive assumptions (e.g., assumption 3, the well-suited inductive bias). 
It is not clear if the assumptions made are reasonable. Experimentally, it would be important to see how results change if the model for the planner changes and to evaluate what the inferred biases would be. Overall, there is consensus among the reviewers that the paper is interesting but not ready for publication.
| train | [
"rkl48xtZ6m",
"ByeDZjQ62m",
"BJxgiuKq2X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Not all examples in the introduction are necessarily biases but can be modeled with reward functions, where reward is given to specific states other than finishing work by the deadline. It would be helpful for the reader to get examples that correspond to the investigated biases. \n\nIt would be good if the autho... | [
5,
5,
5
] | [
4,
3,
4
] | [
"iclr_2019_rkgqCiRqKQ",
"iclr_2019_rkgqCiRqKQ",
"iclr_2019_rkgqCiRqKQ"
] |
iclr_2019_rkgsvoA9K7 | Dirichlet Variational Autoencoder | This paper proposes Dirichlet Variational Autoencoder (DirVAE) using a Dirichlet prior for a continuous latent variable that exhibits the characteristic of the categorical probabilities. To infer the parameters of DirVAE, we utilize the stochastic gradient method by approximating the Gamma distribution, which is a component of the Dirichlet distribution, with the inverse Gamma CDF approximation. Additionally, we reshape the component collapsing issue by investigating two problem sources, which are decoder weight collapsing and latent value collapsing, and we show that DirVAE has no component collapsing; while Gaussian VAE exhibits the decoder weight collapsing and Stick-Breaking VAE shows the latent value collapsing. The experimental results show that 1) DirVAE models the latent representation result with the best log-likelihood compared to the baselines; and 2) DirVAE produces more interpretable latent values with no collapsing issues which the baseline models suffer from. Also, we show that the learned latent representation from the DirVAE achieves the best classification accuracy in the semi-supervised and the supervised classification tasks on MNIST, OMNIGLOT, and SVHN compared to the baseline VAEs. Finally, we demonstrated that the DirVAE augmented topic models show better performances in most cases. | rejected-papers | This paper applies Dirichlet distribution to the latent variables of a VAE in order to address the component collapsing issues for categorical probabilities. The method is clearly presented, and extensive experiments are carried out to prove the advantage against VAEs with other prior distributions.
The main concern with the paper is its limited novelty. The main methodological contribution is to combine the decomposition of a Dirichlet distribution into Gamma distributions with the approximation of the Gamma component via the inverse Gamma CDF, but both components are common practice.
R3 also points out that the paper is distracted by the two different messages the authors try to convey; the presentation and experiments are not designed to deliver a cohesive message. This concern is not resolved in the authors' feedback.
Based on the current reviews, this paper does not meet the standard for ICLR publication. Despite the limited novelty of the proposed model, if the paper could be revised to show that a simple modification suffices to solve a problem with general applications, it would make a good publication at a future venue.
"BJxNbEr6CX",
"r1x7APd30m",
"SJeB-IILRQ",
"S1lfpr8L0m",
"ByluKKS3aQ",
"H1xeSd-7R7",
"BygtBi5MAQ",
"r1lXXtoI6m",
"ryx1y01Lpm",
"Hyg1EakUTX",
"r1ergtDq37",
"Hke6jd7w27",
"ByxO1XWLsm"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer3:\n\nWe respond to your point in the below.\n\n\n1: \nWe checked the experiment with the 'normalizing flows' approach, but we were not able to find an error. It should be noted that the experiment code is our creation, but from the authors of the normalizing flows. With our creation, we also tested O... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"r1x7APd30m",
"ByluKKS3aQ",
"H1xeSd-7R7",
"r1ergtDq37",
"r1ergtDq37",
"r1lXXtoI6m",
"ryx1y01Lpm",
"Hke6jd7w27",
"ByxO1XWLsm",
"r1ergtDq37",
"iclr_2019_rkgsvoA9K7",
"iclr_2019_rkgsvoA9K7",
"iclr_2019_rkgsvoA9K7"
] |
iclr_2019_rkgv9oRqtQ | Compound Density Networks | Despite the huge success of deep neural networks (NNs), finding good mechanisms for quantifying their prediction uncertainty is still an open problem. It was recently shown, that using an ensemble of NNs trained with a proper scoring rule leads to results competitive to those of Bayesian NNs. This ensemble method can be understood as finite mixture model with uniform mixing weights. We build on this mixture model approach and increase its flexibility by replacing the fixed mixing weights by an adaptive, input-dependent distribution (specifying the probability of each component) represented by an NN, and by considering uncountably many mixture components. The resulting model can be seen as the continuous counterpart to mixture density networks and is therefore referred to as compound density network. We empirically show that the proposed model results in better uncertainty estimates and is more robust to adversarial examples than previous approaches. | rejected-papers | Reviewers are in a consensus and weakly recommended to reject after engaging with the authors, with the reviewers updating their scores on Dec 11 after engagement. The authors answered most of the reviewers' concerns, however from further discussions with the reviewers there are still some points which lead them to rank the paper lower than others. I thus lean to reject. Please take reviewers' comments into consideration to improve submission should you choose to resubmit. | train | [
"SJekKHZ93Q",
"Hyx6tz_G1V",
"B1gsAe0C07",
"HylOqXp0AX",
"Byl1eZo9CX",
"Bklyu8HcRQ",
"rkxWOkMqR7",
"ryg0dWFrC7",
"rJxslWKrCX",
"ByxbnbuB0Q",
"BylqUWurRQ",
"H1lZP6Jchm",
"r1xXo7UF3m"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this work the authors propose an extension of mixture density networks to the continuous domain, named compound density networks. Specifically the paper builds on top of the idea of the ensemble neural networks (NNs) and introduces a stochastic neural network for handling the mixing components. The mixing distr... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_rkgv9oRqtQ",
"Byl1eZo9CX",
"Bklyu8HcRQ",
"rkxWOkMqR7",
"rJxslWKrCX",
"ryg0dWFrC7",
"ByxbnbuB0Q",
"r1xXo7UF3m",
"H1lZP6Jchm",
"SJekKHZ93Q",
"iclr_2019_rkgv9oRqtQ",
"iclr_2019_rkgv9oRqtQ",
"iclr_2019_rkgv9oRqtQ"
] |
iclr_2019_rkgwuiA9F7 | Cramer-Wold AutoEncoder | Assessing distance between the true and the sample distribution is a key component of many state of the art generative models, such as Wasserstein Autoencoder (WAE). Inspired by prior work on Sliced-Wasserstein Autoencoders (SWAE) and
kernel smoothing we construct a new generative model – Cramer-Wold AutoEncoder (CWAE). CWAE cost function, based on introduced Cramer-Wold distance between samples, has a simple closed-form in the case of normal prior. As a consequence, while simplifying the optimization procedure (no need of sampling necessary to evaluate the distance function in the training loop), CWAE performance matches quantitatively and qualitatively that of WAE-MMD (WAE using maximum mean discrepancy based distance function) and often improves upon SWAE. | rejected-papers | The reviewers in general like the idea of using the Cramer-Wold kernel, noting that its heavy tails and closed form solution are appealing properties that lead to increased stability and improved training. The main concern was novelty, as this paper can be seen as simply changing the kernel in WAE-MMD. One suggestion is to more heavily highlight the CW-distance, and in particular to find another useful application for it outside of WAE-MMD.
The paper emphasizes frequently that the closed-form loss function is a critical feature of this approach; however, I don’t see any experiments that optimize WAE-MMD under the CW-distance while sampling from the Gaussian. This is important to measure the degree to which any improvement is attributable to a closed-form solution, or to the distance measure itself.
| train | [
"rkghYv1pkV",
"HJl1QUco0X",
"B1lJvAAcnX",
"SklRSG4F0Q",
"r1x-_ji_07",
"HJxRO37ka7",
"SygaBRNB07",
"r1e16X4HAm",
"HkggAGNHCm",
"Sker8vbtp7"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thank you again for your comments and suggestions. Have our responses and the changes we made to the manuscript addressed all of your concerns?",
"After reading the revision: I have raised my score by 1 point and recommend acceptance.\n",
"This paper proposes a WAE variant based on a new statistical distance b... | [
-1,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
5
] | [
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
4
] | [
"Sker8vbtp7",
"SklRSG4F0Q",
"iclr_2019_rkgwuiA9F7",
"r1x-_ji_07",
"r1e16X4HAm",
"iclr_2019_rkgwuiA9F7",
"Sker8vbtp7",
"B1lJvAAcnX",
"HJxRO37ka7",
"iclr_2019_rkgwuiA9F7"
] |
iclr_2019_rkl3-hA5Y7 | Towards Decomposed Linguistic Representation with Holographic Reduced Representation | The vast majority of neural models in Natural Language Processing adopt a form of structureless distributed representations. While these models are powerful at making predictions, the representational form is rather crude and does not provide insights into linguistic structures. In this paper we introduce novel language models with representations informed by the framework of Holographic Reduced Representation (HRR). This allows us to inject structures directly into our word-level and chunk-level representations. Our analyses show that by using HRR as a structured compositional representation, our models are able to discover crude linguistic roles, which roughly resembles a classic division between syntax and semantics. | rejected-papers | This paper proposes the use of holographic reduced representations in language modeling, which allows for a cleaner decomposition of various linguistic traits in the representation. Results show improvements over baseline language models, and analysis shows that the representations are indeed decomposing as expected.
The main reviewer concern was the weakness of the baseline, although the authors stress that they were using the default baseline from TensorFlow, which seems reasonable to me. Another concern is that there is other work on using HRR to disentangle syntax and semantics in representations for language (e.g. "Distributed Tree Kernels", ICML 2012, among others) that has not been considered.
Based on this, the paper seems like a very borderline case. Given that no reviewer is pushing strongly for the paper, I'm leaning towards not recommending acceptance, but I could very easily see the paper being accepted as well. | train | [
"ByxIPyhlgV",
"SkgxB0oglN",
"Bygsq6lvyV",
"S1g6sOHqAm",
"rJxfxxdCTX",
"BJezFXrWTQ",
"H1xMvQSZ6m",
"H1e8MXHZaX",
"S1l5JmSWa7",
"H1g_oGB-pX",
"rkxxlMH-aX",
"rJlUq1BbT7",
"HygFGGDC37",
"S1xJZx1i2X",
"H1lF68l527"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you again for the very careful and detailed review. We answer your questions below:\n\n1. We agree that the distributional conditions are crucial to the success of decoding procedure. One way of evaluating the sensitivity of model performance to the degree of satisfying these conditions is that for our model... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"rJxfxxdCTX",
"Bygsq6lvyV",
"iclr_2019_rkl3-hA5Y7",
"iclr_2019_rkl3-hA5Y7",
"H1lF68l527",
"HygFGGDC37",
"HygFGGDC37",
"H1lF68l527",
"H1lF68l527",
"H1lF68l527",
"S1xJZx1i2X",
"iclr_2019_rkl3-hA5Y7",
"iclr_2019_rkl3-hA5Y7",
"iclr_2019_rkl3-hA5Y7",
"iclr_2019_rkl3-hA5Y7"
] |
iclr_2019_rkl42iA5t7 | NETWORK COMPRESSION USING CORRELATION ANALYSIS OF LAYER RESPONSES | Principal Filter Analysis (PFA) is an easy to implement, yet effective method for neural network compression. PFA exploits the intrinsic correlation between filter responses within network layers to recommend a smaller network footprint. We propose two compression algorithms: the first allows a user to specify the proportion of the original spectral energy that should be preserved in each layer after compression, while the second is a heuristic that leads to a parameter-free approach that automatically selects the compression used at each layer. Both algorithms are evaluated against several architectures and datasets, and we show considerable compression rates without compromising accuracy, e.g., for VGG-16 on CIFAR-10, CIFAR-100 and ImageNet, PFA achieves a compression rate of 8x, 3x, and 1.4x with an accuracy gain of 0.4%, 1.4% points, and 2.4% respectively. In our tests we also demonstrate that networks compressed with PFA achieve an accuracy that is very close to the empirical upper bound for a given compression ratio. Finally, we show how PFA is an effective tool for simultaneous compression and domain adaptation. | rejected-papers | The authors propose a technique for compressing neural networks by examining the correlations between filter responses, by removing filters which are highly correlated. This differentiates the authors’ work from many other works which compress the weights independent of the task/domain.
Strengths:
Clearly written paper
PFA-KL does not require additional hyperparameter tuning (apart from those implicit in choosing \psi)
Experiments demonstrate that the number of filters determined by the algorithm scale with complexity of the task
Weaknesses:
Lack of results on large-scale tasks such as ImageNet (results subsequently added by the authors during the rebuttal period)
Compression after the fact may not be as good as training with a modified loss function that does compression jointly
Insufficient comparisons on ResNet architectures, which makes comparison against previous works harder
Overall, the reviewers were in agreement that this work (particularly the revised version) was close to the acceptance threshold. In the AC's view, the authors addressed many of the concerns raised by the reviewers in the revisions. However, after much deliberation, the AC decided that weaknesses 2 and 3 above were significant, and that these should be addressed in a subsequent submission. | train | [
"BJlCzy7q37",
"H1xvnmK414",
"ryl3i9nH37",
"H1xCdNmQJE",
"SylHM_0xJV",
"rJxU4AU76X",
"HJxNw2I7Tm",
"Skeeao8ma7",
"BkgKD7doaX",
"HJlRitO53Q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes to prune convolutional networks by analyzing the observed correlation between the filters of a same layer as expressed by the eigenvalue spectrum of their covariance matrix. The authors propose two strategies to decide of a compression level, one based on an eigenvalue threshold, the other one b... | [
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2019_rkl42iA5t7",
"SylHM_0xJV",
"iclr_2019_rkl42iA5t7",
"SylHM_0xJV",
"rJxU4AU76X",
"ryl3i9nH37",
"BJlCzy7q37",
"HJlRitO53Q",
"iclr_2019_rkl42iA5t7",
"iclr_2019_rkl42iA5t7"
] |
iclr_2019_rkl4M3R5K7 | Optimal Attacks against Multiple Classifiers | We study the problem of designing provably optimal adversarial noise algorithms that induce misclassification in settings where a learner aggregates decisions from multiple classifiers. Given the demonstrated vulnerability of state-of-the-art models to adversarial examples, recent efforts within the field of robust machine learning have focused on the use of ensemble classifiers as a way of boosting the robustness of individual models. In this paper, we design provably optimal attacks against a set of classifiers. We demonstrate how this problem can be framed as finding strategies at equilibrium in a two player, zero sum game between a learner and an adversary and consequently illustrate the need for randomization in adversarial attacks. The main technical challenge we consider is the design of best response oracles that can be implemented in a Multiplicative Weight Updates framework to find equilibrium strategies in the zero-sum game. We develop a series of scalable noise generation algorithms for deep neural networks, and show that it outperforms state-of-the-art attacks on various image classification tasks. Although there are generally no guarantees for deep learning, we show this is a well-principled approach in that it is provably optimal for linear classifiers. The main insight is a geometric characterization of the decision space that reduces the problem of designing best response oracles to minimizing a quadratic function over a set of convex polytopes. | rejected-papers | Four reviewers have evaluated this paper. The reviewers have raised concerns about the specific formulation used for adversarial example generation which requires further clarity in motivation and interpretation. The reviewers have also made the point that the experimental evaluation is against previous work which tried to solve a different problem (black box based attack) and hence the conclusions are unconvincing. 
| test | [
"rJgF5r4ggN",
"HJek2uD1gE",
"rkgjJoXyx4",
"H1x1hWe53m",
"Syx9kN8dR7",
"ByxUiSHuRX",
"HklMmBruCQ",
"rkeZ49B5hQ",
"SyglfqEchm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the quick response.\n\n\"The weights are the probabilities that the learner assigns to the classifier, as stated in the first page of the introduction where we write\"\n\nLet me try to be clearer. Previous work asked : \"suppose I want an adversarial perturbation that fools each of k classifiers. How do... | [
-1,
-1,
5,
6,
-1,
-1,
-1,
4,
6
] | [
-1,
-1,
4,
3,
-1,
-1,
-1,
4,
4
] | [
"HJek2uD1gE",
"rkgjJoXyx4",
"iclr_2019_rkl4M3R5K7",
"iclr_2019_rkl4M3R5K7",
"SyglfqEchm",
"H1x1hWe53m",
"rkeZ49B5hQ",
"iclr_2019_rkl4M3R5K7",
"iclr_2019_rkl4M3R5K7"
] |
iclr_2019_rklEUjR5tm | SHE2: Stochastic Hamiltonian Exploration and Exploitation for Derivative-Free Optimization | Derivative-free optimization (DFO) using trust region methods is frequently used for machine learning applications, such as (hyper-)parameter optimization without the derivatives of objective functions known. Inspired by the recent work in continuous-time minimizers, our work models the common trust region methods with the exploration-exploitation using a dynamical system coupling a pair of dynamical processes. While the first exploration process searches the minimum of the blackbox function through minimizing a time-evolving surrogation function, another exploitation process updates the surrogation function time-to-time using the points traversed by the exploration process. The efficiency of derivative-free optimization thus depends on ways the two processes couple. In this paper, we propose a novel dynamical system, namely \ThePrev---\underline{S}tochastic \underline{H}amiltonian \underline{E}xploration and \underline{E}xploitation, that surrogates the subregions of blackbox function using a time-evolving quadratic function, then explores and tracks the minimum of the quadratic functions using a fast-converging Hamiltonian system. The \ThePrev\ algorithm is later provided as a discrete-time numerical approximation to the system. To further accelerate optimization, we present \TheName\ that parallelizes multiple \ThePrev\ threads for concurrent exploration and exploitation. Experiment results based on a wide range of machine learning applications show that \TheName\ outperforms a broader range of derivative-free optimization algorithms with faster convergence speed under the same settings. | rejected-papers | In general the reviewers found the work to be interesting and the results to be promising. However, all the reviewers shared significant concerns about the clarity of the paper and the correctness of technical claims made.
This paper would benefit significantly from rewriting and restructuring to improve clarity, better motivate the approach, and provide a more careful exposition of related work and technical claims. | train | [
"SkeDMj0n2X",
"Hkg1UWG9h7",
"B1x6Bf-XnX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Derivative-free optimization is not a novel domain and your work could benefit from some accepted benchmarking practices. For instance, you can consider the Black-Box Optimization Benchmarking (BBOB) Workshop and its COCO platform which was used to test many optimization algorithms including the ones mentioned in ... | [
4,
3,
3
] | [
4,
3,
5
] | [
"iclr_2019_rklEUjR5tm",
"iclr_2019_rklEUjR5tm",
"iclr_2019_rklEUjR5tm"
] |
iclr_2019_rklQas09tm | Learning Corresponded Rationales for Text Matching | The ability to predict matches between two sources of text has a number of applications including natural language inference (NLI) and question answering (QA). While flexible neural models have become effective tools in solving these tasks, they are rarely transparent in terms of the mechanism that mediates the prediction. In this paper, we propose a self-explaining architecture where the model is forced to highlight, in a dependent manner, how spans of one side of the input match corresponding segments of the other side in order to arrive at the overall decision. The text spans are regularized to be coherent and concise, and their correspondence is captured explicitly. The text spans -- rationales -- are learned entirely as latent mechanisms, guided only by the distal supervision from the end-to-end task. We evaluate our model on both NLI and QA using three publicly available datasets. Experimental results demonstrate quantitatively and qualitatively that our method delivers interpretable justification of the prediction without sacrificing state-of-the-art performance. Our code and data split will be publicly available. | rejected-papers | This paper attempts at modeling text matching and also generating rationales. The motivation of the paper is good.
However, there are some shortcomings in the paper: there is very little comparison with prior work, no human evaluation at scale, and it also seems that several prior models that use attention mechanisms would generate similar rationales. No characterization of the last aspect has been made here. Hence, addressing these issues could make the paper better for future venues.
There is relative consensus among the reviewers that the paper could improve if the reviewers' concerns are addressed when it is submitted to future venues. | train | [
"HygaVw0TA7",
"S1xX-vAaRm",
"SkxQPUA6RQ",
"SJeyrnU0h7",
"H1eiY7Ys2X",
"Sye_Ff482m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"A word-by-word soft attention is somewhat different, offering only an approximate version of the rationale we are after. We outline three reasons for this below.\n\nFirst, a soft attention does not provide any certificate of exclusion. By this we mean that any word receiving a small attention weight (as long as it... | [
-1,
-1,
-1,
6,
4,
3
] | [
-1,
-1,
-1,
4,
4,
5
] | [
"Sye_Ff482m",
"H1eiY7Ys2X",
"SJeyrnU0h7",
"iclr_2019_rklQas09tm",
"iclr_2019_rklQas09tm",
"iclr_2019_rklQas09tm"
] |
iclr_2019_rklXaoAcFX | Geomstats: a Python Package for Riemannian Geometry in Machine Learning | We introduce geomstats, a Python package for Riemannian modelization and optimization over manifolds such as hyperspheres, hyperbolic spaces, SPD matrices or Lie groups of transformations. Our contribution is threefold. First, geomstats allows the flexible modeling of many a machine learning problem through an efficient and extensively unit-tested implementations of these manifolds, as well as the set of useful Riemannian metrics, exponential and logarithm maps that we provide. Moreover, the wide choice of loss functions and our implementation of the corresponding gradients allow fast and easy optimization over manifolds. Finally, geomstats is the only package to provide a unified framework for Riemannian geometry, as the operations implemented in geomstats are available with different computing backends (numpy,tensorflow and keras), as well as with a GPU-enabled mode–-thus considerably facilitating the application of Riemannian geometry in machine learning. In this paper, we present geomstats through a review of the utility and advantages of manifolds in machine learning, using the concrete examples that they span to show the efficiency and practicality of their implementation using our package | rejected-papers | Learning on Riemannian manifolds can be easily done with this Python package. Considering the recent work on these in latent-variable models, the package can be quite a useful approach.
But its novelty is disputed: in particular, Pymanopt is a package that does mostly the same, even though it may be computationally more expensive. The merits of Geomstats vs. Pymanopt are not clarified. Be that as it may, there is interest among the reviewers in the software package.
In the end, it is also not uniformly agreed that a software-describing paper fits ICLR. | train | [
"HJlGVIQ9CX",
"HJxEOxqZpX",
"rkgs_koa2m",
"HyeF-4_9hm",
"SkefgeUtnX",
"ryeC8TwO3Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"We thank the reviewers for their constructive feedback which has shown us the need to clearly highlight the impact and scope of our contribution. We answer their concerns regarding the novelty, practicality and use of the package. In summary, we feel that the reviewers have focused too much on the “Riemannian opti... | [
-1,
4,
4,
3,
8,
-1
] | [
-1,
4,
5,
5,
2,
-1
] | [
"iclr_2019_rklXaoAcFX",
"iclr_2019_rklXaoAcFX",
"iclr_2019_rklXaoAcFX",
"iclr_2019_rklXaoAcFX",
"iclr_2019_rklXaoAcFX",
"iclr_2019_rklXaoAcFX"
] |
iclr_2019_rkle3i09K7 | Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks | Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks poorly generalize from such noisy training datasets. In this paper, we propose a novel inference method, Deep Determinantal Generative Classifier (DDGC), which can obtain a more robust decision boundary under any softmax neural classifier pre-trained on noisy datasets. Our main idea is inducing a generative classifier on top of hidden feature spaces of the discriminative deep model. By estimating the parameters of generative classifier using the minimum covariance determinant estimator, we significantly improve the classification accuracy, with neither re-training of the deep model nor changing its architectures. In particular, we show that DDGC not only generalizes well from noisy labels, but also is robust against adversarial perturbations due to its large margin property. Finally, we propose the ensemble version ofDDGC to improve its performance, by investigating the layer-wise characteristics of generative classifier. Our extensive experimental results demonstrate the superiority of DDGC given different learning models optimized by various training techniques to handle noisy labels or adversarial samples. For instance, on CIFAR-10 dataset containing 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from33.34% to 43.02%. | rejected-papers | While the paper contains interesting ideas, the reviewers agree the experimental study can be improved. | train | [
"H1gwRCWxJ4",
"H1gpKAWxJV",
"Skx-bI5HCX",
"S1eqi1XYRm",
"rkgNYrWt07",
"B1gcyfVOn7",
"rklKNDqrAQ",
"H1xd_45SCm",
"B1elymcSCX",
"HJe1O6BchX",
"SyxhCjde3Q"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear AnonReviewer2,\n\nWe hope that you found our rebuttal/revision for you and other reviewers in common. \n\nIf you have any remaining questions/concerns, please do not hesitate to let us know and we would be happy to answer.\n\nThank you very much,\nAuthors",
"Dear AnonReviewer1,\n\nWe hope that you found our... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
3,
4
] | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
4,
4
] | [
"Skx-bI5HCX",
"B1elymcSCX",
"SyxhCjde3Q",
"rkgNYrWt07",
"H1xd_45SCm",
"iclr_2019_rkle3i09K7",
"iclr_2019_rkle3i09K7",
"B1gcyfVOn7",
"HJe1O6BchX",
"iclr_2019_rkle3i09K7",
"iclr_2019_rkle3i09K7"
] |
iclr_2019_rklhb2R9Y7 | Reinforced Imitation Learning from Observations | Imitation learning is an effective alternative approach to learn a policy when the reward function is sparse. In this paper, we consider a challenging setting where an agent has access to a sparse reward function and state-only expert observations. We propose a method which gradually balances between the imitation learning cost and the reinforcement learning objective. Built upon an existing imitation learning method, our approach works with state-only observations. We show, through navigation scenarios, that (i) an agent is able to efficiently leverage sparse rewards to outperform standard state-only imitation learning, (ii) it can learn a policy even when learner's actions are different from the expert, and (iii) the performance of the agent is not bounded by that of the expert due to the optimized usage of sparse rewards. | rejected-papers | This paper proposes to combine rewards obtained through IRL with rewards coming from the environment, and evaluates the algorithm on grid world environments. The problem setting is important and of interest to the ICLR community. While the revised paper addresses the concerns about the lack of a stochastic environment problem, the reviewers still have major concerns regarding the novelty and significance of the algorithmic contribution, as well as the limited complexity of the experimental domains. As such, the paper does not meet the bar for publication at ICLR. | train | [
"Sygig_e9RQ",
"ByeSUJ03nX",
"SJxH9o-fRm",
"S1gXhcwkCm",
"H1l-XOhNam",
"ryeub04zpQ",
"BkxIVC4fTm",
"S1eBU1kkaX",
"SkeVxQuo27"
] | [
"author",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank for all reviews again. We updated the paper based on them.\n\nThe main update is the description of (and the results on) the partially observable environment. We wanted to design the new environment to be different from the one presented while being able to perform the same set of experiment... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"iclr_2019_rklhb2R9Y7",
"iclr_2019_rklhb2R9Y7",
"S1gXhcwkCm",
"iclr_2019_rklhb2R9Y7",
"S1eBU1kkaX",
"ByeSUJ03nX",
"SkeVxQuo27",
"iclr_2019_rklhb2R9Y7",
"iclr_2019_rklhb2R9Y7"
] |
iclr_2019_rklwwo05Ym | Pushing the bounds of dropout | We show that dropout training is best understood as performing MAP estimation concurrently for a family of conditional models whose objectives are themselves lower bounded by the original dropout objective. This discovery allows us to pick any model from this family after training, which leads to a substantial improvement on regularisation-heavy language modelling. The family includes models that compute a power mean over the sampled dropout masks, and their less stochastic subvariants with tighter and higher lower bounds than the fully stochastic dropout objective. We argue that since the deterministic subvariant's bound is equal to its objective, and the highest amongst these models, the predominant view of it as a good approximation to MC averaging is misleading. Rather, deterministic dropout is the best available approximation to the true objective. | rejected-papers | The paper tried to introduce a new interpretation of dropout and come up with improved algorithms. However, the reviewers were not convinced that the presented arguments were correct/novel, and they found the paper difficult to follow. The authors are encouraged to carefully revise their paper to address these concerns. | train | [
"rkgXd-5ryE",
"Hye3DGjKR7",
"rJgktsbDpm",
"HJlGHsZDTQ",
"ryerzjWvpQ",
"Bkg0Rq9Vam",
"rJgOl3ChnX",
"Sye5tvB53X"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the followup. What we intend to show in the paper is that the actual loss being optimized for models with dropout is an approximation in the same vein to the variational and MAP objectives (for all members of family). If the MAP formulation explains why the deterministic variant is best, then the sam... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"Hye3DGjKR7",
"ryerzjWvpQ",
"Sye5tvB53X",
"rJgOl3ChnX",
"Bkg0Rq9Vam",
"iclr_2019_rklwwo05Ym",
"iclr_2019_rklwwo05Ym",
"iclr_2019_rklwwo05Ym"
] |
iclr_2019_rkx0g3R5tX | Partially Mutual Exclusive Softmax for Positive and Unlabeled data | In recent years, softmax together with its fast approximations has become the de-facto loss function for deep neural networks with multiclass predictions. However, softmax is used in many problems that do not fully fit the multiclass framework and where the softmax assumption of mutually exclusive outcomes can lead to biased results. This is often the case for applications such as language modeling, next event prediction and matrix factorization, where many of the potential outcomes are not mutually exclusive, but are more likely to be independent conditionally on the state. To this end, for the set of problems with positive and unlabeled data, we propose a relaxation of the original softmax formulation, where, given the observed state, each of the outcomes are conditionally independent but share a common set of negatives. Since we operate in a regime where explicit negatives are missing, we create an adversarially-trained model of negatives and derive a new negative sampling and weighting scheme which we denote as Cooperative Importance Sampling (CIS). We show empirically the advantages of our newly introduced negative sampling scheme by plugging it into the Word2Vec algorithm and benchmarking it extensively against other negative sampling schemes on both language modeling and matrix factorization tasks and show large lifts in performance. | rejected-papers | All reviewers agree that the paper is not quite ready for publication.
| test | [
"Byg02IJ_a7",
"rygqyL8mT7",
"r1llurUmpX",
"r1ldt8dJpm",
"H1xEBUXd2m"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The mutually exclusive assumption of traditional softmax can be biased in case negative samples are not explicitly defined. This paper presents Cooperative Importance Sampling towards resolving this problem. The authors experimentally verify the effectiveness of the proposed approach using different tasks includin... | [
5,
-1,
-1,
4,
5
] | [
4,
-1,
-1,
4,
4
] | [
"iclr_2019_rkx0g3R5tX",
"r1ldt8dJpm",
"H1xEBUXd2m",
"iclr_2019_rkx0g3R5tX",
"iclr_2019_rkx0g3R5tX"
] |
iclr_2019_rkx1m2C5YQ | Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces | In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman et al., 1960) have been integrated with deep learning models. Yet, such approaches typically rely on approximate inference techniques such as variational inference, which make learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations and thus avoids hard-to-backpropagate, computationally heavy and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. While our locally linear modelling and factorization assumptions are in general not true for the original low-dimensional state space of the system, the network finds a high-dimensional latent space where these assumptions hold to perform efficient inference. This state representation is learned jointly with the transition and noise models. The resulting network architecture, which we call Recurrent Kalman Network (RKN), can be used for any time-series data, similar to an LSTM (Hochreiter and Schmidhuber, 1997), but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014) while also showing slightly improved prediction performance, and outperforms various recent generative models on an image imputation task. | rejected-papers | A lot of work has appeared recently on recurrent state space models. So although this paper is generally viewed favorably by the reviewers, it is unclear exactly how the paper places itself in that (crowded) space. The decision is therefore rejection, with a strong encouragement to update and resubmit. | train | [
"Syx2wN5c37",
"rkxkuiQNAX",
"S1ln-iQ4Rm",
"Byec4cQ4AX",
"Syg1qfyRn7",
"BklzonzohX"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes, Recurrent Kalman Network, a modified Kaman filter in which the latent dynamics is projected into a higher dimensional space; efficient inference in this high-dimensional latent space is possible due to the space being locally linear. The state representation, transition, and observation models... | [
6,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_rkx1m2C5YQ",
"Syx2wN5c37",
"BklzonzohX",
"Syg1qfyRn7",
"iclr_2019_rkx1m2C5YQ",
"iclr_2019_rkx1m2C5YQ"
] |
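The RKN abstract above rests on one concrete trick: with a factorized (diagonal-covariance) latent state, the Kalman observation update reduces to elementwise scalar operations, so no matrix inversion is needed. A minimal NumPy sketch of that generic diagonal update (all names are illustrative, not code from the paper):

```python
import numpy as np

def factorized_kalman_update(mu, var, obs, obs_var):
    """Diagonal-covariance Kalman observation update: the gain, mean
    and variance updates are all elementwise, avoiding the d x d
    matrix inversion of the general Kalman filter."""
    gain = var / (var + obs_var)        # scalar gain per latent dimension
    mu_post = mu + gain * (obs - mu)    # posterior mean
    var_post = (1.0 - gain) * var       # posterior variance
    return mu_post, var_post

mu, var = np.zeros(4), np.ones(4)
obs, obs_var = np.array([1.0, -1.0, 0.5, 0.0]), 0.5 * np.ones(4)
mu_post, var_post = factorized_kalman_update(mu, var, obs, obs_var)
```

The paper additionally learns the latent representation, transition and noise models end-to-end; the point of the sketch is only that the per-step update is cheap and trivially backpropagatable.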
iclr_2019_rkx8l3Cctm | Safe Policy Learning from Observations | In this paper, we consider the problem of learning a policy by observing numerous non-expert agents. Our goal is to extract a policy that, with high-confidence, acts better than the agents' average performance. Such a setting is important for real-world problems where expert data is scarce but non-expert data can easily be obtained, e.g. by crowdsourcing. Our approach is to pose this problem as safe policy improvement in reinforcement learning. First, we evaluate an average behavior policy and approximate its value function. Then, we develop a stochastic policy improvement algorithm that safely improves the average behavior. The primary advantages of our approach, termed Rerouted Behavior Improvement (RBI), over other safe learning methods are its stability in the presence of value estimation errors and the elimination of a policy search process. We demonstrate these advantages in the Taxi grid-world domain and in four games from the Atari learning environment. | rejected-papers | The paper studies safer policy improvement based on non-expert demonstrations. The paper contains some interesting ideas, and is supported by reasonable empirical evidence. Overall, the work has a good potential. The author response was also helpful. That said, after considering the paper and rebuttal, the reviewers were not convinced the paper is ready for publication, as the significance of this work is limited by a rather strong assumption (see reviews for details). Furthermore, the presentation of the paper also requires some work to improve (see reviews for detailed comments). | val | [
"rJlp0ZcnhX",
"Syx8_Y4t2X",
"HkgmEkXjRm",
"r1gqUGTVaQ",
"BklNNWpNaQ",
"ByxNik6ETQ",
"HJxxDyTEpX",
"BJlViRtY3m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper looks at learning a policy from multiple demonstrators which should also be safely improved by an reinforcement learning signal. They define the policy as a mixture of policies from the single demonstrators. The paper gives a new way to estimate the value function of each policy where the overall policy ... | [
5,
5,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_rkx8l3Cctm",
"iclr_2019_rkx8l3Cctm",
"iclr_2019_rkx8l3Cctm",
"Syx8_Y4t2X",
"BJlViRtY3m",
"rJlp0ZcnhX",
"rJlp0ZcnhX",
"iclr_2019_rkx8l3Cctm"
] |
iclr_2019_rkxJus0cFX | RedSync : Reducing Synchronization Traffic for Distributed Deep Learning | Data parallelism has become a dominant method to scale Deep Neural Network (DNN) training across multiple nodes. Since the synchronization of the local models or gradients can be a bottleneck for large-scale distributed training, compressing communication traffic has gained widespread attention recently. Among several recently proposed compression algorithms, Residual Gradient Compression (RGC) is one of the most successful approaches---it can significantly compress the transmitted message size (to 0.1% of the gradient size) for each node and still preserve accuracy. However, the literature on compressing deep networks focuses almost exclusively on achieving a good compression rate, while the efficiency of RGC in real implementations has been less investigated. In this paper, we develop an RGC method that achieves significant training time improvement in real-world multi-GPU systems. Our proposed RGC system design, called RedSync, introduces a set of optimizations to reduce communication bandwidth while introducing limited overhead. We examine the performance of RedSync on two different multi-GPU platforms, including a supercomputer and a multi-card server. Our test cases include image classification on Cifar10 and ImageNet, and language modeling tasks on Penn Treebank and Wiki2 datasets. For DNNs with a high communication-to-computation ratio, which have long been considered to have poor scalability, RedSync shows significant performance improvement. | rejected-papers | This paper proposed Residual Gradient Compression as a promising approach to reduce the synchronization cost of gradients in distributed settings. It provides a useful approach that works for a number of models. The reviewers have a consensus that the quality is below the acceptance standard due to the practicality of experiments and lack of contribution. | val | [
"HJxplXBIJN",
"rJe6slB81N",
"BylHXcGWCm",
"rkeBP9fZRQ",
"Byx1rqMWCm",
"r1eSJT-a27",
"rklNHAOc2X",
"Bkgkpnntn7",
"rygdFMCGnQ",
"S1e1qb1-hm"
] | [
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thanks for your reimplementation.\nBoth of your findings are right. I will correct them.",
"I have implemented algorithm 2 and 3.\nIn algorithm 2, I think line 6's condition should be nnz < k. Because increasing threshold will only make nnz even bigger.\nAlso in algorithm 3, line 10 and 12, I think ratio is the ... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
-1,
-1
] | [
"rJe6slB81N",
"iclr_2019_rkxJus0cFX",
"r1eSJT-a27",
"Bkgkpnntn7",
"rklNHAOc2X",
"iclr_2019_rkxJus0cFX",
"iclr_2019_rkxJus0cFX",
"iclr_2019_rkxJus0cFX",
"S1e1qb1-hm",
"iclr_2019_rkxJus0cFX"
] |
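The core mechanism behind RGC-style methods discussed in the row above — transmit only the largest gradient entries and carry everything else forward in a residual buffer — can be sketched in a few lines. This is an illustrative simplification, not RedSync's actual implementation (which adds threshold-search and communication optimizations):

```python
import numpy as np

def rgc_step(grad, residual, ratio=0.001):
    """One step of residual gradient compression (illustrative):
    accumulate the gradient into a residual buffer, send only the
    top-k entries by magnitude, and keep the rest as residual."""
    acc = residual + grad
    k = max(1, int(ratio * acc.size))
    idx = np.argpartition(np.abs(acc), -k)[-k:]   # indices of top-k magnitudes
    values = acc[idx]                             # the sparse message to transmit
    new_residual = acc.copy()
    new_residual[idx] = 0.0                       # transmitted entries leave the buffer
    return idx, values, new_residual

grad = np.array([0.1, -2.0, 0.05, 0.3])
idx, values, residual = rgc_step(grad, np.zeros(4), ratio=0.25)
```

Here only the single largest entry (-2.0) would be communicated; the small entries accumulate in `residual` until they grow large enough to be sent in a later step.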
iclr_2019_rkxd2oR9Y7 | The Case for Full-Matrix Adaptive Regularization | Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix. Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive. We show how to modify full-matrix adaptive regularization in order to make it practical and effective. We also provide novel theoretical analysis for adaptive regularization in non-convex optimization settings. The core of our algorithm, termed GGT, consists of efficient inverse computation of square roots of low-rank matrices. Our preliminary experiments underscore improved convergence rate of GGT across a variety of synthetic tasks and standard deep learning benchmarks. | rejected-papers | This paper shows how to implement a low-rank version of the Adagrad preconditioner in a GPU-friendly manner. A theoretical analysis of a "hard-window" version of the proposed algorithm demonstrates that it is not worse than SGD at finding a first-order stationary point in the nonconvex setting. Experiments on CIFAR-10 classification using a ConvNet and Penn Treebank character-level language modeling using an LSTM show that the proposed algorithm improves training loss faster than SGD, Adagrad, and Adam (measuring time in epochs) and has better generalization performance on the language modeling task. However, if wall-clock time is used to measure time, there is no speedup for the ConvNet model, but there is for the recurrent model. The reviewers liked the simplicity of the approach and greatly appreciated the elegant visualization of the eigenspectrum in Figure 4. But, even after discussion, critical concerns remained about the need for more focus on the practical tradeoffs between per-iteration improvement and per-second improvement in the loss and the need for a more careful analysis of the relationship of this method to stochastic L-BFGS. A more minor concern is that the term "full-matrix regularization" seems somewhat deceptive when the actual regularization is low rank. The AC also suggests that, if the authors plan to revise this paper and submit it to another venue, they consider the relationship between GGT and the various stochastic natural gradient optimization algorithms in the literature that differ from GGT primarily in the exponent on the Gram matrix. | train | [
"BJgICDN92m",
"r1e2sLyfRm",
"B1lP_IyMRm",
"S1xoNIkzAQ",
"SJgzxEO5hQ",
"rJggFt49nX"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers adaptive regularization, which has been popular in neural network learning. Rather than adapting diagonal elements of the adaptivity matrix, the paper proposes to consider a low-rank approximation to the Gram/correlation matrix.\n\nWhen you say that full-matrix computation \"requires taking th... | [
5,
-1,
-1,
-1,
6,
5
] | [
3,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_rkxd2oR9Y7",
"BJgICDN92m",
"rJggFt49nX",
"SJgzxEO5hQ",
"iclr_2019_rkxd2oR9Y7",
"iclr_2019_rkxd2oR9Y7"
] |
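The "efficient inverse computation of square roots of low-rank matrices" mentioned in the GGT abstract above amounts to working in the span of a small window of recent gradients rather than forming any d x d matrix. A hedged sketch of that idea follows; it is an approximation written from the abstract alone, and omits details of the real algorithm such as exponential decay of the gradient window:

```python
import numpy as np

def ggt_direction(G, g, eps=1e-4):
    """Apply an eps-regularized inverse square root of G G^T to the
    gradient g without forming the d x d matrix: use the SVD of the
    thin d x r window matrix G of recent gradients (illustrative)."""
    U, s, _ = np.linalg.svd(G, full_matrices=False)  # U: d x r, s: length r
    coeff = U.T @ g                                  # components in the window's span
    in_span = U @ (coeff / (s + eps))                # scale by 1/(sigma_i + eps)
    out_of_span = (g - U @ coeff) / eps              # SGD-like step in the complement
    return in_span + out_of_span

G = np.array([[2.0, 0.0], [0.0, 1.0]])   # two past gradients as columns
g = np.array([1.0, 1.0])
step = ggt_direction(G, g)
```

Because G has only r columns, the SVD costs O(d r^2) rather than the O(d^3) a full-matrix preconditioner would require, which is the practical point of the method.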
iclr_2019_rkxfjjA5Km | Auto-Encoding Knockoff Generator for FDR Controlled Variable Selection | A new statistical procedure (Candès, 2018) has provided a way to identify important factors using any supervised learning method while controlling for FDR. This line of research has shown great potential to expand the horizon of machine learning methods beyond the task of prediction, to serve the broader need of scientific research for interpretable findings. However, the lack of a practical and flexible method to generate knockoffs remains the major obstacle for wide application of the Model-X procedure. This paper fills in the gap by proposing a model-free knockoff generator which approximates the correlation structure between features through latent variable representation. We demonstrate that our proposed method can achieve FDR control and better power than two existing methods in various simulated settings and a real data example for finding mutations associated with drug resistance in HIV-1 patients.
| rejected-papers | The paper presents a novel strategy for statistically motivated feature selection i.e. aimed at controlling the false discovery rate. This is achieved by extending knockoffs to complex predictive models and complex distributions; specifically using a variational auto-encoder to generate conditionally independent data samples with the same joint distribution.
The reviewers and ACs noted weakness in the original submission related to the clarity of the presentation, relationship to already published work, and concerns about the correctness of some main claims (this mostly seems to have been fixed after the rebuttal period). There are additional concerns about a thorough evaluation of the claimed results, as the ground truth is unknown. The authors (and reviewers) also note a similar paper submitted to ICLR with the same goal but implemented using GANs. Nevertheless, there remain significant concerns about the clarity of the presentation. | train | [
"B1efp2tdAQ",
"Hkxqt7n2Cm",
"H1g-wyYhAQ",
"rJxAnSreAm",
"S1lM5g2lC7",
"ryxnMxhg0m",
"Bklxsap1CQ",
"rJlPs5tyTX",
"SyeHB4cKnm",
"r1xr9_8v2X",
"HyxbhZTpom",
"r1gahXwA2X"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Dear Reviewers and Chairs,\n\nThanks for the helpful feedback, we have made the following revision to address the comments and concerns.\nThe highlight of the contribution of our paper is that it is among the first papers to tackle the problem of knockoff generation with deep learning, which is an emerging import... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
6,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
-1
] | [
"iclr_2019_rkxfjjA5Km",
"H1g-wyYhAQ",
"S1lM5g2lC7",
"HyxbhZTpom",
"ryxnMxhg0m",
"SyeHB4cKnm",
"rJlPs5tyTX",
"r1gahXwA2X",
"iclr_2019_rkxfjjA5Km",
"iclr_2019_rkxfjjA5Km",
"iclr_2019_rkxfjjA5Km",
"r1xr9_8v2X"
] |
iclr_2019_rkxhX209FX | An Active Learning Framework for Efficient Robust Policy Search | Robust Policy Search is the problem of learning policies that do not degrade in performance when subject to unseen environment model parameters. It is particularly relevant for transferring policies learned in a simulation environment to the real world. Several existing approaches involve sampling large batches of trajectories which reflect the differences in various possible environments, and then selecting some subset of these to learn robust policies, such as the ones that result in the worst performance. We propose an active learning based framework, EffAcTS, to selectively choose model parameters for this purpose so as to collect only as much data as necessary to select such a subset. We apply this framework to an existing method, namely EPOpt, and experimentally validate the gains in sample efficiency and the performance of our approach on standard continuous control tasks. We also present a Multi-Task Learning perspective to the problem of Robust Policy Search, and draw connections from our proposed framework to existing work on Multi-Task Learning. | rejected-papers | The paper addresses sample-efficient robust policy search borrowing ideas from active learning. The reviews raised important concerns regarding (1) the complexity of the proposed technique, which combines many separate pieces and (2) the significance of experimental results. The empirical setup adopted is not standard in RL, and a clear comparison against EPOpt is lacking. I appreciate the changes made to address the comment, and I encourage the authors to continue improving the paper by simplifying the model and including a few baseline comparisons in the experiments. | test | [
"ryewCa4XR7",
"H1lQr2V7R7",
"HygiHo4mCm",
"Syl3kt4QCm",
"BJgA2D7WTm",
"BJeCUHYThX",
"rJejEde9n7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review. We have made several changes to the paper as we have described in our official comment. Our response is based on this new version.\n\nModel-Ensemble Trust-Region Policy Optimization (ME-TRPO):\nThis paper presents a Model Based RL method to solve any given RL task, with the highlight bei... | [
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"rJejEde9n7",
"BJgA2D7WTm",
"BJeCUHYThX",
"iclr_2019_rkxhX209FX",
"iclr_2019_rkxhX209FX",
"iclr_2019_rkxhX209FX",
"iclr_2019_rkxhX209FX"
] |
iclr_2019_rkxjnjA5KQ | Transfer Learning for Related Reinforcement Learning Tasks via Image-to-Image Translation | Deep Reinforcement Learning has managed to achieve state-of-the-art results in learning control policies directly from raw pixels. However, despite its remarkable success, it fails to generalize, a fundamental component required in a stable Artificial Intelligence system. Using the Atari game Breakout, we demonstrate the difficulty of a trained agent in adjusting to simple modifications in the raw image, ones that a human could adapt to trivially. In transfer learning, the goal is to use the knowledge gained from the source task to make the training of the target task faster and better. We show that using various forms of fine-tuning, a common method for transfer learning, is not effective for adapting to such small visual changes. In fact, it is often easier to re-train the agent from scratch than to fine-tune a trained agent. We suggest that in some cases transfer learning can be improved by adding a dedicated component whose goal is to learn to visually map between the known domain and the new one. Concretely, we use Unaligned Generative Adversarial Networks (GANs) to create a mapping function to translate images in the target task to corresponding images in the source task. These mapping functions allow us to transform between various variations of the Breakout game, as well as between different levels of a Nintendo game, Road Fighter. We show that learning this mapping is substantially more efficient than re-training. A visualization of a trained agent playing Breakout and Road Fighter, with and without the GAN transfer, can be seen in \url{https://streamable.com/msgtm} and \url{https://streamable.com/5e2ka}. | rejected-papers | The paper proposes a transfer learning approach to reinforcement learning, where observations from a target domain are mapped to a source domain in which the algorithm was originally trained.
Using unsupervised GAN models to learn this mapping from unaligned samples, the authors show that such a mapping allows the RL agent to successfully interact with the target domain without further training (apart from training the GAN models). The approach is empirically validated on modified versions of the Atari game breakout, as well as subsequent levels of Road Fighter, showing good performance on the transfer domain with a fraction of the samples that would be required for retraining the RL algorithm from scratch.
The reviewers and AC note the strong motivation for this work and emphasize that they find the idea interesting and novel. Reviewer 3 emphasizes the detailed analysis and results. Reviewer 2 notes the innovative idea to evaluate GANs in this application domain. Reviewer 1 identifies a key contribution in the thorough empirical analysis of the generalization issues that plague current RL algorithms, as well as the comparison between different GAN models and finding their performance to be task-specific.
The reviewers and AC noted several potential weaknesses: The proposed training based on images collected by an untrained agent focuses the data on experience that agents would see very early on in the game, and may lead to generalization issues in more advanced parts of the game. Indeed these generalization issues are one possible explanation for the discrepancies between qualitative and quantitative results noted by reviewer 1. While the quantitative results indicate good performance on the target task, the image-to-image translation makes substantial errors, e.g., hallucinating blocks in breakout and erasing cars in Road Fighter. To the AC, the current paper does not provide enough insight into why the translation approach works even in cases where key elements are added or removed from the scene. The paper would benefit from a revision that thoroughly analyses such cases as well as the reason why the trained RL policy is able to generalize to them.
R1 further notes that the paper does not address the RL generalization issue, but rather presents an empirical study that shows that in specific cases it is easier to translate from a target to a source domain, than to learn a policy for the target domain. The AC shares this concern, especially given the limited error analysis and conceptual insights derived from the empirical study. There are further concerns about the experimental protocol and hyper-parameter selection on the target tasks. Finally reviewer 1 questions the claim of whether data efficiency matters more than training efficiency in the proposed setting.
There is disagreement about this paper. Reviewers 2 and 3 gave high scores and positive reviews, but did not provide sufficient feedback to the concerns raised by reviewer 1, who put forward significant concerns.
The AC is particularly concerned about the experimental protocol and hyper-parameter tuning directly on the test tasks. The authors counter this point by noting that "We agree that selecting configurations based on the test set is far from ideal, but we also note that this is the de-facto standard in video game-playing RL works, so we do not believe our work is any worse than others in the literature in this regard." The AC worries about the lack of motivation to identify a strong empirical setup to arrive at the strongest possible contribution. A key concern here is that the results seem to vary substantially by task, GAN model used, etc. and substantial tuning on the target domain seems to be required. This makes it hard to draw any generalizable conclusions. This concern can be alleviated by including additional analysis, e.g., error analysis of where a proposed approach fails, or additional experiments designed to isolate the factors that contribute to a particular performance level. However, the current paper does not go to this detail of empirical exploration. Given these concerns, I recommend not accepting the paper at the current stage. | train | [
"rkxuiMsdAQ",
"HJx6ByVYhQ",
"ryx5jL0STm",
"Bkg8xwAHT7",
"rJEql0HT7",
"HyerEyRB6m",
"H1xIHS96hQ",
"rkeqDrUTnQ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for taking our comments seriously and bumping your score.\n\nWe updated the paper's related work section with a discussion of Bousmalis et al and similar work, and how it differs from our learning setup and from our proposal (last paragraph of that section).\n\nWe are sorry that you feel that we dismisse... | [
-1,
4,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
3,
3
] | [
"HJx6ByVYhQ",
"iclr_2019_rkxjnjA5KQ",
"HJx6ByVYhQ",
"HJx6ByVYhQ",
"rkeqDrUTnQ",
"H1xIHS96hQ",
"iclr_2019_rkxjnjA5KQ",
"iclr_2019_rkxjnjA5KQ"
] |
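Architecturally, the transfer recipe described in the abstract above is very simple: the trained source-task policy is left frozen, and only the observations are translated from the target domain back into the source domain. A toy sketch of that wiring (all functions are stand-ins, not the paper's code):

```python
def transfer_policy(target_frame, generator, source_policy):
    """Act in the target task by translating its frames into the source
    domain (via an unaligned GAN generator in the paper) and reusing the
    frozen source-task policy unchanged.  Illustrative only."""
    source_frame = generator(target_frame)   # target -> source translation
    return source_policy(source_frame)

# toy stand-ins: the 'generator' inverts intensities, the 'policy'
# picks the index of the brightest pixel
action = transfer_policy([0.2, 0.9],
                         lambda x: [1 - v for v in x],
                         lambda x: x.index(max(x)))
```

No RL training happens in the target domain at all; only the image-to-image generator is trained, which is where the sample-efficiency claim comes from.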
iclr_2019_rkxkHnA5tX | Learning from Noisy Demonstration Sets via Meta-Learned Suitability Assessor | A noisy and diverse demonstration set may hinder the performance of an agent aiming to acquire certain skills via imitation learning. However, state-of-the-art imitation learning algorithms often assume the optimality of the given demonstration set.
In this paper, we address this optimality assumption by learning only from the most suitable demonstrations in a given set. The suitability of a demonstration is estimated by whether imitating it produces desirable outcomes for achieving the goals of the tasks. For more efficient demonstration suitability assessments, the learning agent should be capable of imitating a demonstration as quickly as possible, which shares a similar spirit with fast adaptation in the meta-learning regime. Our framework, thus built on top of Model-Agnostic Meta-Learning, evaluates how desirable the imitated outcomes are after adaptation to each demonstration in the set. The resulting assessments hence enable us to select suitable demonstration subsets for acquiring better imitated skills. The videos related to our experiments are available at: https://sites.google.com/view/deepdj | rejected-papers | The reviewers raised a number of major concerns, including the incremental novelty of the proposed approach (if any), insufficient explanation, and, most importantly, the insufficient and inadequate experimental evaluation presented. The authors did not provide any rebuttal. Hence, I cannot suggest this paper for presentation at ICLR. | train | [
"HJxSenCj6Q",
"SylhzCuiaX",
"SkxapRNs3X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary/Contributions:\nThis paper focuses on an imitation learning setup where there some of the provided demonstrations which are irrelevant to the task being considered. The stated contribution of the paper is a MAML based algorithm to imitation learning which automatically determines if the demonstrations are ... | [
4,
4,
4
] | [
4,
4,
3
] | [
"iclr_2019_rkxkHnA5tX",
"iclr_2019_rkxkHnA5tX",
"iclr_2019_rkxkHnA5tX"
] |
iclr_2019_rkxn7nR5KX | Incremental Few-Shot Learning with Attention Attractor Networks | Machine learning classifiers are often trained to recognize a set of pre-defined classes. However, in many real applications, it is often desirable to have the flexibility of learning additional concepts, without re-training on the full training set. This paper addresses this problem, incremental few-shot learning, where a regular classification network has already been trained to recognize a set of base classes, and several extra novel classes are being considered, each with only a few labeled examples. After learning the novel classes, the model is then evaluated on the overall performance of both base and novel classes. To this end, we propose a meta-learning model, the Attention Attractor Network, which regularizes the learning of novel classes. In each episode, we train a set of new weights to recognize novel classes until they converge, and we show that the technique of recurrent back-propagation can back-propagate through the optimization process and facilitate the learning of the attractor network regularizer. We demonstrate that the learned attractor network can recognize novel classes while remembering old classes without the need to review the original training set, outperforming baselines that do not rely on an iterative optimization process. | rejected-papers | This paper proposes an approach for incremental learning of new classes using meta-learning.
Strengths: The framework is interesting. The reviewers agree that the paper is well-written and clear. The experiments include comparisons to prior work, and the ablation studies are useful for judging the performance of the method.
Weaknesses: The paper does not provide significant insights over Gidaris & Komodakis '18. Reviewer 1 was also concerned that the motivation for RBP is not entirely clear.
Overall, the reviewers found that the strengths did not outweigh the weaknesses. Hence, I recommend reject.
| train | [
"Hyeike3c3X",
"H1eAV3aK1E",
"HJxkPjEY1V",
"B1g66Xr9Cm",
"BylWUetOAQ",
"rkgRfJ576X",
"S1xaiCK76X",
"SJe0IAYmTX",
"B1lJPA1hhQ",
"BJlnllqsn7",
"BJxKuIEahm",
"Sye3gf7ahX",
"B1ghUyjmn7",
"Skgbo1iQ37",
"HylWwmBZh7",
"BklxABhTi7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"official_reviewer",
"public"
] | [
"The paper addresses the incremental few-shot learning problem where a model starts with base network and then introduces the novel classes, building a connection between novel and base classes via an attention module.\n\nStrengths:\n+ clear writing. \n+ the experiments are compared with related work and the ablati... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_rkxn7nR5KX",
"rkgRfJ576X",
"SJe0IAYmTX",
"rkgRfJ576X",
"S1xaiCK76X",
"Hyeike3c3X",
"BJlnllqsn7",
"B1lJPA1hhQ",
"iclr_2019_rkxn7nR5KX",
"iclr_2019_rkxn7nR5KX",
"Sye3gf7ahX",
"iclr_2019_rkxn7nR5KX",
"HylWwmBZh7",
"BklxABhTi7",
"iclr_2019_rkxn7nR5KX",
"iclr_2019_rkxn7nR5KX"
] |
iclr_2019_rkxraoRcF7 | Learning Disentangled Representations with Reference-Based Variational Autoencoders | Learning disentangled representations from visual data, where different high-level generative factors are independently encoded, is of importance for many computer vision tasks. Supervised approaches, however, require a significant annotation effort in order to label the factors of interest in a training set. To alleviate the annotation cost, we introduce a learning setting which we refer to as "reference-based disentangling''. Given a pool of unlabelled images, the goal is to learn a representation where a set of target factors are disentangled from others. The only supervision comes from an auxiliary "reference set'' that contains images where the factors of interest are constant. In order to address this problem, we propose reference-based variational autoencoders, a novel deep generative model designed to exploit the weak supervisory signal provided by the reference set. During training, we use the variational inference framework where adversarial learning is used to minimize the objective function. By addressing tasks such as feature learning, conditional image generation or attribute transfer, we validate the ability of the proposed model to learn disentangled representations from minimal supervision.
| rejected-papers | The paper proposes a method for learning disentangled representations in a relatively specific setting, defined as follows: given two datasets, one unlabeled and another that has a particular factor of variation fixed, the method will disentangle that factor of variation from the others. The reviewers found the method promising, with interesting results (qualitative and quantitative).
The weaknesses of the method as discussed in the reviews and after:
- the quantitative results with weak supervision are not a big improvement over beta-vae-like methods or mathieu et al.
- a red flag of sorts to me is that it is not very clear where the gains are coming from: the authors claim to have done a fair comparison with the various baselines, but they introduce an entirely new encoder/decoder architecture that was likely (involuntarily, but still) tuned more to their method than others.
- the setup as presented is somewhat artificial and less general than it could be (however, this was not a major factor in my decision). It is easy to get confused by the kind of disentangled representations that this work is aiming to get.
I think this has the potential to be a solid paper, but at this stage it's missing a number of ablation studies to truly understand what sets it apart from the previous work. At the very least, there is a number of architectural and training choices in Appendix D -- like the 0.25 dropout -- that require more explanation / empirical understanding and how they generalize to other datasets.
Given all of this, at this point it is hard for me to recommend acceptance of this work. I encourage the authors to take all this feedback into account, extend their work to more domains (the artistic-style disentangling that they mention seems like a good idea) and provide more empirical evidence about their architectural choices and their effect on the results. | train | [
"S1eRWHHgTQ",
"BkxDCPGjR7",
"Skx6zH6UaQ",
"r1l5xmMv07",
"r1gtKLnOhX",
"ByxvxBTI6X",
"Hkg-PraIaX",
"rklyCH6I67",
"Hkxy6H6Ia7",
"rJgnOS6IT7",
"rJeRCc9t3m"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors address the problem of representation learning in which data-generative factors of variation are separated, or disentangled, from each other. Pointing out that unsupervised disentangling is hard despite recent breakthroughs, and that supervised disentangling needs a large number of carefully labeled da... | [
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_rkxraoRcF7",
"r1gtKLnOhX",
"S1eRWHHgTQ",
"Skx6zH6UaQ",
"iclr_2019_rkxraoRcF7",
"iclr_2019_rkxraoRcF7",
"rJeRCc9t3m",
"r1gtKLnOhX",
"r1gtKLnOhX",
"rJeRCc9t3m",
"iclr_2019_rkxraoRcF7"
] |
iclr_2019_rkxt8oC9FQ | Perfect Match: A Simple Method for Learning Representations For Counterfactual Inference With Neural Networks | Learning representations for counterfactual inference from observational data is of high practical relevance for many domains, such as healthcare, public policy and economics. Counterfactual inference enables one to answer "What if...?" questions, such as "What would be the outcome if we gave this patient treatment t1?". However, current methods for training neural networks for counterfactual inference on observational data are either overly complex, limited to settings with only two available treatment options, or both. Here, we present Perfect Match (PM), a method for training neural networks for counterfactual inference that is easy to implement, compatible with any architecture, does not add computational complexity or hyperparameters, and extends to any number of treatments. PM is based on the idea of augmenting samples within a minibatch with their propensity-matched nearest neighbours. Our experiments demonstrate that PM outperforms a number of more complex state-of-the-art methods in inferring counterfactual outcomes across several real-world and semi-synthetic datasets. | rejected-papers | The reviewers found the paper to be well written, the work novel and they appreciated the breadth of the empirical evaluation. However, they did not seem entirely convinced that the improvements over the baseline are statistically significant. Reviewer 1 has lingering concerns about the experimental conditions and whether propensity-score matching within a minibatch would provide a substantial improvement over propensity-score matching across the dataset. Overall the reviewers found this to be a good paper and noted that the discussion was illuminating and demonstrated the merits of this work and interest to the community. However, no reviewers were prepared to champion the paper and thus it falls just below borderline for acceptance. 
| train | [
"S1xgxlyoCX",
"r1el-iAcCQ",
"HkgGOBR9CX",
"SyezEpscC7",
"ByloXp8_CQ",
"B1e9IhZ4RX",
"B1gT2MScTm",
"Skxizc4q6m",
"SklykqV9a7",
"B1euHY4q67",
"HygsMMPq2X",
"B1e3G21qhm",
"rkxxZ8nwnm"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"R1: \"In counterfactual inference, you need to use error on the (propensity-score matched etc.) validation set (which you call 'factual error') as an early stopping criterion, however since matching procedures are not perfect, this proxy objective does not accurately estimate the true counterfactual error objectiv... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"r1el-iAcCQ",
"HkgGOBR9CX",
"SyezEpscC7",
"ByloXp8_CQ",
"B1e9IhZ4RX",
"Skxizc4q6m",
"iclr_2019_rkxt8oC9FQ",
"HygsMMPq2X",
"B1e3G21qhm",
"rkxxZ8nwnm",
"iclr_2019_rkxt8oC9FQ",
"iclr_2019_rkxt8oC9FQ",
"iclr_2019_rkxt8oC9FQ"
] |
iclr_2019_rkxtl3C5YX | Understanding & Generalizing AlphaGo Zero | AlphaGo Zero (AGZ) introduced a new {\em tabula rasa} reinforcement learning algorithm that has achieved superhuman performance in the games of Go, Chess, and Shogi with no prior knowledge other than the rules of the game. This success naturally begs the question whether it is possible to develop similar high-performance reinforcement learning algorithms for generic sequential decision-making problems (beyond two-player games), using only the constraints of the environment as the ``rules.'' To address this challenge, we start by taking steps towards developing a formal understanding of AGZ. AGZ includes two key innovations: (1) it learns a policy (represented as a neural network) using {\em supervised learning} with cross-entropy loss from samples generated via Monte-Carlo Tree Search (MCTS); (2) it uses {\em self-play} to learn without training data.
We argue that the self-play in AGZ corresponds to learning a Nash equilibrium for the two-player game; and the supervised learning with MCTS is attempting to learn the policy corresponding to the Nash equilibrium, by establishing a novel bound on the difference between the expected return achieved by two policies in terms of the expected KL divergence (cross-entropy) of their induced distributions. To extend AGZ to generic sequential decision-making problems, we introduce a {\em robust MDP} framework, in which the agent and nature effectively play a zero-sum game: the agent aims to take actions to maximize reward while nature seeks state transitions, subject to the constraints of that environment, that minimize the agent's reward. For a challenging network scheduling domain, we find that AGZ within the robust MDP framework provides near-optimal performance, matching one of the best known scheduling policies that has taken the networking community three decades of intensive research to develop.
 | rejected-papers | This work examines the AlphaGo Zero algorithm, a self-play reinforcement learning algorithm that has been shown to learn policies with superhuman performance on 2-player perfect-information games. The main results of the paper are that the policy learned by AGZ corresponds to a Nash equilibrium and that the cross-entropy minimization in the supervised-learning-inspired part of the algorithm converges to this Nash equilibrium; the paper also proves a bound on the difference in expected returns of two policies and introduces a "robust MDP" view in which the agent and nature play a 2-player zero-sum game.
R3 found the paper well-structured and the results presented therein interesting. R2 complained of overly heavy notation and questioned the applicability of the results, as well as the utility of the robust MDP perspective (though did raise their score following revisions).
The most detailed critique came from R1, who suggested that the bound on the convergence of returns of two policies as the KL divergence between their induced distributions decreases is unsurprising, that using it to argue for AGZ's convergence to the optimal policy ignores the effects introduced by the suboptimality of the MCTS policy (the really interesting part being understanding how AGZ deals with, and whether or not it closes, this gap), and that the "robust MDP" view is less novel than the authors claim based on the known relationships between 2-player zero-sum games and minimax robust control.
I find R1's complaints, in particular with respect to "robust MDPs" (a criticism which went completely unaddressed by the authors in their rebuttal), convincing enough that I would narrowly recommend rejection at this time, while also agreeing with R3 that this is an interesting subject and that the results within could serve as the bedrock for a stronger future paper. | val | [
"SJlYT1Uw27",
"Bklrkne90m",
"HJxewogcRX",
"S1xqVig50Q",
"HyljDMze6m",
"Hylj7vZsh7"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a formal framework to claim that Alpha Zero might converges to a Nash equilibrium. The main theoretical result is that the reward difference between a pair of policy and the Nash policy is bounded by the expected KL of these policy on a state distribution sampled from the Nash policies. \n\nThe ... | [
5,
-1,
-1,
-1,
5,
7
] | [
3,
-1,
-1,
-1,
5,
4
] | [
"iclr_2019_rkxtl3C5YX",
"SJlYT1Uw27",
"Hylj7vZsh7",
"HyljDMze6m",
"iclr_2019_rkxtl3C5YX",
"iclr_2019_rkxtl3C5YX"
] |
iclr_2019_rkxusjRctQ | Learning models for visual 3D localization with implicit mapping | We consider learning based methods for visual localization that do not require the construction of explicit maps in the form of point clouds or voxels. The goal is to learn an implicit representation of the environment at a higher, more abstract level, for instance that of objects. We propose to use a generative approach based on Generative Query Networks (GQNs, Eslami et al. 2018), asking the following questions: 1) Can GQN capture more complex scenes than those it was originally demonstrated on? 2) Can GQN be used for localization in those scenes? To study this approach we consider procedurally generated Minecraft worlds, for which we can generate images of complex 3D scenes along with camera pose coordinates. We first show that GQNs, enhanced with a novel attention mechanism, can capture the structure of 3D scenes in Minecraft, as evidenced by their samples. We then apply the models to the localization problem, comparing the results to a discriminative baseline, and comparing the ways each approach captures the task uncertainty. | rejected-papers | The paper proposes a method that learns mapping implicitly, by using the generative query network of Eslami et al. with an attention mechanism to learn to predict egomotion. The empirical finding is that training for egomotion estimation alongside the generative task of view prediction helps over a discriminative baseline that does not consider view prediction. The model is tested in Minecraft environments.
A comparison to some baseline SLAM-like method, e.g., a method based on bundle adjustment, would be important to include despite the authors' belief that learning-based methods will eventually win over geometric methods. For example, environments with changes could be considered, which would cause the geometric method to fail but the proposed learning-based method to succeed.
Moreover, there are currently learning-based methods for the re-localization problem that the paper should compare against (instead of just citing), such as "MapNet: An Allocentric Spatial Memory for Mapping Environments" of Henriques et al. and "Active Neural Localization" of Chaplot et al. In particular, MapNet has a generative interpretation by using cross-convolutions as part of its architecture, which generalize very well, and which consider the geometric formation process. The paper makes a big distinction between generative and discriminative; however, the architectural details behind the egomotion estimation network are potentially as important as, or more important than, the loss used. This means different discriminative networks may perform very differently depending on their architecture. Thus, it would be important to present quantitative results against such methods that use cross-convolutions for egomotion estimation/re-localization.
"B1xGm5kRkE",
"Sygu18wv3X",
"SyluN9utCm",
"BJeJKJUERm",
"rJeroar407",
"BkgP-or4Am",
"HylrOFH4RQ",
"BJlDJti1C7",
"rkxoWDUT2X",
"Hygz4bf93Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"The authors did not address my concerns much. I keep my rating to 5 (leaning to 4), as I still think there are more experiments that can be included to strengthen the paper (and I think the paper would benefit from this in a long run).\n\n[Comparison to traditional SLAM]\nWhile the claimed main contribution of th... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"rJeroar407",
"iclr_2019_rkxusjRctQ",
"BJeJKJUERm",
"Sygu18wv3X",
"Hygz4bf93Q",
"rkxoWDUT2X",
"BJlDJti1C7",
"iclr_2019_rkxusjRctQ",
"iclr_2019_rkxusjRctQ",
"iclr_2019_rkxusjRctQ"
] |
iclr_2019_rkzUYjCcFm | FAST OBJECT LOCALIZATION VIA SENSITIVITY ANALYSIS | Deep Convolutional Neural Networks (CNNs) have been repeatedly shown to perform well on image classification tasks, successfully recognizing a broad array of objects when given sufficient training data. Methods for object localization, however, are still in need of substantial improvement. Common approaches to this problem involve the use of a sliding window, sometimes at multiple scales, providing input to a deep CNN trained to classify the contents of the window. In general, these approaches are time consuming, requiring many classification calculations. In this paper, we offer a fundamentally different approach to the localization of recognized objects in images. Our method is predicated on the idea that a deep CNN capable of recognizing an object must implicitly contain knowledge about object location in its connection weights. We provide a simple method to interpret classifier weights in the context of individual classified images. This method involves the calculation of the derivative of network generated activation patterns, such as the activation of output class label units, with regard to each in- put pixel, performing a sensitivity analysis that identifies the pixels that, in a local sense, have the greatest influence on internal representations and object recognition. These derivatives can be efficiently computed using a single backward pass through the deep CNN classifier, producing a sensitivity map of the image. We demonstrate that a simple linear mapping can be learned from sensitivity maps to bounding box coordinates, localizing the recognized object. Our experimental results, using real-world data sets for which ground truth localization information is known, reveal competitive accuracy from our fast technique. | rejected-papers | This paper was reviewed by three experts. 
After the author response, R2 and R3 recommend rejecting this paper citing concerns of novelty and experimental evaluation. R1 assigns it a score of "6" but in comments agrees that the manuscript is not ready for ICLR. The AC finds no basis for accepting this paper in this state.
| train | [
"rkxfZezc2Q",
"SkxfZXG43Q",
"ryxzjQsrh7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary\nThe paper presents a method to perform object localization by computing sensitivity of the network activations with respect to each pixel. The key idea is that the representation for classification implicitly contains object localization information since the object classification is done by detecting fea... | [
4,
3,
6
] | [
3,
4,
5
] | [
"iclr_2019_rkzUYjCcFm",
"iclr_2019_rkzUYjCcFm",
"iclr_2019_rkzUYjCcFm"
] |
iclr_2019_rkzfuiA9F7 | Projective Subspace Networks For Few-Shot Learning | Generalization from limited examples, usually studied under the umbrella of meta-learning, equips learning techniques with the ability to adapt quickly in dynamical environments and proves to be an essential aspect of lifelong learning. In this paper, we introduce the Projective Subspace Networks (PSN), a deep learning paradigm that learns non-linear embeddings from limited supervision. In contrast to previous studies, the embedding in PSN deems samples of a given class to form an affine subspace. We will show that such modeling leads to robust solutions, yielding competitive results on supervised and semi-supervised few-shot classification. Moreover, our PSN approach has the ability of end-to-end learning. In contrast to previous works, our projective subspace can be thought of as a richer representation capturing higher-order information datapoints for modeling new concepts. | rejected-papers | The reviewers all like the idea, and though the performance is a little better when compared to prototypical networks, the reviewers felt that the contribution over and above prototypical networks was marginal and none of them was willing to champion the paper. There is merit in that there is increased robustness to outliers, and future iterations of the paper should work to strengthen this aspect.
As a quick nitpick: based on my reading, and on Figure 3, it looks like there might be a typo in the definition of X_k (bottom of page 4). Right now it is defined in terms of the original data space x, when I think it should be defined in terms of the embedding space f(x). Overall this paper is a good contribution to the few-shot learning area. | train | [
"HkgTot5w14",
"BJx1MS5PJV",
"S1eTvIOrk4",
"rJejdLo5hm",
"S1eGjtv0RX",
"HJl27WKY0Q",
"SkgcVftYRQ",
"ryl_jMYYAQ",
"Bklp1-tYRQ",
"H1xWb1tFCQ",
"SkxkiPMn2Q",
"Hkgla4cno7"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We want to thank the reviewer for his/her insightful input and suggestions that led to an improved version of our paper. We have taken all remarks on-board when making our revision. It is our pleasure if the reviewer found our revisions/additional experiments interesting and valuable. We hope other readers will al... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"S1eTvIOrk4",
"S1eGjtv0RX",
"SkgcVftYRQ",
"iclr_2019_rkzfuiA9F7",
"ryl_jMYYAQ",
"Hkgla4cno7",
"SkxkiPMn2Q",
"rJejdLo5hm",
"Hkgla4cno7",
"iclr_2019_rkzfuiA9F7",
"iclr_2019_rkzfuiA9F7",
"iclr_2019_rkzfuiA9F7"
] |
iclr_2019_ryEkcsActX | Teacher Guided Architecture Search | Strong improvements in neural network performance in vision tasks have resulted from the search of alternative network architectures, and prior work has shown that this search process can be automated and guided by evaluating candidate network performance following limited training (“Performance Guided Architecture Search” or PGAS). However, because of the large architecture search spaces and the high computational cost associated with evaluating each candidate model, further gains in computational efficiency are needed. Here we present a method termed Teacher Guided Search for Architectures by Generation and Evaluation (TG-SAGE) that produces up to an order of magnitude in search efficiency over PGAS methods. Specifically, TG-SAGE guides each step of the architecture search by evaluating the similarity of internal representations of the candidate networks with those of the (fixed) teacher network. We show that this procedure leads to significant reduction in required per-sample training and that, this advantage holds for two different search spaces of architectures, and two different search algorithms. We further show that in the space of convolutional cells for visual categorization, TG-SAGE finds a cell structure with similar performance as was previously found using other methods but at a total computational cost that is two orders of magnitude lower than Neural Architecture Search (NAS) and more than four times lower than progressive neural architecture search (PNAS). These results suggest that TG-SAGE can be used to accelerate network architecture search in cases where one has access to some or all of the internal representations of a teacher network of interest, such as the brain. | rejected-papers | The authors propose to accelerate neural architecture search by using feature similarity with a given teacher network to measure how good a new candidate architecture is. 
The experiments show that the method accelerates architecture search, and has competitive performance. However, both Reviewers 1 and 3 noted questionable motivation behind the approach, as the method assumes that there already exists a strong teacher network in the domain where the architecture search is performed, which is not always the case. The rebuttal and the revised version of the paper addressed some of the reviewers' concerns, but overall the paper remained below the acceptance bar. I suggest that the authors further expand the evaluation and motivate their approach better before re-submitting to another venue.
| train | [
"SJgayj9Z1N",
"S1g_BwYp0m",
"BJx-beJ5nm",
"HklJlbdoAQ",
"ryef0eUs0m",
"BklXcO5d0m",
"SJlCYB9uCQ",
"BJx6AX9ORm",
"HyxUEf5dAQ",
"Hkxig7F5n7",
"rJgcfakc27"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for providing a second round of feedback on our work. We responded to each comment below: \n\n1. \"As it is well known from the literature[1, 2, 3], just using the premature performance as a surrogate for the mature performance leads to poor prediction,...\": \nWe agree with the reviewer that... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"S1g_BwYp0m",
"BJx-beJ5nm",
"iclr_2019_ryEkcsActX",
"ryef0eUs0m",
"SJlCYB9uCQ",
"BJx-beJ5nm",
"rJgcfakc27",
"Hkxig7F5n7",
"iclr_2019_ryEkcsActX",
"iclr_2019_ryEkcsActX",
"iclr_2019_ryEkcsActX"
] |
iclr_2019_ryG2Cs09Y7 | Feature prioritization and regularization improve standard accuracy and adversarial robustness | Adversarial training has been successfully applied to build robust models at a certain cost. While the robustness of a model increases, the standard classification accuracy declines. This phenomenon is suggested to be an inherent trade-off. We propose a model that employs feature prioritization by a nonlinear attention module and L2 feature regularization to improve the adversarial robustness and the standard accuracy relative to adversarial training. The attention module encourages the model to rely heavily on robust features by assigning larger weights to them while suppressing non-robust features. The regularizer encourages the model to extract similar features for the natural and adversarial images, effectively ignoring the added perturbation. In addition to evaluating the robustness of our model, we provide justification for the attention module and propose a novel experimental strategy that quantitatively demonstrates that our model is almost ideally aligned with salient data characteristics. Additional experimental results illustrate the power of our model relative to the state-of-the-art methods. | rejected-papers | The paper proposes an attention mechanism to focus on robust features in the context of adversarial attacks. Reviewers asked for more intuition, more results, and more experiments with different attack/defense models. Authors have added experimental results and provided some intuition of their proposed approach. Overall, reviewers still think the novelty is too thin and recommend rejection. I concur with them. | train | [
"H1lypnNVyV",
"Bkx2XmKY2Q",
"SklYe2qunm",
"ryg9zojcRX",
"Skewgioc0m",
"Hkxpu5sq0X",
"HyeSggsdn7"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"\"Also, I found in table 3 that, the larger-capacity model is less robust than the smaller-capacity model against white-box iterative attacks? This is strange.\"\n\n- It's due to overfitting. Below is the record of the training and test accuracy relative to training epochs for wide networks against PGD-5. The firs... | [
-1,
5,
5,
-1,
-1,
-1,
4
] | [
-1,
5,
2,
-1,
-1,
-1,
3
] | [
"Bkx2XmKY2Q",
"iclr_2019_ryG2Cs09Y7",
"iclr_2019_ryG2Cs09Y7",
"SklYe2qunm",
"HyeSggsdn7",
"Bkx2XmKY2Q",
"iclr_2019_ryG2Cs09Y7"
] |
iclr_2019_ryG8UsR5t7 | MERCI: A NEW METRIC TO EVALUATE THE CORRELATION BETWEEN PREDICTIVE UNCERTAINTY AND TRUE ERROR | As deep learning applications are becoming more and more pervasive, the question of evaluating the reliability of a prediction becomes a central question in the machine learning community. This domain, known as predictive uncertainty, has come under the scrutiny of research groups developing Bayesian approaches to deep learning such as Monte Carlo Dropout. Unfortunately, for the time being, the real goal of predictive uncertainty has been swept under the rug. Indeed, Bayesian approaches are solely evaluated in terms of raw performance of the prediction, while the quality of the estimated uncertainty is not assessed. One contribution of this article is to draw attention to existing metrics developed in the forecast community, designed to evaluate both the sharpness and the calibration of predictive uncertainty. Sharpness refers to the concentration of the predictive distributions and calibration to the consistency between the predicted uncertainty level and the actual errors. We further analyze the behavior of these metrics on regression problems when deep convolutional networks are involved and for several current predictive uncertainty approaches. A second contribution of this article is to propose an alternative metric that is more adapted to the evaluation of relative uncertainty assessment and directly applicable to regression with deep learning. This metric is evaluated and compared with existing ones on a toy dataset as well as on the problem of monocular depth estimation. | rejected-papers | Reviewers are in consensus and recommended rejection after engaging with the authors. Please take the reviewers' comments into consideration to improve your submission should you decide to resubmit.
| train | [
"Byejusor0Q",
"rJenao8lC7",
"SkgNvoLlCm",
"SyxT09LeAX",
"BJeLx9UeRX",
"BkgP0S052m",
"SygUVxWc3X",
"ryeyZSlc37"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your reply. Without knowing the performance on the new experiments my opinion of the paper is however unchanged, so I will maintain the score as is.",
"Thank you for your time and critical feedback.\nThe score you propose is indeed interesting and we would need to look further into it. However, it see... | [
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"SkgNvoLlCm",
"ryeyZSlc37",
"SygUVxWc3X",
"BkgP0S052m",
"iclr_2019_ryG8UsR5t7",
"iclr_2019_ryG8UsR5t7",
"iclr_2019_ryG8UsR5t7",
"iclr_2019_ryG8UsR5t7"
] |
iclr_2019_ryGDEjCcK7 | CONTROLLING COVARIATE SHIFT USING EQUILIBRIUM NORMALIZATION OF WEIGHTS | We introduce a new normalization technique that exhibits the fast convergence properties of batch normalization using a transformation of layer weights instead of layer outputs. The proposed technique keeps the contribution of positive and negative weights to the layer output in equilibrium. We validate our method on a set of standard benchmarks including CIFAR-10/100, SVHN and ILSVRC 2012 ImageNet. | rejected-papers | This paper introduces a technique called EquiNorm, which normalizes the weights of convolutional layers in order to control covariate shift. The paper is well-written and the reviewers agree that the solution idea is elegant. However, the reviewers also agree that the experiments presented in the work were insufficient to prove the method's superiority. Reviewer 2 also expressed concerns about the poor results on ImageNet, which calls into question the significance of the proposed method. | train | [
"HJxkOXz5aQ",
"rygP6rGqTQ",
"H1x9HNwYnX",
"B1laMQwYhX",
"rJlTTIWvnQ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThanks for the detailed comments. We would like to respond to a few points, with the hope that you will reconsider your evaluation.\n\n• In the cons section you say that “It looks similar to the training curve when using smaller learning rates.”. As we state in the results section, no value of the learning rate ... | [
-1,
-1,
4,
6,
7
] | [
-1,
-1,
4,
1,
4
] | [
"H1x9HNwYnX",
"rJlTTIWvnQ",
"iclr_2019_ryGDEjCcK7",
"iclr_2019_ryGDEjCcK7",
"iclr_2019_ryGDEjCcK7"
] |
iclr_2019_ryGiYoAqt7 | Learning agents with prioritization and parameter noise in continuous state and action space | Reinforcement Learning (RL) problem can be solved in two different ways - the Value function-based approach and the policy optimization-based approach - to eventually arrive at an optimal policy for the given environment. One of the recent breakthroughs in reinforcement learning is the use of deep neural networks as function approximators to approximate the value function or q-function in a reinforcement learning scheme. This has led to results with agents automatically learning how to play games like alpha-go showing better-than-human performance. Deep Q-learning networks (DQN) and Deep Deterministic Policy Gradient (DDPG) are two such methods that have shown state-of-the-art results in recent times. Among the many variants of RL, an important class of problems is where the state and action spaces are continuous --- autonomous robots, autonomous vehicles, optimal control are all examples of such problems that can lend themselves naturally to reinforcement based algorithms, and have continuous state and action spaces. In this paper, we adapt and combine approaches such as DQN and DDPG in novel ways to outperform the earlier results for continuous state and action space problems. We believe these results are a valuable addition to the fast-growing body of results on Reinforcement Learning, more so for continuous state and action space problems. | rejected-papers | The authors take two algorithmic components that were proposed in the context of discrete-action RL - priority replay and parameter noise - and evaluate them with DDPG on continuous control tasks. The different approaches are nicely summarized by the authors, however the contribution of the paper is extremely limited. There is no novelty in the proposed approaches, the empirical evaluation is inconclusive and limited, and there is no analysis or additional insights or results. 
The AC and the reviewers agree that this paper is not strong enough for ICLR. | test | [
"BkeQx84T2X",
"S1ez5NE52m",
"r1ePXjQXh7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an augmentation of the DDPG algorithm with prioritized experience replay plus parameter noise. Empirical evaluations of the proposed algorithm are conducted on Mujoco benchmarks while the results are mixed.\n\nAs far as I can see, the paper contains almost no novelty as it crudely puts together ... | [
3,
4,
4
] | [
4,
3,
4
] | [
"iclr_2019_ryGiYoAqt7",
"iclr_2019_ryGiYoAqt7",
"iclr_2019_ryGiYoAqt7"
] |
iclr_2019_ryGpEiAcFQ | A Synaptic Neural Network and Synapse Learning | A Synaptic Neural Network (SynaNN) consists of synapses and neurons. Inspired by the synapse research of neuroscience, we built a synapse model with a nonlinear synapse function of excitatory and inhibitory channel probabilities. Having introduced the concept of surprisal space and constructed a commutative diagram, we proved that the inhibitory probability function -log(1-exp(-x)) in surprisal space is the topologically conjugate function of the inhibitory complementary probability 1-x in probability space. Furthermore, we found that the derivative of the synapse over the parameter in the surprisal space is equal to the negative Bose-Einstein distribution. In addition, we constructed a fully connected synapse graph (tensor) as a synapse block of a synaptic neural network. Moreover, we proved the gradient formula of a cross-entropy loss function over parameters, so synapse learning can work with the gradient descent and backpropagation algorithms. In the proof-of-concept experiment, we performed MNIST training and testing on an MLP model with synapse networks as hidden layers. | rejected-papers | In this paper, neural networks are taken a step further by increasing their biological plausibility. In particular, a computational model of the membranes of biological cells is used to train a neural network. The results are validated on MNIST.
The paper's argumentation is not easy to follow, and all reviewers agree that the text needs to be improved. The neuroscience sources that the models are based on are possibly outdated. Finally, the results are too meagre and, in the end, not well compared with competing approaches.
All in all, the merit of this approach is not fully demonstrated, and further work seems to be needed to clarify this. | train | [
"S1gTr6CfyV",
"HJlOooJ5A7",
"HyekZ9k5AQ",
"rkxUOt1cRX",
"SkeAKnct0m",
"S1l8-q9FRX",
"BJlOpY3J07",
"H1lEp0r7T7",
"HJgB6aBXa7",
"B1lc0btMpQ",
"H1x4D-Fza7",
"S1eiR7w-67",
"HJlSNotxpQ",
"H1ltdO_eTX",
"Sye-CUIgpm",
"BJgNadIjh7",
"SJgS3s0I3Q",
"B1l9EKsCh7",
"HJey6OjRnm",
"Byl3YpOCn7"... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public"
] | [
"Replacing Fully Connected (FC) layer by SynaMLP (Synaptic Neural Network for Multiple Layer Perceptrons) in a 6 layer CNN neural network, we have achieved 86% accuracy CIFAR10. That is near equal to the classical neural network in the same settings. \n\nConsidering that the fully connected layer is to leaning a no... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
3,
2,
2,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
3,
3,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2019_ryGpEiAcFQ",
"SkeAKnct0m",
"rkxUOt1cRX",
"S1l8-q9FRX",
"BJlOpY3J07",
"BJlOpY3J07",
"iclr_2019_ryGpEiAcFQ",
"HJgB6aBXa7",
"S1eiR7w-67",
"S1eiR7w-67",
"S1eiR7w-67",
"iclr_2019_ryGpEiAcFQ",
"H1ltdO_eTX",
"Sye-CUIgpm",
"iclr_2019_ryGpEiAcFQ",
"iclr_2019_ryGpEiAcFQ",
"iclr_2019... |
iclr_2019_ryM07h0cYX | Reinforced Pipeline Optimization: Behaving Optimally with Non-Differentiabilities | Many machine learning systems are implemented as pipelines. A pipeline is essentially a chain/network of information processing units. As information flows in and out and gradients vice versa, ideally, a pipeline can be trained end-to-end via backpropagation provided with the right supervision and loss function. However, this is usually impossible in practice, because either the loss function itself may be non-differentiable, or there may exist some non-differentiable units. One popular way to superficially resolve this issue is to separate a pipeline into a set of differentiable sub-pipelines and train them with isolated loss functions. Yet, from a decision-theoretical point of view, this is equivalent to making myopic decisions using ad hoc heuristics along the pipeline while ignoring the real utility, which prevents the pipeline from behaving optimally. In this paper, we show that by converting a pipeline into a stochastic counterpart, it can then be trained end-to-end in the presence of non-differentiable parts. Thus, the resulting pipeline is optimal under certain conditions with respect to any criterion attached to it. In experiments, we apply the proposed approach - reinforced pipeline optimization - to Faster R-CNN, a state-of-the-art object detection pipeline, and obtain empirically near-optimal object detectors consistent with its base design in terms of mean average precision. | rejected-papers | The work proposes a method for smoothing a non-differentiable machine learning pipeline (such as the Faster-RCNN detector) using policy gradient. Unfortunately, the reviewers identified a number of critical issues, including no significant improvement beyond existing works. The authors did not provide a rebuttal for these critical issues. | train | [
"HyeZwJEcnQ",
"Syx6N8zqnm",
"Syxl8c0D2m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors use RPO (Shulman et al, 2015) to transform non-differentiable operations in Faster R-CNN such as NNS, RoIPool, mAP to stochastic but differentiable operations. They cast Faster R-CNN as a SCG which can be trained end-to-end. They show results on VOC 2007.\n\nPros:\n(+) The idea of casting a non-differe... | [
4,
5,
3
] | [
5,
2,
4
] | [
"iclr_2019_ryM07h0cYX",
"iclr_2019_ryM07h0cYX",
"iclr_2019_ryM07h0cYX"
] |
iclr_2019_ryMQ5sRqYX | Finding Mixed Nash Equilibria of Generative Adversarial Networks | We reconsider the training objective of Generative Adversarial Networks (GANs) from the mixed Nash Equilibria (NE) perspective. Inspired by the classical prox methods, we develop a novel algorithmic framework for GANs via an infinite-dimensional two-player game and prove rigorous convergence rates to the mixed NE. We then propose a principled procedure to reduce our novel prox methods to simple sampling routines, leading to practically efficient algorithms. Finally, we provide experimental evidence that our approach outperforms methods that seek pure strategy equilibria, such as SGD, Adam, and RMSProp, both in speed and quality. | rejected-papers | While the authors made a strong rebuttal, none of the reviewers were particularly enthusiastic about the contributions of this paper and we unfortunately have to reject borderline papers. Concerns were expressed about the presentation, as well as the scalability of the approach. The AC encourages the authors to "revise and resubmit". | train | [
"H1xT14ML1E",
"SygwzvX9nm",
"BJeyoJKEJN",
"SyxtdPvQJN",
"Hkx64DCgJN",
"Skl0WT3Op7",
"r1lqOanOaQ",
"Ske0Vphua7",
"rygbA2hua7",
"rJgX333d6m",
"HJxCtnhOaX",
"rJew8qeCn7",
"r1eNSuXihm"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We would like to note that \n\n1. This is the first of its kind convergence result under mild assumptions.\n\n2.The main scalability bottleneck of the algorithm is actually the growth of the samples, which we handle via the mean approximation.\n\n\nAs a result, we can consider sharpening the analysis if the review... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"BJeyoJKEJN",
"iclr_2019_ryMQ5sRqYX",
"r1lqOanOaQ",
"Hkx64DCgJN",
"Ske0Vphua7",
"rJew8qeCn7",
"SygwzvX9nm",
"r1eNSuXihm",
"iclr_2019_ryMQ5sRqYX",
"iclr_2019_ryMQ5sRqYX",
"iclr_2019_ryMQ5sRqYX",
"iclr_2019_ryMQ5sRqYX",
"iclr_2019_ryMQ5sRqYX"
] |
iclr_2019_rye7XnRqFm | Q-map: a Convolutional Approach for Goal-Oriented Reinforcement Learning | Goal-oriented learning has become a core concept in reinforcement learning (RL), extending the reward signal as a sole way to define tasks. However, as parameterizing value functions with goals increases the learning complexity, efficiently reusing past experience to update estimates towards several goals at once becomes desirable but usually requires independent updates per goal.
Considering that a significant number of RL environments can support spatial coordinates as goals, such as on-screen location of the character in ATARI or SNES games, we propose a novel goal-oriented agent called Q-map that utilizes an autoencoder-like neural network to predict the minimum number of steps towards each coordinate in a single forward pass. This architecture is similar to Horde with parameter sharing and allows the agent to discover correlations between visual patterns and navigation. For example learning how to use a ladder in a game could be transferred to other ladders later.
We show how this network can be efficiently trained with a 3D variant of Q-learning to update the estimates towards all goals at once. While the Q-map agent could be used for a wide range of applications, we propose a novel exploration mechanism in place of epsilon-greedy that relies on goal selection at a desired distance followed by several steps taken towards it, allowing long and coherent exploratory steps in the environment.
We demonstrate the accuracy and generalization qualities of the Q-map agent on a grid-world environment and then demonstrate the efficiency of the proposed exploration mechanism on the notoriously difficult Montezuma's Revenge and Super Mario All-Stars games. | rejected-papers | The paper proposes to use a convolutional/de-convolutional Q function over on-screen goal locations, applied to the problem of structured exploration. Reviewers pointed out the similarity to the UNREAL architecture, the difference being that the auxiliary Q functions learned are actually used to act in this case.
Reviewers raised concerns regarding novelty, the formality of the writing, a lack of comparisons to other exploration methods, and the need for ground truth about the sprite location at training time. A minor revision to the text was made, but the reviewers did not feel their main criticisms were addressed. While the method shows promise, given that the authors acknowledge that the method is somewhat incremental, a more thorough quantitative and ablative study would be necessary in order to recommend acceptance. | train | [
"B1ge36LsJN",
"HJx0bKkKhQ",
"r1eBktwNJV",
"Hke-4YVmkV",
"r1lOBG5KCX",
"rJxSZbqFCm",
"rklgYx5Y07",
"SJxwFycY0Q",
"SJgnyWWqh7",
"BklVETbY3Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"While authors have addressed some concerns, they have not addressed others. I would encourage the authors to conduct thorough experimental evaluation and resubmit the paper. ",
"Focus on navigation problems, this paper proposes Q-map, a neural network that estimates the number of steps (in terms of the discount ... | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"rklgYx5Y07",
"iclr_2019_rye7XnRqFm",
"r1lOBG5KCX",
"rJxSZbqFCm",
"HJx0bKkKhQ",
"BklVETbY3Q",
"SJgnyWWqh7",
"iclr_2019_rye7XnRqFm",
"iclr_2019_rye7XnRqFm",
"iclr_2019_rye7XnRqFm"
] |
iclr_2019_ryeAy3AqYm | Distilled Agent DQN for Provable Adversarial Robustness | As deep neural networks have become the state of the art for solving complex reinforcement learning tasks, susceptibility to perceptual adversarial examples has become a concern. The transferability of adversarial examples is known to enable attacks capable of tricking the agent into bad states. In this work we demonstrate a simple poisoning attack able to keep deep RL from learning, and to fool it when trained with defense methods commonly used for classification tasks. We then propose an algorithm called DadQN, based on deep Q-networks, which enables the use of stronger defenses, including defenses enabling the first ever on-line robustness certification of a deep RL agent. | rejected-papers | Reviewers had several concerns about the paper, primary among them being limited novelty of the approach. The reviewers have offered suggestions for improving the work which we encourage the authors to read and consider.
"BJerskW50Q",
"SyxKh2gqC7",
"HJghJZZ9AX",
"SJgk7xbqAX",
"S1xoa1ZqRm",
"Syxr1BTQT7",
"S1x98QUa3m",
"rkxhT2cFn7"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"→ The Untargeted Q-Poisoning (UQP) is not a true poisoning attack since it attacks not just at training time, but also at test time.\n\nWhile UQP is designed to be most effective when used at both training and test time, our evaluation shows that it is still quite effective when used only during training, thus mak... | [
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
2,
2
] | [
"Syxr1BTQT7",
"iclr_2019_ryeAy3AqYm",
"rkxhT2cFn7",
"S1x98QUa3m",
"Syxr1BTQT7",
"iclr_2019_ryeAy3AqYm",
"iclr_2019_ryeAy3AqYm",
"iclr_2019_ryeAy3AqYm"
] |
iclr_2019_ryeNPi0qKX | Language Modeling Teaches You More Syntax than Translation Does: Lessons Learned Through Auxiliary Task Analysis | Recent work using auxiliary prediction task classifiers to investigate the properties of LSTM representations has begun to shed light on why pretrained representations, like ELMo (Peters et al., 2018) and CoVe (McCann et al., 2017), are so beneficial for neural language understanding models. We still, though, do not yet have a clear understanding of how the choice of pretraining objective affects the type of linguistic information that models learn. With this in mind, we compare four objectives - language modeling, translation, skip-thought, and autoencoding - on their ability to induce syntactic and part-of-speech information. We make a fair comparison between the tasks by holding constant the quantity and genre of the training data, as well as the LSTM architecture. We find that representations from language models consistently perform best on our syntactic auxiliary prediction tasks, even when trained on relatively small amounts of data. These results suggest that language modeling may be the best data-rich pretraining task for transfer learning applications requiring syntactic information. We also find that the representations from randomly-initialized, frozen LSTMs perform strikingly well on our syntactic auxiliary tasks, but this effect disappears when the amount of training data for the auxiliary tasks is reduced. | rejected-papers | Strengths:
-- Solid experiments
-- The paper is well written
Weaknesses:
-- The findings are not entirely novel and not so surprising; previous papers (e.g., Brevlins et al. (ACL 2018)) have already
suggested that LM objectives are preferable, and using an LM objective for pretraining is already standard practice (see details in R1 and R3).
There is a consensus between the two reviewers who provided detailed comments and engaged in discussion with the authors. | train | [
"HkgkksjYAm",
"H1e8RYK_0Q",
"r1x6pM-I07",
"HylENB7xAQ",
"Hyl2uSVCa7",
"Sygh7MNRTX",
"SkgklG4CpQ",
"S1lDaxv027",
"BkedG4CTnQ",
"rJl1Yb9O2Q"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your clarification. I understand your point now and agree that the per token loss for the language model encoders (in contrast to the encoders for other tasks) could be a reason why LMs perform well on the evaluation tasks. I think the per token loss could also be a reason why LMs are in general able to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"H1e8RYK_0Q",
"Hyl2uSVCa7",
"HylENB7xAQ",
"SkgklG4CpQ",
"BkedG4CTnQ",
"rJl1Yb9O2Q",
"S1lDaxv027",
"iclr_2019_ryeNPi0qKX",
"iclr_2019_ryeNPi0qKX",
"iclr_2019_ryeNPi0qKX"
] |
iclr_2019_ryeX-nC9YQ | Dimension-Free Bounds for Low-Precision Training | Low-precision training is a promising way of decreasing the time and energy cost of training machine learning models.
Previous work has analyzed low-precision training algorithms, such as low-precision stochastic gradient descent, and derived theoretical bounds on their convergence rates.
These bounds tend to depend on the dimension of the model d in that the number of bits needed to achieve a particular error bound increases as d increases.
This is undesirable because a motivating application for low-precision training is large-scale models, such as deep learning, where d can be huge.
In this paper, we prove dimension-independent bounds for low-precision training algorithms that use fixed-point arithmetic, which lets us better understand what affects the convergence of these algorithms as parameters scale.
Our methods also generalize naturally to let us prove new convergence bounds on low-precision training with other quantization schemes, such as low-precision floating-point computation and logarithmic quantization. | rejected-papers | As the reviewers pointed out, the strength of the paper mostly comes from the analysis of the non-linear quantization, which depends on the double log of the Lipschitz constants and other parameters. The AC and reviewers agree with the dimension-independent nature of the bounds, but also note that dimension-independent bounds may not necessarily be significantly stronger than the dimension-dependent bounds, as the metric used to measure the difficulty of the problem also matters. The paper does, however, seem to lack results that show the empirical benefit of the non-linear quantization. In considering the author response and reviewer comments, the AC decided that this comparison was indeed important for understanding the contribution in this work, and it is difficult to assess the scope of the contribution without such a comparison. | train | [
"BklWPAQcR7",
"Bkg0g0WqAQ",
"Byx19PntCm",
"H1xqG42YA7",
"ryg6W3EaaX",
"SygLsgMVnX",
"H1gEemfTn7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear readers and reviewers, we have uploaded the revised version of our paper, and we made the following changes:\n\nWe fixed some typos and font problems.\nWe removed some confusing mentions of SVRG [1] and HALP [1] in the main body of the paper, which should have been moved to in the appendix before revision.\nW... | [
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2019_ryeX-nC9YQ",
"ryg6W3EaaX",
"SygLsgMVnX",
"H1gEemfTn7",
"iclr_2019_ryeX-nC9YQ",
"iclr_2019_ryeX-nC9YQ",
"iclr_2019_ryeX-nC9YQ"
] |
iclr_2019_ryeaZhRqFm | Link Prediction in Hypergraphs using Graph Convolutional Networks | Link prediction in simple graphs is a fundamental problem in which new links between nodes are predicted based on the observed structure of the graph. However, in many real-world applications, there is a need to model relationships among nodes which go beyond pairwise associations. For example, in a chemical reaction, the relationship among the reactants and products is inherently higher-order. Additionally, there is a need to represent the direction from reactants to products. Hypergraphs provide a natural way to represent such complex higher-order relationships. Even though Graph Convolutional Networks (GCN) have recently emerged as a powerful deep learning-based approach for link prediction over simple graphs, their suitability for link prediction in hypergraphs is unexplored -- we fill this gap in this paper and propose Neural Hyperlink Predictor (NHP). NHP adapts GCNs for link prediction in hypergraphs. We propose two variants of NHP -- NHP-U and NHP-D -- for link prediction over undirected and directed hypergraphs, respectively. To the best of our knowledge, NHP-D is the first method for link prediction over directed hypergraphs. Through extensive experiments on multiple real-world datasets, we show NHP's effectiveness. | rejected-papers | The paper describes a method for the link prediction problem in both directed and undirected hypergraphs. While the problem discussed in the paper is clearly important and interesting, all reviewers agree that the novelty of the proposed approach is somewhat limited given the prior art. | train | [
"r1xLwJqHJ4",
"rJxYm2KHk4",
"B1xcPUDSyV",
"HJlSr0o80Q",
"rklrbi_4Rm",
"B1xL2zM4pX",
"BJenCQzNaX",
"SJxfTVfNpQ",
"r1xcxV_4R7",
"S1xMYoWc2Q",
"r1x10QuUhm",
"HJxLFwebh7"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the response. \n\nOn the novelty of our work:\nWe reiterate that the main novelty / contribution of our work is to explore\n1) an unexplored problem (link prediction in directed hypergraphs) \n2) an underexplored problem (link prediction in undirected hypergraphs) and to propose the first neural-network... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5
] | [
"B1xcPUDSyV",
"HJlSr0o80Q",
"BJenCQzNaX",
"r1xcxV_4R7",
"iclr_2019_ryeaZhRqFm",
"S1xMYoWc2Q",
"r1x10QuUhm",
"HJxLFwebh7",
"B1xL2zM4pX",
"iclr_2019_ryeaZhRqFm",
"iclr_2019_ryeaZhRqFm",
"iclr_2019_ryeaZhRqFm"
] |
iclr_2019_ryeh4jA9F7 | Playing the Game of Universal Adversarial Perturbations | We study the problem of learning classifiers robust to universal adversarial perturbations. While prior work approaches this problem via robust optimization, adversarial training, or input transformation, we instead phrase it as a two-player zero-sum game. In this new formulation, both players simultaneously play the same game, where one player chooses a classifier that minimizes a classification loss whilst the other player creates an adversarial perturbation that increases the same loss when applied to every sample in the training set.
By observing that performing a classification (respectively creating adversarial samples) is the best response to the other player, we propose a novel extension of a game-theoretic algorithm, namely fictitious play, to the domain of training robust classifiers. Finally, we empirically show the robustness and versatility of our approach in two defence scenarios where universal attacks are performed on several image classification datasets -- CIFAR10, CIFAR100 and ImageNet. | rejected-papers | Reviewers mostly recommended to reject after engaging with the authors, with one reviewer slightly suggesting to accept, but with confidence 1. Please take reviewers' comments into consideration to improve your submission should you decide to resubmit. | train | [
"S1lKRChbJE",
"HJxnhijqAX",
"H1xBXQ8FC7",
"r1exBMUK0X",
"H1gqeMLK0m",
"r1ln77Jc2m",
"BJe5rqjvnX",
"BkeuIKaB2X",
"SyxwTzBWqQ",
"Skgc1MSWqm",
"H1eTnRJAt7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public"
] | [
"Thanks for the detailed response for my previous questions. \n\n\"We add that information in the main text in Sec 3.4 to make it clearer.\"\n\nThe last paragraph of Sec. 3.4 is still unclear. \n\nFirst the authors mentioned \"where Ea,b,χ,θ is the expectation over the parameters of the affine transformation applie... | [
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
1,
4,
3,
-1,
-1,
-1
] | [
"r1exBMUK0X",
"H1xBXQ8FC7",
"BkeuIKaB2X",
"BJe5rqjvnX",
"r1ln77Jc2m",
"iclr_2019_ryeh4jA9F7",
"iclr_2019_ryeh4jA9F7",
"iclr_2019_ryeh4jA9F7",
"iclr_2019_ryeh4jA9F7",
"iclr_2019_ryeh4jA9F7",
"iclr_2019_ryeh4jA9F7"
] |
iclr_2019_ryekdoCqF7 | Incremental training of multi-generative adversarial networks | Generative neural networks map a standard, possibly distribution to a complex high-dimensional distribution, which represents the real world data set. However, a determinate input distribution as well as a specific architecture of neural networks may impose limitations on capturing the diversity in the high dimensional target space. To resolve this difficulty, we propose a training framework that greedily produces a series of generative adversarial networks that incrementally capture the diversity of the target space. We show theoretically and empirically that our training algorithm converges to the theoretically optimal distribution, the projection of the real distribution onto the convex hull of the network's distribution space. | rejected-papers | The reviewers and the AC acknowledge the paper contains interesting ideas on using an incremental sequence of multiple generators to capture the diversity of the examples. However, the reviewers and the AC also note that the potential drawback of the paper is the lack of evaluation with other metrics such as inception score, FID score, etc. Therefore the paper is not quite ready for acceptance right now, but the AC encourages the authors to submit to other top venues with more thorough experiments. | train | [
"BJlLjYqBAm",
"HJxt4YLHC7",
"BJxrYBLr0m",
"H1xn-w1Op7",
"BJe_W-n-TX",
"S1lT0m9n2m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for careful reading and helpful comments.\n\n1. We propose incremental training because of its flexible. A jointly training algorithm is limited by its number of components. As [1] shows, due to the limit of GPU memory, they can only get mix+GAN with components T<=5. As in our algorithm, we d... | [
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
3,
4,
3
] | [
"H1xn-w1Op7",
"BJe_W-n-TX",
"S1lT0m9n2m",
"iclr_2019_ryekdoCqF7",
"iclr_2019_ryekdoCqF7",
"iclr_2019_ryekdoCqF7"
] |
iclr_2019_ryemosC9tm | Representation-Constrained Autoencoders and an Application to Wireless Positioning | In a number of practical applications that rely on dimensionality reduction, the dataset or measurement process provides valuable side information that can be incorporated when learning low-dimensional embeddings. We propose the inclusion of pairwise representation constraints into autoencoders (AEs) with the goal of promoting application-specific structure. We use synthetic results to show that only a small amount of AE representation constraints are required to substantially improve the local and global neighborhood preserving properties of the learned embeddings. To demonstrate the efficacy of our approach and to illustrate a practical application that naturally provides such representation constraints, we focus on wireless positioning using a recently proposed channel charting framework. We show that representation-constrained AEs recover the global geometry of the learned low-dimensional representations, which enables channel charting to perform approximate positioning without access to global navigation satellite systems or supervised learning methods that rely on extensive measurement campaigns. | rejected-papers | The reviewers found the work interesting and sensible. The application of latent space constrained autoencoders to wireless positioning certainly seems novel. Applications can certainly be exciting additions to the conference program. However, the reviewers weren't convinced that the technical content of the paper was sufficiently novel to be interesting to the ICLR community. In particular, the reviewers seem concerned that there are no comparisons to more recent methods for dimensionality reduction and learning latent embeddings, such as variational auto-encoders. Certainly a comparison to more recent work constraining latent representations seems warranted to justify this particular approach. | train | [
"SyemCpCF07",
"BJe4Oxm5pX",
"B1eOc1zw6Q",
"Syxkf7kMa7",
"BJeOGpKe6X",
"HyghFgZh37",
"r1xT1HuC3X"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author"
] | [
"I don't find the distinction with CVAE very convincing. I'd grant that this is not a variational model. But it's stretching to suggest autoencoders are not generative models (they are simply deterministic ones). The probabilistic interpretation is that we have a prior of the latents being 'nearby' to other latents... | [
-1,
-1,
5,
-1,
4,
6,
-1
] | [
-1,
-1,
4,
-1,
4,
2,
-1
] | [
"BJe4Oxm5pX",
"B1eOc1zw6Q",
"iclr_2019_ryemosC9tm",
"BJeOGpKe6X",
"iclr_2019_ryemosC9tm",
"iclr_2019_ryemosC9tm",
"HyghFgZh37"
] |
iclr_2019_ryeoxnRqKQ | NATTACK: A STRONG AND UNIVERSAL GAUSSIAN BLACK-BOX ADVERSARIAL ATTACK | Recent works find that DNNs are vulnerable to adversarial examples, whose changes from the benign ones are imperceptible and yet lead DNNs to make wrong predictions. One can find various adversarial examples for the same input to a DNN using different attack methods. In other words, there is a population of adversarial examples, instead of only one, for any input to a DNN. By explicitly modeling this adversarial population with a Gaussian distribution, we propose a new black-box attack called NATTACK. The adversarial attack is hence formalized as an optimization problem, which searches the mean of the Gaussian under the guidance of increasing the target DNN's prediction error. NATTACK achieves 100% attack success rate on six out of eleven recently published defense methods (and greater than 90% for four), all using the same algorithm. Such results are on par with or better than powerful state-of-the-art white-box attacks. While the white-box attacks are often model-specific or defense-specific, the proposed black-box NATTACK is universally applicable to different defenses. | rejected-papers | Although one review is favorable, it does not make a strong enough case for accepting this paper. Thus there is not sufficient support in the reviews to accept this paper.
I am recommending rejecting this submission for multiple reasons.
Given that this is a "black box" attack formalized as an optimization problem, the method must be compared to other approaches in the large field of derivative-free optimization. There are many techniques including: Bayesian optimization, (other) evolutionary algorithms, simulated annealing, Nelder-Mead, coordinate descent, etc. Since the method of the paper does not use anything about the structure of the problem it can be applied to other derivative-free optimization problems that had the same search constraint. However, the paper does not provide evidence that it has advanced the state of the art in derivative-free optimization.
The method the paper describes does not need a new name and is an obvious variation of existing evolutionary algorithms. Someone facing the same problem could easily reinvent the exact method of the paper without reading it and this limits the value of the contribution.
Finally, this paper amounts to breaking already broken defenses, which is not an activity of high value to the community at this stage and also limits the contribution of this work.
| train | [
"ryxoR3ptyE",
"rJxJgUsYkE",
"SklmUtNFJE",
"r1eZ4M7N14",
"BylsJD_90X",
"BJe1JZucAm",
"ByeQ2E_dAQ",
"Hyxdoxb8CX",
"rygk8TgICX",
"HyxHfJWUCm",
"rkxO_clI07",
"BkeY7txIR7",
"r1lSC_3T2m",
"S1g3Hhj_hm",
"SyepS_mP3m",
"SklRyhS5nQ",
"ByeeVjH92m",
"HJe1uvS52Q",
"H1eCOHrqhQ",
"S1ekDzBc2m"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"public",
... | [
"Not sure why you are obsessed with the minimal adversarial examples. If those are what you are looking for, our paper does not provide a direct answer though you probably can derive one based on our work. \n\nKindly check Steps 1--4 in the paper which generate valid adversarial examples whose differences from the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"rJxJgUsYkE",
"SklmUtNFJE",
"r1eZ4M7N14",
"BylsJD_90X",
"ByeQ2E_dAQ",
"ByeQ2E_dAQ",
"rygk8TgICX",
"iclr_2019_ryeoxnRqKQ",
"SyepS_mP3m",
"SklRyhS5nQ",
"S1g3Hhj_hm",
"r1lSC_3T2m",
"iclr_2019_ryeoxnRqKQ",
"iclr_2019_ryeoxnRqKQ",
"iclr_2019_ryeoxnRqKQ",
"ByeeVjH92m",
"HJe1uvS52Q",
"H1e... |
iclr_2019_ryewE3R5YX | Characterizing Attacks on Deep Reinforcement Learning | Deep Reinforcement learning (DRL) has achieved great success in various applications, such as playing computer games and controlling robotic manipulation. However, recent studies show that machine learning models are vulnerable to adversarial examples, which are carefully crafted instances that aim to mislead learning models to make arbitrarily incorrect prediction, and raised severe security concerns. DRL has been attacked by adding perturbation to each observed frame. However, such observation based attacks are not quite realistic considering that it would be hard for adversaries to directly manipulate pixel values in practice. Therefore, we propose to understand the vulnerabilities of DRL from various perspectives and provide a thorough taxonomy of adversarial perturbation against DRL, and we conduct the first experiments on unexplored parts of this taxonomy. In addition to current observation based attacks against DRL, we propose attacks based on the actions and environment dynamics. Among these experiments, we introduce a novel sequence-based attack to attack a sequence of frames for real-time scenarios such as autonomous driving, and the first targeted attack that perturbs environment dynamics to let the agent fail in a specific way. We show empirically that our sequence-based attack can generate effective perturbations in a blackbox setting in real time with a small number of queries, independent of episode length. We conduct extensive experiments to compare the effectiveness of different attacks with several baseline attack methods in several game playing, robotics control, and autonomous driving environments. | rejected-papers | The authors have delivered an extensive examination of deep RL attacks, placing them within a taxonomy, proposing new attacks, and giving empirical evidence to compare the effectiveness of the attacks.
The reviewers and AC appreciate the broad effort, comprising 14 different attacks, and the well-written taxonomic discussion. However, the reviewers were concerned that the paper had significant problems with clarity of technical presentation and that the attacks were not well grounded in any sort of real world scenario. Although the authors addressed many concerns with their revision and rebuttal, the reviewers were not convinced. The AC believes that R1 ought to have increased their score given their comments and the resulting rebuttal, but the paper remains a borderline reject even with a corrected R1 score. | train | [
"HJgJC2JBlE",
"BklYLpkBl4",
"BJle9p1HlN",
"SkeBrJ4oyN",
"Byl9vVV5yE",
"rkeB4Y6hhX",
"BJe9EVVYnX",
"SkxGwtoYJ4",
"B1g-xog5AX",
"Bklh-sx9AQ",
"B1xIHog5RQ",
"HkxZ4jxqC7",
"Bkg3wUSThQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Since this paper is more about attack instead of defense, due to the page limit, we only include a brief discussion about how to make rl more secure with our proposed attack methods. In terms of the ordering of priority for defense with respect to the risk caused by the attacks and the likelihood of the attacks, e... | [
-1,
-1,
-1,
-1,
-1,
5,
6,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"SkeBrJ4oyN",
"Byl9vVV5yE",
"SkxGwtoYJ4",
"Bklh-sx9AQ",
"HkxZ4jxqC7",
"iclr_2019_ryewE3R5YX",
"iclr_2019_ryewE3R5YX",
"B1xIHog5RQ",
"iclr_2019_ryewE3R5YX",
"Bkg3wUSThQ",
"BJe9EVVYnX",
"rkeB4Y6hhX",
"iclr_2019_ryewE3R5YX"
] |
iclr_2019_ryeyti0qKX | On the Statistical and Information Theoretical Characteristics of DNN Representations | It has been common to argue or imply that a regularizer can be used to alter a statistical property of a hidden layer's representation and thus improve generalization or performance of deep networks. For instance, dropout has been known to improve performance by reducing co-adaptation, and representational sparsity has been argued as a good characteristic because many data-generation processes have only a small number of factors that are independent. In this work, we analytically and empirically investigate the popular characteristics of learned representations, including correlation, sparsity, dead unit, rank, and mutual information, and disprove much of the conventional wisdom. We first show that infinitely many Identical Output Networks (IONs) can be constructed for any deep network with a linear layer, where any invertible affine transformation can be applied to alter the layer's representation characteristics. The existence of IONs proves that the correlation characteristics of representation can be either low or high for a well-performing network. Extensions to ReLU layers are provided, too. Then, we consider sparsity, dead unit, and rank to show that only loose relationships exist among the three characteristics. It is shown that a higher sparsity or additional dead units do not imply a better or worse performance when the rank of representation is fixed. We also develop a rank regularizer and show that neither representation sparsity nor lower rank is helpful for improving performance even when the data-generation process has only a small number of independent factors. Mutual information I(z_l; x) and I(z_l; y) are investigated as well, and we show that regularizers can affect I(z_l; x) and thus indirectly influence the performance. Finally, we explain how a rich set of regularizers can be used as a powerful tool for performance tuning.
| rejected-papers | The paper considers an important problem of investigating the effects that different statistical characteristics of representations (hidden unit activations), such as sparsity, low correlation, etc., have on neural network performance; while all reviewers agree that this is clearly a very important topic, there is also a consensus that perhaps the authors must strengthen and emphasize their contribution more clearly.
| train | [
"SkxbtcSACQ",
"S1xp4do3Am",
"S1xAif0YCQ",
"Bkg2Cnht0X",
"rkg9VZ6u0m",
"SkeJUS6_0m",
"Hke8IZadAX",
"Byl64ga_C7",
"Bygh4ga53m",
"HJeleGmwnQ",
"ByxWllVI2Q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for clarifying the point about which layers should be explored for various statistical properties. It's good to know that you are currently investigating mutual information further, which would make the contributions of this paper more interesting.",
"Awesome. Good to hear positive feedback. :) We will... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"SkeJUS6_0m",
"Bkg2Cnht0X",
"Byl64ga_C7",
"rkg9VZ6u0m",
"ByxWllVI2Q",
"Bygh4ga53m",
"ByxWllVI2Q",
"HJeleGmwnQ",
"iclr_2019_ryeyti0qKX",
"iclr_2019_ryeyti0qKX",
"iclr_2019_ryeyti0qKX"
] |
iclr_2019_ryfaViR9YX | Variation Network: Learning High-level Attributes for Controlled Input Manipulation | This paper presents the Variation Network (VarNet), a generative model providing means to manipulate the high-level attributes of a given input. The originality of our approach is that VarNet is not only capable of handling pre-defined attributes but can also learn the relevant attributes of the dataset by itself. These two settings can be easily combined which makes VarNet applicable for a wide variety of tasks. Further, VarNet has a sound probabilistic interpretation which grants us with a novel way to navigate in the latent spaces as well as means to control how the attributes are learned. We demonstrate experimentally that this model is capable of performing interesting input manipulation and that the learned attributes are relevant and interpretable. | rejected-papers | The authors propose a generative model based on variational autoencoders that provides means to manipulate the high-level attributes of a given input. The attributes can be either pre-defined ground truth attributes or unknown attributes automatically discovered from the data.
While the reviewers acknowledged the potential usefulness of the proposed approach, they raised important concerns that were viewed by the AC as critical issues: (1) very limited experimental evaluation (e.g. no baseline or ablation results, no quantitative results); comparisons on other more complex datasets and more in-depth analysis would substantially strengthen the evaluation and would allow assessing the scope of the contribution of this work – see, for example, R3's suggestion to use other datasets like dSprites or CelebA, where the ground truth attributes are known; (2) lack of presentation clarity – see R2's latest comment on how to improve.
A general consensus among reviewers and the AC suggests that, in its current state, the manuscript is not ready for publication. It needs clarification, more empirical studies, and polish to achieve the desired goal.
| val | [
"B1efH9Ngg4",
"ByeKdiFFAQ",
"HJljbAELp7",
"HyxsBAGAhX",
"HyxzkI2ThQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Here there are some specifics about why I found the paper difficult to follow.\n\nThere are isolated statements that lack a motivation that can guide the reader about why this was a logical step to do. Two examples:\n\"The idea is then to compute z from z∗ by applying a transformation parametrized only by the feat... | [
-1,
-1,
3,
6,
4
] | [
-1,
-1,
4,
2,
3
] | [
"ByeKdiFFAQ",
"iclr_2019_ryfaViR9YX",
"iclr_2019_ryfaViR9YX",
"iclr_2019_ryfaViR9YX",
"iclr_2019_ryfaViR9YX"
] |
iclr_2019_ryfcCo0ctQ | Convergent Reinforcement Learning with Function Approximation: A Bilevel Optimization Perspective | We study reinforcement learning algorithms with nonlinear function approximation in the online setting. By formulating both the problems of value function estimation and policy learning as bilevel optimization problems, we propose online Q-learning and actor-critic algorithms for these two problems respectively. Our algorithms are gradient-based methods and thus are computationally efficient. Moreover, by approximating the iterates using differential equations, we establish convergence guarantees for the proposed algorithms. Thorough numerical experiments are conducted to back up our theory. | rejected-papers | The paper gives a bilevel optimization view for several standard RL algorithms, and proves their asymptotic convergence with function approximation under some assumptions. The analysis is a two-timescale one, and some empirical study is included.
It's a difficult decision to make for this paper. It clearly has a few things to like: (1) the bilevel view seems new in the RL literature (although the view has been implicitly used throughout the literature); (2) the paper is solid and gives rigorous, nontrivial analyses.
On the other hand, reviewers are not convinced it's ready for publication in its current stage:
(1) Technical novelty, in the context of published works: extra challenges needed on top of Borkar; similarity to and differences from Dai et al.; ...
(2) The practical significance is somewhat limited. Does the analysis provide additional insight into how to improve existing approaches? How restricted are the assumptions? Is the online-vs-batch distinction from Dai et al. really important in practice?
(3) What does the paper want to show in the experiments, since no new algorithms are developed? Some claims are made based on very limited empirical evidence. It'd be much better to run algorithms in more controlled situations to show, say, the significance of two-timescale updates. Also, as those algorithms are classic Q-learning and actor-critic (quoting the authors' responses), how well do the algorithms solve the well-known divergent examples when function approximation is used?
(4) Presentation needs to be improved. Reviewers pointed out some overclaims and imprecise statements.
While the author responses were helpful in clarifying some of the questions, reviewers felt that the remaining questions needed to be addressed and the changes would be large enough that another full review cycle is needed. | train | [
"SJxcv5Khk4",
"Syl9zC3r1N",
"ryetm_kR3Q",
"Bkein8QWaQ",
"rJxpWzYq3X",
"rylFIoZ5C7",
"SJlBKqxqRX",
"Hyx_TCHtR7",
"BJeFFCBFRQ",
"HkejBTBFCX",
"SkllJTrKC7",
"B1xSunHYAQ",
"rJeFeRyWam"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"1. Ok\n\n2. Yes the assumptions are stronger, but the results are much stronger as a result, so it's not surprising.\n\n3. The practical implications of this theoretical work are unclear. It's nice that it provides motivation for current practices, but it does not provide additional insight into how to improve exi... | [
-1,
-1,
6,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"rylFIoZ5C7",
"HkejBTBFCX",
"iclr_2019_ryfcCo0ctQ",
"iclr_2019_ryfcCo0ctQ",
"iclr_2019_ryfcCo0ctQ",
"SJlBKqxqRX",
"B1xSunHYAQ",
"BJeFFCBFRQ",
"rJxpWzYq3X",
"ryetm_kR3Q",
"rJeFeRyWam",
"Bkein8QWaQ",
"iclr_2019_ryfcCo0ctQ"
] |
iclr_2019_ryfz73C9KQ | Neural Predictive Belief Representations | Unsupervised representation learning has succeeded with excellent results in many applications. It is an especially powerful tool to learn a good representation of environments with partial or noisy observations. In partially observable domains it is important for the representation to encode a belief state---a sufficient statistic of the observations seen so far. In this paper, we investigate whether it is possible to learn such a belief representation using modern neural architectures. Specifically, we focus on one-step frame prediction and two variants of contrastive predictive coding (CPC) as the objective functions to learn the representations. To evaluate these learned representations, we test how well they can predict various pieces of information about the underlying state of the environment, e.g., position of the agent in a 3D maze. We show that all three methods are able to learn belief representations of the environment---they encode not only the state information, but also its uncertainty, a crucial aspect of belief states. We also find that for CPC multi-step predictions and action-conditioning are critical for accurate belief representations in visually complex environments. The ability of neural representations to capture the belief information has the potential to spur new advances for learning and planning in partially observable domains, where leveraging uncertainty is essential for optimal decision making. | rejected-papers | This paper proposed an unsupervised learning algorithm for predictive modeling. The key idea of using NCE/CPC for predictive modeling is interesting. However, major concerns were raised by reviewers on the experimental design/empirical comparisons and paper writing. Overall, this paper cannot be published in its current form, but I think it may be dramatically improved for a future publication. | test | [
"S1gS823KCQ",
"BJg0Jn3t07",
"rklIio2tCX",
"SylDwo2F0X",
"HJgKV__a2X",
"Hkgl4dA32Q",
"BJxAFigP27"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewers for their comments. We address those comments by the following changes in the paper. In addition, we provide a specific reply for each reviewer. \n\n1.We have consolidated our review of CPC in Sec. 2.2. This addresses a concern raised by AnonReviewer2.\n\n2. We have corrected ... | [
-1,
-1,
-1,
-1,
4,
7,
5
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2019_ryfz73C9KQ",
"BJxAFigP27",
"Hkgl4dA32Q",
"HJgKV__a2X",
"iclr_2019_ryfz73C9KQ",
"iclr_2019_ryfz73C9KQ",
"iclr_2019_ryfz73C9KQ"
] |
iclr_2019_rygFmh0cKm | On Difficulties of Probability Distillation | Probability distillation has recently been of interest to deep learning practitioners as it presents a practical solution for sampling from autoregressive models for deployment in real-time applications. We identify a pathological optimization issue with the commonly adopted stochastic minimization of the (reverse) KL divergence, owing to sparse gradient signal from the teacher model due to the curse of dimensionality. We also explore alternative principles for distillation, and show that one can achieve qualitatively better results than with KL minimization.
| rejected-papers | The paper proposes new methods for optimization of KL(student_model||teacher_model).
The topic is relevant. The paper also contains interesting ideas and the proposed methods are interesting; they are elegant and seem to work reasonably well on the tasks tried.
However, the reviewers do not all agree that the paper is well written. The reviewers have pointed out several issues that need to be addressed before the paper can be accepted.
| train | [
"HkeCRRt7RX",
"rkejbRFXA7",
"B1lvYTFX0X",
"rJllYM08p7",
"BJxBtwJqhX",
"SygyL6Otnm"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the constructive feedback! We address the concerns below:\n\n1. Thanks for spotting the typo! We’ve fixed it.\n\n2. We used c initially to express the formula generally, for any vector c. This is a stylistic choice, but we agree that it might be confusing for some readers, and for the sake of readabi... | [
-1,
-1,
-1,
5,
7,
5
] | [
-1,
-1,
-1,
5,
2,
4
] | [
"SygyL6Otnm",
"BJxBtwJqhX",
"rJllYM08p7",
"iclr_2019_rygFmh0cKm",
"iclr_2019_rygFmh0cKm",
"iclr_2019_rygFmh0cKm"
] |
iclr_2019_rygVV205KQ | Visual Imitation with a Minimal Adversary | High-dimensional sparse reward tasks present major challenges for reinforcement learning agents. In this work we use imitation learning to address two of these challenges: how to learn a useful representation of the world e.g. from pixels, and how to explore efficiently given the rarity of a reward signal? We show that adversarial imitation can work well even in this high dimensional observation space. Surprisingly the adversary itself, acting as the learned reward function, can be tiny, comprising as few as 128 parameters, and can be easily trained using the most basic GAN formulation. Our approach removes limitations present in most contemporary imitation approaches: requiring no demonstrator actions (only video), no special initial conditions or warm starts, and no explicit tracking of any single demo. The proposed agent can solve a challenging robot manipulation task of block stacking from only video demonstrations and sparse reward, in which the non-imitating agents fail to learn completely. Furthermore, our agent learns much faster than competing approaches that depend on hand-crafted, staged dense reward functions, and also better compared to standard GAIL baselines. Finally, we develop a new adversarial goal recognizer that in some cases allows the agent to learn stacking without any task reward, purely from imitation. | rejected-papers | The paper extends an existing approach to imitation learning, GAIL (Generative Adversarial Imitation Learning, based on an adversarial approach where a policy learner competes with a discriminator) in several ways and demonstrates that the resulting approach can learn in settings with high dimensional observation spaces, even with a very low dimensional discriminator. Empirical results show promising performance on a (simulated) robotics block stacking task, as well as a standard benchmark - Walker2D (DeepMind control suite).
The reviewers and the AC note several potential weaknesses. Most importantly, the contributions of the paper are "muddled" (R2). The authors introduce several modifications to their baseline, GAIL, and show empirical improvements over the baseline. However, the presented experiments do not systematically identify which modifications have what impact on the empirical results. For example, R2 mentions this for figure 4, where it appears on first look that the proposed approach is compared to the vanilla GAIL baseline - however, there appear to be differences from vanilla GAIL, e.g., in terms of reward structure (and possibly other modeling choices - how close is the GAIL implementation used to the original method, e.g., in terms of the policy learner and discriminator)? There is also confusion on which setting is addressed in which part of the paper, given that there is both a "RL+IL" and an "imitation only" component.
In their rebuttal, the authors respond to, and clarify some of the questions raised by the reviewers, but the AC and corresponding reviewers consider many issues to remain unclear. Overall, the presentation could be much improved by indicating, for each set of experiments, what research question or hypothesis it is designed to address, and to clearly indicate conclusions on each question once the results have been discussed. In its current state, the paper reads as a list of interesting and potentially highly valuable ideas, together with a list of empirical results. The real value of the paper should come in when these are synthesized into lessons learned, e.g., why specific results are observed and what novel insights they afford the reader. Overall, the paper will benefit from a thorough revision and is not considered ready for publication at ICLR at this stage.
The AC notes that they placed less weight on R3's assessment, due to their relatively low confidence, because they appear not to be familiar with key related work (GAIL), and did not respond to further requests for comments in the discussion phase.
The AC also notes a potential weakness that was not brought up by the reviewers, and which they therefore did not weigh into their assessment of the paper, but nevertheless want to share to hopefully help improve a future version of the paper. Figure 6(b) should be interpreted with caution given that performance with a greater number of demonstrations (120 vs 60) showed lower performance. The authors note in the caption that one of the "120 demos" runs "failed to take of". This suggests that variance for all these runs may be underestimated with the currently used number of seeds. It is not clear what the shaded region indicates (another drawback) but if I interpret these as standard errors then this plot would suggest lower performance for higher numbers of demonstrations with some confidence - clearly that conclusion is unlikely to be correct. | train | [
"S1liUUpqyV",
"SklR0NgUpQ",
"r1l6qR44kN",
"Hygx4yyxC7",
"BJgtuAaJCX",
"rylFR8TkCX",
"HkeXJh1a37",
"B1x90GNcn7"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Imitation can be treated as a supervised learning problem when there are (state, action) pairs available and you want to learn a policy by regressing expert actions from the states. However, when no expert actions are available to predict, one must learn from experience by interacting with the environment. This ty... | [
-1,
5,
-1,
-1,
-1,
-1,
3,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"r1l6qR44kN",
"iclr_2019_rygVV205KQ",
"Hygx4yyxC7",
"B1x90GNcn7",
"HkeXJh1a37",
"SklR0NgUpQ",
"iclr_2019_rygVV205KQ",
"iclr_2019_rygVV205KQ"
] |
iclr_2019_rygZJ2RcF7 | Out-of-Sample Extrapolation with Neuron Editing | While neural networks can be trained to map from one specific dataset to another, they usually do not learn a generalized transformation that can extrapolate accurately outside the space of training. For instance, a generative adversarial network (GAN) exclusively trained to transform images of cars from light to dark might not have the same effect on images of horses. This is because neural networks are good at generation within the manifold of the data that they are trained on. However, generating new samples outside of the manifold or extrapolating "out-of-sample" is a much harder problem that has been less well studied. To address this, we introduce a technique called neuron editing that learns how neurons encode an edit for a particular transformation in a latent space. We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons. By performing the transformation in a latent trained space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neuron's activations. We showcase our technique on image domain/style transfer and two biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs. | rejected-papers | This was a borderline paper, as reviewers generally agreed that the method was new, appropriately explained and motivated, and had reasonable experimental results. The main drawbacks were that the significance of the method was unclear. In particular, the method might be too inflexible due to being based on a hard-coded rule, and it is not clear why this is the right approach relative to, e.g., GANs with a modified training objective.
Reviewers also had difficulty assessing the significance of the results on biological datasets. While such results certainly add to the paper, the paper would be stronger if the argument for significance could be assessed from more standard datasets.
A note on the review process: the reviewers initially scored the paper 6/6/6, but the review text for some of the reviews was more negative than a typical 6 score. To confirm this, I asked if any reviewers wanted to push for acceptance. None of the reviewers did (generally due to feeling the significance of the results was limited) and two of the reviewers decided to lower their scores to account for this. | train | [
"SklGFZbphQ",
"H1xmTQtGpX",
"SJxxYe1fkN",
"BJxInW-YCm",
"HJgAPWWFRX",
"SJgRJxZF0m",
"ryeR_EJ937"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper demonstrates that we can harness (semantically meaningful) features learned by a pre-trained autoencoder AE to define a determinisc transformation (e.g. math operations on latent space) to transform one distribution A into another distribution B.\nThe original AE was pre-trained on a larger distribution ... | [
5,
5,
-1,
-1,
-1,
-1,
6
] | [
3,
3,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_rygZJ2RcF7",
"iclr_2019_rygZJ2RcF7",
"SJgRJxZF0m",
"ryeR_EJ937",
"H1xmTQtGpX",
"SklGFZbphQ",
"iclr_2019_rygZJ2RcF7"
] |
iclr_2019_rygjN3C9F7 | The Variational Deficiency Bottleneck | We introduce a bottleneck method for learning data representations based on channel deficiency, rather than the more traditional information sufficiency. A variational upper bound allows us to implement this method efficiently. The bound itself is bounded above by the variational information bottleneck objective, and the two methods coincide in the regime of single-shot Monte Carlo approximations. The notion of deficiency provides a principled way of approximating complicated channels by relatively simpler ones. The deficiency of one channel w.r.t. another has an operational interpretation in terms of the optimal risk gap of decision problems, capturing classification as a special case. Unsupervised generalizations are possible, such as the deficiency autoencoder, which can also be formulated in a variational form. Experiments demonstrate that the deficiency bottleneck can provide advantages in terms of minimal sufficiency as measured by information bottleneck curves, while retaining a good test performance in classification and reconstruction tasks. | rejected-papers | Strengths: The paper presents an alternative regularized training objective for supervised learning that has a reasonable theoretical justification. It also has a simple computational formula.
Weaknesses:
The experiments are minimal proofs of concept on MNIST and fashion MNIST, and the authors didn't find an example where this formulation makes a large difference. The resulting formula is very close to existing methods. Finally the paper is a bit dense and the intuitions we should gain from this theory aren't made clear.
Points of contention:
One reviewer pointed out the close connection of the new objective to IWAE, and the authors added a discussion of the relation and showed that they're not mathematically equivalent. However, as far as I can tell they're almost identical in purpose: As k -> \infty in IWAE, the encoder ceases to matter. And as M -> \infty in VDB, we take the max over all encoders. Could the method proposed in this paper lead to an alternative to IWAE in the VAE setting?
Consensus:
Consensus wasn't reached, but the "7" reviewer did not appear to have put much thought into their review. | train | [
"SkeI70MiAm",
"BkgLu-miR7",
"Syxm7-moAm",
"ryxlCAGjAm",
"H1l-kpRKC7",
"Byx7lzhRhQ",
"SJgxZjtRnX",
"BkerznG53Q"
] | [
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments! \n\n* We included a table showing accuracy numbers for different values of beta and M (see p. 6, Table 1) for the latent bottleneck sizes K=256 (Figure 2) and K=2 (Figure 3). \n\n* In relation to the figures, we have improved these in the revision. We are added a figure tracing the mut... | [
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
2,
2
] | [
"Byx7lzhRhQ",
"H1l-kpRKC7",
"BkerznG53Q",
"SJgxZjtRnX",
"iclr_2019_rygjN3C9F7",
"iclr_2019_rygjN3C9F7",
"iclr_2019_rygjN3C9F7",
"iclr_2019_rygjN3C9F7"
] |
iclr_2019_rygk9oA9Ym | 3D-RelNet: Joint Object and Relational Network for 3D Prediction | We propose an approach to predict the 3D shape and pose for the objects present in a scene. Existing learning based methods that pursue this goal make independent predictions per object, and do not leverage the relationships amongst them. We argue that reasoning about these relationships is crucial, and present an approach to incorporate these in a 3D prediction framework. In addition to independent per-object predictions, we predict pairwise relations in the form of relative 3D pose, and demonstrate that these can be easily incorporated to improve object level estimates. We report performance across different datasets (SUNCG, NYUv2), and show that our approach significantly improves over independent prediction approaches while also outperforming alternate implicit reasoning methods. | rejected-papers | With ratings of 6, 5 & 3 the numerical scores are just not strong enough to warrant acceptance.
The author rebuttal was not able to sway opinions.
| train | [
"S1gLiDjm14",
"BJe-6FS0Rm",
"B1lY_kDsT7",
"B1ld8lvop7",
"rkxzbxPsT7",
"SJxi1JwiTX",
"Sye4stm16X",
"BJgJHwjq2Q",
"H1lDjH9p3X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I read authors’ rebuttal. In the rebuttal some of my questions and concerns are quoted incompletely such that my original statements can be interpreted in a different way than what I had originally written. Then, the authors have used such incomplete quotations and have called them as “incorrect statements of the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"SJxi1JwiTX",
"rkxzbxPsT7",
"H1lDjH9p3X",
"Sye4stm16X",
"BJgJHwjq2Q",
"iclr_2019_rygk9oA9Ym",
"iclr_2019_rygk9oA9Ym",
"iclr_2019_rygk9oA9Ym",
"iclr_2019_rygk9oA9Ym"
] |
iclr_2019_rygnfn0qF7 | Language Model Pre-training for Hierarchical Document Representations | Hierarchical neural architectures can efficiently capture long-distance dependencies and have been used for many document-level tasks such as summarization, document segmentation, and fine-grained sentiment analysis. However, effective usage of such a large context can be difficult to learn, especially in the case where there is limited labeled data available.
Building on the recent success of language model pretraining methods for learning flat representations of text, we propose algorithms for pre-training hierarchical document representations from unlabeled data. Unlike prior work, which has focused on pre-training contextual token representations or context-independent sentence/paragraph representations, our hierarchical document representations include fixed-length sentence/paragraph representations which integrate contextual information from the entire documents. Experiments on document segmentation, document-level question answering, and extractive document summarization demonstrate the effectiveness of the proposed pre-training algorithms. | rejected-papers | This paper proposes to pre-train hierarchical document representations for use in downstream tasks. All reviewers agreed that the results were reasonable.
However, the methodological novelty is limited. While I believe there is a place for solid empirical results, even if not incredibly novel, there is also little qualitative or quantitative analysis to provide additional insights.
Given the high quality bar for ICLR, I can't recommend the paper for acceptance at this time. | val | [
"rJewk1Bphm",
"rylFqMMiAm",
"HJxN7zGjCQ",
"HJxdkzGs0Q",
"rygPes5cn7",
"H1xO5rV56Q",
"SkegnEVq6m",
"HkebYmVca7",
"SyeacLV537"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Reasonable method, but not too much novelty\n\n[Summary]\n\nThe paper proposed techniques to pretrain two-layer hierarchical bi-directional or single-directional LSTM networks for language processing tasks. In particular, the paper uses the word prediction, either for the next work or randomly missing words, as th... | [
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
4
] | [
"iclr_2019_rygnfn0qF7",
"HkebYmVca7",
"SkegnEVq6m",
"H1xO5rV56Q",
"iclr_2019_rygnfn0qF7",
"SyeacLV537",
"rJewk1Bphm",
"rygPes5cn7",
"iclr_2019_rygnfn0qF7"
] |