paper_id            stringlengths (19 to 21)
paper_title         stringlengths (8 to 170)
paper_abstract      stringlengths (8 to 5.01k)
paper_acceptance    stringclasses (18 values)
meta_review         stringlengths (29 to 10k)
label               stringclasses (3 values)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
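Each record in this dump can be represented as a plain Python dict with the fields above. The sketch below is a minimal, hypothetical illustration (the helper name `mean_review_rating` and the example dict are my own): it assumes, based on the rows that follow, that a -1 entry in `review_ratings` / `review_confidences` marks a comment that is not an official review (an author reply or public comment) and should be skipped when aggregating.

```python
# Sketch of working with one record from this dump; field names follow the
# schema above. Assumption: -1 in review_ratings marks non-review comments
# (author replies, public comments), as suggested by the review_writers lists.

def mean_review_rating(record):
    """Average the ratings of actual reviews, skipping the -1 sentinel."""
    ratings = [r for r in record["review_ratings"] if r != -1]
    return sum(ratings) / len(ratings) if ratings else None

# Abbreviated example modeled on the first record below (hypothetical subset).
example = {
    "paper_id": "iclr_2019_HkgxasA5Ym",
    "review_writers": ["author", "official_reviewer", "official_reviewer"],
    "review_ratings": [-1, 7, 4],
}

print(mean_review_rating(example))  # mean of 7 and 4 -> 5.5
```

A record with only author/public comments (all ratings -1) yields `None` rather than a division-by-zero error.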
iclr_2019_HkgxasA5Ym
Reliable Uncertainty Estimates in Deep Neural Networks using Noise Contrastive Priors
Obtaining reliable uncertainty estimates of neural network predictions is a long-standing challenge. Bayesian neural networks have been proposed as a solution, but it remains open how to specify their prior. In particular, the common practice of a standard normal prior in weight space imposes only weak regularities, causing the function posterior to possibly generalize in unforeseen ways on inputs outside of the training distribution. We propose noise contrastive priors (NCPs) to obtain reliable uncertainty estimates. The key idea is to train the model to output high uncertainty for data points outside of the training distribution. NCPs do so using an input prior, which adds noise to the inputs of the current mini-batch, and an output prior, which is a wide distribution given these inputs. NCPs are compatible with any model that can output uncertainty estimates, are easy to scale, and yield reliable uncertainty estimates throughout training. Empirically, we show that NCPs prevent overfitting outside of the training distribution and result in uncertainty estimates that are useful for active learning. We demonstrate the scalability of our method on the flight delays data set, where we significantly improve upon previously published results.
rejected-papers
The paper studies the problem of uncertainty estimation in neural networks and proposes to use a Bayesian approach with a noise contrastive prior. The reviewers and AC note potential weaknesses of the experimental results: (1) lack of sufficient datasets with moderate-to-high dimensional inputs, (2) arguable choices of hyperparameters, and (3) lack of direct evaluations, e.g., measuring network calibration would be better than active learning. The paper is well written and potentially interesting. However, the AC decided that the paper might not be ready for publication in its current form due to these weaknesses.
val
[ "rkeTqRVc0Q", "Hkxh96Ec07", "rJl-HdVcAX", "r1xV4DNqAm", "rkgJ1Jle6Q", "r1e1xzyk6X", "r1xkPlfY2m", "BkxGzqKocX" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Thank you for pointing out your related recent work.", "Thank you very much for your review and the constructive suggestions.\n\n[1. The authors propose to use so-called noise contrastive prior, but the actual implementation boils down to adding Gaussian noise to input points and respective outputs.]\n\nWe would...
[ -1, -1, -1, -1, 7, 4, 6, -1 ]
[ -1, -1, -1, -1, 3, 4, 4, -1 ]
[ "BkxGzqKocX", "r1e1xzyk6X", "r1xkPlfY2m", "rkgJ1Jle6Q", "iclr_2019_HkgxasA5Ym", "iclr_2019_HkgxasA5Ym", "iclr_2019_HkgxasA5Ym", "iclr_2019_HkgxasA5Ym" ]
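The record above also shows how comments are threaded: each entry of `review_reply_tos` either equals `paper_id` (a top-level review or comment on the paper) or names another id in `review_ids` (a reply to that comment). A minimal sketch of rebuilding that tree, under that assumption (the helper name `build_reply_tree` and the abbreviated record are mine):

```python
from collections import defaultdict

def build_reply_tree(record):
    """Map each parent id to the ids of comments replying to it.

    Assumption (based on the records in this dump): an entry of
    review_reply_tos equal to paper_id marks a top-level comment;
    otherwise it is the id of the comment being replied to.
    """
    children = defaultdict(list)
    for cid, parent in zip(record["review_ids"], record["review_reply_tos"]):
        children[parent].append(cid)
    return children

# Abbreviated example modeled on a record in this dump where all three
# comments are top-level reviews on the paper itself.
record = {
    "paper_id": "iclr_2019_HklQxnC5tX",
    "review_ids": ["SJg2Eh7CnQ", "rJlq687527", "H1lyCKNP3X"],
    "review_reply_tos": ["iclr_2019_HklQxnC5tX"] * 3,
}
tree = build_reply_tree(record)
print(tree["iclr_2019_HklQxnC5tX"])  # all three ids are top-level
```

In the first record above, author replies point at reviewer comment ids, so they would appear as children of those reviews rather than of the paper id.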
iclr_2019_Hkl-di09FQ
Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal based robotics
Scaling end-to-end reinforcement learning to control real robots from vision presents a series of challenges, in particular in terms of sample efficiency. In contrast to end-to-end learning, state representation learning can help learn a compact, efficient and relevant representation of states that speeds up policy learning, reducing the number of samples needed, and that is easier to interpret. We evaluate several state representation learning methods on goal-based robotics tasks and propose a new unsupervised model that stacks representations and combines the strengths of several of these approaches. This method encodes all the relevant features, performs on par with or better than end-to-end learning, and is robust to hyper-parameter changes.
rejected-papers
This paper proposes a new method for combining previous state representation learning methods and compares it to end-to-end learning without separately learning a state representation. The topic is important, and the authors have made an extensive effort to address the reviewers' concerns, particularly regarding clarity, related work, and the accuracy of the drawn conclusions. The reviewers found that the main weakness of the paper was that the experiments were not sufficiently convincing that the proposed approach is better than the alternatives. Hence, it does not currently meet the bar for publication.
train
[ "H1lK0lbPCX", "BkgIYgbw07", "r1gX6z-w07", "rygjyxZv0X", "rklb2FSeam", "Bke6q_IRhQ", "BJg5O9zd2Q" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nDear reviewer,\nThank you for your remarks!\n\n1. We indeed do not have strong theoretical result on the applicability of our approach, however, we provide some insight about the way of performing efficient state representation learning in the case of goal based tasks. In particular, we highlight the fact that ...
[ -1, -1, -1, -1, 5, 3, 4 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "BJg5O9zd2Q", "Bke6q_IRhQ", "iclr_2019_Hkl-di09FQ", "rklb2FSeam", "iclr_2019_Hkl-di09FQ", "iclr_2019_Hkl-di09FQ", "iclr_2019_Hkl-di09FQ" ]
iclr_2019_Hkl84iCcFm
RESIDUAL NETWORKS CLASSIFY INPUTS BASED ON THEIR NEURAL TRANSIENT DYNAMICS
In this study, we analyze the input-output behavior of residual networks from a dynamical system point of view by disentangling the residual dynamics from the output activities before the classification stage. For a network with simple skip connections between every pair of successive layers, a logistic activation function, and shared weights between layers, we show analytically that there is a cooperation and competition dynamics between residuals corresponding to each input dimension. Interpreting these kinds of networks as nonlinear filters, the steady-state value of the residuals in the case of attractor networks is indicative of the common features between different input dimensions that the network has observed during training and has encoded in those components. In cases where residuals do not converge to an attractor state, their internal dynamics are separable for each input class, and the network can reliably approximate the output. We bring analytical and empirical evidence that residual networks classify inputs based on the integration of the transient dynamics of the residuals, and we show how the network responds to input perturbations. We compare the network dynamics for a ResNet and a Multi-Layer Perceptron and show that the internal dynamics and the noise evolution are fundamentally different in these networks, and that ResNets are more robust to noisy inputs. Based on these findings, we also develop a new method to adjust the depth of residual networks during training. As it turns out, after pruning the depth of a ResNet using this algorithm, the network is still capable of classifying inputs with high accuracy.
rejected-papers
The paper uses dynamical systems theory to evaluate feed-forward neural networks. The theory is used to compute the optimal depth of ResNets. An interesting approach, and a good initiative. At the same time, the approach seems not to be thought through well enough, and the work needs another level of maturation before publication. The application that is realised is too immature, and the corresponding contributions are not significant in their current form. All reviewers agree on rejection of the paper.
train
[ "BJxCAXGsAQ", "BJxbV1f5RX", "Hyx1gkzcRm", "H1xe9Ab50Q", "SyeKDppbT7", "B1g6l9B5n7", "rJgykt45h7" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your response. While I appreciate that the various simplifications used in this paper (MNIST only, sigmoids, shared weights, etc.) make the analysis easier, they also reduce the likelihood that results found in these regimes will generalize to more realistic models and domains. \n\nAdditionally, whil...
[ -1, -1, -1, -1, 4, 2, 5 ]
[ -1, -1, -1, -1, 4, 4, 5 ]
[ "Hyx1gkzcRm", "rJgykt45h7", "B1g6l9B5n7", "SyeKDppbT7", "iclr_2019_Hkl84iCcFm", "iclr_2019_Hkl84iCcFm", "iclr_2019_Hkl84iCcFm" ]
iclr_2019_HklAhi09Y7
Question Generation using a Scratchpad Encoder
In this paper we introduce the Scratchpad Encoder, a novel addition to the sequence to sequence (seq2seq) framework, and explore its effectiveness in generating natural language questions from a given logical form. The Scratchpad Encoder enables the decoder at each time step to modify all the encoder outputs, thus using the encoder as a "scratchpad" memory to keep track of what has been generated so far and to guide future generation. Experiments on a knowledge-based question generation dataset show that our approach generates more fluent and expressive questions according to quantitative metrics and human judgments.
rejected-papers
This paper introduces a "scratchpad" extension to seq2seq models whereby the encoder outputs, typically "read-only" during decoding, are editable by the decoder. In practice, this bears quite a lot of similarity—if not in the general concept, then in the implementation—to a variety of models proposed in the NLP community (see reviews for details). As the technical novelty of the paper is quite limited, and there are issues with clarity both in the technical contribution and in presenting what exactly is the main contribution of the paper, I must concur with the reviewers and recommend rejection.
train
[ "H1x8HiTh1E", "BJlxyt63kN", "BJlwPu6hyE", "H1euiaMah7", "rkeUDe_i37", "HJlWUOjV37" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We are aware of the related work you mention. Please note that unfortunately the “Semantically Conditioned LSTM…” is not directly comparable because, as they state in their paper, “the generator is further conditioned on a control vector d, a 1-hot representation of the dialogue act (DA) type and its slot-value pa...
[ -1, -1, -1, 4, 3, 4 ]
[ -1, -1, -1, 4, 5, 5 ]
[ "HJlWUOjV37", "rkeUDe_i37", "H1euiaMah7", "iclr_2019_HklAhi09Y7", "iclr_2019_HklAhi09Y7", "iclr_2019_HklAhi09Y7" ]
iclr_2019_HklJV3A9Ym
Approximation capability of neural networks on sets of probability measures and tree-structured data
This paper extends the proof of density of neural networks in the space of continuous (or even measurable) functions on Euclidean spaces to functions on compact sets of probability measures. By doing so, the work parallels more than decade-old results on mean-map embedding of probability measures in reproducing kernel Hilbert spaces. The work has wide practical consequences for multi-instance learning, where it theoretically justifies some recently proposed constructions. The result is then extended to Cartesian products, yielding a universal approximation theorem for tree-structured domains, which naturally occur in data-exchange formats like JSON, XML, YAML, AVRO, and ProtoBuffer. This has important practical implications, as it enables the automatic creation of neural network architectures for processing structured data (AutoML paradigms), as demonstrated by an accompanying library for the JSON format.
rejected-papers
Several reviewers thought the results were not surprising in light of existing universality results, and of limited relevance, given that the formalization is not quite in line with real-world networks for MIL. The authors draw out some further justifications in the rebuttal; these should be integrated into the paper. I agree with the general criticisms regarding relevance to ICLR. Ultimately, this work may belong in a journal.
train
[ "B1lSnuSWAm", "BJeq4UXeR7", "BJekexQOTQ", "HkeHul4P6Q", "HkxRkOgRnX", "SkxnuIAp3m", "BJgiQFiNhX" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We do not dispute the novelty of the proof, yet we believe that as the number of applications of AI grows, it becomes important to prove even expected results, as the lack of a proof can help us spot the unsound constructions quicker. The proof itself is important for the field of multi-instance learning, since it...
[ -1, -1, -1, -1, 6, 5, 4 ]
[ -1, -1, -1, -1, 3, 5, 5 ]
[ "SkxnuIAp3m", "BJgiQFiNhX", "HkxRkOgRnX", "iclr_2019_HklJV3A9Ym", "iclr_2019_HklJV3A9Ym", "iclr_2019_HklJV3A9Ym", "iclr_2019_HklJV3A9Ym" ]
iclr_2019_HklKWhC5F7
How Training Data Affect the Accuracy and Robustness of Neural Networks for Image Classification
Recent work has demonstrated the lack of robustness of well-trained deep neural networks (DNNs) to adversarial examples. For example, visually indistinguishable perturbations, when mixed with an original image, can easily lead deep learning models to misclassifications. In light of a recent study on the mutual influence between robustness and accuracy over 18 different ImageNet models, this paper investigates how training data affect the accuracy and robustness of deep neural networks. We conduct extensive experiments on four different datasets, including CIFAR-10, MNIST, STL-10, and Tiny ImageNet, with several representative neural networks. Our results reveal previously unknown phenomena that exist between the size of training data and characteristics of the resulting models. In particular, besides confirming that the model accuracy improves as the amount of training data increases, we also observe that the model robustness improves initially, but there exists a turning point after which robustness starts to decrease. How and when such turning points occur vary for different neural networks and different datasets.
rejected-papers
The reviewers conclude the paper does not bring an important contribution compared to existing work. The experimental study can also be improved.
train
[ "HkeTVQV9RX", "S1gvPogc3m", "SkgBFpM8CX", "HkeEEjMUCm", "SJxad5zUCX", "BJx-g_GU07", "S1xDWt79pm", "rJxmqa5s3Q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I thank the authors for clarifying the contribution of the paper and for providing additional results with other measures of robustness. I have hence revised my rating.", "The paper presents an empirical study of how accuracy and robustness vary with increasing training data for four different data sets and CNN ...
[ -1, 5, -1, -1, -1, -1, 4, 5 ]
[ -1, 4, -1, -1, -1, -1, 4, 3 ]
[ "HkeEEjMUCm", "iclr_2019_HklKWhC5F7", "S1xDWt79pm", "S1gvPogc3m", "rJxmqa5s3Q", "iclr_2019_HklKWhC5F7", "iclr_2019_HklKWhC5F7", "iclr_2019_HklKWhC5F7" ]
iclr_2019_HklQxnC5tX
Overlapping Community Detection with Graph Neural Networks
Community detection in graphs is of central importance in graph mining, machine learning and network science. Detecting overlapping communities is especially challenging, and remains an open problem. Motivated by the success of graph-based deep learning in other graph-related tasks, we study the applicability of this framework for overlapping community detection. We propose a probabilistic model for overlapping community detection based on the graph neural network architecture. Despite its simplicity, our model outperforms the existing approaches in the community recovery task by a large margin. Moreover, due to the inductive formulation, the proposed model is able to perform out-of-sample community detection for nodes that were not present at training time.
rejected-papers
The paper provides an interesting combination of existing techniques (such as GCN and the Bernoulli-Poisson link) to address the problem of overlapping community detection. However, there were concerns about lack of novelty, evaluation metrics, and missing comparisons with previous work. The authors did not provide a response to address these concerns.
train
[ "SJg2Eh7CnQ", "rJlq687527", "H1lyCKNP3X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The current paper considers the overlapping community detection problem and suggests to use the so-called graph neural networks for its solution.\n\nThe approach starts from BigCLAM model and suggests to parametrize factor matrices (or embedding vectors) via neural network with graph adjacency matrix and node attr...
[ 5, 3, 4 ]
[ 4, 5, 5 ]
[ "iclr_2019_HklQxnC5tX", "iclr_2019_HklQxnC5tX", "iclr_2019_HklQxnC5tX" ]
iclr_2019_HklVMnR5tQ
Exploring the interpretability of LSTM neural networks over multi-variable data
In learning a predictive model over multivariate time series consisting of target and exogenous variables, the forecasting performance and interpretability of the model are both essential for deployment and uncovering knowledge behind the data. To this end, we propose the interpretable multi-variable LSTM recurrent neural network (IMV-LSTM), capable of providing accurate forecasting as well as both temporal and variable-level importance interpretation. In particular, IMV-LSTM is equipped with tensorized hidden states and a tensorized update process, so as to learn variable-wise hidden states. On top of this, we develop a mixture attention mechanism and associated summarization methods to quantify the temporal and variable importance in data. Extensive experiments using real datasets demonstrate the prediction performance and interpretability of IMV-LSTM in comparison to a variety of baselines. It also shows promise as an end-to-end framework for both forecasting and knowledge extraction over multivariate data.
rejected-papers
The reviewers appreciated the clarity of writing, and the importance of the problem being addressed. There was a moderate amount of discussion around the paper, but the two reviewers who responded to the author discussion were split in their opinion, with one slightly increasing their score to a 6, and the other remaining unconvinced. The scores overall are borderline for ICLR acceptance, and given that, no reviewer stepped forward to champion the paper.
train
[ "SJgW5KK3y4", "Skegf_JRkV", "BkedufG31E", "H1gY3VZhJ4", "SklwRJb52m", "B1ez2no9hX", "SJxactWc3X" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer,\n\nThanks for your reply to the revision! Maybe we did not explain clearly in the previous response. \n\nWe understand and fully agree that RETAIN is innovative in calculating the contribution of each variable in each timestep. \n\n-- regarding RETAIN \n\nWhat we tried to explain is that the derivat...
[ -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, 5, 5, 3 ]
[ "BkedufG31E", "H1gY3VZhJ4", "B1ez2no9hX", "SklwRJb52m", "iclr_2019_HklVMnR5tQ", "iclr_2019_HklVMnR5tQ", "iclr_2019_HklVMnR5tQ" ]
iclr_2019_HklVTi09tm
Detecting Topological Defects in 2D Active Nematics Using Convolutional Neural Networks
Active matter consists of active agents which transform energy extracted from their surroundings into momentum, producing a variety of collective phenomena. A model synthetic active system composed of microtubule polymers driven by protein motors spontaneously forms a liquid-crystalline nematic phase. Extensile stress created by the protein motors precipitates continuous buckling and folding of the microtubules, creating motile topological defects and turbulent fluid flows. Defect motion is determined by the rheological properties of the material; however, these remain largely unquantified. Measuring defect dynamics can yield fundamental insights into active nematics, a class of materials that includes bacterial films and animal cells. Current methods for defect detection lack robustness and precision, and require fine-tuning for datasets with different visual quality. In this study, we applied Deep Learning to train a defect detector to automatically analyze microscopy videos of the microtubule active nematic. Experimental results indicate that our method is robust and accurate. It is expected to significantly increase the amount of video data that can be processed.
rejected-papers
The reviewers raised a number of major concerns, including the incremental novelty of the proposed method (if any) and the insufficient and unconvincing experimental evaluation presented. The authors did not provide any rebuttal. Hence, I cannot suggest this paper for presentation at ICLR.
train
[ "HJgMRaSyTm", "S1gsYybA3X", "HygtIiI53Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper applies deep learning model YOLO to detect topological defects in 2D active nematics. Experimental results show that YOLO is robust and accurate, which outperforms traditional state-of-the-art defect detection methods significantly.\n\nPros:\n+ Detecting defects in 2D active nematics is an imp...
[ 4, 4, 2 ]
[ 4, 4, 5 ]
[ "iclr_2019_HklVTi09tm", "iclr_2019_HklVTi09tm", "iclr_2019_HklVTi09tm" ]
iclr_2019_HklbTjRcKX
What Information Does a ResNet Compress?
The information bottleneck principle (Shwartz-Ziv & Tishby, 2017) suggests that SGD-based training of deep neural networks results in optimally compressed hidden layers, from an information-theoretic perspective. However, this claim was established on toy data. The goal of the work we present here is to test these claims in a realistic setting using a larger and deeper convolutional architecture, a ResNet model. We trained PixelCNN++ models as inverse representation decoders to measure the mutual information between hidden layers of a ResNet and input image data, when trained for (1) classification and (2) autoencoding. We find that two stages of learning happen for both training regimes, and that compression does occur, even for an autoencoder. Sampling images by conditioning on hidden layers' activations offers an intuitive visualisation to understand what a ResNet learns to forget.
rejected-papers
This paper explores an approach to testing the information bottleneck hypothesis of deep learning, specifically the idea that layers in a deep model successively discard information about the input which is irrelevant to the task being performed by the model, in full-scale ResNet models that are too large to admit the more standard binning-based estimators used in other work. Instead, to lower-bound I(x;h), the authors propose using the log-likelihood of a generative model (PixelCNN++). They also attempt to visualize what sort of information is lost and what is retained by examining PixelCNN++ reconstructions from the hidden representation at different positions in a ResNet trained to perform image classification on the CINIC-10 task. To lower-bound I(y;h), they perform classification. In the experiments, the evolution of the bounds on I(x;h) and I(y;h) is tracked as a function of training epoch, and visualizations (reconstructions of the input) are shown to support the argument that color-invariance and diversity of samples increase during the compression phase of training. These tests are done on models trained to perform either image classification or autoencoding. This paper enjoyed a good discussion between the reviewers and the authors. The reviewers liked the quantitative analysis of "usable information" using PixelCNN++, though R2 wanted additional experiments to better quantify the limitations of the PixelCNN++ model to provide the reader with a better understanding of the plots in Fig. 3, as well as more points sampled during training. Both R2 and R3 had reservations about the qualitative analysis based on the visualizations, which constitute the bulk of the paper. Unfortunately, the PixelCNN++ training is computationally intensive enough that these requests could not be fulfilled during the ICLR discussion phase. While the AC recommends that this submission be rejected from ICLR, this is a promising line of research.
The authors should address the constructive suggestions of R2 and R3 and submit this work elsewhere.
train
[ "BkxB8c0nRm", "S1eNRjitAQ", "H1ezF-oF0m", "S1lQxp4VA7", "HklHZqy4Am", "BJgDQIV70Q", "BkltogsfRm", "SkxKtVeGCm", "H1lPaVml07", "BkxPuVQxA7", "rylPxV7xCX", "rye2iGUs3m", "SJx8hPAq2m", "HJlXGqI53Q" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Using a powerful model (like PixelCNN++) is not solving the tightness issue. Since the results of autoencoder experiments could be explained in different ways and have inherent flaws (for example the loss), I suggest to remove this part from this work.\n\nAfter reading the response, I will keep my rating.", "Tha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "BkxPuVQxA7", "H1ezF-oF0m", "S1lQxp4VA7", "HklHZqy4Am", "rylPxV7xCX", "BkltogsfRm", "SkxKtVeGCm", "H1lPaVml07", "HJlXGqI53Q", "SJx8hPAq2m", "rye2iGUs3m", "iclr_2019_HklbTjRcKX", "iclr_2019_HklbTjRcKX", "iclr_2019_HklbTjRcKX" ]
iclr_2019_Hklc6oAcFX
Co-manifold learning with missing data
Representation learning is typically applied to only one mode of a data matrix, either its rows or columns. Yet in many applications, there is an underlying geometry to both the rows and the columns. We propose utilizing this coupled structure to perform co-manifold learning: uncovering the underlying geometry of both the rows and the columns of a given matrix, where we focus on a missing data setting. Our unsupervised approach consists of three components. We first solve a family of optimization problems to estimate a complete matrix at multiple scales of smoothness. We then use this collection of smooth matrix estimates to compute pairwise distances on the rows and columns based on a new multi-scale metric that implicitly introduces a coupling between the rows and the columns. Finally, we construct row and column representations from these multi-scale metrics. We demonstrate that our approach outperforms competing methods in both data visualization and clustering.
rejected-papers
This manuscript proposes a technique for co-manifold learning that exploits smoothness jointly over the rows and columns of the data. This is an important topic worth further study in the community. The reviewer and AC opinions were mixed, with reviewers either being unconvinced about the novelty of the proposed work or raising issues with the clarity of the presentation. Further improvement of the clarity, particularly clarification of the learning goals, combined with additional convincing experiments, would significantly strengthen this submission.
train
[ "HkgYwBCu3X", "BJelEZN5CX", "HyetndV5R7", "ryg0-lr5R7", "B1lGCkrqCX", "Hke8xaN9AQ", "ByxOX2N5CQ", "Skl0LdE5A7", "r1esFmN50Q", "Syl58eh927", "rkgxURsK37" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Review for CO-MANIFOLD LEARNING WITH MISSING DATA\nSummary:\nThis paper proposes a two-stage method to recovering the underlying structure of a data manifold using both the rows and columns of an incomplete data matrix. In the first stage they impute the missing values using their proposed co-clustering algorithm ...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2019_Hklc6oAcFX", "iclr_2019_Hklc6oAcFX", "Skl0LdE5A7", "B1lGCkrqCX", "Hke8xaN9AQ", "ByxOX2N5CQ", "HkgYwBCu3X", "rkgxURsK37", "Syl58eh927", "iclr_2019_Hklc6oAcFX", "iclr_2019_Hklc6oAcFX" ]
iclr_2019_Hklgis0cF7
Radial Basis Feature Transformation to Arm CNNs Against Adversarial Attacks
The linear and non-flexible nature of deep convolutional models makes them vulnerable to carefully crafted adversarial perturbations. To tackle this problem, in this paper, we propose a nonlinear radial basis convolutional feature transformation by learning the Mahalanobis distance function that maps the input convolutional features from the same class into tight clusters. In such a space, the clusters become compact and well-separated, which prevents small adversarial perturbations from forcing a sample to cross the decision boundary. We test the proposed method on three publicly available image classification and segmentation data-sets, namely MNIST, ISBI ISIC skin lesion, and NIH ChestX-ray14. We evaluate the robustness of our method to different gradient-based (targeted and untargeted) and non-gradient-based attacks and compare it to several non-gradient-masking defense strategies. Our results demonstrate that the proposed method can boost the performance of deep convolutional neural networks against adversarial perturbations without an accuracy drop on clean data.
rejected-papers
Strengths of the paper: Based on previous work suggesting that radial basis features can help defend against adversarial attacks, the paper proposes a concrete method for incorporating them in deep networks. The paper evaluates the method on multiple datasets, including MNIST and ISBI International Skin Imaging Collaboration (ISIC) Challenge. Weaknesses: Reviewers 2 and 3 felt that the paper was not clearly written, and cited several concrete questions about the method that could not be understood from the paper. There were additional concerns of lacking comparison to existing methods, and Reviewer 1 pointed out that a competing method gave higher performance, although this was not reported in the present submission. Points of contention: The authors did not provide a response to the reviewer concerns. Consensus: All reviewers recommended that the paper be rejected, and the authors did not provide a rebuttal.
train
[ "HygfG3O-TQ", "Hyxghm0t2X", "HJxVHlBNnQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a new defense against adversarial examples based on radial basis features. Prior work has suggested that the linearity of standard convolutional networks may be a factor contributing to their vulnerability against adversarial examples, and that radial basis functions may help alleviate this wea...
[ 4, 4, 3 ]
[ 4, 3, 4 ]
[ "iclr_2019_Hklgis0cF7", "iclr_2019_Hklgis0cF7", "iclr_2019_Hklgis0cF7" ]
iclr_2019_HklnzhR9YQ
Approximation and non-parametric estimation of ResNet-type convolutional neural networks via block-sparse fully-connected neural networks
We develop new approximation and statistical learning theories of convolutional neural networks (CNNs) via the ResNet-type structure where the channel size, filter size, and width are fixed. It is shown that a ResNet-type CNN is a universal approximator and its expression ability is no worse than that of fully-connected neural networks (FNNs) with a \textit{block-sparse} structure, even if the size of each layer in the CNN is fixed. Our result is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into one achieved by CNNs. Thanks to the general theory, it is shown that learning on CNNs satisfies optimality in approximation and estimation of several important function classes. As applications, we consider two types of function classes to be estimated: the Barron class and the H\"older class. We prove that the clipped empirical risk minimization (ERM) estimator can achieve the same rate as FNNs even when the channel size, filter size, and width of the CNNs are constant with respect to the sample size. This is minimax optimal (up to logarithmic factors) for the H\"older class. Our proof is based on sophisticated evaluations of the covering number of CNNs and a non-trivial parameter rescaling technique to control the Lipschitz constant of the CNNs to be constructed.
rejected-papers
The paper presents an interesting treatment of transforming block-sparse fully-connected neural networks into ResNet-type convolutional networks. Equipped with recent developments on approximation of function classes (Barron, Hölder) via block-sparse fully-connected networks at the optimal rates, this enables the authors to show the equivalent power of ResNet convolutional nets. The major weakness of this treatment lies in the fact that the ResNet architecture for realizing the block-sparse fully-connected nets is unrealistic. It originates from recent developments in approximation theory that transform a fully-connected net into a convolutional net via Toeplitz matrix (operator) factorizations. However, the convolutional nets or ResNets obtained in this way are different from what has been used successfully in applications. Some special properties associated with convolutions, e.g. translation invariance and local deformation stability, are not natural in the original fully-connected nets and might be indirect after such a treatment. The presentation of the paper should be polished further. Based on the reviewers' ratings, the current version of the paper is a borderline reject.
train
[ "SJxraILVRQ", "Skgxv9UK6m", "rJxXsq8FTX", "H1l4r5IKpQ", "r1evGqLtpQ", "HJxRgdIKaX", "HJeo4wAshX", "rJxsSvvcnQ", "ryxrITy5nm" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I do not have further questions.", "Reply to specific comments:\n> Section 1, p.2: define M? define D? M seems to be used for different things in different paragraphs. The discussion on 'relative scale' could be made clearer.\n\nWe added the definition of D and M to the introduction section. We used the variable...
[ -1, -1, -1, -1, -1, -1, 4, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "r1evGqLtpQ", "ryxrITy5nm", "HJeo4wAshX", "ryxrITy5nm", "rJxsSvvcnQ", "iclr_2019_HklnzhR9YQ", "iclr_2019_HklnzhR9YQ", "iclr_2019_HklnzhR9YQ", "iclr_2019_HklnzhR9YQ" ]
iclr_2019_HklyMhCqYQ
Super-Resolution via Conditional Implicit Maximum Likelihood Estimation
Single-image super-resolution (SISR) is a canonical problem with diverse applications. Leading methods like SRGAN produce images that contain various artifacts, such as high-frequency noise, hallucinated colours and shape distortions, which adversely affect the realism of the result. In this paper, we propose an alternative approach based on an extension of the method of Implicit Maximum Likelihood Estimation (IMLE). We demonstrate greater effectiveness at noise reduction and preservation of the original colours and shapes, yielding more realistic super-resolved images.
rejected-papers
The main novelty of the paper lies in using multiple noise vectors to reconstruct the high resolution image in multiple ways. Then, the reconstruction with minimal loss is selected and updated to improve the fit against the target image. The most important control experiment in my opinion should compare this approach against the same architecture with only m=1 noise vector (i.e., using a constant noise vector all the time). Unfortunately, the paper does not include such a comparison, which means the main hypothesis of the paper is not tested. Please include this experiment in the revised version of the paper. PS: There is another high level concern regarding the use of PSNR or SSIM for evaluation of super-resolution methods. As shown by "Pixel recursive super resolution (Dahl et al.)" and others, PSNR and SSIM metrics are only relevant in the low magnification regime, in which techniques based on MSE (mean squared error) are very competitive. Maybe you need to consider the large magnification regime, in which GAN and normalizing-flow-based models are more relevant.
train
[ "Skg5JP42k4", "Bkl9w6GnkE", "rkgIZHPj0X", "H1x-h4wiA7", "rJgGtNwsRQ", "HkeCTOqY2X", "SJxh_DXN2Q", "SJeKkSMYs7", "HyxcMnORcX", "Bylsy6q997", "rJlQGqnY97", "ByxSpthFcX", "rJgtXKEv9Q", "SkxSWW0mqX" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "public", "public" ]
[ "Yes, we have tried using m=1, but found that this resulted in blurrier images because not allowing the net to output multiple possibilities essentially forces it to predict the mean of the different possibilities. We'll include this result in the camera-ready. ", "The main novelty of the paper lies in using mult...
[ -1, -1, -1, -1, -1, 5, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 3, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ "Bkl9w6GnkE", "iclr_2019_HklyMhCqYQ", "SJeKkSMYs7", "SJxh_DXN2Q", "HkeCTOqY2X", "iclr_2019_HklyMhCqYQ", "iclr_2019_HklyMhCqYQ", "iclr_2019_HklyMhCqYQ", "Bylsy6q997", "ByxSpthFcX", "rJgtXKEv9Q", "SkxSWW0mqX", "iclr_2019_HklyMhCqYQ", "iclr_2019_HklyMhCqYQ" ]
iclr_2019_Hkx-ii05FQ
The Cakewalk Method
Combinatorial optimization is a common theme in computer science. While in general such problems are NP-hard, from a practical point of view, locally optimal solutions can be useful. In some combinatorial problems, however, it can be hard to define meaningful solution neighborhoods that connect large portions of the search space, thus hindering methods that search this space directly. We suggest circumventing such cases by utilizing a policy gradient algorithm that transforms the problem to the continuous domain, and by optimizing a new surrogate objective that renders the former a generic stochastic optimizer. This is achieved by producing a surrogate objective whose distribution is fixed and predetermined, thus removing the need to fine-tune various hyper-parameters in a case-by-case manner. Since we are interested in methods which can successfully recover locally optimal solutions, we use the problem of finding locally maximal cliques as a challenging experimental benchmark, and we report results on a large dataset of graphs that is designed to test clique finding algorithms. Notably, we show in this benchmark that fixing the distribution of the surrogate is key to consistently recovering locally optimal solutions, and that our surrogate objective leads to an algorithm that outperforms the other methods we have tested in a number of measures.
rejected-papers
The paper investigates a variant of the "cross-entropy method" (CME) for heuristic combinatorial optimization, based on stochastically improving a search distribution via policy optimization in a surrogate objective. Unfortunately, the reviewers unanimously recommended rejection, noting that the significance of the contribution over CME remains far from clear and insufficiently supported by the given evidence. The experimental evaluation was unconvincing to all of the reviewers, particularly since only one artificial problem (clique finding) was considered in the paper (with an additional problem, k-medoid clustering, briefly and incompletely considered in the appendix). Several additional concerns were raised about the experimental evaluation, which triggered lengthy author responses but really need to be properly handled in the paper itself: - The sensitivity of performance to the optimization algorithm is a concern and requires more detailed understanding so that reasonable choices can be made in practice. - The independence assumption between search components is an extreme simplification that limits the appeal and applicability of the proposed approach. Even after the author response, it remains unconvincing that an independent search distribution over subcomponents can be effective in challenging combinatorial spaces. Concrete evidence on challenging problems would be more effective than discussion. - The comparisons omitted any tailored algorithms for the specific problems. Even if the authors insist on only comparing to more "general purpose" methods, there is a large space of evolutionary and Bayesian optimization strategies that have been neglected from the comparison. A justification is needed for such an omission (if indeed it is even justifiable).
val
[ "HkejiztdRm", "SyxFgYgiTQ", "HJxVVKlspm", "Skl8N_xipQ", "HkxGgdxjp7", "HygFbNmL6X", "SkxZmGP027", "BJxHwvW23X", "B1l3zjA_h7" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Following our response to the major issues raised by the reviewers in comment https://openreview.net/forum?id=Hkx-ii05FQ&noteId=HygFbNmL6X , we have uploaded a new version of our paper. We hope this version addresses the reviewers' major concerns. Specifically:\n- We have edited the end of the introduction to bet...
[ -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2019_Hkx-ii05FQ", "B1l3zjA_h7", "B1l3zjA_h7", "BJxHwvW23X", "SkxZmGP027", "iclr_2019_Hkx-ii05FQ", "iclr_2019_Hkx-ii05FQ", "iclr_2019_Hkx-ii05FQ", "iclr_2019_Hkx-ii05FQ" ]
iclr_2019_HkxAisC9FQ
Improved robustness to adversarial examples using Lipschitz regularization of the loss
We augment adversarial training (AT) with worst case adversarial training (WCAT) which improves adversarial robustness by 11% over the current state-of-the-art result in the ℓ2-norm on CIFAR-10. We interpret adversarial training as Total Variation Regularization, which is a fundamental tool in mathematical image processing, and WCAT as Lipschitz regularization, which appears in Image Inpainting. We obtain verifiable worst and average case robustness guarantees, based on the expected and maximum values of the norm of the gradient of the loss.
rejected-papers
This paper suggests augmenting adversarial training with a Lipschitz regularization of the loss, and argues that this improves the adversarial robustness of deep neural networks. The idea of using such regularization seems novel. However, several reviewers were seriously concerned with the quality of the writing. In particular, the paper contains claims that are not only unnecessary but also incorrect. Reviewer 2 in particular was also concerned with the presentation of prior work on Lipschitz regularization. Such poor presentation makes it impossible to properly evaluate the actual contribution of the paper.
val
[ "BJgh0eTL1N", "HkePeI0SyV", "HJxhfMpp0m", "rJejui5aCX", "H1xFcKa_hX", "Hye2u2xPCm", "BJgKHaxwAX", "BklZ42xPRm", "rkxOCoewAQ", "B1xzNWKERm", "BJg8q4ZuTm", "rkgx_ajThm", "H1e9KG-ch7", "HkxQ77mUiX" ]
[ "author", "public", "author", "public", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "author" ]
[ "Hi,\n\nThanks for your interest. You're correct, a mixed derivative -- in both x (image) and theta (parameters) is computed. Our implementation was easily done in PyTorch, simply by running autograd twice - first in x to get the norm gradient, then in theta. In practice we found that networks trained with this Lip...
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 6, 6, -1 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 1, 3, -1 ]
[ "HkePeI0SyV", "iclr_2019_HkxAisC9FQ", "rJejui5aCX", "iclr_2019_HkxAisC9FQ", "iclr_2019_HkxAisC9FQ", "H1e9KG-ch7", "H1xFcKa_hX", "rkgx_ajThm", "iclr_2019_HkxAisC9FQ", "BJg8q4ZuTm", "iclr_2019_HkxAisC9FQ", "iclr_2019_HkxAisC9FQ", "iclr_2019_HkxAisC9FQ", "iclr_2019_HkxAisC9FQ" ]
iclr_2019_HkxCEhAqtQ
Accelerated Gradient Flow for Probability Distributions
This paper presents a methodology and numerical algorithms for constructing accelerated gradient flows on the space of probability distributions. In particular, we extend the recent variational formulation of accelerated gradient methods of Wibisono et al. (2016) from vector-valued variables to probability distributions. The variational problem is modeled as a mean-field optimal control problem. The maximum principle of optimal control theory is used to derive Hamilton's equations for the optimal gradient flow. Hamilton's equations are shown to achieve the accelerated form of density transport from any initial probability distribution to a target probability distribution. A quantitative estimate on the asymptotic convergence rate is provided based on a Lyapunov function construction, when the objective functional is displacement convex. Two numerical approximations are presented to implement Hamilton's equations as a system of N interacting particles. The continuous limit of Nesterov's algorithm is shown to be a special case with N=1. The algorithm is illustrated with numerical examples.
rejected-papers
This paper developed an accelerated gradient flow in the space of probability measures. Unfortunately, the reviewers think the practical usefulness of the proposed approach is not sufficiently supported by realistic experiments, and the clarity of the paper needs to be significantly improved. The authors' rebuttal resolved some of the confusion the reviewers had, but we believe further substantial improvement will make this work a much stronger contribution.
train
[ "H1lvDj2FCX", "SJe91SyLAQ", "BkxMU9rA6m", "Skx0X5H0TX", "B1grUCbp67", "BJgSTkJp6Q", "BJemw1J6a7", "H1gUpCRham", "r1evDrYhnQ", "SyxTTel53Q" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for reading our response and the revised version of the paper. We think this discussion is helpful and important in understanding the paper. \n\n“… I still don't quite get how to go from an ODE/PDE to the Lagrangian. What is the relation between these two as well as the Lyapuno function ...”\...
[ -1, -1, -1, -1, 4, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, 3, 4 ]
[ "B1grUCbp67", "B1grUCbp67", "B1grUCbp67", "B1grUCbp67", "iclr_2019_HkxCEhAqtQ", "SyxTTel53Q", "r1evDrYhnQ", "iclr_2019_HkxCEhAqtQ", "iclr_2019_HkxCEhAqtQ", "iclr_2019_HkxCEhAqtQ" ]
iclr_2019_HkxCenR5F7
Variational recurrent models for representation learning
We study the problem of learning representations of sequence data. Recent work has built on variational autoencoders to develop variational recurrent models for generation. Our main goal is not generation but rather representation learning for downstream prediction tasks. Existing variational recurrent models typically use stochastic recurrent connections to model the dependence among neighboring latent variables, while generation assumes independence of generated data per time step given the latent sequence. In contrast, our models assume independence among all latent variables given non-stochastic hidden states, which speeds up inference, while assuming dependence of observations at each time step on all latent variables, which improves representation quality. In addition, we propose and study extensions for improving downstream performance, including hierarchical auxiliary latent variables and prior updating during training. Experiments show improved performance on several speech and language tasks with different levels of supervision, as well as in a multi-view learning setting.
rejected-papers
This paper heavily modifies standard time-series-VAE models to improve their representation learning abilities. However, the resulting model seems like an ad-hoc combination of tricks that lose most of the nice properties of VAEs. The resulting method does not appear to be useful enough to justify itself, and it's not clear that the same ends couldn't be pursued using simpler, more general, and computationally cheaper approaches.
train
[ "HJl7auR214", "r1xyhfI314", "S1e7BjGrAm", "HkeyNwzH07", "Bkx0K9MHC7", "ByeZL0VCh7", "rkl07WUq3Q", "H1gJO-4227" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the detailed and constructive review! ", "As area chair I just wanted to comment that this is an outstandingly thorough, clear, and constructive review. Thank you.", "Thank you for pointing out the missing speed comparison. RecRep is roughly twice faster in our implementation than StocCon when us...
[ -1, -1, -1, -1, -1, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "r1xyhfI314", "ByeZL0VCh7", "rkl07WUq3Q", "ByeZL0VCh7", "H1gJO-4227", "iclr_2019_HkxCenR5F7", "iclr_2019_HkxCenR5F7", "iclr_2019_HkxCenR5F7" ]
iclr_2019_HkxMG209K7
An Alarm System for Segmentation Algorithm Based on Shape Model
It is usually hard for a learning system to predict correctly on rare events, and segmentation algorithms are no exception. Therefore, we hope to build an alarm system that sets off alarms when the segmentation result is possibly unsatisfactory. One plausible solution is to project the segmentation results into a low dimensional feature space, and then learn classifiers/regressors in the feature space to predict the qualities of segmentation results. In this paper, we form the feature space using shape features, which are strong prior information shared among different data, so the system is capable of predicting the qualities of segmentation results given different segmentation algorithms on different datasets. The shape feature of a segmentation result is captured using the value of the loss function when the segmentation result is tested with a Variational Auto-Encoder (VAE). The VAE is trained using only the ground truth masks; therefore, bad segmentation results with bad shapes become rare events for the VAE and result in large loss values. By utilizing this fact, the VAE is able to detect all kinds of shapes that are out of the distribution of normal shapes in the ground truth (GT). Finally, we learn a representation in the one-dimensional feature space to predict the qualities of segmentation results. We evaluate our alarm system on several recent segmentation algorithms for the medical segmentation task. The segmentation algorithms perform differently on different datasets, but our system consistently provides reliable predictions of the qualities of segmentation results.
rejected-papers
The authors present a method using a VAE to model segmentation masks directly. Errors in reconstruction of masks by the VAE indicate that the mask may be outside the distribution of common mask shapes, and are used to predict poor quality segmentation scenarios that fall outside the distribution of common segmentations. Pros: + R2: Technical idea is interesting, and a number of baselines used to compare. + R1 & R4: Method is novel. Cons: - R3 & R4: The method ignores the original input in its prediction, making the method wholly reliant on shape priors. In situations where the shape prior is weak, the method may be expected to fail. Authors have confirmed this, but not added any experiments to quantify its effect. - R4: The baseline regressor method is missing key details, which makes it impossible to judge if the comparison is fair (i.e. at minimum, number of learned parameters for each model, number of convolutional layers, structure of network, etc.). Authors have not provided these details. Authors have not investigated datasets with weak shape prior to see how methods compare in this setting. - R2: GANs can be used as a baseline. Authors confirmed, but did not supply results. Reviewers generally agree that the idea is novel, but the value of the approach cannot be determined due to missing baseline experiments, and missing details of baselines. Recommend reject in current form, but encourage authors to complete experiments.
train
[ "BJeTwti2p7", "rkxfla79Am", "B1g4YhX50X", "BygYm3X9A7", "SygsgjmqCm", "BJeY9XzphQ", "Bkxr7N33pm", "SJeR97Wi27" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors present a method to detect poor quality segmentation results by using a VAE to understand the statistical distribution of segmentation masks, and detect outliers from that distribution in predictions. Method is compared to a few baselines to show improved results.\n\nPros:\n\n1) The idea seems slightly...
[ 3, -1, -1, -1, -1, 6, 7, 5 ]
[ 5, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2019_HkxMG209K7", "SJeR97Wi27", "BJeY9XzphQ", "BJeTwti2p7", "Bkxr7N33pm", "iclr_2019_HkxMG209K7", "iclr_2019_HkxMG209K7", "iclr_2019_HkxMG209K7" ]
iclr_2019_HkxOoiAcYX
Estimating Information Flow in DNNs
We study the evolution of internal representations during deep neural network (DNN) training, aiming to demystify the compression aspect of the information bottleneck theory. The theory suggests that DNN training comprises a rapid fitting phase followed by a slower compression phase, in which the mutual information I(X;T) between the input X and internal representations T decreases. Several papers observe compression of estimated mutual information on different DNN models, but the true I(X;T) over these networks is provably either constant (discrete X) or infinite (continuous X). This work explains the discrepancy between theory and experiments, and clarifies what was actually measured by these past works. To this end, we introduce an auxiliary (noisy) DNN framework for which I(X;T) is a meaningful quantity that depends on the network's parameters. This noisy framework is shown to be a good proxy for the original (deterministic) DNN both in terms of performance and the learned representations. We then develop a rigorous estimator for I(X;T) in noisy DNNs and observe compression in various models. By relating I(X;T) in the noisy DNN to an information-theoretic communication problem, we show that compression is driven by the progressive clustering of hidden representations of inputs from the same class. Several methods to directly monitor clustering of hidden representations, both in noisy and deterministic DNNs, are used to show that meaningful clusters form in the T space. Finally, we return to the estimator of I(X;T) employed in past works, and demonstrate that while it fails to capture the true (vacuous) mutual information, it does serve as a measure for clustering. This clarifies the past observations of compression and isolates the geometric clustering of hidden representations as the true phenomenon of interest.
rejected-papers
This paper studies the compression aspect of the information bottleneck. It seeks to clarify a debate about the evolution of mutual information between inputs and representations during training in neural networks. The paper discusses numerous ideas and techniques and arrives at valuable conclusions. A concern is that parts of the paper (theoretical parts) are intended for a separate paper, and are included in the paper only for reference. This means that the actual contribution of the present paper is mostly on the experimental part. Nonetheless, the discussion derived from the theory and experiments seem valuable in the ongoing discussion of this topic. In any case, I encourage the authors to make efforts to obtain a transparent separation of the different pieces of work. A concern was raised that the current paper mainly addresses a discussion that originated in a paper that has not passed peer review. On the other hand, this discussion does occupy many researchers and justifies the analysis, even if the originating paper has not been published in a peer reviewed format. All reviewers are confident in their assessment. Two of them regard the paper positively and one of them regards the paper as ok, but not good enough, with main criticism in relation to the points discussed above. Although the paper is in any case very good, unfortunately it does not reach the very high bar for acceptance at this ICLR.
train
[ "HJxX7cGnh7", "S1xPxKclRm", "Syx3uzKGnQ", "SyxZ8-zo6X", "SylcZTH767", "HyxKvDU7TX", "r1es6aSXa7", "SkgUq6SXaX", "S1ev16BmTm", "Hyxq33SmTm", "rylnRsBQT7", "HJenP8pmjm" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper provides a principled way to examine the compression phrase, i.e, I(X;T) in deep neural networks. To achieve this, the authors provides an theoretical sounding entropy estimator to estimate mutual information. Empirically, the paper did observe this compression phrase across both synthetic and real-wor...
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2019_HkxOoiAcYX", "iclr_2019_HkxOoiAcYX", "iclr_2019_HkxOoiAcYX", "iclr_2019_HkxOoiAcYX", "Syx3uzKGnQ", "iclr_2019_HkxOoiAcYX", "HJenP8pmjm", "HJenP8pmjm", "Syx3uzKGnQ", "Syx3uzKGnQ", "HJxX7cGnh7", "iclr_2019_HkxOoiAcYX" ]
iclr_2019_HkxWrsC5FQ
Provable Guarantees on Learning Hierarchical Generative Models with Deep CNNs
Learning deep networks is computationally hard in the general case. To show any positive theoretical results, one must make assumptions on the data distribution. Current theoretical works often make assumptions that are very far from describing real data, like sampling from a Gaussian distribution or linear separability of the data. We describe an algorithm that learns a convolutional neural network, assuming the data is sampled from a deep generative model that generates images level by level, where lower resolution images correspond to latent semantic classes. We analyze the convergence rate of our algorithm assuming the data is indeed generated according to this model (as well as additional assumptions). While we do not pretend to claim that the assumptions are realistic for natural images, we do believe that they capture some true properties of real data. Furthermore, we show that on CIFAR-10, the algorithm we analyze achieves results in the same ballpark as vanilla convolutional neural networks that are trained with SGD.
rejected-papers
This manuscript proposes a generative model for images, then proposes a training procedure for fitting a convolutional neural network based on this model. One novelty of this result is that the generative procedure seems to be more complex than the generative assumptions required for previous work. It is clear that the problem addressed -- training methods that may improve on SGD, with convergence guarantees -- is of significant interest to the community. The reviewers and AC note several issues: (i) the initial version of the manuscript includes several assumptions that are not clearly stated; this seems to have been fixed in the updated manuscript; (ii) reviewers suspect that the accumulation of stated assumptions may result in an easily separable generative model, limiting the generality of the results; (iii) experimental results are underwhelming, and only comparable to much older published results.
val
[ "Hkl_UoaMy4", "B1eXfc6MJE", "B1gdgfOFhX", "rJefuPi_hm", "HyleRy0S6m", "B1lG1Omo37" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "Thank you for the response.\nWe will give here a few notes of clarification about the assumptions, and we can add these to the final revision of the paper. We hope that these comments provide the intuitions and explanations that are missing.\n\nAssumption 1: The linear separability of the latent distribution captu...
[ -1, -1, 6, 4, -1, 6 ]
[ -1, -1, 3, 4, -1, 3 ]
[ "rJefuPi_hm", "B1gdgfOFhX", "iclr_2019_HkxWrsC5FQ", "iclr_2019_HkxWrsC5FQ", "iclr_2019_HkxWrsC5FQ", "iclr_2019_HkxWrsC5FQ" ]
iclr_2019_Hkxarj09Y7
Unified recurrent network for many feature types
There are time series that are amenable to recurrent neural network (RNN) solutions when treated as sequences, but some series, e.g. asynchronous time series, provide a richer variation of feature types than current RNN cells take into account. In order to address such situations, we introduce a unified RNN that handles five different feature types, each in a different manner. Our RNN framework separates sequential features into two groups dependent on their frequency, which we call sparse and dense features, and which affect cell updates differently. Further, we also incorporate time features at the sequential level that relate to the time between specified events in the sequence and are used to modify the cell's memory state. We also include two types of static (whole sequence level) features, one related to time and one not, which are combined with the encoder output. The experiments show that the proposed modeling framework does increase performance compared to standard cells.
rejected-papers
This paper presents an algorithm for combining various feature types when training recurrent networks. The features are handled by modifying the update rules and cell states based on the features' type -- dense, sparse, static, w/ decay, etc. Strengths - The model handles each feature according to its type and handles cell state and transitions appropriately. - Extends earlier work to handle more feature types, like sparse features. Weaknesses - Limited novelty. Models similar to various aspects of the proposed system have been presented in prior works. For example: TLSTM, which the authors use as a baseline. Although some components are novel, like the treatment of sparse features, contributions, in my opinion, are not sufficient to be accepted at ICLR. - Presentation: Confusing and not enough information for reproducing results; multiple reviewers raised concerns about presentation of the feature types and experimental results. There were suggestions to improve, which the authors did consider during revision, but some concerns still remain. In the end, the reviewers agreed about the limited novelty of this work, given existing literature. The recommendation, therefore, is to reject the paper.
train
[ "HJlU1Emc07", "SygXTpKYAm", "HJxnVI0_CQ", "rylW1isU07", "S1lzLxgbAm", "SyxE3UJb0Q", "rJxFHKFxA7", "B1xJivtxAX", "BJg382TuTX", "Skeo_ksLaX", "SkgQlMh92m", "ryeOGpb5hX" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your comments, our responses to each are below.\n\n - If I understand correctly, each sparse feature has its own memory state. This will pose a scalability issue when the number of features are large, e.g., those in electronic medical records where the number of features can go to tens of thousands, and...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 2 ]
[ "HJxnVI0_CQ", "S1lzLxgbAm", "B1xJivtxAX", "rJxFHKFxA7", "BJg382TuTX", "Skeo_ksLaX", "ryeOGpb5hX", "SkgQlMh92m", "iclr_2019_Hkxarj09Y7", "iclr_2019_Hkxarj09Y7", "iclr_2019_Hkxarj09Y7", "iclr_2019_Hkxarj09Y7" ]
iclr_2019_Hkxr1nCcFm
An investigation of model-free planning
The field of reinforcement learning (RL) is facing increasingly challenging domains with combinatorial complexity. For an RL agent to address these challenges, it is essential that it can plan effectively. Prior work has typically utilized an explicit model of the environment, combined with a specific planning algorithm (such as tree search). More recently, a new family of methods has been proposed that learns how to plan, by providing the structure for planning via an inductive bias in the function approximator (such as a tree structured neural network), trained end-to-end by a model-free RL algorithm. In this paper, we go even further, and demonstrate empirically that an entirely model-free approach, without special structure beyond standard neural network components such as convolutional networks and LSTMs, can learn to exhibit many of the hallmarks that we would typically associate with a model-based planner. We measure our agent's effectiveness at planning in terms of its ability to generalize across a combinatorial and irreversible state space, its data efficiency, and its ability to utilize additional thinking time. We find that our agent has the characteristics that one might expect to find in a planning algorithm. Furthermore, it exceeds the state-of-the-art in challenging combinatorial domains such as Sokoban and outperforms other model-free approaches that utilize strong inductive biases toward planning.
rejected-papers
The paper studies a convolutional LSTM (ConvLSTM) based model (DRC: Deep Repeated ConvLSTM) trained through reinforcement, and shows that it performs better than other model-free approaches, in particular in terms of generalization. The ability to generalize is attributed to being able to plan. This last part is not completely convincing. The paper is clearly written; the experiments are in 4 limited domains: Sokoban, Boxworld, MiniPacman, Gridworld. While diverse, the tasks are still all similar: navigation in top-down (2D) grid worlds. It is unclear what the limits of the reach of this study are. The experimental evidence presented here could also be interpreted as: local best-response recognition of shapes (Conv) and memory of such patterns and associated actions (LSTM) are sufficient for all those environments. Overall, this is an interesting direction, but it falls slightly short of being acceptable for publication at ICLR.
train
[ "BJgsIJ93am", "SkgIby9hp7", "H1x620KnTQ", "ryeEtRthTQ", "BJx8y00-pQ", "B1gJZic037", "r1ewUXWvnm" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewer for constructive feedback. We have written a common response to all three reviewers in a separate comment addressing the common issues. Below we address specific issues raised by AnonReviewer3.\n\nAbout comparison to “true planning”:\n\nLet us elaborate on model-based comparison...
[ -1, -1, -1, -1, 5, 5, 4 ]
[ -1, -1, -1, -1, 4, 3, 5 ]
[ "BJx8y00-pQ", "r1ewUXWvnm", "B1gJZic037", "iclr_2019_Hkxr1nCcFm", "iclr_2019_Hkxr1nCcFm", "iclr_2019_Hkxr1nCcFm", "iclr_2019_Hkxr1nCcFm" ]
iclr_2019_Hkxx3o0qFX
High Resolution and Fast Face Completion via Progressively Attentive GANs
Face completion is a challenging task with the difficulty level increasing significantly with respect to high resolution, the complexity of "holes" and the controllable attributes of filled-in fragments. Our system addresses the challenges by learning a fully end-to-end framework that trains generative adversarial networks (GANs) progressively from low resolution to high resolution with conditional vectors encoding controllable attributes. We design a novel coarse-to-fine attentive module network architecture. Our model is encouraged to attend on finer details while the network is growing to a higher resolution, thus being capable of showing progressive attention to different frequency components in a coarse-to-fine way. We term the module Frequency-oriented Attentive Module (FAM). Our system can complete faces with large structural and appearance variations using a single feed-forward pass of computation with mean inference time of 0.54 seconds for images at 1024x1024 resolution. A pilot human study shows our approach outperforms state-of-the-art face completion methods. The code will be released upon publication.
rejected-papers
All reviewers gave a 5 rating. The author rebuttal was not able to alter the consensus view of reviewers. See below for details.
train
[ "H1lHPv8iAm", "Skex8Gg067", "BkxC-fgA6X", "ByeFPbgATX", "rylzbX3ThX", "ryl5fByTjQ", "Syl6qnpoj7" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors claimed that the results in the paper are some \"typical cases\", neither completely random, nor cherry-picked bad results for CTX. However, \"typical cases\" are still very vague. The author still did some kind of selection. Since there are user studies, why not showing some top selected results and l...
[ -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, 5, 2, 5 ]
[ "Skex8Gg067", "Syl6qnpoj7", "ryl5fByTjQ", "rylzbX3ThX", "iclr_2019_Hkxx3o0qFX", "iclr_2019_Hkxx3o0qFX", "iclr_2019_Hkxx3o0qFX" ]
iclr_2019_HkzNXhC9KQ
Adaptive Sample-space & Adaptive Probability coding: a neural-network based approach for compression
We propose Adaptive Sample-space & Adaptive Probability (ASAP) coding, an efficient neural-network based method for lossy data compression. Our ASAP coding distinguishes itself from the conventional method based on adaptive arithmetic coding in that it models the probability distribution for the quantization process in such a way that one can conduct back-propagation for the quantization width that determines the support of the distribution. Our ASAP also trains the model with a novel, hyper-parameter free multiplicative loss for the rate-distortion tradeoff. With our ASAP encoder, we are able to compress the image files in the Kodak dataset to as low as one fifth the size of the JPEG-compressed image without compromising their visual quality, and achieved the state-of-the-art result in terms of MS-SSIM based rate-distortion tradeoff.
rejected-papers
This paper presents an interesting approach to image compression, as recognized by all reviewers. However, important concerns remain about evaluating the contribution: as the reviewers note, this requires disentangling what part of the improvement is due to the proposed approach and what part is due to the chosen loss and evaluation methods. While the authors have made a valuable effort adding experiments to incorporate reviewer suggestions with ablation studies, they do not convincingly show that the proposed approach truly improves over existing ones like Balle et al. The authors are encouraged to strengthen their work for future submission by putting particular emphasis on these questions.
train
[ "Hkl-uw_m1N", "BJe0-zhTRQ", "BkeJntYY0Q", "SygOmuttCX", "S1xac_YtAm", "ByxvpTVyTm", "BJgY5e1337", "r1xWuPcOh7" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In addition to the added analysis and observations we stated in the revision, we are also inferring from our results that the energy landscape of MS-SSIM contains multiple local extrema and it is, at least for the dataset we have studied, difficult to optimize. In fact, the model optimized for MS-SSIM is worse in...
[ -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "BJe0-zhTRQ", "BkeJntYY0Q", "r1xWuPcOh7", "ByxvpTVyTm", "BJgY5e1337", "iclr_2019_HkzNXhC9KQ", "iclr_2019_HkzNXhC9KQ", "iclr_2019_HkzNXhC9KQ" ]
iclr_2019_HkzOWnActX
Model-Agnostic Meta-Learning for Multimodal Task Distributions
Gradient-based meta-learners such as MAML (Finn et al., 2017) are able to learn a meta-prior from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. One important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. In this paper, we augment MAML with the capability to identify tasks sampled from a multimodal task distribution and adapt quickly through gradient updates. Specifically, we propose a multimodal MAML algorithm that is able to modulate its meta-learned prior according to the identified task, allowing faster adaptation. We evaluate the proposed model on a diverse set of problems including regression, few-shot image classification, and reinforcement learning. The results demonstrate the effectiveness of our model in modulating the meta-learned prior in response to the characteristics of tasks sampled from a multimodal distribution.
rejected-papers
This paper proposes a meta-learning algorithm that extends MAML, particularly focusing on multimodal task distributions. The paper is generally well-written, especially with the latest revisions, and the qualitative experiments show some interesting structure recovered. The primary weakness of the paper is that the experiments are largely on relatively simple benchmarks, such as Omniglot and low-dimensional regression problems. Meta-learning papers with convincing results have shown results on MiniImagenet, CIFAR, CelebA, and/or other natural image datasets. Hence, the paper would be more compelling with more difficult experimental settings. In the paper's current form, the reviewers and the AC agree that it does not meet the bar for ICLR.
val
[ "HJxPOIRvRm", "r1liF2MbRX", "SkgtIhfW0X", "Bkl3ziz-R7", "BygrA2zZCQ", "rkguWNGZCX", "rke6feGWR7", "B1lwX9L537", "HJeH-KpCn7", "SJe7jhshh7" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We sincerely appreciate the constructive reviews provided by all reviewers. We addressed the concerns in our response and revision. We believe our contributions toward multimodal model-agnostic meta-learning are solid. We would like to kindly ask the reviewers to let us know if there is any further comment towards...
[ -1, -1, -1, -1, -1, -1, -1, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "iclr_2019_HkzOWnActX", "HJeH-KpCn7", "HJeH-KpCn7", "HJeH-KpCn7", "iclr_2019_HkzOWnActX", "SJe7jhshh7", "B1lwX9L537", "iclr_2019_HkzOWnActX", "iclr_2019_HkzOWnActX", "iclr_2019_HkzOWnActX" ]
iclr_2019_HkzZBi0cFQ
Quantization for Rapid Deployment of Deep Neural Networks
This paper aims at rapid deployment of the state-of-the-art deep neural networks (DNNs) to energy efficient accelerators without time-consuming fine tuning or the availability of the full datasets. Converting DNNs in full precision to limited precision is essential in taking advantage of the accelerators with reduced memory footprint and computation power. However, such a task is not trivial since it often requires the full training and validation datasets for profiling the network statistics and fine tuning the networks to recover the accuracy lost after quantization. To address these issues, we propose a simple method recognizing channel-level distribution to reduce the quantization-induced accuracy loss and minimize the required image samples for profiling. We evaluated our method on eleven networks trained on the ImageNet classification benchmark and a network trained on the Pascal VOC object detection benchmark. The results prove that the networks can be quantized into 8-bit integer precision without fine tuning.
rejected-papers
This paper proposes an 8-bit quantization strategy for rapid DNN deployment. All three reviewers rated this paper as marginally below the acceptance threshold due to lack of novelty. 8-bit quantization (including channel-wise quantization) is a well-studied task, and the paper lacks comparisons with peer work.
train
[ "rJlAQp8Zk4", "S1gnBtlFRX", "B1eFz3plRX", "r1xlSn6l0X", "HygNgn6eRQ", "Hklvh2J_6m", "HkltNd_637", "ryebGUDO2X" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for the valuable comments. We will improve the manuscript in the final version by adding further analysis and results as you suggested.", "Thank you for the updates. These results indicate more consistent results for the non-MAX approaches for layer-wise methods. Overall, channel-wise methods...
[ -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "S1gnBtlFRX", "HygNgn6eRQ", "HkltNd_637", "ryebGUDO2X", "Hklvh2J_6m", "iclr_2019_HkzZBi0cFQ", "iclr_2019_HkzZBi0cFQ", "iclr_2019_HkzZBi0cFQ" ]
iclr_2019_HkzyX3CcFQ
Contextual Recurrent Convolutional Model for Robust Visual Learning
Feedforward convolutional neural networks have achieved great success in many computer vision tasks. While they validly imitate the hierarchical structure of the biological visual system, they still lack one essential architectural feature: contextual recurrent connections with feedback, which exist widely in the biological visual system. In this work, we designed a Contextual Recurrent Convolutional Network with this feature embedded in a standard CNN structure. We found that such feedback connections enable lower layers to ``rethink" their representations given top-down contextual information. We carefully studied the components of this network, and showed its robustness and superiority over feedforward baselines on tasks such as noise image classification, partially occluded object recognition, and fine-grained image classification. We believe this work is an important step toward bridging the gap between computer vision models and the real biological visual system.
rejected-papers
This paper explores the addition of feedback connections to popular CNN architectures. All three reviewers suggest rejecting the paper, pointing to limited novelty with respect to other recent publications, and unconvincing experiments. The AC agrees with the reviewers.
train
[ "H1ewdKtsR7", "HkgMSYFsCQ", "BkgWrpdiR7", "H1gYpXvjR7", "BJeQtfvjAX", "B1lRdulhnQ", "HygOWC_937", "rye4DnoY27" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "6. Then robustness to noise and adversarial attacks tested on ImageNet and with a modification of the architecture. According to the caption of Fig. 4, this is done with 5 timesteps this time!\n\nWe apologize that we actually used 2 unroll times model for ImageNet classification instead of 5 unroll times. We have ...
[ -1, -1, -1, -1, -1, 4, 3, 4 ]
[ -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "HkgMSYFsCQ", "rye4DnoY27", "HygOWC_937", "B1lRdulhnQ", "iclr_2019_HkzyX3CcFQ", "iclr_2019_HkzyX3CcFQ", "iclr_2019_HkzyX3CcFQ", "iclr_2019_HkzyX3CcFQ" ]
iclr_2019_Hy4R2oRqKQ
Canonical Correlation Analysis with Implicit Distributions
Canonical Correlation Analysis (CCA) is a ubiquitous technique that shows promising performance in multi-view learning problems. Due to the conjugacy of the prior and the likelihood, probabilistic CCA (PCCA) presents the posterior with an analytic solution, which provides a probabilistic interpretation for classic linear CCA. As multi-view data are usually complex in practice, nonlinear mappings are adopted to capture nonlinear dependency among the views. However, the interpretation provided in PCCA cannot be generalized to this nonlinear setting, as the distribution assumptions on the prior and the likelihood make it restrictive to capture nonlinear dependency. To overcome this bottleneck, in this paper, we provide a novel perspective for CCA based on implicit distributions. Specifically, we present minimum Conditional Mutual Information (CMI) as a new criterion to capture nonlinear dependency in multi-view learning problems. To eliminate the explicit distribution requirement in direct estimation of CMI, we derive an objective whose minimization implicitly leads to the proposed criterion. Based on this objective, we present an implicit probabilistic formulation for CCA, named Implicit CCA (ICCA), which provides a flexible framework to design CCA extensions with implicit distributions. As an instantiation, we present adversarial CCA (ACCA), a nonlinear CCA variant which benefits from consistent encoding achieved by adversarial learning. Quantitative correlation analysis and superior performance on the cross-view generation task demonstrate the superiority of the proposed ACCA.
rejected-papers
This manuscript proposes an implicit generative modeling approach for the non-linear CCA problem. One contribution is the proposal of Conditional Mutual Information (CMI) as a criterion to capture nonlinear dependency, resulting in an objective that can be solved using implicit distributions. The work seems to be well motivated and of interest to the community. The reviewers' and AC's opinions were mixed, and the rebuttal did not completely address the concerns. In particular, a reviewer pointed out an issue with a derivation in the paper, and the issue was not satisfactorily resolved by the authors. Some additional reading suggests that the misunderstanding may be partially due to incomplete notation and other issues with clarity of writing.
train
[ "BJlmc3EtyE", "Bke-nsVt14", "rkeP0gxSyV", "rklmIENEJ4", "rJgpPvg41V", "rJe19Og4kE", "BkeMHiIZ1N", "BJg5zj0eJN", "HkgJh-QKAX", "Byex_m7F0Q", "r1lKaUmtAX", "H1enRDXtRQ", "HygXIrnhn7", "HklJ3TmqnX", "Sye9zUNbsm" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Second, the motivation of our work can be explained as follows.\n\nBased on an in-depth discussion on the restrictive assumptions on the probabilistic interpretation of linear CCA (PCCA), we aim to re-decide the criteria to generalize the probabilistic understanding to complex nonlinear CCA models and relax the as...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "Bke-nsVt14", "iclr_2019_Hy4R2oRqKQ", "rklmIENEJ4", "rJgpPvg41V", "BJg5zj0eJN", "BkeMHiIZ1N", "r1lKaUmtAX", "iclr_2019_Hy4R2oRqKQ", "Sye9zUNbsm", "Sye9zUNbsm", "HklJ3TmqnX", "HygXIrnhn7", "iclr_2019_Hy4R2oRqKQ", "iclr_2019_Hy4R2oRqKQ", "iclr_2019_Hy4R2oRqKQ" ]
iclr_2019_HyEl3o05Fm
Stochastic Adversarial Video Prediction
Being able to predict what may happen in the future requires an in-depth understanding of the physical and causal rules that govern the world. A model that is able to do so has a number of appealing applications, from robotic planning to representation learning. However, learning to predict raw future observations, such as frames in a video, is exceedingly challenging—the ambiguous nature of the problem can cause a naively designed model to average together possible futures into a single, blurry prediction. Recently, this has been addressed by two distinct approaches: (a) latent variational variable models that explicitly model underlying stochasticity and (b) adversarially-trained models that aim to produce naturalistic images. However, a standard latent variable model can struggle to produce realistic results, and a standard adversarially-trained model underutilizes latent variables and fails to produce diverse predictions. We show that these distinct methods are in fact complementary. Combining the two produces predictions that look more realistic to human raters and better cover the range of possible futures. Our method outperforms prior works in these aspects.
rejected-papers
This paper shows that combining GAN and VAE for video prediction allows to trade off diversity and realism. The paper is well-written and the experimentation is careful, as noted by reviewers. However, reviewers agree that this combination is of limited novelty (having been used for images before). Reviewers also note that the empirical performance is not very much stronger than baselines. Overall, the novelty is too slight and the empirical results are not strong enough compared to baselines to justify acceptance based solely on empirical results.
train
[ "ryghv7rqAX", "Bygl7mS5CX", "SkgLa7Hc0X", "BygkjldqC7", "Bkx8_JcuCX", "H1gm0W0X0X", "rJexhWRXCm", "rygvTCLQRX", "BkgZKhUX07", "rkxwCK8chQ", "Hyen_JS9nX", "r1gzmeTDnm" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have included a revised plot in Figure 15 at the end of the Appendix (which will be incorporated to Figure 7) that fixes the KTH dataset preprocessing. Our VAE-only model now achieves substantially higher accuracy and diversity than SVG (Denton & Fergus, 2018). As before, the GAN-only model mode-collapses and g...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "BkgZKhUX07", "Bkx8_JcuCX", "iclr_2019_HyEl3o05Fm", "iclr_2019_HyEl3o05Fm", "rJexhWRXCm", "rJexhWRXCm", "r1gzmeTDnm", "Hyen_JS9nX", "rkxwCK8chQ", "iclr_2019_HyEl3o05Fm", "iclr_2019_HyEl3o05Fm", "iclr_2019_HyEl3o05Fm" ]
iclr_2019_HyG1_j0cYQ
Pumpout: A Meta Approach for Robustly Training Deep Neural Networks with Noisy Labels
It is challenging to train deep neural networks robustly on industrial-level data, since labels of such data are heavily noisy, and their label generation processes are normally agnostic. To handle these issues, by using the memorization effects of deep neural networks, we may train deep neural networks on the whole dataset for only the first few iterations. Then, we may employ early stopping or the small-loss trick to train them on selected instances. However, in such training procedures, deep neural networks inevitably memorize some noisy labels, which will degrade their generalization. In this paper, we propose a meta algorithm called Pumpout to overcome the problem of memorizing noisy labels. By using scaled stochastic gradient ascent, Pumpout actively squeezes out the negative effects of noisy labels from the training model, instead of passively forgetting these effects. We leverage Pumpout to upgrade two representative methods: MentorNet and Backward Correction. Empirical results on benchmark vision and text datasets demonstrate that Pumpout can significantly improve the robustness of representative methods.
rejected-papers
The paper presents an approach to mitigate the presence of noisy labels during training by trying to forget wrong labels. Reviewers pointed out several concerns, including lack of novelty, insufficient experimental support, and lack of theoretical support. The authors added some experiments and details to the experimental section, but the reviewers still consider this insufficient for acceptance. I concur with the reviewers to reject the paper.
train
[ "S1eMFCbt0m", "r1l3TQxVAQ", "BJlAGblVRQ", "ryltU0y4RX", "rJeM7llN0Q", "BJlSZVx4Am", "ByxhtWe4CX", "SJeMcexNRm", "HkeWAkx4Am", "Bkgn5kxN0Q", "H1l5aC1E0Q", "ByxT2oFn2X", "rJlfPNcx2X", "SJxMUmdJnm", "BJlibzhU57", "Bygu6To19X", "rJeNFlCVcQ", "BkefgvT4qm", "S1lJiNpNcQ", "H1eGMrT49m"...
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "author", "author", "public", "public", "public", "public...
[ "Dear Area Chair and Anonymous Reviewers,\n\nOn behalf of all co-authors, we appreciate your great efforts in our paper review. Except our point to point response to each reviewer (see details in following posts), we hope to highlight several important points that we revised in the high level.\n\n1. To further just...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_HyG1_j0cYQ", "SJxMUmdJnm", "SJeMcexNRm", "ByxT2oFn2X", "HkeWAkx4Am", "r1l3TQxVAQ", "BJlAGblVRQ", "rJeM7llN0Q", "Bkgn5kxN0Q", "rJlfPNcx2X", "ryltU0y4RX", "iclr_2019_HyG1_j0cYQ", "iclr_2019_HyG1_j0cYQ", "iclr_2019_HyG1_j0cYQ", "S1lJiNpNcQ", "iclr_2019_HyG1_j0cYQ", "S1luH11e5...
iclr_2019_HyGDdsCcFQ
Better Generalization with On-the-fly Dataset Denoising
Memorization in over-parameterized neural networks can severely hurt generalization in the presence of mislabeled examples. However, mislabeled examples are hard to avoid in extremely large datasets. We address this problem using the implicit regularization effect of stochastic gradient descent with large learning rates, which we find to be able to separate clean and mislabeled examples with remarkable success using loss statistics. We leverage this to identify and discard mislabeled examples on the fly using a threshold on their losses. This leads to On-the-fly Data Denoising (ODD), a simple yet effective algorithm that is robust to mislabeled examples, while introducing almost zero computational overhead. Empirical results demonstrate the effectiveness of ODD on several datasets containing artificial and real-world mislabeled examples.
rejected-papers
The paper aims to clean data samples with label noise during the training procedure. The reviewers and AC note the following potential weaknesses: (1) the assumption of uniform noise, which does not hold in practice, (2) marginal gains on real-world datasets, and (3) a highly empirical and ad-hoc approach. The AC thinks the proposed method is interesting and has potential, but decided that more substantial work is needed before publication.
val
[ "H1xJ3ivvyV", "r1gwyhfvkE", "B1xbQ1GP1V", "HJep-VMm1E", "SkemR3Nf14", "HkgFSQ1j2Q", "BkxMcDc5n7", "H1xTMRF2Tm", "rJxAl0YhTX", "BJg3dpt26m", "SkeVN3FnpX", "Hyx00AV5n7" ]
[ "author", "author", "public", "author", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "We evaluated all methods with multiple runs and compute the mean and standard deviation of accuracy (except WebVision, we do not have enough resources). The standard deviation is relatively small compared to the empirical gains. Moreover, we use the same training hyperparameters for ODD and ERM in all the comparis...
[ -1, -1, -1, -1, -1, 5, 6, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, -1, -1, -1, -1, 5 ]
[ "HkgFSQ1j2Q", "BkxMcDc5n7", "HJep-VMm1E", "SkemR3Nf14", "iclr_2019_HyGDdsCcFQ", "iclr_2019_HyGDdsCcFQ", "iclr_2019_HyGDdsCcFQ", "Hyx00AV5n7", "Hyx00AV5n7", "BkxMcDc5n7", "HkgFSQ1j2Q", "iclr_2019_HyGDdsCcFQ" ]
iclr_2019_HyGLy2RqtQ
Over-parameterization Improves Generalization in the XOR Detection Problem
Empirical evidence suggests that neural networks with ReLU activations generalize better with over-parameterization. However, there is currently no theoretical analysis that explains this observation. In this work, we study a simplified learning task with over-parameterized convolutional networks that empirically exhibits the same qualitative phenomenon. For this setting, we provide a theoretical analysis of the optimization and generalization performance of gradient descent. Specifically, we prove data-dependent sample complexity bounds which show that over-parameterization improves the generalization performance of gradient descent.
rejected-papers
This paper tackles the problem of learning with a one-hidden-layer non-overlapping conv net for the XOR detection problem. For this problem the paper shows that over-parameterized models perform better, giving insights into why larger neural networks generalize better - an interesting question to study. However, the reviewers opined that the setting considered in this paper is too specific to this XOR problem and the simplified network architecture, and that the techniques are not generalizable to other models. Generalizing these results to more complex architectures or other learning problems would make the paper more interesting.
train
[ "SJlCIAjMk4", "rye-5bzqhX", "HkxFAOH4CX", "HJl2_8BNCX", "SJxE-4rNA7", "B1gd_7HVA7", "rkgmTMLyTm", "r1xlUhQE3m" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We would appreciate it if the reviewer could elaborate on the revision. It is not clear what are the current concerns given our response.\n\nWe have addressed the main concern regarding the fixed label function and certain distribution. Our result holds for many distributions. We have emphasized the significance a...
[ -1, 5, -1, -1, -1, -1, 4, 5 ]
[ -1, 4, -1, -1, -1, -1, 4, 4 ]
[ "rye-5bzqhX", "iclr_2019_HyGLy2RqtQ", "rkgmTMLyTm", "rye-5bzqhX", "r1xlUhQE3m", "iclr_2019_HyGLy2RqtQ", "iclr_2019_HyGLy2RqtQ", "iclr_2019_HyGLy2RqtQ" ]
iclr_2019_HyGh4sR9YQ
Deep Neuroevolution: Genetic Algorithms are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning
Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. However, ES can be considered a gradient-based algorithm because it performs stochastic gradient descent via an operation similar to a finite-difference approximation of the gradient. That raises the question of whether non-gradient-based evolutionary algorithms can work at DNN scales. Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion. The Deep GA successfully evolves networks with over four million free parameters, the largest neural networks ever evolved with a traditional evolutionary algorithm. These results (1) expand our sense of the scale at which GAs can operate, (2) suggest intriguingly that in some cases following the gradient is not the best choice for optimizing performance, and (3) make immediately available the multitude of neuroevolution techniques that improve performance. We demonstrate the latter by showing that combining DNNs with novelty search, which encourages exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms (e.g., DQN, A3C, ES, and the GA) fail. Additionally, the Deep GA is faster than ES, A3C, and DQN (it can train Atari in ~4 hours on one workstation or ~1 hour distributed on 720 cores), and enables a state-of-the-art, up to 10,000-fold compact encoding technique.
rejected-papers
This paper presents an empirical study of the applicability of genetic algorithms to deep RL problems. Major concerns about the paper include: 1. the paper's organization, especially the presentation of the results, is hard to follow; 2. the results are not strong enough to support the claims made in this paper, as GAs are currently not strong enough compared to the state-of-the-art RL algorithms; 3. it is not clear why or when GAs are better than RL or ES, and the paper lacks insight. Overall, this paper cannot be accepted yet.
train
[ "HylDCSibJN", "Bkx-Z20_pm", "SyeRCcQpAX", "HJg26wuJsX", "S1lOWOH3RX", "H1x5l1WsA7", "B1enHygsR7", "r1x2QydqCQ", "Hygmy1d9RX", "BygojRv9Cm", "HkeZ_0Dq07", "BJlqBAvqA7", "r1lp3Fqdpm", "SyxONoAPpX", "BJx1Itz827", "HkgVkG2fcm", "HkxMHB9x9X" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you. We greatly appreciate your open-minded consideration and that you raised your score for our paper. ", "The authors show that using a simple genetic algorithm to optimize the weights of a large DNN parameterizing the action-value function can result in competitive policies. The GA uses selection trunca...
[ -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 3, -1, -1 ]
[ -1, 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, -1, -1 ]
[ "SyeRCcQpAX", "iclr_2019_HyGh4sR9YQ", "BygojRv9Cm", "iclr_2019_HyGh4sR9YQ", "B1enHygsR7", "BJlqBAvqA7", "HkeZ_0Dq07", "SyxONoAPpX", "r1lp3Fqdpm", "Bkx-Z20_pm", "HJg26wuJsX", "BJx1Itz827", "iclr_2019_HyGh4sR9YQ", "iclr_2019_HyGh4sR9YQ", "iclr_2019_HyGh4sR9YQ", "HkxMHB9x9X", "iclr_2019...
iclr_2019_HyGySsAct7
Targeted Adversarial Examples for Black Box Audio Systems
The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence. Current work on fooling ASR systems has focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial generation, combining the approaches of both genetic algorithms and gradient estimation to solve the task. We achieve an 89.25% targeted attack similarity after 3000 generations while maintaining 94.6% audio file similarity.
rejected-papers
The authors propose an algorithm for generating adversarial examples for ASR systems, treating them as black boxes. Strengths - One of the early works to demonstrate black-box attacks on ASR systems that recognize phrases instead of isolated words. Weaknesses - The approach assumes that the logits are available, which may not be realistic for most ASR systems when they are used in practice -- typically only the final transcription is available. - Although the technique is applied to continuous speech, the algorithmic improvements over the prior work of Alzantot et al. are minimal. - Evaluation is weak. For example, cross correlation cannot completely capture the adversarial nature of a generated audio sample. - The authors use a genetic algorithm for generating a new set of examples which are pruned and mutated. It’s not clear what guarantees exist that the algorithm will eventually succeed. The reviewers agree that the presented work puts forth an interesting research direction. But given the deficiencies of the current submission as pointed out by the reviewers, the recommendation is to reject the paper.
train
[ "B1eLktzOnm", "ryxlDFpLp7", "ryeSGI6L67", "Sklu5t3IaQ", "r1g3c6VgpX", "SkxXsTqT3m" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "PAPER SUMMARY:\n\nThis paper introduces a biologically motivated black-box attack algorithm. \nThe target model in this case is DNN applied to the ASR context (automatic speech recognition system). \n\nNOVELTY & SIGNIFICANCE:\n\nThe proposed approach extends the previous genetic approach of (Alzantot et al., 2018)...
[ 3, -1, -1, -1, 4, 6 ]
[ 4, -1, -1, -1, 3, 4 ]
[ "iclr_2019_HyGySsAct7", "r1g3c6VgpX", "B1eLktzOnm", "SkxXsTqT3m", "iclr_2019_HyGySsAct7", "iclr_2019_HyGySsAct7" ]
iclr_2019_HyM8V2A9Km
ACTRCE: Augmenting Experience via Teacher’s Advice
Sparse reward is one of the most challenging problems in reinforcement learning (RL). Hindsight Experience Replay (HER) attempts to address this issue by converting a failure experience into a successful one by relabeling the goals. Despite its effectiveness, HER has limited applicability because it lacks a compact and universal goal representation. We present Augmenting experienCe via TeacheR's adviCE (ACTRCE), an efficient reinforcement learning technique that extends the HER framework using natural language as the goal representation. We first analyze the differences among goal representations, and show that ACTRCE can efficiently solve difficult reinforcement learning problems in challenging 3D navigation tasks, whereas HER with a non-language goal representation fails to learn. We also show that with language goal representations, the agent can generalize to unseen instructions, and even generalize to instructions with unseen lexicons. We further demonstrate that it is crucial to use hindsight advice to solve challenging tasks, but we also found that a small amount of hindsight advice is sufficient for learning to take off, showing the practical aspect of the method.
rejected-papers
This paper was reviewed by three experts (I assure the authors R3 is indeed familiar with RL and this area). Initially, the reviews were mixed with several concerns raised. After the author response, R2 and R3 recommend rejecting the paper, and R1 is unwilling to defend/champion/support it (not visible to the authors). The AC agrees with the concerns raised (in particular by R2) and finds no basis for overruling this recommendation. We encourage the authors to incorporate reviewer feedback and submit a stronger manuscript at a future venue.
train
[ "SJg3i9FpJ4", "rygbwTvyy4", "SJlXVLXkJN", "H1gEaMA1nX", "BkgIAzNs0Q", "HJepDTmj07", "rye9Dr5FAX", "H1eOc0EDRQ", "SJeYHWVvRm", "SylOg-VwA7", "Hke7RgNv0X", "r1gQngEvCQ", "H1x4lSQDAm", "B1gVMQ7wRX", "r1xEsMXDAX", "BkxtS7Ko2X", "rJgEVQD9nm" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nI would like to thank the authors for their detailed response and paper updates. In particular, the table illustrating the comparison with [1] is instructive and should definitely be part of the paper. My concern '2a' still stands. Focusing on the VizDoom task, it really seems to me that this paper has applied H...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "r1xEsMXDAX", "SJlXVLXkJN", "H1x4lSQDAm", "iclr_2019_HyM8V2A9Km", "HJepDTmj07", "rye9Dr5FAX", "SJeYHWVvRm", "iclr_2019_HyM8V2A9Km", "H1gEaMA1nX", "H1gEaMA1nX", "H1gEaMA1nX", "H1gEaMA1nX", "rJgEVQD9nm", "BkxtS7Ko2X", "BkxtS7Ko2X", "iclr_2019_HyM8V2A9Km", "iclr_2019_HyM8V2A9Km" ]
iclr_2019_HyMRUiC9YX
Exploring and Enhancing the Transferability of Adversarial Examples
State-of-the-art deep neural networks are vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can \textit{transfer across models}: adversarial examples generated for a specific model will often mislead other unseen models. Consequently, an adversary can leverage this to attack deployed systems without any queries, which severely hinders the application of deep learning, especially in safety-critical areas. In this work, we empirically study two classes of factors that might influence the transferability of adversarial examples. One concerns model-specific factors, including network architecture, model capacity and test accuracy. The other is the local smoothness of the loss surface for constructing adversarial examples. Inspired by these understandings of the transferability of adversarial examples, we then propose a simple but effective strategy to enhance transferability, whose effectiveness is confirmed by a variety of experiments on both the CIFAR-10 and ImageNet datasets.
rejected-papers
While the paper contains significant information, most insights have already been revealed in previous work as noted by R1. The empirical novelty is therefore limited and the authors do not provide theoretical analysis to complement this.
test
[ "HyeuTWOE1E", "BJe1SGwW1E", "HkliEwcY07", "r1ltev9YCQ", "BJgtK89FCX", "Bkl4MaqOnm", "SJg6llgw2X", "HkehbJYgnQ" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the reference, and we will add discussions about universal adversarial perturbations. However, in a nutshell, Moosavi-Dezfooli 2017 did not study the “transferability of adversarial examples”.\n\nMossavi-Dezfooli 2017 showed the existence of a “universal” (image-agnostic) perturbation that causes most o...
[ -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "BJe1SGwW1E", "BJgtK89FCX", "HkehbJYgnQ", "SJg6llgw2X", "Bkl4MaqOnm", "iclr_2019_HyMRUiC9YX", "iclr_2019_HyMRUiC9YX", "iclr_2019_HyMRUiC9YX" ]
iclr_2019_HyMRaoAqKX
Implicit Autoencoders
In this paper, we describe the "implicit autoencoder" (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions. We use two generative adversarial networks to define the reconstruction and the regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning. Using implicit distributions allows us to learn more expressive posterior and conditional likelihood distributions for the autoencoder. Learning an expressive conditional likelihood distribution enables the latent code to only capture the abstract and high-level information of the data, while the remaining information is captured by the implicit conditional likelihood distribution. For example, we show that implicit autoencoders can disentangle the global and local information, and perform deterministic or stochastic reconstructions of the images. We further show that implicit autoencoders can disentangle discrete underlying factors of variation from the continuous factors in an unsupervised fashion, and perform clustering and semi-supervised learning.
rejected-papers
The paper proposes an original idea for training a generative model based on an objective inspired by a VAE-like evidence lower bound (ELBO), reformulated as two KL terms, which are then approximately optimized by two GANs. They thus use implicit distributions for both the posterior and the conditional likelihood. The idea is original and intriguing. But reviewers and AC found that the paper currently suffered from the following weaknesses: a) The presentation of the approach is unclear, due primarily to the fact that it doesn't separate unambiguously enough, throughout, the VAE-like ELBO *inspiration* from what happens when replacing the two KL terms by GANs, i.e. the actual algorithm used. This is a big conceptual jump that deserves to be discussed and analyzed more carefully and thoroughly. b) Reviewers agreed that the paper does not sufficiently evaluate the approach in comparative experiments with alternatives, in particular its generative capabilities, in addition to the provided evaluations of the learned representation on downstream tasks. Reviewers did not reach a clear consensus on this paper, although discussion led two of them to revise their assessment scores slightly towards each other's. One reviewer judged the paper currently too confusing (point a), putting more weight on this aspect than the other reviewers. Based on the paper and the review discussion thread, the AC judges that while it is an original, interesting and potentially promising approach, its presentation can and should be much clarified and improved.
train
[ "B1gi2qkAkV", "SylTQ_kR1N", "rJl-qNJ0yN", "HJg9eC9Anm", "Sklgza8C3m", "BJxHUqBopm", "H1gmOkF5T7", "HJxMkUiu6m", "rke6Qaqu6m", "BkxEplmjnm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "We thank the reviewer again for the feedback. We were wondering if our rebuttal addressed the concerns of the reviewer.", "We noticed that the reviewer has reduced the rating without modifying the review. We were wondering if there is any new concern that we can address.", "We thank the reviewer for updating t...
[ -1, -1, -1, 3, 6, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, 3, 4, -1, -1, -1, -1, 3 ]
[ "BkxEplmjnm", "Sklgza8C3m", "HJg9eC9Anm", "iclr_2019_HyMRaoAqKX", "iclr_2019_HyMRaoAqKX", "BkxEplmjnm", "Sklgza8C3m", "HJg9eC9Anm", "HJg9eC9Anm", "iclr_2019_HyMRaoAqKX" ]
iclr_2019_HyMS8iRcK7
SEQUENCE MODELLING WITH AUTO-ADDRESSING AND RECURRENT MEMORY INTEGRATING NETWORKS
Processing sequential data with long-term dependencies and learning complex transitions are two major challenges in many deep learning applications. In this paper, we introduce a novel architecture, the Auto-addressing and Recurrent Memory Integrating Network (ARMIN), to address these issues. The ARMIN explicitly stores previous hidden states and recurrently integrates useful past states into the current time-step via an efficient memory addressing mechanism. Compared to existing memory networks, the ARMIN is more lightweight and inference-time efficient. Our network can be trained on small slices of long sequential data, which boosts its training speed. Experiments on various tasks demonstrate the efficiency of the ARMIN architecture. Code and models will be available.
rejected-papers
There have been many variants of memory-augmented neural nets since around 2014, when NTM, attention-based NMT and MemNet were proposed. It is indeed still an interesting and important direction of research, but the bar for introducing yet another variant of memory-augmented neural nets has been significantly raised, which is a sentiment shared by the reviewers. The authors' response has not swayed the reviewers' opinions, and I am sticking to the reviewers' decisions. I believe a more streamlined and systematic comparison among different memory-augmented networks across many different benchmarks (e.g., using the same set of latest variants of memory nets across all the benchmarks) would make this a better paper and increase its chance of acceptance.
train
[ "SJg795yK3m", "HyeCHvIlx4", "r1xWkXNHyE", "rkgnMFwKA7", "HklQJxwF07", "rJlwfsSK0Q", "HkedbPIY0Q", "S1xBW3wWT7", "BJlwZjE9n7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposed a RNN with skip-connection (external memory) to past hidden states, this is a slightly different version of the TARDIS network. The authors experimented on PTB and a temporal action detection method.\n\nNovelty:\n\nI dont see a lot of novelty to the method. The authors proposed a method very sim...
[ 5, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2019_HyMS8iRcK7", "rkgnMFwKA7", "rJlwfsSK0Q", "SJg795yK3m", "BJlwZjE9n7", "iclr_2019_HyMS8iRcK7", "S1xBW3wWT7", "iclr_2019_HyMS8iRcK7", "iclr_2019_HyMS8iRcK7" ]
iclr_2019_HyMnYiR9Y7
DOMAIN ADAPTATION VIA DISTRIBUTION AND REPRESENTATION MATCHING: A CASE STUDY ON TRAINING DATA SELECTION VIA REINFORCEMENT LEARNING
Supervised models suffer from domain shift, where distribution mismatch across domains greatly affects model performance. In particular, noise scattered in each domain plays a crucial role in representing such distributions, especially in various natural language processing (NLP) tasks. To address this issue, training data selection (TDS) has been proven to be a promising way to train supervised models with higher performance and efficiency. Following the TDS methodology, in this paper we propose a general data selection framework that performs representation learning and distribution matching simultaneously for domain adaptation of neural models. In doing so, we formulate TDS as a novel selection process based on a distribution learned from the input data, which is produced by a trainable selection distribution generator (SDG) that is optimized by reinforcement learning (RL). The model trained on the selected data not only predicts the target-domain data in a specific task, but also provides input for the value function of the RL. Experiments are conducted on three typical NLP tasks, namely part-of-speech tagging, dependency parsing, and sentiment analysis. Results demonstrate the validity and effectiveness of our approach.
rejected-papers
This paper investigates a data selection framework for domain adaptation based on reinforcement learning. Pros: The paper presents an approach that can dynamically adjust the data selection strategy via reinforcement learning. More specifically, the RL agent gets a reward by selecting a new sample that makes the source training data distribution closer to the target distribution, where the distribution comparison is based on the feature representations that will be used by the prediction classifier. While the use of RL for data selection is not entirely new, the specific method proposed by the paper is reasonably novel and interesting. Cons: The use of RL is not clearly motivated and justified (R1,R3), and the method presented in this paper is rather hard to follow and might be overly complex (R1). One fair point R1 raised is the need for a more clean-cut empirical evaluation that demonstrates how RL performs clearly better than greedy optimization. The authors came back with additional analysis in Section 4.2 to address this question, but R1 feels it is not clear how to interpret the new analysis (e.g., Fig 3). A more thorough ablation study of the proposed model might have addressed the reviewer's question more clearly. In addition, all reviewers felt that the baselines are not convincingly strong, though each reviewer pointed out somewhat different aspects of the baselines. R3 is most concerned about the baselines not being state-of-the-art, and the rebuttal did not address R3's concern well enough. Verdict: Reject. A potentially interesting idea, but 2/3 reviewers share strong concerns about the empirical results and the overall clarity of the paper.
train
[ "SygLERUjk4", "ryxkLEtc2X", "Hye68GV93m", "S1xEN_Sfhm" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Similar work on using Reinforcement learning for sample selection published (http://www.ecmlpkdd2018.org/wp-content/uploads/2018/09/81.pdf) and need to be referred.", "Response to author comments:\n\nUnfortunately I am still significantly unclear on why RL is useful here. The author response attempts to clarif...
[ -1, 4, 7, 5 ]
[ -1, 2, 3, 4 ]
[ "iclr_2019_HyMnYiR9Y7", "iclr_2019_HyMnYiR9Y7", "iclr_2019_HyMnYiR9Y7", "iclr_2019_HyMnYiR9Y7" ]
iclr_2019_HyMuaiAqY7
Deli-Fisher GAN: Stable and Efficient Image Generation With Structured Latent Generative Space
Generative Adversarial Networks (GANs) are powerful tools for realistic image generation. However, a major drawback of GANs is that they are especially hard to train, often requiring large amounts of data and long training times. In this paper we propose the Deli-Fisher GAN, a GAN that generates photo-realistic images by enforcing structure on the latent generative space using approaches similar to those in \cite{deligan}. The structure of the latent space we consider in this paper is modeled as a mixture of Gaussians, whose parameters are learned during training. Furthermore, to improve stability and efficiency, we use the Fisher Integral Probability Metric as the divergence measure in our GAN model, instead of the Jensen-Shannon divergence. We show by experiments that the Deli-Fisher GAN performs better than DCGAN, WGAN, and the Fisher GAN as measured by Inception score.
rejected-papers
This paper combines two recently proposed ideas for GAN training: Fisher integral probability metrics, and the Deli-GAN. As the reviewers have pointed out, the writing is somewhat haphazard, and it's hard to identify the key contributions, why the proposed method is expected to help, and so on. The experiments are rather minimal: a single experiment comparing Inception scores to previous models on CIFAR; Inception scores are not a great measure, and the experiments don't yield much insight into where the improvement comes from. No author response was given. I don't think this paper is ready for publication in ICLR.
train
[ "ByxlwZX80m", "Byxo1A_wa7", "S1xar3L4p7", "ryesmzVRh7" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you so much for your pertinent reviews and suggestions. We do realize the problems in writing(time is a bit rushed before submission) and will make effort to improve on experiments for submission to future venues. We will also try to modify the structure of the paper and add comparisons to the existing mod...
[ -1, 2, 2, 3 ]
[ -1, 4, 5, 5 ]
[ "iclr_2019_HyMuaiAqY7", "iclr_2019_HyMuaiAqY7", "iclr_2019_HyMuaiAqY7", "iclr_2019_HyMuaiAqY7" ]
iclr_2019_HyMxAi05Km
Dual Learning: Theoretical Study and Algorithmic Extensions
Dual learning has been successfully applied in many machine learning applications, including machine translation, image-to-image transformation, etc. The high-level idea of dual learning is very intuitive: if we map an x from one domain to another and then map it back, we should recover the original x. Although its effectiveness has been empirically verified, theoretical understanding of dual learning is still missing. In this paper, we conduct a theoretical study to understand why and when dual learning can improve a mapping function. Based on the theoretical discoveries, we extend dual learning by introducing more related mappings and propose highly symmetric frameworks, cycle dual learning and multipath dual learning, in both of which we can leverage the feedback signals from additional domains to improve the qualities of the mappings. We prove that both cycle dual learning and multipath dual learning can boost the performance of standard dual learning under mild conditions. Experiments on WMT 14 English↔German and MultiUN English↔French translations verify our theoretical findings on dual learning, and the results on the translations among English, French, and Spanish of MultiUN demonstrate the efficacy of cycle dual learning and multipath dual learning.
rejected-papers
The reviewers vary in their scores but overall there is agreement that this paper is not ready for acceptance.
val
[ "BJgJmn5YCm", "Ske8LgLJ6m", "SygP9_2p2X", "rye3SauchX" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for the very helpful comments and suggestions! We will revise our paper accordingly.", "The paper addresses a means of boosting the accuracy of automatic translators (sentences) by training dual models (a.k.a. language A to B, B to A), multipath (e.g. A to B to C) and cyclical (e.g. A to B...
[ -1, 6, 2, 5 ]
[ -1, 3, 4, 3 ]
[ "iclr_2019_HyMxAi05Km", "iclr_2019_HyMxAi05Km", "iclr_2019_HyMxAi05Km", "iclr_2019_HyMxAi05Km" ]
iclr_2019_HyNbtiR9YX
Unsupervised Document Representation using Partition Word-Vectors Averaging
Learning an effective document-level representation is essential in many important NLP tasks such as document classification, summarization, etc. Recent research has shown that simple weighted averaging of word vectors is an effective way to represent sentences, often outperforming complicated seq2seq neural models in many tasks. While it is desirable to use the same method to represent documents as well, unfortunately, the effectiveness is lost when representing long documents involving multiple sentences. One reason for this degradation is the fact that a longer document is likely to contain words from many different themes (or topics), and hence creating a single vector while ignoring all the thematic structure is unlikely to yield an effective representation of the document. This problem is less acute in single sentences and other short text fragments where the presence of a single theme/topic is most likely. To overcome this problem, in this paper we present P-SIF, a partitioned word averaging model to represent long documents. P-SIF retains the simplicity of simple weighted word averaging while taking a document's thematic structure into account. In particular, P-SIF learns topic-specific vectors from a document and finally concatenates them all to represent the overall document. Through our experiments over multiple real-world datasets and tasks, we demonstrate P-SIF's effectiveness compared to simple weighted averaging and many other state-of-the-art baselines. We also show that P-SIF is particularly effective at representing long multi-sentence documents. We will release P-SIF's embedding source code and datasets for reproducing our results.
rejected-papers
This paper proposes a document classification algorithm based on partitioned word-vector averaging. I agree with even the most positive reviewer: more experiments would be good. This is a well-developed, long-standing area.
val
[ "Bye99dVqRQ", "BJeR1Cv0C7", "B1eclsU5CX", "SyxGNMhchQ", "ByeLTINcCm", "rJxWqIE50Q", "S1lqkXVcCm", "SJlh3pvcpX", "rkl_3oB5Tm", "HJxkA-Zea7", "HJgH7vkxam", "rklLsLyeaQ", "SyxrQI1ga7", "rJxruoSypm", "Hkeha2EqnX" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We have tried to address all the reviewers’ concerns adequately and believe that our paper has significantly improved. We have updated the paper with recent results in the Appendix.\n\n1. We added a Proof. Sketch for our embedding following (Arora et. al. 2017) paper. See Appendix I.\n\n2. We showed that many embe...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2019_HyNbtiR9YX", "B1eclsU5CX", "S1lqkXVcCm", "iclr_2019_HyNbtiR9YX", "HJgH7vkxam", "SyxrQI1ga7", "SJlh3pvcpX", "rkl_3oB5Tm", "HJxkA-Zea7", "rklLsLyeaQ", "Hkeha2EqnX", "SyxGNMhchQ", "rJxruoSypm", "iclr_2019_HyNbtiR9YX", "iclr_2019_HyNbtiR9YX" ]
iclr_2019_HyNmRiCqtm
CDeepEx: Contrastive Deep Explanations
We propose a method which can visually explain the classification decisions of deep neural networks (DNNs). There are many proposed methods in machine learning and computer vision seeking to clarify the decisions of machine learning black boxes, specifically DNNs. All of these methods try to gain insight into why the network "chose class A" as an answer. Humans, when searching for explanations, ask two types of questions. The first question is, "Why did you choose this answer?" The second question asks, "Why did you not choose answer B over A?" The previously proposed methods cannot provide the latter either directly or efficiently. We introduce a method capable of answering the second question both directly and efficiently. In this work, we limit the inputs to be images. In general, the proposed method generates explanations in the input space of any model capable of efficient evaluation and gradient evaluation. We provide results showing the superiority of this approach for gaining insight into the inner representations of machine learning models.
rejected-papers
Paper studies an important problem -- producing contrastive explanations (why did the network predict class B not A?). Two major concerns raised by reviewers -- the use of one learned "black-box" method to explain another and lack of human-studies to quantify results -- make it very difficult to accept this manuscript in its current state. We encourage the authors to incorporate reviewer feedback to make this manuscript stronger for a future submission; this is an important research topic.
val
[ "rJx5ghGShX", "rkg0nroYn7", "ryxftKl0RX", "H1x6HWaF07", "H1eiGWpt0m", "BJlWvUII0m", "BJg3248ICQ", "HklmqfFmAX", "B1g1tMt7AX", "HyxLHfF7RQ", "B1xsXfY7RQ", "S1exWztQR7", "ryxtBh5_h7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The idea proposed in this paper is to aid in understanding networks by showing why a network chose class A over class B. To do so, the goal is to find an example that is close to the original sample, but belongs to the other class. As is mentioned in the paper, it is crucial to stay on the data manifold for this ...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2019_HyNmRiCqtm", "iclr_2019_HyNmRiCqtm", "B1xsXfY7RQ", "BJlWvUII0m", "BJg3248ICQ", "B1g1tMt7AX", "HklmqfFmAX", "rJx5ghGShX", "rJx5ghGShX", "ryxtBh5_h7", "rkg0nroYn7", "iclr_2019_HyNmRiCqtm", "iclr_2019_HyNmRiCqtm" ]
iclr_2019_HyVbhi0cYX
Complexity of Training ReLU Neural Networks
In this paper, we explore some basic questions on the complexity of training neural networks with the ReLU activation function. We show that it is NP-hard to train a two-hidden-layer feedforward ReLU neural network. If the dimension d of the data is fixed, then we show that there exists a polynomial-time algorithm for the same training problem. We also show that if sufficient over-parameterization is provided in the first hidden layer of the ReLU neural network, then there is a polynomial-time algorithm which finds weights such that the output of the over-parameterized ReLU neural network matches the output of the given data.
rejected-papers
Dear authors, All reviewers agreed that, while the problem considered was of interest, the theoretical result presented in this work was of too limited scope to be of interest for the ICLR audience. Based on their comments, you might want to consider a more theoretically-oriented venue for such a submission.
test
[ "SkgrWz6n2m", "rylXTxh92X", "SkleFFb9nX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper claims results showing ReLU networks (or a particular architecture for that) are NP-hard to learn. The authors claim that results that essentially show this (such as those by Livni et al.) are unsatisfactory as they only show this for ReLU networks that are fully connected. However, the authors fail to ...
[ 3, 5, 4 ]
[ 5, 5, 3 ]
[ "iclr_2019_HyVbhi0cYX", "iclr_2019_HyVbhi0cYX", "iclr_2019_HyVbhi0cYX" ]
iclr_2019_HyVxPsC9tm
DynCNN: An Effective Dynamic Architecture on Convolutional Neural Network for Surveillance Videos
Large-scale surveillance video analysis is becoming important with the development of intelligent cities. The heavy computational resources necessary for state-of-the-art deep learning models make real-time processing hard to implement. This paper exploits the high scene similarity generally present in surveillance videos and proposes dynamic convolution, which reuses previous feature maps to reduce the amount of computation. We tested the proposed method on 45 surveillance videos with various scenes. The experimental results show that dynamic convolution can reduce up to 75.7% of FLOPs while preserving precision within 0.7% mAP. Furthermore, dynamic convolution can speed up processing time by up to 2.2 times.
rejected-papers
The paper proposes a method for saving computation in surveillance videos (videos without camera motion) by re-using features from parts of the image that do not change. The results show that this significantly saves computation time, which is a big benefit, given also the amount of surveillance video input available for processing nowadays. Reviewers request comparisons to obvious baselines, e.g., selecting a subset of frames for processing or performing a low level pixel matching to select the pixels to compute new features on. Such experiments would make this paper much stronger. There is no rebuttal and thus no ground for discussion or acceptance.
train
[ "SkxNGokyp7", "ryx2A8V52X", "BJglEsgqnX", "r1x9q8dOiQ", "S1g_jdUVoX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "In this paper, the authors propose a dynamic convolution model by exploiting the inter-scene similarity. The computation cost is reduced significantly by reusing the feature map. In general, the paper is present clearly, but the technical contribution is rather incremental. I have several concerns:\n1. The authors...
[ 3, 4, 4, -1, -1 ]
[ 4, 4, 3, -1, -1 ]
[ "iclr_2019_HyVxPsC9tm", "iclr_2019_HyVxPsC9tm", "iclr_2019_HyVxPsC9tm", "S1g_jdUVoX", "iclr_2019_HyVxPsC9tm" ]
iclr_2019_Hye-LiR5Y7
SOSELETO: A Unified Approach to Transfer Learning and Training with Noisy Labels
We present SOSELETO (SOurce SELEction for Target Optimization), a new method for exploiting a source dataset to solve a classification problem on a target dataset. SOSELETO is based on the following simple intuition: some source examples are more informative than others for the target problem. To capture this intuition, source samples are each given weights; these weights are solved for jointly with the source and target classification problems via a bilevel optimization scheme. The target therefore gets to choose the source samples which are most informative for its own classification task. Furthermore, the bilevel nature of the optimization acts as a kind of regularization on the target, mitigating overfitting. SOSELETO may be applied to both classic transfer learning, as well as the problem of training on datasets with noisy labels; we show state of the art results on both of these problems.
rejected-papers
The paper proposes an approach for transfer learning by assigning weights to source samples and learning these jointly with the network parameters. Reviewers had a few concerns about the experiments, some of which have been addressed by the authors. The proposed approach is simple, which is a positive, but it is not evaluated on any of the standard transfer learning benchmarks (e.g., the ones used in Kornblith et al., 2018, "Do Better ImageNet Models Transfer Better?"). The tasks used in the paper, such as CIFAR noisy -> CIFAR and SVHN0-4 -> MNIST5-9, are artificially constructed, and the paper falls short of demonstrating the effectiveness of the approach in real settings. The paper is on the borderline with current scores, and the lack of standard transfer learning benchmarks in the evaluations makes me lean towards not recommending acceptance.
test
[ "H1gW0QzC37", "B1l0QvUS0Q", "BkxTJwLH0m", "SJxcaULr07", "rygIrL8SC7", "SJxl2E_9nX", "rygGGGMu3Q" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "PROS:\n* This is an interesting approach of assigning contribution weights to each source sample.\n* Could be very helpful for tasks where we have a noisy and a (small) clean dataset.\n* The method seems to be performing well for the tasks chosen, especially for the CIFAR experiments.\n* Simple idea and relatively...
[ 7, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_Hye-LiR5Y7", "rygGGGMu3Q", "SJxl2E_9nX", "H1gW0QzC37", "iclr_2019_Hye-LiR5Y7", "iclr_2019_Hye-LiR5Y7", "iclr_2019_Hye-LiR5Y7" ]
iclr_2019_Hye64hA9tm
Measuring Density and Similarity of Task Relevant Information in Neural Representations
Neural models achieve state-of-the-art performance due to their ability to extract salient features useful to downstream tasks. However, our understanding of how this task-relevant information is included in these networks is still incomplete. In this paper, we examine two questions: (1) how densely is information included in extracted representations, and (2) how similar is the encoding of relevant information between related tasks. We propose metrics to measure information density and cross-task similarity, and perform an extensive analysis in the domain of natural language processing, using four varieties of sentence representation and 13 tasks. We also demonstrate how the proposed analysis tools can find immediate use in choosing tasks for transfer learning.
rejected-papers
This paper addresses important general questions about how linear classifiers use features, and about the transferability of those features across tasks. The paper presents a specific new analysis method, and demonstrates it on a family of NLP tasks. All four reviewers (counting the emergency fourth review) found the general direction of research to be interesting and worthwhile, but all four shared several serious concerns about the impact and soundness of the proposed method. The impact concerns mostly dealt with the observation that the method is specific to linear classifiers, and that it's only applicable to tasks for which a substantial amount of training data is available. As the AC, I'm willing to accept that it should still be possible to conduct an informative analysis under these conditions, but I'm more concerned about the soundness issues: The reviewers were not convinced that a method based on the counting of specific features was appropriate for the proposed setting (due to rotation sensitivity, among other issues), and did not find that the experiments were sufficiently extensive to overcome these doubts.
train
[ "SyxZ_mXK0m", "r1gQHcar0X", "BJgm9FTSCm", "BkgH9OTSAQ", "Skgy6Qpr07", "rklXOqkaaQ", "SJecygkZp7", "BklIRmpxa7", "rklOwd45hm", "SyefBu5O3m" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I appreciate the time you took to explain your reasoning about simpler methods, and I look forward to the comparisons you mentioned.\n\nIt also does sound like you you've already thought about how to adapt these ideas to other settings, which I think will be a good next test for these methods. ", "Thank you for ...
[ -1, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "BJgm9FTSCm", "SyefBu5O3m", "rklOwd45hm", "BklIRmpxa7", "SJecygkZp7", "SJecygkZp7", "iclr_2019_Hye64hA9tm", "iclr_2019_Hye64hA9tm", "iclr_2019_Hye64hA9tm", "iclr_2019_Hye64hA9tm" ]
iclr_2019_Hye6uoC9tm
Incremental Hierarchical Reinforcement Learning with Multitask LMDPs
Exploration is a well-known challenge in Reinforcement Learning. One principled way of overcoming this challenge is to find a hierarchical abstraction of the base problem and explore at these higher levels, rather than in the space of primitives. However, discovering a deep abstraction autonomously remains a largely unsolved problem, with practitioners typically hand-crafting these hierarchical control architectures. Recent work with multitask linear Markov decision processes allows for the autonomous discovery of deep hierarchical abstractions, but operates exclusively in the offline setting. By extending this work, we develop an agent that is capable of incrementally growing a hierarchical representation and of using its experience to date to improve exploration.
rejected-papers
The paper studies an interesting problem and proposes a reasonable solution. However, reviewers feel that the technical contributions are somewhat incremental. Furthermore, the empirical study would have been stronger with more appropriate baselines (simple adaptations to the multitask setting) and with problems beyond simple grid worlds. In addition, reviewers also find that the presentation should be improved substantially.
train
[ "r1gHwX8ZTX", "HJxYDLd52m", "rylvQpg5n7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Major comments:\n\nThis paper builds on previous work in hierarchical LMDPs and extends the core ideas to an online setting. Essentially, we incrementally construct a hierarchy by adding new states to upper-level MDPs every once in a while; these are loosely initialized and the parameters are then refined with ad...
[ 3, 4, 5 ]
[ 4, 4, 4 ]
[ "iclr_2019_Hye6uoC9tm", "iclr_2019_Hye6uoC9tm", "iclr_2019_Hye6uoC9tm" ]
iclr_2019_HyeS73ActX
Multi-Objective Value Iteration with Parameterized Threshold-Based Safety Constraints
We consider an environment with multiple reward functions. One of them represents goal achievement and the others represent instantaneous safety conditions. We consider a scenario where the safety rewards should always be above some thresholds. The thresholds are parameters with values that differ between users. We efficiently compute a family of policies that cover all threshold-based constraints and maximize the goal achievement reward. We introduce a new parameterized threshold-based scalarization method of the reward vector that encodes our objective. We present novel data structures to store the value functions of the Bellman equation that allow their efficient computation using the value iteration algorithm. We present results for both discrete and continuous state spaces.
rejected-papers
The main issues with the work in its current form are a lack of motivation and some problems with clarity. The paper presents some interesting ideas, and will be much stronger once it incorporates a clearer discussion of motivation, both for the problem setting and for the proposed solutions. The writing itself could also be significantly improved.
train
[ "SkgO4p1qCQ", "Hyl_NqRB0m", "rkx3U3TkRX", "Hke844Sb6X", "rJxDJ8Sbp7", "Bkgi1BrbaQ", "Bkg-5-ht2m", "SyxwYxTMnQ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "- To avoid confusion, d is the dimension of the parameter space and is not related to the number of users. But yes, for a single patient, the parameters will be fixed and the reward vector will be scalarized. We added a comment below about the motivation of the work after a question from reviewer1. I hope that wou...
[ -1, -1, 5, -1, -1, -1, 5, 3 ]
[ -1, -1, 4, -1, -1, -1, 2, 4 ]
[ "rkx3U3TkRX", "rJxDJ8Sbp7", "iclr_2019_HyeS73ActX", "SyxwYxTMnQ", "Bkg-5-ht2m", "iclr_2019_HyeS73ActX", "iclr_2019_HyeS73ActX", "iclr_2019_HyeS73ActX" ]
iclr_2019_HyeU1hRcFX
Unsupervised Conditional Generation using noise engineered mode matching GAN
Conditional generation refers to the process of sampling from an unknown distribution conditioned on semantics of the data. This can be achieved by augmenting the generative model with the desired semantic labels, although this is not straightforward in an unsupervised setting where the semantic label of every data sample is unknown. In this paper, we address this issue by proposing a method that can generate samples conditioned on the properties of a latent distribution engineered in accordance with a certain data prior. In particular, a latent space inversion network is trained in tandem with a generative adversarial network such that the modal properties of the latent space distribution are induced in the data generating distribution. We demonstrate that our model, despite being fully unsupervised, is effective in learning meaningful representations through its mode matching property. We validate our method on multiple unsupervised tasks such as conditional generation, dataset attribute discovery, and inference, using three real-world image datasets, namely MNIST, CIFAR-10, and CelebA, and show that the results are comparable to state-of-the-art methods.
rejected-papers
The paper uses a multimodal prior in GANs and reconstructs the latents back from images in two stages to match the generated data modes to the latent space modes. It is empirically shown that this can prevent mode collapse to some extent (including intra-class collapse). However, the paper lacks a comparison with state-of-the-art GANs that have been shown to get better FID scores (~21 for SN-GAN [1] vs. ~28 in the paper), so the benefit here is unclear, particularly in cases when the mode prior is unknown. Similarly, for other applications used in the paper such as inference and attribute discovery, it falls short of demonstrating quantitative improvements with the approach. For example, there is a growing body of work on unsupervised disentanglement in generative models with several metrics to measure it, which could be used to evaluate the attribute discovery performance. R1 has brought up the point of lack of comparisons, which the AC agrees with. The authors have made revisions in the paper including some comparisons, but these feel insufficient to establish the benefits of the method over the state of the art in preventing mode collapse. A borderline paper, as reflected in the reviewer scores, but it can be made stronger with experiments showing convincing improvements over the state of the art in at least one of the applications considered in the paper. [1] Miyato, T., Kataoka, T., Koyama, M., & Yoshida, Y. (2018). Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957.
train
[ "Hyx-tgcuCX", "H1xeiWqO0m", "ryx9eWzApQ", "Hyxm3CqxAm", "rJeA2YhtCm", "HkxeyR-CaX", "r1gpI2JA67", "rJxVyZR0h7", "HkeyPwV63X", "r1lKzjkq2Q" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the insightful reviews. Your review helped in improving our paper significantly. Below are the point-wise responses for the concerns raised.\n\nQ1. I think the paper is of low significance, but the approach outlined is interesting.\n\nAns: Thank you for noting that it is an interesting approach. We a...
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "rJxVyZR0h7", "rJxVyZR0h7", "HkeyPwV63X", "r1lKzjkq2Q", "iclr_2019_HyeU1hRcFX", "HkeyPwV63X", "iclr_2019_HyeU1hRcFX", "iclr_2019_HyeU1hRcFX", "iclr_2019_HyeU1hRcFX", "iclr_2019_HyeU1hRcFX" ]
iclr_2019_Hyed4i05KX
Interpreting Layered Neural Networks via Hierarchical Modular Representation
Interpreting the prediction mechanism of complex models is currently one of the most important tasks in the machine learning field, especially with layered neural networks, which have achieved high predictive performance on various practical data sets. To reveal the global structure of a trained neural network in an interpretable way, a series of clustering methods have been proposed, which decompose the units into clusters according to the similarity of their inference roles. The main problems in these studies were that (1) there was no prior knowledge about the optimal resolution for the decomposition, i.e., the appropriate number of clusters, and (2) there was no method with which to determine whether the outputs of each cluster have a positive or negative correlation with the input and output dimension values. In this paper, to solve these problems, we propose a method for obtaining a hierarchical modular representation of a layered neural network. The application of a hierarchical clustering method to a trained network reveals a tree-structured relationship among hidden layer units, based on their feature vectors defined by their correlation with the input and output dimension values.
rejected-papers
All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance.
train
[ "ByxTjCE-pQ", "BygtYjnwhm", "HJe9AyUUh7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Sorry, I am not convinced by this paper.\n\nI just don't believe that one can really gain any useful insight into neural networks by this kind of visualization. In my opinion, all these kinds of visualization can give is the false believe that one understands what the network is doing. (If you think about it, un...
[ 4, 3, 3 ]
[ 3, 4, 4 ]
[ "iclr_2019_Hyed4i05KX", "iclr_2019_Hyed4i05KX", "iclr_2019_Hyed4i05KX" ]
iclr_2019_HyefgnCqFm
Learning Partially Observed PDE Dynamics with Neural Networks
Spatio-temporal processes are of central importance in many applied scientific fields. Generally, differential equations are used to describe these processes. In this work, we address the problem of learning spatio-temporal dynamics with neural networks when only partial information on the system's state is available. Taking inspiration from the dynamical systems approach, we outline a general framework in which complex dynamics generated by families of differential equations can be learned in a principled way. Two models are derived from this framework. We demonstrate how they can be applied in practice by considering the problem of forecasting fluid flows. We show how the underlying equations fit into our formalism and evaluate our method by comparing with standard baselines.
rejected-papers
This paper introduces a few training methods to fit the dynamics of a PDE based on observations. Quality: Not great. The authors seem unaware of much related work in both the numerics and deep learning communities. The experiments aren't very illuminating, and the connections between the different methods are never clearly and explicitly laid out in one place. Clarity: Poor. The intro is long and rambly, and the main contributions aren't clearly motivated. A lot of time is spent mentioning things that could be done, without saying when this would be important or useful to do. An algorithm box or two would be a big improvement over the many long English explanations of the methods and the diagrams with cycles in them. Originality: Not great. There has been a lot of work on fitting dynamics models using NNs, and also on attempting to optimize PDE solvers, which is hardly engaged with. Significance: This work fails to make its own significance clear by not exploring or explaining the scope and limitations of the proposed approach, or comparing against more baselines from the large body of related literature.
test
[ "S1xxl4RtRm", "S1epUNRtRm", "SyxyYr0YCX", "H1giIr0K0Q", "HJxtGVCKAm", "rJgypGCt0X", "SkeVKA1C27", "BylmOXd6hm", "rye2Nl8phQ" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your extensive and detailed comments. We will try to address your remarks and concerns in this answer. We will also try to make clearer some points that we might have gone through too quickly in the paper.\n\nYou are right to mention the GP community, they are very active and pioneering in ...
[ -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "SkeVKA1C27", "BylmOXd6hm", "H1giIr0K0Q", "rye2Nl8phQ", "S1xxl4RtRm", "iclr_2019_HyefgnCqFm", "iclr_2019_HyefgnCqFm", "iclr_2019_HyefgnCqFm", "iclr_2019_HyefgnCqFm" ]
iclr_2019_HyesW2C9YQ
I Know the Feeling: Learning to Converse with Empathy
Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling. One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill that is trivial for humans. Research in this area is made difficult by the paucity of suitable publicly available datasets, both for emotion and for dialogue. This work proposes a new task for empathetic dialogue generation and EmpatheticDialogues, a dataset of 25k conversations grounded in emotional situations, to facilitate training and evaluating dialogue systems. Our experiments indicate that dialogue models that use our dataset are perceived to be more empathetic by human evaluators, while improving on other metrics as well (e.g., perceived relevance of responses, BLEU scores), compared to models merely trained on large-scale Internet conversation data. We also present empirical comparisons of several ways to improve the performance of a given model by leveraging existing models or datasets without requiring lengthy re-training of the full model.
rejected-papers
The reviewers raised a number of concerns, including the usefulness of the presented dataset given that the collected data is acted rather than naturalistic (the large body of research in affective computing shows that models trained on acted data cannot generalise to naturalistic data), no methodological novelty in the presented work, and a relatively uninteresting application with very limited real-world applicability (it remains unclear whether having better empathetic dialogues would be truly crucial for any real-life application and, in addition, all work is based on acted rather than real-world data). The authors’ rebuttal addressed some of the reviewers’ concerns but not fully (especially when it comes to the usefulness of the data). Overall, I believe that the effort to collect the presented database is noble and may be useful to the community to a small extent. However, given the unrealism of the data and, in turn, the very limited (if any) generalisability of the presented work to real-world scenarios, and the lack of methodological contribution, I cannot recommend this paper for presentation at ICLR.
test
[ "HklNRaCYA7", "B1erH8iYCX", "S1gqZa98CQ", "S1ecLgoURQ", "HyxygWiUR7", "BJx59xoLCm", "BJgP-yiLRm", "S1xwkJiLAm", "Byg9rCcIA7", "Hkeif058AX", "Ske58Tc8RQ", "Hklbe_U027", "S1eSaJc53Q", "B1xDLFJTiQ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear AnonReviewer1,\n\nWe are very glad to hear that you found our additional comments useful and upgraded your score to help it get presented to ICLR. We indeed are eager to see a lot more development based on this dataset.\nWe noticed that you posted your response on the thread of the review of AnonReviewer3, wh...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "B1erH8iYCX", "BJgP-yiLRm", "iclr_2019_HyesW2C9YQ", "B1xDLFJTiQ", "B1xDLFJTiQ", "B1xDLFJTiQ", "S1eSaJc53Q", "S1eSaJc53Q", "Hklbe_U027", "Hklbe_U027", "iclr_2019_HyesW2C9YQ", "iclr_2019_HyesW2C9YQ", "iclr_2019_HyesW2C9YQ", "iclr_2019_HyesW2C9YQ" ]
iclr_2019_HyevnsCqtQ
Integral Pruning on Activations and Weights for Efficient Neural Networks
With the rapid scaling up of deep neural networks (DNNs), extensive research on network model compression, such as weight pruning, has been performed for efficient deployment. This work aims to advance the compression beyond the weights to the activations of DNNs. We propose the Integral Pruning (IP) technique, which integrates activation pruning with weight pruning. By learning the different importance of neuron responses and connections, the generated network, namely IPnet, balances the sparsity between activations and weights and therefore further improves execution efficiency. The feasibility and effectiveness of IPnet are thoroughly evaluated on various network models with different activation functions and on different datasets. With <0.5% loss in testing accuracy, IPnet saves 71.1% ~ 96.35% of the computation cost compared to the original dense models, with up to 5.8x and 10x reductions in activation and weight numbers, respectively.
rejected-papers
This paper proposes to compress the deep learning model using both activation pruning and weight pruning. The reviewers have a consensus on rejection due to lack of novelty.
train
[ "SJlhCAYtR7", "rygCXpHD67", "BkebwFHv6Q", "rkegDEBw67", "ByxcYlki3Q", "rylDDl2c27", "ByxdPQb5nm" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors have commented on the major issues regarding the time complexity on selection of winner rate per layer, compared their method against existing channel/layer based pruning methods and agreed to correct few minor issues. The authors have empirically observed that searching for the right set of choices fo...
[ -1, -1, -1, -1, 4, 5, 5 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "rygCXpHD67", "ByxdPQb5nm", "rylDDl2c27", "ByxcYlki3Q", "iclr_2019_HyevnsCqtQ", "iclr_2019_HyevnsCqtQ", "iclr_2019_HyevnsCqtQ" ]
iclr_2019_Hyewf3AqYX
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box or black-box attacks. In both cases, optimization-based attack algorithms can achieve relatively low distortions and high attack success rates. However, they usually suffer from poor time and query complexity, thereby limiting their practical usefulness. In this work, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on the non-convex Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an O(1/T) convergence rate. The empirical results of attacking the Inception V3 and ResNet V2 models on the ImageNet dataset also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the highest attack success rate in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state of the art.
rejected-papers
While there was some support for the ideas presented, the majority of the reviewers did not think the submission was ready for publication at ICLR. Significant concerns were raised about the clarity of the exposition.
train
[ "HyeaQklByN", "rygDlUyHkV", "SyllU9Iq37", "BJli6m1bJE", "rJgGFXJWk4", "H1etHF6ykE", "ryge7ywCAQ", "HygnCHRpR7", "HJlvq-NoRQ", "S1lJZSCqCQ", "rJefA1rv3X", "HyxXl7Z5RX", "ByxEFM-c0Q", "BJxWfM-c0X", "H1lALLCRhX", "S1xm6X3TcX", "B1edUenZcX" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "Dear Reviewer 2, thank you for reading our response and increasing your score. At the end of your updated review, you said that “the authors answered some of my questions”. We apologize if we missed any of your questions. We wonder could you let us know what are the questions that we did not answer? We will answer...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, 5, -1, -1 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 4, -1, -1 ]
[ "ryge7ywCAQ", "H1etHF6ykE", "iclr_2019_Hyewf3AqYX", "H1etHF6ykE", "HygnCHRpR7", "BJxWfM-c0X", "HygnCHRpR7", "ByxEFM-c0Q", "S1lJZSCqCQ", "HyxXl7Z5RX", "iclr_2019_Hyewf3AqYX", "rJefA1rv3X", "SyllU9Iq37", "H1lALLCRhX", "iclr_2019_Hyewf3AqYX", "B1edUenZcX", "iclr_2019_Hyewf3AqYX" ]
iclr_2019_Hyffti0ctQ
PRUNING WITH HINTS: AN EFFICIENT FRAMEWORK FOR MODEL ACCELERATION
In this paper, we propose an efficient framework to accelerate convolutional neural networks. We utilize two types of acceleration methods: pruning and hints. Pruning can reduce model size by removing channels of layers. Hints can improve the performance of the student model by transferring knowledge from the teacher model. We demonstrate that pruning and hints are complementary to each other. On one hand, hints can benefit pruning by maintaining similar feature representations. On the other hand, the model pruned from the teacher network is a good initialization for the student model, which increases the transferability between the two networks. Our approach performs the pruning and hints stages iteratively to further improve performance. Furthermore, we propose an algorithm to reconstruct the parameters of the hints layer and make the pruned model more suitable for hints. Experiments were conducted on various tasks including classification and pose estimation. Results on CIFAR-10, ImageNet, and COCO demonstrate the generalization ability and superiority of our framework.
rejected-papers
This paper proposes a new framework which combines pruning and model distillation techniques for model acceleration. The reviewers have a consensus on rejection due to limited novelty.
train
[ "SkxjvM-upm", "SJl2i4dA3X", "HyencTJY27" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new framework which combines pruning and model distillation techniques for model acceleration. Though the “pruning” (Molchanov et al. (2017)) and hint components already exist, the authors claim to be the first to combine them, and experimentally show the benefit of jointly and iteratively ...
[ 4, 5, 4 ]
[ 3, 4, 4 ]
[ "iclr_2019_Hyffti0ctQ", "iclr_2019_Hyffti0ctQ", "iclr_2019_Hyffti0ctQ" ]
iclr_2019_Hyfg5o0qtm
Temporal Gaussian Mixture Layer for Videos
We introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture longer-term temporal information in continuous activity videos. The TGM layer is a temporal convolutional layer governed by a much smaller set of parameters (e.g., location/variance of Gaussians) that are fully differentiable. We present fully convolutional video models with multiple TGM layers for activity detection. Experiments on multiple datasets, including Charades and MultiTHUMOS, confirm the effectiveness of TGM layers, outperforming the state of the art.
rejected-papers
The reviewers raised a number of major concerns, including a lack of explanations, a lack of baseline comparisons, and a lack of discussion of the pros and cons of the main contribution of this work -- the presented Temporal Gaussian Mixture (TGM) layer. The authors’ rebuttal addressed some of the reviewers’ comments but failed to address all concerns (especially when it comes to the success of TGMs; it remains unclear whether this could be attributed solely to the way TGMs are applied rather than to a fundamental methodological advantage). Given this, I cannot recommend this paper for presentation at ICLR.
train
[ "rJg-atu1xV", "S1gsmbhDnm", "BkgQqD4NJV", "SJlu1iaQpQ", "SJgDnqTm6m", "B1e-_cpm67", "BJxVQYuypX", "B1g-RVbO3Q", "H1gd1mL8n7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n1/2) We will further revise the paper to clarify the effect of having an additional temporal channel axis. We will move the ablation experiments we have in the appendix to the main paper, and add discussions on the impacts of different forms of convolutional layers.\n\n3) Following the suggestion from the review...
[ -1, 6, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 5, -1, -1, -1, -1, -1, 3, 5 ]
[ "BkgQqD4NJV", "iclr_2019_Hyfg5o0qtm", "SJgDnqTm6m", "H1gd1mL8n7", "S1gsmbhDnm", "B1g-RVbO3Q", "S1gsmbhDnm", "iclr_2019_Hyfg5o0qtm", "iclr_2019_Hyfg5o0qtm" ]
iclr_2019_HyfyN30qt7
NICE: noise injection and clamping estimation for neural network quantization
Convolutional Neural Networks (CNNs) are very popular in many fields, including computer vision, speech recognition, and natural language processing, to name a few. Though deep learning leads to groundbreaking performance in these domains, the networks used are very demanding computationally and are far from real-time even on a GPU, which is not power efficient and therefore does not suit low-power systems such as mobile devices. To overcome this challenge, some solutions have been proposed for quantizing the weights and activations of these networks, which accelerates the runtime significantly. Yet, this acceleration comes at the cost of a larger error. The NICE method proposed in this work trains quantized neural networks by noise injection and a learned clamping, which improve the accuracy. This leads to state-of-the-art results on various regression and classification tasks, e.g., ImageNet classification with architectures such as ResNet-18/34/50 with as low as 3-bit weights and 3-bit activations. We implement the proposed solution on an FPGA to demonstrate its applicability for low-power real-time applications.
rejected-papers
This paper addresses an important problem, quantizing deep neural network models to reduce the cost of implementing them on hardware such as FPGAs without severely affecting task performance. The approach explored in the paper combines three ideas: (1) injecting noise into the network to simulate the effects of quantization noise, (2) a smart initialization of the parameter and activation clamping along with learning of the activation clamping using the straight-through estimator, and (3) a gradual approach to quantization. While the reviewers agreed that the problem is important, they raised concerns about the novelty of the proposed approach and the quality of the experiments. The authors did not respond to the reviewers in the discussion period, and did not revise their submission.
train
[ "B1xOIQXcnm", "BJl1eHfFnQ", "HJeXr78fhQ", "SyeRdDUJ3X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "[Summary]\nNeural network quantization can enable many practical applications for deep learning; therefore it is an important research problem. The paper claims two contributions: 1. Injecting noise during training to make it more robust to quantization errors. 2. Clamping the parameter values in a layer as wel...
[ 4, 5, 4, -1 ]
[ 4, 3, 3, -1 ]
[ "iclr_2019_HyfyN30qt7", "iclr_2019_HyfyN30qt7", "iclr_2019_HyfyN30qt7", "iclr_2019_HyfyN30qt7" ]
iclr_2019_Hyg1Ls0cKQ
Learning Latent Semantic Representation from Pre-defined Generative Model
Learning representations of data is an important issue in machine learning. Though GANs have led to significant improvements in data representations, they still have several problems, such as unstable training, a hidden manifold of the data, and huge computational overhead. A GAN tends to produce the data without any information about the manifold of the data, which hinders controlling the desired features to generate. Moreover, most GANs have a large latent manifold, resulting in poor scalability. In this paper, we propose a novel GAN to control the latent semantic representation, called LSC-GAN, which allows us to produce desired data and learns a representation of the data efficiently. Unlike conventional GAN models with a hidden distribution of the latent space, we define the distributions explicitly in advance; they are trained to generate the data based on the corresponding features by inputting the latent variables that follow the distribution. As the larger scale of the latent space caused by deploying various distributions in one latent space makes training unstable while maintaining the dimension of the latent space, we need to separate the process of defining the distributions explicitly from the operation of generation. We prove that a VAE is suitable for the former, and we modify the loss function of the VAE to map the data into the pre-defined latent space so as to locate the reconstructed data as close as possible to the input data according to its characteristics. Moreover, we add the KL divergence to the loss function of LSC-GAN to include this process. The decoder of the VAE, which generates the data with the corresponding features from the pre-defined latent space, is used as the generator of the LSC-GAN. Several experiments on the CelebA dataset are conducted to verify the usefulness of the proposed method in generating desired data stably and efficiently, achieving a high compression ratio that can hold about 24 pixels of information in each dimension of the latent space.
In addition, our model learns the reverse of features, such as not laughing (rather, frowning), using only data of neutral and smiling facial expressions.
rejected-papers
Reviewers have expressed concerns about clarity/writing of the paper and technical novelty, which the authors haven't responded to. The paper is not suitable for publication at ICLR.
train
[ "ryl2cEYo37", "B1ebp4G527", "HyxqHRUt2Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "* Pros\n- addresses an interesting problem\n- gives a nice approach to the problem\n- attempts to give some theoretical justification for the approach\n\n* Cons\n- I generally understand the approach, but details were not clear to me (specifics given below)\n- Sections 3.2.1 and 3.2.2 (the theoretical section), I ...
[ 5, 3, 4 ]
[ 2, 5, 3 ]
[ "iclr_2019_Hyg1Ls0cKQ", "iclr_2019_Hyg1Ls0cKQ", "iclr_2019_Hyg1Ls0cKQ" ]
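The LSC-GAN abstract above adds a KL term to map encodings onto pre-defined priors in the latent space. As a minimal illustration, here is the closed-form KL divergence between a diagonal-Gaussian encoder posterior and an arbitrary pre-defined Gaussian prior; using one such prior per semantic feature is our reading of the abstract, not a confirmed implementation detail.

```python
import numpy as np

def kl_to_predefined_prior(mu, log_var, prior_mu, prior_var):
    """KL divergence between the encoder's diagonal Gaussian q(z|x) = N(mu, exp(log_var))
    and a pre-defined Gaussian prior N(prior_mu, prior_var), summed over latent
    dimensions. With prior_mu = 0 and prior_var = 1 this reduces to the standard
    VAE KL term; LSC-GAN's idea (as we read it) is to place a different
    pre-defined prior per semantic feature."""
    var = np.exp(log_var)
    return np.sum(
        0.5 * (np.log(prior_var) - log_var)
        + (var + (mu - prior_mu) ** 2) / (2 * prior_var)
        - 0.5
    )
```

As a sanity check, the KL vanishes when posterior and prior coincide and grows with the distance between their means.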
iclr_2019_Hyg74h05tX
Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design
Flow-based generative models are powerful exact likelihood models with efficient sampling and inference. Despite their computational efficiency, flow-based models generally have much worse density modeling performance compared to state-of-the-art autoregressive models. In this paper, we investigate and improve upon three limiting design choices employed by flow-based models in prior work: the use of uniform noise for dequantization, the use of inexpressive affine flows, and the use of purely convolutional conditioning networks in coupling layers. Based on our findings, we propose Flow++, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks. Our work has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models.
rejected-papers
Strengths: -------------- This paper was clearly written, contained novel technical insights, and had SOTA results. In particular, the explanation of the generalized dequantization trick was enlightening and I expect will be useful in this entire family of methods. The paper also contained ablation experiments. Weaknesses: ------------------ The paper went for a grab-bag approach, when it might have been better to focus on one contribution and explore it in more detail (e.g. show that the learned pdf is smoother when using variational quantization, or showing the difference in ELBO when using uniform q as suggested by R2). Also, the main text contains many references to experiments that hadn't converged at submission time, but the submission wasn't updated during the initial discussion period. Why not? Points of contention: ----------------------------- Everyone agrees that the contributions are novel and useful. The only question is whether the exposition is detailed enough to reproduce the new methods (the authors say they will provide code), and whether the experiments, which meet basic standards, are of a high enough standard for publication, because there was little investigation into the causes of the difference in performance between models. Consensus: ---------------- The consensus was that this paper was slightly below the bar.
train
[ "S1e0_khKhX", "rkltsyQYRm", "SJeE7kmtRX", "Skly26zK0Q", "SkxrmDQunQ", "SylTRPCaom" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I think the ideas are of sufficient interest to the community to merit acceptance & discussion, but I still miss the high resolution samples we got with the Glow paper. Responses to my concerns somewhat addressed, though simpler alternatives to uniform dequant would be nice.\n\n=====\n\nImprovements are attained o...
[ 6, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, 3, 5 ]
[ "iclr_2019_Hyg74h05tX", "SylTRPCaom", "S1e0_khKhX", "SkxrmDQunQ", "iclr_2019_Hyg74h05tX", "iclr_2019_Hyg74h05tX" ]
iclr_2019_HygQro05KX
A∗ sampling with probability matching
Probabilistic methods often need to draw samples from a nontrivial distribution. A∗ sampling is an elegant algorithm built upon a top-down construction of a Gumbel process, in which a large state space is divided into subsets and, at each round, A∗ sampling selects a subset to process. However, the selection rule depends on a bound function, which can be intractable. Moreover, we show that such a selection criterion can be inefficient. This paper aims to improve A∗ sampling by addressing these issues. To design a suitable selection rule, we apply \emph{Probability Matching}, a widely used method for decision making, to A∗ sampling. We provide insights into the relationship between A∗ sampling and probability matching by analyzing a nontrivial special case in which the state space is partitioned into two subsets. We show that in this case probability matching is optimal to within a constant gap. Furthermore, as directly applying probability matching to A∗ sampling is time consuming, we design an approximate version based on Monte-Carlo estimators. We also present an efficient implementation by leveraging special properties of Gumbel distributions and well-designed balanced trees. Empirical results show that our method saves a significant amount of computational resources on suboptimal regions compared with A∗ sampling.
rejected-papers
This paper applied probability matching to A* sampling in order to provide an approximate variant without a bound function. It is a novel idea and a good contribution to the A* sampling family. The authors also provided a regret analysis for the adoption of PM. However, as pointed out by R1 and R3, the authors failed to clarify the approximation introduced by PM and its implications for the output samples. The empirical comparison should also take this difference into account. Further analysis of the bias in the sample distribution would also help clarify the pros and cons of the proposed method. R3 also raised the concern that the description of the preliminary section and the main contribution in section 4 was not clear.
train
[ "HyxV1doKhX", "SJevthcFnX", "rkl8w3ZuoQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper introduces a probability matching approach for optimizing Gumbel processes, i.e. the extension of the Gumbel-Max trick to more general measure spaces. The basic idea is to use a more refined subset selection mechanism as compared to A* Sampling, but at the cost of being able to guarantee an exa...
[ 5, 6, 3 ]
[ 5, 2, 5 ]
[ "iclr_2019_HygQro05KX", "iclr_2019_HygQro05KX", "iclr_2019_HygQro05KX" ]
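A∗ sampling, as described in the abstract above, builds on a Gumbel process: the continuous-space generalization of the discrete Gumbel-max trick. A minimal sketch of the discrete trick, which the top-down construction extends:

```python
import numpy as np

def gumbel_max_sample(log_probs, rng):
    """Draw one sample from a categorical distribution via the Gumbel-max trick:
    argmax_i (log p_i + G_i), with G_i ~ Gumbel(0, 1), is distributed as
    Categorical(p). A* sampling generalizes this to continuous spaces via a
    top-down construction of the underlying Gumbel process."""
    gumbels = -np.log(-np.log(rng.uniform(size=len(log_probs))))
    return int(np.argmax(log_probs + gumbels))

# Empirical check: sample frequencies should approach the target probabilities.
rng = np.random.default_rng(0)
p = np.array([0.2, 0.5, 0.3])
counts = np.bincount(
    [gumbel_max_sample(np.log(p), rng) for _ in range(20000)], minlength=3
)
```

The trick turns sampling into optimization, which is what makes an A∗-style search over subsets of the state space applicable.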
iclr_2019_HygS7n0cFQ
Fast Exploration with Simplified Models and Approximately Optimistic Planning in Model Based Reinforcement Learning
Humans learn to play video games significantly faster than state-of-the-art reinforcement learning (RL) algorithms. People seem to build simple models that are easy to learn to support planning and strategic exploration. Inspired by this, we investigate two issues in leveraging model-based RL for sample efficiency. First, we investigate how to perform strategic exploration when exact planning is not feasible and empirically show that optimistic Monte Carlo Tree Search outperforms posterior sampling methods. Second, we show how to learn simple deterministic models that support fast learning using object representations. We illustrate the benefit of these ideas by introducing a novel algorithm, Strategic Object Oriented Reinforcement Learning (SOORL), that outperforms state-of-the-art algorithms in the game of Pitfall! in fewer than 50 episodes.
rejected-papers
Pros: - rather novel approach to using optimistic MCTS for exploration with deterministic models - positive rewards on Pitfall Cons: - lots of domain-specific knowledge - deterministic models - lacking clarity - lacking ablations - no rebuttal I agree with both reviewers that the paper is not good enough to be accepted.
train
[ "Syeo0AeX6Q", "H1liHsyThQ" ]
[ "official_reviewer", "official_reviewer" ]
[ "-- Summary --\n\nThe paper proposes to learn (transition) models (for MDPs) in terms of objects and their interactions. These models are effectively deterministic and are compatible with algorithms for planning with count-based exploration. The paper demonstrates the performance of one such planning method in toy ...
[ 5, 4 ]
[ 4, 4 ]
[ "iclr_2019_HygS7n0cFQ", "iclr_2019_HygS7n0cFQ" ]
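The SOORL abstract above reports that optimistic Monte Carlo Tree Search outperforms posterior sampling. The paper's exact optimism bonus isn't given in the abstract, but the standard UCB1 rule used by most optimistic MCTS variants looks like this:

```python
import math

def ucb_select(values, counts, c=1.4):
    """Optimistic action selection as in UCB1/MCTS: pick the child maximizing
    its mean value plus an exploration bonus that shrinks with visit count.
    Unvisited children get an infinite bonus, so they are tried first.
    `values` holds summed returns per child, `counts` their visit counts."""
    total = sum(counts)

    def score(i):
        if counts[i] == 0:
            return math.inf
        return values[i] / counts[i] + c * math.sqrt(math.log(total) / counts[i])

    return max(range(len(values)), key=score)
```

The optimism comes from the additive bonus: rarely tried actions look artificially good, which drives strategic exploration without requiring a posterior over models.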
iclr_2019_HygT9oRqFX
MixFeat: Mix Feature in Latent Space Learns Discriminative Space
Deep learning methods perform well in various tasks. However, the over-fitting problem, which causes the performance to decrease for unknown data, remains. We hence propose a method named MixFeat that directly creates latent spaces in a network that can distinguish classes. MixFeat mixes two feature maps in each latent space in the network and uses unmixed labels for learning. We discuss the difference between a method that mixes only features (MixFeat) and a method that mixes both features and labels (mixup and its family). Mixing features repeatedly is effective in expanding feature diversity, but mixing labels repeatedly makes learning difficult. MixFeat makes it possible to obtain the advantages of repeated mixing by mixing only features. We report improved results obtained using existing network models with MixFeat on CIFAR-10/100 datasets. In addition, we show that MixFeat effectively reduces the over-fitting problem even when the training dataset is small or contains errors. MixFeat is easy to implement and can be added to various network models without additional computational cost in the inference phase.
rejected-papers
The paper describes a method to improve generalization by mixing examples in the hidden space. Experiments on CIFAR-10 and CIFAR-100 showed that the proposed method improves the generalization of the networks. The reviewers found these results promising, but argue that the experimental section is too weak in its current form, notably lacking experiments on larger-scale datasets such as ImageNet. The paper should also compare with more of the relevant baselines to better understand its significance.
train
[ "SJxujY9wy4", "Bkez0mbD1V", "S1lmwoFm1E", "HygwEq2XJN", "r1gN6dEm1N", "Syl8D1dqhX", "B1exXPjf14", "rkeswodvh7", "rJxFTsctAm", "BJg6MwcYCX", "H1gKwIqF07", "ryePqr5KRX", "B1eUiowt2Q" ]
[ "author", "public", "author", "author", "official_reviewer", "official_reviewer", "public", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Thank you for your suggestion.\nWe will reflect it to the final version.", "Thanks for your reply. I would suggest that in the interest of the research community, the results from the Mixup and Manifold Mixup papers should be added in this paper, clearly stating why those results are different from the results r...
[ -1, -1, -1, -1, -1, 6, -1, 4, -1, -1, -1, -1, 4 ]
[ -1, -1, -1, -1, -1, 4, -1, 3, -1, -1, -1, -1, 4 ]
[ "Bkez0mbD1V", "S1lmwoFm1E", "B1exXPjf14", "r1gN6dEm1N", "H1gKwIqF07", "iclr_2019_HygT9oRqFX", "iclr_2019_HygT9oRqFX", "iclr_2019_HygT9oRqFX", "rkeswodvh7", "B1eUiowt2Q", "Syl8D1dqhX", "iclr_2019_HygT9oRqFX", "iclr_2019_HygT9oRqFX" ]
iclr_2019_HygTE309t7
Outlier Detection from Image Data
Modern applications, from autonomous vehicles to video surveillance, generate massive amounts of image data. In this work we propose a novel image outlier detection (IOD) approach that leverages cutting-edge image classifiers to discover outliers without using any labeled outliers. We observe that although, intuitively, the confidence that a convolutional neural network (CNN) has that an image belongs to a particular class could serve as an outlierness measure for each image, directly applying this confidence to detect outliers does not work well. This is because a CNN often has high confidence on an outlier image that does not belong to any target class, due to the generalization ability that ensures its high classification accuracy. To solve this issue, we propose a Deep Neural Forest-based approach that harmonizes the contradictory requirements of accurately classifying images and correctly detecting outlier images. Our experiments using several benchmark image datasets, including MNIST, CIFAR-10, CIFAR-100, and SVHN, demonstrate the effectiveness of our IOD approach, capturing more than 90% of the outliers generated by injecting one image dataset into another while still preserving the accuracy of the multi-class classification problem.
rejected-papers
The paper proposes a decision forest based method for outlier detection. The reviewers and AC note the improvement over the existing method is incremental. Although the problem is of significant practical importance, the AC decided that the authors should do more work to attract the attention of a broader range of the ICLR audience.
train
[ "SkldSLn_JV", "B1xl7InOkE", "SyePOrh_k4", "SJeuP-1pA7", "rJlHfCjK0Q", "HkxbofoKCQ", "ryl5vJ3tRQ", "rJl6r2sKCQ", "rkxh6jsFCX", "Byx4wMFKCQ", "SylwHAutCQ", "HylM6a_W6Q", "SJg-77cYnQ", "rkgzpVtf3X", "r1lxyDYj5X" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "[REVIEWER: New experiments of varying k]: The proposal was to use the features right before the softmax layer as input to Isolation Forest, while the presented experiments in the appendix appear to use the output of the convolution layers. It looks like the deep neural forest works on features from the FC layer. S...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, -1 ]
[ "SJeuP-1pA7", "SJeuP-1pA7", "SJeuP-1pA7", "rkgzpVtf3X", "rkgzpVtf3X", "rkgzpVtf3X", "rkgzpVtf3X", "SJg-77cYnQ", "SJg-77cYnQ", "SJg-77cYnQ", "HylM6a_W6Q", "iclr_2019_HygTE309t7", "iclr_2019_HygTE309t7", "iclr_2019_HygTE309t7", "iclr_2019_HygTE309t7" ]
iclr_2019_HygUOoC5KX
Are Generative Classifiers More Robust to Adversarial Attacks?
There is a rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed. However, most recent work focuses on discriminative classifiers, which only model the conditional distribution of the labels given the inputs. In this paper, we propose and investigate the deep Bayes classifier, which improves classical naive Bayes with conditional deep generative models. We further develop detection methods for adversarial examples, which reject inputs with low likelihood under the generative model. Experimental results suggest that deep Bayes classifiers are more robust than deep discriminative classifiers, and that the proposed detection methods are effective against many recently proposed attacks.
rejected-papers
Adversarial defense is a tricky subject, and the authors are to be commended for their novel approach to this problem. The reviewers all agree that there is promise in this approach. However, after reviewing the discussion, they have all come to the conclusion that the robustness of your generative model needs to be more thoroughly explored. Regarding gradient masking, there are other attacks like a manifold attack that use gradients that can be explored as well. Regarding SPSA, it would be helpful perhaps to also include other numerical gradient attacks to ensure that SPSA is stronger and working as intended. Essentially, the reviewers would all like to see a more streamlined version of this paper that removes any doubt about the efficacy of the generative approach. Once that is established, additional properties and features can be explored. Also note that for the purposes of these reviews and discussion, Schott et al. was considered as concurrent work and not prior work.
test
[ "rJgivUDMg4", "B1xvkiIZgN", "rkgKPh4sC7", "ryeZfsJcR7", "BJgkobF_0X", "H1eILWKOAX", "Bkx6veF_0Q", "B1gl1pCHAQ", "SJejnID-0Q", "r1x6ax3lAX", "Syg6yc0La7", "SylS6FCITm", "HklDlvjx07", "S1xMSRGjaX", "H1eqrk0K6X", "Hkl63gkw67", "B1ePMbbcnQ", "ryl3wzTYn7" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. \n\nShort answers to your the major concerns:\n\n1. Our work is highly novel, the two \"prior work\" you mentioned are after or concurrent to our work. We have discussed this in section 5.\n2. We did tested score-based attack (SPSA) which is **gradient-free**. Results show that the rob...
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, 4, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, 5, 3 ]
[ "B1xvkiIZgN", "iclr_2019_HygUOoC5KX", "iclr_2019_HygUOoC5KX", "BJgkobF_0X", "H1eILWKOAX", "B1gl1pCHAQ", "iclr_2019_HygUOoC5KX", "Syg6yc0La7", "r1x6ax3lAX", "Syg6yc0La7", "B1ePMbbcnQ", "B1ePMbbcnQ", "iclr_2019_HygUOoC5KX", "H1eqrk0K6X", "iclr_2019_HygUOoC5KX", "ryl3wzTYn7", "iclr_2019...
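The deep Bayes classifier described above classifies with a class-conditional generative model and rejects inputs with low likelihood. A schematic of that decision rule (the threshold and input values are placeholders, not the paper's):

```python
import numpy as np

def deep_bayes_predict(log_px_given_y, log_py, reject_threshold):
    """Generative ('deep Bayes') classification with likelihood-based rejection:
    classify via argmax_y [log p(x|y) + log p(y)], but flag the input as a
    likely adversarial/outlier example when the marginal likelihood
    log p(x) = logsumexp_y log p(x, y) falls below a threshold."""
    joint = log_px_given_y + log_py          # log p(x, y) per class
    log_px = np.logaddexp.reduce(joint)      # log p(x)
    if log_px < reject_threshold:
        return None                          # reject: low likelihood under the model
    return int(np.argmax(joint))
```

In practice log p(x|y) would come from a conditional deep generative model and the threshold from a held-out validation set; both are left abstract here.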
iclr_2019_HygYqs0qKX
Conscious Inference for Object Detection
Current Convolutional Neural Network (CNN)-based object detection models adopt strictly feedforward inference to predict the final detection results. However, the widely used one-way inference is agnostic to the global image context and the interplay between the input image and task semantics. In this work, we present a general technique to improve off-the-shelf CNN-based object detection models in the inference stage without re-training, architecture modification, or ground-truth requirements. We propose an iterative, bottom-up and top-down inference mechanism, named conscious inference, as it is inspired by prevalent models of human consciousness featuring top-down guidance and temporal persistence. The downstream pass accumulates category-specific evidence over time, which subsequently affects the proposal calculation and the final detection. Feature activations are updated in place with no additional memory cost. Our approach advances the state of the art for popular detection models (Faster-RCNN, YOLOv2, YOLOv3) on 2D object detection and 6D object pose estimation.
rejected-papers
The paper presents an interesting idea, but there are significant concerns about the presentation issues and experimental results (e.g., comparisons with baselines). Overall, it is not ready for publication.
val
[ "Hygbihk8CQ", "S1gg5nyLRm", "SyxTd2y8R7", "H1lT0X4J6m", "H1gAl_Vc3m", "HylDrHuPhm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your time and effort for reviewing our paper. We will keep working on refining our work according to your precise comments and suggestions.", "Thank you very much for your time and effort for reviewing our paper. We will keep working on refining our work according to your precise comments...
[ -1, -1, -1, 4, 6, 4 ]
[ -1, -1, -1, 4, 4, 5 ]
[ "H1lT0X4J6m", "HylDrHuPhm", "H1gAl_Vc3m", "iclr_2019_HygYqs0qKX", "iclr_2019_HygYqs0qKX", "iclr_2019_HygYqs0qKX" ]
iclr_2019_HygcvsAcFX
Optimal margin Distribution Network
Recent research on margin theory has shown that maximizing the minimum margin, as support vector machines do, does not necessarily lead to better performance; instead, it is crucial to optimize the margin distribution. In the meantime, margin theory has been used to explain the empirical success of deep networks in recent studies. In this paper, we present ODN (the Optimal margin Distribution Network), a network which embeds a loss function based on the optimal margin distribution. We give a theoretical analysis of our method using the PAC-Bayesian framework, which confirms the significance of the margin distribution for classification within the framework of deep networks. In addition, empirical results show that the ODN model consistently outperforms the baseline cross-entropy loss model across different regularization settings. Our ODN model also outperforms the cross-entropy loss (Xent), hinge loss, and soft hinge loss models in generalization tasks with limited training data.
rejected-papers
The paper proposed an optimal margin distribution loss and applied PAC-Bayesian bounds derived from Sanov large deviation inequalities to give generalization error bounds for such a loss. Some interesting empirical results are shown to support the proposed method. The majority of reviewers think the paper's empirical results are encouraging, although still premature. The theoretical analysis is rather standard. After reading the authors' response and revision, the reviewers did not change their opinions much and think the paper would benefit from further systematic study of the proposal to achieve a substantial improvement. Based on the current ratings, the paper is therefore a borderline case leaning toward rejection.
train
[ "S1lXsaCBnm", "r1gKXwNHCm", "BkeJdPEr0Q", "HyxjEHVBAm", "r1e4WH4S07", "SJlzk7uV3m", "ryerWMf93m", "S1xq2nxq2X", "H1eRD8ag3m", "Bkes3De1s7" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "public" ]
[ "The paper presents an improvement on the previous work by [Neyshabur et el, ICLR 2018].\nMore precisely, an emprical generalization bound is provided by using PAC-Bayesian empirical \nbounds. To obtain the claimed improvement over the works [Barlett et al, NIPS 2017] and \n[Neyshabur et el, ICLR 2018], the authors...
[ 5, -1, -1, -1, -1, -1, 5, 6, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, 5, 3, -1, -1 ]
[ "iclr_2019_HygcvsAcFX", "S1lXsaCBnm", "r1gKXwNHCm", "S1xq2nxq2X", "ryerWMf93m", "H1eRD8ag3m", "iclr_2019_HygcvsAcFX", "iclr_2019_HygcvsAcFX", "iclr_2019_HygcvsAcFX", "iclr_2019_HygcvsAcFX" ]
iclr_2019_Hyghb2Rct7
SIMILE: Introducing Sequential Information towards More Effective Imitation Learning
Reinforcement learning (RL) is a metaheuristic for teaching an agent to interact with an environment and maximize the reward in a complex task. RL algorithms often encounter difficulty in defining a reward function in a sparse solution space. Imitation learning (IL) deals with this issue by providing a few expert demonstrations and then either mimicking the expert's behavior (behavioral cloning, BC) or recovering the reward function by assuming the optimality of the expert (inverse reinforcement learning, IRL). Conventional IL approaches formulate the agent policy by mapping a single state to a distribution over actions, which does not consider sequential information. This strategy can be less accurate in IL, a weakly supervised learning setting, especially when the number of expert demonstrations is limited. This paper presents an effective approach named Sequential IMItation LEarning (SIMILE). The core idea is to introduce sequential information so that an agent can refer to both the current state and past state-action pairs to make a decision. We formulate our approach as a recurrent model and instantiate it using an LSTM so as to fuse both long-term and short-term information. SIMILE is a generalized IL framework which is easily applied to BC and IRL, the two major types of IL algorithms. Experiments are performed on several robot control tasks in OpenAI Gym. SIMILE not only achieves performance gains over the baseline approaches, but also enjoys faster convergence and better stability of testing performance. These advantages verify the higher learning efficiency of SIMILE and imply its potential applications in real-world scenarios, i.e., when the agent-environment interaction is more difficult and/or expensive.
rejected-papers
This paper explores the use of sequential information to improve imitation learning, essentially using recurrent networks (LSTMs) instead of a simple NN in several existing imitation learning models (BC, GAIL, etc.). On the positive side, the empirical results are good, showing improvement in terms of attained rewards, convergence speed, and stability. There are however some significant issues with the way the approach is motivated and positioned with respect to existing work. In particular, the issue described in the paper is due to the fact that they consider POMDPs (not MDPs): this should have been more clearly explained. There are also issues with the Related Work section. For these reasons, the paper is not quite ready for publication.
train
[ "rkxIWmILCm", "B1elsABUA7", "r1xyYnSLRQ", "H1g1CWYinX", "H1lsdScYhm", "HkescKAB2Q" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for valuable comments. While the idea is simple and the contribution in term of model seems small, we posed an important problem that sequential information is important in the RL-related approaches.\n\nWe are sorry that we have missed the connection between this work and POMDPs. The mentione...
[ -1, -1, -1, 6, 4, 4 ]
[ -1, -1, -1, 3, 5, 4 ]
[ "HkescKAB2Q", "H1lsdScYhm", "H1g1CWYinX", "iclr_2019_Hyghb2Rct7", "iclr_2019_Hyghb2Rct7", "iclr_2019_Hyghb2Rct7" ]
iclr_2019_Hygm8jC9FQ
FAVAE: SEQUENCE DISENTANGLEMENT USING INFORMATION BOTTLENECK PRINCIPLE
A state-of-the-art generative model, the ”factorized action variational autoencoder (FAVAE),” is presented for learning disentangled and interpretable representations from sequential data via the information bottleneck without supervision. The purpose of disentangled representation learning is to obtain interpretable and transferable representations from data. We focus on the disentangled representation of sequential data because there is a wide range of potential applications if disentangled representation learning is extended to sequential data such as video, speech, and stock price data. Sequential data is characterized by dynamic factors and static factors: dynamic factors are time-dependent, and static factors are independent of time. Previous works succeed in disentangling static factors from dynamic factors by explicitly modeling the priors of latent variables to distinguish between the two. However, these models cannot disentangle representations between dynamic factors, such as disentangling ”picking” and ”throwing” in robotic tasks. In this paper, we propose a new model that can disentangle multiple dynamic factors. Since our method does not require modeling priors, it is capable of disentangling ”between” dynamic factors. In experiments, we show that FAVAE can extract the disentangled dynamic factors.
rejected-papers
This paper introduces an autoencoder architecture that can handle sequences of data and attempts to automatically disentangle multiple static and dynamic factors. Quality: The main idea is relatively well-motivated. However, the motivation for the particular technical choices made seems a little lacking, and the complexity of the proposed model puts a lot of strain on the experiments. A lot of important updates were made by the authors in the rebuttal period, but I feel the number of changes is a lot to ask the reviewers to re-evaluate. Clarity: The English of the paper isn't great, including the title (should be "Using an ..." or "Using the ..."). The intro is clear enough, but belabors a relatively simple point about how an image model can't model factors in video. There were some concerning parts where major issues seemed to be glossed over, e.g. "FHVAE model uses label information to disentangle time series data, which is different setup with our FAVAE model." As far as I understand, both are trained from unsupervised data. Originality: This paper does a good job of citing related work, but seems incremental relative to the FHVAE. The main problem is that the proposed method makes a lot of changes to a standard time-series VAE, and the limited number of experiments means it's hard to say what the important factor in this model's performance is. Significance: Ultimately it's hard to say what the takeaway from this paper is. The authors motivated and evaluated a new model, but the work wasn't done in a systematic enough way to draw any strong conclusions. The conclusions that were asserted seem specious and overly general, e.g. "Since dynamic factors have the same time dependency, these models cannot disentangle dynamic factors." Why not? Why can't a dynamic model learn the time-scales of each of its factors automatically?
train
[ "BkeRc0ZYAm", "rJerpy-PA7", "SygiuCX8Rm", "SkxgX2bKpQ", "BJg64TbKpQ", "HJl_zTbtaQ", "S1eQKn-K6Q", "rkgGc0nah7", "Syl0S_Iihm", "ryxaDZN9n7" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "> 3) Figure 5 is now clearer, thanks. Figure 4 is nicer too, but I’m not extremely sure what point you’re trying to put across with it. Does it inform your choice of beta? It is not clear from reading the text.\n\nIn the 3rd paragraph in Sec 7.2, we state that it is better to use C because when increasing Beta wit...
[ -1, -1, -1, -1, -1, -1, -1, 5, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "rJerpy-PA7", "HJl_zTbtaQ", "SkxgX2bKpQ", "iclr_2019_Hygm8jC9FQ", "ryxaDZN9n7", "Syl0S_Iihm", "rkgGc0nah7", "iclr_2019_Hygm8jC9FQ", "iclr_2019_Hygm8jC9FQ", "iclr_2019_Hygm8jC9FQ" ]
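The FAVAE discussion above mentions preferring a capacity term C over simply increasing β. That refers to the capacity-controlled information-bottleneck objective used by β-VAE-style models; assuming FAVAE uses this common form (the paper's exact loss may differ), the per-batch objective is:

```python
def capacity_objective(recon_loss, kl, beta, capacity):
    """Capacity-controlled information-bottleneck objective for β-VAE-style
    models: reconstruction loss plus a penalty that pushes the KL term toward
    a target capacity C, which is typically annealed upward during training.
    With capacity = 0 this reduces to the plain β-weighted KL penalty."""
    return recon_loss + beta * abs(kl - capacity)
```

The |KL - C| form lets β be large (for strong disentanglement pressure) without collapsing the KL to zero, which matches the authors' remark that raising β alone is insufficient.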
iclr_2019_Hygp1nR9FQ
Unifying Bilateral Filtering and Adversarial Training for Robust Neural Networks
Recent analysis of deep neural networks has revealed their vulnerability to carefully structured adversarial examples. Many effective algorithms exist to craft these adversarial examples, but performant defenses seem to be far away. In this work, we explore the use of edge-aware bilateral filtering as a projection back to the space of natural images. We show that bilateral filtering is an effective defense in multiple attack settings where the strength of the adversary gradually increases. In the case of an adversary with no knowledge of the defense, bilateral filtering can remove more than 90% of adversarial examples from a variety of different attacks. To evaluate against an adversary with complete knowledge of our defense, we adapt the bilateral filter as a trainable layer in a neural network and show that adding this layer makes ImageNet images significantly more robust to attacks. When trained under a framework of adversarial training, we show that the resulting model is hard to fool even with the best attack methods.
rejected-papers
The paper proposes a technique for defending against adversarial examples that relies on averaging pixels that are close to each other both in position and value. This approach seems to be an interesting preprocessing technique in the robust training pipeline. However, the actual claims made are not well-supported and, in fact, seem somewhat implausible.
train
[ "r1xR1zuyJV", "r1xrPb0hRQ", "SJebTYan07", "BJxyLtxcCX", "Hklnz31qRQ", "H1ego31qC7", "r1e2wn1qRm", "Byg9_ogxpm", "S1lSIkGThQ", "SkeO-wwcnQ", "r1lkHf8U37", "rkloIQS9hX", "H1lRnhUr3m", "BJeTuHSBnm", "SJgcQDCN3m", "SJgLLepNnX", "S1gYbT2VnX", "SylxeYYy3m", "Byl3X3v12m", "rylHEqDyhm"...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "public", "author", "author", "public", "public", "public", "public", "public" ...
[ "Thank you for your continued engagement with us on this matter. \n\nIn the final version, we will merge Table 4 and Table 7 into one table, which will solve the validity issue we have discussed in detail above, since in Table 7 we did test against the best attacks, as requested. We would like to reiterate that BFN...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "r1xrPb0hRQ", "SJebTYan07", "BJxyLtxcCX", "Hklnz31qRQ", "S1lSIkGThQ", "r1lkHf8U37", "SkeO-wwcnQ", "rkloIQS9hX", "iclr_2019_Hygp1nR9FQ", "iclr_2019_Hygp1nR9FQ", "iclr_2019_Hygp1nR9FQ", "SJgcQDCN3m", "BJeTuHSBnm", "SJgLLepNnX", "S1gYbT2VnX", "rylHEqDyhm", "Byl3X3v12m", "rylHEqDyhm", ...
iclr_2019_HygqJnCqtm
Rating Continuous Actions in Spatial Multi-Agent Problems
We study credit assignment problems in spatial multi-agent environments where agents pursue a joint objective. On the example of soccer, we rate the movements of individual players with respect to their potential for staging a successful attack. We propose a purely data-driven approach to simultaneously learn a model of agent movements as well as their ratings via an agent-centric deep reinforcement learning framework. Our model allows for efficient learning and sampling of ratings in the continuous action space. We empirically observe on historic soccer data that the model accurately rates agent movements w.r.t. their relative contribution to the collective goal.
rejected-papers
The reviewers raised a number of major concerns, including the limited novelty of the proposed approach (if any), poor readability of the presented material, and, most importantly, the insufficient and unconvincing experimental evaluation. The authors did not provide any rebuttal. Hence, I cannot suggest this paper for presentation at ICLR.
train
[ "SkgVnULqhQ", "HkekrgUD2Q", "B1eNgg07hm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies the credit assignment problem in multi-agent domain (soccer playing as a concrete application). The paper itself is well-written. I hope that the author could use a better methodology to make the evaluation part stronger.\n\nI have a few questions to the author:\n\n1. Why features like which team...
[ 5, 4, 4 ]
[ 4, 3, 4 ]
[ "iclr_2019_HygqJnCqtm", "iclr_2019_HygqJnCqtm", "iclr_2019_HygqJnCqtm" ]
iclr_2019_HygtHnR5tQ
Generative Adversarial Networks for Extreme Learned Image Compression
We propose a framework for extreme learned image compression based on Generative Adversarial Networks (GANs), obtaining visually pleasing images at significantly lower bitrates than previous methods. This is made possible through our GAN formulation of learned compression combined with a generator/decoder which operates on the full-resolution image and is trained in combination with a multi-scale discriminator. Additionally, if a semantic label map of the original image is available, our method can fully synthesize unimportant regions in the decoded image such as streets and trees from the label map, therefore only requiring the storage of the preserved region and the semantic label map. A user study confirms that for low bitrates, our approach is preferred to state-of-the-art methods, even when they use more than double the bits.
rejected-papers
This paper proposes a GAN-based framework for image compression. The reviewers and AC note a critical limitation on the novelty of the paper, i.e., such a conditional GAN framework is now standard. The authors mentioned that they apply GANs to extreme compression for the first time in the literature, but this is not enough to address the novelty issue. AC thinks the proposed method has potential and is interesting, but decided that the authors need new ideas to publish the work.
train
[ "rke92n2h3X", "HylAU71xyN", "HJlImthqRX", "ByxQGdh5AX", "rkeIvu250Q", "BJlLGD3c07", "SJlxoKZWpQ", "ryee31V52m" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed an interesting method using GANs for image compression. The experimental results on several benchmarks demonstrated the proposed method can significantly outperform baselines. \n\nThere are a few questions for the authors:\n\n1.The actually benefit from GAN loss: the adversarial part usually ca...
[ 6, -1, -1, -1, -1, -1, 6, 4 ]
[ 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2019_HygtHnR5tQ", "BJlLGD3c07", "ryee31V52m", "SJlxoKZWpQ", "rke92n2h3X", "iclr_2019_HygtHnR5tQ", "iclr_2019_HygtHnR5tQ", "iclr_2019_HygtHnR5tQ" ]
iclr_2019_Hygv0sC5F7
When Will Gradient Methods Converge to Max-margin Classifier under ReLU Models?
We study the implicit bias of gradient descent methods in solving a binary classification problem over a linearly separable dataset. The classifier is described by a nonlinear ReLU model and the objective function adopts the exponential loss function. We first characterize the landscape of the loss function and show that there can exist spurious asymptotic local minima besides asymptotic global minima. We then show that gradient descent (GD) can converge to either a global or a local max-margin direction, or may diverge from the desired max-margin direction in a general context. For stochastic gradient descent (SGD), we show that it converges in expectation to either the global or the local max-margin direction if SGD converges. We further explore the implicit bias of these algorithms in learning a multi-neuron network under certain stationary conditions, and show that the learned classifier maximizes the margins of each sample pattern partition under the ReLU activation.
rejected-papers
The reviewers and AC note the following potential weaknesses: 1) the proof techniques largely follow from previous work on linear models; 2) it’s not clear how significant it is to analyze a one-neuron ReLU model for linearly separable data.
train
[ "SJg80NOj2Q", "HkgRSG1t3X", "rkx8w0vw3X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper considers the binary classification problem with exponential loss and ReLu activation function (single neuron). The authors characterize the asymptotic loss landscape by three different types of critical points. They prove that gradient descent (GD) will result in four different regions and provide conv...
[ 5, 4, 5 ]
[ 5, 3, 4 ]
[ "iclr_2019_Hygv0sC5F7", "iclr_2019_Hygv0sC5F7", "iclr_2019_Hygv0sC5F7" ]
iclr_2019_Hygvln09K7
Meta Learning with Fast/Slow Learners
Meta-learning has recently achieved success in many optimization problems. In general, a meta learner g(.) could be learned for a base model f(.) on a variety of tasks, such that it can be more efficient on a new task. In this paper, we make some key modifications to enhance the performance of meta-learning models. (1) we leverage different meta-strategies for different modules to optimize them separately: we use conservative “slow learners” on low-level basic feature representation layers and “fast learners” on high-level task-specific layers; (2) Furthermore, we provide theoretical analysis on why the proposed approach works, based on a case study on a two-layer MLP. We evaluate our model on synthetic MLP regression, as well as low-shot learning tasks on Omniglot and ImageNet benchmarks. We demonstrate that our approach is able to achieve state-of-the-art performance.
rejected-papers
The paper introduces an interesting idea of using different rates of learning for low level vs high level computation for meta learning. However, the experiments lack the thoroughness needed to justify the basic intuition of the approach and design choices like which layers to learn fast or slow need to be further ablated.
train
[ "Bke71ZD5nX", "SJlSVKRY2Q", "S1gXHkH5hm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The overall contribution makes sense. Consider solving a linear system i.e., learning an unknown matrix. Splitting it into two components (like in NMF or MMF) and learning each separately gives more control on the conditioning of the matrices. This is the basis of residual networks (at least the theory for linear ...
[ 5, 5, 6 ]
[ 3, 4, 3 ]
[ "iclr_2019_Hygvln09K7", "iclr_2019_Hygvln09K7", "iclr_2019_Hygvln09K7" ]
iclr_2019_HylDpoActX
N-Ary Quantization for CNN Model Compression and Inference Acceleration
The tremendous memory and computational complexity of Convolutional Neural Networks (CNNs) prevents the inference deployment on resource-constrained systems. As a result, recent research focused on CNN optimization techniques, in particular quantization, which allows weights and activations of layers to be represented with just a few bits while achieving impressive prediction performance. However, aggressive quantization techniques still fail to achieve full-precision prediction performance on state-of-the-art CNN architectures on large-scale classification tasks. In this work we propose a method for weight and activation quantization that is scalable in terms of quantization levels (n-ary representations) and easy to compute while maintaining the performance close to full-precision CNNs. Our weight quantization scheme is based on trainable scaling factors and a nested-means clustering strategy which is robust to weight updates and therefore exhibits good convergence properties. The flexibility of nested-means clustering enables exploration of various n-ary weight representations with the potential of high parameter compression. For activations, we propose a linear quantization strategy that takes the statistical properties of batch normalization into account. We demonstrate the effectiveness of our approach using state-of-the-art models on ImageNet.
rejected-papers
The submission proposes a hierarchical clustering approach (nested-means clustering) to determine good quantization intervals for non-uniform quantization. An empirical validation shows improvement over a very closely related approach (Zhu et al, 2016). There was an overall consensus that the literature review was insufficient in its initial form. The authors have proposed to extend it somewhat. Other concerns are related to the novelty of the technique (R4 was particularly concerned about novelty over Zhu et al, 2016). Two reviewers were against acceptance, and one was positive about the paper. Due to the overall concerns about the novelty of the approach, and that these concerns were confirmed in discussion after the rebuttal, this paper is unlikely to meet the threshold for acceptance to ICLR.
train
[ "HJgKRv-PkV", "r1lODsYKAQ", "HygC7nFt07", "rylKOcKtR7", "B1effYtYAm", "BJx34liHa7", "B1xV5ifgT7", "B1gjSZHh2X", "rJgkkgx15Q" ]
[ "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "To add to the list of missing references, this paper also does n-ary quantization but it does not use nested means.\nhttps://arxiv.org/abs/1811.04985", "We appreciate your feedback on our initial submission. Regarding your questions:\n\n1. Gaussian distribution: We observe that l2-regularized weights are close ...
[ -1, -1, -1, -1, -1, 4, 4, 7, -1 ]
[ -1, -1, -1, -1, -1, 4, 4, 5, -1 ]
[ "rJgkkgx15Q", "B1xV5ifgT7", "B1gjSZHh2X", "BJx34liHa7", "iclr_2019_HylDpoActX", "iclr_2019_HylDpoActX", "iclr_2019_HylDpoActX", "iclr_2019_HylDpoActX", "iclr_2019_HylDpoActX" ]
iclr_2019_HylJtiRqYQ
VECTORIZATION METHODS IN RECOMMENDER SYSTEM
The most used recommendation method is collaborative filtering, and the key part of collaborative filtering is to compute the similarity. The similarity based on co-occurrence of similar events is easy to implement and can be applied to almost all situations. When the word2vec model reached the state of the art at a lower computation cost in NLP, a corresponding model in recommender systems, item2vec, was proposed and reached the state of the art in recommender systems. It is easy to see that the positions of user and item are interchangeable when the gap between their counts is not too large, so we propose a user2vec model and show its performance. Since similarity based on co-occurrence information suffers from cold start, we propose a content-based similarity model based on doc2vec, another technology from NLP.
rejected-papers
The reviewers are unanimous in their assessment that the paper is not of ICLR quality in its current form.
train
[ "BygOyPgW6Q", "BJxqIwJnn7", "SklWPtcC57" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Review: \n\n— the writing is not sufficiently clear and a lot of the ideas are hard to follow (the sections 3.2 and 3.3 which should cover proposed methods are only a paragraph long each, have no loss functions and no architecture descriptions/diagrams)\n— the ideas presented are only derivative and are not suffic...
[ 2, 2, 3 ]
[ 5, 5, 4 ]
[ "iclr_2019_HylJtiRqYQ", "iclr_2019_HylJtiRqYQ", "iclr_2019_HylJtiRqYQ" ]
iclr_2019_HylKJhCcKm
Generalized Capsule Networks with Trainable Routing Procedure
CapsNet (Capsule Network) was first proposed by Sabour et al. (2017), and later another version of CapsNet was proposed by Hinton et al. (2018). CapsNet has been proved effective in modeling spatial features with much fewer parameters. However, the routing procedures (dynamic routing and EM routing) in both papers are not well incorporated into the whole training process, and the optimal number of routing iterations has to be found manually. We propose Generalized CapsNet (G-CapsNet) to overcome these disadvantages by incorporating the routing procedure into the optimization. We implement two versions of G-CapsNet (fully-connected and convolutional) on CAFFE (Jia et al. (2014)) and evaluate them by testing the accuracy on MNIST & CIFAR10, the robustness to white-box & black-box attacks, and the generalization ability on GAN-generated synthetic images. We also explore the scalability of G-CapsNet by constructing a relatively deep G-CapsNet. The experiments show that G-CapsNet has good generalization ability and scalability.
rejected-papers
The paper proposes to replace dynamic routing in Capsule networks with a trainable layer that produces routing coefficients. The goal is to improve their scalability. This is promising as a research direction but reviewers have raised several concerns about unclear contributions and lack of a thorough evaluation of the approach. There is also a recent relevant work pointed out by Reviewer 1 that should be discussed. Given these concerns, the paper is not suitable for publication in its current form, however I encourage the authors to use reviewers' comments for improving the paper and resubmit in next venues.
train
[ "ByeE99nj0m", "HJeGWE55RX", "rJe2vOjb3m", "SyeVlGjkTQ", "HJgBIcFyp7", "SklaDkLp3Q", "rJl5Vrn7nQ", "HJgjucrkp7", "ByxZPSHJTQ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author" ]
[ "Thanks for the comments.\n\nThis is the official code of CapsNet: https://github.com/Sarasra/models/tree/master/research/capsules. \nIf I am correct, your idea is first setting the routing number as one, then initializing the trouting coefficients as trainable parameters. As the official code shows, the dynamic ro...
[ -1, -1, 4, -1, -1, 5, 3, -1, -1 ]
[ -1, -1, 5, -1, -1, 3, 5, -1, -1 ]
[ "HJeGWE55RX", "HJgBIcFyp7", "iclr_2019_HylKJhCcKm", "SklaDkLp3Q", "rJl5Vrn7nQ", "iclr_2019_HylKJhCcKm", "iclr_2019_HylKJhCcKm", "iclr_2019_HylKJhCcKm", "rJe2vOjb3m" ]
iclr_2019_HylRk2A5FQ
Graph Learning Network: A Structure Learning Algorithm
Graph prediction methods that work closely with the structure of the data, e.g., graph generation, commonly ignore the content of its nodes. On the other hand, the solutions that consider the node’s information, e.g., classification, ignore the structure of the whole graph. And some methods exist in between, e.g., link prediction, but predict the structure piece-wise instead of considering the graph as a whole. We hypothesize that by jointly predicting the structure of the graph and its nodes’ features, we can improve both tasks. We propose the Graph Learning Network (GLN), a simple yet effective process to learn node embeddings and structure prediction functions. Our model uses graph convolutions to propose expected node features, and predicts the best structure based on them. We repeat these steps sequentially to enhance the prediction and the embeddings. In contrast to existing generation methods that rely only on the structure of the data, we use the features on the nodes to predict better relations, similar to what link prediction methods do. However, we propose a holistic approach to process the whole graph for our predictions. Our experiments show that our method predicts consistent structures across a set of problems, while creating meaningful node embeddings.
rejected-papers
The paper addresses an important problem of supervised learning for predicting graph connectivity using both node features and the overall graph structure. The paper is clearly written, and the presented approach produces promising results on synthetic data. However, all reviewers agree that the paper could be improved by including more comparison with prior art and related work discussion, and strengthening empirical results by including real-life data and more through evaluation; they also find the novelty and significance of the proposed approach somewhat limited. We hope the authors will use the suggestions of the reviewers to further improve the paper.
train
[ "rylkrw2h07", "Hyx87waK0m", "HJezmN6tCX", "rkgSzJpK0m", "H1xq0d9xTm", "H1eW37in37", "rJgPhD45nX" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the response---I hope the suggested changes are useful in submitting this to another venue. As the current version stands, however, I do not feel it is ready for publication at ICLR and therefore will not be changing my score.", "We are grateful for the positive feedback and constructive comments from...
[ -1, -1, -1, -1, 4, 3, 4 ]
[ -1, -1, -1, -1, 5, 4, 4 ]
[ "HJezmN6tCX", "rJgPhD45nX", "H1eW37in37", "H1xq0d9xTm", "iclr_2019_HylRk2A5FQ", "iclr_2019_HylRk2A5FQ", "iclr_2019_HylRk2A5FQ" ]
iclr_2019_HylSk205YQ
Multi-agent Deep Reinforcement Learning with Extremely Noisy Observations
Multi-agent reinforcement learning systems aim to provide interacting agents with the ability to collaboratively learn and adapt to the behaviour of other agents. In many real-world applications, the agents can only acquire a partial view of the world. Here we consider a setting whereby most agents' observations are also extremely noisy, hence only weakly correlated to the true state of the environment. Under these circumstances, learning an optimal policy becomes particularly challenging, even in the unrealistic case that an agent's policy can be made conditional upon all other agents’ observations. To overcome these difficulties, we propose a multi-agent deep deterministic policy gradient algorithm enhanced by a communication medium (MADDPG-M), which implements a two-level, concurrent learning mechanism. An agent's policy depends on its own private observations as well as those explicitly shared by others through a communication medium. At any given point in time, an agent must decide whether its private observations are sufficiently informative to be shared with others. However, our environments provide no explicit feedback informing an agent whether a communication action is beneficial, rather the communication policies must also be learned through experience concurrently to the main policies. Our experimental results demonstrate that the algorithm performs well in six highly non-stationary environments of progressively higher complexity, and offers substantial performance gains compared to the baselines.
rejected-papers
The paper presents an extension of MADDPG, adding communication between agents. The method targets extremely noisy observation settings, so that agents need to decide whether to communicate their private observations (or not). There is no intrinsic/explicit reward to guide the learning of the communication, only the extrinsic/implicit reward of the downstream task. The paper is clear and easy to follow, in particular after the updated writing. I believe some of the reviewers' points were addressed by the rebuttal. Nonetheless, some of the weaknesses of the paper still hold: namely the complexity of the approach compounded with a very specific experimental evaluation. The more complex an approach is (and it may be justified by the complexity of the setting!), the more varied its supporting evidence should be. In its current form, the paper would constitute a good workshop contribution (to discuss the approach), but I believe it needs more varied (and/or harder) experiments to be published at ICLR.
train
[ "HkgEPrtH1E", "BJlGv0uBy4", "B1xty3T4y4", "BJe9oua4JE", "SygXc_Gt37", "HJeVsNJc0X", "SJlkQbkqAm", "r1g0ybycCm", "r1lmEeJ5Cm", "SkgWiaCKCQ", "SkeEA8O9hm", "rJesCoTthm" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This comment on ‘sparse’ communication seems to suggest that the Reviewer continues to misinterpret the meaning of the hyperparameter C, and more generally how our methodology works. We had made an attempt to clarify this point in our previous comment, and an alternative explanation is in order. \n\nDuring trainin...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7, 3 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 2, 4 ]
[ "B1xty3T4y4", "BJe9oua4JE", "r1lmEeJ5Cm", "r1lmEeJ5Cm", "iclr_2019_HylSk205YQ", "iclr_2019_HylSk205YQ", "SygXc_Gt37", "SygXc_Gt37", "rJesCoTthm", "SkeEA8O9hm", "iclr_2019_HylSk205YQ", "iclr_2019_HylSk205YQ" ]
iclr_2019_HyleYiC9FX
Text Embeddings for Retrieval from a Large Knowledge Base
Text embeddings representing natural language documents in a semantic vector space can be used for document retrieval using nearest neighbor lookup. In order to study the feasibility of neural models specialized for retrieval in a semantically meaningful way, we suggest the use of the Stanford Question Answering Dataset (SQuAD) in an open-domain question answering context, where the first task is to find paragraphs useful for answering a given question. First, we compare the quality of various text-embedding methods on retrieval performance and give an extensive empirical comparison of the performance of various non-augmented base embeddings with, and without, IDF weighting. Our main result is that training deep residual neural models specifically for retrieval purposes can yield significant gains when they are used to augment existing embeddings. We also establish that deeper models are superior for this task. The best baseline embeddings augmented by our learned neural approach improve the top-1 recall of the system by 14% on the question side, and by 8% on the paragraph side.
rejected-papers
I have to agree with the reviewers here and unfortunately recommend a rejection. The methodology and task are not clear. The authors have reformulated QA on SQuAD as ranking and never compared the results of the proposed model with other QA systems. If the authors want to solve a pure ranking problem, why do they not compare their methods with other ranking methods/datasets?
train
[ "SkghxJq9nQ", "SJlCpmx92Q", "SJxuqQXc3Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThis paper proposes to reformulate the QA task in SQUAD as a retrieval task, i.e., using question as query and paragraphs as candidate results to be ranked. Authors makes some modifications to elmo model to create better word embedding for the ranking task. Authors have mentioned and are aware of open ...
[ 3, 5, 3 ]
[ 4, 4, 5 ]
[ "iclr_2019_HyleYiC9FX", "iclr_2019_HyleYiC9FX", "iclr_2019_HyleYiC9FX" ]
iclr_2019_HyllasActm
End-to-End Learning of Video Compression Using Spatio-Temporal Autoencoders
Deep learning (DL) is having a revolutionary impact in image processing, with DL-based approaches now holding the state of the art in many tasks, including image compression. However, video compression has so far resisted the DL revolution, with the very few proposed approaches being based on complex and impractical architectures with multiple networks. This paper proposes what we believe is the first approach to end-to-end learning of a single network for video compression. We tackle the problem in a novel way, avoiding explicit motion estimation/prediction, by formalizing it as the rate-distortion optimization of a single spatio-temporal autoencoder; i.e., we jointly learn a latent-space projection transform and a synthesis transform for low bitrate video compression. The quantizer uses a rounding scheme, which is relaxed during training, and an entropy estimation technique to enforce an information bottleneck, inspired by recent advances in image compression. We compare the obtained video compression networks with standard widely-used codecs, showing better performance than the MPEG-4 standard, being competitive with H.264/AVC for low bitrates.
rejected-papers
The paper proposes a neural network architecture for video compression. The reviewers point out a lack of novelty with respect to recent neural compression works on static images, which the present paper extends by adding a temporal consistency loss. More importantly, reviewers point out severe problems with the metrics used to measure compression quality, which the authors promise to take into account in a future manuscript.
train
[ "Bkx4_6oOAQ", "SJxA6Tj_Am", "ryePBTouC7", "SyxxsJkbpX", "B1lF8Q81T7", "rye7Oj2uhQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for pointing out such important issues. We will take your input into consideration in a revised manuscript.\n", "Thank you for your feedback and suggestions. We have double checked our results, and confirmed that the values on the graphs are correct according to the used method. However, we agree that ...
[ -1, -1, -1, 3, 3, 2 ]
[ -1, -1, -1, 3, 4, 5 ]
[ "B1lF8Q81T7", "rye7Oj2uhQ", "SyxxsJkbpX", "iclr_2019_HyllasActm", "iclr_2019_HyllasActm", "iclr_2019_HyllasActm" ]
iclr_2019_Hylnis0qKX
Task-GAN for Improved GAN based Image Restoration
Deep Learning (DL) algorithms based on Generative Adversarial Networks (GANs) have demonstrated great potential in computer vision tasks such as image restoration. Despite the rapid development of image restoration algorithms using DL and GANs, image restoration for specific scenarios, such as medical image enhancement and super-resolved identity recognition, is still facing challenges. How to ensure visually realistic restoration while avoiding hallucination or mode-collapse? How to make sure the visually plausible results do not contain hallucinated features jeopardizing downstream tasks such as pathology identification and subject identification? Here we propose to resolve these challenges by coupling the GAN based image restoration framework with another task-specific network. With medical imaging restoration as an example, the proposed model conducts an additional pathology recognition/classification task to ensure the preservation of detailed structures that are important to this task. Validated on multiple medical datasets, we demonstrate that the proposed method leads to improved deep learning based image restoration while preserving the detailed structure and diagnostic features. Additionally, the trained task network shows potential to achieve super-human level performance in identifying pathology and diagnosis. Further validation on super-resolved identity recognition tasks also shows that the proposed method can be generalized to diverse image restoration tasks.
rejected-papers
This work presents a reconstruction GAN with an additional classification task in the objective loss function. Evaluations are carried out on medical and non-medical datasets. Reviewers raise multiple concerns around the following: - Novelty (all reviewers) - Inadequate comparison baselines (all reviewers) - Inadequate citations. (R2 & R3) Authors have not offered a rebuttal. Recommendation is reject. Work may be more suitable as an application paper for a medical conference or journal.
test
[ "SyeiqXvxTm", "Hkgnx5Gtn7", "rJl0F0-r2Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose a novel method of Task-GAN of image coupling by coupling GAN and a task-specific network, which alleviates to avoid hallucination or mode collapse. In general, the paper is addressing an important problem but I still have several concerns as follows:\n1. The technical contribut...
[ 4, 5, 4 ]
[ 5, 4, 5 ]
[ "iclr_2019_Hylnis0qKX", "iclr_2019_Hylnis0qKX", "iclr_2019_Hylnis0qKX" ]
iclr_2019_Hyls7h05FQ
A Differentiable Self-disambiguated Sense Embedding Model via Scaled Gumbel Softmax
We present a differentiable multi-prototype word representation model that disentangles senses of polysemous words and produces meaningful sense-specific embeddings without external resources. It jointly learns how to disambiguate senses given local context and how to represent senses using hard attention. Unlike previous multi-prototype models, our model approximates discrete sense selection in a differentiable manner via a modified Gumbel softmax. We also propose a novel human evaluation task that quantitatively measures (1) how meaningful the learned sense groups are to humans and (2) how well the model is able to disambiguate senses given a context sentence. Our model outperforms competing approaches on both human evaluations and multiple word similarity tasks.
rejected-papers
Pros: * High quality evaluation across different benchmarks, plus human eval * The paper is well written (though one could quibble about the motivation for the method, see Cons) Cons: * The approach is incremental, the main contribution is replacing marginalization or RL with G-S. G-S has already been studied in the context of VAEs with categorical latent variables, i.e. very similar models. * The main technical novelty is varying the amount of added noise (i.e. downscaling Gumbel noise). In principle, the Gumbel relaxation is not needed here as exact marginalization can be done (as) effectively. Unlike the standard strategy used to make discrete r.v.s tractable in complex models, samples from G-S are not used in this work to weight input to the 'decoder' (thus avoiding expensive marginalization) but to weight terms corresponding to reconstruction from individual latent states (in contrast, e.g., to SkimRNN of Seo et al (ICLR 2018)). Presumably adding noise to softmax helps to force sharpness on the posteriors (~ argmax in previous work) and stochasticity may also help exploration. (Given the above, "to preserve differentiability and circumvent the difficulties in training with reinforcement learning, we apply the reparameterization trick with Gumbel softmax" seems slightly misleading) * With contextualized embeddings, which are sense-disambiguated given the context, learning discrete senses (which are anyway only coarse approximations of reality) is less practically important Two reviewers are somewhat lukewarm (weak accept) about the paper (limited novelty), whereas one reviewer is considerably more positive. I do not believe that the reviews diverge in any factual information though.
train
[ "SJlEps7zkN", "S1exdsXGkE", "rygvBxcJyN", "ryl5e-HnRX", "Hkxssn3oRX", "BygDQWP537", "r1eqzQYw07", "SkgpFMYv0m", "r1xGmzKvAQ", "r1xVV1VR2m", "BkgDZsDThQ" ]
[ "author", "author", "public", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the commenter for their interest in our paper and the valuable comments! We address the concerns as follows:\n\n1) Model assumptions\n\nIf we understand correctly, the commenter worries that the implied assumption w->s->c (first select a sense s from word w, then generate the context c) of our model is fl...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, 7, 6 ]
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, 3, 4 ]
[ "rygvBxcJyN", "rygvBxcJyN", "iclr_2019_Hyls7h05FQ", "Hkxssn3oRX", "r1eqzQYw07", "iclr_2019_Hyls7h05FQ", "BygDQWP537", "BkgDZsDThQ", "r1xVV1VR2m", "iclr_2019_Hyls7h05FQ", "iclr_2019_Hyls7h05FQ" ]
iclr_2019_HylsgnCcFQ
Dynamic Graph Representation Learning via Self-Attention Networks
Learning latent representations of nodes in graphs is an important and ubiquitous task with widespread applications such as link prediction, node classification, and graph visualization. Previous methods on graph representation learning mainly focus on static graphs; however, many real-world graphs are dynamic and evolve over time. In this paper, we present Dynamic Self-Attention Network (DySAT), a novel neural architecture that operates on dynamic graphs and learns node representations that capture both structural properties and temporal evolutionary patterns. Specifically, DySAT computes node representations by jointly employing self-attention layers along two dimensions: structural neighborhood and temporal dynamics. We conduct link prediction experiments on two classes of graphs: communication networks and bipartite rating networks. Our experimental results show that DySAT has a significant performance gain over several different state-of-the-art graph embedding baselines.
rejected-papers
This paper proposes a self-attention based approach for learning representations for the vertices of a dynamic graph, where the topology of the edges may change. The attention focuses on representing the interaction of vertices that have connections. Experimental results for the link prediction task on multiple datasets demonstrate the benefits of the approach. The idea of attention or its computation is not novel, but its application to estimating embeddings for dynamic graph vertices is new. The original version of the paper did not have strong baselines as noted by multiple reviewers, but the paper was revised during the review period. However, some of these suggestions, for example experiments with larger graph sizes and other related work (i.e., similar work on static graphs), are left as future work.
train
[ "rylCJmlmkV", "BkgzUr-9CQ", "rkgbrNZ5R7", "rylPJV-cRX", "Hyx3dG-cRm", "Skg4XcgqC7", "S1eSaFecR7", "SkgpdOcs27", "S1g39zdt3m", "Bkg9hJLDnQ", "S1eLDjjAtX", "Skeoa8zRFX" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "\nDear Reviewers and ACs,\n\nWe thank you once again for the time and effort to review our paper, and appreciate the valuable questions and suggestions. We have made several improvements to our paper, and hope that they sufficiently address your key concerns. We would greatly value any additional comments on the r...
[ -1, -1, -1, -1, -1, -1, -1, 4, 6, 5, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, -1, -1 ]
[ "iclr_2019_HylsgnCcFQ", "iclr_2019_HylsgnCcFQ", "rylPJV-cRX", "Bkg9hJLDnQ", "S1g39zdt3m", "S1eSaFecR7", "SkgpdOcs27", "iclr_2019_HylsgnCcFQ", "iclr_2019_HylsgnCcFQ", "iclr_2019_HylsgnCcFQ", "Skeoa8zRFX", "iclr_2019_HylsgnCcFQ" ]
iclr_2019_Hylyui09tm
EMI: Exploration with Mutual Information Maximizing State and Action Embeddings
Policy optimization struggles when the reward feedback signal is very sparse and essentially becomes a random search algorithm until the agent stumbles upon a rewarding or goal state. Recent works utilize intrinsic motivation to guide exploration via generative models, predictive forward models, or more ad-hoc measures of surprise. We propose EMI, an exploration method that constructs an embedding representation of states and actions that does not rely on generative decoding of the full observation but extracts predictive signals that can be used to guide exploration based on forward prediction in the representation space. Our experiments show state-of-the-art performance on a challenging locomotion task with continuous control and on image-based exploration tasks with discrete actions on Atari.
rejected-papers
This paper proposes a method to compute embeddings of states and actions that facilitate computing measures of surprise for intrinsic reward. Though some of the ideas are quite interesting, there are currently issues with the experiments and the motivation. The experiments have high variance across the 5 runs, with significant overlap of shaded regions representing just one standard deviation from the mean. It is hard to draw any conclusions about improved performance, and statements like the following are much too strong: "For vision-based exploration tasks, our results in Figure 5 show that EMI achieves the state of the art performance on Freeway, Frostbite, Venture, and Montezuma’s Revenge in comparison to the baseline exploration methods." Further, the proposed approach has three new hyperparameters (lambdas), without much understanding into how to set them or their effect on the results. Specific values are reported for the different game types, without explanation for how or why these values were chosen. Similarly strong claims, that are not well substantiated, are made for the proposed approach. This paper seems to suggest that this is a principled approach to using surprise for exploration, contrasted to other ad-hoc approaches ("Other approaches utilize more ad-hoc measures (Pathak et al., 2017; Tang et al., 2017) that aim to approximate surprise."). Yet, the paper does not define surprise (say by citing work by Itti and Baldi on Bayesian surprise), and then proposes what is largely an intuitive approach to providing a good intrinsic reward related to surprise. For example, "we show that imposing linear topology on the learned embedding representation space (such that the transitions are linear), thereby offloading most of the modeling burden onto the embedding function itself, provides an essential informative measure of surprise when visiting novel states." 
This might be intuitively true, but I do not see a clear demonstration in Section 4.2 actually showing that this restriction provides a measure of surprise. Additionally, some of the choices in Section 4.2 are about estimating "irreducible error under the linear dynamics model", but irreducible error is about inherent uncertainty (due to stochasticity and partial observability), not about the choice of modeling class. In general, many intuitive choices in the algorithm need to be better justified, and some claims disparaging other work for being ad-hoc should be toned down. Overall, this paper is as yet a bit preliminary, in terms of clarity and experiments. In a further iteration, with some improvements, it could be a useful contribution for exploration in image-based environments.
train
[ "SJgZ10d5a7", "HJl6rWXAkE", "HylJRPB81V", "H1gHamLm07", "r1xxOpzeR7", "S1xhrazgAQ", "SJeLMTzgCm", "Sygal6Me0Q", "SJlNCnMxA7", "S1ly5VWY2Q", "SkxSVV7AjQ", "S1gd_ctS37", "HkxsYJGlcQ" ]
[ "official_reviewer", "author", "public", "public", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "public" ]
[ "The paper proposes an approach for exploration via reward bonuses based on a form of surprise. The surprise factor is based on the next state of a particular transition, and the error in the embedding space to satisfy a linear dynamics formulation. The embedding space of the states and actions are optimized to inc...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, -1, -1 ]
[ "iclr_2019_Hylyui09tm", "HylJRPB81V", "SJlNCnMxA7", "S1xhrazgAQ", "S1gd_ctS37", "HkxsYJGlcQ", "SJgZ10d5a7", "S1ly5VWY2Q", "SkxSVV7AjQ", "iclr_2019_Hylyui09tm", "iclr_2019_Hylyui09tm", "iclr_2019_Hylyui09tm", "iclr_2019_Hylyui09tm" ]
iclr_2019_HyxBpoR5tm
Adversarially Robust Training through Structured Gradient Regularization
We propose a novel data-dependent structured gradient regularizer to increase the robustness of neural networks vis-a-vis adversarial perturbations. Our regularizer can be derived as a controlled approximation from first principles, leveraging the fundamental link between training with noise and regularization. It adds very little computational overhead during learning and is simple to implement generically in standard deep learning frameworks. Our experiments provide strong evidence that structured gradient regularization can act as an effective first line of defense against attacks based on long-range correlated signal corruptions.
rejected-papers
Reviewers are in consensus and recommended rejection after engaging with the authors. Further, many additional questions raised in the discussion should be addressed in the submission to improve clarity. Please take the reviewers' comments into consideration to improve your submission should you decide to resubmit.
train
[ "rJem8EPbxE", "S1gZPPycRX", "B1goHwJqR7", "BylCiAXq0Q", "S1ew11U4RQ", "B1lhp0BVCm", "BkxyyjB40m", "BJe7jcB4Am", "HJlYFYr4CQ", "rJgEwtBNC7", "HklHzUwsn7", "H1eTf52unm", "HylEkYgY3Q", "rJgQCEy8cQ", "Bkl2bNy8q7", "H1e-Zmy85X", "SkgpUeJI5Q", "S1l41x189Q", "B1xN-wnf5X", "rylfcB3MqX"...
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "public", ...
[ "I think the Structured Gradient Regularization you propose is very very similar to the classical **natural gradient** or the closely related **Gauss-Newton** method. In natural gradient they also approximate the deviation constraint by second order Taylor expansion (also drop higher order term in Hessian), resulti...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_HyxBpoR5tm", "B1goHwJqR7", "BkxyyjB40m", "S1ew11U4RQ", "B1lhp0BVCm", "HklHzUwsn7", "BJe7jcB4Am", "HylEkYgY3Q", "rJgEwtBNC7", "H1eTf52unm", "iclr_2019_HyxBpoR5tm", "iclr_2019_HyxBpoR5tm", "iclr_2019_HyxBpoR5tm", "ryxGk0jM9Q", "BJgbuknzqm", "SkexZVhMcQ", "rylfcB3MqX", "B1x...
iclr_2019_HyxOIoRqFQ
Discrete flow posteriors for variational inference in discrete dynamical systems
Each training step for a variational autoencoder (VAE) requires us to sample from the approximate posterior, so we usually choose simple (e.g. factorised) approximate posteriors in which sampling is an efficient computation that fully exploits GPU parallelism. However, such simple approximate posteriors are often insufficient, as they eliminate statistical dependencies in the posterior. While it is possible to use normalizing flow approximate posteriors for continuous latents, there is nothing analogous for discrete latents. The most natural approach to model discrete dependencies is an autoregressive distribution, but sampling from such distributions is inherently sequential and thus slow. We develop a fast, parallel sampling procedure for autoregressive distributions based on fixed-point iterations which enables efficient and accurate variational inference in discrete state-space models. To optimize the variational bound, we considered two ways to evaluate probabilities: inserting the relaxed samples directly into the pmf for the discrete distribution, or converting to continuous logistic latent variables and interpreting the K-step fixed-point iterations as a normalizing flow. We found that converting to continuous latent variables gave considerable additional scope for mismatch between the true and approximate posteriors, which resulted in biased inferences; we thus used the former approach. We tested our approach on the neuroscience problem of inferring discrete spiking activity from noisy calcium-imaging data, and found that it gave accurate connectivity estimates in an order of magnitude less time.
rejected-papers
The paper presents an original approach to replace inefficient discrete autoregressive posterior sampling with a parallel sampling procedure based on fixed-point iterations, reminiscent of normalizing flows but for discrete variables. All reviewers liked the idea, and found that it was an original and promising approach. But all agreed the paper was poorly written and very unclear. All also found the experimental section lacking in clarity and scope. The authors did not provide a rebuttal. Overall a potentially really promising idea, but the paper is not yet ripe.
test
[ "Skg7fQTnnX", "BkxzQuVqnm", "HyecyNAVhQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper uses an autoregressive filtering variational approximation for parameter estimation in discrete dynamical systems. One issue that crops up with this *particular choice* of variational distribution is that (a) inference proceeds sequentially (by definition) and (b) this does not make use of parallelism i...
[ 4, 4, 7 ]
[ 3, 4, 4 ]
[ "iclr_2019_HyxOIoRqFQ", "iclr_2019_HyxOIoRqFQ", "iclr_2019_HyxOIoRqFQ" ]
iclr_2019_HyxSBh09t7
Graph Generation via Scattering
Generative networks have made it possible to generate meaningful signals such as images and texts from simple noise. Recently, generative methods based on GAN and VAE were developed for graphs and graph signals. However, the mathematical properties of these methods are unclear, and training good generative models is difficult. This work proposes a graph generation model that uses a recent adaptation of Mallat's scattering transform to graphs. The proposed model is naturally composed of an encoder and a decoder. The encoder is a Gaussianized graph scattering transform, which is robust to signal and graph manipulation. The decoder is a simple fully connected network that is adapted to specific tasks, such as link prediction, signal generation on graphs and full graph and signal generation. The training of our proposed system is efficient since it is only applied to the decoder and the hardware requirement is moderate. Numerical results demonstrate state-of-the-art performance of the proposed system for both link prediction and graph and signal generation. These results are in contrast to experience with Euclidean data, where it is difficult to form a generative scattering network that performs as well as state-of-the-art methods. We believe that this is because of the discrete and simpler nature of graph applications, unlike the more complex and high-frequency nature of Euclidean data, in particular, of some natural images.
rejected-papers
AR1 is concerned about the novelty and what the exact novel elements of the proposed approach are. AR2 is worried about the novelty (combination of existing blocks) and lack of insights. AR3 is also concerned about the novelty, complexity, and poor evaluations/lack of thorough comparisons with other baselines. After the rebuttal, the reviewers remained unconvinced, e.g. AR3 still would like to see why the proposed method would be any better than GAN-based approaches. With regret, at this point the AC cannot accept this paper, but the AC encourages the authors to take all reviews into consideration and improve their manuscript accordingly. Matters such as complexity (perhaps scattering networks aren't the most friendly here), clear insights, and strong comparisons to generative approaches are needed.
train
[ "r1lYN5pryN", "r1x35juryN", "HyeaY7H9CX", "rkeLHQr5RQ", "rJlWAWSqRX", "H1gJGbrqC7", "ByeF3skShm", "BkgwHNj93X", "r1eDvfHq2X" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "* The new uploaded version of the paper addressed your comments on the very few unclear sentences. If you have additional comments let us know. \n* Thanks for pointing to the new github page with the code of MolGAN. It was not available before submission (it was only initiated, with partial information, 3 days bef...
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "r1x35juryN", "HyeaY7H9CX", "rkeLHQr5RQ", "ByeF3skShm", "r1eDvfHq2X", "BkgwHNj93X", "iclr_2019_HyxSBh09t7", "iclr_2019_HyxSBh09t7", "iclr_2019_HyxSBh09t7" ]
iclr_2019_HyxUIj09KX
S-System, Geometry, Learning, and Optimization: A Theory of Neural Networks
We present a formal measure-theoretical theory of neural networks (NN) built on {\it probability coupling theory}. Particularly, we present an algorithm framework, Hierarchical Measure Group and Approximate System (HMGAS), nicknamed S-System, of which NNs are special cases. In addition to many other results, the framework enables us to prove that 1) NNs implement {\it renormalization group (RG)} using information geometry, which points out that the large scale property to renormalize is dual Bregman divergence and completes the analog between NNs and RG; 2) and under a set of {\it realistic} boundedness and diversity conditions, for {\it large size nonlinear deep} NNs with a class of losses, including the hinge loss, all local minima are global minima with zero loss errors, using random matrix theory.
rejected-papers
The paper is extremely difficult to read, even given that both reviewers have very strong math / theoretical backgrounds. Although it may potentially include interesting ideas, nothing in the work could be understood by the ICLR audience.
train
[ "HJehf-1Lam", "SygFw7xWpQ" ]
[ "official_reviewer", "official_reviewer" ]
[ "The paper provides a new framework \"S-System\" as a generalization of hierarchal models including neural networks. The paper shows an alternative way to derive the activation functions commonly used in practice in a principled way. It further shows that the landscape of the optimization problem of neural networks...
[ 4, 4 ]
[ 2, 1 ]
[ "iclr_2019_HyxUIj09KX", "iclr_2019_HyxUIj09KX" ]
iclr_2019_HyxhusA9Fm
Talk The Walk: Navigating Grids in New York City through Grounded Dialogue
We introduce "Talk The Walk", the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a 'guide' and a 'tourist') that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task.
rejected-papers
This paper introduces a newly collected dataset of natural language interactions between a tourist and a guide for localization and navigation. The paper also includes baseline experiments with a reasonably novel approach. The task is well motivated (although an open question remains due to GPS; see reviewer 1's comment), but the descriptions of the dataset and collection, approach, and experiments were not ideal in the first version of the paper. Much of the information was pushed to the appendix and it was hard to follow the paper without going back and forth, and even then some points were missing. The authors rewrote parts of the paper to address these concerns, but there are still some open questions. For example, is it possible to have sub-tasks, given the task is complex and may not be easy to accomplish as a whole? Or could a simple LSTM be another baseline (see the final review of the third reviewer)?
test
[ "B1xMzl-dCm", "BkxzjYxdR7", "rJlShugu07", "r1lkCfhc3m", "rklAz2j92m", "rkgGwTlF27", "BkgwVGyN5X", "Hygemsnz5Q" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thanks for your positive feedback on our work. It would be really helpful if you could elaborate on the usefulness and challenges of the new task, the introduced baselines, or the descriptions of our experiments. ", "Thank you for taking the time to review our manuscript :)\n\nWe believe there is a misinterpreta...
[ -1, -1, -1, 6, 7, 4, -1, -1 ]
[ -1, -1, -1, 4, 4, 3, -1, -1 ]
[ "rklAz2j92m", "rkgGwTlF27", "r1lkCfhc3m", "iclr_2019_HyxhusA9Fm", "iclr_2019_HyxhusA9Fm", "iclr_2019_HyxhusA9Fm", "Hygemsnz5Q", "iclr_2019_HyxhusA9Fm" ]
iclr_2019_HyxlHsActm
Efficient Dictionary Learning with Gradient Descent
Randomly initialized first-order optimization algorithms are the method of choice for solving many high-dimensional nonconvex problems in machine learning, yet general theoretical guarantees cannot rule out convergence to critical points of poor objective value. For some highly structured nonconvex problems however, the success of gradient descent can be understood by studying the geometry of the objective. We study one such problem -- complete orthogonal dictionary learning, and provide converge guarantees for randomly initialized gradient descent to the neighborhood of a global optimum. The resulting rates scale as low order polynomials in the dimension even though the objective possesses an exponential number of saddle points. This efficient convergence can be viewed as a consequence of negative curvature normal to the stable manifolds associated with saddle points, and we provide evidence that this feature is shared by other nonconvex problems of importance as well.
rejected-papers
It seems that the reviewers reached a consensus that the paper is not ready for publication in ICLR (see more details in the reviews below).
train
[ "SklG6dwsnX", "SygOCLP527", "H1emP_AJ3Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a convergence analysis for manifold gradient descent in complete dictionary learning. I have three major concerns:\n\n(1) The optimization problem for complete orthogonal dictionary learning in this paper is very different from overcomplete dictionary learning in practice. It is actually more si...
[ 5, 4, 5 ]
[ 4, 3, 2 ]
[ "iclr_2019_HyxlHsActm", "iclr_2019_HyxlHsActm", "iclr_2019_HyxlHsActm" ]
iclr_2019_HyxpNnRcFX
Modulating transfer between tasks in gradient-based meta-learning
Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task. This approach encounters difficulty when transfer is not mutually beneficial, for instance, when tasks are sufficiently dissimilar or change over time. Here, we use the connection between gradient-based meta-learning and hierarchical Bayes to propose a mixture of hierarchical Bayesian models over the parameters of an arbitrary function approximator such as a neural network. Generalizing the model-agnostic meta-learning (MAML) algorithm, we present a stochastic expectation maximization procedure to jointly estimate parameter initializations for gradient descent as well as a latent assignment of tasks to initializations. This approach better captures the diversity of training tasks as opposed to consolidating inductive biases into a single set of hyperparameters. Our experiments demonstrate better generalization on the standard miniImageNet benchmark for 1-shot classification. We further derive a novel and scalable non-parametric variant of our method that captures the evolution of a task distribution over time as demonstrated on a set of few-shot regression tasks.
rejected-papers
This paper extends the meta-learning MAML method to the mixture case. Specifically, the global parameters of the method are now modeled as a mixture. The authors also derive the elaborate associated inference for this approach. The paper is well written, although Rev2 raises some presentation issues that can surely improve the quality of the paper if addressed in depth. The results do not convince any of the three reviewers. Rev3 asks for a clearer exposition of the results to increase convincingness. Rev2 and Rev1 also make similar comments. Rev1 also questions the motivation of the approach, although the other two reviewers seem to find the approach well motivated. Although it certainly helps to prove the motivation within an application very tailored to the method, the AC weighted the opinion of all reviewers and did not consider the paper to lack in the motivation aspect. The reviewers were overall not very impressed with this paper and that does not seem to stem from lack of novelty or technical correctness. Instead, it seems that this work is rather inconclusive (or at least it is presented in an inconclusive manner): Rev1 says that the important questions (like trade-offs and other practical issues) are not answered, Rev2 suggests that maybe this paper is trying to address too much, and all three reviewers are not convinced by the experiments and derived insights. Finally, Rev2 points out some inherent caveats of the method; although they do not seem to be severe enough to undermine the overall quality of the approach, it would be instructive to have them investigated more thoroughly (even if not completely solving them).
train
[ "rkxmRTySkE", "rJgk5vxH1E", "HkxkCllryN", "HJxYmA1rJ4", "rkl1V9kry4", "H1lwg9yHkV", "ryg_PJdYCm", "rkeZMawKAm", "Bkx2s5vFRm", "BJgTUwVtCQ", "rkgvengN0m", "rJgBMFg4Rm", "rkl_0aJV0Q", "SkehYq1V0m", "H1xS9OkVRQ", "Hkeu__y4AQ", "BJxnHUyEAX", "BkxRN1aQCQ", "ryeDpxpX0m", "B1xyDeTQRQ"...
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer",...
[ " We thank the reviewer for the response. We respond to specific points below.\n\n> \"...with a known task distribution you can leverage more information in your objective and could possibly encourage greater simultaneous exploration of initialization space.\"\n\nWe reiterate that the task-agnostic setting is the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "rkeZMawKAm", "iclr_2019_HyxpNnRcFX", "rkeZMawKAm", "rkeZMawKAm", "BJgTUwVtCQ", "BJgTUwVtCQ", "Skl0gf6On7", "Skl0gf6On7", "Skl0gf6On7", "BkxRN1aQCQ", "SkehYq1V0m", "SkehYq1V0m", "SkehYq1V0m", "iclr_2019_HyxpNnRcFX", "Skl0gf6On7", "Skl0gf6On7", "S1l6hQFp3Q", "BJeOO--12Q", "BJeOO--...
iclr_2019_Hyxsl2AqKm
ON THE EFFECTIVENESS OF TASK GRANULARITY FOR TRANSFER LEARNING
We describe a DNN for video classification and captioning, trained end-to-end, with shared features, to solve tasks at different levels of granularity, exploring the link between granularity in a source task and the quality of learned features for transfer learning. For solving the new task domain in transfer learning, we freeze the trained encoder and fine-tune an MLP on the target domain. We train on the Something-Something dataset with over 220,000 videos, and multiple levels of target granularity, including 50 action groups, 174 fine-grained action categories and captions. Classification and captioning with Something-Something are challenging because of the subtle differences between actions, applied to thousands of different object classes, and the diversity of captions penned by crowd actors. Our model performs better than existing classification baselines for Something-Something, with impressive fine-grained results. And it yields a strong baseline on the new Something-Something captioning task. Experiments reveal that training with more fine-grained tasks tends to produce better features for transfer learning.
rejected-papers
This paper presents the empirical relation between the task granularity and transfer learning, when applied between video classification and video captioning. The key take away message is that more fine-grained tasks support better transfer in the case of classification---captioning transfer on 20BN-something-something dataset. Pros: The paper presents a new empirical study on transfer learning between video classification and video captioning performed on the recent 20BN-something-something dataset (220,000 videos concentrating on 174 action categories). The paper presents a lot of experimental results, albeit focused primarily on the 20BN dataset. Cons: The investigation presented by this paper on the effect of the task granularity is rather application-specific and empirical. As a result, it is unclear what generalizable knowledge or insights we gain for a broad range of other applications. The methodology used in the paper is relatively standard and not novel. Also, according to the 20BN-something-something leaderboard (https://20bn.com/datasets/something-something), the performance reported in the paper does not seem competitive compared to current state-of-the-art. There were some clarification questions raised by the reviewers but the authors did not respond. Verdict: Reject. The study presented by the paper is a bit too application-specific with relatively narrow impact for ICLR. Relatively weak novelty and empirical results.
train
[ "rkef2JUcnQ", "B1xz8jHq2X", "SklNaNgcnQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper describes a multi-task video classification and captioning model applied to a fine-grained object relationship video dataset, for a range of different classification and captioning tasks at different levels of granularity. This paper also creates a new video action dataset around kitchen objects and act...
[ 5, 5, 5 ]
[ 4, 4, 4 ]
[ "iclr_2019_Hyxsl2AqKm", "iclr_2019_Hyxsl2AqKm", "iclr_2019_Hyxsl2AqKm" ]
iclr_2019_Hyxtso0qtX
Adversarial Exploration Strategy for Self-Supervised Imitation Learning
We present an adversarial exploration strategy, a simple yet effective imitation learning scheme that incentivizes exploration of an environment without any extrinsic reward or human demonstration. Our framework consists of a deep reinforcement learning (DRL) agent and an inverse dynamics model contesting with each other. The former collects training samples for the latter, and its objective is to maximize the error of the latter. The latter is trained with samples collected by the former, and generates rewards for the former when it fails to predict the actual action taken by the former. In such a competitive setting, the DRL agent learns to generate samples that the inverse dynamics model fails to predict correctly, and the inverse dynamics model learns to adapt to the challenging samples. We further propose a reward structure that ensures the DRL agent collects only moderately hard samples and not overly hard ones that prevent the inverse model from imitating effectively. We evaluate the effectiveness of our method on several OpenAI gym robotic arm and hand manipulation tasks against a number of baseline models. Experimental results show that our method is comparable to a model directly trained with expert demonstrations, and superior to the other baselines even without any human priors.
rejected-papers
This paper proposes a method for incentivizing exploration in self-supervised learning using an inverse model, and then uses the learned inverse model for imitating an expert demonstration. The approach incentivizes the agent to visit transitions where a learned model performs poorly, which relates to prior work (e.g. [1]) but uses an inverse model instead of a forward model. The results are promising on challenging problem domains, and the method is simple. The authors have addressed several of the reviewer concerns throughout the discussion period. However, three primary concerns remain: (A) First and foremost: There has been confusion about the problem setting and the comparisons. I think these confusions have stemmed from the writing in the paper not being sufficiently clear. First, it should be made clear in the plots that the "Demos" comparison is akin to an oracle. Second, the difference between self-supervised imitation learning (IL) and traditional IL needs to be spelled out more clearly in the paper. Given that self-supervised imitation learning is not a previously established term, the problem statement needs to be clearly and formally described (and without relying heavily on prior papers). Further, the term self-supervised imitation learning does not seem to be an appropriate term, since imitation learning from an expert is, by definition, not self-supervised, as it involves supervisory information from an expert. Changing this term and clearly defining the problem would likely lead to less confusion about the method and the relevant comparisons. (B) The "Demos" comparison is meant as an upper bound on the performance of this particular approach. However, it is also important to understand what the upper bound is on these problems in general, irrespective of whether or not an inverse model is used. Training a policy with behavior cloning on demonstrations with many (s,a) pairs would be able to better provide such a comparison. 
(C) Inverse models inherently model the part of the environment that is directly controllable (e.g. the robot arm), and often do not effectively model other aspects of the environment that are only indirectly controllable (e.g. the objects). If the method overcomes this issue, then that should be discussed in the paper. Otherwise, the limitation should be outlined and discussed in more detail, including text that outlines which forms of problems and environments this approach is expected to be able to handle, and which of those it cannot handle. Generally, this paper is quite borderline, as indicated by the reviewers' scores. After going through the reviews and parts of the paper in detail, I am inclined to recommend rejection, as I think the pros do not outweigh the above concerns. One more minor comment is that the paper should consider mentioning the related work by Torabi et al. [2], which considers a similar approach in a slightly different problem setting. [1] Stadie et al. https://arxiv.org/abs/1507.00814 [2] Torabi et al. IJCAI '18 (https://arxiv.org/abs/1805.01954)
train
[ "HkeGILmSkV", "rkxrJw7HkV", "r1l6EvQS14", "SygTzvXHJV", "HJgWrZ2u0X", "BkxL-Cdha7", "SkltsaunaQ", "S1xCH6unTX", "SkxwoWYnaQ", "HJgAY-KnaX", "SJgi2xY26Q", "SJx05eY3TQ", "Syl-u4YshX", "H1l5QTm5nQ", "rkxzS_7qnm" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Here is the PDF version of our response: https://www.dropbox.com/s/fyzs56wbjyb6ykp/20181215%20Replies%20to%20Reviewer%203.pdf?dl=0 (anonymous link)\n\nThe authors appreciate the reviewer’s time and effort in reviewing this paper, and would like to respond to the questions in the following paragraphs.\n\nComm...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "HJgWrZ2u0X", "HkeGILmSkV", "SygTzvXHJV", "rkxrJw7HkV", "BkxL-Cdha7", "SkltsaunaQ", "S1xCH6unTX", "rkxzS_7qnm", "HJgAY-KnaX", "Syl-u4YshX", "SJx05eY3TQ", "H1l5QTm5nQ", "iclr_2019_Hyxtso0qtX", "iclr_2019_Hyxtso0qtX", "iclr_2019_Hyxtso0qtX" ]
iclr_2019_Hyxu6oAqYX
An Energy-Based Framework for Arbitrary Label Noise Correction
We propose an energy-based framework for correcting mislabelled training examples in the context of binary classification. While existing work addresses random and class-dependent label noise, we focus on feature-dependent label noise, which is ubiquitous in real-world data and difficult to model. Two elements distinguish our approach from others: 1) instead of relying on the original feature space, we employ an autoencoder to learn a discriminative representation and 2) we introduce an energy-based formalism for the label correction problem. We prove that a discriminative representation can be learned by training a generative model using a loss function given by the difference of energies corresponding to each class. The learned energy value for each training instance is compared to the original training labels, and contradictions between energy assignment and training label are used to correct labels. We validate our method across eight datasets, spanning synthetic and realistic settings, and demonstrate the technique's state-of-the-art label correction performance. Furthermore, we derive analytical expressions to show the effect of label noise on the gradients of empirical risk.
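The correction rule in the abstract (compare per-class energies against the training label and flip contradictions) can be sketched as below. The energy functions stand in for the trained autoencoder energies; their form, the `margin`, and all names are assumptions rather than the authors' code.

```python
import numpy as np

def correct_labels(X, y, energy_pos, energy_neg, margin=0.0):
    """Sketch of the label-correction rule from the abstract: assign
    each instance to the class whose energy is lower, and flip training
    labels that contradict this energy assignment. energy_pos/energy_neg
    stand in for trained per-class energies (e.g. autoencoder
    reconstruction errors); the margin is an assumption."""
    e_pos = np.array([energy_pos(x) for x in X])
    e_neg = np.array([energy_neg(x) for x in X])
    preferred = (e_pos + margin < e_neg).astype(int)  # 1 iff positive class has lower energy
    return np.where(preferred != y, preferred, y)

# Toy usage with distance-to-centroid as a stand-in energy
X = np.array([[0.0], [0.1], [1.0], [0.9]])
y = np.array([1, 0, 0, 1])          # second and fourth labels are flipped
e_pos = lambda x: abs(x[0] - 0.0)   # positive class centered at 0
e_neg = lambda x: abs(x[0] - 1.0)   # negative class centered at 1
print(correct_labels(X, y, e_pos, e_neg))  # → [1 1 0 0]
```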
rejected-papers
The authors present an algorithm for label noise correction when the label error is a function of the input features. Strengths - Well-motivated problem and a well-written paper. Weaknesses - The reviewers raised concerns about theoretical guarantees on generalization; it is not clear why an energy-based autoencoder / contrastive divergence would be a good measure of label accuracy, especially when the feature distribution has high variance and when there are not enough clean examples to model this distribution correctly. - Evaluations are all on toy-like tasks with small training sets, which makes it harder to gauge how well the techniques work for real-world tasks. - It’s not clear how well the algorithm can be extended to multi-class problems. The authors suggested 1-vs-all, but have no experiments or results to support the claim. The authors tried to address some of the concerns raised by the reviewers in the rebuttal, e.g., how to address the unavailability of correctly labeled data to train an autoencoder. But other concerns remain. Therefore, the recommendation is to reject the paper.
train
[ "S1eT2iLan7", "Hyg1sDgfR7", "BklviCdgRX", "SkllgyZWaX", "Bkegv0DOh7" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Need improvements\n\n[Summary]\n\nThis paper addresses the problem of correcting noisy labels for binary classification. It assumes the existence of fully clean data, trains an energy-based autoencoder using a contrastive learning objective, and uses the estimated energy to determine if a training label is corrupted or ...
[ 5, -1, -1, 5, 5 ]
[ 4, -1, -1, 4, 5 ]
[ "iclr_2019_Hyxu6oAqYX", "SkllgyZWaX", "S1eT2iLan7", "iclr_2019_Hyxu6oAqYX", "iclr_2019_Hyxu6oAqYX" ]
iclr_2019_S14g5s09tm
Unseen Action Recognition with Unpaired Adversarial Multimodal Learning
In this paper, we present a method to learn a joint multimodal representation space that allows for the recognition of unseen activities in videos. We compare the effect of placing various constraints on the embedding space using paired text and video data. Additionally, we propose a method to improve the joint embedding space using an adversarial formulation with unpaired text and video data. In addition to testing on publicly available datasets, we introduce a new, large-scale text/video dataset. We experimentally confirm that learning such shared embedding space benefits three difficult tasks (i) zero-shot activity classification, (ii) unsupervised activity discovery, and (iii) unseen activity captioning.
rejected-papers
The paper received mixed reviews. The proposed ideas are reasonable, and it shows that unpaired data can improve the performance of unseen video (action) classification tasks and other related tasks. The authors rightfully argue that the main contribution is the use of unpaired, multimodal data for learning a joint embedding (that generalizes to unseen actions) with positive results, not the use of the attentional pooling mechanism. Despite this, as Reviewer 3 points out, technical novelty seems minor, as there are quite a few papers on learning joint embeddings for multimodal data. Many of these works were evaluated in the fine-grained image classification setting, but there is no reason that such methods cannot be used here. The revision only compares against methods published in 2017 or before, so a more comprehensive evaluation would be needed to fully justify the proposed method. In addition, it seems that the proposed method has a fairly marginal gain in the generalized zero-shot learning setting. Overall, the paper can be viewed as an application paper on unseen action recognition tasks, but the technical novelty and more rigorous comparisons against recent related work are somewhat lacking. I recommend rejection due to the several concerns raised here and by the reviewers.
train
[ "H1gTDeB9Cm", "rkeimeB9RX", "ByeAR1HqA7", "SkxT6yD52m", "H1eJZN-52Q", "rkgxy1Tr2m" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the comments. We would like to address an important misunderstanding regarding the contribution of the paper. We also revised the paper to include the results with the new experimental setting suggested by the reviewer.\n\n- The contribution of the paper and comparison to the previous wor...
[ -1, -1, -1, 7, 5, 4 ]
[ -1, -1, -1, 4, 4, 5 ]
[ "rkgxy1Tr2m", "H1eJZN-52Q", "SkxT6yD52m", "iclr_2019_S14g5s09tm", "iclr_2019_S14g5s09tm", "iclr_2019_S14g5s09tm" ]
iclr_2019_S14h9sCqYm
Weakly-supervised Knowledge Graph Alignment with Adversarial Learning
Aligning knowledge graphs from different sources or languages, which aims to align both entities and relations, is critical to a variety of applications such as knowledge graph construction and question answering. Existing methods of knowledge graph alignment usually rely on a large number of aligned knowledge triplets to train effective models. However, these aligned triplets may not be available or are expensive to obtain for many domains. Therefore, in this paper we study how to design fully unsupervised or weakly supervised methods, i.e., to align knowledge graphs without aligned triplets or with only a few of them. We propose an unsupervised framework based on adversarial training, which is able to map the entities and relations in a source knowledge graph to those in a target knowledge graph. This framework can be further seamlessly integrated with existing supervised methods, where only a limited number of aligned triplets are utilized as guidance. Experiments on real-world datasets demonstrate the effectiveness of our proposed approach in both the weakly supervised and unsupervised settings.
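The unsupervised framework described in the abstract, a mapping from source-KG embeddings into the target space trained against a discriminator, follows the usual adversarial-alignment recipe. A minimal sketch of the discriminator's objective, with all shapes and names assumed (the paper's actual architecture is not reproduced here):

```python
import numpy as np

def discriminator_loss(W, src_emb, tgt_emb, w_d, eps=1e-12):
    """Sketch of an adversarial alignment objective along the lines of
    the abstract: a linear map W sends source-KG embeddings toward the
    target embedding space, while a logistic discriminator w_d tries
    to tell mapped-source vectors (label 0) from target vectors
    (label 1). The mapping is trained to maximize this loss and the
    discriminator to minimize it. All shapes/names are assumptions."""
    mapped = src_emb @ W
    logits = np.concatenate([mapped @ w_d, tgt_emb @ w_d])
    labels = np.concatenate([np.zeros(len(src_emb)), np.ones(len(tgt_emb))])
    p = 1.0 / (1.0 + np.exp(-logits))  # discriminator's P(target)
    return -float(np.mean(labels * np.log(p + eps)
                          + (1 - labels) * np.log(1 - p + eps)))
```

When a few aligned triplets are available, a supervised term on `W` can be added to this objective, matching the weakly supervised setting the abstract describes.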
rejected-papers
This paper considers an important problem of aligning two knowledge graphs (the entities and relations therein). The reviewers found the use of adversarial training quite novel and appropriate for the task, especially as it works in the unsupervised setting as well. The reviewers were also impressed that the proposed work outperforms existing approaches in terms of the accuracy of the alignment. The following potential weaknesses were raised by the reviewers and the AC: (1) Reviewer 3 brings up the fact that the hyperparameters were set differently from the original publications of the baselines, and thus is not convinced of the soundness of the results, (2) Reviewer 2 notes that the evaluation is limited, and more variations should be considered, such as varying the overlap, taking larger subsets of knowledge graphs, and going beyond TransE as the choice of embedding, and (3) Reviewer 3 notes that a simpler baseline based on alignment discrepancy should be considered, which would alleviate the need for RL-based training. Although the reviewers raised very different concerns with the paper, none of them were addressed in a response or revision, and thus they agree that the paper should be rejected.
train
[ "r1gUvP3h2Q", "r1gxn52qnQ", "SJx202Uchm", "BJgUlz6E2m", "ByxPM08V2X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "The authors propose KAGAN, a novel method for knowledge graph (KG) alignments using GANs. In contrast to most other methods, KAGAN does not rely on a supervised setting where a set of already aligned triples is used as seed. In addition, the authors propose modifications such that their method can also integrate i...
[ 5, 5, 5, -1, -1 ]
[ 3, 4, 3, -1, -1 ]
[ "iclr_2019_S14h9sCqYm", "iclr_2019_S14h9sCqYm", "iclr_2019_S14h9sCqYm", "ByxPM08V2X", "iclr_2019_S14h9sCqYm" ]
iclr_2019_S1E64jC5tm
The Forward-Backward Embedding of Directed Graphs
We introduce a novel embedding of directed graphs derived from the singular value decomposition (SVD) of the normalized adjacency matrix. Specifically, we show that, after proper normalization of the singular vectors, the distances between vectors in the embedding space are proportional to the mean commute times between the corresponding nodes by a forward-backward random walk in the graph, which follows the edges alternately in forward and backward directions. In particular, two nodes having many common successors in the graph tend to be represented by close vectors in the embedding space. More formally, we prove that our representation of the graph is equivalent to the spectral embedding of some co-citation graph, where nodes are linked with respect to their common set of successors in the original graph. The advantage of our approach is that it does not require building this co-citation graph, which is typically much denser than the original graph. Experiments on real datasets show the efficiency of the approach.
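The construction described in the abstract, an SVD of a normalized adjacency matrix with rescaled singular vectors, can be sketched as follows. The degree normalization and the scaling of the singular vectors are assumptions, since the abstract does not spell out the "proper normalization":

```python
import numpy as np

def forward_backward_embedding(A, dim=2, eps=1e-9):
    """Sketch of the embedding described in the abstract: SVD of a
    degree-normalized adjacency matrix of a directed graph. The exact
    normalization (here D_out^{-1/2} A D_in^{-1/2}, with singular
    vectors scaled by their singular values) is an assumption."""
    A = np.asarray(A, dtype=float)
    d_out = A.sum(axis=1)  # out-degrees
    d_in = A.sum(axis=0)   # in-degrees
    M = A / np.sqrt((d_out[:, None] + eps) * (d_in[None, :] + eps))
    U, s, Vt = np.linalg.svd(M)
    return U[:, :dim] * s[:dim]  # one row per node

# Toy directed graph: nodes sharing successors land near each other
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])
X = forward_backward_embedding(A, dim=2)
print(X.shape)  # (4, 2)
```

For large sparse graphs, a truncated solver such as `scipy.sparse.linalg.svds` would replace the dense SVD, which is exactly the scalability argument the abstract makes against building the denser co-citation graph explicitly.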
rejected-papers
The reviewers are unanimous in their assessment that the paper, in its current form, lacks the originality to be publishable at ICLR-2019.
train
[ "ByllbacFA7", "BJgR6tiBAX", "Hkx9C3cB0m", "SkxVfLSzaX", "r1e5PpwR37", "BkeLffqx3Q", "BJxPZpNJpX", "r1l2PlkY57", "B1lFP1IPqX" ]
[ "official_reviewer", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public" ]
[ "Thanks for the reply! I agree with your assessment concerning undirected graphs. It would strengthen your paper immensely if you had a scenario where the improvements over the 'standard' approach are clearly shown (this does not necessarily have to be something complex, though!).\n\nI also agree with your thoughts...
[ -1, -1, -1, 5, 3, 4, -1, -1, -1 ]
[ -1, -1, -1, 5, 5, 4, -1, -1, -1 ]
[ "BJgR6tiBAX", "BkeLffqx3Q", "SkxVfLSzaX", "iclr_2019_S1E64jC5tm", "iclr_2019_S1E64jC5tm", "iclr_2019_S1E64jC5tm", "r1e5PpwR37", "B1lFP1IPqX", "iclr_2019_S1E64jC5tm" ]
iclr_2019_S1G_cj05YQ
Activity Regularization for Continual Learning
While deep neural networks have achieved remarkable successes, they suffer from the well-known catastrophic forgetting issue when switching from existing tasks to tackle a new one. In this paper, we study continual learning with deep neural networks that learn from tasks arriving sequentially. We first propose an approximated multi-task learning framework that unifies a family of popular regularization-based continual learning methods. We then analyze the weakness of existing approaches, and propose a novel regularization method named “Activity Regularization” (AR), which alleviates forgetting while preserving the model’s plasticity to acquire new knowledge. Extensive experiments show that our method outperforms state-of-the-art methods and effectively overcomes catastrophic forgetting.
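As the reviews describe it, the proposed regularizer is a KL divergence between the previous (frozen) model's predictions and the current model's predictions on samples stored from earlier tasks. A minimal sketch on raw logits; the direction of the KL and all names are assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def activity_regularizer(curr_logits, prev_logits, eps=1e-12):
    """Sketch of the regularizer described in the reviews:
    KL(prev || curr) between the frozen previous model's predictive
    distribution and the current model's, averaged over samples stored
    from earlier tasks. The KL direction is an assumption."""
    p = softmax(prev_logits)  # previous model (frozen)
    q = softmax(curr_logits)  # current model
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)))
```

The total objective would then be the task loss on the new task plus a weighted `activity_regularizer` term; note that this resembles LwF's distillation loss evaluated on stored old-task samples rather than current-task inputs, which is the distinction the reviews focus on.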
rejected-papers
There is no author response for this paper. The paper presents a multi-task learning framework as a unified view on the previous methods for tackling catastrophic forgetting in continual learning. In light of this framework, the authors propose to minimize the KL-divergence between the predictions of the previous optimal model and the current model using some stored samples from the previous tasks. The consensus among all three reviewers and the AC is that the paper lacks (1) novelty, as the proposed approach is similar if not identical to Learning without Forgetting (LwF) [Li & Hoiem 2017], with the difference that the KL-divergence is computed on samples kept from the previous tasks (while LwF uses samples from the current task). A methodological and experimental comparison to LwF is crucial to assess the benefits and novelty of the proposed approach. The reviewers also address other potential weaknesses and give suggestions for improvement: (2) empirical evaluations can be substantially improved with a sensitivity analysis of the hyper-parameters on the validation data (R3), indicating errors and error bars for all results (R3 and R2), using a more challenging and realistic experimental setting where the data comes from different domains (R1), and justifying the results better -- see R2’s questions; (3) lack of clarity and motivation in Section 3.1 -- see R2’s and R1’s suggestions for how to improve clarity and potentially take advantage of the current task to possibly correct the previous model’s prediction when it was wrong. The AC suggests that, in its current state, the manuscript is not ready for publication. We hope the reviews are useful for improving and revising the paper.
train
[ "rkgFqD033X", "BkeZupMchQ", "HJlmbJG9hQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors proposed a new regularizer for continual learning to tackle the catastrophic forgetting problem. The proposed method minimizes the KL-divergence between the predictions of previous models and current models on the stored samples of previous tasks. The idea is straightforward and technically sound. Experi...
[ 4, 4, 4 ]
[ 5, 5, 4 ]
[ "iclr_2019_S1G_cj05YQ", "iclr_2019_S1G_cj05YQ", "iclr_2019_S1G_cj05YQ" ]