paper_id: string (lengths 19–21)
paper_title: string (lengths 8–170)
paper_abstract: string (lengths 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (lengths 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
iclr_2019_ByeTHsAqtX
Gradient Descent Happens in a Tiny Subspace
We show that in a variety of large-scale deep learning scenarios the gradient dynamically converges to a very small subspace after a short period of training. The subspace is spanned by a few top eigenvectors of the Hessian (equal to the number of classes in the dataset), and is mostly preserved over long periods of training. A simple argument then suggests that gradient descent may happen mostly in this subspace. We give an example of this effect in a solvable model of classification, and we comment on possible implications for optimization and learning.
rejected-papers
The paper is overall interesting and addresses an important problem; however, reviewers ask for a more rigorous empirical study and less restrictive settings.
train
[ "SJgd8-z-l4", "rygrNWk80m", "HkghkW1ICQ", "rJgLqx1I0Q", "S1gpLgJUA7", "Hke2kYg2nm", "rJesez7I3m", "BJxE_SOIhX" ]
[ "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Hi,\n\nI am trying to reconcile these findings with those of https://arxiv.org/pdf/1804.08838.pdf where they find that the intrinsic dimensionality had more to do with the input dimension than the number of classes.\n\nOne of the differences between the two works is that your subspace is not randomly chosen but I ...
[ -1, -1, -1, -1, -1, 4, 6, 4 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2019_ByeTHsAqtX", "Hke2kYg2nm", "BJxE_SOIhX", "rJesez7I3m", "iclr_2019_ByeTHsAqtX", "iclr_2019_ByeTHsAqtX", "iclr_2019_ByeTHsAqtX", "iclr_2019_ByeTHsAqtX" ]
iclr_2019_ByeWdiR5Ym
Adaptive Convolutional Neural Networks
The quest for increased visual recognition performance has led to the development of highly complex neural networks with very deep topologies. To avoid the high computing resource requirements of such complex networks and to enable operation on devices with limited resources, this paper introduces adaptive kernels for convolutional layers. Motivated by the non-linear perception response in human visual cells, the input image is used to define the weights of a dynamic kernel called an Adaptive kernel. This new adaptive kernel is used to perform a second convolution of the input image, generating the output pixel. Adaptive kernels enable accurate recognition with lower memory requirements; this is accomplished by reducing the number of kernels and the number of layers needed in the typical CNN configuration, in addition to reducing the memory used and doubling the training speed and the number of activation function evaluations. Our experiments show a reduction of 70X in the memory used for MNIST, maintaining 99% accuracy, and a 16X memory reduction for CIFAR10 with 92.5% accuracy.
rejected-papers
The paper presents a modification of the convolution layer, where the convolution weights are generated by another convolution operation. While this is an interesting idea, all reviewers felt that the evaluation and results are not particularly convincing, and the paper is not ready for acceptance.
train
[ "SygnZzm7AX", "SylUZyX7AX", "BJg8KpGm0X", "ryezQctq2X", "SkeXQDAtnX", "SylPfhAunm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. Why there is still a need to combine adaptive convolutions with regular convolutions? What would the model performance be for a model with only adaptive kernels?\n\nR)It is possible to change all the layers to use Adaptive convolutions, we replaced only one to measure the unitary contribution. We chose the firs...
[ -1, -1, -1, 5, 4, 4 ]
[ -1, -1, -1, 3, 3, 4 ]
[ "SylPfhAunm", "SkeXQDAtnX", "ryezQctq2X", "iclr_2019_ByeWdiR5Ym", "iclr_2019_ByeWdiR5Ym", "iclr_2019_ByeWdiR5Ym" ]
iclr_2019_ByecAoAqK7
Zero-shot Dual Machine Translation
Neural Machine Translation (NMT) systems rely on large amounts of parallel data. This is a major challenge for low-resource languages. Building on recent work on unsupervised and semi-supervised methods, we present an approach that combines zero-shot and dual learning. The latter relies on reinforcement learning, to exploit the duality of the machine translation task, and requires only monolingual data for the target language pair. Experiments on the UN corpus show that a zero-shot dual system, trained on English-French and English-Spanish, outperforms by large margins a standard NMT system in zero-shot translation performance on Spanish-French (both directions). We also evaluate on newstest2014. These experiments show that the zero-shot dual method outperforms the LSTM-based unsupervised NMT system proposed in (Lample et al., 2018b), on the en→fr task, while on the fr→en task it outperforms both the LSTM-based and the Transformer-based unsupervised NMT systems.
rejected-papers
This paper is essentially an application of dual learning to multilingual NMT. The results are reasonable. However, reviewers noted that the methodological novelty is minimal, and there are not a large number of new insights to be gained from the main experiments. Thus, I am not recommending the paper for acceptance at this time.
test
[ "BJx2tQC007", "ryek_zaK2Q", "SyesPDp307", "ByeKDfJ5A7", "SkgUp-J9Rm", "SyxLW-1cA7", "HJeDdxy5CQ", "BklxgQ0T2X", "rkl6FwWq3Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the quick reply!\n\n[Reply to #1 and #2] \nWe believe that pivoting and pseudo-NMTs are a simple, yet competitive baseline. However, we are currently working on adding several additional baselines from the related work.\nThe numbers you are referring to are not yet included in the paper since the traini...
[ -1, 5, -1, -1, -1, -1, -1, 4, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, 5, 3 ]
[ "SyesPDp307", "iclr_2019_ByecAoAqK7", "ByeKDfJ5A7", "ryek_zaK2Q", "rkl6FwWq3Q", "BklxgQ0T2X", "iclr_2019_ByecAoAqK7", "iclr_2019_ByecAoAqK7", "iclr_2019_ByecAoAqK7" ]
iclr_2019_ByezgnA5tm
Constraining Action Sequences with Formal Languages for Deep Reinforcement Learning
We study the problem of deep reinforcement learning where the agent's action sequences are constrained, e.g., prohibition of dithering or overactuating action sequences that might damage a robot, drone, or other physical device. Our model focuses on constraints that can be described by automata such as DFAs or PDAs. We then propose multiple approaches to augment the state descriptions of the Markov decision process (MDP) with summaries of recent action histories. We empirically evaluate these methods applying DQN to three Atari games, training with reward shaping. We found that our approaches are effective in significantly reducing, and even eliminating, constraint violations while maintaining high reward. We also observed that the total reward achieved by an agent can be highly sensitive to how much the constraints encourage or discourage exploration of potentially effective actions during training, and, in addition to helping ensure safe policies, the use of constraints can enhance exploration during training.
rejected-papers
The paper studies the problem of reinforcement learning under certain constraints on action sequences. The reviewers raised important concerns regarding (1) the general motivation, (2) the particular formulation of constraints in terms of action sequences and (3) the relevance and significance of experimental results. The authors did not submit a rebuttal. Given the concerns raised by the reviewers, I encourage the authors to improve the paper to possibly resubmit to another venue.
train
[ "rJe6YEgqR7", "BylDXuqCnQ", "SkgaHjmch7", "rJeD7ZN937" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their detailed comments, and we are using this feedback to improve the paper. ", "This paper presents an DFA-based approach to constrain certain behavior of RL agents, where \"behavior\" is defined by a sequence of actions. This approach assumes that the developer has knowledge of what...
[ -1, 5, 4, 3 ]
[ -1, 4, 3, 4 ]
[ "iclr_2019_ByezgnA5tm", "iclr_2019_ByezgnA5tm", "iclr_2019_ByezgnA5tm", "iclr_2019_ByezgnA5tm" ]
iclr_2019_ByfXe2C5tm
NLProlog: Reasoning with Weak Unification for Natural Language Question Answering
Symbolic logic allows practitioners to build systems that perform rule-based reasoning which is interpretable and which can easily be augmented with prior knowledge. However, such systems are traditionally difficult to apply to problems involving natural language due to the large linguistic variability of language. Currently, most work in natural language processing focuses on neural networks which learn distributed representations of words and their composition, thereby performing well in the presence of large linguistic variability. We propose to reap the benefits of both approaches by applying a combination of neural networks and logic programming to natural language question answering. We propose to employ an external, non-differentiable Prolog prover which utilizes a similarity function over pretrained sentence encoders. We fine-tune these representations via Evolution Strategies with the goal of multi-hop reasoning on natural language. This allows us to create a system that can apply rule-based reasoning to natural language and induce domain-specific natural language rules from training data. We evaluate the proposed system on two different question answering tasks, showing that it complements two very strong baselines – BIDAF (Seo et al., 2016a) and FASTQA (Weissenborn et al., 2017) – and outperforms both when used in an ensemble.
rejected-papers
This paper combines Prolog-like reasoning with distributional semantics, applied to natural language question answering. Given the importance of combining neural and symbolic techniques, this paper provides an important contribution. Further, the proposed method complements standard QA models as it can be easily combined with them. The reviewers and AC note the following potential weaknesses: (1) the evaluation consisted primarily of small subsets of existing benchmarks, (2) the reviewers were concerned that the handcrafted rules were introducing domain information into the model, and (3) were unconvinced that the benefits of the proposed approach were actually complementary to existing neural models. The authors addressed a number of these concerns in the response and their revision. They discussed how OpenIE affects the performance, and other questions the reviewers had. Further, they clarified that the rule templates are really high-level/generic and not "prior knowledge" as the reviewers had initially assumed. The revision also provided more error analysis, and heavily edited the paper for clarity. Although these changes increased the reviewer scores, a critical concern still remains: the evaluation is not performed on the complete question-answering benchmark, but on small subsets of the data, and the benefits are not significant. This makes the evaluation quite weak, and the authors are encouraged to identify appropriate evaluation benchmarks. There is disagreement in the reviewer scores; even though all of them identified the weak evaluation as a concern, some are more forgiving than others, partly due to the other improvements made to the paper. The AC, however, agrees with reviewer 2 that the empirical results need to be sound for this paper to have an impact, and thus is recommending a rejection. Please note that the paper was incredibly close to acceptance, but identifying appropriate benchmarks will make the paper much stronger.
train
[ "S1x2IuwcnX", "BJlraS3Ch7", "H1xQHGlYh7", "HJlfVSl8C7", "S1gEDWeLAm", "rJgkJWxI0Q", "HkgkdleUAX", "SJxXSeeUR7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "Updated after reading author revisions:\nI appreciate the clarifications, the response answered almost all of my small technical questions. That plus the new error analysis increases my opinion about the paper, and I'm no longer concerned that the rule templates are hand-generated given their generality and small...
[ 5, 7, 7, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2019_ByfXe2C5tm", "iclr_2019_ByfXe2C5tm", "iclr_2019_ByfXe2C5tm", "iclr_2019_ByfXe2C5tm", "BJlraS3Ch7", "S1x2IuwcnX", "H1xQHGlYh7", "H1xQHGlYh7" ]
iclr_2019_ByfbnsA9Km
Cross-Entropy Loss Leads To Poor Margins
Neural networks could misclassify inputs that are slightly different from their training data, which indicates a small margin between their decision boundaries and the training dataset. In this work, we study the binary classification of linearly separable datasets and show that linear classifiers could also have decision boundaries that lie close to their training dataset if cross-entropy loss is used for training. In particular, we show that if the features of the training dataset lie in a low-dimensional affine subspace and the cross-entropy loss is minimized by using a gradient method, the margin between the training points and the decision boundary could be much smaller than the optimal value. This result is contrary to the conclusions of recent related works such as (Soudry et al., 2018), and we identify the reason for this contradiction. In order to improve the margin, we introduce differential training, which is a training paradigm that uses a loss function defined on pairs of points from each class. We show that the decision boundary of a linear classifier trained with differential training indeed achieves the maximum margin. The results reveal the use of cross-entropy loss as one of the hidden culprits of adversarial examples and introduce a new direction to make neural networks robust against them.
rejected-papers
The paper challenges claims about cross-entropy loss attaining max margin when applied to a linear classifier and linearly separable data. This is important in moving forward with the development of better loss functions. The main criticism of the paper is that the results are incremental and can be easily obtained from previous work. The authors expressed certain concerns about the reviewing process. In the interest of dispelling any doubts, we collected two additional referee reports. Although one referee is positive about the paper, four other referees agree that the paper is not strong enough.
test
[ "B1gj2lXwkV", "BJxJVvt8JN", "rklW51EUy4", "S1l0f9m8JE", "BkgLGizLJV", "rJeCZjJ52m", "ByxobfCsT7", "S1l15DRl67", "r1lrQuRxpX", "HJgZ7PCga7", "BylxpBRx6Q", "ryx8tGiohX", "rJegWQkshX", "H1gqEmdSim" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "Sorry for my late reply! I've read the response, but I'm not convinced to change the rating.\n\nFor 2a) and 2b), I apologize for not making my previous comment on Section 3 very clear. I did not ignore Theorems 3,4 and Remark 3. In my original comment, I tried to use 'Further theoretical results are given explaini...
[ -1, -1, -1, 3, 4, 5, -1, -1, -1, -1, -1, 8, 5, -1 ]
[ -1, -1, -1, 4, 5, 4, -1, -1, -1, -1, -1, 3, 4, -1 ]
[ "HJgZ7PCga7", "rklW51EUy4", "BkgLGizLJV", "iclr_2019_ByfbnsA9Km", "iclr_2019_ByfbnsA9Km", "iclr_2019_ByfbnsA9Km", "S1l15DRl67", "rJeCZjJ52m", "iclr_2019_ByfbnsA9Km", "rJegWQkshX", "ryx8tGiohX", "iclr_2019_ByfbnsA9Km", "iclr_2019_ByfbnsA9Km", "iclr_2019_ByfbnsA9Km" ]
iclr_2019_BygANjA5FX
IEA: Inner Ensemble Average within a convolutional neural network
Ensemble learning is a method of combining multiple trained models to improve model accuracy. We propose the usage of such methods, specifically ensemble average, inside Convolutional Neural Network (CNN) architectures by replacing the single convolutional layers with Inner Average Ensembles (IEA) of multiple convolutional layers. Empirical results on different benchmarking datasets show that CNN models using IEA outperform those with regular convolutional layers and advance the state of the art. A visual and a similarity score analysis of the features generated from IEA explains why it boosts the model performance.
rejected-papers
The method under consideration uses parallel convolutional filter groups per layer, where activations are averaged between the groups, forming "inner ensembles". Reviewers raised a number of concerns, including the increased computational cost for apparently little performance gain, the choice of base architecture (later addressed with additional experiments using WideResNet and ResNeXt), and issues of clarity of presentation (some of which were addressed). One reviewer was unconvinced without a direct comparison to full ensembles. Another reviewer raised the issue of a missing direct comparison to the most similar method in the literature, maxout (Goodfellow et al., 2013). The authors rebutted this by claiming that maxout is difficult to implement and offering vague arguments for its inferiority to their method. The AC agrees that a maxout baseline is important here, as it is extremely close to the proposed method and also trivially implemented, and that in light of maxout (and other related methods) the degree of novelty is limited. The AC also concurs that a full ensemble baseline would strengthen the paper's claims. In the absence of either of these, the AC concurs with the reviewers that this work is not suitable for publication at this time.
train
[ "Ske_KbZhJ4", "r1ghCUOoRX", "B1exMlxcCQ", "H1xLpthtR7", "S1Q3ytntR7", "SkeqRwnF0Q", "BkgksQPZ6m", "BygZMqts3Q", "ryx_iuvFnX" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your response and the updated experiments and paper text, especially the experiments with augmentation on CIFAR.\n\nMy primary concern is the lack of comparison between your layer-wise ensembles and full-network ensembles, e.g. “(m=3, k=1)” versus “(m=1, k=3)”. This issue is coupled with the lack of a c...
[ -1, -1, -1, -1, -1, -1, 4, 2, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "S1Q3ytntR7", "SkeqRwnF0Q", "iclr_2019_BygANjA5FX", "ryx_iuvFnX", "BygZMqts3Q", "BkgksQPZ6m", "iclr_2019_BygANjA5FX", "iclr_2019_BygANjA5FX", "iclr_2019_BygANjA5FX" ]
iclr_2019_BygGNnCqKQ
Architecture Compression
In this paper we propose a novel approach to model compression termed Architecture Compression. Instead of operating on the weight or filter space of the network like classical model compression methods, our approach operates on the architecture space. A 1-D CNN encoder/decoder is trained to learn a mapping from discrete architecture space to a continuous embedding and back. Additionally, this embedding is jointly trained to regress accuracy and parameter count in order to incorporate information about the architecture's effectiveness on the dataset. During the compression phase, we first encode the network and then perform gradient descent in continuous space to optimize a compression objective function that maximizes accuracy and minimizes parameter count. The final continuous feature is then mapped to a discrete architecture using the decoder. We demonstrate the merits of this approach on visual recognition tasks such as CIFAR-10/100, FMNIST and SVHN and achieve a greater than 20x compression on CIFAR-10.
rejected-papers
The authors propose a scheme to learn a mapping from the discrete space of network architectures into a continuous embedding, and from the continuous embedding back into the space of network architectures. During the training phase, the models regress the number of parameters and the expected accuracy given the continuous embedding. Once trained, the model can be used for compression by first embedding the network structure and then performing gradient descent to maximize accuracy while minimizing the number of parameters. The optimized representation can then be mapped back into the discrete architecture space. Overall, the main idea of this work is very interesting, and the experiments show that the method has some promise. However, as was noted by the reviewers, the paper could be significantly strengthened by performing additional experiments and analyses. As such, the AC agrees with the reviewers that the paper in its present form is not suitable for acceptance, but the authors are encouraged to revise and resubmit this work to a future venue.
train
[ "H1ldz3JhyV", "Hklv4COo3X", "BJgCiR_LAX", "r1eLUCdLCm", "BJxj26d80Q", "r1lO96_IAQ", "r1gE7a1Rn7", "BJgdDYYun7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for the response.\n\nI agree with the concerns raised by the other reviewer on the experiment including reproducibility and multi-target optimization. Still, I think the point in the rebuttal is not critical enough to change my decision. That said, I do believe as long as there is enough pooling layer (...
[ -1, 4, -1, -1, -1, -1, 6, 4 ]
[ -1, 3, -1, -1, -1, -1, 4, 4 ]
[ "BJgCiR_LAX", "iclr_2019_BygGNnCqKQ", "r1gE7a1Rn7", "Hklv4COo3X", "BJgdDYYun7", "BJgdDYYun7", "iclr_2019_BygGNnCqKQ", "iclr_2019_BygGNnCqKQ" ]
iclr_2019_BygIV2CcKm
Learning to Augment Influential Data
Data augmentation is a technique to reduce overfitting and to improve generalization by increasing the number of labeled data samples through label-preserving transformations; however, it is currently conducted in a trial-and-error manner. A composition of predefined transformations, such as rotation, scaling and cropping, is performed on training samples, and its effect on performance over test samples can only be empirically evaluated and cannot be predicted. This paper considers an influence function which predicts how generalization is affected by a particular augmented training sample in terms of validation loss. The influence function provides an approximation of the change in validation loss without comparing the performance which includes and excludes the sample in the training process. A differentiable augmentation model that generalizes the conventional composition of predefined transformations is also proposed. The differentiable augmentation model and reformulation of the influence function allow the parameters of the augmentation model to be directly updated by backpropagation to minimize the validation loss. The experimental results show that the proposed method provides better generalization than conventional data augmentation methods.
rejected-papers
This paper proposes an end-to-end trainable architecture for data augmentation, by defining a parametric model for data augmentation (using spatial transformers and GANs) and optimizing validation classification error through the notion of influence functions. Experiments are reported on MNIST and CIFAR-10. This is a borderline submission. Reviewers found the theoretical framework and problem setup to be solid and promising, but were also concerned about the experimental setup and the lack of clarity in the manuscript. In particular, one would like to evaluate this model against similar baselines (e.g. Ratner et al.) on a large-scale classification problem. The AC, after taking these comments into account and making his/her own assessment, recommends rejection at this time, encouraging the authors to address the above comments and resubmit this promising work in the next conference cycle.
test
[ "BJlFiqpNgN", "rylTvkCFA7", "Bke0k1CFCQ", "rJgOoATFCm", "HkxHblgRnm", "BkgxppXT2X", "Byld6s022Q" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the detailed response. Though I partially agree with the author's responses (e.g., I do not agree with the argument about randomness), I believe this paper shows enough values to be above the bar. ", "The authors would like to thank all the reviewers for their valuable comments. There seems to be a...
[ -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "Bke0k1CFCQ", "Byld6s022Q", "BkgxppXT2X", "HkxHblgRnm", "iclr_2019_BygIV2CcKm", "iclr_2019_BygIV2CcKm", "iclr_2019_BygIV2CcKm" ]
iclr_2019_BygMAiRqK7
Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs
Building on the success of deep learning, two modern approaches to learn a probability model of the observed data are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs). VAEs consider an explicit probability model for the data and compute a generative distribution by maximizing a variational lower-bound on the log-likelihood function. GANs, however, compute a generative model by minimizing a distance between observed and generated probability distributions without considering an explicit model for the observed data. The lack of having explicit probability models in GANs prohibits computation of sample likelihoods in their frameworks and limits their use in statistical inference problems. In this work, we show that an optimal transport GAN with the entropy regularization can be viewed as a generative model that maximizes a lower-bound on average sample likelihoods, an approach that VAEs are based on. In particular, our proof constructs an explicit probability model for GANs that can be used to compute likelihood statistics within GAN's framework. Our numerical results on several datasets demonstrate consistent trends with the proposed theory.
rejected-papers
The paper's strength is that it shows the log-likelihood objective is lower-bounded by a GAN objective plus an entropy term. The theory is novel (but it seems to relate closely to the work https://arxiv.org/abs/1711.02771). The main drawbacks the reviewers raised include: a) it is not clear how tight the lower bound is; b) the theory only applies to a particular subcase of GANs --- it seems that the only reasonable instance that allows an efficient generator is the case where Y = G(x) + \xi, where \xi is Gaussian noise. The authors addressed issue a) with some new experiments with linear generators and a quadratic loss, but the paper lacks experiments with deep models, which seem necessary since this is a critical issue. Based on this, the AC decided to recommend rejection and would encourage the authors to add more experiments on the tightness of the lower bound with bigger models and to submit to other top venues.
train
[ "Hkleh_K_3Q", "B1lXCkeqAX", "SklzmcPOTQ", "H1esAdwu6X", "SJlPLdvdaQ", "SJeUt8H62X", "HygcBfTO2m" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The contribution of the paper is to show that WGAN with entropic regularization maximize a lower bound on the likelihood of the observed data distribution. While the WGAN formulation minimizes the Wasserstein distance of the transformed latent distribution and the empirical distribution which is already a nice mea...
[ 6, -1, -1, -1, -1, 5, 5 ]
[ 5, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2019_BygMAiRqK7", "iclr_2019_BygMAiRqK7", "Hkleh_K_3Q", "HygcBfTO2m", "SJeUt8H62X", "iclr_2019_BygMAiRqK7", "iclr_2019_BygMAiRqK7" ]
iclr_2019_BygNqoR9tm
Sinkhorn AutoEncoders
Optimal Transport offers an alternative to maximum likelihood for learning generative autoencoding models. We show how this principle dictates the minimization of the Wasserstein distance between the encoder aggregated posterior and the prior, plus a reconstruction error. We prove that in the non-parametric limit the autoencoder generates the data distribution if and only if the two distributions match exactly, and that the optimum can be obtained by deterministic autoencoders. We then introduce the Sinkhorn AutoEncoder (SAE), which casts the problem into Optimal Transport on the latent space. The resulting Wasserstein distance is minimized by backpropagating through the Sinkhorn algorithm. SAE models the aggregated posterior as an implicit distribution and therefore does not need a reparameterization trick for gradient estimation. Moreover, it requires virtually no adaptation to different prior distributions. We demonstrate its flexibility by considering models with hyperspherical and Dirichlet priors, as well as a simple case of probabilistic programming. SAE matches or outperforms other autoencoding models in visual quality and FID scores.
rejected-papers
The reviewers appreciated the contribution of combining Wasserstein Autoencoders with the Sinkhorn algorithm. Yet R4, as well as the author of the WAE paper (Ilya Tolstikhin), expressed concerns about the empirical evaluation. While R1-R3 were all somewhat positive in their recommendation after the rebuttal, they all have somewhat lower-confidence reviews, as is also clear from their comments. The AC decided to follow the recommendation of R4 as they were the most expert reviewer. The AC thus recommends a "revise and resubmit" for the paper.
train
[ "ryg6_-ShJN", "S1x4RtP6hQ", "SJlAP_3fAQ", "B1gZb2ky07", "Sygi8JJkCX", "SJeod5c_6X", "HJeduotupm", "r1ghcbxM6m", "S1e0fT3eaX", "rJxPy2YZp7", "BJlBQj-a3X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "public", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Authors have addressed some of my concerns, yet in most cases the response is limited to defending their positions. It is clear now for me that the paper provides deeper insights into WAE and proposes promising alternative of SAE. \n\nHowever, the empirical results are still missing and there are some major flaws:...
[ -1, 7, -1, -1, -1, 5, -1, -1, -1, 6, 7 ]
[ -1, 3, -1, -1, -1, 4, -1, -1, -1, 3, 3 ]
[ "B1gZb2ky07", "iclr_2019_BygNqoR9tm", "iclr_2019_BygNqoR9tm", "SJeod5c_6X", "HJeduotupm", "iclr_2019_BygNqoR9tm", "iclr_2019_BygNqoR9tm", "rJxPy2YZp7", "iclr_2019_BygNqoR9tm", "iclr_2019_BygNqoR9tm", "iclr_2019_BygNqoR9tm" ]
iclr_2019_BygREjC9YQ
A unified theory of adaptive stochastic gradient descent as Bayesian filtering
We formulate stochastic gradient descent (SGD) as a novel factorised Bayesian filtering problem, in which each parameter is inferred separately, conditioned on the corresponding backpropagated gradient. Inference in this setting naturally gives rise to BRMSprop and BAdam: Bayesian variants of RMSprop and Adam. Remarkably, the Bayesian approach recovers many features of state-of-the-art adaptive SGD methods, including amongst others root-mean-square normalization, Nesterov acceleration and AdamW. As such, the Bayesian approach provides one explanation for the empirical effectiveness of state-of-the-art adaptive SGD algorithms. Empirically comparing BRMSprop and BAdam with naive RMSprop and Adam on MNIST, we find that Bayesian methods have the potential to considerably reduce test loss and classification error.
rejected-papers
The aim of this paper is to interpret various optimizers such as RMSprop, Adam, and NAG, as approximate Kalman filtering of the optimal parameters. These algorithms are derived as inference procedures in various dynamical systems. The main empirical result is the algorithms achieve slightly better test accuracy on MNIST compared to an unregularized network trained with Adam or RMSprop. This was a controversial paper, and each of the reviewers had a significant back-and-forth with the authors. The controversy reflects that this is a pretty interesting and relevant topic: a proper Bayesian framework could provide significant guidance for developing better optimizers and regularizers. Unfortunately, I don't think this paper delivers on its promise of a unifying Bayesian framework for these various methods, and I don't think it's quite ready for publication at ICLR. There was some controversy about relationships to various recently published papers giving Bayesian interpretations of optimizers. The authors believe the added value of this submission is that it recovers features such as momentum and root-mean-square normalization. This would be a very interesting contribution beyond those works. But R2 and R3 feel like these particular features were derived using fairly ad-hoc assumptions or approximations almost designed to obtain existing algorithms, and from reading the paper I have to say I agree with the reviewers. There was a lot of back-and-forth about the correctness of various theoretical claims. But overall, my impression is that the theoretical arguments in this paper exceed the bar for a primarily practical/empirical paper, but aren't rigorous enough for the paper to stand purely based on the theoretical contributions. Unfortunately, the empirical part of the paper is rather lacking. The only experiment reported is on MNIST, and the only result is improved test error. 
The baseline gets below 99% test accuracy, below the level achieved by the original LeNet, suggesting the baseline may be somehow broken. Simply measuring test error doesn't really get at the benefits of Bayesian approaches, as it doesn't distinguish it from the many other regularizers that have been proposed. Since the proposed method is nearly identical to things like Adam or NAG, I don't see any reason it can't be evaluated on more challenging problems (as reviewers have asked for). Overall, while I find the ideas promising, I think the paper needs considerable work before it is ready for publication at ICLR.
train
[ "B1xjAjZcRm", "Byl_YTnU0X", "SyxpYrZLCm", "SklIUQINC7", "HJgLe01m0Q", "SkxmoLr53Q", "Hyg6_TTMCQ", "SylcBr6zRX", "rkxXskaMCQ", "SygJL2hGAX", "rJgJQhiZ07", "HklQLC5WCX", "HkxH12v-RX", "SkxIyCtaTm", "rJxLfaOlCQ", "S1epgrIt2Q", "Syx3r6dp6Q", "HJl-3AvnaX", "rkg-ctln6Q", "H1gRViDo6Q"...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "o...
[ "I am sorry for responding late -- I wrote a comment but I guess I did not upload it correctly.\n\nThank you for clarifying some of these points. I felt that the later revisions and your comments helped to clear up some of the issues I had and have thus updated my review score from 4 to 5 (actually, I did this a fe...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "Hyg6_TTMCQ", "SyxpYrZLCm", "iclr_2019_BygREjC9YQ", "HJgLe01m0Q", "iclr_2019_BygREjC9YQ", "iclr_2019_BygREjC9YQ", "rJgJQhiZ07", "rkxXskaMCQ", "HkxH12v-RX", "HklQLC5WCX", "r1lwDgEi6Q", "HkxH12v-RX", "rJxLfaOlCQ", "Syx3r6dp6Q", "BkeX5DNjTQ", "iclr_2019_BygREjC9YQ", "HJl-3AvnaX", "rkg...
iclr_2019_BygRNn0qYX
P^2IR: Universal Deep Node Representation via Partial Permutation Invariant Set Functions
Graph node representation learning is a central problem in social network analysis, aiming to learn the vector representation for each node in a graph. The key problem is how to model the dependence of each node on its neighbor nodes, since the neighborhood can uniquely characterize a graph. Most existing approaches rely on defining a specific neighborhood dependence as the computation mechanism of representations, which may exclude important subtle structures within the graph and dependence among neighbors. Instead, we propose a novel graph node embedding method (namely P^2IR) by developing a novel notion, the partial permutation invariant set function, to learn those subtle structures. Our method can 1) learn an arbitrary form of the representation function from the neighborhood, without losing any potential dependence structures, 2) automatically decide the significance of neighbors at different distances, and 3) be applicable to both homogeneous and heterogeneous graph embedding, which may contain multiple types of nodes. A theoretical guarantee for the representation capability of our method has been proved for general homogeneous and heterogeneous graphs. Evaluation results on benchmark data sets show that the proposed P^2IR outperforms the state-of-the-art approaches on producing node vectors for classification tasks.
rejected-papers
AR1 is concerned with the presentation of the paper and the complexity, as well as the missing discussion of recent embedding methods. AR2 is concerned about the comparison to recent methods and the small size of the datasets. AR3 is also concerned about limited comparisons and evaluations. Lastly, AR4 again points out the poor complexity due to the spectral decomposition. While the authors argue that sparsity can be exploited to speed up computations, AR4 still asks for results of the exact model with/without any approximation, the effect of clipping the spectrum, time complexity versus GCN, and more empirical results covering all these aspects. On balance, all reviewers seem to voice similar concerns, which need to be resolved. However, this requires more than just a minor revision of the manuscript. Thus, at this time, the proposed paper cannot be accepted.
train
[ "BJxVho0Jam", "HkxD5X1s3m", "B1eIs99237", "H1lkbOAUjm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a heterogeneous graph embedding method P^2IR. The author(s) first\nargued that such an embedding should be invariant to partial permutations of nodes.\nThen the authors gave a general formulation of such an embedding in theorem 3.1.\nThen the authors instantiated this general formulation by a n...
[ 4, 5, 7, 5 ]
[ 4, 3, 4, 5 ]
[ "iclr_2019_BygRNn0qYX", "iclr_2019_BygRNn0qYX", "iclr_2019_BygRNn0qYX", "iclr_2019_BygRNn0qYX" ]
iclr_2019_ByghKiC5YX
Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data
We present a probabilistic framework for studying adversarial attacks on discrete data. Based on this framework, we derive a perturbation-based method, Greedy Attack, and a scalable learning-based method, Gumbel Attack, that illustrate various tradeoffs in the design of attacks. We demonstrate the effectiveness of these methods using both quantitative metrics and human evaluation on various state-of-the-art models for text classification, including a word-based CNN, a character-based CNN and an LSTM. As an example of our results, we show that the accuracy of character-based convolutional networks drops to the level of random selection by modifying only five characters through Greedy Attack.
rejected-papers
I appreciate the willingness of the authors to engage in vigorous discussion about their paper. Although several reviewers support accepting this submission, I do not find their arguments for acceptance convincing. The paper considers automated methods for finding errors in text classification models. I believe it is valuable to study the errors our models make in order to understand when they work well and how to improve them. Crucially, in the latter case, we should demonstrate how to use the errors we find to close the loop and create better models. A paper about techniques to find errors for text models should make a sufficiently large contribution to be accepted. I view the following hypothetical contributions as the most salient in this specific case, thus my decision reduces to determining whether any of these conditions have been met. A paper need not achieve all of these things; any one of them would suffice: 1. Show that the errors found can be used to meaningfully improve the models. This requires building a better model than the one probed by the method and convincingly demonstrating that it is superior in an important way that is relevant to the original goals of the application. Ideally it would also consider alternative, simpler ways to improve the models (e.g. making them larger). 2. Show that errors are difficult to find, but that the proposed method is nonetheless capable of finding errors and that the method is non-obvious to a researcher in the field. This is not applicable here because errors are extremely easy to find on the test set and from labeling more data. If we demand an automated method, then the greedy algorithm does not qualify as sufficiently non-obvious and it seems to work fine, making the Gumbel method unnecessary. 3. Show that the particular specific errors found are qualitatively different from other errors in their implications and that they provide a unique and important insight. 
I do not believe this submission attempts to show this type of contribution. One example of this type of paper would be a paper that does a comparative study of the errors that different models make and finds something interesting (potentially yielding a path to improved models). 4. Generate a new, more difficult/interesting dataset by finding errors of one or more trained models. Given that the authors use human labelers to validate examples, this is potentially another path. Here is an example of a paper using adversarial techniques in this way: https://arxiv.org/abs/1808.05326 However, I believe the paper would need to be rethought and rewritten to make this sort of contribution. Ultimately, the authors and reviewers supporting acceptance must explain the contribution succinctly and convincingly. The reviewers most strongly advocating for accepting this submission seem to be saying that there is a valuable new method and probabilistic framework proposed here for finding model errors. I believe researchers in the field could have easily come up with the greedy algorithm (a standard approach to discrete optimization problems) proposed here without needing to read the paper. Furthermore, I believe the other more complicated Gumbel algorithm proposed is not necessary given the similarly effective and simpler greedy algorithm. If the authors believe that the Gumbel algorithm provides application-relevant advantages over the greedy algorithm, then they should specify how these errors will be used and rewrite the paper to make the greedy algorithm a baseline. However, I do not believe the experimental results support this idea.
test
[ "rJlr2RR-xV", "HJgu9RRWxE", "Ske46vsZeE", "r1gzkf3Zx4", "HJl-l_1-lN", "S1xyRzcxeV", "HkxPdeCh14", "r1lwJY3tyE", "BkeaGveDJV", "Syg18fj11E", "HyeQu_ns07", "BkgBBnMApX", "ryeJ9zR3pQ", "H1x-WZR2T7", "H1e_TlAhTQ", "HyxnjDahaX", "BJxl0Dtjpm", "Hkx5iwFoam", "S1x8tPKiam", "B1lXIwYsT7"...
[ "author", "author", "public", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", ...
[ "4. In terms of security, finding the error is a way to attack the model.\n\nFrom the security perspective, the ability of finding nearby errors (adversarial examples) leads to many security threats. Several important applications including attacking self-driving cars (slightly perturbed traffic sign can fool self-...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4 ]
[ "iclr_2019_ByghKiC5YX", "iclr_2019_ByghKiC5YX", "HJl-l_1-lN", "Ske46vsZeE", "S1xyRzcxeV", "Skg9KAvmTm", "r1lwJY3tyE", "BkeaGveDJV", "Syg18fj11E", "HyeQu_ns07", "iclr_2019_ByghKiC5YX", "H1e_TlAhTQ", "HyxnjDahaX", "SkxNEow7Tm", "iclr_2019_ByghKiC5YX", "BJgwIBu7am", "iclr_2019_ByghKiC5Y...
iclr_2019_BygmRoA9YQ
Mixture of Pre-processing Experts Model for Noise Robust Deep Learning on Resource Constrained Platforms
Deep learning on an edge device requires energy-efficient operation due to an ever-diminishing power budget. Intentionally low-quality data acquisition for longer battery life, and natural noise from low-cost sensors, degrade the quality of the target output, which hinders adoption of deep learning on edge devices. To overcome these problems, we propose a simple yet efficient mixture of pre-processing experts (MoPE) model to handle various image distortions, including low-resolution and noisy images. We also propose to use an adversarially trained autoencoder as a pre-processing expert for noisy images. We evaluate our proposed method on various machine learning tasks, including object detection on the MS-COCO 2014 dataset, multiple object tracking on the MOT-Challenge dataset, and human activity recognition on the UCF 101 dataset. Experimental results show that the proposed method achieves better detection, tracking and activity recognition accuracies under noise without sacrificing accuracy on clean images. The overheads of our proposed MoPE are 0.67% and 0.17% in terms of memory and computation compared to the baseline object detection network.
rejected-papers
As the reviewers point out, the paper seems to be below the ICLR publication bar due to low novelty and limited significance.
train
[ "HyxYstmF0Q", "BygKuqmYAm", "Bye-NqXt0X", "SklUqujvp7", "S1e467GThm", "B1x6GVBxhQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the valuable reviews.\n\nQ1 – Originality and significance:\n\n(Ans) In contrast to many other DL works focused on denoising or image classification on noisy images [1-2], the main contribution of this paper is to enhance the performance of object detection and its related other tasks (multiple objec...
[ -1, -1, -1, 3, 4, 4 ]
[ -1, -1, -1, 4, 5, 4 ]
[ "SklUqujvp7", "B1x6GVBxhQ", "S1e467GThm", "iclr_2019_BygmRoA9YQ", "iclr_2019_BygmRoA9YQ", "iclr_2019_BygmRoA9YQ" ]
iclr_2019_Bygre3R9Fm
DEFactor: Differentiable Edge Factorization-based Probabilistic Graph Generation
Generating novel molecules with optimal properties is a crucial step in many industries such as drug discovery. Recently, deep generative models have shown a promising way of performing de-novo molecular design. Although graph generative models are currently available, they either have a graph-size dependency in their number of parameters, limiting their use to only very small graphs, or are formulated as a sequence of discrete actions needed to construct a graph, making the output graph non-differentiable w.r.t. the model parameters and therefore preventing them from being used in scenarios such as conditional graph generation. In this work we propose a model for conditional graph generation that is computationally efficient and enables direct optimisation of the graph. We demonstrate favourable performance of our model on prototype-based molecular graph conditional generation tasks.
rejected-papers
Since the reviewers unanimously recommended rejecting this paper, I am also recommending against publication. The paper considers an interesting problem and expresses some interesting modeling ideas. However, I concur with the reviewers that a more extensive and convincing set of experiments would be important to add. Especially important would be more experiments with simple extensions of previous approaches and much simpler models designed to solve one of the tasks directly, even if it is in an ad hoc way. If we assume that we only care about results, we should first make sure these particular benchmarks are difficult (this should not be too hard to establish more convincingly if it is true) and that obvious things to try do not work well.
train
[ "rylQT5-c27", "BJl69AQcR7", "rJxhICXqRQ", "BklYpixApm", "B1glS5H76X", "HylN_T0kaX", "SklCA18chQ", "SJl1HiKwsX", "SJx7vcuDjX", "HJeEJFpIs7", "rkeqNivN5m" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "public", "author", "public" ]
[ "In this paper, authors propose a deep generative model and a variant for graph generation and conditional graph generation respectively. It exploits an encoder which is built based on GCN and GraphSAGE, a autoregressive LSTM decoder which generates the graph embedding, and a factorized edge based probabilistic mod...
[ 4, -1, -1, -1, 3, -1, 5, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, 5, -1, 3, -1, -1, -1, -1 ]
[ "iclr_2019_Bygre3R9Fm", "rJxhICXqRQ", "rylQT5-c27", "B1glS5H76X", "iclr_2019_Bygre3R9Fm", "SklCA18chQ", "iclr_2019_Bygre3R9Fm", "SJx7vcuDjX", "HJeEJFpIs7", "rkeqNivN5m", "iclr_2019_Bygre3R9Fm" ]
iclr_2019_BygrtoC9Km
Meta-Learning with Individualized Feature Space for Few-Shot Classification
Meta-learning provides a promising learning framework to address few-shot classification tasks. In existing meta-learning methods, the meta-learner is designed to learn about model optimization, parameter initialization, or similarity metrics. Differently, in this paper, we propose to learn how to create an individualized feature embedding specific to a given query image for better classification, i.e., given a query image, a specific feature embedding tailored to its characteristics is created accordingly, leading to an individualized feature space in which the query image can be more accurately classified. Specifically, we introduce a kernel generator as the meta-learner to learn to construct feature embeddings for query images. The kernel generator acquires meta-knowledge of generating adequate convolutional kernels for different query images during training, which can generalize to unseen categories without fine-tuning. On two standard few-shot classification data sets, i.e., Omniglot and \emph{mini}ImageNet, our method shows highly competitive performance.
rejected-papers
The reviewers all appreciate the idea and the competitive performance; however, the consensus is that this is a simple extension of the work of Han et al. and therefore the current submission contains little novelty. There are also numerous issues regarding clarity that the reviewers have pointed out. It is unfortunate that the authors have not engaged in discussion with the reviewers to resolve these; however, they are encouraged to consider the reviewer feedback in order to improve the paper.
train
[ "BkgHzmC92X", "SkeyaSPc3Q", "rke8hD2_3Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "# Summary\nThis work deals with few-shot learning and classification by means of similarity learning. The authors propose a method for generating a set of convolutional kernels, i.e. a mini-CNN, for a query image given a set of support samples (with samples from the same class and some other classes). Kernels are ...
[ 5, 5, 3 ]
[ 4, 4, 3 ]
[ "iclr_2019_BygrtoC9Km", "iclr_2019_BygrtoC9Km", "iclr_2019_BygrtoC9Km" ]
iclr_2019_BylBfnRqFm
CAML: Fast Context Adaptation via Meta-Learning
We propose CAML, a meta-learning method for fast adaptation that partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks. At test time, the context parameters are updated with one or several gradient steps on a task-specific loss that is backpropagated through the shared part of the network. Compared to approaches that adjust all parameters on a new task (e.g., MAML), our method can be scaled up to larger networks without overfitting on a single task, is easier to implement, and saves memory writes during training and network communication at test time for distributed machine learning systems. We show empirically that this approach outperforms MAML, is less sensitive to the task-specific learning rate, can capture meaningful task embeddings with the context parameters, and outperforms alternative partitionings of the parameter vectors.
rejected-papers
This paper proposes a meta-learning algorithm that performs gradient-based adaptation (similar to MAML) on a lower-dimensional embedding. The paper is generally well-written, and the reviewers generally agree that it has nice conceptual properties. The method also bears similarities to LEO. The main weakness of the paper concerns the strength of the experimental results. In a future version of the paper, we encourage the authors to improve the paper by introducing more complex domains or adding experiments that explicitly take advantage of the accessibility of the task embedding. Without more convincing experiments, I do not think the paper meets the bar for acceptance at ICLR.
train
[ "HyeF5PNARQ", "Byl9ch4FR7", "SyeqYsIyCQ", "SJxzujUyCX", "HklgSqLJA7", "rkgcmqIkR7", "ByeMWqLy0X", "B1xX4-Eq3m", "S1eoNpUQaQ", "B1l_Q1cgTX", "H1g5yNXKhX", "Skxk5LPL27", "HyxEtN6Ohm" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author" ]
[ "Thank you for the detailed replies, particularly regarding how CAML relates to prior work.\n\nI still have concerns about novelty and strength of experiments. Rusu et al. learn an embedding that can also be interpreted as a task encoding, and it’s not clear from the results whether the choice of parameter regressi...
[ -1, -1, -1, -1, -1, -1, -1, 6, 4, 6, 6, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 5, 4, 2, -1, -1 ]
[ "SyeqYsIyCQ", "iclr_2019_BylBfnRqFm", "SJxzujUyCX", "S1eoNpUQaQ", "B1l_Q1cgTX", "B1xX4-Eq3m", "H1g5yNXKhX", "iclr_2019_BylBfnRqFm", "iclr_2019_BylBfnRqFm", "iclr_2019_BylBfnRqFm", "iclr_2019_BylBfnRqFm", "iclr_2019_BylBfnRqFm", "Skxk5LPL27" ]
iclr_2019_BylBns0qtX
On Learning Heteroscedastic Noise Models within Differentiable Bayes Filters
In many robotic applications, it is crucial to maintain a belief about the state of a system, like the location of a robot or the pose of an object. These state estimates serve as input for planning and decision making and provide feedback during task execution. Recursive Bayesian Filtering algorithms address the state estimation problem, but they require a model of the process dynamics and the sensory observations as well as noise estimates that quantify the accuracy of these models. Recently, multiple works have demonstrated that the process and sensor models can be learned by end-to-end training through differentiable versions of Recursive Filtering methods. However, even if the predictive models are known, finding suitable noise models remains challenging. Therefore, many practical applications rely on very simplistic noise models. Our hypothesis is that end-to-end training through differentiable Bayesian Filters enables us to learn more complex heteroscedastic noise models for the system dynamics. We evaluate learning such models with different types of filtering algorithms and on two different robotic tasks. Our experiments show that especially for sampling-based filters like the Particle Filter, learning heteroscedastic noise models can drastically improve the tracking performance in comparison to using constant noise models.
rejected-papers
This paper shows experiments in favor of learning and using heteroscedastic noise models within differentiable Bayes filters. Reviewers agree that this is interesting and also very useful for the community. However, they have also found plenty of issues with the presentation, execution and evaluations shown in the paper. Post rebuttal, one reviewer increased their score, but another reduced theirs. Overall, the reviewers are in agreement that more work is required before this work can be accepted. Some of the existing work on variational inference has not been included, which, I agree, is problematic. Simple methods have been compared, but why these methods were chosen and not others is not completely clear. The paper can definitely improve on this aspect by clearly discussing relationships to many existing methods and then picking important ones to bring out useful insights about learning heteroscedastic noise. Such insights are currently missing in the paper. Reviewers have given much useful feedback in their reviews, which I believe can help the authors improve their work. In its current form, the paper is not ready to be accepted and I recommend rejection. I encourage the authors to resubmit this work.
train
[ "Bkxqg9aD27", "B1gBt34n3X", "H1x3H8msRX", "B1enjzWvCm", "SyxX8G-w0m", "rkgsGzWwRX", "rJl-GybDAQ", "H1xwa0lvC7", "SkxvJDW937" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This is a well written paper which proposes to learn heteroscedastic noise models from data by optimizing the prediction likelihood end-to-end through differentiable Bayesian Filters. In addition to existing Bayesian filters, the paper also proposes two different versions of the [differentiable] Unscented Kalman F...
[ 6, 6, -1, -1, -1, -1, -1, -1, 4 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_BylBns0qtX", "iclr_2019_BylBns0qtX", "iclr_2019_BylBns0qtX", "Bkxqg9aD27", "Bkxqg9aD27", "Bkxqg9aD27", "SkxvJDW937", "B1gBt34n3X", "iclr_2019_BylBns0qtX" ]
iclr_2019_BylRVjC9K7
Explaining Adversarial Examples with Knowledge Representation
Adversarial examples are modified samples that preserve original image structures but mislead classifiers. Researchers have put effort into developing methods for generating adversarial examples and investigating their origins. Past research paid much attention to decision boundary changes caused by these methods. This paper, in contrast, discusses the origin of adversarial examples from a more fundamental knowledge-representation point of view. Human beings can learn and classify prototypes as well as transformations of objects, while neural networks store learned knowledge in a more hybrid way, combining all prototypes and transformations as a whole distribution. Hybrid storage may lead to lower distances between different classes, so that small modifications can mislead the classifier. A one-step distribution imitation method is designed to imitate the distribution of the nearest different-class neighbor. Experiments show that simply imitating distributions from the training set, without any knowledge of the classifier, can still have an obvious impact on classification results from deep networks. This also implies that adversarial examples can take more forms than small perturbations. Potential ways of alleviating adversarial examples are discussed from the representation point of view. The first path is to change the encoding of data sent to the training step. Training data that are more prototypical can help capture more robust and accurate structural knowledge. The second path requires constructing learning frameworks with improved representations.
rejected-papers
The reviewers have agreed this paper is not ready for publication at ICLR.
train
[ "HJxxeB363X", "SJlH5W753Q", "BkxE6t383X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper discusses two ways of constructing adversarial examples (images) using PCA+knn in the input space. Compared to the litterature on adversarial examples, the modifications proposed by the authors are clearly visible to the human eye and the resulting images do not seem natural (see Figure 4 and 5). The aut...
[ 3, 3, 2 ]
[ 4, 2, 5 ]
[ "iclr_2019_BylRVjC9K7", "iclr_2019_BylRVjC9K7", "iclr_2019_BylRVjC9K7" ]
iclr_2019_Byl_ciRcY7
ON BREIMAN’S DILEMMA IN NEURAL NETWORKS: SUCCESS AND FAILURE OF NORMALIZED MARGINS
A belief has long persisted in machine learning that enlargement of margins over training data accounts for the resistance of models to overfitting by increasing their robustness. Yet Breiman shows a dilemma (Breiman, 1999): a uniform improvement on the margin distribution \emph{does not} necessarily reduce generalization error. In this paper, we revisit Breiman's dilemma in deep neural networks with recently proposed normalized margins, using a Lipschitz constant bound via spectral norm products. With both simplified theory and extensive experiments, Breiman's dilemma is shown to rely on the dynamics of normalized margin distributions, which reflects the trade-off between model expressive power and data complexity. When the complexity of data is comparable to the model expressive power, in the sense that training and test data share similar phase transitions in normalized margin dynamics, two efficient ways are derived via classic margin-based generalization bounds to successfully predict the trend of generalization error. On the other hand, over-expressive models that exhibit uniform improvements on training normalized margins may lose such predictive power and fail to prevent overfitting.
rejected-papers
The reviewers reached a consensus that the paper is not ready for publication at ICLR (see more details in the reviews below).
train
[ "rJeA1UCq2X", "rkxPipBfAQ", "H1gIu6BMCm", "BkebW6rfA7", "rke-fUAYnX", "SJgqQ5vNn7" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThe authors investigate the Breiman’s dilemma in the context of deep learning. They show generalization bounds in terms of the margin distribution. They also perform experiments showing the Breiman’s dilemma.\n\nComments: \nI am afraid the authors miss an important related paper:\n\nLev Reyzin, Robert E...
[ 4, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, 4, 3 ]
[ "iclr_2019_Byl_ciRcY7", "SJgqQ5vNn7", "rke-fUAYnX", "rJeA1UCq2X", "iclr_2019_Byl_ciRcY7", "iclr_2019_Byl_ciRcY7" ]
iclr_2019_BylctiCctX
Guiding Physical Intuition with Neural Stethoscopes
Model interpretability and systematic, targeted model adaptation present central challenges in deep learning. In the domain of intuitive physics, we study the task of visually predicting stability of block towers with the goal of understanding and influencing the model's reasoning. Our contributions are two-fold. Firstly, we introduce neural stethoscopes as a framework for quantifying the degree of importance of specific factors of influence in deep networks as well as for actively promoting and suppressing information as appropriate. In doing so, we unify concepts from multitask learning as well as training with auxiliary and adversarial losses. Secondly, we deploy the stethoscope framework to provide an in-depth analysis of a state-of-the-art deep neural network for stability prediction, specifically examining its physical reasoning. We show that the baseline model is susceptible to being misled by incorrect visual cues. This leads to a performance breakdown to the level of random guessing when training on scenarios where visual cues are inversely correlated with stability. Using stethoscopes to promote meaningful feature extraction increases performance from 51% to 90% prediction accuracy. Conversely, training on an easy dataset where visual cues are positively correlated with stability, the baseline model learns a bias leading to poor performance on a harder dataset. Using an adversarial stethoscope, the network is successfully de-biased, leading to a performance increase from 66% to 88%.
rejected-papers
This submission proposes an interesting new approach to evaluating which features are the most useful during training. The paper is interesting and the proposed approach has the potential to be deployed in many applications; however, as all reviewers note, the work as currently presented is demonstrated in a very narrow domain (stability prediction). The authors are encouraged to provide stronger experimental validation over more domains to show that their approach can truly improve over existing multitask frameworks.
train
[ "HJe-nmVXTQ", "S1eolmNX6m", "HyeOuMN7pm", "B1lChPhJpQ", "SklxtSc93Q", "rJgLrhZq2X" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and the overall positive assessment. In particular, we are delighted that you see the potential of the stethoscope framework lending itself to much broader applications. We agree - particularly for applications regarding the interpretability of deep representations as well as the manipula...
[ -1, -1, -1, 6, 4, 7 ]
[ -1, -1, -1, 4, 3, 3 ]
[ "B1lChPhJpQ", "SklxtSc93Q", "rJgLrhZq2X", "iclr_2019_BylctiCctX", "iclr_2019_BylctiCctX", "iclr_2019_BylctiCctX" ]
iclr_2019_Byldr3RqKX
Tinkering with black boxes: counterfactuals uncover modularity in generative models
Deep generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) are important tools to capture and investigate the properties of complex empirical data. However, the complexity of their inner elements makes their functioning challenging to assess and modify. In this respect, these architectures behave as black-box models. In order to better understand the function of such networks, we analyze their modularity based on the counterfactual manipulation of their internal variables. Our experiments on the generation of human faces with VAEs and GANs support that modularity between activation maps distributed over channels of generator architectures is achieved to some degree, can be used to better understand how these systems operate, and allows meaningful transformations of the generated images without further training.
rejected-papers
This paper explores an interpretation of generative models in terms of interventions on their latent variables. The overall set of ideas seems novel and potentially useful, but the presentation is unclear, the goal of the method seems poorly defined, and the qualitative results (including the videos) are unconvincing. I recommend you put work into factoring the ideas in this paper into smaller ones. For instance, definition 1 is a mess. I would also recommend the use of algorithm boxes.
train
[ "SkxGivnqR7", "Syen4wnqCX", "rkxssS29Rm", "rkeUkmCLhQ", "HygZm9oph7", "S1xBGosDh7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer 3,\nWe have rephrased the unclear sentences you pointed out, many thanks. Regarding the lack of clarity of the approach, we have considerably improved the explanation and rigorous formulation of our analysis in the revision. In particular, Definition 2 and 5 as well as equation (3) now describe in de...
[ -1, -1, -1, 4, 6, 4 ]
[ -1, -1, -1, 5, 3, 4 ]
[ "rkeUkmCLhQ", "S1xBGosDh7", "HygZm9oph7", "iclr_2019_Byldr3RqKX", "iclr_2019_Byldr3RqKX", "iclr_2019_Byldr3RqKX" ]
iclr_2019_BylkG20qYm
On Meaning-Preserving Adversarial Perturbations for Sequence-to-Sequence Models
Adversarial examples have been shown to be an effective way of assessing the robustness of neural sequence-to-sequence (seq2seq) models, by applying perturbations to the input of a model leading to large degradation in performance. However, these perturbations are only indicative of a weakness in the model if they do not change the semantics of the input in a way that would change the expected output. Using the example of machine translation (MT), we propose a new evaluation framework for adversarial attacks on seq2seq models taking meaning preservation into account and demonstrate that existing methods may not preserve meaning in general. Based on these findings, we propose new constraints for attacks on word-based MT systems and show, via human and automatic evaluation, that they produce more semantically similar adversarial inputs. Furthermore, we show that performing adversarial training with meaning-preserving attacks is beneficial to the model in terms of adversarial robustness without hurting test performance.
rejected-papers
This paper present a framework for creating meaning-preserving adversarial examples. It then proposes two attacks within this framework: one based on k-NN in the word embedding space, and another one based on character swapping. Overall, the goal of constructing such meaning-preserving attacks is very interesting. However, it is unclear how successful the proposed approach really is in the context of this goal. Additionally, it is not clear how much novelty there is compared to already existing methods that have a very similar aim.
train
[ "BJlBx-PjJV", "BJxrCaJtCX", "HylVuxePR7", "Ske4mggw0X", "BJgGtygvR7", "SJxazWdlCQ", "rJlanx6La7", "Byxb0Ssv3Q", "HJePML_8pX", "BylH3PXm6Q", "HkxhXwXQTQ", "rkxBIwX7T7", "rJlfD8QQpX", "Skexz1j3hm", "r1xFfsu9nm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors present a framework for creating meaning-preserving adversarial examples, and give two methods for such attacks. One is based on k-nn in the word embedding space, and another is based on character swapping. The authors further study a series of automatic metrics for determining whether semantic meaning...
[ 4, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 4, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_BylkG20qYm", "BJgGtygvR7", "rJlanx6La7", "SJxazWdlCQ", "HJePML_8pX", "rJlfD8QQpX", "BylH3PXm6Q", "iclr_2019_BylkG20qYm", "rkxBIwX7T7", "Byxb0Ssv3Q", "r1xFfsu9nm", "HkxhXwXQTQ", "Skexz1j3hm", "iclr_2019_BylkG20qYm", "iclr_2019_BylkG20qYm" ]
iclr_2019_Byx1VnR9K7
Trajectory VAE for multi-modal imitation
We address the problem of imitating multi-modal expert demonstrations in sequential decision making problems. In many practical applications, for example video games, behavioural demonstrations are readily available that contain multi-modal structure not captured by typical existing imitation learning approaches. For example, differences in the observed players' behaviours may be representative of different underlying playstyles. In this paper, we use a generative model to capture different emergent playstyles in an unsupervised manner, enabling the imitation of a diverse range of distinct behaviours. We utilise a variational autoencoder to learn an embedding of the different types of expert demonstrations on the trajectory level, and jointly learn a latent representation with a policy. In experiments on a range of 2D continuous control problems representative of Minecraft environments, we empirically demonstrate that our model can capture a multi-modal structured latent space from the demonstrated behavioural trajectories.
rejected-papers
The paper considers the problem of imitating multi-modal expert demonstrations using a variational auto-encoder to embed demonstrated trajectories into a structured latent space. The problem is important, and the paper is well written. The model is shown to work well on toy examples. However, as pointed out by the reviewers, given that multi-modal imitation has been studied before, the approach should have been compared both in theory and in practice to existing methods and baselines (e.g., InfoGAIL). Furthermore, the technical contribution is somewhat limited as it uses an existing model in a new application domain.
train
[ "H1l9q4gqAm", "rylVt4lq0X", "Syenv4xqR7", "SJeeDkc227", "SyxpLO9_2m", "Sye9uPhBn7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We propose a new trajectory-level VAE which is different compared with previous work. The model is an alternative fully probabilistic model to capture state sequence dependencies, which is easy to train simply by gradient descent and has a promising performance on a range of problems. We agree that further experim...
[ -1, -1, -1, 4, 4, 4 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "Sye9uPhBn7", "SyxpLO9_2m", "SJeeDkc227", "iclr_2019_Byx1VnR9K7", "iclr_2019_Byx1VnR9K7", "iclr_2019_Byx1VnR9K7" ]
iclr_2019_Byx7LjRcYm
Human Action Recognition Based on Spatial-Temporal Attention
Many state-of-the-art methods of recognizing human action are based on attention mechanism, which shows the importance of attention mechanism in action recognition. With the rapid development of neural networks, human action recognition has been achieved great improvement by using convolutional neural networks (CNN) or recurrent neural networks (RNN). In this paper, we propose a model based on spatial-temporal attention weighted LSTM. This model pays attention to the key part in each video frame, and also focuses on the important frames in each video sequence, thus the most important theme for our model is how to find out the key point spatially and the key frames temporally. We show a feasible architecture which can solve those two problems effectively and achieve a satisfactory result. Our model is trained and tested on three datasets including UCF-11, UCF-101, and HMDB51. Those results demonstrate a high performance of our model in human action recognition.
rejected-papers
Average score of 3.33, highest score of 4. The AC recommends rejection.
train
[ "HyxmR-Rhnm", "ryxVqKQ52m", "SJgAvo_t3Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "# 1. Summary\nThis paper presents a spatio-temporal attention LSTM for action recognition, where attention decides which pixels and frames are more important for classification. ConvNet features are extracted, a first layer of attention looks at the pixel level, then a second layer is applied at the temporal level...
[ 4, 3, 3 ]
[ 4, 5, 4 ]
[ "iclr_2019_Byx7LjRcYm", "iclr_2019_Byx7LjRcYm", "iclr_2019_Byx7LjRcYm" ]
iclr_2019_Byx93sC9tm
Deep Ensemble Bayesian Active Learning : Adressing the Mode Collapse issue in Monte Carlo dropout via Ensembles
In image classification tasks, the ability of deep convolutional neural networks (CNNs) to deal with complex image data has proved to be unrivalled. Deep CNNs, however, require large amounts of labeled training data to reach their full potential. In specialised domains such as healthcare, labeled data can be difficult and expensive to obtain. One way to alleviate this problem is to rely on active learning, a learning technique that aims to reduce the amount of labelled data needed for a specific task while still delivering satisfactory performance. We propose a new active learning strategy designed for deep neural networks. This method improves upon the current state-of-the-art deep Bayesian active learning method, which suffers from the mode collapse problem. We correct for this deficiency by making use of the expressive power and statistical properties of model ensembles. Our proposed method manages to capture superior data uncertainty, which translates into improved classification performance. We demonstrate empirically that our ensemble method yields faster convergence of CNNs trained on the MNIST and CIFAR-10 datasets.
rejected-papers
The reviewers in general found the paper approachable, well written and clear. They noted that the empirical observation of mode collapse in active learning was an interesting insight. However, all the reviewers had concerns with novelty, particularly in light of Lakshminarayanan et al. who also train ensembles to get a measure of uncertainty. An interesting addition to the paper might be some theoretical insight about what the model corresponds to when one ensembles multiple models from MC Dropout. One reviewer noted that it's not clear that the ensemble is capturing the desired posterior. As a note, I don't believe there is agreement in the community that MC dropout is state-of-the-art in terms of capturing uncertainty for deep neural networks, as argued in the author response (and the abstract). To the contrary, I believe a variety of papers have improved over the results from that work (e.g. see experiments in Multiplicative Normalizing Flows from over a year ago).
train
[ "ByxPYoUPy4", "rke6sOVz1V", "HkgHpnJ9Cm", "SkesRmzXR7", "H1eWGlaxRX", "BygRYc8eA7", "B1xXG3110Q", "rylwcjykCX", "rygdEjy1CX", "BJgzh7Schm", "B1g0bJk5h7", "rJgwCU_82Q" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I agree that stochastic ensemble is better than the plain ensemble empirically, but the gain is incremental and lacks theoretic support. \n\nAgain, I think the proposed method did not align with the initial goal. Your initial aim is to correct mode collapse problem in estimating posterior but the proposed method ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "B1xXG3110Q", "rygdEjy1CX", "SkesRmzXR7", "H1eWGlaxRX", "BygRYc8eA7", "rylwcjykCX", "BJgzh7Schm", "B1g0bJk5h7", "rJgwCU_82Q", "iclr_2019_Byx93sC9tm", "iclr_2019_Byx93sC9tm", "iclr_2019_Byx93sC9tm" ]
iclr_2019_ByxAOoR5K7
Policy Generalization In Capacity-Limited Reinforcement Learning
Motivated by the study of generalization in biological intelligence, we examine reinforcement learning (RL) in settings where there are information-theoretic constraints placed on the learner's ability to represent a behavioral policy. We first show that the problem of optimizing expected utility within capacity-limited learning agents maps naturally to the mathematical field of rate-distortion (RD) theory. Applying the RD framework to the RL setting, we develop a new online RL algorithm, Capacity-Limited Actor-Critic, that learns a policy that optimizes a tradeoff between utility maximization and information processing costs. Using this algorithm in a 2D gridworld environment, we demonstrate two novel empirical results. First, at high information rates (high channel capacity), the algorithm achieves faster learning and discovers better policies compared to the standard tabular actor-critic algorithm. Second, we demonstrate that agents with capacity-limited policy representations avoid 'overfitting' and exhibit superior transfer to modified environments, compared to policies learned by agents with unlimited information processing resources. Our work provides a principled framework for the development of computationally rational RL agents.
rejected-papers
The paper studies RL from a rate-distortion (RD) theory perspective. A new actor-critic algorithm is developed and evaluated on a series of 2D grid worlds. The paper has some novel ideas, and the connection of RL to RD is quite new. This seems like an interesting direction that is worth further investigation. On the other hand, all reviewers agreed there is a severe flaw in this work, casting doubt on whether RD can be directly applied in an RL setting, because the distribution is not fixed (unlike in standard RD). This issue could have been addressed empirically, by running controlled experiments, something the paper might include in a future version.
train
[ "Byxnr1pYhQ", "Skx6i98T0X", "rJeb0HVcR7", "r1gjiBV5AQ", "SJgmFB4qRX", "HyeiEgwjn7", "rke6NSZ-sX" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "(score raised from 6 to 7 after the rebuttal)\nThe paper explores the application of the rate-distortion framework to policy learning in the reinforcement learning setting. In particular, a policy that maps from states to actions is considered an information theoretic channel of limited capacity. This viewpoint pr...
[ 7, -1, -1, -1, -1, 7, 5 ]
[ 4, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2019_ByxAOoR5K7", "r1gjiBV5AQ", "rke6NSZ-sX", "Byxnr1pYhQ", "HyeiEgwjn7", "iclr_2019_ByxAOoR5K7", "iclr_2019_ByxAOoR5K7" ]
iclr_2019_ByxAcjCqt7
Point Cloud GAN
Generative Adversarial Networks (GAN) can achieve promising performance on learning complex data distributions on different types of data. In this paper, we first show that a straightforward extension of an existing GAN algorithm is not applicable to point clouds, because the constraint required for discriminators is undefined for set data. We propose a two fold modification to a GAN algorithm to be able to generate point clouds (PC-GAN). First, we combine ideas from hierarchical Bayesian modeling and implicit generative models by learning a hierarchical and interpretable sampling process. A key component of our method is that we train a posterior inference network for the hidden variables. Second, PC-GAN defines a generic framework that can incorporate many existing GAN algorithms. We further propose a sandwiching objective, which results in a tighter Wasserstein distance estimate than the commonly used dual form in WGAN. We validate our claims on the ModelNet40 benchmark dataset and observe that PC- GAN trained by the sandwiching objective achieves better results on test data than existing methods. We also conduct studies on several tasks, including generalization on unseen point clouds, latent space interpolation, classification, and image to point clouds transformation, to demonstrate the versatility of the proposed PC-GAN algorithm.
rejected-papers
Reviewers mostly recommended rejection after engaging with the authors; however, since not all author answers have been acknowledged by reviewers, I am not sure whether there are any remaining issues with the submission. I thus lean toward recommending reject and resubmit. Please take the reviewers' comments into consideration to improve your submission should you decide to resubmit.
train
[ "r1eiKIzRC7", "rJlR-Pg90X", "H1e8c5rL3Q", "BkgC-8Jc0Q", "r1eAuTlLCX", "S1x_7qgLAX", "HJg61YeLRX", "H1er4dxIRX", "SyeS7CQ83m", "H1eZXehVnX", "Bye2CkeKqQ", "BygRsWmBcQ", "BylnKVcMqQ", "ryeUxneAY7" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "1. Yes, since the training takes time, we reported the preliminary results above, but we believe it should be enough to observe the trade-off when we increase/decrease the ratio toward the extremes (upper only and lower bound only). We will make the study more complete and revise the results in the revision. \n\n2...
[ -1, -1, 5, -1, -1, -1, -1, -1, 5, 6, -1, -1, -1, -1 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1 ]
[ "rJlR-Pg90X", "S1x_7qgLAX", "iclr_2019_ByxAcjCqt7", "HJg61YeLRX", "H1eZXehVnX", "SyeS7CQ83m", "H1e8c5rL3Q", "iclr_2019_ByxAcjCqt7", "iclr_2019_ByxAcjCqt7", "iclr_2019_ByxAcjCqt7", "BygRsWmBcQ", "iclr_2019_ByxAcjCqt7", "ryeUxneAY7", "iclr_2019_ByxAcjCqt7" ]
iclr_2019_ByxF-nAqYX
Locally Linear Unsupervised Feature Selection
The paper, interested in unsupervised feature selection, aims to retain the features best accounting for the local patterns in the data. The proposed approach, called Locally Linear Unsupervised Feature Selection, relies on a dimensionality reduction method to characterize such patterns; each feature is thereafter assessed according to its compliance w.r.t. the local patterns, taking inspiration from Locally Linear Embedding (Roweis and Saul, 2000). The experimental validation of the approach on the scikit-feature benchmark suite demonstrates its effectiveness compared to the state of the art.
rejected-papers
This paper presents an LLE-based unsupervised feature selection approach. While one of the reviewers has acknowledged that the paper is well-written with clear mathematical explanations of the key ideas, it lacks a sufficiently strong theoretical foundation, as the authors have acknowledged in their responses, as well as novelty, given its tight connection to LLE. When the theoretical backbone is weak, the role of empirical results is paramount, but the paper is not convincing in that regard.
train
[ "HygWE6dxAm", "B1giAnOxCQ", "SkerO3_x07", "Hkxcj4-lTQ", "rJxCdB-dnm", "BJgZhgKDhm", "HyeU4d1UnQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "Thank you for your review.\n\n\nQ1: \"First, the result of the dimensionality reduction drastically depend on the method used.\nIt is well known that every DR method focuses on preserving certain properties of the data.\nFor instance, PCA preserves the global structure while t-SNE works locally, maximizing the rec...
[ -1, -1, -1, 4, 6, 3, -1 ]
[ -1, -1, -1, 5, 2, 5, -1 ]
[ "BJgZhgKDhm", "rJxCdB-dnm", "Hkxcj4-lTQ", "iclr_2019_ByxF-nAqYX", "iclr_2019_ByxF-nAqYX", "iclr_2019_ByxF-nAqYX", "iclr_2019_ByxF-nAqYX" ]
iclr_2019_ByxHb3R5tX
Universal Successor Features for Transfer Reinforcement Learning
Transfer in Reinforcement Learning (RL) refers to the idea of applying knowledge gained from previous tasks to solving related tasks. Learning a universal value function (Schaul et al., 2015), which generalizes over goals and states, has previously been shown to be useful for transfer. However, successor features are believed to be more suitable than values for transfer (Dayan, 1993; Barreto et al.,2017), even though they cannot directly generalize to new goals. In this paper, we propose (1) Universal Successor Features (USFs) to capture the underlying dynamics of the environment while allowing generalization to unseen goals and (2) a flexible end-to-end model of USFs that can be trained by interacting with the environment. We show that learning USFs is compatible with any RL algorithm that learns state values using a temporal difference method. Our experiments in a simple gridworld and with two MuJoCo environments show that USFs can greatly accelerate training when learning multiple tasks and can effectively transfer knowledge to new tasks.
rejected-papers
In considering the reviews and the author response, I would summarize the evaluation of the paper as following: The main idea in the paper -- to combine goal-conditioning with successor features -- is an interesting direction for research, but is somewhat incremental in light of the prior work in the area. Most of the reviewers generally agreed on this point. While a relatively incremental technical contribution could still result in a successful paper with a thorough empirical analysis and compelling results, the evaluation in the paper is unfortunately not very extensive: the provided tasks are very simple, and the difference from prior methods is not very large. All of the tasks are equivalent to either grid worlds or reaching, which are very simple. Without a deeper technical contribution or a more extensive empirical evaluation, I do not think the paper is ready for publication in ICLR.
train
[ "SJxab7-DlN", "S1xvb_FYxN", "BJxfPRLUgE", "H1gc-aXshm", "Skl17Kw7g4", "Byl9e_uzxN", "B1eu52GGx4", "Hyx0GKC3JN", "SJx1orD4J4", "S1gu-BwN1N", "r1lXpKtc0Q", "r1eieYYqAm", "SkxU9_YqR7", "S1gyhwFc0X", "HyxSyWWGA7", "H1gYhUfITQ", "rkxVB9fBaX", "ByeGy5GBTX", "SJe7jR_gam", "BJeFxqwC3Q"...
[ "public", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I have noted some interesting facts about the proposed USF architectures by the Authors. I will point it down since I also wants to know weather I am correct. \n\n1. Previous methods of SF-RL mainly based on DQN architectures and they not highlight any zero shot transfer learning ability. \n\n2. Previous networks...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2019_ByxHb3R5tX", "BJxfPRLUgE", "B1eu52GGx4", "iclr_2019_ByxHb3R5tX", "Byl9e_uzxN", "iclr_2019_ByxHb3R5tX", "Hyx0GKC3JN", "r1eieYYqAm", "H1gYhUfITQ", "HyxSyWWGA7", "iclr_2019_ByxHb3R5tX", "H1gc-aXshm", "SJe7jR_gam", "BJeFxqwC3Q", "ByeGy5GBTX", "rkxVB9fBaX", "H1gc-aXshm", "SJe...
iclr_2019_ByxLl309Ym
Conditional Inference in Pre-trained Variational Autoencoders via Cross-coding
Variational Autoencoders (VAEs) are a popular generative model, but one in which conditional inference can be challenging. If the decomposition into query and evidence variables is fixed, conditional VAEs provide an attractive solution. To support arbitrary queries, one is generally reduced to Markov Chain Monte Carlo sampling methods that can suffer from long mixing times. In this paper, we propose an idea we term cross-coding to approximate the distribution over the latent variables after conditioning on an evidence assignment to some subset of the variables. This allows generating query samples without retraining the full VAE. We experimentally evaluate three variations of cross-coding showing that (i) can be quickly optimized for different decompositions of evidence and query and (ii) they quantitatively and qualitatively outperform Hamiltonian Monte Carlo.
rejected-papers
This paper proposes to approximate arbitrary conditional distributions of a pre-trained VAE using variational inference. The paper is technically sound and clearly written. A few variants of the inference network are also compared and evaluated in experiments. The main problems of the paper are as follows: 1. The motivation for training an inference network for a fixed decoder is not well explained. 2. The application of VI is standard, and offers limited novelty or significance for the proposed method. 3. The introduction of the new term cross-coding is not necessary and does not bring new insights beyond a standard VI method. The authors argued in the feedback that the central contribution is using augmented VI to do conditional inference, similar to Rezende et al., but didn't address the reviewers' main concerns. I encourage the authors to incorporate the reviewers' comments in a future revision, and to explain how the proposed method brings a significant contribution, either by addressing a real problem or by improving VI methodology.
train
[ "SklMscm3CX", "SkxXa4gKRX", "S1xb5Tak6Q", "S1xh9yst37", "rkl3pcrM2X" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your feedback.\n\nMy score of 4 was mainly due to the lack of original contribution. The paper is technically sound, clearly written and interesting to read, but the inference methods discussed are already known and well-understood in the variational-inference community. There isn't anything in the auth...
[ -1, -1, 4, 4, 4 ]
[ -1, -1, 4, 4, 5 ]
[ "SkxXa4gKRX", "iclr_2019_ByxLl309Ym", "iclr_2019_ByxLl309Ym", "iclr_2019_ByxLl309Ym", "iclr_2019_ByxLl309Ym" ]
iclr_2019_ByxZdj09tX
FROM DEEP LEARNING TO DEEP DEDUCING: AUTOMATICALLY TRACKING DOWN NASH EQUILIBRIUM THROUGH AUTONOMOUS NEURAL AGENT, A POSSIBLE MISSING STEP TOWARD GENERAL A.I.
Contrary to most reinforcement learning studies, which emphasize on training a deep neural network to approximate its output layer to certain strategies, this paper proposes a reversed method for reinforcement learning. We call this “Deep Deducing”. In short, after adequately training a deep neural network according to a strategy-environment-to-payoff table, then we initialize randomized strategy input and propagate the error between the actual output and the desired output back to the initially-randomized strategy input in the “input layer” of the trained deep neural network gradually to perform a task similar to “human deduction”. And we view the final strategy input in the “input layer” as the fittest strategy for a neural network when confronting the observed environment input from the world outside.
rejected-papers
The paper presents "deep deducing", which means learning the state-action value function of 2 player games from a payoff table, and using the value function by maximizing over the (actionable) inputs at test time. The paper lacks clarity overall. The method does not contain any new model nor algorithm. The experiments are too weak (easy environments, few/no comparisons) to support the claims. The paper is not ready for publication at this time.
train
[ "Byg8FCttCQ", "BkgCQeuUaQ", "Hke-a03-6m", "HJx6WXTa2m" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a method of searching for a Nash equilibrium strategy in games where the strategy-to-payoff mapping is defined by a neural network. The idea is to perform gradient optimization of the payoff w.r.t. the strategy. Preliminary results on tic-tac-toe and variations of the prisoner’s dilemma task are...
[ 3, -1, 2, 4 ]
[ 3, -1, 4, 5 ]
[ "iclr_2019_ByxZdj09tX", "Hke-a03-6m", "iclr_2019_ByxZdj09tX", "iclr_2019_ByxZdj09tX" ]
iclr_2019_ByxkCj09Fm
DEEP HIERARCHICAL MODEL FOR HIERARCHICAL SELECTIVE CLASSIFICATION AND ZERO SHOT LEARNING
Object recognition in real-world image scenes is still an open problem. With the growing number of classes, the similarity structures between them become complex and the distinction between classes blurs, which makes the classification problem particularly challenging. Standard N-way discrete classifiers treat all classes as disconnected and unrelated, and therefore unable to learn from their semantic relationships. In this work, we present a hierarchical inter-class relationship model and train it using a newly proposed probability-based loss function. Our hierarchical model provides significantly better semantic generalization ability compared to a regular N-way classifier. We further proposed an algorithm where given a probabilistic classification model it can return the input corresponding super-group based on classes hierarchy without any further learning. We deploy it in two scenarios in which super-group retrieval can be useful. The first one, selective classification, deals with the problem of low-confidence classification, wherein a model is unable to make a successful exact classification. The second, zero-shot learning problem deals with making reasonable inferences on novel classes. Extensive experiments with the two scenarios show that our proposed hierarchical model yields more accurate and meaningful super-class predictions compared to a regular N-way classifier because of its significantly better semantic generalization ability.
rejected-papers
The paper proposes to take into account the label structure for classification tasks, instead of using a flat N-way softmax. This also leads to a zero-shot setting that considers novel classes. Reviewers point to a lack of references to prior work and comparisons. The authors have tried to justify their choices, but the overall sentiment is that the work lacks novelty with respect to previous approaches. All reviewers recommend to reject, and so do I.
train
[ "rkeQcM9y6m", "B1lPUv5yTm", "ByemSkfqCQ", "HJe5zaIiCX", "B1l57E9OAQ", "rJeu1Ensnm", "BygR5oLw27", "rygXQEFS2m" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1) Clarify the novelties of the article in the abstract and in introduction too.\n2) Put more related work (main part)\n3) Add another experiment in section 5.3 ZERO SHOT LEARNING on bigger zero-shot dataset called 3-hops relative to the 2-hop dataset.\n4) add more conclusions and future work.\n5) Improve grammar...
[ -1, -1, -1, -1, -1, 4, 5, 2 ]
[ -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2019_ByxkCj09Fm", "rygXQEFS2m", "BygR5oLw27", "rJeu1Ensnm", "iclr_2019_ByxkCj09Fm", "iclr_2019_ByxkCj09Fm", "iclr_2019_ByxkCj09Fm", "iclr_2019_ByxkCj09Fm" ]
iclr_2019_ByxmXnA9FQ
A Variational Dirichlet Framework for Out-of-Distribution Detection
With the recent rapid development in deep learning, deep neural networks have been widely adopted in many real-life applications. However, deep neural networks are also known to have very little control over their uncertainty for test examples, which potentially causes very harmful and annoying consequences in practical scenarios. In this paper, we are particularly interested in designing a higher-order uncertainty metric for deep neural networks and investigate its performance on the out-of-distribution detection task proposed by~\cite{hendrycks2016baseline}. Our method first assumes there exists an underlying higher-order distribution P(z), which generated label-wise distribution P(y) over classes on the K-dimension simplex, and then approximate such higher-order distribution via parameterized posterior function pθ(z|x) under variational inference framework, finally we use the entropy of learned posterior distribution pθ(z|x) as uncertainty measure to detect out-of-distribution examples. However, we identify the overwhelming over-concentration issue in such a framework, which greatly hinders the detection performance. Therefore, we further design a log-smoothing function to alleviate such issue to greatly increase the robustness of the proposed entropy-based uncertainty measure. Through comprehensive experiments on various datasets and architectures, our proposed variational Dirichlet framework with entropy-based uncertainty measure is consistently observed to yield significant improvements over many baseline systems.
rejected-papers
The paper proposes a new framework for out-of-distribution detection, based on variational inference and a prior Dirichlet distribution. The reviewers and AC note the following potential weaknesses: (1) arguable and not well justified choices of parameters and (2) performance degradation under many classes (e.g., CIFAR-100). For (2), the authors mentioned that this is because "there are more than 20% of misclassified test images". But the AC rather views it as a limitation of the proposed approach. The out-of-distribution detection problem is a one- or two-class classification task, independent of how many classes exist in the neural classifier. Overall, the proposed idea is interesting and makes sense, but the AC decided that the authors need to do more significant work before the paper can be published.
train
[ "SkxJntDG1V", "r1lPrDdk1E", "HylthcXjA7", "Skx98khBCX", "r1gofqwBR7", "H1gx-5vSAQ", "HJx_JxfNRQ", "rJx-4Lt6nX", "HJghUGPw2Q", "rkxmjBjz3Q", "Byl7LG6Vi7", "S1eBylRZiX", "Syxt4Id3cm", "HyxxfMDscX" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "1. My design is inspired by \"Evidential Deep Learning to Quantify Classification Uncertainty\" (equation 9). Maybe it is better to revise it back to the concentration \"clipping\" version so that the uniform prior is not dependent on the data.\n\n2. Due to the regularization, most of the concentration parameters ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, -1, -1, -1, -1 ]
[ "r1lPrDdk1E", "Skx98khBCX", "iclr_2019_ByxmXnA9FQ", "rkxmjBjz3Q", "HJghUGPw2Q", "HJghUGPw2Q", "rJx-4Lt6nX", "iclr_2019_ByxmXnA9FQ", "iclr_2019_ByxmXnA9FQ", "iclr_2019_ByxmXnA9FQ", "S1eBylRZiX", "iclr_2019_ByxmXnA9FQ", "HyxxfMDscX", "iclr_2019_ByxmXnA9FQ" ]
iclr_2019_Byxr73R5FQ
Successor Options : An Option Discovery Algorithm for Reinforcement Learning
Hierarchical Reinforcement Learning is a popular method to exploit temporal abstractions in order to tackle the curse of dimensionality. The options framework is one such hierarchical framework that models the notion of skills or options. However, learning a collection of task-agnostic transferable skills is a challenging task. Option discovery typically entails using heuristics, the majority of which revolve around discovering bottleneck states. In this work, we adopt a method complementary to the idea of discovering bottlenecks. Instead, we attempt to discover ``landmark'' sub-goals, which are prototypical states of well connected regions. These sub-goals are points from which a densely connected set of states is easily accessible. We propose a new model called Successor Options that leverages Successor Representations to achieve this. We also design a novel pseudo-reward for learning the intra-option policies. Additionally, we describe an Incremental Successor Options model that iteratively builds options and explores in environments where exploration through primitive actions is inadequate to form the Successor Representations. Finally, we demonstrate the efficacy of our approach on a collection of grid worlds and on complex high dimensional environments like Deepmind-Lab.
rejected-papers
Pros: - simple, sensible subgoal discovery method - strong intuitions, visualizations - detailed rebuttal, 15 appendix sections Cons: - moderate novelty - lack of ablations - assessments don't back up all claims - ill-justified/mismatching design decisions - inefficiency due to relying on a random policy in the first phase There is consensus among the reviewers that the paper is not quite good enough, and should be (borderline) rejected.
train
[ "B1et7uIGyV", "BkgI4DIz1E", "SkekOG7cA7", "ryxPblrK0m", "HyxOpOYm07", "ByxqybYGA7", "Bkeydt9fAm", "HJxsEFcMAQ", "SJlOZYFGA7", "SJeyQWBI67", "r1g-1y95nm", "BJlo6uzc3m", "BJlZtQO4sX", "rkgf6rTQjm", "rJg0XY0WoX" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "Thank you for going through our work and providing valuable feedback. We hopefully address the concerns and questions raised in this response and we would be happy to expand on any point unclear in this response.\n\n1. The reward used to train the options:\n One way to understand the proposed reward function is to...
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 5, 6, 4, -1, -1, -1 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, 4, 4, 5, -1, -1, -1 ]
[ "SkekOG7cA7", "SkekOG7cA7", "iclr_2019_Byxr73R5FQ", "HyxOpOYm07", "SJlOZYFGA7", "r1g-1y95nm", "BJlo6uzc3m", "BJlo6uzc3m", "SJeyQWBI67", "iclr_2019_Byxr73R5FQ", "iclr_2019_Byxr73R5FQ", "iclr_2019_Byxr73R5FQ", "rkgf6rTQjm", "rJg0XY0WoX", "iclr_2019_Byxr73R5FQ" ]
iclr_2019_Byxz4n09tQ
Model Compression with Generative Adversarial Networks
More accurate machine learning models often demand more computation and memory at test time, making them difficult to deploy on CPU- or memory-constrained devices. Model compression (also known as distillation) alleviates this burden by training a less expensive student model to mimic the expensive teacher model while maintaining most of the original accuracy. However, when fresh data is unavailable for the compression task, the teacher's training data is typically reused, leading to suboptimal compression. In this work, we propose to augment the compression dataset with synthetic data from a generative adversarial network (GAN) designed to approximate the training data distribution. Our GAN-assisted model compression (GAN-MC) significantly improves student accuracy for expensive models such as deep neural networks and large random forests on both image and tabular datasets. Building on these results, we propose a comprehensive metric—the Compression Score—to evaluate the quality of synthetic datasets based on their induced model compression performance. The Compression Score captures both data diversity and discriminability, and we illustrate its benefits over the popular Inception Score in the context of image classification.
rejected-papers
The authors propose a scheme to compress models using student-teacher distillation, where training data are augmented using examples generated from a conditional GAN. The reviewers were generally in agreement 1) that the experimental results generally support the claims made by the authors, and 2) that the paper is clearly written and easy to follow. However, the reviewers also raised a number of concerns: 1) that the experiments were conducted on small-scale tasks, 2) that the use of the compression score might be impractical since it would require retraining a compressed model, and is affected by the effectiveness of the compression algorithm, which is an additional confounding factor. The authors in their rebuttal address concern 2) by noting that the student training was not too expensive, but I believe that this cost is task specific. Overall, I think concern 1) is significant, and the AC agrees with the reviewers that an evaluation of the techniques on large-scale datasets would strengthen the paper.
test
[ "SJgkBBmkkV", "HyxInbRhRm", "HJgHTcROhQ", "B1lCL19nAQ", "HyxcNcq5hm", "ryl2dVuqAQ", "H1gl10wcA7", "BJeSQhv9Am", "BJgyrccuh7" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thank you for your responses and your suggestions to further improve the paper; we include some additional clarifications below.\n\n-Practicality of compression score\n\nThe compression score is quite practical to compute as only one epoch of training is conducted (this is stated and motivated in Sec. 5.1, but in ...
[ -1, -1, 6, -1, 5, -1, -1, -1, 5 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, 4 ]
[ "HyxcNcq5hm", "B1lCL19nAQ", "iclr_2019_Byxz4n09tQ", "ryl2dVuqAQ", "iclr_2019_Byxz4n09tQ", "HJgHTcROhQ", "BJgyrccuh7", "HyxcNcq5hm", "iclr_2019_Byxz4n09tQ" ]
iclr_2019_H1ERcs09KQ
Hierarchically Clustered Representation Learning
The joint optimization of representation learning and clustering in the embedding space has experienced a breakthrough in recent years. In spite of the advance, clustering with representation learning has been limited to flat-level categories, which oftentimes involves cohesive clustering with a focus on instance relations. To overcome the limitations of flat clustering, we introduce hierarchically clustered representation learning (HCRL), which simultaneously optimizes representation learning and hierarchical clustering in the embedding space. Specifically, we place a nonparametric Bayesian prior on embeddings to handle dynamic mixture hierarchies under the variational autoencoder framework, and adopt the generative process of a hierarchical-versioned Gaussian mixture model. Compared with a few prior works focusing on unifying representation learning and hierarchical clustering, HCRL is the first model to consider a generation of deep embeddings from every component of the hierarchy, not just leaf components. This generation process enables more meaningful separations and mergers of clusters via branches in a hierarchy. In addition to obtaining hierarchically clustered embeddings, we can reconstruct data at various abstraction levels, infer the intrinsic hierarchical structure, and learn the level-proportion features. We conducted evaluations on image and text domains, and our quantitative analyses showed competitive likelihoods and the best accuracies compared with the baselines.
rejected-papers
While this was a borderline paper, all reviewers share concerns about the novelty and significance of the presented work, and no reviewer was willing to argue for acceptance. The work has many good points, and a stronger case on these issues would greatly strengthen the paper overall. I look forward to a future submission.
test
[ "H1ejpCZuT7", "HJgnkeMd67", "H1xXoJM_a7", "BJg7xJMuT7", "HJx9wwdJaQ", "HyedILfT2X", "BkgQFESc37" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "[Q] The novelty over the nCRP-VAE approach of Goyal et al. (2017) is pretty minor. The main difference seems to be that the model can select clusters at different levels, but I didn't quite get the intuition for why this should be desirable.\n[A1] The reviewers addressed one of the main differentiated features bet...
[ -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "HJx9wwdJaQ", "BkgQFESc37", "HyedILfT2X", "HJx9wwdJaQ", "iclr_2019_H1ERcs09KQ", "iclr_2019_H1ERcs09KQ", "iclr_2019_H1ERcs09KQ" ]
iclr_2019_H1GLm2R9Km
Learning Backpropagation-Free Deep Architectures with Kernels
One can substitute each neuron in any neural network with a kernel machine and obtain a counterpart powered by kernel machines. The new network inherits the expressive power and architecture of the original but works in a more intuitive way, since each node enjoys the simple interpretation as a hyperplane (in a reproducing kernel Hilbert space). Further, using the kernel multilayer perceptron as an example, we prove that in classification, an optimal representation that minimizes the risk of the network can be characterized for each hidden layer. This result removes the need for backpropagation in learning the model and can be generalized to any feedforward kernel network. Moreover, unlike backpropagation, which turns models into black boxes, the optimal hidden representation enjoys an intuitive geometric interpretation, making the dynamics of learning in a deep kernel network simple to understand. Empirical results are provided to validate our theory.
rejected-papers
The reviewers mostly raised three concerns regarding the paper: a) why this algorithm is more interpretable than BP (which is just gradient descent); b) the exposition of the paper is somewhat confusing in various places; c) the lack of large-scale experimental results to show this is practically relevant. In the AC's opinion, a principled kernel-based approach can be counted as interpretable, and the AC would support the paper if a) were the only concern. However, c) seems to be a serious concern, since the paper doesn't seem to have experiments beyond Fashion-MNIST (e.g., CIFAR is pretty easy to train these days) and doesn't have experiments with convolutional models. Based on c), the AC decided that the paper is not quite ready for acceptance.
val
[ "rkgdyqCweV", "HkeHkKRPlE", "H1xiK8H4kE", "H1xZy96tnQ", "ryejatHZaQ", "BJxxS8z9Am", "BJxwGCYLpX", "HJeGi9_cTQ", "SJgTm7sJRm", "B1x5SQXKAQ", "rJgDEhHWT7", "SkeGvhSW67", "rklkp6H-6X", "Byebf0BbT7", "rJeHa0rZ6Q", "SJgeY0rZTX", "ryg8TpKL67", "H1evQ5_96X", "r1epFXo1Rm", "BJl_zmmKCm"...
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "auth...
[ "\nDear Reviewer 3,\n\nHello! We really hope we have fully addressed your concerns in our earlier reply and it would be great if you could give us some feedback. Do you think we have fully addressed your concerns? In particular, if you think our newly-added experimental results could validate the practicality of ou...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "rklAnxYt2X", "rJelxUjc2X", "H1xZy96tnQ", "iclr_2019_H1GLm2R9Km", "iclr_2019_H1GLm2R9Km", "iclr_2019_H1GLm2R9Km", "rklAnxYt2X", "rklAnxYt2X", "rklAnxYt2X", "rklAnxYt2X", "rklAnxYt2X", "rklAnxYt2X", "H1xZy96tnQ", "H1xZy96tnQ", "H1xZy96tnQ", "H1xZy96tnQ", "H1xZy96tnQ", "H1xZy96tnQ", ...
iclr_2019_H1GaLiAcY7
Learning to Separate Domains in Generalized Zero-Shot and Open Set Learning: a probabilistic perspective
This paper studies the problem of domain division, which aims to segment instances drawn from different probabilistic distributions. This problem exists in many previous recognition tasks, such as Open Set Learning (OSL) and Generalized Zero-Shot Learning (G-ZSL), where the testing instances come from either seen or unseen/novel classes with different probabilistic distributions. Previous works only calibrate the confident predictions of classifiers of seen classes (WSVM, Scheirer et al. (2014)) or take unseen classes as outliers (Socher et al. (2013)). In contrast, this paper proposes a probabilistic way of directly estimating and fine-tuning the decision boundary between seen and unseen classes. In particular, we propose a domain division algorithm to split the testing instances into known, unknown and uncertain domains, and then conduct recognition tasks in each domain. Two statistical tools, namely bootstrapping and the Kolmogorov-Smirnov (K-S) test, are introduced for the first time to uncover and fine-tune the decision boundary of each domain. Critically, the uncertain domain is newly introduced in our framework to accommodate those instances whose domain labels cannot be predicted confidently. Extensive experiments demonstrate that our approach achieved state-of-the-art performance on OSL and G-ZSL benchmarks.
rejected-papers
AR1 finds the paper overly lengthy and ill-focused on the contributions of this work. Moreover, AR1 would like to see more results for G-ZSL. AR2 finds the paper lacking in clarity, e.g. Eq. 9, and a complete definition of the end-to-end decision pipeline is missing. AR2 points out that the manuscript relies on GZSL and comparisons to it, but other more recent methods could also be cited: - Generalized Zero-Shot Learning via Synthesized Examples by Verma et al. - Zero-Shot Kernel Learning by Zhang et al. - Model Selection for Generalized Zero-shot Learning by Zhang et al. - Generalized Zero-Shot Learning with Deep Calibration Network by Liu et al. - Multi-modal Cycle-consistent Generalized Zero-Shot Learning by Felix et al. - Open Set Learning with Counterfactual Images - Feature Generating Networks for Zero-Shot Learning The authors are, though, welcome to find even more relevant papers on Google Scholar. Overall, AC finds the paper interesting and finds the idea has some merits. Nonetheless, two reviewers maintained their scores below borderline due to the numerous worries highlighted above. The authors are encouraged to work on the presentation of this method and comparisons to more recent papers where possible. AC encourages the authors to re-submit their improved manuscript as, at this time, it feels this paper is not ready and cannot be accepted to ICLR.
train
[ "rkxngom51E", "Bygaxssw0X", "B1gaozD7JN", "HyezP08mk4", "HJe_4X7mJN", "SkgMwhsvR7", "HJexLoow0m", "BkljyQb63X", "H1gbL-Hqh7", "Bke3HWKwi7", "S1xlMpxe3X", "rygw026YoX", "Hyg4TiKdsQ", "SyleQWiIsQ" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "Dear reviewers,\n\nIn order to make this work more complete and convincing, we have revised our paper on the following parts:\n(1) add the recent state-of-the-art baselines;\n(2) implement our algorithm on ``Feature Generating Networks for Zero-Shot Learning’’ to prove the generalization ability of domain division...
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2019_H1GaLiAcY7", "BkljyQb63X", "Hyg4TiKdsQ", "HJe_4X7mJN", "HJexLoow0m", "H1gbL-Hqh7", "Bke3HWKwi7", "iclr_2019_H1GaLiAcY7", "iclr_2019_H1GaLiAcY7", "iclr_2019_H1GaLiAcY7", "rygw026YoX", "Hyg4TiKdsQ", "SyleQWiIsQ", "iclr_2019_H1GaLiAcY7" ]
iclr_2019_H1Gfx3Rqtm
End-to-End Hierarchical Text Classification with Label Assignment Policy
We present an end-to-end reinforcement learning approach to hierarchical text classification where documents are labeled by placing them at the right positions in a given hierarchy. While existing “global” methods construct hierarchical losses for model training, they either make “local” decisions at each hierarchy node or ignore the hierarchy structure during inference. To close the gap between training/inference and optimize holistic metrics in an end-to-end manner, we propose to learn a label assignment policy to determine where to place the documents and when to stop. The proposed method, HiLAP, optimizes holistic metrics over the hierarchy, makes inter-dependent decisions during inference, and can be combined with different text encoding models for end-to-end training. Experiments on three public datasets show that HiLAP yields an average improvement of 33.4% in Macro-F1 and 5.0% in Samples-F1, outperforming state-of-the-art methods by a large margin.
rejected-papers
This paper presents a reinforcement learning approach to hierarchical text classification. Pros: A potentially interesting idea to drive the search process over a hierarchical set of labels using reinforcement learning. Cons: The major consensus among all reviewers was that there were various concerns about experimental results, e.g., apples-to-apples comparisons against prior art (R1), proper tuning of hyper-parameters (R1, R2), the label space is too small (539) to have practical significance compared to the tens of thousands of labels that have been used in other related work (R3), and other missing baselines (R3). In addition, even after the rebuttal, some of the technical clarity issues have not been fully resolved, e.g., what the proposed method is actually doing (optimizing the F1 metric vs the ability to fix the inconsistent labeling problem). Verdict: Reject. While the authors came back with many detailed responses, they were not enough to address the major concerns reviewers had about the empirical significance of this work.
train
[ "H1ekqPbzkN", "HJxgQvZzJN", "r1gU3ZzaC7", "ryxHNgf6C7", "rygbzifGA7", "rkx7a5GGCm", "H1gPFFMfC7", "r1xXfFzf0m", "BJeYVNKAhX", "rJxlOb6o2m", "HkeXkVvo27", "rJgxyLuWoX" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "We very much appreciate you for pointing out this unclear answer. To further clarify our responses:\n\nFirst, “label consistency” is guaranteed by our proposed label assignment policy, e.g., if a label is assigned to the document then its ancestor label must be also assigned. We apply RL to learn such a policy net...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, -1 ]
[ "ryxHNgf6C7", "r1gU3ZzaC7", "rygbzifGA7", "rkx7a5GGCm", "rJxlOb6o2m", "rJxlOb6o2m", "HkeXkVvo27", "BJeYVNKAhX", "iclr_2019_H1Gfx3Rqtm", "iclr_2019_H1Gfx3Rqtm", "iclr_2019_H1Gfx3Rqtm", "iclr_2019_H1Gfx3Rqtm" ]
iclr_2019_H1M7soActX
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Minima and Regularization Effects
Understanding the behavior of stochastic gradient descent (SGD) in the context of deep neural networks has attracted a lot of attention recently. Along this line, we theoretically study a general form of gradient-based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. Through investigating this general optimization dynamics, we analyze the behavior of SGD on escaping from minima and its regularization effects. A novel indicator is derived to characterize the efficiency of escaping from minima through measuring the alignment of the noise covariance and the curvature of the loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in terms of escaping efficiency. We further show that the anisotropic noise in SGD satisfies the two conditions, and thus helps to escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of position-dependent noise.
rejected-papers
The reviewers point out concerns regarding the paper's novelty, theoretical soundness, and empirical strength. The authors provided clarifications to the reviewers.
train
[ "H1lJasFPnX", "BJgWpYA53Q", "ByxZzQI9hX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies the benefit of an anisotropic gradient covariance matrix in SGD optimization for training deep network in terms of escaping sharp minima (which has been discussed to correlate with poor generalization in recent literature). \n\nIn order to do so, SGD is studied as a discrete approximation of stoc...
[ 5, 4, 6 ]
[ 4, 5, 3 ]
[ "iclr_2019_H1M7soActX", "iclr_2019_H1M7soActX", "iclr_2019_H1M7soActX" ]
iclr_2019_H1MBuiAqtX
Unicorn: Continual learning with a universal, off-policy agent
Some real-world domains are best characterized as a single task, but for others this perspective is limiting. Instead, some tasks continually grow in complexity, in tandem with the agent's competence. In continual learning there are no explicit task boundaries or curricula. As learning agents have become more powerful, continual learning remains one of the frontiers that has resisted quick progress. To test continual learning capabilities we consider a challenging 3D domain with an implicit sequence of tasks and sparse rewards. We propose a novel agent architecture called Unicorn, which demonstrates strong continual learning and outperforms several baseline agents on the proposed domain. The agent achieves this by jointly representing and efficiently learning multiple policies for multiple goals, using a parallel off-policy learning setup.
rejected-papers
The authors present an interesting approach but there were multiple significant concerns with the clarity of the presentation, and some concern with the significance of the experimental results.
test
[ "HklM7Z5BCm", "Syx6uecB07", "SJelJx5SAQ", "Hkg9rFtpnm", "r1lL3QQ5n7", "H1e3_hEfh7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the feedback.\n\n*Presentation:*\n“- The writing is a bit dense in places, e.g., the discussion of baselines is a bit hard to read.\n- Description of algorithm is wrapped in long text, a clear algorithm box would make the approach much clearer.”\n\n> Thank you for the feedback. We have in...
[ -1, -1, -1, 4, 5, 6 ]
[ -1, -1, -1, 5, 4, 4 ]
[ "H1e3_hEfh7", "r1lL3QQ5n7", "Hkg9rFtpnm", "iclr_2019_H1MBuiAqtX", "iclr_2019_H1MBuiAqtX", "iclr_2019_H1MBuiAqtX" ]
iclr_2019_H1MzKs05F7
Adversarial Vulnerability of Neural Networks Increases with Input Dimension
Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. For most current network architectures, we prove that the L1-norm of these gradients grows as the square root of the input size. These nets therefore become increasingly vulnerable with growing image size. Our proofs rely on the network’s weight distribution at initialization, but extensive experiments confirm that our conclusions still hold after usual training.
rejected-papers
This paper suggests that adversarial vulnerability scales with the dimension of the input of neural networks, and supports this hypothesis theoretically and experimentally. The work is well-written, and all of the reviewers appreciated the easy-to-read and clear nature of the theoretical results, including the assumptions and limitations. (The AC did not consider the criticisms raised by Reviewer 3 justified. The norm-bound perturbations considered here are a sufficiently interesting unsolved problem in the community and a clear prerequisite to solving the broader network robustness problem.) However, many of the reviewers also agreed that the theoretical assumptions - and, in particular, the random initialization of the weights - greatly oversimplify the problem. Reviewers point out that the lack of data dependence and only considering the norm of the gradient considerably limit the significance of the corresponding theoretical results, and also do not properly address the issue of gradient masking.
train
[ "SJg9Wd9j1N", "rkxEtwcsyN", "r1lEwQ1XJV", "Hyl2CLpMyV", "HklcFfmhR7", "BkeQknMiRQ", "SylH2yQsCX", "H1xnecPrC7", "rklddGv7A7", "S1eK78EXR7", "rklD0Zcn67", "B1lARktMRm", "S1lSrptgAm", "SkgnHhtlAQ", "SJxgihtxRm", "HJemastx07", "rkgD7oKgCm", "rkxRoqKxRm", "H1gaUtVATX", "B1e9VFERpQ"...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ "We thank all reviewers for their comments and implication in the review process and seize this opportunity for a final recap on our paper and its contributions.\n\nFirst, our work re-emphasises the strong link between small adversarial perturbations and gradient norms. Although this link was known, many still beli...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 9, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "iclr_2019_H1MzKs05F7", "Hyl2CLpMyV", "Hyl2CLpMyV", "S1eK78EXR7", "iclr_2019_H1MzKs05F7", "SkgnHhtlAQ", "H1xnecPrC7", "rklddGv7A7", "rkgD7oKgCm", "B1lARktMRm", "iclr_2019_H1MzKs05F7", "HJemastx07", "BJgK8oPUh7", "H1gaUtVATX", "H1eawIzc3X", "rklD0Zcn67", "rkxRoqKxRm", "iclr_2019_H1M...
iclr_2019_H1V4QhAqYQ
Augment your batch: better training with larger batches
Recently, there has been regained interest in large-batch training of neural networks, in both theory and practice. New insights and methods have allowed certain models to be trained using large batches with no adverse impact on performance. Most works focused on accelerating wall-clock training time by modifying the learning rate schedule, without introducing accuracy degradation. We propose to use large-batch training to boost accuracy and accelerate convergence by combining it with data augmentation. Our method, "batch augmentation", suggests using multiple instances of each sample in the same large batch. We show empirically that this simple yet effective method improves convergence and final generalization accuracy. We further suggest possible reasons for its success.
rejected-papers
The authors propose to use large batch training of neural networks, where each batch contains multiple augmentations of each sample. The experiments demonstrate that this leads to better performance compared to training with small batches. However, as noted by Reviewers 2 and 3, the experiments do not convincingly show where the improvement comes from. Considering that the described technique is very simplistic, having an extensive ablation study and comparison to the strong baselines is essential. The rebuttal didn’t address the reviewers' concerns, and they argue for rejection.
train
[ "Bkxz6RFD27", "HyxXFQr93Q", "H1e8jGltRm", "BygBHGetAX", "rkxiJGxFRQ", "HJxBD9g52X", "SygfJScCh7", "r1xfC5BZ57", "HygOo-VbqQ", "B1gG-GNW5m", "Hyx56MEb9X", "BylUSs_3Y7", "ByxmMenAFX", "SyxdsQZAKm", "Bkgdfp03FX", "Syg9ycvsY7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "public", "author", "author", "author", "author", "author", "public", "public", "public", "public" ]
[ "The paper shows that training with large batch size (e.g., with MxB samples) serves as an effective regularization method for deep networks, thus improving the convergence and generalization accuracy of the models. The enlarged batch of MxB consists of multiple (i.e., B) transforms of each of the M samples from th...
[ 4, 4, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_H1V4QhAqYQ", "iclr_2019_H1V4QhAqYQ", "HyxXFQr93Q", "HJxBD9g52X", "Bkxz6RFD27", "iclr_2019_H1V4QhAqYQ", "Bkxz6RFD27", "ByxmMenAFX", "iclr_2019_H1V4QhAqYQ", "Bkgdfp03FX", "SyxdsQZAKm", "Syg9ycvsY7", "iclr_2019_H1V4QhAqYQ", "BylUSs_3Y7", "BylUSs_3Y7", "iclr_2019_H1V4QhAqYQ" ]
iclr_2019_H1e0-30qKm
Unlabeled Disentangling of GANs with Guided Siamese Networks
Disentangling underlying generative factors of a data distribution is important for interpretability and generalizable representations. In this paper, we introduce two novel disentangling methods. Our first method, Unlabeled Disentangling GAN (UD-GAN, unsupervised), decomposes the latent noise by generating similar/dissimilar image pairs and it learns a distance metric on these pairs with siamese networks and a contrastive loss. This pairwise approach provides consistent representations for similar data points. Our second method (UD-GAN-G, weakly supervised) modifies the UD-GAN with user-defined guidance functions, which restrict the information that goes into the siamese networks. This constraint helps UD-GAN-G to focus on the desired semantic variations in the data. We show that both our methods outperform existing unsupervised approaches in quantitative metrics that measure semantic accuracy of the learned representations. In addition, we illustrate that simple guidance functions we use in UD-GAN-G allow us to directly capture the desired variations in the data.
rejected-papers
The paper received mixed reviews. It proposes a variant of the Siamese network objective function, which is interesting. However, it's unclear if the performance of the unguided method is much better than other baselines (e.g., InfoGAN). The guided version of the method seems to require much domain-specific knowledge and design of the feature function, which makes the method difficult to apply to broader cases.
val
[ "SJlv8jwWlE", "HyeZp_ivTQ", "Syg5BZ_3yV", "H1xX2I782m", "HkgCEG12kV", "ryxmbDb7J4", "HJlG9UWmk4", "H1xlkUt20X", "S1lR3oO8TX", "B1xWrHZPRQ", "H1emnxgIRX", "HJl6Oxl8A7", "HklxQglURX", "rkemAkgIRQ", "BJxPm1eLRX", "HyxHQfLq3m" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Thank you again for your review.\n\nUD-GAN and other unsupervised techniques already capture and disentangle various attributes in a given dataset. In this paper, our guided approach (UD-GAN-G) complements the unsupervised literature by offering a simple way to further disentangle some of the spuriously correlated...
[ -1, 5, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "Syg5BZ_3yV", "iclr_2019_H1e0-30qKm", "rkemAkgIRQ", "iclr_2019_H1e0-30qKm", "HJlG9UWmk4", "B1xWrHZPRQ", "H1xlkUt20X", "H1emnxgIRX", "iclr_2019_H1e0-30qKm", "HklxQglURX", "H1xX2I782m", "HyxHQfLq3m", "S1lR3oO8TX", "HyeZp_ivTQ", "iclr_2019_H1e0-30qKm", "iclr_2019_H1e0-30qKm" ]
iclr_2019_H1e572A5tQ
TarMAC: Targeted Multi-Agent Communication
We explore the collaborative multi-agent setting where a team of deep reinforcement learning agents attempt to solve a shared task in partially observable environments. In this scenario, learning an effective communication protocol is key. We propose a communication protocol that allows for targeted communication, where agents learn what messages to send and who to send them to. Additionally, we introduce a multi-stage communication approach where the agents co-ordinate via several rounds of communication before taking an action in the environment. We evaluate our approach on several cooperative multi-agent tasks, of varying difficulties with varying number of agents, in a variety of environments ranging from 2D grid layouts of shapes and simulated traffic junctions to complex 3D indoor environments. We demonstrate the benefits of targeted as well as multi-stage communication. Moreover, we show that the targeted communication strategies learned by the agents are quite interpretable and intuitive.
rejected-papers
The reviewers raised a number of concerns, including the lack of clarity of various parts of the paper, lack of explanation, incremental novelty, and insufficiently demonstrated significance of the proposed approach. The authors’ rebuttal addressed some of the reviewers’ concerns, but not fully. Overall, I believe that the paper presents some interesting extensions for multi-agent communication, but in its current form the paper lacks explanations, comparisons and discussions. Hence, I cannot recommend this paper for presentation at ICLR.
train
[ "HyxYwslR0X", "H1ezrqgRCm", "rygy0x6aRm", "Bklshep6RX", "H1xbixp6Rm", "B1g8KsS5Am", "S1e-enr50Q", "HklSMiBqA7", "SJlVpcB50m", "r1lZZ5r5RQ", "HklTK_ChTX", "BJgU41z93X", "S1lmskpu27", "S1xvVXP_nm" ]
[ "author", "public", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Yes, we think targeted communication implies targeting in both directions. Just the receiver deciding who to listen to would be targeted listening. Just the sender deciding who to send messages to would be targeted speaking/broadcasting. What we have is targeted two-way communication.", "1) To the best of my und...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "H1ezrqgRCm", "r1lZZ5r5RQ", "S1xvVXP_nm", "S1lmskpu27", "BJgU41z93X", "S1xvVXP_nm", "S1xvVXP_nm", "S1lmskpu27", "BJgU41z93X", "HklTK_ChTX", "iclr_2019_H1e572A5tQ", "iclr_2019_H1e572A5tQ", "iclr_2019_H1e572A5tQ", "iclr_2019_H1e572A5tQ" ]
iclr_2019_H1e6ij0cKQ
EFFICIENT SEQUENCE LABELING WITH ACTOR-CRITIC TRAINING
Neural approaches to sequence labeling often use a Conditional Random Field (CRF) to model their output dependencies, while Recurrent Neural Networks (RNN) are used for the same purpose in other tasks. We set out to establish RNNs as an attractive alternative to CRFs for sequence labeling. To do so, we address one of the RNN’s most prominent shortcomings, the fact that it is not exposed to its own errors with the maximum-likelihood training. We frame the prediction of the output sequence as a sequential decision-making process, where we train the network with an adjusted actor-critic algorithm (AC-RNN). We comprehensively compare this strategy with maximum-likelihood training for both RNNs and CRFs on three structured-output tasks. The proposed AC-RNN efficiently matches the performance of the CRF on NER and CCG tagging, and outperforms it on Machine Transliteration. We also show that our training strategy is significantly better than other techniques for addressing RNN’s exposure bias, such as Scheduled Sampling, and Self-Critical policy training.
rejected-papers
This is an interesting approach to using reinforcement learning to replace the CRF for sequence tagging, which would potentially be beneficial when the tag set is gigantic. Unfortunately, the conducted experiments do not really show this, which makes it difficult to see whether the proposed approach is indeed a viable alternative to the CRF for sequence tagging with a large tag set. This sentiment was shared by all the reviewers, and R1 especially pointed out major and minor issues with the submission and was not convinced by the authors' response.
test
[ "ByeZJ0I_C7", "SygQFc2_pX", "BkgkqFBq3Q", "H1e59mLw27", "HJxbDOnEnm", "rJxxaVYCh7", "rygn_-YR37" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "\"- Our main goal is to replace CRF in tagging, especially for tasks with large number of labels. `\"\n\nBut then you should look at tagging tasks which is what (neural) CRF approaches are good for; transliteration, as pointed out in the introduction of the paper is not such a task. \n\n- \"Combining MLE and RL re...
[ -1, -1, 5, 4, 4, -1, -1 ]
[ -1, -1, 4, 3, 5, -1, -1 ]
[ "SygQFc2_pX", "HJxbDOnEnm", "iclr_2019_H1e6ij0cKQ", "iclr_2019_H1e6ij0cKQ", "iclr_2019_H1e6ij0cKQ", "BkgkqFBq3Q", "H1e59mLw27" ]
iclr_2019_H1e8wsCqYX
Laplacian Networks: Bounding Indicator Function Smoothness for Neural Networks Robustness
For the past few years, Deep Neural Network (DNN) robustness has become a question of paramount importance. As a matter of fact, in sensitive settings misclassification can lead to dramatic consequences. Such misclassifications are likely to occur when facing adversarial attacks, hardware failures or limitations, and imperfect signal acquisition. To address this question, authors have proposed different approaches aiming at increasing the robustness of DNNs, such as adding regularizers or training using noisy examples. In this paper we propose a new regularizer built upon the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DNN architecture. This regularizer penalizes large changes (across consecutive layers in the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. Since it is agnostic to the type of deformations that are expected when predicting with the DNN, the proposed regularizer can be combined with existing ad-hoc methods. We provide theoretical justification for this regularizer and demonstrate its effectiveness to improve robustness of DNNs on classical supervised learning vision datasets.
rejected-papers
The paper proposes a new graph-based regularizer to improve the robustness of deep nets. The idea is to encourage smoothness on a graph built on the features at different layers. Experiments on CIFAR-10 show that the method provides robustness over very different types of perturbations such as adversarial examples or quantization. The reviewers raised concerns around the significance of the results, the reliance on a single dataset and the unexplained link between adversarial examples and the regularization. Despite the revision, the reviewers maintain their concerns. For this reason this work is not ready for publication.
train
[ "HkxS_FPa1N", "Bkxk6rS90X", "HylwcBS50X", "Byxdh60KAQ", "Hklc1NHM0m", "HkgHAZrzCX", "BkxDHGSzCX", "S1lZvmrGAm", "B1e-Sv1a6m", "HylHP_15Tm", "SklOai40nX", "rJxOvPC63m", "BJlgl3WHhQ" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We are sorry for the delay, we have the results using the code from the provided github (which we will be uploading to github as soon as the ICLR review process is over). \n\nWe compared the two networks provided in the madrylab github (secret and adv_trained) with 2 experiments using our regularizer with m=8 and ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "Byxdh60KAQ", "Byxdh60KAQ", "Byxdh60KAQ", "HkgHAZrzCX", "HylHP_15Tm", "BJlgl3WHhQ", "rJxOvPC63m", "SklOai40nX", "iclr_2019_H1e8wsCqYX", "iclr_2019_H1e8wsCqYX", "iclr_2019_H1e8wsCqYX", "iclr_2019_H1e8wsCqYX", "iclr_2019_H1e8wsCqYX" ]
iclr_2019_H1eH4n09KX
Adversarial Audio Super-Resolution with Unsupervised Feature Losses
Neural network-based methods have recently demonstrated state-of-the-art results on image synthesis and super-resolution tasks, in particular by using variants of generative adversarial networks (GANs) with supervised feature losses. Nevertheless, previous feature loss formulations rely on the availability of large auxiliary classifier networks, and labeled datasets that enable such classifiers to be trained. Furthermore, there has been comparatively little work to explore the applicability of GAN-based methods to domains other than images and video. In this work we explore a GAN-based method for audio processing, and develop a convolutional neural network architecture to perform audio super-resolution. In addition to several new architectural building blocks for audio processing, a key component of our approach is the use of an autoencoder-based loss that enables training in the GAN framework, with feature losses derived from unlabeled data. We explore the impact of our architectural choices, and demonstrate significant improvements over previous works in terms of both objective and perceptual quality.
rejected-papers
The paper presents an algorithm for audio super-resolution using adversarial models along with additional losses, e.g., auto-encoders and reconstruction losses, to improve the generation process. Strengths: (1) it proposes audio super-resolution based on GANs, extending some of the techniques proposed for vision/image tasks to audio; (2) the authors improved the paper during the review process by including results from a user study and an ablation analysis. Weaknesses: (1) although the paper presents an interesting application of GANs to the audio task, overall novelty is limited, since the setup closely follows what has been done for vision and related tasks as well as the baseline system; this is also not the first application of GANs to audio tasks. (2) The performance improvement over previously proposed (U-Net) models is small; it would have been useful to also include UNet4 in the user study, as one of the reviewers pointed out, since it sounds better in a few cases. (3) It is not entirely clear whether the method would be an improvement over state-of-the-art audio generative models such as WaveNet. Reviewers agree that the general direction of this work is interesting, but the results are not compelling enough at the moment for the paper to be accepted to ICLR. Given these review comments, the recommendation is to reject the paper.
train
[ "ByxtWRRtRX", "r1gbtNucCm", "rJe_yVk5Am", "Hyg0QWJq0X", "H1xfEekcAX", "BklQ2J15Rm", "rJx81Y1ph7", "BJgxdni52X", "BJxtJ80O2m" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the reviewers’ detailed feedback and thoughtful questions. We have responded to each review independently, and summarize changes to the manuscript here. \n\nMajor changes include the addition of a qualitative user study, a model ablation analysis, comparisons against an off-the-shelf speech classifie...
[ -1, -1, -1, -1, -1, -1, 4, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2019_H1eH4n09KX", "rJe_yVk5Am", "BJxtJ80O2m", "BJgxdni52X", "BklQ2J15Rm", "rJx81Y1ph7", "iclr_2019_H1eH4n09KX", "iclr_2019_H1eH4n09KX", "iclr_2019_H1eH4n09KX" ]
iclr_2019_H1eMBn09Km
Using GANs for Generation of Realistic City-Scale Ride Sharing/Hailing Data Sets
This paper focuses on the synthetic generation of human mobility data in urban areas. We present a novel and scalable application of Generative Adversarial Networks (GANs) for modeling and generating human mobility data. We leverage actual ride requests from ride sharing/hailing services from four major cities in the US to train our GAN model. Our model captures the spatial and temporal variability of the ride-request patterns observed for all four cities on any typical day and over any typical week. Previous works have succinctly characterized the spatial and temporal properties of human mobility data sets using the fractal dimensionality and the densification power law, respectively, which we utilize to validate our GAN-generated synthetic data sets. Such synthetic data sets can avoid privacy concerns and be extremely useful for researchers and policy makers on urban mobility and intelligent transportation.
rejected-papers
While the reviewers all agree that this paper proposes an interesting application of GANs, they would like to see clearer explanations of the technical details, more convincing evaluations, and better justifications of the assumptions and practical values of the proposed algorithms.
train
[ "BylCid5VyV", "B1l1garq0X", "HyxzQNH9CX", "SklwxdMcAm", "S1log2v6nQ", "SyezMqrt37", "SyeIpvuunX" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for clearing up some of the issues in the paper. I appreciate that you want to keep this paper 'simple' in terms of model and do other work in the future. However, I feel that you would need to do some of these other items to get this paper to a higher level for a top conference such as this one.", "[Than...
[ -1, -1, -1, -1, 4, 5, 5 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "B1l1garq0X", "SyeIpvuunX", "SyezMqrt37", "S1log2v6nQ", "iclr_2019_H1eMBn09Km", "iclr_2019_H1eMBn09Km", "iclr_2019_H1eMBn09Km" ]
iclr_2019_H1eRBoC9FX
Unsupervised Meta-Learning for Reinforcement Learning
Meta-learning is a powerful tool that learns how to quickly adapt a model to new tasks. In the context of reinforcement learning, meta-learning algorithms can acquire reinforcement learning procedures to solve new problems more efficiently by meta-learning prior tasks. The performance of meta-learning algorithms critically depends on the tasks available for meta-training: in the same way that supervised learning algorithms generalize best to test points drawn from the same distribution as the training points, meta-learning methods generalize best to tasks from the same distribution as the meta-training tasks. In effect, meta-reinforcement learning offloads the design burden from algorithm design to task design. If we can automate the process of task design as well, we can devise a meta-learning algorithm that is truly automated. In this work, we take a step in this direction, proposing a family of unsupervised meta-learning algorithms for reinforcement learning. We describe a general recipe for unsupervised meta-reinforcement learning, and describe an effective instantiation of this approach based on a recently proposed unsupervised exploration technique and model-agnostic meta-learning. We also discuss practical and conceptual considerations for developing unsupervised meta-learning methods. Our experimental results demonstrate that unsupervised meta-reinforcement learning effectively acquires accelerated reinforcement learning procedures without the need for manual task design, significantly exceeds the performance of learning from scratch, and even matches performance of meta-learning methods that use hand-specified task distributions.
rejected-papers
This paper introduces unsupervised meta-learning algorithms for RL. Major concerns with the paper include: 1. Lack of clarity: the presentation of the method can be improved. 2. The motivation and justification for applying unsupervised meta-learning need to be strengthened; more discussion and better motivating examples may be useful. 3. Experimental details are not sufficient, and the comparisons may not be sufficient to support the aim. Overall, this paper cannot be accepted yet.
train
[ "HJx-pgsOpQ", "BJgp_qXpnQ", "BJeMLJhw2Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper considers a particular setting of so-called meta-reinforcement learning (meta-RL) where there is a distribution over reward functions (the transition function is fixed) and with some access to this distribution, the goal is to produce a learning algorithm that \"learns well\" on the distribution. At tra...
[ 3, 6, 4 ]
[ 4, 3, 2 ]
[ "iclr_2019_H1eRBoC9FX", "iclr_2019_H1eRBoC9FX", "iclr_2019_H1eRBoC9FX" ]
iclr_2019_H1eadi0cFQ
Escaping Flat Areas via Function-Preserving Structural Network Modifications
Hierarchically embedding smaller networks in larger networks, e.g., by increasing the number of hidden units, has been studied since the 1990s. The main interest was in understanding possible redundancies in the parameterization, as well as in studying how such embeddings affect critical points. We take these results as a point of departure to devise a novel strategy for escaping from flat regions of the error surface and to address the slow-down of gradient-based methods experienced in plateaus of saddle points. The idea is to expand the dimensionality of a network in a way that guarantees the existence of new escape directions. We call this operation the opening of a tunnel. One may then continue with the larger network either temporarily, i.e., closing the tunnel later, or permanently, i.e., iteratively growing the network, whenever needed. We develop our method for fully-connected as well as convolutional layers. Moreover, we present a practical version of our algorithm that requires no network structure modification and can be deployed as plug-and-play into any current deep learning framework. Experimentally, our method shows significant speed-ups.
rejected-papers
The paper proposes a method to escape saddle points by adding and removing units during training. The method does so by preserving the function when the unit is added while increasing the gradient norm to move away from the critical point. The experimental evaluation shows that the proposed method does escape when positioned at a saddle point - as found by the Newton method. The reviewers find the theoretical ideas interesting and novel, but they raised concerns about the method's applicability for typical initializations, the experimental setup, as well as the terminology used in the paper. The title and terminology were improved with the revision, but the other issues were not sufficiently addressed.
val
[ "Skg1pk8KhQ", "HJxUDp2s27", "HJggSEsEC7", "ryeOGVsE0Q", "SJgrbqOinQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "The paper addresses the problem of increasing and decreasing the number of hidden nodes (aka, dimensionality) in the network such that the optimization will not enter the plateaus of saddle points. The opening or closing of tunnels (filters) guarantee the existence of “new escape directions” and faster convergence...
[ 6, 6, -1, -1, 4 ]
[ 4, 3, -1, -1, 4 ]
[ "iclr_2019_H1eadi0cFQ", "iclr_2019_H1eadi0cFQ", "SJgrbqOinQ", "HJxUDp2s27", "iclr_2019_H1eadi0cFQ" ]
iclr_2019_H1ecDoR5Y7
Local Stability and Performance of Simple Gradient Penalty μ-Wasserstein GAN
Wasserstein GAN (WGAN) is a model that minimizes the Wasserstein distance between a data distribution and sample distribution. Recent studies have proposed stabilizing the training process for the WGAN and implementing the Lipschitz constraint. In this study, we prove the local stability of optimizing the simple gradient penalty μ-WGAN (SGP μ-WGAN) under suitable assumptions regarding the equilibrium and penalty measure μ. The measure valued differentiation concept is employed to deal with the derivative of the penalty terms, which is helpful for handling abstract singular measures with lower dimensional support. Based on this analysis, we claim that penalizing the data manifold or sample manifold is the key to regularizing the original WGAN with a gradient penalty. Experimental results obtained with unintuitive penalty measures that satisfy our assumptions are also provided to support our theoretical results.
rejected-papers
All three reviewers expressed concerns about the assumptions made for the local stability analysis. The AC thus recommends "revise and resubmit".
train
[ "SygABgCoyN", "HylXNpUoy4", "ryePce8jk4", "rkg1ktuu3Q", "SylIqtVQp7", "BylHr_fna7", "H1e2yLG2am", "B1xXUvGn6Q", "BylhohL6hQ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thank you for your further comment and concern. We would like to explain that our main purpose is not to avoid differentiability issue of the penalty term with approximation. We aimed to find the necessary conditions first, and then tried to build up rigorous proofs for the singular case. The main contributions of...
[ -1, -1, -1, 6, 5, -1, -1, -1, 4 ]
[ -1, -1, -1, 4, 4, -1, -1, -1, 3 ]
[ "HylXNpUoy4", "ryePce8jk4", "SylIqtVQp7", "iclr_2019_H1ecDoR5Y7", "iclr_2019_H1ecDoR5Y7", "rkg1ktuu3Q", "BylhohL6hQ", "SylIqtVQp7", "iclr_2019_H1ecDoR5Y7" ]
iclr_2019_H1eiZnAqKm
The Expressive Power of Gated Recurrent Units as a Continuous Dynamical System
Gated recurrent units (GRUs) were inspired by the common gated recurrent unit, long short-term memory (LSTM), as a means of capturing temporal structure with a less complex memory unit architecture. Despite their incredible success in tasks such as natural and artificial language processing, speech, video, and polyphonic music, very little is understood about the specific dynamic features representable in a GRU network. As a result, it is difficult to know a priori how well a GRU-RNN will perform on a given data set. In this paper, we develop a new theoretical framework to analyze one and two dimensional GRUs as a continuous dynamical system, and classify the dynamical features obtainable with such a system. We found a rich repertoire that includes stable limit cycles over time (nonlinear oscillations), multi-stable state transitions with various topologies, and homoclinic orbits. In addition, we show that any finite dimensional GRU cannot precisely replicate the dynamics of a ring attractor, or more generally, any continuous attractor, and is limited to finitely many isolated fixed points in theory. These findings were then experimentally verified in two dimensions by means of time series prediction.
rejected-papers
The paper analyses GRUs using dynamical systems theory. The paper is well-written and the theory seems to be solid. But there is agreement amongst the reviewers that the application of the method might not scale well beyond rather simple 1- or 2-D GRUs (i.e., with one or two GRUs). This limitation, which is an increasingly serious problem in machine-learning papers, should be resolved before the paper is published. A very recent extension of the simulations to 16 GRUs improves this, but a rigorous analysis of higher-dimensional systems is pending and remains a considerable obstacle to acceptance.
train
[ "HkxD98Zv2Q", "rylOpfu5AQ", "r1g-jzu90m", "SygKuf_qAQ", "S1xTRZu5AX", "H1glN-d507", "BJlZ9PDCnm", "H1l8eaK1hQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors analyse GRUs with hidden sizes of one and two as continuous-time dynamical systems, claiming that the expressive power of the hidden state representation can provide prior knowledge on how well a GRU will perform on a given dataset. Their analysis shows what kind of hidden state dynamics the GRU can ap...
[ 6, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_H1eiZnAqKm", "H1l8eaK1hQ", "HkxD98Zv2Q", "HkxD98Zv2Q", "BJlZ9PDCnm", "iclr_2019_H1eiZnAqKm", "iclr_2019_H1eiZnAqKm", "iclr_2019_H1eiZnAqKm" ]
iclr_2019_H1eqviAqYX
Why Do Neural Response Generation Models Prefer Universal Replies?
Recent advances in neural Sequence-to-Sequence (Seq2Seq) models reveal a purely data-driven approach to the response generation task. Despite its diverse variants and applications, the existing Seq2Seq models are prone to producing short and generic replies, which blocks such neural network architectures from being utilized in practical open-domain response generation tasks. In this research, we analyze this critical issue from the perspective of the optimization goal of models and the specific characteristics of human-to-human conversational corpora. Our analysis is conducted by decomposing the goal of Neural Response Generation (NRG) into the optimizations of word selection and ordering. It follows from this decomposition that Seq2Seq-based NRG models naturally tend to select common words to compose responses, and ignore the semantics of queries in word ordering. On the basis of the analysis, we propose a max-marginal ranking regularization term to prevent Seq2Seq models from producing generic and uninformative responses. The empirical experiments on benchmarks with several metrics have validated our analysis and the proposed methodology.
rejected-papers
This paper seeks to shed light on why seq2seq models favor generic replies. The problem is an important one, unfortunately the responses proposed in the paper are not satisfactory. Most reviewers note problems and general lack of rigorousness in the assumptions used to produce the theoretical part of the paper (e.g., strong assumption of independence of generated words). The experiments themselves are not convincing enough to warrant acceptance by themselves.
train
[ "HyldP2V50X", "ByeU9HCY0m", "SkxioCfEA7", "SyxJIcNgR7", "SyxbWIRwpm", "HJxtELCw6X", "rJxuYSCva7", "HklTVX0PT7", "rJlhR7RPpm", "HyeqfvoShm", "S1x8dxbT2m", "BkxwjHa9hX", "H1gWrt6x3Q", "r1xSVJPlh7", "ryejoBaCiX" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for the insightful comments. Actually, K \\leq L_S is not rigorous theoretical, as there may exist a few cases not satisfying this inequation. \nNevertheless, we can claim that, given the obvious condition “1 \\leq L_S”, the upper bound in Equation 3 is \\log frac{1}{K}, and the analysis still holds. We ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 7, 1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, -1, -1 ]
[ "ByeU9HCY0m", "rJxuYSCva7", "SyxJIcNgR7", "HklTVX0PT7", "rJxuYSCva7", "HklTVX0PT7", "H1gWrt6x3Q", "S1x8dxbT2m", "BkxwjHa9hX", "iclr_2019_H1eqviAqYX", "iclr_2019_H1eqviAqYX", "iclr_2019_H1eqviAqYX", "iclr_2019_H1eqviAqYX", "ryejoBaCiX", "iclr_2019_H1eqviAqYX" ]
iclr_2019_H1f7S3C9YQ
SynonymNet: Multi-context Bilateral Matching for Entity Synonyms
Being able to automatically discover synonymous entities from a large free-text corpus has transformative effects on structured knowledge discovery. Existing works either require structured annotations, or fail to incorporate context information effectively, which lowers the efficiency of information usage. In this paper, we propose a framework for synonym discovery from a free-text corpus without structured annotation. As one of the key components in synonym discovery, we introduce a novel neural network model, SynonymNet, to determine whether or not two given entities are synonyms of each other. Instead of using entity features, SynonymNet makes use of multiple pieces of contexts in which the entity is mentioned, and compares the context-level similarity via a bilateral matching schema to determine synonymity. Experimental results demonstrate that the proposed model achieves state-of-the-art results on both generic and domain-specific synonym datasets: Wiki+Freebase, PubMed+UMLS and MedBook+MKG, with up to 4.16% improvement in terms of Area Under the Curve (AUC) and 3.19% in terms of Mean Average Precision (MAP) compared to the best baseline method.
rejected-papers
This paper presents a model to identify entity mentions that are synonymous. This could have utility in practical scenarios that handle entities. The main criticism of the paper is regarding the baselines used. Most of the baselines that are compared against are extremely simple. There is a significant body of literature that models paraphrase and entailment and many of those baselines are missing (decomposable attention, DIIN, other cross-attention mechanisms). Adding those experiments would make the experimental setup stronger. There is a bit of a disagreement between reviewers, but I agree with the two reviewers who point out the weakness of the experimental setup, and fixing those issues could improve the paper significantly.
train
[ "rJxebEdOCX", "rJgca4uuC7", "ryx-GS_u07", "BJlwNEO_AX", "S1l6EuIAnm", "S1gffzlahX", "Ske--iHV3m" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the comments and suggestions.\n\nFor the mentioned related works, Snow et a.l 2005, Sun and Grishman 2010, Liao et al. 2017, Cambria et al. 2018, they are not designed for synonym discovery task, so we do not compare with them in the experiments. The mentioned related works introduce diff...
[ -1, -1, -1, -1, 5, 7, 4 ]
[ -1, -1, -1, -1, 4, 5, 4 ]
[ "Ske--iHV3m", "S1l6EuIAnm", "S1l6EuIAnm", "S1gffzlahX", "iclr_2019_H1f7S3C9YQ", "iclr_2019_H1f7S3C9YQ", "iclr_2019_H1f7S3C9YQ" ]
iclr_2019_H1fF0iR9KX
Geometry aware convolutional filters for omnidirectional images representation
Due to their wide field of view, omnidirectional cameras are frequently used by autonomous vehicles, drones and robots for navigation and other computer vision tasks. The images captured by such cameras are often analysed and classified with techniques designed for planar images that unfortunately fail to properly handle the native geometry of such images. That results in suboptimal performance and a lack of truly meaningful visual features. In this paper we aim at improving popular deep convolutional neural networks so that they can properly take into account the specific properties of omnidirectional data. In particular, we propose an algorithm that adapts convolutional layers, which often serve as a core building block of a CNN, to the properties of omnidirectional images. Thus, our filters have a shape and size that adapt with the location on the omnidirectional image. We show that our method is not limited to spherical surfaces and is able to incorporate the knowledge about any kind of omnidirectional geometry inside the deep learning network. As depicted by our experiments, our method outperforms the existing deep neural network techniques for omnidirectional image classification and compression tasks.
rejected-papers
Strengths: This paper proposes to use graph-based deep learning methods to apply deep learning techniques to images coming from omnidirectional cameras. Weaknesses: The projected MNIST dataset looks very localized on the sphere and therefore does not seem to leverage much of the global connectivity of the graph. All reviewers pointed out limitations in the experimental results. There were significant concerns about the relation of the model to the existing literature; it was pointed out that both the comparison to other methodology and the empirical comparisons were lacking. The paper received three reject recommendations. There was some discussion with the reviewers, which emphasized open issues in the comparison to, and references to, existing literature, as highlighted by a contributed comment from Michael Bronstein. The work is clearly not mature enough at this point for ICLR, with insufficient comparisons and illustrations.
train
[ "HyenTpSY0m", "BJxHDTHK07", "B1gzxTSFAX", "SJlurqK4Tm", "H1goq7Y52m", "H1eOGZlqiQ", "H1lMxTcvc7", "SylGMfZCKm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We would like to thank you for the comments. We have updated the description of the approach to make it more clear and self contained. We have also added a better description of the architecture that we use and reorganized the Section 4 to better summarize the results. Please find a detailed answer to the raised q...
[ -1, -1, -1, 4, 6, 4, -1, -1 ]
[ -1, -1, -1, 4, 4, 5, -1, -1 ]
[ "H1eOGZlqiQ", "H1goq7Y52m", "SJlurqK4Tm", "iclr_2019_H1fF0iR9KX", "iclr_2019_H1fF0iR9KX", "iclr_2019_H1fF0iR9KX", "SylGMfZCKm", "iclr_2019_H1fF0iR9KX" ]
iclr_2019_H1faSn0qY7
DL2: Training and Querying Neural Networks with Logic
We present DL2, a system for training and querying neural networks with logical constraints. The key idea is to translate these constraints into a differentiable loss with desirable mathematical properties and to then either train with this loss in an iterative manner or to use the loss for querying the network for inputs subject to the constraints. We empirically demonstrate that DL2 is effective in both training and querying scenarios, across a range of constraints and data sets.
rejected-papers
Unfortunately, this paper fell just below the bar for acceptance. The reviewers all saw significant promise in this work, stating that it is intriguing, "novel and provides an interesting solution to a challenging problem" and that "many interesting use cases are clear". AnonReviewer2 particularly argued for acceptance, noting that the proposed approach provides a very flexible method for incorporating constraints in neural network training. A concern of AnonReviewer2 was that there was no guarantee that this loss would be convex or converge to an optimum while satisfying the constraints. The other two reviewers unfortunately felt that while the proposed approach was "interesting", "promising" and "intriguing", the quality of the paper, in terms of exposition, was too low to justify acceptance. Arguably, the writing doesn't do the idea justice in this case, and the paper would ultimately be significantly more impactful if it were carefully rewritten.
train
[ "HJx46y-q37", "Syxwc66ECQ", "BkgUupaVCX", "Hye-hTp4CX", "SJl8YjQzRX", "ryl5-oQMCX", "SkxcDq7f0m", "Hye6GmQfC7", "H1ldgwo3nX", "HJllffWqn7" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper the authors propose DL2 a system for training and querying neural networks with logical constraints\n\nThe proposed approach is intriguing but in my humble opinion the presentation of the paper could be improved. Indeed I think that the paper is bit too hard to follow. \nThe example at page 2 is not ...
[ 6, -1, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_H1faSn0qY7", "SJl8YjQzRX", "ryl5-oQMCX", "Hye6GmQfC7", "HJx46y-q37", "HJllffWqn7", "H1ldgwo3nX", "iclr_2019_H1faSn0qY7", "iclr_2019_H1faSn0qY7", "iclr_2019_H1faSn0qY7" ]
iclr_2019_H1fevoAcKX
Globally Soft Filter Pruning For Efficient Convolutional Neural Networks
This paper proposes a cumulative-saliency-based Globally Soft Filter Pruning (GSFP) scheme to prune redundant filters of Convolutional Neural Networks (CNNs). Specifically, GSFP adopts a robust pruning method, which measures the global redundancy of each filter in the whole model using a soft pruning strategy. In addition, in the model recovery process after pruning, we use a cumulative saliency strategy to improve the accuracy of pruning. GSFP has two advantages over previous works: (1) More accurate pruning guidance. For a pre-trained CNN model, the saliency of a filter varies with different input data. Therefore, accumulating the saliency of the filter over the entire data set can provide more accurate guidance for pruning. On the other hand, pruning from a global perspective is more accurate than local pruning. (2) A more robust pruning strategy. We propose a reasonable normalization formula to prevent certain layers of filters in the network from being completely clipped due to an excessive pruning rate.
rejected-papers
This paper proposes new heuristics to prune and compress neural networks. The paper is well organized. However, reviewers are concerned that the novelty is relatively limited. The advantage of the proposed method is marginal on ImageNet, and what is effective is not very clear. Therefore, we recommend rejection.
train
[ "Bkl5HjrcCX", "BygGizk_A7", "H1efDM1OC7", "HkgEZfyuRm", "H1xCb6dA2Q", "SJe_P77q3X", "ryeoNbWH2Q" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I thank the authors for addressing the various concerns raised by the reviewers and for performing additional experiments. I am satisfied with the authors experiments to demonstrate the superior performance of their proposed method versus the work of Molchanov et al., 2017. However, given how similar the two works...
[ -1, -1, -1, -1, 6, 5, 4 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "H1efDM1OC7", "ryeoNbWH2Q", "SJe_P77q3X", "H1xCb6dA2Q", "iclr_2019_H1fevoAcKX", "iclr_2019_H1fevoAcKX", "iclr_2019_H1fevoAcKX" ]
iclr_2019_H1fsUiRcKQ
Fast adversarial training for semi-supervised learning
In semi-supervised learning, the Bad GAN approach is one of the most attractive methods due to its intuitive simplicity and powerful performance. Bad GAN learns a classifier with bad samples distributed on the complement of the support of the input data. But Bad GAN needs additional architectures, a generator and a density estimation model, which involve huge computation and memory consumption costs. VAT is another good semi-supervised learning algorithm, which utilizes unlabeled data to improve the invariance of the classifier with respect to perturbation of inputs. In this study, we propose a new method that combines the ideas of Bad GAN and VAT. The proposed method generates bad samples of high quality by use of the adversarial training used in VAT. We give theoretical explanations of why adversarial training is good at both generating bad samples and semi-supervised learning. An advantage of the proposed method is that it achieves competitive performance with much less computation. We demonstrate the advantages of our method through various experiments with well-known benchmark image datasets.
rejected-papers
The paper combines the ideas of VAT and Bad GAN, replacing the fake samples in the Bad GAN objective with VAT-generated samples. The motivation behind using the K+1 SSL framework with VAT examples remains unclear, particularly in the light of Prop. 2, which shows that smoothness of the classifier around the unlabeled examples is enough (which VAT already encourages). R2 and R3 have raised the point of limited insight and lack of motivation behind combining the VAT and Bad GAN objectives in this way. R2 and R3 are also concerned about the empirical results, which show only marginal improvements over VAT/BadGAN in most settings. The AC feels that the idea of the paper is interesting but agrees with R2/R3 that the proposed objective is not motivated well enough (what is the precise advantage of using the K+1 SSL formulation with VAT examples?). The paper really falls on the borderline and could be improved if this point is addressed convincingly.
test
[ "S1g-q0VK0Q", "HyldOAVF0m", "B1lxrAVYCX", "SJxwMRNtRX", "Hye43a4F0X", "SyeJXc193X", "Bkxkf55Kh7", "rJeyUf5whm" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- The authors try to use Proposition 1 to motivate the use of VAT for generating complementary examples. However, it seems that the authors misinterprets the concept of bad examples proposed in Dai et al. The original definition (which led to the theoretical guarantees in Dai et al) of bad examples is low-density ...
[ -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "HyldOAVF0m", "rJeyUf5whm", "SJxwMRNtRX", "Bkxkf55Kh7", "SyeJXc193X", "iclr_2019_H1fsUiRcKQ", "iclr_2019_H1fsUiRcKQ", "iclr_2019_H1fsUiRcKQ" ]
iclr_2019_H1g0piA9tQ
Evaluation Methodology for Attacks Against Confidence Thresholding Models
Current machine learning algorithms can be easily fooled by adversarial examples. One possible solution path is to make models that use confidence thresholding to avoid making mistakes. Such models refuse to make a prediction when they are not confident of their answer. We propose to evaluate such models in terms of tradeoff curves with the goal of high success rate on clean examples and low failure rate on adversarial examples. Existing untargeted attacks developed for models that do not use confidence thresholding tend to underestimate such models' vulnerability. We propose the MaxConfidence family of attacks, which are optimal in a variety of theoretical settings, including one realistic setting: attacks against linear models. Experiments show the attack attains good results in practice. We show that simple defenses are able to perform well on MNIST but not on CIFAR, contributing further to previous calls that MNIST should be retired as a benchmarking dataset for adversarial robustness research. We release code for these evaluations as part of the cleverhans (Papernot et al 2018) library (ICLR reviewers should be careful not to look at who contributed these features to cleverhans to avoid de-anonymizing this submission).
rejected-papers
The reviewers agree the paper is not ready for publication.
train
[ "Hye-MhpZT7", "S1x0HKfqh7", "SygQJELD3X", "S1xAN6UJT7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper introduces a family of attack on confidence thresholding algortihms. Such algorithms are allowed to refuse to make predictions when their confidence is below a certain threshold. \n\nThere are certainly interesting links between such models and KWIK [1] algorithms (which are also supposed to be able to ...
[ 2, 3, 4, -1 ]
[ 4, 3, 4, -1 ]
[ "iclr_2019_H1g0piA9tQ", "iclr_2019_H1g0piA9tQ", "iclr_2019_H1g0piA9tQ", "S1x0HKfqh7" ]
iclr_2019_H1gDgn0qY7
A Study of Robustness of Neural Nets Using Approximate Feature Collisions
In recent years, various studies have focused on the robustness of neural nets. While it is known that neural nets are not robust to examples with adversarially chosen perturbations as a result of linear operations on the input data, we show in this paper there could be a convex polytope within which all examples are misclassified by neural nets due to the properties of ReLU activation functions. We propose a way to find such polytopes empirically and demonstrate that such polytopes exist in practice. Furthermore, we show that such polytopes exist even after constraining the examples to be a composition of image patches, resulting in perceptibly different examples at different locations in the polytope that are all misclassified.
rejected-papers
The paper presents a novel view on adversarial examples, where models using ReLU are inherently sensitive to adversarial examples because ReLU activations yield a polytope of examples with exactly the same activations. Reviewers found the finding interesting and novel but argued it is limited in impact. I also found the idea interesting, but the paper could probably be improved, as all reviewers have remarked. Overall, I found it borderline but probably not strong enough for acceptance.
test
[ "BJlUHIpjC7", "S1g9bI6j0Q", "rJeVuraoC7", "rygTSrpi0X", "ByeriKLT27", "Hylxvzg9nQ", "HklJbSZFnX" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review. Below are our responses to your questions and concerns:\n\nQ1: The perturbation set is generally a high-dimensional polytope. Although it has a compact representation in terms of intersection of hyperplanes, it may have many more vertices, so the endeavor of attempting to characterize al...
[ -1, -1, -1, -1, 6, 4, 4 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "ByeriKLT27", "Hylxvzg9nQ", "rygTSrpi0X", "HklJbSZFnX", "iclr_2019_H1gDgn0qY7", "iclr_2019_H1gDgn0qY7", "iclr_2019_H1gDgn0qY7" ]
iclr_2019_H1gFuiA9KX
Skip-gram word embeddings in hyperbolic space
Embeddings of tree-like graphs in hyperbolic space were recently shown to surpass their Euclidean counterparts in performance by a large margin. Inspired by these results, we present an algorithm for learning word embeddings in hyperbolic space from free text. An objective function based on the hyperbolic distance is derived and included in the skip-gram negative-sampling architecture from word2vec. The hyperbolic word embeddings are then evaluated on word similarity and analogy benchmarks. The results demonstrate the potential of hyperbolic word embeddings, particularly in low dimensions, though without clear superiority over their Euclidean counterparts. We further discuss subtleties in the formulation of the analogy task in curved spaces.
rejected-papers
Although the proposed method could be considered an interesting application of the recently popular hyperbolic space to word embeddings, it is unclear why this needs to be done. The experiments also do not support why or whether the application of hyperbolic space to word embeddings is necessary.
train
[ "HJxqAEsKCX", "HJlhjEoFC7", "BkgBq4sYRQ", "HJxfpQCOh7", "rJlysr_ka7", "HyxrmoZ5nX" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer,\n\nThank you very much for your feedback and suggestions. We've updated the paper with a section explaining our motivation for learning word embeddings in hyperbolic space. Our main goal was to learn word embeddings in low dimensions, with the future aim of working towards downstream tasks that coul...
[ -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, 3, 3, 3 ]
[ "HJxfpQCOh7", "HyxrmoZ5nX", "rJlysr_ka7", "iclr_2019_H1gFuiA9KX", "iclr_2019_H1gFuiA9KX", "iclr_2019_H1gFuiA9KX" ]
iclr_2019_H1gNHs05FX
Clinical Risk: wavelet reconstruction networks for marked point processes
Timestamped sequences of events, pervasive in domains with data logs, e.g., health records, are often modeled as point processes with rate functions over time. Leading classical methods for risk scores such as Cox and Hawkes processes use such data but make strong assumptions about the shape and form of multivariate influences, resulting in time-to-event distributions irreflective of many real world processes. Recent methods in point processes and recurrent neural networks capably model rate functions but may be complex and difficult to interrogate. Our work develops a high-performing, interrogable model. We introduce wavelet reconstruction networks, a multivariate point process with a sparse wavelet reconstruction kernel to model rate functions from marked, timestamped data. We show they achieve improved performance and interrogability over baselines in forecasting complications and scheduled care visits in patients with diabetes.
rejected-papers
There was discussion of this paper, and the accept reviewer was not willing to argue for acceptance of this paper, while the reject reviewers, specifically pointing to the clarity of the work, argued for rejection. There appear to be many good ideas related to wavelets, and hopefully the authors can work on polishing the paper and resubmitting.
val
[ "SkeXCAVxJN", "ByxUx5VxyV", "SkeveKAn07", "BJxJaA_QAm", "Byl9XsGbCQ", "BylCPlQ107", "B1l3jE--R7", "H1gpW0je07", "rkxnK7YeCm", "SylMhmIJAX", "S1gihA710m", "Hyg8Wkua3m", "H1ezsgoLnQ", "S1gHWtEpsX" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\"\"\"If the data has any mark the model does not handle, then the NLL would be \\infty. Why not so in your case? How do you make sure your comparison is fair in this case?\"\"\"\nThe outcomes of interest are not marked, but the features may be. Thus, if our features processing e.g. in Hawkes captures the timing ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "SkeveKAn07", "BJxJaA_QAm", "S1gihA710m", "Byl9XsGbCQ", "B1l3jE--R7", "Hyg8Wkua3m", "H1gpW0je07", "rkxnK7YeCm", "SylMhmIJAX", "S1gHWtEpsX", "H1ezsgoLnQ", "iclr_2019_H1gNHs05FX", "iclr_2019_H1gNHs05FX", "iclr_2019_H1gNHs05FX" ]
iclr_2019_H1gRM2A5YX
Analysis of Memory Organization for Dynamic Neural Networks
An increasing number of neural memory networks have been developed, leading to the need for a systematic approach to analyze and compare their underlying memory capabilities. Thus, in this paper, we propose a taxonomy for four popular dynamic models: the vanilla recurrent neural network, long short-term memory, the neural stack and neural RAM, along with their variants. Based on this taxonomy, we create a framework to analyze memory organization and then compare these network architectures. This analysis elucidates how different mapping functions capture the information in the past of the input, and helps to open the dynamic neural network black box from the perspective of memory usage. Four representative tasks that optimally fit the characteristics of each memory network are carefully selected to show each network's expressive power. We also discuss how to use this taxonomy to help users select the most parsimonious type of memory network for a specific task. Two natural language processing applications are used to evaluate the methodology in a realistic setting.
rejected-papers
This paper presents a taxonomic study of neural network architectures, focussing on those which seek to map onto different parts of the hierarchy of models of computation (DFAs, PDAs, etc). The paper splits between defining the taxonomy and comparing its elements on synthetic and "NLP" tasks (in fact, bAbI, which is also synthetic). I'm a fairly biased assessor of this sort of paper, as I generally like this topical area and think there is a need for more work of this nature in our field. I welcome, and believe the CFP calls for, papers like this ("learning representations of outputs or [structured] states", "theoretical issues in deep learning"). However, despite my personal enthusiasm, the reviews tell a different story. The scores for this paper are all over the place, and that's after some attempt at harmonisation! I am satisfied that the authors have had a fair shot at defending their paper and that the reviewers have engaged with the discussion process. I'm afraid the emerging consensus still seems to be in favour of rejection. Despite my own views, I'm not comfortable bumping it up into acceptance territory on the basis of this assessment. Reviewer 1 is the only enthusiastic proponent of the paper, but their statement of support for the paper has done little to sway the others. The arguments by reviewer 3 specifically are quite salient: it is important to seek informative and useful taxonomies of the sort presented in this work, but they must have practical utility. From reading the paper, I share some of this reviewer's concerns: while it is clear to me what use there is in the production of studies of the sort presented in this paper, it is not immediately clear what the utility of *this* study is. Would I, practically speaking, be able to make an informed choice as to what model class to attempt for a problem that wouldn't be indistinguishable from common approaches (e.g. "start simple, add complexity")? I am afraid I agree with this reviewer that I would not. My conclusion is that there is not a strong consensus for accepting the paper. I wouldn't mind seeing this work presented at the conference, but due to the competitive nature of the paper selection process, I'm afraid the line must be drawn somewhere. I do look forward to re-reading this paper after the authors have had a chance to improve and expand upon it.
train
[ "SJxaPnl1yE", "B1xznfekkE", "H1gP7OO5h7", "r1eAq4JkJV", "rklC1N1kyE", "rygFFm111N", "BkxX1hS9Cm", "Byg3_iBcAQ", "BJx18slFA7", "SkeykfvchX", "ByxuxqBECm", "HkgtX-7Gh7", "B1gmZWimCX", "rkgpvJsmAQ", "HkgHWyimRQ", "BJenaa9QRX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "You are welcome to adjust your score if you think it reflects your understanding of the paper's strength, but please be reminded there is absolutely no need to agree with other reviewers, or reconcile scores. If you think it's worth a high score, you are more than welcome to keep it like that: you just ideally nee...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, 5, -1, 3, -1, -1, -1, -1 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 5, -1, 5, -1, -1, -1, -1 ]
[ "B1xznfekkE", "rygFFm111N", "iclr_2019_H1gRM2A5YX", "BJx18slFA7", "HkgtX-7Gh7", "H1gP7OO5h7", "ByxuxqBECm", "ByxuxqBECm", "HkgHWyimRQ", "iclr_2019_H1gRM2A5YX", "B1gmZWimCX", "iclr_2019_H1gRM2A5YX", "HkgtX-7Gh7", "SkeykfvchX", "SkeykfvchX", "H1gP7OO5h7" ]
iclr_2019_H1gZV30qKQ
Transfer Value or Policy? A Value-centric Framework Towards Transferrable Continuous Reinforcement Learning
Transferring learned knowledge from one environment to another is an important step towards practical reinforcement learning (RL). In this paper, we investigate the problem of transfer learning across environments with different dynamics while accomplishing the same task in the continuous control domain. We start by illustrating the limitations of policy-centric methods (policy gradient, actor-critic, etc.) when transferring knowledge across environments. We then propose a general model-based value-centric (MVC) framework for continuous RL. MVC learns a dynamics approximator and a value approximator simultaneously in the source domain, and makes decisions based on both of them. We evaluate MVC against popular baselines on 5 benchmark control tasks in a training-from-scratch setting and a transfer learning setting. Our experiments demonstrate that MVC achieves comparable performance with the baselines when it is trained from scratch, while it significantly surpasses them when it is used in the transfer setting.
rejected-papers
The paper studies whether the best strategy for transfer learning in RL is to transfer value estimates or policy probabilities. The paper also presents a model-based value-centric (MVC) framework for continuous RL. The reviewers raised concerns regarding (1) the coherence of the story, (2) the novelty and importance of the MVC framework and (3) the significance of the experiments. I encourage the authors to either focus on the algorithmic aspect or the transfer learning aspect and expand on the experimental results to make them more convincing. I appreciate the changes made to improve the paper, but in its current form the paper is still below the acceptance threshold at ICLR. PS: in my view one can think of value as (shifted and scaled) log of policy. Hence, it is a bit ambiguous to ask whether to transfer value or policy.
val
[ "S1xrTtvxk4", "BJxZA6Pc0m", "Syek2TPqA7", "SJlUFav5CQ", "B1xSFcPta7", "SyeV8pj_6X", "Hyx2gsaHTm", "rJlxYy146m", "SJx7zg14am", "rJgTKis-6m", "rklNBiiZaQ", "B1er_zja3m", "H1lXESRY2X" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I greatly appreciate including some of the requested comparisons in the appendix. The new results indicate that existing algorithms can match or even outperform MVC in both learning from scratch and in the transfer learning setup, and since MVC has limited novelty, I will stick with my earlier assessment. However,...
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, 4, 2 ]
[ "BJxZA6Pc0m", "B1xSFcPta7", "SyeV8pj_6X", "iclr_2019_H1gZV30qKQ", "Hyx2gsaHTm", "SJx7zg14am", "iclr_2019_H1gZV30qKQ", "H1lXESRY2X", "rJlxYy146m", "rklNBiiZaQ", "B1er_zja3m", "iclr_2019_H1gZV30qKQ", "iclr_2019_H1gZV30qKQ" ]
iclr_2019_H1gh_sC9tm
Prior Networks for Detection of Adversarial Attacks
Adversarial examples are considered a serious issue for safety-critical applications of AI, such as finance, autonomous vehicle control and medicinal applications. Though significant work has resulted in increased robustness of systems to these attacks, systems are still vulnerable to well-crafted attacks. To address this problem, several adversarial attack detection methods have been proposed. However, systems can still be vulnerable to adversarial samples that are designed to specifically evade these detection methods. One recent detection scheme that has shown good performance is based on uncertainty estimates derived from Monte-Carlo dropout ensembles. Prior Networks, a new method of estimating predictive uncertainty, have been shown to outperform Monte-Carlo dropout on a range of tasks. One of the advantages of this approach is that the behaviour of a Prior Network can be explicitly tuned to, for example, predict high uncertainty in regions where there are no training data samples. In this work, Prior Networks are applied to adversarial attack detection using measures of uncertainty, in a similar fashion to Monte-Carlo dropout. Detection based on measures of uncertainty derived from DNNs and Monte-Carlo dropout ensembles is used as a baseline. Prior Networks are shown to significantly outperform these baseline approaches over a range of adversarial attacks in both whitebox and blackbox configurations. Even when the adversarial attacks are constructed with full knowledge of the detection mechanism, it is shown to be highly challenging to successfully generate an adversarial sample.
rejected-papers
This paper addresses an important topic and was generally well-written. However, reviewers pointed out serious issues with the evaluation (using weak or poorly chosen attacks), and some conceptual confusions (e.g. conflating adversarial examples with out-of-distribution examples, unsubstantiated claim that adversarial examples lie off the data manifold).
train
[ "Skx1QFCJa7", "rJg3w5wo3Q", "HJg-pUrYnX", "HJgV24y43m", "rkgyL9jk3m", "HkgF_Qfpjm", "ByxiVpqSsm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "public" ]
[ "This paper proposes a new detection method for adversarial examples, based on a prior network, which gives an uncertainty estimate for the network's predictions.\n\nThe idea is interesting and the writing is clear. However, I have several major concerns. A major one of these is that the paper considers \"detection...
[ 3, 4, 4, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1 ]
[ "iclr_2019_H1gh_sC9tm", "iclr_2019_H1gh_sC9tm", "iclr_2019_H1gh_sC9tm", "ByxiVpqSsm", "HkgF_Qfpjm", "iclr_2019_H1gh_sC9tm", "iclr_2019_H1gh_sC9tm" ]
iclr_2019_H1glKiCqtm
The Effectiveness of Pre-Trained Code Embeddings
Word embeddings are widely used in machine learning based natural language processing systems. It is common to use pre-trained word embeddings which provide benefits such as reduced training time and improved overall performance. There has been a recent interest in applying natural language processing techniques to programming languages. However, none of this recent work uses pre-trained embeddings on code tokens. Using extreme summarization as the downstream task, we show that using pre-trained embeddings on code tokens provides the same benefits as it does to natural languages, achieving: over 1.9x speedup, 5% improvement in test loss, 4% improvement in F1 scores, and resistance to over-fitting. We also show that the choice of language used for the embeddings does not have to match that of the task to achieve these benefits and that even embeddings pre-trained on human languages provide these benefits to programming languages.
rejected-papers
All three reviewers agree that the research question—should pretrained embeddings be used in code understanding tasks—is a reasonable one. However, there were some early issues with the way in which the paper reported results (involving both metrics and baselines). After some discussion with the reviewers, it seems that the paper now presents a clear picture of the results, but that these results are not sufficiently strong to warrant acceptance. I'm wary of turning down a paper over what are basically negative results, but for results like this to be useful to the community, they'd have to come from a very thorough experiment, and they'd have to be accompanied by a frank and detailed discussion. Neither of the two more confident reviewers is convinced that this paper meets that bar.
val
[ "HJxUz-QtAm", "ryeueXCIpQ", "BkxIAGALaX", "SkehizCLa7", "HkxhFfRLT7", "HJxg8bip37", "BJgeZaFq2X", "rJebNL4c27" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you especially for doing such a nice job of supplying additional experiments that show a comparison to pre-trained English embeddings. The results appear to be in keeping with your observations about syntax vs. semantics as well as my expectations. I'm slightly disappointed that they did show that pre-traine...
[ -1, -1, -1, -1, -1, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "ryeueXCIpQ", "rJebNL4c27", "BJgeZaFq2X", "HJxg8bip37", "iclr_2019_H1glKiCqtm", "iclr_2019_H1glKiCqtm", "iclr_2019_H1glKiCqtm", "iclr_2019_H1glKiCqtm" ]
iclr_2019_H1gupiC5KQ
The wisdom of the crowd: reliable deep reinforcement learning through ensembles of Q-functions
Reinforcement learning agents learn by exploring the environment and then exploiting what they have learned. This frees the human trainers from having to know the preferred action or intrinsic value of each encountered state. The cost of this freedom is that reinforcement learning is slower and more unstable than supervised learning. We explore the possibility that ensemble methods can remedy these shortcomings, and do so by investigating a novel technique which harnesses the wisdom of the crowd by bagging Q-function approximator estimates. Our results show that the proposed approach improves performance on all three tasks and reinforcement learning approaches attempted. We are able to demonstrate that this is a direct result of the increased stability of the action portion of the state-action-value function used by Q-learning to select actions and by policy gradient methods to train the policy.
rejected-papers
The paper suggests using an ensemble of Q functions for Q-learning. This idea is related to bootstrapped DQN and more recent work on distributional RL and quantile regression in RL. Given the similarity, a comparison against these approaches (or a subset of them) is necessary. The experiments are limited to very simple environments (e.g. swing-up and cart-pole). The paper in its current form does not pass the bar for acceptance at ICLR.
train
[ "rylFGfkfTX", "rygPQvCc37", "rJxyZpEH37" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a cute idea as suggesting ensembles of Q-function approximations rather than a singular DQN. \n\nHowever, at the core of it, this boils down to previously studied methods in the literature, one of which also is not cited here: \n\n@inproceedings{osband2016deep,\n title={Deep exploration via bo...
[ 4, 5, 3 ]
[ 5, 3, 4 ]
[ "iclr_2019_H1gupiC5KQ", "iclr_2019_H1gupiC5KQ", "iclr_2019_H1gupiC5KQ" ]
iclr_2019_H1l-SjA5t7
Explicit Information Placement on Latent Variables using Auxiliary Generative Modelling Task
Deep latent variable models, such as variational autoencoders, have been successfully used to disentangle factors of variation in image datasets. The structure of the representations learned by such models is usually observed after training and iteratively refined by tuning the network architecture and loss function. Here we propose a method that can explicitly place information into a specific subset of the latent variables. We demonstrate the use of the method in a task of disentangling global structure from local features in images. One subset of the latent variables is encouraged to represent local features through an auxiliary modelling task. In this auxiliary task, the global structure of an image is destroyed by dividing it into pixel patches which are then randomly shuffled. The full set of latent variables is trained to model the original data, obliging the remainder of the latent representation to model the global structure. We demonstrate that this approach successfully disentangles the latent variables for global structure from local structure by observing the generative samples of SVHN and CIFAR10. We also clustered the disentangled global structure of SVHN and found that the emerging clusters represent meaningful groups of global structures – including digit identities and the number of digits present. Finally, we discuss the problem of evaluating the clustering accuracy when ground truth categories are not expressive enough.
rejected-papers
While the paper has good quality and clarity and the proposed idea seems interesting, all three reviewers agree that the paper needs more challenging experiments to justify the proposed idea. The authors were not able to include additional experiments (such as those based on different transformations) in their revision to better convince the reviewers. In addition, the AC feels that the technical novelty of the paper is rather minor (some incremental change to VAE). In particular, related to some concerns of Reviewer 3, the AC feels the proposed idea is not very different from introducing a certain kind of side information for supervision; the main novelty seems to be distorting the data itself somehow to provide this side information (which does not seem to be that novel).
train
[ "H1ll0045Rm", "H1l0mwe53m", "HJem9qJxA7", "SyxYOL1lRm", "BJlSjVJxAQ", "HklzaBc32m", "rklJyvKq3X" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers and AC, \n\nWe added details about hyper-parameters (beta) used in the experiments. We also added a discussion that this parameter does not significantly affect the disentanglement between global and local variables but it does affect the blurriness of the reconstruction and the clustering preferenc...
[ -1, 5, -1, -1, -1, 6, 7 ]
[ -1, 4, -1, -1, -1, 4, 4 ]
[ "iclr_2019_H1l-SjA5t7", "iclr_2019_H1l-SjA5t7", "rklJyvKq3X", "H1l0mwe53m", "HklzaBc32m", "iclr_2019_H1l-SjA5t7", "iclr_2019_H1l-SjA5t7" ]
iclr_2019_H1lADsCcFQ
LEARNING ADVERSARIAL EXAMPLES WITH RIEMANNIAN GEOMETRY
Adversarial examples, referred to as augmented data points generated by imperceptible perturbation of input samples, have recently drawn much attention. Well-crafted adversarial examples may even mislead state-of-the-art deep models to make wrong predictions easily. To alleviate this problem, many studies focus on investigating how adversarial examples can be generated and/or resisted. All the existing work handles this problem in the Euclidean space, which may however be unable to describe data geometry. In this paper, we propose a generalized framework that addresses the learning problem of adversarial examples with Riemannian geometry. Specifically, we define the local coordinate systems on the Riemannian manifold, develop a novel model called Adversarial Training with Riemannian Manifold, and design a series of theoretical results that make learning adversarial examples in the Riemannian space feasible and efficient. The proposed work is important in that (1) it is a generalized learning methodology, since the Riemannian manifold space degrades to the Euclidean space in a special case; (2) it is the first work to tackle the adversarial example problem tractably through the perspective of geometry; (3) from the perspective of geometry, our method leads to the steepest direction of the loss function. We also provide a series of theoretical results showing that our proposed method can truly find the descent direction for the loss function, with computational time comparable to that of traditional adversarial methods. Finally, the proposed framework demonstrates superior performance to the traditional counterpart methods on benchmark data including MNIST, CIFAR-10 and SVHN.
rejected-papers
On the positive side, this is among the first papers to exploit non-Euclidean geometry, specifically curvature for adversarial learning. However, reviewers are largely in agreement that the technical correctness of this paper is unconvincing despite substantial technical exchanges with the authors.
train
[ "Hke7EtoNJE", "r1gffTpQJV", "Hkl2qWTQkV", "Bkg4ESK7yN", "Skl9q38X1N", "S1e38iEm1N", "H1x7uEwG1N", "BkgK4WPJJE", "B1kv6Ik1V", "rkxERy9aAQ", "HkxuKaZTAX", "ryxUTQjnA7", "S1lA09dcAX", "HJxW3LucRX", "rylftL_5AQ", "BylN8Lfq2X", "HkljG9Wr2m", "H1eZFnFN2m" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Again, we regret that the paper may not be presented in a clear way. We will further enhance it with more interpretations as well as illustrations.\n\nNonetheless, please note such notation that $x$ and $x+epsilon$ are in the Riemannian space has been commonly used in the related machine learning studies on Rieman...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 5 ]
[ "r1gffTpQJV", "Hkl2qWTQkV", "Bkg4ESK7yN", "Skl9q38X1N", "S1e38iEm1N", "H1x7uEwG1N", "BkgK4WPJJE", "B1kv6Ik1V", "rkxERy9aAQ", "HkxuKaZTAX", "ryxUTQjnA7", "HJxW3LucRX", "H1eZFnFN2m", "HkljG9Wr2m", "BylN8Lfq2X", "iclr_2019_H1lADsCcFQ", "iclr_2019_H1lADsCcFQ", "iclr_2019_H1lADsCcFQ" ]
iclr_2019_H1lC8o0cKX
Unsupervised Emergence of Spatial Structure from Sensorimotor Prediction
Despite its omnipresence in robotics applications, the nature of spatial knowledge and the mechanisms that underlie its emergence in autonomous agents are still poorly understood. Recent theoretical work suggests that the concept of space can be grounded by capturing invariants induced by the structure of space in an agent's raw sensorimotor experience. Moreover, it is hypothesized that capturing these invariants is beneficial for a naive agent trying to predict its sensorimotor experience. Under certain exploratory conditions, spatial representations should thus emerge as a byproduct of learning to predict. We propose a simple sensorimotor predictive scheme, apply it to different agents and types of exploration, and evaluate the pertinence of this hypothesis. We show that a naive agent can capture the topology and metric regularity of its spatial configuration without any a priori knowledge, nor extraneous supervision.
rejected-papers
This paper is borderline for publication for the following reasons: 1) the title is misleading. The majority of the ICLR audience understands by "spatial structure" the structure of the external 3D world, as opposed to the position of the sensors in the internal coordinate system of the agent. Though the authors argue that knowing the positions of the sensors eventually leads to learning the 3D world structure, this appears like a leap in the argument. 2) The equation s=\phi(m) describes a mapping from robot postures to sensory states. This means the agent should remain within the same scene. The description of this equation in the manuscript as "The mapping \phi can be seen as describing how “the world” transforms changes in motor states into changes in sensory states ..." makes this equation appear more general than it is. s'=\psi(s,m) would be better described by such a sentence.
train
[ "B1gLzIx21V", "B1ewloQh1V", "HJgzrxzhkN", "BkglrUl3yV", "S1e4yH5ok4", "BkxVXXd5hQ", "rkxtWB9KR7", "rkgKCbqKAX", "rkxWj2YYA7", "ryettnKt0m", "HJeu_oVKR7", "S1er13fc37", "B1x0lO8VCQ", "SkesfuUEC7", "rke-XIIVA7", "BklsprUN0X", "SylWNvIV07", "HJgyOvIECQ", "Byx758UVRQ", "BkehP884CX"...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ "d) We could definitely run some control experiments with translational augmentation; although they couldn’t be added to the paper now that the updating deadline is passed.\nWe’re not sure however about the type of experiment you have in mind. Would you add translational noise on the motor states?, on the sensor po...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "S1e4yH5ok4", "HJgzrxzhkN", "B1gLzIx21V", "S1e4yH5ok4", "BklsprUN0X", "iclr_2019_H1lC8o0cKX", "rkgKCbqKAX", "ryettnKt0m", "HJeu_oVKR7", "HJeu_oVKR7", "BkehP884CX", "iclr_2019_H1lC8o0cKX", "S1er13fc37", "S1er13fc37", "r1xp--ns2m", "r1xp--ns2m", "BkxVXXd5hQ", "BkxVXXd5hQ", "BkxVXXd...
iclr_2019_H1lFZnR5YX
Neural Regression Tree
Regression-via-Classification (RvC) is the process of converting a regression problem to a classification one. Current approaches for RvC use ad-hoc discretization strategies and are suboptimal. We propose a neural regression tree model for RvC. In this model, we employ a joint optimization framework where we learn optimal discretization thresholds while simultaneously optimizing the features for each node in the tree. We empirically show the validity of our model by testing it on two challenging regression tasks where we establish the state of the art.
rejected-papers
While the idea of revisiting regression-via-classification is interesting, the reviewers all agree that the paper lacks a proper motivating story for why this perspective is important. Furthermore, the baselines are weak, and there is additional relevant work that should be considered and discussed.
train
[ "H1eB8QXqam", "S1lO457jnm", "rke_Ie993Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a new approach to regression via classification problem utilizing a hybrid model between a neural network and a decision tree. The paper is very well written and easy to follow. It presents results on two very similar regression tasks and claims state of the art performance on both. The paper ...
[ 5, 3, 4 ]
[ 3, 4, 5 ]
[ "iclr_2019_H1lFZnR5YX", "iclr_2019_H1lFZnR5YX", "iclr_2019_H1lFZnR5YX" ]
iclr_2019_H1lGHsA9KX
A Resizable Mini-batch Gradient Descent based on a Multi-Armed Bandit
Determining the appropriate batch size for mini-batch gradient descent is always time-consuming as it often relies on grid search. This paper considers a resizable mini-batch gradient descent (RMGD) algorithm based on a multi-armed bandit that achieves performance equivalent to that of the best fixed batch size. At each epoch, the RMGD samples a batch size according to a probability distribution reflecting each batch size's success in reducing the loss function. Sampling from this distribution provides a mechanism for exploring different batch sizes and exploiting batch sizes with a history of success. After obtaining the validation loss at each epoch with the sampled batch size, the probability distribution is updated to incorporate the effectiveness of the sampled batch size. Experimental results show that the RMGD achieves performance better than the best performing single batch size. It is surprising that the RMGD achieves better performance than grid search. Furthermore, it attains this performance in a shorter amount of time than grid search.
rejected-papers
It is a simple but good idea to consider the choice of mini-batch size as a multi-armed bandit problem. Experiments also show a slight improvement compared to the best fixed batch size. The main concerns from the reviewers are that (1) treating the choice of hyper-parameters as a bandit problem is known and has been exploited in different contexts, and this paper is limited to the choice of the mini-batch size, and (2) the improvement in the test error is not significant. The authors' feedback did not resolve the concerns raised by R2. This paper conveys a nice idea, but in its current form it falls slightly below the standard of ICLR publications. One direction for improvement, as suggested by the reviewer, would be extending the idea to wider hyper-parameter selection problems.
train
[ "H1eLL5ttCQ", "r1x4pUtK0X", "rkxzgGYFCX", "H1ex_gYFR7", "HJx9ZYNchX", "Hylaq3TunQ", "Bke3qSqd37" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n1. About bandit-based hyperparameter optimization and wider context of parameter/model selection:\n First, the authors were shortsighted to consider batch size problem as a hyperparameter optimization problem, and for this problem, the proposed algorithm does not provide the best batch size instead it provides ...
[ -1, -1, -1, -1, 6, 7, 4 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "Bke3qSqd37", "Hylaq3TunQ", "HJx9ZYNchX", "iclr_2019_H1lGHsA9KX", "iclr_2019_H1lGHsA9KX", "iclr_2019_H1lGHsA9KX", "iclr_2019_H1lGHsA9KX" ]
iclr_2019_H1lIzhC9FX
Learning to remember: Dynamic Generative Memory for Continual Learning
Continuously trainable models should be able to learn from a stream of data over an undefined period of time. This becomes even more difficult in a strictly incremental context, where data access to previously seen categories is not possible. To that end, we propose making use of a conditional generative adversarial model where the generator is used as a memory module through neural masking to emulate neural plasticity in the human brain. This memory module is further associated with a dynamic capacity expansion mechanism. Taken together, this method facilitates a resource efficient capacity adaption to accommodate new tasks, while retaining previously attained knowledge. The proposed approach outperforms state-of-the-art algorithms on publicly available datasets, overcoming catastrophic forgetting.
rejected-papers
The authors propose to tackle the problem of catastrophic forgetting in continual learning by adopting the generative replay strategy with the generator network as an extendable memory module. While acknowledging that the proposed model is potentially useful, the reviewers raised several important concerns that were viewed by AC as critical issues: (1) poor presentation clarity of the manuscript and incremental technical contribution in light of prior work by Serra et al. (2018); (2) rigorous experiments and in-depth analysis of the baseline models in terms of accuracy, number of parameters, memory demand and model complexity would significantly strengthen the evaluation – see R1's and R3's suggestions on how to improve; (3) simple strategies such as storing a number of examples and memory replay should not be neglected and should be evaluated to assess the scope of the contribution. Additionally, R1 raised a concern that preventing the generator from forgetting should be supported by an ablation study on both the discriminator's and the generator's abilities to remember and to forget. R1 and R3 provided very detailed and constructive reviews, as acknowledged by the authors. R2 expressed similar concerns about time/memory comparison of different methods, but his/her brief review did not have a substantial impact on the decision. The AC suggests that in its current state the manuscript is not ready for publication. We hope the reviews are useful for improving and revising the paper.
train
[ "SJeLjug5R7", "HyxZlDe9A7", "rJgfByXdRQ", "BygjNpyHAm", "Hkg19YkSAm", "S1laMLagAQ", "rkxwLL6eA7", "H1eAyvBa2Q", "rJggAvorhQ", "HkgsUIdM3Q" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We updated the paper with the CIFAR results as well as cite the mentioned papers on capacity growth. \n\nConsidering the comparison to Progressive Networks:\nSimilarly to Progressive Neural Networks [1] and its evolution [2] our method addresses the challenge of knowledge transfer by ensuring the reusability of pa...
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ "BygjNpyHAm", "Hkg19YkSAm", "rJggAvorhQ", "S1laMLagAQ", "rkxwLL6eA7", "H1eAyvBa2Q", "H1eAyvBa2Q", "iclr_2019_H1lIzhC9FX", "iclr_2019_H1lIzhC9FX", "iclr_2019_H1lIzhC9FX" ]
iclr_2019_H1lJws05K7
On the Selection of Initialization and Activation Function for Deep Neural Networks
The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure. An inappropriate selection can lead to the loss of information of the input during forward propagation and the exponential vanishing/exploding of gradients during back-propagation. Understanding the theoretical properties of untrained random networks is key to identifying which deep networks may be trained successfully as recently demonstrated by Schoenholz et al. (2017) who showed that for deep feedforward neural networks only a specific choice of hyperparameters known as the `edge of chaos' can lead to good performance. We complete this analysis by providing quantitative results showing that, for a class of ReLU-like activation functions, the information propagates indeed deeper for an initialization at the edge of chaos. By further extending this analysis, we identify a class of activation functions that improve the information propagation over ReLU-like functions. This class includes the Swish activation, ϕswish(x)=x⋅sigmoid(x), used in Hendrycks & Gimpel (2016), Elfwing et al. (2017) and Ramachandran et al. (2017). This provides a theoretical grounding for the excellent empirical performance of ϕswish observed in these contributions. We complement those previous results by illustrating the benefit of using a random initialization on the edge of chaos in this context.
rejected-papers
The paper attempts to extend the recent analysis of random deep networks to alternative activation functions. Unfortunately, none of the reviewers recommended the paper be accepted. The current presentation lacks both clarity and a sufficiently convincing supporting argument/evidence to satisfy the reviewers. The contribution is perceived as too incremental in light of previous work.
train
[ "BJlp2oCh2Q", "S1gMNNUmAm", "HJlerbNMCQ", "SyWiEUnxCQ", "H1xmIp5lC7", "Skxk9USxRX", "HJlWgjtJ0m", "rylzpsv6pX", "HkeLejGp6m", "SkeeBUGp6Q", "SJxDorH5aQ", "ryxeYSB96Q", "ryguQHBq67", "H1xr-BSq6Q", "rJe10kLt2m", "SJgij-iuhX" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors prove some theoretical results under the mean field regime and support their conclusions with a small number of experiments. Their central argument is that a correlation curve that leads to sub-exponential correlation convergence (edge of chaos) can still lead to rapid convergence if the rate is e.g. q...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_H1lJws05K7", "HJlerbNMCQ", "SyWiEUnxCQ", "H1xmIp5lC7", "Skxk9USxRX", "HJlWgjtJ0m", "rylzpsv6pX", "SkeeBUGp6Q", "SkeeBUGp6Q", "H1xr-BSq6Q", "SJgij-iuhX", "rJe10kLt2m", "BJlp2oCh2Q", "BJlp2oCh2Q", "iclr_2019_H1lJws05K7", "iclr_2019_H1lJws05K7" ]
iclr_2019_H1lPUiRcYQ
Computing committor functions for the study of rare events using deep learning with importance sampling
The committor function is a central object of study in understanding transitions between metastable states in complex systems. However, computing the committor function for realistic systems at low temperatures is a challenging task, due to the curse of dimensionality and the scarcity of transition data. In this paper, we introduce a computational approach that overcomes these issues and achieves good performance on complex benchmark problems with rough energy landscapes. The new approach combines deep learning, importance sampling and feature engineering techniques. This establishes an alternative practical method for studying rare transition events among metastable states of complex, high dimensional systems.
rejected-papers
This paper proposes a neural network based method for computing committor functions, which are used to understand transitions between stable states in complex systems. The authors improve over the techniques of Khoo et al. with a method to approximately satisfy boundary conditions and an importance sampling method to deal with rare events. This is a good application paper, introducing a new application to the ML audience, but the technical novelty is a bit limited. The reviewers see value in the paper; however, scaling w.r.t. dimensionality appears to be an issue with this approach.
train
[ "rJeiwZE4pm", "Sylyw6fVCX", "HJx03cM4Am", "r1xJe9zVRX", "SkxYZ5n8T7", "Bkl-rzuQaX", "rklLlifQT7", "SJl7Wx8MpQ", "Syxwna5-aQ", "rylH9snv2m" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In response to the authors' rebuttal, I have increased my ratings accordingly. I strongly encourage the authors to include those ablative study results in the work. I also strongly recommend an ablative study on importance sampling so as to provide more quantitative results, in addition to Fig. 4. Finally, I hope ...
[ 6, -1, -1, -1, 6, -1, -1, -1, 5, 7 ]
[ 4, -1, -1, -1, 4, -1, -1, -1, 4, 4 ]
[ "iclr_2019_H1lPUiRcYQ", "SkxYZ5n8T7", "rJeiwZE4pm", "Bkl-rzuQaX", "iclr_2019_H1lPUiRcYQ", "rklLlifQT7", "rylH9snv2m", "Syxwna5-aQ", "iclr_2019_H1lPUiRcYQ", "iclr_2019_H1lPUiRcYQ" ]
iclr_2019_H1lS8oA5YQ
Feature Attribution As Feature Selection
Feature attribution methods identify "relevant" features as an explanation of a complex machine learning model. Several feature attribution methods have been proposed; however, only a few studies have attempted to define the "relevance" of each feature mathematically. In this study, we formalize the feature attribution problem as a feature selection problem. In our proposed formalization, there arise two possible definitions of relevance. We name the feature attribution problems based on these two relevances as Exclusive Feature Selection (EFS) and Inclusive Feature Selection (IFS). We show that several existing feature attribution methods can be interpreted as approximation algorithms for EFS and IFS. Moreover, through exhaustive experiments, we show that IFS is better suited as the formalization for the feature attribution problem than EFS.
rejected-papers
All in all, while the reviewers found that the problem at hand is interesting to study, the submission's contributions in terms of significance/novelty did not rise to the standards for acceptance. The reasoning is most succinctly discussed by R3 who argues that IFS and EFS are basically feature selection and applying them to feature attribution is not particularly novel from a methodological point of view.
train
[ "HJgCuX21A7", "ryxJaoskRm", "Hkgw0NokRQ", "HJxZkrZ03X", "ryl4-82n2X", "HylGTEbi2m" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "First of all, we would like to thank you for your time and efforts to review our paper.\n\n> the time complexity\n\nGrad-IFS (which attained the best AOIC) requires solving an optimization problem, and it is therefore not very fast. Practically it takes around a few minutes with one GPU. We are working on improvin...
[ -1, -1, -1, 4, 4, 3 ]
[ -1, -1, -1, 2, 4, 3 ]
[ "HylGTEbi2m", "ryl4-82n2X", "HJxZkrZ03X", "iclr_2019_H1lS8oA5YQ", "iclr_2019_H1lS8oA5YQ", "iclr_2019_H1lS8oA5YQ" ]
iclr_2019_H1lUOsA9Fm
Synthnet: Learning synthesizers end-to-end
Learning synthesizers and generating music in the raw audio domain is a challenging task. We investigate the learned representations of convolutional autoregressive generative models. Consequently, we show that mappings between musical notes and the harmonic style (instrument timbre) can be learned based on the raw audio music recording and the musical score (in binary piano roll format). Our proposed architecture, SynthNet uses minimal training data (9 minutes), is substantially better in quality and converges 6 times faster than the baselines. The quality of the generated waveforms (generation accuracy) is sufficiently high that they are almost identical to the ground truth. Therefore, we are able to directly measure generation error during training, based on the RMSE of the Constant-Q transform. Mean opinion scores are also provided. We validate our work using 7 distinct harmonic styles and also provide visualizations and links to all generated audio.
rejected-papers
The paper describes a WaveNet-like model for MIDI-conditional music audio generation. As noted by all reviewers, the major limitation of the paper is that the method is evaluated on a synthetic dataset. The rebuttal and post-rebuttal discussion didn't change the reviewers' opinion.
train
[ "r1lHLoSBAQ", "S1gXLqSBC7", "SyxYX3qfAQ", "Skx0moqG07", "SkekojcfR7", "ryebHi-X0m", "B1ef6FZXRX", "B1lTST9GRX", "rklb-RqfRX", "B1MR69fAQ", "SyezspcfAQ", "rJedO6qz07", "rJgDg6qf07", "HklGl5cfRQ", "BJxbph5GAQ", "SklAOnqz0Q", "B1ehJP5M0Q", "Hkx0y2qfAm", "HklZiBczAX", "HyxV1jcfCQ",...
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer"...
[ "The ICLR enforced page limit is 10 with a strong suggestion to stick to 8 pages.\n\nAny article can be improved ad infinitum.\n\nWe have made several contributions, set standards and laid some fundamental work.\n\nWe appreciate your suggestions and will consider them as extensions in future work and encourage the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "ryebHi-X0m", "B1ef6FZXRX", "BygzYhni3m", "B1lT_ciXTQ", "B1lT_ciXTQ", "rJedO6qz07", "rklb-RqfRX", "BkeotIhKhm", "BkeotIhKhm", "BkeotIhKhm", "BkeotIhKhm", "BkeotIhKhm", "BkeotIhKhm", "BkeotIhKhm", "BkeotIhKhm", "BygzYhni3m", "BygzYhni3m", "BygzYhni3m", "B1lT_ciXTQ", "B1lT_ciXTQ"...
iclr_2019_H1ldNoC9tX
Classification from Positive, Unlabeled and Biased Negative Data
Positive-unlabeled (PU) learning addresses the problem of learning a binary classifier from positive (P) and unlabeled (U) data. It is often applied to situations where negative (N) data are difficult to label fully. However, collecting a non-representative N set that contains only a small portion of all possible N data can be much easier in many practical situations. This paper studies a novel classification framework which incorporates such biased N (bN) data in PU learning. The fact that the training N data are biased also makes our work very different from those of standard semi-supervised learning. We provide an empirical risk minimization-based method to address this PUbN classification problem. Our approach can be regarded as a variant of traditional example-reweighting algorithms, with the weight of each example computed through a preliminary step that draws inspiration from PU learning. We also derive an estimation error bound for the proposed method. Experimental results demonstrate the effectiveness of our algorithm in not only PUbN learning scenarios but also ordinary PU learning scenarios on several benchmark datasets.
rejected-papers
The paper proposes an algorithm for semi-supervised learning which incorporates biased negative data into the existing PU learning framework. The reviewers and AC commonly note the paper's critically limited practical value and that its results are rather straightforward. The AC decided the paper might not be ready for publication, as its other contributions are not enough to compensate for this issue.
test
[ "ByxOA-y9T7", "HkgJWb15am", "HJxGYGk5TQ", "S1eRDg1qp7", "r1euaUPDTm", "HygdSLPv6m", "BkeevSvD6m", "ryeWcKMPaX", "H1ggpksxaQ", "HkxQIauh2Q", "SJga3o753Q" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Effectively we have mentioned that our problem setup can be viewed as a special case of dataset shift but we did not establish any connection between our method and any other algorithms dealing with the dataset shift problem.\n\nThe reweighting technique is a popular solution to many related problems like covariat...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "SJga3o753Q", "SJga3o753Q", "HkxQIauh2Q", "SJga3o753Q", "SJga3o753Q", "SJga3o753Q", "SJga3o753Q", "H1ggpksxaQ", "iclr_2019_H1ldNoC9tX", "iclr_2019_H1ldNoC9tX", "iclr_2019_H1ldNoC9tX" ]
iclr_2019_H1lnJ2Rqt7
LARGE BATCH SIZE TRAINING OF NEURAL NETWORKS WITH ADVERSARIAL TRAINING AND SECOND-ORDER INFORMATION
Stochastic Gradient Descent (SGD) methods using randomly selected batches are widely-used to train neural network (NN) models. Performing design exploration to find the best NN for a particular task often requires extensive training with different models on a large dataset, which is very computationally expensive. The most straightforward method to accelerate this computation is to distribute the batch of SGD over multiple processors. However, large batch training often times leads to degradation in accuracy, poor generalization, and even poor robustness to adversarial attacks. Existing solutions for large batch training either do not work or require massive hyper-parameter tuning. To address this issue, we propose a novel large batch training method which combines recent results in adversarial training (to regularize against ``sharp minima'') and second order optimization (to use curvature information to change batch size adaptively during training). We extensively evaluate our method on Cifar-10/100, SVHN, TinyImageNet, and ImageNet datasets, using multiple NNs, including residual networks as well as compressed networks such as SqueezeNext. Our new approach exceeds the performance of the existing solutions in terms of both accuracy and the number of SGD iterations (up to 1\% and 3×, respectively). We emphasize that this is achieved without any additional hyper-parameter tuning to tailor our method to any of these experiments.
rejected-papers
I would like to commend the authors on their work engaging with the reviewers and for working to improve training time. However, there is not enough support among the reviewers to accept this submission. The reviewers raised several important points about the paper, but I believe there are a few other issues not adequately highlighted in the reviews that prevent this work from being accepted: 1. [premises] It has not been adequately established that "large batch training often times leads to degradation in accuracy" inherently, which is an important premise of this work. Reports from the literature can largely be explained by other things in the experimental protocol. Even the framing of this issue has become confused since, although it may be possible to achieve the same accuracy at any batch size with careful tuning, this might require using (at worst) the same number of steps as the smaller batch size in some cases and thus result in little to no speedup. For example see https://arxiv.org/abs/1705.08741 and recent work in https://arxiv.org/abs/1811.03600 for more information. Even Keskar et al. reported that data augmentation eliminated the solution quality difference between their larger batch size and their smaller batch size experiments, which indicates that even if noisiness from small batches serves to regularize training, other regularization techniques can serve just as well. 2. [baseline strength] The appropriate baseline is the standard minibatch SGD w/momentum (or ADAM or whatever) algorithm with extremely careful tuning of *all* of the hyperparameters. None of the popular learning rate heuristics will always work, and other optimization parameters need to be tuned as well. If learning rate decay is used, it should also be tuned, especially if one is trying to measure a speedup. The submission does not provide a sufficiently convincing baseline. 3. [measurement protocol] The protocol for measuring a speedup is not convincing without more information on how the baselines were tuned to achieve the same accuracy in the fewest steps. Approximating the protocols in https://arxiv.org/abs/1811.03600 would be one alternative. Additionally, there are a variety of framing issues around hyperparameter tuning, but, because they are easier to fix, they are not as salient for the decision.
train
[ "HJeN-Bjty4", "Bkl9upa53m", "r1eMKZgr14", "B1emuVOcCQ", "HkxY5E_9AQ", "Syx8xWO5R7", "HyxCSeO5CQ", "S1eDak_5R7", "S1xtFQylaQ", "SkxjN0yHsQ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewer for the detailed review and feedback. We will update the paper accordingly.\n", "This paper studies the large batch size training of neural networks, and incorporates adversarial training and second-order information to improve the efficiency and effectiveness of the proposed ...
[ -1, 7, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "r1eMKZgr14", "iclr_2019_H1lnJ2Rqt7", "Syx8xWO5R7", "SkxjN0yHsQ", "B1emuVOcCQ", "Bkl9upa53m", "S1xtFQylaQ", "iclr_2019_H1lnJ2Rqt7", "iclr_2019_H1lnJ2Rqt7", "iclr_2019_H1lnJ2Rqt7" ]
iclr_2019_H1lo3sC9KX
Asynchronous SGD without gradient delay for efficient distributed training
Asynchronous distributed gradient descent algorithms for training of deep neural networks are usually considered inefficient, mainly because of the gradient delay problem. In this paper, we propose a novel asynchronous distributed algorithm that tackles this limitation by well-thought-out averaging of model updates computed by workers. The algorithm allows computing gradients along the process of gradient merge, thus reducing or even completely eliminating worker idle time due to communication overhead, which is a pitfall of existing asynchronous methods. We provide theoretical analysis of the proposed asynchronous algorithm and show its regret bounds. According to our analysis, the crucial parameter for keeping a high convergence rate is the maximal discrepancy between the local parameter vectors of any pair of workers. As long as it is kept relatively small, the convergence rate of the algorithm is shown to be the same as that of sequential online learning. Furthermore, in our algorithm, this discrepancy is bounded by an expression that involves the staleness parameter of the algorithm and is independent of the number of workers. This is the main differentiator between our approach and other solutions, such as Elastic Asynchronous SGD or Downpour SGD, in which the maximal discrepancy is bounded by an expression that depends on the number of workers, due to the gradient delay problem. To demonstrate the effectiveness of our approach, we conduct a series of experiments on an image classification task on a cluster with 4 machines, equipped with a commodity communication switch and with a single GPU card per machine. Our experiments show linear scaling on the 4-machine cluster without sacrificing test accuracy, while eliminating worker idle time almost completely. Since our method allows using a commodity communication switch, it paves the way for large-scale distributed training performed on commodity clusters.
rejected-papers
Improving the staleness of asynchronous SGD is an important topic. This paper proposed an algorithm to restrict the staleness and provided theoretical analysis. However, the reviewers did not consider the proposed algorithm a significant contribution. The paper still did not solve the staleness problem, and it lacked discussion of and experimental comparison with state-of-the-art ASGD algorithms. Reviewer 3 also found the explanation of the algorithm hard to follow.
train
[ "SJeYoULs2m", "ByeWt-9S2Q", "H1lgeOmEo7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Overall, this paper is well written and clearly present their main contribution.\nHowever, the novel asynchronous distributed algorithm seems not be significant enough.\nThe delayed gradient condition has been widely discussed, but there are not enough comparison between these variants.\n", "The paper proposes a...
[ 5, 4, 4 ]
[ 4, 4, 5 ]
[ "iclr_2019_H1lo3sC9KX", "iclr_2019_H1lo3sC9KX", "iclr_2019_H1lo3sC9KX" ]
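The gradient delay problem that this abstract centers on is easy to reproduce in a toy setting. Below is a sketch of my own (not the paper's algorithm): gradient descent on f(w) = w^2/2, where each update may use the gradient of a `delay`-step-old iterate, as happens with stale updates from slow workers in asynchronous SGD.

```python
def run_sgd(lr=0.1, steps=50, delay=0):
    """Gradient descent on f(w) = w**2 / 2 (gradient = w). With delay > 0,
    each update uses the gradient of a `delay`-step-old iterate, mimicking
    a stale gradient arriving from a straggling worker."""
    w = [1.0]
    for t in range(steps):
        g = w[max(t - delay, 0)]   # stale gradient when delay > 0
        w.append(w[-1] - lr * g)
    return w

seq = run_sgd(delay=0)
stale = run_sgd(delay=5)
# The stale run still converges here, but no longer decays monotonically:
# it overshoots zero and oscillates before settling.
```

With larger delay (or larger learning rate) the stale iteration can become unstable outright, which is the effect the paper's staleness bound is meant to control.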
iclr_2019_H1ltQ3R9KQ
Causal Reasoning from Meta-reinforcement learning
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents. Here we explore whether modern deep reinforcement learning can be used to train agents to perform causal reasoning. We adopt a meta-learning approach, where the agent learns a policy for conducting experiments via causal interventions, in order to support a subsequent task which rewards making accurate causal inferences. We also found that the agent could make sophisticated counterfactual predictions, as well as learn to draw causal inferences from purely observational data. Though powerful formalisms for causal reasoning have been developed, applying them in real-world domains can be difficult because fitting to large amounts of high-dimensional data often requires making idealized assumptions. Our results suggest that causal reasoning in complex settings may benefit from powerful learning-based approaches. More generally, this work may offer new strategies for structured exploration in reinforcement learning, by providing agents with the ability to perform—and interpret—experiments.
rejected-papers
The reviewers raised a number of concerns, including insufficiently demonstrated benefits of the proposed methodology, lack of explanations, and the lack of a thorough and convincing experimental evaluation. The authors’ rebuttal failed to fully alleviate these concerns. I agree with the main concerns raised and, although I also believe that this work can eventually result in a very interesting paper, I cannot recommend it at this stage for presentation at ICLR.
train
[ "BkxZIo17J4", "rJxLL_L40m", "rkxv6DIVRm", "SyxDjPH10m", "rylNDB5n6m", "r1gd5N52pQ", "S1eTXE52TX", "S1ldPsHiT7", "Syl5shBjpQ", "HyeYcqHs6X", "rkeS2DrjpX", "BkgPvmG96X", "SJlspzn9nm", "H1gHF1d4nQ" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nWe have endeavored to address all the reviewers' concerns, and hope they will find our manuscript much improved after the additions and clarifications detailed below:\n\n>> Changed the title to “Causality from Meta Reinforcement Learning” incorporating feedback from reviewer 1.\n>> Changed phrasing in the abstra...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4, 4, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2019_H1ltQ3R9KQ", "rkxv6DIVRm", "SyxDjPH10m", "iclr_2019_H1ltQ3R9KQ", "r1gd5N52pQ", "S1eTXE52TX", "BkgPvmG96X", "HyeYcqHs6X", "S1ldPsHiT7", "SJlspzn9nm", "H1gHF1d4nQ", "iclr_2019_H1ltQ3R9KQ", "iclr_2019_H1ltQ3R9KQ", "iclr_2019_H1ltQ3R9KQ" ]
iclr_2019_H1lug3R5FX
On the Geometry of Adversarial Examples
Adversarial examples are a pervasive phenomenon of machine learning models where seemingly imperceptible perturbations to the input lead to misclassifications for otherwise statistically accurate models. We propose a geometric framework, drawing on tools from the manifold reconstruction literature, to analyze the high-dimensional geometry of adversarial examples. In particular, we highlight the importance of codimension: for low-dimensional data manifolds embedded in high-dimensional space there are many directions off the manifold in which to construct adversarial examples. Adversarial examples are a natural consequence of learning a decision boundary that classifies the low-dimensional data manifold well, but classifies points near the manifold incorrectly. Using our geometric framework we prove (1) a tradeoff between robustness under different norms, (2) that adversarial training in balls around the data is sample inefficient, and (3) sufficient sampling conditions under which nearest neighbor classifiers and ball-based adversarial training are robust.
rejected-papers
The paper gives a theoretical analysis highlighting the role of codimension on the pervasiveness of adversarial examples. The paper demonstrates that a single decision boundary cannot be robust in different norms. It further proves that it is insufficient to learn robust decision boundaries by training against adversarial examples drawn from balls around the training set. The main concern with the paper is that most of the theoretical results might have a very restrictive scope and the writing is difficult to follow. The authors expressed concerns about a review not being very constructive. In a nutshell, the review in question points out that the theory might be too restrictive, that the experimental section is not very strong, that there are other works on related topics, and that the writing of the paper could be improved. While I understand the disappointment of the authors, the main points here appear to be consistent with the other reviews, which also mention that the theoretical results in this paper are not very general, that the writing is a bit complicated or heavy in mathematics and not easy to follow, or that it is not clear whether the bounds can be useful or easily applied in other work. One reviewer rates the paper marginally above the acceptance threshold, while two other reviewers rate the paper below the acceptance threshold.
train
[ "Byx6bww8JE", "ryl1arv8JN", "rklS3XKBk4", "Ske8ZAzg1E", "S1xb4q4n0X", "HkePK_6o0m", "rklFsS2oRQ", "Skesu13o0Q", "Bk_T6Ywc0m", "Hye6LVtG6Q", "Skx0lEKzam", "SJe00fYzpm", "rJgtqzFfaX", "SyeHPNA92m", "rkxyHuB5n7", "S1e3vSFun7" ]
[ "author", "author", "public", "public", "public", "author", "public", "author", "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Hi Emin,\n\nWe’re glad you found our paper insightful. In particular we’d like to thank you for bringing additional references to our attention. \n\n> Regarding potential improvements to our results on k-nn.\n\nThis is actually not true in our mathematical model. The reason it is not true is because we place no co...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "rklS3XKBk4", "Ske8ZAzg1E", "iclr_2019_H1lug3R5FX", "iclr_2019_H1lug3R5FX", "HkePK_6o0m", "rklFsS2oRQ", "Skesu13o0Q", "Bk_T6Ywc0m", "iclr_2019_H1lug3R5FX", "S1e3vSFun7", "rkxyHuB5n7", "SyeHPNA92m", "iclr_2019_H1lug3R5FX", "iclr_2019_H1lug3R5FX", "iclr_2019_H1lug3R5FX", "iclr_2019_H1lug...
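The norm tradeoff claimed in the abstract leans on a basic fact of high-dimensional geometry: the worst-case ℓ2 length of an ℓ∞-bounded perturbation grows like √d, with equality at the "corners" of the ℓ∞ ball. A quick numeric check of that standard inequality (the dimension and budget below are my own example values, not the paper's):

```python
import numpy as np

d = 10_000                       # ambient dimension, e.g. number of pixels
eps = 0.03                       # l_inf perturbation budget
corner = np.full(d, eps)         # a "corner" perturbation saturating the budget
l2 = float(np.linalg.norm(corner))
# ||v||_2 <= sqrt(d) * ||v||_inf, with equality at corners:
assert np.isclose(l2, eps * np.sqrt(d))   # 0.03 * 100 = 3.0
```

So a boundary that tolerates tiny ℓ∞ perturbations must, in those corner directions, tolerate ℓ2 perturbations a factor of √d larger, which is one intuition behind the robustness tradeoff between norms.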
iclr_2019_H1x1noAqKX
Discriminative out-of-distribution detection for semantic segmentation
Most classification and segmentation datasets assume a closed-world scenario in which predictions are expressed as a distribution over a predetermined set of visual classes. However, such an assumption implies unavoidable and often unnoticeable failures in the presence of out-of-distribution (OOD) input. These failures are bound to happen in most real-life applications since current visual ontologies are far from comprehensive. We propose to address this issue by discriminative detection of OOD pixels in input data. Unlike recent approaches, we avoid making decisions by observing only the training dataset of the primary model trained to solve the desired computer vision task. Instead, we train a dedicated OOD model which discriminates the primary training set from a much larger "background" dataset which approximates the variety of the visual world. We perform our experiments on high-resolution natural images in a dense prediction setup. We use several road-driving datasets as our training distribution, while we approximate the background distribution with the ILSVRC dataset. We evaluate our approach on the WildDash test set, which is currently the only public test dataset with out-of-distribution images. The obtained results show that the proposed approach succeeds in identifying out-of-distribution pixels while outperforming previous work by a wide margin.
rejected-papers
The paper addresses the problem of out-of-distribution detection for helping the segmentation process. The reviewers and AC note that the limited novelty of this paper falls short of the high standard of ICLR. The AC also thinks the authors should avoid relying on explicit OOD datasets (e.g., ILSVRC) due to the nature of this problem; otherwise, this is a toy binary classification problem. The AC thinks the proposed method has potential and is interesting, but decided that the authors need more work before publication.
train
[ "H1eDFLhW0X", "HygePKfMTX", "rJg8XtMMa7", "S1g1JuzM6m", "SyeibDGGpQ", "S1lH3uCypm", "SJewfWvLhm", "rygvK39V2m" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have revised the paper according to the reviewers' comments.\n\nHere is the summary of changes:\n\n1. Introduction\nparagraph 3 - shorten uncertainty definitions (Reviewer 1: \"text needs a bit more improvement\")\nparagraph 4 - better explain the need to differentiate between the two types of uncertainties (Re...
[ -1, -1, -1, -1, -1, 4, 7, 3 ]
[ -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "iclr_2019_H1x1noAqKX", "S1lH3uCypm", "S1lH3uCypm", "rygvK39V2m", "SJewfWvLhm", "iclr_2019_H1x1noAqKX", "iclr_2019_H1x1noAqKX", "iclr_2019_H1x1noAqKX" ]
iclr_2019_H1x3SnAcYQ
A Better Baseline for Second Order Gradient Estimation in Stochastic Computation Graphs
Motivated by the need for higher order gradients in multi-agent reinforcement learning and meta-learning, this paper studies the construction of baselines for second order Monte Carlo gradient estimators in order to reduce the sample variance. Following the construction of a stochastic computation graph (SCG), the Infinitely Differentiable Monte-Carlo Estimator (DiCE) can generate correct estimates of arbitrary order gradients through differentiation. However, a baseline term that serves as a control variate for reducing variance is currently provided only for first order gradient estimation, limiting the utility of higher-order gradient estimates. To improve the sample efficiency of DiCE, we propose a new baseline term for higher order gradient estimation. This term may be easily included in the objective, and produces unbiased variance-reduced estimators under (automatic) differentiation, without affecting the estimate of the objective itself or of the first order gradient. We provide theoretical analysis and numerical evaluations of our baseline term, which demonstrate that it can dramatically reduce the variance of second order gradient estimators produced by DiCE. This computational tool can be easily used to estimate second order gradients with unprecedented efficiency wherever automatic differentiation is utilised, and has the potential to unlock applications of higher order gradients in reinforcement learning and meta-learning.
rejected-papers
This paper extends the DiCE estimator with a better control-variate baseline for variance reduction. The reviewers all think the paper is fairly clear and well written. However, as the reviews and discussion indicate, there are several critical issues, including the lack of an explanation for the choice of baseline, the lack of more realistic experiments, and a few misleading assertions. We encourage the authors to rewrite the paper to address these criticisms. We believe this work will make a successful submission with proper modifications in the future.
train
[ "BylyjkynC7", "Byx4BJJ3RQ", "SJgVaACi0X", "SJxbt-3sjX", "HJl83sY967", "ryglNsz9p7", "B1eK5IRF6X", "Hyl7UE0Ya7", "HJeQrTXahX" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "“I would \"reverse engineer\" from the exact derivatives and figure out the corresponding DiCE formula.”\nThere are two separate but related challenges: First of all, you need to formulate the correct baseline for the 2nd order derivatives. In particular we wanted to make sure that this baseline can be constructed...
[ -1, -1, -1, 3, 6, 5, -1, -1, 6 ]
[ -1, -1, -1, 4, 3, 4, -1, -1, 3 ]
[ "ryglNsz9p7", "HJl83sY967", "SJxbt-3sjX", "iclr_2019_H1x3SnAcYQ", "iclr_2019_H1x3SnAcYQ", "iclr_2019_H1x3SnAcYQ", "SJxbt-3sjX", "HJeQrTXahX", "iclr_2019_H1x3SnAcYQ" ]
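At first order, the baseline this paper generalizes is the classic control-variate trick for score-function (REINFORCE) gradient estimators: subtracting a constant from the objective leaves the gradient estimate unbiased but can sharply reduce its variance. The sketch below checks that first-order effect numerically on a Gaussian toy problem of my own; it does not implement DiCE's higher-order operator.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 2.0, 100_000
f = lambda x: x ** 2                      # objective; d/dtheta E[f(x)] = 2*theta

# Score-function estimator for d/dtheta E_{x ~ N(theta, 1)}[f(x)].
xb = theta + rng.standard_normal(n)       # separate batch, used only for the baseline
b = f(xb).mean()                          # constant baseline approximating E[f]
x = theta + rng.standard_normal(n)
score = x - theta                         # d/dtheta log N(x; theta, 1)
g_plain = f(x) * score                    # unbiased, high variance
g_base = (f(x) - b) * score               # still unbiased, lower variance

assert abs(g_plain.mean() - 2 * theta) < 0.2
assert abs(g_base.mean() - 2 * theta) < 0.2
assert g_base.var() < g_plain.var()       # the control variate cuts the variance
```

The paper's contribution is constructing an analogous term whose *second* derivative acts as a baseline for second-order estimates while leaving the objective and first-order gradient untouched.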
iclr_2019_H1xAH2RqK7
Generative Adversarial Models for Learning Private and Fair Representations
We present Generative Adversarial Privacy and Fairness (GAPF), a data-driven framework for learning private and fair representations of the data. GAPF leverages recent advances in adversarial learning to allow a data holder to learn "universal" representations that decouple a set of sensitive attributes from the rest of the dataset. Under GAPF, finding the optimal decorrelation scheme is formulated as a constrained minimax game between a generative decorrelator and an adversary. We show that for appropriately chosen adversarial loss functions, GAPF provides privacy guarantees against strong information-theoretic adversaries and enforces demographic parity. We also evaluate the performance of GAPF on multi-dimensional Gaussian mixture models and real datasets, and show how a designer can certify that representations learned under an adversary with a fixed architecture perform well against more complex adversaries.
rejected-papers
While there was some support for the ideas presented, the majority of the reviewers did not think the submission was ready for presentation at ICLR. Concerns raised included that the experiments needed more work and that the paper needed to do a better job of distinguishing its contributions from those of past work.
test
[ "ryxFGgOcAX", "rylDOiBc0X", "rygU8-Cp6X", "SJlvx8RP67", "rJlZpem8p7", "HyluRikQpm", "H1xyb2JXp7", "HklGflyXTm", "B1lkEkyQ6X", "S1xBl0nJp7", "SJggqV6v37" ]
[ "author", "author", "official_reviewer", "author", "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers and readers for their feedback. We have updated our paper on OpenReview. We list below the major changes we have made to the paper. \n\n1. We have rewritten “our contributions” subsection to highlight our main contributions.\n\n2. We have moved the “related work” section to the introduct...
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 7 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2019_H1xAH2RqK7", "rygU8-Cp6X", "iclr_2019_H1xAH2RqK7", "rJlZpem8p7", "iclr_2019_H1xAH2RqK7", "SJggqV6v37", "SJggqV6v37", "S1xBl0nJp7", "S1xBl0nJp7", "iclr_2019_H1xAH2RqK7", "iclr_2019_H1xAH2RqK7" ]
iclr_2019_H1xEtoRqtQ
Scaling shared model governance via model splitting
Currently the only techniques for sharing governance of a deep learning model are homomorphic encryption and secure multiparty computation. Unfortunately, neither of these techniques is applicable to the training of large neural networks due to their large computational and communication overheads. As a scalable technique for shared model governance, we propose splitting deep learning model between multiple parties. This paper empirically investigates the security guarantee of this technique, which is introduced as the problem of model completion: Given the entire training data set or an environment simulator, and a subset of the parameters of a trained deep learning model, how much training is required to recover the model’s original performance? We define a metric for evaluating the hardness of the model completion problem and study it empirically in both supervised learning on ImageNet and reinforcement learning on Atari and DeepMind Lab. Our experiments show that (1) the model completion problem is harder in reinforcement learning than in supervised learning because of the unavailability of the trained agent’s trajectories, and (2) its hardness depends not primarily on the number of parameters of the missing part, but more so on their type and location. Our results suggest that model splitting might be a feasible technique for shared model governance in some settings where training is very expensive.
rejected-papers
As all the reviewers have highlighted, there is some interesting analysis in this paper on understanding which models can be easier to complete. The experiments are quite thorough and seem reproducible. However, the biggest limitation---and the one that is making it harder for the reviewers to come to a consensus---is the fact that the motivation seems mismatched with the provided approach. There is quite a lot of focus on security and being robust to an adversary. Model splitting is proposed as a reasonable solution. However, the Model Completion hardness measure proposed is insufficiently justified: it is not clear what security guarantees it provides, nor is it clear why training time was chosen over other metrics (like number of samples, as mentioned by a reviewer). If this measure had been previously proposed, and the focus of this paper was to provide empirical insight, that might be fine, but that does not appear to be the case. This mismatch is evident also in the writing of the paper. After the introduction, the paper largely reads as understanding how retrainable different architectures are under which problem settings, when replacing an entire layer, with little to no mention of security or privacy. In summary, this paper has some interesting ideas, but an unclear focus. The proposed strategy should be better justified. Or, maybe even better for the larger ICLR audience, the provided analysis could be motivated for other settings, such as understanding convergence rates or trainability in neural networks.
train
[ "Hke4tdLNkN", "SJgPugUVyE", "r1lc6BNV1V", "Sygbk-VEJE", "Hkl_02QmkE", "B1lby2Fjhm", "H1eUOsTtRQ", "rylzQd6_TX", "SkxYOwTu67", "B1gdPwTdTQ", "rklIsV6_pQ", "HJla2wojnm", "HkgkrCXq2Q" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for the clarifications on this portion, as there are multiple ways to define the parties for this setup.", "Apologies for insisting on this point, but we think it's not just a point about semantics. We are worried about a misunderstanding since you stated above that one of your two major critiques of t...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 4, 9 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "SJgPugUVyE", "r1lc6BNV1V", "Sygbk-VEJE", "Hkl_02QmkE", "B1gdPwTdTQ", "iclr_2019_H1xEtoRqtQ", "iclr_2019_H1xEtoRqtQ", "HkgkrCXq2Q", "B1gdPwTdTQ", "B1lby2Fjhm", "HJla2wojnm", "iclr_2019_H1xEtoRqtQ", "iclr_2019_H1xEtoRqtQ" ]
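The "metric for evaluating the hardness of the model completion problem" can be illustrated as a time-to-threshold ratio between a completion run and a from-scratch run. This is a deliberately simplified sketch of my own; the paper's exact definition (e.g. any best-over-seeds handling) may differ.

```python
import numpy as np

def time_to_accuracy(curve, threshold):
    """First training step at which an accuracy curve reaches `threshold`,
    or None if it never does."""
    hits = np.where(np.asarray(curve) >= threshold)[0]
    return int(hits[0]) if hits.size else None

def mc_hardness(completion_curve, scratch_curve, threshold):
    """Ratio of completion cost to from-scratch cost. Values near 0 mean the
    missing part is cheap to recover; values near 1 mean completing it is
    about as expensive as retraining the whole model."""
    return time_to_accuracy(completion_curve, threshold) / \
           time_to_accuracy(scratch_curve, threshold)

scratch = np.linspace(0.0, 1.0, 101)                # reaches 0.9 at step 90
completion = np.minimum(1.0, np.arange(101) / 30)   # reaches 0.9 at step 27
hardness = mc_hardness(completion, scratch, 0.9)    # -> 0.3
```

Under this framing, the paper's empirical finding is that hardness depends less on how many parameters are missing than on which layer they sit in.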
iclr_2019_H1xEwsR9FX
Convolutional CRFs for Semantic Segmentation
For the challenging semantic image segmentation task the best performing models have traditionally combined the structured modelling capabilities of Conditional Random Fields (CRFs) with the feature extraction power of CNNs. In more recent works, however, CRF post-processing has fallen out of favour. We argue that this is mainly due to the slow training and inference speeds of CRFs, as well as the difficulty of learning the internal CRF parameters. To overcome both issues we propose to add the assumption of conditional independence to the framework of fully-connected CRFs. This allows us to reformulate the inference in terms of convolutions, which can be implemented highly efficiently on GPUs. Doing so speeds up inference and training by two orders of magnitude. All parameters of the convolutional CRFs can easily be optimized using backpropagation. Towards the goal of facilitating further CRF research we have made our implementations publicly available.
rejected-papers
The authors replace the large filtering step in the permutohedral lattice with a spatially varying convolutional kernel. They show that inference is more efficient and training is easier. In practice, the synthetic experiments seem to show a greater improvement than appears in real data. There are concerns about the clarity, lack of theoretical proofs, and at times overstated claims that do not have sufficient support. The ratings before the rebuttal and discussion were 7-4-6. After, R1 adjusted their score from 6 to 4. R2 initially gave a 7 but later said "I think the authors missed an opportunity here. I rated it as an accept, because I saw what it could have been after a good revision. The core idea is good, but fully agree with R1 and R3 that the paper needs work (which the authors were not willing to do). I checked the latest revision (as of Monday morning). None of R3's writing/claims issues are fixed, neither were my additional experimental requests, not even R1's typos." There is therefore a consensus among reviewers for reject.
train
[ "H1xxcZlcA7", "Byl-3GecAQ", "ryeRgNlcAm", "rkg0cXxcAX", "HkluXmx90X", "S1gGpZl50X", "r1xcAAPFAX", "HkxFA4SvRm", "BkxsyqS9n7", "r1eHPMg9nm", "B1x4tVwB2X" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I heavily disagree that my notation is confusion or incorrect. I am main using well established conventions and notation and I am quite thorough in defining open parameters and objects. Running indices in sums and matrices (like $i$ in $sum_{i} = i*i$) as well as explicit function arguments (like $x$ in $f(x) = x*...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "r1eHPMg9nm", "r1eHPMg9nm", "r1eHPMg9nm", "r1eHPMg9nm", "Byl-3GecAQ", "H1xxcZlcA7", "B1x4tVwB2X", "BkxsyqS9n7", "iclr_2019_H1xEwsR9FX", "iclr_2019_H1xEwsR9FX", "iclr_2019_H1xEwsR9FX" ]
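The key move in the abstract, restricting the fully-connected pairwise term to a local window so that mean-field message passing becomes a convolution, can be sketched in a few lines. This is my own simplified 1-D, single-step illustration with made-up inputs, not the paper's GPU implementation:

```python
import numpy as np

def meanfield_step(unary, q, feats, k=3, theta=1.0):
    """One simplified mean-field update with a truncated (local) Gaussian
    pairwise kernel, in 1-D for readability. A fully-connected CRF would
    sum over all pixels j; the conditional-independence assumption limits
    the sum to a k-wide window, i.e. a convolution with a spatially
    varying kernel."""
    n, c = q.shape
    msg = np.zeros_like(q)
    r = k // 2
    for i in range(n):
        for j in range(max(0, i - r), min(n, i + r + 1)):
            if j == i:
                continue
            w = np.exp(-((feats[i] - feats[j]) ** 2) / (2 * theta ** 2))
            msg[i] += w * q[j]
    # Potts-style compatibility (up to a per-pixel constant): pull each
    # pixel toward the beliefs of feature-similar neighbours.
    logits = unary + msg
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

unary = np.array([[2.0, 0.0], [0.0, 0.0], [2.0, 0.0]])  # middle pixel ambiguous
q0 = np.exp(unary); q0 /= q0.sum(axis=1, keepdims=True)
q1 = meanfield_step(unary, q0, feats=np.zeros(3))
```

After one update the ambiguous middle pixel is pulled toward class 0 by its two confident, feature-similar neighbours, which is exactly the smoothing effect CRF post-processing provides.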
iclr_2019_H1xQSjCqFQ
Excitation Dropout: Encouraging Plasticity in Deep Neural Networks
We propose a guided dropout regularizer for deep networks based on the evidence of a network prediction: the firing of neurons in specific paths. In this work, we utilize the evidence at each neuron to determine the probability of dropout, rather than dropping out neurons uniformly at random as in standard dropout. In essence, we dropout with higher probability those neurons which contribute more to decision making at training time. This approach penalizes high saliency neurons that are most relevant for model prediction, i.e. those having stronger evidence. By dropping such high-saliency neurons, the network is forced to learn alternative paths in order to maintain loss minimization, resulting in a plasticity-like behavior, a characteristic of human brains too. We demonstrate better generalization ability, an increased utilization of network neurons, and a higher resilience to network compression using several metrics over four image/video recognition benchmarks.
rejected-papers
The reviewers overall agree that excitation dropout is a novel idea that seems to produce good empirical performance. However, they remain optimistic but unconvinced by the experiments in their current form. The authors have done an admirable job of addressing this through more experiments, including providing error bars; however, it seems as though the reviewers still require more. I would recommend creating tables of architecture x dropout technique, where dropout technique includes information dropout, adaptive dropout, curriculum dropout, and standard dropout, across several standard datasets. Alternatively, the authors could try to be more ambitious and classify ImageNet. Essentially, it seems as though the current small-scale datasets have become somewhat saturated, and therefore the bar for gauging a new method on them is higher in terms of experimental rigor. This means the best strategy is to either try more difficult benchmarks, or be extremely thorough and complete in your experiments. Regarding the wide resnet result, while I can appreciate that the original version published with higher errors, the later draft should still be taken into account as it has a) been out for a while now and b) can be reproduced in open source implementations (e.g., https://github.com/szagoruyko/wide-residual-networks).
train
[ "HJe9afONyE", "rklNZbuNkV", "Hkxi7ItOCX", "HygQ1UKO0m", "r1x_eqdVCX", "H1g1pt_NR7", "rJlOfYOV0X", "B1ejnwdN07", "rJgPNwdE0m", "HygEnQyj2X", "B1e25dCt2m", "rJlnWO0FhX" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n\n1.\n\n(a): We now consider the same architecture (i.e. All-CNN-32) used to report results with Information Dropout (TPAMI’18) for Cifar 10 (the common dataset between the two works). We perform two experiments by replacing the two dropout layers in the All-CNN-32 architecture with 1) Excitation Dropout and 2) ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "HygQ1UKO0m", "Hkxi7ItOCX", "H1g1pt_NR7", "rJlOfYOV0X", "rJlnWO0FhX", "rJlnWO0FhX", "rJlnWO0FhX", "B1e25dCt2m", "HygEnQyj2X", "iclr_2019_H1xQSjCqFQ", "iclr_2019_H1xQSjCqFQ", "iclr_2019_H1xQSjCqFQ" ]
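The core idea, dropping high-evidence neurons with higher probability while keeping the average dropout rate at a base value, can be sketched as below. Note that the exact retention formula in the paper differs; this normalization scheme is my own simplification for illustration.

```python
import numpy as np

def excitation_drop_probs(evidence, base_rate=0.5):
    """Per-neuron dropout probabilities proportional to normalized evidence,
    scaled so the mean dropout probability is ~base_rate (before clipping)."""
    e = evidence / evidence.sum()
    return np.clip(base_rate * len(e) * e, 0.0, 1.0)

def excitation_dropout(x, evidence, base_rate, rng):
    """Drop high-saliency units more often than standard (uniform) dropout."""
    p_drop = excitation_drop_probs(evidence, base_rate)
    return x * (rng.random(len(x)) >= p_drop)
```

With uniform evidence this reduces to standard dropout at `base_rate`; with peaked evidence, the neurons carrying the most evidence for the current prediction are the most likely to be silenced, forcing alternative paths to form.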
iclr_2019_H1xk8jAqKQ
Backplay: 'Man muss immer umkehren'
Model-free reinforcement learning (RL) requires a large number of trials to learn a good policy, especially in environments with sparse rewards. We explore a method to improve the sample efficiency when we have access to demonstrations. Our approach, Backplay, uses a single demonstration to construct a curriculum for a given task. Rather than starting each training episode in the environment's fixed initial state, we start the agent near the end of the demonstration and move the starting point backwards during the course of training until we reach the initial state. Our contributions are that we analytically characterize the types of environments where Backplay can improve training speed, demonstrate the effectiveness of Backplay both in large grid worlds and a complex four player zero-sum game (Pommerman), and show that Backplay compares favorably to other competitive methods known to improve sample efficiency. This includes reward shaping, behavioral cloning, and reverse curriculum generation.
rejected-papers
Pros:
- good, sensible idea
- good evaluations on the domains considered
- good analysis

Cons:
- novelty, broader evaluation

I think this is a good and interesting paper and I appreciate the authors' engagement with the reviewers. I agree with the authors that it is not fair to compare their work to a blog post which hasn't been published, and I have taken this into account. However, there is still concern among the reviewers about the strength of the technical contribution, and the decision was made not to accept for ICLR this year.
train
[ "SygiXscYyV", "Byxqk_mKkE", "S1xMBBpZkV", "S1xWghSch7", "B1lAG01c0Q", "S1eNOpkc07", "rJgq3KK7a7", "BkgXtgGQTm", "BJxTSbz7aX", "rJlgUMez6X", "rJeXOphc2Q" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thanks for reviewing the new version. With regards to Novelty, which prior works are you thinking of when you consider our submission?", "Thank you for taking the time to include additional experiments and clarifications to the paper. The new pommerman experiments do provide a clearer comparison of the different...
[ -1, -1, -1, 5, -1, -1, 5, -1, -1, -1, 5 ]
[ -1, -1, -1, 4, -1, -1, 4, -1, -1, -1, 3 ]
[ "Byxqk_mKkE", "S1eNOpkc07", "BkgXtgGQTm", "iclr_2019_H1xk8jAqKQ", "iclr_2019_H1xk8jAqKQ", "rJgq3KK7a7", "iclr_2019_H1xk8jAqKQ", "S1xWghSch7", "S1xWghSch7", "rJeXOphc2Q", "iclr_2019_H1xk8jAqKQ" ]
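The curriculum in the abstract, start each episode near the demonstration's end and slide the start point backwards over training, can be sketched directly. The window size and schedule constants below are my own placeholders, not the paper's hyperparameters.

```python
import numpy as np

def backplay_window(t_step, demo_len, window=4, pace=10, shift_every=100):
    """Candidate start indices at training step t_step: a small window that
    begins at the demonstration's end and slides back toward its start,
    eventually covering the environment's true initial state (index 0)."""
    offset = (t_step // shift_every) * pace
    hi = max(0, demo_len - 1 - offset)
    lo = max(0, hi - window)
    return lo, hi

def sample_start(t_step, demo_len, rng):
    lo, hi = backplay_window(t_step, demo_len)
    return int(rng.integers(lo, hi + 1))
```

Early in training the agent only needs to finish the last few steps of the demonstration; by the end of the schedule the window has collapsed onto index 0 and the agent faces the original task.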
iclr_2019_H1xmqiAqFm
Investigating CNNs' Learning Representation under label noise
Deep convolutional neural networks (CNNs) are known to be robust against label noise on extensive datasets. However, at the same time, CNNs are capable of memorizing all labels even if they are random, which means they can memorize corrupted labels. Are CNNs robust or fragile to label noise? Much of the research focusing on such memorization uses class-independent label noise to simulate label corruption, but this setting is simple and unrealistic. In this paper, we investigate the behavior of CNNs under class-dependently simulated label noise, which is generated based on the conceptual distance between classes of a large dataset (i.e., ImageNet-1k). Contrary to previous knowledge, we reveal that CNNs are more robust to such class-dependent label noise than to class-independent label noise. We also demonstrate that networks trained under class-dependent noise learn representations similar to those learned with no noise, in contrast to class-independent noise.
rejected-papers
The paper analyzes the performance of CNN models when data is mislabelled in different manners. The reviewers and AC note that the limited novelty of this paper falls short of the high standard of ICLR. The AC thinks the proposed method is interesting and has potential, but decided that the authors need more work before publication.
train
[ "S1lJST7XCm", "Sylfaqm7CQ", "rJgyKPQQCX", "SkgpvOmcnm", "B1eIw6R_nm", "Hkxm6ymuhX" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We highly appreciate your efforts for reviewing. Your comments with citations are helpful. To sum up, we modeled the label noise so that each ground truth label is replaced with the wrong one randomly sampled from the same group, which is clustered based on conceptual distance. CNNs show robustness to class-depend...
[ -1, -1, -1, 5, 4, 5 ]
[ -1, -1, -1, 4, 5, 5 ]
[ "Hkxm6ymuhX", "B1eIw6R_nm", "SkgpvOmcnm", "iclr_2019_H1xmqiAqFm", "iclr_2019_H1xmqiAqFm", "iclr_2019_H1xmqiAqFm" ]
iclr_2019_H1xpe2C5Km
Trace-back along capsules and its application on semantic segmentation
In this paper, we propose a capsule-based neural network model to solve the semantic segmentation problem. By taking advantage of the extractable part-whole dependencies available in capsule layers, we derive the probabilities of the class labels for individual capsules through a recursive, layer-by-layer procedure. We model this procedure as a traceback pipeline and take it as a central piece to build an end-to-end segmentation network. Under the proposed framework, image-level class labels and object boundaries are jointly sought in an explicit manner, which poses a significant advantage over the state-of-the-art fully convolutional network (FCN) solutions. Experiments conducted on modified MNIST and neuroimages demonstrate that our model considerably enhances the segmentation performance compared to the leading FCN variant.
rejected-papers
This paper proposes a method for tracing activations in a capsule-based network in order to obtain semantic segmentation from classification predictions. Reviewers 1 and 2 rate the paper as marginally above threshold, while Reviewer 3 rates it as marginally below. Reviewer 3 particularly points to experimental validation as a major weakness, stating: "not sure if the method will generalize well beyond MNIST", "I’m concerned that the results are not transferable to other datasets and that the method shines promising just because of the simple datasets only." The AC shares these concerns and does not believe the current experimental validation is sufficient. MNIST is a toy dataset, and may have been appropriate for introducing capsules as a new concept, but it is simply not difficult enough to serve as a quantitative benchmark to distinguish capsule performance from U-Net. U-Net and Tr-CapsNet appear to have similar performance on both MNIST and the hippocampus dataset; the relatively small advantage to Tr-CapsNet is not convincing. Furthermore, as Reviewer 1 suggests, it would seem appropriate to include experimental comparison to other capsule-based segmentation approaches (e.g. LaLonde and Bagci, Capsules for Object Segmentation, 2018). This related work is mentioned, but not used as an experimental baseline.
train
[ "rJxVMMmU1E", "r1xxviOHyV", "rke5zidHkE", "HJgBEQYHJV", "BJgRIcuryN", "HJenfKtBkE", "B1xuej1kyE", "BJxadp8URQ", "BJxpgTL80X", "r1gAtqIURX", "B1gGFFLUAX", "H1enWuL8AX", "r1lglVjsnX", "B1et6sL9hX", "Hker6v3V37" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Response: Thanks for your comment. We will certainly add the explanations into the final version of the paper. \n\nThe explanation of the averaging is as following:\nThe equation P(Ck|i) = SUM_n Pn(Ck|i)/N is proposed to calculate P for a convolutional capsule in layer L (i.e. any capsule in the overlapping area o...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "HJenfKtBkE", "rke5zidHkE", "B1xuej1kyE", "BJgRIcuryN", "H1enWuL8AX", "B1gGFFLUAX", "BJxadp8URQ", "BJxpgTL80X", "Hker6v3V37", "B1et6sL9hX", "r1lglVjsnX", "iclr_2019_H1xpe2C5Km", "iclr_2019_H1xpe2C5Km", "iclr_2019_H1xpe2C5Km", "iclr_2019_H1xpe2C5Km" ]
iclr_2019_H1z_Z2A5tX
DON’T JUDGE A BOOK BY ITS COVER - ON THE DYNAMICS OF RECURRENT NEURAL NETWORKS
To be effective in sequential data processing, Recurrent Neural Networks (RNNs) are required to keep track of past events by creating memories. Consequently, RNNs are harder to train than their feedforward counterparts, prompting the development of both dedicated units such as LSTM and GRU and of a handful of training tricks. In this paper, we investigate the effect of different training protocols on the representation of memories in RNNs. While reaching similar performance for different protocols, RNNs are shown to exhibit substantial differences in their ability to generalize for unforeseen tasks or conditions. We analyze the dynamics of the network’s hidden state, and uncover the reasons for this difference. Each memory is found to be associated with a nearly steady state of the dynamics whose speed predicts performance on unforeseen tasks and which we refer to as a ’slow point’. By tracing the formation of the slow points we are able to understand the origin of differences between training protocols. Our results show that multiple solutions to the same task exist but may rely on different dynamical mechanisms, and that training protocols can bias the choice of such solutions in an interpretable way.
rejected-papers
This paper analyses the dynamics of RNNs, specifically GRU and LSTM. The paper is mostly experimental w.r.t. the difficulty of training RNNs; this is partly because the theoretical foundations of the paper do not seem solid enough. Experimentation with CIFAR10 is not completely stable. The review scores leave the paper balanced at the borderline. The merit of the paper for the broader community is doubtful in its current form.
train
[ "Ske-B1trJE", "BkeNoZL-om", "ryxxYu_5A7", "SyejZFu5Am", "SkxRh_O9A7", "BylAb_uq07", "HyxmF2QohQ", "rylXT7xO2m" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the referee for a prompt response and constructive comments.\n\nRegarding the anomaly revealed with the GRU on CIFAR-10: further investigation of this case shows that increasing regularization leads to DeCu outperforming VoCu, as in all other scenarios.\n\nAs the referee requested, we will present more bi...
[ -1, 6, -1, -1, -1, -1, 5, 7 ]
[ -1, 3, -1, -1, -1, -1, 4, 4 ]
[ "BkeNoZL-om", "iclr_2019_H1z_Z2A5tX", "rylXT7xO2m", "BkeNoZL-om", "rylXT7xO2m", "HyxmF2QohQ", "iclr_2019_H1z_Z2A5tX", "iclr_2019_H1z_Z2A5tX" ]
iclr_2019_H1zxjsCqKQ
Gradient-based learning for F-measure and other performance metrics
Many important classification performance metrics, e.g. F-measure, are non-differentiable and non-decomposable, and are thus unfriendly to gradient descent algorithms. Consequently, despite their popularity as evaluation metrics, these metrics are rarely optimized as training objectives in the neural network community. In this paper, we propose an empirical utility maximization scheme with provable learning guarantees to address the non-differentiability of these metrics. We then derive a strongly consistent gradient estimator to handle non-decomposability. These innovations enable end-to-end optimization of these metrics with the same computational complexity as optimizing a decomposable and differentiable metric, e.g. cross-entropy loss.
rejected-papers
This manuscript proposes a gradient-based learning scheme for non-differentiable and non-decomposable metrics. The key idea is to optimize a soft predictor directly (instead of aiming for a deterministic predictor), which results in a differentiable loss for many of these metrics. Theoretical results are provided which describe the performance of this approach. The reviewers and ACs noted weaknesses in the original submission related to the clarity of the presentation and to novelty relative to already published work. There was also a concern about the usefulness of the main theoretical results due to asymptotic assumptions. The manuscript would be significantly strengthened if the reliance on infinite sample sizes were resolved, or if sufficient empirical evidence were provided suggesting that the asymptotic issues are not practically significant.
train
[ "SJelZfOtCX", "r1eujgeB37", "SklGS11FCX", "SJlQ9lsK6m", "ryewV_5KTQ", "rJgelvYYa7", "r1lueqC63X", "BJguooef3m", "HklmIaLChX" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "Yes, the loss is nonconvex w.r.t. \\theta, even in the case of accuracy. (I was thinking about convexity w.r.t. the classifier's posterior probabilities when writing the response)", "Update: I still feel that the paper should have either strong theory, strong experiments, or some of each to be accepted, but that...
[ -1, 3, -1, -1, -1, -1, 5, 5, -1 ]
[ -1, 4, -1, -1, -1, -1, 3, 5, -1 ]
[ "SklGS11FCX", "iclr_2019_H1zxjsCqKQ", "ryewV_5KTQ", "BJguooef3m", "r1eujgeB37", "r1lueqC63X", "iclr_2019_H1zxjsCqKQ", "iclr_2019_H1zxjsCqKQ", "iclr_2019_H1zxjsCqKQ" ]
iclr_2019_HJG0ojCcFm
Negotiating Team Formation Using Deep Reinforcement Learning
When autonomous agents interact in the same environment, they must often cooperate to achieve their goals. One way for agents to cooperate effectively is to form a team, make a binding agreement on a joint plan, and execute it. However, when agents are self-interested, the gains from team formation must be allocated appropriately to incentivize agreement. Various approaches for multi-agent negotiation have been proposed, but typically only work for particular negotiation protocols. More general methods usually require human input or domain-specific data, and so do not scale. To address this, we propose a framework for training agents to negotiate and form teams using deep reinforcement learning. Importantly, our method makes no assumptions about the specific negotiation protocol, and is instead completely experience driven. We evaluate our approach on both non-spatial and spatially extended team-formation negotiation environments, demonstrating that our agents beat hand-crafted bots and reach negotiation outcomes consistent with fair solutions predicted by cooperative game theory. Additionally, we investigate how the physical location of agents influences negotiation outcomes.
rejected-papers
This paper was reviewed by three experts. Initially, the reviews were mixed with several concerns raised. After the author response, there continue to be concerns about need for significantly more experiments. If this were a journal, it is clear that recommendation would be "major revision". Since that option is not available and the paper clearly needs another round of reviews, we must unfortunately reject. We encourage the authors to incorporate reviewer feedback and submit a stronger manuscript at a future venue.
train
[ "S1gEXHjmRX", "SylVbi5QA7", "B1xs4Xqm0X", "r1xtnTcuaX", "SyxHgL9d6Q", "BJePkI4PaQ", "rkg7yVVPpm", "rJgnVWg-TX", "H1llkY3J6m", "BkguDb8kpm", "HygI9hnqnm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks again for your feedback! \n\n\n- Indeed, the heart of the paper is the Shapley value comparison. You felt that experiment 4 is incomplete, and that we should add new experiments in the vein of experiment 4 to help understand how and why the correlation with Shapley values occurs. We have added two experimen...
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, -1, 2, 3 ]
[ "SyxHgL9d6Q", "BJePkI4PaQ", "r1xtnTcuaX", "H1llkY3J6m", "rkg7yVVPpm", "BkguDb8kpm", "rJgnVWg-TX", "iclr_2019_HJG0ojCcFm", "HygI9hnqnm", "iclr_2019_HJG0ojCcFm", "iclr_2019_HJG0ojCcFm" ]
iclr_2019_HJG1Uo09Fm
Learning to Reinforcement Learn by Imitation
Meta-reinforcement learning aims to learn fast reinforcement learning (RL) procedures that can be applied to new tasks or environments. While learning fast RL procedures holds promise for allowing agents to autonomously learn a diverse range of skills, existing methods for learning efficient RL are impractical for real-world settings, as they rely on slow reinforcement learning algorithms for meta-training, even when the learned procedures are fast. In this paper, we propose to learn a fast reinforcement learning procedure through supervised imitation of an expert, such that, after meta-learning, an agent can quickly learn new tasks through trial-and-error. Through our proposed method, we show that it is possible to learn fast RL using demonstrations, rather than relying on slow RL, where expert agents can be trained quickly by using privileged information or off-policy RL methods. Our experimental evaluation on a number of complex simulated robotic domains demonstrates that our method can effectively learn to learn from sparse rewards and is significantly more efficient than prior meta-reinforcement learning algorithms.
rejected-papers
This paper proposes a meta-learning algorithm for reinforcement learning that incorporates expert demonstrations. The objective is to improve sample efficiency, which is an important problem. The referees find the approach well-motivated and pertinent, but the theoretical and practical contributions of the paper too slim. A concern was also raised regarding the reproducibility of the results, with missing details about the implementation and comparisons with previous results. The authors did not respond to the reviews. The four referees are not convinced by this paper, with ratings ranging from strong reject to ok-but-not-good-enough.
train
[ "H1lhD7nMAQ", "Bkgc0v81R7", "SylOb8hgaX", "SJgxyIYqnX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a meta-learning algorithm for reinforcement learning that incorporates expert demonstrations. The goal is to reduce the sample complexity of meta-RL algorithms in the validation phase. The paper provides a good discussion of the background literature. Experimental results are provided on multi-...
[ 4, 3, 2, 5 ]
[ 3, 2, 5, 2 ]
[ "iclr_2019_HJG1Uo09Fm", "iclr_2019_HJG1Uo09Fm", "iclr_2019_HJG1Uo09Fm", "iclr_2019_HJG1Uo09Fm" ]
iclr_2019_HJG7m2AcF7
Context Mover's Distance & Barycenters: Optimal transport of contexts for building representations
We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underlying space containing the contexts. Our method captures uncertainty via modelling the entities as distributions and simultaneously provides interpretability with the optimal transport map, hence giving a novel perspective for building rich and powerful feature representations. As a guiding example, we formulate unsupervised representations for text, and demonstrate it on tasks such as sentence similarity and word entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can potentially be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools at the core of this framework are Wasserstein distances and Wasserstein barycenters.
rejected-papers
The paper proposes to build word representations based on a histogram over context word vectors, allowing distances between words to be measured in terms of optimal transport between these histograms. An empirical analysis shows that the proposed approach is competitive with others on semantic textual similarity and hypernym detection tasks. While the idea is definitely interesting, the paper would be strengthened by a more extensive empirical analysis.
train
[ "HylY1l3ZlV", "SJeRp4fX14", "S1gWv4zXJ4", "S1eC3VETAm", "HyxRxKIjCQ", "rylvwtHK3X", "SJlO8eO50Q", "B1xjU8XzRX", "HyeULBwq07", "rJlZ0BUGRX", "S1eNs17z0X", "H1xbR1mfRQ", "SJgglB7fCX", "H1gOKm7fC7", "BkeLx_Gz0m", "BJedqIv3hm", "SJgryzvchX" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers and area chair, \n\nAs promised earlier, we have released a python package for carrying out Hypernymy evaluations in an easy manner and with all the datasets organized in one place. The link is https://github.com/context-mover/HypEval\n\nWe also aim to release other parts of the code soon on the sam...
[ -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_HJG7m2AcF7", "SJgryzvchX", "BJedqIv3hm", "HyxRxKIjCQ", "SJlO8eO50Q", "iclr_2019_HJG7m2AcF7", "rJlZ0BUGRX", "iclr_2019_HJG7m2AcF7", "iclr_2019_HJG7m2AcF7", "H1xbR1mfRQ", "rylvwtHK3X", "rylvwtHK3X", "BJedqIv3hm", "BJedqIv3hm", "SJgryzvchX", "iclr_2019_HJG7m2AcF7", "iclr_2019...
iclr_2019_HJGtFoC5Fm
On the Margin Theory of Feedforward Neural Networks
Past works have shown that, somewhat surprisingly, over-parametrization can help generalization in neural networks. Towards explaining this phenomenon, we adopt a margin-based perspective. We establish: 1) for multi-layer feedforward ReLU networks, the global minimizer of a weakly-regularized cross-entropy loss has the maximum normalized margin among all networks, 2) as a result, increasing the over-parametrization improves the normalized margin and generalization error bounds for deep networks. In the case of two-layer networks, an infinite-width neural network enjoys the best generalization guarantees. The typical infinite feature methods are kernel methods; we compare the neural net margin with that of kernel methods and construct natural instances where kernel methods have much weaker generalization guarantees. We validate this gap between the two approaches empirically. Finally, this infinite-neuron viewpoint is also fruitful for analyzing optimization. We show that a perturbed gradient flow on infinite-size networks finds a global optimizer in polynomial time.
rejected-papers
This paper has received reviews from multiple experts who raise a litany of issues. These have been addressed quite convincingly by the authors, but I believe that ultimately this work needs to go through another round of reviewing, and this cannot be achieved in the context of ICLR's reviewing setup. I look forward to reading the final version of the paper in the near future.
train
[ "SJx2OUH03m", "HylsIRj4kV", "HJlZlCiNJN", "BkgEo6j4JE", "HyeP9fFVJV", "ryxFGkYE1N", "HyeH06Ncnm", "HylhWTn96X", "B1lqqnnqT7", "r1lETj2c6X", "BygSiUhc67", "ryx7bLh56Q", "BJgJ0Znqa7", "SyldwJaw6Q", "SJlojG-Z6X", "H1gRoMvRnX", "SJlLO7Xicm", "HygPR-m5qm" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "UPDATE: after revisions and discussion. There seems to be some interesting results presented in this paper which I think would be good to have discussed at the conference. This is conditional on further revisions of the work by the authors.\n\n\nThis paper studies margin theory for neural nets.\n\n1. First it is s...
[ 6, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 5, 5, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4, -1, -1, -1 ]
[ "iclr_2019_HJGtFoC5Fm", "HyeH06Ncnm", "ryxFGkYE1N", "HyeP9fFVJV", "BJgJ0Znqa7", "B1lqqnnqT7", "iclr_2019_HJGtFoC5Fm", "iclr_2019_HJGtFoC5Fm", "SyldwJaw6Q", "SJlojG-Z6X", "H1gRoMvRnX", "HyeH06Ncnm", "SJx2OUH03m", "iclr_2019_HJGtFoC5Fm", "iclr_2019_HJGtFoC5Fm", "iclr_2019_HJGtFoC5Fm", ...