paper_id            stringlengths (19-21)
paper_title         stringlengths (8-170)
paper_abstract      stringlengths (8-5.01k)
paper_acceptance    stringclasses (18 values)
meta_review         stringlengths (29-10k)
label               stringclasses (3 values)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
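For readers who want to work with the records below programmatically, the schema above maps naturally onto a simple record type. The sketch below is a minimal illustration only: `PaperRecord` is a hypothetical name, and the field values are abbreviated excerpts from the first row (FALCON), not the full strings.

```python
from dataclasses import dataclass

@dataclass
class PaperRecord:
    """Hypothetical record type mirroring the schema above."""
    paper_id: str             # stringlengths 19-21, e.g. "iclr_2020_BylXi3NKvS"
    paper_title: str          # stringlengths 8-170
    paper_abstract: str       # stringlengths 8-5.01k
    paper_acceptance: str     # stringclasses, 18 values
    meta_review: str          # stringlengths 29-10k
    label: str                # stringclasses, 3 values ("train" and "val" appear here)
    review_ids: list          # OpenReview note ids
    review_writers: list      # "author", "official_reviewer", "public"
    review_contents: list     # review/reply texts
    review_ratings: list      # -1 marks non-review entries such as author replies
    review_confidences: list  # -1 marks non-review entries
    review_reply_tos: list    # id of the note (or paper) each entry replies to

# Example abbreviated from the first row of the dataset:
record = PaperRecord(
    paper_id="iclr_2020_BylXi3NKvS",
    paper_title="FALCON: Fast and Lightweight Convolution for Compressing and Accelerating CNN",
    paper_abstract="How can we efficiently compress Convolutional Neural Networks (CNN) ...",
    paper_acceptance="reject",
    meta_review="The submission presents an approach to accelerating convolutional networks. ...",
    label="train",
    review_ids=["H1g06OtooS", "r1gYSFBpKB"],
    review_writers=["author", "official_reviewer"],
    review_contents=["We would like to express our sincere gratitude ...", "..."],
    review_ratings=[-1, 6],
    review_confidences=[-1, 1],
    review_reply_tos=["BkgHOuxVqS", "iclr_2020_BylXi3NKvS"],
)
```

Parallel lists are kept aligned by index, which is why ratings and confidences carry -1 placeholders for author replies.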
iclr_2020_BylXi3NKvS
FALCON: Fast and Lightweight Convolution for Compressing and Accelerating CNN
How can we efficiently compress Convolutional Neural Networks (CNN) while retaining their accuracy on classification tasks? A promising direction is based on depthwise separable convolution, which replaces a standard convolution with a depthwise convolution and a pointwise convolution. However, previous works based on depthwise separable convolution are limited since 1) they are mostly heuristic approaches without a precise understanding of their relations to standard convolution, and 2) their accuracies do not match that of the standard convolution. In this paper, we propose FALCON, an accurate and lightweight method for compressing CNN. FALCON is derived by interpreting existing convolution methods based on depthwise separable convolution using EHP, our proposed mathematical formulation to approximate the standard convolution kernel. This interpretation leads to a generalized version, rank-k FALCON, which further improves accuracy while sacrificing a bit of the compression and computation reduction rates. In addition, we propose FALCON-branch by fitting FALCON into the previous state-of-the-art convolution unit ShuffleUnitV2, which gives even better accuracy. Experiments show that FALCON and FALCON-branch outperform 1) existing methods based on depthwise separable convolution and 2) standard CNN models, achieving up to 8x compression and 8x computation reduction while ensuring similar accuracy. We also demonstrate that rank-k FALCON provides even better accuracy than standard convolution in many cases, while using a smaller number of parameters and floating-point operations.
reject
The submission presents an approach to accelerating convolutional networks. The framework is related to depthwise separable convolutions. The reviews are split. R3 expresses concerns about the experimental evaluation and results. The AC agrees with these concerns. The AC also notes that the submission is 10 pages long. Taking all factors into account, the AC recommends against accepting the paper.
train
[ "H1g06OtooS", "HygXMrYsiH", "H1eMjvosjr", "Skxt3YYojB", "SJxCBRojiB", "r1gYSFBpKB", "r1eucsM15S", "BkgHOuxVqS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to express our sincere gratitude for the high-quality reviews and comments. Attached below are our responses to your suggestions and concerns.\n\nQ. Why this method could perform better than other tensor based decomposition methods (some of them are having even smaller memory footprint as they decomp...
[ -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, 1, 5, 5 ]
[ "BkgHOuxVqS", "r1eucsM15S", "r1gYSFBpKB", "H1g06OtooS", "iclr_2020_BylXi3NKvS", "iclr_2020_BylXi3NKvS", "iclr_2020_BylXi3NKvS", "iclr_2020_BylXi3NKvS" ]
iclr_2020_ryg7jhEtPB
On importance-weighted autoencoders
The importance weighted autoencoder (IWAE) (Burda et al., 2016) is a popular variational-inference method which achieves a tighter evidence bound (and hence a lower bias) than standard variational autoencoders by optimising a multi-sample objective, i.e. an objective that is expressible as an integral over K>1 Monte Carlo samples. Unfortunately, IWAE crucially relies on the availability of reparametrisations and even if these exist, the multi-sample objective leads to inference-network gradients which break down as K is increased (Rainforth et al., 2018). This breakdown can only be circumvented by removing high-variance score-function terms, either by heuristically ignoring them (which yields the 'sticking-the-landing' IWAE (IWAE-STL) gradient from Roeder et al. (2017)) or through an identity from Tucker et al. (2019) (which yields the 'doubly-reparametrised' IWAE (IWAE-DREG) gradient). In this work, we argue that directly optimising the proposal distribution in importance sampling as in the reweighted wake-sleep (RWS) algorithm from Bornschein & Bengio (2015) is preferable to optimising IWAE-type multi-sample objectives. To formalise this argument, we introduce an adaptive-importance sampling framework termed adaptive importance sampling for learning (AISLE) which slightly generalises the RWS algorithm. We then show that AISLE admits IWAE-STL and IWAE-DREG (i.e. the IWAE-gradients which avoid breakdown) as special cases.
reject
The authors argue that directly optimizing the IS proposal distribution as in RWS is preferable to optimizing the IWAE multi-sample objective. They formalize this with an adaptive IS framework, AISLE, that generalizes RWS, IWAE-STL and IWAE-DREG. Generally reviewers found the paper to be well-written and the connections drawn in this paper interesting. However, all reviewers raised concerns about the lack of experiments (Reviewer 3 suggested several experiments that could be done to clarify remaining questions) and practical takeaways. The authors responded by explaining that "the main "practical" takeaway from our work is the following: If one is interested in the bias-reduction potential offered by IWAEs over plain VAEs then the adaptive importance-sampling framework appears to be a better starting point for designing new algorithms than the specific multi-sample objective used by IWAE. This is because the former retains all of the benefits of the latter without inheriting its drawbacks." I did not find this argument convincing as a primary advantage of variational approaches over WS is that the variational approach optimizes a unified objective. At least in principle, this is a serious drawback of the WS approaches. Experiments and/or a discussion of this is warranted. This paper is borderline, and unfortunately, due to the high number of quality submissions this year, I have to recommend rejection at this point.
val
[ "ryllMzxatr", "S1euSLkyqS", "Hye7QXK2ir", "SJgE17t3sH", "rJgSYMF3iH", "B1xwuJF3oH", "H1x4XOu3sB", "BygWq_BCFr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "UPDATE: bumping up my score after the revisions\n\n---\n\n\nNice connections but novelty and practical takeaways unclear\n\nSUMMARY OF THE PAPER:\n\nThis paper views recent IWAE-based [1] methods (IWAE-STL [2], IWAE-DREG [3], RWS [4, 5]) for training generative models p and inference networks q under a common fram...
[ 8, 3, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_ryg7jhEtPB", "iclr_2020_ryg7jhEtPB", "SJgE17t3sH", "rJgSYMF3iH", "S1euSLkyqS", "ryllMzxatr", "BygWq_BCFr", "iclr_2020_ryg7jhEtPB" ]
iclr_2020_BJeUs3VFPH
Domain Adaptation via Low-Rank Basis Approximation
Domain adaptation focuses on the reuse of supervised learning models in a new context. Prominent applications can be found in robotics, image processing or web mining. In these areas, learning scenarios change by nature, but often remain related and motivate the reuse of existing supervised models. While the majority of symmetric and asymmetric domain adaptation algorithms utilize all available source and target domain data, we show that efficient domain adaptation requires only a substantially smaller subset from both domains. This makes it more suitable for real-world scenarios where target domain data is rare. The presented approach finds a target subspace representation for source and target data to address domain differences by orthogonal basis transfer. By employing a low-rank approximation, the approach remains low in computational time. The presented idea is evaluated in typical domain adaptation tasks with standard benchmark data.
reject
Three reviewers have scored this paper as 1/1/3 and have not increased their ratings after the rebuttal and the paper revision. The main criticism revolves around the choice of datasets, missing comparisons with existing methods, complexity, and practical demonstration of speed. Other concerns touch upon a loose bound and weak motivation regarding the low-rank mechanism in connection to DA. On balance, the authors resolved some issues in the revised manuscript, but the reviewers remain unconvinced about plenty of other aspects; thus this paper cannot be accepted to ICLR 2020.
val
[ "SygYXty2jB", "SJxZbsC9oB", "rJlLQXevoH", "rkl5S7xPsH", "SJx8DGevir", "SJeQ8PccYr", "Bklsscm6Yr", "HkgkZ4H6tH" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "> Authors claim that the proposed method is the fastest domain adaptation algorithm in terms of computational complexity. It is necessary to demonstrates this statement experimentally, especially in large-scale datasets.\nWe proposed a domain adaptation algorithm. Hence, we have tested it on domain adaptation dat...
[ -1, -1, -1, -1, -1, 1, 3, 1 ]
[ -1, -1, -1, -1, -1, 5, 3, 5 ]
[ "SJxZbsC9oB", "rJlLQXevoH", "Bklsscm6Yr", "SJeQ8PccYr", "HkgkZ4H6tH", "iclr_2020_BJeUs3VFPH", "iclr_2020_BJeUs3VFPH", "iclr_2020_BJeUs3VFPH" ]
iclr_2020_BJevihVtwB
BOOSTING ENCODER-DECODER CNN FOR INVERSE PROBLEMS
Encoder-decoder convolutional neural networks (CNN) have been extensively used for various inverse problems. However, their prediction error for unseen test data is difficult to estimate a priori, since the neural networks are trained using only selected data and their architectures are largely considered black boxes. This poses a fundamental challenge in improving the performance of neural networks. Recently, it was shown that Stein’s unbiased risk estimator (SURE) can be used as an unbiased estimator of the prediction error for denoising problems. However, the computation of the divergence term in SURE is difficult to implement in a neural network framework, and the condition to avoid trivial identity mapping is not well defined. In this paper, inspired by the finding that an encoder-decoder CNN can be expressed as a piecewise linear representation, we provide a closed-form expression of the unbiased estimator for the prediction error. The closed-form representation leads to a novel boosting scheme that prevents a neural network from converging to an identity mapping, thereby enhancing its performance. Experimental results show that the proposed algorithm provides consistent improvement in various inverse problems.
reject
This paper introduces a closed-form expression for Stein’s unbiased estimator of the prediction error, and a boosting approach based on it, with empirical evaluation. While the paper is interesting, all reviewers seem to agree that more work is required before it can be published at ICLR.
val
[ "SkgVua6MoB", "rklQHI6zsB", "B1lIQuQIir", "ByxXMf0fsB", "Bkg9o73nKH", "HJejsuDCtH", "SyegTqoCFr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "General Comments: \n==> Thanks for the constructive comments. In the revised article and in this letter we have done our best to clarify the contents of the article and to avoid confusion by the reviewers.\n\n1.(1) I think Proposition 1 has minor errors, there is no need to apply Jensen's inequality since there's ...
[ -1, -1, -1, -1, 6, 1, 3 ]
[ -1, -1, -1, -1, 1, 4, 4 ]
[ "HJejsuDCtH", "SyegTqoCFr", "iclr_2020_BJevihVtwB", "Bkg9o73nKH", "iclr_2020_BJevihVtwB", "iclr_2020_BJevihVtwB", "iclr_2020_BJevihVtwB" ]
iclr_2020_S1ewjhEFwr
Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning
In this paper, we propose a deep reinforcement learning (DRL) based framework to efficiently perform runtime channel pruning on convolutional neural networks (CNNs). Our DRL-based framework aims to learn a pruning strategy that determines how many and which channels to prune in each convolutional layer, depending on each specific input instance at runtime. The learned policy optimizes the performance of the network by restricting the computational resource of each layer under an overall computation budget. Furthermore, unlike other runtime pruning methods, which require storing all channel parameters for inference, our framework reduces parameter storage consumption at deployment by introducing a static pruning component. Comparative experimental results with existing runtime and static pruning methods on state-of-the-art CNNs demonstrate that our proposed framework provides a tradeoff between dynamic flexibility and storage efficiency in runtime channel pruning.
reject
Main content: Proposes a unified deep RL framework to manage the trade-off between static pruning, which decreases storage requirements, and network flexibility via dynamic pruning, which decreases runtime costs. Summary of discussion: Reviewer 1 likes the proposed DRL approach, but writing and algorithmic details are lacking. Reviewer 2 notes that pruning methods are certainly important, but details are missing with respect to the algorithm in the paper. Reviewer 3 finds that it presents a novel RL algorithm, showing good results on CIFAR10 and ILSVRC2012, but algorithmic details and parameters are not clearly explained. Recommendation: All reviewers liked the work but the writing/algorithmic details are lacking. I recommend Reject.
train
[ "r1gQu9whjS", "Sye_sYP3jr", "r1ePHtPnsr", "r1eX2wP2jB", "H1lVKaN15B", "rJg3jfxlcH", "S1eQmrWAqr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the constructive comment. We would like to address your concerns as follow.\n\nQ1:\t“1) Majority of the description of the models and architecture is written in text and is very difficult to parse, while this could've been avoided by usage of mathematical notations for operations. This is specifically e...
[ -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, 1, 3, 4 ]
[ "H1lVKaN15B", "rJg3jfxlcH", "rJg3jfxlcH", "S1eQmrWAqr", "iclr_2020_S1ewjhEFwr", "iclr_2020_S1ewjhEFwr", "iclr_2020_S1ewjhEFwr" ]
iclr_2020_rkxDon4Yvr
Discriminator Based Corpus Generation for General Code Synthesis
Current work on neural code synthesis consists of increasingly sophisticated architectures being trained on highly simplified domain-specific languages, using uniform sampling across program space of those languages for training. By comparison, program space for a C-like language is vast, and extremely sparsely populated in terms of `useful' functionalities; this requires a far more intelligent approach to corpus generation for effective training. We use a genetic programming approach using an iteratively retrained discriminator to produce a population suitable as labelled training data for a neural code synthesis architecture. We demonstrate that use of a discriminator-based training corpus generator, trained using only unlabelled problem specifications in classic Programming-by-Example format, greatly improves network performance compared to current uniform sampling techniques.
reject
This paper proposes a method to automatically generate corpora for training program synthesis systems. The reviewers did seem to appreciate the core idea of the paper, but pointed out a number of problems with experimental design that preclude the publication of the paper at this time. The reviewers gave a number of good comments, so I hope that the authors can improve the paper for publication at a different venue in the future.
train
[ "SyxQGiN_jH", "Hkgax9PvKS", "B1lGk2hsFB", "HyeJX6jkqB", "BJxbU4QAuH", "ryg55v4i_B" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "Dear reviewers,\n\nWe recognise and acknowledge your reviews, and at this point are not asking for you to change your scores. However, as the reviews are open to general viewing, we are submitting a response to answer the main points raised.\n\nA core point of ambiguity on our part is the intended focus of the pap...
[ -1, 1, 1, 1, -1, -1 ]
[ -1, 5, 5, 5, -1, -1 ]
[ "iclr_2020_rkxDon4Yvr", "iclr_2020_rkxDon4Yvr", "iclr_2020_rkxDon4Yvr", "iclr_2020_rkxDon4Yvr", "ryg55v4i_B", "iclr_2020_rkxDon4Yvr" ]
iclr_2020_BkxFi2VYvS
Semi-supervised Semantic Segmentation using Auxiliary Network
Recently, convolutional neural networks (CNNs) have shown great success on the semantic segmentation task. However, for practical applications such as autonomous driving, the popular supervised learning method faces two challenges: the demand for low computational complexity and the need for a huge training dataset accompanied by ground truth. Our focus in this paper is semi-supervised learning: we wish to use both labeled and unlabeled data in the training process. A highly efficient semantic segmentation network is our platform, which achieves high segmentation accuracy at low model size and high inference speed. We propose a semi-supervised learning approach that improves segmentation accuracy by including extra images without labels. While most existing semi-supervised learning methods are designed based on adversarial learning techniques, we present a new and different approach, which trains an auxiliary CNN network that validates labels (ground truth) on the unlabeled images. In the supervised training phase, both the segmentation network and the auxiliary network are trained using labeled images. Then, in the unsupervised training phase, the unlabeled images are segmented, a subset of image pixels is picked by the auxiliary network, and these pixels are used as ground truth to train the segmentation network. Thus, at the end, all dataset images can be used for retraining the segmentation network to improve the segmentation results. We use the Cityscapes and CamVid datasets to verify the effectiveness of our semi-supervised scheme, and our experimental results show that it can improve the mean IoU by about 1.2% to 2.9% on the challenging Cityscapes dataset.
reject
The paper presents a semi-supervised learning approach to semantic segmentation (pixel-level classification). The approach extends Hung et al. (2018), using a confidence map generated by an auxiliary network, aimed at improving the identification of small objects. The reviews state that the paper's novelty is limited compared to the state of the art; the reviewers made several suggestions to improve the processing pipeline (including all images, including the confidence weights). The reviews also state that the paper needs to be carefully polished. The area chair hopes that the suggestions about the content and writing of the paper will help prepare an improved version.
val
[ "Skx3g4UIKS", "H1ejMTzTtB", "BJlYwvx1qH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This submission introduces a semi-supervised method using auxiliary network for improved semantic segmentation. The authors modify a previous work as their main network architecture and use another small network as auxiliary branch. The framework can work in a semi-supervised setting since they can use confidence ...
[ 3, 3, 3 ]
[ 4, 5, 5 ]
[ "iclr_2020_BkxFi2VYvS", "iclr_2020_BkxFi2VYvS", "iclr_2020_BkxFi2VYvS" ]
iclr_2020_rJlYsn4YwS
Gradient-free Neural Network Training by Multi-convex Alternating Optimization
In recent years, stochastic gradient descent (SGD) and its variants have been the dominant optimization methods for training deep neural networks. However, SGD suffers from limitations such as the lack of theoretical guarantees, vanishing gradients, excessive sensitivity to input, and difficulties solving highly non-smooth constraints and functions. To overcome these drawbacks, alternating minimization-based methods for deep neural network optimization have attracted fast-increasing attention recently. As an emerging and open domain, however, several new challenges need to be addressed, including 1) Convergence depending on the choice of hyperparameters, and 2) Lack of unified theoretical frameworks with general conditions. We, therefore, propose a novel Deep Learning Alternating Minimization (DLAM) algorithm to deal with these two challenges. Our innovative inequality-constrained formulation infinitely approximates the original problem with non-convex equality constraints, enabling our proof of global convergence of the DLAM algorithm under mild, practical conditions, regardless of the choice of hyperparameters and wide range of various activation functions. Experiments on benchmark datasets demonstrate the effectiveness of DLAM.
reject
The paper proposes a new learning algorithm for deep neural networks that first reformulates training as a multi-convex problem and then solves it with alternating updates. The reviewers are concerned about the closeness to previous work, comparisons with related work such as dlADMM, and the difficulty of the dataset. While the authors proposed the possibility of addressing some of these issues, the reviewers feel that without actually addressing them, the paper is not yet ready for publication.
train
[ "HyxSFgn2YS", "S1g6FG9jiB", "SJgiOJ9isS", "HyxgOkVPcS" ]
[ "official_reviewer", "author", "author", "official_reviewer" ]
[ "This paper proposes Deep Learning Alternating Minimization (DLAM) algorithm. First, deep learning optimization problems are formulated as multi-convex Problem 2, by introducing additional variables, constraints and relaxations. Second, alternating update is then used to solve Problem 2, the analysis shows that wei...
[ 1, -1, -1, 6 ]
[ 3, -1, -1, 3 ]
[ "iclr_2020_rJlYsn4YwS", "HyxSFgn2YS", "HyxgOkVPcS", "iclr_2020_rJlYsn4YwS" ]
iclr_2020_BJxqohNFPB
S-Flow GAN
Our work offers a new method for domain translation from semantic label maps and Computer Graphics (CG) simulation edge-map images to photo-realistic images. We train a Generative Adversarial Network (GAN) in a conditional way to generate a photo-realistic version of a given CG scene. Existing GAN architectures still lack the photo-realism capabilities needed to train DNNs for computer vision tasks; we address this issue by embedding edge maps and training in an adversarial mode. We also offer an extension to our model that uses our GAN architecture to create visually appealing and temporally coherent videos.
reject
The submission proposes a new GAN-based method for translating from semantic maps of (synthetic) images/videos (from computer graphics) to photo-realistic images/videos with the aid of edge maps. The main innovation is the inclusion of edge maps in the generator, where the edge maps are initially computed using the spatial Laplacian operator and later output from their DNED network. According to the authors, the edge map allows them to generate images with fine details and to generate output images at higher resolutions. The authors use their method to generate both single images and videos. The submission received relatively low scores (2 rejects and 1 weak reject). This was unchanged after the rebuttal (the authors did not submit a revised version of their paper). The reviewers voiced concerns about the following: 1. Limited novelty. All of the reviewers indicated that they felt the novelty of the proposed approach was not high, as the work seemed to make only small modifications to prior work. In the author response, the authors provided some details on where they felt their innovation to be. The paper can be improved by building on those and having experiments/examples to probe those claims in more detail. 2. Application to other datasets. The proposed method is demonstrated only on two datasets of driving scenarios (Cityscapes and Synthia). It is unclear how the method will generalize to other types of inputs. Experiments on other datasets would demonstrate whether the proposed approach works well for other types of images. 3. The overall quality of the writing. The overall quality of the writing is poor and hard to follow in places. The paper should also include more discussion of domain adaptation in the related work section. It is possible that improved writing, which situates the work and explains its novel aspects better, would partially alleviate the concern about limited novelty.
The paper also needs an editing pass as there are many grammar/spelling/capitalization issues. Page 2: "We make Three" --> "We make three" Page 3: "as can bee seen in fig 3.2" --> "as can be seen in ..." (it's unclear which figure "fig 3.2" refers to, as figures are labeled Figure 1, Figure 2, etc) Page 4: Equation (3), symbol e is not explained (it is presumably the edge map) Page 7: "bellow" --> "below" Overall, there are interesting elements in this paper and the reviewers noted that the generated results look good. However, the paper will need to be improved considerably. The authors are encouraged to improve their work and submit to an appropriate venue.
train
[ "S1lOOAZTFB", "Hkx9woWCFr", "BkxuSoZu9S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a method to improve the image-to-image translation. By utilizing the CG-based Synthia and embedded edge maps, it has shown some effects on the Cityscapes dataset.\n\nThis paper is not novel. Generally, it just utilized additional sources to help the generation. It is difficult to generalize to ...
[ 1, 3, 1 ]
[ 4, 5, 4 ]
[ "iclr_2020_BJxqohNFPB", "iclr_2020_BJxqohNFPB", "iclr_2020_BJxqohNFPB" ]
iclr_2020_Syl5o2EFPB
Learning Compact Reward for Image Captioning
Adversarial learning has shown its advances in generating natural and diverse descriptions in image captioning. However, the learned reward of existing adversarial methods is vague and ill-defined due to the reward ambiguity problem. In this paper, we propose a refined Adversarial Inverse Reinforcement Learning (rAIRL) method to handle the reward ambiguity problem by disentangling reward for each word in a sentence, as well as achieve stable adversarial training by refining the loss function to shift the stationary point towards Nash equilibrium. In addition, we introduce a conditional term in the loss function to mitigate mode collapse and to increase the diversity of the generated descriptions. Our experiments on MS COCO show that our method can learn compact reward for image captioning.
reject
The paper proposed a refined AIRL method to deal with the reward ambiguity problem in image captioning. The main idea is to refine the loss function at the word level instead of the sentence level, and to introduce a conditional term in the loss function to mitigate the mode collapse problem. The results show the proposed method improves performance and achieves state-of-the-art results. However, the reviewers raised concerns that the motivation of the work was not well explained and that some imprecise parts remain in the paper. The concept of the "reward ambiguity problem" is not properly addressed in the opinion of Reviewer 2. I would like to see these concerns well addressed before the paper can be accepted.
train
[ "rkgeqEBhoS", "SyexNH0DjS", "BylGWNpjoB", "S1eQG8RvjH", "rkxRCTpDsS", "B1xsvvvVtB", "S1eMFynatr", "BJlRe9-RFS" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We're grateful that you review our revision again and provide some further suggestions. We have updated the revision again according to your new comments. We have made the following updates:\n\n1) We provide examples of the re-written captions in Appendix B.\n2) We thought ''diversity'' referred to the diversity o...
[ -1, -1, -1, -1, -1, 6, 1, 6 ]
[ -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "BylGWNpjoB", "S1eMFynatr", "S1eQG8RvjH", "B1xsvvvVtB", "BJlRe9-RFS", "iclr_2020_Syl5o2EFPB", "iclr_2020_Syl5o2EFPB", "iclr_2020_Syl5o2EFPB" ]
iclr_2020_Skg9jnVFvH
Progressive Upsampling Audio Synthesis via Effective Adversarial Training
This paper proposes a novel generative model called PUGAN, which progressively synthesizes high-quality audio in a raw waveform. PUGAN leverages the recently proposed idea of progressive generation of higher-resolution images by stacking multiple encoder-decoder architectures. To effectively apply it to raw audio generation, we propose two novel modules: (1) a neural upsampling layer and (2) a sinc convolutional layer. Compared to the existing state-of-the-art model WaveGAN, which uses a single decoder architecture, our model generates audio signals and converts them to a higher resolution in a progressive manner, while using a significantly smaller number of parameters, e.g., 20x fewer for 44.1kHz output. Our experiments show that the audio signals can be generated in real time with quality comparable to that of WaveGAN with respect to inception scores and human evaluation.
reject
Inspired by WaveGAN, this paper proposes PUGAN to synthesize high-quality audio in a raw waveform. The paper is well motivated, but all the reviewers find that it lacks clarity and details, and that there are some problems in the experiments.
train
[ "ryx4C1j3jB", "S1ehokj2sH", "SkeMY1s2jB", "rklzwJshjr", "BygVwssiYS", "H1lWhT9AtS", "rygWVYtxcS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "*** Note that, its audio fidelity is far away from the state-of-the-art results and it was only tested on simple dataset (sounds of ten-digit commands) ***\n\nAs the reviewer mentioned, autoregressive models such as WaveNet performed well in the text-to-speech (TTS) applications. However, a comparative analysis of...
[ -1, -1, -1, -1, 1, 6, 3 ]
[ -1, -1, -1, -1, 5, 5, 4 ]
[ "BygVwssiYS", "H1lWhT9AtS", "rygWVYtxcS", "rygWVYtxcS", "iclr_2020_Skg9jnVFvH", "iclr_2020_Skg9jnVFvH", "iclr_2020_Skg9jnVFvH" ]
iclr_2020_Sye2s2VtDr
Automatically Learning Feature Crossing from Model Interpretation for Tabular Data
Automatic feature generation is a major topic of automated machine learning. Among various feature generation approaches, feature crossing, which takes the cross-product of sparse features, is a promising way to effectively capture the interactions among categorical features in tabular data. Previous works on feature crossing search in the set of all possible cross feature fields, which is inefficient when the number of original feature fields is large. Meanwhile, some deep learning-based methods combine deep neural networks and various interaction components. However, due to the presence of Deep Neural Networks (DNN), only a few cross features can be explicitly generated by the interaction components. Recently, piece-wise interpretation of DNN has been widely studied, and the piece-wise interpretations are usually inconsistent across different samples. Inspired by this, we give a definition of interpretation inconsistency in DNN and propose a novel method called CrossGO, which selects useful cross features according to the interpretation inconsistency. The whole process of learning feature crossing can be done by simply training a DNN model and a logistic regression (LR) model. CrossGO generates a compact candidate set of cross feature fields and promotes the efficiency of searching. Extensive experiments have been conducted on several real-world datasets. Cross features generated by CrossGO can empower a simple LR model to achieve approximate or even better performance compared with complex DNN models.
reject
The authors propose a simple but effective method for feature crossing using interpretation inconsistency (as defined by the authors). I think this is good work, and the authors as well as the reviewers participated well in the discussions. However, there is still disagreement about the positioning of the paper. In particular, all the reviewers felt that additional baselines should be tried. While the authors have strongly rebutted the necessity of these baselines, the reviewers are not convinced. Given the strong reservations of all three reviewers, at this point I cannot recommend acceptance of this paper. I strongly suggest that in subsequent submissions the authors position their work better and perhaps compare with some of the related works recommended by the reviewers.
train
[ "H1eu0MZ2tS", "BkxGEFUsir", "SJg3bP8jir", "BJg9a68tjB", "Byg60oZKsH", "HJevRwU_sB", "HklzxQvuoS", "Hkl-QfvOiS", "HJe48C8OjS", "SylUA2RIoS", "rylMxaALsr", "ByltxKWzYr", "ByejreosKS", "H1xvyQfwFH", "HJlvhVq9FH", "r1l85X2_tS", "HJgTxp58tH", "Bkx_BOE4Fr", "BkePA3fEtr", "B1gM3VC9ur"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "author", "public", "author", "public" ]
[ "In the paper, the authors proposed CrossGO, an algorithm for finding crossing features useful for prediction.\nIn CrossGO, one trains a neural network that captures feature crossing implicitly.\nThen, possible crossing features are estimated using the gradient-based saliency.\nThe idea here is that, if a feature h...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_Sye2s2VtDr", "rylMxaALsr", "SylUA2RIoS", "Byg60oZKsH", "HJevRwU_sB", "H1eu0MZ2tS", "ByltxKWzYr", "ByltxKWzYr", "ByltxKWzYr", "ByejreosKS", "ByejreosKS", "iclr_2020_Sye2s2VtDr", "iclr_2020_Sye2s2VtDr", "iclr_2020_Sye2s2VtDr", "r1l85X2_tS", "H1xvyQfwFH", "BkePA3fEtr", "icl...
iclr_2020_rkl2s34twS
Wildly Unsupervised Domain Adaptation and Its Powerful and Efficient Solution
In unsupervised domain adaptation (UDA), classifiers for the target domain (TD) are trained with clean labeled data from the source domain (SD) and unlabeled data from TD. However, in the wild, it is hard to acquire a large amount of perfectly clean labeled data in SD given a limited budget. Hence, we consider a new, more realistic and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD---we name it wildly UDA (WUDA). We show that WUDA ruins all UDA methods if no care is taken of the label noise in SD, and to this end, we propose a Butterfly framework, a powerful and efficient solution to WUDA. Butterfly maintains four models (e.g., deep networks) simultaneously, where two take care of all adaptations (i.e., noisy-to-clean, labeled-to-unlabeled, and SD-to-TD-distributional) and then the other two can focus on classification in TD. As a consequence, Butterfly possesses all the conceptually necessary components for solving WUDA. Experiments demonstrate that under WUDA, Butterfly significantly outperforms existing baseline methods.
reject
The authors proposed a new problem setting called Wildly UDA (WUDA) where the labels in the source domain are noisy. They then proposed the "butterfly" method, combining co-teaching with pseudo labeling, and evaluated the method on a range of WUDA problem setups. In general, there is a concern that Butterfly, as a combination of co-teaching and pseudo labeling, is weak on the novelty side. In this case the value of the method can be assessed by strong empirical results. However, as pointed out by Reviewer 3, a common setup (SVHN<->MNIST) that appears in many UDA papers was missing in the original draft. The authors added the result for SVHN<->MNIST in response to Reviewer 3; however, they only considered the UDA setting, not WUDA, hence the value of that experiment was limited. In addition, there are other UDA methods that achieve significantly better performance on SVHN<->MNIST that should be considered among the baselines. For example, DIRT-T (Shu et al., 2018) has a second phase where the decision boundary on the target domain is adjusted, and that could provide some robustness against a decision boundary affected by noise. Shu et al. (2018), A DIRT-T Approach to Unsupervised Domain Adaptation, ICLR 2018, https://arxiv.org/abs/1802.08735. I suggest that the authors consider performing the full experiment with WUDA using SVHN<->MNIST, and also consider the use of stronger UDA methods among the baselines.
train
[ "H1e6wnNniB", "r1gWdfkBsr", "BkeaKtESiB", "rkxN3FEBsB", "rJxilcNriB", "SJxXdByrsH", "S1xdXfJHir", "BJlGrhaEiH", "HJgdZVvHjB", "HJlcZBJLYH", "rJeaTeMitr", "HylUhej6KH" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "According to valuable comments from three reviewers, we have revised our paper. In this revision, we demonstrate more details related to our method, which should help reviewers better understand our proposal. We also revise our previous responses according to the revision. Please have a look.", "The major concer...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "iclr_2020_rkl2s34twS", "HylUhej6KH", "HJlcZBJLYH", "HJlcZBJLYH", "HJlcZBJLYH", "rJeaTeMitr", "HylUhej6KH", "iclr_2020_rkl2s34twS", "iclr_2020_rkl2s34twS", "iclr_2020_rkl2s34twS", "iclr_2020_rkl2s34twS", "iclr_2020_rkl2s34twS" ]
iclr_2020_Hyepjh4FwB
ProtoAttend: Attention-Based Prototypical Learning
We propose a novel inherently interpretable machine learning method that bases decisions on few relevant examples that we call prototypes. Our method, ProtoAttend, can be integrated into a wide range of neural network architectures including pre-trained models. It utilizes an attention mechanism that relates the encoded representations to samples in order to determine prototypes. The resulting model outperforms state of the art in three high impact problems without sacrificing accuracy of the original model: (1) it enables high-quality interpretability that outputs samples most relevant to the decision-making (i.e. a sample-based interpretability method); (2) it achieves state of the art confidence estimation by quantifying the mismatch across prototype labels; and (3) it obtains state of the art in distribution mismatch detection. All this can be achieved with minimal additional test time and a practically viable training time computational cost.
reject
This paper proposes an interpretable machine learning method, ProtoAttend, that bases decisions on few relevant "prototypes." The proposed method uses an attention mechanism (possibly sparse, via sparsemax) that relates the encoded representations to samples in order to determine prototypes. The resulting model enables similarity-based interpretability and confidence estimation by quantifying the mismatch across prototype labels, and can be used for distribution mismatch detection. While the proposed model is interesting, the reviewers raised several concerns regarding the choice of prototypes and the evaluation of human interpretation. The paper would benefit from more experiments, besides the provided user studies, to check whether the provided prototypes can help human users correctly guess the model prediction. I encourage the authors to address these suggestions in a future resubmission.
train
[ "BJewxlf5sH", "HkxNQgf9sH", "Ske2v_9ior", "BkxZUxPqjS", "r1evmU2pFr", "Byx38twAYH", "HJlFcFhb9H" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your valuable comments and finding our method efficient and effective!\n\nI have the following comments:\nQ1: The authors did a really good job in empirical studies, verifying the superiority of ProtoAttend.\n\nA1: We really appreciate that you acknowledge our contributions that are demonstrated with st...
[ -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, 3, 1, 3 ]
[ "r1evmU2pFr", "BJewxlf5sH", "HJlFcFhb9H", "Byx38twAYH", "iclr_2020_Hyepjh4FwB", "iclr_2020_Hyepjh4FwB", "iclr_2020_Hyepjh4FwB" ]
iclr_2020_HJxJ2h4tPr
HighRes-net: Multi-Frame Super-Resolution by Recursive Fusion
Generative deep learning has sparked a new wave of Super-Resolution (SR) algorithms that enhance single images with impressive aesthetic results, albeit with imaginary details. Multi-frame Super-Resolution (MFSR) offers a more grounded approach to the ill-posed problem, by conditioning on multiple low-resolution views. This is important for satellite monitoring of human impact on the planet -- from deforestation, to human rights violations -- that depend on reliable imagery. To this end, we present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion: (i) co-registration, (ii) fusion, (iii) up-sampling, and (iv) registration-at-the-loss. Co-registration of low-res views is learned implicitly through a reference-frame channel, with no explicit registration mechanism. We learn a global fusion operator that is applied recursively on an arbitrary number of low-res pairs. We introduce a registered loss, by learning to align the SR output to a ground-truth through ShiftNet. We show that by learning deep representations of multiple views, we can super-resolve low-resolution signals and enhance Earth observation data at scale. Our approach recently topped the European Space Agency's MFSR competition on real-world satellite imagery.
reject
This paper proposes a multi-frame super-resolution method including recursive fusion for co-registration and a registration loss to solve the problem where the super-resolution results and the high-resolution labels are not pixel-wise aligned. While reviewer #1 is positive about this paper, reviewers #2 and #3 rated weak reject and reject respectively. Both reviewer #2 and #3 have extensive experience in the topic of image super-resolution. The major concerns raised by the reviewers include the lack of many references, the comparison of recursive fusion with related work, limited test databases, using a single translational motion for the SR images, and limited novelty of the network modules. The authors provided a detailed response to the concerns; however, it did not change the overall rating of the reviewers. While the ACs agree that this work has merits, given the various concerns raised by the reviewers, this paper cannot be accepted in its current state.
train
[ "B1xzJwcdiS", "ryxbd5DuiH", "r1gDAM_KjB", "BylRdPcOjB", "SylUYtXjiH", "BkgWk9QjiH", "B1e8Eomior", "HyxD6omsiB", "SJxgsDXooS", "Skgjk07soH", "ryeA92Xjir", "H1epQKQjjr", "rJlJUoH7iS", "HJlTwr8_ir", "B1l6sKmUoB", "BJgRT5MMsr", "S1xi4K-Gor", "Skl8XqOndH", "rygMx9RTYS", "SJx-WbR0KB"...
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. > “This paper lacks many references. Recently, many works focus on MFSR containing video SR and stereo image SR via deep learning.”\n\nThank you for pointing out these references. We've included them in our revision.\nWe want to stress that our setting is different from video SR in several ways:\n\n- We learn t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "rygMx9RTYS", "SJx-WbR0KB", "SJx-WbR0KB", "rygMx9RTYS", "Skl8XqOndH", "Skl8XqOndH", "Skl8XqOndH", "Skl8XqOndH", "Skl8XqOndH", "Skl8XqOndH", "Skl8XqOndH", "Skl8XqOndH", "SJx-WbR0KB", "SJx-WbR0KB", "BJgRT5MMsr", "S1xi4K-Gor", "SJx-WbR0KB", "iclr_2020_HJxJ2h4tPr", "iclr_2020_HJxJ2h4...
iclr_2020_HyxehhNtvS
Why Learning of Large-Scale Neural Networks Behaves Like Convex Optimization
In this paper, we present some theoretical work to explain why simple gradient descent methods are so successful in solving non-convex optimization problems in learning large-scale neural networks (NN). After introducing a mathematical tool called the canonical space, we have proved that the objective functions in learning NNs are convex in the canonical model space. We further elucidate that the gradients between the original NN model space and the canonical space are related by a pointwise linear transformation, which is represented by the so-called disparity matrix. Furthermore, we have proved that gradient descent methods surely converge to a global minimum of zero loss provided that the disparity matrices maintain full rank. If this full-rank condition holds, the learning of NNs behaves in the same way as normal convex optimization. Finally, we have shown that the chance of having singular disparity matrices is extremely slim in large NNs. In particular, when over-parameterized NNs are randomly initialized, gradient descent algorithms converge to a global minimum of zero loss in probability.
reject
This paper studies the problem of optimization for neural networks, by comparing the optimization problem in parameter space with the corresponding problem in function space. It argues that overparametrised models lead to a convex problem formulation, implying global optimality. All reviewers agreed that this paper lacks mathematical rigor and novelty relative to current work on overparametrised neural networks. Its arguments need to be substantially reworked before it can be considered for publication, and as a consequence the AC recommends rejection.
train
[ "SJxjTucgcr", "rken2bFEcS", "ryl74Xw_cr", "SJgepihG9H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper claims to analyze the global convergence of large-scale neural networks (NNs). Their analysis relies on the idea of “canonical space”, which upon further reading, is nothing more than multi-dimensional Fourier analysis.\n\nWhile the analysis of NNs is an extremely important problem, it is not clear what...
[ 1, 1, 1, 1 ]
[ 3, 3, 4, 5 ]
[ "iclr_2020_HyxehhNtvS", "iclr_2020_HyxehhNtvS", "iclr_2020_HyxehhNtvS", "iclr_2020_HyxehhNtvS" ]
iclr_2020_SkxW23NtPH
GDP: Generalized Device Placement for Dataflow Graphs
Runtime and scalability of large neural networks can be significantly affected by the placement of operations in their dataflow graphs on suitable devices. With increasingly complex neural network architectures and heterogeneous device characteristics, finding a reasonable placement is extremely challenging even for domain experts. Most existing automated device placement approaches are impractical due to the significant amount of compute required and their inability to generalize to new, previously held-out graphs. To address both limitations, we propose an efficient end-to-end method based on a scalable sequential attention mechanism over a graph neural network that is transferable to new graphs. On a diverse set of representative deep learning models, including Inception-v3, AmoebaNet, Transformer-XL, and WaveNet, our method on average achieves 16% improvement over human experts and 9.2% improvement over the prior art with 15 times faster convergence. To further reduce the computation cost, we pre-train the policy network on a set of dataflow graphs and use a superposition network to fine-tune it on each individual graph, achieving state-of-the-art performance on large hold-out graphs with over 50k nodes, such as an 8-layer GNMT.
reject
This paper presents a new reinforcement learning based approach to device placement for operations in computational graphs and demonstrates improvements for large-scale standard models. The paper is borderline, with all reviewers appreciating the paper, even the reviewer with the lowest score. The reviewer with the lowest score bases their score on minor reservations regarding the lack of detail in explaining the experiments. Based on the average score, rejection is recommended. The reviewers' comments can help improve the paper, and it is definitely recommended to submit it to the next conference.
test
[ "Skx27H9AtS", "HJli4Hz9sr", "SJgyHp-cjr", "rkgTdbb9jr", "HJgsarg9oB", "HJeA5abqsB", "rJxsWfbqor", "rJllOjPMjS", "HJg8eLkaKB", "SyeWKze-qS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "In this paper the authors propose an end-to-end policy for graph placement and partitioning of computational graphs produced \"under-the-hood\" by platforms like Tensorflow. As the sizes of the neural networks increase, using distributed deep learning is becoming more and more necessary. Primitives like the one su...
[ 6, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 1, 5 ]
[ "iclr_2020_SkxW23NtPH", "iclr_2020_SkxW23NtPH", "HJg8eLkaKB", "Skx27H9AtS", "SyeWKze-qS", "SJgyHp-cjr", "rkgTdbb9jr", "iclr_2020_SkxW23NtPH", "iclr_2020_SkxW23NtPH", "iclr_2020_SkxW23NtPH" ]
iclr_2020_S1lXnhVKPr
Variance Reduced Local SGD with Lower Communication Complexity
To accelerate the training of machine learning models, distributed stochastic gradient descent (SGD) and its variants have been widely adopted, which apply multiple workers in parallel to speed up training. Among them, Local SGD has gained much attention due to its lower communication cost. Nevertheless, when the data distribution on workers is non-identical, Local SGD requires $O(T^{3/4} N^{3/4})$ communications to maintain its \emph{linear iteration speedup} property, where $T$ is the total number of iterations and $N$ is the number of workers. In this paper, we propose Variance Reduced Local SGD (VRL-SGD) to further reduce the communication complexity. Benefiting from eliminating the dependency on the gradient variance among workers, we theoretically prove that VRL-SGD achieves a \emph{linear iteration speedup} with a lower communication complexity $O(T^{1/2} N^{3/2})$ even if workers access non-identical datasets. We conduct experiments on three machine learning tasks, and the experimental results demonstrate that VRL-SGD performs impressively better than Local SGD when the data among workers are quite diverse.
reject
The paper presents a novel variance reduction algorithm for SGD. The presentation is clear, but the theory is not good enough. The reviewers worry about the convergence results, and the technical part is not sound.
train
[ "rkl57upLir", "SkgBb_6LiS", "Hkl1TwaIsH", "HyxV4BPPtB", "rJgPsOx6FS", "HJelZJgJ9B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer 3, \n\nThank you for your suggestion to improve our experiments. \n\nWe have added the comparison between EASGD and VRL-SGD in the latest version. And the experimental results show that EASGD is not as good as VRL-SGD in non-iid case, which is reasonable in our opinions with the following reasons. \n...
[ -1, -1, -1, 3, 1, 6 ]
[ -1, -1, -1, 4, 5, 4 ]
[ "HJelZJgJ9B", "HyxV4BPPtB", "rJgPsOx6FS", "iclr_2020_S1lXnhVKPr", "iclr_2020_S1lXnhVKPr", "iclr_2020_S1lXnhVKPr" ]
iclr_2020_BJe7h34YDS
Understanding and Stabilizing GANs' Training Dynamics with Control Theory
Generative adversarial networks~(GANs) have made significant progress on realistic image generation but often suffer from instability during the training process. Most previous analyses mainly focus on the equilibrium that GANs achieve, whereas a gap exists between such theoretical analyses and practical implementations, where it is the training dynamics that plays a vital role in the convergence and stability of GANs. In this paper, we directly model the dynamics of GANs and adopt control theory to understand and stabilize it. Specifically, we interpret the training process of various GANs as certain types of dynamics in a unified perspective of control theory, which enables us to model the stability and convergence easily. Borrowing from control theory, we adopt the widely-used negative feedback control to stabilize the training dynamics, which can be considered as an L2 regularization on the output of the discriminator. We empirically verify our method on both synthetic data and natural image datasets. The results demonstrate that our method can stabilize the training dynamics as well as converge better than baselines.
reject
This paper suggests stabilizing the training of GANs using ideas from control theory. The reviewers all noted that the approach was well-motivated and seemed convinced that the problem was a worthwhile one. However, there were universal concerns about the comparisons with baselines and the performance relative to previous works on stabilizing GAN training, and the authors were not able to properly address them.
train
[ "BJgowci9oS", "SkgsHqjcsH", "B1lEyci9sH", "B1eU3tjcoS", "r1xj8Fs9iS", "SJx_1-jojS", "r1gWc5jqor", "SJx4v04YKS", "HJel4kxAKH", "rJxoXLbCFB" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nQ3: How does negative feedback (NF) influence the training of stable dynamics and further evaluation: \nA3: As stated above in our response to Q1, we added the new results of applying negative feedback to SN-GAN, which is a state-of-the-art variant of GANs with empirically stable performance. Our results (See T...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "SkgsHqjcsH", "HJel4kxAKH", "B1eU3tjcoS", "rJxoXLbCFB", "iclr_2020_BJe7h34YDS", "iclr_2020_BJe7h34YDS", "SJx4v04YKS", "iclr_2020_BJe7h34YDS", "iclr_2020_BJe7h34YDS", "iclr_2020_BJe7h34YDS" ]
iclr_2020_r1gV3nVKPS
Beyond Classical Diffusion: Ballistic Graph Neural Network
This paper presents the ballistic graph neural network. The ballistic graph neural network tackles the weight distribution from a transportation perspective and has many properties that differ from the traditional graph neural network pipeline. The ballistic graph neural network does not require calculating any eigenvalues. The filters propagate quadratically faster ($\sigma^2 \sim T^2$) compared to the traditional graph neural network ($\sigma^2 \sim T$). We use a perturbed coin operator to perturb and optimize the diffusion rate. Our results show that by selecting the diffusion speed, the network can reach a similar accuracy with fewer parameters. We also show that the perturbed filters act as better representations compared to pure ballistic ones. We provide a new perspective on training graph neural networks: by adjusting the diffusion rate, the neural network's performance can be improved.
reject
This submission has been assessed by three reviewers who scored it 3/1/3, and they have remained unconvinced after the rebuttal. The main issues voiced are the difficult readability of the paper, cryptic at times due to a mix of physical and DL notations, and a lack of sufficient experimentation to support all claims. The reviewers acknowledge the authors' efforts to resolve the main issues but find these efforts insufficient. Thus, this paper cannot be accepted to ICLR2020.
train
[ "rkePgdaptB", "BklYKnt1jr", "Bkl4EiEtor", "BJlYXXr1iH", "SylSprN1iB", "HJxnbHqiFB", "HJxcTdsmcH" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper \"Beyond Classical Diffusion: Ballistic Graph Neural Network\" tackles the problem of graph vertices representation. While most existing works rely on classical random walks on the graph, the paper proposes to cope with the \"speed of diffusion\" problem by introducing ballistic walk.\n\nI noticed the co...
[ 3, -1, -1, -1, -1, 3, 1 ]
[ 4, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2020_r1gV3nVKPS", "rkePgdaptB", "HJxcTdsmcH", "HJxcTdsmcH", "HJxnbHqiFB", "iclr_2020_r1gV3nVKPS", "iclr_2020_r1gV3nVKPS" ]
iclr_2020_SJlNnhVYDr
Soft Token Matching for Interpretable Low-Resource Classification
We propose a model to tackle classification tasks in the presence of very little training data. To this aim, we introduce a novel matching mechanism to focus on elements of the input by using vectors that represent semantically meaningful concepts for the task at hand. By leveraging highlighted portions of the training data, a simple, yet effective, error boosting technique guides the learning process. In practice, it increases the error associated to relevant parts of the input by a given factor. Results on text classification tasks confirm the benefits of the proposed approach in both balanced and unbalanced cases, thus being of practical use when labeling new examples is expensive. In addition, the model is interpretable, as it allows for human inspection of the learned weights.
reject
The authors focus on low-resource text classifications tasks augmented with "rationales". They propose a new technique that improves performance over existing approaches and that allows human inspection of the learned weights. Although the reviewers did not find any major faults with the paper, they were in consensus that the paper should be rejected at this time. Generally, the reviewers' reservations were in terms of novelty and extent of technical contribution. Given the large number of submissions this year, I am recommending rejection for this paper.
train
[ "r1gbbP6TKB", "SJlsCX2CFr", "SylQfmSusH", "BkgDf1SdsB", "HkeBzbrOiB", "S1eEulHdiH", "rkl07eSdoB", "Skl6egS_jB", "rylTafBXqB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "After responses:\n\nI read the authors response and decided to stick to my original score mostly because:\n\n1 - I understand that interpretability is hard to define. I also agree with the authors response. However, this is still not reflected in the paper in any way. I expect a discussion on what is the relevant ...
[ 3, 3, -1, -1, -1, -1, -1, -1, 1 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_SJlNnhVYDr", "iclr_2020_SJlNnhVYDr", "HkeBzbrOiB", "rylTafBXqB", "r1gbbP6TKB", "SJlsCX2CFr", "Skl6egS_jB", "BkgDf1SdsB", "iclr_2020_SJlNnhVYDr" ]
iclr_2020_rJeB22VFvS
Towards More Realistic Neural Network Uncertainties
Statistical models are inherently uncertain. Quantifying or at least upper-bounding their uncertainties is vital for safety-critical systems. While standard neural networks do not report this information, several approaches exist to integrate uncertainty estimates into them. Assessing the quality of these uncertainty estimates is not straightforward, as no direct ground truth labels are available. Instead, implicit statistical assessments are required. For regression, we propose to evaluate uncertainty realism---a strict quality criterion---with a Mahalanobis distance-based statistical test. An empirical evaluation reveals the need for uncertainty measures that are appropriate to upper-bound heavy-tailed empirical errors. Alongside, we transfer the variational U-Net classification architecture to standard supervised image-to-image tasks. It provides two uncertainty mechanisms and significantly improves uncertainty realism compared to a plain encoder-decoder model.
reject
This paper proposes two contributions to improve uncertainty in deep learning. The first is a Mahalanobis distance based statistical test and the second a model architecture. Unfortunately, the reviewers found the message of the paper somewhat confusing and particularly didn't understand the connection between these two contributions. A major question from the reviewers is why the proposed statistical test is better than using a proper scoring rule such as negative log likelihood. Some empirical justification of this should be presented.
train
[ "Bkl1amlrFS", "B1l5_QS3FB", "HylmsHQCYB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper a new method for evaluating prediction uncertainties for regression and classification tasks.\n\nI argue this work should be rejected. For regression the method is based on Gaussian assumption, which has no reason to be correct (especially for BayesianNN), and for classification the method is not clear a...
[ 1, 3, 1 ]
[ 4, 3, 4 ]
[ "iclr_2020_rJeB22VFvS", "iclr_2020_rJeB22VFvS", "iclr_2020_rJeB22VFvS" ]
iclr_2020_rJlUhhVYvS
Understanding Isomorphism Bias in Graph Data Sets
In recent years there has been a rapid increase in classification methods on graph structured data. Both in graph kernels and graph neural networks, one of the implicit assumptions of successful state-of-the-art models was that incorporating graph isomorphism features into the architecture leads to better empirical performance. However, as we discover in this work, commonly used data sets for graph classification have repeating instances which cause the problem of isomorphism bias, i.e. artificially increasing the accuracy of the models by memorizing target information from the training set. This prevents fair competition of the algorithms and raises a question of the validity of the obtained results. We analyze 54 data sets, previously extensively used for graph-related tasks, on the existence of isomorphism bias, give a set of recommendations to machine learning practitioners to properly set up their models, and open source new data sets for the future experiments.
reject
Thanks to reviewers and authors for an interesting discussion. It seems the central question is whether learning to identify correct bijections should be part of graph classification problems, or whether this leads to bias and overfitting. Reviews are generally negative, putting this in the lower third of the submissions. The paper, however, inspired an interesting discussion, and I would encourage the authors to continue this line of work, addressing the question of bias and overfitting more directly, possibly going beyond dataset evaluation and, for example, thinking about how to evaluate whether training on non-isomorphic graphs leads to better off-training set generalization.
test
[ "rkxQNjz3iS", "rkgXKjLxoH", "H1xfHrUxjH", "HJex9umljH", "HJesTqYkjr", "SklrAvRctS", "SJeT-mmc5S", "BJxu9Rwn9S", "SJlFHnO6cr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers and Area Chairs, \n\nWe carefully reviewed the concerns and we strengthened our paper with the theoretical part (for convenience, in red color in an updated version of the paper). \n\nWe theoretically derive an upper bound for the generalization gap expressed in the terms of Rademacher complexity of...
[ -1, -1, -1, -1, -1, 6, 1, 1, 3 ]
[ -1, -1, -1, -1, -1, 4, 3, 5, 5 ]
[ "iclr_2020_rJlUhhVYvS", "SklrAvRctS", "SJeT-mmc5S", "BJxu9Rwn9S", "SJlFHnO6cr", "iclr_2020_rJlUhhVYvS", "iclr_2020_rJlUhhVYvS", "iclr_2020_rJlUhhVYvS", "iclr_2020_rJlUhhVYvS" ]
iclr_2020_B1gUn24tPr
Classification Attention for Chinese NER
Character-based models, such as BERT, have achieved remarkable success in Chinese named entity recognition (NER). However, such models are likely to miss the overall information of entity words. In this paper, we propose to combine prior entity information with BERT. Instead of relying on additional lexicons or pre-trained word embeddings, our model generates entity classification embeddings directly on top of the pre-trained BERT, which has the merit of increasing model practicability and avoiding the OOV problem. Experiments show that our model achieves state-of-the-art results on 3 Chinese NER datasets.
reject
The paper addresses Chinese Named Entity Recognition, building on a pre-trained BERT model. All reviewers agree that the contribution has limited novelty. The motivation leading to the chosen architecture is also missing. In addition, the writing of the paper should be improved.
train
[ "BJlwlG5MiB", "SylCtptboB", "S1gg_BFZiS", "ryejz8JhFr", "ByeUmO-R5H", "rylcHyzA9r" ]
[ "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "thank you so much for your kind review. Here gives answsers to all of your questions\n\n1、\tResult compare with ERNIE+CRF\n Baidu did not publish the pre-training model for the Ernine 2.0, but from the paper, the results of MSRA are as follows:\n Ernie 1.0 Base f1:93.8\n Ernie 2.0 Base f1:...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 3, 1, 3 ]
[ "ryejz8JhFr", "ByeUmO-R5H", "rylcHyzA9r", "iclr_2020_B1gUn24tPr", "iclr_2020_B1gUn24tPr", "iclr_2020_B1gUn24tPr" ]
iclr_2020_rkgPnhNFPB
Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures
This paper shows that deep learning (DL) representations of data produced by generative adversarial nets (GANs) are random vectors which fall within the class of so-called concentrated random vectors. Further exploiting the fact that Gram matrices, of the type G = X'X with X = [x_1 , . . . , x_n ] ∈ R p×n and x_i independent concentrated random vectors from a mixture model, behave asymptotically (as n, p → ∞) as if the x_i were drawn from a Gaussian mixture, suggests that DL representations of GAN-data can be fully described by their first two statistical moments for a wide range of standard classifiers. Our theoretical findings are validated by generating images with the BigGAN model and across different popular deep representation networks.
reject
The paper theoretically shows that the data (embedded by representations learned by GANs) are essentially the same as a high dimensional Gaussian mixture. The result is based on a recent result from random matrix theory on the covariance matrix of data, which the authors extend to a theorem on the Gram matrix of the data. The authors also provide a small experiment comparing the spectrum and principle 2D subspace of BigGAN and Gaussian mixtures, demonstrating that their theorem applies in practice. Two of the reviews (with confident reviewers) were quite negative about the contributions of the paper, and the reviewers unfortunately did not participate in the discussion period. Overall, the paper seems solid, but the reviews indicate that improvements are needed in the structure and presentation of the theoretical results. Given the large number of submissions at ICLR this year, the paper in its current form does not pass the quality threshold for acceptance.
train
[ "B1gTYwE3oH", "SJegKPx5oB", "ByltivlcjS", "HyevOLg9jS", "HyeZ4IlciS", "BylHxsbqtS", "rJlB6yg6tr", "HJlAnKJCKB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We gratefully thank all reviewers for their feedback and constructive comments, and also for the time taken to review our paper. We also thank the ACs for their efforts. We have tried to respond to all the reviewers' comments and have updated our paper by addressing them. We hope this will encourage reviewers 1 an...
[ -1, -1, -1, -1, -1, 6, 3, 1 ]
[ -1, -1, -1, -1, -1, 1, 5, 4 ]
[ "iclr_2020_rkgPnhNFPB", "HJlAnKJCKB", "BylHxsbqtS", "rJlB6yg6tr", "rJlB6yg6tr", "iclr_2020_rkgPnhNFPB", "iclr_2020_rkgPnhNFPB", "iclr_2020_rkgPnhNFPB" ]
iclr_2020_r1eOnh4YPB
How Does Learning Rate Decay Help Modern Neural Networks?
Learning rate decay (lrDecay) is a \emph{de facto} technique for training modern neural networks. It starts with a large learning rate and then decays it multiple times. It is empirically observed to help both optimization and generalization. Common beliefs in how lrDecay works come from the optimization analysis of (Stochastic) Gradient Descent: 1) an initially large learning rate accelerates training or helps the network escape spurious local minima; 2) decaying the learning rate helps the network converge to a local minimum and avoid oscillation. Despite the popularity of these common beliefs, experiments suggest that they are insufficient to explain the general effectiveness of lrDecay in training modern neural networks that are deep, wide, and nonconvex. We provide another novel explanation: an initially large learning rate suppresses the network from memorizing noisy data, while decaying the learning rate improves the learning of complex patterns. The proposed explanation is validated on a carefully constructed dataset with tractable pattern complexity. Its implication, that additional patterns learned in later stages of lrDecay are more complex and thus less transferable, is justified on real-world datasets. We believe that this alternative explanation will shed light on the design of better training strategies for modern neural networks.
reject
This paper seeks to understand the effect of learning rate decay in neural net training. This is an important question in the field and this paper also proposes to show why previous explanations were not correct. However, the reviewers found that the paper did not explain the experimental setup enough to be reproducible. Furthermore, there are significant problems with the novelty of the work due to its overlap with works such as (Nakiran et al., 2019), (Li et al. 2019) or (Jastrzębski et al. 2017).
train
[ "rJxKrLM3KS", "SygcfD6htS", "S1lvGJSE9H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper investigates the role of learning rate decay in neural network training. While there are prevalent ideas of how/why learning rate decay helps both optimization and generalization of neural networks, this work proposes an interpretation based on pattern complexity. The mechanism the paper proposes is that ini...
[ 3, 1, 1 ]
[ 3, 4, 4 ]
[ "iclr_2020_r1eOnh4YPB", "iclr_2020_r1eOnh4YPB", "iclr_2020_r1eOnh4YPB" ]
iclr_2020_rklFh34Kwr
Bayesian Inference for Large Scale Image Classification
Bayesian inference promises to ground and improve the performance of deep neural networks. It promises to be robust to overfitting, to simplify the training procedure and the space of hyperparameters, and to provide a calibrated measure of uncertainty that can enhance decision making, agent exploration and prediction fairness. Markov Chain Monte Carlo (MCMC) methods enable Bayesian inference by generating samples from the posterior distribution over model parameters. Despite the theoretical advantages of Bayesian inference and the similarity between MCMC and optimization methods, the performance of sampling methods has so far lagged behind optimization methods for large scale deep learning tasks. We aim to fill this gap and introduce ATMC, an adaptive noise MCMC algorithm that estimates and is able to sample from the posterior of a neural network. ATMC dynamically adjusts the amount of momentum and noise applied to each parameter update in order to compensate for the use of stochastic gradients. We use a ResNet architecture without batch normalization to test ATMC on the Cifar10 benchmark and the large scale ImageNet benchmark and show that, despite the absence of batch normalization, ATMC outperforms a strong optimization baseline in terms of both classification accuracy and test log-likelihood. We show that ATMC is intrinsically robust to overfitting on the training data and that ATMC provides a better calibrated measure of uncertainty compared to the optimization baseline.
reject
This paper proposes a variant of Hamiltonian Monte Carlo for Bayesian inference in deep learning. Although the reviewers acknowledge the ambition, scope, and novelty of the paper, they still have a number of reservations regarding the experimental results and claims (regarding the need for hyperparameter tuning). The overall score consequently falls below acceptance. Rejection is recommended. The reservations made by the referees should definitely be addressable before the next conference deadline, so we look forward to seeing the paper published as soon as possible.
test
[ "Hyl58skOjH", "S1lKL5JujB", "H1gNOtJ_sr", "BJl3XMGhtS", "BJlHL35H9B", "rJlQBJSP9H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for taking the time to write an elaborate review with detailed questions. We would like to provide answers to the questions raised:\n\n1) ATMC falls within the framework of SG-MCMC methods and its theoretical soundness relies on the general framework (Eq. 1) to define unbiased SG-MCMC method...
[ -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, 4, 4, 1 ]
[ "BJl3XMGhtS", "BJlHL35H9B", "rJlQBJSP9H", "iclr_2020_rklFh34Kwr", "iclr_2020_rklFh34Kwr", "iclr_2020_rklFh34Kwr" ]
iclr_2020_HJlF3h4FvB
Distillation ≈ Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized NN
Distillation is a method to transfer knowledge from one model to another and often achieves higher accuracy with the same capacity. In this paper, we aim to provide a theoretical understanding of what mainly helps with distillation. Our answer is "early stopping". Assuming that the teacher network is overparameterized, we argue that the teacher network is essentially harvesting dark knowledge from the data via early stopping. This can be justified by a new concept, Anisotropic Information Retrieval (AIR), which means that the neural network tends to fit the informative information first and the non-informative information (including noise) later. Motivated by the recent development in theoretically analyzing overparameterized neural networks, we can characterize AIR by the eigenspace of the Neural Tangent Kernel (NTK). AIR facilitates a new understanding of distillation. With that, we further utilize distillation to refine noisy labels. We propose a self-distillation algorithm to sequentially distill knowledge from the network in the previous training epoch to avoid memorizing the wrong labels. We also demonstrate, both theoretically and empirically, that self-distillation can benefit from more than just early stopping. Theoretically, we prove convergence of the proposed algorithm to the ground truth labels for randomly initialized overparameterized neural networks in terms of l2 distance, while the previous result was on convergence in 0-1 loss. The theoretical result ensures the learned neural network enjoys a margin on the training data, which leads to better generalization. Empirically, we achieve better testing accuracy and entirely avoid early stopping, which makes the algorithm more user-friendly.
reject
This paper tries to bridge early stopping and distillation. 1) In Section 2, the authors empirically show a stronger distillation effect with early stopping. 2) In Section 3, the authors propose a new provable algorithm for training with noisy labels. In the discussion phase, all reviewers discussed a lot. In particular, a reviewer highlights the importance of Section 3. On the other hand, other reviewers pointed out "what is the role of Section 2", as the abstract/intro tends to emphasize the content of Section 2. I mostly agree with all pros and cons pointed out by reviewers. I agree that the paper proposes an interesting idea for refining noisy labels with theoretical guarantees. However, the major reason for my reject decision is that the current write-up is a bit below the borderline to be accepted considering the high standard of ICLR, e.g., many typos (what is the 172 norm on page 4?) and misleading intro/abstract/organization. Overall, it was also hard for me to read the paper. I do believe that the paper could be much improved if the authors make more significant editorial efforts considering a broader range of readers. I have additional suggestions for improving the paper, which I hope are useful. * Put Section 3 earlier (i.e., put Section 2 later) and revise the intro/abstract so that the reader can clearly understand what the main contribution is. * Section 2.1 is too weak to claim a stronger distillation effect with early stopping. More experimental or theoretical studies are necessary, e.g., you can control the temperature parameter T of knowledge distillation to provide the "early stopping" effect without actual "early stopping" (the choice of T is not mentioned in the draft even though it is an important hyper-parameter). * More experimental support for your algorithm is desirable, e.g., consider more datasets, state-of-the-art baselines, noise types, and neural architectures (e.g., NLP models). * Soften some sentences to avoid potential over-claims for some readers.
val
[ "r1xggkhG9r", "ryxgNmD3or", "rJlea-XcoB", "Skgtb22tor", "Skxkyc8vsB", "SkeE6XPMjB", "B1x0BeDzoS", "HJe2-XIMiH", "SygV18rziH", "Bye77_1GoB", "HJe26OJMsr", "rJgoFcQoKr", "SJxfNIDYqr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a self-distillation algorithm for training an over-parameterized neural network with noisily labelled data. It is shown that for a binary classification task on clustered data (same as [Li et al. 2019]), even if the labels are corrupted, the self-distillation algorithm applied on a sufficiently...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2020_HJlF3h4FvB", "r1xggkhG9r", "iclr_2020_HJlF3h4FvB", "rJgoFcQoKr", "B1x0BeDzoS", "B1x0BeDzoS", "HJe2-XIMiH", "SygV18rziH", "Bye77_1GoB", "SJxfNIDYqr", "r1xggkhG9r", "iclr_2020_HJlF3h4FvB", "iclr_2020_HJlF3h4FvB" ]
iclr_2020_S1x522NFvS
On unsupervised-supervised risk and one-class neural networks
Most unsupervised neural network training methods concern generative models, deep clustering, pretraining, or some form of representation learning. In this work, we instead deal with unsupervised training of the final classification stage of a standard deep learning stack, with a focus on two types of methods: unsupervised-supervised risk approximations and one-class models. We derive a new analytical solution for the former and identify and analyze its similarity with the latter. We apply and validate the proposed approach under multiple experimental conditions, in particular on four challenging recent Natural Language Processing tasks as well as on an anomaly detection task, where it improves over state-of-the-art models.
reject
This paper makes a connection between one-class neural networks and the unsupervised approximation of the binary classifier risk under the hinge loss. An important contribution of the paper is the algorithm to train a binary classifier without supervision by using the class prior and the hypothesis that class-conditional classifier scores have a normal distribution. The technical contribution of the paper is novel and brings an increased understanding of one-class neural networks. The equations and the modeling present in the paper are sound and the paper is well-written. However, in its current form, as pointed out by the reviewers, the experimental section is rather weak and can be substantially improved by adding extra experiments as suggested by reviewers #1, #2. Since its submission, the paper has not yet been updated to incorporate these comments. Thus, for now, I recommend rejection of this paper; however, with these improvements, I'm sure it can be a good contribution at other conferences.
train
[ "SkxQrbQiYS", "rkxx7P0qiH", "Syx7kQAqoB", "SJg_Rspqor", "HygubeqpFS", "B1lq7qqAKH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "UPDATE:\nI acknowledge that I’ve read the author responses as well as other reviews. \nAfter reading, I would keep my rating at 3 (Weak Reject), since the key reasons for my rating still hold.\n\n####################\n\nThis work makes a connection between recently introduced one-class neural networks [8, 4] and t...
[ 3, -1, -1, -1, 3, 6 ]
[ 5, -1, -1, -1, 4, 3 ]
[ "iclr_2020_S1x522NFvS", "SkxQrbQiYS", "HygubeqpFS", "B1lq7qqAKH", "iclr_2020_S1x522NFvS", "iclr_2020_S1x522NFvS" ]
iclr_2020_BJe932EYwS
PNAT: Non-autoregressive Transformer by Position Learning
Non-autoregressive generation is a new paradigm for text generation. Previous work hardly considers explicitly modeling the positions of generated words. However, position modeling of output words is an essential problem in non-autoregressive text generation. In this paper, we propose PNAT, which explicitly models positions of output words as latent variables in text generation. The proposed PNAT is simple yet effective. Experimental results show that PNAT gives very promising results in machine translation and paraphrase generation tasks, outperforming many strong baselines.
reject
This paper presents a non-autoregressive NMT model which predicts the positions of the words to be produced as a latent variable in addition to predicting the words. This is a novel idea among several other papers that are trying to do similar things, and it obtains good results on benchmark tasks. The major concern is the missing systematic comparison with the FlowSeq paper, which seems to have been published before the ICLR submission deadline. The reviewers are still not convinced by the empirical performance comparison as well as speed comparisons. With some more work, this could be a good contribution. As of now, I am recommending rejection.
train
[ "SkgebGyeqS", "HyeJbbY3oB", "B1eUvwdniS", "BJxq8jdnoS", "BkgNOS_3oB", "ryxJTkF3sr", "rke4f6_nsB", "HylmOtO2sH", "H1xSod_hjH", "BylAtfRPjB", "B1lmT07ZsS", "B1edalI1jr", "Sklk_sZaYH", "Hyguu2Pc9B", "r1xk5ddCFB", "rkw2oYcFS", "HylEgeZ9tB", "r1xX8qzGYr", "rkgqDJBxtB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "public", "public", "official_reviewer", "author", "public", "author", "author", "public", "public" ]
[ "This work proposes an alternative approach to non-autoregressive translation (NAT) by predicting positions in addition to the word identities, such that the word order in the final prediction doesn't matter as long as the positions are correct. The length of the translation is predicted similar to Gu et al 2017, a...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, 6, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_BJe932EYwS", "ryxJTkF3sr", "Sklk_sZaYH", "BylAtfRPjB", "iclr_2020_BJe932EYwS", "SkgebGyeqS", "BJxq8jdnoS", "B1edalI1jr", "B1lmT07ZsS", "iclr_2020_BJe932EYwS", "iclr_2020_BJe932EYwS", "iclr_2020_BJe932EYwS", "iclr_2020_BJe932EYwS", "r1xk5ddCFB", "iclr_2020_BJe932EYwS", "rkgqD...
iclr_2020_Bkxonh4Ywr
Localizing and Amortizing: Efficient Inference for Gaussian Processes
The inference of Gaussian Processes concerns the distribution of the underlying function given observed data points. GP inference based on local ranges of data points is able to capture fine-scale correlations and allows fine-grained decomposition of the computation. Following this direction, we propose a new inference model that considers the correlations and observations of the K nearest neighbors for inference at a data point. Compared with previous works, we also eliminate the data-ordering prerequisite to simplify the inference process. Additionally, the inference task is decomposed into small subtasks with several technical innovations, making our model well suited to stochastic optimization. Since the decomposed small subtasks share the same structure, we further speed up the inference procedure with amortized inference. Our model runs efficiently and achieves good performance on several benchmark tasks.
reject
This paper presents a method for speeding up Gaussian process inference by leveraging locality information through k-nearest neighbours. The key idea is well motivated intuitively; however, the way in which it is implemented seems to introduce new complications. One such issue is KNN overhead in high dimensions, but R1 outlines other potential issues too. Moreover, the method's merit is not demonstrated in a convincing way through the experiments. The authors have provided a rebuttal for those issues, but it does not seem to resolve the concerns entirely.
train
[ "HkewrlOptS", "BkgHY8LhjB", "SJlyDa2jjB", "ByxFBOroor", "r1eqTvSssH", "HJeSiUBjoB", "SkldBNWptS", "BJelqU8X9H", "S1xMXMHjFB", "Bylk097IYB" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "PAPER SUMMARY:\n\nThis paper proposes a fast inference method for Gaussian processes (GPs) that imposes a sparse decomposition on the VI approximation of the posterior GP (for computational efficiency) using the KNN set of each data point. This is further coupled with armortized inference for better scalability. \...
[ 1, -1, -1, -1, -1, -1, 3, 3, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, 5, 5, -1, -1 ]
[ "iclr_2020_Bkxonh4Ywr", "SJlyDa2jjB", "HJeSiUBjoB", "SkldBNWptS", "HkewrlOptS", "BJelqU8X9H", "iclr_2020_Bkxonh4Ywr", "iclr_2020_Bkxonh4Ywr", "Bylk097IYB", "iclr_2020_Bkxonh4Ywr" ]
iclr_2020_SJl3h2EYvS
CLAREL: classification via retrieval loss for zero-shot learning
We address the problem of learning fine-grained cross-modal representations. We propose an instance-based deep metric learning approach in joint visual and textual space. The key novelty of this paper is that it shows that using per-image semantic supervision leads to substantial improvement in zero-shot performance over using class-only supervision. On top of that, we provide a probabilistic justification for a metric rescaling approach that solves a very common problem in the generalized zero-shot learning setting, i.e., classifying test images from unseen classes as one of the classes seen during training. We evaluate our approach on two fine-grained zero-shot learning datasets: CUB and FLOWERS. We find that on the generalized zero-shot classification task CLAREL consistently outperforms the existing approaches on both datasets.
reject
This paper demonstrates that per-image semantic supervision, as opposed to class-only supervision, can benefit zero-shot learning performance in certain contexts. Evaluations are conducted using CUB and FLOWERS fine-grained zero-shot data sets. In terms of evaluation, the paper received mixed final scores (two reject, one accept). During the rebuttal period, both reject reviewers considered the author responses, but in the end did not find the counterarguments sufficiently convincing. For example, one reviewer maintained that in its present form, the paper appeared too shallow without additional experiments and analyses to justify the suitability of the contrastive loss used for obtaining embeddings applied to zero-shot learning. Another continued to believe post-rebuttal that reference Reed et al., (2016) undercut the novelty of the proposed approach. And consistent with these sentiments, even the reviewer who voted for acceptance alluded to the limited novelty of the proposed approach; however, the author response merely states that a future revision will clarify the novelty. But this then requires another round of reviewing to determine whether the contribution is sufficiently new, especially given that all reviewers raised this criticism in one way or another. Furthermore, the rebuttal also mentions the inclusion of some additional experiments, but again, we don't know how these will turn out. Based on these considerations then, the AC did not see sufficient justification for accepting a paper with aggregate scores that are otherwise well below the norm for successful ICLR submissions.
train
[ "rkx6ViXCKH", "rkxFo0q3oH", "r1xr9z_3ir", "r1g75WrnjB", "rkg6padisr", "BylkWCOisH", "S1eWjT_joB", "rkxBq85pYr", "B1gFTZm0YB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper tackles zero-shot and generalised zero-shot learning by using per-image semantic information. An instance-based loss is introduced to align images and their corresponding text in the same embedding space. To solve the extreme imbalance issue of generalized zero-shot learning, the authors propose to...
[ 1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 5, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_SJl3h2EYvS", "r1g75WrnjB", "rkxBq85pYr", "B1gFTZm0YB", "rkx6ViXCKH", "rkx6ViXCKH", "rkx6ViXCKH", "iclr_2020_SJl3h2EYvS", "iclr_2020_SJl3h2EYvS" ]
iclr_2020_HyxnnnVtwB
High performance RNNs with spiking neurons
The increasing need for compact and low-power computing solutions for machine learning applications has triggered a renaissance in the study of energy-efficient neural network accelerators. In particular, in-memory computing neuromorphic architectures have started to receive substantial attention from both academia and industry. However, most of these architectures rely on spiking neural networks, which typically perform poorly compared to their non-spiking counterparts in terms of accuracy. In this paper, we propose a new adaptive spiking neuron model that can also be abstracted as a low-pass filter. This abstraction enables faster and better training of spiking networks using back-propagation, without simulating spikes. We show that this model dramatically improves the inference performance of a recurrent neural network and validate it with three complex spatio-temporal learning tasks: the temporal addition task, the temporal copying task, and a spoken-phrase recognition task. Application of these results will lead to the development of powerful spiking models for neuromorphic hardware that solve relevant edge-computing and Internet-of-Things applications with high accuracy and ultra-low power consumption.
reject
This paper presents a new mechanism to train spiking neural networks that is more suitable for neuromorphic chips. While the text is well written and the experiments provide an interesting analysis, the relevance of the proposed neuron models to the ICLR/ML community seems small at this point. My recommendation is that this paper should be submitted to a more specialised conference/workshop dedicated to hardware methods.
train
[ "Bkx31xO0YB", "BJeWJNSkjr", "BJe3z5RfiB", "Ske0xUCfiB", "SyluBkCMiH", "Hyeao9wxjB", "rkxbmCrTKS", "BJgkNLcAtr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper promises efficient training of spiking neuron models using back-propagation. The authors say that this is important because spiking networks offer significant energy savings, yet they typically perform poorly compared to prevalent artificial neural networks (ANNs). They aim to support this with several s...
[ 6, -1, -1, -1, -1, -1, 6, 1 ]
[ 1, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_HyxnnnVtwB", "BJgkNLcAtr", "rkxbmCrTKS", "Bkx31xO0YB", "iclr_2020_HyxnnnVtwB", "iclr_2020_HyxnnnVtwB", "iclr_2020_HyxnnnVtwB", "iclr_2020_HyxnnnVtwB" ]
iclr_2020_B1l6nnEtwr
AN EFFICIENT HOMOTOPY TRAINING ALGORITHM FOR NEURAL NETWORKS
We present a Homotopy Training Algorithm (HTA) to solve optimization problems arising from neural networks. The HTA starts with several decoupled systems with low-dimensional structure and tracks the solution to the high-dimensional coupled system. The decoupled systems are easy to solve due to their low dimensionality but can be connected to the original system via a continuous homotopy path guided by the HTA. We have proved the convergence of HTA for the non-convex case and the existence of the homotopy solution path for the convex case. The HTA provides better accuracy on several examples, including VGG models on CIFAR-10. Moreover, the HTA can be combined with the dropout technique to provide an alternative way to train neural networks.
reject
The work proposes to learn neural networks using a homotopy-based continuation method. Reviewers found the idea interesting, but the manuscript poorly written, and lacking in experimental results. With no response from the authors, I recommend rejecting the paper.
train
[ "H1xf8Ek2FB", "SygsJ2af9H", "Skla9ft35S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose the Homotopy Training Algorithm (HTA) for neural network optimization problems. They claim that HTA starts with several simplified problems and tracks the solution to the original problem via a continuous homotopy path. They give the theoretical analysis and conduct experiments o...
[ 3, 3, 1 ]
[ 5, 1, 3 ]
[ "iclr_2020_B1l6nnEtwr", "iclr_2020_B1l6nnEtwr", "iclr_2020_B1l6nnEtwr" ]
iclr_2020_Byla224KPr
An Empirical Study on Post-processing Methods for Word Embeddings
Word embeddings learnt from large corpora have been adopted in various applications in natural language processing and served as the general input representations to learning systems. Recently, a series of post-processing methods have been proposed to boost the performance of word embeddings on similarity comparison and analogy retrieval tasks, and some have been adapted to compose sentence representations. The general hypothesis behind these methods is that by enforcing the embedding space to be more isotropic, the similarity between words can be better expressed. We view these methods as an approach to shrink the covariance/gram matrix, which is estimated by learning word vectors, towards a scaled identity matrix. By optimising an objective in the semi-Riemannian manifold with Centralised Kernel Alignment (CKA), we are able to search for the optimal shrinkage parameter, and provide a post-processing method to smooth the spectrum of learnt word vectors which yields improved performance on downstream tasks.
reject
This paper explores a post-processing method for word vectors to "smooth the spectrum," and shows improvements on some downstream tasks. Reviewers had some questions about the strength of the results, and the results on words of differing frequency. The reviewers also have comments on the clarity of the paper, as well as the exposition of some of the methods. Also, for future submissions to ICLR and other such conferences, it is more typical to address the reviewers' comments in a direct response rather than to make changes to the document without summarizing and pointing reviewers to these changes. Without direction about what was changed or where to look, there is a lot of burden being placed on the reviewers to find your responses to their comments.
train
[ "rygKXyonjB", "BJeWeuyWiS", "BygceR5dFS", "rylgvrfAtB", "r1g_kvLgqr" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We really appreciate reviewers' and audience's comments on improving our paper, and we uploaded a new version of our submission. Thanks, ", "Dear authors,\n\nFor the word translation task, I wonder whether the post-processing happens before the alignment (i.e., it is applied to the two monolingual embeddings) or...
[ -1, -1, 1, 6, 3 ]
[ -1, -1, 5, 3, 4 ]
[ "iclr_2020_Byla224KPr", "iclr_2020_Byla224KPr", "iclr_2020_Byla224KPr", "iclr_2020_Byla224KPr", "iclr_2020_Byla224KPr" ]
iclr_2020_Hkgpnn4YvH
Graph Neural Networks For Multi-Image Matching
In geometric computer vision applications, multi-image feature matching gives more accurate and robust solutions than simple two-image matching. In this work, we formulate multi-image matching as a graph embedding problem, then use a Graph Neural Network to learn an appropriate embedding function for aligning image features. We use cycle consistency to train our network in an unsupervised fashion, since ground truth correspondence can be difficult or expensive to acquire. Geometric consistency losses are added to aid training, though, unlike optimization-based methods, no geometric information is necessary at inference time. To the best of our knowledge, no other works have used graph neural networks for multi-image feature matching. Our experiments show that our method is competitive with other optimization-based approaches.
reject
The paper proposes a method for learning multi-image matching using graph neural networks. The model is learned by making use of cycle consistency constraints and geometric consistency, and it achieves a performance that is comparable to the state of the art. While the reviewers view the proposed method interesting in general, they raised issues regarding the evaluation, which is limited in terms of both the chosen datasets and prior methods. After rounds of discussion, the reviewers reached a consensus that the submission is not mature enough to be accepted for this venue at this time. Therefore, I recommend rejecting this submission.
train
[ "S1egIL32FS", "SkgDmMH2sS", "r1eTiAf2sr", "SkebzKM3ir", "SJxKfSlssS", "r1xi5mgisS", "SJggBXgojr", "Skxp-7gjjB", "ByxriGloor", "HygrfK3jKB", "Byx4oeYW9B" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents graph neural network approach for learning multi-view feature similarity. \nThe input data is local feature descriptors (SIFT), keypoint location, orientation and scale. The objective is to learn embedding such that features corresponding to the same 3d location will have similar embeddings, whi...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_Hkgpnn4YvH", "r1eTiAf2sr", "SkebzKM3ir", "SJxKfSlssS", "Skxp-7gjjB", "iclr_2020_Hkgpnn4YvH", "HygrfK3jKB", "S1egIL32FS", "Byx4oeYW9B", "iclr_2020_Hkgpnn4YvH", "iclr_2020_Hkgpnn4YvH" ]
iclr_2020_ryeRn3NtPH
Adversarial Inductive Transfer Learning with input and output space adaptation
We propose Adversarial Inductive Transfer Learning (AITL), a method for addressing discrepancies in input and output spaces between source and target domains. AITL utilizes adversarial domain adaptation and multi-task learning to address these discrepancies. Our motivating application is pharmacogenomics where the goal is to predict drug response in patients using their genomic information. The challenge is that clinical data (i.e. patients) with drug response outcome is very limited, creating a need for transfer learning to bridge the gap between large pre-clinical pharmacogenomics datasets (e.g. cancer cell lines) and clinical datasets. Discrepancies exist between 1) the genomic data of pre-clinical and clinical datasets (the input space), and 2) the different measures of the drug response (the output space). To the best of our knowledge, AITL is the first adversarial inductive transfer learning method to address both input and output discrepancies. Experimental results indicate that AITL outperforms state-of-the-art pharmacogenomics and transfer learning baselines and may guide precision oncology more accurately.
reject
The paper proposes an adversarial inductive transfer learning method that handles distribution changes in both input and output spaces. While the studied problem is interesting, the reviewers have major concerns about the incremental modeling contribution, the lack of a comparative study with existing methods, and the lack of an ablation study disentangling the different modules. Overall, the current study is not convincing in terms of either theoretical analysis or experimental results. Hence, I recommend rejection.
train
[ "BylVCvZKoH", "rkxB43ZFsr", "H1gMPFZFoB", "SJg0uDZYjH", "BJlfxmLTKS", "S1x9U-W2tS", "B1loCbGk5S" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your comments and feedback. Although we did not investigate the theoretical aspects of this new problem, we provided enough empirical evidence in our experiments to show the applicability of AITL. We respectfully disagree with the reviewer’s criticism that we studied only one pharmacogenomi...
[ -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, 3, 3, 1 ]
[ "S1x9U-W2tS", "H1gMPFZFoB", "BJlfxmLTKS", "B1loCbGk5S", "iclr_2020_ryeRn3NtPH", "iclr_2020_ryeRn3NtPH", "iclr_2020_ryeRn3NtPH" ]
iclr_2020_ryx0nnEKwH
Improving Batch Normalization with Skewness Reduction for Deep Neural Networks
Batch Normalization (BN) is a well-known technique used in training deep neural networks. The main idea behind batch normalization is to normalize the features of the layers (i.e., transforming them to have a mean equal to zero and a variance equal to one). Such a procedure encourages the optimization landscape of the loss function to be smoother and improves the learning of the network in terms of both speed and performance. In this paper, we demonstrate that the performance of the network can be improved if the distributions of the output features in the same layer are similar. As normalizing based on mean and variance does not necessarily make the features have the same distribution, we propose a new normalization scheme: Batch Normalization with Skewness Reduction (BNSR). Compared with other normalization approaches, BNSR transforms not only the mean and variance but also the skewness of the data. By tackling this property of a distribution, we are able to make the output distributions of the layers even more similar. The nonlinearity of BNSR may further improve the expressiveness of the underlying network. Comparisons with other normalization schemes are conducted on the CIFAR-100 and ImageNet datasets. Experimental results show that the proposed approach can outperform other state-of-the-art methods that are not equipped with BNSR.
reject
The paper proposes a novel mechanism to reduce the skewness of the activations and evaluates its claims on the CIFAR-10 and Tiny ImageNet datasets. The reviewers found the scale of the experiments to be too limited to support the claims. Thus, we recommend the paper be improved by considering larger datasets such as the full ImageNet. The paper should also better motivate the goal of reducing skewness.
train
[ "HJlamBFoYr", "SylxYe7TtS", "H1l5qFkCKr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed to improve the regular batch normalization by reducing the skewness of the hidden features. To this end, the authors introduce a non-linear function to reduce the skewness. However, the analysis and experiments are too weak to support the authors' claim.\n\n1. The motivation is not clear. Why i...
[ 3, 3, 3 ]
[ 4, 5, 5 ]
[ "iclr_2020_ryx0nnEKwH", "iclr_2020_ryx0nnEKwH", "iclr_2020_ryx0nnEKwH" ]
iclr_2020_Bkgk624KDB
Learning Effective Exploration Strategies For Contextual Bandits
In contextual bandits, an algorithm must choose actions given observed contexts, learning from a reward signal that is observed only for the action chosen. This leads to an exploration/exploitation trade-off: the algorithm must balance taking actions it already believes are good with taking new actions to potentially discover better choices. We develop a meta-learning algorithm, MELEE, that learns an exploration policy based on simulated, synthetic contextual bandit tasks. MELEE uses imitation learning against these simulations to train an exploration policy that can be applied to true contextual bandit tasks at test time. We evaluate on both a natural contextual bandit problem derived from a learning-to-rank dataset and hundreds of simulated contextual bandit problems derived from classification tasks. MELEE outperforms seven strong baselines on most of these datasets by leveraging a rich feature representation for learning an exploration strategy.
reject
This paper introduces MELEE, a meta-learning procedure for contextual bandits. In particular, MELEE learns how to explore by training on datasets with full information about the reward each action would obtain (e.g., using classification datasets). The idea is strongly related to imitation learning, and a regret bound that comes from that literature is demonstrated for the procedure. Experiments are performed. Perhaps due to the generality with which the algorithm was presented, reviewers found some parts of the work unintuitive and difficult to follow. The work may greatly benefit from having an explicit running example for F and pi and how it evolves during training. Some reviewers were not impressed by the experimental results relative to epsilon-greedy. Yes, epsilon-greedy is a strong baseline, but MELEE introduces significant technical debt and data infrastructure, so it seems fair to expect a sizable bump over epsilon-greedy; otherwise, why is it worth it? Perhaps with revisions and experiments within a domain that justify its complexity, this paper may be suitable at another venue. But it is not deemed acceptable at this time. Reject.
train
[ "HJlzZRt3sr", "BJeD3aNsjr", "Syl1CF8Oor", "S1xGZY8OiB", "ryxJ7uU_iS", "Hyxr-LhDYH", "rkg1o4HTYH", "S1eaIvcAYH" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors appreciate the reviewers' suggestions for improving the overall exposure of the paper. In order to make it easier for reviewers’ to track the changes we kept the structure largely consistent with the original submission, but we’ll take all of these comments into account in the final version.\n\nWe've u...
[ -1, -1, -1, -1, -1, 1, 1, 3 ]
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2020_Bkgk624KDB", "Syl1CF8Oor", "Hyxr-LhDYH", "rkg1o4HTYH", "S1eaIvcAYH", "iclr_2020_Bkgk624KDB", "iclr_2020_Bkgk624KDB", "iclr_2020_Bkgk624KDB" ]
iclr_2020_HkxJpnVtPr
A Stochastic Trust Region Method for Non-convex Minimization
We target the problem of finding a local minimum in non-convex finite-sum minimization. Towards this goal, we first prove that the trust region method with inexact gradient and Hessian estimation can achieve a convergence rate of order O(1/k^{2/3}) as long as those differential estimations are sufficiently accurate. Combining this result with a novel Hessian estimator, we propose a sample-efficient stochastic trust region (STR) algorithm which finds an (ϵ,ϵ)-approximate local minimum within Õ(√n/ϵ^{1.5}) stochastic Hessian oracle queries. This improves the state-of-the-art result by a factor of O(n^{1/6}). Finally, we also develop Hessian-free STR algorithms which achieve the lowest runtime complexity. Experiments verify the theoretical conclusions and the efficiency of the proposed algorithms.
reject
This paper proposes a stochastic trust region method for local minimum finding based on variance reduction, which achieves better oracle complexities than some of the previous work. This is a borderline paper and has been carefully discussed. The main concern of the reviewers is that the paper falls short of proper experimental evaluation to support the theoretical analysis. In detail, the authors proved a globally sublinear convergence rate to a local minimum, yet the experiments demonstrate a linear or even quadratic convergence starting from the initialization. There is a big gap between the theoretical analysis and the experiments, which is probably due to the experimental design. In addition, the authors did not submit a revision during the author response period (while it is optional), so it is unclear whether a major revision would be required to address all the reviewers' comments. In fact, one reviewer thinks that a major revision is needed. I agree with the reviewers' evaluation and encourage the authors to improve this paper before future submission.
train
[ "rylTGcNqsS", "Ske6YFV9iH", "rkxut2swoH", "rkeYPtRFjS", "SJlK12jDjr", "ByeYqiswoB", "HJxpOdOpFH", "ByxtD62RYB", "BJxYfYlkqB", "BkgKOMlnqS" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the immediate response. Is there anything else that needs to be clarified?", "We thank the reviewer for the positive comments.", "1. biased estimator:\nEstimators 3 and 4 are indeed unbiased gradient and Hessian estimators, respectively, which can be checked via a simple induction. Ta...
[ -1, -1, -1, -1, -1, -1, 3, 6, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 1, 4 ]
[ "rkeYPtRFjS", "ByxtD62RYB", "HJxpOdOpFH", "ByeYqiswoB", "BJxYfYlkqB", "BkgKOMlnqS", "iclr_2020_HkxJpnVtPr", "iclr_2020_HkxJpnVtPr", "iclr_2020_HkxJpnVtPr", "iclr_2020_HkxJpnVtPr" ]
iclr_2020_HkxeThNFPH
Safe Policy Learning for Continuous Control
We study continuous action reinforcement learning problems in which it is crucial that the agent interacts with the environment only through safe policies, i.e.,~policies that keep the agent in desirable situations, both during training and at convergence. We formulate these problems as {\em constrained} Markov decision processes (CMDPs) and present safe policy optimization algorithms that are based on a Lyapunov approach to solve them. Our algorithms can use any standard policy gradient (PG) method, such as deep deterministic policy gradient (DDPG) or proximal policy optimization (PPO), to train a neural network policy, while guaranteeing near-constraint satisfaction for every policy update by projecting either the policy parameter or the selected action onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints. Compared to the existing constrained PG algorithms, ours are more data efficient as they are able to utilize both on-policy and off-policy data. Moreover, our action-projection algorithm often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with the state-of-the-art baselines on several simulated (MuJoCo) tasks, as well as a real-world robot obstacle-avoidance problem, demonstrating their effectiveness in terms of balancing performance and constraint satisfaction.
reject
The paper is about learning policies in RL while ensuring safety (avoiding constraint violations) during training and testing. For this meta review, I ignore Reviewer #3 because that review is useless. The discussion between the authors and Reviewer #1 was useful. Overall, the paper introduces an interesting idea, and the wider context (safe learning) is very relevant. However, I also have some concerns. One of my biggest concerns is that the method proposed here relies heavily on linearizations to deal with nonlinearities. However, the fact that this leads to approximation errors is not being acknowledged much. There are also small things, such as the (average) KL divergence between parameters, which makes no sense to me because the parameters don't have distributions (section 3.1). In terms of experiments, I appreciate that the authors tested the proposed method on multiple environments. The results, however, show that safety cannot be guaranteed. For example, in Figure 1(c), SDDPG clearly violates the constraints. The figures are also misleading because they show the summary statistics of the trajectories (mean and standard deviation). If we were to look at individual trajectories, we would find trajectories that violate the constraints. This fact is brushed under the carpet in the evaluation, and the paper even claims that "our algorithms quickly stabilize the constraint cost below the threshold". This may be true on average, but not for all trajectories. A more careful analysis and a more honest discussion would have been useful. In the robotics experiment, I would like to understand why we allow for any collisions. Why can't we set $d_0=0$, thereby disallowing collisions? The threshold in the paper looks pretty arbitrary. Again, the paper states that "Figure 4a and Figure 4b show that the Lyapunov-based PG algorithms have higher success rates". This is a pretty optimistic interpretation of the figure given the size of the error bars.
There are some points in the conclusion that I also disagree with: 1) "achieve safe learning": Given that some trajectories violate the constraints, "safe" is maybe a bit of an overstatement. 2) "better data efficiency": compared to what? 3) "scalable to tackle real-world problems": I disagree with this one as well because for all experiments you will need to run an excessive number of trials, which will not be feasible on a real-world system (assuming we are talking about robots). Overall, I think the paper has some potential, but it needs some more careful theoretical analysis (e.g., the effect of linearization errors) and some better empirical analysis. Additionally, given that the paper is at around 9 pages (including the figures in the appendix, which the main paper cites), we are supposed to have higher standards for acceptance than for an 8-page paper. Therefore, I recommend rejecting this paper.
train
[ "r1ezdqJ3oS", "B1gP-eEjoB", "BJxbkV6YjS", "BJgnXsffsr", "rJg94tfGsr", "rke7PtGfjB", "BJxv60VsYH", "S1lJDVGRtB", "HkgUCdPEcB" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Yes, editing section 3.1 is in your best interest as it was the hardest section for me to follow in an otherwise well argued paper. \n\nLine Search:\n\nOkay, that sounds fair. Thank you for the clarification. Maybe make this clear somewhere? Appendix is also fine as these are finer details and you are already runn...
[ -1, -1, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 1, 4 ]
[ "B1gP-eEjoB", "BJxbkV6YjS", "BJgnXsffsr", "BJxv60VsYH", "HkgUCdPEcB", "S1lJDVGRtB", "iclr_2020_HkxeThNFPH", "iclr_2020_HkxeThNFPH", "iclr_2020_HkxeThNFPH" ]
iclr_2020_BJleph4KvS
HaarPooling: Graph Pooling with Compressive Haar Basis
Deep Graph Neural Networks (GNNs) are instrumental in graph classification and graph-based regression tasks. In these tasks, graph pooling is a critical ingredient by which GNNs adapt to input graphs of varying size and structure. We propose a new graph pooling operation based on compressive Haar transforms, called HaarPooling. HaarPooling is computed following a chain of sequential clusterings of the input graph. The input of each pooling layer is transformed by the compressive Haar basis of the corresponding clustering. HaarPooling operates in the frequency domain by the synthesis of nodes in the same cluster and filters out fine detail information by compressive Haar transforms. Such transforms provide an effective characterization of the data and preserve the structure information of the input graph. By the sparsity of the Haar basis, the computation of HaarPooling is of linear complexity. The GNN with HaarPooling and existing graph convolution layers achieves state-of-the-art performance on diverse graph classification problems.
reject
This paper presents a new graph pooling method, called HaarPooling. Based on hierarchical HaarPooling, graph classification problems can be solved under the graph neural network framework. One major concern of the reviewers is the experimental design; the authors added a new real-world dataset in the revision. Another concern is computational performance: the main text did not give a comprehensive analysis, and the rebuttal did not fully address these problems. Overall, this paper presents an interesting graph pooling approach for graph classification, but the presentation needs further polish. Based on the reviewers' comments, I choose to reject the paper.
train
[ "HkeXNTWhoS", "r1gF_iZ3jB", "B1eu5aJnsB", "S1eIDszYir", "ByleEww9sr", "HJxo09zYiH", "HkeYoczYiH", "S1gjk9fKsr", "BJxj8ufYiH", "ByxuAn3TKB", "Syg7UrsD5S", "SkxciEMt9B", "BJlDfpXqcH" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- I think the meaning of (5) has been changed from the previous version because (5) in the updated version no longer means that the compressive and full Haar bases change the length of an input signal differently. Is my understanding correct?\n\nYes. We have a new interpretation.\n\n- The second equation of (5) is...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 3 ]
[ "r1gF_iZ3jB", "B1eu5aJnsB", "ByleEww9sr", "SkxciEMt9B", "S1gjk9fKsr", "Syg7UrsD5S", "BJlDfpXqcH", "BJxj8ufYiH", "ByxuAn3TKB", "iclr_2020_BJleph4KvS", "iclr_2020_BJleph4KvS", "iclr_2020_BJleph4KvS", "iclr_2020_BJleph4KvS" ]
iclr_2020_HylWahVtwB
Neural Architecture Search in Embedding Space
The neural architecture search (NAS) algorithm with reinforcement learning can be a powerful and novel framework for the automatic discovery of neural architectures. However, its application is restricted by non-continuous and high-dimensional search spaces, which make optimization difficult. To resolve these problems, we propose NAS in embedding space (NASES), a novel framework. Unlike other NAS-with-reinforcement-learning approaches that search over a discrete and high-dimensional architecture space, this approach enables reinforcement learning to search in an embedding space by using architecture encoders and decoders. Our experiments demonstrate that the performance of the final architecture found by the NASES procedure is comparable with that of other popular NAS approaches for the image classification task on CIFAR-10. The performance and effectiveness of NASES were impressive even when only architecture-embedding searching and controller pre-training were applied, without other NAS tricks such as parameter sharing. Specifically, a considerable reduction in search cost was achieved: on average, fewer than 100 architectures were searched to obtain the final architecture with the NASES procedure.
reject
This paper proposes a method for neural architecture search in embedding space. This is an interesting idea, but its novelty is limited due to its similarity to the NAO approach. Also, the empirical evaluation is too limited; comparisons should have been performed against NAO and other contemporary NAS methods, such as DARTS. Due to the factors above, all reviewers gave rejecting scores (3, 3, 1). The rebuttal did not resolve the main issues, and the reviewers stuck to their scores. I therefore recommend rejection.
train
[ "S1lnWYNBjr", "Sye2ajfriB", "B1gSO6FVsB", "rkg56JppKr", "Hkl24SyG9B", "BJeoxR3Q5H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank for your comment,\n\n1. This suggestion is very useful. The reason of only performs experiment on CIFAR10 is the original plan is to only compare with different methods of improving search space (like cell-based). If we performs more dataset will be greater, but it can not deny the contribution of this pape...
[ -1, -1, -1, 1, 3, 3 ]
[ -1, -1, -1, 5, 3, 3 ]
[ "Hkl24SyG9B", "BJeoxR3Q5H", "rkg56JppKr", "iclr_2020_HylWahVtwB", "iclr_2020_HylWahVtwB", "iclr_2020_HylWahVtwB" ]
iclr_2020_H1gza2NtwH
Towards understanding the true loss surface of deep neural networks using random matrix theory and iterative spectral methods
The geometric properties of loss surfaces, such as the local flatness of a solution, are associated with generalization in deep learning. The Hessian is often used to understand these geometric properties. We investigate the differences between the eigenvalues of the neural network Hessian evaluated over the empirical dataset, the Empirical Hessian, and the eigenvalues of the Hessian under the data generating distribution, which we term the True Hessian. Under mild assumptions, we use random matrix theory to show that the True Hessian has eigenvalues of smaller absolute value than the Empirical Hessian. We support these results for different SGD schedules on both a 110-Layer ResNet and VGG-16. To perform these experiments we propose a framework for spectral visualization, based on GPU accelerated stochastic Lanczos quadrature. This approach is an order of magnitude faster than state-of-the-art methods for spectral visualization, and can be generically used to investigate the spectral properties of matrices in deep learning.
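The spectral visualization this abstract describes rests on stochastic Lanczos quadrature. Below is a minimal single-probe NumPy sketch, not the authors' GPU-accelerated implementation; the function name `lanczos_quadrature`, the `matvec` operator argument (standing in for a Hessian-vector product), and the step count `m` are all illustrative assumptions. Averaging the returned node/weight pairs over many random probes approximates the spectral density of the operator.

```python
import numpy as np

def lanczos_quadrature(matvec, dim, m, rng):
    """One probe of stochastic Lanczos quadrature (illustrative sketch).

    Runs m Lanczos steps on the symmetric operator `matvec` starting from
    a random unit vector, and returns the Ritz values (quadrature nodes)
    with weights approximating the operator's spectral density.
    """
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    V = np.zeros((m, dim))           # Lanczos basis vectors, one per row
    alpha, beta = np.zeros(m), np.zeros(max(m - 1, 1))
    V[0] = v
    w = matvec(v)
    alpha[0] = v @ w
    w -= alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        if beta[j - 1] == 0:         # Krylov space exhausted early
            m = j
            break
        V[j] = w / beta[j - 1]
        w = matvec(V[j]) - beta[j - 1] * V[j - 1]
        alpha[j] = V[j] @ w
        w -= alpha[j] * V[j]
        # Full reorthogonalization keeps the basis numerically orthogonal.
        w -= V[: j + 1].T @ (V[: j + 1] @ w)
    # Tridiagonal matrix whose eigenpairs give the Gauss quadrature rule.
    T = (np.diag(alpha[:m])
         + np.diag(beta[: m - 1], 1)
         + np.diag(beta[: m - 1], -1))
    nodes, U = np.linalg.eigh(T)
    weights = U[0] ** 2              # squared first components, sum to 1
    return nodes, weights
```

With m equal to the matrix dimension and full reorthogonalization, a single probe recovers the exact eigenvalues; in the high-dimensional deep learning setting one instead uses m ≪ dim and averages over several probes.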
reject
The reviewers all appreciated the importance of the topic: understanding the local geometry of loss surfaces of large models is viewed as critical to understand generalization and design better optimization methods. However, reviewers also pointed out the strength of the assumptions and the limitations of the empirical study. Despite the claim that these assumptions are weaker than those made in prior work, this did not convince the reviewers that the conclusion could be applied to common loss landscapes. I encourage the authors to address the points made by the reviewers and submit an updated version to a later conference.
test
[ "HygJKX5cYS", "BJxc2AWnjH", "rJg_z0W2sH", "Bkek9aW3sH", "BJev2S5DFH", "S1eNmT16KH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nIn this paper, the authors uses the random matrix theory to study the spectrum distribution of the empirical Hessian and true Hessian for deep learning, and proposed an efficient spectrum visualization methods. The results obtained in the paper can shed some lights on the understanding of existing optimization a...
[ 3, -1, -1, -1, 3, 3 ]
[ 5, -1, -1, -1, 5, 3 ]
[ "iclr_2020_H1gza2NtwH", "BJev2S5DFH", "HygJKX5cYS", "S1eNmT16KH", "iclr_2020_H1gza2NtwH", "iclr_2020_H1gza2NtwH" ]
iclr_2020_BJxGan4FPB
Transfer Alignment Network for Double Blind Unsupervised Domain Adaptation
How can we transfer knowledge from a source domain to a target domain when neither side can observe the other's data? The recent state-of-the-art deep architectures show significant performance on classification tasks, which highly depends on a large amount of training data. To address the dearth of labeled target data, transfer learning and unsupervised learning leverage data from different sources and unlabeled data as training data, respectively. However, in some practical settings, transferring source data to the target domain is restricted due to a privacy policy. In this paper, we define the problem of unsupervised domain adaptation under the double blind constraint, where either the source or the target domain cannot observe the data in the other domain, but data from both domains are used for training. We propose TAN (Transfer Alignment Network for Double Blind Domain Adaptation), an effective method for this problem that aligns source and target domain features. TAN maps the target features into the source feature space so that the classifier learned from the labeled data in the source domain is readily usable in the target domain. Extensive experiments show that TAN 1) provides the state-of-the-art accuracy for double blind domain adaptation, and 2) outperforms baselines regardless of the proportion of target domain data in the training data.
reject
This paper tackles the problem of how to adapt a model from a source to a target domain when data from both domains are not simultaneously available (even unlabeled) to a single learner. This is of relevance for certain privacy-preserving applications where one setting would like to benefit from information learned in a related setting but, due to various factors, may not be willing to directly share data. The proposed solution is a transfer alignment network (TAN), which consists of two autoencoders (each trained independently on the source and the target) and an aligner that has the task of mapping the latent codes of one domain to the other. All three reviewers expressed concerns about this submission. Of greatest concern was the experimental setting. The datasets chosen were non-standard and there was no prior work to compare against directly, so the results presented are difficult to contextualize. The authors responded to this concern by stating that the existing domain adaptation benchmarks are more challenging and require more complex architectures to handle the "more complex data manifolds". The fact that existing benchmark datasets may be more complex than the dataset explored in this work is a concern. The authors should take care to clarify whether their proposed solution may only be applicable to specific types of data. In addition, the authors claim to address a new problem setting and therefore cannot compare directly to existing work. One suggestion: if using new data, report the performance of existing work under the standard setting to give readers some grounding for the privacy-preserving setting. Another option would be to provide scaffold results in the standard UDA setting but with frozen feature spaces. Another option would be to ablate the choice of the L2 loss for learning the transformer and instead train using an adversarial loss, an L1 loss, etc.
There are many ways the authors could both explore a new problem statement and provide convincing experimental evidence for their solution. The AC encourages the authors to revise their manuscript, paying special attention to clarity and experimental details in order to further justify their proposed work.
train
[ "BylQiZW2iS", "S1lDRlWnoS", "ByxkVbWnsH", "r1xX5KR0OB", "Syl16_j5tS", "BJlfihuRYH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the careful reading of the paper and their constructive comments. We would like to answer the reviewer’s questions as follows:\n\n1. Problem setting\nOur goal is to improve the performance of the target task by transferring only the trained source model. Our definition of blind setting re...
[ -1, -1, -1, 1, 1, 1 ]
[ -1, -1, -1, 4, 5, 5 ]
[ "r1xX5KR0OB", "BJlfihuRYH", "Syl16_j5tS", "iclr_2020_BJxGan4FPB", "iclr_2020_BJxGan4FPB", "iclr_2020_BJxGan4FPB" ]
iclr_2020_BJxVT3EKDH
Corpus Based Amharic Sentiment Lexicon Generation
Sentiment classification is an active research area with several applications, including analysis of political opinions and classification of comments, movie reviews, news reviews, and product reviews. To employ rule-based sentiment classification, we require sentiment lexicons. However, manual construction of sentiment lexicons is time consuming and costly for resource-limited languages. To bypass manual development time and costs, we build Amharic sentiment lexicons relying on a corpus-based approach. The intention of this approach is to handle sentiment terms specific to the Amharic language from an Amharic corpus. A small set of seed terms is manually prepared from three parts of speech: nouns, adjectives, and verbs. We developed algorithms for constructing Amharic sentiment lexicons automatically from an Amharic news corpus. The corpus-based approach relies on word co-occurrence distributional embeddings, including a frequency-based embedding (Positive Point-wise Mutual Information, PPMI). First, we build a word-context unigram frequency count matrix and transform it into a point-wise mutual information matrix. Using this matrix, we compute the cosine distance between the mean vector of the seed list and each word in the corpus vocabulary. Based on a threshold value, the words closest to the mean vector of the seed list are added to the lexicon. Then the mean vector of the new sentiment seed list is updated, and the process is repeated until we obtain sufficient terms in the lexicon. Using PPMI with threshold values of 100 and 200, we obtained corpus-based Amharic sentiment lexicons of size 1811 and 3794, respectively, by expanding 519 seeds. Finally, the lexicon generated by the corpus-based approach is evaluated.
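The expansion loop this abstract describes (mean vector of the seed list, cosine similarity over a PPMI matrix, thresholded additions, iterate) can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' code; in particular, treating the threshold as the number of closest words added per iteration, and the names `expand_lexicon` and `n_iters`, are assumptions.

```python
import numpy as np

def expand_lexicon(ppmi, vocab, seeds, threshold, n_iters=10):
    """Iteratively grow a sentiment lexicon from seed terms.

    ppmi      : (V, V) positive PMI word-context matrix
    vocab     : list of V words; row i of `ppmi` embeds vocab[i]
    seeds     : initial seed words (subset of vocab)
    threshold : words added per iteration (assumed interpretation)
    """
    idx = {w: i for i, w in enumerate(vocab)}
    lexicon = set(seeds)
    for _ in range(n_iters):
        # Mean vector of the current seed/lexicon list.
        mean = ppmi[[idx[w] for w in lexicon]].mean(axis=0)
        # Cosine similarity of every vocabulary word to the mean vector.
        norms = np.linalg.norm(ppmi, axis=1) * np.linalg.norm(mean)
        sims = ppmi @ mean / np.where(norms == 0, 1, norms)
        # Add the top-`threshold` closest new words, then the mean
        # is recomputed on the next pass over the enlarged lexicon.
        ranked = [vocab[i] for i in np.argsort(-sims)]
        new = [w for w in ranked if w not in lexicon][:threshold]
        if not new:
            break
        lexicon.update(new)
    return lexicon
```

In practice the loop would stop once the lexicon reaches the desired size (e.g. the 1811 or 3794 terms reported in the abstract) rather than after a fixed number of iterations.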
reject
This paper addresses the problem of creating a sentiment lexicon for a resource-limited language (Amharic). This task is time consuming and requires skilled annotators. Hence the authors propose a method for constructing the lexicon automatically from news corpora. They start with a seed list of sentiment-bearing words and then add new words to this list based on their PPMI scores with existing words. While the reviewers agreed that this work is of practical importance, they had a few objections, which I have summarised below: 1) Lack of novelty: The work has very few new ideas. 2) Lack of comparison with existing work: Several missing citations have been pointed out by the reviewers. 3) Weak experiments: The experimental section needs to be strengthened with more comparisons to existing work as well as by providing results for at least one more language. 4) Organisation of the paper: The paper needs to be restructured for better presentation. In particular, the Results and Discussions section does not really contain any discussion. 5) Grammatical errors: Please proofread the paper thoroughly and fix all grammatical and typo errors. Based on the reviewer comments and the lack of any response from the authors, I recommend that the paper in its current form cannot be accepted.
train
[ "HJgnKrVaFH", "Hkec2hOaYB", "SklXBpOQcB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes a corpus based approach of mining lexicon for a low resource language focusing on Amharic sentiment. The proposed approach is a classic one already presented elsewhere, i.e., PMI to compute the similarity of terms using the co-occurrence statistics and thresholding to control the precision of le...
[ 1, 1, 1 ]
[ 4, 4, 3 ]
[ "iclr_2020_BJxVT3EKDH", "iclr_2020_BJxVT3EKDH", "iclr_2020_BJxVT3EKDH" ]
iclr_2020_HygN634KvH
Temporal Probabilistic Asymmetric Multi-task Learning
When performing multi-task predictions with time-series data, knowledge learned for one task at a specific time step may be useful in learning for another task at a later time step (e.g. prediction of sepsis may be useful for prediction of mortality for risk prediction at intensive care units). To capture such dynamically changing asymmetric relationships between tasks and long-range temporal dependencies in time-series data, we propose a novel temporal asymmetric multi-task learning model, which learns to combine features from other tasks at diverse timesteps for the prediction of each task. One crucial challenge here is deciding on the direction and the amount of knowledge transfer, since loss-based knowledge transfer (Lee et al., 2016; 2017) does not apply in our case, where we do not have a loss at each timestep. We tackle this challenge by proposing a novel uncertainty-based probabilistic knowledge transfer mechanism, such that we perform knowledge transfer from more certain tasks with lower variance to uncertain ones with higher variance. We validate our Temporal Probabilistic Asymmetric Multi-task Learning (TP-AMTL) model on two clinical risk prediction tasks against recent deep learning models for time-series analysis, which our model significantly outperforms by successfully preventing negative transfer. Further qualitative analysis of our model by clinicians suggests that the learned knowledge transfer graphs are helpful in analyzing the model's predictions.
reject
The authors propose a method for multi-task learning with time series data. The reviewers found the paper interesting, but the majority found the description of the method in the paper confusing and several technical details missing. Moreover, the reviewers were not convinced that the technique used for uncertainty quantification of the features at each stage of the time series is well founded.
train
[ "BkeglwevsH", "SyxKzq1viH", "BJlC7ygwiB", "rkxW8okvjB", "H1eb9eT2Yr", "Bkg9636aFB", "B1lz_JB7qB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "During the rebuttal period, we have made the following changes to the paper, based on the reviewers' comments. \n\n- We modified Figure 2 such that Figure 2(c) cannot be mistaken as missing. \n- We modified the texts in variational inference part to explicitly describe the posterior and the prior (section 3....
[ -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2020_HygN634KvH", "B1lz_JB7qB", "H1eb9eT2Yr", "Bkg9636aFB", "iclr_2020_HygN634KvH", "iclr_2020_HygN634KvH", "iclr_2020_HygN634KvH" ]
iclr_2020_B1x8anVFPr
On Layer Normalization in the Transformer Architecture
The Transformer architecture is popularly used in natural language processing tasks. To train a Transformer model, a carefully designed learning rate warm-up stage is usually needed: the learning rate has to be set to an extremely small value at the beginning of the optimization and then gradually increased over some given number of iterations. Such a stage is shown to be crucial to the final performance and brings additional hyper-parameter tuning. In this paper, we study why the learning rate warm-up stage is important in training the Transformer and theoretically show that the location of layer normalization matters. It can be proved that at the beginning of the optimization, for the original Transformer, which places the layer normalization between the residual blocks, the expected gradients of the parameters near the output layer are large. Using a large learning rate on those gradients then makes the training unstable. The warm-up stage is practically helpful for avoiding this problem. This analysis motivates us to investigate a slightly modified Transformer architecture which locates the layer normalization inside the residual blocks. We show that the gradients in this Transformer architecture are well-behaved at initialization. Given these findings, we are the first to show that this Transformer variant is easier and faster to train. The learning rate warm-up stage can be safely removed, and the training time can be largely reduced on a wide range of applications.
reject
This paper investigates layer normalization and learning rate warm-up in Transformers, demonstrating that placing layer norm inside the residual connection (pre-LN) leads to better-behaved gradients than post-LN placement. Doing so allows the learning rate warm-up stage to be removed, leading to faster training. Reviewers were mildly positive about the submission, commenting on the interesting insight provided about Transformers, as well as the clear, focused motivation and contribution. However, they also found the contribution rather incremental, as pre-LN placement has been introduced before, and found the paper confusingly written at times. R2 clearly read it very closely, and had many detailed comments and discussions with the authors and other reviewers. They had concerns about the relationship of this work with gradient clipping. The authors deserve credit for quickly investigating this in further experiments. Interestingly, they found that even with gradient clipping, post-LN models still needed the learning rate warm-up stage, although this issue went away with smaller clipping values or much lower learning rates. Overall, R2 appears to find the paper's motivation very compelling but the insights incomplete and not fully satisfactory, while all reviewers find the novelty rather limited. I think a future submission that forges closer connections between the empirical findings and the theoretical interpretations would be of great interest to the community, but in its current form the paper is probably unsuitable for publication at ICLR 2020.
test
[ "BJgoO9NCtr", "HkeXAYfnjS", "SJlN5Ax2jH", "r1ePuAensS", "S1lJhhl3iH", "SyxxU5rSir", "B1eVreT8jr", "H1xxWsSvjB", "rJe2FF8BsS", "rkxErYHSsB", "S1xR5KBSoS", "BJeSauSBoS", "SJxoWm9qdr", "BJlyv2xIqH", "HyxkcAdt_r", "rkgwwHZtuH" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "\n=========Update========\nI appreciate the authors' response and additional appendix sections connecting theory and practice, verifying the claims about the final layer gradients scaling with L. I have also read the other reviews. I'm still not completely convinced that the multi-layer analysis well explains the ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, -1, -1 ]
[ "iclr_2020_B1x8anVFPr", "r1ePuAensS", "S1xR5KBSoS", "BJeSauSBoS", "H1xxWsSvjB", "iclr_2020_B1x8anVFPr", "rJe2FF8BsS", "B1eVreT8jr", "rkxErYHSsB", "SJxoWm9qdr", "BJgoO9NCtr", "BJlyv2xIqH", "iclr_2020_B1x8anVFPr", "iclr_2020_B1x8anVFPr", "rkgwwHZtuH", "iclr_2020_B1x8anVFPr" ]
iclr_2020_Bkgwp3NtDH
Programmable Neural Network Trojan for Pre-trained Feature Extractor
Neural network (NN) trojaning is an emerging and important attack that can broadly damage systems deployed with NN models. Unlike adversarial attacks, it hides malicious functionality in the weight parameters of NN models. Existing studies have explored NN trojaning attacks on some small datasets for specific domains, with limited numbers of fixed target classes. In this paper, we propose a more powerful trojaning attack method for large models, which outperforms existing studies in capability, generality, and stealthiness. First, the attack is programmable in that the malicious misclassification target is not fixed and can be generated on demand even after the victim's deployment. Second, our trojaning attack is not limited to a small domain; one trojaned model on a large-scale dataset can affect applications in different domains that reuse its general features. Third, our trojan shows no biased behavior across different target classes, which makes it more difficult to defend against.
reject
This paper proposes a general framework for constructing Trojan/backdoor attacks on deep neural networks. The authors argue that the proposed method can support dynamic and out-of-scope target classes, which is particularly applicable to backdoor attacks in the transfer learning setting. This paper has been very carefully discussed. While the idea is interesting and could be of interest to the broader community, all reviewers agree that it lacks experimental comparison with existing methods for backdoor attacks on benchmark problems. The paper needs to be significantly revised before publication. I encourage the authors to improve this paper and resubmit to a future conference.
train
[ "ryxekA56KS", "HyxvH-iRFS", "ByeO3k7ejr", "ryg9V7Qesr", "H1xr7k7eoB", "SylgR6HCYr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes an adaption of existing backdoor attacks, with the main goal of enabling backdoor attacks in the transfer learning setting. Specifically, instead of pre-defining the trigger pattern and the target label, they train a neural network to generate different trigger patterns for different target ima...
[ 3, 1, -1, -1, -1, 6 ]
[ 5, 4, -1, -1, -1, 5 ]
[ "iclr_2020_Bkgwp3NtDH", "iclr_2020_Bkgwp3NtDH", "SylgR6HCYr", "ryxekA56KS", "HyxvH-iRFS", "iclr_2020_Bkgwp3NtDH" ]
iclr_2020_SkevphEYPB
POP-Norm: A Theoretically Justified and More Accelerated Normalization Approach
Batch Normalization (BatchNorm) has been a default module in modern deep networks due to its effectiveness in accelerating the training of deep neural networks. It is widely accepted that the great success of BatchNorm is owing to the reduction of internal covariate shift (ICS), but it has recently been demonstrated that the link between them is fairly weak. The intrinsic reason behind the effectiveness of BatchNorm remains unrevealed, which limits it from being put to better use. In light of this, we propose a new normalization approach, referred to as Pre-Operation Normalization (POP-Norm), which is theoretically guaranteed to speed up training convergence. Not surprisingly, POP-Norm and BatchNorm are largely the same, and these similarities can help us theoretically interpret the root of BatchNorm's effectiveness. There are still some significant distinctions between the two approaches, and it is precisely these distinctions that make POP-Norm achieve a faster convergence rate and better performance than BatchNorm, as validated in extensive experiments on benchmark datasets: CIFAR10, CIFAR100 and ILSVRC2012.
reject
The authors propose an alternative to batch norm, which they call POP-norm, and provide theoretical justification for POP-norm in nonconvex optimization on the basis of variance reduction. They then present empirical arguments. One of the most cogent reviewers believed the theoretical results were known and the empirical arguments unconvincing because the method is similar to batch norm up to a change in learning rate and some minor differences. Unfortunately, the reviewers did not engage with the author rebuttals at all. The authors seem to have addressed most points. However, if the reviewers are unwilling to engage, despite multiple emails, there's not much I can do, short of redoing the whole process from scratch. And I'll take the lack of engagement as lack of interest by the reviewers. Not being an expert in optimization myself, I'm not going to override the scores. I do know enough to know that there are standard bounds for both convex and nonconvex optimization that improve with decreased variance.
train
[ "S1x3wWGXtr", "Bke5lxKoYr", "rJl0UqjjYB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes Pre-OPeration Normalization (POP-norm) as a theoretically grounded modification of batch normalization (Batch-norm). The authors provide an analysis of stochastic gradient descent (SGD) showing that the convergence can be accelerated by lowering the gradient Lipschitz constant and gradient vari...
[ 3, 3, 1 ]
[ 1, 4, 4 ]
[ "iclr_2020_SkevphEYPB", "iclr_2020_SkevphEYPB", "iclr_2020_SkevphEYPB" ]
iclr_2020_B1lFa3EFwB
Stabilizing Adversarial Invariance Induction by Discriminator Matching
Incorporating a desired invariance into representation learning is a key challenge in many situations, e.g., for domain generalization and privacy/fairness constraints. Adversarial invariance induction (AII) shows its power for this purpose: it maximizes a proxy of the conditional entropy between representations and attributes via adversarial training between an attribute discriminator and a feature extractor. However, the practical behavior of AII is still unclear, as the previous analysis assumes the optimality of the attribute classifier, which rarely holds in practice. This paper first analyzes the practical behavior of AII both theoretically and empirically, indicating that AII has a theoretical difficulty, as it maximizes a variational {\em upper} bound of the actual conditional entropy, and that AII catastrophically fails to induce invariance even in simple cases, as suggested by the above theoretical findings. We then argue that a simple modification to AII can significantly stabilize the adversarial induction framework and achieve better invariant representations. Our modification is based on a property of conditional entropy: it is maximized if and only if the divergence between all pairs of marginal distributions over z between different attributes is minimized. The proposed method, {\em invariance induction by discriminator matching}, modifies the AII objective to explicitly account for the divergence minimization requirement by defining a proxy of the divergence using the attribute discriminator. Empirical validations on both a toy dataset and four real-world datasets (related to applications of user anonymization and domain generalization) reveal that the proposed method provides superior performance when inducing invariance with respect to nuisance factors.
reject
The paper proposes a modification to improve adversarial invariance induction for learning representations under invariance constraints. The authors provide both a formal analysis and an experimental evaluation of the method. The reviewers generally agree that the experimental evaluation is rigorous and above average, but the paper lacks clarity, making it difficult to judge its significance. Therefore, I recommend rejection, but encourage the authors to improve the presentation and resubmit.
train
[ "H1eWRiz_jr", "BJgDJ_GOjr", "r1lLPXGusr", "HkggTqZusH", "BJxoDJtnKS", "HJg0D4d0YH", "rkg8J4Vg9r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the detailed and encouraging comments. We have been updated the manuscript following the reviewers' comments. Please refer to the thread of \"Summary of general updates.\". \n\nBelow are several clarifications. \n\n--- Response ---\n> In eq. 1, it is not explained what the expectation is ...
[ -1, -1, -1, -1, 3, 1, 3 ]
[ -1, -1, -1, -1, 3, 4, 1 ]
[ "rkg8J4Vg9r", "HJg0D4d0YH", "BJxoDJtnKS", "iclr_2020_B1lFa3EFwB", "iclr_2020_B1lFa3EFwB", "iclr_2020_B1lFa3EFwB", "iclr_2020_B1lFa3EFwB" ]
iclr_2020_ryeK6nNFDr
Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients
Adversarial examples have been well known as a serious threat to deep neural networks (DNNs). To ensure successful and safe operation of DNNs on real-world tasks, it is urgent to equip DNNs with effective defense strategies. In this work, we study the detection of adversarial examples, based on the assumption that the output and internal responses of one DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family that covers many popular distributions (e.g., Laplacian, Gaussian, or uniform), and it is more likely to approximate the intrinsic distributions of internal responses than any specific distribution. Besides, since the shape factor is more robust across different databases than the other two parameters, we propose to construct discriminative features via the shape factor for adversarial detection, employing the magnitude of Benford-Fourier coefficients (MBF), which can be easily estimated from the responses. Finally, a support vector machine is trained as the adversarial detector by leveraging the MBF features. Through the Kolmogorov-Smirnov (KS) test, we empirically verify that: 1) the posterior vectors of both adversarial and benign examples follow GGD; 2) the extracted MBF features of adversarial and benign examples follow different distributions. Extensive experiments on image classification demonstrate that the proposed detector is much more effective and robust at detecting adversarial examples of different crafting methods and different sources, in contrast to state-of-the-art adversarial detection methods.
reject
This paper presents a new metric for adversarial attack detection. The reviewers find the idea interesting, but some parts have not been clearly explained, and there are questions about the reproducibility of the experiments.
train
[ "HJxDgg2qsr", "H1ghiCjqoB", "SklISRo9or", "SyeSm3HiYr", "SkeujiMl5H", "Skx3wFCmcr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the comments and appreciation, and would like to answer the reviewer’s questions as follows:\nMajor 1. \nYes, we indeed assume that “all the response entries of one layer as different samples on one certain GGD”. However, it should be emphasized that, as demonstrated above Eq. (5), “assum...
[ -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, 5, 3, 5 ]
[ "Skx3wFCmcr", "SyeSm3HiYr", "SkeujiMl5H", "iclr_2020_ryeK6nNFDr", "iclr_2020_ryeK6nNFDr", "iclr_2020_ryeK6nNFDr" ]
iclr_2020_Byx9p2EtDH
MULTIPOLAR: Multi-Source Policy Aggregation for Transfer Reinforcement Learning between Diverse Environmental Dynamics
Transfer reinforcement learning (RL) aims at improving learning efficiency of an agent by exploiting knowledge from other source agents trained on relevant tasks. However, it remains challenging to transfer knowledge between different environmental dynamics without having access to the source environments. In this work, we explore a new challenge in transfer RL, where only a set of source policies collected under unknown diverse dynamics is available for learning a target task efficiently. To address this problem, the proposed approach, MULTI-source POLicy AggRegation (MULTIPOLAR), comprises two key techniques. We learn to aggregate the actions provided by the source policies adaptively to maximize the target task performance. Meanwhile, we learn an auxiliary network that predicts residuals around the aggregated actions, which ensures the target policy's expressiveness even when some of the source policies perform poorly. We demonstrated the effectiveness of MULTIPOLAR through an extensive experimental evaluation across six simulated environments ranging from classic control problems to challenging robotics simulations, under both continuous and discrete action spaces.
reject
The paper considers the case where policies have been learned in several environments differing only in their transition functions. The goal is to learn a policy for another environment on top of the former policies. The approach is based on learning a state-dependent combination (aggregation) of the former policies, together with a "residual policy". On top of the aggregated + residual policies, a Gaussian distribution is defined. The approach is validated in six OpenAI Gym environments. Ablation studies show that both the aggregation of several policies (the more the better, except for the computational cost) and the residual policy are beneficial. Quite a few additional experiments were conducted during the rebuttal period in response to the reviewers' requests (impact of the quality of the initial policies; comparison to fine-tuning an existing source policy). A key issue raised in the discussion concerns the difference between the source and the target environments. It is understood that "even a small difference in the dynamics" can call for significantly different policies. Still, the goal of bridging the reality gap seems not as close as the authors think, for training the aggregation and residual modules requires hundreds of thousands of time steps, which is an issue in real-world robotics. I encourage the authors to pursue this promising line of research; the paper would be definitely very strong with a proof of concept on a sim-to-real transfer task.
train
[ "rJxxKMssdB", "rJxD1HJpYH", "rJeceSXosr", "ryeXhsEqjB", "rJlnJobwor", "H1xIucZDjB", "HyxHOSbvsr", "HJxAtmWPsr", "HkxsdugwiS", "HkeRltpV5B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper introduces a novel multi-source policy transfer problem, where we want to utilize policies from multiple source domains with different dynamics to improve the performance of the policy on our target dynamics. \n\nThe paper addresses the problem by adaptively aggregating the deterministic actions produced...
[ 1, 8, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_Byx9p2EtDH", "iclr_2020_Byx9p2EtDH", "HyxHOSbvsr", "iclr_2020_Byx9p2EtDH", "H1xIucZDjB", "rJxxKMssdB", "HJxAtmWPsr", "rJxD1HJpYH", "HkeRltpV5B", "iclr_2020_Byx9p2EtDH" ]
iclr_2020_Hyl9ahVFwH
Learning Similarity Metrics for Numerical Simulations
We propose a novel approach to compute a stable and generalizing metric (LNSM) with convolutional neural networks (CNNs) to compare field data from a variety of numerical simulation sources. Our method employs a Siamese network architecture that is motivated by the mathematical properties of a metric and is known to work well for finding similarities in other data modalities. We leverage a controllable data generation setup with partial differential equation (PDE) solvers to create increasingly different outputs from a reference simulation. In addition, the data generation allows for adjusting the difficulty of the resulting learning task. A central component of our learned metric is a specialized loss function that introduces knowledge about the correlation between single data samples into the training process. To demonstrate that the proposed approach outperforms existing simple metrics for vector spaces and other learned, image-based metrics, we evaluate the different methods on a large range of test data. Additionally, we analyze the generalization benefits of using the proposed correlation loss and the impact of an adjustable training data difficulty.
reject
The authors present a Siamese neural net architecture for learning similarities among field data generated by numerical simulations of partial differential equations. The goal would be to find which two field data are more similar to each other. One use case mentioned is the debugging of new numerical simulators, by comparing them with existing ones. The reviewers had mixed opinions on the paper. I agree with a negative comment shared by all three reviewers that the paper lacks somewhat in the originality of the technique and the justification of the newly proposed loss, as well as the fact that no strong explicit real-world use case was given. I find this problematic especially given that similarity of solutions to PDEs is not a mainstream topic of the conference. Hence a good real-world example use of the method would be more convincing.
train
[ "Syx5mi3lqS", "SklHyaNvoH", "BylHdsEDor", "rJe0P5EPjH", "ryge3KVwiS", "HJeG8N4atS", "rJeuRmuJ9H" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper is very well written and easy to understand. They focus no domains of data where there exists some controllable parameter(s) for data generation, using this parameter in a way that resembles self-supervised learning losses. The main contribution I think is in the use of correlations of changes in the sc...
[ 6, -1, -1, -1, -1, 3, 8 ]
[ 3, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_Hyl9ahVFwH", "rJeuRmuJ9H", "HJeG8N4atS", "Syx5mi3lqS", "iclr_2020_Hyl9ahVFwH", "iclr_2020_Hyl9ahVFwH", "iclr_2020_Hyl9ahVFwH" ]
iclr_2020_BJena3VtwS
The Visual Task Adaptation Benchmark
Representation learning promises to unlock deep learning for the long tail of vision tasks without expensive labelled datasets. Yet, the absence of a unified yardstick to evaluate general visual representations hinders progress. Many sub-fields promise representations, but each has different evaluation protocols that are either too constrained (linear classification), limited in scope (ImageNet, CIFAR, Pascal-VOC), or only loosely related to representation quality (generation). We present the Visual Task Adaptation Benchmark (VTAB): a diverse, realistic, and challenging benchmark to evaluate representations. VTAB embodies one principle: good representations adapt to unseen tasks with few examples. We run a large VTAB study of popular algorithms, answering questions like: How effective are ImageNet representations on non-standard datasets? Are generative models competitive? Is self-supervision useful if one already has labels?
reject
The authors present a new benchmark for evaluating a plethora of models on a variety of tasks. In terms of scores, the paper received a borderline rating, with two reviews unfortunately being rather superficial. The last reviewer was positive. The reviewers generally agreed that the benchmark is interesting and carries value, and the AC agrees. The authors certainly invested significant effort in designing the benchmark and performing a detailed analysis over several tasks and methods. However, the effort seems more engineering in nature, and insights are somewhat lacking. For an experimental paper, presenting the results is interesting yet not sufficient. A much more in-depth analysis and discussion would afford a deeper understanding of the results and open directions for future work. This part is currently underwhelming.
train
[ "ryghzSJ9jB", "SkeMSAznFS", "Hyg5kfjOsB", "H1gYWHAPsH", "B1gBVMA-iH", "rJxRLZ0bor", "HkguKg0boB", "rkxC3VqNqH", "H1lno2my5H", "rygGjjO2YS", "BJxjI0OUKH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "I've updated my original review reflecting the discussion (See \"### Post discussion update ###\" appended to the original review) and changed my score to an acceptance. \n\nI did not address \"### Linear Evaluation and Fine-tuning\" and \"### Additional Baselines\" in the update as I agree that incorporating ever...
[ -1, 8, -1, -1, -1, -1, -1, 3, 6, -1, -1 ]
[ -1, 4, -1, -1, -1, -1, -1, 5, 1, -1, -1 ]
[ "Hyg5kfjOsB", "iclr_2020_BJena3VtwS", "H1gYWHAPsH", "B1gBVMA-iH", "SkeMSAznFS", "H1lno2my5H", "rkxC3VqNqH", "iclr_2020_BJena3VtwS", "iclr_2020_BJena3VtwS", "BJxjI0OUKH", "iclr_2020_BJena3VtwS" ]
iclr_2020_S1l66nNFvB
Graph Warp Module: an Auxiliary Module for Boosting the Power of Graph Neural Networks in Molecular Graph Analysis
Graph Neural Network (GNN) is a popular architecture for the analysis of chemical molecules, and it has numerous applications in material and medicinal science. Current lines of GNNs developed for molecular analysis, however, do not fit well on the training set, and their performance does not scale well with the complexity of the network. In this paper, we propose an auxiliary module to be attached to a GNN that can boost the representation power of the model without hindering the original GNN architecture. Our auxiliary module can improve the representation power and the generalization ability of a wide variety of GNNs, including those that are used commonly in biochemical applications.
reject
This paper presents an auxiliary module to boost the representation power of GNNs. The new module consists of a virtual supernode, an attention unit, and a warp gate unit. The usefulness of each component is shown in well-organized experiments. This is a very borderline paper with split scores. While all reviewers basically agree that the empirical findings in the paper are interesting and could be valuable to the community, one reviewer raised a concern regarding the incremental novelty of the method, which is also acknowledged by other reviewers. The impression was not changed through the authors' response and reviewer discussion, and there is no strong opinion to champion the paper. Therefore, I'd like to recommend rejection this time.
train
[ "S1gLZR0Ljr", "rJe3Jqr8oH", "rygoiFrLoB", "BkesSFrUjB", "HJeaT6S-5B", "Hkg526KzqB", "rJlciCPDtS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the clarification and for updating the paper.", "Thank you very much for the comments! \nWhile we believe that supernode is an essential component of our invention, we also believe that supernode alone is not sufficient. \nFor this claim, please notice that our method performs much better than the...
[ -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, 4, 4, 5 ]
[ "BkesSFrUjB", "HJeaT6S-5B", "rJlciCPDtS", "Hkg526KzqB", "iclr_2020_S1l66nNFvB", "iclr_2020_S1l66nNFvB", "iclr_2020_S1l66nNFvB" ]
iclr_2020_Bylp62EKDH
Extreme Triplet Learning: Effectively Optimizing Easy Positives and Hard Negatives
The Triplet Loss approach to distance metric learning is defined by the strategy used to select triplets and the loss function through which those triplets are optimized. During optimization, two especially important cases are easy positive and hard negative mining, which consider the closest examples of the same and different classes, respectively. We characterize how triplets behave during optimization as a function of these similarities, and highlight that these important cases have technical problems where standard gradient descent behaves poorly, pulling the negative example closer and/or pushing the positive example farther away. We derive an updated loss function that fixes these problems and show improvements to the state of the art on the CUB, CAR, SOP, and In-Shop Clothes datasets.
reject
The authors propose a novel distance metric learning approach. Reviews were mixed, and while the discussion was interesting to follow, some issues, including novelty, comparison with existing approaches, and impact, remain unresolved, and overall, the paper does not seem quite ready for publication.
test
[ "BkxyNgtMsH", "HJlYx9uzoS", "BJxNpWDatS", "BylhzTwy5r", "rklqVPTmqB" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "-This trade-off between pushing away hard negatives and pulling in easy positives has actually previously been considered in the linear metric learning setting, though the terminology was different then. See the “Large Margin Nearest Neighbors” algorithm by Weinberger and Saul, for example which trains on “target ...
[ -1, -1, 3, 8, 3 ]
[ -1, -1, 4, 1, 4 ]
[ "BJxNpWDatS", "rklqVPTmqB", "iclr_2020_Bylp62EKDH", "iclr_2020_Bylp62EKDH", "iclr_2020_Bylp62EKDH" ]
iclr_2020_SylR6n4tPS
Learning to Generate Grounded Visual Captions without Localization Supervision
When automatically generating a sentence description for an image or video, it often remains unclear how well the generated caption is grounded, or if the model hallucinates based on priors in the dataset and/or the language model. The most common way of relating image regions with words in caption models is through an attention mechanism over the regions that are used as input to predict the next word. The model must therefore learn to predict the attentional weights without knowing the word it should localize. This is difficult to train without grounding supervision since recurrent models can propagate past information and there is no explicit signal to force the captioning model to properly ground the individual decoded words. In this work, we help the model to achieve this via a novel cyclical training regimen that forces the model to localize each word in the image after the sentence decoder generates it, and then reconstruct the sentence from the localized image region(s) to match the ground-truth. Our proposed framework only requires learning one extra fully-connected layer (the localizer), a layer that can be removed at test time. We show that our model significantly improves grounding accuracy without relying on grounding supervision or introducing extra computation during inference for both image and video captioning tasks.
reject
This paper proposes a cyclical training scheme for grounded visual captioning, where a localization model is trained to identify the regions in the image referred to by caption words, and a reconstruction step is added conditioned on this information. This extends prior work which required grounding supervision. While the proposed approach is sensible and grounding of generated captions is an important requirement, some reviewers (me included) pointed out concerns about the relevance of this paper's contributions. I found the authors’ explanation that the objective is not to improve the captioning accuracy but to refine its grounding performance without any localization supervision a bit unconvincing -- I would expect that better grounding would be reflected in overall better captioning performance, which seems to have happened with the supervised model of Zhou et al. (2019). In fact, even the localization gains seem rather small: “The attention accuracy for localizer is 20.4% and is higher than the 19.3% from the decoder at the end of training.” Overall, the proposed model is an incremental change on the training of an image captioning system, by adding a localizer component, which is not used at test time. The authors' claim that “The network is implicitly regularized to update its attention mechanism to match with the localized image regions” is also unclear to me -- there is nothing in the loss function that penalizes the difference between these two attentions, as the gradient doesn’t backprop from one component to another. Sharing the LSTM and Language LSTM doesn’t imply this, as the localizer is just providing guidance to the decoder, but there is no reason this will help the attention of the original model. Other natural questions left unanswered by this paper are: - What happens if we use the localizer also at test time (calling the decoder twice)? Will the captions improve? This experiment would be needed to assess the potential of this method to help image captioning. - Can we keep refining this iteratively? - Can we add a loss term on the disagreement of the two attentions to actually achieve the said regularisation effect? Finally, the paper [1] (cited by the authors) seems to employ a similar strategy (encoder-decoder with reconstructor) with shown benefits in video captioning. [1] Bairui Wang, Lin Ma, Wei Zhang, and Wei Liu. Reconstruction network for video captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7622–7631, 2018. I suggest addressing some of these concerns in a revised version of the paper.
train
[ "H1eaHSKsir", "rJgm-zqojH", "HJgzzoYjiS", "rklXW5YsoS", "HkgFL1cosS", "HJxCP8Fisr", "HJlUh-qsjS", "HJedg7qjsB", "BygE12YaKB", "Hkg1_5l1cB", "SJeXvXSl9S" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nWe would like to thank the reviewer for the thoughtful, constructive feedback, and especially paying extra attention to the details. With the reviewer’s suggestions, we have clarified the caption of Figure 1, corrected the equations, and made the format of the references consistent. Please see the revised pdf ve...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "Hkg1_5l1cB", "HkgFL1cosS", "rklXW5YsoS", "BygE12YaKB", "SJeXvXSl9S", "H1eaHSKsir", "HkgFL1cosS", "HkgFL1cosS", "iclr_2020_SylR6n4tPS", "iclr_2020_SylR6n4tPS", "iclr_2020_SylR6n4tPS" ]
iclr_2020_B1lC62EKwr
Evidence-Aware Entropy Decomposition For Active Deep Learning
We present a novel multi-source uncertainty prediction approach that enables deep learning (DL) models to be actively trained with much less labeled data. By leveraging the second-order uncertainty representation provided by subjective logic (SL), we conduct evidence-based theoretical analysis and formally decompose the predicted entropy over multiple classes into two distinct sources of uncertainty: vacuity and dissonance, caused by lack of evidence and conflict of strong evidence, respectively. The evidence based entropy decomposition provides deeper insights on the nature of uncertainty, which can help effectively explore a large and high-dimensional unlabeled data space. We develop a novel loss function that augments DL based evidence prediction with uncertainty anchor sample identification through kernel density estimation (KDE). The accurately estimated multiple sources of uncertainty are systematically integrated and dynamically balanced using a data sampling function for label-efficient active deep learning (ADL). Experiments conducted over both synthetic and real data and comparison with competitive AL methods demonstrate the effectiveness of the proposed ADL model.
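The vacuity/dissonance decomposition the abstract describes comes from subjective logic. A sketch of the two quantities for a Dirichlet opinion follows; the formulas are the standard subjective-logic ones, but the function name and exact implementation are illustrative, not the paper's code:

```python
import numpy as np

def vacuity_dissonance(evidence):
    """Decompose second-order uncertainty of a Dirichlet opinion
    (subjective-logic style) into vacuity (lack of evidence) and
    dissonance (conflict of strong evidence)."""
    e = np.asarray(evidence, dtype=float)
    K = e.size
    S = e.sum() + K          # Dirichlet strength, alpha_k = e_k + 1
    b = e / S                # belief masses
    vacuity = K / S          # high when total evidence is low
    diss = 0.0
    for k in range(K):
        others = np.delete(b, k)
        denom = others.sum()
        if denom == 0 or b[k] == 0:
            continue
        # relative mass balance: 1 when beliefs agree, 0 when one dominates
        bal = 1.0 - np.abs(others - b[k]) / (others + b[k])
        diss += b[k] * (others * bal).sum() / denom
    return vacuity, diss
```

For example, zero evidence gives vacuity 1 and dissonance 0, while strong but evenly conflicting evidence gives low vacuity and high dissonance — the two sampling signals the abstract balances.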
reject
The authors propose a new perspective on active learning by borrowing concepts from subjective logic. In particular, they model uncertainty as a combination of dissonance and vacuity; two orthogonal forms of uncertainty that may invite additional labels for different reasons. The concepts introduced are not specific to deep learning but are generally applicable. Experiments on 2D data and a couple of standard datasets are provided. The derivation of the model is intuitive, but it's not clear that it is "better" than any other intuitively derived model for active learning. Active learning has such a long history that the field has moved towards a standard of expecting theoretical guarantees to distinguish a new method from the rest; this paper provides none. Instead, anecdotal examples and small experiments are performed. Like other reviews, I am extremely skeptical about the use of KDE, which is known to have essentially no inferential ability in high dimensions (such as in deep learning situations where presumably images are involved). It is hard not to feel as though deep learning is somewhat of a red herring in this paper. I recommend the authors lean into understanding the method from a perspective beyond anecdotes and experiments if they wish for this method to gain traction.
train
[ "S1lzMRn8sr", "Hklpvn28sS", "H1e5on2LjS", "BJgGN63UsS", "Ske1Gh2IoS", "SylVvHLkcH", "r1ldvyygcB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the thoughtful feedback and recommendation for acceptance.\n\nQ1. The intuition of dissonance and connection to SL. \n\nThe definitions of vacuity and dissonance (equation 6) are given by subjective logic, which has been made clear in the revised paper. Intuitively, the dissonance is comp...
[ -1, -1, -1, -1, -1, 6, 3 ]
[ -1, -1, -1, -1, -1, 1, 3 ]
[ "SylVvHLkcH", "r1ldvyygcB", "r1ldvyygcB", "r1ldvyygcB", "iclr_2020_B1lC62EKwr", "iclr_2020_B1lC62EKwr", "iclr_2020_B1lC62EKwr" ]
iclr_2020_B1eyA3VFwS
Enforcing Physical Constraints in Neural Networks through Differentiable PDE Layer
Recent studies at the intersection of physics and deep learning have illustrated successes in the application of deep neural networks to partially or fully replace costly physics simulations. Enforcing physical constraints on solutions generated by neural networks remains a challenge, yet it is essential to the accuracy and trustworthiness of such model predictions. Many systems in the physical sciences are governed by Partial Differential Equations (PDEs). Enforcing these as hard constraints is, we show, inefficient in conventional frameworks due to the high dimensionality of the generated fields. To this end, we propose the use of a novel differentiable spectral projection layer for neural networks that efficiently enforces spatial PDE constraints using spectral methods, yet is fully differentiable, allowing for its use as a layer in neural networks that supports end-to-end training. We show that its computational cost is cheaper than that of a regular convolution layer. We apply it to an important class of physical systems – incompressible turbulent flows, where the divergence-free PDE constraint is required. We train a 3D Conditional Generative Adversarial Network (CGAN) for turbulent flow super-resolution efficiently, whilst guaranteeing the spatial PDE constraint of zero divergence. Furthermore, our empirical results show that the model produces realistic flow fields with more accurate flow statistics when trained with hard constraints imposed via the proposed novel differentiable spectral projection layer, as compared to soft-constrained and unconstrained counterparts.
reject
This paper introduces an FFT-based projection layer to enforce physical constraints in a CNN-based PDE solver. The proposed idea seems sensible, but the reviewers agreed that not enough attention was paid to baseline alternatives, and that a single example problem was not enough to understand the pros and cons of this method.
train
[ "H1xdjdvhtH", "S1esiqF4FS", "H1ltqZlCYH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to use a differentiable FFT layer to enforce hard constraints for results generated by a CNN. This is demonstrated and evaluated for a 3D turbulence data set (an interesting and challenging problem), and evaluated for a single case.\n\nWhile this goal is good by itself, and the domain of applic...
[ 3, 6, 3 ]
[ 5, 3, 5 ]
[ "iclr_2020_B1eyA3VFwS", "iclr_2020_B1eyA3VFwS", "iclr_2020_B1eyA3VFwS" ]
iclr_2020_BJglA3NKwS
Siamese Attention Networks
Attention operators have been widely applied to data of various orders and dimensions such as texts, images, and videos. One challenge of applying attention operators is the excessive usage of computational resources, due to the usage of the dot product and softmax operator when computing similarity scores. In this work, we propose the Siamese similarity function that uses a feed-forward network to compute similarity scores. This results in the Siamese attention operator (SAO). In particular, SAO leads to a dramatic reduction in the requirement of computational resources. Experimental results show that our SAO can save 94% of memory usage and speed up the computation by a factor of 58 compared to the regular attention operator. The computational advantage of SAO is even larger on higher-order and higher-dimensional data. Results on image classification and restoration tasks demonstrate that networks with SAOs are as effective as models with the regular attention operator, while significantly outperforming those without attention operators.
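One of the reviews for this paper describes the similarity as a dot product between a trainable weight and the sum of key and query. Under that reading, the score decomposes into a per-query and a per-key term, which is where the memory savings come from. A hypothetical sketch (names and details are ours, not the paper's):

```python
import numpy as np

def siamese_attention(Q, K, V, w):
    """Score query q and key k as w.(q + k) instead of q.k.
    The terms w.q and w.k are computed once per row, so forming all
    n*m scores needs O((n+m)d) multiplications, not O(n*m*d)."""
    qs = Q @ w                                   # (n,) per-query part
    ks = K @ w                                   # (m,) per-key part
    scores = qs[:, None] + ks[None, :]           # w.(q_i + k_j)
    scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)
    return A @ V
```

With w = 0 every score ties and each output row is simply the mean of V, which is a convenient sanity check.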
reject
The submission presents a Siamese attention operator that lowers the computational costs of attention operators for applications such as image recognition. The reviews are split. R1 posted significant concerns with the content of the submission. The concerns remain after the authors' responses and revision. One of the concerns is the apparent dual submission with "Kronecker Attention Networks". The AC agrees with these concerns and recommends rejecting the submission.
train
[ "B1lCtyTlqB", "HylsrDYTtS", "SyeIqjBOir", "BJlnrz01jB", "SkxmBKIRtS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "In this paper, the authors propose a new mechanism to perform the attention operators. The similarity between a key and a query is performed as the dot product between a trainable weight and the addition of the key and query. The proposed Siamese attention operator is much more efficient than prior attention meth...
[ 6, 6, -1, -1, 3 ]
[ 3, 3, -1, -1, 4 ]
[ "iclr_2020_BJglA3NKwS", "iclr_2020_BJglA3NKwS", "iclr_2020_BJglA3NKwS", "HylsrDYTtS", "iclr_2020_BJglA3NKwS" ]
iclr_2020_H1l-02VKPB
Topology-Aware Pooling via Graph Attention
Pooling operations have been shown to be effective on various tasks in computer vision and natural language processing. One challenge of performing pooling operations on graph data is that locality is not well-defined on graphs. Previous studies used global ranking methods to sample some of the important nodes, but most of them are not able to incorporate graph topology information in computing ranking scores. In this work, we propose the topology-aware pooling (TAP) layer that uses attention operators to generate ranking scores for each node by attending each node to its neighboring nodes. The ranking scores are generated locally while the selection is performed globally, which enables the pooling operation to consider topology information. To encourage better graph connectivity in the sampled graph, we propose to add a graph connectivity term to the computation of ranking scores in the TAP layer. Based on our TAP layer, we develop a network on graph data, known as the topology-aware pooling network. Experimental results on graph classification tasks demonstrate that our methods achieve consistently better performance than previous models.
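The local-scoring / global-selection split the abstract describes can be illustrated with a simplified sketch. Here each node's score is the average of a learned projection over its neighbours (the actual TAP layer uses pairwise attention plus a connectivity term); the function name and gating are illustrative:

```python
import numpy as np

def tap_pool(A, X, p, k):
    """Rank nodes by a locally computed score (average of a learned
    projection over each node's neighbours), then select the top-k
    globally and keep the induced subgraph."""
    e = X @ p                                  # per-node projection score
    deg = np.maximum(A.sum(axis=1), 1.0)       # guard isolated nodes
    scores = (A @ e) / deg                     # local: averaged over neighbours
    keep = np.sort(np.argsort(scores)[::-1][:k])   # global top-k selection
    X_new = X[keep] * np.tanh(scores[keep])[:, None]  # gate kept features
    return A[np.ix_(keep, keep)], X_new, keep
```

Note the sketch can keep nodes that are disconnected in the induced subgraph, which is exactly the problem the abstract's graph connectivity term is meant to mitigate.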
reject
This paper proposes to incorporate graph topology into pooling operations on graphs, to better define the notion of locality necessary for pooling. While the paper tackles an important problem and also seems to be well-written, the reviewers agree that there are several issues regarding the contribution and empirical results that need to be addressed before this paper is ready for publication.
train
[ "r1gXgfF6uB", "H1ghLsy4cB", "HkeXw-909H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a topology-aware-pooling method to generate ranking scores for each node and so the pooling (or coarsening) of the graph can be achieved by picking those nodes with higher aggregated attention scores. The ranking scores for each node are computed by taking the average of the scores of its neigh...
[ 3, 6, 1 ]
[ 4, 4, 5 ]
[ "iclr_2020_H1l-02VKPB", "iclr_2020_H1l-02VKPB", "iclr_2020_H1l-02VKPB" ]
iclr_2020_Byg-An4tPr
Differential Privacy in Adversarial Learning with Provable Robustness
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples. We leverage the sequential composition theory in DP, to establish a new connection between DP preservation and provable robustness. To address the trade-off among model utility, privacy loss, and robustness, we design an original, differentially private, adversarial objective function, based on the post-processing property in DP, to tighten the sensitivity of our model. An end-to-end theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of DP deep neural networks.
reject
The authors propose a framework for relating adversarial robustness, privacy and utility and show how one can train models to simultaneously attain these properties. The paper also makes interesting connections between the DP literature and the robustness literature, thereby porting over composition theorems to this new setting. The paper makes very interesting contributions, but a few key points require some improvement: 1) The initial version of the paper relied on an approximation of the objective function in order to obtain DP guarantees. While the authors clarified how the approximation impacts model performance in the rebuttal and revision, the reviewers still had concerns about the utility-privacy-robustness tradeoff achieved by the algorithm. 2) The presentation of the paper seems tailored to audiences familiar with DP and is not easy for a broader audience to follow. Despite these limitations, the paper does make significant novel contributions on an important problem (simultaneously achieving privacy, robustness and utility) and could be of interest. Overall, I consider this paper borderline and vote for rejection, but strongly encourage the authors to improve the paper with respect to the above concerns and resubmit to a future venue.
val
[ "H1l6f1c4sH", "H1g9xr5NsH", "SylVkiF4iH", "SkeIt0YNoS", "SJxwaM54sH", "HylQV3tNsH", "B1geoEjdYB", "HklhXFj5FS", "BJgntIT-qH" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Q4: The results of the proposed algorithm in Figure 7 is much better than the state-of-the-art, could the authors explain clearly where such improvements come from?\n\nA: There is a misunderstanding here.\nFirst, Note that the experiments on clean models in [1, 2] and our experiment in Fig. 7 are completely differ...
[ -1, -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "HklhXFj5FS", "iclr_2020_Byg-An4tPr", "B1geoEjdYB", "HklhXFj5FS", "BJgntIT-qH", "B1geoEjdYB", "iclr_2020_Byg-An4tPr", "iclr_2020_Byg-An4tPr", "iclr_2020_Byg-An4tPr" ]
iclr_2020_BklfR3EYDH
Keyframing the Future: Discovering Temporal Hierarchy with Keyframe-Inpainter Prediction
To flexibly and efficiently reason about temporal sequences, abstract representations that compactly represent the important information in the sequence are needed. One way of constructing such representations is by focusing on the important events in a sequence. In this paper, we propose a model that learns both to discover such key events (or keyframes) as well as to represent the sequence in terms of them. We do so using a hierarchical Keyframe-Inpainter (KeyIn) model that first generates keyframes and their temporal placement and then inpaints the sequences between keyframes. We propose a fully differentiable formulation for efficiently learning the keyframe placement. We show that KeyIn finds informative keyframes in several datasets with diverse dynamics. When evaluated on a planning task, KeyIn outperforms other recent proposals for learning hierarchical representations.
reject
The paper addresses an interesting problem in video prediction, introducing a hierarchical approach: keyframes are first predicted, then intermediate frames are generated. While it is acknowledged that the authors take a step in the right direction, several issues remain: (i) the presentation of the paper could be improved; (ii) the experiments are not convincing enough (baselines, images not realistic enough, marginal improvements) to validate the viability of the proposed approach over existing ones.
train
[ "rklBAZ8AYH", "Hygl1RwhjB", "BkxTgd-YsS", "r1g3Cw-FiH", "B1xEGS-Yir", "S1lMWv-Yjr", "HJg39U-tor", "ryldwNWYjB", "SklIUCrW5B", "SJl6wpNVcS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper introduces a model trained for video prediction hierarchically: a series of significant frames called “keyframes” in the paper are first predicted and then intermediate frames between keyframes couples are generated. The training criterion is maximum likelihood with a variational approximation. Experimen...
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_BklfR3EYDH", "iclr_2020_BklfR3EYDH", "rklBAZ8AYH", "rklBAZ8AYH", "SJl6wpNVcS", "SklIUCrW5B", "SklIUCrW5B", "iclr_2020_BklfR3EYDH", "iclr_2020_BklfR3EYDH", "iclr_2020_BklfR3EYDH" ]
iclr_2020_B1gXR3NtwS
Deep Bayesian Structure Networks
Bayesian neural networks (BNNs) introduce uncertainty estimation to deep networks by performing Bayesian inference on network weights. However, such models bring the challenges of inference, and furthermore, BNNs with weight uncertainty rarely achieve performance superior to standard models. In this paper, we investigate a new line of Bayesian deep learning by performing Bayesian reasoning on the structure of deep neural networks. Drawing inspiration from neural architecture search, we define the network structure as random weights on the redundant operations between computational nodes, and apply stochastic variational inference techniques to learn the structure distributions of networks. Empirically, the proposed method substantially surpasses advanced deep neural networks across a range of classification and segmentation tasks. More importantly, our approach also preserves the benefits of Bayesian principles, producing better uncertainty estimation than strong baselines including MC dropout and variational BNN algorithms (e.g., noisy EK-FAC).
reject
The authors develop stochastic variational approaches to learn Bayesian "structure distributions" for neural networks. While the reviewers appreciated the updates to the paper made by the authors, there were still a number of remaining concerns. There were particular concerns about the clarity of the paper (remarking on informality of language and lack of changes in the revision with respect to comments in the original review), and the fairness of comparisons. Regarding comparisons, one reviewer comments: "I do not agree that the comparison with DARTS is fair because the authors remove the options for retraining in both DARTS and DBSN. The reason DARTS trains using one half of the data and validate on the other is that it includes a retraining phase where all data is used. Therefore fair comparison should use the same procedure as DARTS (including a retraining phrase). At the very least, to compare methods without retraining, results of DARTS with more data (e.g., 80%) for training should be reported." The authors are encouraged to continue with this work, carefully accounting for reviewer comments in future revisions.
train
[ "HJg9OkhKiB", "SyeUI0oYiB", "S1ltTdjKjB", "HklxltstsH", "BklLPknYor", "Hyli5RjYir", "r1xIaRsFjH", "SJl2wvitsr", "rygDVQg2FS", "rygXoHanYr", "H1ebOaz0FB" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the comments, though there may be quite a few misunderstandings. We clarify the potential misunderstandings and address the detailed concerns below.\n\n\nQ1: About the prior: “the priors of $\\alpha$ are parameterized the same as the variational distributions thus the KL term is effective...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "rygDVQg2FS", "rygXoHanYr", "H1ebOaz0FB", "H1ebOaz0FB", "rygDVQg2FS", "rygXoHanYr", "rygXoHanYr", "iclr_2020_B1gXR3NtwS", "iclr_2020_B1gXR3NtwS", "iclr_2020_B1gXR3NtwS", "iclr_2020_B1gXR3NtwS" ]
iclr_2020_HJlXC3EtwB
Learning to Anneal and Prune Proximity Graphs for Similarity Search
This paper studies similarity search, which is a crucial enabler of many feature-vector-based applications. The problem of similarity search has been extensively studied in the machine learning community. Recent advances in proximity graphs have achieved outstanding performance by exploiting the navigability of the underlying graph structure. In this work, we introduce the annealable proximity graph (APG) method to learn and reshape proximity graphs for efficient and effective similarity search. APG makes proximity graph edges annealable, so that they can be effectively trained with a stochastic optimization algorithm. APG identifies important edges that best preserve graph navigability and prunes inferior edges without drastically changing graph properties. Experimental results show that APG achieves state-of-the-art results not only by producing proximity graphs with fewer edges but also by speeding up the search time by 20–40% across different datasets with almost no loss of accuracy.
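The navigability that pruning must preserve refers to greedy routing on the graph: starting from an entry node, repeatedly move to the neighbour closest to the query until no neighbour improves. A minimal sketch of that search (illustrative names, not the paper's code; real systems such as HNSW use beam search rather than pure greedy descent):

```python
import numpy as np

def greedy_search(graph, data, query, entry=0):
    """Walk the proximity graph: from the current node, move to the
    neighbour closest to the query; stop at a local minimum."""
    cur = entry
    d_cur = np.linalg.norm(data[cur] - query)
    while True:
        nbrs = graph[cur]                                  # adjacency list
        d = [np.linalg.norm(data[j] - query) for j in nbrs]
        d_best = min(d)
        if d_best >= d_cur:                                # no improvement
            return cur
        cur, d_cur = nbrs[int(np.argmin(d))], d_best
```

An edge-pruning method like APG must keep enough edges for such walks to still reach the true nearest neighbour while visiting fewer candidates per hop.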
reject
The paper proposes a method to prune edges in proximity graphs for faster similarity search. The method works by making the graph edges annealable and optimizing over the weights. The paper tackles an important and practically relevant problem as also acknowledged by the reviewers. However there are some concerns about empirical results, in particular about missing comparisons with tree-structure based algorithms (perhaps with product quantization for high dimensional data), and about modest empirical improvement on two of the three datasets used in the paper, which leaves room for convincing empirical justification of the method. Authors are encouraged to take the reviewers' comments into account and resubmit to a future venue.
train
[ "S1gwLVxpYB", "rJe2gvhV9S", "S1e4bDH3iB", "SJetF6EKjS", "B1eb4mHKsB", "ryeMrGSKor", "SJlUc3mVtH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper suggests an approach for learning how to sparsify similarity search graphs. Graph-based methods currently attain state of the art performance for similarity search, and reducing their number of edges may speed them up even further. The paper suggests a learning framework that uses sample queries in orde...
[ 6, 6, -1, -1, -1, -1, 8 ]
[ 3, 3, -1, -1, -1, -1, 1 ]
[ "iclr_2020_HJlXC3EtwB", "iclr_2020_HJlXC3EtwB", "ryeMrGSKor", "rJe2gvhV9S", "SJlUc3mVtH", "S1gwLVxpYB", "iclr_2020_HJlXC3EtwB" ]
iclr_2020_BklVA2NYvH
Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability
Deep neural networks are known to be vulnerable to adversarial perturbations. In this paper, we bridge the adversarial robustness of neural nets with the Lyapunov stability of dynamical systems. From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize the method of successive approximations, an optimal control algorithm based on Pontryagin's maximum principle, to train neural nets. This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust. The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently. Experiments show that our method effectively improves the deep model's adversarial robustness.
reject
The authors propose a framework for improving the robustness of neural networks to adversarial perturbations via optimal control techniques (Lyapunov Stability and the Pontryagin Maximum Principle, in particular). By considering a continuous-time limit of the training process, the authors use the PMP to derive update rules for the neural network weights that would result in a robust network. While the approach is interesting, the paper has some serious deficiencies that make it unacceptable for publication in its current form: 1. Quality of empirical evaluation: The authors only report final numbers on CIFAR-10 for a fixed set of adversarial attacks. It has been observed repeatedly in the adversarial robustness literature that adversarial evaluation of neural networks has to be done carefully to guard against possible underestimation of the worst-case attack. In particular, the specific details of the adversarial attacks used (number of steps, number of initializations, performance under larger perturbation radii) that are necessary to trust the results are not given (see https://arxiv.org/pdf/1902.06705.pdf for example). 2. Unclear novelty: The authors do not sufficiently explain the novelty in their approach relative to prior work (particular prior work that has used optimal control ideas in this context). 3. Computational cost: The authors do not give sufficient details to judge the computational overhead of their method to judge how much more expensive it is to train with their approach relative to standard or adversarial training. While one reviewer voted for a weak accept, the other reviewers were in consensus on rejection. The authors did not respond during the rebuttal phase and hence the reviews were unchanged. In summary, I vote for rejection. However, I think this paper has potentially interesting ideas that should be carefully developed and evaluated in a future revision.
train
[ "S1luKEChtB", "rkg4GwQTYH", "HygB_6lkqB", "HkeDzuMLdr", "H1ldrNsrOr", "Skeh1cXV_B", "BJeq6phz_r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "Summary:\nThe goal of this paper is to train neural networks (NNs) in a way to be robust to adversarial attacks. The authors formulate training a NN as finding an optimal controller for a discrete dynamical system. This formulation allows them to use an optimal control algorithm, called method of successive approx...
[ 1, 6, 1, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2020_BklVA2NYvH", "iclr_2020_BklVA2NYvH", "iclr_2020_BklVA2NYvH", "H1ldrNsrOr", "Skeh1cXV_B", "BJeq6phz_r", "iclr_2020_BklVA2NYvH" ]
iclr_2020_rJgHC2VKvB
Recurrent Neural Networks are Universal Filters
Recurrent neural networks (RNN) are powerful time series modeling tools in machine learning. They have been successfully applied in a variety of fields such as natural language processing (Mikolov et al. (2010), Graves et al. (2013), Du et al. (2015)), control (Fei & Lu (2017)) and traffic forecasting (Ma et al. (2015)), etc. In those application scenarios, RNN can be viewed as implicitly modelling a stochastic dynamic system. Another type of popular neural network, the deep (feed-forward) neural network, has also been successfully applied in different engineering disciplines, and its approximation capability has been well characterized by the universal approximation theorem (Hornik et al. (1989), Park & Sandberg (1991), Lu et al. (2017)). However, the underlying approximation capability of RNN has not been fully understood in a quantitative way. In our paper, we consider a stochastic dynamic system with noisy observations and analyze the approximation capability of RNN in synthesizing the optimal state estimator, namely the optimal filter. We unify the recurrent neural network into the Bayesian filtering framework and show that the recurrent neural network is a universal approximator of optimal finite dimensional filters under some mild conditions. That is to say, for any stochastic dynamic system with noisy sequential observations that satisfies some mild conditions, we show that (informal) ∀ε > 0, ∃ an RNN-based filter, s.t. lim sup_{k→∞} ‖x̂_{k|k} − E[x_k|Y_k]‖ < ε, where x̂_{k|k} is the RNN-based filter's estimate of the state x_k at step k conditioned on the observation history, and E[x_k|Y_k] is the conditional mean of x_k, known as the optimal estimate of the state in the minimum mean square error sense. As an interesting special case, the widely used Kalman filter (KF) can be synthesized by RNN.
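The special case mentioned at the end of the abstract is worth making concrete: for a linear-Gaussian system the optimal estimate E[x_k|Y_k] is computed recursively by the Kalman filter, itself a recurrent computation. A scalar sketch (illustrative, not the paper's construction):

```python
import numpy as np

def kalman_filter(ys, a, c, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_k = a x_{k-1} + w_k (Var q),
    y_k = c x_k + v_k (Var r). Returns the MMSE estimates
    E[x_k | y_1..y_k] for each step."""
    x, p, out = x0, p0, []
    for y in ys:
        x, p = a * x, a * a * p + q                    # predict
        g = p * c / (c * c * p + r)                    # Kalman gain
        x, p = x + g * (y - c * x), (1.0 - g * c) * p  # update
        out.append(x)
    return np.array(out)
```

The paper's claim is that an RNN can approximate this recursion (and more general finite-dimensional optimal filters) to arbitrary accuracy.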
reject
Based on the Bayesian approach to the filtering problem, the paper proves that RNNs are universal approximators for the filtering problem. Two reviewers, however, have doubts about the novelty and the difficulty of obtaining the result. Although I do not fully agree with Reviewer3 that the proof is just "DNN can fit anything" - it is not the case - the concerns of Reviewer2 are stronger, especially about the usage of the term "recurrent neural network". The paper is purely theoretical and does not have any numerical experiments, which probably makes it too weak for ICLR in this form. However, I encourage the authors to continue working on the subject, since the approach looks very interesting but is still very far from practice.
train
[ "B1eDOwm5sH", "BJgsLnX5iS", "r1llz97coH", "H1xBXV7qor", "SyghxB-0YS", "HJlBaEZg5B", "HylewpCeqB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your detailed response.\n\nCOMMENT: However, their so-called RNN-based filter formulation is not anywhere close to the usual RNN models, such as Elman's basic RNN model or LSTM, that are currently known in the literature. Hence the paper's title \"RNNs are universal filters\" and the way it is presen...
[ -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "HJlBaEZg5B", "SyghxB-0YS", "HylewpCeqB", "iclr_2020_rJgHC2VKvB", "iclr_2020_rJgHC2VKvB", "iclr_2020_rJgHC2VKvB", "iclr_2020_rJgHC2VKvB" ]
iclr_2020_r1xwA34KDB
Learning Invariants through Soft Unification
Human reasoning involves recognising common underlying principles across many examples by utilising variables. The by-products of such reasoning are invariants that capture patterns across examples such as "if someone went somewhere then they are there" without mentioning specific people or places. Humans learn what variables are and how to use them at a young age, and the question this paper addresses is whether machines can also learn and use variables solely from examples without requiring human pre-engineering. We propose Unification Networks that incorporate soft unification into neural networks to learn variables and by doing so lift examples into invariants that can then be used to solve a given task. We evaluate our approach on four datasets to demonstrate that learning invariants captures patterns in the data and can improve performance over baselines.
reject
The main concern raised by reviewers is the limited experiments, which are on simple tasks and missing some baselines to state-of-the-art methods. While the overall approach is interesting, the reviewers found the empirical evidence to be fairly unconvincing.
train
[ "SJx70JRSiB", "SJx0jk0HiS", "HylCdJCBor", "BJxGVXNpKB", "HygP8HETtH", "rkgXIqLJqH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewer, thank you for your feedback. We are excited that you find the idea interesting and important. To answer your questions and comments:\n\n“But the relationship between soft unification and attention isn't really spelled out -- what's the same, what's different?” - The difference / similarity is mentio...
[ -1, -1, -1, 1, 3, 3 ]
[ -1, -1, -1, 4, 1, 3 ]
[ "BJxGVXNpKB", "HygP8HETtH", "rkgXIqLJqH", "iclr_2020_r1xwA34KDB", "iclr_2020_r1xwA34KDB", "iclr_2020_r1xwA34KDB" ]
iclr_2020_rkl_Ch4YwS
A TWO-STAGE FRAMEWORK FOR MATHEMATICAL EXPRESSION RECOGNITION
Although mathematical expression (ME) recognition has achieved great progress, the development of ME recognition in real scenes is still unsatisfactory. Inspired by recent work on neural networks, this paper proposes a novel two-stage approach which takes a printed mathematical expression image as input and generates a LaTeX sequence as output. In the first stage, the method locates and recognizes the math symbols of the input image with an object detection algorithm. In the second stage, it translates math symbols with position information into LaTeX sequences by a seq2seq model equipped with an attention mechanism. In particular, the detection of mathematical symbols and the structural analysis of mathematical formulas are carried out separately in two steps, which effectively improves the recognition accuracy and enhances the generalization ability. The experiments demonstrate that the two-stage method significantly outperforms the end-to-end method. In particular, the ExpRate (expression recognition rate) of our model is 74.1%, 20.3 percentage points higher than that of the end-to-end model on test data that does not come from the same source as the training data.
reject
One reviewer is positive, while the others recommend rejection. The authors did not submit a rebuttal, thus the reviewers kept their original assessment.
test
[ "Syle2m0aKH", "BJlB2ry9cB", "r1lT36Q5cS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper considers the problem of converting an image of a math expression into LaTeX. They note that while the model proposed in Deng et al works well on the IM2LATEX-100K dataset, is doesn't generalize well to equations in real-world settings that you'd have in a photograph or a scan of an equation. They pro...
[ 1, 3, 6 ]
[ 4, 3, 3 ]
[ "iclr_2020_rkl_Ch4YwS", "iclr_2020_rkl_Ch4YwS", "iclr_2020_rkl_Ch4YwS" ]
iclr_2020_rklnA34twH
Universal Learning Approach for Adversarial Defense
Adversarial attacks were shown to be very effective in degrading the performance of neural networks. By slightly modifying the input, an almost identical input is misclassified by the network. To address this problem, we adopt the universal learning framework. In particular, we follow the recently suggested Predictive Normalized Maximum Likelihood (pNML) scheme for universal learning, whose goal is to optimally compete with a reference learner that knows the true label of the test sample but is restricted to use a learner from a given hypothesis class. In our case, the reference learner is using his knowledge on the true test label to perform minor refinements to the adversarial input. This reference learner achieves perfect results on any adversarial input. The proposed strategy is designed to be as close as possible to the reference learner in the worst-case scenario. Specifically, the defense essentially refines the test data according to the different hypotheses, where each hypothesis assumes a different label for the sample. Then by comparing the resulting hypotheses probabilities, we predict the label and detect whether the sample is adversarial or natural. Combining our method with adversarial training we create a robust scheme which can handle adversarial input along with detection of the attack. The resulting scheme is demonstrated empirically.
reject
The reviewers attempted to give this paper a fair assessment, but were unanimous in recommending rejection. The technical quality of the motivation was questioned, and the experimental evaluation was not found to be clear or convincing. Hopefully the feedback provided can help the authors improve their paper.
train
[ "Hkx52LytjH", "rkeQAf1tiS", "BkgR-gJKiB", "SJlBSAC3FS", "HJx5pd_6Kr", "Skxnt89CYH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your kind and constructive review.\n\nWe would like to address your points in the same order:\n\n1.\t“Additionally, I would like to see more discussions about the limitations of the proposed method.”\nAnswer: Our method has 4 main drawbacks:\n1.\tAs you noted, it is less effective against weak attack...
[ -1, -1, -1, 3, 1, 3 ]
[ -1, -1, -1, 1, 1, 1 ]
[ "SJlBSAC3FS", "HJx5pd_6Kr", "Skxnt89CYH", "iclr_2020_rklnA34twH", "iclr_2020_rklnA34twH", "iclr_2020_rklnA34twH" ]
iclr_2020_SkgTR3VFvH
Pipelined Training with Stale Weights of Deep Convolutional Neural Networks
The growth in the complexity of Convolutional Neural Networks (CNNs) is increasing interest in partitioning a network across multiple accelerators during training and pipelining the backpropagation computations over the accelerators. Existing approaches avoid or limit the use of stale weights through techniques such as micro-batching or weight stashing. These techniques either underutilize accelerators or increase memory footprint. We explore the impact of stale weights on the statistical efficiency and performance in a pipelined backpropagation scheme that maximizes accelerator utilization and keeps memory overhead modest. We use 4 CNNs (LeNet-5, AlexNet, VGG and ResNet) and show that when pipelining is limited to early layers in a network, training with stale weights converges and results in models with comparable inference accuracies to those resulting from non-pipelined training on MNIST and CIFAR-10 datasets; a drop in accuracy of 0.4%, 4%, 0.83% and 1.45% for the 4 networks, respectively. However, when pipelining is deeper in the network, inference accuracies drop significantly. We propose combining pipelined and non-pipelined training in a hybrid scheme to address this drop. We demonstrate the implementation and performance of our pipelined backpropagation in PyTorch on 2 GPUs using ResNet, achieving speedups of up to 1.8X over a 1-GPU baseline, with a small drop in inference accuracy.
reject
The paper proposed a new pipelined training approach to better utilize memory and computation power to speed up deep convolutional neural network training. The authors experimentally show that the proposed pipelined training, using stale weights without weight stashing or micro-batching, is simpler and does converge on a few networks. The main concern with this paper is the missing convergence analysis of the proposed method, as requested by the reviewers. The authors brought up the concern of limited space in the paper, which could be addressed by putting the convergence analysis into an appendix. From a reader's perspective, knowing the convergence properties of the method is much more important than knowing it works for a few networks on a particular dataset.
train
[ "Syx_qXgojr", "Sye1hVSDjS", "BkxCvEBDsH", "H1xO7NHvjr", "rkeLnmSDiH", "B1g1LXBPsB", "B1eVKwraFH", "HkluNlHfcS", "B1gFWGts5S", "SJgqHOk3cB" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We updated the PDF of our submission to address the comments raised by the reviewers, as discussed in our responses to their comments. The updates are mostly in the related work section and minimally affect the rest of the paper describing your work. Specifically we:\n\n1. Updated the results for the raining of th...
[ -1, -1, -1, -1, -1, -1, 3, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, 5, 3, 1, 3 ]
[ "iclr_2020_SkgTR3VFvH", "B1eVKwraFH", "HkluNlHfcS", "B1gFWGts5S", "SJgqHOk3cB", "iclr_2020_SkgTR3VFvH", "iclr_2020_SkgTR3VFvH", "iclr_2020_SkgTR3VFvH", "iclr_2020_SkgTR3VFvH", "iclr_2020_SkgTR3VFvH" ]
iclr_2020_H1eCR34FPB
Sequence-level Intrinsic Exploration Model for Partially Observable Domains
Training reinforcement learning policies in partially observable domains with sparse reward signals is an important and open problem for the research community. In this paper, we introduce a new sequence-level intrinsic novelty model to tackle the challenge of training reinforcement learning policies in sparsely rewarded partially observable domains. First, we propose a new reasoning paradigm to infer the novelty of partially observable states, which is built upon forward dynamics prediction. Different from conventional approaches that perform self-prediction or one-step forward prediction, our proposed approach engages open-loop multi-step prediction, which enables the difficulty of novelty prediction to flexibly scale and thus results in high-quality novelty scores. Second, we propose a novel dual-LSTM architecture to facilitate sequence-level reasoning over the partially observable state space. Our proposed architecture efficiently synthesizes information from an observation sequence and an action sequence to derive meaningful latent representations for inferring the novelty of states. To evaluate the efficiency of our proposed approach, we conduct extensive experiments on several challenging 3D navigation tasks from ViZDoom and DeepMind Lab. We also present results on two hard-exploration domains from the Atari 2600 series in the Appendix to demonstrate that our proposed approach generalizes beyond partially observable navigation tasks. Overall, the experiment results reveal that our proposed intrinsic novelty model outperforms several state-of-the-art curiosity baselines by a considerable margin in the tested domains.
reject
This paper introduces a new architecture based on intrinsic rewards to deal with partially observable and sparse reward domains. The reviewers found the novelty of the work not particularly high, and had concerns about the general utility of the method based on the empirical evidence. This paper has numerous issues and could use significant revision in terms of writing, connections to the literature, experiment design, and clarity of results. Much of the discussion focused on the scaling parameter. From an algorithmic point of view, the scaling parameter is very problematic. It is domain specific and, when tuned per domain, resulted in very different values. The ablation study showed that only two settings in one domain led to good performance, whereas the others resulted in no learning (for some reason the other two values were not plotted). There are concerns that the baselines were not completely fair. In many cases different domains were used to compare against RND and ICM, and there appears to be no tuning of these baselines for the new domains---this is a problem due to the inherent bias in favor of one's own method. In the Solaris domain, which was used in the RND paper, the results don't appear to match the RND paper, and in ViZDoom the performance numbers are difficult to compare for ICM because a different metric is used---even if you don't like their performance numbers, at least report them once so we can be confident the baselines are well calibrated. One reviewer pointed out that the meta-parameters were different for RND than those previously published, but the paper does not describe what approach was used to tune those parameters, and this is not acceptable. We cannot have much confidence that these results are reflective of those methods. Finally, there is no comment on how the performance numbers were computed and no description of how the error bars were computed or what they represent. 
The paper focuses on partially observable domains; the evidence that this method is effective in settings closer to Markov is unclear. The Atari experiments do not yield significant results by and large (Solaris looks as if there is no learning occurring at all, with no comment about it in the text to explain). The paper claims evidence that the approach can work well in both cases, but it was not even indicated whether frame-stacking was used in the Atari experiments. In fact, the result was only alluded to in the conclusion---there was no reference in the main text to a specific result in the appendix. The text is very challenging to read. The language is informal and imprecise, and the paper frequently uses terms incorrectly or in different ways throughout (e.g., the use of the term novelty). This is clearly an interesting direction. The authors should keep working, but this paper is not ready for publication. I urge the authors to dig deeper into the literature to gain a more nuanced understanding of the topic. Barto et al.'s excellent paper on the topic is a great place to start: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3858647/
train
[ "rJehROA9oH", "rJx44505iB", "rygMpvRcsH", "H1lMq8C9iS", "H1lkWKzYdB", "r1lJqjxTKS", "Syl2HJJl9r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for providing a detailed feedback. As requested, we have conducted additional experiments to study the effect of performing multi-step predictions (i.e., K) (appendix B.6). Also, we have added additional baselines of 1-step predictions in all the main experiments (Figure 4-6 in the main paper...
[ -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, 5, 3, 5 ]
[ "r1lJqjxTKS", "r1lJqjxTKS", "Syl2HJJl9r", "H1lkWKzYdB", "iclr_2020_H1eCR34FPB", "iclr_2020_H1eCR34FPB", "iclr_2020_H1eCR34FPB" ]
iclr_2020_S1x0CnEtvB
AutoGrow: Automatic Layer Growing in Deep Convolutional Networks
Depth is a key component of Deep Neural Networks (DNNs); however, designing depth is heuristic and requires much human effort. We propose AutoGrow to automate depth discovery in DNNs: starting from a shallow seed architecture, AutoGrow grows new layers if the growth improves the accuracy; otherwise, it stops growing and thus discovers the depth. We propose robust growing and stopping policies to generalize to different network architectures and datasets. Our experiments show that by applying the same policy to different network architectures, AutoGrow can always discover near-optimal depth on various datasets: MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet. For example, in terms of the accuracy-computation trade-off, AutoGrow discovers a better depth combination in ResNets than human experts. AutoGrow is efficient: it discovers depth within a time similar to that of training a single DNN.
reject
This paper proposes a neural architecture search method based on greedily adding layers with random initializations. The reviewers all recommend rejection due to various concerns about the significance of the contribution, novelty, and experimental design. They checked the author response and maintained their ratings.
train
[ "SkeAVHaRYS", "HJxnGdUnjS", "SkgtGIhjjS", "BkeMtZsijr", "r1emxj7GKH", "HkgRdsvQqr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper's contribution is a method for automatically growing the depth of a neural network during training. It compares several heuristics that may be used to successfully achieve this goal and identifies a set of choices that work well together on multiple datasets.\n\nThe paper focuses on CNNs that conform to...
[ 3, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, 1, 4 ]
[ "iclr_2020_S1x0CnEtvB", "r1emxj7GKH", "SkeAVHaRYS", "HkgRdsvQqr", "iclr_2020_S1x0CnEtvB", "iclr_2020_S1x0CnEtvB" ]
iclr_2020_Syee1pVtDS
Distributed Online Optimization with Long-Term Constraints
We consider distributed online convex optimization problems, where the distributed system consists of various computing units connected through a time-varying communication graph. In each time step, each computing unit selects a constrained vector, experiences a loss equal to an arbitrary convex function evaluated at this vector, and may communicate to its neighbors in the graph. The objective is to minimize the system-wide loss accumulated over time. We propose a decentralized algorithm with regret and cumulative constraint violation in O(T^{max{c,1−c}}) and O(T^{1−c/2}), respectively, for any c ∈ (0,1), where T is the time horizon. When the loss functions are strongly convex, we establish improved regret and constraint violation upper bounds of O(log(T)) and O(√(T log(T))). These regret scalings match those obtained by state-of-the-art algorithms and fundamental limits in the corresponding centralized online optimization problem (for both convex and strongly convex loss functions). In the case of bandit feedback, the proposed algorithms achieve a regret and constraint violation in O(T^{max{c,1−c/3}}) and O(T^{1−c/2}) for any c ∈ (0,1). We numerically illustrate the performance of our algorithms for the particular case of distributed online regularized linear regression problems.
reject
The paper proposes a decentralized algorithm with regret guarantees for distributed online convex optimization problems. The reviewers worry about the assumptions and the theoretical setting, and they also find the experimental evaluation insufficient.
train
[ "Skek9hvCKH", "ryx0Hfpvor", "HygP8E7msS", "HJgoRcIfor", "BkxpNs8MiS", "r1g3i58GoH", "SJxEdyinFr", "ByeydF1zqr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors study distributed online convex optimization where the distributed system consists of various computing units connected by a time varying graph. The authors prove optimal regret bounds for a proposed decentralized algorithm and experimentally evaluate the performance of their algorithms on distributed ...
[ 3, -1, -1, -1, -1, -1, 6, 6 ]
[ 1, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_Syee1pVtDS", "BkxpNs8MiS", "iclr_2020_Syee1pVtDS", "Skek9hvCKH", "SJxEdyinFr", "ByeydF1zqr", "iclr_2020_Syee1pVtDS", "iclr_2020_Syee1pVtDS" ]
iclr_2020_HJlMkTNYvH
MODiR: Multi-Objective Dimensionality Reduction for Joint Data Visualisation
Many large text collections exhibit graph structures, either inherent to the content itself or encoded in the metadata of the individual documents. Example graphs extracted from document collections are co-author networks, citation networks, or named-entity-cooccurrence networks. Furthermore, social networks can be extracted from email corpora, tweets, or social media. When it comes to visualising these large corpora, either the textual content or the network graph is used. In this paper, we propose to incorporate both text and graph, to visualise not only the semantic information encoded in the documents' content but also the relationships expressed by the inherent network structure. To this end, we introduce a novel algorithm based on multi-objective optimisation to jointly position embedded documents and graph nodes in a two-dimensional landscape. We illustrate the effectiveness of our approach with real-world datasets and show that we can capture the semantics of large document collections better than other visualisations based on either the content or the network information.
reject
There is a consensus among reviewers that the paper should not be accepted. No rebuttal was provided, so the paper is rejected.
train
[ "Skx-6E86tr", "SJgoG_OpYS" ]
[ "official_reviewer", "official_reviewer" ]
[ "Authors introduce a novel algorithm based on multi-objective optimization to jointly position embedded documents and graph nodes in a 2-d landscape. \n\nFirst, the multi-objective optimization used in this paper is quite confusing. Since the overall error defined in (5) is a weighted summation of three objectives,...
[ 1, 3 ]
[ 4, 3 ]
[ "iclr_2020_HJlMkTNYvH", "iclr_2020_HJlMkTNYvH" ]
iclr_2020_rklz16Vtvr
ISBNet: Instance-aware Selective Branching Networks
Recent years have witnessed growing interest in designing efficient neural networks and in neural architecture search (NAS). Although remarkable efficiency and accuracy have been achieved, existing expert-designed and NAS models neglect the fact that input instances are of varying complexity and thus require different amounts of computation. Inference with a fixed model that processes all instances through the same transformations consumes computational resources unnecessarily. Customizing the model capacity in an instance-aware manner is required to alleviate this problem. In this paper, we propose a novel Instance-aware Selective Branching Network (ISBNet) to support efficient instance-level inference by selectively bypassing transformation branches of insignificant importance weight. These weights are dynamically determined by a lightweight hypernetwork, SelectionNet, and recalibrated by gumbel-softmax for sparse branch selection. Extensive experiments show that ISBNet achieves extremely efficient inference in terms of parameter size and FLOPs compared to existing networks. For example, ISBNet uses only 8.70% of the parameters and 31.01% of the FLOPs of the efficient network MobileNetV2 with comparable accuracy on CIFAR-10.
reject
This paper proposes a method for finding neural architectures which, through the use of selective branching, can avoid processing portions of the network on a per-data-point basis. While the reviewers felt that the idea proposed was technically interesting and well-presented, they had substantial concerns about the evaluation that persisted post-rebuttal, and led to a consensus rejection recommendation.
test
[ "rJe-E5CMoS", "HkgYeiAzoB", "SkxboFl2tB", "r1eH7iPC9H" ]
[ "author", "author", "official_reviewer", "official_reviewer" ]
[ "First, we thank you for your acknowledgment of our contributions. As for the concerns over the experiment section, we would like to recap that in this work, we have proposed a GENERAL and CUSTOMIZABLE framework for neural networks to inference without UNNECESSARY computation of the BACKBONE network on a PER-INPUT ...
[ -1, -1, 3, 3 ]
[ -1, -1, 4, 5 ]
[ "r1eH7iPC9H", "SkxboFl2tB", "iclr_2020_rklz16Vtvr", "iclr_2020_rklz16Vtvr" ]
iclr_2020_rJeGJaEtPH
MIST: Multiple Instance Spatial Transformer Networks
We propose a deep network that can be trained to tackle image reconstruction and classification problems that involve detection of multiple object instances, without any supervision regarding their whereabouts. The network learns to extract the most significant top-K patches, and feeds these patches to a task-specific network -- e.g., auto-encoder or classifier -- to solve a domain specific problem. The challenge in training such a network is the non-differentiable top-K selection process. To address this issue, we lift the training optimization problem by treating the result of top-K selection as a slack variable, resulting in a simple, yet effective, multi-stage training. Our method is able to learn to detect recurrent structures in the training dataset by learning to reconstruct images. It can also learn to localize structures when only knowledge on the occurrence of the object is provided, and in doing so it outperforms the state-of-the-art.
reject
Two reviewers are negative on this paper while the other is slightly positive. Overall, this paper does not meet the bar of ICLR. A rejection is recommended.
train
[ "HJxFQjptoB", "rylaR56tiS", "S1lOrqTKjr", "SJge34RuOB", "SkxcvDuXcH", "B1gBC5jEcr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the great suggestions on how to improve the writeup. We have completely restructured the method presentation, significantly revised our derivation, and moved background work to the appendix. Please see the revised PDF for details, and let us know if you have further suggestions. For your convenience, we...
[ -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, 3, 3, 3 ]
[ "SJge34RuOB", "SkxcvDuXcH", "B1gBC5jEcr", "iclr_2020_rJeGJaEtPH", "iclr_2020_rJeGJaEtPH", "iclr_2020_rJeGJaEtPH" ]
iclr_2020_ryl71a4YPB
A Unified framework for randomized smoothing based certified defenses
Randomized smoothing, which was recently proved to be a certified defensive technique, has received considerable attention due to its scalability to large datasets and neural networks. However, several important questions still remain unanswered in the existing frameworks, such as (i) whether the Gaussian mechanism is an optimal choice for certifying ℓ2-normed robustness, and (ii) whether randomized smoothing can certify ℓ∞-normed robustness (on high-dimensional datasets like ImageNet). To answer these questions, we introduce a unified and self-contained framework to study randomized smoothing-based certified defenses, where we mainly focus on the two most popular norms in adversarial machine learning, i.e., the ℓ2 and ℓ∞ norms. We answer the above two questions by first demonstrating that the Gaussian mechanism and the Exponential mechanism are the (near) optimal options to certify ℓ2 and ℓ∞-normed robustness, respectively. We further show that the largest ℓ∞ radius certified by randomized smoothing is upper bounded by O(1/d), where d is the dimensionality of the data. This theoretical finding suggests that certifying ℓ∞-normed robustness by randomized smoothing may not be scalable to high-dimensional data. The veracity of our framework and analysis is verified by extensive evaluations on CIFAR10 and ImageNet.
reject
After the rebuttal, the reviewers agree that this paper would benefit from further revisions to clarify issues regarding the motivation of the DP-based security definition, any relationship it may have to standard definitions of privacy, and the role of dimensionality in the theoretical guarantees.
train
[ "Ske9LpCEjB", "r1lHcOR4jr", "Bye0GSySsS", "BkxqJ5AKsS", "SkeCKq0oYH", "rklU2Gp6FH", "rkg0VfQCYH", "Syga-SBqdB", "ByeBRbG5dS", "HygsIG4q_B", "B1ez4d1cdS", "Byg3XvFvdH", "SylAUxYPur" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "public", "author", "public" ]
[ "Thanks for your valuable comments. We appreciate your efforts on reviewing this paper.\n\n(1) Your observation on the connection between (Lecurer et al.) and our framework is correct. But actually, Eg(x) is also not a deterministic classifier. It shares the same issue with the definition of random g (Cohen et al.)...
[ -1, -1, -1, -1, 3, 1, 3, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "rkg0VfQCYH", "rklU2Gp6FH", "SkeCKq0oYH", "HygsIG4q_B", "iclr_2020_ryl71a4YPB", "iclr_2020_ryl71a4YPB", "iclr_2020_ryl71a4YPB", "HygsIG4q_B", "B1ez4d1cdS", "ByeBRbG5dS", "Byg3XvFvdH", "SylAUxYPur", "iclr_2020_ryl71a4YPB" ]
iclr_2020_SylVJTNKDr
Entropy Minimization In Emergent Languages
There is a growing interest in studying the languages emerging when neural agents are jointly trained to solve tasks requiring communication through a discrete channel. We investigate here the information-theoretic complexity of such languages, focusing on the basic two-agent, one-exchange setup. We find that, under common training procedures, the emergent languages are subject to an entropy minimization pressure that has also been detected in human language, whereby the mutual information between the communicating agent's inputs and the messages is minimized, within the range afforded by the need for successful communication. This pressure is amplified as we increase communication channel discreteness. Further, we observe that stronger discrete-channel-driven entropy minimization leads to representations with increased robustness to overfitting and adversarial attacks. We conclude by discussing the implications of our findings for the study of natural and artificial communication systems.
reject
This paper studies the information-theoretic complexity for emergent languages when pairs of neural networks are trained to solve a two player communication game. One of the primary claims of the paper was that under common training protocols, networks were biased towards low entropy solutions. During the discussion period, one reviewer shared an ipython notebook investigating the experiments shown in Figure 1. There it was discovered that low entropy solutions were only obtained for networks which were themselves initialized at low entropy configurations. When networks are initialized at high entropy configurations, the converged solution would remain high entropy. This experiment raises questions about the validity of the claim that there was "pressure" towards low entropy solutions to the task. Therefore, a more careful analysis of the phenomenon is required.
train
[ "HkebXr-r9r", "SJgZchUCFH", "H1xvQprPoH", "BJg8EGfwjB", "Hye_DAWvoB", "ryeiRa-VsS", "SyxlfYA-jS", "BJl5xynboH", "HyexMgDZoH", "rJeK0IFgjS", "B1exhXP6KB", "S1gTnJOuqr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper sets up a couple discrete communication games in which agents must communicate a discrete message in order to solve a classification task. The paper claims that networks trained to perform this case show a tendency to use as little information as necessary to solve the task.\n\nI vote to reject this pa...
[ 1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1 ]
[ "iclr_2020_SylVJTNKDr", "iclr_2020_SylVJTNKDr", "HyexMgDZoH", "iclr_2020_SylVJTNKDr", "ryeiRa-VsS", "BJl5xynboH", "B1exhXP6KB", "S1gTnJOuqr", "SJgZchUCFH", "HkebXr-r9r", "iclr_2020_SylVJTNKDr", "iclr_2020_SylVJTNKDr" ]
iclr_2020_SJgNkpVFPr
VILD: Variational Imitation Learning with Diverse-quality Demonstrations
The goal of imitation learning (IL) is to learn a good policy from high-quality demonstrations. However, the quality of demonstrations in reality can be diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs. IL in such situations can be challenging, especially when the level of demonstrators' expertise is unknown. We propose a new IL paradigm called Variational Imitation Learning with Diverse-quality demonstrations (VILD), where we explicitly model the level of demonstrators' expertise with a probabilistic graphical model and estimate it along with a reward function. We show that a naive estimation approach is not suitable to large state and action spaces, and fix this issue by using a variational approach that can be easily implemented using existing reinforcement learning methods. Experiments on continuous-control benchmarks demonstrate that VILD outperforms state-of-the-art methods. Our work enables scalable and data-efficient IL under more realistic settings than before.
reject
The paper proposes a new imitation learning algorithm that explicitly models the quality of demonstrators. All reviewers agreed that the problem and the approach were interesting, the paper well-written, and the experiments well-conducted. However, there was a shared concern about the applicability of the method to more realistic settings, in which the model generating the demonstrations does not fall under the assumptions of the method. The authors did add a real-world experiment during the rebuttal, but the reviewers were suspicious of the reported InfoGAIL performance and were not persuaded to change their assessment. Following this discussion, I recommend rejection at this time, but it seems like a good paper and I encourage the authors to do a more careful validation experiment, and resubmit to a future venue.
train
[ "Syl8v55Ljr", "r1gMMccLoB", "rygm2F9Usr", "rkgHrt9IsB", "rklEUwoaFB", "S1lT67syqS", "Byll76MWcH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the reviews. Our responses to reviewer 3’s comments are provided below.\n\n*However, the experiments are set up to match VILD’s model, and it is not as clear what would happen in a more realistic setting where there will be misspecification.\n- We include an experiment with real-world data in the rev...
[ -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "Byll76MWcH", "rklEUwoaFB", "S1lT67syqS", "iclr_2020_SJgNkpVFPr", "iclr_2020_SJgNkpVFPr", "iclr_2020_SJgNkpVFPr", "iclr_2020_SJgNkpVFPr" ]
iclr_2020_SJeS16EKPr
Learning relevant features for statistical inference
We introduce a new technique to learn correlations between two types of data. The learned representation can be used to directly compute the expectations of functions over one type of data conditioned on the other, such as Bayesian estimators and their standard deviations. Specifically, our loss function teaches two neural nets to extract features representing the probability vectors of highest singular value for the stochastic map (set of conditional probabilities) implied by the joint dataset, relative to the inner product defined by the Fisher information metrics evaluated at the marginals. We test the approach using a synthetic dataset, analytical calculations, and inference on occluded MNIST images. Surprisingly, when applied to supervised learning (one dataset consists of labels), this approach automatically provides regularization and faster convergence compared to the cross-entropy objective. We also explore using this approach to discover salient independent features of a single dataset.
reject
This manuscript proposes an approach for estimating cross-correlations between model outputs, related to deep CCA. Authors note that the procedure improves results when applied to supervised learning problems. The reviewers have pointed out the close connection to previous work on deep CCA, and the author(s) have agreed. The reviewers agree that the paper has promise if properly expanded both theoretically and empirically.
train
[ "rJxGh2HKsH", "Hyxa6trYiS", "ryezGmHDoB", "rkeXtYGPiS", "BklaGd7KKB", "rklS9UB6YS", "HyggZ2a-5S" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Indeed, my objective function turns out to be the same as that of deep CCA. This is addressed in the revised submission.\n\n> I would suggest to give a clear definition of the problem before delving into the theory in section 2. In its current version, it is difficult to assess how the parts of section 2 relate to...
[ -1, -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, -1, 4, 1, 3 ]
[ "HyggZ2a-5S", "rklS9UB6YS", "rkeXtYGPiS", "iclr_2020_SJeS16EKPr", "iclr_2020_SJeS16EKPr", "iclr_2020_SJeS16EKPr", "iclr_2020_SJeS16EKPr" ]
iclr_2020_ryxUkTVYvH
Towards Controllable and Interpretable Face Completion via Structure-Aware and Frequency-Oriented Attentive GANs
Face completion is a challenging conditional image synthesis task. This paper proposes controllable and interpretable high-resolution and fast face completion by learning generative adversarial networks (GANs) progressively from low resolution to high resolution. We present structure-aware and frequency-oriented attentive GANs. The proposed structure-aware component leverages off-the-shelf facial landmark detectors and proposes a simple yet effective method of integrating the detected landmarks in generative learning. It facilitates facial expression transfer together with facial attributes control, and helps regularize the structural consistency in progressive training. The proposed frequency-oriented attentive module (FOAM) encourages GANs to attend to only finer details in the coarse-to-fine progressive training, thus enabling progressive attention to face structures. The learned FOAMs show a strong pattern of switching its attention from low-frequency to high-frequency signals. In experiments, the proposed method is tested on the CelebA-HQ benchmark. Experiment results show that our approach outperforms state-of-the-art face completion methods. The proposed method is also fast with mean inference time of 0.54 seconds for images at 1024x1024 resolution (using a Titan Xp GPU).
reject
This work performs fast, controllable, and interpretable face completion by proposing a progressive GAN with frequency-oriented attention modules (FOAM). The proposed FOAM encourages GANs to attend more to finer details in the progressive training process. This paper is well written and easy to understand. While reviewer #1 is overall positive about this work, reviewers #2 and #141 rated weak reject with various concerns, including unconvincing experiments, a very common framework, limited novelty, and the lack of an ablation study. The authors provided responses to the questions, but did not change the ratings of the reviewers. Given the various concerns raised, the ACs agree that this paper cannot be accepted in its current state.
train
[ "BJeLaiLnoB", "H1x7gVU3sB", "Hyx1gy82oS", "rkxBCAd6YH", "HyxSxA-Rtr", "ryxuOhO2cB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the comments and appreciation, and would like to answer the reviewer’s concerns as follows:\n\nQ1: Have you tried to train models without using facial landmarks? Are facial landmarks only for controlling facial expressions?\nYes, we have tried this. It performed worse on face completion t...
[ -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, 4, 3, 1 ]
[ "rkxBCAd6YH", "HyxSxA-Rtr", "ryxuOhO2cB", "iclr_2020_ryxUkTVYvH", "iclr_2020_ryxUkTVYvH", "iclr_2020_ryxUkTVYvH" ]
iclr_2020_S1lDkaEFwS
Thwarting finite difference adversarial attacks with output randomization
Adversarial input poses a critical problem to deep neural networks (DNN). This problem is more severe in the "black box" setting where an adversary only needs to repeatedly query a DNN to estimate the gradients required to create adversarial examples. Current defense techniques against attacks in this setting are not effective. Thus, in this paper, we present a novel defense technique based on randomization applied to a DNN's output layer. While effective as a defense technique, this approach introduces a trade off between accuracy and robustness. We show that for certain types of randomization, we can bound the probability of introducing errors by carefully setting distributional parameters. For the particular case of finite difference black box attacks, we quantify the error introduced by the defense in the finite difference estimate of the gradient. Lastly, we show empirically that the defense can thwart three adaptive black box adversarial attack algorithms.
reject
This paper proposes a defense technique against query-based attacks based on randomization applied to a DNN's output layer. It further shows that for certain types of randomization, they can bound the probability of introducing errors by carefully setting distributional parameters. It has some valuable contributions; however, the rebuttal does not fully address the concerns.
train
[ "SyxsIAlnjr", "H1xruv5oiS", "BkgtLKIqjS", "rJgndKNqoB", "SkgxviV9jB", "SJxAjqEcsS", "Byehyc4qjH", "HyeCSCxhYr", "rklTFuDlcS", "SkglhWCXcB", "SkxkhdKrqS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your reply!\n\nIt seems that you did not mention the value of \"h\" used in the experiments in the paper (tell me if I am wrong). According to your code, it seems that h=1e-4 in the experiments. I think you should mention this in the paper.\n\nThanks for your verification that when a larger \"h\" is use...
[ -1, -1, -1, -1, -1, -1, -1, 3, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 4 ]
[ "SkgxviV9jB", "BkgtLKIqjS", "rJgndKNqoB", "HyeCSCxhYr", "SkxkhdKrqS", "SkglhWCXcB", "rklTFuDlcS", "iclr_2020_S1lDkaEFwS", "iclr_2020_S1lDkaEFwS", "iclr_2020_S1lDkaEFwS", "iclr_2020_S1lDkaEFwS" ]
iclr_2020_SklOypVKvS
A Data-Efficient Mutual Information Neural Estimator for Statistical Dependency Testing
Measuring Mutual Information (MI) between high-dimensional, continuous, random variables from observed samples has wide theoretical and practical applications. Recent works have developed accurate MI estimators through provably low-bias approximations and tight variational lower bounds assuming abundant supply of samples, but require an unrealistic number of samples to guarantee statistical significance of the estimation. In this work, we focus on improving data efficiency and propose a Data-Efficient MINE Estimator (DEMINE) that can provide a tight lower confidence interval of MI under limited data, through adding cross-validation to the MINE lower bound (Belghazi et al., 2018). Hyperparameter search is employed and a novel meta-learning approach with task augmentation is developed to increase robustness to hyperparameters, reduce overfitting and improve accuracy. With improved data-efficiency, our DEMINE estimator enables statistical testing of dependency at practical dataset sizes. We demonstrate the effectiveness of DEMINE on synthetic benchmarks and a real-world fMRI dataset, with an application to inter-subject correlation analysis.
reject
The paper deals with a mutual information based dependency test. The reviewers have provided extensive and constructive feedback on the paper. The authors have in turn given detailed responses with some new experiments and plans for improvement. Overall the reviewers are not convinced the paper is ready for publication.
train
[ "BklLqYip9B", "rJe_Inq3sH", "B1xD5RI1iB", "rJlbnvS1oH", "HkltWhSyjB", "SJxb8_g0FS", "Byl0ceB1cH" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Second update:\nWe thank the authors for updating their paper.\n\nThe work is now improving, and is on the right track for publication at a future conference. There are a few comments on the new results, and suggestions for further improvement:\n\n* The issue of possible false positives due to sample dependence i...
[ 1, -1, -1, -1, -1, 6, 3 ]
[ 5, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_SklOypVKvS", "iclr_2020_SklOypVKvS", "BklLqYip9B", "Byl0ceB1cH", "SJxb8_g0FS", "iclr_2020_SklOypVKvS", "iclr_2020_SklOypVKvS" ]
iclr_2020_Skld1aVtPB
Deep Mining: Detecting Anomalous Patterns in Neural Network Activations with Subset Scanning
This work views neural networks as data generating systems and applies anomalous pattern detection techniques on that data in order to detect when a network is processing a group of anomalous inputs. Detecting anomalies is a critical component for multiple machine learning problems including detecting the presence of adversarial noise added to inputs. More broadly, this work is a step towards giving neural networks the ability to detect groups of out-of-distribution samples. This work introduces "Subset Scanning" methods from the anomalous pattern detection domain to the task of detecting anomalous inputs to neural networks. Subset Scanning allows us to answer the question: "Which subset of inputs have larger-than-expected activations at which subset of nodes?" Framing the adversarial detection problem this way allows us to identify systematic patterns in the activation space that span multiple adversarially noised images. Such images are "weird together". Leveraging this common anomalous pattern, we show increased detection power as the proportion of noised images increases in a test set. Detection power and accuracy results are provided for targeted adversarial noise added to CIFAR-10 images on a 20-layer ResNet using the Basic Iterative Method attack.
reject
The paper investigates the use of the subset scanning to the detection of anomalous patterns in the input to a neural network. The paper has received mixed reviews (one positive and two negatives). The reviewers agree that the idea is interesting, has novelty, and is worth investigating. At the same time they raise issues about the clarity and the lack of comparisons with baselines. Despite a very detailed rebuttal, both of the negative reviewers still feel that addressing their concerns through paper revision would be needed for acceptance.
train
[ "r1xF6fM2iB", "HJlbrTgiir", "B1x3kyyooH", "H1ln7EC5jB", "SJeGDWi5sS", "rkletIhsYH", "rkx6zcYyqr", "Hyx_XCKQcS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your detailed feedback and taking the time to understand our work!\n\nWe've left a larger, top-level comment regarding convergence of the iterative algorithm.\n\nre: targeted attacks for different labels\nThis work was done and can be found in the last row of each of the 3 tables labeled \"ALL\". In th...
[ -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "Hyx_XCKQcS", "rkletIhsYH", "rkx6zcYyqr", "iclr_2020_Skld1aVtPB", "iclr_2020_Skld1aVtPB", "iclr_2020_Skld1aVtPB", "iclr_2020_Skld1aVtPB", "iclr_2020_Skld1aVtPB" ]
iclr_2020_S1gKkpNKwH
Reinforcement Learning with Chromatic Networks
We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining ENAS and ES in a highly scalable and intuitive way. By defining the combinatorial search space of NAS to be the set of different edge-partitionings (colorings) into same-weight classes, we represent compact architectures via efficient learned edge-partitionings. For several RL tasks, we manage to learn colorings translating to effective policies parameterized by as few as 17 weight parameters, providing >90 % compression over vanilla policies and 6x compression over state-of-the-art compact policies based on Toeplitz matrices, while still maintaining good reward. We believe that our work is one of the first attempts to propose a rigorous approach to training structured neural network architectures for RL problems that are of interest especially in mobile robotics with limited storage and computational resources.
reject
This paper describes a method for learning compact RL policies suitable for mobile robotic applications with limited storage. The proposed pipeline is a scalable combination of efficient neural architecture search (ENAS) and evolution strategies (ES). Empirical evaluations are conducted on various OpenAI Gym and quadruped locomotion tasks, producing policies with as few as tens of weight parameters, and significantly increased compression-reward trade-offs are obtained relative to some existing compact policies. Although reviewers appreciated certain aspects of this paper, after the rebuttal period there was no strong support for acceptance and several unsettled points were expressed. For example, multiple reviewers felt that additional baseline comparisons were warranted to better calibrate performance, e.g., random coloring, a wider range of generic compression methods, classic architecture search methods, etc. Moreover, one reviewer remained concerned that the scope of this work was limited to very tiny model sizes whereby, at least in many cases, running the uncompressed model might be adequate.
train
[ "BklUetMLoS", "rke1c_G8jS", "BklbOFMLoS", "Byl1ODM8jS", "HJeR2PMIsB", "BklD1K8hFr", "ryxxgC3nYr", "SyeLlLDCKB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ ">> Chromatic Strategy specific for RL vs SL\nWe believe our method is most natural and specialized for RL tasks, as it combines ES and ENAS in a highly scalable way. To give context, vanilla NAS [Zoph et al, 2018] for classical supervised learning setting (SL) requires a large population of ~450 GPU-workers (“chil...
[ -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "rke1c_G8jS", "ryxxgC3nYr", "BklD1K8hFr", "SyeLlLDCKB", "iclr_2020_S1gKkpNKwH", "iclr_2020_S1gKkpNKwH", "iclr_2020_S1gKkpNKwH", "iclr_2020_S1gKkpNKwH" ]
iclr_2020_S1xjJpNYvB
Domain-Agnostic Few-Shot Classification by Learning Disparate Modulators
Although few-shot learning research has advanced rapidly with the help of meta-learning, its practical usefulness is still limited because most research has assumed that all meta-training and meta-testing examples come from a single domain. We propose a simple but effective way for few-shot classification in which a task distribution spans multiple domains including previously unseen ones during meta-training. The key idea is to build a pool of embedding models which have their own metric spaces and to learn to select the best one for a particular task through multi-domain meta-learning. This simplifies task-specific adaptation over a complex task distribution as a simple selection problem rather than modifying the model with a number of parameters at meta-testing time. Inspired by common multi-task learning techniques, we let all models in the pool share a base network and add a separate modulator to each model to refine the base network in its own way. This architecture allows the pool to maintain representational diversity and each model to have domain-invariant representation as well. Experiments show that our selection scheme outperforms other few-shot classification algorithms when target tasks could come from many different domains. They also reveal that aggregating outputs from all constituent models is effective for tasks from unseen domains, showing the effectiveness of our framework.
reject
This paper addresses the problem of few-shot classification across multiple domains. The main algorithmic contribution consists of a selection criterion to choose the best source domain embedding for a given task using a multi-domain modulator. All reviewers were in agreement that this paper is not ready for publication. Some key concerns were the lack of scalability (though the authors argue that this may not be a concern as all models are only stored during meta-training; still, if you want to incorporate many training settings it may become challenging) and low algorithmic novelty. The issue with novelty is that there is inconclusive experimental evidence to justify the selection criterion over simple methods like averaging, especially when considering novel test-time domains. The authors argue that since their approach chooses the single best training domain it may not be best suited to generalize to a novel test-time domain. Based on the reviews and discussions the AC does not recommend acceptance. The authors should consider revisions for clarity and to further polish their claims, providing any additional experiments to justify where appropriate.
train
[ "SylQgZoPiS", "HygU0xsPoH", "BkgxpxsvoB", "HJgONWIA_B", "Sylxa5s6tH", "SkeKQwaatS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your valuable feedback.\n\nAs the reviewer pointed out, our selection network is not as effective for novel domains as the averaging methods. Since the current selection network chooses only a single best model from the pool, the chosen model might be suboptimal to represent a novel domain which is n...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "HJgONWIA_B", "Sylxa5s6tH", "SkeKQwaatS", "iclr_2020_S1xjJpNYvB", "iclr_2020_S1xjJpNYvB", "iclr_2020_S1xjJpNYvB" ]
iclr_2020_S1eik6EtPB
Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness
The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations. Nonetheless, min-max optimization beyond the purpose of AT has not been rigorously explored in the research of adversarial attack and defense. In particular, given a set of risk sources (domains), minimizing the maximal loss induced from the domain set can be reformulated as a general min-max problem that is different from AT. Examples of this general formulation include attacking model ensembles, devising universal perturbation under multiple inputs or data transformations, and generalized AT over different types of attack models. We show that these problems can be solved under a unified and theoretically principled min-max optimization framework. We also show that the self-adjusted domain weights learned from our method provides a means to explain the difficulty level of attack and defense over multiple domains. Extensive experiments show that our approach leads to substantial performance improvement over the conventional averaging strategy.
reject
This submission studies an interesting problem. However, as some of the reviewers point out, the novelty of the proposed contributions is fairly limited.
train
[ "BkexHiNTqH", "r1eAJd-qiH", "S1xhnPzvsB", "Sye9PwXvoS", "r1xkrwMPoB", "S1lloBMPoB", "HJgBn_zwsB", "rJgAWufPjr", "SJl0HLfPiB", "rkxIOeMg9B", "Hyg4PfI4qB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies how a min-max framework can incorporate different tasks related to adversarial robustness. Specifically, the authors study adversarial attacks against model ensembles, universal perturbations, and attacks constrained by the union of Lp norms. They propose optimizing a probability distribution ove...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_S1eik6EtPB", "iclr_2020_S1eik6EtPB", "r1xkrwMPoB", "S1xhnPzvsB", "Hyg4PfI4qB", "BkexHiNTqH", "iclr_2020_S1eik6EtPB", "rkxIOeMg9B", "S1lloBMPoB", "iclr_2020_S1eik6EtPB", "iclr_2020_S1eik6EtPB" ]
iclr_2020_H1g6kaVKvH
Learning with Long-term Remembering: Following the Lead of Mixed Stochastic Gradient
Current deep neural networks can achieve remarkable performance on a single task. However, when a deep neural network is continually trained on a sequence of tasks, it seems to gradually forget the previously learned knowledge. This phenomenon is referred to as catastrophic forgetting and motivates the field called lifelong learning. The central question in lifelong learning is how to enable deep neural networks to maintain performance on old tasks while learning a new task. In this paper, we introduce a novel and effective lifelong learning algorithm, called MixEd stochastic GrAdient (MEGA), which allows deep neural networks to acquire the ability of retaining performance on old tasks while learning new tasks. MEGA modulates the balance between old tasks and the new task by integrating the current gradient with the gradient computed on a small reference episodic memory. Extensive experimental results show that the proposed MEGA algorithm significantly advances the state-of-the-art on all four commonly used lifelong learning benchmarks, reducing the error by up to 18%.
reject
The submission is concerned with the catastrophic forgetting problem of continual learning, and proposes a gradient-based method which uses buffers of data seen previously to integrate the angles of the gradients and thereby mitigate forgetting. Empirical results are given on several benchmarks. The reviewers were impressed with the thorough validation and strong results, but noticed that the much simpler MEGA-D baseline did almost as well. Given this, they were not convinced that the proposed approach was necessary. Although the authors provided a strong rebuttal and an additional ablation, the reviewers did not feel that their concerns were met. My recommendation is to reject the submission at this time.
train
[ "Skl8qrtstB", "BJgJKvDaYB", "HJgMB6EjiH", "HJlUkYQsoS", "SygxF2XrsS", "H1x5fqWSsB", "rJefEaWBor", "Syx8F3WSoB", "SJgNXibrjr", "rJx7qtA4jS", "rke-ypoCFS" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "UPDATE:\n\nI thank the authors for proactively engaging with the review process and improving the paper.\n\nAfter considering the other reviews and discussions with other reviewers, I also share the concern that the simple MEGA-D baseline performs very well, with little additional gain from the full MEGA approach ...
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_H1g6kaVKvH", "iclr_2020_H1g6kaVKvH", "HJlUkYQsoS", "SJgNXibrjr", "iclr_2020_H1g6kaVKvH", "rJx7qtA4jS", "BJgJKvDaYB", "BJgJKvDaYB", "Skl8qrtstB", "rke-ypoCFS", "iclr_2020_H1g6kaVKvH" ]
iclr_2020_SJlpy64tvB
Attacking Lifelong Learning Models with Gradient Reversion
Lifelong learning aims at avoiding the catastrophic forgetting problem of traditional supervised learning models. Episodic memory based lifelong learning methods such as A-GEM (Chaudhry et al., 2018b) are shown to achieve state-of-the-art results across the benchmarks. In A-GEM, a small episodic memory is utilized to store a random subset of the examples from previous tasks. While the model is trained on a new task, a reference gradient is computed on the episodic memory to guide the direction of the current update. While A-GEM has strong continual learning ability, it is not clear whether it can retain this performance in the presence of adversarial attacks. In this paper, we examine the robustness of A-GEM against adversarial attacks on the examples in the episodic memory. We evaluate the effectiveness of traditional attack methods such as FGSM and PGD. The results show that A-GEM still possesses strong continual learning ability in the presence of adversarial examples in the memory, and simple defense techniques such as label smoothing can further alleviate the adversarial effects. We presume that traditional attack methods are specially designed for standard supervised learning models rather than lifelong learning models. We therefore propose a principled way of attacking A-GEM called gradient reversion (GREV), which is shown to be more effective. Our results indicate that future lifelong learning research should bear adversarial attacks in mind to develop more robust lifelong learning algorithms.
reject
The paper investigates questions around adversarial attacks in a continual learning algorithm, i.e., A-GEM. While reviewers agree that this is a novel topic of great importance, the contributions are quite narrow, since only a single model (A-GEM) is considered and it is not immediately clear whether this method transfers to other lifelong learning models (or even other models that belong to the same family as A-GEM). This is an interesting submission, but at the moment due to its very narrow scope, it seems more appropriate as a workshop submission investigating a very particular question (that of attacking A-GEM). As such, I cannot recommend acceptance.
train
[ "BJxR4mo3FS", "B1x9LHd6tr", "SJxFkDtf5H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper does a good job of raising awareness of adversarial attacks in lifelong learning research with deep neural networks. This is the first time I have considered this problem, but not sure whether any prior work exists in the specific subfield.\n\nAt the conceptual level, many issues can arise when a lifelo...
[ 3, 3, 3 ]
[ 4, 4, 4 ]
[ "iclr_2020_SJlpy64tvB", "iclr_2020_SJlpy64tvB", "iclr_2020_SJlpy64tvB" ]
iclr_2020_S1eRya4KDB
A novel Bayesian estimation-based word embedding model for sentiment analysis
Word embedding models have achieved state-of-the-art results in a variety of natural language processing tasks. However, current word embedding models mainly focus on rich semantic meanings and are challenged by capturing sentiment information. For this reason, we propose a novel sentiment word embedding model. In line with its working principle, the parameter estimation method is highlighted. For the task of semantic and sentiment embedding, the parameters in the proposed model are determined using both maximum likelihood estimation and Bayesian estimation. Experimental results show the proposed model significantly outperforms the baseline methods in sentiment analysis for low-frequency words and sentences. Besides, it is also effective in conventional semantic and sentiment analysis tasks.
reject
This paper proposes a method to improve word embeddings by incorporating sentiment probabilities. Reviewers appreciate the interesting and simple approach and acknowledge improved results on low-frequency words. However, reviewers find that the paper is lacking in two major aspects: 1) The writing is unclear, and thus it is difficult to understand and judge the contributions of this research. 2) Perhaps because of 1), it is not convincing that the improvements are significant and directly result from the modeling contributions. I thank the authors for submitting this work to ICLR, and I hope that the reviewers' comments are helpful in improving this research for future submission.
train
[ "B1ex4abKsS", "S1ePWeWFoS", "SJeVmSZtiH", "Hyly8cjatS", "H1lBl62aYS", "BklVBWjCFS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your insightful comments. \n\nOur model is currently applied to the corpus of labeled samples. The widely-used models, such as word2vec and GloVe, are trained by the unlabeled samples, which are infinite for using. Since the datasets with labeled samples are of high cost to make, the commonly used la...
[ -1, -1, -1, 6, 1, 3 ]
[ -1, -1, -1, 1, 3, 3 ]
[ "Hyly8cjatS", "BklVBWjCFS", "H1lBl62aYS", "iclr_2020_S1eRya4KDB", "iclr_2020_S1eRya4KDB", "iclr_2020_S1eRya4KDB" ]
iclr_2020_rkgCJ64tDB
Scale-Equivariant Neural Networks with Decomposed Convolutional Filters
Encoding the input scale information explicitly into the representation learned by a convolutional neural network (CNN) is beneficial for many vision tasks especially when dealing with multiscale input signals. We study, in this paper, a scale-equivariant CNN architecture with joint convolutions across the space and the scaling group, which is shown to be both sufficient and necessary to achieve scale-equivariant representations. To reduce the model complexity and computational burden, we decompose the convolutional filters under two pre-fixed separable bases and truncate the expansion to low-frequency components. A further benefit of the truncated filter expansion is the improved deformation robustness of the equivariant representation. Numerical experiments demonstrate that the proposed scale-equivariant neural network with decomposed convolutional filters (ScDCFNet) achieves significantly improved performance in multiscale image classification and better interpretability than regular CNNs at a reduced model size.
reject
This paper presents a CNN architecture equivariant to scaling and translation, which is realized by the proposed joint convolution across the space and scaling groups. All reviewers find the theoretical side of the paper sound and interesting. Through the discussion based on the authors' rebuttal, one reviewer decided to update the score to Weak Accept, putting this paper on the borderline. However, some concerns still remain. Some reviewers are still not convinced regarding the novelty of the paper, particularly in terms of the difference from (Chen+, 2019). Also, they agree that the experiments are still very weak and not convincing enough. Overall, as there was no opinion to champion this paper, I'd like to recommend rejection this time. I encourage the authors to polish the experiments, taking in the reviewers' suggestions.
train
[ "S1eoXs9itr", "B1xJB4oioH", "rkgXT4jjjH", "H1lhKNoiiS", "SJlJpXoijr", "SyxQFmsijS", "HygbiOKaFB", "H1lhZLO3qS", "Hyx-NVVHYH", "Hyxb1Z94tH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "*Paper summary*\n\nThe authors propose a CNN architecture, that is theoretically equivariant to isotropic scalings and translations. For this they add an extra scale-dimension to activation tensors, along with the existing two spatial dimensions. In practice they implement this with scale-steerable filters, which ...
[ 6, -1, -1, -1, -1, -1, 3, 6, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, 1, 1, -1, -1 ]
[ "iclr_2020_rkgCJ64tDB", "S1eoXs9itr", "H1lhKNoiiS", "B1xJB4oioH", "HygbiOKaFB", "H1lhZLO3qS", "iclr_2020_rkgCJ64tDB", "iclr_2020_rkgCJ64tDB", "Hyxb1Z94tH", "iclr_2020_rkgCJ64tDB" ]
iclr_2020_S1gyl6Vtvr
MaskConvNet: Training Efficient ConvNets from Scratch via Budget-constrained Filter Pruning
In this paper, we propose a framework, called MaskConvNet, for ConvNets filter pruning. MaskConvNet provides elegant support for training budget-aware pruned networks from scratch, by adding a simple mask module to a ConvNet architecture. MaskConvNet enjoys several advantages - (1) Flexible, the mask module can be integrated with any ConvNets in a plug-and-play manner. (2) Simple, the mask module is implemented by a hard Sigmoid function with a small number of trainable mask variables, adding negligible memory and computational overheads to the networks during training. (3) Effective, it is able to achieve a competitive pruning rate while maintaining comparable accuracy with the baseline ConvNets without pruning, regardless of the datasets and ConvNet architectures used. (4) Fast, it is observed that the number of training epochs required by MaskConvNet is close to training a baseline without pruning. (5) Budget-aware, with a sparsity budget on a target metric (e.g. model size and FLOP), MaskConvNet is able to train in a way that the optimizer can adaptively sparsify the network and automatically maintain the sparsity level, till the pruned network produces good accuracy and fulfills the budget constraint simultaneously. Results on CIFAR-10 and ImageNet with several ConvNet architectures show that MaskConvNet performs competitively compared to previous pruning methods, with the budget constraint well respected. Code is available at https://www.dropbox.com/s/c4zi3n7h1bexl12/maskconv-iclr-code.zip?dl=0. We hope MaskConvNet, as a simple and general pruning framework, can address the gaps in the existing literature and advance future studies to push the boundaries of neural network pruning.
reject
This paper presents a method to learn a pruned convolutional network during conventional training. Pruning the network has advantages (in deployment) of reducing the final model size and reducing the required FLOPS for compute. The method adds a pruning mask on each layer with an additional sparsity loss on the mask variables. The method avoids the cost of a train-prune-retrain optimization process that has been used in several earlier papers. The method is evaluated on CIFAR-10 and ImageNet with three standard convolutional network architectures. The results show comparable performance to the original networks with the learned sparse networks. The reviewers made many substantial comments on the paper and most of these were addressed in the author response and subsequent discussion. For example, Reviewer1 mentioned two other papers that promote sparsity implicitly during training (Q3), and the authors acknowledged the omission and described how those methods had less flexibility on a target metric (FLOPS) that is not parameter size. Many of the author responses described changes to an updated paper that would clarify the claims and results (R1: Q2-7, R2:Q3). However, the reviewers raised many concerns for the original paper and they did not see an updated paper that contains the proposed revisions. Given the numerous concerns with the original submission, the reviewers wanted to see the revised paper to assess whether their concerns had been addressed adequately. Additionally, the paper does not have a comparison experiment with state-of-the-art results, and the current results were not sufficiently convincing for the reviewers. Reviewer1 and the author responses to questions 13--15 suggest that the experimental results with ResNet-34 are inadequate to show the benefits of the approach, but results for the proposed method with the larger ResNet-50 (which could show benefits) are not yet ready. The current paper is not ready for publication.
val
[ "Byxx1z7noB", "B1lUAIeFsB", "rkezZAJtsS", "SJgiDp1KjB", "BkljV61FoS", "HylNL2G_iH", "SkxyX2zdiB", "Bkl2aoGdjB", "rJeplPkCtB", "ryel6kz0Kr", "S1eo1JwJ9S", "B1gnLfV4qB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Thank you for your response!\n\nMinor remark on the paper: When you introduce unsparsification technique you use \"S\" which you never define in the text. I'm guessing you're referring to \"\\mathcal{L}_S\".\n\nIn my notes I asked a couple more questions and it would be nice if you could comment on those too.\n\nI...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 4, 5, -1 ]
[ "rkezZAJtsS", "B1gnLfV4qB", "rJeplPkCtB", "BkljV61FoS", "ryel6kz0Kr", "SkxyX2zdiB", "Bkl2aoGdjB", "S1eo1JwJ9S", "iclr_2020_S1gyl6Vtvr", "iclr_2020_S1gyl6Vtvr", "iclr_2020_S1gyl6Vtvr", "iclr_2020_S1gyl6Vtvr" ]
iclr_2020_BJlkgaNKvr
Towards Understanding the Regularization of Adversarial Robustness on Neural Networks
The problem of adversarial examples has shown that modern Neural Network (NN) models could be rather fragile. Among the most promising techniques to solve the problem, one is to require the model to be {\it ϵ-adversarially robust} (AR); that is, to require the model not to change predicted labels when any given input examples are perturbed within a certain range. However, it is widely observed that such methods would lead to standard performance degradation, i.e., the degradation on natural examples. In this work, we study the degradation through the regularization perspective. We identify quantities from generalization analysis of NNs; with the identified quantities we empirically find that AR is achieved by regularizing/biasing NNs towards less confident solutions by making the changes in the feature space (induced by changes in the instance space) of most layers smoother uniformly in all directions; so to a certain extent, it prevents sudden change in prediction w.r.t. perturbations. However, the end result of such smoothing concentrates samples around decision boundaries, resulting in less confident solutions, and leads to worse standard performance. Our studies suggest that one might consider ways that build AR into NNs in a gentler way to avoid the problematic regularization.
reject
The paper investigates why adversarial training can sometimes degrade model performance on clean input examples. The reviewers agreed that the paper provides valuable insights into how adversarial training affects the distribution of activations. On the other hand, the reviewers raised concerns about the experimental setup as well as the clarity of the writing and felt that the presentation could be improved. Overall, I think this paper explores a very interesting direction and such papers are valuable to the community. It's a borderline paper currently but I think it could turn into a great paper with another round of revision. I encourage the authors to revise the draft and resubmit to a different venue.
test
[ "B1e_-c4kcH", "r1lOwA8TFS", "HkeqLaKhir", "HygM8Mx2or", "HJxtXGg3jB", "Hkel77l3sH", "BkeCvll3oS", "S1gntXx2sH", "SkeVjlg2iS", "BJeOtaIFKr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes to explain a phenomenon that the increasing robustness for adversarial examples might lead to performance degradation on natural examples. The authors analyzed it from the following aspects: \n\n1)\tAdversarial robustness reduces the variance of output at most layers in terms of reducing the sta...
[ 6, 3, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_BJlkgaNKvr", "iclr_2020_BJlkgaNKvr", "HJxtXGg3jB", "r1lOwA8TFS", "r1lOwA8TFS", "r1lOwA8TFS", "B1e_-c4kcH", "BJeOtaIFKr", "B1e_-c4kcH", "iclr_2020_BJlkgaNKvr" ]
iclr_2020_S1xxx64YwH
Ecological Reinforcement Learning
Reinforcement learning algorithms have been shown to effectively learn tasks in a variety of static, deterministic, and simplistic environments, but their application to environments which are characteristic of dynamic lifelong settings encountered in the real world has been limited. Understanding the impact of specific environmental properties on the learning dynamics of reinforcement learning algorithms is important as we want to align the environments in which we develop our algorithms with the real world, and this is strongly coupled with the type of intelligence which can be learned. In this work, we study what we refer to as ecological reinforcement learning: the interaction between properties of the environment and the reinforcement learning agent. To this end, we introduce environments with characteristics that we argue better reflect natural environments: non-episodic learning, uninformative ``fundamental drive'' reward signals, and natural dynamics that cause the environment to change even when the agent fails to take intelligent actions. We show these factors can have a profound effect on the learning progress of reinforcement learning algorithms. Surprisingly, we find that these seemingly more challenging learning conditions can often make reinforcement learning agents learn more effectively. Through this study, we hope to shift the focus of the community towards learning in realistic, natural environments with dynamic elements.
reject
This paper investigates how the properties of an environment affect the success of reinforcement learning, and in particular finds that random dynamics and non-episodic learning make learning easier, even though these factors make learning more difficult when applied individually. The paper was reviewed by three experts who gave Reject, Weak Reject, and Weak Reject recommendations. The main concerns are about missing connections to related work, overstating some contributions, and experimental details. While the author response addressed many of these issues, reviewers felt another round of peer review is really needed before this paper can be accepted; R2's post-rebuttal comments give some specific, constructive, concrete suggestions for preparing a revision.
val
[ "H1gXFeAOFr", "SylFmB_9oS", "SJeogruqor", "BJlLLDO5sH", "rJx9KYu9jH", "SyggDvc7oS", "S1gUDMlCtr", "r1eZLxow9B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThis paper discusses the value of creating more challenging environments for training reinforcement learning agents. Specifically, the paper focuses on three characteristics of the environment that the paper claims are necessary for developing intelligent agents. The first of these properties is stochas...
[ 1, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_S1xxx64YwH", "SJeogruqor", "H1gXFeAOFr", "S1gUDMlCtr", "r1eZLxow9B", "iclr_2020_S1xxx64YwH", "iclr_2020_S1xxx64YwH", "iclr_2020_S1xxx64YwH" ]
iclr_2020_BkgZxpVFvH
LSTOD: Latent Spatial-Temporal Origin-Destination prediction model and its applications in ride-sharing platforms
Origin-Destination (OD) flow data is an important instrument in transportation studies. Precise prediction of customer demands from each original location to a destination given a series of previous snapshots helps ride-sharing platforms to better understand their market mechanism. However, most existing prediction methods ignore the network structure of OD flow data and fail to utilize the topological dependencies among related OD pairs. In this paper, we propose a latent spatial-temporal origin-destination (LSTOD) model, with a novel convolutional neural network (CNN) filter to learn the spatial features of OD pairs from a graph perspective and an attention structure to capture their long-term periodicity. Experiments on a real customer request dataset with available OD information from a ride-sharing platform demonstrate the advantage of LSTOD in achieving at least 6.5% improvement in prediction accuracy over the second best model.
reject
The paper proposes a deep learning architecture for forecasting Origin-Destination (OD) flow. The model integrates several existing modules including spatiotemporal graph convolution and periodically shifted attention mechanism. The reviewers agree that the paper is not written well, and the experiments are also not executed well. Overall, we recommend rejection.
train
[ "SkgOAjUniB", "SylfPjLniS", "rylmNo8hjS", "SkeWVbvntB", "S1eFS7t3FB", "H1lEfW8aFB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We want to appreciate many insightful comments from this reviewer first. However, there may be some misunderstanding we want to clarify. \n\nSorry for the confusion of our VACN layer with the traditional graph convolutions. We have included some explanations in the introduction section (marked by color blue) to el...
[ -1, -1, -1, 1, 6, 1 ]
[ -1, -1, -1, 5, 3, 3 ]
[ "SkeWVbvntB", "S1eFS7t3FB", "H1lEfW8aFB", "iclr_2020_BkgZxpVFvH", "iclr_2020_BkgZxpVFvH", "iclr_2020_BkgZxpVFvH" ]
iclr_2020_Syl-xpNtwS
Learning Representations in Reinforcement Learning: an Information Bottleneck Approach
The information bottleneck principle is an elegant and useful approach to representation learning. In this paper, we investigate the problem of representation learning in the context of reinforcement learning using the information bottleneck framework, aiming at improving the sample efficiency of the learning algorithms. We analytically derive the optimal conditional distribution of the representation, and provide a variational lower bound. Then, we maximize this lower bound with the Stein variational (SV) gradient method. We incorporate this framework in the advantage actor-critic algorithm (A2C) and the proximal policy optimization algorithm (PPO). Our experimental results show that our framework can improve the sample efficiency of vanilla A2C and PPO significantly. Finally, we study the information-bottleneck (IB) perspective in deep RL with the algorithm called mutual information neural estimation (MINE). We experimentally verify that the information extraction-compression process also exists in deep RL and our framework is capable of accelerating this process. We also analyze the relationship between MINE and our method; through this relationship, we theoretically derive an algorithm to optimize our IB framework without constructing the lower bound.
reject
The authors propose to use the information bottleneck to learn state representations for RL. They optimize the IB objective using Stein variational gradient descent and combine it with A2C and PPO. On a handful of Atari games, they show improved performance. The reviewers' primary concerns were:
* Limited evaluation. The method was only evaluated on a handful of the Atari games and no comparison to other representation learning methods was done.
* Using a simple Gaussian embedding function would eliminate the need for amortized SVGD. The authors should compare to that alternative to demonstrate the necessity of their approach.
The ideas explored in the paper are interesting, but given the issues with evaluation, the paper is not ready for acceptance at this time.
train
[ "SJlFMnxCtS", "SygCrkonoS", "S1gBKUynsH", "Syxro0RoiB", "SklmEnRjjH", "rJgp9Wr3Yr", "HJxLVnA3FB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a representation learning algorithm for RL based on the Information Bottleneck (IB) principle. This formulation leads to the observed state X being mapped to a latent variable Z ~ P(Z | X), in such a way that the standard loss function in actor-critic RL methods is augmented with a term minimiz...
[ 3, -1, -1, -1, -1, 6, 3 ]
[ 3, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_Syl-xpNtwS", "S1gBKUynsH", "HJxLVnA3FB", "rJgp9Wr3Yr", "SJlFMnxCtS", "iclr_2020_Syl-xpNtwS", "iclr_2020_Syl-xpNtwS" ]
iclr_2020_rylfl6VFDH
Adaptive network sparsification with dependent variational beta-Bernoulli dropout
While variational dropout approaches have been shown to be effective for network sparsification, they are still suboptimal in the sense that they set the dropout rate for each neuron without consideration of the input data. With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without accuracy loss. To overcome this limitation, we propose adaptive variational dropout whose probabilities are drawn from a sparsity-inducing beta-Bernoulli prior. It allows each neuron to evolve either to be generic or specific for certain inputs, or to be dropped altogether. Such input-adaptive sparsity-inducing dropout allows the resulting network to tolerate a larger degree of sparsity without losing its expressive power by removing redundancies among features. We validate our dependent variational beta-Bernoulli dropout on multiple public datasets, on which it obtains significantly more compact networks than baseline methods, with consistent accuracy improvements over the base networks.
reject
This paper introduces a new adaptive variational dropout approach to balance accuracy, sparsity and computation. The method proposed here is sound, and the motivation for smaller (perhaps sparser) networks is easy to follow. The paper provides experiments on several datasets, compares against several other regularization/pruning approaches, and measures accuracy, speedup, and memory. The reviewers agreed on all these points, but overall they found the results unconvincing. They requested (1) more baselines (which the authors added), (2) larger tasks/datasets, and (3) more variety in network architectures. The overall impression was that it was hard to see a clear benefit of the proposed approach, based on the provided tables of results. The paper could sharpen its impact with several adjustments. The results are much more clear looking at the error vs speedup graphs. Presenting "representative results" in the tables was confusing, especially considering the proposed approach rarely dominated across all measures. It was unclear how the variants of the algorithms presented in the tables were selected---explaining this would help a lot. In addition, more text is needed to help the reader understand how improvements in speed, accuracy, and memory matter. For example, in LeNet 500-300, is a speedup of ~12 @ 1.26 error for BB worth-it/important compared to a speedup of ~8 for similar error for L_0? How should the reader think about differences in speedup, memory and accuracy---perhaps explanations linking the impact of these metrics to their context in real applications. I found myself wondering this about pretty much every result, especially when better speedup and memory could be achieved at the cost of some accuracy---how much does the reduction in accuracy actually matter? Is speed and size the dominant thing? I don't know. Overall the analysis and descriptions of the results are very terse, leaving much to the reader to figure out. For example (fig 2 bottom right). If a result is worth including in the paper, it's worth explaining it to the reader. Summary statements like "BB and DBB either achieve significantly smaller error than the baseline methods, or significant speedup and memory saving at similar error rates." are not helpful when there are so many dimensions of performance to figure out. The paper spends a lot of time explaining what was done in a matter-of-fact way, but little time helping the reader interpret the results. There are other issues that hurt the paper, including reporting the results of only 3 runs, sometimes reporting the median without explanation, undefined metrics like speedup and % memory (explain how they are calculated), restricting the batch size for all methods to a particular value without explanation, and an overall somewhat informal and imprecise discussion of the empirical methodology. The authors did a nice job responding to the reviewers (illustrating good understanding of the area and the strengths of their method), and this could be a strong paper indeed if the changes suggested above were implemented. Including SSL and SVG in the appendix was great, but they really should have been included in the speedup vs error plots throughout the paper. This is a nice direction and was very close. Keep going!
train
[ "HJeGyk0TFS", "SkgvcIJm5H", "B1lsXAmjor", "BylobSmosr", "Bkx7gB7isS", "HygN6EXssS", "B1gZK4XosB", "rkxKWVQisr", "Bkg6KFj0Fr", "rygd9WuvqH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new way of training variational dropout which is adaptive to input samples due to the proposed sparsity-inducing beta-Bernoulli prior. The authors provide a good motivation for their model, introduce beta-Bernoulli and dependant beta-Bernoulli prior and propose the method in the variational in...
[ 6, 3, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 4, 1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_rylfl6VFDH", "iclr_2020_rylfl6VFDH", "Bkx7gB7isS", "HJeGyk0TFS", "Bkg6KFj0Fr", "SkgvcIJm5H", "rygd9WuvqH", "iclr_2020_rylfl6VFDH", "iclr_2020_rylfl6VFDH", "iclr_2020_rylfl6VFDH" ]
iclr_2020_r1lfga4KvS
Extreme Value k-means Clustering
Clustering is the central task in unsupervised learning and data mining. k-means is one of the most widely used clustering algorithms. Unfortunately, it is generally non-trivial to extend k-means to cluster data points beyond the Gaussian distribution, particularly clusters with non-convex shapes (Beliakov & King, 2006). To this end, we, for the first time, introduce Extreme Value Theory (EVT) to improve the clustering ability of k-means. Particularly, the Euclidean space is transformed by EVT into a novel probability space denoted as the extreme value space. We thus propose a novel algorithm called Extreme Value k-means (EV k-means), including GEV k-means and GPD k-means. In addition, we introduce tricks to accelerate Euclidean distance computation, improving the computational efficiency of classical k-means. Furthermore, our EV k-means is extended to an online version, i.e., online Extreme Value k-means, utilizing Mini Batch k-means to cluster streaming data. Extensive experiments are conducted to validate our EV k-means and online EV k-means on synthetic datasets and real datasets. Experimental results show that our algorithms significantly outperform competitors in most cases.
reject
This paper explores extending k-means to allow for clusters with non-convex shapes. The paper introduces a new algorithm, relying on empirical comparisons to illustrate its contribution. The main issue with the paper is that the empirical claims do not support that the new method is indeed better. The paper claims the new method outperforms the competitors in most cases. However, the original submission reported median performance, and when the authors provided mean performance and additional baseline methods (at the reviewers' request) there appears to be little evidence to support the claim. In addition, no measures of significance are provided. The authors provided no commentary to help the reviewers understand the new results. There might be some important speed gains at the cost of final performance, but on the evidence provided we are not able to evaluate the cost in final performance. The text changes size after section 5.3 and is 9% smaller. Watch out for this formatting issue in future submissions.
train
[ "BJlI6ipjqB", "B1xAsmc3FH", "SJldIO6ntS", "SJglh4GrcB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper considers extending the k-means algorithm to allow for finding clusters with non-convex shapes. Particularly, it uses an existing theoretical framework (Extreme Value Theory) to maps a Euclidean space into what it calls the extreme value space, and proposes two Extreme Value k-means algorithms: GEV k-mea...
[ 3, 3, 1, 6 ]
[ 1, 3, 5, 5 ]
[ "iclr_2020_r1lfga4KvS", "iclr_2020_r1lfga4KvS", "iclr_2020_r1lfga4KvS", "iclr_2020_r1lfga4KvS" ]