paper_id: string (lengths 19–21)
paper_title: string (lengths 8–170)
paper_abstract: string (lengths 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (lengths 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
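Each record pairs one submission with its discussion thread as parallel lists. A minimal sketch of working with one record (plain Python; the `record` dict and `mean_valid` helper are illustrative names of my own, with values copied from the first example below) — note that -1 entries in `review_ratings`/`review_confidences` are placeholders for replies that are not top-level reviews and should be skipped when aggregating:

```python
record = {
    "paper_id": "iclr_2020_S1lSapVtwS",
    "paper_acceptance": "accept-poster",
    "label": "train",
    "review_ratings": [-1, -1, -1, -1, -1, 6, 6, 6],
    "review_confidences": [-1, -1, -1, -1, -1, 3, 5, 3],
}

def mean_valid(scores):
    """Average scores, skipping the -1 placeholders for non-review replies."""
    valid = [s for s in scores if s != -1]
    return sum(valid) / len(valid) if valid else None

print(mean_valid(record["review_ratings"]))      # 6.0
print(mean_valid(record["review_confidences"]))  # (3 + 5 + 3) / 3
```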
iclr_2020_S1lSapVtwS
Stochastic Conditional Generative Networks with Basis Decomposition
While generative adversarial networks (GANs) have revolutionized machine learning, a number of open questions remain to fully understand them and exploit their power. One of these questions is how to efficiently achieve proper diversity and sampling of the multi-mode data space. To address this, we introduce BasisGAN, a stochastic conditional multi-mode image generator. By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of basis elements, we learn a plug-and-play basis generator to stochastically generate basis elements, with just a few hundred parameters, to fully embed stochasticity into convolutional filters. By sampling basis elements instead of filters, we dramatically reduce the cost of modeling the parameter space with no sacrifice in either image diversity or fidelity. To illustrate this proposed plug-and-play framework, we construct variants of BasisGAN based on state-of-the-art conditional image generation networks, and train the networks by simply plugging in a basis generator, without additional auxiliary components, hyperparameters, or training objectives. The experimental success is complemented with theoretical results indicating how the perturbations introduced by the proposed sampling of basis elements can propagate to the appearance of generated images.
accept-poster
Main content: BasisGAN, a novel method for introducing stochasticity in conditional GANs. Summary of discussion: reviewer 1: interesting work and results on GANs. Reviewer had a question on the pre-defined basis, but I think it was answered by the authors. reviewer 3: interesting and novel work on GANs; well-written paper that improves on SOTA. The main question is around bases, again like reviewer 1, but it seems the authors have addressed this. reviewer 4: Novel, interesting work. Main comments are around making Theorem 1 more theoretically correct, which it sounds like the authors addressed. Recommendation: Poster. Well-written and novel paper, and the authors addressed a lot of concerns.
train
[ "BJlUubq2jS", "BklLXs42iS", "BklGOIsiiH", "SyljhBiosr", "Bkguf8jsjS", "SkxALfA6YS", "rJed5l9tqH", "BkgrY8ih9S" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThanks for the additional valuable comments. \n\n* Smaller K\n\nThe current experiment setting of K=7 was selected as performing the best after evaluating different K values starting from K=1 when preparing for the submission. \nTypically, when using K=5 or K=6, the performance drops slightly with good diversity...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "BklLXs42iS", "BklGOIsiiH", "BkgrY8ih9S", "rJed5l9tqH", "SkxALfA6YS", "iclr_2020_S1lSapVtwS", "iclr_2020_S1lSapVtwS", "iclr_2020_S1lSapVtwS" ]
iclr_2020_rkgO66VKDS
LEARNED STEP SIZE QUANTIZATION
Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy. Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured. Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters. This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code.
accept-poster
Main content: Paper is about training low-precision networks to high accuracy. Discussion: reviewer 2: impressive results; main questions are around clarity in the experiments tried, but it sounds like the authors addressed most of this in the rebuttal. reviewer 1: well-written paper, but thinks some technical details could be clarified. reviewer 3: well written, but the experimental section could be improved. Recommendation: all reviewers are in consensus: a well-written paper, but some experiments/technical details could be improved. I vote poster.
train
[ "Bke1XVDW9S", "B1x3eYnYjS", "BygcuF2tiS", "S1xuBt3tsH", "HyxlCO2tjB", "BygMO0Lhtr", "rJx7-yzYqB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThis paper trains low-precision network with quantized weights and quantized activation. The main idea is to split the scale and quantized values. Both scales and weights are updated with backprop and SGD. The paper presents excellent experimental results on ImageNet. \n\nThe paper is generally well written and ...
[ 6, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rkgO66VKDS", "rJx7-yzYqB", "BygMO0Lhtr", "Bke1XVDW9S", "iclr_2020_rkgO66VKDS", "iclr_2020_rkgO66VKDS", "iclr_2020_rkgO66VKDS" ]
iclr_2020_HylsTT4FvB
On the "steerability" of generative adversarial networks
An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to biased training data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise -- these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by "steering" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution. Thus, we conduct experiments to quantify the limits of GAN transformations and introduce techniques to mitigate the problem. Code is released on our project page: https://ali-design.github.io/gan_steerability/
accept-poster
All three reviewers agree that the paper provides an interesting study of the ability of generative adversarial networks to model geometric transformations, along with a simple practical approach to improving this ability. Acceptance as a poster is recommended.
test
[ "SkgnU09hjB", "HJl6phN2iS", "HyxKK7PvsS", "rklMAQPPjS", "B1gLu2Q1iS", "SylrPNUFFH", "rkxJexgWqH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for their comments and feedback. We made the following changes to address the reviewers’ comments:\n\nReview #1: Added section B.6, Fig 26, 27 in the appendix for transformations in Progressive GAN.\n\nReview #3: Added an additional Fig 28 in the appendix comparing Stylegan latent space ...
[ -1, -1, -1, -1, 8, 8, 8 ]
[ -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2020_HylsTT4FvB", "rkxJexgWqH", "B1gLu2Q1iS", "SylrPNUFFH", "iclr_2020_HylsTT4FvB", "iclr_2020_HylsTT4FvB", "iclr_2020_HylsTT4FvB" ]
iclr_2020_SkgC6TNFvr
Reinforced active learning for image segmentation
Learning-based approaches for semantic segmentation have two inherent challenges. First, acquiring pixel-wise labels is expensive and time-consuming. Second, realistic segmentation datasets are highly unbalanced: some categories are much more abundant than others, biasing the performance to the most represented ones. In this paper, we are interested in focusing human labelling effort on a small subset of a larger pool of data, minimizing this effort while maximizing performance of a segmentation model on a hold-out set. We present a new active learning strategy for semantic segmentation based on deep reinforcement learning (RL). An agent learns a policy to select a subset of small informative image regions -- as opposed to entire images -- to be labeled, from a pool of unlabeled data. The region selection decision is made based on predictions and uncertainties of the segmentation model being trained. Our method proposes a new modification of the deep Q-network (DQN) formulation for active learning, adapting it to the large-scale nature of semantic segmentation problems. We test the proof of concept on CamVid and provide results on the large-scale dataset Cityscapes. On Cityscapes, our deep RL region-based DQN approach requires roughly 30% less additional labeled data than our most competitive baseline to reach the same performance. Moreover, we find that our method asks for more labels of under-represented categories compared to the baselines, improving their performance and helping to mitigate class imbalance.
accept-poster
Authors propose a novel scheme to perform active learning for image segmentation. This structured task is highly time-consuming for humans to perform and challenging to model theoretically, making it difficult to directly apply existing active learning methods. Reviewers have remaining concerns over computation and note that the empirical evaluation is not overwhelming (e.g., more comparisons would help). Nevertheless, the paper appears to bring new ideas to the table for this important problem.
train
[ "r1guYvhkcr", "SygNqDShjH", "H1xgTMD7sB", "rkxPe5lmjS", "SJxy7Olmsr", "rJgIp8gmsB", "r1eSMTzp9H", "rygKeuYyOr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "public" ]
[ "# Summary #\nThe paper works on active learning for semantic segmentation, aiming to annotate as few \"blocks/patches\" as possible while training a strong model. The authors proposed to learn a query policy via Q learning, and design states and actions specifically for segmentation. The experimental results show ...
[ 6, -1, -1, -1, -1, -1, 6, -1 ]
[ 3, -1, -1, -1, -1, -1, 3, -1 ]
[ "iclr_2020_SkgC6TNFvr", "iclr_2020_SkgC6TNFvr", "rygKeuYyOr", "SJxy7Olmsr", "r1guYvhkcr", "r1eSMTzp9H", "iclr_2020_SkgC6TNFvr", "iclr_2020_SkgC6TNFvr" ]
iclr_2020_SygW0TEFwH
Sign Bits Are All You Need for Black-Box Attacks
We present a novel black-box adversarial attack algorithm with state-of-the-art model evasion rates for query efficiency under ℓ∞ and ℓ2 metrics. It exploits a \textit{sign-based}, rather than magnitude-based, gradient estimation approach that shifts the gradient estimation from continuous to binary black-box optimization. It adaptively constructs queries to estimate the gradient, one query relying upon the previous, rather than re-estimating the gradient each step with random query construction. Its reliance on sign bits yields a smaller memory footprint and it requires neither hyperparameter tuning nor dimensionality reduction. Further, its theoretical performance is guaranteed and it can characterize adversarial subspaces better than white-box gradient-aligned subspaces. On two public black-box attack challenges and a model robustly trained against transfer attacks, the algorithm's evasion rates surpass all submitted attacks. For a suite of published models, the algorithm is 3.8× less failure-prone while spending 2.5× fewer queries versus the best combination of state-of-the-art algorithms. For example, it evades a standard MNIST model using just 12 queries on average. Similar performance is observed on a standard IMAGENET model with an average of 579 queries.
accept-poster
This paper presents a novel black-box adversarial attack algorithm, which exploits a sign-based, rather than magnitude-based, gradient estimator for black-box optimization. It also adaptively constructs queries to estimate the gradient. The proposed approach outperforms many state-of-the-art black-box attack methods in terms of query complexity. There is unanimous agreement to accept this paper.
train
[ "SyxFBUmsKr", "r1ePbcshYS", "Skx_vxdjsr", "H1e7EOw8jB", "r1eJBPwLjS", "ryxo4LwLir", "SJxDS4vLoS", "ryxyGg_yoB", "SJx_zBx6tH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "I'm satisfied with the response. I'll keep my original rating towards acceptance.\n\n----------------------------\n\nThis paper proposes a black-box adversarial attack method to improve query efficiency and attack success rate. Instead of estimating the gradient of a black-box model, the proposed method estimates ...
[ 6, 6, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_SygW0TEFwH", "iclr_2020_SygW0TEFwH", "ryxo4LwLir", "ryxyGg_yoB", "SyxFBUmsKr", "r1ePbcshYS", "SJx_zBx6tH", "iclr_2020_SygW0TEFwH", "iclr_2020_SygW0TEFwH" ]
iclr_2020_HkgH0TEYwH
Deep Semi-Supervised Anomaly Detection
Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets. Typically anomaly detection is treated as an unsupervised learning problem. In practice however, one may have---in addition to a large set of unlabeled samples---access to a small pool of labeled samples, e.g. a subset verified by some domain expert as being normal or anomalous. Semi-supervised approaches to anomaly detection aim to utilize such labeled samples, but most proposed methods are limited to merely including labeled normal samples. Only a few methods take advantage of labeled anomalies, with existing deep approaches being domain-specific. In this work we present Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection. We further introduce an information-theoretic framework for deep anomaly detection based on the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution, which can serve as a theoretical interpretation for our method. In extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, along with other anomaly detection benchmark datasets, we demonstrate that our method is on par or outperforms shallow, hybrid, and deep competitors, yielding appreciable performance improvements even when provided with only little labeled data.
accept-poster
Issues raised by the reviewers have been addressed by the authors, and thus I suggest the acceptance of this paper.
val
[ "SJlHdAKiiB", "SylNjaV9iS", "H1gldRN9jS", "ryg433VciB", "SJgDdz7xor", "Byl7mCRpFr", "BylBYZcAFH", "B1xf_hzkqS" ]
[ "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your comment. We will add your recent paper to our related work. Note that we do compare to semi-supervised anomaly detection methods such as (hybrid) SSAD in the paper. ", "We understand your concerns with the \"information-theoretic view,\" nonetheless we think the framework should rema...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 1, 4, 3 ]
[ "SJgDdz7xor", "B1xf_hzkqS", "BylBYZcAFH", "iclr_2020_HkgH0TEYwH", "iclr_2020_HkgH0TEYwH", "iclr_2020_HkgH0TEYwH", "iclr_2020_HkgH0TEYwH", "iclr_2020_HkgH0TEYwH" ]
iclr_2020_HyxLRTVKPH
Budgeted Training: Rethinking Deep Neural Network Training Under Resource Constraints
In most practical settings and theoretical analyses, one assumes that a model can be trained until convergence. However, the growing complexity of machine learning datasets and models may violate such assumptions. Indeed, current approaches for hyper-parameter tuning and neural architecture search tend to be limited by practical resource constraints. Therefore, we introduce a formal setting for studying training under the non-asymptotic, resource-constrained regime, i.e., budgeted training. We analyze the following problem: "given a dataset, algorithm, and fixed resource budget, what is the best achievable performance?" We focus on the number of optimization iterations as the representative resource. Under such a setting, we show that it is critical to adjust the learning rate schedule according to the given budget. Among budget-aware learning schedules, we find simple linear decay to be both robust and high-performing. We support our claim through extensive experiments with state-of-the-art models on ImageNet (image classification), Kinetics (video classification), MS COCO (object detection and instance segmentation), and Cityscapes (semantic segmentation). We also analyze our results and find that the key to a good schedule is budgeted convergence, a phenomenon whereby the gradient vanishes at the end of each allowed budget. We also revisit existing approaches for fast convergence and show that budget-aware learning schedules readily outperform such approaches under (the practical but under-explored) budgeted training setting.
accept-poster
This paper formalizes the problem of training deep networks in the presence of a budget, expressed here as a maximum total number of optimization iterations, and evaluates various budget-aware learning rate schedules, finding simple linear decay to work well. Post-discussion, the reviewers all felt that this was a good paper. There were some concerns about the lack of theoretical justification for linear decay, but these were outweighed by the practical usefulness of the work to the community. Therefore I am recommending it be accepted.
train
[ "SJgRJmmbjB", "S1xyJwsnsB", "B1lG-xMcoB", "S1lJo1f9sB", "Hkx2U1zcir", "BygJ8bHctB", "SyeXsBqpKS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Pros:\nThe paper is clearly written. It provides an interesting perspective for training neural networks under resource constraints. The problem setting is novel. The proposed solution is simply decaying learning rate linearly from the initial value to zero during training, which is parameter free. \n\nCons:\n\n- ...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_HyxLRTVKPH", "iclr_2020_HyxLRTVKPH", "BygJ8bHctB", "SyeXsBqpKS", "SJgRJmmbjB", "iclr_2020_HyxLRTVKPH", "iclr_2020_HyxLRTVKPH" ]
iclr_2020_SygpC6Ntvr
Minimizing FLOPs to Learn Efficient Sparse Representations
Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification. Retrieval of such representations from a large database is however computationally challenging. Approximate methods based on learning compact representations have been widely explored for this problem, such as locality sensitive hashing, product quantization, and PCA. In this work, in contrast to learning compact representations, we propose to learn high dimensional and sparse representations that have similar representational capacity as dense embeddings while being more efficient due to sparse matrix multiplication operations which can be much faster than dense multiplication. Following the key insight that the number of operations decreases quadratically with the sparsity of embeddings provided the non-zero entries are distributed uniformly across dimensions, we propose a novel approach to learn such distributed sparse embeddings via the use of a carefully constructed regularization function that directly minimizes a continuous relaxation of the number of floating-point operations (FLOPs) incurred during retrieval. Our experiments show that our approach is competitive to the other baselines and yields a similar or better speed-vs-accuracy tradeoff on practical datasets.
accept-poster
This paper studies methods for using weight sparsification to reduce the computational load of network inference. While there is not absolute consensus on whether this paper should be accepted, one of the main criticisms of this paper is that sparse compute is not always realistic or efficient on a GPU. While this may be true of the current SOTA in hardware, emerging computing platforms and CPU libraries may handle sparse networks quite well. For this reason, I am willing to down-weight this criticism. Based on the remaining comments, this paper has the merit to be accepted, even if it is a bit forward looking in terms of the hardware platforms it targets.
train
[ "r1xAylF2ir", "H1xy8gYnoB", "ByeykJtnsH", "HJlnoe_PoH", "HJgnKfuvsS", "rke37kOvsB", "Bygin0DvoB", "HyeUOAxaYH", "BJejmeERKS", "BkxR8nw95S" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Further updates:\n\n4. We have now added results on the Cifar-100 dataset in Appendix C, reporting the precision. Our results indicate that our models use less than $50\\%$ computation compared to SDH, however with a slightly lower precision.", "Further updates:\n\n4. We have now added results on the Cifar-100 d...
[ -1, -1, -1, -1, -1, -1, -1, 8, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "HyeUOAxaYH", "BJejmeERKS", "iclr_2020_SygpC6Ntvr", "HyeUOAxaYH", "BJejmeERKS", "BkxR8nw95S", "iclr_2020_SygpC6Ntvr", "iclr_2020_SygpC6Ntvr", "iclr_2020_SygpC6Ntvr", "iclr_2020_SygpC6Ntvr" ]
iclr_2020_S1ly10EKDS
Reanalysis of Variance Reduced Temporal Difference Learning
Temporal difference (TD) learning is a popular algorithm for policy evaluation in reinforcement learning, but the vanilla TD can substantially suffer from the inherent optimization variance. A variance reduced TD (VRTD) algorithm was proposed by \cite{korda2015td}, which applies the variance reduction technique directly to the online TD learning with Markovian samples. In this work, we first point out the technical errors in the analysis of VRTD in \cite{korda2015td}, and then provide a mathematically solid analysis of the non-asymptotic convergence of VRTD and its variance reduction performance. We show that VRTD is guaranteed to converge to a neighborhood of the fixed-point solution of TD at a linear convergence rate. Furthermore, the variance error (for both i.i.d.\ and Markovian sampling) and the bias error (for Markovian sampling) of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD. As a result, the overall computational complexity of VRTD to attain a given accurate solution outperforms that of TD under Markov sampling and outperforms that of TD under i.i.d.\ sampling for a sufficiently small condition number.
accept-poster
The paper studies the variance reduced TD (VRTD) algorithm of Korda and La (2015). The original paper provided a convergence analysis that had some technical issues. This paper provides a new convergence analysis, and shows the advantage of VRTD over vanilla TD in terms of reducing the bias and variance. Several of the five reviewers are experts in this area and all of them are positive about it. Therefore, I recommend acceptance of this work.
train
[ "ryl2mm7I9r", "rke_FDh39r", "r1xTu9l6cB", "SJx-yLU5jH", "rkeCSV89or", "SJg-2f8qjS", "H1eb_xU9sr", "HyxYIb89sr", "rJghCzooFS", "BklAWLvF9r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper is on temporal difference learning, specifically variance reduction of it. As per the claims of the paper, a previous method from (Korda and La, 2015) had technical errors, which the paper corrects and provides a better analysis of variance reduction. In the end, the paper focuses on the variance of the ...
[ 3, 8, 8, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, 5, 4, -1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_S1ly10EKDS", "iclr_2020_S1ly10EKDS", "iclr_2020_S1ly10EKDS", "rke_FDh39r", "r1xTu9l6cB", "ryl2mm7I9r", "rJghCzooFS", "BklAWLvF9r", "iclr_2020_S1ly10EKDS", "iclr_2020_S1ly10EKDS" ]
iclr_2020_Hyg-JC4FDr
Imitation Learning via Off-Policy Distribution Matching
When performing imitation learning from expert demonstrations, distribution matching is a popular approach, in which one alternates between estimating distribution ratios and then using these ratios as rewards in a standard reinforcement learning (RL) algorithm. Traditionally, estimation of the distribution ratio requires on-policy data, which has caused previous work to either be exorbitantly data-inefficient or alter the original objective in a manner that can drastically change its optimum. In this work, we show how the original distribution ratio estimation objective may be transformed in a principled manner to yield a completely off-policy objective. In addition to the data-efficiency that this provides, we are able to show that this objective also renders the use of a separate RL optimization unnecessary. Rather, an imitation policy may be learned directly from this objective without the use of explicit rewards. We call the resulting algorithm ValueDICE and evaluate it on a suite of popular imitation learning benchmarks, finding that it can achieve state-of-the-art sample efficiency and performance.
accept-poster
This work presents new insights into the imitation learning setting, and shows how a popular type of approach can be extended in a principled way to the off-policy learning setting. Several requests for clarification were addressed in the rebuttal phase, in particular regarding the empirical evaluation in off-policy settings. The authors improved the empirical validation and overall clarity of the paper. The resulting manuscript provides valuable new insights, in particular in its principled connections to, and extension of, previous work.
val
[ "BklKqZM9sS", "rygmkPmKoS", "rylI7N7vjr", "rylbaG7wiS", "BklxvZXvoB", "HklnbLThtr", "SkxD8yNatH", "Hkg8cdP6YB" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "For Figure 2, as we mention in the paper, we used the implementation of BC from the original implementation of GAIL. For Figure 4, in order to plot the rewards w.r.t. different number of updates, we use our own implementation. The use of our own implementation also allowed us to borrow the same settings as used fo...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "rygmkPmKoS", "BklxvZXvoB", "HklnbLThtr", "SkxD8yNatH", "Hkg8cdP6YB", "iclr_2020_Hyg-JC4FDr", "iclr_2020_Hyg-JC4FDr", "iclr_2020_Hyg-JC4FDr" ]
iclr_2020_rkgMkCEtPB
Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
An important research direction in machine learning has centered around developing meta-learning algorithms to tackle few-shot learning. An especially successful algorithm has been Model Agnostic Meta-Learning (MAML), a method that consists of two optimization loops, with the outer loop finding a meta-initialization, from which the inner loop can efficiently learn new tasks. Despite MAML's popularity, a fundamental open question remains -- is the effectiveness of MAML due to the meta-initialization being primed for rapid learning (large, efficient changes in the representations) or due to feature reuse, with the meta initialization already containing high quality features? We investigate this question, via ablation studies and analysis of the latent representations, finding that feature reuse is the dominant factor. This leads to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML where we remove the inner loop for all but the (task-specific) head of the underlying neural network. ANIL matches MAML's performance on benchmark few-shot image classification and RL and offers computational improvements over MAML. We further study the precise contributions of the head and body of the network, showing that performance on the test tasks is entirely determined by the quality of the learned features, and we can remove even the head of the network (the NIL algorithm). We conclude with a discussion of the rapid learning vs feature reuse question for meta-learning algorithms more broadly.
accept-poster
Paper received mixed reviews: WR (R1), A (R2 and R3). AC has read reviews/rebuttal and examined paper. AC agrees that R1's concerns are misplaced and feels the paper should be accepted.
train
[ "H1g5i-Zc9r", "r1lsg0nptH", "H1xctUU2oB", "r1gL7hcnjB", "H1eF5J9GjB", "Byl4wk9GoB", "Hkx5nntGjr", "r1lUaFTfFH", "H1e8wWvQFr", "Sygws6-GKS" ]
[ "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "Thank you for this nice paper. I really enjoyed reading it. \n\nI just wanted to point to our work -- Javed and Martha, 2019 [1] -- which proposes a meta-learning framework similar to ANIL. More concretely, ANIL is a special case of our method (See Figure 1 in the paper: https://arxiv.org/pdf/1905.12588.pdf). For ...
[ -1, 3, 8, -1, -1, -1, -1, 8, -1, -1 ]
[ -1, 4, 3, -1, -1, -1, -1, 1, -1, -1 ]
[ "iclr_2020_rkgMkCEtPB", "iclr_2020_rkgMkCEtPB", "iclr_2020_rkgMkCEtPB", "H1xctUU2oB", "r1lUaFTfFH", "r1lsg0nptH", "H1g5i-Zc9r", "iclr_2020_rkgMkCEtPB", "Sygws6-GKS", "iclr_2020_rkgMkCEtPB" ]
iclr_2020_H1lmyRNFvr
Augmenting Genetic Algorithms with Deep Neural Networks for Exploring the Chemical Space
Challenges in natural sciences can often be phrased as optimization problems. Machine learning techniques have recently been applied to solve such problems. One example in chemistry is the design of tailor-made organic materials and molecules, which requires efficient methods to explore the chemical space. We present a genetic algorithm (GA) that is enhanced with a deep neural network (DNN) based discriminator model to improve the diversity of generated molecules and at the same time steer the GA. We show that our algorithm outperforms other generative models in optimization tasks. We furthermore present a way to increase the interpretability of genetic algorithms, which helped us to derive design principles.
accept-poster
Paper received reviews of A, WA, WR. AC has carefully read all reviews/responses. R1 is less experienced in this area. AC sides with R2 and R3 and feels the paper should be accepted. Interesting topic and problem. Authors are encouraged to strengthen the experiments in the final version.
train
[ "ryl8UpFhiS", "HygnUBpOqr", "SkxbCMKhjB", "HJla3GF3jS", "SkgkFMthjS", "H1gGIGthoS", "rJgEfztnjS", "rJx_CbYhsB", "BkeHDjoRtr", "B1eeq2hD9H" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for the response, and edits to the paper.\n\nWhile I still believe the experiments could be extended (and as a community, we need to move to more challenging benchmarks at these tasks, but that is another conversation to be had), I acknowledge the short rebuttal times.\n\nI've changed my recomm...
[ -1, 8, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "rJx_CbYhsB", "iclr_2020_H1lmyRNFvr", "BkeHDjoRtr", "BkeHDjoRtr", "B1eeq2hD9H", "B1eeq2hD9H", "HygnUBpOqr", "HygnUBpOqr", "iclr_2020_H1lmyRNFvr", "iclr_2020_H1lmyRNFvr" ]
iclr_2020_HJe_yR4Fwr
Improved Sample Complexities for Deep Neural Networks and Robust Classification via an All-Layer Margin
For linear classifiers, the relationship between (normalized) output margin and generalization is captured in a clear and simple bound – a large output margin implies good generalization. Unfortunately, for deep models, this relationship is less clear: existing analyses of the output margin give complicated bounds which sometimes depend exponentially on depth. In this work, we propose to instead analyze a new notion of margin, which we call the “all-layer margin.” Our analysis reveals that the all-layer margin has a clear and direct relationship with generalization for deep models. This enables the following concrete applications of the all-layer margin: 1) by analyzing the all-layer margin, we obtain tighter generalization bounds for neural nets which depend on Jacobian and hidden layer norms and remove the exponential dependency on depth; 2) our neural net results easily translate to the adversarially robust setting, giving the first direct analysis of robust test error for deep networks; and 3) we present a theoretically inspired training algorithm for increasing the all-layer margin. Our algorithm improves both clean and adversarially robust test performance over strong baselines in practice.
accept-poster
This work presents a new and interesting notion of margin for deep neural networks (one that incorporates representations at all layers). It then develops generalization bounds based on the introduced margin. The reviewers pointed out some concerns, including notation issues, complexity in the case of residual networks, removal of the exponential dependence on depth, and dependence on a hard-to-compute quantity, \kappa^{adv}. Some of these concerns were addressed by the authors. In the end, most of the reviewers find the notion of all-layer margin introduced in this paper a very novel and promising idea for characterizing generalization in deep networks. Agreeing with the reviewers, I recommend acceptance. However, I request the authors to accommodate the remaining comments/concerns raised by R1 in the final version of the paper. In particular, in your response to R1 you mentioned that in one case you saw improvement even with dropout, but that is not mentioned in the revision; please include the related details in the draft.
test
[ "BklNuffTtB", "SkeSsuF3jr", "SyegBsdnjB", "SyxFeT5jiB", "BJx2Bn5ojS", "r1gLaBqjoS", "Hyl9zrqsjS", "SJg504qojS", "rye0FNG0tB", "H1lRC3iCYH", "Hkxnszbmqr" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proves generalization bounds for Neural networks in terms of all layer margins. All layer margins are the smallest relative perturbations of outputs of each layer, that result in misclassification. This new quantity allows to show generalization bounds that scales as sum of complexities of each layer a...
[ 3, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 4, 3, 1 ]
[ "iclr_2020_HJe_yR4Fwr", "SyegBsdnjB", "r1gLaBqjoS", "iclr_2020_HJe_yR4Fwr", "BklNuffTtB", "rye0FNG0tB", "H1lRC3iCYH", "Hkxnszbmqr", "iclr_2020_HJe_yR4Fwr", "iclr_2020_HJe_yR4Fwr", "iclr_2020_HJe_yR4Fwr" ]
iclr_2020_B1l6y0VFPr
Identity Crisis: Memorization and Generalization Under Extreme Overparameterization
We study the interplay between memorization and generalization of overparameterized networks in the extreme case of a single training example and an identity-mapping task. We examine fully-connected and convolutional networks (FCN and CNN), both linear and nonlinear, initialized randomly and then trained to minimize the reconstruction error. The trained networks stereotypically take one of two forms: the constant function (memorization) and the identity function (generalization). We formally characterize generalization in single-layer FCNs and CNNs. We show empirically that different architectures exhibit strikingly different inductive biases. For example, CNNs of up to 10 layers are able to generalize from a single example, whereas FCNs cannot learn the identity function reliably from 60k examples. Deeper CNNs often fail, but nonetheless do astonishing work to memorize the training output: because CNN biases are location invariant, the model must progressively grow an output pattern from the image boundaries via the coordination of many layers. Our work helps to quantify and visualize the sensitivity of inductive biases to architectural choices such as depth, kernel width, and number of channels.
accept-poster
The paper studies the effect of various hyperparameters of neural networks, including architecture, width, depth, initialization, and optimizer, on generalization and memorization. The paper carries out a rather thorough empirical study of these phenomena. The authors also train a model to mimic the identity function, which allows rich visualization and easy evaluation. The reviewers were mostly positive but expressed concern about the general picture. One reviewer also had concerns about the "generality of the observed phenomenon in this paper". The authors had a thorough response which addressed many of these concerns. My view of the paper is positive. I think the authors do a great job of carrying out careful experiments. As a result, I think this is a good addition to ICLR and recommend acceptance.
train
[ "SJlTDSnTFH", "HJxFdVCItS", "HylQIB5uiH", "B1xd-H9_sS", "HJl4AX5usH", "rkl93vlTKB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper studies influence of different hyperparameters of neural networks: architecture, width, depth, initialization, optimizer, etc. on the generalization/memorization trade-off.\n\nTo do this, paper propose a clever trick: train a model to mimic identity function, so that output should be exactly the same, as...
[ 8, 6, -1, -1, -1, 3 ]
[ 3, 1, -1, -1, -1, 4 ]
[ "iclr_2020_B1l6y0VFPr", "iclr_2020_B1l6y0VFPr", "HJxFdVCItS", "rkl93vlTKB", "SJlTDSnTFH", "iclr_2020_B1l6y0VFPr" ]
iclr_2020_HklkeR4KPB
ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring
We improve the recently-proposed ``MixMatch'' semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation anchoring. - Distribution alignment encourages the marginal distribution of predictions on unlabeled data to be close to the marginal distribution of ground-truth labels. - Augmentation anchoring feeds multiple strongly augmented versions of an input into the model and encourages each output to be close to the prediction for a weakly-augmented version of the same input. To produce strong augmentations, we propose a variant of AutoAugment which learns the augmentation policy while the model is being trained. Our new algorithm, dubbed ReMixMatch, is significantly more data-efficient than prior work, requiring between 5 times and 16 times less data to reach the same accuracy. For example, on CIFAR-10 with 250 labeled examples we reach 93.73% accuracy (compared to MixMatch's accuracy of 93.58% with 4000 examples) and a median accuracy of 84.92% with just four labels per class.
accept-poster
This work improves the MixMatch semi-supervised algorithm along the two directions of distribution alignment and augmentation anchoring, which together make the approach more data-efficient than prior work. All reviewers agree that the impressive empirical results in the paper are its main strength, but express concern that the method is overly complicated and hacks together many known pieces, as well as doubt as to the extent of the contribution of the augmentation method itself, with requests for better augmentation controls. While some of these concerns were not addressed by the authors in their response, the strength of the empirical results seems enough to justify an acceptance recommendation.
val
[ "S1gw58DsjH", "BklzQ8Pisr", "H1eouSDijr", "r1gjMgfaYH", "rJljF0tRKS", "r1eFU86RYS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. One set of experiments that I think would be interesting is aimed at understanding the distribution-matching part. For example, it would be great if the author could demonstrate that without this loss term the distribution of the predicted classes is wrong in the experiments from Section 4.\nA: We actually ran ...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 4, 5, 4 ]
[ "r1gjMgfaYH", "rJljF0tRKS", "r1eFU86RYS", "iclr_2020_HklkeR4KPB", "iclr_2020_HklkeR4KPB", "iclr_2020_HklkeR4KPB" ]
iclr_2020_BJxWx0NYPr
Adaptive Structural Fingerprints for Graph Attention Networks
Graph attention network (GAT) is a promising framework to perform convolution and message passing on graphs. Yet, how to fully exploit rich structural information in the attention mechanism remains a challenge. In the current version, GAT calculates attention scores mainly using node features and among one-hop neighbors, while increasing the attention range to higher-order neighbors can negatively affect its performance, reflecting the over-smoothing risk of GAT (or graph neural networks in general), and the ineffectiveness in exploiting graph structural details. In this paper, we propose an ``adaptive structural fingerprint'' (ADSF) model to fully exploit graph topological details in graph attention networks. The key idea is to contextualize each node with a weighted, learnable receptive field encoding rich and diverse local graph structures. By doing this, structural interactions between the nodes can be inferred accurately, thus significantly improving the subsequent attention layer as well as the convergence of learning. Furthermore, our model provides a useful platform for different subspaces of node features and various scales of graph structures to ``cross-talk'' with each other through the learning of multi-head attention, being particularly useful in handling complex real-world data. Empirical results demonstrate the power of our approach in exploiting rich structural information in GAT and in alleviating the intrinsic over-smoothing problem in graph neural networks.
accept-poster
All three reviewers consistently supported this paper during the initial review and discussion phases. Thus, acceptance is recommended.
train
[ "BketPXOxiB", "S1xPXdp2jS", "HkxhtAD6KS", "ryxd7o9hsS", "BJlXMvwgjr", "r1l_HDiojB", "H1eSNZHrKr", "r1l6ctjijH", "Sygd55Olir", "HklUNY5Htr" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "First we highly appreciate the valuable comment from the reviewer and we are glad to have received positive comments. The reviewer gives a great summarization of the main idea of our work, and also an interesting suggestion, which allows us to further improve the quality of our work. Our response is as follows. \...
[ -1, -1, 6, -1, -1, -1, 6, -1, -1, 6 ]
[ -1, -1, 4, -1, -1, -1, 4, -1, -1, 1 ]
[ "HklUNY5Htr", "BJlXMvwgjr", "iclr_2020_BJxWx0NYPr", "r1l6ctjijH", "HkxhtAD6KS", "BJlXMvwgjr", "iclr_2020_BJxWx0NYPr", "Sygd55Olir", "H1eSNZHrKr", "iclr_2020_BJxWx0NYPr" ]
iclr_2020_BkxXe0Etwr
CAQL: Continuous Action Q-Learning
Reinforcement learning (RL) with value-based methods (e.g., Q-learning) has shown success in a variety of domains such as games and recommender systems (RSs). When the action space is finite, these algorithms implicitly find a policy by learning the optimal value function, which is often very efficient. However, one major challenge of extending Q-learning to tackle continuous-action RL problems is that obtaining the optimal Bellman backup requires solving a continuous action-maximization (max-Q) problem. While it is common to restrict the parameterization of the Q-function to be concave in actions to simplify the max-Q problem, such a restriction might lead to performance degradation. Alternatively, when the Q-function is parameterized with a generic feed-forward neural network (NN), the max-Q problem can be NP-hard. In this work, we propose the CAQL method, which minimizes the Bellman residual using Q-learning with one of several plug-and-play action optimizers. In particular, leveraging recent strides in optimization theory for deep NNs, we show that the max-Q problem can be solved optimally with mixed-integer programming (MIP)---when the Q-function has sufficient representation power, this MIP-based optimization induces better policies and is more robust than counterparts, e.g., CEM or GA, that approximate the max-Q solution. To speed up training of CAQL, we develop three techniques, namely (i) dynamic tolerance, (ii) dual filtering, and (iii) clustering. To speed up inference of CAQL, we introduce the action function that concurrently learns the optimal policy. To demonstrate the efficiency of CAQL, we compare it with state-of-the-art RL algorithms on benchmark continuous control problems that have different degrees of action constraints and show that CAQL significantly outperforms policy-based methods in heavily constrained environments.
accept-poster
All three reviewers gave scores of Weak Accept. AC has read the reviews and rebuttal and agrees that the paper makes a solid contribution and should be accepted.
train
[ "HylD-wzHor", "ByxVH8MBiH", "H1lX6OD6Kr", "S1lSZ2z0FH", "rkeMEqSY_S", "r1eqvrj_dH" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Which approximation to use?: In Section 4 we provide three techniques---dynamic tolerance, dual filtering, and clustering---to accelerate training (reduce computational cost). Dynamic tolerance is useful in a very general sense, and can be applied to any problem. It speeds up the MIP solver by adjusting its tolera...
[ -1, -1, 6, 6, -1, -1 ]
[ -1, -1, 4, 3, -1, -1 ]
[ "H1lX6OD6Kr", "S1lSZ2z0FH", "iclr_2020_BkxXe0Etwr", "iclr_2020_BkxXe0Etwr", "r1eqvrj_dH", "iclr_2020_BkxXe0Etwr" ]
iclr_2020_BJluxREKDB
Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning
We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning. We focus on a backtracking search algorithm, which can already solve formulas of impressive size - up to hundreds of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For a family of challenging problems, we learned a heuristic that solves significantly more formulas compared to the existing handwritten heuristics.
accept-poster
This paper proposes a new method for learning heuristics for quantified Boolean formulas through RL. The focus is on a backtracking search algorithm. The paper proposes a new representation of formulas to scale the predictions of this method. The reviewers have an overall positive response to this paper. R1 and R2 both agree that the paper should be accepted and have given some minor feedback to improve the paper. R3 was initially critical of the paper, but the rebuttal helped to clarify their doubts. They still have one more comment, and I encourage the authors to address this in the final version of the paper. R3 meant to increase their score, but somehow this is not reflected in the current score. Based on their comments, though, I am assuming the scores to be 6, 8, 6, which makes the cut for ICLR. Therefore, I recommend to accept this paper.
train
[ "rygvYeucsr", "ByedLWdqir", "H1x_cC8miB", "Sygo7h8XsH", "SJeDZyV5tr", "Hye1Ff45YH", "Bkg4fpnatr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We believe that review #3 contains some misunderstandings, which we address first:\n\n>This paper investigates the problem of predicting the truth of quantified boolean \n>formulae using deep reinforcement learning.\n\nThis paper does not predict the truth of formulas, but instead predicts heuristic decisions in a...
[ -1, -1, -1, -1, 3, 8, 6 ]
[ -1, -1, -1, -1, 3, 1, 3 ]
[ "SJeDZyV5tr", "rygvYeucsr", "Bkg4fpnatr", "Hye1Ff45YH", "iclr_2020_BJluxREKDB", "iclr_2020_BJluxREKDB", "iclr_2020_BJluxREKDB" ]
iclr_2020_rkgOlCVYvB
Pure and Spurious Critical Points: a Geometric Study of Linear Networks
The critical locus of the loss function of a neural network is determined by the geometry of the functional space and by the parameterization of this space by the network's weights. We introduce a natural distinction between pure critical points, which only depend on the functional space, and spurious critical points, which arise from the parameterization. We apply this perspective to revisit and extend the literature on the loss function of linear neural networks. For this type of network, the functional space is either the set of all linear maps from input to output space, or a determinantal variety, i.e., a set of linear maps with bounded rank. We use geometric properties of determinantal varieties to derive new results on the landscape of linear networks with different loss functions and different parameterizations. Our analysis clearly illustrates that the absence of "bad" local minima in the loss landscape of linear networks is due to two distinct phenomena that apply in different settings: it is true for arbitrary smooth convex losses in the case of architectures that can express all linear maps ("filling architectures") but it holds only for the quadratic loss when the functional space is a determinantal variety ("non-filling architectures"). Without any assumption on the architecture, smooth convex losses may lead to landscapes with many bad minima.
accept-poster
This paper studies the landscape of linear networks and its critical points. The authors utilize geometric properties of determinantal varieties to derive interesting results on the landscape of linear networks. The reviewers raised some concerns about the fact that many of the results stated here can already be achieved using other techniques, and therefore had some concerns about the novelty of these results. The authors provided a detailed response addressing these concerns. One reviewer, however, still had some concerns about the novelty. My own understanding of the paper is that while some of these results can be obtained using other approaches, the proof techniques (bringing ideas from algebraic geometry) are novel and could be rather useful. While at this point it is not clear that the techniques generalize to the nonlinear case, I think the algebraic-geometry perspective has good potential and provides some diversity in the theoretical techniques. As a result, I recommend acceptance if possible.
train
[ "S1l-FJK8jS", "r1lMcltIor", "BkgRLxYIoH", "HJesexYLjr", "rJxKVkKUiB", "ryeoxyAntS", "BJe-Pb7RKS", "S1lfYa-IcH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Extensions of the analysis to nonlinear networks**\n\nWe discuss possible extensions of our analysis to the case of nonlinear networks. If the paper is accepted, we will include some of these observations in the conclusions.\n\nFirst, we emphasize that the notions of pure and spurious critical points are well-de...
[ -1, -1, -1, -1, -1, 8, 3, 3 ]
[ -1, -1, -1, -1, -1, 1, 4, 3 ]
[ "rJxKVkKUiB", "ryeoxyAntS", "BJe-Pb7RKS", "S1lfYa-IcH", "iclr_2020_rkgOlCVYvB", "iclr_2020_rkgOlCVYvB", "iclr_2020_rkgOlCVYvB", "iclr_2020_rkgOlCVYvB" ]
iclr_2020_SJeYe0NtvH
Neural Text Generation With Unlikelihood Training
Neural text generation is a key tool in natural language applications, but it is well known there are major problems at its core. In particular, standard likelihood training and decoding leads to dull and repetitive outputs. While some post-hoc fixes have been proposed, in particular top-k and nucleus sampling, they do not address the fact that the token-level probabilities predicted by the model are poor. In this paper we show that the likelihood objective itself is at fault, resulting in a model that assigns too much probability to sequences containing repeats and frequent words, unlike those from the human training distribution. We propose a new objective, unlikelihood training, which forces unlikely generations to be assigned lower probability by the model. We show that both token and sequence level unlikelihood training give less repetitive, less dull text while maintaining perplexity, giving superior generations using standard greedy or beam search. According to human evaluations, our approach with standard beam search also outperforms the currently popular decoding methods of nucleus sampling or beam blocking, thus providing a strong alternative to existing techniques.
accept-poster
This paper introduces a new objective for text generation with neural nets. The main insight is that the standard likelihood objective assigns excessive probability to sequences containing repeated and frequent words. The paper proposes an objective that penalizes these patterns. This technique yields better text generation than alternative methods according to human evaluations. The reviewers found the paper to be written clearly. They found the problem to be relevant and found the proposed solution method to be both novel and simple. The experiments were carefully designed and the results were convincing. The reviewers raised several concerns on particular details of the method. These concerns were largely addressed by the authors in their response. Overall, the reviewers did not find the weaknesses of the paper to be serious flaws. This paper should be published. The paper provides a clearly presented solution for a relevant problem, along with careful experiments.
val
[ "H1lxTev3Yr", "HkxH_epujS", "Sklyik6ujB", "S1etXJadjB", "HJeHygz6YH", "ryeEqCDG5B" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThis paper proposes training losses, unlikelihood objective, for mitigating the repetition problem of the text generated by recent neural language models. The problem is well-motivated by evidence from the existing literature. Specifically, the paper argues that the main cause of the degenerated output is the ma...
[ 6, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, 4, 4 ]
[ "iclr_2020_SJeYe0NtvH", "H1lxTev3Yr", "HJeHygz6YH", "ryeEqCDG5B", "iclr_2020_SJeYe0NtvH", "iclr_2020_SJeYe0NtvH" ]
iclr_2020_rJeqeCEtvH
Semi-Supervised Generative Modeling for Controllable Speech Synthesis
We present a novel generative model that combines state-of-the-art neural text-to-speech (TTS) with semi-supervised probabilistic latent variable models. By providing partial supervision to some of the latent variables, we are able to force them to take on consistent and interpretable purposes, which previously hasn’t been possible with purely unsupervised methods. We demonstrate that our model is able to reliably discover and control important but rarely labelled attributes of speech, such as affect and speaking rate, with as little as 1% (30 minutes) supervision. Even at such low supervision levels we do not observe a degradation of synthesis quality compared to a state-of-the-art baseline. We will release audio samples at https://google.github.io/tacotron/publications/semisupervised_generative_modeling_for_controllable_speech_synthesis/.
accept-poster
The authors propose to enforce interpretability and controllability on latent variables, like affect and speaking rate, in a speech synthesis model by training in a semi-supervised way, with a small amount of labeled data with the variables of interest labeled. The idea is sensible and the results are very encouraging, and the authors have addressed the initial concerns brought up by the reviewers.
train
[ "BygP7mHAtr", "BygnsoO3Kr", "Bkgb0FksYS", "rklAIqvjjr", "Hklj08DijB", "H1xOK6qssB", "B1xrsKPssB", "ryewNFwjjS", "HkgcGKPjsr", "Bkewu_voor", "SJg-bPDosr", "rygNsSPosS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "The authors propose to do neural text to speech, conditioned on attributes such as valence and arousal and speech rate. A seq-to-seq network is trained using stochastic gradient variational Bayes. \nThe idea is interesting and new.\n\nThe method section could be made clearer by giving first some intuition, ...
[ 6, 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_rJeqeCEtvH", "iclr_2020_rJeqeCEtvH", "iclr_2020_rJeqeCEtvH", "B1xrsKPssB", "rygNsSPosS", "iclr_2020_rJeqeCEtvH", "Bkgb0FksYS", "HkgcGKPjsr", "Bkewu_voor", "BygnsoO3Kr", "Hklj08DijB", "BygP7mHAtr" ]
iclr_2020_SkxybANtDB
Dynamic Time Lag Regression: Predicting What & When
This paper tackles a new regression problem, called Dynamic Time-Lag Regression (DTLR), where a cause signal drives an effect signal with an unknown time delay. The motivating application, pertaining to space weather modelling, aims to predict the near-Earth solar wind speed based on estimates of the Sun's coronal magnetic field. DTLR differs from mainstream regression and from sequence-to-sequence learning in two respects: firstly, no ground truth (e.g., pairs of associated sub-sequences) is available; secondly, the cause signal contains much information irrelevant to the effect signal (the solar magnetic field governs the solar wind propagation in the heliosphere, of which the Earth's magnetosphere is but a minuscule region). A Bayesian approach is presented to tackle the specifics of the DTLR problem, with theoretical justifications based on linear stability analysis. A proof of concept on synthetic problems is presented. Finally, the empirical results on the solar wind modelling task improve on the state of the art in solar wind forecasting.
accept-poster
The paper proposes a Bayesian approach for time-series regression when the explanatory time-series influences the response time-series with a time lag. The time lag is unknown and is allowed to be a non-stationary process. Reviewers appreciated the significance of the problem and the novelty of the proposed method, and also highlighted the importance of the application domain considered by the paper.
test
[ "S1xcpKz3oH", "Byx0S2M3jH", "SkgnSiznsH", "HygbsdMniH", "Syx3u5oKFB", "HyleWB2CYH", "rkxgO3WwcB", "rJg8LdshcH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Re novelty (it is a quite normal model for spatio-temporal data): \nIn mainstream spatio-temporal problems, there is no dynamic unknown time-lag phenomenon. More precisely, for most problems in biology, ecology, meteorology, medicine and forestry, the time-lag (between the cause series and the effect series) foll...
[ -1, -1, -1, -1, 6, 6, 6, 8 ]
[ -1, -1, -1, -1, 1, 1, 3, 4 ]
[ "rkxgO3WwcB", "Syx3u5oKFB", "HyleWB2CYH", "rJg8LdshcH", "iclr_2020_SkxybANtDB", "iclr_2020_SkxybANtDB", "iclr_2020_SkxybANtDB", "iclr_2020_SkxybANtDB" ]
iclr_2020_HkgxW0EYDS
Scalable Model Compression by Entropy Penalized Reparameterization
We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a “latent” space, amounting to a reparameterization. This space is equipped with a learned probability model, which is used to impose an entropy penalty on the parameter representation during training, and to compress the representation using a simple arithmetic coder after training. Classification accuracy and model compressibility is maximized jointly, with the bitrate–accuracy trade-off specified by a hyperparameter. We evaluate the method on the MNIST, CIFAR-10 and ImageNet classification benchmarks using six distinct model architectures. Our results show that state-of-the-art model compression can be achieved in a scalable and general way without requiring complex procedures such as multi-stage training.
accept-poster
The paper describes a simple method for neural network compression by applying Shannon-type encoding. This is a fresh and nice idea, as noted by the reviewers. A disadvantage is that the architectures used on ImageNet are not the most efficient ones. Also, the paper's discussion of related work misses several important works on low-rank factorization of weights for compression (Lebedev et al., Novikov et al.). But overall, a good paper.
train
[ "HkeBVXFusH", "BJeX7qddjr", "B1gRaYuuoB", "S1gcuF_ujS", "r1xcYOqhKS", "ryeXVdBRKB", "H1eIk0dc9B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers,\n\nthank you for your valuable suggestions. We uploaded a new version of the paper with some of the changes we discuss in our individual responses below. In particular, we simplified the notation, and we now discuss further references, as suggested. We also took the liberty to improve some of the f...
[ -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2020_HkgxW0EYDS", "H1eIk0dc9B", "ryeXVdBRKB", "r1xcYOqhKS", "iclr_2020_HkgxW0EYDS", "iclr_2020_HkgxW0EYDS", "iclr_2020_HkgxW0EYDS" ]
iclr_2020_Bkl7bREtDr
AMRL: Aggregated Memory For Reinforcement Learning
In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy. We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration. Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs. Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order. We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time. Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters.
accept-poster
This paper introduces a way to augment memory in recurrent neural networks with order-independent aggregators. In noisy environments this results in an increase in training speed and stability. The reviewers considered this to be a strong paper with potential for impact, and were satisfied with the author response to their questions and concerns.
train
[ "SJgxcBUnoH", "rJxMhbcAtB", "ryxqgvvRtS", "BJxg4lI2sB", "BJxv2X8Esr", "BJemcNLEsB", "Skxn0MIEsr", "H1gKWjh15H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thanks for the polite response. I've updated the review score.", "UPDATE: The response helped address my questions. I've raised the score to 6.\n\nThis paper studies reinforcement learning for settings where the observations contain noise and where observations have long-range dependencies with the past. The pro...
[ -1, 6, 8, -1, -1, -1, -1, 6 ]
[ -1, 1, 5, -1, -1, -1, -1, 4 ]
[ "BJxv2X8Esr", "iclr_2020_Bkl7bREtDr", "iclr_2020_Bkl7bREtDr", "BJemcNLEsB", "rJxMhbcAtB", "ryxqgvvRtS", "H1gKWjh15H", "iclr_2020_Bkl7bREtDr" ]
iclr_2020_HJxV-ANKDH
Efficient Riemannian Optimization on the Stiefel Manifold via the Cayley Transform
Strictly enforcing orthonormality constraints on parameter matrices has been shown advantageous in deep learning. This amounts to Riemannian optimization on the Stiefel manifold, which, however, is computationally expensive. To address this challenge, we present two main contributions: (1) A new efficient retraction map based on an iterative Cayley transform for optimization updates, and (2) An implicit vector transport mechanism based on the combination of a projection of the momentum and the Cayley transform on the Stiefel manifold. We specify two new optimization algorithms: Cayley SGD with momentum, and Cayley ADAM on the Stiefel manifold. Convergence of Cayley SGD is theoretically analyzed. Our experiments for CNN training demonstrate that both algorithms: (a) Use less running time per iteration relative to existing approaches that enforce orthonormality of CNN parameters; and (b) Achieve faster convergence rates than the baseline SGD and ADAM algorithms without compromising the performance of the CNN. Cayley SGD and Cayley ADAM are also shown to reduce the training time for optimizing the unitary transition matrices in RNNs.
accept-poster
This paper presents a method for optimizing parameter matrices of deep learning objectives while enforcing orthonormality constraints. While advantageous in certain respects, such constraints can be expensive to maintain when using existing methods. To address this issue, a new algorithm is proposed based on the Cayley transform and analyzed in terms of convergence. After the discussion period, two reviewers supported acceptance while one still voted for rejection. Consequently, in recommending acceptance here for a poster, it is worth examining the significance of the unresolved concerns. First, the rejecting reviewer raised the valid point that the convergence proof relies on the assumption of Lipschitz continuous gradients, and yet the experiments use ReLU activation functions that do not satisfy this criterion. In my view though, it is sometimes reasonable to derive useful theory under the assumption of Lipschitz continuous derivatives that nonetheless provides insight into the case where these derivatives may not be Lipschitz on a set of measure zero (which would be the case with ReLU activations). So while ideally it might be nice to extend the theory to remove this assumption, the algorithm seems to work fine with ReLU activations in practice. And this seems reasonable given the improbability of any iterate exactly hitting the measure-zero points where the gradients are discontinuous. Beyond this issue, some criticisms were mentioned in terms of how and where the timing comparisons were presented. However, I believe that these issues can be easily remedied in a final revision.
train
[ "Bkg1qrA6KS", "Bkxvuko2iB", "SkgK8A9hjS", "SJe6GZAsiB", "rkg0GANniS", "H1xzebf3iH", "HkxGoYasjr", "HylvOITijS", "r1xFtxTjor", "B1lfdOP6KH", "SJx4F9R-9r", "rJe61D1T9B", "Skl9XLDEcH", "SkeWyYP2YS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "Summary\nThis paper aims to improve upon current solutions for optimizing neural networks with orthonormal convolutional kernels/MLP layers. Optimizing neural networks while restricting the parameter matrices to remain orthonormal/on the Stiefel manifold is said to lead to faster convergence in terms of the number...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, -1, -1, -1 ]
[ "iclr_2020_HJxV-ANKDH", "H1xzebf3iH", "rkg0GANniS", "B1lfdOP6KH", "SJe6GZAsiB", "HkxGoYasjr", "Bkg1qrA6KS", "SJx4F9R-9r", "rJe61D1T9B", "iclr_2020_HJxV-ANKDH", "iclr_2020_HJxV-ANKDH", "Skl9XLDEcH", "SkeWyYP2YS", "iclr_2020_HJxV-ANKDH" ]
iclr_2020_HkgrZ0EYwB
Unpaired Point Cloud Completion on Real Scans using Adversarial Training
As 3D scanning solutions become increasingly popular, several deep learning setups have been developed for the task of scan completion, i.e., plausibly filling in regions that were missed in the raw scans. These methods, however, largely rely on supervision in the form of paired training data, i.e., partial scans with corresponding desired completed scans. While these methods have been successfully demonstrated on synthetic data, the approaches cannot be directly used on real scans in absence of suitable paired training data. We develop a first approach that works directly on input point clouds, does not require paired training data, and hence can directly be applied to real scans for scan completion. We evaluate the approach qualitatively on several real-world datasets (ScanNet, Matterport3D, KITTI), quantitatively on 3D-EPN shape completion benchmark dataset, and demonstrate realistic completions under varying levels of incompleteness.
accept-poster
This paper presents an unsupervised method for completing point clouds obtained from real 3D scans based on GANs. Generally, the paper is well-organized, and its contributions and experimental support are clearly presented, which left all reviewers with positive impressions. Although the technical contribution of the method seems marginal, as it is essentially a combination of established methods, it fits well into a novel and practical application scenario, and its usefulness is convincingly demonstrated in extensive experiments. We conclude that the paper provides insights that compensate for the weakness in technical novelty, so I recommend acceptance.
train
[ "BklAiJIOsH", "HJxonprdiS", "BkgH-1IdsS", "H1gpNRSOsH", "rklENFSJqB", "rkgx7edm5B", "SyguezJ3cr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank Reviewer#1 for valuable comments.\n\nQ: “how are the hyper-parameters set in the unsupervised setting? Appendix D details the model selection procedure: using a validation set and select the model that gives the best f1 score. I do not understand how this can be done without ground truth.”\n\nA: We thank ...
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 1, 4, 3 ]
[ "rklENFSJqB", "SyguezJ3cr", "rkgx7edm5B", "SyguezJ3cr", "iclr_2020_HkgrZ0EYwB", "iclr_2020_HkgrZ0EYwB", "iclr_2020_HkgrZ0EYwB" ]
iclr_2020_HJe_Z04Yvr
Adjustable Real-time Style Transfer
Artistic style transfer is the problem of synthesizing an image with content similar to a given image and style similar to another. Although recent feed-forward neural networks can generate stylized images in real-time, these models produce a single stylization given a pair of style/content images, and the user doesn't have control over the synthesized output. Moreover, the style transfer depends on the hyper-parameters of the model with varying "optimum" for different input images. Therefore, if the stylized output is not appealing to the user, she/he has to try multiple models or retrain one with different hyper-parameters to get a favorite stylization. In this paper, we address these issues by proposing a novel method which allows adjustment of crucial hyper-parameters, after the training and in real-time, through a set of manually adjustable parameters. These parameters enable the user to modify the synthesized outputs from the same pair of style/content images, in search of a favorite stylized image. Our quantitative and qualitative experiments indicate how adjusting these parameters is comparable to retraining the model with different hyper-parameters. We also demonstrate how these parameters can be randomized to generate results which are diverse but still very similar in style and content.
accept-poster
This paper offers an innovative approach to adjusting style transfer parameters. The reviewers were consistent, and all recommend acceptance. I concur.
train
[ "rJgU0j9sjB", "HygJNiqsoH", "HyxqK55sjB", "HJgiTy_GsB", "B1xU8is7qS", "ByeD9kIN5S" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nWe thank the reviewer for their helpful and comprehensive review. We updated the paper the address the main raised issues and clarify the questions. Please find the detailed answers below.\n\n==============================\n\n[Q] I'd highly encourage the authors to consolidate some of the early parts of the pape...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 1, 5, 1 ]
[ "B1xU8is7qS", "ByeD9kIN5S", "HJgiTy_GsB", "iclr_2020_HJe_Z04Yvr", "iclr_2020_HJe_Z04Yvr", "iclr_2020_HJe_Z04Yvr" ]
iclr_2020_rygFWAEFwS
Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well
We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel. The resulting models generalize as well as those trained with small mini-batches but are produced in a substantially shorter time. We demonstrate the reduction in training time and the good generalization performance of the resulting models on the computer vision datasets CIFAR10, CIFAR100, and ImageNet.
accept-poster
The authors proposed a simple and effective approach to parallel training based on stochastic weight averaging. Moreover, the authors have carefully addressed the reviewer comments in the discussion period, particularly the relation to local SGD, to the satisfaction of reviewers. Local SGD mimics sequential SGD with noise induced by lack of synchronization, whereas SWAP averages multiple samples from a stationary distribution, and synchronizes at the end. Please clarify these points and carefully account for reviewer comments in the final version. Overall, the proposed approach will make an excellent addition to the program, both elegant and practically useful.
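The SWAP recipe described above (fast large-batch phase, independent noisy refinement in parallel, weight averaging at the end) can be mimicked on a toy quadratic loss. Everything below is illustrative: the "workers" are sequential loops standing in for parallel processes, and the noise scales stand in for small- versus large-batch gradient noise.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])  # optimum of the toy quadratic loss

def grad(w, noise_scale):
    # Gradient of ||w - target||^2 plus synthetic mini-batch noise.
    return 2 * (w - target) + noise_scale * rng.standard_normal(3)

# Phase 1: fast, low-noise (large-batch-like) descent to a rough solution.
w = rng.standard_normal(3)
for _ in range(50):
    w -= 0.1 * grad(w, noise_scale=0.1)

# Phase 2: several "workers" refine independently with noisier
# (small-batch-like) steps, sampling around the optimum.
workers = []
for _ in range(8):
    v = w.copy()
    for _ in range(100):
        v -= 0.05 * grad(v, noise_scale=1.0)
    workers.append(v)

# Phase 3: averaging the workers' weights cancels much of the noise.
w_swap = np.mean(workers, axis=0)
print(float(np.linalg.norm(w_swap - target)))
```

The averaging step is what distinguishes SWAP from approaches like local SGD, which the meta-review also highlights: here each worker is an independent sample from (approximately) the same stationary distribution, synchronized only once at the end.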
train
[ "HJxGKR8pYH", "Skl7GqVisr", "BJxyvtNijH", "S1eQvhgvjS", "S1eVmCKvKB", "Ske2EIQ0KH", "SklCx0H1cH", "HJg5GyRAtB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "public" ]
[ "This paper proposes a 2-stage SGD variant that improves the generalization. The experiments show good performance.\nHowever, there are some weakness in this paper:\n\n1. (Minor issue) The Update() function in Algorithm 1 seems to be something very general. However, it seems that Update() is simply SGD or SGD with ...
[ 6, -1, -1, -1, 6, 3, -1, -1 ]
[ 3, -1, -1, -1, 3, 4, -1, -1 ]
[ "iclr_2020_rygFWAEFwS", "Ske2EIQ0KH", "S1eVmCKvKB", "HJxGKR8pYH", "iclr_2020_rygFWAEFwS", "iclr_2020_rygFWAEFwS", "HJg5GyRAtB", "iclr_2020_rygFWAEFwS" ]
iclr_2020_Byg5ZANtvH
Short and Sparse Deconvolution --- A Geometric Approach
Short-and-sparse deconvolution (SaSD) is the problem of extracting localized, recurring motifs in signals with spatial or temporal structure. Variants of this problem arise in applications such as image deblurring, microscopy, neural spike sorting, and more. The problem is challenging in both theory and practice, as natural optimization formulations are nonconvex. Moreover, practical deconvolution problems involve smooth motifs (kernels) whose spectra decay rapidly, resulting in poor conditioning and numerical challenges. This paper is motivated by recent theoretical advances \citep{zhang2017global,kuo2019geometry}, which characterize the optimization landscape of a particular nonconvex formulation of SaSD. This is used to derive a provable algorithm that exactly solves certain non-practical instances of the SaSD problem. We leverage the key ideas from this theory (sphere constraints, data-driven initialization) to develop a practical algorithm, which performs well on data arising from a range of application areas. We highlight key additional challenges posed by the ill-conditioning of real SaSD problems and suggest heuristics (acceleration, continuation, reweighting) to mitigate them. Experiments demonstrate the performance and generality of the proposed method.
accept-poster
The work considers the sparse and short blind deconvolution problem, which is to invert a convolution of a sparse source (such as spikes at cell locations in microscopy) with a short (of limited spatial size) kernel or point spread function, not known in advance. This is posed as a bilinear lasso optimization problem. The work applies a non-linear optimization method with some practical improvements (such as data-driven initialization, momentum, homotopy continuation). The paper extends the work by Kuo et al. (2019) by providing a practical algorithm for solving those inverse problems. A focus of the paper is to solve the bilinear lasso instead of the approximate bilinear lasso, because this approximation is poor for coherent problems. Having read the rebuttal and the paper, I believe the authors addressed the issues raised by Reviewer #2 in a sufficient way. small things: - it would be good to define $\iota$ (zero-padding operator) in (1) - it would be good to define $p, p_0$ just below (3). They seem to be appearing out of the blue without any direct relation to anything mentioned prior in section 2. - it would be good to cite some older/historic references for various optimization methods, e.g., [1] below. [1] Richter & deCarlo Continuation methods: Theory and applications IEEE Transactions on Systems, Man, and Cybernetics, 1983 https://ieeexplore.ieee.org/abstract/document/6313131
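The bilinear lasso objective at the center of this paper can be written out concretely: minimize 0.5 * ||y - a * x||^2 + lam * ||x||_1 over a sparse map x and a kernel a constrained to the sphere ||a|| = 1 (with * denoting convolution). The sketch below just evaluates this objective on synthetic data; sizes, the regularizer weight, and the renormalization-as-retraction comment are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a short kernel on the sphere and a sparse spike train.
a_true = np.array([0.2, 1.0, 0.2])
a_true /= np.linalg.norm(a_true)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
y = np.convolve(a_true, x_true)  # observed signal

def objective(a, x, lam=0.1):
    # Bilinear lasso: data fit plus l1 sparsity penalty on x.
    return 0.5 * np.sum((y - np.convolve(a, x)) ** 2) + lam * np.abs(x).sum()

# Sphere constraint on the kernel: after any gradient step on a,
# renormalizing is the simplest retraction back onto ||a|| = 1.
a = rng.standard_normal(3)
a /= np.linalg.norm(a)

# The true (kernel, map) pair attains a lower objective than a random kernel.
print(objective(a_true, x_true) <= objective(a, x_true))
```

The sphere constraint is not cosmetic: it removes the scale ambiguity between `a` and `x` that makes the unconstrained bilinear problem degenerate, which is exactly the geometric structure the cited theory exploits.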
train
[ "SyeLEY3ssH", "S1lPN6hjoB", "Hyeqzh3sjH", "HklZ3uqptH", "rJlh3raLqH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the accurate interpretation and the appreciation of our work. We address your comments below.\n\n*Comparison with state-of-the-art methods on practical problems. As suggested by the reviewer, we have added an experiment that compares the proposed method with state-of-the-art algorithms (...
[ -1, -1, -1, 6, 3 ]
[ -1, -1, -1, 1, 1 ]
[ "HklZ3uqptH", "Hyeqzh3sjH", "rJlh3raLqH", "iclr_2020_Byg5ZANtvH", "iclr_2020_Byg5ZANtvH" ]
iclr_2020_HJg2b0VYDr
Selection via Proxy: Efficient Data Selection for Deep Learning
Data selection methods, such as active learning and core-set selection, are useful tools for machine learning on large datasets. However, they can be prohibitively expensive to apply in deep learning because they depend on feature representations that need to be learned. In this work, we show that we can greatly improve the computational efficiency by using a small proxy model to perform data selection (e.g., selecting data points to label for active learning). By removing hidden layers from the target model, using smaller architectures, and training for fewer epochs, we create proxies that are an order of magnitude faster to train. Although these small proxy models have higher error rates, we find that they empirically provide useful signals for data selection. We evaluate this "selection via proxy" (SVP) approach on several data selection tasks across five datasets: CIFAR10, CIFAR100, ImageNet, Amazon Review Polarity, and Amazon Review Full. For active learning, applying SVP can give an order of magnitude improvement in data selection runtime (i.e., the time it takes to repeatedly train and select points) without significantly increasing the final error (often within 0.1%). For core-set selection on CIFAR10, proxies that are over 10× faster to train than their larger, more accurate targets can remove up to 50% of the data without harming the final accuracy of the target, leading to a 1.6× end-to-end training time improvement.
accept-poster
This paper proposes to perform sample selection for deep learning - which can be very computationally expensive - using a smaller and simpler proxy network. The paper shows that such proxies are faster to train and do not substantially harm the accuracy of the final network. The reviewers were all in agreement that the problem is important, and that the paper is comprehensive and well executed. I therefore recommend it should be accepted.
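The "selection via proxy" idea is easy to sketch: train a small, cheap proxy model and use its uncertainty to rank unlabeled points for the expensive target model. The toy below uses a hand-rolled logistic regression as the proxy and least-confidence (margin to p = 0.5) as the selection rule; the data, model, and rule are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # true boundary: x0 + x1 = 0

# Cheap proxy: logistic regression fit by plain gradient descent.
w = np.zeros(2)
for _ in range(200):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Uncertainty sampling: pick the points the proxy is least sure about,
# i.e. those with predicted probability closest to 0.5. These cluster
# near the decision boundary, where labels are most informative.
p = 1 / (1 + np.exp(-X @ w))
selected = np.argsort(np.abs(p - 0.5))[:20]

print(np.abs(X[selected, 0] + X[selected, 1]).mean()
      < np.abs(X[:, 0] + X[:, 1]).mean())
```

The point of the paper is that this ranking signal survives the downgrade to a much weaker model, so the expensive target network never has to be retrained inside the selection loop.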
train
[ "rJg6y-fhor", "BJex1If3iB", "r1eLXlG2or", "HJlG-8MhiS", "B1xdvSf3oB", "HklEW4MhiS", "Hkxo5Xfhor", "Hyxoogf3sr", "BJxUiL7GiH", "rkgV59XrYr", "B1ewdKNaYH" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "# R4P5: “- In the abstract: \"improvement in data selection runtime\" Is this \"data selection runtime\" different from the total runtime? If not, it could be clearer to simply state it as the \"total runtime (including the time to repeatedly train and select points)\". I was unsure if this was a different measure...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 4, 4 ]
[ "r1eLXlG2or", "rkgV59XrYr", "BJxUiL7GiH", "rkgV59XrYr", "B1ewdKNaYH", "Hkxo5Xfhor", "B1ewdKNaYH", "BJxUiL7GiH", "iclr_2020_HJg2b0VYDr", "iclr_2020_HJg2b0VYDr", "iclr_2020_HJg2b0VYDr" ]
iclr_2020_B1lnbRNtwr
Global Relational Models of Source Code
Models of code can learn distributed representations of a program's syntax and semantics to predict many non-trivial properties of a program. Recent state-of-the-art models leverage highly structured representations of programs, such as trees, graphs and paths therein (e.g. data-flow relations), which are precise and abundantly available for code. This provides a strong inductive bias towards semantically meaningful relations, yielding more generalizable representations than classical sequence-based models. Unfortunately, these models primarily rely on graph-based message passing to represent relations in code, which makes them de facto local due to the high cost of message-passing steps, quite in contrast to modern, global sequence-based models, such as the Transformer. In this work, we bridge this divide between global and structured models by introducing two new hybrid model families that are both global and incorporate structural bias: Graph Sandwiches, which wrap traditional (gated) graph message-passing layers in sequential message-passing layers; and Graph Relational Embedding Attention Transformers (GREAT for short), which bias traditional Transformers with relational information from graph edge types. By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation. Starting with a graph-based model that already improves upon the prior state-of-the-art for this task by 20%, we show that our proposed hybrid models improve an additional 10-15%, while training both faster and using fewer parameters.
accept-poster
The paper investigates hybrid NN architectures to represent programs, involving both local (RNN, Transformer) and global (Gated Graph NN) structures, with the goal of exploiting the program structure while permitting the fast flow of information through the whole program. The proof of concept for the quality of the representation is the performance on the VarMisuse task (identifying where a variable was replaced by another one, and which variable was the correct one). Other criteria regard the computational cost of training and number of parameters. Varied architectures, involving fast and local transmission with and without attention mechanisms, are investigated, comparing full graphs and compressed (leaves-only) graphs. The lessons learned concern the trade-off between the architecture of the model, the computational time and the learning curve. It is suggested that the Transformer learns from scratch to connect the tokens as appropriate; and that interleaving RNN and GNN allows for more effective processing, with fewer message passes and fewer parameters with improved accuracy. A first issue raised by the reviewers concerns the computational time (ca. 100 hours on P100 GPUs); the authors focus on the performance gain w.r.t. GGNN in terms of computational time (significant) and in terms of epochs. Another concern raised by the reviewers is the moderate originality of the proposed architecture. I strongly recommend that the authors make their architecture public; this is, in my opinion, the best way to evidence the originality of the proposed solution. The authors did a good job in answering the other concerns, in particular concerning the computational time and the choice of the samples. I thus recommend acceptance.
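The GREAT component described in the abstract, biasing Transformer attention with relational information from graph edges, can be sketched in a few lines: compute standard scaled dot-product scores, then add a learned scalar per edge type between each pair of positions before the softmax. The shapes and the scalar-bias form below are assumptions for illustration; the paper's exact parameterization may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_edge_types = 5, 8, 3

Q = rng.standard_normal((n, d))                  # query vectors per token
K = rng.standard_normal((n, d))                  # key vectors per token
edge_type = rng.integers(0, n_edge_types, (n, n))  # e.g. data-flow relations
edge_bias = rng.standard_normal(n_edge_types)      # learned bias per edge type

# Scaled dot-product logits, shifted by the relational bias for the edge
# connecting each (query, key) pair, then a row-wise softmax.
logits = Q @ K.T / np.sqrt(d) + edge_bias[edge_type]
attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)

print(attn.shape)
```

The appeal over pure message passing is that attention remains global (every token can attend to every other in one layer) while the graph structure still injects an inductive bias through the logits.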
train
[ "SJgoCH9TKB", "H1gGWvQ5oB", "HkxIsMCKir", "ryg-TanKsS", "ryxhyq3tjB", "Bkg3QfqQoS", "HygHbzqXiH", "SJgK1zcQir", "r1xSTbc7sS", "HklypX-iYH", "HJelNGkaKr", "BygeU4qC_S", "SJeTdt32OS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "The paper proposes improvements on existing probabilistic models for code that predicts and repairs variable misuses. This is a variant of the task, proposed by Vasic et al. The task takes a dataset of python functions, introduces errors in these functions and makes a classifier that would identify what errors wer...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, -1, -1 ]
[ "iclr_2020_B1lnbRNtwr", "HkxIsMCKir", "ryg-TanKsS", "SJgK1zcQir", "r1xSTbc7sS", "iclr_2020_B1lnbRNtwr", "HklypX-iYH", "SJgoCH9TKB", "HJelNGkaKr", "iclr_2020_B1lnbRNtwr", "iclr_2020_B1lnbRNtwr", "SJeTdt32OS", "iclr_2020_B1lnbRNtwr" ]
iclr_2020_BJl6bANtwH
Detecting Extrapolation with Local Ensembles
We present local ensembles, a method for detecting extrapolation at test time in a pre-trained model. We focus on underdetermination as a key component of extrapolation: we aim to detect when many possible predictions are consistent with the training data and model class. Our method uses local second-order information to approximate the variance of predictions across an ensemble of models from the same class. We compute this approximation by estimating the norm of the component of a test point's gradient that aligns with the low-curvature directions of the Hessian, and provide a tractable method for estimating this quantity. Experimentally, we show that our method is capable of detecting when a pre-trained model is extrapolating on test data, with applications to out-of-distribution detection, detecting spurious correlates, and active learning.
accept-poster
This paper presents an ensembling approach to detect underdetermination for extrapolating to test points. The problem domain is interesting and the approach is simple and useful. While reviewers were positive about the work, they raised several points for improvement. The authors are strongly encouraged to include the discussion here in the final version.
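The local-ensembles score described in the abstract, the norm of a test point's prediction gradient projected onto the low-curvature directions of the training-loss Hessian, can be demonstrated on a toy linear model where the Hessian is available in closed form. The model, threshold, and features below are illustrative assumptions; the paper estimates this quantity tractably for neural networks rather than by exact eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, 50)

# Model f(x; w) = w1*x + w2*x^3 with squared loss; for a linear-in-w
# model the (Gauss-Newton) Hessian is the feature second-moment matrix.
phi = lambda x: np.stack([x, x**3], axis=-1)
H = phi(x_train).T @ phi(x_train) / len(x_train)

# Keep the low-curvature ("flat") eigendirections: along these, many
# weight settings fit the training data equally well (underdetermination).
eigvals, eigvecs = np.linalg.eigh(H)
low_curv = eigvecs[:, eigvals < 0.5 * eigvals.max()]

def score(x):
    # Gradient of the prediction w.r.t. the weights is just the feature map.
    g = phi(np.atleast_1d(np.asarray(x, dtype=float)))[0]
    return float(np.linalg.norm(low_curv.T @ g))

# Far outside the training range, the flat directions matter a lot,
# so the extrapolation score is much larger than in-distribution.
print(score(5.0) > score(0.1))
```

Intuitively, `x` and `x^3` are nearly collinear on [-1, 1], so one Hessian eigenvalue is tiny; a far-away test point's gradient has a large component along that poorly-pinned-down direction, flagging extrapolation.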
train
[ "ByeyHIaqtr", "HyeaUbbior", "S1gXvgDqiB", "Skx8NgPcsS", "B1g3hkD9jB", "HylycJP9jB", "HylPlkv5sS", "HJxqsALqiS", "S1gq0LRRKH", "BJxhlmtn9H" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "# Summary of contribution\n- The paper provides a novel fast and simple approximation of second-order local parameter sensitivity of neural networks, to estimate a form of uncertainty wrt to a test sample, which is further used and tested as a novelty detector. \n- The method analyzes the most significant eigenvec...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_BJl6bANtwH", "iclr_2020_BJl6bANtwH", "ByeyHIaqtr", "ByeyHIaqtr", "HylycJP9jB", "ByeyHIaqtr", "S1gq0LRRKH", "BJxhlmtn9H", "iclr_2020_BJl6bANtwH", "iclr_2020_BJl6bANtwH" ]
iclr_2020_S1eRbANtDB
Learning to Link
Clustering is an important part of many modern data analysis pipelines, including network analysis and data retrieval. There are many different clustering algorithms developed by various communities, and it is often not clear which algorithm will give the best performance on a specific clustering task. Similarly, we often have multiple ways to measure distances between data points, and the best clustering performance might require a non-trivial combination of those metrics. In this work, we study data-driven algorithm selection and metric learning for clustering problems, where the goal is to simultaneously learn the best algorithm and metric for a specific application. The family of clustering algorithms we consider is parameterized linkage based procedures that includes single and complete linkage. The family of distance functions we learn over are convex combinations of base distance functions. We design efficient learning algorithms which receive samples from an application-specific distribution over clustering instances and learn a near-optimal distance and clustering algorithm from these classes. We also carry out a comprehensive empirical evaluation of our techniques showing that they can lead to significantly improved clustering performance on real-world datasets.
accept-poster
All reviewers agree that this is a solid paper worth publishing at ICLR; the authors are encouraged to incorporate the additional comments suggested by reviewers.
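The metric-learning half of the abstract, clustering with a convex combination of base distance functions, is easy to make concrete: build d_beta = beta * d1 + (1 - beta) * d2 and hand it to any linkage procedure. The sketch below only constructs the combined distance matrix; the choice of base metrics and the fixed beta are illustrative, whereas the paper learns beta (and the linkage merge function) from sampled clustering instances.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4))

# Two base distance matrices over the same points.
d1 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # Euclidean
d2 = np.abs(X[:, None] - X[None, :]).sum(axis=-1)      # Manhattan

# Convex combination: a nonnegative weighted sum of metrics is itself a
# metric (symmetry, zero diagonal, and the triangle inequality are all
# preserved), so it is a valid input to single/complete linkage.
beta = 0.3
d = beta * d1 + (1 - beta) * d2

print(np.allclose(d, d.T), np.allclose(np.diag(d), 0.0))
```

Given such a matrix, `scipy.cluster.hierarchy.linkage` (on the condensed form of `d`) would run the linkage family the paper parameterizes over; the learning problem is then choosing `beta` and the merge rule to maximize clustering quality on the application's distribution of instances.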
train
[ "HkgPT3nJqH", "HJx7F1_hoS", "r1xup6d2or", "HJgvQy_hjS", "B1xD6Cv3iS", "HkgY83ahtr", "rJlGe3uIqS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis paper proposed a data-driven method of selecting a linkage-based clustering algorithm from a large space. The space of algorithms is parameterized by two sets of parameters which indicate the convex combinations of metrics and merge functions. They analyze the sample complexity for small generaliz...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, 1, 5 ]
[ "iclr_2020_S1eRbANtDB", "HkgY83ahtr", "HJx7F1_hoS", "HkgPT3nJqH", "rJlGe3uIqS", "iclr_2020_S1eRbANtDB", "iclr_2020_S1eRbANtDB" ]
iclr_2020_ryebG04YvB
Adversarially robust transfer learning
Transfer learning, in which a network is trained on one task and re-purposed on another, is often used to produce neural network classifiers when data is scarce or full-scale training is too costly. When the goal is to produce a model that is not only accurate but also adversarially robust, data scarcity and computational limitations become even more cumbersome. We consider robust transfer learning, in which we transfer not only performance but also robustness from a source model to a target domain. We start by observing that robust networks contain robust feature extractors. By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks. We then consider the case of "fine tuning" a network by re-training end-to-end in the target domain. When using lifelong learning strategies, this process preserves the robustness of the source network while achieving high accuracy. By using such strategies, it is possible to produce accurate and robust models with little data, and without the cost of adversarial training. Additionally, we can improve the generalization of adversarially trained models, while maintaining their robustness.
accept-poster
This paper presents an empirical study towards understanding the transferability of robustness (of a deep model against adversarial examples) in the process of transfer learning across different tasks. The paper received divergent reviews, and an in-depth discussion was raised among the reviewers. + Reviewers generally agree that the paper makes an interesting study to the robust ML community. The paper provides a nice exploration of the hypothesis that robust models learn robust intermediate representations, and leverages this insight to help in transferring robustness without adversarial training on every new target domain. - Reviewers also have concerns that, as an experimental paper, it should perform a larger study on different datasets and transfer problems to eliminate the bias to specific tasks, and explore the behavior when the task relatedness increases or decreases. AC agrees with the reviewers and encourages the authors to incorporate these constructive suggestions in the revision, in particular, explore more tasks with different task relatedness. I recommend acceptance, assuming the comments will be fully addressed.
train
[ "H1xaqgL3or", "S1gqPAioYr", "HJl5HxXjiS", "HJxb0kXsjH", "Syxi1JXjoS", "r1gOhAzijH", "rJeQU0GosS", "r1xHX9DptH", "SyxAm1RaFS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for addressing my questions and remarks. Based on these modifications, I have updated my score by one point.\n\nMinor comment: In the paper revision, the order in which Table 3 and Table 2 appear in the paper is reversed. The authors should update this in the final manuscript.", "Paper summary: This p...
[ -1, 8, -1, -1, -1, -1, -1, 8, 1 ]
[ -1, 5, -1, -1, -1, -1, -1, 5, 3 ]
[ "HJl5HxXjiS", "iclr_2020_ryebG04YvB", "S1gqPAioYr", "S1gqPAioYr", "r1xHX9DptH", "SyxAm1RaFS", "SyxAm1RaFS", "iclr_2020_ryebG04YvB", "iclr_2020_ryebG04YvB" ]
iclr_2020_SJeNz04tDS
Overlearning Reveals Sensitive Attributes
"Overlearning" means that a model trained for a seemingly simple objective implicitly learns to recognize attributes and concepts that are (1) not part of the learning objective, and (2) sensitive from a privacy or bias perspective. For example, a binary gender classifier of facial images also learns to recognize races, even races that are not represented in the training data, and identities. We demonstrate overlearning in several vision and NLP models and analyze its harmful consequences. First, inference-time representations of an overlearned model reveal sensitive attributes of the input, breaking privacy protections such as model partitioning. Second, an overlearned model can be "re-purposed" for a different, privacy-violating task even in the absence of the original training data. We show that overlearning is intrinsic for some tasks and cannot be prevented by censoring unwanted attributes. Finally, we investigate where, when, and why overlearning happens during model training.
accept-poster
This paper introduces the problem of overlearning, which can be thought of as unintended transfer learning from a (victim) source model to a target task that the source model’s creator had not intended its model to be used for. The paper raises good points about privacy legislation limitations due to the fact that overlearning makes it impossible to foresee future uses of a given dataset. Please incorporate the revisions suggested in the reviews to add clarity to the overlearning versus censoring confusion addressed by the reviewers.
train
[ "HJgfSVUjor", "BylOx4Ljir", "Bkxl67Lisr", "rJgKW00iKS", "r1lokTbf5H", "SJezFeRmcB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the review!\n\n(1) $E_{aux}$ and $C_{aux}$ have similar architecture to $E$ and $C$ except for the output layer. They can have different architectures as long as the dimension for z is the same, in order to calculate the feature space $L_2$ loss.\n\n(2) Following [1,2], the auxiliary dataset in our exp...
[ -1, -1, -1, 6, 1, 6 ]
[ -1, -1, -1, 1, 5, 5 ]
[ "SJezFeRmcB", "r1lokTbf5H", "rJgKW00iKS", "iclr_2020_SJeNz04tDS", "iclr_2020_SJeNz04tDS", "iclr_2020_SJeNz04tDS" ]
iclr_2020_SJgwzCEKwH
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
Mode connectivity provides novel geometric insights on analyzing loss landscapes and enables building high-accuracy pathways between well-trained neural networks. In this work, we propose to employ mode connectivity in loss landscapes to study the adversarial robustness of deep neural networks, and provide novel methods for improving this robustness. Our experiments cover various types of adversarial attacks applied to different network architectures and datasets. When network models are tampered with backdoor or error-injection attacks, our results demonstrate that the path connection learned using limited amount of bonafide data can effectively mitigate adversarial effects while maintaining the original accuracy on clean data. Therefore, mode connectivity provides users with the power to repair backdoored or error-injected models. We also use mode connectivity to investigate the loss landscapes of regular and robust models against evasion attacks. Experiments show that there exists a barrier in adversarial robustness loss on the path connecting regular and adversarially-trained models. A high correlation is observed between the adversarial robustness loss and the largest eigenvalue of the input Hessian matrix, for which theoretical justifications are provided. Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
accept-poster
This paper investigates improving robustness to adversarial examples by using mode connectivity in the loss function. The paper received three reviews by experts working in related areas. In a strongly positive review, R1 recommends Accept, but gives some specific technical questions. The authors submitted a response to these questions; in post-review comments, R1 was satisfied and maintained the highly positive review. R2 recommended Weak Reject and also asked specific technical questions, including some additional details on experiments, statistical significance, etc. The author response also convincingly responded to these concerns. R3 recommended Weak Accept but suggested improving the writing, which authors have done in their revision. Given that R1 and R3 are highly positive and R2's concerns were addressed in the response and revision, we now recommend (weak) Accept.
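Mode connectivity, the tool this paper applies to robustness, is typically realized as a parametric path between two trained weight vectors, most commonly a quadratic Bezier curve with a learnable control point (in the style of earlier mode-connectivity work). The sketch below shows only the path parameterization; training the control point to keep loss low along the path, which is the substantive step, is omitted.

```python
import numpy as np

def bezier_path(t, w1, theta, w2):
    # Quadratic Bezier curve between weight vectors w1 and w2 with a
    # learnable control point theta; t in [0, 1] indexes models on the path.
    return (1 - t) ** 2 * w1 + 2 * t * (1 - t) * theta + t ** 2 * w2

# Toy weight vectors; in practice these are flattened network parameters.
w1, w2 = np.zeros(4), np.ones(4)
theta = 0.5 * (w1 + w2)  # control point initialized at the midpoint

# The path always starts at w1 and ends at w2, regardless of theta.
print(np.allclose(bezier_path(0.0, w1, theta, w2), w1),
      np.allclose(bezier_path(1.0, w1, theta, w2), w2))
```

The paper's repair use case then amounts to training `theta` on a small amount of clean data and picking a model at some interior `t`, which can shed backdoor or error-injection behavior inherited by the endpoints.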
train
[ "rJxEPld6KB", "Byl-HRlcir", "rJg7c6l9jS", "HJgbGTe9jr", "H1xXGumycH", "rJeeI-l2FH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies leveraging mode connectivity to defend against different types of attacks, including backdoor attacks, adversarial examples, and error-injection attacks. They perform a comprehensive evaluation to show the benign test accuracy and attack success rate over the models in the connected path between...
[ 8, -1, -1, -1, 6, 3 ]
[ 5, -1, -1, -1, 3, 4 ]
[ "iclr_2020_SJgwzCEKwH", "rJeeI-l2FH", "rJxEPld6KB", "H1xXGumycH", "iclr_2020_SJgwzCEKwH", "iclr_2020_SJgwzCEKwH" ]
iclr_2020_rJgqMRVYvr
Differentially Private Meta-Learning
Parameter-transfer is a well-known and versatile approach for meta-learning, with applications including few-shot learning, federated learning with personalization, and reinforcement learning. However, parameter-transfer algorithms often require sharing models that have been trained on the samples from specific tasks, thus leaving the task-owners susceptible to breaches of privacy. We conduct the first formal study of privacy in this setting and formalize the notion of task-global differential privacy as a practical relaxation of more commonly studied threat models. We then propose a new differentially private algorithm for gradient-based parameter transfer that not only satisfies this privacy requirement but also retains provable transfer learning guarantees in convex settings. Empirically, we apply our analysis to the problems of federated learning with personalization and few-shot classification, showing that allowing the relaxation to task-global privacy from the more commonly studied notion of local privacy leads to dramatically increased performance in recurrent neural language modeling and image classification.
accept-poster
Thanks to the authors for the submission. This paper studies differentially private meta-learning, where the algorithm needs to use information across several learning tasks to protect the privacy of the data set from each task. The reviewers agree that this is a natural problem and the paper presents a solution that is essentially an adoption of differentially private SGD. There are several places the paper can improve. For the experimental evaluation, the authors should include a wider range of epsilon values in order to investigate the accuracy-privacy trade-off. The authors should also consider expanding the existing experiments with other datasets.
train
[ "Bylx5489oS", "rklckSIcsS", "B1lGUrI9ir", "SkgU7N8csS", "rkeUYdN7iB", "B1gikEDpFH", "Byg8MiRp9H" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your time and thoughtful review! We hope to address your comments below. [note: We decided to respond to each reviewer individually, though we note that there is significant overlap in our responses to R1 and R2 since these two reviewers had several similar comments/suggestions]\n\n1) Varying epsilon...
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 5, 3, 1 ]
[ "rkeUYdN7iB", "Byg8MiRp9H", "B1gikEDpFH", "iclr_2020_rJgqMRVYvr", "iclr_2020_rJgqMRVYvr", "iclr_2020_rJgqMRVYvr", "iclr_2020_rJgqMRVYvr" ]
iclr_2020_r1e9GCNKvH
One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation
Recent advances in the sparse neural network literature have made it possible to prune many large feed-forward and convolutional networks with only a small quantity of data. Yet, these same techniques often falter when applied to the problem of recovering sparse recurrent networks. These failures are quantitative: when pruned with recent techniques, RNNs typically obtain worse performance than they do under a simple random pruning scheme. The failures are also qualitative: the distribution of active weights in a pruned LSTM or GRU network tends to be concentrated in specific neurons and gates, and not well dispersed across the entire architecture. We seek to rectify both the quantitative and qualitative issues with recurrent network pruning by introducing a new recurrent pruning objective derived from the spectrum of the recurrent Jacobian. Our objective is data efficient (requiring only 64 data points to prune the network), easy to implement, and produces 95% sparse GRUs that significantly improve on existing baselines. We evaluate on sequential MNIST, Billion Words, and Wikitext.
accept-poster
Based on current unanimous reviews, the paper is accepted.
train
[ "S1lr4INojr", "HJxW5p8sOS", "HJg30sq9jS", "SylVhHMYiH", "rJgAqEkmjS", "SyejPNkXjS", "rkx9M4J7sB", "rklCPNPKur", "rkevyhvpYS" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your thoughtful comments and taking the time to review our rebuttal. We really appreciate it, and we will continue to refine our results in the remaining time.", "\nNotes: \n\n -RNN network pruning has proven to be challenging using the techniques often used with other network types. \n\n -One i...
[ -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 1, -1, -1, -1, -1, -1, 1, 1 ]
[ "HJg30sq9jS", "iclr_2020_r1e9GCNKvH", "SyejPNkXjS", "SyejPNkXjS", "rklCPNPKur", "HJxW5p8sOS", "rkevyhvpYS", "iclr_2020_r1e9GCNKvH", "iclr_2020_r1e9GCNKvH" ]
iclr_2020_rkgAGAVKPr
Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples
Few-shot classification refers to learning a classifier for new classes given only a few examples. While a plethora of models have emerged to tackle it, we find the procedure and datasets that are used to assess their progress lacking. To address this limitation, we propose Meta-Dataset: a new benchmark for training and evaluating models that is large-scale, consists of diverse datasets, and presents more realistic tasks. We experiment with popular baselines and meta-learners on Meta-Dataset, along with a competitive method that we propose. We analyze performance as a function of various characteristics of test tasks and examine the models’ ability to leverage diverse training sources for improving their generalization. We also propose a new set of baselines for quantifying the benefit of meta-learning in Meta-Dataset. Our extensive experimentation has uncovered important research challenges and we hope to inspire work in these directions.
accept-poster
While the reviewers have some outstanding issues regarding the organization and clarity of the paper, the overall consensus is that the proposed evaluation method is a useful improvement over current standards for meta-learning.
test
[ "rJxECyLMqB", "rkgGRcpjir", "B1e8gOEsoS", "S1gpHECOjB", "rkgpTEYXir", "HkxqRQYXjB", "Hyg1O4YQor", "ry4i8swFr", "ByeE8SYkqS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors of this paper construct a new few-shot learning dataset. The whole dataset consists of several data from different sources. The authors test several representative meta-learning models (e.g., matching network, Prototype network, MAML) on this dataset and give the analysis. Furthermore, the authors comb...
[ 3, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_rkgAGAVKPr", "B1e8gOEsoS", "HkxqRQYXjB", "iclr_2020_rkgAGAVKPr", "ry4i8swFr", "rJxECyLMqB", "ByeE8SYkqS", "iclr_2020_rkgAGAVKPr", "iclr_2020_rkgAGAVKPr" ]
iclr_2020_ByxRM0Ntvr
Are Transformers universal approximators of sequence-to-sequence functions?
Despite the widespread adoption of Transformer models for NLP tasks, the expressive power of these models is not well-understood. In this paper, we establish that Transformer models are universal approximators of continuous permutation equivariant sequence-to-sequence functions with compact support, which is quite surprising given the amount of shared parameters in these models. Furthermore, using positional encodings, we circumvent the restriction of permutation equivariance, and show that Transformer models can universally approximate arbitrary continuous sequence-to-sequence functions on a compact domain. Interestingly, our proof techniques clearly highlight the different roles of the self-attention and the feed-forward layers in Transformers. In particular, we prove that fixed width self-attention layers can compute contextual mappings of the input sequences, playing a key role in the universal approximation property of Transformers. Based on this insight from our analysis, we consider other simpler alternatives to self-attention layers and empirically evaluate them.
accept-poster
The paper provides a proof that Transformer networks (a popular deep learning model) are universal approximators for sequence-to-sequence functions. The theorem relies on the idea of contextual mappings (Definition 3.1), which models the attention layers. The results provide an important starting point for understanding a very widely used architecture. As with many theoretical papers, the reviewers provided several suggestions as to which are important parts to be presented in the main paper. The authors were very responsive during the discussion period, updating the structure of the paper significantly. This shows nice evidence supporting the need for a long discussion period for ICLR. One reviewer upgraded their score (to 8), which is not reflected in the system. This is an excellent paper, providing much needed theoretical analysis of a popular neural architecture. Clear accept.
train
[ "BJlbyQ5qtB", "r1x1o8r3oS", "HJx5Q5Ijjr", "ryeEzlaEoH", "ByxAw1p4iB", "r1xsCAh4iS", "rylrdpvAFB", "ryxi5EF5qB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThis paper discusses the universal approximation capability of the Transformer, under certain assumptions, analyze the role of different components of the Transformer (e.g., self-attention layer for contextual mapping), and propose the use of some other layers that can also provide contextual mapping.\n\nOverall...
[ 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_ByxRM0Ntvr", "ByxAw1p4iB", "iclr_2020_ByxRM0Ntvr", "ryxi5EF5qB", "rylrdpvAFB", "BJlbyQ5qtB", "iclr_2020_ByxRM0Ntvr", "iclr_2020_ByxRM0Ntvr" ]
iclr_2020_rkg-mA4FDr
Pre-training Tasks for Embedding-based Large-scale Retrieval
We consider the large-scale query-document retrieval problem: given a query (e.g., a question), return the set of relevant documents (e.g., paragraphs containing the answer) from a large document corpus. This problem is often solved in two steps. The retrieval phase first reduces the solution space, returning a subset of candidate documents. The scoring phase then re-ranks the documents. Critically, the retrieval algorithm not only requires high recall but must also be highly efficient, returning candidates in time sublinear to the number of documents. Unlike the scoring phase, which has recently seen significant advances due to BERT-style pre-training tasks on cross-attention models, the retrieval phase remains less well studied. Most previous works rely on classic Information Retrieval (IR) methods such as BM-25 (token matching + TF-IDF weights). These models only accept sparse handcrafted features and cannot be optimized for different downstream tasks of interest. In this paper, we conduct a comprehensive study on the embedding-based retrieval models. We show that the key ingredient of learning a strong embedding-based Transformer model is the set of pre-training tasks. With adequately designed paragraph-level pre-training tasks, the Transformer models can remarkably improve over the widely-used BM-25 as well as embedding models without Transformers. The paragraph-level pre-training tasks we studied are Inverse Cloze Task (ICT), Body First Selection (BFS), Wiki Link Prediction (WLP), and the combination of all three.
accept-poster
This paper conducts a comprehensive study on different retrieval algorithms and shows that two-tower Transformer models with properly designed pre-training tasks can largely improve over the widely used BM-25 algorithm. In fact, deep-learning-based two-tower retrieval models are already used in the IR field. The main contribution lies in the comprehensive experimental evaluation. Blind Review #3 has a major misunderstanding of the paper; hence this review will be excluded. The other two reviewers tend to accept the paper with several minor comments. As the authors promise to release the code as a baseline for further works, I agree to accept the paper.
train
[ "SkekhUVKKH", "SkeYP1w2iB", "Bkl0ITL3jB", "SJxMz_HijH", "HkldCDSjjB", "Hkxq7kL4iS", "HkeEEx7fjr", "B1g8nRzGsB", "HJlz2pzziB", "HJlvYglaKS", "S1eXkuHEcS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies a query-related document retrieval problem using a framework which they call “two-tower retrieval method”. The task is to learn query representation and document representation in order to retrieve query-related documents by the maximum inner product. This is a realistic setting for large-scale ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_rkg-mA4FDr", "SJxMz_HijH", "B1g8nRzGsB", "Hkxq7kL4iS", "iclr_2020_rkg-mA4FDr", "HkeEEx7fjr", "SkekhUVKKH", "HJlvYglaKS", "S1eXkuHEcS", "iclr_2020_rkg-mA4FDr", "iclr_2020_rkg-mA4FDr" ]
iclr_2020_Skl4mRNYDr
Deep Imitative Models for Flexible Inference, Planning, and Control
Imitation Learning (IL) is an appealing approach to learn desirable autonomous behavior. However, directing IL to achieve arbitrary goals is difficult. In contrast, planning-based algorithms use dynamics models and reward functions to achieve goals. Yet, reward functions that evoke desirable behavior are often difficult to specify. In this paper, we propose "Imitative Models" to combine the benefits of IL and goal-directed planning. Imitative Models are probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals. We derive families of flexible goal objectives, including constrained goal regions, unconstrained goal sets, and energy-based goals. We show that our method can use these objectives to successfully direct behavior. Our method substantially outperforms six IL approaches and a planning-based approach in a dynamic simulated autonomous driving task, and is efficiently learned from expert demonstrations without online data collection. We also show our approach is robust to poorly-specified goals, such as goals on the wrong side of the road.
accept-poster
This paper proposes to build an 'imitative model' to improve performance in imitation learning. The main idea is to combine model-based RL approaches with imitation learning. The model is trained using a probabilistic method and can help the agent achieve goals that were not easy to reach with previous works. Reviewers 2 and 3 strongly agree that the paper should be accepted. R3 increased their score after the rebuttal, and the authors' response helped in this case. Based on the reviewers' scores, I recommend accepting this paper.
train
[ "r1xK6GH5YS", "SJgvwUOnoB", "BJerEzNqiH", "S1x_4iWTYH", "HyeKKZV5oH", "S1x_g9s_jr", "BkglsYjOsB", "ByxJUto_oB", "BylOnuj_oH", "r1xBD1MnKS" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\n- key problem: expert-like probabilistic online motion planning to reach arbitrary goals without reward shaping thanks to off-line learning from expert demonstrations;\n- contributions: 1) an imitative planning procedure via gradient-based log-likelihood maximization leveraging \"imitative models\" q(fut...
[ 8, -1, -1, 8, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_Skl4mRNYDr", "BJerEzNqiH", "BylOnuj_oH", "iclr_2020_Skl4mRNYDr", "BylOnuj_oH", "BkglsYjOsB", "r1xK6GH5YS", "r1xBD1MnKS", "S1x_4iWTYH", "iclr_2020_Skl4mRNYDr" ]
iclr_2020_S1lEX04tPr
CM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning
A variety of cooperative multi-agent control problems require agents to achieve individual goals while contributing to collective success. This multi-goal multi-agent setting poses difficulties for recent algorithms, which primarily target settings with a single global reward, due to two new challenges: efficient exploration for learning both individual goal attainment and cooperation for others' success, and credit-assignment for interactions between actions and goals of different agents. To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment. We use a function augmentation scheme to bridge value and policy functions across the curriculum. The complete architecture, called CM3, learns significantly faster than direct adaptations of existing algorithms on three challenging multi-goal multi-agent problems: cooperative navigation in difficult formations, negotiating multi-vehicle lane changes in the SUMO traffic simulator, and strategic cooperation in a Checkers environment.
accept-poster
This paper was generally well received by reviewers and was rated as a weak accept by all. The AC recommends acceptance.
train
[ "BklZ_qinsB", "Bkl7fr1OjB", "B1gWFfyOsH", "Byx5AlJdjr", "r1e0fqAaKr", "Hyg_RfRAtr", "HJlEGDCkqS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the responses. I've read through the other reviews and comments posted for this paper and still keep my original score of a weak accept.", "We thank Reviewer 2 for the positive feedback on the motivation, analysis and experimental results of the paper, and for raising pertinent and constructive questi...
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "Byx5AlJdjr", "r1e0fqAaKr", "Hyg_RfRAtr", "HJlEGDCkqS", "iclr_2020_S1lEX04tPr", "iclr_2020_S1lEX04tPr", "iclr_2020_S1lEX04tPr" ]
iclr_2020_HJlSmC4FPS
Robust And Interpretable Blind Image Denoising Via Bias-Free Convolutional Neural Networks
We study the generalization properties of deep convolutional neural networks for image denoising in the presence of varying noise levels. We provide extensive empirical evidence that current state-of-the-art architectures systematically overfit to the noise levels in the training set, performing very poorly at new noise levels. We show that strong generalization can be achieved through a simple architectural modification: removing all additive constants. The resulting "bias-free" networks attain state-of-the-art performance over a broad range of noise levels, even when trained over a limited range. They are also locally linear, which enables direct analysis with linear-algebraic tools. We show that the denoising map can be visualized locally as a filter that adapts to both image structure and noise level. In addition, our analysis reveals that deep networks implicitly perform a projection onto an adaptively-selected low-dimensional subspace, with dimensionality inversely proportional to noise level, that captures features of natural images.
accept-poster
This paper focuses on studying neural network-based denoising methods. The paper makes the interesting observation that most existing denoising approaches have a tendency to overfit to knowledge of the noise level. The authors claim that simply removing the bias on the network parameters enables a variety of improvements in this regard and provide some theoretical justification for their results. The reviewers were mostly positive but raised some concerns about generalization beyond Gaussian noise and the method not "being very well theoretically motivated". These concerns seem to have been at least partially alleviated during the discussion period. I agree with the reviewers. I think the paper looks at an important phenomenon for denoising (the role of the variance parameter) and is well suited to ICLR. I recommend acceptance. I suggest that the authors continue to further improve the paper based on the reviewers' comments.
val
[ "BJlQ1Kmoor", "Hyx2cOQosr", "HJek5vmojB", "S1g7YX7jsS", "SyeZPBHTYB", "SJeF98JCtS", "HklObMGU9r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Comment 1: \"One main practical concern is that only Gaussian noise is considered in this paper which provides good theoretical analysis. It would be interesting to see if this BF-CNN is extendable to more noise types.\"\n\nThis an interesting point. We have trained bias-free networks on uniform noise and find tha...
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 3, 4, 3 ]
[ "SyeZPBHTYB", "SJeF98JCtS", "HklObMGU9r", "iclr_2020_HJlSmC4FPS", "iclr_2020_HJlSmC4FPS", "iclr_2020_HJlSmC4FPS", "iclr_2020_HJlSmC4FPS" ]
iclr_2020_SJxIm0VtwH
Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets
Adaptive gradient algorithms perform gradient-based updates using the history of gradients and are ubiquitous in training deep neural networks. While the theory of adaptive gradient methods is well understood for minimization problems, the underlying factors driving their empirical success in min-max problems such as GANs remain unclear. In this paper, we aim at bridging this gap from both theoretical and empirical perspectives. First, we analyze a variant of Optimistic Stochastic Gradient (OSG) proposed in (Daskalakis et al., 2017) for solving a class of non-convex non-concave min-max problems and establish an O(ϵ^{-4}) complexity for finding an ϵ-first-order stationary point, in which the algorithm only requires invoking one stochastic first-order oracle while enjoying the state-of-the-art iteration complexity achieved by the stochastic extragradient method of (Iusem et al., 2017). Then we propose an adaptive variant of OSG named Optimistic Adagrad (OAdagrad) and reveal an *improved* adaptive complexity O(ϵ^{-2/(1-α)}), where α characterizes the growth rate of the cumulative stochastic gradient and 0 ≤ α ≤ 1/2. To the best of our knowledge, this is the first work establishing adaptive complexity in non-convex non-concave min-max optimization. Empirically, our experiments show that adaptive gradient algorithms indeed outperform their non-adaptive counterparts in GAN training. Moreover, this observation can be explained by the slow growth rate of the cumulative stochastic gradient, as observed empirically.
accept-poster
This work proposes a new adaptive method for solving certain min-max problems. The reviewers all appreciated the work and most of their concerns were addressed in the rebuttal. Given the current interest in both adaptive methods and min-max problems, this work is suited for publication at ICLR.
train
[ "BJenu0PTFB", "r1xHK7B9KB", "S1gnHVNijr", "H1xBbhmjoB", "ByePmZ8Psr", "H1x0vZUPsB", "B1eReZUvoS", "BJejdlUvsH", "Hyl_8gIDjS", "rklsNMOaYS" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposed a new algorithm (Optimistic Stochastic Gradient) for solving a class of non-convex non-concave min-max problem. The convergence theory is established for finding first order stationary point. The authors also proposed an adaptive variant of the proposed algorithm, called OAdagrad and showed an ...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_SJxIm0VtwH", "iclr_2020_SJxIm0VtwH", "H1xBbhmjoB", "H1x0vZUPsB", "BJenu0PTFB", "r1xHK7B9KB", "BJenu0PTFB", "rklsNMOaYS", "iclr_2020_SJxIm0VtwH", "iclr_2020_SJxIm0VtwH" ]
iclr_2020_HJeO7RNKPr
DeepV2D: Video to Depth with Differentiable Structure from Motion
We propose DeepV2D, an end-to-end deep learning architecture for predicting depth from video. DeepV2D combines the representation ability of neural networks with the geometric principles governing image formation. We compose a collection of classical geometric algorithms, which are converted into trainable modules and combined into an end-to-end differentiable architecture. DeepV2D interleaves two stages: motion estimation and depth estimation. During inference, motion and depth estimation are alternated and converge to accurate depth.
accept-poster
This work proposes a CNN architecture for joint depth and camera motion estimation from videos. The paper presents a differentiable formulation of the problem to allow its end-to-end learning, and the reviewers unanimously find the proposed approach reasonable and agree that this is a solid paper. Some of the reviewers find the method itself to be too mechanical, but they all agree that this is a well-engineered solution.
test
[ "Bke2zsVniH", "SJxr1aEhir", "rJxjhoEhsH", "ByeyusN2jH", "BJe8GmY2FS", "SJlDS4yrcS", "BkgjyQ7ccr", "B1x0lB_6qH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and suggestions. We have submitted a revised version following your suggestions. Below we address individual points. \n\n1. Our overall approach is quite modular and components can easily be swapped depending on the application. We add Appendix C in our revision which demonstrates several...
[ -1, -1, -1, -1, 8, 6, 6, 6 ]
[ -1, -1, -1, -1, 4, 3, 1, 5 ]
[ "B1x0lB_6qH", "BkgjyQ7ccr", "SJlDS4yrcS", "BJe8GmY2FS", "iclr_2020_HJeO7RNKPr", "iclr_2020_HJeO7RNKPr", "iclr_2020_HJeO7RNKPr", "iclr_2020_HJeO7RNKPr" ]
iclr_2020_rkenmREFDr
Learning Space Partitions for Nearest Neighbor Search
Space partitions of R^d underlie a vast and important class of fast nearest neighbor search (NNS) algorithms. Inspired by recent theoretical work on NNS for general metric spaces (Andoni et al. 2018b,c), we develop a new framework for building space partitions reducing the problem to balanced graph partitioning followed by supervised classification. We instantiate this general approach with the KaHIP graph partitioner (Sanders and Schulz 2013) and neural networks, respectively, to obtain a new partitioning procedure called Neural Locality-Sensitive Hashing (Neural LSH). On several standard benchmarks for NNS (Aumuller et al. 2017), our experiments show that the partitions obtained by Neural LSH consistently outperform partitions found by quantization-based and tree-based methods as well as classic, data-oblivious LSH.
accept-poster
This paper proposes a new framework for improved nearest neighbor search by learning a space partition of the data, allowing for better scalability in distributed settings and overall better performance over existing benchmarks. The two reviewers who were most confident were both positive about the contributions and the revisions. The one reviewer who recommended reject was concerned about the metric used and whether comparison with baselines was fair. In my opinion, the authors seem to have been very receptive to reviewer comments and answered these issues to my satisfaction. After author and reviewer engagement, both R1 and myself are satisfied with the addition of the new baselines and think the authors have sufficiently addressed the major concerns. For the final version of the paper, I’d urge the authors to take seriously R4’s comment regarding clarity and add algorithmic details as per their suggestion.
test
[ "HyeSOPl2tr", "H1lxL2uhoH", "BylnGn_3oB", "SJe9ghunir", "HyekOsO3jB", "r1eWb4FrFS", "S1gSN8yaKr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThe paper proposes a scheme to learn space partitions for improved\nnearest neighbor search by first converting the search problem into a\nsupervised classification problem by graph-partitioning the\nk-nearest-neighbor graph, and then using some machine learning model\nto learn space partitions corresponding to ...
[ 6, -1, -1, -1, -1, 3, 6 ]
[ 5, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_rkenmREFDr", "r1eWb4FrFS", "SJe9ghunir", "HyeSOPl2tr", "S1gSN8yaKr", "iclr_2020_rkenmREFDr", "iclr_2020_rkenmREFDr" ]
iclr_2020_S1xnXRVFwH
Playing the lottery with rewards and multiple languages: lottery tickets in RL and NLP
The lottery ticket hypothesis proposes that over-parameterization of deep neural networks (DNNs) aids training by increasing the probability of a “lucky” sub-network initialization being present rather than by helping the optimization process (Frankle & Carbin, 2019). Intriguingly, this phenomenon suggests that initialization strategies for DNNs can be improved substantially, but the lottery ticket hypothesis has only previously been tested in the context of supervised learning for natural image tasks. Here, we evaluate whether “winning ticket” initializations exist in two different domains: natural language processing (NLP) and reinforcement learning (RL). For NLP, we examined both recurrent LSTM models and large-scale Transformer models (Vaswani et al., 2017). For RL, we analyzed a number of discrete-action space tasks, including both classic control and pixel control. Consistent with work in supervised image classification, we confirm that winning ticket initializations generally outperform parameter-matched random initializations, even at extreme pruning rates for both NLP and RL. Notably, we are able to find winning ticket initializations for Transformers which enable models one-third the size to achieve nearly equivalent performance. Together, these results suggest that the lottery ticket hypothesis is not restricted to supervised learning of natural images, but rather represents a broader phenomenon in DNNs.
accept-poster
This paper explores the application of the lottery ticket hypothesis to NLP and RL problems for better initialisations of deep networks and reduced model sizes. This is evaluated in a variety of settings, including continuous control and ATARI games for RL, and LSTMs and Transformers for NLP, showing very positive results. The main issue raised by the reviewers was the lack of algorithmic novelty in the paper. Despite this, I believe the paper to present an important contribution that could stimulate much additional research. The paper is well written and the results are rigorous and interesting. For these reasons I recommend acceptance.
train
[ "B1g57OkcoS", "r1eXMuy5oH", "BkxEUdm9FB", "HyxTjvVaKH", "Hkx3RhBR5r" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "[1] Identifying and Understanding Deep Learning Phenomena, ICML 2019 workshop, http://deep-phenomena.org/ \n\n[2] Science meets Engineering of Deep Learning, NeurIPS 2019 workshop, https://sites.google.com/view/sedl-neurips-2019/main \n\n[3] Ali Rahimi, Test of Time award talk, NeurIPS 2017, https://www.youtube.co...
[ -1, -1, 6, 3, 3 ]
[ -1, -1, 5, 1, 4 ]
[ "r1eXMuy5oH", "iclr_2020_S1xnXRVFwH", "iclr_2020_S1xnXRVFwH", "iclr_2020_S1xnXRVFwH", "iclr_2020_S1xnXRVFwH" ]
iclr_2020_SklTQCNtvS
Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
We study the most practical problem setup for evaluating adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where limited model queries are allowed and only the decision is provided for a queried data input. Several algorithms have been proposed for this problem, but they typically require a huge number (>20,000) of queries to attack one example. Among them, one of the state-of-the-art approaches (Cheng et al., 2019) showed that the hard-label attack can be modeled as an optimization problem where the objective function can be evaluated by binary search with additional model queries, so that a zeroth-order optimization algorithm can be applied. In this paper, we adopt the same optimization formulation but propose to directly estimate the sign of the gradient in any direction instead of the gradient itself, which enjoys the benefit of requiring a single query. Using this single-query oracle for retrieving the sign of the directional derivative, we develop a novel query-efficient Sign-OPT approach for hard-label black-box attacks. We provide a convergence analysis of the new algorithm and conduct experiments on several models on MNIST, CIFAR-10 and ImageNet. We find that the Sign-OPT attack consistently requires 5X to 10X fewer queries compared to the current state-of-the-art approaches, and usually converges to an adversarial example with smaller perturbation.
accept-poster
The reviewers had several concerns with the paper related to novelty and comparisons with other approaches. During the discussion phase, these concerns were adequately addressed.
train
[ "ryl-CgIwiB", "r1gzqlLDjH", "HJxBPlpEFB", "H1eZtKICKB" ]
[ "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for valuable comments and suggestions.\n\nAbout the L-smoothness: This assumption is common for convergence analysis of both first-order and zeroth-order methods for nonconvex optimization, e.g., [3,4] since it is a common starting point to bound the stationary gap. And we couldn’t prove g(th...
[ -1, -1, 6, 3 ]
[ -1, -1, 3, 4 ]
[ "HJxBPlpEFB", "H1eZtKICKB", "iclr_2020_SklTQCNtvS", "iclr_2020_SklTQCNtvS" ]
iclr_2020_HJxR7R4FvS
RaCT: Toward Amortized Ranking-Critical Training For Collaborative Filtering
We investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to more directly maximize ranking-based objective functions. Specifically, we train a critic network to approximate ranking-based metrics, and then update the actor network to directly optimize against the learned metrics. In contrast to traditional learning-to-rank methods that require re-running the optimization procedure for new lists, our critic-based method amortizes the scoring process with a neural network, and can directly provide the (approximate) ranking scores for new lists. We demonstrate the actor-critic's ability to significantly improve the performance of a variety of prediction models, and achieve better or comparable performance to a variety of strong baselines on three large-scale datasets.
accept-poster
The reviewers generally agreed that the application and method are interesting and relevant, and the paper should be accepted. I would encourage the authors to carefully go through the reviewers' suggestions and address them in the final.
train
[ "Bkga5rXiiH", "Skez_B7oiH", "rkeVfEXisH", "rJe1xRyEKH", "r1edoAD6tS", "SyxRaUE0Yr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate your supportive review.", "Thank you for your detailed and thoughtful review. \n\nSee above for corrections to our NLL-vs-NDCG example — we have updated the PDF to better contrast the two.\n\nQ: On the statement “Making RaCT scalable to large-scale datasets”: \nA: We’d like to clarify that our scal...
[ -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, 5, 5, 4 ]
[ "rJe1xRyEKH", "r1edoAD6tS", "SyxRaUE0Yr", "iclr_2020_HJxR7R4FvS", "iclr_2020_HJxR7R4FvS", "iclr_2020_HJxR7R4FvS" ]
iclr_2020_SJleNCNtDH
Intrinsic Motivation for Encouraging Synergistic Behavior
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks, which are tasks where multiple agents must work together to achieve a goal they could not achieve individually. Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own. Thus, we propose to incentivize agents to take (joint) actions whose effects cannot be predicted via a composition of the predicted effect for each individual agent. We study two instantiations of this idea, one based on the true states encountered, and another based on a dynamics model trained concurrently with the policy. While the former is simpler, the latter has the benefit of being analytically differentiable with respect to the action taken. We validate our approach in robotic bimanual manipulation and multi-agent locomotion tasks with sparse rewards; we find that our approach yields more efficient learning than both 1) training with only the sparse reward and 2) using the typical surprise-based formulation of intrinsic motivation, which does not bias toward synergistic behavior. Videos are available on the project webpage: https://sites.google.com/view/iclr2020-synergistic.
accept-poster
The authors address the important issue of exploration in reinforcement learning. In this case, they propose to use reward shaping to encourage joint actions whose outcomes deviate from their sequential counterparts. Although the proposed intrinsic reward is targeted at a particular family of two-agent robotic tasks, one can imagine generalizing some of the ideas here to other multi-agent learning tasks. The reviewers agree that the paper is of interest to the ICLR audience.
train
[ "H1xiVYPvjS", "Syev-KvDoB", "r1lY_dPDsr", "rJxsN_wDjB", "SJxFBZDpYr", "BJlLBP6CKH", "HJeoB2EbcH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for taking the time to review! \n\n“one can imagine generalizing some of the ideas here to other multi-agent learning tasks.”\n- As highlighted above, we have conducted experiments on a new domain, Ant Push. We have also explored N=3 agents in this setting. Please see the global comment for more details....
[ -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, 3, 5, 3 ]
[ "BJlLBP6CKH", "SJxFBZDpYr", "HJeoB2EbcH", "iclr_2020_SJleNCNtDH", "iclr_2020_SJleNCNtDH", "iclr_2020_SJleNCNtDH", "iclr_2020_SJleNCNtDH" ]
iclr_2020_rygG4AVFvH
Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation
Achieving faster execution with shorter compilation time can foster further diversity and innovation in neural networks. However, the current paradigm of executing neural networks either relies on hand-optimized libraries, traditional compilation heuristics, or very recently genetic algorithms and other stochastic methods. These methods suffer from frequent costly hardware measurements, rendering them not only too time consuming but also suboptimal. As such, we devise a solution that can learn to quickly adapt to a previously unseen design space for code optimization, both accelerating the search and improving the output performance. This solution, dubbed Chameleon, leverages reinforcement learning whose solution takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses the costly samples (real hardware measurements) on representative points but also uses a domain-knowledge inspired logic to improve the samples itself. Experimentation with real hardware shows that Chameleon provides 4.45x speed up in optimization time over AutoTVM, while also improving inference time of the modern deep networks by 5.6%.
accept-poster
This paper proposes to find optimal code in DNN compilers using adaptive sampling and reinforcement learning. This method achieves significant speedup in compilation time and execution time. The authors made strong efforts in addressing the problems raised by the reviewers, and promised to make the code publicly available, which is of particular importance for works of this nature.
val
[ "SkeyGWrHKr", "rJlpKaw3ir", "BkgEyt8LsH", "BJgBFdL8or", "rJl8EOL8ir", "BylKi-56tH", "HkxnE68RYH" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new solution called CHAMELEON for deep learning code optimization, which accelerates the process of compiling codes and achieves faster training and inference of deep networks. The proposed method can be used to compile various deep network architectures. Experimental results show that the pro...
[ 6, -1, -1, -1, -1, 6, 3 ]
[ 1, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_rygG4AVFvH", "iclr_2020_rygG4AVFvH", "HkxnE68RYH", "BylKi-56tH", "SkeyGWrHKr", "iclr_2020_rygG4AVFvH", "iclr_2020_rygG4AVFvH" ]
iclr_2020_H1gB4RVKvB
Recurrent neural circuits for contour detection
We introduce a deep recurrent neural network architecture that approximates visual cortical circuits (Mély et al., 2018). We show that this architecture, which we refer to as the 𝜸-net, learns to solve contour detection tasks with better sample efficiency than state-of-the-art feedforward networks, while also exhibiting a classic perceptual illusion, known as the orientation-tilt illusion. Correcting this illusion significantly reduces 𝜸-net contour detection accuracy by driving it to prefer low-level edges over high-level object boundary contours. Overall, our study suggests that the orientation-tilt illusion is a byproduct of neural circuits that help biological visual systems achieve robust and efficient contour detection, and that incorporating these circuits in artificial neural networks can improve computer vision.
accept-poster
All the reviewers recommend accept, and they found the paper interesting and novel.
train
[ "SJxTen9jiH", "r1ee0w3k9S", "S1lEwWacsH", "S1e1ZpHqoH", "H1gB33B5oB", "r1eVm3Hcsr", "ryxOWnSqiS", "Byl1ksH9iH", "ByeQ23xaKB", "SkgVXkl0FH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your detailed rebuttal and additional experiments. I updated my initial rating, please refer to the initial comment.", "The authors propose a new artificial neural network architecture that is derived from a human visual model (Mély et al., 2018). The original (human vision) model can explain some ...
[ -1, 6, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "r1eVm3Hcsr", "iclr_2020_H1gB4RVKvB", "Byl1ksH9iH", "ByeQ23xaKB", "SkgVXkl0FH", "ryxOWnSqiS", "r1ee0w3k9S", "iclr_2020_H1gB4RVKvB", "iclr_2020_H1gB4RVKvB", "iclr_2020_H1gB4RVKvB" ]
iclr_2020_Hye_V0NKwr
Locality and Compositionality in Zero-Shot Learning
In this work we study locality and compositionality in the context of learning representations for Zero Shot Learning (ZSL). In order to well-isolate the importance of these properties in learned representations, we impose the additional constraint that, differently from most recent work in ZSL, no pre-training on different datasets (e.g. ImageNet) is performed. The results of our experiments show how locality, in terms of small parts of the input, and compositionality, i.e. how well the learned representations can be expressed as a function of a smaller vocabulary, are both deeply related to generalization and motivate the focus on more local-aware models in future research directions for representation learning.
accept-poster
This paper investigates the role of locality (ability to encode only information specific to locations of interest) and compositionality (ability to be expressed as a combination of simpler parts) in Zero-Shot Learning (ZSL). The main contributions of the paper are (i) that, unlike previous ZSL frameworks, the model is not allowed to be pretrained on another dataset, and (ii) a thorough evaluation of existing methods. Following discussions, the weaknesses are (i) the proposed method (CMDIM) isn't sufficiently different or interesting compared to existing methods and (ii) the paper does not offer an in-depth discussion of locality and compositionality. Since the empirical evaluation is extensive, the decision is to accept.
train
[ "SJeARAZTYr", "rkx4AoO0Yr", "rkg4oKg3ir", "r1grNcg3iH", "SygcZ5xhiS", "HJgLUdTnYB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "UPDATE: My recommendation has been borderline because the discussion of the paper about the nature of locality and compositionality seems to be less in-depth than I would have expected, but if the authors will revise the submission to shift the focus of the paper to more focused on analysis and evaluation and weak...
[ 6, 8, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, 4 ]
[ "iclr_2020_Hye_V0NKwr", "iclr_2020_Hye_V0NKwr", "rkx4AoO0Yr", "HJgLUdTnYB", "SJeARAZTYr", "iclr_2020_Hye_V0NKwr" ]
iclr_2020_BygFVAEKDH
Understanding Knowledge Distillation in Non-autoregressive Machine Translation
Non-autoregressive machine translation (NAT) systems predict a sequence of output tokens in parallel, achieving substantial improvements in generation speed compared to autoregressive models. Existing NAT models usually rely on the technique of knowledge distillation, which creates the training data from a pretrained autoregressive model for better performance. Knowledge distillation is empirically useful, leading to large gains in accuracy for NAT models, but the reason for this success has, as of yet, been unclear. In this paper, we first design systematic experiments to investigate why knowledge distillation is crucial to NAT training. We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data. Furthermore, a strong correlation is observed between the capacity of an NAT model and the optimal complexity of the distilled data for the best translation quality. Based on these findings, we further propose several approaches that can alter the complexity of data sets to improve the performance of NAT models. We achieve the state-of-the-art performance for the NAT-based models, and close the gap with the autoregressive baseline on WMT14 En-De benchmark.
accept-poster
Main content: Blind review #3 summarized it well, as follows: This paper studies knowledge distillation in the context of non-autoregressive translation. In particular, it is well known that in order to make NAT competitive with AT, one needs to train the NAT system on a distilled dataset from the teacher model. Using initial experiments on EN=>ES/FR/DE, the authors argue that this necessity arises from the overly-multimodal nature of the output distribution, and that the AT teacher model produces a less multimodal distribution that is easier to model with NAT. Based on this, the authors propose two quantities that estimate the complexity (conditional entropy) and faithfulness (cross entropy vs real data), and derive approximations to these based on independence assumptions and an alignment model. The translations from the teacher output are indeed found to be less complex, thereby facilitating easier training for the NAT student model. -- Discussion: Questions were mostly about how robust the results were on other language pairs and random starting points. Authors addressed questions reasonably. One low review came from a reviewer who admitted not knowing the field, and I agree with the other two reviewers. -- Recommendation and justification: I think papers that offer empirical support for scientific insight (giving an "a-ha!" reaction), rather than massive engineering efforts to beat the state of the art, are very worthwhile in scientific conferences. This paper meets that criterion for acceptance.
train
[ "B1gjfhYBOr", "rJeOiuN2jr", "rJlBYihKiH", "Skl0PthYoB", "BygmuCAKjB", "r1eumhbnYH", "rJeXUeVstr", "rklYWJn9FS", "SkxKyddAOr", "rygAQB6a_H" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "[EDIT: Thanks for the response! I am updating my score to 8 given the rebuttal to my and other reviewers' questions]\n\n--------------------------------------------\nSummary:\n--------------------------------------------\nThis paper studies knowledge distillation in the context of non-autoregressive translation. I...
[ 8, -1, -1, -1, 8, 3, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, 5, 1, -1, -1, -1, -1 ]
[ "iclr_2020_BygFVAEKDH", "BygmuCAKjB", "B1gjfhYBOr", "r1eumhbnYH", "iclr_2020_BygFVAEKDH", "iclr_2020_BygFVAEKDH", "rklYWJn9FS", "SkxKyddAOr", "rygAQB6a_H", "iclr_2020_BygFVAEKDH" ]
iclr_2020_Byl5NREFDr
Thieves on Sesame Street! Model Extraction of BERT-based APIs
We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model. Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT (Devlin et al., 2019), we show that the adversary does not need any real training data to successfully mount the attack. In fact, the attacker need not even use grammatical or semantically meaningful queries: we show that random sequences of words coupled with task-specific heuristics form effective queries for model extraction on a diverse set of NLP tasks, including natural language inference and question answering. Our work thus highlights an exploit only made feasible by the shift towards transfer learning methods within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim model. Finally, we study two defense strategies against model extraction—membership classification and API watermarking—which while successful against some adversaries can also be circumvented by more clever ones.
accept-poster
Two knowledgeable reviewers recommend accepting the paper, and the less familiar reviewer is also positive. The final decision is to accept the paper. It's an interesting and timely topic with insightful results.
val
[ "rJxuOr0KsB", "HJeUtwCKor", "B1lMh7AYoS", "SJlh2oSRKH", "HklZG620Fr", "r1exx7fTKS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the detailed summary and comments.\n\n>> the victim model is fine-tuned on the original data, therefore it has picked up some of the data heuristics used to generate the queries, the annotators are not trained on, or shown any of the original examples (there is a control run, but these are presumably...
[ -1, -1, -1, 8, 6, 8 ]
[ -1, -1, -1, 3, 3, 1 ]
[ "r1exx7fTKS", "SJlh2oSRKH", "HklZG620Fr", "iclr_2020_Byl5NREFDr", "iclr_2020_Byl5NREFDr", "iclr_2020_Byl5NREFDr" ]
iclr_2020_BJx040EFvH
Fast is better than free: Revisiting adversarial training
Adversarial training, a method for learning robust deep networks, is typically assumed to be more expensive than traditional training due to the necessity of constructing adversarial examples via a first-order method like projected gradient descent (PGD). In this paper, we make the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach that was previously believed to be ineffective, rendering the method no more costly than standard training in practice. Specifically, we show that adversarial training with the fast gradient sign method (FGSM), when combined with random initialization, is as effective as PGD-based training but has significantly lower cost. Furthermore we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR10 classifier with 45% robust accuracy at epsilon=8/255 in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at epsilon=2/255 in 12 hours, in comparison to past work based on ``free'' adversarial training which took 10 and 50 hours to reach the same respective thresholds.
accept-poster
This paper provides a surprising result: that randomization and FGSM can produce robust models faster than previous methods given the right mix of cyclic learning rate, mixed precision, etc. This paper produced a fair bit of controversy among both the community and the reviewers to the point where there were suggestions of bugs, evaluation problems, and other issues leading to the results. In the end, the authors released the code (and made significant updates to the paper based on all the feedback). Multiple reviewers checked the code and were happy. There was an extensive author response, and all the reviewers indicated that their primary concerns were addressed, save concerns about the sensitivity of step-size and the impact of early stopping. Overall, the paper is well written and clear. The proposed approach is simple and well explained. The result is certainly interesting, and this paper will continue to generate fruitful debate. There are still things to address to improve the paper, listed above. I strongly encourage the authors to continue to improve the work and make a more concerted effort to carefully discuss the impacts of early stopping.
val
[ "ByeiH0AoFH", "Hyxi6-9rYS", "Byxm03EsiS", "SJl64i4ijH", "SklIbTJcjr", "Hye27hy5sS", "SyeFPoWDsS", "SkxcEtWPsH", "HJeOG_GmsS", "r1gL4MMzor", "BJeWKHKJjr", "HJeaa4KysB", "rJlYEVtJiB", "HygOpABTFS", "BygK0geC5B", "BkxATaruuS", "rkedM-SUuB", "SkgQ8qBkuB", "B1x0_Ul4OB", "SyeT0YG-dr"...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "public", "public", "public", "public", "author", "author", "public", ...
[ "The main claim of this paper is that a simple strategy of randomization plus fast gradient sign method (FGSM) adversarial training yields robust neural networks. This is somewhat surprising because previous works indicate that FGSM is not a powerful attack compared to iterative versions of it like projected gradie...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_BJx040EFvH", "iclr_2020_BJx040EFvH", "SklIbTJcjr", "Hye27hy5sS", "HJeOG_GmsS", "SkxcEtWPsH", "iclr_2020_BJx040EFvH", "r1gL4MMzor", "r1gL4MMzor", "rJlYEVtJiB", "HygOpABTFS", "ByeiH0AoFH", "Hyxi6-9rYS", "iclr_2020_BJx040EFvH", "B1x0_Ul4OB", "iclr_2020_BJx040EFvH", "S1g82u51_...
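The recipe described in this record's abstract, FGSM preceded by random initialization inside the L-inf ball, is small enough to sketch. Below is a minimal numpy illustration on a logistic model standing in for a deep network; all names and constants are invented for illustration (the paper's actual implementation is a PyTorch training loop with cyclic learning rates and mixed precision), and a step size larger than eps is fine because the perturbation is projected back onto the ball.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "network": logistic regression with fixed weights w.
w = rng.standard_normal(10)

def loss_and_grad_x(x, y):
    """Binary cross-entropy and its gradient with respect to the input."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return loss, (p - y) * w          # dL/dx

def fgsm_with_random_init(x, y, eps, alpha):
    """Key recipe: initialize delta uniformly at random in the L-inf ball,
    take a single FGSM step, then project back onto the ball."""
    delta = rng.uniform(-eps, eps, size=x.shape)     # random init
    _, gx = loss_and_grad_x(x + delta, y)
    delta = np.clip(delta + alpha * np.sign(gx), -eps, eps)
    return x + delta

x, y, eps = rng.standard_normal(10), 1, 0.1
x_adv = fgsm_with_random_init(x, y, eps, alpha=1.25 * eps)
# During adversarial training, the parameter gradient is then taken at x_adv,
# so each training step costs one extra forward/backward pass, not a PGD loop.
```

The contrast with PGD-based training is the single gradient step: the random starting point, rather than extra iterations, is what keeps the one-step adversary from collapsing onto the degenerate FGSM perturbation.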
iclr_2020_rkgyS0VFvr
DBA: Distributed Backdoor Attacks against Federated Learning
Backdoor attacks aim to manipulate a subset of training data by injecting adversarial triggers such that machine learning models trained on the tampered dataset will make arbitrarily (targeted) incorrect predictions on the test set with the same trigger embedded. While federated learning (FL) is capable of aggregating information provided by different parties for training a better model, its distributed learning methodology and inherently heterogeneous data distribution across parties may bring new vulnerabilities. In addition to recent centralized backdoor attacks on FL where each party embeds the same global trigger during training, we propose the distributed backdoor attack (DBA) --- a novel threat assessment framework developed by fully exploiting the distributed nature of FL. DBA decomposes a global trigger pattern into separate local patterns and embeds them into the training sets of different adversarial parties respectively. Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data. We conduct extensive experiments to show that the attack success rate of DBA is significantly higher than centralized backdoors under different settings. Moreover, we find that distributed attacks are indeed more insidious, as DBA can evade two state-of-the-art robust FL algorithms against centralized backdoors. We also provide explanations for the effectiveness of DBA via feature visual interpretation and feature importance ranking. To further explore the properties of DBA, we test the attack performance by varying different trigger factors, including local trigger variations (size, gap, and location), scaling factor in FL, data distribution, and poison ratio and interval. Our proposed DBA and thorough evaluation results shed light on characterizing the robustness of FL.
accept-poster
Thanks for the discussion, all. This paper proposes an attack strategy against federated learning. Reviewers put this in the top tier, and the authors responded appropriately to their criticisms.
train
[ "r1gNsdHnjH", "rkgZSdB3oH", "r1gTFFSnor", "BJl5fZS6YS", "SJe4yM7lcr", "rkeucgq15r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your appreciation of our work!", "Thanks so much for your valuable review comments! \n\nFollowing your suggestion, we evaluated the Byzantine settings Multi-Krum (Blanchard et al 2017) and Bulyan (El Mhamdi et al 2018 ICML). For both DBA and centralized attack we use the aggregation rule that can t...
[ -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, 5, 5, 1 ]
[ "rkeucgq15r", "SJe4yM7lcr", "BJl5fZS6YS", "iclr_2020_rkgyS0VFvr", "iclr_2020_rkgyS0VFvr", "iclr_2020_rkgyS0VFvr" ]
iclr_2020_rJeXS04FPH
DeFINE: Deep Factorized Input Token Embeddings for Neural Sequence Modeling
For sequence models with large vocabularies, a majority of network parameters lie in the input and output layers. In this work, we describe a new method, DeFINE, for learning deep token representations efficiently. Our architecture uses a hierarchical structure with novel skip-connections which allows for the use of low dimensional input and output layers, reducing total parameters and training time while delivering similar or better performance versus existing methods. DeFINE can be incorporated easily in new or existing sequence models. Compared to state-of-the-art methods including adaptive input representations, this technique results in a 6% to 20% drop in perplexity. On WikiText-103, DeFINE reduces the total parameters of Transformer-XL by half with minimal impact on performance. On the Penn Treebank, DeFINE improves AWD-LSTM by 4 points with a 17% reduction in parameters, achieving comparable performance to state-of-the-art methods with fewer parameters. For machine translation, DeFINE improves the efficiency of the Transformer model by about 1.4 times while delivering similar performance.
accept-poster
The authors design a deep model architecture for learning word embeddings with better performance and/or more efficient use of parameters. Results on language modeling and machine translation are promising. Pros: Interesting idea and nice results. New model may have some independent value beyond NLP. Cons: Empirical comparisons could be more thorough. For example, it is not clear (to me at least) what would be the benefits of this approach applied to whole words versus a competitor using subword units.
val
[ "rkgNTosmiB", "SygrHioQor", "SJeIxKeYqr", "B1xqgNTisH", "HygR6URcjB", "Syee-Rw9sr", "Skl4MjimiH", "SyxVJ6jQsB", "rklnJ3oQsS", "r1xKino7or", "rJgZbiimiB", "BklYJjjmiH", "BJxLp5i7sB", "H1l3GKsmir", "Syl00P_TFH", "HJgUu7or9B" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the feedback! \n\nReg. Table 1b:\n=============================================================\nResponse: This table compares DeFINE with state-of-the-art models which use significantly more computational resources. We highlight that our model’s modest performance decrease is accompanied by a drastic r...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 1 ]
[ "HJgUu7or9B", "HJgUu7or9B", "iclr_2020_rJeXS04FPH", "HygR6URcjB", "BJxLp5i7sB", "iclr_2020_rJeXS04FPH", "SJeIxKeYqr", "Syl00P_TFH", "Syl00P_TFH", "Syl00P_TFH", "SJeIxKeYqr", "SJeIxKeYqr", "SJeIxKeYqr", "iclr_2020_rJeXS04FPH", "iclr_2020_rJeXS04FPH", "iclr_2020_rJeXS04FPH" ]
iclr_2020_rylVHR4FPB
Sampling-Free Learning of Bayesian Quantized Neural Networks
Bayesian learning of model parameters in neural networks is important in scenarios where estimates with well-calibrated uncertainty are important. In this paper, we propose Bayesian quantized networks (BQNs), quantized neural networks (QNNs) for which we learn a posterior distribution over their discrete parameters. We provide a set of efficient algorithms for learning and prediction in BQNs without the need to sample from their parameters or activations, which not only allows for differentiable learning in quantized models but also reduces the variance in gradient estimation. We evaluate BQNs on the MNIST, Fashion-MNIST and KMNIST classification datasets, compared against a bootstrap ensemble of QNNs (E-QNN). We demonstrate that BQNs achieve both lower predictive errors and better-calibrated uncertainties than E-QNN (with less than 20% of the negative log-likelihood).
accept-poster
This paper proposes Bayesian quantized networks and efficient algorithms for learning and prediction of these networks. The reviewers generally thought that this was a novel and interesting paper. There were a few concerns about the clarity of parts of the paper and the experimental results. These concerns were addressed during the discussion phase, and the reviewers agree that the paper should be accepted.
train
[ "BkxSShajiB", "S1MoJ3aioB", "SyxjPspisB", "BJeeLPasjr", "rygNYBzLFS", "ByeXozehKB", "SygNIyraKB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your efforts in reviewing our submission.\n\n(1) Concerns of the presentation. \nIn the revised version, we modified the presentation as suggested and reduced the length of the paper to 8 pages, moving technical details to the appendix so that the main body focuses on the ideas and concepts. Specific...
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 4, 5, 3 ]
[ "rygNYBzLFS", "ByeXozehKB", "SygNIyraKB", "iclr_2020_rylVHR4FPB", "iclr_2020_rylVHR4FPB", "iclr_2020_rylVHR4FPB", "iclr_2020_rylVHR4FPB" ]
iclr_2020_ByeUBANtvB
Learning to solve the credit assignment problem
Backpropagation is driving today's artificial neural networks (ANNs). However, despite extensive research, it remains unclear if the brain implements this algorithm. Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative: neurons can randomly introduce change, and use unspecific feedback signals to observe their effect on the cost and thus approximate their gradient. However, the convergence rate of such learning scales poorly with the number of involved neurons. Here we propose a hybrid learning approach. Each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide. We provide proof that our approach converges to the true gradient for certain classes of networks. In both feedforward and convolutional networks, we empirically show that our approach learns to approximate the gradient, and can match the performance of gradient-based learning. Learning feedback weights provides a biologically plausible mechanism of achieving good performance, without the need for precise, pre-specified learning rules.
accept-poster
Initial reviews of this paper cited some concerns about a lack of comparison to SOTA and baselines, and also some debate over claims of what is (or is not) "biologically plausible." However, after extensive back-and-forth between the authors and reviewers these issues have been addressed and the paper has been improved. There is now consensus among reviewers that this paper should be accepted. I would like to thank the reviewers and authors for taking the time to thoroughly discuss this paper.
train
[ "rylITWMqKS", "r1l1YdnaFS", "S1gFsvUhoH", "BkgyfPUnoS", "rkllrd-Yir", "r1gTxKWFor", "Bket5_-YsB", "HygI4DIuiS", "B1x2cMt5KS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a method that addresses the \"weight transport\" problem [1] (not cited by the authors) emphasizing the biological infeasibility of artificial neural networks (ANNs) which are trained by gradients computed by the backpropagation algorithm [2a, 2b]. It is arguably the most eminent criticism (amo...
[ 6, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 5, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_ByeUBANtvB", "iclr_2020_ByeUBANtvB", "iclr_2020_ByeUBANtvB", "r1l1YdnaFS", "rylITWMqKS", "rylITWMqKS", "rylITWMqKS", "B1x2cMt5KS", "iclr_2020_ByeUBANtvB" ]
iclr_2020_HJx8HANFDH
Four Things Everyone Should Know to Improve Batch Normalization
A key component of most neural network architectures is the use of normalization layers, such as Batch Normalization. Despite its common use and large utility in optimizing deep architectures, it has been challenging both to generically improve upon Batch Normalization and to understand the circumstances that lend themselves to other enhancements. In this paper, we identify four improvements to the generic form of Batch Normalization and the circumstances under which they work, yielding performance gains across all batch sizes while requiring no additional computation during training. These contributions include proposing a method for reasoning about the current example in inference normalization statistics, fixing a training vs. inference discrepancy; recognizing and validating the powerful regularization effect of Ghost Batch Normalization for small and medium batch sizes; examining the effect of weight decay regularization on the scaling and shifting parameters γ and β; and identifying a new normalization algorithm for very small batch sizes by combining the strengths of Batch and Group Normalization. We validate our results empirically on six datasets: CIFAR-100, SVHN, Caltech-256, Oxford Flowers-102, CUB-2011, and ImageNet.
accept-poster
This paper proposes techniques to improve training with batch normalization. The paper establishes the benefits of these techniques experimentally using ablation studies. The reviewers found the results to be promising and of interest to the community. However, this paper is borderline in part due to the writing (notation issues) and because it does not discuss related work enough. We encourage the authors to properly address these issues before the camera ready.
test
[ "r1glmJ2ptH", "SygALR6ior", "Bke7e0Tsor", "B1l2ospjoB", "r1xcvsToiH", "H1lhSYpijB", "H1gqe_rmjr", "r1lmLcNaYB", "B1lHRzu6YS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "The paper performs an empirical study of four batch-normalization improvements and proposes a new normalization technique for small batch sizes, based on group and batch normalizations. Among others, the authors address the inconsistency between the train and the test stages and the problem of small batch sizes. T...
[ 6, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_HJx8HANFDH", "r1lmLcNaYB", "r1lmLcNaYB", "B1lHRzu6YS", "r1glmJ2ptH", "H1gqe_rmjr", "iclr_2020_HJx8HANFDH", "iclr_2020_HJx8HANFDH", "iclr_2020_HJx8HANFDH" ]
iclr_2020_BJedHRVtPB
Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving
Detecting objects such as cars and pedestrians in 3D plays an indispensable role in autonomous driving. Existing approaches largely rely on expensive LiDAR sensors for accurate depth information. While recently pseudo-LiDAR has been introduced as a promising alternative, at a much lower cost based solely on stereo images, there is still a notable performance gap. In this paper we provide substantial advances to the pseudo-LiDAR framework through improvements in stereo depth estimation. Concretely, we adapt the stereo network architecture and loss function to be more aligned with accurate depth estimation of faraway objects --- currently the primary weakness of pseudo-LiDAR. Further, we explore the idea to leverage cheaper but extremely sparse LiDAR sensors, which alone provide insufficient information for 3D detection, to de-bias our depth estimation. We propose a depth-propagation algorithm, guided by the initial depth estimates, to diffuse these few exact measurements across the entire depth map. We show on the KITTI object detection benchmark that our combined approach yields substantial improvements in depth estimation and stereo-based 3D object detection --- outperforming the previous state-of-the-art detection accuracy for faraway objects by 40%. Our code is available at https://github.com/mileyan/Pseudo_Lidar_V2.
accept-poster
Three knowledgeable reviewers give a positive evaluation of the paper. The decision is to accept.
train
[ "ryx0WO8jiH", "S1lWwstRFr", "Byepnv8oiH", "ryeL8DUioB", "SklURuLnFH", "Hkl-5f5CYr", "H1lBgSqhKB", "Bkg947IgKr" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "1.[detour over the disparity cost volume] Thanks for pointing this out. It is definitely possible to construct the depth cost volume directly, however constructing the disparity cost volume brings us simplicity in implementation and efficiency through utilizing matrix operations. Although it does require some addi...
[ -1, 6, -1, -1, 6, 6, -1, -1 ]
[ -1, 3, -1, -1, 4, 3, -1, -1 ]
[ "SklURuLnFH", "iclr_2020_BJedHRVtPB", "S1lWwstRFr", "Hkl-5f5CYr", "iclr_2020_BJedHRVtPB", "iclr_2020_BJedHRVtPB", "Bkg947IgKr", "iclr_2020_BJedHRVtPB" ]
iclr_2020_SkxJ8REYPH
SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum
Distributed optimization is essential for training large models on large datasets. Multiple approaches have been proposed to reduce the communication overhead in distributed training, such as synchronizing only after performing multiple local SGD steps, and decentralized methods (e.g., using gossip algorithms) to decouple communications among workers. Although these methods run faster than AllReduce-based methods, which use blocking communication before every update, the resulting models may be less accurate after the same number of updates. Inspired by the BMUF method of Chen & Huo (2016), we propose a slow momentum (SlowMo) framework, where workers periodically synchronize and perform a momentum update, after multiple iterations of a base optimization algorithm. Experiments on image classification and machine translation tasks demonstrate that SlowMo consistently yields improvements in optimization and generalization performance relative to the base optimizer, even when the additional overhead is amortized over many updates so that the SlowMo runtime is on par with that of the base optimizer. We provide theoretical convergence guarantees showing that SlowMo converges to a stationary point of smooth non-convex losses. Since BMUF can be expressed through the SlowMo framework, our results also correspond to the first theoretical convergence guarantees for BMUF.
accept-poster
This paper presents a new approach, SlowMo, to improve communication-efficient distribution training with SGD. The main method is based on the BMUF approach and relies on workers to periodically synchronize and perform a momentum update. This works well in practice as shown in the empirical results. Reviewers had a couple of concerns regarding the significance of the contributions. After the rebuttal period some of their doubts were clarified. Even though they find that the solutions of the paper are an incremental extension of existing work, they believe this is a useful extension. For this reason, I recommend to accept this paper.
val
[ "rkeNRR7o9r", "r1x8jRsdsB", "BkeAnNiXoS", "HJgG84oQiB", "S1l_Rmi7jS", "H1g7D8tTFS", "SJg5vflkqH" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents a simple momentum scheme which can be applied to distributed and decentralized SGD schemes. The scheme proposes to do a sequence inner/local steps of any optimizer without any momentum, but then only apply momentum on the outer level, after each global synchronization round.\n\nThe paper is clea...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_SkxJ8REYPH", "iclr_2020_SkxJ8REYPH", "H1g7D8tTFS", "SJg5vflkqH", "rkeNRR7o9r", "iclr_2020_SkxJ8REYPH", "iclr_2020_SkxJ8REYPH" ]
iclr_2020_SJx1URNKwH
MetaPix: Few-Shot Video Retargeting
We address the task of unsupervised retargeting of human actions from one video to another. We consider the challenging setting where only a few frames of the target is available. The core of our approach is a conditional generative model that can transcode input skeletal poses (automatically extracted with an off-the-shelf pose estimator) to output target frames. However, it is challenging to build a universal transcoder because humans can appear wildly different due to clothing and background scene geometry. Instead, we learn to adapt – or personalize – a universal generator to the particular human and background in the target. To do so, we make use of meta-learning to discover effective strategies for on-the-fly personalization. One significant benefit of meta-learning is that the personalized transcoder naturally enforces temporal coherence across its generated frames; all frames contain consistent clothing and background geometry of the target. We experiment on in-the-wild internet videos and images and show our approach improves over widely-used baselines for the task.
accept-poster
Three reviewers have assessed this paper and they have scored it 6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission.
train
[ "HyePk2nK5r", "Hkloac0Djr", "r1lyB9CPor", "HkedzcCPjS", "BkeVb25Otr", "ByezyYV9Fr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a novel and interesting task that learn to retarget human actions with few-shot samples. The overall pipeline is built by applying meta-learning strategy on pre-trained retargeting module. It follows a conditional generator and discriminator structure that leverages few-shot frames to retarget...
[ 6, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, 1, 5 ]
[ "iclr_2020_SJx1URNKwH", "BkeVb25Otr", "ByezyYV9Fr", "HyePk2nK5r", "iclr_2020_SJx1URNKwH", "iclr_2020_SJx1URNKwH" ]
iclr_2020_ryxz8CVYDH
Learning to Learn by Zeroth-Order Oracle
In the learning to learn (L2L) framework, we cast the design of optimization algorithms as a machine learning problem and use deep neural networks to learn the update rules. In this paper, we extend the L2L framework to zeroth-order (ZO) optimization setting, where no explicit gradient information is available. Our learned optimizer, modeled as recurrent neural network (RNN), first approximates gradient by ZO gradient estimator and then produces parameter update utilizing the knowledge of previous iterations. To reduce high variance effect due to ZO gradient estimator, we further introduce another RNN to learn the Gaussian sampling rule and dynamically guide the query direction sampling. Our learned optimizer outperforms hand-designed algorithms in terms of convergence rate and final solution on both synthetic and practical ZO optimization tasks (in particular, the black-box adversarial attack task, which is one of the most widely used tasks of ZO optimization). We finally conduct extensive analytical experiments to demonstrate the effectiveness of our proposed optimizer.
accept-poster
This paper proposes to extend the learning to learn framework to zeroth-order optimization. Generally, the paper is well presented and easy to follow. The core idea is to incorporate another RNN to adaptively learn the Gaussian sampling rule. Although the method does not seem to have strong theoretical support, its effectiveness is evaluated in well-organized experiments including realistic tasks like black-box adversarial attack. All reviewers including two experts in this field acknowledge the novelty of the method and are positive about acceptance. I’d like to support their opinions and recommend accepting the paper. As R#1 still finds some details unclear, please try to clarify these points in the final version of the paper.
train
[ "HylOroWRYB", "rylVGWoAFB", "BJgs4nvCFB", "Hkxl6r0isr", "Byxm28RjoB", "ryePwUAsjS", "rJlWfrRoir", "HJx62E0siH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The paper proposes a zeroth-order optimization framework that employs an RNN to modulate the sampling used to estimate gradients and a second RNN that models the parameter update. More specifically, query directions are sampled from a Gaussian distribution with a diagonal covariance whose evolution is determined b...
[ 6, 6, 8, -1, -1, -1, -1, -1 ]
[ 1, 5, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2020_ryxz8CVYDH", "iclr_2020_ryxz8CVYDH", "iclr_2020_ryxz8CVYDH", "BJgs4nvCFB", "HylOroWRYB", "HylOroWRYB", "rylVGWoAFB", "rylVGWoAFB" ]
iclr_2020_H1gX8C4YPr
DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames
We present Decentralized Distributed Proximal Policy Optimization (DD-PPO), a method for distributed reinforcement learning in resource-intensive simulated environments. DD-PPO is distributed (uses multiple machines), decentralized (lacks a centralized server), and synchronous (no computation is ever "stale"), making it conceptually simple and easy to implement. In our experiments on training virtual robots to navigate in Habitat-Sim, DD-PPO exhibits near-linear scaling -- achieving a speedup of 107x on 128 GPUs over a serial implementation. We leverage this scaling to train an agent for 2.5 Billion steps of experience (the equivalent of 80 years of human experience) -- over 6 months of GPU-time training in under 3 days of wall-clock time with 64 GPUs. This massive-scale training not only sets the state of art on Habitat Autonomous Navigation Challenge 2019, but essentially "solves" the task -- near-perfect autonomous navigation in an unseen environment without access to a map, directly from an RGB-D camera and a GPS+Compass sensor. Fortuitously, error vs computation exhibits a power-law-like distribution; thus, 90% of peak performance is obtained relatively early (at 100 million steps) and relatively cheaply (under 1 day with 8 GPUs). Finally, we show that the scene understanding and navigation policies learned can be transferred to other navigation tasks -- the analog of "ImageNet pre-training + task-specific fine-tuning" for embodied AI. Our model outperforms ImageNet pre-trained CNNs on these transfer tasks and can serve as a universal resource (all models and code are publicly available).
accept-poster
The authors present and implement a synchronous, distributed RL method called Decentralized Distributed Proximal Policy Optimization. The proposed technique was validated on the pointgoal visual navigation task from the recently introduced Habitat challenge 2019 and achieved state-of-the-art performance. Two reviewers recommend this paper for acceptance with only some minor comments, such as revising the title. Blind Review #2 has several major concerns about the implementation details. In the rebuttal, the authors provided the source code to make the results reproducible. Overall, the paper is well written with promising experimental results. I also recommend it for acceptance.
train
[ "r1xyq2tnKB", "Hylr8ivqjS", "rkxazsD9oH", "ByxIp9P9ir", "HkgFW5P5sB", "HyghBWDjYr", "r1lmtrSe9S" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents a novel scheme of distributing PPO reinforcement learning algorithm for hundreds of GPUs. Proposed technique was validated for pointgoal visual navigation task on recently introduced Habitat challenge and sim. \n\nBesides the technical contribution, paper shows that when have enough computationa...
[ 8, -1, -1, -1, -1, 8, 3 ]
[ 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_H1gX8C4YPr", "rkxazsD9oH", "r1lmtrSe9S", "HyghBWDjYr", "r1xyq2tnKB", "iclr_2020_H1gX8C4YPr", "iclr_2020_H1gX8C4YPr" ]
iclr_2020_BJxVI04YvB
PAC Confidence Sets for Deep Neural Networks via Calibrated Prediction
We propose an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees---i.e., the confidence set for a given input contains the true label with high probability. We demonstrate how our approach can be used to construct PAC confidence sets on ResNet for ImageNet, a visual object tracking model, and a dynamics model for the half-cheetah reinforcement learning problem.
accept-poster
This paper describes a method for bounding the confidence around predictions made by deep networks. Reviewers agree that this result is of technical interest to the community, and with the added reorganization and revisions described by the authors, they and the AC agree the paper should be accepted.
train
[ "r1lOV7t2KS", "rylS1jhijS", "Hklz8R2jjr", "S1gsEivqsS", "HkljJqvqjB", "SkgJP1bKoB", "Byl8KRltjB", "rkgiuG8pYH", "HJgsF1xecr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper presents an approach for generating confidence set predictions from deep networks. That is, the smallest set of predictions where the true answer is included in that set. Theory is used to derive an algorithm with PAC-style bounds on the population risk. \n\n\nOverall Assessment: I like the cor...
[ 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_BJxVI04YvB", "r1lOV7t2KS", "iclr_2020_BJxVI04YvB", "r1lOV7t2KS", "r1lOV7t2KS", "rkgiuG8pYH", "HJgsF1xecr", "iclr_2020_BJxVI04YvB", "iclr_2020_BJxVI04YvB" ]
iclr_2020_SJgVU0EKwS
Precision Gating: Improving Neural Network Efficiency with Dynamic Dual-Precision Activations
We propose precision gating (PG), an end-to-end trainable dynamic dual-precision quantization technique for deep neural networks. PG computes most features in a low precision and only a small proportion of important features in a higher precision to preserve accuracy. The proposed approach is applicable to a variety of DNN architectures and significantly reduces the computational cost of DNN execution with almost no accuracy loss. Our experiments indicate that PG achieves excellent results on CNNs, including statically compressed mobile-friendly networks such as ShuffleNet. Compared to the state-of-the-art prediction-based quantization schemes, PG achieves the same or higher accuracy with 2.4× less compute on ImageNet. PG furthermore applies to RNNs. Compared to 8-bit uniform quantization, PG obtains a 1.2% improvement in perplexity per word with 2.7× computational cost reduction on LSTM on the Penn Tree Bank dataset.
accept-poster
The submission proposes an approach to accelerate network training by modifying the precision of individual weights, allowing a substantial speed up without a decrease in model accuracy. The magnitude of the activations determines whether it will be computed at a high or low bitwidth. The reviewers agreed that the paper should be published given the strong results, though there were some salient concerns which the authors should address in their final revision, such as how the method could be implemented on GPU and what savings could be achieved. Recommendation is to accept.
train
[ "HkeV9s_jFr", "Hyec_uZ2jH", "SJgqWKsijB", "Sygaq4VUjB", "BJxVzZEUiS", "SklmUjQ8iB", "SkegaYm8oB", "S1lyvcKksB", "BJlTMlMpFB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents an interesting quantization technique that is, unusually, end-to-end trainable and not just an inference technique. According to the experiments, the method achieves better performance and computational savings as compared to other quantization method baselines. The results are admirably demons...
[ 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_SJgVU0EKwS", "BJxVzZEUiS", "SkegaYm8oB", "HkeV9s_jFr", "BJlTMlMpFB", "S1lyvcKksB", "S1lyvcKksB", "iclr_2020_SJgVU0EKwS", "iclr_2020_SJgVU0EKwS" ]
iclr_2020_Bke8UR4FPB
Oblique Decision Trees from Derivatives of ReLU Networks
We show how neural models can be used to realize piece-wise constant functions such as decision trees. The proposed architecture, which we call locally constant networks, builds on ReLU networks that are piece-wise linear and hence their associated gradients with respect to the inputs are locally constant. We formally establish the equivalence between the classes of locally constant networks and decision trees. Moreover, we highlight several advantageous properties of locally constant networks, including how they realize decision trees with parameter sharing across branching / leaves. Indeed, only M neurons suffice to implicitly model an oblique decision tree with 2M leaf nodes. The neural representation also enables us to adopt many tools developed for deep networks (e.g., DropConnect (Wan et al., 2013)) while implicitly training decision trees. We demonstrate that our method outperforms alternative techniques for training oblique decision trees in the context of molecular property classification and regression tasks.
accept-poster
This paper leverages the piecewise linearity of predictions in ReLU neural networks to encode and learn piecewise constant predictors akin to oblique decision trees. The reviewers think the paper is interesting, and the idea is clever. The paper could be further improved on the experimental side: this includes comparison to ensembles of traditional trees or (in some cases) simple ReLU networks. Tradeoffs other than accuracy between the method and baselines would also be interesting to explore.
train
[ "HJlLZg0dqH", "BJg0mfZ2tr", "H1xha_UojS", "B1x-0SWKoH", "S1gghHW8iB", "rkxW5rWUoB", "SkxKU4WIoB", "HJgPu7-IjS", "Byg1Um-LjB", "rkgFDc0HcB" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "*Summary*\nThis paper leverages the piecewise linearity of predictions in ReLU neural networks to encode and learn piecewise constant predictors akin to oblique decision trees (trees with splits made on linear combinations of features instead of axis-aligned splits). The core observation is that the Jacobian of a ...
[ 3, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_Bke8UR4FPB", "iclr_2020_Bke8UR4FPB", "B1x-0SWKoH", "SkxKU4WIoB", "HJlLZg0dqH", "HJlLZg0dqH", "BJg0mfZ2tr", "rkgFDc0HcB", "iclr_2020_Bke8UR4FPB", "iclr_2020_Bke8UR4FPB" ]
iclr_2020_B1guLAVFDB
Span Recovery for Deep Neural Networks with Applications to Input Obfuscation
The tremendous success of deep neural networks has motivated the need to better understand the fundamental properties of these networks, but many of the theoretical results proposed have only been for shallow networks. In this paper, we study an important primitive for understanding the meaningful input space of a deep network: span recovery. For k<n, let A∈Rk×n be the innermost weight matrix of an arbitrary feed forward neural network M:Rn→R, so M(x) can be written as M(x)=σ(Ax), for some network σ:Rk→R. The goal is then to recover the row span of A given only oracle access to the value of M(x). We show that if M is a multi-layered network with ReLU activation functions, then partial recovery is possible: namely, we can provably recover k/2 linearly independent vectors in the row span of A using poly(n) non-adaptive queries to M(x). Furthermore, if M has differentiable activation functions, we demonstrate that full span recovery is possible even when the output is first passed through a sign or 0/1 thresholding function; in this case our algorithm is adaptive. Empirically, we confirm that full span recovery is not always possible, but only for unrealistically thin layers. For reasonably wide networks, we obtain full span recovery on both random networks and networks trained on MNIST data. Furthermore, we demonstrate the utility of span recovery as an attack by inducing neural networks to misclassify data obfuscated by controlled random noise as sensical inputs.
accept-poster
The authors propose a way to recover latent factors implicitly constructed by a neural net given black-box access to the net's output. This can be useful for identifying possible adversarial attacks. The majority of reviewers agree that this is a solid technical and experimental contribution.
train
[ "BketSBj2cB", "ryxh_S_3jB", "BklYREN2oB", "rJegc2VhsH", "Syx-nIN3jB", "BJgdO8MJ5H", "BygwlnWNcB", "BkgMU573cr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies the problem of span recovery for deep neural networks, that is\nrecovering the space of inputs that affect the output of the network.\nThe authors propose two algorithm, one for the case of ReLU activations and one\nfor the case of differentiable activations, and theoretically prove that they can...
[ 3, -1, -1, -1, -1, 6, 8, 6 ]
[ 5, -1, -1, -1, -1, 1, 5, 1 ]
[ "iclr_2020_B1guLAVFDB", "BygwlnWNcB", "BketSBj2cB", "BJgdO8MJ5H", "BkgMU573cr", "iclr_2020_B1guLAVFDB", "iclr_2020_B1guLAVFDB", "iclr_2020_B1guLAVFDB" ]
iclr_2020_ByxY8CNtvr
Improving Neural Language Generation with Spectrum Control
Recent Transformer-based models such as Transformer-XL and BERT have achieved huge success on various natural language processing tasks. However, contextualized embeddings at the output layer of these powerful models tend to degenerate and occupy an anisotropic cone in the vector space, which is called the representation degeneration problem. In this paper, we propose a novel spectrum control approach to address this degeneration problem. The core idea of our method is to directly guide the spectra training of the output embedding matrix with a slow-decaying singular value prior distribution through a reparameterization framework. We show that our proposed method encourages isotropy of the learned word representations while maintaining the modeling power of these contextual neural models. We further provide a theoretical analysis and insight on the benefit of modeling singular value distribution. We demonstrate that our spectrum control method outperforms the state-of-the-art Transformer-XL modeling for language model, and various Transformer-based models for machine translation, on common benchmark datasets for these tasks.
accept-poster
Main content: Blind review #2 summarizes it well: Summary: This paper deals with the representation degeneration problem in neural language generation, as some prior works have found that the singular value distribution of the (input-output-tied) word embedding matrix decays quickly. The authors proposed an approach that directly penalizes deviations of the SV distribution from the two prior distributions, as well as a few other auxiliary losses on the orthogonality of U and V (which are now learnable). The experiments were conducted on small and large scale language modeling datasets as well as the relatively small IWSLT 2014 De-En MT dataset. Pros: + The paper is well-written with great clarity. The dimensionality of the involved matrices (and their decompositions) are clearly provided, and the approach is clearly described. The authors also did a great job providing the details of their experimental setup. + The experiments seem to show consistent improvements over the baseline methods (at least the ones listed by the authors) on a relatively extensive set of tasks (e.g., of both small and large scales, of two different NLP tasks). Via WT2 and WT103, the authors also showed that their method worked on both LSTM and Transformers (which it should, as the SVD on word embedding should be independent of the underlying architecture). + I think studying the expressivity of the output embedding matrix layer is a very interesting (and important) topic for NLP. (e.g., While models like BERT are widely used, the actual most frequently re-used module of BERT is its pre-trained word embeddings.) -- Discussion: The reviewers agree that it is a very well written paper, and this is important as a conference paper to illuminate readers. The one main objection is that spectrum control regularization was previously proposed and applied to GANs (Jiang et al ICLR 2019). 
However the authors convincingly point out that the technique is widely used, not only for GANs, and that application to neural language generation has quite different characteristics requiring a different, new approach: "our proposed prior distributions as shown in Figure 2 in our paper are fundamentally different from the singular value distributions learned using their penalty functions (See Figure 1 and Table 7 in Jiang et al.’s paper). Figure 1 in their paper suggests that their penalty function, i.e., D-optimal Reg, will encourage all the singular values close to 1, which is well aligned with their motivation for training GAN. However, if we use such penalty function to train neural language models, the learned word representations will lose the power of modeling contextual information, and can result in much worse results than the baseline methods." -- Recommendation and justification: I concur with the majority of reviewers that this paper is a weak accept. Though not revolutionary, it is well written, has usefully broad application, and is supported well empirically.
train
[ "Hyl1_Z4HiB", "rkxq1WEBsr", "Hkx1vlEriB", "BJg6QHLiYS", "HyeHZ7XCtH", "Skl7NjSCFS", "H1lsI7xtcH", "BJlDmk6wcS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thanks for your constructive comments, we answer your questions/comments as follows:\n\n1) In practice we use our method to train the models from scratch, we have clarified this in Section 4.3 in the revision.\n\n2) Our method is efficient and the memory cost is reasonable, and we have added a training time and me...
[ -1, -1, -1, 6, 3, 6, -1, -1 ]
[ -1, -1, -1, 4, 5, 3, -1, -1 ]
[ "BJg6QHLiYS", "HyeHZ7XCtH", "Skl7NjSCFS", "iclr_2020_ByxY8CNtvr", "iclr_2020_ByxY8CNtvr", "iclr_2020_ByxY8CNtvr", "BJlDmk6wcS", "iclr_2020_ByxY8CNtvr" ]
iclr_2020_SJlh8CEYDB
Learn to Explain Efficiently via Neural Logic Inductive Learning
The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems. In this work, we study the learning to explain the problem in the scope of inductive logic programming (ILP). We propose Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data. In experiments, compared with the state-of-the-art models, we find NLIL is able to search for rules that are x10 times longer while remaining x3 times faster. We also show that NLIL can scale to large image datasets, i.e. Visual Genome, with 1M entities.
accept-poster
This paper proposes a differentiable inductive logic programming method in the vein of recent work on the topic, with efficiency-focussed improvements. Thanks to the very detailed comments and discussion with the reviewers, my view is that the paper is acceptable to ICLR. I am mindful of the reasons for reluctance from reviewer #3 — while these are not enough to reject the paper, I would strongly, *STRONGLY* advise the authors to consider adding a short section providing comparison to traditional ILP methods and NLM in their camera ready.
train
[ "HylB_NPsiB", "B1eQHHIijr", "Bkg0Zszjir", "HJgJxvzioB", "rkgAlxfAtr", "Syly91Wjor", "S1gH4ClosH", "SyeY5eb5iH", "BJl0EE-9sr", "rJxfJQZqoH", "SJxkCCg5oB", "HyeWvfWqjH", "SygoWVWcjr", "rylYm7W5sB", "Bkx3fb-qoH", "Skl_EBtrtS", "Bylt5yX0FS" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reivewer3 for reassessing the paper. Our responses to your questions are as follows.\n\n\n- What is {<x, P, x'>} for unary P? \n\nIf the query predicate is unary then there is only one entity. For notation consistency, we use the same format. One can treat the second argument of the unary query as a p...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "B1eQHHIijr", "SyeY5eb5iH", "Syly91Wjor", "S1gH4ClosH", "iclr_2020_SJlh8CEYDB", "rylYm7W5sB", "rkgAlxfAtr", "Bylt5yX0FS", "Skl_EBtrtS", "rkgAlxfAtr", "iclr_2020_SJlh8CEYDB", "rkgAlxfAtr", "Skl_EBtrtS", "rkgAlxfAtr", "Bylt5yX0FS", "iclr_2020_SJlh8CEYDB", "iclr_2020_SJlh8CEYDB" ]
iclr_2020_ryx1wRNFvB
Improved memory in recurrent neural networks with sequential non-normal dynamics
Training recurrent neural networks (RNNs) is a hard problem due to degeneracies in the optimization landscape, a problem also known as vanishing/exploding gradients. Short of designing new RNN architectures, previous methods for dealing with this problem usually boil down to orthogonalization of the recurrent dynamics, either at initialization or during the entire training period. The basic motivation behind these methods is that orthogonal transformations are isometries of the Euclidean space, hence they preserve (Euclidean) norms and effectively deal with vanishing/exploding gradients. However, this ignores the crucial effects of non-linearity and noise. In the presence of a non-linearity, orthogonal transformations no longer preserve norms, suggesting that alternative transformations might be better suited to non-linear networks. Moreover, in the presence of noise, norm preservation itself ceases to be the ideal objective. A more sensible objective is maximizing the signal-to-noise ratio (SNR) of the propagated signal instead. Previous work has shown that in the linear case, recurrent networks that maximize the SNR display strongly non-normal, sequential dynamics and orthogonal networks are highly suboptimal by this measure. Motivated by this finding, here we investigate the potential of non-normal RNNs, i.e. RNNs with a non-normal recurrent connectivity matrix, in sequential processing tasks. Our experimental results show that non-normal RNNs outperform their orthogonal counterparts in a diverse range of benchmarks. We also find evidence for increased non-normality and hidden chain-like feedforward motifs in trained RNNs initialized with orthogonal recurrent connectivity matrices.
accept-poster
This paper proposes to explore non-normal matrix initialization in RNNs. Two reviewers recommended acceptance and one recommended rejection. The reviewers recommending acceptance highlighted the utility of the approach, its potential to inspire future work, and the clarity and quality of the writing and accompanying experiments. One reviewer recommending weak acceptance expressed appreciation of the quality of the rebuttal and noted that their concerns were largely addressed. The reviewer recommending rejection was primarily concerned with the novelty of the method. Their review suggested the inclusion of an additional citation, which was included in a revised version during the rebuttal, though without a direct comparison of results. On balance, the paper has a relatively high degree of support from the reviewers, and presents an interesting and potentially useful initialization in a clear and well-motivated way.
train
[ "HylSCdN2sB", "HyerkTJAFH", "ryl5UEe9iS", "rylgt7lqoS", "HklMAZl9iB", "S1x8Rex5oH", "Bkxc7hICKr", "S1lp0zP-9r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the nice rebuttal and the additional experiments!\n\nThe rebuttal addresses most of my initial concerns, I updated my rating to reflect this.\n", "Contributions:\n This paper proposes to explore nonnormal matrix initialization in RNNs. Authors demonstrate on various tasks (Copy/Addition, Permuted-SMNI...
[ -1, 6, -1, -1, -1, -1, 8, 3 ]
[ -1, 4, -1, -1, -1, -1, 1, 4 ]
[ "rylgt7lqoS", "iclr_2020_ryx1wRNFvB", "iclr_2020_ryx1wRNFvB", "HyerkTJAFH", "Bkxc7hICKr", "S1lp0zP-9r", "iclr_2020_ryx1wRNFvB", "iclr_2020_ryx1wRNFvB" ]
iclr_2020_SygWvAVFPr
Neural Module Networks for Reasoning over Text
Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations. Neural module networks (NMNs) learn to parse such questions as executable programs composed of learnable modules, performing well on synthetic visual QA domains. However, we find that it is challenging to learn these models for non-synthetic questions on open-domain text, where a model needs to deal with the diversity of natural language and perform a broader range of reasoning. We extend NMNs by: (a) introducing modules that reason over a paragraph of text, performing symbolic reasoning (such as arithmetic, sorting, counting) over numbers and dates in a probabilistic and differentiable manner; and (b) proposing an unsupervised auxiliary loss to help extract arguments associated with the events in text. Additionally, we show that a limited amount of heuristically-obtained question program and intermediate module output supervision provides sufficient inductive bias for accurate learning. Our proposed model significantly outperforms state-of-the-art models on a subset of the DROP dataset that poses a variety of reasoning challenges that are covered by our modules.
accept-poster
This work extends the previously introduced NMN for VQA to handle reasoning over text, using symbolic reasoning components that can perform counting, sorting, etc., and can be compositionally combined. Moreover, to train the model successfully, the authors introduce a simple unsupervised auxiliary loss for training the IE components, as well as heuristically incorporating inductive biases into the behaviour of a couple of components. All reviewers agreed that this is a challenging topic and an interesting approach to symbolic reasoning over text. At the same time, reviewers did point out that the experiments are borderline thin, since the authors start with DROP and drop questions that are not particularly suited for symbolic reasoning, resulting in a substantially smaller dataset. Despite the fact that the experiments could probably be stronger, I'm recommending acceptance because this topic is very interesting and this is a good paper to raise discussions at ICLR.
train
[ "BkgvQQW0tH", "B1xkXk7B5H", "rJxuoEI3jr", "Byl0OT2ujS", "ryl1M62uor", "ByeEE63diH", "Hyl4DhoAtS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a model and a training framework for question answering which requires compositional reasoning over the input text, by building executable neural modules and training based on additional auxiliary supervision signals.\n\nI really like this paper and the approach taken: tackling complex QA task...
[ 6, 6, -1, -1, -1, -1, 8 ]
[ 3, 4, -1, -1, -1, -1, 3 ]
[ "iclr_2020_SygWvAVFPr", "iclr_2020_SygWvAVFPr", "iclr_2020_SygWvAVFPr", "B1xkXk7B5H", "BkgvQQW0tH", "Hyl4DhoAtS", "iclr_2020_SygWvAVFPr" ]
iclr_2020_HJgfDREKDB
Higher-Order Function Networks for Learning Composable 3D Object Representations
We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second 'mapping' network. This mapping network can be used to reconstruct an object by applying its encoded transformation to points randomly sampled from a simple geometric space, such as the unit sphere. We study the effectiveness of our method through various experiments on subsets of the ShapeNet dataset. We find that the proposed approach can reconstruct encoded objects with accuracy equal to or exceeding state-of-the-art methods with orders of magnitude fewer parameters. Our smallest mapping network has only about 7000 parameters and shows reconstruction quality on par with state-of-the-art object decoder architectures with millions of parameters. Further experiments on feature mixing through the composition of learned functions show that the encoding captures a meaningful subspace of objects.
accept-poster
The submission presents an approach to single-view 3D reconstruction. The approach is quite creative and involves predicting the weights of a network that is then applied to a point set. The presentation is good. The experimental protocol is well-informed and the results are convincing. The reviewers' concerns have largely been addressed by the authors' responses and the revision. In particular, R2, who gave a "3", posted "I would now advise to raise my score (3 previously) to a be in line with the 6: Weak Accept given by the other reviewers." This means that all three reviewers recommend accepting the paper. The AC agrees.
train
[ "rygQahhosH", "r1gIbEt9jr", "BkxjmrtcjH", "rkl6-fY9jr", "Syee2fmjYH", "HyxflLtAYB", "SklBkB-f9r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their insightful suggestions and comments, as well as their appreciation of the novelty of our method HOF. We feel that we were able to address all of their concerns in our followup experiments. Most of all, the reviewers requested additional experiments comparing our method with other b...
[ -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, 4, 5, 3 ]
[ "iclr_2020_HJgfDREKDB", "HyxflLtAYB", "Syee2fmjYH", "SklBkB-f9r", "iclr_2020_HJgfDREKDB", "iclr_2020_HJgfDREKDB", "iclr_2020_HJgfDREKDB" ]
iclr_2020_H1x5wRVtvS
Variational Hetero-Encoder Randomized GANs for Joint Image-Text Modeling
For bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN), a versatile deep generative model that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework. VHE randomized GAN (VHE-GAN) encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator. We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance. This further motivates the development of VHE-raster-scan-GAN that generates photo-realistic images in not only a multi-scale low-to-high-resolution manner, but also a hierarchical-semantic coarse-to-fine fashion. By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks.
accept-poster
This paper proposes a bidirectional joint image-text model using a variational hetero-encoder (VHE) randomized generative adversarial network (GAN). The proposed VHE-GAN model encodes an image to decode its associated text. The three reviewers gave split reviews. Reviewer #3 is overall positive about this work. Reviewer #1 rated weak acceptance while requesting more comparisons with the latest works. Reviewer #2 rated weak reject, raising concerns about the motivation of the approach, the lack of ablation, and the lack of comparison with the latest work. During the rebuttal, the authors provided additional comparisons and ablations, which seem to address the major concerns. Given the overall positive feedback and the quality of the rebuttal, the AC recommends acceptance.
train
[ "ryeEe7T0Fr", "H1l5whD9jS", "HJeVdgd9sB", "HkeEgRwcjB", "B1eRbaw9sH", "HylBJaH6FS", "SkxqhVvaYr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: The authors design a new model for bidirectional joint image-text modeling using a variational hetero-encoder\n(VHE) randomized generative adversarial network (GAN) that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into an end-to-end multimodal model. Their proposed VHE-GA...
[ 6, -1, -1, -1, -1, 3, 8 ]
[ 3, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_H1x5wRVtvS", "iclr_2020_H1x5wRVtvS", "HylBJaH6FS", "SkxqhVvaYr", "ryeEe7T0Fr", "iclr_2020_H1x5wRVtvS", "iclr_2020_H1x5wRVtvS" ]
iclr_2020_r1eowANFvr
Towards Fast Adaptation of Neural Architectures with Meta Learning
Recently, Neural Architecture Search (NAS) has been successfully applied to multiple artificial intelligence areas and shows better performance compared with hand-designed networks. However, the existing NAS methods only target a specific task. Most of them usually do well in searching an architecture for a single task but are troublesome for multiple datasets or multiple tasks. Generally, the architecture for a new task is either searched from scratch, which is neither efficient nor flexible enough for practical application scenarios, or borrowed from the ones searched on other tasks, which might not be optimal. In order to tackle the transferability of NAS and conduct fast adaptation of neural architectures, we propose a novel Transferable Neural Architecture Search method based on meta-learning in this paper, which is termed as T-NAS. T-NAS learns a meta-architecture that is able to adapt to a new task quickly through a few gradient steps, which makes the transferred architecture suitable for the specific task. Extensive experiments show that T-NAS achieves state-of-the-art performance in few-shot learning and comparable performance in supervised learning but with 50x less searching cost, which demonstrates the effectiveness of our method.
accept-poster
This paper introduces T-NAS, a neural architecture search (NAS) method that can quickly adapt architectures to new datasets based on gradient-based meta-learning. It is a combination of the NAS method DARTS and the meta-learning method MAML. All reviewers had some questions and minor criticisms that the authors replied to, and in the private discussion of reviewers and AC all reviewers were happy with the authors' answers. There was unanimous agreement that this is a solid poster. Therefore, I recommend acceptance as a poster.
train
[ "ryxfMAznsS", "rJgPexXnsS", "SygfPAGhsr", "ByexFlX9tH", "HklZo4iptr", "Hkeug-RptS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your insightful and valuable suggestions. We have tried our best to improve our paper according to your comments. Please see the responses as follows.\n\nQ1: ‘I think it would be good to add the method of Liu et al 2018b as a background’.\nA1: We have added DARTS (Liu et al 2018b) to the background, ...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 4, 4, 3 ]
[ "Hkeug-RptS", "ByexFlX9tH", "HklZo4iptr", "iclr_2020_r1eowANFvr", "iclr_2020_r1eowANFvr", "iclr_2020_r1eowANFvr" ]
iclr_2020_B1x6w0EtwH
Graph Constrained Reinforcement Learning for Natural Language Action Spaces
Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language. They are ideal environments for studying how to extend reinforcement learning agents to meet the challenges of natural language understanding, partial observability, and action generation in combinatorially-large text-based action spaces. We present KG-A2C, an agent that builds a dynamic knowledge graph while exploring and generates actions using a template-based action space. We contend that the dual uses of the knowledge graph to reason about game state and to constrain natural language generation are the keys to scalable exploration of combinatorially large natural language actions. Results across a wide variety of IF games show that KG-A2C outperforms current IF agents despite the exponential increase in action space size.
accept-poster
This paper applies reinforcement learning to text adventure games by using knowledge graphs to constrain the action space. This is an exciting problem with relatively little work performed on it. Reviews agree that this is an interesting paper, well written, with good results. There are some concerns about novelty but general agreement that the paper should be accepted. I therefore recommend acceptance.
train
[ "rJxq_Sx0FB", "r1ga-eIL5H", "BJgGH86ooB", "SJxU3HajsH", "HkgmOrToor", "BygA6oW15H" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a knowledge graph advantage actor critic (KG-A2C) model to allow an agent to do reinforcement learning in the interactive fiction game. Under the general framework of A2C, the core contribution of the paper is to apply a graph attention network on the knowledge graph to help learn better repres...
[ 6, 6, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, 4 ]
[ "iclr_2020_B1x6w0EtwH", "iclr_2020_B1x6w0EtwH", "rJxq_Sx0FB", "BygA6oW15H", "r1ga-eIL5H", "iclr_2020_B1x6w0EtwH" ]
iclr_2020_BJxG_0EtDS
Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control
Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lower-dimensional latent representation space, estimate the latent dynamics model, then utilize this model for control in the latent space. An important open question is how to learn a representation that is amenable to existing control algorithms? In this paper, we focus on learning representations for locally-linear control algorithms, such as iterative LQR (iLQR). By formulating and analyzing the representation learning problem from an optimal control perspective, we establish three underlying principles that the learned representation should comprise: 1) accurate prediction in the observation space, 2) consistency between latent and observation space dynamics, and 3) low curvature in the latent space transitions. These principles naturally correspond to a loss function that consists of three terms: prediction, consistency, and curvature (PCC). Crucially, to make PCC tractable, we derive an amortized variational bound for the PCC loss function. Extensive experiments on benchmark domains demonstrate that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance. Further ablation studies give support to the importance of all three PCC components for learning a good latent space for control.
accept-poster
This paper studies optimal control with low-dimensional representation. The paper presents interesting progress, although I urge the authors to address all issues raised by reviewers in their revisions.
train
[ "r1xykC03qS", "BkxlDTiMor", "Bkg_J0ofor", "BkggJasfoS", "HkxhB8v8KS", "HylFTIdRYB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers learning low-dimensional representations from high-dimensional observations for control purposes. The authors extend the E2C framework by introducing the new PCC-Loss function. This new loss function aims to reflect the prediction in the observation space, the consistency between latent and ob...
[ 6, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, 1, 1 ]
[ "iclr_2020_BJxG_0EtDS", "r1xykC03qS", "HkxhB8v8KS", "HylFTIdRYB", "iclr_2020_BJxG_0EtDS", "iclr_2020_BJxG_0EtDS" ]
iclr_2020_ryxQuANKPB
Augmenting Non-Collaborative Dialog Systems with Explicit Semantic and Strategic Dialog History
We study non-collaborative dialogs, where two agents have a conflict of interest but must strategically communicate to reach an agreement (e.g., negotiation). This setting poses new challenges for modeling dialog history because the dialog's outcome relies not only on the semantic intent, but also on tactics that convey the intent. We propose to model both semantic and tactic history using finite state transducers (FSTs). Unlike RNNs, FSTs can explicitly represent dialog history through all the states traversed, facilitating interpretability of dialog structure. We train FSTs on a set of strategies and tactics used in negotiation dialogs. The trained FSTs show plausible tactic structure and can be generalized to other non-collaborative domains (e.g., persuasion). We evaluate the FSTs by incorporating them in an automated negotiating system that attempts to sell products and a persuasion system that persuades people to donate to a charity. Experiments show that explicitly modeling both semantic and tactic history is an effective way to improve both dialog policy planning and generation performance.
accept-poster
This work proposes the use of two pre-trained FST models to explicitly incorporate semantic and strategic/tactic information from dialog history into non-collaborative (negotiation) dialog systems. Experiments on two datasets from prior work show the advantage of this model in automated and human evaluation. While all reviewers found the work interesting, they made many suggestions regarding the presentation. The authors' rebuttal included explanations and changes to the presentation. Hence, I suggest acceptance as a poster presentation.
train
[ "rJxHJvQDoB", "SygoELQwsS", "HJxIj9Gwor", "BygDgZA3KB", "B1xaAx7AKH", "Sye3Ss8Q5B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your time and thoughtful review! \n\n“Can you explain more about the choice for the parameters? For example, why the possible strategy output is a 15-dimensional binary-value vector?”\n * We draw insights from behavioral research and operationalize these in 15 negotiation strategies in our strateg...
[ -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, 1, 5, 4 ]
[ "BygDgZA3KB", "B1xaAx7AKH", "Sye3Ss8Q5B", "iclr_2020_ryxQuANKPB", "iclr_2020_ryxQuANKPB", "iclr_2020_ryxQuANKPB" ]
iclr_2020_SkeHuCVFDr
BERTScore: Evaluating Text Generation with BERT
We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task and show that BERTScore is more robust to challenging examples compared to existing metrics.
accept-poster
Thanks for an interesting discussion. The authors present a supposedly task-independent evaluation metric for generation tasks with references that relies on BERT or similar pretrained language models and a BERT-internal alignment. Reviewers are moderately positive. I encourage the authors to think about a) whether their approach scales to language pairs where wordpieces are less comparable; b) whether second-order similarity, e.g., using RSA, would be better than alignment-based similarity; c) whether this metric works in the extremes, e.g., can it distinguish between bad output and super-bad output (where in both cases alignment may be impossible), and can it distinguish between good output and super-good output (where BERT scores may be too biased by BERT's training objective).
train
[ "S1lNbqHjKB", "HyxKnrzGsS", "BJe3MBfMjB", "HJe7CzzGir", "BJlIpZuCKH", "HJgQNGqJ9r" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n*** Update ***\nI'd like to thank the authors for answering my questions, and I am satisfied with their response. I have read the other reviews for this paper as well, and I am keeping my score.\n\n\nThis paper proposes BERTScore, a method for automatic evaluation of text. Their method uses BERT to produce conte...
[ 8, -1, -1, -1, 3, 6 ]
[ 5, -1, -1, -1, 5, 4 ]
[ "iclr_2020_SkeHuCVFDr", "HJgQNGqJ9r", "BJlIpZuCKH", "S1lNbqHjKB", "iclr_2020_SkeHuCVFDr", "iclr_2020_SkeHuCVFDr" ]
iclr_2020_SkgKO0EtvS
Neural Execution of Graph Algorithms
Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs. In almost all cases so far, however, they have been applied to directly recovering a final solution from raw inputs, without explicit guidance on how to structure their problem-solving. Here, instead, we focus on learning in the space of algorithms: we train several state-of-the-art GNN architectures to imitate individual steps of classical graph algorithms, parallel (breadth-first search, Bellman-Ford) as well as sequential (Prim's algorithm). As graph algorithms usually rely on making discrete decisions within neighbourhoods, we hypothesise that maximisation-based message passing neural networks are best-suited for such objectives, and validate this claim empirically. We also demonstrate how learning in the space of algorithms can yield new opportunities for positive transfer between tasks---showing how learning a shortest-path algorithm can be substantially improved when simultaneously learning a reachability algorithm.
accept-poster
It seems to be an interesting contribution to the area. I suggest acceptance.
train
[ "HklfytHTKB", "HkekL5I2sS", "rJetet8hjB", "Sye99OU2sr", "rygAWdU2ir", "SJg7FXl6KS", "BkgYLHqx5H" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper investigates using GNNs to learn graph algorithms. It proposes a model which consists of algorithm-dependent encoder and decoder, and algorithm-independent processor. \nAuthors try to learn BFS, Bellman-Ford and Prim algorithms on various types of random graphs. \nExperimental results suggest that MPNN ...
[ 8, -1, -1, -1, -1, 8, 1 ]
[ 5, -1, -1, -1, -1, 3, 1 ]
[ "iclr_2020_SkgKO0EtvS", "iclr_2020_SkgKO0EtvS", "SJg7FXl6KS", "HklfytHTKB", "BkgYLHqx5H", "iclr_2020_SkgKO0EtvS", "iclr_2020_SkgKO0EtvS" ]
iclr_2020_r1lF_CEYwS
On the Need for Topology-Aware Generative Models for Manifold-Based Defenses
ML algorithms or models, especially deep neural networks (DNNs), have shown significant promise in several areas. However, recently researchers have demonstrated that ML algorithms, especially DNNs, are vulnerable to adversarial examples (slightly perturbed samples that cause mis-classification). Existence of adversarial examples has hindered deployment of ML algorithms in safety-critical sectors, such as security. Several defenses for adversarial examples exist in the literature. One of the important classes of defenses are manifold-based defenses, where a sample is "pulled back" into the data manifold before classifying. These defenses rely on the manifold assumption (data lie in a manifold of lower dimension than the input space). These defenses use a generative model to approximate the input distribution. This paper asks the following question: do the generative models used in manifold-based defenses need to be topology-aware? Our paper suggests the answer is yes. We provide theoretical and empirical evidence to support our claim.
accept-poster
This paper studies the role of topology in designing adversarial defenses. Specifically, the authors study defense strategies that rely on the assumption that data lie on a low-dimensional manifold, and show theoretical and empirical evidence that such defenses need to build a topological understanding of the data. Reviewers were initially positive, but had some concerns pertaining to clarity and the limited experimental setup. After a productive rebuttal phase, reviewers are now mostly in favor of acceptance, thanks to the improved readability and clarity. Despite the small-scale experimental validation, both reviewers and AC ultimately conclude this paper is worthy of publication.
train
[ "Skg5fxnRYr", "BJbpU5hoS", "BylhlU9hjr", "BkxjO615FH", "SkxkP9InoS", "B1gZIRbhsr", "HJeA-Cb3iH", "BkxWR5ZioB", "HJe4ZU5t9r" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper argues that defenses against adversarial attacks need to be stronger than they currently are. Defenses that use generative models assume that there exists a manifold of data that is modeled by a trained generative model that can be used to project any out-of-manifold data unto the manifold. However, thi...
[ 8, -1, -1, 6, -1, -1, -1, -1, 3 ]
[ 4, -1, -1, 5, -1, -1, -1, -1, 3 ]
[ "iclr_2020_r1lF_CEYwS", "iclr_2020_r1lF_CEYwS", "SkxkP9InoS", "iclr_2020_r1lF_CEYwS", "B1gZIRbhsr", "BkxjO615FH", "Skg5fxnRYr", "HJe4ZU5t9r", "iclr_2020_r1lF_CEYwS" ]
iclr_2020_S1xtORNFwH
FSNet: Compression of Deep Convolutional Neural Networks by Filter Summary
We present a novel method of compression of deep Convolutional Neural Networks (CNNs) by weight sharing through a new representation of convolutional filters. The proposed method reduces the number of parameters of each convolutional layer by learning a 1D vector termed Filter Summary (FS). The convolutional filters are located in FS as overlapping 1D segments, and nearby filters in FS share weights in their overlapping regions in a natural way. The resultant neural network based on such a weight sharing scheme, termed Filter Summary CNNs or FSNet, has an FS in each convolution layer instead of a set of independent filters in the conventional convolution layer. FSNet has the same architecture as that of the baseline CNN to be compressed, and each convolution layer of FSNet has the same number of filters from FS as that of the baseline CNN in the forward process. With a compelling computational acceleration ratio, the parameter space of FSNet is much smaller than that of the baseline CNN. In addition, FSNet is quantization friendly. FSNet with weight quantization leads to an even higher compression ratio without noticeable performance loss. We further propose Differentiable FSNet where the way filters share weights is learned in a differentiable and end-to-end manner. Experiments demonstrate the effectiveness of FSNet in compression of CNNs for computer vision tasks including image classification and object detection, and the effectiveness of DFSNet is evidenced by the task of Neural Architecture Search.
accept-poster
The paper proposes to compress convolutional neural networks via weight sharing across the filters of each convolution layer. A fast convolution algorithm is also designed for the convolution layer with this approach. Experimental results show (i) effectiveness in CNN compression and (ii) acceleration on the tasks of image classification, object detection and neural architecture search. While the authors addressed most of the reviewers' concerns, a remaining weakness of the paper is that no wall-clock runtime numbers are reported (only FLOPs), so the efficiency of the approach in practice is uncertain.
test
[ "SkxyixFH6H", "HygHH-nnsr", "Skx_Q-3niH", "BkehCgjltr", "B1l0SlB6YS" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper authors propose a novel idea (called Filter Summary, FS) how to compress convolutional neural networks with 2D convolutions (kernels are 3D tensors, it is also applicable to the 1D convolutions). Compression of the convolution operation is done with weight sharing: unwrapped kernel (channel-major) i...
[ 8, -1, -1, 6, 6 ]
[ 3, -1, -1, 1, 3 ]
[ "iclr_2020_S1xtORNFwH", "B1l0SlB6YS", "BkehCgjltr", "iclr_2020_S1xtORNFwH", "iclr_2020_S1xtORNFwH" ]
iclr_2020_HJe6uANtwH
Capsules with Inverted Dot-Product Attention Routing
We introduce a new routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent's state and the child's vote. The new mechanism 1) designs routing via inverted dot-product attention; 2) imposes Layer Normalization as normalization; and 3) replaces sequential iterative routing with concurrent iterative routing. When compared to previously proposed routing algorithms, our method improves performance on benchmark datasets such as CIFAR-10 and CIFAR-100, and it performs at-par with a powerful CNN (ResNet-18) with 4x fewer parameters. On a different task of recognizing digits from overlayed digit images, the proposed capsule model performs favorably against CNNs given the same number of layers and neurons per layer. We believe that our work raises the possibility of applying capsule networks to complex real-world tasks.
accept-poster
This work presents a routing algorithm for capsule networks, with empirical evaluation on CIFAR-10 and CIFAR-100. The results outperform existing capsule networks and are at par with CNNs. Reviewers appreciated the novelty of introducing a new, simpler routing mechanism and achieving good performance on real-world datasets. In particular, removing the squash function and experimenting with concurrent routing was highlighted as significant progress. There were some concerns (e.g. claiming novelty for inverted dot-product attention) and clarification questions (e.g. the same learning rate schedule for all models). The authors provided a response and revised the submission, which addresses most of these concerns. In the end, the majority of reviewers recommended acceptance. In agreement with them, I acknowledge the novelty of using layer norm and parallel execution, and recommend acceptance.
train
[ "SklNZ1259S", "BkelrkX5sB", "HyxmfJXqir", "H1xGyJm9sH", "S1xoapM5iS", "H1x0kqZJ5S", "rygSM4-q9r", "HylYeJdU_r", "BkxFhbtE_B" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Authors improve upon dynamic routing between capsules by removing the squash function (norm normalization) and apply a layerNorm normalization instead. Furthermore, they experiment with concurrent routing rather than sequential routing (route all caps layers once, then all layers concurrently again and again). Thi...
[ 3, -1, -1, -1, -1, 6, 8, -1, -1 ]
[ 5, -1, -1, -1, -1, 1, 5, -1, -1 ]
[ "iclr_2020_HJe6uANtwH", "H1x0kqZJ5S", "rygSM4-q9r", "SklNZ1259S", "iclr_2020_HJe6uANtwH", "iclr_2020_HJe6uANtwH", "iclr_2020_HJe6uANtwH", "BkxFhbtE_B", "iclr_2020_HJe6uANtwH" ]
iclr_2020_BylA_C4tPr
Composition-based Multi-Relational Graph Convolutional Networks
Graph Convolutional Networks (GCNs) have recently been shown to be quite successful in modeling graph-structured data. However, the primary focus has been on handling simple undirected graphs. Multi-relational graphs are a more general and prevalent form of graphs where each edge has a label and direction associated with it. Most of the existing approaches to handle such graphs suffer from over-parameterization and are restricted to learning representations of nodes only. In this paper, we propose CompGCN, a novel Graph Convolutional framework which jointly embeds both nodes and relations in a relational graph. CompGCN leverages a variety of entity-relation composition operations from Knowledge Graph Embedding techniques and scales with the number of relations. It also generalizes several of the existing multi-relational GCN methods. We evaluate our proposed method on multiple tasks such as node classification, link prediction, and graph classification, and achieve demonstrably superior results. We make the source code of CompGCN available to foster reproducible research.
accept-poster
This paper proposes and evaluates a formulation of graph convolutional networks for multi-relation graphs. The paper was reviewed by three experts working in this area and received three Weak Accept decisions. The reviewers identified some concerns, including novelty with respect to existing work and specific details of the experimental setup and results that were not clear. The authors have addressed most of these concerns in their response, including adding a table that explicitly explains the contribution with respect to existing work and clarifying the missing details. Given the unanimous Weak Accept decision, the ACs also recommend Accept as a poster.
train
[ "r1xUesj5oS", "rJg_8csqjr", "SklHHKscjH", "BkeK-XITYB", "SJlBICeCYS", "rJlDK_SAFS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the constructive comments.\n\n1. In the revised version of the paper, we have added an additional table (Table 1 in the revised version) to clearly demarcate how our methods differ from R-GCN in terms of its applicability and parameter complexity. We have also added additional explanations to make th...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 5, 4, 3 ]
[ "BkeK-XITYB", "SJlBICeCYS", "rJlDK_SAFS", "iclr_2020_BylA_C4tPr", "iclr_2020_BylA_C4tPr", "iclr_2020_BylA_C4tPr" ]
iclr_2020_rklbKA4YDS
Gradient-Based Neural DAG Learning
We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows us to model complex interactions while avoiding the combinatorial nature of the problem. In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods. On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks while being competitive with existing greedy search methods on important metrics for causal inference.
accept-poster
In this paper, the authors propose a novel approach for learning the structure of a directed acyclic graph from observational data that allows to flexibly model nonlinear relationships between variables using neural networks. While the reviewers initially had concerns with respect to the positioning of the paper and various questions regarding theoretical results and experiments, these concerns have been addressed satisfactorily during the discussion period. The paper is now acceptable for publication in ICLR-2020.
train
[ "HyefuvE-5B", "HJxLg1uTYS", "HkxS9xbTKH", "H1lEKOq2oS", "r1x5hvqnsr", "BJl6q1UujB", "Hye8mW8OoH", "SylGzOrfjB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "Summary: \nThe authors propose a prediction model for directed acyclic graphs (DAGs) over a fixed set of vertices based on a neural network. The present work follows the previous work on undirected acyclic graphs, where the key constraint is (3), ensuring the acyclic property. The proposed method performed favorab...
[ 6, 6, 8, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2020_rklbKA4YDS", "iclr_2020_rklbKA4YDS", "iclr_2020_rklbKA4YDS", "Hye8mW8OoH", "iclr_2020_rklbKA4YDS", "HJxLg1uTYS", "HkxS9xbTKH", "HyefuvE-5B" ]
iclr_2020_HJxMYANtPH
The Local Elasticity of Neural Networks
This paper presents a phenomenon in neural networks that we refer to as local elasticity. Roughly speaking, a classifier is said to be locally elastic if its prediction at a feature vector x' is not significantly perturbed, after the classifier is updated via stochastic gradient descent at a (labeled) feature vector x that is dissimilar to x' in a certain sense. This phenomenon is shown to persist for neural networks with nonlinear activation functions through extensive simulations on real-life and synthetic datasets, whereas this is not observed in linear classifiers. In addition, we offer a geometric interpretation of local elasticity using the neural tangent kernel (Jacot et al., 2018). Building on top of local elasticity, we obtain pairwise similarity measures between feature vectors, which can be used for clustering in conjunction with K-means. The effectiveness of the clustering algorithm on the MNIST and CIFAR-10 datasets in turn corroborates the hypothesis of local elasticity of neural networks on real-life data. Finally, we discuss some implications of local elasticity to shed light on several intriguing aspects of deep neural networks.
accept-poster
This paper presents a new phenomenon referred to as the "local elasticity of neural networks". The main argument is that the SGD update for a nonlinear network at a local input x does not significantly change the predictions at a different input x' (see Fig. 2). This is then connected to similarity using nearest-neighbor and kernel methods. An algorithm is also presented. The reviewers find the paper intriguing and believe that this could be interesting for the community. After the rebuttal period, one of the reviewers increased their score. I do agree with the view of the reviewers, although I found that the paper's presentation can be improved. For example, Fig. 1 is not clear at all, and the related work section basically talks about many existing works but does not discuss why they are related to this work and how this work adds value to these existing works. I found Fig. 2 very clear and informative. I hope that the authors could further improve the presentation. This should help in improving the impact of the paper. With the reviewers' scores, I recommend accepting this paper, and encourage the authors to improve the presentation of the paper.
train
[ "SylBdSQ6tH", "BkxUgbM5oS", "rygKVEAKoB", "r1xqNmAKsH", "B1g_nrRWiS", "rygYDrRbjH", "Hygy1rCWjH", "BklTr756KH", "ryxdtTj1qS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper contributes to the understanding of neural networks and provides a new clustering technique:\n 1) The paper introduces the interesting notion of local elasticity which considers the relative variation in output values for two different inputs before and after an SGD-style update;\n 2) The derived simil...
[ 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_HJxMYANtPH", "B1g_nrRWiS", "ryxdtTj1qS", "iclr_2020_HJxMYANtPH", "SylBdSQ6tH", "BklTr756KH", "ryxdtTj1qS", "iclr_2020_HJxMYANtPH", "iclr_2020_HJxMYANtPH" ]
iclr_2020_H1ezFREtwH
Composing Task-Agnostic Policies with Deep Reinforcement Learning
The composition of elementary behaviors to solve challenging transfer learning problems is one of the key elements in building intelligent machines. To date, there has been plenty of work on learning task-specific policies or skills but almost no focus on composing necessary, task-agnostic skills to find a solution to new problems. In this paper, we propose a novel deep reinforcement learning-based skill transfer and composition method that takes the agent's primitive policies to solve unseen tasks. We evaluate our method in difficult cases where training policy through standard reinforcement learning (RL) or even hierarchical RL is either not feasible or exhibits high sample complexity. We show that our method not only transfers skills to new problem settings but also solves the challenging environments requiring both task planning and motion control with high data efficiency.
accept-poster
This paper considers deep reinforcement learning skill transfer and composition, through an attention model that weighs the contributions of several base policies conditioned on the task and state, and uses this to output an action. The method is evaluated on several Mujoco tasks. There were two main areas of concern. The first was around issues with using equivalent primitives and training times for comparison methods. The second was around the general motivation of the paper, and also the motivation for using a BiRNN. These issues were resolved in a comprehensive discussion, leaving this as an interesting paper that should be accepted.
train
[ "rJeMg4_gqB", "H1gDo92oiH", "ryefEkSAFS", "rJx39nwjsS", "Hkx4LknOKS", "HkxcMdQojH", "SklBuDQsjB", "rJxJ_tPqsB", "rJgTUVm5jB", "HJlIdAecjB", "B1lHiYgciB", "SkgD-el5sS", "B1gp_Si_jB", "Skg7lT9UjH", "HJl4XUqLoH", "S1lytXcIir" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper presents an approach in which new tasks can be solved by an attention model that can weigh the contribution of different base policies conditioned on the current state of the environment and task-specific goals. The authors demonstrate their method on a selection of RL tasks, such as an ant maze navigat...
[ 6, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 5, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_H1ezFREtwH", "iclr_2020_H1ezFREtwH", "iclr_2020_H1ezFREtwH", "rJxJ_tPqsB", "iclr_2020_H1ezFREtwH", "rJgTUVm5jB", "HJlIdAecjB", "SkgD-el5sS", "B1lHiYgciB", "B1lHiYgciB", "S1lytXcIir", "Skg7lT9UjH", "HJl4XUqLoH", "ryefEkSAFS", "rJeMg4_gqB", "Hkx4LknOKS" ]
iclr_2020_SJlVY04FwH
Convergence of Gradient Methods on Bilinear Zero-Sum Games
Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, while understanding the dynamics of gradient algorithms for solving such formulations has remained a grand challenge. As a first step, we restrict to bilinear zero-sum games and give a systematic analysis of popular gradient updates, for both simultaneous and alternating versions. We provide exact conditions for their convergence and find the optimal parameter setup and convergence rates. In particular, our results offer formal evidence that alternating updates converge "better" than simultaneous ones.
accept-poster
All reviewers found the work interesting but worried about the extension to non-bilinear games. This is a point the authors should explicitly address in their work before publication.
train
[ "SJepczfJ5r", "B1lJvI1CKH", "SkltPFJssr", "BygpAG9siS", "rkgsqJTjiB", "Skg9SRA5iS", "S1gglldXqB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "*Summary*\nThis paper studies the convergence of multiple methods (Gradient, extragradient, optimistic and momentum) on a bilinear minmax game. More precisely, this paper uses spectral condition to study the difference between simultaneous (Jacobi) and alternating (Gau\\ss-Seidel) updates. The analysis is based on...
[ 8, 6, -1, -1, -1, -1, 3 ]
[ 5, 3, -1, -1, -1, -1, 1 ]
[ "iclr_2020_SJlVY04FwH", "iclr_2020_SJlVY04FwH", "S1gglldXqB", "SJepczfJ5r", "B1lJvI1CKH", "iclr_2020_SJlVY04FwH", "iclr_2020_SJlVY04FwH" ]
iclr_2020_rkgHY0NYwr
Discovering Motor Programs by Recomposing Demonstrations
In this paper, we present an approach to learn recomposable motor primitives across large-scale and diverse manipulation demonstrations. Current approaches to decomposing demonstrations into primitives often assume manually defined primitives and bypass the difficulty of discovering these primitives. On the other hand, approaches in primitive discovery put restrictive assumptions on the complexity of a primitive, which limit applicability to narrow tasks. Our approach attempts to circumvent these challenges by jointly learning both the underlying motor primitives and recomposing these primitives to form the original demonstration. Through constraints on both the parsimony of primitive decomposition and the simplicity of a given primitive, we are able to learn a diverse set of motor primitives, as well as a coherent latent representation for these primitives. We demonstrate both qualitatively and quantitatively, that our learned primitives capture semantically meaningful aspects of a demonstration. This allows us to compose these primitives in a hierarchical reinforcement learning setup to efficiently solve robotic manipulation tasks like reaching and pushing. Our results may be viewed at https://sites.google.com/view/discovering-motor-programs.
accept-poster
The work presents a novel and effective solution to learning reusable motor skills. The urgency of this problem and the considerable rebuttal of the authors merit publication of this paper, which is not perfect but needs community attention.
test
[ "BJgXR-BhoH", "SkgV8IkhjS", "BkgO5K7ijB", "rkly43bosS", "r1x2Aun5sB", "Skgc1cCOoS", "B1xPoFC_oB", "ryxLNtAOor", "ByldWKR_oH", "SJghY1zyjS", "r1xxwd-6KB", "rylbhjAZ5H" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for providing these details, I believe this addresses my concerns. Please make sure that they are included in the paper, or in the supplementary material.", "We hope the following clarifications help in understanding the significance of our RL results. \n\nIn the case of the low-level control baseline...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 1 ]
[ "SkgV8IkhjS", "r1x2Aun5sB", "rkly43bosS", "Skgc1cCOoS", "ByldWKR_oH", "B1xPoFC_oB", "SJghY1zyjS", "rylbhjAZ5H", "r1xxwd-6KB", "iclr_2020_rkgHY0NYwr", "iclr_2020_rkgHY0NYwr", "iclr_2020_rkgHY0NYwr" ]
iclr_2020_rJlUt0EYwS
Learning from Explanations with Neural Execution Tree
While deep neural networks have achieved impressive performance on a range of NLP tasks, these data-hungry models heavily rely on labeled data, which restricts their applications in scenarios where data annotation is expensive. Natural language (NL) explanations have been demonstrated to be very useful additional supervision, which can provide sufficient domain knowledge for generating more labeled data over new instances, while the annotation time only doubles. However, directly applying them for augmenting model learning encounters two challenges: (1) NL explanations are unstructured and inherently compositional, which asks for a modularized model to represent their semantics, (2) NL explanations often have large numbers of linguistic variants, resulting in low recall and limited generalization ability. In this paper, we propose a novel Neural Execution Tree (NExT) framework to augment training data for text classification using NL explanations. After transforming NL explanations into executable logical forms by semantic parsing, NExT generalizes different types of actions specified by the logical forms for labeling data instances, which substantially increases the coverage of each NL explanation. Experiments on two NLP tasks (relation extraction and sentiment analysis) demonstrate its superiority over baseline methods. Its extension to multi-hop question answering achieves performance gain with light annotation effort.
accept-poster
This paper, proposing a framework for augmenting classification systems with explanations, was very well received by two reviewers, with one reviewer labeling themselves as "perfectly neutral". I see no reason not to recommend acceptance.
train
[ "S1e1GcC2Fr", "S1lktlSYjB", "H1xOK0EKjB", "HJxfj1iFsr", "BygR22VKjH", "S1x9js4tsS", "SJxnxjNtiH", "SkgwWtPCFS", "Skli-hQx9B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThis paper explores using natural language explanations as auxiliary training data for NLP tasks. It first transforms natural language expressions into a logical form through CCG, and then use a neural module network architecture to label data instances. Experimental analyses are conducted on two tasks -- relati...
[ 8, -1, -1, -1, -1, -1, -1, 8, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_rJlUt0EYwS", "iclr_2020_rJlUt0EYwS", "Skli-hQx9B", "Skli-hQx9B", "Skli-hQx9B", "S1e1GcC2Fr", "SkgwWtPCFS", "iclr_2020_rJlUt0EYwS", "iclr_2020_rJlUt0EYwS" ]
iclr_2020_Byx_YAVYPH
Jelly Bean World: A Testbed for Never-Ending Learning
Machine learning has shown growing success in recent years. However, current machine learning systems are highly specialized, trained for particular problems or domains, and typically on a single narrow dataset. Human learning, on the other hand, is highly general and adaptable. Never-ending learning is a machine learning paradigm that aims to bridge this gap, with the goal of encouraging researchers to design machine learning systems that can learn to perform a wider variety of inter-related tasks in more complex environments. To date, there is no environment or testbed to facilitate the development and evaluation of never-ending learning systems. To this end, we propose the Jelly Bean World testbed. The Jelly Bean World allows experimentation over two-dimensional grid worlds which are filled with items and in which agents can navigate. This testbed provides environments that are sufficiently complex and where more generally intelligent algorithms ought to perform better than current state-of-the-art reinforcement learning approaches. It does so by producing non-stationary environments and facilitating experimentation with multi-task, multi-agent, multi-modal, and curriculum learning settings. We hope that this new freely-available software will prompt new research and interest in the development and evaluation of never-ending learning systems and more broadly, general intelligence systems.
accept-poster
This paper proposes a flexible environment for studying never ending learning. During the discussion period, all reviewers found the paper to be borderline. Pros: - we don't have good lifelong or never-ending RL environments, and this paper seems to provide one - includes a number of interesting features such as multiple input modalities, non-episodic interactions, flexible task definitions Cons: - procedurally generated, toy environment - unclear if the environment reflects the characteristics of real world NEL problems On balance, I think the environments add value to the RL community, and being presented at ICLR would increase their visibility.
train
[ "BJeyARJKtS", "SkeQ9rArsr", "BJlSxS0rir", "r1e70V0Hor", "S1e1rZAroH", "Bkxts1RHjr", "B1gpQlCnKr", "Hkeqnqz0FS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThis paper introduces a new environment for testing lifelong or never-ending learning. The goal of the environment is to act as a new benchmark testbed for challenging existing agents and models across areas of research, encouraging and pushing new research towards solving challenges in curriculum learn...
[ 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_Byx_YAVYPH", "iclr_2020_Byx_YAVYPH", "B1gpQlCnKr", "B1gpQlCnKr", "BJeyARJKtS", "Hkeqnqz0FS", "iclr_2020_Byx_YAVYPH", "iclr_2020_Byx_YAVYPH" ]
iclr_2020_ryeFY0EFwS
Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization
An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting.
accept-poster
The paper proposes an intuitive causal explanation for the generalization properties of GD methods. The reviewers appreciated the insights, though one reviewer claimed that there was significant overlap with existing work. I ultimately decided to accept this paper as I believe intuitive explanations are critical to the propagation of ideas. That being said, there is a tendency in this community to erase past, especially theoretical, work, for the very reason that theoretical work is less popular. Hence, I want to make it clear that the acceptance of this paper is based on the premise that the authors will incorporate all of reviewer 3's comments and give enough credit to all relevant work (namely, all the papers cited by the reviewer) with a proper discussion on the link between these.
train
[ "SyeJ6wiiir", "HylgsBQooB", "BkxwRE7isH", "B1gkNWJ9sB", "S1xeHDatjS", "SyevULcuiH", "SyezDJXujH", "B1e7ygTwoH", "BJgmA0iDsB", "SJxF7_SDiS", "BkenJ2fwsH", "BJlCNQzDor", "Hyg5ylGviH", "H1lrQjbvsS", "Bkee0vePoB", "ByeO6clPsS", "rygQLDaBoB", "rklaFOpSjB", "rJglFKTBoB", "r1e2BU6rjr"...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", ...
[ "We have uploaded a new version of the paper with detailed discussion of the papers brought to our attention since the initial submission. The bulk of the changes are in Section 4 (\"Discussion and Related Work\"). \n\n(There are some minor changes in Section 5 (\"Future Work\") and formatting adjustments elsewhere...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 3, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 1, 4, -1, -1 ]
[ "iclr_2020_ryeFY0EFwS", "B1gkNWJ9sB", "rJglFKTBoB", "S1xeHDatjS", "B1e7ygTwoH", "SyezDJXujH", "BJgmA0iDsB", "SJxF7_SDiS", "BJlCNQzDor", "BkenJ2fwsH", "Bkee0vePoB", "H1lrQjbvsS", "Bkee0vePoB", "ByeO6clPsS", "rklaFOpSjB", "rygQLDaBoB", "SkeccRZqFr", "SkeccRZqFr", "Byg52vRAYr", "r...
iclr_2020_HJgCF0VFwr
Probabilistic Connection Importance Inference and Lossless Compression of Deep Neural Networks
Deep neural networks (DNNs) can be huge in size, requiring a considerable amount of energy and computational resources to operate, which limits their applications in numerous scenarios. It is thus of interest to compress DNNs while maintaining their performance levels. We here propose a probabilistic importance inference approach for pruning DNNs. Specifically, we test the significance of the relevance of a connection in a DNN to the DNN’s outputs using a nonparametric scoring test and keep only those significant ones. Experimental results show that the proposed approach achieves better lossless compression rates than existing techniques.
accept-poster
This paper proposes a novel approach for pruning deep neural networks using non-parametric statistical tests to detect 3-way interactions among two nodes and the output. While the reviewers agree that this is a neat idea, the paper has been limited in terms of experimental validation. The authors provided further experimental results during the discussion period and the reviewers agree that the paper is now acceptable for publication at ICLR-2020.
train
[ "Byxg4B70KB", "H1eqVc_oir", "HyxFYBuijr", "HyeKZYvosB", "HJlXboViir", "rygbRLhYoH", "rJx68saFjB", "B1xITaAdir", "BygxUKXssS", "B1xSzo7ojS", "HkgWHyVijB", "ByefJP6lqH", "BJlTymjYcH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose a new pruning technique that utilizes the statistical dependency between the corresponding nodes and outputs. The dependency is measured by a kernel based dependency measure which is closely related to MMD. The test statistics derived from the dependency measure have an asymptoti...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_HJgCF0VFwr", "HyxFYBuijr", "HyeKZYvosB", "HkgWHyVijB", "Byxg4B70KB", "BJlTymjYcH", "ByefJP6lqH", "Byxg4B70KB", "BJlTymjYcH", "ByefJP6lqH", "BygxUKXssS", "iclr_2020_HJgCF0VFwr", "iclr_2020_HJgCF0VFwr" ]
iclr_2020_rJxlc0EtDr
MEMO: A Deep Network for Flexible Combination of Episodic Memories
Recent research developing neural network architectures with external memory has often used the benchmark bAbI question and answering dataset which provides a challenging number of tasks requiring reasoning. Here we employed a classic associative inference task from the human neuroscience literature in order to more carefully probe the reasoning capacity of existing memory-augmented architectures. This task is thought to capture the essence of reasoning -- the appreciation of distant relationships among elements distributed across multiple facts or memories. Surprisingly, we found that current architectures struggle to reason over long distance associations. Similar results were obtained on a more complex task involving finding the shortest path between nodes in a graph. We therefore developed a novel architecture, MEMO, endowed with the capacity to reason over longer distances. This was accomplished with the addition of two novel components. First, it introduces a separation between memories/facts stored in external memory and the items that comprise these facts in external memory. Second, it makes use of an adaptive retrieval mechanism, allowing a variable number of ‘memory hops’ before the answer is produced. MEMO is capable of solving our novel reasoning tasks, as well as all 20 tasks in bAbI.
accept-poster
The authors introduce a new associative inference task from cognitive psychology, show shortcomings of current memory-augmented architectures, and introduce a new memory architecture that performs better with respect to the task. The reviewers like the motivation and thought the experimental results were strong, although they also initially had several questions and pointed to areas of the paper which lacked clarity. The authors updated the paper in response to the reviewer's questions and increased the clarity of the paper. The reviewers are satisfied and believe the paper should be accepted.
train
[ "BkxAILESqH", "HkgdXevQsH", "B1xLUyv7iH", "BJlbQ9mwqS" ]
[ "official_reviewer", "author", "author", "official_reviewer" ]
[ "Summary:\n\nThis paper proposes two main changes to the End2End Memory Network (EMN) architecture: a separation between facts and the items that comprise these facts in the external memory, policy to learn the number of memory-hops to reason. The paper also introduces a new Paired Associative Inference (PAI) task ...
[ 8, -1, -1, 6 ]
[ 4, -1, -1, 3 ]
[ "iclr_2020_rJxlc0EtDr", "BkxAILESqH", "BJlbQ9mwqS", "iclr_2020_rJxlc0EtDr" ]
iclr_2020_SyxV9ANFDH
Economy Statistical Recurrent Units For Inferring Nonlinear Granger Causality
Granger causality is a widely-used criterion for analyzing interactions in large-scale networks. As most physical interactions are inherently nonlinear, we consider the problem of inferring the existence of pairwise Granger causality between nonlinearly interacting stochastic processes from their time series measurements. Our proposed approach relies on modeling the embedded nonlinearities in the measurements using a component-wise time series prediction model based on Statistical Recurrent Units (SRUs). We make a case that the network topology of Granger causal relations is directly inferrable from a structured sparse estimate of the internal parameters of the SRU networks trained to predict the processes’ time series measurements. We propose a variant of SRU, called economy-SRU, which, by design has considerably fewer trainable parameters, and therefore less prone to overfitting. The economy-SRU computes a low-dimensional sketch of its high-dimensional hidden state in the form of random projections to generate the feedback for its recurrent processing. Additionally, the internal weight parameters of the economy-SRU are strategically regularized in a group-wise manner to facilitate the proposed network in extracting meaningful predictive features that are highly time-localized to mimic real-world causal events. Extensive experiments are carried out to demonstrate that the proposed economy-SRU based time series prediction model outperforms the MLP, LSTM and attention-gated CNN-based time series models considered previously for inferring Granger causality.
accept-poster
The authors propose a modification of the statistical recurrent unit for modelling multiple time series and show that it can be very useful in practice for identifying Granger causality when the time series are non-linearly related. The contributions are primarily conceptual and empirical. The reviewers agree that this is a useful contribution in the causality literature.
train
[ "HJl3kmaisB", "Syxgfyocjr", "B1lg-5c9jH", "HkgqsvqqoS", "SkgppS55oB", "r1gRwkYpFr", "HyxuKq0CtB", "rJgVWTWJ5S" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have uploaded a revision of the paper with the following updates:\n\n1. Added appendix (C.1) containing new experiments to compare different design choices for the encoder $D_{r}$ in the eSRU design, as part of our response to a comment by reviewer #1. \n\n2. Added a sentence (in page-6, last line) to motivate...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 1 ]
[ "iclr_2020_SyxV9ANFDH", "HyxuKq0CtB", "HyxuKq0CtB", "rJgVWTWJ5S", "r1gRwkYpFr", "iclr_2020_SyxV9ANFDH", "iclr_2020_SyxV9ANFDH", "iclr_2020_SyxV9ANFDH" ]
iclr_2020_Bkxv90EKPB
Bayesian Meta Sampling for Fast Uncertainty Adaptation
Meta learning has been making impressive progress for fast model adaptation. However, limited work has been done on learning fast uncertainty adaption for Bayesian modeling. In this paper, we propose to achieve the goal by placing meta learning on the space of probability measures, inducing the concept of meta sampling for fast uncertainty adaption. Specifically, we propose a Bayesian meta sampling framework consisting of two main components: a meta sampler and a sample adapter. The meta sampler is constructed by adopting a neural-inverse-autoregressive-flow (NIAF) structure, a variant of the recently proposed neural autoregressive flows, to efficiently generate meta samples to be adapted. The sample adapter moves meta samples to task-specific samples, based on a newly proposed and general Bayesian sampling technique, called optimal-transport Bayesian sampling. The combination of the two components allows a simple learning procedure for the meta sampler to be developed, which can be efficiently optimized via standard back-propagation. Extensive experimental results demonstrate the efficiency and effectiveness of the proposed framework, obtaining better sample quality and faster uncertainty adaption compared to related methods.
accept-poster
This paper presents a meta-learning algorithm that represents uncertainty both at the meta-level and at the task-level. The approach contains an interesting combination of techniques. The reviewers raised concerns about the thoroughness of the experiments, which were resolved in a convincing way in the rebuttal. Concerns about clarity remain, and the authors are *strongly encouraged* to revise the paper throughout to make the presentation more clear and understandable, including to readers who do not have a meta-learning background. See the reviewer's comments for further details on how the organization of the paper and the presentation of the ideas can be improved.
train
[ "r1gdxJVhiB", "HJeI7rOAFB", "HyeAp0J3oH", "Bkl11skhsB", "BkgVnF1hjS", "SJeoqM13jr", "HyeWJG12iS", "SkWqUPoKH", "SkxuSryk5B" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank you for your valuable time and update. ", "Summary of paper:\nThe authors propose a neural sampler for probabilistic models in the meta-learning setting. \nTheir main claim is that their model captures uncertainty in samples better than competing methods and does so at lower cost.\nIn particular, they p...
[ -1, 6, -1, -1, -1, -1, -1, 6, 3 ]
[ -1, 5, -1, -1, -1, -1, -1, 4, 1 ]
[ "HyeAp0J3oH", "iclr_2020_Bkxv90EKPB", "BkgVnF1hjS", "SkWqUPoKH", "HJeI7rOAFB", "SkxuSryk5B", "iclr_2020_Bkxv90EKPB", "iclr_2020_Bkxv90EKPB", "iclr_2020_Bkxv90EKPB" ]
iclr_2020_H1e_cC4twS
Non-Autoregressive Dialog State Tracking
Recent efforts in Dialogue State Tracking (DST) for task-oriented dialogues have progressed toward open-vocabulary or generation-based approaches where the models can generate slot value candidates from the dialogue history itself. These approaches have shown good performance gains, especially in complicated dialogue domains with dynamic slot values. However, they fall short in two aspects: (1) they do not allow models to explicitly learn signals across domains and slots to detect potential dependencies among \textit{(domain, slot)} pairs; and (2) existing models follow auto-regressive approaches which incur high time cost when the dialogue evolves over multiple domains and multiple turns. In this paper, we propose a novel framework of Non-Autoregressive Dialog State Tracking (NADST) which can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots. In particular, the non-autoregressive nature of our method not only enables decoding in parallel to significantly reduce the latency of DST for real-time dialogue response generation, but also allows detecting dependencies among slots at token level in addition to slot and domain level. Our empirical results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus, and the latency of our model is an order of magnitude lower than the previous state of the art as the dialogue history extends over time.
accept-poster
(Please note that I am basing the meta-review on two reviews plus my own thorough read of the paper) This paper proposes an interesting adaptation of the non-autoregressive neural encoder-decoder models previously proposed for machine translation to dialog state tracking. Experimental results demonstrate state-of-the-art performance on the multi-domain MultiWOZ dialog corpus. The reviewers suggest that while the NA approach is not novel, the authors' adaptation of the approach to dialog state tracking and detailed experimental analysis are interesting and convincing. Hence I suggest accepting the paper as a poster presentation.
train
[ "H1eka5M3iS", "BylfZ9M3ir", "ryg_LKz3jS", "H1gtzTAzYS", "SJltpFKaFr", "HygVkzEAFB", "HJe0EhvPcS", "BygEDW6qur" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Thank you very much for your review. Below are our responses:\n\n1. About the details of parameter settings, we simply set the loss hyper-parameters $\\alpha$ and $\\beta$ to 1. We set the optimizer parameters for training to $\\beta_1=0.9$, $\\beta_2=0.98$, and $\\epsilon=10^{-9}$.\n\n2. Thanks for pointing out ...
[ -1, -1, -1, -1, 6, 1, 6, -1 ]
[ -1, -1, -1, -1, 3, 3, 4, -1 ]
[ "SJltpFKaFr", "HygVkzEAFB", "HJe0EhvPcS", "BygEDW6qur", "iclr_2020_H1e_cC4twS", "iclr_2020_H1e_cC4twS", "iclr_2020_H1e_cC4twS", "iclr_2020_H1e_cC4twS" ]