paper_id: string (19 to 21 chars)
paper_title: string (8 to 170 chars)
paper_abstract: string (8 to 5.01k chars)
paper_acceptance: string (18 classes)
meta_review: string (29 to 10k chars)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
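A record conforming to the schema above can be handled as a plain Python dict; the sketch below is illustrative (the field values are made up, and the `official_ratings` helper is hypothetical, not part of any released loader). It shows one convention of the data worth knowing: the parallel `review_*` lists use `-1` as a sentinel rating/confidence for author and public replies, which are not scored.

```python
# Minimal sketch of one record conforming to the schema above.
# Field values are illustrative, not taken from the dataset.
record = {
    "paper_id": "iclr_2020_H1eRI04KPB",
    "paper_title": "Example Title",
    "paper_abstract": "Example abstract ...",
    "paper_acceptance": "reject",
    "meta_review": "Example meta-review ...",
    "label": "train",
    "review_ids": ["r1", "r2", "a1"],
    "review_writers": ["official_reviewer", "official_reviewer", "author"],
    "review_contents": ["Review text ...", "Review text ...", "Rebuttal ..."],
    "review_ratings": [3, 6, -1],        # -1 marks non-review replies
    "review_confidences": [4, 5, -1],
    "review_reply_tos": ["iclr_2020_H1eRI04KPB", "iclr_2020_H1eRI04KPB", "r1"],
}

def official_ratings(rec):
    """Return ratings of official reviews only, dropping the -1 sentinel
    used for author/public replies."""
    return [r for r in rec["review_ratings"] if r != -1]

print(official_ratings(record))  # [3, 6]
```

Averaging over `official_ratings(record)` rather than the raw list avoids the `-1` sentinels skewing any score statistics.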
iclr_2020_H1eRI04KPB
Likelihood Contribution based Multi-scale Architecture for Generative Flows
Deep generative modeling using flows has gained popularity owing to tractable exact log-likelihood estimation with an efficient training and synthesis process. However, flow models suffer from the challenge of a high-dimensional latent space, equal in dimension to the input space. An effective solution to this challenge, proposed by Dinh et al. (2016), is a multi-scale architecture based on iterative early factorization of a part of the total dimensions at regular intervals. Prior works on generative flows involving a multi-scale architecture perform the dimension factorization based on a static masking. We propose a novel multi-scale architecture that performs data-dependent factorization to decide which dimensions should pass through more flow layers. To this end, we introduce a heuristic based on the contribution of each dimension to the total log-likelihood, which encodes the importance of the dimensions. Our proposed heuristic is readily obtained as part of the flow training process, enabling versatile implementation of our likelihood-contribution-based multi-scale architecture for generic flow models. We present such an implementation for the original flow introduced in Dinh et al. (2016), and demonstrate improvements in log-likelihood score and sampling quality on standard image benchmarks. We also conduct ablation studies comparing the proposed method with other options for dimension factorization.
reject
The authors propose a multi-scale architecture for generative flows that can learn which dimensions to pass through more flow layers based on a heuristic that judges the contribution to the likelihood. The authors compare the technique to some other flow based approaches. The reviewers asked for more experiments, which the authors delivered. However, the reviewers noted that a comparison to the SOTA for CIFAR in this setting was missing. Several reviewers raised their scores, but none were willing to argue for acceptance.
train
[ "rkgJFSRMqH", "SkgVejq6YB", "rkgff3v2sH", "BylgD1VhoB", "SJlVQOYqjB", "Hkx20h-RcB", "BklcIzIYjH", "SyxqiMIFjr", "B1gS3Z8KjS", "S1gnsxUFiB", "S1gJopHFsB", "H1lKFicPcS", "B1l2k0x7cS", "BJxkidpk9B", "HygYAc2y9B" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "author", "public" ]
[ "This paper propose a heuristic algorithm for deciding which random variables to be Gaussianized early in flow-based generative models. The proposed algorithm involves first training a flow without multi-scale training, for example, 32*32*c - 32*32*c - 32*32*c. Then, it computes the logdet term for each variable a...
[ 3, 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_H1eRI04KPB", "iclr_2020_H1eRI04KPB", "BylgD1VhoB", "SyxqiMIFjr", "Hkx20h-RcB", "iclr_2020_H1eRI04KPB", "SkgVejq6YB", "BklcIzIYjH", "rkgJFSRMqH", "Hkx20h-RcB", "iclr_2020_H1eRI04KPB", "B1l2k0x7cS", "BJxkidpk9B", "HygYAc2y9B", "iclr_2020_H1eRI04KPB" ]
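The `review_reply_tos` field of the record above encodes a discussion tree: entry `i` names the id that `review_ids[i]` replies to, with top-level reviews replying to the paper id itself. A small sketch of reconstructing that tree, using a subset of the first record's actual id lists (the `print_thread` helper is illustrative):

```python
from collections import defaultdict

# Rebuild the discussion tree for one record: review_ids[i] replies to
# review_reply_tos[i]; top-level reviews reply to the paper id itself.
# The ids below are a subset of the first record's lists.
paper_id = "iclr_2020_H1eRI04KPB"
review_ids = [
    "rkgJFSRMqH", "SkgVejq6YB", "rkgff3v2sH", "BylgD1VhoB", "SJlVQOYqjB",
    "Hkx20h-RcB", "BklcIzIYjH", "SyxqiMIFjr", "B1gS3Z8KjS",
]
reply_tos = [
    paper_id, paper_id, "BylgD1VhoB", "SyxqiMIFjr", "Hkx20h-RcB",
    paper_id, "SkgVejq6YB", "BklcIzIYjH", "rkgJFSRMqH",
]

# Map each parent id to its ordered list of direct replies.
children = defaultdict(list)
for rid, parent in zip(review_ids, reply_tos):
    children[parent].append(rid)

def print_thread(node, depth=0):
    """Print the reply tree rooted at `node`, indented by depth."""
    for child in children[node]:
        print("  " * depth + child)
        print_thread(child, depth + 1)

print_thread(paper_id)
```

With this record, the tree has three top-level reviews, one of which heads a five-message review/rebuttal exchange.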
iclr_2020_S1lJv0VYDr
Model Imitation for Model-Based Reinforcement Learning
Model-based reinforcement learning (MBRL) aims to learn a dynamics model to reduce the number of interactions with real-world environments. However, due to estimation error, rollouts in the learned model, especially those of long horizon, fail to match the ones in real-world environments. This mismatch seriously impacts the sample complexity of MBRL. The phenomenon can be attributed to the fact that previous works employ supervised learning to learn one-step transition models, which has inherent difficulty ensuring the matching of distributions from multi-step rollouts. Based on this claim, we propose to learn the synthesized model by matching the distributions of multi-step rollouts sampled from the synthesized model and the real ones via WGAN. We theoretically show that matching the two can minimize the difference in cumulative rewards between the real transition and the learned one. Our experiments also show that the proposed model imitation method outperforms the state-of-the-art in terms of sample complexity and average return.
reject
This paper addresses challenges in offline model learning, i.e., in the setting where some trajectories are given and can be used for learning a model, which in turn serves to train an RL agent or plan action sequences in simulation. A key issue in this setting is that of compounding errors: as the simulated trajectory deviates from observed data, errors build up, leading to suboptimal performance in the target domain. The paper proposes a distribution matching approach that considers trajectory sequence information and provides theoretical guarantees as well as some promising empirical results. Several issues were raised by reviewers, including missing references, clarity issues, questions about limitations of the theoretical analysis, and limitations of the empirical validation. Many of the issues raised by reviewers were addressed by the authors during the rebuttal phase. At the same time, several issues remain. First, the authors committed to adding results for additional tasks (initially deemed too easy or too hard to show differences). Even if the tasks show little separation between methods, these would be important data points to include as they support additional comparisons with prior and future work. The AC has to assess the paper without taking promised additional results into account. Second, questions about the results for Ant are not sufficiently addressed. The plot shows no learning. The author response mentions initialization but this is not deemed a sufficient explanation. Given the remaining questions, my assessment is that the quality and contribution of the submission are not yet ready for publication at the current stage.
train
[ "H1gHX_waKr", "BkgHrnDiYB", "SJllz41hor", "SygBUaBjiH", "HyeJQHoFjS", "HJl4OmjYsB", "BklnTxjFsB", "HJlTfB8Ttr" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "In model-based reinforcement learning methods, in order to alleviate the compounding error induced in rollout-based planning, the paper proposes a distribution matching method, which should be better than a regular supervised learning approach (i.e. minimizing mean square error). Experiments on continuous control ...
[ 6, 6, -1, -1, -1, -1, -1, 6 ]
[ 3, 5, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_S1lJv0VYDr", "iclr_2020_S1lJv0VYDr", "SygBUaBjiH", "HJl4OmjYsB", "H1gHX_waKr", "HJlTfB8Ttr", "BkgHrnDiYB", "iclr_2020_S1lJv0VYDr" ]
iclr_2020_B1xewR4KvH
MANIFOLD FORESTS: CLOSING THE GAP ON NEURAL NETWORKS
Decision forests (DF), in particular random forests and gradient boosting trees, have demonstrated state-of-the-art accuracy compared to other methods in many supervised learning scenarios. In particular, DFs dominate other methods on tabular data, that is, when the feature space is unstructured, so that the signal is invariant to permuting feature indices. However, on structured data lying on a manifold---such as images, text, and speech---neural nets (NN) tend to outperform DFs. We conjecture that at least part of the reason for this is that the input to a NN is not simply the feature magnitudes, but also their indices (for example, the convolution operation uses "feature locality"). In contrast, naive DF implementations fail to explicitly consider feature indices. A recently proposed DF approach demonstrates that DFs, at each node, implicitly sample a random matrix from some specific distribution. Here, we build on that to show that one can choose these distributions in a manifold-aware fashion. For example, for image classification, rather than randomly selecting pixels, one can randomly select contiguous patches. We demonstrate empirical performance on data living on three different manifolds: images, time series, and a torus. In all three cases, our Manifold Forest (Mf) algorithm empirically dominates other state-of-the-art approaches that ignore feature space structure, achieving a lower classification error at all sample sizes. This dominance extends to the MNIST data set as well. Moreover, both training and test time are significantly faster for manifold forests compared to deep nets. This approach therefore has promise to enable DFs and other machine learning methods to close the gap with deep nets on manifold-valued data.
reject
This work explores how to leverage the structure of the input in decision trees, as is done, for example, in convolutional networks. All reviewers agree that the experimental validation of the method as presented is extremely weak. The authors did not provide a response to the many concerns raised by the reviewers. Therefore, we recommend rejection.
test
[ "SyeMINIucB", "SkxVEsCjKB", "HyeFQ5Kb9B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n\n====== Post Rebuttal ======\n\nNo rebuttal was provided and all reviewers have raised issues. Therefore, I will maintain the original rating.\n\n====== Summary====== \n\nThe paper puts forward a potential issue with the standard decision tree and tries to remedy it. The issue is that standard decision trees, b...
[ 3, 3, 1 ]
[ 4, 5, 4 ]
[ "iclr_2020_B1xewR4KvH", "iclr_2020_B1xewR4KvH", "iclr_2020_B1xewR4KvH" ]
iclr_2020_rkx-wA4YPS
Adapting to Label Shift with Bias-Corrected Calibration
Label shift refers to the phenomenon where the marginal probability p(y) of observing a particular class changes between the training and test distributions, while the conditional probability p(x|y) stays fixed. This is relevant in settings such as medical diagnosis, where a classifier trained to predict disease based on observed symptoms may need to be adapted to a different distribution where the baseline frequency of the disease is higher. Given estimates of p(y|x) from a predictive model, one can apply domain adaptation procedures including Expectation Maximization (EM) and Black-Box Shift Estimation (BBSE) to efficiently correct for the difference in class proportions between the training and test distributions. Unfortunately, modern neural networks typically fail to produce well-calibrated estimates of p(y|x), reducing the effectiveness of these approaches. In recent years, Temperature Scaling has emerged as an efficient approach to combat miscalibration. However, the effectiveness of Temperature Scaling in the context of adaptation to label shift has not been explored. In this work, we study the impact of various calibration approaches on shift estimates produced by EM or BBSE. In experiments with image classification and diabetic retinopathy detection, we find that calibration consistently tends to improve shift estimation. In particular, calibration approaches that include class-specific bias parameters are significantly better than approaches that lack class-specific bias parameters, suggesting that reducing systematic bias in the calibrated probabilities is especially important for domain adaptation.
reject
This was a borderline paper, but in the end two of the reviewers remain unconvinced by this paper in its current form, and the last reviewer is not willing to argue for acceptance. The first reviewer's comments were taken seriously in making a decision on this paper. As such, it is my suggestion that the authors revise the paper and resubmit, addressing some of the first reviewer's comments, such as the discussion of the utility of the methodology, and improving the exposition so that less knowledgeable reviewers understand the material better. The lack of motivation for parts of the presented methodology noted by the first reviewer is reflected in the other reviewers' comments, and I'm convinced that the authors can address this issue and make this a really awesome submission at a future conference. On a different note, I think the authors should be congratulated on making their results reproducible. That is definitely something the field needs to see more of.
train
[ "ByeZrqUZ5S", "rJl3pBlIjB", "SyxjZLxLoB", "Bke6-BgUoS", "rklbC4Gatr", "S1xUndTTYr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper builds upon recent work on detecting and correcting for label shift.\nThey explore both the BBSE algorithm analyzed in Detecting and Correcting for Label Shift (2018)\nand another approach based on EM where the predictive posteriors and test set label distributions\nare iteratively computed, each an upd...
[ 6, -1, -1, -1, 1, 3 ]
[ 5, -1, -1, -1, 1, 1 ]
[ "iclr_2020_rkx-wA4YPS", "S1xUndTTYr", "rklbC4Gatr", "ByeZrqUZ5S", "iclr_2020_rkx-wA4YPS", "iclr_2020_rkx-wA4YPS" ]
iclr_2020_BklMDCVtvr
Discovering the compositional structure of vector representations with Role Learning Networks
Neural networks (NNs) are able to perform tasks that rely on compositional structure even though they lack obvious mechanisms for representing this structure. To analyze the internal representations that enable such success, we propose ROLE, a technique that detects whether these representations implicitly encode symbolic structure. ROLE learns to approximate the representations of a target encoder E by learning a symbolic constituent structure and an embedding of that structure into E’s representational vector space. The constituents of the approximating symbol structure are defined by structural positions — roles — that can be filled by symbols. We show that when E is constructed to explicitly embed a particular type of structure (e.g., string or tree), ROLE successfully extracts the ground-truth roles defining that structure. We then analyze a seq2seq network trained to perform a more complex compositional task (SCAN), where there is no ground truth role scheme available. For this model, ROLE successfully discovers an interpretable symbolic structure that the model implicitly uses to perform the SCAN task, providing a comprehensive account of the link between the representations and the behavior of a notoriously hard-to-interpret type of model. We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model’s output is also changed in the way predicted by our analysis. Finally, we use ROLE to explore whether popular sentence embedding models are capturing compositional structure and find evidence that they are not; we conclude by discussing how insights from ROLE can be used to impart new inductive biases that will improve the compositional abilities of such models.
reject
This work builds directly on McCoy et al. (2019a) and adds an RNN that replaces what were human-generated hypotheses for the role schemes. The final goal of ROLE is to analyze a network by identifying 'symbolic structure'. The authors run sanity checks by conducting experiments with ground truth, and extend the work further to apply it to a complex model. I wonder what definition of 'interpretable' the authors have in mind for the final output (Figure 2): the output is very complex. It remains questionable whether this will give some 'insight', or how humans would parse this information such that it is 'useful' to them in some way. Overall, though this is a good paper, due to the number of strong papers this year, it cannot be accepted at this time. We hope the comments given by the reviewers can help improve a future version.
val
[ "rkxh1B5siS", "r1lPQlG5oB", "B1eiwAW5iB", "SkgnSCW9jr", "Bkefx69aKS", "ByldQye0KB", "S1lCZkVNqS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Hello, the current revision includes the mean accuracy across three runs, as well standard deviations.", "Thank you for your feedback! We have done our best to incorporate the suggestions from you and the other reviewers into the revised version of the paper now uploaded to OpenReview. Your comments led to consi...
[ -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, 1, 1, 3 ]
[ "S1lCZkVNqS", "S1lCZkVNqS", "Bkefx69aKS", "ByldQye0KB", "iclr_2020_BklMDCVtvr", "iclr_2020_BklMDCVtvr", "iclr_2020_BklMDCVtvr" ]
iclr_2020_H1lfwAVFwr
CAPACITY-LIMITED REINFORCEMENT LEARNING: APPLICATIONS IN DEEP ACTOR-CRITIC METHODS FOR CONTINUOUS CONTROL
Biological and artificial agents must learn to act optimally in spite of a limited capacity for processing, storing, and attending to information. We formalize this type of bounded rationality in terms of an information-theoretic constraint on the complexity of the policies that agents seek to learn. We present the Capacity-Limited Reinforcement Learning (CLRL) objective, which defines an optimal policy subject to an information capacity constraint. We optimize this objective by drawing on methods from rate-distortion theory and information theory, applied to the reinforcement learning setting. Using this objective, we implement a novel Capacity-Limited Actor-Critic (CLAC) algorithm, situate it within a broader family of RL algorithms such as Soft Actor-Critic (SAC), and discuss their similarities and differences. Our experiments show that, compared to alternative approaches, CLAC offers improvements in generalization between training and modified test environments. CLAC achieves this while displaying high sample efficiency and minimal requirements for hyper-parameter tuning.
reject
This paper presents Capacity-Limited Reinforcement Learning (CLRL), which builds on methods in soft RL to enable learning in agents with limited capacity. The issues raised by the reviewers fell largely into three areas: a lack of clear motivation for the work, with many of the insights lacking intuition; missing connections to related literature; and unconvincing experimental results. Although the ideas presented in the paper are interesting, more work is required for this to be accepted. Therefore, at this point, this is unfortunately a rejection.
train
[ "rylS12pUYS", "HJxHhbyRFB", "r1eDVHYJ9B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a reinforcement learning method that regularizes the objective using the mutual information term.\nThe idea is simple and the paper is easy to follow. \n\nHowever, the novelty is limited since the difference between the proposed method and Soft Actor Critic (SAC) is just adding the entropy term ...
[ 3, 1, 1 ]
[ 4, 4, 4 ]
[ "iclr_2020_H1lfwAVFwr", "iclr_2020_H1lfwAVFwr", "iclr_2020_H1lfwAVFwr" ]
iclr_2020_ryg7vA4tPB
Rigging the Lottery: Making All Tickets Winners
Sparse neural networks have been shown to yield computationally efficient models with improved inference times. There is a large body of work on training dense networks to yield sparse networks for inference (Molchanov et al., 2017; Zhu & Gupta, 2018; Louizos et al., 2017; Li et al., 2016; Guo et al., 2016). This limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the network during training using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy than prior techniques. We demonstrate state-of-the-art sparse training results with ResNet-50, MobileNet v1 and MobileNet v2 on the ImageNet-2012 dataset. Finally, we provide some insights into why allowing the topology to change during optimization can overcome local minima encountered when the topology remains static.
reject
A somewhat new approach to growing sparse networks. Experimental validation is good, focusing on ImageNet and CIFAR-10, plus experiments on language modelling. Though efficient in computation and storage, the approach does not have a theoretical foundation. That does not agree with the intended scope of ICLR. I strongly suggest the authors submit elsewhere.
val
[ "HJle5RbnoH", "Bye6bbzhoH", "Hyxpjzz3ir", "r1x7kAZhsB", "r1gyEzYFoH", "rkl2g3uFsB", "SkeLaI_YoB", "HJeEa-4FjS", "Bye-WT3W9S", "S1es4WNtsH", "rylEHYPuiH", "BkeHLDDOoH", "BkxBkYPuir", "ryeKrSv_sr", "Skg97KzhKB", "H1gDIdo2FS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nComparison with SBP/VIB/L0\n========================\n\n(updated numbers to fix one big error in the FLOPs calculations - they were originally made with 50,000 images in the training set rather than 60,000 - and a few smaller ones. We also add the inference costs of and size of each network and the table below ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 3, 3 ]
[ "HJeEa-4FjS", "S1es4WNtsH", "rkl2g3uFsB", "Bye-WT3W9S", "HJeEa-4FjS", "SkeLaI_YoB", "S1es4WNtsH", "BkeHLDDOoH", "iclr_2020_ryg7vA4tPB", "BkeHLDDOoH", "Skg97KzhKB", "Bye-WT3W9S", "H1gDIdo2FS", "iclr_2020_ryg7vA4tPB", "iclr_2020_ryg7vA4tPB", "iclr_2020_ryg7vA4tPB" ]
iclr_2020_rJeXDANKwr
NADS: Neural Architecture Distribution Search for Uncertainty Awareness
Machine learning systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a distribution different from the one used for training. With their growing use in critical applications, it becomes important to develop systems that can accurately quantify their predictive uncertainty and screen out these anomalous inputs. However, unlike standard learning tasks, there is currently no well-established guiding principle for designing architectures that can accurately quantify uncertainty. Moreover, commonly used OoD detection approaches are prone to errors and sometimes even assign higher likelihoods to OoD samples. To address these problems, we first seek to identify guiding principles for designing uncertainty-aware architectures by proposing Neural Architecture Distribution Search (NADS). Unlike standard neural architecture search methods, which search for a single best-performing architecture, NADS searches for a distribution of architectures that perform well on a given task, allowing us to identify building blocks common among all uncertainty-aware architectures. With this formulation, we are able to optimize a stochastic outlier detection objective and construct an ensemble of models to perform OoD detection. We perform multiple OoD detection experiments and observe that NADS performs favorably compared to state-of-the-art OoD detection methods.
reject
This paper introduces a neural architecture search method that is geared towards yielding good uncertainty estimates for out-of-distribution (OOD) samples. The reviewers found that the OOD prediction results are strong, but criticized various points, including the presentation of the OOD results, novelty as a NAS paper, missing citations to some recent papers, and a lack of baselines with simpler ensembles. The authors improved the presentation of their OOD results and provided new experiments, which caused one reviewer to increase his/her score from a weak reject to an accept. The other reviewers appreciated the rebuttal, but preferred not to change their scores from a weak reject and a reject, mostly due to lack of novelty as a NAS paper. I also read the paper, and my personal opinion is that it would definitely be very novel to have a good neural architecture search for handling uncertainty in deep learning; it is by no means the case that "NAS for X" is not interesting just because there are now a few papers for "NAS for Y". As long as X is relevant (which uncertainty in deep learning definitely is), and NAS finds a new state-of-the-art, I think this is great. For such an "application" paper of the NAS methodology, I do not find it necessary to introduce a novel NAS method; just applying an existing one would be fine. The problem is more that the paper claims to introduce a new method, but that method is too similar to existing ones, without a comparison; actually just using an existing NAS method would therefore make the contribution and the emphasis on the application domain clearer. I have one small question to the authors about a part that I did not understand: to optimize WAIC (Eq 1), why is it not optimal to just set the parameterization \phi such that the variance is minimized, i.e., return a delta distribution p_\phi that always returns the same architecture (one with a strong prediction)?
Surely, that's not what the authors want, but wouldn't that minimize WAIC? I hope the authors will clarify this in a future version. In the private discussion of reviewers and AC, the most positive reviewer emphasized that the OOD results are strong, but admitted that the mixed sentiment is understandable since people who do not follow OOD detection could miss the importance and context of the results, and that the paper could definitely improve its messaging. The other reviewers' scores remained at 1 and 3, but the reviewers indicated that they would be positive about a future version of the paper that fixed the identified issues. My recommendation is to reject the paper and encourage the authors to continue this work and resubmit an improved version to a future venue.
train
[ "r1x0cOCwYr", "H1emJ_f9oS", "Syx4bZgcjH", "B1xFCxe5sB", "rJgS2leciS", "HJlXU1DsFS", "Skg5TJhhKH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper needs crucial, quick fixes.\n\nThis paper throws neural architecture search at the problem of out-of-distribution detection. Rather than searching over multi-class classifier architectures, and using the result for OOD detection, they instead search for generative model architectures.\nThe approach appe...
[ 8, -1, -1, -1, -1, 1, 3 ]
[ 5, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2020_rJeXDANKwr", "Syx4bZgcjH", "r1x0cOCwYr", "HJlXU1DsFS", "Skg5TJhhKH", "iclr_2020_rJeXDANKwr", "iclr_2020_rJeXDANKwr" ]
iclr_2020_HJx4PAEYDH
R-TRANSFORMER: RECURRENT NEURAL NETWORK ENHANCED TRANSFORMER
Recurrent Neural Networks have long been the dominant choice for sequence modeling. However, RNNs suffer from two severe issues: they struggle to capture very long-term dependencies, and their sequential computation cannot be parallelized. Therefore, many non-recurrent sequence models built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as the Transformer have demonstrated remarkable effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks. Despite their success, however, these models lack the necessary components to model local structures in sequences and rely heavily on position embeddings, which have limited effect and require a considerable amount of design effort. In this paper, we propose the R-Transformer, which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings. We evaluate the R-Transformer through extensive experiments with data from a wide range of domains, and the empirical results show that the R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks.
reject
The submission proposes a variant of a Transformer architecture that does not use positional embeddings to model local structural patterns but instead adds a recurrent layer before each attention layer to maintain local context. The approach is empirically verified on a number of domains. The reviewers had concerns with the paper, most notably that the architectural modification is not sufficiently novel or significant to warrant publication, that appropriate ablations and baselines were not done to convincingly show the benefit of the approach, that the speed tradeoff was not adequately discussed, and that the results were not compared to actual SOTA results. For these reasons, the recommendation is to reject the paper.
test
[ "H1lr66X6KB", "r1gU10tB_H", "BkliykIptB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper introduces the R-Transformer architecture which adds a local RNN layer before each attention layer in Transformer. The authors claim state-of-the-art performance but only test on tiny tasks where Transformer models have not been heavily optimized and omit the main problem with RNNs - namely their speed. ...
[ 3, 3, 6 ]
[ 5, 5, 3 ]
[ "iclr_2020_HJx4PAEYDH", "iclr_2020_HJx4PAEYDH", "iclr_2020_HJx4PAEYDH" ]
iclr_2020_HJeEP04KDH
Quantized Reinforcement Learning (QuaRL)
Recent work has shown that quantization can help reduce the memory, compute, and energy demands of deep neural networks without significantly harming their quality. However, whether these prior techniques, applied traditionally to image-based models, work with the same efficacy for the sequential decision-making process of reinforcement learning remains an unanswered question. To fill this void, we conduct the first comprehensive empirical study that quantifies the effects of quantization on various deep reinforcement learning policies, with the intent of reducing their computational resource demands. We apply techniques such as post-training quantization and quantization-aware training to a spectrum of reinforcement learning tasks (such as Pong, Breakout, BeamRider, and more) and training algorithms (such as PPO, A2C, DDPG, and DQN). Across this spectrum of tasks and learning algorithms, we show that policies can be quantized to 6-8 bits of precision without loss of accuracy. Additionally, we show that certain tasks and reinforcement learning algorithms yield policies that are more difficult to quantize because they widen the models' distribution of weights, and that quantization-aware training consistently improves results over post-training quantization, oftentimes even over the full-precision baseline. Finally, we demonstrate real-world applications of quantization for reinforcement learning. We use half-precision training to train a Pong model 50% faster, and we deploy a quantized reinforcement-learning-based navigation policy to an embedded system, achieving an 18x speedup and a 4x reduction in memory usage over an unquantized policy.
reject
The paper investigates quantization for speeding up RL. While the reviewers agree that the idea is a good one (it should definitely help), they also have a number of concerns about the paper and presentation. In particular, the reviewers feel that the authors should have provided more insight into the challenges of quantization in RL and the tradeoffs involved. After having read the rebuttals, the reviewers believe that the authors are on the right track, but that the paper is still not ready for publication. If the authors take the reviewer comments and concerns seriously and update their paper accordingly, the reviewers believe that this could eventually result in a strong paper.
train
[ "SJxbS6petH", "Syx1z1vioH", "BkgkuaIooS", "rkgpN68iiH", "B1ls5I2aYr", "HJeTTeOscB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper investigates the impact of using a reduced precision (i.e., quantization) in different deep reinforcement learning (DRL) algorithms. It shows that overall, reducing the precision of the neural network in DRL algorithms from 32 bits to 16 or 8 bits doesn't have much effect on the quality of the learned p...
[ 3, -1, -1, -1, 3, 3 ]
[ 5, -1, -1, -1, 5, 1 ]
[ "iclr_2020_HJeEP04KDH", "SJxbS6petH", "B1ls5I2aYr", "HJeTTeOscB", "iclr_2020_HJeEP04KDH", "iclr_2020_HJeEP04KDH" ]
iclr_2020_H1ervR4FwH
Improved Structural Discovery and Representation Learning of Multi-Agent Data
Central to all machine learning algorithms is data representation. For multi-agent systems, selecting a representation which adequately captures the interactions among agents is challenging due to the latent group structure which tends to vary depending on various contexts. However, in multi-agent systems with strong group structure, we can simultaneously learn this structure and map a set of agents to a consistently ordered representation for further learning. In this paper, we present a dynamic alignment method which provides a robust ordering of structured multi-agent data which allows for representation learning to occur in a fraction of the time of previous methods. We demonstrate the value of this approach using a large amount of soccer tracking data from a professional league.
reject
The work addresses the problem of inferring group structure from unstructured data in multi-agent learning settings, proposing a novel approach that has key computational/run-time advantages over a prior approach. A key limitation raised by reviewers is the limited quantitative evaluation and comparison to previous approaches, as well as the resulting lack of general insights into the advantages of the proposed approach compared to prior work (beyond computational benefits). While some of the key limitations were addressed in the rebuttal, the contribution in its current form remains too narrow. The paper is not ready for publication at ICLR at this stage.
train
[ "rJgy-CEe9S", "r1lT7803tH", "HylpPK5ijH", "rylDjw5oiS", "rJlRVBqjjS", "BylifV19sB", "ByxdYG-Gsr", "r1ggDTxfoH", "Sklm9egGiH", "Bkx-lXf6KH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "After author rebuttal: \nThank you for your response to the review. I appreciate your point that sport applications can be interesting to this community, and your paper makes an important contribution in that direction. With that said, for a paper to be appealing to the ICLR audience, there should be more clear no...
[ 1, 6, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 3, 1, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_H1ervR4FwH", "iclr_2020_H1ervR4FwH", "r1ggDTxfoH", "Sklm9egGiH", "BylifV19sB", "ByxdYG-Gsr", "r1lT7803tH", "Bkx-lXf6KH", "rJgy-CEe9S", "iclr_2020_H1ervR4FwH" ]
iclr_2020_BJlSPRVFwS
Asynchronous Stochastic Subgradient Methods for General Nonsmooth Nonconvex Optimization
Asynchronous distributed methods are a popular way to reduce the communication and synchronization costs of large-scale optimization. Yet, for all their success, little is known about their convergence guarantees in the challenging case of general non-smooth, non-convex objectives, beyond cases where closed-form proximal operator solutions are available. This is all the more surprising since these objectives are the ones appearing in the training of deep neural networks. In this paper, we introduce the first convergence analysis covering asynchronous methods in the case of general non-smooth, non-convex objectives. Our analysis applies to stochastic sub-gradient descent methods both with and without block variable partitioning, and both with and without momentum. It is phrased in the context of a general probabilistic model of asynchronous scheduling accurately adapted to modern hardware properties. We validate our analysis experimentally in the context of training deep neural network architectures. We show their overall successful asymptotic convergence as well as exploring how momentum, synchronization, and partitioning all affect performance.
reject
This paper considers an interesting theoretical question. However, it would add to the strength of the paper if it were able to meaningfully connect the considered model, as well as the derived methodology, to the challenges and performance that arise in practice.
train
[ "HylZKPjhor", "HJlzkXcnjH", "ByeLMiB2jH", "ryxCdm7ojS", "ryegE2B3ir", "r1x7qgQojB", "B1ekmR9sFB", "r1lhLbyaFB", "ryeuKR4lqH" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We again thank the reviewers for their insightful comments. We review the significant changes in this revision:\n\n- We have overhauled our implementation and are now able to provide practical end-to-end convergence speedup for PASSM versus SGD and other asynchronous methods! \n\n- We therefore present the first d...
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2020_BJlSPRVFwS", "B1ekmR9sFB", "r1lhLbyaFB", "B1ekmR9sFB", "r1lhLbyaFB", "ryeuKR4lqH", "iclr_2020_BJlSPRVFwS", "iclr_2020_BJlSPRVFwS", "iclr_2020_BJlSPRVFwS" ]
iclr_2020_S1eIw0NFvr
Selective Brain Damage: Measuring the Disparate Impact of Model Pruning
Neural network pruning techniques have demonstrated it is possible to remove the majority of weights in a network with surprisingly little degradation to top-1 test set accuracy. However, this measure of performance conceals significant differences in how different classes and images are impacted by pruning. We find that certain individual data points, which we term pruning identified exemplars (PIEs), and classes are systematically more impacted by the introduction of sparsity. Removing PIE images from the test-set greatly improves top-1 accuracy for both sparse and non-sparse models. These hard-to-generalize-to images tend to be of lower image quality, mislabelled, entail abstract representations, require fine-grained classification or depict atypical class examples.
reject
This work investigates neural network pruning through the lens of its influence over specific exemplars (which are found to often be lower quality or mislabelled images) and how removing them greatly helps metrics. The insight from the paper is interesting, as recognized by reviewers. However, experiments do not suggest that the findings shown in the paper would generalize to more pruning methods. Nor do the authors give directions for tackling the "hard exemplar" problem. Authors' response did provide justifications and clarifications, however the core of the concern remains. Therefore, we recommend rejection.
train
[ "rkeVZWq0FH", "ryxHCEMcsr", "B1lgWogtjS", "SylCQvrzjS", "HyluOv5_or", "ByxHnvh4YB", "BkxOsIC5FS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents an empirical study on the effect of pruning to the model performance on each class and example, which leads to a novel finding that it has disparate effects to each sample. Specifically, the authors have found out that examples that are affected the most by pruning are more difficult to classif...
[ 3, -1, -1, -1, -1, 3, 3 ]
[ 5, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_S1eIw0NFvr", "B1lgWogtjS", "BkxOsIC5FS", "ByxHnvh4YB", "rkeVZWq0FH", "iclr_2020_S1eIw0NFvr", "iclr_2020_S1eIw0NFvr" ]
iclr_2020_HkeUDCNFPS
Learning Temporal Abstraction with Information-theoretic Constraints for Hierarchical Reinforcement Learning
Applying reinforcement learning (RL) to real-world problems will require reasoning about action-reward correlation over long time horizons. Hierarchical reinforcement learning (HRL) methods handle this by dividing the task into hierarchies, often with hand-tuned network structure or pre-defined subgoals. We propose a novel HRL framework TAIC, which learns the temporal abstraction from past experience or expert demonstrations without task-specific knowledge. We formulate the temporal abstraction problem as learning latent representations of action sequences and present a novel approach to regularizing the latent space by adding information-theoretic constraints. Specifically, we maximize the mutual information between the latent variables and the state changes. A visualization of the latent space demonstrates that our algorithm learns an effective abstraction of long action sequences. The learned abstraction allows us to learn new tasks at a higher level more efficiently. We demonstrate a significant speedup in convergence over benchmark learning problems. These results demonstrate that learning temporal abstractions is an effective technique for increasing the convergence rate and sample efficiency of RL algorithms.
reject
This paper presents a novel hierarchical reinforcement learning framework, based on learning temporal abstractions from past experience or expert demonstrations using recurrent variational autoencoders and regularising the representations. This is certainly an interesting line of work, but there were two primary areas of concern in the reviews: the clarity of details of the approach, and the lack of comparison to baselines. While the former issue was largely dealt with in the rebuttals, the latter remained an issue for all reviewers. For this reason, I recommend rejection of the paper in its current form.
train
[ "SyxidIc2oS", "rkxI5WYnjr", "B1gHtQF3ir", "H1eKNNK3sB", "HkggPOsoor", "r1gZVWOsoS", "BJxX-m9LiB", "HJg0oMcLoS", "HygOc0FLjr", "SJlcwCtLiH", "rket7CKUsB", "S1lH51KIjH", "SkxfupZctr", "BJeV0RY3KH", "HygjWRsRFB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Option in our definition is a latent representation of action sequence. There is an assumption that all the options are applicable to all states, as a common assumption also adopted by Option-critic, etc. We have defined multiple termination conditions, which allow detecting abnormal states and end the option earl...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ "H1eKNNK3sB", "r1gZVWOsoS", "HkggPOsoor", "S1lH51KIjH", "SJlcwCtLiH", "HygOc0FLjr", "SkxfupZctr", "SkxfupZctr", "BJeV0RY3KH", "BJeV0RY3KH", "BJeV0RY3KH", "HygjWRsRFB", "iclr_2020_HkeUDCNFPS", "iclr_2020_HkeUDCNFPS", "iclr_2020_HkeUDCNFPS" ]
iclr_2020_SkgvvCVtDS
DeepSimplex: Reinforcement Learning of Pivot Rules Improves the Efficiency of Simplex Algorithm in Solving Linear Programming Problems
Linear Programs (LPs) are a fundamental class of optimization problems with a wide variety of applications. Fast algorithms for solving LPs are the workhorse of many combinatorial optimization algorithms, especially those involving integer programming. One popular method to solve LPs is the simplex method which, at each iteration, traverses the surface of the polyhedron of feasible solutions. At each vertex of the polyhedron, one of several heuristics chooses the next neighboring vertex, and these vary in accuracy and computational cost. We use deep value-based reinforcement learning to learn a pivoting strategy that at each iteration chooses between two of the most popular pivot rules -- Dantzig and steepest edge. Because the latter is typically more accurate and computationally costly than the former, we assign a higher wall time-based cost to steepest edge iterations than Dantzig iterations. We optimize this weighted cost on a neural net architecture designed for the simplex algorithm. We obtain between 20% to 50% reduction in the gap between weighted iterations of the individual pivoting rules, and the best possible omniscient policies for LP relaxations of randomly generated instances of five-city Traveling Salesman Problem.
reject
This paper presents a learning method for speeding up LP solving, and applies it to the TSP problem. Reviewers and AC agree that the idea is quite interesting and promising. However, I think the paper is far from ready to publish in various aspects: (a) much more editorial effort is necessary; (b) the small-scale TSP application is not super appealing. Hence, I recommend rejection.
test
[ "rygvZne-9S", "BygYFMhZcS", "H1eYwcTTtB", "r1eZaGFhiB", "S1xsGeY3or", "S1ez3R_nir", "ryetp7OhoH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary: \nThe authors propose a deep-reinforcement learning method for training how to choose pivot rules for the simplex algorithm for a set of LP instances. In particular, the authors applied the RL-approach to randomly generated TSP problems with five cities, which reduces the costs. \n\nComments:\nI have some...
[ 1, 3, 1, -1, -1, -1, -1 ]
[ 3, 1, 4, -1, -1, -1, -1 ]
[ "iclr_2020_SkgvvCVtDS", "iclr_2020_SkgvvCVtDS", "iclr_2020_SkgvvCVtDS", "H1eYwcTTtB", "rygvZne-9S", "BygYFMhZcS", "iclr_2020_SkgvvCVtDS" ]
iclr_2020_HJxwvCEFvH
SPECTRA: Sparse Entity-centric Transitions
Learning an agent that interacts with objects is ubiquitous in many RL tasks. In most of them the agent's actions have sparse effects: only a small subset of objects in the visual scene will be affected by the action taken. We introduce SPECTRA, a model for learning slot-structured transitions from raw visual observations that embodies this sparsity assumption. Our model is composed of a perception module that decomposes the visual scene into a set of latent object representations (i.e. slot-structured) and a transition module that predicts the next latent set slot-wise and in a sparse way. We show that learning a perception module jointly with a sparse slot-structured transition model not only biases the model towards more entity-centric perceptual groupings but also enables an intrinsic exploration strategy that aims at maximizing the number of objects changed in the agent's trajectory.
reject
This paper introduces a model that learns a slot-based representation and its transition model to predict the representation changes over time. While all the reviewers agree that this paper is focusing on an important problem, they expressed multiple concerns regarding the novelty of the approach as well as the lack of experiments. It is certainly missing multiple important relevant works, thereby overclaiming in a few places. The authors provided a short general response to compare their approach with some of the previous works and conduct stronger experiments for a future submission. We believe this paper is not at the stage to be published at this point.
train
[ "HylRsVvnjr", "SkeIrpvpKH", "ByeZ0fDAFB", "HJgQLsmz5B" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewers for all their useful comments. We especially agree with Reviewer 2 regarding the prior work we claim to improve upon (representation for some RL downstream tasks vs accuracy of the forward model learned). We will come with a stronger body of experiments for a future submission....
[ -1, 3, 3, 3 ]
[ -1, 5, 5, 3 ]
[ "iclr_2020_HJxwvCEFvH", "iclr_2020_HJxwvCEFvH", "iclr_2020_HJxwvCEFvH", "iclr_2020_HJxwvCEFvH" ]
iclr_2020_HygwvC4tPH
Learning Cross-Context Entity Representations from Text
Language modeling tasks, in which words, or word-pieces, are predicted on the basis of a local context, have been very effective for learning word embeddings and context dependent representations of phrases. Motivated by the observation that efforts to code world knowledge into machine readable knowledge bases or human readable encyclopedias tend to be entity-centric, we investigate the use of a fill-in-the-blank task to learn context independent representations of entities from the text contexts in which those entities were mentioned. We show that large scale training of neural models allows us to learn high quality entity representations, and we demonstrate successful results on four domains: (1) existing entity-level typing benchmarks, including a 64% error reduction over previous work on TypeNet (Murty et al., 2018); (2) a novel few-shot category reconstruction task; (3) existing entity linking benchmarks, where we achieve a score of 87.3% on TAC-KBP 2010 without using any alias table, external knowledge base or in domain training data and (4) answering trivia questions, which uniquely identify entities. Our global entity representations encode fine-grained type categories, such as "Scottish footballers", and can answer trivia questions such as "Who was the last inmate of Spandau jail in Berlin?".
reject
The paper describes an approach for learning context-independent entity representations that encode fine-grained entity types. The paper includes some good empirical results and observations, but the proposed approach is very simple and lacks the technical novelty needed for a top ML conference; the clarity of the presentation can also be improved.
train
[ "BJlJb4NhsS", "B1eYm7VnjS", "rJljCMN3sB", "HkxtUeV3sr", "HJl1Q0wptr", "rkgHVddg5B", "rke_FcY0YS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and detailed questions.\n\nAs discussed in the response to all reviewers, we agree that RELIC's model architecture is not particularly novel. We believe that this paper's contribution is in the extensive, and novel, experiments that go well beyond previous work in testing the extent to wh...
[ -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, 5, 4, 4 ]
[ "rkgHVddg5B", "rke_FcY0YS", "HJl1Q0wptr", "iclr_2020_HygwvC4tPH", "iclr_2020_HygwvC4tPH", "iclr_2020_HygwvC4tPH", "iclr_2020_HygwvC4tPH" ]
iclr_2020_Byg_vREtvB
Generalized Bayesian Posterior Expectation Distillation for Deep Neural Networks
In this paper, we present a general framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network, significantly extending prior work on a method known as ``Bayesian Dark Knowledge." Our generalized framework applies to the case of classification models and takes as input the architecture of a ``teacher" network, a general posterior expectation of interest, and the architecture of a ``student" network. The distillation method performs an online compression of the selected posterior expectation using iteratively generated Monte Carlo samples from the parameter posterior of the teacher model. We further consider the problem of optimizing the student model architecture with respect to an accuracy-speed-storage trade-off. We present experimental results investigating multiple data sets, distillation targets, teacher model architectures, and approaches to searching for student model architectures. We establish the key result that distilling into a student model with an architecture that matches the teacher, as is done in Bayesian Dark Knowledge, can lead to sub-optimal performance. Lastly, we show that student architecture search methods can identify student models with significantly improved performance.
reject
The authors consider distilling posterior expectations for Bayesian neural networks. While reviewers found the material interesting, and the responses thoughtful, there were questions about the practical utility of the work. Evaluations of classification favour NLL (and typically do not show accuracy), and regression (which was considered in the original Bayesian Dark Knowledge paper) is not considered. In general, it is difficult to assess and interpret how the approach is working, and in what application regime it would be a gold standard, e.g., with respect to downstream tasks. The authors are encouraged to continue with this work, taking reviewer comments into account in a final version.
train
[ "HJeLOilLtr", "SJgR0mt2sH", "Hkx8oW7nsH", "r1xgFEvWqH", "Byg5145dir", "SygMPVcuiB", "rkxHN4UXjr", "B1edOTmboB", "Sklkc14ZjH", "SyeK3VXTYS" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Contributions:\n\nThe paper considers the distillation of a Bayesian neural network as presented in [Balan et al. 2015]\n\nThe main contribution of the paper is the extension of [Balan et al. 2015] to apply to general posterior expectations instead of being restricted to predictions. \n\nA second contribution of t...
[ 3, -1, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, 3, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_Byg_vREtvB", "rkxHN4UXjr", "iclr_2020_Byg_vREtvB", "iclr_2020_Byg_vREtvB", "Sklkc14ZjH", "B1edOTmboB", "HJeLOilLtr", "r1xgFEvWqH", "SyeK3VXTYS", "iclr_2020_Byg_vREtvB" ]
iclr_2020_HJgcw0Etwr
Toward Understanding Generalization of Over-parameterized Deep ReLU network trained with SGD in Student-teacher Setting
To analyze deep ReLU networks, we adopt a student-teacher setting in which an over-parameterized student network learns from the output of a fixed teacher network of the same depth, with Stochastic Gradient Descent (SGD). Our contributions are two-fold. First, we prove that when the gradient is zero (or bounded above by a small constant) at every data point in training, a situation called the \emph{interpolation setting}, there exists a many-to-one \emph{alignment} between student and teacher nodes in the lowest layer under mild conditions. This suggests that generalization to unseen datasets is achievable, even though the same condition often leads to zero training error. Second, analysis of noisy recovery and training dynamics in a 2-layer network shows that strong teacher nodes (with large fan-out weights) are learned first and subtle teacher nodes are left unlearned until a late stage of training. As a result, it could take a long time to converge to these small-gradient critical points. Our analysis shows that over-parameterization plays two roles: (1) it is a necessary condition for alignment to happen at the critical points, and (2) in training dynamics, it helps student nodes cover more teacher nodes with fewer iterations. Both improve generalization. Experiments justify our findings.
reject
The article studies a student-teacher setting with over-realised student ReLU networks, with results on the types of solutions and dynamics. The reviewers found the line of work interesting, but they also raised concerns about the novelty of the presented results, the description of previous works, settings and claims, and experiments. The revision clarified some of the definitions, the nature of the observations, experiments, and related works, including a change of the title. However, the reviewers were still not convinced, in particular by the interpretation of the results, and kept their original ratings. Given the many points that were raised in the original reviews, the article would benefit from a more thorough revision.
train
[ "HJeVITt6KS", "SJe4TMToiS", "rJeJJf-hoS", "HJeZ0mTJjS", "r1xpKc6yjS", "r1x1IFPJsB", "BkeWiMPyoS", "BkekyRs2YS", "HJgl0PlpFr", "Syg2Kl4C5B", "rylNPKIt9S", "rkgrJwaV9B" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author" ]
[ "This paper studies the learning of over-parameterized neural networks in the student-teacher setting. More specifically, this paper assumes that there is a fixed teacher network providing the output for student network to learn, where the student network is typically over-parameterized (i.e., wider than teacher ne...
[ 3, -1, -1, -1, -1, -1, -1, 3, 3, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, 4, -1, -1, -1 ]
[ "iclr_2020_HJgcw0Etwr", "iclr_2020_HJgcw0Etwr", "HJeZ0mTJjS", "HJgl0PlpFr", "BkekyRs2YS", "BkeWiMPyoS", "HJeVITt6KS", "iclr_2020_HJgcw0Etwr", "iclr_2020_HJgcw0Etwr", "rylNPKIt9S", "iclr_2020_HJgcw0Etwr", "iclr_2020_HJgcw0Etwr" ]
iclr_2020_BJl9PRVKDS
A Functional Characterization of Randomly Initialized Gradient Descent in Deep ReLU Networks
Despite their popularity and successes, deep neural networks are poorly understood theoretically and treated as 'black box' systems. Using a functional view of these networks gives us a useful new lens with which to understand them. This allows us to theoretically or experimentally probe properties of these networks, including the effect of standard initializations, the value of depth, the underlying loss surface, and the origins of generalization. One key result is that generalization results from smoothness of the functional approximation, combined with a flat initial approximation. This smoothness increases with the number of units, explaining why massively overparameterized networks continue to generalize well.
reject
This article sets out to study the advantages of depth and overparametrization in neural networks from the perspective of function space, with results on univariate shallow fully connected ReLU networks and some experiments on deep networks. The article presents results on the concentration/dispersion of the slope/break-point distribution of the functions represented by shallow univariate ReLU networks for parameters from various distributions. The reviewers found that the article contains interesting analysis, but that the presentation could be improved. The revision clarified some aspects and included some experiments illustrating breakpoint distributions in relation to the curvature of some target functions. However, the reviewers did not find this convincing enough, pointing out that the analysis focuses on a very restrictive setting and that the presentation of the article could still be improved. The discussion of implicit regularisation in section 2.4 seems promising, but it would benefit from a clearer motivation, background, and discussion.
train
[ "SkeCSmD6Yr", "S1l9SBhooB", "BkghqNnjiB", "rklO1B2ioB", "rkenpEhjiH", "SJeNrfnRKB", "SJlbpRPycB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThis paper proposes a functional characterization to understand the empirical success of deep neural networks. In particular, this paper focuses on the case of deep fully connected univariate ReLU networks, and show that the parameters will result in a Continuous Piecewise Linear (CPWL) approximation to the targ...
[ 3, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_BJl9PRVKDS", "iclr_2020_BJl9PRVKDS", "SJeNrfnRKB", "SkeCSmD6Yr", "SJlbpRPycB", "iclr_2020_BJl9PRVKDS", "iclr_2020_BJl9PRVKDS" ]
iclr_2020_ryghPCVYvH
Generative Restricted Kernel Machines
We introduce a novel framework for generative models based on Restricted Kernel Machines (RKMs) with multi-view generation and uncorrelated feature learning capabilities, called Gen-RKM. To incorporate multi-view generation, this mechanism uses a shared representation of data from various views. The mechanism is flexible to incorporate both kernel-based, (deep) neural network and convolutional based models within the same setting. To update the parameters of the network, we propose a novel training procedure which jointly learns the features and shared representation. Experiments demonstrate the potential of the framework through qualitative evaluation of generated samples.
reject
The paper proposes a way to use kernel methods for multi-view generation. The points are mapped into a common subspace (with a CNN feature extractor and a kernel on top), and then a generation procedure from a latent point is given. I found the paper not easy to read and follow; the idea of combining CNNs and kernel methods has been around for some years (for example, see "Impostor networks" by Lebedev et al.), and the explicit feature map shows that the kernel is just an additional layer of the network. Overall, the approach is straightforward, the generation can be quite slow, and the benefits are not clear. The reviewers are mildly negative, so I think this time this paper cannot be accepted.
train
[ "BJly_2s4cB", "HJgqjEXpFH", "H1xxIIE3sr", "HklIe8N3jr", "rJeetrEhsH", "H1xJDr4hsr", "ryg6GNNniB", "BJgubSH1qS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "There exists two papers:\n[1] Multimodal Learning with Deep Boltzmann Machines, http://jmlr.org/papers/volume15/srivastava14b/srivastava14b.pdf\n[2] Deep Restricted Kernel Machines Using Conjugate Feature Duality, ftp://ftp.esat.kuleuven.ac.be/stadius/suykens/reports/deepRKM1.pdf\n\nIn particular [2] considers a m...
[ 6, 3, -1, -1, -1, -1, -1, 3 ]
[ 4, 1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_ryghPCVYvH", "iclr_2020_ryghPCVYvH", "HJgqjEXpFH", "BJgubSH1qS", "BJly_2s4cB", "BJly_2s4cB", "iclr_2020_ryghPCVYvH", "iclr_2020_ryghPCVYvH" ]
iclr_2020_S1x2PCNKDB
Task-Relevant Adversarial Imitation Learning
We show that a critical problem in adversarial imitation from high-dimensional sensory data is the tendency of discriminator networks to distinguish agent and expert behaviour using task-irrelevant features beyond the control of the agent. We analyze this problem in detail and propose a solution as well as several baselines that outperform standard Generative Adversarial Imitation Learning (GAIL). Our proposed solution, Task-Relevant Adversarial Imitation Learning (TRAIL), uses a constrained optimization objective to overcome task-irrelevant features. Comprehensive experiments show that TRAIL can solve challenging manipulation tasks from pixels by imitating human operators, where other agents such as behaviour cloning (BC), standard GAIL, improved GAIL variants including our newly proposed baselines, and Deterministic Policy Gradients from Demonstrations (DPGfD) fail to find solutions, even when the other agents have access to task reward.
reject
This paper attempts to improve adversarial imitation learning (GAIL) by encouraging the discriminator to focus on task-dependent features. An advantage of this paper is that it not only improves upon GAIL, but it is doing so after first demonstrating and analyzing an existing issue. On the other hand, the presentation of the paper and breadth of experiments could be significantly improved further than the updated version. It would also be necessary to clarify whether the baseline is vanilla PG or D4PG. A major point for discussion was the selection of the invariance set. The ablation studies and explanation provided during the rebuttal period towards this point are helpful, but somehow we still do not have the full picture to understand well how this method compares to existing literature.
train
[ "Sye2efgAtH", "B1gMweDDcr", "BkxRKQ8hoS", "SkxkTPKgsB", "SJezdDKgor", "ByxPIPYloH", "SJgm4PYeoS", "r1e0ftvNqB", "B1l0p2qwYS", "SJg-Z4xivH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "public", "author" ]
[ "The paper proposes an extension of the adversarial imitation learning framework where the discriminator is additionally incentivized not to distinguish frames that are different between the expert and the agent in irrelevant ways. The method relies on manually identifying a set of irrelevant frames.\n\nThe paper c...
[ 8, 6, -1, -1, -1, -1, -1, 3, -1, -1 ]
[ 1, 1, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ "iclr_2020_S1x2PCNKDB", "iclr_2020_S1x2PCNKDB", "iclr_2020_S1x2PCNKDB", "B1gMweDDcr", "r1e0ftvNqB", "r1e0ftvNqB", "Sye2efgAtH", "iclr_2020_S1x2PCNKDB", "iclr_2020_S1x2PCNKDB", "iclr_2020_S1x2PCNKDB" ]
iclr_2020_BkgTwRNtPB
Solving Packing Problems by Conditional Query Learning
Neural Combinatorial Optimization (NCO) has recently shown the potential to solve traditional NP-hard problems. Previous studies have shown that NCO outperforms heuristic algorithms in many combinatorial optimization problems such as routing problems. However, it is less efficient for more complicated problems such as packing, a type of optimization problem with a mutually conditioned action space. In this paper, we propose a Conditional Query Learning (CQL) method to handle the packing problem in both 2D and 3D settings. By embedding previous actions as a conditional query to the attention model, we design a fully end-to-end model and train it for 2D and 3D packing via reinforcement learning respectively. Through extensive experiments, the results show that our method achieves a lower bin gap ratio and variance for both 2D and 3D packing. Our model improves the space utilization ratio by 7.2% compared with a genetic algorithm for 3D packing (30-box case), and reduces the bin gap ratio by more than 10% in almost every case compared with extant learning approaches. In addition, our model shows great scalability with the number of packed boxes. Furthermore, we provide a general test environment for 2D and 3D packing for learning algorithms. All source code of the model and the test environment is released.
reject
This paper proposes an end-to-end deep reinforcement learning-based algorithm for the 2D and 3D bin packing problems. Its main contribution is conditional query learning (CQL), which allows effective decisions over mutually conditioned action spaces through a policy expressed as a sequence of conditional distributions. An efficient neural architecture for modeling such a policy is proposed. Experiments validate the effectiveness of the algorithm through comparisons with a genetic algorithm and vanilla RL baselines. The presentation is clear and the results are interesting, but the novelty seems insufficient for ICLR. The proposed model is based on the transformer with the following changes: * encoder: position embedding is removed, and state embedding is added to the multi-head attention layer and feed-forward layer of the original transformer encoder; * decoder: three decoders, one for each of the three steps, namely selection, rotation and location; * training: actor-critic algorithm
train
[ "HJgUoNKLjB", "r1lQwrtUsB", "HylF5LFIjr", "BJgv7YQcYS", "B1xTSLjtuS", "r1eDbB6-qr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your constructive comments and the acknowledgement of our research direction. we address your concerns below:\n\na) Indeed, there are a few typos and grammar mistakes, and we will fix that carefully in the new version. \n\nb) In the related works section, we do not introduce which class of BPPs we t...
[ -1, -1, -1, 1, 3, 6 ]
[ -1, -1, -1, 3, 3, 5 ]
[ "BJgv7YQcYS", "B1xTSLjtuS", "r1eDbB6-qr", "iclr_2020_BkgTwRNtPB", "iclr_2020_BkgTwRNtPB", "iclr_2020_BkgTwRNtPB" ]
iclr_2020_SJx0PAEFDS
Underwhelming Generalization Improvements From Controlling Feature Attribution
Overfitting is a common issue in machine learning, which can arise when the model learns to predict class membership using convenient but spuriously-correlated image features instead of the true image features that denote a class. These are typically visualized using saliency maps. In some object classification tasks such as for medical images, one may have some images with masks, indicating a region of interest, i.e., which part of the image contains the most relevant information for the classification. We describe a simple method for taking advantage of such auxiliary labels, by training networks to ignore the distracting features which may be extracted outside of the region of interest, on the training images for which such masks are available. This mask information is only used during training and has an impact on generalization accuracy in a dataset-dependent way. We observe an underwhelming relationship between controlling saliency maps and improving generalization performance.
reject
This paper studies the effect of training an image classifier with masked images to exclude distracting regions in the image and avoid the formation of spurious correlations between them and the predicted labels. The paper proposes an actdiff regularizer and demonstrates that it prevents such overfitting on synthetic data. However, there was no success on real data. This is important as it shows that the improvement reported for some saliency-map based approaches in the literature may be due to other regularization effects such as cutout. This was a unique submission in my batch, as it embraces its negative results. In our internal discussions, all reviewers agreed that negative results are important and should be encouraged. However, in order for the negative results to be sufficiently insightful for the entire community, they need to be examined under well-organized experiments. This is the aspect that the reviewers think the paper needs to improve on. In particular, R2 believes the paper could consider a larger set of possible regularizations as well as a broader range of applications. The insights from such a setting may then lead to solid conclusions on why the current approaches are not very helpful, and on which directions follow-up research should focus.
train
[ "r1lEgf-2YB", "rylmtbuqsB", "BygBVeuqir", "Hylsb1OqjB", "rygobQ69FS", "rkl72wslcH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers how we can train image classification models so that they can ignore task irrelevant features.\nFor this purpose, the authors considered a situation where task relevant parts of the images are annotated as masks by the human experts.\nThe authors then proposed using Actdiff loss, Reconstructio...
[ 3, -1, -1, -1, 3, 3 ]
[ 1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_SJx0PAEFDS", "rygobQ69FS", "r1lEgf-2YB", "rkl72wslcH", "iclr_2020_SJx0PAEFDS", "iclr_2020_SJx0PAEFDS" ]
iclr_2020_Byx0PREtDH
BEYOND SUPERVISED LEARNING: RECOGNIZING UNSEEN ATTRIBUTE-OBJECT PAIRS WITH VISION-LANGUAGE FUSION AND ATTRACTOR NETWORKS
This paper handles a challenging problem, unseen attribute-object pair recognition, which asks a model to simultaneously recognize the attribute type and the object type of a given image even though this attribute-object pair is not included in the training set. In past years, conventional classifier-based methods, which recognize unseen attribute-object pairs by composing separately-trained attribute classifiers and object classifiers, have been strongly frustrated. Different from conventional methods, we propose a generative model with a visual pathway and a linguistic pathway. In each pathway, an attractor network is involved to learn the intrinsic feature representation and explore the inner relationship between the attribute and the object. With the learned features in both pathways, an unseen attribute-object pair is recognized by finding the pair whose linguistic feature most closely matches the visual feature of the given image. On two public datasets, our model achieves impressive experimental results, notably outperforming the state-of-the-art methods.
reject
The paper focuses on attribute-object pair image recognition, leveraging a novel "attractor network". At this stage, all reviewers agree the paper needs a lot of improvement in the writing. There are also concerns regarding (i) novelty: the proposed approach being two encoder-decoder networks; (ii) lack of motivation for such an architecture; (iii) a possible flaw in the approach (are the authors using test labels?); and (iv) weak experiments.
train
[ "SylMYma6YH", "Bkl7Aytg5S", "SylOEzmTqr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper attempts to tackle unseen object-attribute recognition in still images. This follows a line of works that addresses such a problem using embedding (with operators), classification and generative approaches. The paper makes use of the visual attractor network, extracting different visual representations f...
[ 1, 3, 1 ]
[ 4, 5, 3 ]
[ "iclr_2020_Byx0PREtDH", "iclr_2020_Byx0PREtDH", "iclr_2020_Byx0PREtDH" ]
iclr_2020_SJlRDCVtwr
Simplicial Complex Networks
The universal approximation property of neural networks is one of the motivations to use these models in various real-world problems. However, this property is not the only characteristic that makes neural networks unique, as there is a wide range of other approaches with a similar property. Another characteristic which makes these models interesting is that they can be trained with the backpropagation algorithm, which allows efficient gradient computation and gives these universal approximators the ability to efficiently learn complex manifolds from large amounts of data in different domains. Despite their abundant use in practice, neural networks are still not well understood, and a broad range of ongoing research aims to study the interpretability of neural networks. On the other hand, topological data analysis (TDA) relies on the strong theoretical framework of (algebraic) topology along with other mathematical tools for analyzing possibly complex datasets. In this work, we leverage a universal approximation theorem originating from algebraic topology to build a connection between TDA and the common neural network training framework. We introduce the notion of automatic subdivisioning and devise a particular type of neural network for regression tasks: Simplicial Complex Networks (SCNs). An SCN's architecture is defined with a set of bias functions along with a particular policy during the forward pass which alternates the common architecture search framework in neural networks. We believe the view of SCNs can be used as a step towards building interpretable deep learning models. Finally, we verify its performance on a set of regression problems.
reject
The paper introduces simplicial complex networks, a new class of neural networks based on the idea of the subdivision of a simplicial complex. The paper is interesting and brings ideas from algebraic topology to inform the design of new neural network architectures. Reviewer 1 was positive about the ideas of this paper, but had several concerns about clarity, scalability, and the sense that the paper might still be in an early phase. Reviewer 2 had similar concerns about clarity, comparisons, and usefulness. Although there were no responses from the authors, the discussion explored the paper further, but continued to find the idea still in its early phase. The paper is not currently ready for acceptance, and we hope the authors will find useful feedback for their ongoing research.
train
[ "S1lAcZpKFH", "BkgPbIw2tB" ]
[ "official_reviewer", "official_reviewer" ]
[ "1. Summary of the paper\n\nThis paper introduces *simplicial complex networks*, a new class of\nneural networks based on the idea of the subdivision of a simplicial\ncomplex. Using the simplicial approximation, a classical theorem in\nalgebraic topology, the paper demonstrates that the network is capable\nof learn...
[ 1, 1 ]
[ 5, 1 ]
[ "iclr_2020_SJlRDCVtwr", "iclr_2020_SJlRDCVtwr" ]
iclr_2020_Bylkd0EFwr
Bio-Inspired Hashing for Unsupervised Similarity Search
The fruit fly Drosophila's olfactory circuit has inspired a new locality sensitive hashing (LSH) algorithm, FlyHash. In contrast with classical LSH algorithms that produce low dimensional hash codes, FlyHash produces sparse high-dimensional hash codes and has also been shown to have superior empirical performance compared to classical LSH algorithms in similarity search. However, FlyHash uses random projections and cannot learn from data. Building on inspiration from FlyHash and the ubiquity of sparse expansive representations in neurobiology, our work proposes a novel hashing algorithm BioHash that produces sparse high dimensional hash codes in a data-driven manner. We show that BioHash outperforms previously published benchmarks for various hashing methods. Since our learning algorithm is based on a local and biologically plausible synaptic plasticity rule, our work provides evidence for the proposal that LSH might be a computational reason for the abundance of sparse expansive motifs in a variety of biological systems. We also propose a convolutional variant BioConvHash that further improves performance. From the perspective of computer science, BioHash and BioConvHash are fast, scalable and yield compressed binary representations that are useful for similarity search.
reject
This paper introduces a biologically inspired locally sensitive hashing method, a variant of FlyHash. While the paper contains interesting ideas and its presentation has been substantially improved from its original form during the discussion period, the paper still does not meet the quality bar of ICLR due to its limitations in terms of experiments and applicability to real-world scenarios.
train
[ "rkxbczB-cS", "BklzYdcaYr", "H1l6nBpQ9S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies a new model of locally sensitive hashing (LSH) that is inspired by the fruit fly Drosophila's olfactory circuit. Instead of mapping each input to a low-dimensional space, such LSH methods (FlyHash) map given $d$-dimensional inputs to an $m$-dimensional space such that $m \\gg d$. However, these ...
[ 6, 6, 3 ]
[ 4, 4, 1 ]
[ "iclr_2020_Bylkd0EFwr", "iclr_2020_Bylkd0EFwr", "iclr_2020_Bylkd0EFwr" ]
iclr_2020_SkgJOAEtvr
INTERNAL-CONSISTENCY CONSTRAINTS FOR EMERGENT COMMUNICATION
When communicating, humans rely on internally-consistent language representations. That is, as speakers, we expect listeners to behave the same way we do when we listen. This work proposes several methods for encouraging such internal consistency in dialog agents in an emergent communication setting. We consider two hypotheses about the effect of internal-consistency constraints: 1) that they improve agents’ ability to refer to unseen referents, and 2) that they improve agents’ ability to generalize across communicative roles (e.g. performing as a speaker despite only being trained as a listener). While we do not find evidence in favor of the former, our results show significant support for the latter.
reject
This work examines how internal consistency objectives can help emergent communication, namely through possibly improving ability to refer to unseen referents and to generalize across communicative roles. Experimental results support the second hypothesis but not the first. Reviewers agree that this is an exciting object of study, but had reservations about the rationale for the first hypothesis (which was ultimately disproven), and for how the second hypothesis was investigated (lack of ablations to tease apart which part was most responsible for improvement, unsatisfactory framing). These concerns were not fully addressed by the response. While the paper is very promising and the direction quite interesting, this cannot in its current form be recommended for acceptance. We encourage authors to carefully examine reviewers' suggestions to improve their work for submission to another venue.
train
[ "HJePWIB0Kr", "Skxu-1xniS", "r1gX0kenoH", "rJeCmA12iS", "HkxM3i8CYS", "Skg9fxAyqS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper analyzes if enforcing internal-consistency for speaker-listener setup can (i) improve the ability of the agents to refer to unseen referents (ii) generalize for different communicative roles. The paper evaluates a transformer and arecurrent model modified with various sharing strategies on a single-turn ...
[ 6, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, 5, 3 ]
[ "iclr_2020_SkgJOAEtvr", "HkxM3i8CYS", "HJePWIB0Kr", "iclr_2020_SkgJOAEtvr", "iclr_2020_SkgJOAEtvr", "iclr_2020_SkgJOAEtvr" ]
iclr_2020_rygxdA4YPS
AdaScale SGD: A Scale-Invariant Algorithm for Distributed Training
When using distributed training to speed up stochastic gradient descent, learning rates must adapt to new scales in order to maintain training effectiveness. Re-tuning these parameters is resource intensive, while fixed scaling rules often degrade model quality. We propose AdaScale SGD, a practical and principled algorithm that is approximately scale invariant. By continually adapting to the gradient’s variance, AdaScale often trains at a wide range of scales with nearly identical results. We describe this invariance formally through AdaScale’s convergence bounds. As the batch size increases, the bounds maintain final objective values, while smoothly transitioning away from linear speed-ups. In empirical comparisons, AdaScale trains well beyond the batch size limits of popular “linear learning rate scaling” rules. This includes large-scale training without model degradation for machine translation, image classification, object detection, and speech recognition tasks. The algorithm introduces negligible computational overhead and no tuning parameters, making AdaScale an attractive choice for large-scale training.
reject
Main summary: A novel rule for scaling the learning rate, known as the gain ratio, as the effective batch size is increased. Discussion: Reviewer 2: the main concern is that the reviewer can't tell whether it's better or worse than linear learning rate scaling from the experiment section. Reviewer 3: the novelty/contribution is a bit too low for ICLR. Reviewer 1: algorithmic clarity is lacking. Recommendation: all 3 reviewers recommend reject; I agree.
train
[ "B1eHcr_coH", "SkeGCVuqjB", "rygzrVu5jr", "HkxTHXuqir", "S1lFOBTMKH", "SyxIwzyRKH", "S1xeppfZcr", "BkxgIV-4ur", "HJl9LrbVuS", "HJxM5aPTvr", "H1lMUaMaPB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "public" ]
[ "Thank you for the detailed and constructive review.\n\nWe have updated all theorems to achieve the better rates. Hopefully it is clear now that the theoretical comparisons are fair.\n\nWe tried to write for multiple audiences, so that both deep learning practitioners and optimization experts would find the paper ...
[ -1, -1, -1, -1, 3, 3, 3, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 4, 3, -1, -1, -1, -1 ]
[ "S1lFOBTMKH", "SyxIwzyRKH", "S1xeppfZcr", "iclr_2020_rygxdA4YPS", "iclr_2020_rygxdA4YPS", "iclr_2020_rygxdA4YPS", "iclr_2020_rygxdA4YPS", "HJxM5aPTvr", "H1lMUaMaPB", "iclr_2020_rygxdA4YPS", "iclr_2020_rygxdA4YPS" ]
iclr_2020_BJlxdCVKDB
MoET: Interpretable and Verifiable Reinforcement Learning via Mixture of Expert Trees
Deep Reinforcement Learning (DRL) has led to many recent breakthroughs on complex control tasks, such as defeating the best human player in the game of Go. However, decisions made by the DRL agent are not explainable, hindering its applicability in safety-critical settings. Viper, a recently proposed technique, constructs a decision tree policy by mimicking the DRL agent. Decision trees are interpretable as each action made can be traced back to the decision rule path that led to it. However, a single global decision tree approximating the DRL policy has significant limitations with respect to the geometry of decision boundaries. We propose MoET, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions. We propose a training procedure to support non-differentiable decision tree experts and integrate it into the imitation learning procedure of Viper. We evaluate our algorithm on four OpenAI Gym environments, and show that the policy constructed in this way is more performant and better mimics the DRL agent by lowering mispredictions and increasing the reward. We also show that MoET policies are amenable to verification using off-the-shelf automated theorem provers such as Z3.
reject
This paper aims at making a deep RL policy interpretable and verifiable by distilling the policy represented by a deep neural network into an ensemble of decision trees. This should be done without hurting the performance of the policy. The authors achieve this by extending the existing Viper algorithm. The resulting approach can imitate the deep policy better compared with Viper while preserving verifiability. Experiments show that the proposed method improves in terms of cumulative reward and error rate over Viper in four benchmark tasks. The amount of improvement over the original Viper is not convincing given the presented results. Moreover, reviewers uniformly agree that the contribution of this work is incremental. I therefore recommend to reject this paper.
test
[ "rJgr4h0jYr", "BygPnwlTtH", "ryxYG0KnsB", "Bkli1w2tiB", "HJxlR_3KjB", "Sygn5t2djB", "BkgjfBi7jB", "rygIQh9QiH", "S1esuzi7or", "Skl_trsoFr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes an extension to the Viper[1] method for interpreting and verifying deep RL policies by learning a mixture of decision trees to mimic the originally learned policy. The proposed approach can imitate the deep policy better compared with Viper while preserving verifiability. Empirically the propose...
[ 3, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_BJlxdCVKDB", "iclr_2020_BJlxdCVKDB", "HJxlR_3KjB", "rygIQh9QiH", "S1esuzi7or", "iclr_2020_BJlxdCVKDB", "Skl_trsoFr", "BygPnwlTtH", "rJgr4h0jYr", "iclr_2020_BJlxdCVKDB" ]
iclr_2020_BJe-_CNKPH
Attention Interpretability Across NLP Tasks
The attention layer in a neural network model provides insight into the model’s reasoning behind its prediction, which is otherwise usually criticized for being opaque. Recently, seemingly contradictory viewpoints have emerged about the interpretability of attention weights (Jain & Wallace, 2019; Vig & Belinkov, 2019). Amid such confusion arises the need to understand the attention mechanism more systematically. In this work, we attempt to fill this gap by giving a comprehensive explanation which justifies both kinds of observations (i.e., when attention is interpretable and when it is not). Through a series of experiments on diverse NLP tasks, we validate our observations and reinforce our claim of the interpretability of attention through manual evaluation.
reject
This paper investigates the degree to which we might view attention weights as explanatory across NLP tasks and architectures. Notably, the authors distinguish between single and "pair" sequence tasks, the latter including NLI, and generation tasks (e.g., translation). The argument here is that attention weights do not provide explanatory power for single sequence tasks like classification, but do for NLI and generation. Another notable distinction from most (although not all; see the references below) prior work on the explainability of attention mechanisms in NLP is the inclusion of transformer/self-attentive architectures. Unfortunately, the paper needs work in presentation (in particular, in Section 3) before it is ready to be published.
train
[ "SJxPKKBnjH", "S1g2myhisr", "HyeVFpFWjS", "B1g-F3YZir", "HygyhLYbiH", "H1l-cYdUFS", "r1xry0J6tB", "Hke1Vxpg9r", "HJeleOoHdS", "rylxYA-GuB" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We request the reviewer to help us improve the paper by pointing out what needs to be corrected for removing the existing confusion in the paper. We provide clarification of the reviewer’s queries below:\n\nC1: Equation 1 is taken from Jain & Wallace, (2019) whereas Equation in Prop. 4.1 is from Dauphin et al. (20...
[ -1, -1, -1, -1, -1, 6, 6, 1, -1, -1 ]
[ -1, -1, -1, -1, -1, 3, 4, 4, -1, -1 ]
[ "S1g2myhisr", "HygyhLYbiH", "r1xry0J6tB", "H1l-cYdUFS", "Hke1Vxpg9r", "iclr_2020_BJe-_CNKPH", "iclr_2020_BJe-_CNKPH", "iclr_2020_BJe-_CNKPH", "rylxYA-GuB", "iclr_2020_BJe-_CNKPH" ]
iclr_2020_rJlf_RVKwr
Sensible adversarial learning
The trade-off between robustness and standard accuracy has been consistently reported in the machine learning literature. Although the problem has been widely studied to understand and explain this trade-off, no studies have shown the possibility of a no trade-off solution. In this paper, motivated by the fact that the high dimensional distribution is poorly represented by limited data samples, we introduce sensible adversarial learning and demonstrate the synergistic effect between pursuits of natural accuracy and robustness. Specifically, we define a sensible adversary which is useful for learning a defense model and keeping a high natural accuracy simultaneously. We theoretically establish that the Bayes rule is the most robust multi-class classifier with the 0-1 loss under sensible adversarial learning. We propose a novel and efficient algorithm that trains a robust model with sensible adversarial examples, without a significant drop in natural accuracy. Our model on CIFAR10 yields state-of-the-art results against various attacks with perturbations restricted to l∞ with ε = 8/255, e.g., the robust accuracy 65.17% against PGD attacks as well as the natural accuracy 91.51%.
reject
Thanks for your detailed feedback to the reviewers, which clarified many points for us. However, there is still room for improvement; for example, convergence to a good solution needs to be further investigated. Given the high competition at ICLR 2020, this paper is unfortunately below the bar. We hope that the reviewers' comments are useful for improving the paper for potential future publication.
train
[ "HkldfOq2jB", "SJgnGnK3iS", "HJl-rJchsH", "B1xxBWchjH", "SJgCgW92jS", "ryxzhxqhjr", "B1l5_l93iS", "HkxZ-AFhsr", "S1xM4Tt3jS", "BygQh5fMoH", "Bkx94fgQqB", "Bylzgg27qB" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In the updated paper, the major changes are \n\n1) The SENSE models have been tested again with stronger PGD attacks and the results are updated in Table 1 and Table3. \nMNIST: PGD40 96.46% -> PGD500 91.74%. \nCIFAR: PGD20 65.17% -> PGD100 57.23%. \n\nThe qualities of PGD attacks are checked by Figure 10 and Fig...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2020_rJlf_RVKwr", "Bkx94fgQqB", "BygQh5fMoH", "SJgCgW92jS", "HkxZ-AFhsr", "B1l5_l93iS", "HJl-rJchsH", "S1xM4Tt3jS", "Bylzgg27qB", "iclr_2020_rJlf_RVKwr", "iclr_2020_rJlf_RVKwr", "iclr_2020_rJlf_RVKwr" ]
iclr_2020_S1eQuCVFvB
Machine Truth Serum
The wisdom of the crowd revealed a striking fact: the majority answer from a crowd is often more accurate than that of any individual expert. We observe the same story in machine learning - ensemble methods leverage this idea to combine multiple learning algorithms to obtain better classification performance. Among many popular examples is the celebrated Random Forest, which applies the majority voting rule in aggregating different decision trees to make the final prediction. Nonetheless, these aggregation rules fail when the majority is more likely to be wrong. In this paper, we extend the idea proposed in Bayesian Truth Serum that "a surprisingly more popular answer is more likely the true answer" to classification problems. The challenge for us is to define or detect when an answer should be considered "surprising". We present two machine learning aided methods which aim to reveal the truth when it is the minority instead of the majority who has the true answer. Our experiments on real-world datasets show that better classification performance can be obtained compared to always trusting the majority voting. Our proposed methods also outperform popular ensemble algorithms. Our approach can be generically applied as a subroutine in ensemble methods to replace the majority voting rule.
reject
This paper proposes a family of new methods, based on Bayesian Truth Serum, that are meant to build better ensembles from a fixed set of constituent models. Reviewers found the problem and the general research direction interesting, but none of the three of them were convinced that the proposed methods are effective in the ways that the paper claims, even after some discussion. It seems as though this paper is dealing with a problem that doesn't generally lend itself to large improvements in results, but reviewers weren't satisfied that the small observed improvements were real, and urged the authors to explore additional settings and baselines, and to offer a full significance test.
train
[ "rkeYXtsmiS", "ryxhu5omsS", "rkgMptKk5B", "BkxEYviXoS", "HkebSSfciS", "BJlDWqa2Fr", "BklV_RvRtB" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for the valuable feedback.\n\nQuestion 1 (“Unless I am mistaken, the authors use a more powerful…...”): HMTS only uses MLP regressors to predict the information \"``how many other classifiers would agree with themselves\". We use linear regressors in HMTS instead of MLPs and obtain almost the s...
[ -1, -1, 6, -1, -1, 1, 3 ]
[ -1, -1, 3, -1, -1, 4, 1 ]
[ "BklV_RvRtB", "rkgMptKk5B", "iclr_2020_S1eQuCVFvB", "BJlDWqa2Fr", "iclr_2020_S1eQuCVFvB", "iclr_2020_S1eQuCVFvB", "iclr_2020_S1eQuCVFvB" ]
iclr_2020_SklE_CNFPr
Zeroth Order Optimization by a Mixture of Evolution Strategies
Evolution strategies, or zeroth-order optimization algorithms, have become popular in areas of optimization and machine learning where only an oracle of function value evaluations is available. The central idea in the design of these algorithms is to query function values of perturbed points in the neighborhood of the current update and construct a pseudo-gradient using those function values. In recent years, there has been growing interest in developing new ways of perturbation. Though the new perturbation methods are well motivated, most of them are criticized for a lack of convergence guarantees even when the underlying function is convex. Perhaps the only methods that enjoy convergence guarantees are the ones that sample the perturbed points uniformly from a unit sphere or from a multivariate Gaussian distribution with an isotropic covariance. In this work, we tackle the non-convergence issue and propose sampling perturbed points from a mixture of distributions. Experiments show that our proposed method can identify the best perturbation scheme for convergence and might also help to leverage the complementarity of different perturbation schemes.
reject
The paper proposes an adaptive sampling mechanism for zeroth order optimization that samples perturbed points from a mixture distribution with asymptotic convergence guarantees. The reviewers raised issues regarding the clarity of presentation, potential problems with the proofs, and simplicity of the experimental setup. The authors did not provide a response. Overall, the reviewers agree that the quality of the paper is not sufficient for publishing, and therefore I recommend rejection.
train
[ "BylC0IXsYr", "H1gVIxe6KS", "SyggtDwptr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The submission is proposing an adaptive derivative-free optimization method. In this method, the sampling covariances are adapted between different covariance adaptation heuristics. The main intuition is seeing each algorithm as an arm in multi-armed bandit (MAB) setting. Moreover, authors use EXP3.P as an online ...
[ 3, 1, 1 ]
[ 4, 3, 5 ]
[ "iclr_2020_SklE_CNFPr", "iclr_2020_SklE_CNFPr", "iclr_2020_SklE_CNFPr" ]
iclr_2020_rJlVdREKDS
Learning from Imperfect Annotations: An End-to-End Approach
Many machine learning systems today are trained on large amounts of human-annotated data. Annotation tasks that require a high level of competency make data acquisition expensive, while the resulting labels are often subjective, inconsistent, and may contain a variety of human biases. To improve data quality, practitioners often need to collect multiple annotations per example and aggregate them before training models. Such a multi-stage approach results in redundant annotations and may often produce imperfect ``ground truth'' labels that limit the potential of training supervised machine learning models. We propose a new end-to-end framework that enables us to: (i) merge the aggregation step with model training, thus allowing deep learning systems to learn to predict ground truth estimates directly from the available data, and (ii) model difficulties of examples and learn representations of the annotators that allow us to estimate and take into account their competencies. Our approach is general and has many applications, including training more accurate models on crowdsourced data, ensemble learning, as well as classifier accuracy estimation from unlabeled data. We conduct an extensive experimental evaluation of our method on 5 crowdsourcing datasets of varied difficulty and show accuracy gains of up to 25% over the current state-of-the-art approaches for aggregating annotations, as well as significant reductions in the required annotation redundancy.
reject
The paper introduces a novel way of jointly modeling annotator competencies and learning from imperfect annotations. Reviewers were moderately positive. One reviewer mentioned Carpenter (2002) and subsequent work. One prominent example of this line of work, which the authors do not cite, is: https://www.isi.edu/publications/licensed-sw/mace/ - from 2013. I encourage the authors to cite this paper. In the discussion, the authors point out this type of work is not *end-to-end* in their sense. However, there's, to the best of my knowledge, a relatively big body of literature on end-to-end approaches that the authors completely ignore, e.g., [0-3]. In the absence of a discussion of this work, it is hard to accept the paper. [0] https://link.springer.com/article/10.1007/s10994-013-5411-2 [1] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7405343 [2] http://www.cs.utexas.edu/~atn/nguyen-acl17.pdf [3] https://arxiv.org/pdf/1803.04223.pdf
val
[ "rye3iuF1qH", "SkeUOsCHir", "ryeQRjRSjr", "HJeFisASiB", "S1l5NiASsr", "Hkg9lOl6tr", "SklG7TV3cr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update after author response: \nI would like to thank the authors for their thoughtful response, and for addressing some of the concerns raised by the reviewers. One of my main complaints arose from a misunderstanding that none of the baselines model worker competencies and task difficulty. The authors clarified t...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_rJlVdREKDS", "rye3iuF1qH", "iclr_2020_rJlVdREKDS", "Hkg9lOl6tr", "SklG7TV3cr", "iclr_2020_rJlVdREKDS", "iclr_2020_rJlVdREKDS" ]
iclr_2020_HkxSOAEFDB
Octave Graph Convolutional Network
Many variants of Graph Convolutional Networks (GCNs) for representation learning have been proposed recently and have achieved fruitful results in various domains. Among them, spectral-based GCNs are constructed via convolution theorem upon theoretical foundation from the perspective of Graph Signal Processing (GSP). However, despite most of them implicitly act as low-pass filters that generate smooth representations for each node, there is limited development on the full usage of underlying information from low-frequency. Here, we first introduce the octave convolution on graphs in spectral domain. Accordingly, we present Octave Graph Convolutional Network (OctGCN), a novel architecture that learns representations for different frequency components regarding to weighted filters and graph wavelets bases. We empirically validate the importance of low-frequency components in graph signals on semi-supervised node classification and demonstrate that our model achieves state-of-the-art performance in comparison with both spectral-based and spatial-based baselines.
reject
Two reviewers are negative on this paper while the other one is slightly positive. Overall, the paper does not meet the bar of ICLR, and thus a reject is recommended.
val
[ "r1g_ALQoiB", "Skgfi8XosS", "SJxJ7L7sjS", "rJlkKoAe5r", "rygxsZzmqH", "BJgIcnIOcB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your earnest manner on review and we appreciate it very much!\nIndeed, both the different weighting between low vs high-frequency components and fewer dependencies across variables contribute to better performance. \n1. By assigning different weights between low vs high, the different importance of l...
[ -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, 1, 3, 5 ]
[ "rJlkKoAe5r", "rygxsZzmqH", "BJgIcnIOcB", "iclr_2020_HkxSOAEFDB", "iclr_2020_HkxSOAEFDB", "iclr_2020_HkxSOAEFDB" ]
iclr_2020_HkeSdCEtDS
Alternating Recurrent Dialog Model with Large-Scale Pre-Trained Language Models
Existing dialog system models require extensive human annotations and are difficult to generalize to different tasks. The recent success of large pre-trained language models such as BERT and GPT-2 have suggested the effectiveness of incorporating language priors in down-stream NLP tasks. However, how much pre-trained language models can help dialog response generation is still under exploration. In this paper, we propose a simple, general, and effective framework: Alternating Recurrent Dialog Model (ARDM). ARDM models each speaker separately and takes advantage of the large pre-trained language model. It requires no supervision from human annotations such as belief states or dialog acts to achieve effective conversations. ARDM outperforms or is on par with state-of-the-art methods on two popular task-oriented dialog datasets: CamRest676 and MultiWOZ. Moreover, we can generalize ARDM to more challenging, non-collaborative tasks such as persuasion. In persuasion tasks, ARDM is capable of generating human-like responses to persuade people to donate to a charity.
reject
This paper proposes an alternating dialog model based on transformers and GPT-2 that models each conversation side separately and aims to eliminate human supervision. Results on two dialog corpora are either better than or comparable to the state of the art. Two of the reviewers raise concerns about the novel contributions of the paper, and did not change their scores after the authors' rebuttal. Furthermore, one reviewer raises concerns about the lack of detailed experiments aiming to explain where the improvements come from. Hence, I suggest rejecting the paper.
train
[ "HygwBZVGoB", "rJx2akNfiS", "rkl94NXfjB", "ryxMCQmfoB", "Bygq23JFFS", "HJgL0TmFKB", "B1g7ZMmRYS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "https://colab.research.google.com/drive/1cd-jTWqMnQST4vmnz1Ygb0VDkqu_jXDZ", "First, thank you for the reviews. Sorry for the confusion on the name GPT-2 in Table 1. We will update the entry named, GPT-2 in Table 1 to GPT-2-Finetune. Here, it stands for the GPT-2 fine-tuned on CamRest676. Without fine-tuning on ...
[ -1, -1, -1, -1, 1, 3, 8 ]
[ -1, -1, -1, -1, 5, 5, 5 ]
[ "iclr_2020_HkeSdCEtDS", "HJgL0TmFKB", "Bygq23JFFS", "B1g7ZMmRYS", "iclr_2020_HkeSdCEtDS", "iclr_2020_HkeSdCEtDS", "iclr_2020_HkeSdCEtDS" ]
iclr_2020_SygLu0VtPH
Deep Innovation Protection
Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels. This paper presents a method called Deep Innovation Protection (DIP) that allows training complex world models end-to-end for such 3D environments. The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in a world model, allowing other components to adapt. We investigate the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss.
reject
This paper is a very borderline case. Mixed reviews. R2 score originally 4, moved to 5 (rounded up to WA 6), but still borderline. R1 was 6 (WA) and R3 was 3 (WR). R2 expert on this topic, R1 and R3 less so. AC has carefully read the reviews/rebuttal/comments and looked closely at the paper. AC feels that R2's review is spot on and that the contribution does not quite reach ICLR acceptance level, despite it being interesting work. So the AC feels the paper cannot be accepted at this time. But the work is definitely interesting -- the authors should improve their paper using R2's comments and resubmit.
test
[ "SkxpY7ZkqS", "ByeJ65nuiS", "BJxOe63UjH", "rkg_98t8sB", "BJlQufDPur", "HkeI7qdCKH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Review of “Deep Innovation Protection”\n\nThis paper builds on top of the World Models line of work that explores in more detail the role of evolutionary computing (as opposed to latent-modeling and planning direction like in Hafner2018) for such model-based architectures. Risi2019 is the first work that is able t...
[ 6, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, 3, 3 ]
[ "iclr_2020_SygLu0VtPH", "BJlQufDPur", "HkeI7qdCKH", "SkxpY7ZkqS", "iclr_2020_SygLu0VtPH", "iclr_2020_SygLu0VtPH" ]
iclr_2020_S1xLuRVFvr
Visual Explanation for Deep Metric Learning
This work explores the visual explanation for deep metric learning and its applications. As an important problem for learning representation, metric learning has attracted much attention recently, while the interpretation of such model is not as well studied as classification. To this end, we propose an intuitive idea to show where contributes the most to the overall similarity of two input images by decomposing the final activation. Instead of only providing the overall activation map of each image, we propose to generate point-to-point activation intensity between two images so that the relationship between different regions is uncovered. We show that the proposed framework can be directly deployed to a large range of metric learning applications and provides valuable information for understanding the model. Furthermore, our experiments show its effectiveness on two potential applications, i.e. cross-view pattern discovery and interactive retrieval.
reject
This submission proposes a method for providing visual explanations for why two images match by highlighting image regions that most contribute to similarity. Reviewers agreed that the problem is interesting but were divided on the degree of novelty of the proposed approach. AC shares R1’s concern that localization accuracy is not satisfactory as a quantitative measure of the quality of the explanations. In particular, it pre-supposes what the explanations ought to be, i.e. that a good explanation means good localization. A small user-study would be more convincing. A more convincing evaluation would also include a study of explanation of image pairs with different degrees of similarity (e.g. images that are dissimilar as well as images with the same object). AC also shares R2’s concern about the validity of the model diagnosis application. This discussion also relies on the assumption that better localization of the whole object means a better explanation. Further, the highlighted regions in Figure 5 are very similar. Once again, a user study would help to indicate whether these results really do improve explainability. Reviewers also had concerns about missing details and, while the authors did improve this, key details are still missing. For example, the localization method that was used was only referenced but should be described in the paper itself. Given that several concerns remain, AC recommends rejection.
train
[ "SylvE-sx5r", "SJek1lqijr", "S1g8pVv7iH", "B1eYFEwmjB", "SyxLmVwmjS", "S1x6hXwXsr", "Syx91gcotr", "B1gxS8ARFS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "= Summary \nThis paper presents a simple method that draws visual attention of deep embedding networks for metric learning. It basically follows the class attention mapping strategy based on global pooling operation [Zhou et al., CVPR 2016], but extends the original version to point-specific attention which is nov...
[ 6, -1, -1, -1, -1, -1, 3, 8 ]
[ 4, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_S1xLuRVFvr", "iclr_2020_S1xLuRVFvr", "Syx91gcotr", "B1gxS8ARFS", "S1x6hXwXsr", "SylvE-sx5r", "iclr_2020_S1xLuRVFvr", "iclr_2020_S1xLuRVFvr" ]
iclr_2020_SJlPOCEKvH
Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning
Universal feature extractors, such as BERT for natural language processing and VGG for computer vision, have become effective methods for improving deep learning models without requiring more labeled data. A common paradigm is to pre-train a feature extractor on large amounts of data then fine-tune it as part of a deep learning model on some downstream task (i.e. transfer learning). While effective, feature extractors like BERT may be prohibitively large for some deployment scenarios. We explore weight pruning for BERT and ask: how does compression during pre-training affect transfer learning? We find that pruning affects transfer learning in three broad regimes. Low levels of pruning (30-40%) do not affect pre-training loss or transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks. High levels of pruning additionally prevent models from fitting downstream datasets, leading to further degradation. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability. We conclude that BERT can be pruned once during pre-training rather than separately for each task without affecting performance.
reject
This work explores weight pruning for BERT across three broad regimes of transfer learning: low, medium, and high. Overall, the paper is well written and clearly explained, and the goal of efficient training and inference is meaningful. The reviewers' major concerns about this work are its technical innovation and value to the community: applying pruning to BERT is not new from a technical perspective, the improvement in pruning ratio is marginal compared to other compression methods for BERT, and the introduced sparsity hinders efficient computation on modern hardware such as GPUs. The rebuttal failed to answer a majority of these important concerns. Hence I recommend rejection.
val
[ "S1lvPpVDsr", "HkxT-iVDor", "rklUVnVDiH", "ByeKAjOzir", "BJeQOh2kjr", "HJeqpvgaYr", "H1xSiSF6tr", "BJeAGC3pYS" ]
[ "author", "author", "author", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for taking the time to review our paper! We will try to answer your questions in order:\n\n1a. How did we pick 3 epochs for fine-tuning? Would a different amount of fine-tuning help?\n\n3 epochs was the amount of fine-tuning used in the original BERT paper. However, we also tried fine-tuning between 1 an...
[ -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "HJeqpvgaYr", "BJeAGC3pYS", "H1xSiSF6tr", "iclr_2020_SJlPOCEKvH", "iclr_2020_SJlPOCEKvH", "iclr_2020_SJlPOCEKvH", "iclr_2020_SJlPOCEKvH", "iclr_2020_SJlPOCEKvH" ]
iclr_2020_HkePOCNtPH
Non-Sequential Melody Generation
In this paper we present a method for algorithmic melody generation using a generative adversarial network without recurrent components. Music generation has been successfully done using recurrent neural networks, where the model learns sequence information that can help create authentic sounding melodies. Here, we use DCGAN architecture with dilated convolutions and towers to capture sequential information as spatial image information, and learn long-range dependencies in fixed-length melody forms such as Irish traditional reel.
reject
All the reviewers pointed out issues with the experiments, which the rebuttal did not address. The paper seems interesting, and the authors are encouraged to improve it and resubmit.
train
[ "HJenBTmnor", "rJlnf6XhjB", "S1eWe673sB", "SkxwB_uvKB", "S1eEwK92Fr", "HygOPyIfqr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your feedback. We appreciate the meticulousness you put into reading the paper, and will use add those changes in the revision.\n\nWe were also wondering if you had any ideas of what metrics might be better than the note distribution? We opted to use this, as in the beginning of the project, the gene...
[ -1, -1, -1, 1, 3, 1 ]
[ -1, -1, -1, 5, 3, 3 ]
[ "SkxwB_uvKB", "S1eEwK92Fr", "HygOPyIfqr", "iclr_2020_HkePOCNtPH", "iclr_2020_HkePOCNtPH", "iclr_2020_HkePOCNtPH" ]
iclr_2020_S1e__ANKvB
Molecular Graph Enhanced Transformer for Retrosynthesis Prediction
With massive possible synthetic routes in chemistry, retrosynthesis prediction is still a challenge for researchers. Recently, retrosynthesis prediction is formulated as a Machine Translation (MT) task. Namely, since each molecule can be represented as a Simplified Molecular-Input Line-Entry System (SMILES) string, the process of synthesis is analogized to a process of language translation from reactants to products. However, the MT models that applied on SMILES data usually ignore the information of natural atomic connections and the topology of molecules. In this paper, we propose a Graph Enhanced Transformer (GET) framework, which adopts both the sequential and graphical information of molecules. Four different GET designs are proposed, which fuse the SMILES representations with atom embedding learned from our improved Graph Neural Network (GNN). Empirical results show that our model significantly outperforms the Transformer model in test accuracy.
reject
Several approaches can be used to feed structured data to a neural network, such as convolutions or recurrent networks. This paper proposes to combine both routes by presenting molecular structures to the network using both their graph structure and a serialized representation (SMILES), processed by a framework combining the strengths of Graph Neural Networks and the sequential transformer architecture. The technical quality of the paper seems good, with R1 commenting on the performance relative to SOTA seq2seq-based methods and R3 commenting on the benefits of using more plausible constraints. The problem of using data with complex structure is highly relevant for ICLR. However, the novelty was deemed on the low side. For a very competitive conference, this is one of the key aspects necessary for successful ICLR papers. All reviewers agree that the novelty is too low for the current (high) bar of ICLR.
test
[ "B1xi4igOjS", "S1esjRzdir", "ryl3TdMOsr", "rygfa6a_Fr", "HygJspHTFB", "H1gIxC_m9H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review. \n(1) For retrosynthesis prediction, our work is the first attempt to utilize both the graphical and sequential information of molecules. We fully investigate four organization forms of graph neural network (GNN) with Transformer and significantly improve the prediction performance in...
[ -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, 4, 4, 5 ]
[ "rygfa6a_Fr", "H1gIxC_m9H", "HygJspHTFB", "iclr_2020_S1e__ANKvB", "iclr_2020_S1e__ANKvB", "iclr_2020_S1e__ANKvB" ]
iclr_2020_Sked_0EYwB
Objective Mismatch in Model-based Reinforcement Learning
Model-based reinforcement learning (MBRL) has been shown to be a powerful framework for data-efficiently learning control of continuous tasks. Recent work in MBRL has mostly focused on using more advanced function approximators and planning schemes, leaving the general framework virtually unchanged since its conception. In this paper, we identify a fundamental issue of the standard MBRL framework -- what we call the objective mismatch issue. Objective mismatch arises when one objective is optimized in the hope that a second, often uncorrelated, metric will also be optimized. In the context of MBRL, we characterize the objective mismatch between training the forward dynamics model w.r.t. the likelihood of the one-step ahead prediction, and the overall goal of improving performance on a downstream control task. For example, this issue can emerge with the realization that dynamics models effective for a specific task do not necessarily need to be globally accurate, and vice versa globally accurate models might not be sufficiently accurate locally to obtain good control performance on a specific task. In our experiments, we study this objective mismatch issue and demonstrate that the likelihood of the one-step ahead prediction is not always correlated with downstream control performance. This observation highlights a critical flaw in the current MBRL framework which will require further research to be fully understood and addressed. We propose an initial method to mitigate the mismatch issue by re-weighting dynamics model training. Building on it, we conclude with a discussion about other potential directions of future research for addressing this issue.
reject
As the reviewers point out, this paper has potentially interesting ideas, but it is in too preliminary a state for publication at ICLR.
train
[ "S1eiQv52tB", "SkxJThTOiB", "rJljisTOsr", "H1glEjaOiH", "HJePeiTuiS", "Ske8EvwptS", "B1xf9mFAtr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper \"OBJECTIVE MISMATCH IN MODEL-BASED REINFORCEMENT LEARNING\" explores the relationships between model optimization and control improvement in model-based reinforcement learning. While it is an interesting problem, the paper fails at demonstrating really useful effects, and the writting needs to be greatl...
[ 3, -1, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_Sked_0EYwB", "Ske8EvwptS", "B1xf9mFAtr", "S1eiQv52tB", "S1eiQv52tB", "iclr_2020_Sked_0EYwB", "iclr_2020_Sked_0EYwB" ]
iclr_2020_rJg9OANFwS
Topic Models with Survival Supervision: Archetypal Analysis and Neural Approaches
We introduce two approaches to topic modeling supervised by survival analysis. Both approaches predict time-to-event outcomes while simultaneously learning topics over features that help prediction. The high-level idea is to represent each data point as a distribution over topics using some underlying topic model. Then each data point's distribution over topics is fed as input to a survival model. The topic and survival models are jointly learned. The two approaches we propose differ in the generality of topic models they can learn. The first approach finds topics via archetypal analysis, a nonnegative matrix factorization method that optimizes over a wide class of topic models encompassing latent Dirichlet allocation (LDA), correlated topic models, and topic models based on the ``anchor word'' assumption; the resulting survival-supervised variant solves an alternating minimization problem. Our second approach builds on recent work that approximates LDA in a neural net framework. We add a survival loss layer to this neural net to form an approximation to survival-supervised LDA. Both of our approaches can be combined with a variety of survival models. We demonstrate our approach on two survival datasets, showing that survival-supervised topic models can achieve competitive time-to-event prediction accuracy while outputting clinically interpretable topics.
reject
The paper proposes two approaches to topic modeling supervised by survival analysis. The reviewers find problems in the novelty, algorithm, and experiments, so the paper is not ready for publication.
test
[ "H1lPoeVcsH", "H1gjFlNcjB", "rkxKvgNqoS", "r1eNf14FKH", "Skx9cEOatH", "r1x8ZfACFB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the feedback. Please see our responses to the other reviewers.", "Thanks for the very detailed review!\n\nIn terms of methods development, we agree that what we are proposing is not a major advance; we are just showing how the approach by Dawson and Kendziorski (2012) can easily be incorporated to mor...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 4, 3, 1 ]
[ "r1x8ZfACFB", "r1eNf14FKH", "Skx9cEOatH", "iclr_2020_rJg9OANFwS", "iclr_2020_rJg9OANFwS", "iclr_2020_rJg9OANFwS" ]
iclr_2020_rJe5_CNtPB
Attention Forcing for Sequence-to-sequence Model Training
Auto-regressive sequence-to-sequence models with attention mechanism have achieved state-of-the-art performance in many tasks such as machine translation and speech synthesis. These models can be difficult to train. The standard approach, teacher forcing, guides a model with reference output history during training. The problem is that the model is unlikely to recover from its mistakes during inference, where the reference output is replaced by generated output. Several approaches deal with this problem, largely by guiding the model with generated output history. To make training stable, these approaches often require a heuristic schedule or an auxiliary classifier. This paper introduces attention forcing, which guides the model with generated output history and reference attention. This approach can train the model to recover from its mistakes, in a stable fashion, without the need for a schedule or a classifier. In addition, it allows the model to generate output sequences aligned with the references, which can be important for cascaded systems like many speech synthesis systems. Experiments on speech synthesis show that attention forcing yields significant performance gain. Experiments on machine translation show that for tasks where various re-orderings of the output are valid, guiding the model with generated output history is challenging, while guiding the model with reference attention is beneficial.
reject
The paper proposes an attention-forcing algorithm that guides sequence-to-sequence model training to make it more stable. But as pointed out by the reviewers, the proposed method requires alignments which are normally unavailable. The solution to address that is using another teacher-forcing model, which can be expensive. The major concern about this paper is that the experimental justification is insufficient: * lack of evaluations of the proposed method on different tasks; * lack of experiments on understanding how it interacts with existing techniques such as scheduled sampling; * lack of comparisons to related existing supervised attention mechanisms.
train
[ "BJejk40KFr", "BkgF-WE2jH", "S1lI5J4niS", "SkgobyN2iB", "r1gXxa_TtB", "SJgSxr_k5B", "SylwnXl-qH", "HylOKK_MOB", "Syxg5YLMur", "HJx5g3HzdS", "ryl-yVgWuB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "public" ]
[ "This paper proposes an alternative mechanism of training the attention values of a sequence to sequence learning model as applied to tasks like speech synthesis and translation. During training they compute two forms of attention: (1) the standard soft-attention from a decoder fed with teacher forced output, and ...
[ 1, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1 ]
[ 5, -1, -1, -1, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2020_rJe5_CNtPB", "BJejk40KFr", "r1gXxa_TtB", "SJgSxr_k5B", "iclr_2020_rJe5_CNtPB", "iclr_2020_rJe5_CNtPB", "iclr_2020_rJe5_CNtPB", "Syxg5YLMur", "HJx5g3HzdS", "ryl-yVgWuB", "iclr_2020_rJe5_CNtPB" ]
iclr_2020_rJeidA4KvS
Role-Wise Data Augmentation for Knowledge Distillation
Knowledge Distillation (KD) is a common method for transferring the ``knowledge'' learned by one machine learning model (the teacher) into another model (the student), where typically, the teacher has a greater capacity (e.g., more parameters or higher bit-widths). To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated. Due to the difference in model capacities, the student may not benefit fully from the same data points on which the teacher is trained. On the other hand, a human teacher may demonstrate a piece of knowledge with individualized examples adapted to a particular student, for instance, in terms of her cultural background and interests. Inspired by this behavior, we design data augmentation agents with distinct roles to facilitate knowledge distillation. Our data augmentation agents generate distinct training data for the teacher and student, respectively. We focus specifically on KD when the teacher network has greater precision (bit-width) than the student network. We find empirically that specially tailored data points enable the teacher's knowledge to be demonstrated more effectively to the student. We compare our approach with existing KD methods on training popular neural architectures and demonstrate that role-wise data augmentation improves the effectiveness of KD over strong prior approaches. The code for reproducing our results will be made publicly available.
reject
This paper studies Population-Based Augmentation in the context of knowledge distillation (KD) and proposes a role-wise data augmentation scheme for improved KD. While the reviewers believe that there is some merit in the proposed approach, its incremental nature and inherent complexity require a cleaner exposition and a stronger empirical evaluation on additional datasets. I will hence recommend the rejection of this manuscript in its current state. Nevertheless, applying PBA to KD seems to be an interesting direction, and we encourage the authors to add the missing experiments and to carefully incorporate the reviewer feedback to improve the manuscript.
train
[ "BklACH1FjS", "B1g1nS1KoB", "rJlfcBJFiB", "HJe4US1FoH", "BklZhUMtKH", "SJlEVGdpFB", "HyxSscotqB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. We are in the process of conducting more experiments including running experiments on a larger scale and more complex datasets such as ImageNet.\n2. We provide II-KD as part of our implementation details, rather than the main contribution. We believe it is fair to use it so that the baseline systems we compare ...
[ -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, 4, 3, 1 ]
[ "iclr_2020_rJeidA4KvS", "BklZhUMtKH", "SJlEVGdpFB", "HyxSscotqB", "iclr_2020_rJeidA4KvS", "iclr_2020_rJeidA4KvS", "iclr_2020_rJeidA4KvS" ]
iclr_2020_BJx3_0VKPB
On the Unintended Social Bias of Training Language Generation Models with News Articles
There are concerns that neural language models may preserve some of the stereotypes of the underlying societies that generate the large corpora needed to train these models. For example, gender bias is a significant problem when generating text, and its unintended memorization could impact the user experience of many applications (e.g., the smart-compose feature in Gmail). In this paper, we introduce a novel architecture that decouples the representation learning of a neural model from its memory management role. This architecture allows us to update a memory module with an equal ratio across gender types addressing biased correlations directly in the latent space. We experimentally show that our approach can mitigate the gender bias amplification in the automatic generation of articles news while providing similar perplexity values when extending the Sequence2Sequence architecture.
reject
The reviewers had a hard time fully identifying the intended contribution behind this paper, and raised concerns that suggest that the experimental results are not sufficient to justify any substantial contribution with the level of certainty that would warrant publication at a top venue. The authors have not responded, and the concerns are serious, so I have no choice but to reject this paper despite its potentially valuable topic.
train
[ "rJelztl6FB", "Ske6R2YaYB", "ryl7v3dJcB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nSummary:\nThe authors propose using attention with Fair Region in Memory Network for reducing gender bias in language generation. \n\nDecision:\nOverall, the paper is poorly written and lack of enough substance, even though the idea seems reasonable. I am inclined to reject this paper. \n\nSupporting argument:\n...
[ 1, 3, 1 ]
[ 3, 3, 5 ]
[ "iclr_2020_BJx3_0VKPB", "iclr_2020_BJx3_0VKPB", "iclr_2020_BJx3_0VKPB" ]
iclr_2020_ryen_CEFwr
Unsupervised Disentanglement of Pose, Appearance and Background from Images and Videos
Unsupervised landmark learning is the task of learning semantic keypoint-like representations without the use of expensive keypoint-level annotations. A popular approach is to factorize an image into a pose and appearance data stream, then to reconstruct the image from the factorized components. The pose representation should capture a set of consistent and tightly localized landmarks in order to facilitate reconstruction of the input image. Ultimately, we wish for our learned landmarks to focus on the foreground object of interest. However, the reconstruction task of the entire image forces the model to allocate landmarks to model the background. This work explores the effects of factorizing the reconstruction task into separate foreground and background reconstructions, conditioning only the foreground reconstruction on the unsupervised landmarks. Our experiments demonstrate that the proposed factorization results in landmarks that are focused on the foreground object of interest. Furthermore, the rendered background quality is also improved, as the background rendering pipeline no longer requires the ill-suited landmarks to model its pose and appearance. We demonstrate this improvement in the context of the video-prediction.
reject
The paper proposes an approach for unsupervised learning of keypoint landmarks from images and videos by decomposing them into the foreground and static background. The technical approach builds upon related prior works such as Lorenz et al. 2019 and Jakab et al. 2018 by extending them with foreground/background separation. The proposed method works well for static background achieving strong pose prediction results. The weaknesses of the paper are that (1) the proposed method is a fairly reasonable but incremental extension of existing techniques; (2) it relies on a strong assumption on the property of static backgrounds; (3) video prediction results are of limited significance and scope. In particular, the proposed method may work for simple data like KTH but is very limited for modeling videos as it is not well-suited to handle moving backgrounds, interactions between objects (e.g., robot arm in the foreground and objects in the background), and stochasticity.
train
[ "H1lpF3LviS", "rJeztpIPiB", "B1x7BjLwsr", "SylWFdIwoS", "SkeSv4c_FB", "BJlyQwWpYS", "BJxhp7OpKS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your detailed review and constructive comments.\n\nMatching the performance of Lorenz et al.:\nWe would first like to note that the primary comparison should be made against our internal baselines, as only in that case are the base architectures, losses, and data augmentations fully controlled.\n\nHe...
[ -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, 3, 3, 4 ]
[ "BJlyQwWpYS", "SkeSv4c_FB", "BJxhp7OpKS", "iclr_2020_ryen_CEFwr", "iclr_2020_ryen_CEFwr", "iclr_2020_ryen_CEFwr", "iclr_2020_ryen_CEFwr" ]
iclr_2020_BkxadR4KvS
Insights on Visual Representations for Embodied Navigation Tasks
Recent advances in deep reinforcement learning require a large amount of training data and generally result in representations that are often over specialized to the target task. In this work, we study the underlying potential causes for this specialization by measuring the similarity between representations trained on related, but distinct tasks. We use the recently proposed projection weighted Canonical Correlation Analysis (PWCCA) to examine the task dependence of visual representations learned across different embodied navigation tasks. Surprisingly, we find that slight differences in task have no measurable effect on the visual representation for both SqueezeNet and ResNet architectures. We then empirically demonstrate that visual representations learned on one task can be effectively transferred to a different task. Interestingly, we show that if the tasks constrain the agent to spatially disjoint parts of the environment, differences in representation emerge for SqueezeNet models but less-so for ResNets, suggesting that ResNets feature inductive biases which encourage more task-agnostic representations, even in the context of spatially separated tasks. We generalize our analysis to examine permutations of an environment and find, surprisingly, permutations of an environment also do not influence the visual representation. Our analysis provides insight on the overfitting of representations in RL and provides suggestions of how to design tasks that induce task-agnostic representations.
reject
The general consensus amongst the reviewers is that this paper is not quite ready for publication, and needs to dig a little deeper in some areas. Some reviewers thought the contributions are unclear, or unsupported. I hope these reviews will help you as you work towards finding a home for this work.
train
[ "SJxdi0zh5S", "rklB5o4nor", "SJxSui4hsH", "H1e7NiE2iS", "H1g06OP9YH", "Skgj9UVnYr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper tries to analyze the similarities and transferring abilities of learned visual representations for embodied navigation tasks. It uses PWCCA to measure the similarity. There are some interesting observations by smart experimental designing. \n\nI have several concerns.\n\n- for the non-disjoint experime...
[ 3, -1, -1, -1, 3, 3 ]
[ 1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_BkxadR4KvS", "H1g06OP9YH", "Skgj9UVnYr", "SJxdi0zh5S", "iclr_2020_BkxadR4KvS", "iclr_2020_BkxadR4KvS" ]
iclr_2020_H1xauR4Kvr
The Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions
Deep learning models achieve high predictive accuracy in a broad spectrum of tasks, but rigorously quantifying their predictive uncertainty remains challenging. Usable estimates of predictive uncertainty should (1) cover the true prediction target with a high probability, and (2) discriminate between high- and low-confidence prediction instances. State-of-the-art methods for uncertainty quantification are based predominantly on Bayesian neural networks. However, Bayesian methods may fall short of (1) and (2) — i.e., Bayesian credible intervals do not guarantee frequentist coverage, and approximate posterior inference may undermine discriminative accuracy. To this end, this paper tackles the following question: can we devise an alternative frequentist approach for uncertainty quantification that satisfies (1) and (2)? To address this question, we develop the discriminative jackknife (DJ), a formal inference procedure that constructs predictive confidence intervals for a wide range of deep learning models, is easy to implement, and provides rigorous theoretical guarantees on (1) and (2). The DJ procedure uses higher-order influence functions (HOIFs) of the trained model parameters to construct a jackknife (leave-one-out) estimator of predictive confidence intervals. DJ computes HOIFs using a recursive formula that requires only oracle access to loss gradients and Hessian-vector products, hence it can be applied in a post-hoc fashion without compromising model accuracy or interfering with model training. Experiments demonstrate that DJ performs competitively compared to existing Bayesian and non-Bayesian baselines.
reject
In this work, the authors develop a method for providing frequentist confidence intervals for a range of deep learning models with coverage guarantees. While deep learning models are being used pervasively, providing reasonable uncertainty estimates from these models remains challenging and an important open problem. Here, the authors argue that frequentist statistics can provide confidence intervals along with rigorous guarantees on their quality. They develop a jack-knife based procedure for deep learning. The reviews for this paper were all borderline, with two weak accepts and two weak rejects (one reviewer was added to provide an additional viewpoint). The reviewers all thought that the proposed methodology seemed sensible and well motivated. Among the cited issues, major topics of discussion were the close relation to related work (some of which is very recent, Giordano et al.) and that the reviewers felt the baselines were too weak (or weakly tuned). The reviewers ultimately did not seem convinced enough by the author rebuttal to raise their scores during discussion and there was no reviewer really willing to champion the paper for acceptance. Unfortunately, this paper falls below the bar for acceptance. It seems clear that there is compelling work here and addressing the reviewer comments (relation to related work, i.e. Robbins, Giordano and stronger baselines) would make the paper much stronger for a future submission.
train
[ "rJeuhP8LsS", "H1xbnXU8iH", "B1eSv7I8sS", "HJxMy0mLjr", "SyeSqJBIjB", "HyxLMA7IoH", "Hye8q5X8sr", "BJe50L_9FH", "rkl5mh7k9S", "H1xDF_cEqH", "rJgJLYo39H" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Below are point-by-point responses to all minor comments.\n\n*Minor comments *\n\nA5. We meant by the statement on page 3 that our analysis is oblivious to the training method used as long as it retrieves a local minimum. In the final manuscript, we will be careful to make a clear exclusion of the pathological opt...
[ -1, -1, -1, -1, -1, -1, -1, 3, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 1, 1, 3 ]
[ "H1xbnXU8iH", "B1eSv7I8sS", "H1xDF_cEqH", "rJgJLYo39H", "BJe50L_9FH", "HJxMy0mLjr", "rkl5mh7k9S", "iclr_2020_H1xauR4Kvr", "iclr_2020_H1xauR4Kvr", "iclr_2020_H1xauR4Kvr", "iclr_2020_H1xauR4Kvr" ]
iclr_2020_SyeyF0VtDr
Recurrent Event Network : Global Structure Inference Over Temporal Knowledge Graph
Modeling dynamically-evolving, multi-relational graph data has received a surge of interests with the rapid growth of heterogeneous event data. However, predicting future events on such data requires global structure inference over time and the ability to integrate temporal and structural information, which are not yet well understood. We present Recurrent Event Network (RE-Net), a novel autoregressive architecture for modeling temporal sequences of multi-relational graphs (e.g., temporal knowledge graph), which can perform sequential, global structure inference over future time stamps to predict new events. RE-Net employs a recurrent event encoder to model the temporally conditioned joint probability distribution for the event sequences, and equips the event encoder with a neighborhood aggregator for modeling the concurrent events within a time window associated with each entity. We apply teacher forcing for model training over historical data, and infer graph sequences over future time stamps by sampling from the learned joint distribution in a sequential manner. We evaluate the proposed method via temporal link prediction on five public datasets. Extensive experiments demonstrate the strength of RE-Net, especially on multi-step inference over future time stamps.
reject
The paper proposes a recurrent, autoregressive architecture to model temporal knowledge graphs and perform multi-time-step inference in the form of future link prediction. However, the reviewers feel that the paper is more of a straight application of existing techniques. Furthermore, a better presentation of the experimental section would also help improve the paper.
train
[ "SkeU4A5UjB", "HJl_y3iTFS", "BJeY0s9LsB", "rklhFy2PiS", "Bkl8wJhDir", "B1ltHknvjH", "SkesRSBPsS", "HJeUTp9UoB", "rJx4zp5Isr", "S1le2i5LoH", "rJxg-r9pFB", "SkeXaQV0cr" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your helpful feedback and valuable comments! We appreciate the questions you raised regarding the missing related work/baselines and analysis on model components. We answer each question in the response below and have revised our draft to incorporate the comments. In particular, we have add...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "SkeXaQV0cr", "iclr_2020_SyeyF0VtDr", "HJl_y3iTFS", "rJxg-r9pFB", "rJxg-r9pFB", "rJxg-r9pFB", "iclr_2020_SyeyF0VtDr", "SkeXaQV0cr", "SkeXaQV0cr", "HJl_y3iTFS", "iclr_2020_SyeyF0VtDr", "iclr_2020_SyeyF0VtDr" ]
iclr_2020_S1xJFREKvB
Amortized Nesterov's Momentum: Robust and Lightweight Momentum for Deep Learning
Stochastic Gradient Descent (SGD) with Nesterov's momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance. However, due to the large stochasticity, SGD with Nesterov's momentum is not robust, i.e., its performance may deviate significantly from the expectation. In this work, we propose Amortized Nesterov's Momentum, a special variant of Nesterov's momentum which has more robust iterates, faster convergence in the early stage and higher efficiency. Our experimental results show that this new momentum achieves similar (sometimes better) generalization performance with little-to-no tuning. In the convex case, we provide optimal convergence rates for our new methods and discuss how the theorems explain the empirical results.
reject
This paper introduces a variant of Nesterov momentum which saves computation by only periodically recomputing certain quantities, and which is claimed to be more robust in the stochastic setting. The method seems easy to use, so there's probably no harm in trying it. However, the reviewers and I don't find the benefits persuasive. While there is theoretical analysis, its role is to show that the algorithm maintains the convergence properties while having other benefits. However, the computations saved by amortization seem like a small fraction of the total cost, and I'm having trouble seeing how the increased "robustness" is justified. (It's possible I missed something, but clarity of exposition is another area the paper could use some improvement in.) Overall, this submission seems promising, but probably needs to be cleaned up before publication at ICLR.
val
[ "H1gUDg7atB", "rklwYvMsiH", "SJesE3gcjS", "rylcSagDor", "Syxsc1PUiH", "rkxiIRLLiH", "Skx9mRIUjS", "ryxvroO4jB", "rkxRq5_EoH", "rkgk-BLXor", "H1gz-n-oKr", "r1elVZBh5H", "BJg7oswhqS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "%% Post Author Response comments %%\nThank you for your detailed response/revision. \n\n1 - Introducing “m-times” larger momentum: Somehow, this is not a particularly intuitive statement or one that reflects clearly in a theoretical bound. Since we are getting to issues surrounding the use of momentum with stochas...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "iclr_2020_S1xJFREKvB", "SJesE3gcjS", "rkxRq5_EoH", "H1gz-n-oKr", "H1gUDg7atB", "r1elVZBh5H", "r1elVZBh5H", "BJg7oswhqS", "BJg7oswhqS", "iclr_2020_S1xJFREKvB", "iclr_2020_S1xJFREKvB", "iclr_2020_S1xJFREKvB", "iclr_2020_S1xJFREKvB" ]
iclr_2020_rJleFREKDr
Learning to Control Latent Representations for Few-Shot Learning of Named Entities
Humans excel in continuously learning with small data without forgetting how to solve old problems. However, neural networks require large datasets to compute latent representations across different tasks while minimizing a loss function. For example, a natural language understanding (NLU) system will often deal with emerging entities during its deployment as interactions with users in realistic scenarios will generate new and infrequent names, events, and locations. Here, we address this scenario by introducing a RL trainable controller that disentangles the representation learning of a neural encoder from its memory management role. Our proposed solution is straightforward and simple: we train a controller to execute an optimal sequence of read and write operations on an external memory with the goal of leveraging diverse activations from the past and provide accurate predictions. Our approach is named Learning to Control (LTC) and allows few-shot learning with two degrees of memory plasticity. We experimentally show that our system obtains accurate results for few-shot learning of entity recognition in the Stanford Task-Oriented Dialogue dataset.
reject
This work proposes to use policy-gradient RL to learn read and write actions over memory locations, using as reward the entropy reduction of the memory location distribution. The authors perform experiments on NER in the Stanford Dialogue task, framed as few-shot learning. The reviewers have pointed out shortcomings of the paper with regard to its novelty, its narrow contribution combined with a thin experimental setup (the authors only look into one dataset and one task, with minimal comparison to previous work and no ablation studies to understand the behaviour of the model), and its clarity (the method description seems to be lacking some crucial components of the model). As such, I cannot recommend acceptance, but I hope the authors will use the reviewers' comments to transform this into a strong submission for a later conference.
train
[ "rJlyw3yTKr", "BJeyg1lTFH", "r1gvaG_X9B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors present a method, Learning to Control (LTC), that enables a reinforcement learning agent to learn to read and write external memory. They follow the intuition that human has two degrees of plasticity for memory, which leads to the dense-sparse memory design in this paper. The proposed me...
[ 3, 1, 1 ]
[ 3, 3, 3 ]
[ "iclr_2020_rJleFREKDr", "iclr_2020_rJleFREKDr", "iclr_2020_rJleFREKDr" ]
iclr_2020_rklxF0NtDr
Policy Message Passing: A New Algorithm for Probabilistic Graph Inference
A general graph-structured neural network architecture operates on graphs through two core components: (1) complex enough message functions; (2) a fixed information aggregation process. In this paper, we present the Policy Message Passing algorithm, which takes a probabilistic perspective and reformulates the whole information aggregation as stochastic sequential processes. The algorithm works on a much larger search space, utilizes reasoning history to perform inference, and is robust to noisy edges. We apply our algorithm to multiple complex graph reasoning and prediction tasks and show that our algorithm consistently outperforms state-of-the-art graph-structured models by a significant margin.
reject
This paper was reviewed by 3 experts, who recommend Weak Reject, Weak Reject, and Reject. The reviewers were overall supportive of the work presented in the paper and felt it would have merit for eventual publication. However, the reviewers identified a number of serious concerns about writing quality, missing technical details, experiments, and missing connections to related work. In light of these reviews, and the fact that the authors have not submitted a response to reviews, we are not able to accept the paper. However given the supportive nature of the reviews, we hope the authors will work to polish the paper and submit to another venue.
train
[ "rygh-IRtdH", "rJe6pYX-cS", "BkeMtm2WcH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces policy message passing: a graph neural network with multiple message types and an inference mechanism that assigns messages of a particular type to edges in a recurrent fashion. Experiments on visual reasoning tasks and on a noisy citation network classification task indicate competitive perf...
[ 1, 3, 3 ]
[ 5, 3, 3 ]
[ "iclr_2020_rklxF0NtDr", "iclr_2020_rklxF0NtDr", "iclr_2020_rklxF0NtDr" ]
iclr_2020_BylWYC4KwH
On Concept-Based Explanations in Deep Neural Networks
Deep neural networks (DNNs) build high-level intelligence on low-level raw features. Understanding of this high-level intelligence can be enabled by deciphering the concepts they base their decisions on, as human-level thinking. In this paper, we study concept-based explainability for DNNs in a systematic framework. First, we define the notion of completeness, which quantifies how sufficient a particular set of concepts is in explaining a model's prediction behavior. Based on performance and variability motivations, we propose two definitions to quantify completeness. We show that under degenerate conditions, our method is equivalent to Principal Component Analysis. Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts. We use game-theoretic notions to aggregate over sets to define an importance score for each discovered concept, which we call \emph{ConceptSHAP}. On specifically-designed synthetic datasets and real-world text and image datasets, we validate the effectiveness of our framework in finding concepts that are complete in explaining the decision, and interpretable.
reject
This paper introduces an unsupervised concept learning and explanation algorithm, as well as a concept of "completeness" for evaluating representations in an unsupervised way. There are several valuable contributions here, and the paper improved substantially after the rebuttal. It would not be unreasonable to accept this paper. But after extensive post-review discussion, we decided that the completeness idea was the most valuable contribution, but that it was insufficiently investigated. To quote R3, who I agree with: " I think the paper could be strengthened considerably with a rewrite that focuses first on a shortcoming of existing methods in finding complete solutions. I also think their explanations for why PCA is not complete are somewhat speculative and I expect that studying the completeness of activation spaces in invertible networks would lead to some relevant insights"
train
[ "H1l_Q0B1jB", "BJlcqwZ0tH", "rklgXlcsoB", "r1gPLCTjiB", "SkgQ4iXsjr", "HJeJzPmisS", "B1eqccqKjS", "r1lnnYcKsB", "SyxYXt5FiB", "HyeQju9Yir", "rkxXDOqFsH", "Byld0hD0FS", "Skl0BXmAcH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update: after reading the rebuttals, I raised my rating to weak accept. Authors provide an interesting and possible limitation of TCAV during rebuttal (in the XOR case TCAV might not be able to identify meaningful subspace). It would strengthen the paper by somehow proving it. Maybe you can visualize the TCAV vect...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_BylWYC4KwH", "iclr_2020_BylWYC4KwH", "Skl0BXmAcH", "HyeQju9Yir", "HJeJzPmisS", "B1eqccqKjS", "H1l_Q0B1jB", "Skl0BXmAcH", "Byld0hD0FS", "BJlcqwZ0tH", "iclr_2020_BylWYC4KwH", "iclr_2020_BylWYC4KwH", "iclr_2020_BylWYC4KwH" ]
iclr_2020_rylztAEYvr
Iterative Target Augmentation for Effective Conditional Generation
Many challenging prediction problems, from molecular optimization to program synthesis, involve creating complex structured objects as outputs. However, available training data may not be sufficient for a generative model to learn all possible complex transformations. By leveraging the idea that evaluation is easier than generation, we show how a simple, broadly applicable, iterative target augmentation scheme can be surprisingly effective in guiding the training and use of such models. Our scheme views the generative model as a prior distribution, and employs a separately trained filter as the likelihood. In each augmentation step, we filter the model's outputs to obtain additional prediction targets for the next training epoch. Our method is applicable in the supervised as well as semi-supervised settings. We demonstrate that our approach yields significant gains over strong baselines both in molecular optimization and program synthesis. In particular, our augmented model outperforms the previous state-of-the-art in molecular optimization by over 10% in absolute gain.
reject
This paper proposes a training scheme to enhance the optimization process where the outputs are required to meet certain constraints. The authors propose to insert an additional target augmentation phase after the regular training. For each datapoint, the algorithm samples candidate outputs until it finds a valid output according to an external filter. The model is further fine-tuned on the augmented dataset. The authors provided detailed answers and responses to the reviews, which the reviewers appreciated. However, some significant concerns remained, and due to a large number of stronger papers, this paper was not accepted at this time.
train
[ "SylmFKK6tr", "HJg3QAIvsB", "Bkge45S8oB", "S1e1ntHLjH", "S1lkPZB2tB", "r1x2uWNr9r" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "(post rebuttal) I appreciate the authors for detailed rebuttal. I stay with the original score based on the following reason.\n\nIn RAML you sample from exponentiated reward distribution and do maximum likelihood (minimizing forward KL in classic control as inference framework). How you sample depends on the probl...
[ 3, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, 1, 4 ]
[ "iclr_2020_rylztAEYvr", "S1lkPZB2tB", "SylmFKK6tr", "r1x2uWNr9r", "iclr_2020_rylztAEYvr", "iclr_2020_rylztAEYvr" ]
iclr_2020_B1gXYR4YDH
DSReg: Using Distant Supervision as a Regularizer
In this paper, we aim at tackling a general issue in NLP tasks where some of the negative examples are highly similar to the positive examples (i.e., hard-negative examples). We propose the distant supervision as a regularizer (DSReg) approach to tackle this issue. We convert the original task to a multi-task learning problem, in which we first utilize the idea of distant supervision to retrieve hard-negative examples. The obtained hard-negative examples are then used as a regularizer, and we jointly optimize the original target objective of distinguishing positive examples from negative examples along with the auxiliary task objective of distinguishing softened positive examples (comprised of positive examples and hard-negative examples) from easy-negative examples. In the neural context, this can be done by feeding the final token representations to different output layers. Using this unbelievably simple strategy, we improve the performance of a range of different NLP tasks, including text classification, sequence labeling and reading comprehension.
reject
This paper proposes a way to handle hard-negative examples (those very close to positive ones) in NLP, using a distant supervision approach that serves as a regularizer. The paper addresses an important issue and is well written; however, reviewers pointed out several concerns, including testing the approach on state-of-the-art neural nets and making the experiments more convincing by testing on larger problems.
val
[ "rkgfsgE4jH", "ryeGoDXNoB", "r1x4lqk0Yr", "r1lGMTihtr", "rkeE7qr0YH" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "About the justification of the approach:\n1.1) I feel like the proposed are not intuitively justified enough. They point out that \"L_2 can be thought as an objective to capture the shared features in positive examples and hard-negative examples\". Why would that be good from an intuitive perspective ?\n1.2) L_3 i...
[ -1, -1, 6, 3, 3 ]
[ -1, -1, 3, 4, 4 ]
[ "r1lGMTihtr", "rkeE7qr0YH", "iclr_2020_B1gXYR4YDH", "iclr_2020_B1gXYR4YDH", "iclr_2020_B1gXYR4YDH" ]
iclr_2020_rJlNKCNtPB
Adaptive Learned Bloom Filter (Ada-BF): Efficient Utilization of the Classifier
Recent work suggests improving the performance of Bloom filters by incorporating a machine learning model as a binary classifier. However, such a learned Bloom filter does not take full advantage of the predicted probability scores. We propose new algorithms that generalize the learned Bloom filter by using the complete spectrum of the score regions. We prove that our algorithms have lower False Positive Rate (FPR) and memory usage compared with existing approaches to learned Bloom filters. We also demonstrate the improved performance of our algorithms on real-world datasets.
reject
The paper improves learned Bloom filters by utilizing the complete spectrum of the score regions. The paper is nicely written, with strong motivation and a theoretical analysis of the proposed model. The evaluation could be improved: all experiments are conducted only on small datasets, which makes it hard to assess the practicality of the proposed method. The paper could lead to a strong publication in the future if the issue with the evaluation can be addressed.
train
[ "Hklk4UTuiB", "rke_-zv3jB", "BkgkzhoOsr", "HyeW6wi_or", "Byx9PWQ6tH", "ByeFmAOpYr", "SkgM4JEJcH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for giving us the chance to address your misunderstandings and being open to change in scores!\n\nThe machine learning model is critical because it has discrimination power between the keys and non-keys, which cannot be replaced by a random hash function. Hierarchical random hashing is not any different fro...
[ -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, 1, 1, 1 ]
[ "Byx9PWQ6tH", "Hklk4UTuiB", "ByeFmAOpYr", "SkgM4JEJcH", "iclr_2020_rJlNKCNtPB", "iclr_2020_rJlNKCNtPB", "iclr_2020_rJlNKCNtPB" ]
iclr_2020_H1lBYCEFDB
A Coordinate-Free Construction of Scalable Natural Gradient
Most neural networks are trained using first-order optimization methods, which are sensitive to the parameterization of the model. Natural gradient descent is invariant to smooth reparameterizations because it is defined in a coordinate-free way, but tractable approximations are typically defined in terms of coordinate systems, and hence may lose the invariance properties. We analyze the invariance properties of the Kronecker-Factored Approximate Curvature (K-FAC) algorithm by constructing the algorithm in a coordinate-free way. We explicitly construct a Riemannian metric under which the natural gradient matches the K-FAC update; invariance to affine transformations of the activations follows immediately. We extend our framework to analyze the invariance properties of K-FAC applied to convolutional networks and recurrent neural networks, as well as metrics other than the usual Fisher metric.
reject
The authors analyze the natural gradient algorithm for training a neural net from a theoretical perspective and prove connections to the K-FAC algorithm. The paper is poorly written and contains no experimental evaluation or well-established implications with respect to the practical significance of the results.
train
[ "H1ehOq-ssB", "Byx-sXrqjB", "r1eFpGr5oS", "H1lLrfrqjS", "BJlVUPfMKr", "Skx1SGmx5r", "B1l2T3Ue5B" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I stick with my assessment, which seems pretty consistent with the others, except that the first reviewer even pointed out several flaws (buried in the mass of maths, apparently).\n\nI maintain that this paper has very little chance of creating any downstream impact, because it just offers another way to look at K...
[ -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, 5, 1, 3 ]
[ "Byx-sXrqjB", "BJlVUPfMKr", "Skx1SGmx5r", "B1l2T3Ue5B", "iclr_2020_H1lBYCEFDB", "iclr_2020_H1lBYCEFDB", "iclr_2020_H1lBYCEFDB" ]
iclr_2020_Bkl8YR4YDB
Large-scale Pretraining for Neural Machine Translation with Tens of Billions of Sentence Pairs
In this paper, we investigate the problem of training neural machine translation (NMT) systems with a dataset of more than 40 billion bilingual sentence pairs, which is larger than the largest dataset to date by orders of magnitude. Unprecedented challenges emerge in this situation compared to previous NMT work, including severe noise in the data and prohibitively long training time. We propose practical solutions to handle these issues and demonstrate that large-scale pretraining significantly improves NMT performance. We are able to push the BLEU score of WMT17 Chinese-English dataset to 32.3, with a significant performance boost of +3.2 over existing state-of-the-art results.
reject
The authors address the problem of training an NMT model on a really massive parallel dataset of 40 billion Chinese-English sentence pairs, an order of magnitude bigger than other zh-en experiments. To address noise and training-time problems they propose pretraining + a couple of different ways of creating a fine-tuning dataset. Two of the reviewers assert that the technical contribution is thin, and the results are SOTA but not really as good as you might hope with this amount of data. This, combined with the fact that the dataset is not released, makes me think that this paper is not a good fit with ICLR and would be more appropriate for an application-focused conference. The authors engaged strongly with the reviewers, adding more backtranslation results. The reviewers took their responses into account but did not change their scores.
train
[ "rJxOid9SjH", "rkx1OUbhoS", "SklbY_5rjB", "B1xQw39Qsr", "r1gPaCZ9KB", "S1lNNanTYS", "SJeI_UPxcB", "SyxaX6OQ_r", "Hkgi3h5GuS", "BJla-C1fuB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public" ]
[ "Re: did you try the approach of \"Hassan et al (2018)\" suggested in Section 4.1, where only in-domain sentences are selected? It is true that \"every time we encounter a new domain, we have to retrain the model\", but I think this is still a more viable approach than pre-training on the full 40B sentences. Why no...
[ -1, -1, -1, -1, 3, 6, 3, -1, -1, -1 ]
[ -1, -1, -1, -1, 5, 5, 5, -1, -1, -1 ]
[ "SklbY_5rjB", "S1lNNanTYS", "r1gPaCZ9KB", "SJeI_UPxcB", "iclr_2020_Bkl8YR4YDB", "iclr_2020_Bkl8YR4YDB", "iclr_2020_Bkl8YR4YDB", "Hkgi3h5GuS", "iclr_2020_Bkl8YR4YDB", "iclr_2020_Bkl8YR4YDB" ]
iclr_2020_S1e5YC4KPS
Winning Privately: The Differentially Private Lottery Ticket Mechanism
We propose the differentially private lottery ticket mechanism (DPLTM), an end-to-end differentially private training paradigm based on the lottery ticket hypothesis. Using ``high-quality winners'', selected via our custom score function, DPLTM significantly outperforms the state-of-the-art. We show that DPLTM converges faster, allowing for early stopping with reduced privacy budget consumption. We further show that the tickets from DPLTM are transferable across datasets, domains, and architectures. Our extensive evaluation on several public datasets provides evidence for our claims.
reject
This paper provides an approach to improve the differentially private SGD method by leveraging a differentially private version of the lottery mechanism, which reduces the number of parameters in the gradient update (and the dimension of the noise vectors). While this combination appears to be interesting, there is a non-trivial technical issue raised by Reviewer 3 on the sensitivity analysis in the paper. (R3 brought up this issue even after the rebuttal.) This issue needs to be resolved or clarified for the paper to be published.
test
[ "HyxUS1_aKB", "rJgSHZ4mor", "BklzfVNXor", "Bye0pfEQsB", "HkeCIYs2tB", "B1x69C46tr", "BkgJ8kwUtr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "This paper proposes a differentially private version of the lottery ticket mechanism using the exponential mechanism, thus improving the utility of DPSGD by reducing the number of parameters. It provides the privacy guarantee of the proposed algorithm and shows experimentally that the proposed algorithm outperform...
[ 3, -1, -1, -1, 3, 3, -1 ]
[ 3, -1, -1, -1, 1, 3, -1 ]
[ "iclr_2020_S1e5YC4KPS", "HyxUS1_aKB", "HkeCIYs2tB", "B1x69C46tr", "iclr_2020_S1e5YC4KPS", "iclr_2020_S1e5YC4KPS", "iclr_2020_S1e5YC4KPS" ]
iclr_2020_S1ecYANtPr
Representation Learning Through Latent Canonicalizations
We seek to learn a representation on a large annotated data source that generalizes to a target domain using limited new supervision. Many prior approaches to this problem have focused on learning disentangled representations so that as individual factors vary in a new domain, only a portion of the representation need be updated. In this work, we seek the generalization power of disentangled representations, but relax the requirement of explicit latent disentanglement and instead encourage linearity of individual factors of variation by requiring them to be manipulable by learned linear transformations. We dub these transformations latent canonicalizers, as they aim to modify the value of a factor to a pre-determined (but arbitrary) canonical value (e.g., recoloring the image foreground to black). Assuming a source domain with access to meta-labels specifying the factors of variation within an image, we demonstrate experimentally that our method helps reduce the number of observations needed to generalize to a similar target domain when compared to a number of supervised baselines.
reject
This paper proposes a method to allow models to generalize more effectively through the use of latent linear transforms. Overall, I think this method is interesting, but both R2 and R4 were concerned with the experimental evaluation being too simplistic, and the method not being applicable to areas where a good simulator is not available. This seems like a very valid concern to me, and given the high bar for acceptance to ICLR, I would suggest that the paper is not accepted at this time. I would encourage the authors to continue with follow-up experiments that better showcase the generality of the method, and re-submit a more polished draft to a conference in the near future.
train
[ "Hygpmj1EiH", "HygHltynoB", "BygujjyNiB", "H1lrdi14jH", "HJxrA9y4jr", "Hyguk1hhKB", "HylTsRJxqS", "Skg1tSgV5S" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Q: \"...I find these assumptions too strong for the task of learning disentangled representation.\" \n\nA: We wish to emphasize that: \n(1) we only require access to meta-labels on the source set\n(2) our goal is not to find disentangled representations; Our goal is transferability so that we can learn on real dat...
[ -1, -1, -1, -1, -1, 3, 8, 3 ]
[ -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "Skg1tSgV5S", "HJxrA9y4jr", "Hyguk1hhKB", "HylTsRJxqS", "iclr_2020_S1ecYANtPr", "iclr_2020_S1ecYANtPr", "iclr_2020_S1ecYANtPr", "iclr_2020_S1ecYANtPr" ]
iclr_2020_HyljY04YDB
Towards Interpretable Molecular Graph Representation Learning
Recent work in graph neural networks (GNNs) has led to improvements in molecular activity and property prediction tasks. Unfortunately, GNNs often fail to capture the relative importance of interactions between molecular substructures, in part due to the absence of efficient intermediate pooling steps. To address these issues, we propose LaPool (Laplacian Pooling), a novel, data-driven, and interpretable hierarchical graph pooling method that takes into account both node features and graph structure to improve molecular understanding. We benchmark LaPool and show that it not only outperforms recent GNNs on molecular graph understanding and prediction tasks but also remains highly competitive on other graph types. We then demonstrate the improved interpretability achieved with LaPool using both qualitative and quantitative assessments, highlighting its potential applications in drug discovery.
reject
The paper introduces a new pooling approach "Laplacian pooling" for graph neural networks and applies this to molecular graphs. While the paper has been substantially improved from its original form, there are still various concerns regarding performance and interpretability that remain unanswered. In its current form the paper is not ready for acceptance to ICLR-2020.
val
[ "rkxeUi4AYB", "HkgBb_yBsr", "ryevOmoNir", "HJxP_ZnEjS", "HJl2iVsNiB", "BklNzmoEoS", "ByxxBKYotB", "SJxrWMtptH" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThe authors propose a new pooling layer, LaPool, for hierarchical graph representation learning (Ying et al., 2019) by clustering nodes around centroids that are selected based on \"signal intensity variation\". The signal intensity variation of node x is defined as sum_{y in HOP(x, h)} ||x - y|| where...
[ 6, -1, -1, -1, -1, -1, 1, 6 ]
[ 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_HyljY04YDB", "HJxP_ZnEjS", "rkxeUi4AYB", "BklNzmoEoS", "SJxrWMtptH", "ByxxBKYotB", "iclr_2020_HyljY04YDB", "iclr_2020_HyljY04YDB" ]
iclr_2020_SkgjKR4YwH
MixUp as Directional Adversarial Training
MixUp is a data augmentation scheme in which pairs of training samples and their corresponding labels are mixed using linear coefficients. Without label mixing, MixUp becomes a more conventional scheme: input samples are moved but their original labels are retained. Because samples are preferentially moved in the direction of other classes, we refer to this method as directional adversarial training, or DAT. We show that under two mild conditions, MixUp asymptotically converges to a subset of DAT. We define untied MixUp (UMixUp), a superset of MixUp wherein training labels are mixed with different linear coefficients from those of their corresponding samples. We show that under the same mild conditions, untied MixUp converges to the entire class of DAT schemes. Motivated by the understanding that UMixUp is both a generalization of MixUp and a form of adversarial training, we experiment with different datasets and loss functions to show that UMixUp provides improved performance over MixUp. In short, we present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp.
reject
This paper builds a connection between MixUp and adversarial training. It introduces untied MixUp (UMixUp), which generalizes MixUp. It also shows that DAT and UMixUp use the same MixUp-style method for generating samples but use different label mixing ratios. Though the paper has some valuable theoretical contributions, I agree with the reviewers that it’s important to include results on adversarial robustness, where both adversarial training and MixUp play an important role.
train
[ "HkgcvJmUsS", "BkgNah2SoS", "BkeJhQhBjB", "rylctJ4rsH", "H1e7iqMSiB", "HylpaXGrsS", "SyghzqbroS", "S1lngqRNoH", "SJl42viOtS", "BJeCr7patr", "Hkl9FN5Z5r", "HJezbNCutB" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Although the main interest of this work is not adversarial robustness, we are in fact aware of the papers you brought up. As a short overall comment, we believe that with the current understanding of adversarial robustness and generalization in the deep learning community, it is too early to conclude that there is...
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 3, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, -1 ]
[ "BkgNah2SoS", "BkeJhQhBjB", "H1e7iqMSiB", "H1e7iqMSiB", "HylpaXGrsS", "SJl42viOtS", "BJeCr7patr", "Hkl9FN5Z5r", "iclr_2020_SkgjKR4YwH", "iclr_2020_SkgjKR4YwH", "iclr_2020_SkgjKR4YwH", "iclr_2020_SkgjKR4YwH" ]
iclr_2020_rJl3YC4YPH
GUIDEGAN: ATTENTION BASED SPATIAL GUIDANCE FOR IMAGE-TO-IMAGE TRANSLATION
Recently, Generative Adversarial Networks (GANs) and a number of their variants have been widely used to solve the image-to-image translation problem and have achieved extraordinary results in both a supervised and unsupervised manner. However, most GAN-based methods suffer in practice from an imbalance between the generator and discriminator. Namely, the relative model capacities of the generator and discriminator do not match, leading to mode collapse and/or diminished gradients. To tackle this problem, we propose GuideGAN, based on an attention mechanism. More specifically, we arm the discriminator with an attention mechanism so that it not only estimates the probability that its input is real, but also creates an attention map that highlights the critical features for such a prediction. This attention map then assists the generator in producing more plausible and realistic images. We extensively evaluate the proposed GuideGAN framework on a number of image transfer tasks. Both qualitative results and quantitative comparisons demonstrate the superiority of our proposed approach.
reject
The paper proposes to augment the conditional GAN discriminator with an attention mechanism, with the aim to help the generator, in the context of image to image translation. The reviewers raise several issues in their reviews. One theoretical concern has to do with how the training of the attention mechanism (which seems to be collaborative) would interact with the minimax, zero-sum nature of a GAN objective; another with the discrepancy in how the attention map is used during training and testing. The experimental results were not significant enough, and the reviewers also recommend additional experiment results to clearly demonstrate the benefit of the method.
train
[ "B1efjlqhsS", "HJes2hGKsS", "Hyx3lw24jr", "BJeW4U2EjS", "SkeZ9LnNir", "HkltujxotB", "SkeopBxRFH", "rJxCLIITcr" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers,\n\nThank you very much for your valuable comments and suggestion. Based on your’ suggestion and question, we did a massive modification to our paper draft. Below is a summary of our modification.\n\nMinor:\nWe removed some unnecessary terms to make the paper more clear and compact;\nTo make the pap...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2020_rJl3YC4YPH", "SkeZ9LnNir", "HkltujxotB", "rJxCLIITcr", "SkeopBxRFH", "iclr_2020_rJl3YC4YPH", "iclr_2020_rJl3YC4YPH", "iclr_2020_rJl3YC4YPH" ]
iclr_2020_SJe3KCNKPr
Dual-module Inference for Efficient Recurrent Neural Networks
Using Recurrent Neural Networks (RNNs) in sequence modeling tasks is promising for delivering high-quality results but challenging for meeting stringent latency requirements because of the memory-bound execution pattern of RNNs. We propose a big-little dual-module inference scheme that dynamically skips unnecessary memory accesses and computation to speed up RNN inference. Leveraging the error-resilient nature of the nonlinear activation functions used in RNNs, we propose to use a lightweight little module that approximates the original RNN layer, referred to as the big module, to compute activations of the insensitive region that are more error-resilient. The expensive memory access and computation of the big module can be reduced, as its results are only used in the sensitive region. Our method reduces overall memory access by 40% on average and achieves 1.54x to 1.75x speedup on a CPU-based server platform with negligible impact on model quality.
reject
This paper presents an efficient RNN architecture that dynamically switches between big and little modules during inference. In the experiments, the authors demonstrate that the proposed method achieves favorable speedup compared to baselines, and that the contribution is orthogonal to weight pruning. All reviewers agree that the paper is well-written and that the proposed method is easy to understand and reasonable. However, its methodological contribution is limited because the core idea is essentially the same as distillation, and dynamically gating the modules is a common technique in general. Moreover, I agree with the reviewers that the method should be compared with other state-of-the-art methods in this context. Accelerating and compressing DNNs are intensively studied topics, and there are many approaches other than weight pruning, as the authors also mention in the paper. As the possible contribution of the paper is more on the empirical side, it is necessary to thoroughly compare with other possible approaches to show that the proposed method is really a good solution in practice. For these reasons, I’d like to recommend rejection.
train
[ "SyxXFobKsr", "H1xNDoWFor", "BkeZ4oWFir", "HkxgREa4Kr", "B1e26lgRYr", "H1xpu-VWcB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you so much for your positive feedback on our work. Here are our answers to your questions.\n(1)\tFigure 2 shows the dynamic region distribution of neuron activations, i.e., outputs of gates in LSTM. The x-axis consists of the first one hundred neurons and the y-axis is across timesteps. The left and right a...
[ -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, 3, 3, 4 ]
[ "HkxgREa4Kr", "B1e26lgRYr", "H1xpu-VWcB", "iclr_2020_SJe3KCNKPr", "iclr_2020_SJe3KCNKPr", "iclr_2020_SJe3KCNKPr" ]
iclr_2020_r1xpF0VYDS
Quantum algorithm for finding the negative curvature direction
We present an efficient quantum algorithm for finding the negative curvature direction used to escape saddle points, which is a critical subroutine for many second-order non-convex optimization algorithms. We prove that our algorithm produces the target state corresponding to the negative curvature direction with query complexity O(polylog(d)ε^(-1)), where d is the dimension of the optimization function. The quantum negative curvature finding algorithm is exponentially faster than any known classical method, which takes time at least O(dε^(−1/2)). Moreover, we propose an efficient algorithm to achieve the classical read-out of the target state. Our classical read-out algorithm runs exponentially faster in the dimension d than existing counterparts.
reject
There was some support for the ideas presented, but this paper was borderline and ultimately could not be accepted for publication at ICLR. Concerns raised included the level of novelty and the clarity of the exposition to an ML audience.
test
[ "rkxeWjB6Kr", "HyxPpaZNir", "SkeRC0-4sB", "B1g9H0ZNiB", "HkeqPoWNjr", "rygZKfH19S", "ryx_D7wr5S" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a quantum algorithm aiming to solve the eigenvalue decomposition for the Hessian matrix in second-order optimization. The main body of this paper is the quantum singular value decomposition algorithm from Kerenidis & Prakash, 2016. The authors propose some plug-in algorithms in the context of q...
[ 6, -1, -1, -1, -1, 6, 3 ]
[ 3, -1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_r1xpF0VYDS", "ryx_D7wr5S", "rkxeWjB6Kr", "rygZKfH19S", "ryx_D7wr5S", "iclr_2020_r1xpF0VYDS", "iclr_2020_r1xpF0VYDS" ]
iclr_2020_SJlRF04YwB
Generating Semantic Adversarial Examples with Differentiable Rendering
Machine learning (ML) algorithms, especially deep neural networks, have demonstrated success in several domains. However, several types of attacks have raised concerns about deploying ML in safety-critical domains, such as autonomous driving and security. An attacker perturbs a data point slightly in the pixel space and causes the ML algorithm to misclassify (e.g. a perturbed stop sign is classified as a yield sign). These perturbed data points are called adversarial examples, and there are numerous algorithms in the literature for constructing adversarial examples and defending against them. In this paper we explore semantic adversarial examples (SAEs) where an attacker creates perturbations in the semantic space. For example, an attacker can change the background of the image to be cloudier to cause misclassification. We present an algorithm for constructing SAEs that uses recent advances in differentiable rendering and inverse graphics.
reject
The authors present a way of generating adversarial examples using discrete perturbations, i.e., perturbations that, unlike pixel ones, carry some semantics. In order to do so, they assume the existence of an inverse graphics framework. Results are reported on the VKITTI dataset. Overall, the main serious concern expressed by the reviewers has to do with the general applicability of this method: since it requires an inverse graphics framework, which all-in-all is not a trivial task, it is not clear how such a method would scale to more “real” datasets. A secondary concern has to do with the fact that the proposed method seems to be mostly a way to perform semantic data augmentation rather than a way to avoid malicious attacks. In the latter case, we would want to know something about the generality of this method (e.g., what happens when a model is trained against these attacks but a more pixel-based attack is then applied). As such, I do not believe that this submission is ready for publication at ICLR. However, the technique is an interesting idea, and it would be valuable if a later submission provided empirical evidence investigating its generality.
train
[ "S1x8JJ4qiH", "BklLOCXqjB", "Skga7Amcsr", "Skg7z5hFYH", "HylO_-G19B", "BJgq2bPJ5S" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for his/her insightful comments. We provide clarifications for some of the important questions raised.\n\n1. Difficulties with inverse graphics: We thank the reviewer for pointing out difficulties in training/using the inverse graphics pipeline. We stress that our contribution is not in tunin...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "Skg7z5hFYH", "HylO_-G19B", "BJgq2bPJ5S", "iclr_2020_SJlRF04YwB", "iclr_2020_SJlRF04YwB", "iclr_2020_SJlRF04YwB" ]
iclr_2020_Hye190VKvH
Longitudinal Enrichment of Imaging Biomarker Representations for Improved Alzheimer's Disease Diagnosis
Longitudinal data is often available inconsistently across individuals, resulting in additionally available data being ignored. Alzheimer's Disease (AD) is a progressive disease that affects over 5 million patients in the US alone, and is the 6th leading cause of death. Early detection of AD can significantly improve or extend a patient's life, so it is critical to use all available information about patients. We propose an unsupervised method to learn a consistent representation by utilizing inconsistent data through minimizing the ratio of p-Order Principal Components Analysis (PCA) and Locality Preserving Projections (LPP). Our method's representation can outperform the use of consistent data alone and does not require the use of complex tensor-specific approaches. We run experiments on patient data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), which consists of inconsistent data, to predict patients' diagnoses.
reject
This paper proposes to overcome the issue of inconsistent availability of longitudinal data by combining principal components analysis and locality preserving projections. All three reviewers express significant reservations regarding the technical writing in the paper. As it stands, this paper is not ready for publication.
train
[ "SkgHvwtCFr", "HygsE0WXqB", "SJlOnavpqB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This manuscript develops a metric-learning with non-Euclidean error terms for robustness and applies in to data reduction to learn diagnostic models of Alzheimer's disease from brain images. The manuscript discusses a metric-learning formulation as a quotient for reconstruction-error terms, how to optimize the quo...
[ 1, 3, 1 ]
[ 5, 3, 1 ]
[ "iclr_2020_Hye190VKvH", "iclr_2020_Hye190VKvH", "iclr_2020_Hye190VKvH" ]
iclr_2020_SJxy5A4twS
Superbloom: Bloom filter meets Transformer
We extend the idea of word pieces in natural language models to machine learning tasks on opaque ids. This is achieved by applying hash functions to map each id to multiple hash tokens in a much smaller space, similarly to a Bloom filter. We show that by applying a multi-layer Transformer to these Bloom filter digests, we are able to obtain models with high accuracy. They outperform models of a similar size without hashing and, to a large degree, models of a much larger size trained using sampled softmax with the same computational budget. Our key observation is that it is important to use a multi-layer Transformer for Bloom filter digests to remove ambiguity in the hashed input. We believe this provides an alternative method to solving problems with large vocabulary size.
reject
This paper proposes integrating codes based on multiple hash functions with Transformer networks to reduce vocabulary sizes in the input and output spaces. Compared to non-hashed models, this enables training more complex and powerful models with the same number of overall parameters, and thus leads to better performance. Although the technical contribution is limited, considering that the hash-based approach itself is rather well-known and straightforward, all reviewers agree that some findings in the experiments are interesting. On the cons side, two reviewers were concerned about unclear presentation regarding the details of the method. More importantly, the proposed method is only evaluated on non-standard tasks without comparison to other previous methods. Considering that the main contribution of the paper is on the empirical side, I agree it is necessary to evaluate the method on more standard benchmark tasks in NLP where there are many other state-of-the-art methods for model compression. For these reasons, I’d like to recommend rejection.
train
[ "Byx_d_82sr", "rJepvg3ooH", "Byg55UhisS", "BJemP42isr", "BkxMxQ2oiH", "H1gKSJxnKH", "rkeOdb0TFS", "HJl9otEe5H" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the detailed response.", "We thank the reviewers for their comments and suggestions. We would like to emphasize that our main contribution is to recognize the synergy between the multi-layer Transformer model and the Bloom filter (as the title suggests). While multiple hashing has been proposed before...
[ -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "Byg55UhisS", "iclr_2020_SJxy5A4twS", "H1gKSJxnKH", "rkeOdb0TFS", "HJl9otEe5H", "iclr_2020_SJxy5A4twS", "iclr_2020_SJxy5A4twS", "iclr_2020_SJxy5A4twS" ]
iclr_2020_rkegcC4YvS
Removing the Representation Error of GAN Image Priors Using the Deep Decoder
Generative models, such as GANs, have demonstrated impressive performance as natural image priors for solving inverse problems such as image restoration and compressive sensing. Despite this performance, they can exhibit substantial representation error for both in-distribution and out-of-distribution images, because they maintain explicit low-dimensional learned representations of a natural signal class. In this paper, we demonstrate a method for removing the representation error of a GAN when used as a prior in inverse problems by modeling images as the linear combination of a GAN with a Deep Decoder. The deep decoder is an underparameterized and most importantly unlearned natural signal model similar to the Deep Image Prior. No knowledge of the specific inverse problem is needed in the training of the GAN underlying our method. For compressive sensing and image superresolution, our hybrid model exhibits consistently higher PSNRs than both the GAN priors and Deep Decoder separately, both on in-distribution and out-of-distribution images. This model provides a method for extensibly and cheaply leveraging both the benefits of learned and unlearned image recovery priors in inverse problems.
reject
The paper introduces a method for removing what they call representation error and apply the method to super resolution and compressive sensing. The reviewers have provided constructive feedback. The reviewers like aspects of the paper but are also concerned with various shortcomings. The consensus is that the paper is not ready for publication as it stands. Rejection is therefore recommended with strong encouragement to keep working on the method and submit elsewhere.
train
[ "BJgwdLAaKr", "H1evWKPssB", "B1e9ROPooS", "Sklns_wsir", "SJlxZVMlqr", "rkxEiTqbqB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "[Update after rebuttal period]\nI am sorry that the response cannot address my confusion. I still doubt the motivation of this paper and the actual experimental performance compared with state-of-the-art methods are still ignored. Thus I decrease my score.\n\n[Original reviews]\nThis paper proposed to modeling ima...
[ 1, -1, -1, -1, 3, 3 ]
[ 5, -1, -1, -1, 3, 4 ]
[ "iclr_2020_rkegcC4YvS", "BJgwdLAaKr", "SJlxZVMlqr", "rkxEiTqbqB", "iclr_2020_rkegcC4YvS", "iclr_2020_rkegcC4YvS" ]
iclr_2020_H1ebc0VYvH
Unaligned Image-to-Sequence Transformation with Loop Consistency
We tackle the problem of modeling sequential visual phenomena. Given examples of a phenomenon that can be divided into discrete time steps, we aim to take an input from any such time and realize this input at all other time steps in the sequence. Furthermore, we aim to do this \textit{without} ground-truth aligned sequences --- avoiding the difficulties of gathering aligned data. This generalizes the unpaired image-to-image problem from generating pairs to generating sequences. We extend cycle consistency to \textit{loop consistency} and alleviate difficulties associated with learning in the resulting long chains of computation. We show competitive results compared to existing image-to-image techniques when modeling several different data sets, including the Earth's seasons and the aging of human faces.
reject
The main concern raised by the reviewers is limited experimental work, and there is no rebuttal.
train
[ "Skl5hr_6FS", "BJlpRXDAtB", "SyxKW99S5B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces the LoopGan, which aims to enforce consistency in a sequential set of images but without aligned pairs. The prototypical example used in the paper is a network to transform seasonal images, where Summer -> Fall -> Winter -> Spring -> Summer -> etc.\n\nI suggest rejecting the paper.\n\nWhile ...
[ 1, 3, 3 ]
[ 1, 5, 4 ]
[ "iclr_2020_H1ebc0VYvH", "iclr_2020_H1ebc0VYvH", "iclr_2020_H1ebc0VYvH" ]
iclr_2020_BkgzqRVFDr
Reinforcement Learning with Probabilistically Complete Exploration
Balancing exploration and exploitation remains a key challenge in reinforcement learning (RL). State-of-the-art RL algorithms suffer from high sample complexity, particularly in the sparse reward case, where they can do no better than to explore in all directions until the first positive rewards are found. To mitigate this, we propose Rapidly Randomly-exploring Reinforcement Learning (R3L). We formulate exploration as a search problem and leverage widely-used planning algorithms such as Rapidly-exploring Random Tree (RRT) to find initial solutions. These solutions are used as demonstrations to initialize a policy, then refined by a generic RL algorithm, leading to faster and more stable convergence. We provide theoretical guarantees of R3L exploration finding successful solutions, as well as bounds for its sampling complexity. We experimentally demonstrate the method outperforms classic and intrinsic exploration techniques, requiring only a fraction of exploration samples and achieving better asymptotic performance.
reject
This was a borderline paper, with both pros and cons. In the end, it was not considered sufficiently mature to accept in its current form. The reviewers all criticized the assumptions needed, and lamented the lack of clarity around the distinction between reinforcement learning and planning. The paper requires a clearer contribution, based on a stronger justification of the approach and weakening of the assumptions. The submitted comments should be able to help the authors strengthen this work.
train
[ "H1l9QlZioS", "BkgvFz4IjB", "BJxKVzNIiB", "B1xCs-EUjS", "ryx7EiaBKr", "H1gMivLjKB", "S1gNfBxuqB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your response.\n\nIt will take me some time to review the updated paper.\nI still have concerns over the treatment of (1) and (2), with respect to the assumptions that are now required.\n\nBut I will have a look through the new material and see if it causes me to change to accept.\n\nMany thanks", ...
[ -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, 5, 5, 4 ]
[ "BJxKVzNIiB", "S1gNfBxuqB", "ryx7EiaBKr", "H1gMivLjKB", "iclr_2020_BkgzqRVFDr", "iclr_2020_BkgzqRVFDr", "iclr_2020_BkgzqRVFDr" ]
iclr_2020_ryxf9CEKDr
Efficient Saliency Maps for Explainable AI
We describe an explainable AI saliency map method for use with deep convolutional neural networks (CNNs) that is much more efficient than popular gradient methods. It is also quantitatively similar or better in accuracy. Our technique works by measuring information at the end of each network scale, which is then combined into a single saliency map. We describe how saliency measures can be made more efficient by exploiting Saliency Map Order Equivalence. Finally, we visualize individual scale/layer contributions by using a Layer Ordered Visualization of Information. This provides an interesting comparison of scale information contributions within the network not provided by other saliency map methods. Our method is generally straightforward and should be applicable to the most commonly used CNNs. (Full source code is available at http://www.anonymous.submission.com).
reject
The paper presents an efficient approach to computing saliency measures by exploiting saliency map order equivalence (SMOE), and visualization of individual layer contributions by a layer ordered visualization of information. The authors did a good job at addressing most issues raised in the reviews. In the end, two major concerns remained not fully addressed: one is the motivation of efficiency, and the other is how much better SMOE is compared with existing statistics. These two issues also determine how significant the work is. After discussion, we agree that while the revised draft turns out to be a much improved one, the work itself is nothing groundbreaking. Given many other excellent papers on related topics, the paper cannot make the cut for ICLR.
train
[ "SyqssQAtH", "H1g7MLuqsr", "ByxCVB_qiH", "Hkx72Qd9jS", "r1lnlmu5ir", "ryxz3THcsH", "B1g2p0vPsS", "HkeNsCwDjr", "HkxUdAwDoB", "BklDxRwvsH", "SJletTvPir", "HkxevpvvoS", "rJgAN6vDjS", "BJlR16wwjr", "S1xLa2wwsB", "BJluAS9XsS", "HyexX-cXsS", "B1llwXqQsB", "ryllQX9Qir", "H1gzAfcXjS",...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author",...
[ "I Summary\n\nThe authors present a method that computes a saliency map after each scale block of a CNN and combines them according to the weights of the prior layers in a final saliency map. The paper gives two main contributions: SMOE, which captures the informativeness of the corresponding layers of the scale bl...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_ryxf9CEKDr", "HkxUdAwDoB", "HkxevpvvoS", "rJgAN6vDjS", "S1xLa2wwsB", "iclr_2020_ryxf9CEKDr", "S1xU77G3tr", "S1xU77G3tr", "S1xU77G3tr", "S1xU77G3tr", "S1xU77G3tr", "S1xU77G3tr", "S1xU77G3tr", "S1xU77G3tr", "S1xU77G3tr", "S1xU77G3tr", "SyqssQAtH", "SyqssQAtH", "SyqssQAtH...
iclr_2020_B1eQcCEtDB
Calibration, Entropy Rates, and Memory in Language Models
Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing. Towards this end, we present a calibration-based approach to measure long-term discrepancies between a generative sequence model and the true distribution, and use these discrepancies to improve the model. Empirically, we show that state-of-the-art language models, including LSTMs and Transformers, are \emph{miscalibrated}: the entropy rates of their generations drift dramatically upward over time. We then provide provable methods to mitigate this phenomenon. Furthermore, we show how this calibration-based approach can also be used to measure the amount of memory that language models use for prediction.
reject
This paper shows empirically that the state-of-the-art language models have a problem of increasing entropy when generating long sequences. The paper then proposes a method to mitigate this problem. As the authors re-iterated through their rebuttal, this paper approaches this problem theoretically, rather than through a comprehensive set of empirical comparisons. After discussions among the reviewers, this paper is not recommended to be accepted. Some skepticism and concerns remain as to whether the paper makes sufficiently clear and proven theoretical contributions. We all appreciate the approach and potential of this paper and encourage the authors to re-submit a revision to a future related venue.
train
[ "ryeks0NnsH", "Bylqa0EhiS", "HJlgKVq2YH", "B1gyBkS2oS", "HylPgkS3oB", "S1ekmgJRqB", "r1lY9byg9H" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the encouraging review.\n\n@Bidirectional models: The trouble with applying our methods to these non-autoregressive language models is that there isn’t a single probabilistic model to improve. For example, under different masking patterns, there is no guarantee that BERT outputs conditional probabilitie...
[ -1, -1, 6, -1, -1, 6, 3 ]
[ -1, -1, 4, -1, -1, 3, 5 ]
[ "S1ekmgJRqB", "r1lY9byg9H", "iclr_2020_B1eQcCEtDB", "iclr_2020_B1eQcCEtDB", "HJlgKVq2YH", "iclr_2020_B1eQcCEtDB", "iclr_2020_B1eQcCEtDB" ]
iclr_2020_rJgE9CEYPS
Discriminability Distillation in Group Representation Learning
Learning group representations is a common concern in tasks where the basic unit is a group, set or sequence. The computer vision community tries to tackle it by aggregating the elements in a group based on an indicator either defined by humans, such as the quality or saliency of an element, or generated by a black box, such as the attention score or output of an RNN. This article provides a more essential and explicable view. We claim the most significant indicator of whether the group representation can benefit from an element is not the quality, or an inexplicable score, but the \textit{discriminability}. Our key insight is to explicitly design the \textit{discriminability} using embedded class centroids on a proxy set, and show the discriminability distribution \textit{w.r.t.} the element space can be distilled by a light-weight auxiliary distillation network. This process is called \textit{discriminability distillation learning} (DDL). We show the proposed DDL can be flexibly plugged into many group based recognition tasks without influencing the training procedure of the original tasks. Comprehensive experiments on set-to-set face recognition and action recognition validate the advantage of DDL in both accuracy and efficiency, and it pushes forward the state-of-the-art results on these tasks by an impressive margin.
reject
This paper proposes discriminability distillation learning (DDL) for learning group representations. The core idea is to learn a discriminability weight for each instance that is a member of a group, set or sequence. The discriminability score is learned by first training a standard supervised base model and, using the features from this model, computing class centroids on a proxy set along with the inter- and intra-class distances. A function of these distance computations is then used as supervision for a distillation-style small network (DDNet) which predicts the discriminability score (DDR score). A group representation is then created through a combination of instances, weighted using their DDR scores. The method is validated on face recognition and action recognition. This work initially received mixed scores, with two reviewers recommending acceptance and two recommending rejection. After reading all the reviews, rebuttals, and discussions, it seems that a key point of concern is low clarity of presentation. During the rebuttal period, the authors revised their manuscript and interacted with reviewers. One reviewer chose to update their recommendation to weak acceptance in response. The main unresolved issues are related to novelty and experimental evaluation. Namely, for novelty, comparison and discussion against attention based approaches and other metric learning based approaches would benefit the work, though the proposed solution does present some novelty. For the experiments, there was a suggestion to evaluate the model on more complex datasets where performance is not already maxed out. The authors provided such experiments during the rebuttal period. Despite the slight positive leanings post rebuttal, the ACs have discussed this case and determined the paper is not ready for publication.
train
[ "Bked194hjH", "SJxD-SVhjH", "BJeKfbEhjB", "HJxtWtQ2jr", "r1l-AFjntS", "B1eyMbYEcr", "S1gD7AVC9B", "H1eS1vpA5B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments! We have carefully modified this paper based on your suggestions. \n\n(1) About the methodology explained.\n\nWe modify the methodology part and re-plot the train and inference pipeline of our DDL, which is much clear. And we add references to support the observation that feature embedd...
[ -1, -1, -1, -1, 3, 1, 6, 6 ]
[ -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "r1l-AFjntS", "B1eyMbYEcr", "S1gD7AVC9B", "H1eS1vpA5B", "iclr_2020_rJgE9CEYPS", "iclr_2020_rJgE9CEYPS", "iclr_2020_rJgE9CEYPS", "iclr_2020_rJgE9CEYPS" ]
iclr_2020_S1g490VKvB
The Dynamics of Signal Propagation in Gated Recurrent Neural Networks
Training recurrent neural networks (RNNs) on long sequence tasks is plagued with difficulties arising from the exponential explosion or vanishing of signals as they propagate forward or backward through the network. Many techniques have been proposed to ameliorate these issues, including various algorithmic and architectural modifications. Two of the most successful RNN architectures, the LSTM and the GRU, do exhibit modest improvements over vanilla RNN cells, but they still suffer from instabilities when trained on very long sequences. In this work, we develop a mean field theory of signal propagation in LSTMs and GRUs that enables us to calculate the time scales for signal propagation as well as the spectral properties of the state-to-state Jacobians. By optimizing these quantities in terms of the initialization hyperparameters, we derive a novel initialization scheme that eliminates or reduces training instabilities. We demonstrate the efficacy of our initialization scheme on multiple sequence tasks, on which it enables successful training while a standard initialization either fails completely or is orders of magnitude slower. We also observe a beneficial effect on generalization performance using this new initialization.
reject
Using ideas from mean-field theory and statistical mechanics, this paper derives a principled way to analyze signal propagation through gated recurrent networks. This analysis then allows for the development of a novel initialization scheme capable of mitigating subsequent training instabilities. In the end, while reviewers appreciated some of the analytical insights provided, two still voted for rejection while one chose to accept after the rebuttal and discussion period. And as AC for this paper, I did not find sufficient evidence to overturn the reviewer majority for two primary reasons. First, the paper claims to demonstrate the efficacy of the proposed initialization scheme on multiple sequence tasks, but the presented experiments do not really involve representative testing scenarios, as pointed out by reviewers. Given that this is not a purely theoretical paper, but rather one suggesting practically-relevant initializations for RNNs, it seems important to actually demonstrate this on sequence data people in the community actually care about. In fact, even the reviewer who voted for acceptance conceded that the presented results were not too convincing (basically limited to toy situations involving CIFAR-10 and MNIST data). Secondly, all reviewers found parts of the paper difficult to digest, and while a future revision has been promised to provide clarity, no text was actually changed, making updated evaluations problematic. Note that the rebuttal mentions that the paper is written in a style that is common in the physics literature, and this appears to be a large part of the problem. ICLR is an ML conference and in this respect, to the extent possible, it is important to frame relevant papers in an accessible way such that a broader segment of this community can benefit from the key message. At the very least, this will ensure that the reviewer pool is more equipped to properly appreciate the contribution.
My own view is that this work can be reframed in such a way that it could be successfully submitted to another ML conference in the future.
train
[ "r1xrrEojYS", "r1xoecU2Yr", "SyxDSP-CtH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper touches the signal processing/long term propagation problem in gated recurrent neural networks from the mean field theory. The paper starts from a dynamic system view of the recurrent neural networks and calculates the time scale of converging to the fixed point. In order to avoid the system to converge...
[ 3, 8, 1 ]
[ 3, 5, 5 ]
[ "iclr_2020_S1g490VKvB", "iclr_2020_S1g490VKvB", "iclr_2020_S1g490VKvB" ]
iclr_2020_rkeUcRNtwS
Salient Explanation for Fine-grained Classification
Explaining the predictions of deep models has gained increasing attention to increase their applicability, even to life-affecting decisions. However, there has been no attempt to pinpoint only the most discriminative features contributing specifically to separating different classes in a fine-grained classification task. This paper introduces a novel notion of salient explanation and proposes a simple yet effective salient explanation method called Gaussian light and shadow (GLAS), which estimates the spatial impact of deep models by feature perturbation inspired by light and shadow in nature. GLAS provides useful coarse-to-fine control benefiting from the scalability of the Gaussian mask. We also devised the ability to identify multiple instances through recursive GLAS. We demonstrate the effectiveness of GLAS for fine-grained classification using a fine-grained classification dataset. To show its general applicability, we also illustrate that GLAS achieves state-of-the-art performance at high speed (about 0.5 sec per 224×224 image) on the ImageNet Large Scale Visual Recognition Challenge.
reject
This paper is interested in finding salient areas in a deep learning image classification setting. The introduced method relies on masking images using Gaussian light and shadow (GLAS) and estimating its impact on the output. As noted by all reviewers, the paper is too weak for publication in its current form: - Novelty is very low. - The experimental section is not convincing enough; in particular, some metrics are missing. - The writing should be improved.
train
[ "Ske5I-NPFH", "Hkeo4WgCYr", "ByeZgnsWcr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper proposes a simple procedure to display the salient areas that determine classification decisions from deep networks. In practice, these area are computed by shadowing the image using a Gaussian and measuring the network contribution at every location of the image. Results in terms of \"Pointin...
[ 1, 1, 3 ]
[ 3, 5, 4 ]
[ "iclr_2020_rkeUcRNtwS", "iclr_2020_rkeUcRNtwS", "iclr_2020_rkeUcRNtwS" ]
iclr_2020_B1eP504YDr
Independence-aware Advantage Estimation
Most of existing advantage function estimation methods in reinforcement learning suffer from the problem of high variance, which scales unfavorably with the time horizon. To address this challenge, we propose to identify the independence property between current action and future states in environments, which can be further leveraged to effectively reduce the variance of the advantage estimation. In particular, the recognized independence property can be naturally utilized to construct a novel importance sampling advantage estimator with close-to-zero variance even when the Monte-Carlo return signal yields a large variance. To further remove the risk of the high variance introduced by the new estimator, we combine it with existing Monte-Carlo estimator via a reward decomposition model learned by minimizing the estimation variance. Experiments demonstrate that our method achieves higher sample efficiency compared with existing advantage estimation methods in complex environments.
reject
Policy gradient methods typically suffer from high variance in the advantage function estimator. The authors point out an independence property between the current action and future states, which implies that certain terms of the advantage estimator can be omitted when this property holds. Based on this fact, they construct a novel importance sampling based advantage estimator. They evaluate their approach on simple discrete action environments and demonstrate reduced variance and improved performance. Reviewers were generally concerned about the clarity of the technical exposition and the positioning of this work with respect to other estimators of the advantage function which use control variates. The authors clarified differences between their approach and previous approaches using control variates, and answered many of the technical questions that reviewers asked. I am not convinced by the merits of this approach. While I think the fundamental idea is interesting, the experiments are limited to simple discrete environments and no comparison is made to other control variate based approaches for reducing variance. Furthermore, due to the function approximation which introduces bias, the method should be compared to actor critic methods which directly estimate the advantage function. Finally, one of the advantages of on-policy policy gradient methods is their simplicity. This method introduces many additional steps and parameters to be learned. The authors would need to demonstrate large improvements in sample efficiency on more complex tasks to justify this added complexity. At this time, I do not recommend this paper for acceptance.
train
[ "rkgZgJThtS", "rJecUo8hiH", "HJedx22sir", "Byx4c7hcjB", "rklYmMbVjH", "BkxGR-7cor", "Byl_8kxXoS", "SygEzRJXiB", "r1eBkmmRFH", "S1gG5UblqS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a novel advantage estimate for reinforcement learning based on estimating the extent to which past actions impact the current state. More precisely, the authors train a classifier to predict the action taken k-steps ago from the state at time t, the state at t-k and the time gap k. The idea is ...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_B1eP504YDr", "rklYmMbVjH", "Byl_8kxXoS", "BkxGR-7cor", "rkgZgJThtS", "SygEzRJXiB", "r1eBkmmRFH", "S1gG5UblqS", "iclr_2020_B1eP504YDr", "iclr_2020_B1eP504YDr" ]
iclr_2020_BJlOcR4KwS
Channel Equilibrium Networks
Convolutional Neural Networks (CNNs) typically treat normalization methods such as batch normalization (BN) and rectified linear function (ReLU) as building blocks. Previous work showed that this basic block would lead to channel-level sparsity (i.e. channel of zero values), reducing computational complexity of CNNs. However, over-sparse CNNs have many collapsed channels (i.e. many channels with undesired zero values), impeding their learning ability. This problem is seldom explored in the literature. To recover the collapsed channels and enhance learning capacity, we propose a building block, Channel Equilibrium (CE), which takes the output of a normalization layer as input and switches between two branches, batch decorrelation (BD) branch and adaptive instance inverse (AII) branch. CE is able to prevent implicit channel-level sparsity in both experiments and theory. It has several appealing properties. First, CE can be stacked after many normalization methods such as BN and Group Normalization (GN), and integrated into many advanced CNN architectures such as ResNet and MobileNet V2 to form a series of CE networks (CENets), consistently improving their performance. Second, extensive experiments show that CE achieves state-of-the-art results on various challenging benchmarks such as ImageNet and COCO. Third, we show an interesting connection between CE and Nash Equilibrium, a well-known solution of a non-cooperative game. The models and code will be released soon.
reject
The paper proposed Channel Equilibrium (CE) to overcome the over-sparsity problem in CNNs using 'BN+ReLU'. Experiments on ImageNet and COCO show its effectiveness while introducing little computational complexity. However, the reviewers pointed out a number of problems with the writing and the clarity of the paper. Although the authors addressed all these concerns in detail and agreed to make revisions to the paper, the authors are encouraged to submit the revised version to another venue.
train
[ "S1g_5if3jH", "H1ggN3M2sr", "BJevhFfhjS", "S1gWh_MnjB", "HygV4Vq0tB", "HygbkYmJ9r", "SklZzltOqS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for constructive feedback and helpful suggestions. We have provided detailed explanations for terminology and performed additional experiments to work towards addressing the concerns the reviewer has raised. The detailed responses to some concerns are listed below,\n1) diminishing sparsity an...
[ -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, 1, 5, 5 ]
[ "HygbkYmJ9r", "HygV4Vq0tB", "SklZzltOqS", "iclr_2020_BJlOcR4KwS", "iclr_2020_BJlOcR4KwS", "iclr_2020_BJlOcR4KwS", "iclr_2020_BJlOcR4KwS" ]
iclr_2020_BkxdqA4tvB
Collapsed amortized variational inference for switching nonlinear dynamical systems
We propose an efficient inference method for switching nonlinear dynamical systems. The key idea is to learn an inference network which can be used as a proposal distribution for the continuous latent variables, while performing exact marginalization of the discrete latent variables. This allows us to use the reparameterization trick, and apply end-to-end training with SGD. We show that this method can successfully segment time series data (including videos) into meaningful "regimes", due to the use of piece-wise nonlinear dynamics.
reject
This is an interesting paper on an important topic. The reviewers identified a variety of issues both before and after the feedback period; I urge the authors to consider their comments as they continue to refine and extend their work.
train
[ "HyedYpoTKr", "r1x7NPassH", "SyxH1PTosH", "S1xFI8aooS", "Hyll2HaisB", "BklPSoBsFB", "rylQ61g-cB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors consider the problem of learning model parameters of a switching nonlinear dynamical system from a dataset. They propose a new variational inference algorithm for this model-learning problem that marginalizes all discrete random variables in the model using the forward-backward algorithm...
[ 8, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_BkxdqA4tvB", "iclr_2020_BkxdqA4tvB", "BklPSoBsFB", "HyedYpoTKr", "rylQ61g-cB", "iclr_2020_BkxdqA4tvB", "iclr_2020_BkxdqA4tvB" ]
iclr_2020_SJlYqRNKDS
Blockwise Adaptivity: Faster Training and Better Generalization in Deep Learning
Stochastic methods with coordinate-wise adaptive stepsize (such as RMSprop and Adam) have been widely used in training deep neural networks. Despite their fast convergence, they can generalize worse than stochastic gradient descent. In this paper, by revisiting the design of Adagrad, we propose to split the network parameters into blocks, and use a blockwise adaptive stepsize. Intuitively, blockwise adaptivity is less aggressive than adaptivity to individual coordinates, and can have a better balance between adaptivity and generalization. We show theoretically that the proposed blockwise adaptive gradient descent has comparable regret in online convex learning and convergence rate for optimizing nonconvex objective as its counterpart with coordinate-wise adaptive stepsize, but is better up to some constant. We also study its uniform stability and show that blockwise adaptivity can lead to lower generalization error than coordinate-wise adaptivity. Experimental results show that blockwise adaptive gradient descent converges faster and improves generalization performance over Nesterov's accelerated gradient and Adam.
reject
The authors propose an adaptive block-wise coordinate descent method and claim faster convergence and lower generalization error. While the reviewers agreed that this method may work well in practice, they had several concerns about the relevance of the theory and strength of the empirical results. After considering the author responses, the reviewers have agreed that this paper is not yet ready for publication.
train
[ "B1xwlWqiKr", "BkgPOgw1qH", "rkxsFV12jH", "SJgbimyhsH", "B1xUiGynoS", "Hkgm9j2x5S" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes blockwise adaptivity. We divide the parameters into blocks, for example in a linear threshold unit the bias term is in a bias term block while the input weights are in an input weight block. We then average the square norm of the gradients over each block and use the same adaptation based on ...
[ 3, 6, -1, -1, -1, 1 ]
[ 3, 4, -1, -1, -1, 5 ]
[ "iclr_2020_SJlYqRNKDS", "iclr_2020_SJlYqRNKDS", "B1xwlWqiKr", "BkgPOgw1qH", "Hkgm9j2x5S", "iclr_2020_SJlYqRNKDS" ]
iclr_2020_HJe9cR4KvB
Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling
Sequence labeling is a fundamental framework for various natural language processing problems including part-of-speech tagging and named entity recognition. Its performance is largely influenced by the annotation quality and quantity in supervised learning scenarios. In many cases, ground truth labels are costly and time-consuming to collect or even non-existent, while imperfect ones can be easily accessed or transferred from different domains. A typical example is crowd-sourced datasets, which have multiple annotations for each sentence that may be noisy or incomplete. Additionally, predictions from multiple source models in transfer learning can be seen as a case of multi-source supervision. In this paper, we propose a novel framework named Consensus Network (CONNET) to conduct training with imperfect annotations from multiple sources. It learns the representation for every weak supervision source and dynamically aggregates them by a context-aware attention mechanism. Finally, it leads to a model reflecting the consensus among multiple sources. We evaluate the proposed framework in two practical settings of multi-source learning: learning with crowd annotations and unsupervised cross-domain model adaptation. Extensive experimental results show that our model achieves significant improvements over existing methods in both settings.
reject
While the revised paper was better and improved the reviewers' assessment of the work, the paper is just below the threshold for acceptance. The authors are strongly encouraged to continue this work.
train
[ "BJlZKk6pKS", "BJgqBNdvjr", "BJxt2vwiiS", "Skl66IDoiS", "B1gh27_woS", "SJeiQXuDoB", "HJehLMuvoB", "Bkg21BVsYH", "B1xyg9XT5B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n## Updated review\n\nI have read the rebuttals. The new version of the paper is clearer and the new baseline experiments are a good addition. \n\n## Original review\n\nThis paper presents an approach to train a neural networks-based model for sequence modelling using labels from different sources. The proposed a...
[ 6, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 1, -1, -1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_HJe9cR4KvB", "Bkg21BVsYH", "B1xyg9XT5B", "iclr_2020_HJe9cR4KvB", "B1xyg9XT5B", "BJlZKk6pKS", "iclr_2020_HJe9cR4KvB", "iclr_2020_HJe9cR4KvB", "iclr_2020_HJe9cR4KvB" ]
iclr_2020_Bkgq9ANKvB
Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates
Learning with noisy labels is a common problem in supervised learning. Existing approaches require practitioners to specify noise rates, i.e., a set of parameters controlling the severity of label noise in the problem. In this work, we introduce a technique to learn from noisy labels that does not require a priori specification of the noise rates. In particular, we introduce a new family of loss functions that we name peer loss functions. Our approach then uses a standard empirical risk minimization (ERM) framework with peer loss functions. Peer loss functions associate each training sample with a certain form of "peer" samples, which evaluate a classifier's predictions jointly. We show that, under mild conditions, performing ERM with peer loss functions on the noisy dataset leads to the optimal or a near optimal classifier as if performing ERM over the clean training data, which we do not have access to. To the best of our knowledge, this is the first result on "learning with noisy labels without knowing noise rates" with theoretical guarantees. We pair our results with an extensive set of experiments, where we compare with state-of-the-art techniques for learning with noisy labels. Our results show that the peer loss function based method consistently outperforms the baseline benchmarks. Peer loss provides a way to simplify model development when facing potentially noisy training labels, and can be promoted as a robust candidate loss function in such situations.
reject
Thank you very much for the detailed feedback to the reviewers, which helped us better understand your paper. Thanks also for revising the manuscript significantly; many parts were indeed revised. However, due to the major revision, we find more points to be further discussed, which requires another round of reviews/rebuttals. For this reason, we decided not to accept this paper. We hope that the reviewers' comments are useful for improving the paper for potential future publication.
train
[ "BJlM6AygoS", "SkgGILdiiB", "HJlLUFmoiB", "BkxWXCicjH", "r1esPNlOir", "HJxAIGggjr", "Sye_PVbljr", "rJeXn7Zxsr", "r1esAQqrtB", "BylUU8YTFB", "r1ex9g6pKS", "SJebzaMR5H", "HkeRt0bC5H", "rJeXF_Q9YB", "BkxFF8w_tS", "S1xL8xFvtH", "H1lGE6g8YS" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public" ]
[ "We thank the reviewer for detailed comments, and for providing pointers to the more recent literature. We want to clarify our motivation, novelty and significance. We do think the reviewer might have missed the main novelty and contribution of peer loss in that peer loss operates without the need of estimating the...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 8, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 1, -1, -1, -1, -1, -1, -1 ]
[ "r1esAQqrtB", "r1esPNlOir", "BkxWXCicjH", "HJxAIGggjr", "iclr_2020_Bkgq9ANKvB", "BylUU8YTFB", "HJxAIGggjr", "r1ex9g6pKS", "iclr_2020_Bkgq9ANKvB", "iclr_2020_Bkgq9ANKvB", "iclr_2020_Bkgq9ANKvB", "HkeRt0bC5H", "iclr_2020_Bkgq9ANKvB", "BkxFF8w_tS", "S1xL8xFvtH", "H1lGE6g8YS", "iclr_2020...
iclr_2020_Hklo5RNtwS
Behavior-Guided Reinforcement Learning
We introduce a new approach for comparing reinforcement learning policies, using Wasserstein distances (WDs) in a newly defined latent behavioral space. We show that by utilizing the dual formulation of the WD, we can learn score functions over trajectories that can be in turn used to lead policy optimization towards (or away from) (un)desired behaviors. Combined with smoothed WDs, the dual formulation allows us to devise efficient algorithms that take stochastic gradient descent steps through WD regularizers. We incorporate these regularizers into two novel on-policy algorithms, Behavior-Guided Policy Gradient and Behavior-Guided Evolution Strategies, which we demonstrate can outperform existing methods in a variety of challenging environments. We also provide an open source demo.
reject
The authors introduce the idea of using Wasserstein distances over latent "behavioral spaces" to measure the similarity between two polices, for use in RL algorithms. Depending on the choice of behavioral embedding, this method produces different regularizers for policy optimization, in some cases recovering known algorithms such as TRPO. This approach generalizes ideas of similarity used in many common algorithms like TRPO, making these ideas widely applicable to many policy optimization approaches. The reviewers all agree that the core idea is interesting and would likely be useful to the community. However, a primary concern that was not sufficiently resolved during the rebuttal period was the experimental evaluation -- both the ability of the experiments to be replicated, as well as whether they provide sufficient insight into how/why the algorithm performs. Thus, I recommend rejection of this paper at this time.
train
[ "rkxv0JQtYS", "H1xFIB0-iB", "HylKjG0-jH", "r1gNR40-oB", "rJgz4SCWoB", "rkxtkEAZiB", "SyxKFm0-ir", "r1l0QRy0FH", "Bye86rHeqB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThis paper proposes using Wasserstein Distances to measure the difference between higher-level functions of policies, which this paper terms as \"behaviors\". For example, one such behavior could be the distribution over final states given the policy, or the distribution over returns given policy. Throu...
[ 1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_Hklo5RNtwS", "rJgz4SCWoB", "iclr_2020_Hklo5RNtwS", "rkxv0JQtYS", "r1gNR40-oB", "r1l0QRy0FH", "Bye86rHeqB", "iclr_2020_Hklo5RNtwS", "iclr_2020_Hklo5RNtwS" ]
iclr_2020_SkxoqRNKwr
Adversarial Privacy Preservation under Attribute Inference Attack
With the prevalence of machine learning services, crowdsourced data containing sensitive information poses substantial privacy challenges. Existing work focusing on protecting against membership inference attacks under the rigorous framework of differential privacy is vulnerable to attribute inference attacks. In light of the current gap between theory and practice, we develop a novel theoretical framework for privacy-preservation under the attack of attribute inference. Under our framework, we propose a minimax optimization formulation to protect the given attribute and analyze its privacy guarantees against arbitrary adversaries. On the other hand, it is clear that privacy constraints may cripple utility when the protected attribute is correlated with the target variable. To this end, we also prove an information-theoretic lower bound to precisely characterize the fundamental trade-off between utility and privacy. Empirically, we conduct extensive experiments to corroborate our privacy guarantee and validate the inherent trade-offs in different privacy preservation algorithms. Our experimental results indicate that the adversarial representation learning approaches achieve the best trade-off in terms of privacy preservation and utility maximization.
reject
While there was some support for the ideas presented, the majority of reviewers felt that this submission is not ready for publication at ICLR in its present form. The most significant concerns raised were about the strength of the experiments, and choice of appropriate baselines.
train
[ "HJxv_uj4sH", "Skg-cPsEsS", "BkgB_viEoS", "S1xTzvjViB", "rkxK9f-suB", "Hyg8YbzRFB", "HyxrliSJcS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for the thoughtful comments and we answer each reviewer’s questions individually below. We have also performed additional experiments as requested by Reviewer 1 and 3, and updated our manuscript accordingly. More detailed results and discussions of the additional experiments could be fou...
[ -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, 4, 1, 4 ]
[ "iclr_2020_SkxoqRNKwr", "rkxK9f-suB", "Hyg8YbzRFB", "HyxrliSJcS", "iclr_2020_SkxoqRNKwr", "iclr_2020_SkxoqRNKwr", "iclr_2020_SkxoqRNKwr" ]
iclr_2020_r1ln504YvH
Actor-Critic Approach for Temporal Predictive Clustering
Due to the wider availability of modern electronic health records (EHR), patient care data is often being stored in the form of time-series. Clustering such time-series data is crucial for patient phenotyping, anticipating patients’ prognoses by identifying “similar” patients, and designing treatment guidelines that are tailored to homogeneous patient subgroups. In this paper, we develop a deep learning approach for clustering time-series data, where each cluster comprises patients who share similar future outcomes of interest (e.g., adverse events, the onset of comorbidities, etc.). The clustering is carried out by using our novel loss functions that encourage each cluster to have homogeneous future outcomes. We adopt actor-critic models to allow “back-propagation” through the sampling process that is required for assigning clusters to time-series inputs. Experiments on two real-world datasets show that our model achieves superior clustering performance over state-of-the-art benchmarks and identifies meaningful clusters that can be translated into actionable information for clinical decision-making.
reject
This paper proposes a reinforcement learning approach to clustering time-series data. The reviewers had several questions related to clarity and concerns related to the novelty of the method, the connection to RL, and experimental results. While the authors were able to address some of these questions and concerns in the rebuttal, the reviewers believe that the paper is not quite ready for publication.
train
[ "H1gZQPOcjB", "BJlvFlF5oB", "B1lm9Cu9sH", "rkx8pYdcir", "Skxh_3ORFS", "B1latqSJ5B", "rkxb9oaZqB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the valuable comments.\n\nA1. In the probabilistic definition of the KL-divergence in (2), it is not necessary to explicitly denote the dependency between $S_{t}$ (i.e., the random variable for cluster assignment at time $t$) and $X_{1:t}$ (i.e., the random variable for the input subseque...
[ -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "rkxb9oaZqB", "B1latqSJ5B", "B1latqSJ5B", "Skxh_3ORFS", "iclr_2020_r1ln504YvH", "iclr_2020_r1ln504YvH", "iclr_2020_r1ln504YvH" ]
iclr_2020_Bkga90VKDB
Distilled embedding: non-linear embedding factorization using knowledge distillation
Word-embeddings are a vital component of Natural Language Processing (NLP) systems and have been extensively researched. Better representations of words have come at the cost of huge memory footprints, which has made deploying NLP models on edge-devices challenging due to memory limitations. Compressing embedding matrices without sacrificing model performance is essential for successful commercial edge deployment. In this paper, we propose Distilled Embedding, an (input/output) embedding compression method based on low-rank matrix decomposition with an added non-linearity. First, we initialize the weights of our decomposition by learning to reconstruct the full word-embedding and then fine-tune on the downstream task employing knowledge distillation on the factorized embedding. We conduct extensive experimentation with various compression rates on machine translation, using different data-sets with a shared word-embedding matrix for both embedding and vocabulary projection matrices. We show that the proposed technique outperforms conventional low-rank matrix factorization, and other recently proposed word-embedding matrix compression methods.
reject
This paper proposes to further distill token embeddings via what is effectively a simple autoencoder with a ReLU activation. All reviewers expressed concerns with the degree of technical contribution of this paper. As Reviewer 3 identifies, there are simple variants (e.g. end-to-end training with the factorized model) and there is no clear intuition for why the proposed method should outperform its variants as well as the other baselines (as noted by Reviewer 1). Reviewer 2 further expresses concerns about the merits of the proposed approach over existing approaches, given the apparently small effect size of the improvement (let alone the possibility that the improvement may not in fact be statistically significant).
val
[ "BJeih5VjsH", "ByeEBVrhjS", "HyexmOEisH", "HJlNIMM9jH", "HyesYBaYjr", "BkgMMQW9oB", "HJl-Mn5kcS", "B1lOWHeJ9H", "BJgi5BU8qS" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and critique. \n\n1) The main strengths of our model are simplicity, a single parameter to control compression rate (bottleneck size), no reliance on frequency information which is seldom available for pre-trained models and a faster running time which we demonstrate in Section 2 b). Alth...
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "HJl-Mn5kcS", "HJl-Mn5kcS", "HJlNIMM9jH", "HyesYBaYjr", "BJgi5BU8qS", "B1lOWHeJ9H", "iclr_2020_Bkga90VKDB", "iclr_2020_Bkga90VKDB", "iclr_2020_Bkga90VKDB" ]
iclr_2020_HJgCcCNtwH
NeuroFabric: Identifying Ideal Topologies for Training A Priori Sparse Networks
Long training times of deep neural networks are a bottleneck in machine learning research. The major impediment to fast training is the quadratic growth of both memory and compute requirements of dense and convolutional layers with respect to their information bandwidth. Recently, training `a priori' sparse networks has been proposed as a method for allowing layers to retain high information bandwidth, while keeping memory and compute low. However, the choice of which sparse topology should be used in these networks is unclear. In this work, we provide a theoretical foundation for the choice of intra-layer topology. First, we derive a new sparse neural network initialization scheme that allows us to explore the space of very deep sparse networks. Next, we evaluate several topologies and show that seemingly similar topologies can often have a large difference in attainable accuracy. To explain these differences, we develop a data-free heuristic that can evaluate a topology independently from the dataset the network will be trained on. We then derive a set of requirements that make a good topology, and arrive at a single topology that satisfies all of them.
reject
This work proposes new initialization and layer topologies for training a priori sparse networks. Reviewers agreed that the direction is interesting and that the paper is well written. Additionally the theory presented on the toy matrix reconstruction task helped motivate the proposed approach. However, it is also necessary to validate the new approach by comparing with existing sparsity literature on standard benchmarks. I recommend resubmitting with the additional experiments suggested by the reviewers.
train
[ "HyeuHcRxcH", "BJlH9wIhiB", "BJlWGUInir", "rJxGJL8njB", "HkxX6B8nsS", "SJlvcB8hir", "rkev_8yLtr", "rJgxbE8wcS", "BJxHFA_PcS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a sparse cascade architecture that is a multiplication of several sparse matrices. Then the paper provides several considerations about the connectivity scheme. Finally, the paper proposes a specific connectivity pattern that outperforms other ones.\n\nI am not exactly in this area, but the pape...
[ 3, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ 1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2020_HJgCcCNtwH", "SJlvcB8hir", "rkev_8yLtr", "HyeuHcRxcH", "rJgxbE8wcS", "BJxHFA_PcS", "iclr_2020_HJgCcCNtwH", "iclr_2020_HJgCcCNtwH", "iclr_2020_HJgCcCNtwH" ]
iclr_2020_ryl0cAVtPH
On The Difficulty of Warm-Starting Neural Network Training
In many real-world deployments of machine learning systems, data arrive piecemeal. These learning scenarios may be passive, where data arrive incrementally due to structural properties of the problem (e.g., daily financial data) or active, where samples are selected according to a measure of their quality (e.g., experimental design). In both of these cases, we are building a sequence of models that incorporate an increasing amount of data. We would like each of these models in the sequence to be performant and take advantage of all the data that are available to that point. Conventional intuition suggests that when solving a sequence of related optimization problems of this form, it should be possible to initialize using the solution of the previous iterate---to "warm start" the optimization rather than initialize from scratch---and see reductions in wall-clock time. However, in practice this warm-starting seems to yield poorer generalization performance than models that have fresh random initializations, even though the final training losses are similar. While it appears that some hyperparameter settings allow a practitioner to close this generalization gap, they seem to only do so in regimes that damage the wall-clock gains of the warm start. Nevertheless, it is highly desirable to be able to warm-start neural network training, as it would dramatically reduce the resource usage associated with the construction of performant deep learning systems. In this work, we take a closer look at this empirical phenomenon and try to understand when and how it occurs. Although the present investigation did not lead to a solution, we hope that a thorough articulation of the problem will spur new research that may lead to improved methods that consume fewer resources during training.
reject
The paper addresses the question of why warm starting could result in worse generalization ability than training from scratch. The reviewers agree that increasing the circumstances in which warm starting could be applied is of interest, in particular to reduce training time and computational resources. However, the reviewers were unanimous in their opinion that the paper is not suitable for publication at ICLR in its current form. Concerns included that the analysis was not sufficiently focused and the experiments too small scale. As the analysis component of the paper was considered to be limited, the experimental results were insufficient on the balance to push the paper to an acceptable state.
test
[ "B1l_dMc5tr", "HygkfbHijS", "rJgV5a4ssB", "BkgGph4soB", "rkxpchfIKr", "rygClmZstB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThis paper conducted an empirical study on why training with warm starting has worse generalization ability than learning from scratch. The paper is interesting, however, it has something unclear to me, as explained below.\n\n\n1)\tThe scale and diversity of the study can be improved. Only three models and three...
[ 3, -1, -1, -1, 1, 3 ]
[ 5, -1, -1, -1, 3, 1 ]
[ "iclr_2020_ryl0cAVtPH", "rkxpchfIKr", "B1l_dMc5tr", "rygClmZstB", "iclr_2020_ryl0cAVtPH", "iclr_2020_ryl0cAVtPH" ]
iclr_2020_rJl05AVtwB
Chordal-GCN: Exploiting sparsity in training large-scale graph convolutional networks
Despite the impressive success of graph convolutional networks (GCNs) on numerous applications, training on large-scale sparse networks remains challenging. Current algorithms require large memory space for storing GCN outputs as well as all the intermediate embeddings. Besides, most of these algorithms involve either random sampling or an approximation of the adjacency matrix, which might unfortunately lose important structure information. In this paper, we propose Chordal-GCN for semi-supervised node classification. The proposed model utilizes the exact graph structure (i.e., without sampling or approximation), while requiring limited memory resources compared with the original GCN. Moreover, it leverages the sparsity pattern as well as the clustering structure of the graph. The proposed model first decomposes a large-scale sparse network into several small dense subgraphs (called cliques), and constructs a clique tree. By traversing the tree, GCN training is performed clique by clique, and connections between cliques are exploited via the tree hierarchy. Furthermore, we implement Chordal-GCN on large-scale datasets and demonstrate superior performance.
reject
Rejection is proposed based on the majority review recommendation.
train
[ "SJxy62zaFB", "Bkl3gxj2iB", "ryxVOko3sH", "rygP3JsniB", "BJeCB_sT_H", "B1e8GSdatH", "Sye40EdLKH", "r1xfi7GlFH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper leverages the clique tree decomposition of the graph and design a new variant of GCN which does graph convolution on each clique and penalize the inconsistent prediction made on separators of each node and its children. Experiments on citation networks and the reddit network show that the proposed metho...
[ 6, -1, -1, -1, 1, 3, -1, -1 ]
[ 5, -1, -1, -1, 4, 4, -1, -1 ]
[ "iclr_2020_rJl05AVtwB", "BJeCB_sT_H", "B1e8GSdatH", "SJxy62zaFB", "iclr_2020_rJl05AVtwB", "iclr_2020_rJl05AVtwB", "r1xfi7GlFH", "iclr_2020_rJl05AVtwB" ]
iclr_2020_HJgkj0NFwr
Differentiable Architecture Compression
In many learning situations, resources at inference time are significantly more constrained than resources at training time. This paper studies a general paradigm, called Differentiable ARchitecture Compression (DARC), that combines model compression and architecture search to learn models that are resource-efficient at inference time. Given a resource-intensive base architecture, DARC utilizes the training data to learn which sub-components can be replaced by cheaper alternatives. The high-level technique can be applied to any neural architecture, and we report experiments on state-of-the-art convolutional neural networks for image classification. For a WideResNet with 97.2% accuracy on CIFAR-10, we improve single-sample inference speed by 2.28X and memory footprint by 5.64X, with no accuracy loss. For a ResNet with 79.15% Top-1 accuracy on ImageNet, we improve batch inference speed by 1.29X and memory footprint by 3.57X with 1% accuracy loss. We also give theoretical Rademacher complexity bounds in simplified cases, showing how DARC avoids over-fitting despite over-parameterization.
reject
The paper is borderline. Rejection is proposed due to ICLR's acceptance-rate limitation.
train
[ "H1gtNovniB", "BJxR1iP2oB", "HJgVtcvhir", "ByesugqTtr", "ryeWeSmCFB", "Hyl64KDMqH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Main Comments:\n\nThank you for your comments and careful review of the paper. We respond to each of your comments below:\n\n(1) \"[W]ithout weight regularization, this optimization problem seems ill-posed... More generally, a low alpha can be attained for any model by simply increasing the network weights themsel...
[ -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, 3, 4, 3 ]
[ "Hyl64KDMqH", "ryeWeSmCFB", "ByesugqTtr", "iclr_2020_HJgkj0NFwr", "iclr_2020_HJgkj0NFwr", "iclr_2020_HJgkj0NFwr" ]
iclr_2020_HylloR4YDr
Learning Latent Representations for Inverse Dynamics using Generalized Experiences
Many practical robot locomotion tasks require agents to use control policies that can be parameterized by goals. Popular deep reinforcement learning approaches in this direction involve learning goal-conditioned policies or value functions, or Inverse Dynamics Models (IDMs). IDMs map an agent’s current state and desired goal to the required actions. We show that the key to achieving good performance with IDMs lies in learning the information shared between equivalent experiences, so that they can be generalized to unseen scenarios. We design a training process that guides the learning of latent representations to encode this shared information. Using a limited number of environment interactions, our agent is able to efficiently navigate to arbitrary points in the goal space. We demonstrate the effectiveness of our approach in high-dimensional locomotion environments such as the Mujoco Ant, PyBullet Humanoid, and PyBullet Minitaur. We provide quantitative and qualitative results to show that our method clearly outperforms competing baseline approaches.
reject
Solid, but not novel enough to merit publication. The reviewers agree on rejection, and despite the authors' revisions, the paper requires more work and broader experimentation for publication.
test
[ "B1goP3P7oH", "HJlHJ3v7iS", "BkgORYD7jS", "SkgQMKDQiH", "rkehqHWYFH", "rkgnfN29YB", "r1ejU8BQqr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\"limited to navigation environments - The proposed methods do not seem to be directly applicable to tasks other than navigation, where a very task-specific goal position can be provided.\"\n\nWe have shown results on tasks that have been the focus of many recent works in goal-conditioned RL ([4], [5], [6]). At th...
[ -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, 5, 4, 3 ]
[ "HJlHJ3v7iS", "rkehqHWYFH", "rkgnfN29YB", "r1ejU8BQqr", "iclr_2020_HylloR4YDr", "iclr_2020_HylloR4YDr", "iclr_2020_HylloR4YDr" ]
iclr_2020_HklWsREKwr
Training Deep Neural Networks with Partially Adaptive Momentum
Adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, despite the nice property of fast convergence, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum in training deep neural networks. This leaves how to close the generalization gap of adaptive gradient methods an open problem. In this work, we show that adaptive gradient methods such as Adam and Amsgrad are sometimes "over adapted". We design a new algorithm, called the Partially adaptive momentum estimation method, which unifies Adam/Amsgrad with SGD by introducing a partial adaptive parameter p, to achieve the best of both worlds. We also prove the convergence rate of our proposed algorithm to a stationary point in the stochastic nonconvex optimization setting. Experiments on standard benchmarks show that our proposed algorithm can maintain a fast convergence rate as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks. These results would suggest practitioners pick up adaptive gradient methods once again for faster training of deep neural networks.
reject
This paper extends Adam by adding another hyperparameter that allows the second moments to be raised to a power p other than 1/2. This certainly seems worth trying. The paper is well written, and the experiments seem reasonably complete. But some of the reviewers and I feel like the contribution is a bit obvious and incremental. The "small learning rate dilemma" needs a bit more justification: since the denominator has a different scale, the learning rates for different values of p are not directly comparable. It could very well be that Adam's learning rate has to be set too small due to some outlier dimensions, but showing this would require some evidence. From the experiments, it does seem like there's some practical benefit, though it's not terribly surprising that adding an additional hyperparameter will result in improved performance. The reviewers think the theoretical analysis is a straightforward extension of prior work (though I haven't checked myself). Overall, it doesn't seem to me like the contribution is quite enough for publication at ICLR.
train
[ "rJle8vkKKB", "rkl-_fDYsS", "HyewQGDKoB", "SJgNyfDKjH", "rkxzgXBx9S", "B1eUQxIZcr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "% post author response %\nThanks for your detailed response. \n\nR1. Note that in almost all classical optimization routines, the learning rate has a (very intuitive) scaling on the problem parameters - for e.g. in gradient descent, the learning rate looks like 1/smoothness. This is mirrored in the definition of N...
[ 1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, 3, 5 ]
[ "iclr_2020_HklWsREKwr", "rJle8vkKKB", "rkxzgXBx9S", "B1eUQxIZcr", "iclr_2020_HklWsREKwr", "iclr_2020_HklWsREKwr" ]
iclr_2020_HJe-oRVtPB
STABILITY AND CONVERGENCE THEORY FOR LEARNING RESNET: A FULL CHARACTERIZATION
The ResNet structure has achieved great success since its debut. In this paper, we study the stability of learning ResNet. Specifically, we consider the ResNet block h_l = ϕ(h_{l−1} + τ·g(h_{l−1})), where ϕ(⋅) is the ReLU activation and τ is a scalar. We show that for the standard initialization used in practice, τ = 1/Ω(L) is a sharp value in characterizing the stability of the forward/backward process of ResNet, where L is the number of residual blocks. Specifically, stability is guaranteed for τ ≤ 1/Ω(L), while conversely the forward process explodes when τ > L^{−1/2+c} for a positive constant c. Moreover, if ResNet is properly over-parameterized, we show that for τ ≤ 1/Ω̃(L) gradient descent is guaranteed to find the global minima (we use Ω̃(⋅) to hide logarithmic factors), which significantly enlarges the range of τ that admits global convergence compared with previous work. We also demonstrate that the over-parameterization requirement of ResNet only weakly depends on the depth, which corroborates the advantage of ResNet over vanilla feedforward networks. Empirically, with τ ≤ 1/L, deep ResNet can be easily trained even without a normalization layer. Moreover, adding τ = 1/L can also improve the performance of ResNet with a normalization layer.
reject
The article studies the stability of ResNets in relation to initialisation and depth. The reviewers found that this is an interesting article with important theoretical and experimental results. However, they also pointed out that the results, while good, are based on adaptations of previous work and hence might not be particularly impactful. The reviewers found that the revision made important improvements, but not quite meeting the bar for acceptance, pointing out that the presentation and details in the proofs could still be improved.
train
[ "r1eSRY_aKr", "ByxRBPa7sr", "HyxpjgESiB", "HkxKmDTmoS", "rylFv86QiH", "ryePM53AFB", "rylD2kolqS", "SJeixOkcuH", "H1einXEd_H", "Hklno59vdS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "This paper shows the stability of ResNet for the output scale of resblock: $\\tau < \\sqrt{L}^{-1}$ and shows the explosion for $\\tau > L^{-0.5+c}$. Based on this analysis, a linear convergence rate of the gradient descent for the squared loss is also shown. In addition, this paper empirically verifies the effici...
[ 3, -1, -1, -1, -1, 1, 6, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, 3, 3, -1, -1, -1 ]
[ "iclr_2020_HJe-oRVtPB", "r1eSRY_aKr", "iclr_2020_HJe-oRVtPB", "ryePM53AFB", "rylD2kolqS", "iclr_2020_HJe-oRVtPB", "iclr_2020_HJe-oRVtPB", "H1einXEd_H", "Hklno59vdS", "iclr_2020_HJe-oRVtPB" ]
iclr_2020_BygfiAEtwS
Inducing Stronger Object Representations in Deep Visual Trackers
Fully convolutional deep correlation networks are integral components of state-of-the-art approaches to single object visual tracking. It is commonly assumed that these networks perform tracking by detection by matching features of the object instance with features of the entire frame. Strong architectural priors and conditioning on the object representation is thought to encourage this tracking strategy. Despite these strong priors, we show that deep trackers often default to "tracking-by-saliency" detection – without relying on the object instance representation. Our analysis shows that despite being a useful prior, salience detection can prevent the emergence of more robust tracking strategies in deep networks. This leads us to introduce an auxiliary detection task that encourages more discriminative object representations that improve tracking performance.
reject
This paper proposes to learn a visual tracking network with an object detection loss as well as the ordinary tracking objective, to enhance the reliability of the tracking network. The reviewers were unanimous in their opinion that the paper should not be accepted to ICLR in its current form. A main concern is that the proposed method shows improvement over a relatively weak base system. The author response proposed additional analysis, but the reviewers felt that, without that analysis already included, it was not possible to change the overall review score.
train
[ "r1lK0BNojB", "BkxA2mEoir", "B1xmPmNjir", "HJx5eQNjiB", "S1lT_rjdtr", "rJlGv0Watr", "rJe0cj3z9B", "rJg5RTRVuS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "Thank you for your response. It seems like we mostly agree, but without seeing the additional analyses you promised I cannot change my assessment. So I would suggest revising the paper accordingly and re-submitting the revision in the future.", "Response to Reviewer #2\n\nThank you for the detailed feedback and ...
[ -1, -1, -1, -1, 3, 3, 1, -1 ]
[ -1, -1, -1, -1, 5, 3, 4, -1 ]
[ "B1xmPmNjir", "iclr_2020_BygfiAEtwS", "iclr_2020_BygfiAEtwS", "iclr_2020_BygfiAEtwS", "iclr_2020_BygfiAEtwS", "iclr_2020_BygfiAEtwS", "iclr_2020_BygfiAEtwS", "iclr_2020_BygfiAEtwS" ]
iclr_2020_S1xXiREKDB
Adversarial training with perturbation generator networks
Despite the remarkable development of recent deep learning techniques, neural networks are still vulnerable to adversarial attacks, i.e., methods that fool the neural networks with perturbations that are too small for human eyes to perceive. Many adversarial training methods were introduced to solve this problem, using adversarial examples as training data. However, the adversarial attack methods used in these techniques are fixed, making the model stronger only against attacks used in training, which is widely known as an overfitting problem. In this paper, we suggest a novel adversarial training approach. In addition to the classifier, our method adds another neural network that generates the most effective adversarial perturbation by finding the weakness of the classifier. This perturbation generator network is trained to produce perturbations that maximize the loss function of the classifier, and these adversarial examples train the classifier with a true label. In short, the two networks compete with each other, performing a minimax game. In this scenario, attack patterns created by the generator network are adaptively altered to the classifier, mitigating the overfitting problem mentioned above. We theoretically prove that our minimax optimization problem is equivalent to minimizing the adversarial loss after all. Beyond this, we propose an evaluation method that can accurately compare a wide range of adversarial algorithms. Experiments with various datasets show that our method outperforms conventional adversarial algorithms.
reject
This paper proposes to use the GAN (i.e., minimax) framework for adversarial training, where another neural network was introduced to generate the most effective adversarial perturbation by finding the weakness of the classifier. The rebuttal was not fully convincing on why the proposed method should be superior to existing attacks.
test
[ "HkxX68w6FH", "BJl-qvl3iB", "ByxBHBLciH", "S1eGhiAKoH", "SyloogMtjB", "r1gQLv-wiB", "HyljxJYwjr", "H1goQvbDor", "Bkl-pB-DjS", "HkedgrZvjH", "HklZG5TYFB", "H1xDmCCJ5S" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to use the GAN framework for adversarial training. The proposed algorithm is mini-max loss plus L2-regularization on the perturbations generated by a generator network. Additionally, the paper shows a mathematical interpretation of the L2-term. Experiments on CIFAR-10 and CIFAR-100 shows that t...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2020_S1xXiREKDB", "iclr_2020_S1xXiREKDB", "S1eGhiAKoH", "Bkl-pB-DjS", "HyljxJYwjr", "H1xDmCCJ5S", "r1gQLv-wiB", "H1xDmCCJ5S", "HkxX68w6FH", "HklZG5TYFB", "iclr_2020_S1xXiREKDB", "iclr_2020_S1xXiREKDB" ]
iclr_2020_HklmoRVYvr
Long History Short-Term Memory for Long-Term Video Prediction
While video prediction approaches have advanced considerably in recent years, learning to predict long-term future is challenging — ambiguous future or error propagation over time yield blurry predictions. To address this challenge, existing algorithms rely on extra supervision (e.g., action or object pose), motion flow learning, or adversarial training. In this paper, we propose a new recurrent unit, Long History Short-Term Memory (LH-STM). LH-STM incorporates long history states into a recurrent unit to learn longer range dependencies. To capture spatio-temporal dynamics in videos, we combined LH-STM with the Context-aware Video Prediction model (ContextVP). Our experiments on the KTH human actions and BAIR robot pushing datasets demonstrate that our approach produces not only sharper near-future predictions, but also farther into the future compared to the state-of-the-art methods.
reject
The paper proposes a new recurrent unit which incorporates long history states to learn longer-range dependencies for improved video prediction. This history term corresponds to a linear combination of previous hidden states selected through a soft-attention mechanism and can be directly added to the ConvLSTM equations that compute the IFO gates and the new state. The authors perform empirical validation on the challenging KTH and BAIR Push datasets and show that their architecture outperforms existing work in terms of SSIM, PSNR, and VIF. The main issues raised by the reviewers are the incremental nature of the work and problems in the empirical evaluation that leave the main claims of the paper unsupported. After the rebuttal and discussion phase, the reviewers agree that these issues were not adequately resolved and the work doesn't meet the acceptance bar. I will hence recommend the rejection of this paper. Nevertheless, we encourage the authors to improve the manuscript by addressing the remaining issues in the empirical evaluation.
train
[ "HJgpVslatS", "H1xIL0o2jB", "SJeb-wohsH", "rJlUqJonoB", "rJxGfT9hsH", "ryxzNTRoiS", "rkg4ojRssH", "H1gUksXcsS", "rkls7c75sS", "SklElF75oB", "S1eCthRFjr", "HklVnt0KjS", "BJlXGS0FiS", "SygbgGVFsH", "SJg1ihBWsB", "HkeBAjr-iS", "SyxmMjSWoS", "SyeHF9HbsH", "rkgTTMcKKB", "r1xjMAW0tH"...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_r...
[ "The paper proposes a type of recurrent neural network module called Long History Short-Term Memory (LH-STM) for longer-term video generation. This module can be used to replace ConvLSTMs in previously published video prediction models. It expands ConvLSTMs by adding a \"previous history\" term to the ConvLSTM equa...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2020_HklmoRVYvr", "rJxGfT9hsH", "rJlUqJonoB", "SklElF75oB", "rkls7c75sS", "SyeHF9HbsH", "iclr_2020_HklmoRVYvr", "BJlXGS0FiS", "HklVnt0KjS", "S1eCthRFjr", "SygbgGVFsH", "HkeBAjr-iS", "SyxmMjSWoS", "iclr_2020_HklmoRVYvr", "rkgTTMcKKB", "HJgpVslatS", "HJgpVslatS", "r1xjMAW0tH", ...
iclr_2020_BJlNs0VYPB
The Sooner The Better: Investigating Structure of Early Winning Lottery Tickets
The recent success of the lottery ticket hypothesis by Frankle & Carbin (2018) suggests that small, sparsified neural networks can be trained as long as the network is initialized properly. Several follow-up discussions on the initialization of the sparsified model have discovered interesting characteristics such as the necessity of rewinding (Frankle et al. (2019)), the importance of the sign of the initial weights (Zhou et al. (2019)), and the transferability of the winning lottery tickets (S. Morcos et al. (2019)). In contrast, another essential aspect of the winning ticket, the structure of the sparsified model, has been little discussed. Unfortunately, to find the lottery ticket, all the prior work still relies on computationally expensive iterative pruning. In this work, we conduct an in-depth investigation of the structure of winning lottery tickets. Interestingly, we discover that there exist many lottery tickets that can achieve equally good accuracy well before the regular training schedule even finishes. We provide insights into the structure of these early winning tickets with supporting evidence. 1) Under stochastic gradient descent optimization, a lottery ticket emerges when the weight magnitudes of a model saturate; 2) Pruning before a model saturates causes a loss of capability to learn complex patterns, resulting in accuracy degradation. We employ memorization capacity analysis to confirm this quantitatively, and further explain why gradual pruning can achieve better accuracy than one-shot pruning. Based on these insights, we discover early winning tickets for various ResNet architectures on both CIFAR10 and ImageNet, achieving state-of-the-art accuracy at a high pruning rate without expensive iterative pruning. In the case of ResNet50 on ImageNet, this comes to a winning ticket of 75.02% Top-1 accuracy at an 80% pruning rate in only 22% of the total epochs required for iterative pruning.
reject
This paper conducts extensive experiments to understand the lottery ticket hypothesis. The lottery ticket hypothesis is that there exist sparse sub-networks inside dense large models that achieve as good accuracy as the original model. The reviewers had issues with the novelty and significance of these experiments. They felt that the work didn't shed new scientific light, and that the number of epochs needed for early detection was still expensive. I recommend doing further studies and submitting to another venue.
train
[ "B1xmfZs6Yr", "SkgLQMNaFr", "SJxfRl8iiS", "ryxbBQ8jor", "Skl12GLjoS", "Hygc4zUsjS", "H1xEaZ8iiB", "rkx-7bUior", "SkgFH5o3FS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper carefully observes the behavior of weight magnitudes during training, finding the is a stage of saturation that is closely related to the winning lottery tickets drawing. Based on this observation the authors hypothesize that we can draw lottery tickets early but too early pruning can irreversibly hurt ...
[ 6, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_BJlNs0VYPB", "iclr_2020_BJlNs0VYPB", "B1xmfZs6Yr", "Skl12GLjoS", "iclr_2020_BJlNs0VYPB", "SkgFH5o3FS", "SkgLQMNaFr", "SJxfRl8iiS", "iclr_2020_BJlNs0VYPB" ]
iclr_2020_SklViCEFPH
Simple is Better: Training an End-to-end Contract Bridge Bidding Agent without Human Knowledge
Contract bridge is a multi-player imperfect-information game where one partnership collaborates to compete against the other partnership. The game consists of two phases: bidding and playing. While playing is relatively easy for modern software, bidding is challenging and requires agents to learn a communication protocol to jointly reach the optimal contract, given their own private information. The agents need to exchange information with their partners, and interfere with their opponents, through a sequence of actions. In this work, we train a strong agent to bid competitive bridge purely through self-play, outperforming WBridge5, a championship-winning software. Furthermore, we show that explicitly modeling belief is not necessary for boosting performance. To our knowledge, this is the first competitive bridge agent that is trained with no domain knowledge. It outperforms the previous state-of-the-art, which uses human replays, with 70x fewer parameters.
reject
This paper proposes a new training method for an end-to-end contract bridge bidding agent. Reviewers R2 and R3 raised concerns regarding limited novelty and unconvincing experimental results. R2's main objection is that the paper has "strong SOTA performance with a simple model, but empirical study are rather shallow." Based on their recommendations, I recommend rejecting this paper.
val
[ "Bklt_hRCYH", "rke8MLk_oH", "BygCQrk_jH", "SJxd7CCoiS", "r1xKJ15TFS", "HJl0Je-1cr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a deep learning agent for automatic bidding in the bridge game. The agent is trained with a standard A3C reinforcement learning model with self-play, and the internal neural network only takes a rather succinct representation of the bidding history as the input. Experiment results demonstrate s...
[ 3, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, 1, 3 ]
[ "iclr_2020_SklViCEFPH", "Bklt_hRCYH", "HJl0Je-1cr", "iclr_2020_SklViCEFPH", "iclr_2020_SklViCEFPH", "iclr_2020_SklViCEFPH" ]