paper_id: string (length 19-21)
paper_title: string (length 8-170)
paper_abstract: string (length 8-5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29-10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
iclr_2020_S1gwC1StwS
Barcodes as summary of objective functions' topology
We apply canonical forms of gradient complexes (barcodes) to explore the loss surfaces of neural networks. We present an algorithm for computing the barcodes of minima of an objective function. Our experiments confirm two principal observations: (1) the barcodes of minima are located in a small lower part of the range of values of the objective function, and (2) increasing the neural network's depth brings down the minima's barcodes. This has natural implications for neural network learning and the ability to generalize.
reject
The main concern raised by the reviewers is that the paper is difficult to read and potentially unclear. Therefore, the area chair read the paper, and also found it fairly dense and challenging to read. While there may be important discoveries in the paper, the paper in its current form makes it too difficult to read. Since four reviewers (including the AC) struggled to understand the paper, we believe the presentation of the paper should be improved. In particular, the claims of the paper should be better put into context.
train
[ "HJekGJOZjS", "HkxncmdbsS", "SJxwR-dbiS", "BklZP2GiFB", "BkeqaMZJ5S", "SkxeCjmJqS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your comments.\nAs explained in section 3 we actually do not use any grid. The algorithm for computing barcodes of arbitrary function that we developed works with randomly chosen, or specifically chosen, point cloud in the function’s input. It does not require a grid, thus, it expands the calculations o...
[ -1, -1, -1, 1, 1, 1 ]
[ -1, -1, -1, 1, 5, 1 ]
[ "BkeqaMZJ5S", "SkxeCjmJqS", "BklZP2GiFB", "iclr_2020_S1gwC1StwS", "iclr_2020_S1gwC1StwS", "iclr_2020_S1gwC1StwS" ]
iclr_2020_rJguRyBYvr
Improved Detection of Adversarial Attacks via Penetration Distortion Maximization
This paper is concerned with the defense of deep models against adversarial attacks. We develop an adversarial detection method, which is inspired by the certificate defense approach, and captures the idea of separating class clusters in the embedding space so as to increase the margin. The resulting defense is intuitive, effective, scalable and can be integrated into any given neural classification model. Our method demonstrates state-of-the-art detection performance under all threat models.
reject
A defense against adversarial attacks is presented, which builds mostly on combining known methods in a novel way. While the novelty is somewhat limited, this would be fine if the results were unequivocally good and the other parts of the paper unproblematic. However, reviewers were not entirely convinced by the results, and had a number of minor complaints about various parts of the paper. In sum, this paper is not currently at a stage where it can be accepted.
train
[ "S1ez9A-QKH", "HJlIS4s1qB", "HkxTPm3_jS", "SJxRCM3OsS", "HklKrf3uoH", "SkxDVJmVKH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "After rebuttal: my rating remains the same.\nI have read other reviewers' comments and the response. Overall, the contribution of retraining and detection with previously explored kernel density is limited. \n\n=================\nSummary: \nThis paper proposes new regularization techniques to train DNNs, which aft...
[ 3, 6, -1, -1, -1, 3 ]
[ 4, 1, -1, -1, -1, 5 ]
[ "iclr_2020_rJguRyBYvr", "iclr_2020_rJguRyBYvr", "S1ez9A-QKH", "SkxDVJmVKH", "HJlIS4s1qB", "iclr_2020_rJguRyBYvr" ]
iclr_2020_SJeOAJStwB
On Federated Learning of Deep Networks from Non-IID Data: Parameter Divergence and the Effects of Hyperparametric Methods
Federated learning, where a global model is trained by iterative parameter averaging of locally-computed updates, is a promising approach for distributed training of deep networks; it provides high communication efficiency and privacy preservation, which allows it to fit well into decentralized data environments, e.g., mobile-cloud ecosystems. However, despite these advantages, federated learning-based methods still have a challenge in dealing with non-IID training data of local devices (i.e., learners). In this regard, we study the effects of a variety of hyperparametric conditions under non-IID environments, to answer important concerns in practical implementations: (i) We first investigate parameter divergence of local updates to explain performance degradation from non-IID data. The origin of the parameter divergence is also found both empirically and theoretically. (ii) We then revisit the effects of optimizers, network depth/width, and regularization techniques; our observations show that the well-known advantages of the hyperparameter optimization strategies can instead yield diminishing returns with non-IID data. (iii) We finally provide the reasons for the failure cases in a categorized way, mainly based on metrics of the parameter divergence.
reject
This paper studies the problem of federated learning for non-i.i.d. data, and looks at the hyperparameter optimization in this setting. As the reviewers have noted, this is a purely empirical paper. There are certain aspects of the experiments that need further discussion, especially the learning rate selection for different architectures. That said, the submission may not be ready for publication at its current stage.
test
[ "rkxsOJ-nir", "HJg9jyWnsr", "HJe-HfWnor", "BkgAzMW2ir", "Skedlfbnsr", "HyxWNZ-njH", "rJeyG-WhiB", "SyxrReZnsB", "HkeZ3gb2iB", "HJgLu-bnjH", "B1goUWZ3jr", "H1lJwlb2iH", "HyeLHgZhir", "Bkex-eZniS", "rJg1DkWhoB", "ByguveDTFS", "SJebxqCTtB", "r1lHGdF0FH", "r1eGGqEkuS" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "\n**Regarding Section 4.2:\nIn many previous literatures, e.g., (Zhao et al., 2018), inordinate magnitude of parameter divergence is regarded as a direct response to learners’ local data being non-IID sampled from the population distribution; thus they explained that the consequent parameter averaging with the hig...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 1, -1 ]
[ "rJg1DkWhoB", "iclr_2020_SJeOAJStwB", "BkgAzMW2ir", "Skedlfbnsr", "ByguveDTFS", "rJeyG-WhiB", "SyxrReZnsB", "HkeZ3gb2iB", "SJebxqCTtB", "B1goUWZ3jr", "HyxWNZ-njH", "HyeLHgZhir", "Bkex-eZniS", "r1lHGdF0FH", "iclr_2020_SJeOAJStwB", "iclr_2020_SJeOAJStwB", "iclr_2020_SJeOAJStwB", "icl...
iclr_2020_rJxt0JHKvS
Coloring graph neural networks for node disambiguation
In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs). More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and show that this representation is a universal approximator of continuous functions on graphs with node attributes. Our method relies on separability, a key topological characteristic that allows us to extend well-chosen neural networks into universal representations. Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while being state-of-the-art on benchmark graph classification datasets.
reject
This paper presents an extension of MPNN which leverages random color augmentation to improve the representation power of MPNNs. The experimental results show the effectiveness of colorization. A majority of the reviewers were particularly concerned about the lack of permutation invariance in the approach as well as the large-variance issue in practice, and their opinion stayed the same after the rebuttal. The reviewers unanimously expressed their concerns about the large-variance issue during the discussion period. Overall, the reviewers believe that the authors have not addressed their concerns sufficiently.
train
[ "rylnm81RKr", "SygeSfGuoB", "H1ltBTpZiH", "HkgBzaabiB", "S1x72spZjH", "BJlp8spWsS", "SJxoqd7TtB", "Bye8QHt0tH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThe paper presents an interesting work, called Colored Local Iterative Procedure (CLIP), to improve the expressive power of Message Passing Neural Networks (MPNNs). Considering the expressive power from the concept of universal representations, the authors introduced the concept of separability and combine the s...
[ 6, -1, -1, -1, -1, -1, 1, 3 ]
[ 4, -1, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2020_rJxt0JHKvS", "iclr_2020_rJxt0JHKvS", "HkgBzaabiB", "Bye8QHt0tH", "rylnm81RKr", "SJxoqd7TtB", "iclr_2020_rJxt0JHKvS", "iclr_2020_rJxt0JHKvS" ]
iclr_2020_rke5R1SFwS
Learning to Remember from a Multi-Task Teacher
Recent studies on catastrophic forgetting during sequential learning typically focus on fixing the accuracy of the predictions for a previously learned task. In this paper we argue that the outputs of neural networks are subject to rapid changes when learning a new data distribution, and networks that appear to "forget" everything still contain useful representations of previous tasks. We thus propose that, rather than enforcing the output accuracy to stay the same, we should aim to reduce the effect of catastrophic forgetting at the representation level, as the output layer can be quickly recovered later with a small number of examples. Towards this goal, we propose an experimental setup that measures the amount of representational forgetting, and develop a novel meta-learning algorithm to overcome this issue. The proposed meta-learner produces weight updates of a sequential learning network, mimicking a multi-task teacher network's representation. We show that our meta-learner can improve its learned representations on new tasks, while maintaining a good representation for old tasks.
reject
The paper addresses the setting of continual learning. Instead of focusing on catastrophic forgetting measured in terms of the output performance of the previous tasks, the authors tackle forgetting that happens at the level of the feature representation via a meta-learning approach. As rightly acknowledged by R2, from a meta-learning perspective the work is quite interesting and demonstrates a number of promising results. However, the reviewers have raised several important concerns that placed this work below the acceptance bar: (1) the current manuscript lacks convincing empirical evaluations that clearly show the benefits of the proposed approach over SOTA continual learning methods; specifically, the generalization of the proposed strategy to more than two sequential tasks is essential; also see R1’s detailed suggestions that would strengthen the contributions of this approach in light of continual learning; (2) training a meta-learner to predict the weight updates with supervision from a multi-task teacher network as an oracle, albeit nicely motivated, is unrealistic in the continual learning setting -- see R1’s detailed comments on this issue; (3) R2 and R3 expressed concerns regarding i) stronger baselines that are tuned to take advantage of the meta-learning data and ii) transferability to the different new tasks, i.e. dissimilarity of the meta-train and meta-test settings. We are pleased to report that the authors showed and discussed in their response some initial qualitative results regarding these issues. An analysis of the performance of the proposed method when the meta-training and testing datasets are made progressively dissimilar would strengthen the evaluation of the proposed meta-learning approach. There is reviewer disagreement on this paper. The AC can confirm that all three reviewers have read the rebuttal and have contributed to a long discussion. Among the aforementioned concerns, (3) did not have a decisive impact on the decision, but would be helpful to address in a subsequent revision. However, (1) and (2) make it very difficult to assess the benefits of the proposed approach, and were viewed by the AC as critical issues. The AC suggests that, in its current state, the manuscript is not ready for publication and needs a major revision before submitting for another round of reviews. We hope the reviews are useful for improving and revising the paper.
train
[ "HklvmbYRYS", "rJgWaM12KS", "B1lMX-HjoB", "SJlCEgrsoH", "Skgd2xBojH", "Hye_UZHisB", "Skg8UxriiB", "B1xinJSiiB", "SkxKwvdptB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary: This paper introduces a variation on measuring catastrophic forgetting in sequential learning at the representation level and attempts to resolve forgetting issue with the help of a meta-learner that predicts weight updates for previous tasks while it receives supervision from a multi-task learner teacher...
[ 1, 3, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_rke5R1SFwS", "iclr_2020_rke5R1SFwS", "rJgWaM12KS", "HklvmbYRYS", "SkxKwvdptB", "B1lMX-HjoB", "SJlCEgrsoH", "iclr_2020_rke5R1SFwS", "iclr_2020_rke5R1SFwS" ]
iclr_2020_ryl5CJSFPS
GENERALIZATION GUARANTEES FOR NEURAL NETS VIA HARNESSING THE LOW-RANKNESS OF JACOBIAN
Modern neural network architectures often generalize well despite containing many more parameters than the size of the training dataset. This paper explores the generalization capabilities of neural networks trained via gradient descent. We develop a data-dependent optimization and generalization theory which leverages the low-rank structure of the Jacobian matrix associated with the network. Our results help demystify why training and generalization are easier on clean and structured datasets and harder on noisy and unstructured datasets, as well as how the network size affects the evolution of the train and test errors during training. Specifically, we use a control knob to split the Jacobian spectrum into "information" and "nuisance" spaces associated with the large and small singular values. We show that over the information space learning is fast and one can quickly train a model with zero training loss that can also generalize well. Over the nuisance space training is slower and early stopping can help with generalization at the expense of some bias. We also show that the overall generalization capability of the network is controlled by how well the labels are aligned with the information space. A key feature of our results is that even constant-width neural nets can provably generalize for sufficiently nice datasets. We conduct various numerical experiments on deep networks that corroborate our theoretical findings and demonstrate that: (i) the Jacobian of typical neural networks exhibits low-rank structure with a few large singular values and many small ones, leading to a low-dimensional information space, (ii) over the information space learning is fast and most of the labels fall on this space, and (iii) label noise falls on the nuisance space and impedes optimization/generalization.
reject
This submission investigates the properties of the Jacobian matrix in the deep learning setup. Specifically, it splits the spectrum of the matrix into information (large singular values) and nuisance (small singular values) spaces. The paper shows that over the information space learning is fast and achieves zero loss. It also shows that generalization relates to how well labels are aligned with the information space. While the submission certainly has encouraging analysis/results, reviewers find these contributions limited and it is not clear how some of the claims in the paper can be extended to more general settings. For example, while the authors claim that low-rank structure is suggested by theory, the support of this claim is limited to a case study on a mixture of Gaussians. In addition, the provided analysis only studies two-layer networks. As elaborated by R4, extending these arguments to more than two layers does not seem straightforward using the tools used in the submission. While all reviewers appreciated the authors' response, they were not convinced and maintained their original ratings.
train
[ "Syer94DpFS", "HyxZzj92jH", "S1lUTvchjH", "Bklu7DqnsS", "rkgb8JyCKr", "BygXbD0n5B" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Note: The template used in this paper is of ICLR 2019, not ICLR 2020.\n\nThis paper identifies the information space and nuisance space by thresholding the singular values of the network's jacobian and shows that generally the residuals projected to the information space can be effectively optimized to zero, thus ...
[ 3, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, 3, 3 ]
[ "iclr_2020_ryl5CJSFPS", "rkgb8JyCKr", "Syer94DpFS", "BygXbD0n5B", "iclr_2020_ryl5CJSFPS", "iclr_2020_ryl5CJSFPS" ]
iclr_2020_SyxiRJStwr
Dynamic Scale Inference by Entropy Minimization
Given the variety of the visual world there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field. Rather than enumerate variations across filter channels or pyramid levels, dynamic models locally predict scale and adapt receptive fields accordingly. The degree of variation and diversity of inputs makes this a difficult task. Existing methods either learn a feedforward predictor, which is not itself totally immune to the scale variation it is meant to counter, or select scales by a fixed algorithm, which cannot learn from the given task and data. We extend dynamic scale inference from feedforward prediction to iterative optimization for further adaptivity. We propose a novel entropy minimization objective for inference and optimize over task and structure parameters to tune the model to each input. Optimization during inference improves semantic segmentation accuracy and generalizes better to extreme scale variations that cause feedforward dynamic inference to falter.
reject
This paper constitutes interesting progress on an important problem. I urge the authors to continue to refine their investigations, with the help of the reviewer comments; e.g., the quantitative analysis recommended by AnonReviewer4.
train
[ "ryxrlQ_Lcr", "SkeTFntooH", "HygQr3tosr", "HyxxXnYssB", "rJeSrHy0FB", "S1x7EQ7AFr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe following work proposes a test-time optimization over scales to improve semantic segmentation. Specifically, at test time, they iteratively optimize over the score and scale parameters of Shellhamer et al 2019, where a Gaussian receptive field is used to allow for dynamic scale adaptation of each con...
[ 3, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, 5, 3 ]
[ "iclr_2020_SyxiRJStwr", "rJeSrHy0FB", "S1x7EQ7AFr", "ryxrlQ_Lcr", "iclr_2020_SyxiRJStwr", "iclr_2020_SyxiRJStwr" ]
iclr_2020_H1lTRJBtwB
Compositional Transfer in Hierarchical Reinforcement Learning
The successful application of flexible, general learning algorithms to real-world robotics applications is often limited by their poor data-efficiency. To address the challenge, domains with more than one dominant task of interest encourage the sharing of information across tasks to limit required experiment time. To this end, we investigate compositional inductive biases in the form of hierarchical policies as a mechanism for knowledge transfer across tasks in reinforcement learning (RL). We demonstrate that this type of hierarchy enables positive transfer while mitigating negative interference. Furthermore, we demonstrate the benefits of additional incentives to efficiently decompose task solutions. Our experiments show that these incentives are naturally given in multitask learning and can be easily introduced for single objectives. We design an RL algorithm that enables stable and fast learning of structured policies and the effective reuse of both behavior components and transition data across tasks in an off-policy setting. Finally, we evaluate our algorithm in simulated environments as well as physical robot experiments and demonstrate substantial improvements in data-efficiency over competitive baselines.
reject
This paper is concerned with improving data-efficiency in multitask reinforcement learning problems. This is achieved by taking a hierarchical approach, and learning commonalities across tasks for reuse. The authors present an off-policy actor-critic algorithm to learn and reuse these hierarchical policies. This is an interesting and promising paper, particularly with the ability to work with robots. The reviewers did however note issues with the novelty and making the contributions clear. Additionally, it was felt that the results proved the benefits of hierarchy rather than this approach, and that further comparisons to other approaches are required. As such, this paper is a weak reject at this point.
train
[ "Bylosox5ir", "Byg9Co7woH", "r1lZcsQwoS", "BJl2vi7DsB", "B1x4J2XDsB", "BkxMSq-nKr", "H1ezMRHRtS", "HJxjYHATtr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your clarifications\n\nThese comments have helped clear up my understanding of some important details.", "Thank you very much for the detailed feedback. We’re glad that the complexity of tasks and real world experiments are recognised and worked to address open questions and clarify contributions a...
[ -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "BJl2vi7DsB", "H1ezMRHRtS", "BJl2vi7DsB", "BkxMSq-nKr", "HJxjYHATtr", "iclr_2020_H1lTRJBtwB", "iclr_2020_H1lTRJBtwB", "iclr_2020_H1lTRJBtwB" ]
iclr_2020_HJgRCyHFDr
On Weight-Sharing and Bilevel Optimization in Architecture Search
Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search. However, its success is poorly understood and often found to be surprising. We argue that, rather than just being an optimization trick, the weight-sharing approach is induced by the relaxation of a structured hypothesis space, and introduces new algorithmic and theoretical challenges as well as applications beyond neural architecture search. Algorithmically, we show how the geometry of ERM for weight-sharing requires greater care when designing gradient-based minimization methods and apply tools from non-convex non-Euclidean optimization to give general-purpose algorithms that adapt to the underlying structure. We further analyze the learning-theoretic behavior of the bilevel optimization solved by practical weight-sharing methods. Next, using kernel configuration and NLP feature selection as case studies, we demonstrate how weight-sharing applies to the architecture search generalization of NAS and effectively optimizes the resulting bilevel objective. Finally, we use our optimization analysis to develop a simple exponentiated gradient method for NAS that aligns with the underlying optimization geometry and matches state-of-the-art approaches on CIFAR-10.
reject
Since there were only two official reviews submitted, I reviewed the paper to form a third viewpoint. I agree with reviewer 2 on the following points, which support rejection of the paper: 1) Only CIFAR is evaluated without Penn Treebank; 2) The "faster convergence" is not empirically justified by better final accuracy with same amount of search cost; and 3) The advantage of the proposed ACSA over SBMD is not clearly demonstrated in the paper. The scores of the two official reviews are insufficient for acceptance, and an additional review did not overturn this view.
train
[ "Syx3ZB43oH", "BygN0VE2jH", "rJevgbYIFH", "B1lBMGw2KB" ]
[ "author", "author", "official_reviewer", "official_reviewer" ]
[ "Response: Thank you for your comments. We hope to address your issues below:\n\n1) Novelty and relevance of SBMD and ASCA to NAS:\n- Novelty: We respectfully disagree with your comment. In fact, our work is the first to introduce ASCA and it is *not* an existing generic algorithm.\n- Beta parameter: The beta para...
[ -1, -1, 3, 3 ]
[ -1, -1, 1, 1 ]
[ "rJevgbYIFH", "B1lBMGw2KB", "iclr_2020_HJgRCyHFDr", "iclr_2020_HJgRCyHFDr" ]
iclr_2020_H1gy1erYDH
CaptainGAN: Navigate Through Embedding Space For Better Text Generation
Score-function-based text generation approaches such as REINFORCE, in general, suffer from high computational complexity and training instability problems. This is mainly due to the non-differentiable nature of the discrete space sampling and thus these methods have to treat the discriminator as a reward function and ignore the gradient information. In this paper, we propose a novel approach, CaptainGAN, which adopts the straight-through gradient estimator and introduces a "re-centered" gradient estimation technique to steer the generator toward better text tokens through the embedding space. Our method is stable to train and converges quickly without maximum likelihood pre-training. On multiple metrics of text quality and diversity, our method outperforms existing GAN-based methods on natural language generation.
reject
This paper proposes a method to train generative adversarial nets for text generation. The paper proposes to address the challenge of discrete sequences using straight-through and gradient centering. The reviewers found that the results on COCO Image Captions and EMNLP 2017 News were interesting. However, this paper is borderline because it does not sufficiently motivate one of its key contributions: the gradient centering. The paper establishes that it provides an improvement in ablation, but more in-depth analysis would significantly improve the paper. I strongly encourage the authors to resubmit the paper once this has been addressed.
train
[ "B1elMaV2oS", "rylorjQ2jr", "S1gEyFXnjH", "rygzfExniB", "H1lbVbx2iH", "HkeF9zh9jS", "HJxoXOKqoB", "HkxM_SnKjr", "BkxpntgtoB", "BylGeUlKiS", "rygN2IGOiB", "S1e_Gax_sS", "BJxaPddyjB", "HJgZb4nntr", "HkxIxHwaFB", "rkl9nFTF9H" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Reply to the first question: Yes.\n\nThe main purpose of constraining Lipschitz constant is to make the gradient informative even if the supports of real samples and generated samples are completely disjoint (or, make the loss surface between two supports smooth). In this case, the discriminator can still be confi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 4 ]
[ "S1gEyFXnjH", "iclr_2020_H1gy1erYDH", "H1lbVbx2iH", "HkxM_SnKjr", "rkl9nFTF9H", "HJgZb4nntr", "HJgZb4nntr", "BkxpntgtoB", "HJgZb4nntr", "rkl9nFTF9H", "rkl9nFTF9H", "BJxaPddyjB", "iclr_2020_H1gy1erYDH", "iclr_2020_H1gy1erYDH", "iclr_2020_H1gy1erYDH", "iclr_2020_H1gy1erYDH" ]
iclr_2020_Bklg1grtDr
Neural Design of Contests and All-Pay Auctions using Multi-Agent Simulation
We propose a multi-agent learning approach for designing crowdsourcing contests and all-pay auctions. Prizes in contests incentivise contestants to expend effort on their entries, with different prize allocations resulting in different incentives and bidding behaviors. In contrast to auctions designed manually by economists, our method searches the possible design space using a simulation of the multi-agent learning process, and can thus handle settings where a game-theoretic equilibrium analysis is not tractable. Our method simulates agent learning in contests and evaluates the utility of the resulting outcome for the auctioneer. Given a large contest design space, we assess through simulation many possible contest designs within the space, and fit a neural network to predict outcomes for previously untested contest designs. Finally, we apply mirror descent to optimize the design so as to achieve more desirable outcomes. Our empirical analysis shows our approach closely matches the optimal outcomes in settings where the equilibrium is known, and can produce high quality designs in settings where the equilibrium strategies are not solvable analytically.
reject
This paper demonstrates a framework for optimizing designs in auction/contest problems. The approach relies on considering a multi-agent learning process and then simulating it. To a large degree there is agreement among reviewers that this approach is sensible and sound; however, it lacks substantial novelty. The authors provided a rebuttal which clarified the aspects that they consider novel; however, the reviewers remained mostly unconvinced. Furthermore, it would help if the improvement over past approaches were demonstrated in a more convincing way, for example with larger-scope experiments that also involve richer analysis.
train
[ "HkegGTPZcS", "r1g0zMbRKS", "BJg_1FfTtS", "ryeXiUjdsS", "S1lvK8j_jH", "HyxRIUouoH", "SJgLHLjOjr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "After reading the rebuttal, I increased my score to weak accept, since it addressed my concern.\n----------------------------------------\nSummary\nThis paper presents a general machine learning method for contest / auction problems. The underlying idea is to collect data pairs (i.e., [design, utility]), fit a mod...
[ 6, 3, 3, -1, -1, -1, -1 ]
[ 1, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2020_Bklg1grtDr", "iclr_2020_Bklg1grtDr", "iclr_2020_Bklg1grtDr", "BJg_1FfTtS", "r1g0zMbRKS", "SJgLHLjOjr", "HkegGTPZcS" ]
iclr_2020_r1glygHtDB
A multi-task U-net for segmentation with lazy labels
The need for labour-intensive pixel-wise annotation is a major limitation of many fully supervised learning methods for image segmentation. In this paper, we propose a deep convolutional neural network for multi-class segmentation that circumvents this problem by being trainable on coarse data labels combined with only a very small number of images with pixel-wise annotations. We call this new labelling strategy ‘lazy’ labels. Image segmentation is then stratified into three connected tasks: rough detection of class instances, separation of wrongly connected objects without a clear boundary, and pixel-wise segmentation to find the accurate boundaries of each object. These problems are integrated into a multi-task learning framework and the model is trained end-to-end in a semi-supervised fashion. The method is demonstrated on two segmentation datasets, comprising food microscopy images and histology images of tissues, respectively. We show that the model gives accurate segmentation results even if exact boundary labels are missing for a majority of the annotated data. This allows more flexibility and efficiency for training deep neural networks that are data-hungry in a practical setting where manual annotation is expensive, by collecting more lazy (rough) annotations than precisely segmented images.
reject
The paper proposes an architecture for semantic instance segmentation learnable from coarse annotations and evaluates it on two microscopy image datasets, demonstrating its advantage over the baseline. While the reviewers appreciate the details of the architecture, they note the lack of evaluation on any of the popular datasets and the lack of comparisons with baselines that would be closer to the state of the art. The authors do not address this criticism convincingly. It is not clear why, e.g., the Cityscapes or VOC Pascal datasets, which both have reasonably accurate annotations, cannot be used for the validation of the idea. If the focus is on the precision of the result near the boundaries, then one can always report the error near boundaries (this is a standard thing to do). Note that the performance of the baseline models is far from saturated near boundaries (i.e. the errors are larger than annotation mistakes). At this stage, the paper lacks convincing evaluation and comparison with prior art. Given that this is first and foremost an application paper, lacking some very novel ideas (as pointed out by e.g. Rev1), better evaluation is needed for acceptance.
test
[ "S1eydI-nor", "B1g4ENhV9r", "Bket8AJcoS", "r1xY-m3VoH", "H1garQn4sS", "rygpIbp4or", "ryguveaEsH", "SkxEis2VoS", "Hyg3vLW9Fr", "HkgaxjChYH" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you very much. We really appreciate the further comments and suggestions. \n\nWe agree that it is possible to make the mainstream datasets smaller in order to examine the weak supervision methods’ performance in the scenario of less training data. In this context, a performance loss is to be expected with al...
[ -1, 6, -1, -1, -1, -1, -1, -1, 6, 1 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ "Bket8AJcoS", "iclr_2020_r1glygHtDB", "rygpIbp4or", "HkgaxjChYH", "HkgaxjChYH", "B1g4ENhV9r", "B1g4ENhV9r", "Hyg3vLW9Fr", "iclr_2020_r1glygHtDB", "iclr_2020_r1glygHtDB" ]
iclr_2020_SJlWyerFPS
DeepXML: Scalable & Accurate Deep Extreme Classification for Matching User Queries to Advertiser Bid Phrases
The objective in deep extreme multi-label learning is to jointly learn feature representations and classifiers to automatically tag data points with the most relevant subset of labels from an extremely large label set. Unfortunately, state-of-the-art deep extreme classifiers are either not scalable or inaccurate for short text documents. This paper develops the DeepXML algorithm which addresses both limitations by introducing a novel architecture that splits training of head and tail labels. DeepXML increases accuracy by (a) learning word embeddings on head labels and transferring them through a novel residual connection to data impoverished tail labels; (b) increasing the amount of negative training data available by extending state-of-the-art negative sub-sampling techniques; and (c) re-ranking the set of predicted labels to eliminate the hardest negatives for the original classifier. All of these contributions are implemented efficiently by extending the highly scalable Slice algorithm for pretrained embeddings to learn the proposed DeepXML architecture. As a result, DeepXML could efficiently scale to problems involving millions of labels that were beyond the pale of state-of-the-art deep extreme classifiers as it could be more than 10x faster at training than XML-CNN and AttentionXML. At the same time, DeepXML was also empirically determined to be up to 19% more accurate than leading techniques for matching search engine queries to advertiser bid phrases.
reject
The paper proposes a new method for extreme multi-label classification. However, the paper only combines some well-known tricks, so the technical contributions are too limited. There are also many problems in the experiments, such as reproducibility, the scale of the datasets, and the results on well-known extreme classification datasets. The authors are encouraged to consider the reviewers' comments when revising the paper.
train
[ "HyxA3eVAYr", "B1gHAeImsH", "HyebKkIXiH", "ryg8yS8moS", "BylXPX8XjS", "S1e5yiS7jr", "SylqCgSG5H", "BJe2vF789S" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents a deep learning method for extreme classification and apply it to the application of matching user queries to bid phrases. The main idea is to learn the deep models separately for head and tail labels. Since there is abundant training data for the head labels and transfer the learnt word embeddi...
[ 6, -1, -1, -1, -1, -1, 3, 6 ]
[ 5, -1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_SJlWyerFPS", "HyebKkIXiH", "HyxA3eVAYr", "BylXPX8XjS", "SylqCgSG5H", "BJe2vF789S", "iclr_2020_SJlWyerFPS", "iclr_2020_SJlWyerFPS" ]
iclr_2020_SklM1xStPB
Copy That! Editing Sequences by Copying Spans
Neural sequence-to-sequence models are finding increasing use in editing of documents, for example in correcting a text document or repairing source code. In this paper, we argue that existing seq2seq models (with a facility to copy single tokens) are not a natural fit for such tasks, as they have to explicitly copy each unchanged token. We present an extension of seq2seq models capable of copying entire spans of the input to the output in one step, greatly reducing the number of decisions required during inference. This extension means that there are now many ways of generating the same output, which we handle by deriving a new objective for training and a variation of beam search for inference that explicitly handle this problem. In our experiments on a range of editing tasks of natural language and source code, we show that our new model consistently outperforms simpler baselines.
reject
This paper proposes an addition to seq2seq models to allow the model to copy spans of tokens of arbitrary length in one step. The authors argue that this method is useful in editing applications where long spans of the output sequence will be exact copies of the input. Reviewers agreed that the problem is interesting and the solution technically sound. However, during the discussion phase there were concerns that the method was too incremental to warrant publication at ICLR. The work would be strengthened with a more thorough discussion of related work and additional experiments comparing with the relevant baselines as suggested by Reviewer 2.
train
[ "SkgpBHUKoS", "BkeGuNIKsB", "rJefKqf7jS", "HJgZV5fmiB", "Bkgo3FzXor", "rJxLOFzmjB", "rklla_W3KH", "H1xT0KVecH", "BJxj26u45S" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We retrained our model without using our new marginalized objective (Eq. 2), and instead forcing the model to copy the longest possible span at each timestep, as done in Zhou et al. (2018). We find that the performance of the model significantly worsens (see updated Table 1; “force copy longest”). We believe that ...
[ -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "Bkgo3FzXor", "rJxLOFzmjB", "iclr_2020_SklM1xStPB", "rklla_W3KH", "H1xT0KVecH", "BJxj26u45S", "iclr_2020_SklM1xStPB", "iclr_2020_SklM1xStPB", "iclr_2020_SklM1xStPB" ]
iclr_2020_BkgGJlBFPS
Unsupervised Hierarchical Graph Representation Learning with Variational Bayes
Hierarchical graph representation learning is an emerging subject owing to the increasingly popular adoption of graph neural networks in machine learning and applications. Loosely speaking, work under this umbrella falls into two categories: (a) use a predefined graph hierarchy to perform pooling; and (b) learn the hierarchy for a given graph through differentiable parameterization of the coarsening process. These approaches are supervised; a predictive task with ground-truth labels is used to drive the learning. In this work, we propose an unsupervised approach, BayesPool, with the use of variational Bayes. It produces graph representations given a predefined hierarchy. Rather than relying on labels, the training signal comes from the evidence lower bound of encoding a graph and decoding the subsequent one in the hierarchy. Node features are treated as latent in this variational machinery, so that they are produced as a byproduct and are used in downstream tasks. We demonstrate a comprehensive set of experiments to show the usefulness of the learned representation in the context of graph classification.
reject
The paper presents an unsupervised method for graph representation, building upon Loukas' method for generating a sequence of gradually coarsened graphs. The contribution is an "encoder-decoder" architecture trained by variational inference, where the encoder produces the embedding of the nodes in the next graph of the sequence, and the decoder produces the structure of the next graph. One important merit of the approach is that this unsupervised representation can be used effectively for supervised learning, with results quite competitive to the state of the art. However the reviewers were unconvinced by the novelty and positioning of the approach. The point of whether the approach should be viewed as variational Bayesian, or simply variational approximation was much debated between the reviewers and the authors. The area chair encourages the authors to pursue this very promising research, and to clarify the paper; perhaps the use of "encoder-decoder" generated too much misunderstanding. Another graph NN paper you might be interested in is "Edge Contraction Pooling for Graph NNs", by Frederik Diehl.
train
[ "B1xrAKUnoH", "ByxhQGLhoH", "Byx7A-82jS", "r1ekj-InsS", "BJezGUojtr", "SJg6jDF6tr", "SJljZmHL9S" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Again, I do not see the point in calling variational approximation variational Bayes is there is no prior on the parameters. You seem to be confusing variational approximation in e.g. EM where a complex distribution is replaced by a factored approximation with variational Bayes where the posterior distribution of ...
[ -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, 4, 3, 1 ]
[ "Byx7A-82jS", "BJezGUojtr", "SJg6jDF6tr", "SJljZmHL9S", "iclr_2020_BkgGJlBFPS", "iclr_2020_BkgGJlBFPS", "iclr_2020_BkgGJlBFPS" ]
iclr_2020_B1eXygBFPH
Attacking Graph Convolutional Networks via Rewiring
Graph Neural Networks (GNNs) have boosted the performance of many graph-related tasks such as node classification and graph classification. Recent research shows that graph neural networks are vulnerable to adversarial attacks, which deliberately add carefully created, unnoticeable perturbations to the graph structure. The perturbation is usually created by adding/deleting a few edges, which might be noticeable even when the number of edges modified is small. In this paper, we propose a graph rewiring operation which affects the graph in a less noticeable way compared to adding/deleting edges. We then use reinforcement learning to learn the attack strategy based on the proposed rewiring operation. Experiments on real-world graphs demonstrate the effectiveness of the proposed framework. To understand the proposed framework, we further analyze how its generated perturbation to the graph structure affects the output of the target model.
reject
This paper proposes a method for attacking graph convolutional networks, where a graph rewiring operation is introduced that affects the graph in a less noticeable way compared to adding/deleting edges. Reinforcement learning is applied to learn the attack strategy based on the proposed rewiring operation. The paper should be improved by acknowledging/comparing with previous work in a more proper way. In particular, I view the major innovation as being the rewiring operation and its analysis. The reinforcement learning formulation is similar to Dai et al. (2018). This connection should be made clearer in the technical part. One issue that needs to be discussed is that if you directly consider the triples as actions, the space will be huge. Do you apply some hierarchical treatment as suggested by Dai et al. (2018)? The review comments should also be considered to further improve the paper.
train
[ "S1ebgQN2sB", "SklXmeN3iB", "BJgLmkVhjH", "r1lNAbNnsB", "S1gxPZ4hsr", "Hkl2-ampKB", "rylgT8lNqB", "S1gdft7NcS", "BJlUEvc6KS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the valuable comments and suggestions. \n \nWe address the concerns from the reviewer as follows:\n\nQ1: In figure 3, the authors also show that the proposed method can make less noticeable changes on eigenvalue. But are these changes still noticeable compared to original one? Please also show these ...
[ -1, -1, -1, -1, -1, 6, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Hkl2-ampKB", "rylgT8lNqB", "S1gdft7NcS", "S1gxPZ4hsr", "BJlUEvc6KS", "iclr_2020_B1eXygBFPH", "iclr_2020_B1eXygBFPH", "iclr_2020_B1eXygBFPH", "iclr_2020_B1eXygBFPH" ]
iclr_2020_rylNJlStwB
Learning to Infer User Interface Attributes from Images
We present a new approach that helps developers automate the process of user interface implementation. Concretely, given an input image created by a designer (e.g., using a vector graphics editor), we learn to infer its implementation which, when rendered (e.g., on the Android platform), looks visually the same as the input image. To achieve this, we take a black-box rendering engine and a set of attributes it supports (e.g., colors, border radius, shadow or text properties), use it to generate a suitable synthetic training dataset, and then train specialized neural models to predict each of the attribute values. To improve pixel-level accuracy, we also use imitation learning to train a neural policy that refines the predicted attribute values by learning to compute the similarity of the original and rendered images in their attribute space, rather than based on the difference of pixel values.
reject
The majority of reviewers suggest rejection, pointing to concerns about design and novelty. Perhaps the most concerning part to me was the consistent lack of expertise in the applied area. This could be a random bad-luck draw of reviewers, but more likely the paper is not positioned well in the ICLR literature. This means that either it was submitted to the wrong venue, or that the exposition needs to be improved so that the paper is approachable by a larger part of the ICLR community. Since this is not currently the case, I suggest that the authors work on a revision.
train
[ "BJgnNBH_sH", "ryetcfr_jS", "BJxQKt8LiH", "Syl3EvQNor", "rJlTAUXEjH", "r1gsYbQVoH", "HyendkXNjr", "rke2S9fVsS", "HkeXm6OKKH", "SJlsL5NoFB", "Bkgi2uRR9B" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Question: My concern has been the cost/benefit ratio: Siamese network is significantly more complicated than PixelSim (or doing nothing) but only brings marginal improvements over best prediction. We may need more evidence to show it's necessity. For example, if somehow the experiments on other UI elements showed ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 1 ]
[ "BJxQKt8LiH", "iclr_2020_rylNJlStwB", "HkeXm6OKKH", "HkeXm6OKKH", "HkeXm6OKKH", "HkeXm6OKKH", "Bkgi2uRR9B", "SJlsL5NoFB", "iclr_2020_rylNJlStwB", "iclr_2020_rylNJlStwB", "iclr_2020_rylNJlStwB" ]
iclr_2020_BJeVklHtPr
Batch Normalization has Multiple Benefits: An Empirical Study on Residual Networks
Many state-of-the-art models rely on two architectural innovations: skip connections and batch normalization. However, batch normalization has a number of limitations. It breaks the independence between training examples within a batch, performs poorly when the batch size is too small, and significantly increases the cost of computing a parameter update in some models. This work identifies two practical benefits of batch normalization. First, it improves the final test accuracy. Second, it enables efficient training with larger batches and larger learning rates. However, we demonstrate that the increase in the largest stable learning rate does not explain why the final test accuracy is increased under a finite epoch budget. Furthermore, we show that the gap in test accuracy between residual networks with and without batch normalization can be dramatically reduced by improving the initialization scheme. We introduce “ZeroInit”, which trains a 1000-layer deep Wide-ResNet without normalization to 94.3% test accuracy on CIFAR-10 in 200 epochs at batch size 64. This initialization scheme outperforms batch normalization when the batch size is very small, and is competitive with batch normalization for batch sizes that are not too large. We also show that ZeroInit matches the validation accuracy of batch normalization when training ResNet-50-V2 on ImageNet at batch size 1024.
reject
The paper is rejected based on unanimous reviews.
test
[ "H1emDY7njr", "r1egVKj-iH", "rJePYYZWjr", "r1e2BWceoS", "S1e5lyqxor", "r1e_OsV6tS", "BylOW51AKr", "SkguBLi29r", "BkeN40YhKH", "H1xVT-sedB" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "We have performed the additional experiments requested by the reviewer. Please find below the comparisons between batch normalization, Fixup and ZeroInit, both with and without dropout. The experiments presented are for ImageNet classification with ResNet50-V2. When using dropout, we use a drop probability of 0.2 ...
[ -1, -1, 3, -1, -1, 1, 3, -1, -1, -1 ]
[ -1, -1, 3, -1, -1, 4, 5, -1, -1, -1 ]
[ "r1e2BWceoS", "rJePYYZWjr", "iclr_2020_BJeVklHtPr", "BylOW51AKr", "r1e_OsV6tS", "iclr_2020_BJeVklHtPr", "iclr_2020_BJeVklHtPr", "BkeN40YhKH", "H1xVT-sedB", "iclr_2020_BJeVklHtPr" ]
iclr_2020_ByxHJeBYDB
Forecasting Deep Learning Dynamics with Applications to Hyperparameter Tuning
Well-performing deep learning models have enormous impact, but getting them to perform well is complicated, as the model architecture must be chosen and a number of hyperparameters tuned. This requires experimentation, which is time-consuming and costly. We propose to address the problem of hyperparameter tuning by learning to forecast the training behaviour of deep learning architectures. Concretely, we introduce a forecasting model that, given a hyperparameter schedule (e.g., learning rate, weight decay) and a history of training observations (such as loss and accuracy), predicts how the training will continue. Naturally, forecasting is much faster and less expensive than running actual deep learning experiments. The main question we study is whether the forecasting model is good enough to be of use - can it indeed replace real experiments? We answer this affirmatively in two ways. For one, we show that the forecasted curves are close to real ones. On the practical side, we apply our forecaster to learn hyperparameter tuning policies. We experiment on a version of ResNet on CIFAR10 and on Transformer in a language modeling task. The policies learned using our forecaster match or exceed the ones learned in real experiments and in one case even the default schedules discovered by researchers. We study the learning rate schedules created using the forecaster and find that they are not only effective, but also lead to interesting insights.
reject
This paper trains a transformer to extrapolate learning curves, and uses this in a model-based RL framework to automatically tune hyperparameters. This might be a good approach, but it's hard to know because the experiments don't include direct comparisons against existing hyperparameter optimization/adaptation techniques (either the ones based on extrapolating training curves, or standard ones like BayesOpt or PBT). The presentation is also fairly informal, and it's not clear if a reader would be able to reproduce the results. Overall, I think there's significant cleanup and additional experiments needed before publication in ICLR.
test
[ "rylURM6jjB", "rJla3R_jjr", "rkgeDR_ojB", "HJlRC3djiB", "ByghS3dssB", "HkgABKrXir", "SyeR04XiuS", "Hyxs5tB0FS", "rye5LcLAYH" ]
[ "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for reading my review and responding to my comments.", "Thank you for mentioning this connection. We have added it to the updated version of our work.", "Thank you for the insightful review.\n\nWe updated the paper with better results and more tasks. We show that our method outperforms the human base...
[ -1, -1, -1, -1, -1, -1, 3, 6, 1 ]
[ -1, -1, -1, -1, -1, -1, 3, 1, 1 ]
[ "HJlRC3djiB", "HkgABKrXir", "SyeR04XiuS", "Hyxs5tB0FS", "rye5LcLAYH", "iclr_2020_ByxHJeBYDB", "iclr_2020_ByxHJeBYDB", "iclr_2020_ByxHJeBYDB", "iclr_2020_ByxHJeBYDB" ]
iclr_2020_r1eU1gHFvH
Under what circumstances do local codes emerge in feed-forward neural networks
Localist coding schemes are more easily interpretable than distributed schemes but are generally believed to be biologically implausible. Recent results have found highly selective units and object detectors in NNs that are indicative of local codes (LCs). Here we undertake a constructionist study on feed-forward NNs and find LCs emerging in response to invariant features, and this finding is robust until the invariant feature is perturbed by 40%. Decreasing the amount of input data, increasing the relative weight of the invariant features and large values of dropout all increase the number of LCs. Longer training times increase the number of LCs and the turning point of the LC-epoch curve correlates well with the point at which NNs reach 90-100% on both test and training accuracy. Pseudo-deep networks (2 hidden layers), which have many LCs, lose them when common aspects of deep-NN research are applied (large training data, ReLU activations, early stopping on training accuracy and softmax), suggesting that LCs may not be found in deep NNs. Switching to more biologically feasible constraints (sigmoidal activation functions, longer training times, dropout, activation noise) increases the number of LCs. If LCs are not found in the feed-forward classification layers of modern deep CNNs, these data suggest this could be caused either by a lack of (moderately) invariant features being passed to the fully connected layers or by the choice of training conditions and architecture. Should the interpretability and resilience to noise of LCs be required, this work suggests how to tune a NN so they emerge.
reject
This paper studies when hidden units provide local codes by analyzing the hidden units of trained fully connected classification networks under various architectures and regularizers. The reviewers and the AC believe that the paper in its current form is not ready for acceptance to ICLR-2020. Further work and experiments are needed in order to identify an explanation for the emergence of local codes. This would significantly strengthen the paper.
val
[ "Syet1-9jtH", "BJeh3af0FH", "ByeN3hTwcB", "rygJbaI6cS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I have a lot of questions about the data used in the experiments. They are created according to the method explained in “Data design” (p.2). It is also summarized in the last paragraph of the first section as follows: ”there are 1/10 input bits that are always 1 for each class and these are the invariant bits, the...
[ 1, 3, 3, 3 ]
[ 5, 1, 3, 1 ]
[ "iclr_2020_r1eU1gHFvH", "iclr_2020_r1eU1gHFvH", "iclr_2020_r1eU1gHFvH", "iclr_2020_r1eU1gHFvH" ]
iclr_2020_rJx8ylSKvr
Leveraging Entanglement Entropy for Deep Understanding of Attention Matrix in Text Matching
The formal understanding of deep learning has made great progress based on quantum many-body physics. For example, the entanglement entropy in quantum many-body systems can interpret the inductive bias of a neural network and then guide the design of network structure and parameters for certain tasks. However, there are two unsolved problems in the current study of entanglement entropy, which limit its application potential. First, the theoretical benefits of entanglement entropy were only investigated in the representation of a single object (e.g., an image or a sentence), but have not been well studied in the matching of two objects (e.g., question-answering pairs). Second, the entanglement entropy cannot be quantitatively calculated because of the exponentially increasing dimension of the matching matrix. In this paper, we attempt to address these two problems by investigating the fundamental connections between the entanglement entropy and the attention matrix. We prove that by a mapping (via the trace operator) on the high-dimensional matching matrix, a low-dimensional attention matrix can be derived. Based on such an attention matrix, we can provide a feasible solution to the entanglement entropy that describes the correlation between the two objects in matching tasks. Inspired by the theoretical property of the entanglement entropy, we can design the network architecture adaptively in a typical text matching task, i.e., the question-answering task.
reject
This paper advocates for the application of entanglement entropy from quantum physics to understand and improve the inductive bias of neural network architectures for question answering tasks. All reviewers found the current presentation of the method difficult to understand, and as a result it is difficult to determine what exactly the contribution of this work is. One suggestion for improving the manuscript is to minimize the references to quantum entanglement (where currently it is asserted without justification that entanglement entropy is a relevant concept for modeling question-answering tasks). Instead, presenting the method as applications of tensor decompositions for parameterizing neural network architectures would make the work more accessible to a machine learning audience, and help clarify the contribution with respect to related works [1]. 1. http://papers.nips.cc/paper/8495-a-tensorized-transformer-for-language-modeling.pdf
train
[ "ByefA3LHsH", "SJe-R5ehir", "S1glscrHsH", "ryeIgj5pFB", "HJe8f0JRKr", "SJlrd6mUcS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the time and feedback.\n1 We would like to in the experiment, we did not compute an average evaluation result on two sub-datasets. Instead, we combine the QA-pair’s matching score files from the two sub-datasets into one file and calculate the MAP and MRR of the entire dataset based on th...
[ -1, -1, -1, 1, 1, 3 ]
[ -1, -1, -1, 1, 3, 3 ]
[ "ryeIgj5pFB", "HJe8f0JRKr", "SJlrd6mUcS", "iclr_2020_rJx8ylSKvr", "iclr_2020_rJx8ylSKvr", "iclr_2020_rJx8ylSKvr" ]
iclr_2020_S1et1lrtwr
Unsupervised Meta-Learning for Reinforcement Learning
Meta-learning algorithms learn to acquire new tasks more quickly from past experience. In the context of reinforcement learning, meta-learning algorithms can acquire reinforcement learning procedures to solve new problems more efficiently by utilizing experience from prior tasks. The performance of meta-learning algorithms depends on the tasks available for meta-training: in the same way that supervised learning generalizes best to test points drawn from the same distribution as the training points, meta-learning methods generalize best to tasks from the same distribution as the meta-training tasks. In effect, meta-reinforcement learning offloads the design burden from algorithm design to task design. If we can automate the process of task design as well, we can devise a meta-learning algorithm that is truly automated. In this work, we take a step in this direction, proposing a family of unsupervised meta-learning algorithms for reinforcement learning. We motivate and describe a general recipe for unsupervised meta-reinforcement learning, and present an instantiation of this approach. Our conceptual and theoretical contributions consist of formulating the unsupervised meta-reinforcement learning problem and describing how task proposals based on mutual information can in principle be used to train optimal meta-learners. Our experimental results indicate that unsupervised meta-reinforcement learning effectively acquires accelerated reinforcement learning procedures without the need for manual task design and significantly exceeds the performance of learning from scratch.
reject
The paper discusses the relevant topic of unsupervised meta-learning in an RL setting. The topic is an interesting one, but the writing and motivation could be much clearer. I advise the authors to make a few more iterations on the paper taking into account the reviewers' comments and then resubmit to a different venue.
train
[ "ryeOpwn_jB", "rkgbwD3_jB", "ryellD2OiH", "BylsRL2OoB", "rkgUHZwTKS", "S1lqr4oRFr", "rkeDSjvI5r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their feedback and suggestions! We have added clarifications to the paper based on the suggestions and questions (refer to Section 3.4), as well as added additional comparisons (Section 4.2, Fig 3). Please find detailed comments below: \n\n“Why trajectory matching is considered as more ge...
[ -1, -1, -1, -1, 3, 1, 3 ]
[ -1, -1, -1, -1, 3, 5, 4 ]
[ "rkgUHZwTKS", "S1lqr4oRFr", "BylsRL2OoB", "rkeDSjvI5r", "iclr_2020_S1et1lrtwr", "iclr_2020_S1et1lrtwr", "iclr_2020_S1et1lrtwr" ]
iclr_2020_ByeqyxBKvS
Quantum Semi-Supervised Kernel Learning
Quantum machine learning methods have the potential to facilitate learning using extremely large datasets. While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors than to obtain the corresponding labels. One of the approaches for addressing this issue is to use semi-supervised learning, which leverages not only the labeled samples, but also unlabeled feature vectors. Here, we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines. The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss, while maintaining the same quantum speedup as the Quantum LS-SVM.
reject
Three reviewers have assessed this paper and they have scored it 6/6/6 after rebuttal, with one reviewer hesitating about the appropriateness of this submission to ML venues. The reviewers have raised a number of criticisms, such as the incremental nature of the paper (HHL and LMR algorithms) and the main contributions lying more within the field of quantum computing than ML. The paper was discussed with reviewers, the buddy AC and the chairs. On balance, it was concluded that this paper is marginally below the acceptance threshold. We encourage the authors to consider all of the criticism, improve the paper and resubmit to another venue, as there is some merit to the proposed idea.
train
[ "HJxO2en0tH", "HkxAav4nor", "rkehDQNhjS", "SJeQlM42iS", "rJedR_MxqS", "BJxnkesP5H" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to extend a quantum-computing based solution of least-squares support-vector-machine to include use of unlabeled samples. The formulation is analogous to the classical-computing case, in which semi-supervised learning introduces an additional term in the system of equations, which the authors ...
[ 6, -1, -1, -1, 6, 6 ]
[ 1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_ByeqyxBKvS", "HJxO2en0tH", "rJedR_MxqS", "BJxnkesP5H", "iclr_2020_ByeqyxBKvS", "iclr_2020_ByeqyxBKvS" ]
iclr_2020_B1esygHFwS
Detecting Change in Seasonal Pattern via Autoencoder and Temporal Regularization
The change-point detection problem consists of discovering abrupt property changes in the generation process of a time-series. Most state-of-the-art models optimize the power of a kernel two-sample test, with only a few assumptions on the distribution of the data. Unfortunately, because they presume the samples are distributed i.i.d., they are not able to use information about the seasonality of a time-series. In this paper, we present a novel approach, ATR-CSPD, which allows the detection of changes in the seasonal pattern of a time-series. Our method uses an autoencoder together with a temporal regularization to learn the pattern of each seasonal cycle. Using a low-dimensional representation of the seasonal patterns, it is possible to accurately and efficiently estimate the existence of a change point using a clustering algorithm. Through experiments on artificial and real-world data sets, we demonstrate the usefulness of the proposed method for several applications.
reject
The paper proposes ATR-CSPD, which learns a low-dimensional representation of the seasonal pattern, for detecting changes with clustering-based approaches. While ATR-CSPD is simple and intuitive, it lacks a novel methodological contribution. It is unclear how it differs from existing approaches. The evaluation and the writing could be improved significantly. In short, the paper is not ready for publication. We hope the reviews can help improve the paper for a strong submission in the future.
train
[ "B1lt_1QroB", "SkeXgdrT9S", "HyerFBLVtr", "Skgh0uOCKB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I am quite disappointed with the presentation and technical quality of the paper. \n\nThere are numerous grammatical errors that make the reading unpleasant. The mathematical notations are also inconsistent throughout different places in the paper.\n\nThe extensive literature of modelling time series with seasonal...
[ 1, 3, 1, 3 ]
[ 3, 4, 4, 3 ]
[ "iclr_2020_B1esygHFwS", "iclr_2020_B1esygHFwS", "iclr_2020_B1esygHFwS", "iclr_2020_B1esygHFwS" ]
iclr_2020_ryxnJlSKvr
SCELMo: Source Code Embeddings from Language Models
Continuous embeddings of tokens in computer programs have been used to support a variety of software development tools, including readability, code search, and program repair. Contextual embeddings are common in natural language processing but have not been previously applied in software engineering. We introduce a new set of deep contextualized word representations for computer programs based on language models. We train a set of embeddings using the ELMo (embeddings from language models) framework of Peters et al. (2018). We investigate whether these embeddings are effective when fine-tuned for the downstream task of bug detection. We show that even a low-dimensional embedding trained on a relatively small corpus of programs can improve a state-of-the-art machine learning system for bug detection.
reject
This paper improves DeepBugs by borrowing the NLP method ELMo to build new representations. The effectiveness of the embedding is investigated using the downstream task of bug detection. Two reviewers reject the paper for two main concerns: (1) the novelty of the paper is not strong enough for ICLR, as this paper mainly uses a standard context embedding technique from NLP; (2) the experimental results are not convincing enough and a more comprehensive evaluation is needed. Overall, the novelty of this paper does not meet the standard of ICLR.
train
[ "Ske3FZ93ir", "rkgtqN6ujS", "S1l5gN6Oor", "rJl-4yTOjH", "rke1ZgtatB", "SJgzrYrRFH", "Hyly6N2g5r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all reviewers for their feedback and insightful comments.\n\nWe would like to inform the reviewers that we have revised our submission to include a new section where we discuss whether the idea to add bug-introducing changes to a code dataset has practical usefulness for bug-finding. In the ...
[ -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2020_ryxnJlSKvr", "rke1ZgtatB", "SJgzrYrRFH", "Hyly6N2g5r", "iclr_2020_ryxnJlSKvr", "iclr_2020_ryxnJlSKvr", "iclr_2020_ryxnJlSKvr" ]
iclr_2020_Hkla1eHFvS
Efficient Exploration via State Marginal Matching
Reinforcement learning agents need to explore their unknown environments to solve the tasks given to them. The Bayes optimal solution to exploration is intractable for complex environments, and while several exploration methods have been proposed as approximations, it remains unclear what underlying objective is being optimized by existing exploration methods, or how they can be altered to incorporate prior knowledge about the task. Moreover, it is unclear how to acquire a single exploration strategy that will be useful for solving multiple downstream tasks. We address these shortcomings by learning a single exploration policy that can quickly solve a suite of downstream tasks in a multi-task setting, amortizing the cost of learning to explore. We recast exploration as a problem of State Marginal Matching (SMM), where we aim to learn a policy for which the state marginal distribution matches a given target state distribution, which can incorporate prior knowledge about the task. We optimize the objective by reducing it to a two-player, zero-sum game between a state density model and a parametric policy. Our theoretical analysis of this approach suggests that prior exploration methods do not learn a policy that does distribution matching, but acquire a replay buffer that performs distribution matching, an observation that potentially explains these prior methods' success in single-task settings. On both simulated and real-world tasks, we demonstrate that our algorithm explores faster and adapts more quickly than prior methods.
reject
The paper provides a nice approach to optimizing marginals to improve exploration for RL agents. The reviewers agree that its improvements w.r.t. the state of the art do not merit publication at ICLR. Furthermore, additional experimentation is needed for the paper to be complete.
train
[ "SJgrEg0sKr", "HkeC5QqjjH", "SJey_N93oB", "r1gUcfZnsH", "rJl4Cf9soB", "ryxOfbqioB", "Sye85e9jsS", "Ske-6qohFB", "BklTj6E6tB" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update: I thank the authors for their response and I think the added baselines and more in depth discussion of prior work have improved the paper. However, given the limited novelty of the technical contribution, I believe the experimental section should be further extended (i.e. add a wider variety of domains and...
[ 3, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_Hkla1eHFvS", "SJgrEg0sKr", "r1gUcfZnsH", "rJl4Cf9soB", "Ske-6qohFB", "BklTj6E6tB", "iclr_2020_Hkla1eHFvS", "iclr_2020_Hkla1eHFvS", "iclr_2020_Hkla1eHFvS" ]
iclr_2020_HyxTJxrtvr
Learning a Spatio-Temporal Embedding for Video Instance Segmentation
Understanding object motion is one of the core problems in computer vision. It requires segmenting and tracking objects over time. Significant progress has been made in instance segmentation, but such models cannot track objects, and more crucially, they are unable to reason in both 3D space and time. We propose a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation. Our model includes a temporal network that learns to model temporal context and motion, which is essential to produce smooth embeddings over time. Further, our model also estimates monocular depth, with a self-supervised loss, as the relative distance to an object effectively constrains where it can be next, ensuring a time-consistent embedding. Finally, we show that our model can accurately track and segment instances, even with occlusions and missed detections, advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset.
reject
This paper proposes a spatio-temporal embedding loss for video instance segmentation. The proposed model (1) learns a per-pixel embedding such that the embeddings of pixels from the same instance are closer than embeddings of pixels from other instances, and (2) learns depth in a self-supervised way using a photometric reconstruction loss which operates under the assumption of a moving camera and a static scene. The resulting loss is a weighted sum of these attraction, repulsion, regularisation and geometric view synthesis losses. The reviewers agree that the paper is well written and that the problem is well motivated. In particular, there is consensus that the 3D geometry and 2D instance representation should be considered jointly. However, due to the lack of technical novelty, the complexity of the final model, and the issues with the empirical validation of the proposed approach, we feel that the work is slightly below the acceptance bar.
train
[ "B1lvv-6Yor", "Hyexx-TFoH", "HkezDxpFiB", "SJe3pkTFjS", "SJlSrPVAKB", "rkl524DAKH", "rkxg1rP5KH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Many thanks for your careful review and helpful comments. Here’s our answer to the concerns you have raised:\n\n1a. \"I think the major argument I have is this method is lack of technical novelty, since it is straight forward to adopt the loss of Brabandere et.al 2017 to video cases for including pixels in the sa...
[ -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, 5, 1, 4 ]
[ "SJlSrPVAKB", "rkxg1rP5KH", "rkl524DAKH", "iclr_2020_HyxTJxrtvr", "iclr_2020_HyxTJxrtvr", "iclr_2020_HyxTJxrtvr", "iclr_2020_HyxTJxrtvr" ]
iclr_2020_rJxRJeStvB
Learning scalable and transferable multi-robot/machine sequential assignment planning via graph embedding
Can the success of reinforcement learning methods for simple combinatorial optimization problems be extended to multi-robot sequential assignment planning? In addition to the challenge of achieving near-optimal performance in large problems, transferability to an unseen number of robots and tasks is another key challenge for real-world applications. In this paper, we suggest a method that achieves the first success on both challenges for robot/machine scheduling problems. Our method comprises three components. First, we show that any robot scheduling problem can be expressed as a random probabilistic graphical model (PGM). We develop a mean-field inference method for random PGMs and use it for Q-function inference. Second, we show that transferability can be achieved by carefully designing a two-step sequential encoding of the problem state. Third, we resolve the computational scalability issue of fitted Q-iteration by suggesting a heuristic auction-based Q-iteration fitting method enabled by the transferability we achieved. We apply our method to discrete-time, discrete-space problems (Multi-Robot Reward Collection (MRRC)) and scalably achieve 97% optimality with transferability. This optimality is maintained under stochastic contexts. By extending our method to a continuous-time, continuous-space formulation, we claim ours is the first learning-based method with scalable performance for any type of multi-machine scheduling problem; our method scalably achieves performance comparable to popular metaheuristics on identical parallel machine scheduling (IPMS) problems.
reject
Unfortunately, the reviewers of the paper are all uncertain about their reviews, none of them being RL experts. Assessing the paper myself (not an RL expert, but with relevant experience), I find that the authors have addressed all of the reviewers' points thoroughly.
train
[ "rkgTSLN3jr", "SJx0hqNhjH", "rkxYp8N2oH", "rkgBtXVhFr", "BkxFKINy5S", "Hkg7ErgLcr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We are really glad that you are giving us some room to make your rating higher. Thanks to your comments, we found that the previous description of our RL problem reads how you described it. We wrote an entirely new introduction. We appreciate how much your helpful comment increased the readability of our paper. No...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 1, 1, 1 ]
[ "Hkg7ErgLcr", "rkgBtXVhFr", "BkxFKINy5S", "iclr_2020_rJxRJeStvB", "iclr_2020_rJxRJeStvB", "iclr_2020_rJxRJeStvB" ]
iclr_2020_HkgAJxrYwr
Attack-Resistant Federated Learning with Residual-based Reweighting
Federated learning has a variety of applications in multiple domains by utilizing private training data stored on different devices. However, the aggregation process in federated learning is highly vulnerable to adversarial attacks, so the global model may behave abnormally under attack. To tackle this challenge, we present a novel aggregation algorithm with residual-based reweighting to defend federated learning. Our aggregation algorithm combines repeated median regression with the reweighting scheme in iteratively reweighted least squares. Our experiments show that our aggregation algorithm outperforms alternative algorithms in the presence of label-flipping, backdoor, and Gaussian noise attacks. We also provide theoretical guarantees for our aggregation algorithm.
reject
The paper proposes an aggregation algorithm for federated learning that is robust against label-flipping, backdoor, and Gaussian noise attacks. The reviewers agree that the paper presents an interesting and novel method; however, they also agree that the theory was difficult to understand and that the success of the methodology may be highly dependent on design choices and difficult-to-tune hyperparameters.
train
[ "Bklb6D92sB", "Skxw--ijYB", "HJxmkyoaFr", "HkxDJvg1cS" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We want to thank the reviewers for their suggestions and comments! We have posted a revised version of the paper with several improvements based on the suggestions from the reviewers. \n\n1. We revised our proof in A.1 to include the whole passage as well as the exact reference to the previous work. \n2. We added ...
[ -1, 3, 6, 3 ]
[ -1, 3, 3, 1 ]
[ "iclr_2020_HkgAJxrYwr", "iclr_2020_HkgAJxrYwr", "iclr_2020_HkgAJxrYwr", "iclr_2020_HkgAJxrYwr" ]
iclr_2020_SJlJegHFvH
Address2vec: Generating vector embeddings for blockchain analytics
Bitcoin is a virtual coinage system that enables users to trade virtually free of a central trusted authority. All transactions on the Bitcoin blockchain are publicly available for viewing, yet as Bitcoin is built mainly for security, its original structure does not allow for direct analysis of address transactions. Existing analysis methods for the Bitcoin blockchain can be complicated, computationally expensive or inaccurate. We propose a computationally efficient model to analyze Bitcoin blockchain addresses and allow for their use with existing machine learning algorithms. We compare our approach against Multi Level Sequence Learners (MLSLs), one of the best performing models on Bitcoin address data.
reject
The paper proposes to analyze Bitcoin addresses using graph embeddings. The reviewers found that the paper was too incomplete for publication. Important information, such as a description of the datasets and metrics, was omitted.
train
[ "BJgjf53jOH", "H1ey-GxEFH", "Ske6w1375S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes to use an autoencoder, networkX, and node2Vec in succession to convert a Bitcoin transaction to a vector. This is then used to predict whether a Bitcoin address will become empty after a year. The results are better than flipping a coin, but worse than an existing baseline.\n\nGiven the apparent...
[ 1, 1, 1 ]
[ 1, 1, 5 ]
[ "iclr_2020_SJlJegHFvH", "iclr_2020_SJlJegHFvH", "iclr_2020_SJlJegHFvH" ]
iclr_2020_SJg1lxrYwS
PatchFormer: A neural architecture for self-supervised representation learning on images
Learning rich representations from predictive learning without labels has been a longstanding challenge in the field of machine learning. Generative pre-training has so far not been as successful as contrastive methods in modeling representations of raw images. In this paper, we propose a neural architecture for self-supervised representation learning on raw images called the PatchFormer, which learns to model spatial dependencies across patches in a raw image. Our method learns to model the conditional probability distribution of missing patches given the context of surrounding patches. We evaluate the utility of the learned representations by fine-tuning the pre-trained model on low-data-regime classification tasks. Specifically, we benchmark our model on semi-supervised ImageNet classification, which has recently become a popular benchmark for semi-supervised and self-supervised learning methods. Our model is able to achieve 30.3% and 65.5% top-1 accuracies when trained using only 1% and 10% of the labels on ImageNet, showing the promise of generative pre-training methods.
reject
The paper presents a generative approach for learning an image representation following a self-supervised scheme. The reviews state that the paper is premature for publication at ICLR 2020 for the following reasons: * the paper is unfinished (Rev#3); in particular, the description of the approach is hardly reproducible (Rev#1); * the evaluation is limited to ImageNet and needs to be strengthened (all reviewers); * the novelty needs to be better explained (Rev#1). It might be interesting to discuss the approach w.r.t. "Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles", Noroozi and Favaro. I recommend that the authors rewrite and better structure the paper (claim, state of the art, high-level overview of the approach, experimental setting, discussion of the results, discussion about the novelty and limitations of the approach).
train
[ "B1xXZpgaFB", "S1gbpSTe9B", "ryxGPZfzqS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The motivation of this paper is to use the idea of Transformer-based NLP models in image data, which is appreciated. However, this seems to be a far unfinished paper. The introduction part is well written. But, the method is not well described. It is very unclear how exactly the model is built. Moreover, the netw...
[ 1, 1, 1 ]
[ 3, 3, 4 ]
[ "iclr_2020_SJg1lxrYwS", "iclr_2020_SJg1lxrYwS", "iclr_2020_SJg1lxrYwS" ]
iclr_2020_SJlxglSFPB
Efficacy of Pixel-Level OOD Detection for Semantic Segmentation
The detection of out-of-distribution samples for image classification has been widely researched. Safety-critical applications, such as autonomous driving, would benefit from the ability to localise the unusual objects causing the image to be out of distribution. This paper adapts state-of-the-art methods for detecting out-of-distribution images for image classification to the new task of detecting out-of-distribution pixels, which can localise the unusual objects. It further experimentally compares the adapted methods on two new datasets derived from existing semantic segmentation datasets using PSPNet and DeeplabV3+ architectures, as well as proposing a new metric for the task. The evaluation shows that the performance ranking of the compared methods does not transfer to the new task and that every method performs significantly worse than its image-level counterpart.
reject
This paper studies the problem of out-of-distribution (OOD) detection for semantic segmentation. Reviewers and AC agree that the problem might be important and interesting, but the paper is not ready for publication in several respects, e.g., incremental contribution and insufficiently motivated/convincing experimental setups and results. Hence, I recommend rejection.
val
[ "BkgE5OCoYr", "HylBaYf6FB", "SJguyu66Fr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper evaluates a variety of existing pixel-wise out-of-distribution detection methods in the task of semantic segmentation of road scenes. To do so, the paper introduces an evaluation protocol and applies it to two datasets (SUN and IDD) and two models (PSPNet and DeepLabV3+).\n\nStrengths:\n- The paper is we...
[ 3, 1, 3 ]
[ 3, 3, 3 ]
[ "iclr_2020_SJlxglSFPB", "iclr_2020_SJlxglSFPB", "iclr_2020_SJlxglSFPB" ]
iclr_2020_BylWglrYPH
Symmetry and Systematicity
We argue that symmetry is an important consideration in addressing the problem of systematicity and investigate two forms of symmetry relevant to symbolic processes. We implement this approach in terms of convolution and show that it can be used to achieve effective generalisation in three toy problems: rule learning, composition and grammar learning.
reject
Thanks for clarifying several issues raised by the reviewers, which helped us understand the paper. In the end, we decided not to accept this paper due to the weakness of its contribution. I hope the updated comments by the reviewers help you strengthen your paper for a potential future submission.
train
[ "ryx5OlGTYr", "B1x8rXrnsB", "SyeDvfMhoB", "S1xGY5VijB", "B1xUuaAtir", "SJgWsDRKor", "HJl2YAI7sS", "SJeX-qL7jH", "BklVZr8XiS", "Syx8eeAbir", "HJe7pRBAFH", "B1ePQyfk9S" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n======================================== Update after rebuttal =============================================\n\nI have now read the author rebuttal, but my concerns about the paper remain. The training details are not described in anywhere near sufficient detail (optimizer?, batch size?, learning rate?, initiali...
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_BylWglrYPH", "SyeDvfMhoB", "B1xUuaAtir", "SJgWsDRKor", "Syx8eeAbir", "B1ePQyfk9S", "ryx5OlGTYr", "ryx5OlGTYr", "ryx5OlGTYr", "HJe7pRBAFH", "iclr_2020_BylWglrYPH", "iclr_2020_BylWglrYPH" ]
iclr_2020_HJlzxgBtwH
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
The robustness of neural-network-based classifiers against adversarial manipulations is mainly evaluated with empirical attacks, as the methods for exact computation, even when available, do not scale to large networks. We propose in this paper a new white-box adversarial attack with respect to the lp-norms for p∈{1,2,∞} that aims at finding the minimal perturbation necessary to change the class of a given input. It has an intuitive geometric meaning, quickly yields high-quality results, and minimizes the size of the perturbation (so that it returns the robust accuracy at every threshold with a single run). It performs better than or similarly to state-of-the-art attacks, which are partially specialized to one lp-norm.
reject
This work presents a method for generating an (approximately) minimal adversarial perturbation for neural networks. During the discussion period, the AC raised additional concerns that were not originally addressed by the reviewers. The method is an iterative first-order method for solving constrained optimization problems; however, when considered as a new first-order optimization method, the contribution seems minimal. Most of the additions are rather straightforward (e.g., using a line search at each step to determine the optimal step size), and the reported gains over PGD are unconvincing. PGD can be considered a "universal" first-order optimizer [1]; as such, we should be careful that the reported gains are substantial and not just a question of tuning. Given that using a line search at each step increases the computational cost by a multiplicative factor, the comparison with PGD should take this into account. The AC notes that several plots in the Appendix show PGD having better performance (particularly on restricted ImageNet), and for others there remain questions about how PGD is tuned (for example, the CIFAR-10 plots in Figure 5). One of two things explains the discrepancies in Figure 5: either PGD is finding a worse local optimum than FAB, or PGD has not converged to a local optimum. Experiments need to be provided to rule out the second possibility, as this is evidence that PGD is not being tuned properly. Some standard things to check are the step size and the number of steps. Additionally, enforcing a constant step size after projection is an easy way to improve the performance of PGD. For example, if the gradient of the loss is approximately equal to the normal vector of the constraint, then proj(x_i + lambda * g) ~ x_i will result in an effective step size that is too low to make progress. Finally, it is unclear what practical use there is for a method that finds an approximately minimum-norm perturbation. There are no provable guarantees, so this cannot be used for certification. Additionally, in order to properly assess the security and reliability of ML systems, it is necessary to consider larger visual distortions, occlusions, and corruptions (such as the ones in [2]), as these will actually be encountered in practice. 1. https://arxiv.org/pdf/1706.06083.pdf 2. https://arxiv.org/abs/1807.01697
train
[ "Hkgp5zksFS", "BJeCBkS5Kr", "rJe_TblqiS", "ryxt4-xcjr", "ByePLDycsH", "HJgiUscAFH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The authors propose a new gradient-based method (FAB) for constructing adversarial perturbations for deep neural networks. At a high level, the method repeatedly estimates the decision boundary based on the linearization of the classifier at a given point and projects to the closest \"misclassified\" example based...
[ 6, 6, -1, -1, -1, 6 ]
[ 5, 5, -1, -1, -1, 4 ]
[ "iclr_2020_HJlzxgBtwH", "iclr_2020_HJlzxgBtwH", "HJgiUscAFH", "Hkgp5zksFS", "BJeCBkS5Kr", "iclr_2020_HJlzxgBtwH" ]
iclr_2020_B1xGxgSYvH
Domain-Invariant Representations: A Look on Compression and Weights
Learning invariant representations to adapt deep classifiers of a source domain to a new target domain has recently attracted much attention. In this paper, we show that the search for invariance favors the compression of representations. We point out that this may have a negative impact on the adaptability of representations, expressed as a minimal combined domain error. By considering the risk of compression, we show that weighting representations can align representation distributions without impacting their adaptability. This supports the claim that representation invariance is too strict a constraint. First, we introduce a new bound on the target risk that reveals a trade-off between compression and invariance of learned representations. More precisely, our results show that the adaptability of a representation can be better controlled when the compression risk is taken into account. In contrast, preserving adaptability may overestimate the risk of compression, which makes the bound impracticable. We support these statements with a theoretical analysis illustrated on a standard domain adaptation benchmark. Second, we show that learning weighted representations plays a key role in relaxing the constraint of invariance while preserving the risk of compression. Taking advantage of this trade-off may open up promising directions for the design of new adaptation methods.
reject
This paper provides a new theoretical framework for domain adaptation by exploring compression and adaptability. Reviewers and AC generally agree that this paper discusses an important problem and provides new insight, but it is not a thorough theoretical work. The reviewers identified several key limitations of the theory, such as unrealistic conditions and approximations. Some important points still require more work to make the framework practical for algorithm design and computation. The presentation could also be improved. Hence, I recommend rejection.
test
[ "HJx6Sqr2sr", "BklhSIZ5iS", "BJgmzUqtsS", "rJleDfqtjB", "HJe2_6KFiB", "S1x9j79pKS", "BJeSgyAaKB", "HklaqN7yqH" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewers for their valuable and insightful comments which have helped us to improve our submission and for their time for checking both the proofs and experiments.\n\nWe provide an updated version of the submission which addresses some concerns of the reviewers:\n- We clarify the exper...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2020_B1xGxgSYvH", "rJleDfqtjB", "S1x9j79pKS", "BJeSgyAaKB", "HklaqN7yqH", "iclr_2020_B1xGxgSYvH", "iclr_2020_B1xGxgSYvH", "iclr_2020_B1xGxgSYvH" ]
iclr_2020_BJxQxeBYwH
Are Powerful Graph Neural Nets Necessary? A Dissection on Graph Classification
Graph Neural Nets (GNNs) have received increasing attention, partially due to their superior performance in many node and graph classification tasks. However, there is a lack of understanding of what they are learning and how sophisticated the learned graph functions are. In this work, we propose a dissection of GNNs on graph classification into two parts: 1) the graph filtering, where graph-based neighbor aggregations are performed, and 2) the set function, where a set of hidden node features are composed for prediction. To study the importance of both parts, we propose to linearize them separately. We first linearize the graph filtering function, resulting in the Graph Feature Network (GFN), which is a simple lightweight neural net defined on a set of graph-augmented features. Further linearization of GFN's set function results in the Graph Linear Network (GLN), which is a linear function. Empirically, we perform evaluations on common graph classification benchmarks. To our surprise, we find that, despite the simplification, GFN could match or exceed the best accuracies produced by recently proposed GNNs (with a fraction of the computation cost), while GLN underperforms significantly. Our results demonstrate the importance of the non-linear set function, and suggest that linear graph filtering with a non-linear set function is an efficient and powerful scheme for modeling existing graph classification benchmarks.
reject
This paper proposes to split the GNN operations into two parts and study the effects of each part. While two reviewers are positive about this paper, the other reviewer, R1, has raised some concerns. During the discussion, R1 responded and indicated that his/her concerns were not addressed in the author rebuttal. Overall, I feel the paper is borderline and lean towards rejection.
train
[ "rylyUz9voB", "Bkxh3xqwsH", "S1xJYl9vsB", "HklhfgqDsr", "BJg-ri4CYB", "HJgX4T6CtH", "Hye94W7M5r", "B1xn-Pmq5B", "SygrzG5uqH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This is to log what we have changed in the revision:\n\n1. We added comparisons to RETGK and GNTK as suggested by Reviewer 1.\n2. We clarified a notation as suggested by Reviewer 2.\n3. We added an experiment on varying dataset size according to the comment of Reviewer 3.\n", "We thank the reviewer for the time ...
[ -1, -1, -1, -1, 3, 6, 6, -1, -1 ]
[ -1, -1, -1, -1, 4, 4, 3, -1, -1 ]
[ "iclr_2020_BJxQxeBYwH", "BJg-ri4CYB", "Hye94W7M5r", "HJgX4T6CtH", "iclr_2020_BJxQxeBYwH", "iclr_2020_BJxQxeBYwH", "iclr_2020_BJxQxeBYwH", "SygrzG5uqH", "iclr_2020_BJxQxeBYwH" ]
iclr_2020_H1eVlgHKPr
Event Discovery for History Representation in Reinforcement Learning
Environments in Reinforcement Learning (RL) are usually only partially observable. To address this problem, a possible solution is to provide the agent with information about past observations. While common methods represent this history using a Recurrent Neural Network (RNN), in this paper we propose an alternative representation which is based on a record of the past events observed in a given episode. Inspired by human memory, these events describe only important changes in the environment and, in our approach, are automatically discovered using self-supervision. We evaluate our history representation method using two challenging RL benchmarks: some games of the Atari-57 suite and the 3D environment Obstacle Tower. Using these benchmarks, we show the advantage of our solution with respect to common RNN-based approaches.
reject
The authors propose approaches to handle partial observability in reinforcement learning. The reviewers agree that the paper does not sufficiently justify the methods that are proposed, and even the experimental performance shows that the proposed method is not always better than the baselines.
train
[ "rye2pMZ0FS", "r1lPjaL3jr", "BylovTL2iH", "ByxIBpUhsH", "r1eaI3UhjB", "BJlBx38hoB", "H1eghoUhsH", "HyeyLsL2oB", "HyeSboInjB", "B1x9RcIhiH", "SyeCIqUnsr", "S1lLJcLhjS", "rJlhnKL3or", "HJglFFUnor", "BJgtIKUnjB", "rkl8zFUniH", "r1xGJtIhsS", "SJlfcdI2sS", "ByeQSdL2jB", "r1erzO82iB"...
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "auth...
[ "This paper proposes a new way to represent past history as input to an RL agent, that consists in clustering states and providing the (soft) cluster assignment of past states in the input. The clustering algorithm comes from previous work based on mutual information, where close (in time) observations are assumed ...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_H1eVlgHKPr", "S1e-0pOIuS", "S1e-0pOIuS", "S1e-0pOIuS", "S1e-0pOIuS", "rye2pMZ0FS", "rye2pMZ0FS", "rye2pMZ0FS", "rye2pMZ0FS", "rye2pMZ0FS", "rye2pMZ0FS", "rJxzim-EcS", "rJxzim-EcS", "rJxzim-EcS", "rJxzim-EcS", "rJxzim-EcS", "rJxzim-EcS", "rJxzim-EcS", "rJxzim-EcS", "r...
iclr_2020_H1eNleBYwr
GENN: Predicting Correlated Drug-drug Interactions with Graph Energy Neural Networks
Gaining more comprehensive knowledge about drug-drug interactions (DDIs) is one of the most important tasks in drug development and medical practice. Recently graph neural networks have achieved great success in this task by modeling drugs as nodes and drug-drug interactions as links and casting DDI predictions as link prediction problems. However, correlations between link labels (e.g., DDI types) were rarely considered in existing works. We propose the graph energy neural network (GENN) to explicitly model link type correlations. We formulate the DDI prediction task as a structure prediction problem and introduce a new energy-based model where the energy function is defined by graph neural networks. Experiments on two real-world DDI datasets demonstrated that GENN is superior to many baselines without consideration of link type correlations and achieved 13.77% and 5.01% PR-AUC improvement on the two datasets, respectively. We also present a case study in which GENN can better capture meaningful DDI correlations compared with baseline models.
reject
This paper studies the use of a graph neural network for the drug-drug interaction (DDI) prediction task (an instance of a link prediction task with drugs as vertices and interactions as edges). In particular, the authors apply structured prediction energy networks (SPEN) and model the dependency structure of the labels by minimising an energy function. The authors empirically validate the proposed approach against feedforward GNNs on two DDI prediction tasks. The reviewers feel that understanding drug-drug interactions is an important task and that the work is well motivated. However, the reviewers argued that the proposed methodology is not novel enough to merit publication at ICLR and that some conclusions are not supported by the empirical analysis. For the former, the benefits of the semi-supervised design need to be clearly and concisely presented. For the latter, providing a more convincing practical benefit would greatly improve the manuscript. As such, I recommend the rejection of this paper in its current state.
train
[ "BJlN-zN3jS", "rkgWTlV2iH", "S1efEg4hir", "SJgPbxV3jS", "S1lFnblnKS", "ryx_dOHk9r", "r1gnZ-O-qr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. We added more elaboration about the difference from previous works in the contributions of Introduction. We also add one sentence in the second paragraph of Section 2.2 to explain our difference from the previous method.\n2. We re-organized the paragraphs in Section 4.3 and also added more detailed formulations...
[ -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2020_H1eNleBYwr", "S1lFnblnKS", "ryx_dOHk9r", "r1gnZ-O-qr", "iclr_2020_H1eNleBYwr", "iclr_2020_H1eNleBYwr", "iclr_2020_H1eNleBYwr" ]
iclr_2020_rkxNelrKPB
On Stochastic Sign Descent Methods
Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large-scale machine learning models. Sign-based methods, such as signSGD (Bernstein et al., 2018), have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM. In this paper, we perform a general analysis of sign-based methods for non-convex optimization. Our analysis is built on intuitive bounds on success probabilities and does not rely on special noise distributions nor on the boundedness of the variance of stochastic gradients. Extending the theory to the distributed setting within a parameter server framework, we assure exponentially fast variance reduction with respect to the number of nodes, maintaining 1-bit compression in both directions and using small mini-batch sizes. We validate our theoretical findings experimentally.
reject
This paper proposes an analysis of signSGD in some special cases. SignSGD has been shown to be of interest, whether because of its similarity to Adam or in quasi-convex settings. The complaint shared by reviewers was the strength of the conditions. SGC is really strong; I have yet to see increasing mini-batch sizes used in practice (although there are quite a few papers mentioning this technique to get a convergence rate), and the strength of the other two is harder to assess. With that said, the improvement compared to existing work such as Karimireddy et al. (2019) is unclear. I encourage the authors to address the comments of the reviewers and to submit an improved version to a later, or perhaps more theoretical, conference.
train
[ "BJl9kDNE5H", "HkemzUr2iS", "SyxoFPcHor", "rJgpnv5SjS", "B1gVTLqrjS", "rJxqaMwTtS", "HJeobaJAFS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents an improved analysis of the signSGD gradient estimator. The authors propose to relax the requirements on the gradient estimator in Bernstein (2019). The only requirement imposed on the gradient is that it should have the correct sign with probability greater than 1/2. In particular this approach...
[ 6, -1, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_rkxNelrKPB", "SyxoFPcHor", "HJeobaJAFS", "rJxqaMwTtS", "BJl9kDNE5H", "iclr_2020_rkxNelrKPB", "iclr_2020_rkxNelrKPB" ]
iclr_2020_HyerxgHYvH
Neural Arithmetic Unit by reusing many small pre-trained networks
We propose a solution for the evaluation of mathematical expressions. However, instead of designing a single end-to-end model, we propose a Lego-bricks-style architecture. In this architecture, instead of training a complex end-to-end neural network, many small networks can be trained independently, each accomplishing one specific operation and acting as a single Lego brick. More difficult or complex tasks can then be solved using a combination of these smaller networks. In this work we first identify 8 fundamental operations that are commonly used to solve arithmetic problems (such as 1-digit multiplication, addition, subtraction, sign calculation, etc.). These fundamental operations are then learned using simple feed-forward neural networks. We then show that different operations can be designed simply by reusing these smaller networks. As an example, we reuse these smaller networks to develop a larger and more complex network to solve n-digit multiplication, n-digit division, and the cross product. This bottom-up strategy not only introduces reusability; we also show that it allows generalization to computations involving n digits, and we show results for up to 7-digit numbers. Unlike existing methods, our solution also generalizes to both positive and negative numbers.
reject
This paper proposes to train and compose neural networks for the purposes of arithmetic operations. All reviewers agree that the motivation for such a work is unclear, and the general presentation in the paper can be significantly improved. As such, I cannot recommend this paper in its current state for publication.
train
[ "BygZ_Hp9FS", "H1luNTN3tS", "H1gznbUMqr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to use neural networks to evaluate the mathematical expressions by designing 8 small building blocks for 8 fundamental operations, e.g., addition, subtraction, etc. They then design multi-digit multiplication and division using these small blocks. \n\nThe motivation of this paper is not very cl...
[ 1, 1, 1 ]
[ 1, 3, 3 ]
[ "iclr_2020_HyerxgHYvH", "iclr_2020_HyerxgHYvH", "iclr_2020_HyerxgHYvH" ]
iclr_2020_r1xHxgrKwr
Anomaly Detection Based on Unsupervised Disentangled Representation Learning in Combination with Manifold Learning
Identifying anomalous samples from highly complex and unstructured data is a crucial but challenging task in a variety of intelligent systems. In this paper, we present a novel deep anomaly detection framework named AnoDM (standing for Anomaly detection based on unsupervised Disentangled representation learning and Manifold learning). The disentanglement learning is currently implemented by beta-VAE for automatically discovering interpretable factorized latent representations in a completely unsupervised manner. The manifold learning is realized by t-SNE for projecting the latent representations to a 2D map. We define a new anomaly score function by combining beta-VAE's reconstruction error in the raw feature space and local density estimation in the t-SNE space. AnoDM was evaluated on both image and time-series data and achieved better results than models that use just one of the two measures and other deep learning methods.
reject
The paper presents AnoDM (Anomaly detection based on unsupervised Disentangled representation learning and Manifold learning), which combines beta-VAE and t-SNE for anomaly detection. Experimental results on both image and time series data are shown to demonstrate the effectiveness of the proposed solution. The paper aims to attack a challenging problem. The proposed solution is reasonable. The authors did a good job of addressing some of the concerns raised in the reviews. However, two major concerns remain: (1) the novelty of the proposed model (a combination of two existing models) is not clear; (2) the experimental results are not fully convincing. While theoretical analysis is not a must for all models, it would be useful to conduct thorough experiments to fully understand how the model works, which is missing in the current version. Given the two reasons above, the paper did not attract enough enthusiasm from the reviewers during the discussion. We hope the reviews can help improve the paper for a better publication in the future.
train
[ "Hke85R3jsB", "Hkxee0dysS", "rJgerS3ooH", "ryxGQXnoiB", "HkeKsEc7tr", "Skxsb0gioB", "r1xgoEtysH", "rJenPMF1jB", "BygqC2QXKB", "r1xQXHqitr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We have uploaded a better version to address all reviews' comments.", "Thanks for your support. We will do the following to improve this work. (1) We will clarify that since existing work uses CNN/LSTM-VAEs (e.g. Park et al. 2017) which are special cases of beta-VAE, thus as long as best performance is achieved ...
[ -1, -1, -1, -1, 3, -1, -1, -1, 3, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, 4, 4 ]
[ "Hkxee0dysS", "r1xQXHqitr", "r1xgoEtysH", "Skxsb0gioB", "iclr_2020_r1xHxgrKwr", "rJenPMF1jB", "BygqC2QXKB", "HkeKsEc7tr", "iclr_2020_r1xHxgrKwr", "iclr_2020_r1xHxgrKwr" ]
iclr_2020_rkgIllBtwB
Exploring the Correlation between Likelihood of Flow-based Generative Models and Image Semantics
Among deep generative models, flow-based models, simply referred to as flows in this paper, differ from other models in that they provide a tractable likelihood. Besides being an evaluation metric for synthesized data, flows are supposed to be robust against out-of-distribution (OoD) inputs since they do not discard any information about the inputs. However, it has been observed that flows trained on FashionMNIST assign higher likelihoods to OoD samples from MNIST. This counter-intuitive observation raises concerns about the robustness of flows' likelihoods. In this paper, we explore the correlation between flows' likelihoods and image semantics. We choose two typical flows as the target models: Glow, based on coupling transformations, and pixelCNN, based on autoregressive transformations. Our experiments reveal a surprisingly weak correlation between flows' likelihoods and image semantics: the predictive likelihoods of flows can be heavily affected by trivial transformations that keep the image semantics unchanged, which we call semantic-invariant transformations (SITs). We explore three SITs (all small pixel-level modifications): image pixel translation, random noise perturbation, and latent factor zeroing (limited to flows using a multi-scale architecture, e.g., Glow). These findings, though counter-intuitive, resonate with the fact that the predictive likelihood of a flow is the joint probability of all the image pixels. So flows' likelihoods, which model pixel-level intensities, are not able to indicate the existence likelihood of the high-level image semantics. We call attention to the fact that it may be an abuse to use the predictive likelihoods of flows for OoD sample detection.
reject
This paper discusses the (lack of) correlation between image semantics and the likelihood assigned by flow-based models, and the implications for out-of-distribution (OOD) detection. The reviewers raised several important questions: 1) precise definition of OOD: definition of semantics vs. typicality (cf. the definition in Nalisnick et al. 2019 pointed out by R1). There was a nice discussion between the authors and the reviewers. At a high level, there was some agreement in the end, but the lack of a precise definition may cause confusion. I think adding a precise definition will add more clarity and improve the paper. 2) novelty: similar observations have been made in earlier papers, cf. Nalisnick et al. 2018. R3 also pointed to a recent paper by Ren et al. 2019 which showed that the likelihood can be dominated by background pixels. Older work has shown that likelihood and sample quality are not necessarily correlated. The reviewers appreciate that this paper provides additional evidence, but weren't convinced that the new observations in this paper qualified for a full paper. 3) experiments on more datasets. Overall, while this paper explores an interesting direction, it's not ready for publication as is. I encourage the authors to revise the paper based on the feedback and submit to a different venue.
train
[ "Syx5mzW0YS", "SkgdyLFhjr", "rJee5Bdnor", "BkgVYpD3or", "rylqlowniH", "Skl8OrlNjr", "B1eJNHl4iS", "H1g_p5VriH", "HkekeBfNsB", "r1goAJZSKH", "BygO8NNRFH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper raises a problem of the robustness of (log) likelihood computed by invertible flows. The authors show that the changes of likelihood of an image computed by flow-based image generative models have surprisingly weak correlations with semantic changes of image. The flow likelihoods are sensitive to very s...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_rkgIllBtwB", "rJee5Bdnor", "BkgVYpD3or", "Skl8OrlNjr", "iclr_2020_rkgIllBtwB", "B1eJNHl4iS", "r1goAJZSKH", "Syx5mzW0YS", "BygO8NNRFH", "iclr_2020_rkgIllBtwB", "iclr_2020_rkgIllBtwB" ]
iclr_2020_HJewxlHFwH
Skew-Explore: Learn faster in continuous spaces with sparse rewards
In many reinforcement learning settings, rewards that are extrinsically available to the learning agent are too sparse to train a suitable policy. Besides reward shaping, which requires human expertise, utilizing better exploration strategies helps to circumvent the problem of policy training with sparse rewards. In this work, we introduce an exploration approach based on maximizing the entropy of the visited states while learning a goal-conditioned policy. The main contribution of this work is to introduce a novel reward function which, combined with a goal-proposing scheme, increases the entropy of the visited states faster compared to prior work. This improves the exploration capability of the agent, and therefore enhances the agent's chance of solving sparse-reward problems more efficiently. Our empirical studies demonstrate the superiority of the proposed method in solving different sparse-reward problems in comparison to prior work.
reject
While the reviewers generally appreciated the ideas presented in the paper and found the overall aims and motivation of the paper to be compelling, there were too many questions raised about the experiments and the soundness of the technical formulation to accept the paper at this time, and the reviewers did not feel that the authors had adequately addressed these issues in their responses. The main concerns were (1) with the correctness and rigor of the technical derivation, which the reviewers generally found to be somewhat questionable -- while the main idea seems reasonable, the details have a few too many question marks; (2) the experimental results have a number of shortcomings that make it difficult to fully understand whether the method really works, and how well.
train
[ "H1lXWHd45H", "rJgtIu9niH", "r1gP38qnsr", "r1eBrOq3or", "SylErwc2jS", "Hylrs_c3jS", "H1gZ0XznYS", "SJl_0LyRtr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the problem of exploration in reinforcement learning. The key idea is to learn a goal-conditioned agent and do exploration by selecting goals at the frontier of previously visited states. This frontier is estimated using an extension of prior work (Pong 2019). The method is evaluated on two con...
[ 3, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_HJewxlHFwH", "r1eBrOq3or", "H1lXWHd45H", "SJl_0LyRtr", "r1gP38qnsr", "H1gZ0XznYS", "iclr_2020_HJewxlHFwH", "iclr_2020_HJewxlHFwH" ]
iclr_2020_HylvleBtPB
Language-independent Cross-lingual Contextual Representations
Contextual representation models like BERT have achieved state-of-the-art performance on a diverse range of NLP tasks. We propose a cross-lingual contextual representation model that generates language-independent contextual representations. This helps to enable zero-shot cross-lingual transfer of a wide range of NLP models, on top of contextual representation models like BERT. We provide a formulation of language-independent cross-lingual contextual representation based on mono-lingual representations. Our formulation takes three steps to align sequences of vectors: transform, extract, and reorder. We present a detailed discussion about the process of learning cross-lingual contextual representations, also about the performance in cross-lingual transfer learning and its implications.
reject
The paper proposes a method to learn cross-lingual representations by aligning monolingual models with the help of a parallel corpus using a three-step process: transform, extract, and reorder. Experiments on XNLI show that the proposed method is able to perform zero-shot cross-lingual transfer, although its overall performance is still below state-of-the-art jointly trained method XLM. All three reviewers suggested that the proposed method needs to be evaluated more thoroughly (more datasets and languages). R2 and R4 raise some concerns around the complexity of the proposed method (possibly could be simplified further). R3 suggests a more thorough investigation on why the model saturates at 250,000 parallel sentences, among others. The authors acknowledged reviewers' concerns in their response and will incorporate them in future work. I recommend rejecting this paper for ICLR.
train
[ "HkgCAIXniB", "BklxL4DDYr", "ryexDZ32KB", "SyxGx3e6YH" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the comments from all reviewers! \n\nWe acknowledge that one of the main weaknesses of this paper is in its evaluation, which is only performed on a single pair of language and on a single task. We are working on more experiments to strengthen the evaluation:\n\n- Evaluation on more languages, such as G...
[ -1, 3, 3, 3 ]
[ -1, 5, 5, 4 ]
[ "iclr_2020_HylvleBtPB", "iclr_2020_HylvleBtPB", "iclr_2020_HylvleBtPB", "iclr_2020_HylvleBtPB" ]
iclr_2020_rkxdexBYPB
Group-Transformer: Towards A Lightweight Character-level Language Model
Character-level language modeling is an essential but challenging task in Natural Language Processing. Prior works have focused on identifying long-term dependencies between characters and have built deeper and wider networks for better performance. However, their models require substantial computational resources, which hinders the usability of character-level language models in applications with limited resources. In this paper, we propose a lightweight model, called Group-Transformer, that reduces the resource requirements for a Transformer, a promising method for modeling sequence with long-term dependencies. Specifically, the proposed method partitions linear operations to reduce the number of parameters and computational cost. As a result, Group-Transformer only uses 18.2\% of parameters compared to the best performing LSTM-based model, while providing better performance on two benchmark tasks, enwik8 and text8. When compared to Transformers with a comparable number of parameters and time complexity, the proposed model shows better performance. The implementation code will be available.
reject
This paper proposes using a lightweight alternative to Transformer self-attention called Group-Transformer. This is proposed in order to overcome difficulties in modelling long-distance dependencies in character-level language modelling. They take inspiration from work on group convolutions. They experiment on two large-scale char-level LM datasets, which show positive results, but experiments on word-level tasks fail to show benefits. I think that this work, though promising, is still somewhat incremental and has not been shown to be widely applicable, and therefore I recommend that it is not accepted.
train
[ "rJxrktS3oS", "BJlvg1B2ir", "SygU4SVhsB", "B1l51SNniS", "ByxM6fN2or", "ByeW0XEhoH", "ryloNM9diB", "B1g4xYYOoH", "Sylk5ztuiH", "Hyx8TztusS", "B1e_weqbKH", "Syx8D1tKFr", "r1xkV0ATtB" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your response!\n\n1. What do you mean by \"a common method used to compare model efficiencies (Bai et al., NIPS-19)\". Can you briefly describe it?\n\n- As you know, the point of this paper is to see if our methodology is efficient at making the lightweight character-level language model. Since ther...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "BJlvg1B2ir", "Hyx8TztusS", "ryloNM9diB", "B1g4xYYOoH", "Sylk5ztuiH", "iclr_2020_rkxdexBYPB", "B1e_weqbKH", "Syx8D1tKFr", "r1xkV0ATtB", "r1xkV0ATtB", "iclr_2020_rkxdexBYPB", "iclr_2020_rkxdexBYPB", "iclr_2020_rkxdexBYPB" ]
iclr_2020_HygFxxrFvB
Differentially Private Mixed-Type Data Generation For Unsupervised Learning
In this work we introduce the DP-auto-GAN framework for synthetic data generation, which combines the low dimensional representation of autoencoders with the flexibility of GANs. This framework can be used to take in raw sensitive data, and privately train a model for generating synthetic data that should satisfy the same statistical properties as the original data. This learned model can be used to generate arbitrary amounts of publicly available synthetic data, which can then be freely shared due to the post-processing guarantees of differential privacy. Our framework is applicable to unlabeled \emph{mixed-type data}, that may include binary, categorical, and real-valued data. We implement this framework on both unlabeled binary data (MIMIC-III) and unlabeled mixed-type data (ADULT). We also introduce new metrics for evaluating the quality of synthetic mixed-type data, particularly in unsupervised settings.
reject
This paper provides a new method, called DPAutoGAN, for the problem of differentially private synthetic data generation. The method uses a private auto-encoder to reduce the dimension of the data, and applies a private GAN on the latent space. The reviewers think that there is not sufficient justification for why this is a good approach for synthetic data generation. They also think that the presentation is not ready for publication.
test
[ "Hyxu1aV2iH", "S1gIouZsYr", "B1l4NEaecS", "SJl8-OvmKS" ]
[ "author", "official_reviewer", "official_reviewer", "public" ]
[ "We thank the reviewers for their time and comments. We have made a careful editing pass on the paper to make the following improvements at the reviewers' suggestion:\n1. Grammatical editing -- we caught many typos including those pointed out the the reviewers\n2. Comparison to existing work -- we added a more exp...
[ -1, 1, 3, -1 ]
[ -1, 4, 3, -1 ]
[ "iclr_2020_HygFxxrFvB", "iclr_2020_HygFxxrFvB", "iclr_2020_HygFxxrFvB", "iclr_2020_HygFxxrFvB" ]
iclr_2020_B1eYlgBYPH
A Deep Recurrent Neural Network via Unfolding Reweighted l1-l1 Minimization
Deep unfolding methods design deep neural networks as learned variations of optimization methods. These networks have been shown to achieve faster convergence and higher accuracy than the original optimization methods. In this line of research, this paper develops a novel deep recurrent neural network (coined reweighted-RNN) by unfolding a reweighted l1-l1 minimization algorithm and applies it to the task of sequential signal reconstruction. To the best of our knowledge, this is the first deep unfolding method that explores reweighted minimization. Due to the underlying reweighted minimization model, our RNN has a different soft-thresholding function (alias, different activation function) for each hidden unit in each layer. Furthermore, it has higher network expressivity than existing deep unfolding RNN models due to the over-parameterizing weights. Moreover, we establish theoretical generalization error bounds for the proposed reweighted-RNN model by means of Rademacher complexity. The bounds reveal that the parameterization of the proposed reweighted-RNN ensures good generalization. We apply the proposed reweighted-RNN to the problem of video-frame reconstruction from low-dimensional measurements, that is, sequential frame reconstruction. The experimental results on the moving MNIST dataset demonstrate that the proposed deep reweighted-RNN significantly outperforms existing RNN models.
reject
This paper presents a novel RNN algorithm based on unfolding a reweighted L1-L1 minimization problem. The authors derive a generalization error bound that is tighter than those of existing methods. All reviewers appreciate the theoretical contributions of the paper, particularly the derivation of generalization error bounds. However, at a higher level, the overall idea is incremental because RNNs obtained by unfolding the L1-L1 minimization problem (Le+, 2019) and reweighted L1 minimization (Candes+, 2008) are both known techniques. The proposed method is essentially a simple combination of them, and therefore the result seems somewhat obvious. Also, I agree with the reviewers that some experiments are not deep enough to support the theory. For example, for the over-parameterization (large number of model parameters) issue, one can compare models with the same number of parameters and observe how they generalize. Overall, this is a very borderline paper that provides a good theoretical contribution with limited conceptual novelty and empirical evidence. In conclusion, I decided to recommend rejection, but the paper could be accepted if there is room.
val
[ "B1ezsbCiiB", "B1eJSH0sir", "SJl-lXCsor", "ByebhFEatH", "r1e_OXit9S", "B1e9hGLn5r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the comments and suggestions on the manuscript. Please find below our responses to your corresponding questions: \n\n1. The training time for the proposed Reweighted RNN model (with our default settings, namely, the compressed sensing rate of 0.2, d=3 hidden layers and h= 2^10 hidden units per layer)...
[ -1, -1, -1, 3, 6, 8 ]
[ -1, -1, -1, 4, 1, 1 ]
[ "B1e9hGLn5r", "ByebhFEatH", "r1e_OXit9S", "iclr_2020_B1eYlgBYPH", "iclr_2020_B1eYlgBYPH", "iclr_2020_B1eYlgBYPH" ]
iclr_2020_r1lclxBYDS
On the implicit minimization of alternative loss functions when training deep networks
Understanding the implicit bias of optimization algorithms is important in order to improve generalization of neural networks. One approach to try to exploit such understanding would be to then make the bias explicit in the loss function. Conversely, an interesting approach to gain more insights into the implicit bias could be to study how different loss functions are being implicitly minimized when training the network. In this work, we concentrate our study on the inductive bias occurring when minimizing the cross-entropy loss with different batch sizes and learning rates. We investigate how three loss functions are being implicitly minimized during training. These three loss functions are the Hinge loss with different margins, the cross-entropy loss with different temperatures and a newly introduced Gcdf loss with different standard deviations. This Gcdf loss establishes a connection between a sharpness measure for the 0−1 loss and margin based loss functions. We find that a common behavior is emerging for all the loss functions considered.
reject
The paper proposes an interesting setting in which the effect of different optimization parameters on the loss function is analyzed. The analysis is based on considering cross-entropy loss with different softmax parameters, or hinge loss with different margin parameters. The observations are interesting but ultimately the reviewers felt that the experimental results were not sufficient to warrant publication at ICLR. The reviews unanimously recommended rejection, and no rebuttal was provided.
train
[ "ryx18TZ6dr", "ryeeKOZatS", "B1lQ53WCKH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper want to show that minimizing cross-entropy loss will simultaneously minimize Hinge loss with different margins, cross-entropy loss with different temperatures and a newly introduced Gcdf loss with different standard deviations. The main contribution is a new gcdf loss based on Gaussian-perturbed paramet...
[ 1, 3, 3 ]
[ 3, 3, 4 ]
[ "iclr_2020_r1lclxBYDS", "iclr_2020_r1lclxBYDS", "iclr_2020_r1lclxBYDS" ]
iclr_2020_r1xjgxBFPB
Continual Deep Learning by Functional Regularisation of Memorable Past
Continually learning new skills without forgetting old ones is an important quality for an intelligent system, yet most deep learning methods suffer from catastrophic forgetting of the past. Recent works have addressed this by regularising the network weights, but it is challenging to identify weights crucial to avoid forgetting. A better approach is to directly regularise the network outputs at past inputs, e.g., by using Gaussian processes (GPs), but this is usually computationally challenging. In this paper, we propose a scalable functional-regularisation approach where we regularise only over a few memorable past examples that are crucial to avoid forgetting. Our key idea is to use a GP formulation of deep networks, enabling us to both identify the memorable past and regularise over them. Our method achieves state-of-the-art performance on standard benchmarks and opens a new direction for life-long learning where regularisation methods are naturally combined with memory-based methods.
reject
This work tackles the problem of catastrophic forgetting by using Gaussian processes to identify "memory samples" to regularize learning. Although the approach seems promising and well-motivated, the reviewers ultimately felt that some claims, such as scalability, need stronger justifications. These justifications could come, for example, from further experiments, including ablation studies to gain insights. Making the paper more convincing in this way is particularly desirable since the directions taken by this paper largely overlap with recent literature (as argued by reviewers).
val
[ "SkxrQ_HCKS", "r1lb8nRosB", "Hkev9olmiS", "rJgGDoemsr", "SylF4jgXjr", "BygTT5g7sr", "Skgef0KaYH", "SJezZcn6FS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: The paper uses a Gaussian Processes framework previously introduced in [1] to identify the most important samples from the past for functional regularization. For evaluation authors report their average accuracy on Permuted MNIST, Split-MNIST, and CIFAR10-100 and achieve superior performance over EWC, DLP...
[ 1, -1, -1, -1, -1, -1, 1, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_r1xjgxBFPB", "iclr_2020_r1xjgxBFPB", "Skgef0KaYH", "SJezZcn6FS", "SkxrQ_HCKS", "iclr_2020_r1xjgxBFPB", "iclr_2020_r1xjgxBFPB", "iclr_2020_r1xjgxBFPB" ]
iclr_2020_BkxoglrtvH
Layerwise Learning Rates for Object Features in Unsupervised and Supervised Neural Networks And Consequent Predictions for the Infant Visual System
To understand how object vision develops in infancy and childhood, it will be necessary to develop testable computational models. Deep neural networks (DNNs) have proven valuable as models of adult vision, but it is not yet clear if they have any value as models of development. As a first model, we measured learning in a DNN designed to mimic the architecture and representational geometry of the visual system (CORnet). We quantified the development of explicit object representations at each level of this network through training by freezing the convolutional layers and training an additional linear decoding layer. We evaluate decoding accuracy on the whole ImageNet validation set, and also for individual visual classes. CORnet, however, uses supervised training and because infants have only extremely impoverished access to labels they must instead learn in an unsupervised manner. We therefore also measured learning in a state-of-the-art unsupervised network (DeepCluster). CORnet and DeepCluster differ in both supervision and in the convolutional networks at their heart, thus to isolate the effect of supervision, we ran a control experiment in which we trained the convolutional network from DeepCluster (an AlexNet variant) in a supervised manner. We make predictions on how learning should develop across brain regions in infants. In all three networks, we also tested for a relationship in the order in which infants and machines acquire visual classes, and found only evidence for a counter-intuitive relationship. We discuss the potential reasons for this.
reject
This paper investigates the properties of deep neural networks as they learn, and how they may relate to human visual learning (e.g. how learning develops across regions of the infant brain). The paper received three reviews, all of which recommended Weak Reject. The reviewers generally felt the topic of the paper was very interesting, but overall felt that the insights that the paper revealed were relatively modest, and had concerns about the connections between DNN and human learning (e.g., the extent to which DNNs are biologically plausible -- including back propagation, batch normalization, random initialization, etc. -- and whether this matters for the conclusions of the present study). In response to comments, the authors undertook a significant revision to try to address these points of confusion. However, the reviewers were still skeptical and chose to keep their Weak Reject scores. The AC agrees with reviewers that investigations of the similarity -- or not! -- between infant and deep neural networks is extremely interesting and, as the authors acknowledge, is a high risk but potentially very high reward research direction. However, in light of the reviews with unanimous Weak Reject decisions, the AC is not able to recommend acceptance at this time. I strongly encourage authors to continue this work and submit to another venue; this would seem to be a perfect match for CogSci conference, for example. We hope the reviews below help authors to improve their manuscript for this next submission.
train
[ "SylW7oE2or", "Byx94FVsoB", "HylJUqycsr", "r1lrJqk5sB", "rkeLLY15sB", "HkgYR6zaFH", "BJlwdkiTtr", "H1emeU9CtH" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate your scepticism: this research was conceived and explicitly funded as a high-risk/high-gain programme. However, we believe the challenge is tractable because it can be tackled in an iterative way. Our current goal is to find initial points of contact between DNNs and infants, in the form of correspon...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 3, 5, 1 ]
[ "Byx94FVsoB", "r1lrJqk5sB", "BJlwdkiTtr", "H1emeU9CtH", "HkgYR6zaFH", "iclr_2020_BkxoglrtvH", "iclr_2020_BkxoglrtvH", "iclr_2020_BkxoglrtvH" ]
iclr_2020_r1ghgxHtPH
Blurring Structure and Learning to Optimize and Adapt Receptive Fields
The visual world is vast and varied, but its variations divide into structured and unstructured factors. We compose free-form filters and structured Gaussian filters, optimized end-to-end, to factorize deep representations and learn both local features and their degree of locality. In effect this optimizes over receptive field size and shape, tuning locality to the data and task. Our semi-structured composition is strictly more expressive than free-form filtering, and changes in its structured parameters would require changes in architecture for standard networks. Dynamic inference, in which the Gaussian structure varies with the input, adapts receptive field size to compensate for local scale variation. Optimizing receptive field size improves semantic segmentation accuracy on Cityscapes by 1-2 points for strong dilated and skip architectures and by up to 10 points for suboptimal designs. Adapting receptive fields by dynamic Gaussian structure further improves results, equaling the accuracy of free-form deformation while improving efficiency.
reject
The paper proposes an interesting idea of inserting Gaussian convolutions into a ConvNet in order to increase and adapt the effective receptive fields of network units. The reviewers generally agree that the idea is interesting and that the results on Cityscapes are promising. However, it is hard not to agree with Reviewer 3 that validation on a single dataset for a single task is not sufficient. This criticism remains unaddressed.
test
[ "SylSK0_jjr", "B1xLU0OosH", "HJg6MAOsjB", "S1e3I7w9KH", "Bye9gaWgqB", "Skg0uXMXqS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the feedback, and especially for coupling each point with advice for improvement.\n\n> improved efficiency (one of the main claims) is only assessed on the number of parameters\n\nOur main claim is to make filter size differentiable and unbounded (Figures 1 & 2), and we make use of Gaussian structure...
[ -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, 4, 3, 5 ]
[ "S1e3I7w9KH", "Bye9gaWgqB", "Skg0uXMXqS", "iclr_2020_r1ghgxHtPH", "iclr_2020_r1ghgxHtPH", "iclr_2020_r1ghgxHtPH" ]
iclr_2020_rJehllrtDS
Rethinking deep active learning: Using unlabeled data at model training
Active learning typically focuses on training a model on few labeled examples alone, while unlabeled ones are only used for acquisition. In this work we depart from this setting by using both labeled and unlabeled data during model training across active learning cycles. We do so by using unsupervised feature learning at the beginning of the active learning pipeline and semi-supervised learning at every active learning cycle, on all available data. The former has not been investigated before in active learning, while the study of latter in the context of deep learning is scarce and recent findings are not conclusive with respect to its benefit. Our idea is orthogonal to acquisition strategies by using more data, much like ensemble methods use more models. By systematically evaluating on a number of popular acquisition strategies and datasets, we find that the use of unlabeled data during model training brings a spectacular accuracy improvement in image classification, compared to the differences between acquisition strategies. We thus explore smaller label budgets, even one label per class.
reject
This paper argues that incorporating unsupervised/semi-supervised learning into the training process can dramatically increase the performance of models. In particular, its incorporation can result in performance gains that dwarf the gains obtained by collecting data actively alone. The experiments effectively demonstrate this phenomenon. The paper is written with a tone that implicitly assumes that "active learning for deep learning is effective" and therefore it is a surprise and a challenge to the status quo that using unlabelled data in intelligent ways alone gets such a boost. On the contrary, reviewers found that active learning not working very well for deep learning is a well-known state of affairs. This is not surprising because the most effective theoretically justifiable active learning algorithms rely on finite capacity assumptions about the model class, which deep learning disobeys. Thus, the reviewers found the conclusions to lack novelty as the power of semi-supervised and unsupervised learning is well known. Reject.
train
[ "Bke-m6GVoB", "SJgouzL7or", "rylYGfUQsr", "HylXa7IQjS", "SyeAR7IXiH", "BklJV7UmjS", "HJe-PntOtS", "HJg_1902tB", "SJlVOqRh9B", "BkgLl-YVcS", "HJg-jbrqFB" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This is a general response to all reviewers, meant to address common concerns and summarize our (lengthy) responses to each reviewer.\n\n1. We would like to thank all reviewers for their excellent in-depth analysis and feedback. The resulting discussion here helps us in improving our work, in particular discussion...
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 5, 4, 5, -1, -1 ]
[ "iclr_2020_rJehllrtDS", "SJlVOqRh9B", "SJlVOqRh9B", "HJe-PntOtS", "HJe-PntOtS", "HJg_1902tB", "iclr_2020_rJehllrtDS", "iclr_2020_rJehllrtDS", "iclr_2020_rJehllrtDS", "HJg-jbrqFB", "iclr_2020_rJehllrtDS" ]
iclr_2020_HJxTgeBtDr
Towards Interpretable Evaluations: A Case Study of Named Entity Recognition
With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits. Simply looking at differences between holistic metrics such as accuracy, BLEU, or F1 do not tell us \emph{why} or \emph{how} a particular method is better and how dataset biases influence the choices of model design. In this paper, we present a general methodology for {\emph{interpretable}} evaluation of NLP systems and choose the task of named entity recognition (NER) as a case study, which is a core task of identifying people, places, or organizations in text. The proposed evaluation method enables us to interpret the \textit{model biases}, \textit{dataset biases}, and how the \emph{differences in the datasets} affect the design of the models, identifying the strengths and weaknesses of current approaches. By making our {analysis} tool available, we make it easy for future researchers to run similar analyses and drive the progress in this area.
reject
The paper diligently set up and conducted multiple experiments to validate its approach - bucketizing attributes of the data and analyzing them accordingly to discover deeper insights, e.g., biases. However, reviewers pointed out that such bucketing is tailored to tasks where attributes are easily observed, such as the one in focus in this paper - NER. While the manuscript proposes this approach as ‘general’, the reviewers failed to see this point. Another reviewer recommended that this manuscript become a journal submission rather than a conference paper, due to the length of the appendix (17 pages). Some confusion around the writing was also pointed out by reviewers. We highly recommend the authors carefully reflect on the reviewers' comments, both the pros and cons of the paper, to improve it for a future submission.
test
[ "Bye062bisS", "rkxnw3-isS", "Syg_KsbijH", "SklmjObioH", "r1gOzubiir", "BJlGfDZssr", "H1xyV2ljsH", "rkxmuiLnYH", "Skgm6QnaYr", "ByljmRZSqH" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "For other detailed suggestions, we have refined our paper based on your feedback:\n1)    Clarify the description of “Supplementary exam”\n2)  Re-organize the Sec.2.2 and remove some repetition in methodological\nperspective\n3)    Merge Sec.3 into Sec.2 \n4)    Add a more intuitive explanation of the measures defi...
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "Skgm6QnaYr", "Skgm6QnaYr", "Skgm6QnaYr", "ByljmRZSqH", "rkxmuiLnYH", "rkxmuiLnYH", "iclr_2020_HJxTgeBtDr", "iclr_2020_HJxTgeBtDr", "iclr_2020_HJxTgeBtDr", "iclr_2020_HJxTgeBtDr" ]
iclr_2020_S1xRxgSFvH
ShardNet: One Filter Set to Rule Them All
Deep CNNs have achieved state-of-the-art performance for numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret. Machine learning theory implies that such networks are highly over-parameterised and that it should be possible to reduce their size without sacrificing accuracy, and indeed many recent studies have begun to highlight specific redundancies that can be exploited to achieve this. In this paper, we take a further step in this direction by proposing a filter-sharing approach to compressing deep CNNs that reduces their memory footprint by repeatedly applying a single convolutional mapping of learned filters to simulate a CNN pipeline. We show, via experiments on CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet that this allows us to reduce the parameter counts of networks based on common designs such as VGGNet and ResNet by a factor proportional to their depth, whilst leaving their accuracy largely unaffected. At a broader level, our approach also indicates how the scale-space regularities found in visual signals can be leveraged to build neural architectures that are more parsimonious and interpretable.
reject
This submission proposes an interesting experiment/modification of CNNs. However, it looks like this contribution overlaps significantly with prior work (that the authors initially missed), and the comparison in the (revised) manuscript does not seem to clearly delineate and acknowledge the similarities and differences. I suggest the authors improve this aspect and try submitting this work to another venue.
train
[ "ByxW79whor", "SkeWW4VvoH", "rkxvNAcKjH", "SJxyyCctiB", "SyekhEVPsH", "BkgfN4NPiS", "r1g1Y4NPiB", "HJxWWgojYr", "ByeCD0Ritr", "rkebzlRTKB" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers again for their useful feedback. As promised, we have now uploaded a revised version of the paper that contains:\n- A more concise version of the introduction.\n- A revision to the ‘Related Work’ section to include discussion about the latest literature on recurrent implementations of convol...
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, 1, 5, 5 ]
[ "iclr_2020_S1xRxgSFvH", "iclr_2020_S1xRxgSFvH", "HJxWWgojYr", "ByeCD0Ritr", "ByeCD0Ritr", "SkeWW4VvoH", "rkebzlRTKB", "iclr_2020_S1xRxgSFvH", "iclr_2020_S1xRxgSFvH", "iclr_2020_S1xRxgSFvH" ]
iclr_2020_SJxAlgrYDr
City Metro Network Expansion with Reinforcement Learning
This paper presents a method to solve the city metro network expansion problem using reinforcement learning (RL). In this method, we formulate the metro expansion as a process of sequential station selection, and design feasibility rules based on the selected station sequence to ensure the reasonable connection patterns of metro line. Following this formulation, we train an actor critic model to design the next metro line. The actor is a seq2seq network with attention mechanism to generate the parameterized policy which is the probability distribution over feasible stations. The critic is used to estimate the expected reward, which is determined by the output station sequences generated by the actor during training, in order to reduce the training variance. The learning procedure only requires the reward calculation, thus our general method can be extended to multi-factor cases easily. Considering origin-destination (OD) trips and social equity, we expand the current metro network in Xi'an, China, based on the real mobility information of 24,770,715 mobile phone users in the whole city. The results demonstrate the effectiveness of our method.
reject
The paper explores the use of RL (actor-critic) for planning the expansion of a metro subway network in a City. The reviewers felt that novelty was limited and there was not enough motivation on what is special about this application, and what lessons can be learned from this exercise.
train
[ "rJgavbZ2tr", "Hyxa9ARAKr", "rJl7sAFy5H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a method for solving the problem of network expansion, in particular, considers the city metro network and its expansion within the new metro line. Since this problem was previously represented as the non-linear integer problem, having an exponential number of constraints or requiring expert ...
[ 3, 3, 3 ]
[ 3, 3, 3 ]
[ "iclr_2020_SJxAlgrYDr", "iclr_2020_SJxAlgrYDr", "iclr_2020_SJxAlgrYDr" ]
iclr_2020_HJlk-eHFwH
AdaGAN: Adaptive GAN for Many-to-Many Non-Parallel Voice Conversion
Voice Conversion (VC) is the task of converting the perceived speaker identity from a source speaker to a particular target speaker. Earlier approaches in the literature primarily find a mapping between the given source-target speaker pairs. Developing mapping techniques for many-to-many VC using non-parallel data, including zero-shot learning, remains a less explored area in VC. Most of the many-to-many VC architectures require training data from all the target speakers for whom we want to convert the voices. In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in the training (i.e., the case of zero-shot learning). In particular, we propose the Adaptive Generative Adversarial Network (AdaGAN), whose new architecture and training procedure help in learning a normalized speaker-independent latent representation, which will be used to generate speech with different speaking styles in the context of VC. We compare our results with the state-of-the-art StarGAN-VC architecture. In particular, AdaGAN achieves 31.73% and 10.37% relative improvements compared to StarGAN in MOS tests for speech quality and speaker similarity, respectively. The key strength of the proposed architecture is that it yields these results with less computational complexity. AdaGAN is 88.6% less complex than StarGAN-VC in terms of Floating point Operations Per Second (FLOPS), and 85.46% less complex in terms of trainable parameters.
reject
The paper has major presentation issues. The rebuttal clarified some technical ones, but it is clear that the authors need to improve the writing substantially, so the paper is not acceptable in its current form.
train
[ "SJxDQYG0FS", "HklBlV-ssB", "BylOnuWiir", "rkehj4Zosr", "BkgNckrRtH", "r1xjn_YV5r", "SkgCMSDftB", "r1lkU8ByuH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper presents a voice conversion approach using GANs based on adaptive instance normalization (AdaIN). The authors give the mathematical formulation of the problem and provide the implementation of the so-called AdaGAN. Experiments are carried out on VCTK and the proposed AdaGAN is compared with StarGAN. T...
[ 1, -1, -1, -1, 1, 6, -1, -1 ]
[ 3, -1, -1, -1, 5, 3, -1, -1 ]
[ "iclr_2020_HJlk-eHFwH", "r1xjn_YV5r", "SJxDQYG0FS", "BkgNckrRtH", "iclr_2020_HJlk-eHFwH", "iclr_2020_HJlk-eHFwH", "r1lkU8ByuH", "iclr_2020_HJlk-eHFwH" ]
iclr_2020_B1xybgSKwB
Self-Attentional Credit Assignment for Transfer in Reinforcement Learning
The ability to transfer knowledge to novel environments and tasks is a sensible desiderata for general learning agents. Despite the apparent promises, transfer in RL is still an open and little exploited research area. In this paper, we take a brand-new perspective about transfer: we suggest that the ability to assign credit unveils structural invariants in the tasks that can be transferred to make RL more sample efficient. Our main contribution is Secret, a novel approach to transfer learning for RL that uses a backward-view credit assignment mechanism based on a self-attentive architecture. Two aspects are key to its generality: it learns to assign credit as a separate offline supervised process and exclusively modifies the reward function. Consequently, it can be supplemented by transfer methods that do not modify the reward function and it can be plugged on top of any RL algorithm.
reject
The paper introduces a novel approach to transfer learning in RL based on credit assignment. The reviewers had quite diverse opinions on this paper. The strength of the paper is that it introduces an interesting new direction for transfer learning in RL. However, there are some questions regarding design choices and whether the experiments sufficiently validate the idea (i.e., the sensitivity to hyperparameters is a question that is not sufficiently addressed). Overall, this research has great potential. However, a more extensive empirical study is necessary before it can be accepted.
train
[ "S1gkk2lt9H", "Hkg4OyMnsr", "rkePrJMhiS", "ryg1CRZ3oH", "BJlk_RW3sr", "SJgA7RW3jB", "H1eNjMcpKH", "Skx8qjXJcr", "BJeDbT_1_r", "r1g_Ve11uH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper proposes to consider the problem of transfer in the context of sequential decision-making -- in particular reinforcement learning -- from the view-point of learning transferable credit assignment capability. They hypothesize that by learning how to assign credit, structural invariants can be learned whi...
[ 8, -1, -1, -1, -1, -1, 3, 6, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, 3, 3, -1, -1 ]
[ "iclr_2020_B1xybgSKwB", "rkePrJMhiS", "H1eNjMcpKH", "Skx8qjXJcr", "SJgA7RW3jB", "S1gkk2lt9H", "iclr_2020_B1xybgSKwB", "iclr_2020_B1xybgSKwB", "r1g_Ve11uH", "iclr_2020_B1xybgSKwB" ]
iclr_2020_rklx-gSYPS
Learning to Optimize via Dual space Preconditioning
Preconditioning a minimization algorithm improves its convergence and can lead to a minimizer in one iteration in some extreme cases. There is currently no analytical way of finding a suitable preconditioner. We present a general methodology for learning the preconditioner and show that it can lead to dramatic speed-ups over standard optimization techniques.
reject
Thanks for the detailed replies to the reviewers, which significantly helped us understand your paper better. However, in the end, we decided not to accept your paper due to weak justification and limited experimental validation. The writing should also be improved significantly. We hope that the feedback from the reviewers helps you improve your paper for a potential future submission.
train
[ "HJetjtg0YB", "rkgBTk-sjH", "rygBT9ljir", "BJlFUg-iiH", "Bkg2rQQZoH", "SygFLpeAYB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "[Update after rebuttal period]\nI have read the response, my confusion in the original reviews cannot be answered satisfactorily. Therefore, I keep my initial scores.\n\n\n[Original reviews]\nFirstly, the motivation of the proposed method is not convincing for me. The authors want to propose a general methodology...
[ 3, -1, -1, -1, 3, 1 ]
[ 5, -1, -1, -1, 3, 4 ]
[ "iclr_2020_rklx-gSYPS", "SygFLpeAYB", "HJetjtg0YB", "Bkg2rQQZoH", "iclr_2020_rklx-gSYPS", "iclr_2020_rklx-gSYPS" ]
iclr_2020_BJx-ZeSKDB
Compositional Embeddings: Joint Perception and Comparison of Class Label Sets
We explore the idea of compositional set embeddings that can be used to infer not just a single class, but the set of classes associated with the input data (e.g., image, video, audio signal). This can be useful, for example, in multi-object detection in images, or multi-speaker diarization (one-shot learning) in audio. In particular, we devise and implement two novel models consisting of (1) an embedding function f trained jointly with a “composite” function g that computes set union operations between the classes encoded in two embedding vectors; and (2) embedding f trained jointly with a “query” function h that computes whether the classes encoded in one embedding subsume the classes encoded in another embedding. In contrast to prior work, these models must both perceive the classes associated with the input examples, and also encode the relationships between different class label sets. In experiments conducted on simulated data, OmniGlot, and COCO datasets, the proposed composite embedding models outperform baselines based on traditional embedding approaches.
reject
The authors propose a new type of compositional embedding (with two proposed variants) for performing tasks that involve set relationships between examples (say, images) containing sets of classes (say, objects). The setting is new and the reviewers are mostly in agreement (after discussion and revision) that the approach is interesting and the results encouraging. There is some concern, however, that the task setup may be too contrived, and that in any real task there could be a more obvious baseline that would do better. For example, one task setup requires that examples be represented via embeddings, and no reference can be made to the original inputs; this is justified in a setting where space is a constraint, but the combination of this setting with the specific set query tasks considered seems quite rare. The paper may be an example of a hammer in search of a nail. The ideas are interesting and the paper is written well, and so the authors can hopefully refine the proposed class of problems toward more practical settings.
train
[ "Hkx7Kvvk5S", "HJg9EJB2oB", "HylvuIEnjB", "S1lrgINnoH", "ByxlAVN2jB", "B1lxwbMTtS", "H1edfYN0YS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper describes a way to train functions that are able to represent the union of classes as well as to query if the classes in an image subsume the classes in another image. This is done throughly jointly training embedding functions, a set union function and a query function. The paper reads well.\n\nWhile t...
[ 6, -1, -1, -1, -1, 6, 3 ]
[ 3, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_BJx-ZeSKDB", "iclr_2020_BJx-ZeSKDB", "B1lxwbMTtS", "H1edfYN0YS", "Hkx7Kvvk5S", "iclr_2020_BJx-ZeSKDB", "iclr_2020_BJx-ZeSKDB" ]
iclr_2020_HJe-blSYvH
Unsupervised Learning of Efficient and Robust Speech Representations
We present an unsupervised method for learning speech representations based on a bidirectional contrastive predictive coding that implicitly discovers phonetic structure from large-scale corpora of unlabelled raw audio signals. The representations, which we learn from up to 8000 hours of publicly accessible speech data, are evaluated by looking at their impact on the behaviour of supervised speech recognition systems. First, across a variety of datasets, we find that the features learned from the largest and most diverse pretraining dataset result in significant improvements over standard audio features as well as over features learned from smaller amounts of pretraining data. Second, they significantly improve sample efficiency in low-data scenarios. Finally, the features confer significant robustness advantages to the resulting recognition systems: we see significant improvements in out-of-domain transfer relative to baseline feature sets, and the features likewise provide improvements in four different low-resource African language datasets.
reject
The paper focuses on learning speech representations with contrastive predictive coding (CPC). As noted by reviewers, (i) the novelty is too low (mostly making the model bidirectional) for ICLR, and (ii) a comparison with existing work is missing.
train
[ "rJezAFHRFS", "r1xr4KzjsS", "SyxV5OzjsS", "SklFPOGiiS", "H1gpEOzsjH", "rkl6lH2for", "rJlW3Szi_r", "HJlqbVhpYH" ]
[ "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "This paper investigates an unsupervised learning approach based on bi-directional contrasive predictive coding (CPC) to learning speech representations. The speech representations learned using 1k and 8k hours unlabeled data based on CPC are shown to be helpful in semi-supervised learning ASR tasks in terms of sa...
[ 6, -1, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2020_HJe-blSYvH", "rkl6lH2for", "rJlW3Szi_r", "HJlqbVhpYH", "rJezAFHRFS", "rJlW3Szi_r", "iclr_2020_HJe-blSYvH", "iclr_2020_HJe-blSYvH" ]
iclr_2020_BJeGZxrFvS
A Simple Technique to Enable Saliency Methods to Pass the Sanity Checks
{\em Saliency methods} attempt to explain a deep net's decision by assigning a {\em score} to each feature/pixel in the input, often doing this credit-assignment via the gradient of the output with respect to input. Recently \citet{adebayosan} questioned the validity of many of these methods since they do not pass simple {\em sanity checks}, which test whether the scores shift/vanish when layers of the trained net are randomized, or when the net is retrained using random labels for inputs. We propose a simple fix to existing saliency methods that helps them pass sanity checks, which we call {\em competition for pixels}. This involves computing saliency maps for all possible labels in the classification task, and using a simple competition among them to identify and remove less relevant pixels from the map. Some theoretical justification is provided for it and its performance is empirically demonstrated on several popular methods.
reject
This submission proposes a method to pass sanity checks on saliency methods for model explainability that were proposed in a prior work. Pros: -The method is simple, intuitive and does indeed pass the proposed checks. Cons: -The proposed method aims to pass the sanity checks, but is not well-evaluated on whether it provides good explanations. Passing these checks can be considered as necessary but not sufficient. -All reviewers agreed that the evaluation could be improved and most reviewers found the evaluation insufficient. Given the shortcomings, AC agrees with the majority recommendation to reject.
train
[ "rJgOz7c2jH", "Bkgs3f93jr", "Hyg7GfcnjH", "BJg6iRPDYS", "S1enNy8TFr", "SJx3NyisqS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank reviewer 2 for their thoughtful review. We agree that the question of how to optimally apply competition among labels remains open. ", "We thank reviewer 1 for their thoughtful review. \n\nWe apologize for the typos, and are correcting them in the revised version. As for the sentence beginning Section 4...
[ -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, 3, 3, 4 ]
[ "BJg6iRPDYS", "S1enNy8TFr", "SJx3NyisqS", "iclr_2020_BJeGZxrFvS", "iclr_2020_BJeGZxrFvS", "iclr_2020_BJeGZxrFvS" ]
iclr_2020_SyeMblBtwr
CrossNorm: On Normalization for Off-Policy Reinforcement Learning
Off-policy temporal difference (TD) methods are a powerful class of reinforcement learning (RL) algorithms. Intriguingly, deep off-policy TD algorithms are not commonly used in combination with feature normalization techniques, despite positive effects of normalization in other domains. We show that naive application of existing normalization techniques is indeed not effective, but that well-designed normalization improves optimization stability and removes the necessity of target networks. In particular, we introduce a normalization based on a mixture of on- and off-policy transitions, which we call cross-normalization. It can be regarded as an extension of batch normalization that re-centers data for two different distributions, as present in off-policy learning. Applied to DDPG and TD3, cross-normalization improves over the state of the art across a range of MuJoCo benchmark tasks.
reject
This is certainly a borderline paper. The reviewers agreed this paper provides a good explanation and empirical justification of why popular normalization schemes don't help in DRL. The paper then proposes a simple scheme and demonstrates how it improves learning in several domains. The main concerns are the nature of these gains and how broadly useful the new approach is. In many cases there appear to be somewhat clear wins in the middle of the learning curves, but by the end of each experiment the error bars overlap. The most clear results are those with TD3. There are some oddities here: using half-SD error bars and smoothing---both underscore the concern about significance. The reviewers requested more experiments and the authors provided three more domains: two in which their method appears better. These are not widely used benchmarks, and it was hard to compare the performance of the baselines with Fan et al. (different setup) to evaluate the claims. The paper nicely provides lots of insight and empirical wisdom in the appendix, explaining how they got the algorithms to perform well.
train
[ "S1ghBRZ5or", "HkeMrgMqoS", "rJlRvAZcir", "B1gHVaZ5iH", "BJemg8D6Yr", "SylC_-5aFB", "Skg1CMy0tS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the constructive feedback, and for pointing out that our “results are surprisingly good when the simplicity of the algorithm is considered”.\n\n\n> I think it would be much better if the paper develops some theory behind the normalization\n\nThere are theoretical studies and empirical stu...
[ -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, 4, 1, 3 ]
[ "Skg1CMy0tS", "BJemg8D6Yr", "SylC_-5aFB", "iclr_2020_SyeMblBtwr", "iclr_2020_SyeMblBtwr", "iclr_2020_SyeMblBtwr", "iclr_2020_SyeMblBtwr" ]
iclr_2020_HJe7bxBYvr
Avoiding Negative Side-Effects and Promoting Safe Exploration with Imaginative Planning
With the recent proliferation of the usage of reinforcement learning (RL) agents for solving real-world tasks, safety emerges as a necessary ingredient for their successful application. In this paper, we focus on ensuring the safety of the agent while making sure that the agent does not cause any unnecessary disruptions to its environment. The current approaches to this problem, such as manually constraining the agent or adding a safety penalty to the reward function, can introduce bad incentives. In complex domains, these approaches are simply intractable, as they require knowing apriori all the possible unsafe scenarios an agent could encounter. We propose a model-based approach to safety that allows the agent to look into the future and be aware of the future consequences of its actions. We learn the transition dynamics of the environment and generate a directed graph called the imaginative module. This graph encapsulates all possible trajectories that can be followed by the agent, allowing the agent to efficiently traverse through the imagined environment without ever taking any action in reality. A baseline state, which can either represent a safe or an unsafe state (based on whichever is easier to define) is taken as a human input, and the imaginative module is used to predict whether the current actions of the agent can cause it to end up in dangerous states in the future. Our imaginative module can be seen as a ``plug-and-play'' approach to ensuring safety, as it is compatible with any existing RL algorithm and any task with discrete action space. Our method induces the agent to act safely while learning to solve the task. We experimentally validate our proposal on two gridworld environments and a self-driving car simulator, demonstrating that our approach to safety visits unsafe states significantly less frequently than a baseline.
reject
This paper tackles the problem of safe exploration in RL. The proposed approach uses an imaginative module to construct a connectivity graph between all states using forward predictions. The idea then consists in using this graph to plan a trajectory which avoids states labelled as "unsafe". Several concerns were raised and the authors did not provide any rebuttal. A major point is the assumption that the approach has access to which states are unsafe, which is either unreasonable in practice or makes the problem much simpler. Another major point is the uniform data collection over every state-action pair. This can be really unsafe and defeats the purpose of safe exploration following this phase. These questions may be due to a misunderstanding, indicating that the paper should be clarified, as requested by the reviewers. Finally, the experiments would benefit from additional details in order to be correctly understood. All reviewers agree that this paper should be rejected. Hence, I recommend reject.
train
[ "HkgkkpsmtB", "HkxTiwWTYB", "rygzKGNTtH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thie paper proposes using an \"imagination\" module to provide safe exploration during RL learning. The imagination module is used to perform forward predictions, constructing a graph between possible states. If any action would lead to a \"base state\" that is an unsafe state that action will not be executed and ...
[ 1, 1, 1 ]
[ 4, 4, 3 ]
[ "iclr_2020_HJe7bxBYvr", "iclr_2020_HJe7bxBYvr", "iclr_2020_HJe7bxBYvr" ]
iclr_2020_S1lNWertDr
Decoupling Hierarchical Recurrent Neural Networks With Locally Computable Losses
Learning long-term dependencies is a key long-standing challenge of recurrent neural networks (RNNs). Hierarchical recurrent neural networks (HRNNs) have been considered a promising approach as long-term dependencies are resolved through shortcuts up and down the hierarchy. Yet, the memory requirements of Truncated Backpropagation Through Time (TBPTT) still prevent training them on very long sequences. In this paper, we empirically show that in (deep) HRNNs, propagating gradients back from higher to lower levels can be replaced by locally computable losses, without harming the learning capability of the network, over a wide range of tasks. This decoupling by local losses reduces the memory requirements of training by a factor exponential in the depth of the hierarchy in comparison to standard TBPTT.
reject
All reviewers gave this paper a score of 1. The AC recommends rejection.
train
[ "BkgDnZ4noH", "BJge242NjH", "HyeYxYAs_H", "SJxfjMU9YB" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their thoughtful comments. Due to the low scores, we decided to not update our manuscript but we will still include the useful feedback into future revisions of the paper.", "Claim: Backpropagation of gradients from a higher to lower level in a HRNN can be removed and replaced with aux...
[ -1, 1, 1, 1 ]
[ -1, 4, 4, 5 ]
[ "iclr_2020_S1lNWertDr", "iclr_2020_S1lNWertDr", "iclr_2020_S1lNWertDr", "iclr_2020_S1lNWertDr" ]
iclr_2020_BJxSWeSYPB
Self-supervised Training of Proposal-based Segmentation via Background Prediction
While supervised object detection and segmentation methods achieve impressive accuracy, they generalize poorly to images whose appearance significantly differs from the data they have been trained on. To address this in scenarios where annotating data is prohibitively expensive, we introduce a self-supervised approach to detection and segmentation, able to work with monocular images captured with a moving camera. At the heart of our approach lies the observations that object segmentation and background reconstruction are linked tasks, and that, for structured scenes, background regions can be re-synthesized from their surroundings, whereas regions depicting the object cannot. We encode this intuition as a self-supervised loss function that we exploit to train a proposal-based segmentation network. To account for the discrete nature of the proposals, we develop a Monte Carlo-based training strategy that allows the algorithm to explore the large space of object proposals. We apply our method to human detection and segmentation in images that visually depart from those of standard benchmarks, achieving competitive results compared to the few existing self-supervised methods and approaching the accuracy of supervised ones that exploit large annotated datasets.
reject
This work proposes a self-supervised segmentation method: building upon Crawford and Pineau (2019), this work adds a Monte Carlo-based training strategy to explore object proposals. Reviewers found the method interesting and clever, but shared concerns about the lack of a better comparison to Crawford and Pineau, as well as a general lack of care in comparisons to other methods, which were not satisfactorily addressed by the authors' response. For these reasons, we recommend rejection.
train
[ "B1lr_hNniH", "S1lT7LJYoS", "BygQUXkKoH", "H1gAXMJFjB", "BylC3XOcYS", "HklAE1a2Fr", "HyloanZEcr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We tried replacing the importance sampling part in our method with the categorical reparameterization used in (Crawford and Pineau 2019). Since both strategies approximate the same objective, they should lead to very similar outcomes with a possible difference in the convergence speed. To this end we used Gumbel-S...
[ -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "H1gAXMJFjB", "BylC3XOcYS", "HklAE1a2Fr", "HyloanZEcr", "iclr_2020_BJxSWeSYPB", "iclr_2020_BJxSWeSYPB", "iclr_2020_BJxSWeSYPB" ]
iclr_2020_BJl8ZlHFwr
Relation-based Generalized Zero-shot Classification with the Domain Discriminator on the shared representation
Generalized zero-shot learning (GZSL) is the task of predicting a test image from seen or unseen classes using pre-defined class-attributes and images from the seen classes. Typical ZSL models assign the class corresponding to the most relevant attribute as the predicted label of the test image based on the learned relation between the attribute and the image. However, this relation-based approach presents a difficulty: many of the test images are predicted as biased to the seen domain, i.e., the \emph{domain bias problem}. Recently, many methods have addressed this difficulty using a synthesis-based approach that, however, requires generation of large amounts of high-quality unseen images after training and the additional training of classifier given them. Therefore, for this study, we aim at alleviating this difficulty in the manner of the relation-based approach. First, we consider the requirements for good performance in a ZSL setting and introduce a new model based on a variational autoencoder that learns to embed attributes and images into the shared representation space which satisfies those requirements. Next, we assume that the domain bias problem in GZSL derives from a situation in which embedding of the unseen domain overlaps that of the seen one. We introduce a discriminator that distinguishes domains in a shared space and learns jointly with the above embedding model to prevent this situation. After training, we can obtain prior knowledge from the discriminator of which domain is more likely to be embedded anywhere in the shared space. We propose combination of this knowledge and the relation-based classification on the embedded shared space as a mixture model to compensate class prediction. Experimentally obtained results confirm that the proposed method significantly improves the domain bias problem in relation-based settings and achieves almost equal accuracy to that of high-cost synthesis-based methods.
reject
This paper proposes a relation-based model that extends VAE to explicitly alleviate the domain bias problem between seen and unseen classes in the setting of generalized zero-shot learning. Reviewers and AC think that the studied problem is interesting, the reported experimental results are strong, and the writing is clear, but the proposed model and the scientific reasoning for why it is valuable are somewhat limited. Thus the authors are encouraged to further improve in these directions. In particular: - The idea of using a variant of the widely-used domain discriminator to make seen and unseen classes distinguishable somewhat contradicts the basic principle of zero-shot learning. How to balance the trade-off between seen and unseen classes has been an important problem in generalized ZSL. These problems need further elaboration. - The proposed model itself is not a real "VAE", making the value of an extensive derivation based on variational inference less prominent. - There is also the need to compare with the baselines mentioned by the reviewers. Overall, this is a borderline paper. Since the above concerns were not addressed convincingly in the rebuttal, I am leaning towards rejection.
train
[ "HkeUU1sniH", "Skg-wJPsir", "BJgl-GG5jB", "ByxAKp-coS", "BygJU3-ciB", "H1lg2tWcjr", "S1lrWRQvtH", "rJg9dq0kqr", "rJgIEz285r" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have corrected some more errors and typographical errors. In particular, the description of the hyperparameter that adjusts the variance of the inference model in the computation of the objective function of the domain discriminator was missing, so we mentioned this in the appendix.\n\nThank you.", "We thank ...
[ -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "Skg-wJPsir", "iclr_2020_BJl8ZlHFwr", "rJg9dq0kqr", "rJg9dq0kqr", "rJgIEz285r", "S1lrWRQvtH", "iclr_2020_BJl8ZlHFwr", "iclr_2020_BJl8ZlHFwr", "iclr_2020_BJl8ZlHFwr" ]
iclr_2020_r1xI-gHFDH
How can we generalise learning distributed representations of graphs?
We propose a general framework to construct unsupervised models capable of learning distributed representations of discrete structures such as graphs based on R-Convolution kernels and distributed semantics research. Our framework combines the insights and observations of Deep Graph Kernels and Graph2Vec towards a unified methodology for performing similarity learning on graphs of arbitrary size. This is exemplified by our own instance G2DR which extends Graph2Vec from labelled graphs towards unlabelled graphs and tackles issues of diagonal dominance through pruning of the subgraph vocabulary composing graphs. These changes produce new state of the art results in the downstream application of G2DR embeddings in graph classification tasks over datasets with small labelled graphs in binary classification to multi-class classification on large unlabelled graphs using an off-the-shelf support vector machine.
reject
The paper proposes a general framework to construct unsupervised models for representation learning of discrete structures. The reviewers feel that the approach is taken directly from graph kernels and that the novelty is not high enough.
val
[ "H1lWHaNhir", "rJgHTiE2sr", "BkxnW6eCFS", "H1lnJeRqiH", "H1eWRE9ciS", "rylK_u2bsH", "rkxdXdnbor", "Sylfcv2bjS", "SyxdvPnZjr", "S1xV-886FS", "ryexkwlRKr" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all of the reviewers for reading our work and providing feedback to improve our work and correct mistakes.\n\nWe have taken these into consideration and uploaded a revision. On top of including as many of the pointers and promised revisions as possible, we have changed parts of the presentat...
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 6, 1 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2020_r1xI-gHFDH", "H1lnJeRqiH", "iclr_2020_r1xI-gHFDH", "H1eWRE9ciS", "Sylfcv2bjS", "ryexkwlRKr", "S1xV-886FS", "SyxdvPnZjr", "BkxnW6eCFS", "iclr_2020_r1xI-gHFDH", "iclr_2020_r1xI-gHFDH" ]
iclr_2020_HyePberFvH
Monte Carlo Deep Neural Network Arithmetic
Quantization is a crucial technique for achieving low-power, low latency and high throughput hardware implementations of Deep Neural Networks. Quantized floating point representations have received recent interest due to their hardware efficiency benefits and ability to represent a higher dynamic range than fixed point representations, leading to improvements in accuracy. We present a novel technique, Monte Carlo Deep Neural Network Arithmetic (MCA), for determining the sensitivity of Deep Neural Networks to quantization in floating point arithmetic. We do this by applying Monte Carlo Arithmetic to the inference computation and analyzing the relative standard deviation of the neural network loss. The method makes no assumptions regarding the underlying parameter distributions. We evaluate our method on pre-trained image classification models on the CIFAR10 and ImageNet datasets. For the same network topology and dataset, we demonstrate the ability to gain the equivalent of bits of precision by simply choosing weight parameter sets which demonstrate a lower loss of significance from the Monte Carlo trials. Additionally, we can apply MCA to compare the sensitivity of different network topologies to quantization effects.
reject
The paper studies the impact of rounding errors on deep neural networks. The authors apply Monte Carlo arithmetic to standard DNN operations. Their results indeed show catastrophic cancellation in DNNs and that the resulting loss of significance in the number representation correlates with a decrease in validation performance, indicating that DNN performance is sensitive to rounding errors. Although recognizing that the paper addresses an important problem (quantized / finite precision neural networks), the reviewers point out that the contribution of the paper is somewhat incremental. During the rebuttal, the authors made an effort to improve the manuscript based on reviewer suggestions; however, review scores were not increased. The paper is slightly below the acceptance threshold, based on the reviews and my own reading, as the method is mostly restricted to diagnostics and cannot yet be used to help train low-precision neural networks.
train
[ "SygYoBI2oS", "B1xsDNhPjS", "rJeM1WhwiS", "S1gD9l3wiS", "rJxUTXU9tS", "S1xgzttTFH", "rkgOn7jaYS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for the constructive comments and valuable suggestions. We have uploaded a revised version of our paper following the suggestions. In the revised paper, we have highlighted the main changes in blue/aqua. The changes for each section can be summarized as follows:\n\nIn Section 1, we updat...
[ -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, 3, 1, 1 ]
[ "iclr_2020_HyePberFvH", "rJxUTXU9tS", "S1xgzttTFH", "rkgOn7jaYS", "iclr_2020_HyePberFvH", "iclr_2020_HyePberFvH", "iclr_2020_HyePberFvH" ]
iclr_2020_rkeO-lrYwr
Mode Connectivity and Sparse Neural Networks
We uncover a connection between two seemingly unrelated empirical phenomena: mode connectivity and sparsity. On the one hand, there is growing catalog of situations where, across multiple runs, SGD learns weights that fall into minima that are connected (mode connectivity). A striking example is described by Nagarajan & Kolter (2019). They observe that test error on MNIST does not change along the linear path connecting the end points of two independent SGD runs, starting from the same random initialization. On the other hand, there is the lottery ticket hypothesis of Frankle & Carbin (2019), where dense, randomly initialized networks have sparse subnetworks capable of training in isolation to full accuracy. However, neither phenomenon scales beyond small vision networks. We start by proposing a technique to find sparse subnetworks after initialization. We observe that these subnetworks match the accuracy of the full network only when two SGD runs for the same subnetwork are connected by linear paths with the no change in test error. Our findings connect the existence of sparse subnetworks that train to high accuracy with the dynamics of optimization via mode connectivity. In doing so, we identify analogues of the phenomena uncovered by Nagarajan & Kolter and Frankle & Carbin in ImageNet-scale architectures at state-of-the-art sparsity levels.
reject
This paper investigates theories related to network sparsification, namely mode connectivity and the so-called lottery ticket hypothesis. The paper is interesting and has merit, but on balance I find the contributions not sufficiently clear to warrant acceptance. The authors made substantial changes to the paper, which are admirable and which bring it to borderline status.
train
[ "B1gQ7ILjsB", "S1g7aB8isB", "S1eMPBLoiS", "HkeWmS8iiS", "BJeG9EUsor", "BkxR3EIjjH", "rJlhQU77jr", "HyxwWTFAOS", "HygFbYdatS" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nNOTE: We have posted an updated version of the paper that has been substantially restructured and rewritten to address your concerns. We highly recommend looking over the new paper.\n\nWe have summarized these changes in a general response (posted as a top-level comment). We ask that you read our general respons...
[ -1, -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "rJlhQU77jr", "rJlhQU77jr", "HygFbYdatS", "HyxwWTFAOS", "iclr_2020_rkeO-lrYwr", "iclr_2020_rkeO-lrYwr", "iclr_2020_rkeO-lrYwr", "iclr_2020_rkeO-lrYwr", "iclr_2020_rkeO-lrYwr" ]
iclr_2020_S1l_ZlrFvS
Why do These Match? Explaining the Behavior of Image Similarity Models
Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce an explanation approach for image similarity models, where a model's output is a score measuring the similarity of two inputs rather than a classification. In this task, an explanation depends on both of the input images, so standard methods do not apply. We propose an explanation method that pairs a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition. Our approach's ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2.
reject
This submission proposes an explainability method for deep visual representation models that have been trained to compute image similarity. Strengths: -The paper tackles an important and overlooked problem. -The proposed approach is novel and interesting. Weaknesses: -The evaluation is not convincing. In particular (i) the evaluation is performed only on ground-truth pairs, rather than on ground-truth pairs and predicted pairs; (ii) the user study doesn’t disambiguate whether users find the SANE explanations better than the saliency map explanations or whether users tend to find text more understandable in general than heat maps. The user study should have compared their predicted attributes to the attribute prediction baseline; (iii) the explanation of Figure 4 is not convincing: the attribute is not only being removed. A new attribute is also being inserted (i.e. a new color). Therefore it’s not clear whether the similarity score should have increased or decreased; (iv) the proposed metric in section 4.2 is flawed: It matters whether similarity increases or decreases with insertion or deletion. The proposed metric doesn’t reflect that. -Some key details, such as how the attribute insertion process was performed, haven’t been explained. The reviewer ratings were borderline after discussion, with some important concerns still not having been addressed after the author feedback period. Given the remaining shortcomings, AC recommends rejection.
train
[ "rJxnfmhhiB", "rkgosaFjoB", "Hygmw6tsor", "BygfkaYosr", "H1xPdntjiH", "BJeHgcM6KH", "S1gkMmh1qH", "HJl17Yad5B" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- ...the saliency map might highlight regions of the zipper and where the black color is present...\n\nAs you noted, a saliency map might represent more than one attribute. We evaluated the performance of the top ranked attribute, but one could return the top K attributes using our model. We didn’t do this because...
[ -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "BygfkaYosr", "BJeHgcM6KH", "S1gkMmh1qH", "HJl17Yad5B", "iclr_2020_S1l_ZlrFvS", "iclr_2020_S1l_ZlrFvS", "iclr_2020_S1l_ZlrFvS", "iclr_2020_S1l_ZlrFvS" ]
iclr_2020_ryxtWgSKPB
Quantum Optical Experiments Modeled by Long Short-Term Memory
We demonstrate how machine learning is able to model experiments in quantum physics. Quantum entanglement is a cornerstone for upcoming quantum technologies such as quantum computation and quantum cryptography. Of particular interest are complex quantum states with more than two particles and a large number of entangled quantum levels. Given such a multiparticle high-dimensional quantum state, it is usually impossible to reconstruct an experimental setup that produces it. To search for interesting experiments, one thus has to randomly create millions of setups on a computer and calculate the respective output states. In this work, we show that machine learning models can provide significant improvement over random search. We demonstrate that a long short-term memory (LSTM) neural network can successfully learn to model quantum experiments by correctly predicting output state characteristics for given setups without the necessity of computing the states themselves. This approach not only allows for faster search but is also an essential step towards automated design of multiparticle high-dimensional quantum experiments using generative machine learning models.
reject
The paper predicts properties of quantum states through RNNs. The idea is nice, but the results are very limited and require more work. It seems to be more suited for a conference focussing on quantum ML, even though the authors have an ML background. All reviewers agree on a rejection, and their arguments are solid. The authors offered no rebuttal.
train
[ "rJe7c2NjKB", "B1gPU8eTtS", "rJgKzbZpKB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed to use machine learning models to predict certain properties of complex quantum systems. In quantum physics experiments, one need to randomly search millions of experimental setups to search for interesting experiments. This paper shown that machine learning models can provide significant impro...
[ 3, 1, 1 ]
[ 1, 1, 4 ]
[ "iclr_2020_ryxtWgSKPB", "iclr_2020_ryxtWgSKPB", "iclr_2020_ryxtWgSKPB" ]
iclr_2020_rJecbgHtDH
A Boolean Task Algebra for Reinforcement Learning
We propose a framework for defining a Boolean algebra over the space of tasks. This allows us to formulate new tasks in terms of the negation, disjunction and conjunction of a set of base tasks. We then show that by learning goal-oriented value functions and restricting the transition dynamics of the tasks, an agent can solve these new tasks with no further learning. We prove that by composing these value functions in specific ways, we immediately recover the optimal policies for all tasks expressible under the Boolean algebra. We verify our approach in two domains, including a high-dimensional video game environment requiring function approximation, where an agent first learns a set of base skills, and then composes them to solve a super-exponential number of new tasks.
reject
This paper considers the situation where a set of reinforcement learning tasks are related by means of a Boolean algebra. The tasks considered are restricted to stochastic shortest path problems. The paper shows that learning goal-oriented value functions for subtasks enables the agent to solve new tasks (specified with Boolean operations on the goal sets) in a zero-shot fashion. Furthermore, the Boolean operations on tasks are transformed to simple arithmetic operations on the optimal action-value functions, enabling the zero-shot transfer to a new task to be computationally efficient. This approach to zero-shot transfer is tested in the four room domain without function approximation and a small video game with function approximation. The reviewers found several strengths and weaknesses in the paper. The paper was clearly written. The experiments support the claim that the method supports zero-shot composition of goal-specified tasks. The weaknesses lie in the restrictive assumptions. These assumptions require deterministic transition dynamics, reward functions that only differ on the terminal absorbing states, and having only two different terminal reward values possible across all tasks. These assumptions greatly restrict the applicability of the proposed method. The author response and reviewer comments indicated that some aspects of these restrictions can be softened in practice, but the form of composition described in this paper is restrictive. The task restrictions also seem to limit the method's utility on general reinforcement learning problems. The paper falls short of being ready for publication at ICLR. Further justification of the restrictive assumptions is required to convince the readers that the forms of composition considered in this paper are adequately general.
val
[ "rklZsDLptr", "HJeTCJi3sH", "H1gv8W93ir", "ryltpNthiB", "Syl8lYDnor", "SyxuyeBnoH", "S1g66kSnjr", "BJxdj0V2sH", "HkxCYRN3sH", "BJeb9TNhjS", "r1lQupNhjH", "S1e4FUxTFB", "S1gpyr26tH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a method of combining value functions for a certain class of tasks, including shortest path problems, to solve composed tasks. By expressing tasks as a Boolean algebra, they can be combined using the negation, conjunction and disjunction operations. Analogous operations are available for the opt...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rJecbgHtDH", "H1gv8W93ir", "BJeb9TNhjS", "Syl8lYDnor", "SyxuyeBnoH", "S1g66kSnjr", "S1gpyr26tH", "HkxCYRN3sH", "rklZsDLptr", "r1lQupNhjH", "S1e4FUxTFB", "iclr_2020_rJecbgHtDH", "iclr_2020_rJecbgHtDH" ]
iclr_2020_Byg9bxrtwS
Kernel and Rich Regimes in Overparametrized Models
A recent line of work studies overparametrized neural networks in the "kernel regime," i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat and Bach, we show how the scale of the initialization controls the transition between the "kernel" (aka lazy) and "rich" (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a simple two-layer model that already exhibits an interesting and meaningful transition between the kernel and rich regimes, and we demonstrate the transition for more complex matrix factorization models and multilayer non-linear networks.
reject
The paper studies how the size of the initialization of neural network weights affects whether the resulting training puts the network in a "kernel regime" or a "rich regime". Using a two-layer model they show, theoretically and practically, the transition between kernel and rich regimes. Further experiments are provided for more complex settings. The scores of the reviewers were widely spread, with a high score (8) from a low confidence reviewer with a very short review. While the authors responded to the reviewer comments, two of the reviewers (importantly including the one recommending reject) did not further engage. Overall, the paper studies an important problem, and provides insight into how weight initialization size can affect the final network. Unfortunately, there are many strong submissions to ICLR this year, and the submission in its current state is not yet suitable for publication.
train
[ "BklE7rOpKH", "BylDcS_djH", "rkgXIHOusr", "SkgHXBudsH", "B1eHz4x6Yr", "Hkgx4G7ZqH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper analyzes an inductive bias of the gradient flow for diagonal two-or higher-homogeneous models and characterizes a limit point depending on the initialization scale of parameters. Concretely, the paper shows that the gradient flow converges to an interpolator attaining minimum L1- (or L2-norm) when the s...
[ 6, -1, -1, -1, 8, 3 ]
[ 3, -1, -1, -1, 3, 3 ]
[ "iclr_2020_Byg9bxrtwS", "BklE7rOpKH", "B1eHz4x6Yr", "Hkgx4G7ZqH", "iclr_2020_Byg9bxrtwS", "iclr_2020_Byg9bxrtwS" ]
iclr_2020_B1gcblSKwB
Meta-Learning with Network Pruning for Overfitting Reduction
Meta-Learning has achieved great success in few-shot learning. However, the existing meta-learning models have been evidenced to overfit on meta-training tasks when using deeper and wider convolutional neural networks. This means that we cannot improve the meta-generalization performance by merely deepening or widening the networks. To remedy such a deficiency of meta-overfitting, we propose in this paper a sparsity constrained meta-learning approach to learn from meta-training tasks a subnetwork from which first-order optimization methods can quickly converge towards the optimal network in meta-testing tasks. Our theoretical analysis shows the benefit of sparsity for improving the generalization gap of the learned meta-initialization network. We have implemented our approach on top of the widely applied Reptile algorithm assembled with varying network pruning routines including Dense-Sparse-Dense (DSD) and Iterative Hard Thresholding (IHT). Extensive experimental results on benchmark datasets with different over-parameterized deep networks demonstrate that our method can not only effectively ease meta-overfitting but also in many cases improve the meta-generalization performance when applied to few-shot classification tasks.
reject
This paper proposes a regularization scheme for reducing meta-overfitting. After the rebuttal period, the reviewers all still had concerns about the significance of the paper's contributions and the thoroughness of the empirical study. As such, this paper isn't ready for publication at ICLR. See the reviewer's comments for detailed feedback on how to improve the paper.
train
[ "SkxXjkpqKr", "HkgZbj0g5r", "Syg1bRkLYH", "ryeANU6ioB", "ryeI-9XdjS", "Syl0XcQdsH", "rJlzd_7_sS", "S1xjWdmOor" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "\nIn this paper, the authors propose a new method to alleviate the effect of overfitting in the meta-learning scenario. The method is based on network pruning. Empirical results demonstrate the effectiveness of the proposed method.\n\nPros:\n+ The problem is very important in the meta-learning field. The model is ...
[ 3, 3, 3, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2020_B1gcblSKwB", "iclr_2020_B1gcblSKwB", "iclr_2020_B1gcblSKwB", "iclr_2020_B1gcblSKwB", "Syg1bRkLYH", "Syg1bRkLYH", "SkxXjkpqKr", "HkgZbj0g5r" ]
iclr_2020_ByliZgBKPH
Policy path programming
We develop a normative theory of hierarchical model-based policy optimization for Markov decision processes resulting in a full-depth, full-width policy iteration algorithm. This method performs policy updates which integrate reward information over all states at all horizons simultaneously thus sequentially maximizing the expected reward obtained per algorithmic iteration. Effectively, policy path programming ascends the expected cumulative reward gradient in the space of policies defined over all state-space paths. An exact formula is derived which finitely parametrizes these path gradients in terms of action preferences. Policy path gradients can be directly computed using an internal model thus obviating the need to sample paths in order to optimize in depth. They are quadratic in successor representation entries and afford natural generalizations to higher-order gradient techniques. In simulations, it is shown that intuitive hierarchical reasoning is emergent within the associated policy optimization dynamics.
reject
The reviewers were not convinced about the significance of this work. There is no empirical or theoretical result justifying why this method has advantages over the existing methods. The reviewers also raised concerns related to the scalability of the proposal. Since none of the reviewers were enthusiastic about the paper, including the expert ones, I cannot recommend acceptance of this work.
train
[ "ryxWltR5sS", "rJg_CgCUYH", "SkxEoyMAYr", "ryer2GGXqB", "rygNnxhncS" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks to all reviewers for your feedback. The manuscript has been edited to address some of the issues raised and to improve its clarity and precision. The overall impression is that comparative demonstrations of this theory embedded in a scalable RL algorithm is required which is not possible at this stage. To a...
[ -1, 3, 3, 3, 1 ]
[ -1, 4, 1, 1, 5 ]
[ "iclr_2020_ByliZgBKPH", "iclr_2020_ByliZgBKPH", "iclr_2020_ByliZgBKPH", "iclr_2020_ByliZgBKPH", "iclr_2020_ByliZgBKPH" ]
iclr_2020_Skeh-xBYDH
On Symmetry and Initialization for Neural Networks
This work provides an additional step in the theoretical understanding of neural networks. We consider neural networks with one hidden layer and show that when learning symmetric functions, one can choose initial conditions so that standard SGD training efficiently produces generalization guarantees. We empirically verify this and show that this does not hold when the initial conditions are chosen at random. The proof of convergence investigates the interaction between the two layers of the network. Our results highlight the importance of using symmetry in the design of neural networks.
reject
The two main concerns raised by reviewers are whether the results are significant and a potential issue in the proof. While the rebuttal clarified some steps in the proof, the main concerns about the significance remain. The authors are encouraged to make this significance clearer. Note that one reviewer argued theoretical papers are not suitable for ICLR. This is false, as a theoretical understanding of neural networks remains a key research area that is of wide interest to the community. Consequently, this review was not considered in the final evaluation.
train
[ "rklvJlN3jB", "H1eCE4LosB", "BkewxGfGsH", "HklNwhbMsB", "H1gUil1TYS", "Bkg_Pp2rcB" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We uploaded a revised version of the paper.\n1. We modified the representations of the symmetric functions to the simpler representations suggested by Reviewer#2.\n2. We modified some phrasing in the proof of Theorem 1, to make it clearer.\n\nRecently, we came to know this work: https://arxiv.org/abs/1910.06956 an...
[ -1, -1, -1, -1, 3, 3 ]
[ -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_Skeh-xBYDH", "BkewxGfGsH", "H1gUil1TYS", "Bkg_Pp2rcB", "iclr_2020_Skeh-xBYDH", "iclr_2020_Skeh-xBYDH" ]
iclr_2020_ryl3blSFPr
Denoising Improves Latent Space Geometry in Text Autoencoders
Neural language models have recently shown impressive gains in unconditional text generation, but controllable generation and manipulation of text remain challenging. In particular, controlling text via latent space operations in autoencoders has been difficult, in part due to chaotic latent space geometry. We propose to employ adversarial autoencoders together with denoising (referred to as DAAE) to drive the latent space to organize itself. Theoretically, we prove that input sentence perturbations in the denoising approach encourage similar sentences to map to similar latent representations. Empirically, we illustrate the trade-off between text-generation and autoencoder-reconstruction capabilities, and our model significantly improves over other autoencoder variants. Even from completely unsupervised training, DAAE can successfully alter the tense/sentiment of sentences via simple latent vector arithmetic.
reject
This work presents a simple technique for improving the latent space geometry of text autoencoders. The strengths of the paper lie in the simplicity of the method, and results show that the technique improves over the considered baselines. However, some reviewers expressed concerns over the presented theory for why input noise helps, and the responses did not address concerns about whether the theory is useful. The paper would be improved if Section 4 were instead rewritten to focus on providing intuition, either with empirical analysis, results on a toy task, or a clear but high-level discussion of why the method helps. The current theorem statements seem either unnecessary or rely on strong assumptions that don't hold in practice. As a result, Section 4 in its current form does not serve the reader's understanding of why the simple method works. Finally, further improvements to the paper could be made with comparisons to additional baselines from prior work, as suggested by reviewers.
train
[ "H1x_OYsitH", "H1eQz0q3sB", "r1ewQqdsjr", "SkeCx_djsr", "SklWUyn9ir", "rklXkKAFiH", "HygStORKiB", "HylyQjeGsH", "H1lNCGiSor", "Hkg7iGjSiB", "Hkl4scxzsS", "r1gBOZnutS", "BylI49ChFr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper \"Denoising Improves Latent Space Geometry in Text Autoencoders\" tackles the problem of text autoencoding in a space which respects text similarities. It is an interesting problem for which various attempts have been proposed, while still facing difficulties for encoding in smooth spaces. The paper prop...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_ryl3blSFPr", "iclr_2020_ryl3blSFPr", "rklXkKAFiH", "SklWUyn9ir", "H1lNCGiSor", "HygStORKiB", "BylI49ChFr", "Hkl4scxzsS", "Hkg7iGjSiB", "r1gBOZnutS", "H1x_OYsitH", "iclr_2020_ryl3blSFPr", "iclr_2020_ryl3blSFPr" ]
iclr_2020_SJxTZeHFPH
The Intriguing Effects of Focal Loss on the Calibration of Deep Neural Networks
Miscalibration -- a mismatch between a model's confidence and its correctness -- of Deep Neural Networks (DNNs) makes their predictions hard for downstream components to trust. Ideally, we want networks to be accurate, calibrated and confident. Temperature scaling, the most popular calibration approach, will calibrate a DNN without affecting its accuracy, but it will also make its correct predictions under-confident. In this paper, we show that replacing the widely used cross-entropy loss with focal loss allows us to learn models that are already very well calibrated. When combined with temperature scaling, focal loss, whilst preserving accuracy and yielding state-of-the-art calibrated models, also preserves the confidence of the model's correct predictions, which is extremely desirable for downstream tasks. We provide a thorough analysis of the factors causing miscalibration, and use the insights we glean from this to theoretically justify the empirically excellent performance of focal loss. We perform extensive experiments on a variety of computer vision (CIFAR-10/100) and NLP (SST, 20 Newsgroup) datasets, and with a wide variety of different network architectures, and show that our approach achieves state-of-the-art accuracy and calibration in almost all cases.
reject
The paper investigates the effect of focal loss on the calibration of neural nets. On the one hand, the reviewers agree that this paper is well-written and the empirical results are interesting. On the other hand, the reviewers felt that there could be better evaluation of the effect of calibration on downstream tasks, and better justification for the choice of optimal gamma (e.g. on a simpler problem setup). I encourage the authors to revise the draft and resubmit to a different venue.
val
[ "r1gDtRmDFS", "BylPp3Khir", "HJx9FhKnsr", "rylRbCI3sS", "r1gc7GgiiB", "S1ejArljsB", "Bkx_ohZijr", "rkgcyqZooH", "HJgsDVgojr", "ryekOgxijS", "HygboAyjjS", "H1xIA9yosr", "S1exU3JjiS", "H1xq5ikijH", "SJlkRDV6ur", "SkePAgQ9KH" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper studies the effect of the focal loss, proposed by Lin et al. in 2017 on network miscalibration, which appears when the network's confidence in its prediction does not match its correctness. The authors provide a theoretical explanation to the superior results of the focal loss for calibration....
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_SJxTZeHFPH", "rylRbCI3sS", "rylRbCI3sS", "r1gc7GgiiB", "SJlkRDV6ur", "SJlkRDV6ur", "r1gDtRmDFS", "SJlkRDV6ur", "SJlkRDV6ur", "r1gDtRmDFS", "r1gDtRmDFS", "SkePAgQ9KH", "SkePAgQ9KH", "H1xIA9yosr", "iclr_2020_SJxTZeHFPH", "iclr_2020_SJxTZeHFPH" ]
iclr_2020_S1e0ZlHYDB
Progressive Compressed Records: Taking a Byte Out of Deep Learning Data
Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by leveraging progressive compression to split each training example into multiple examples of increasingly higher fidelity, without adding to the total data size. Training examples of similar fidelity are grouped together, which reduces both the system overhead and data bandwidth needed to train a model. We show that models can be trained on aggressively compressed representations of the training data and still retain high accuracy, and that PCRs can enable a 2x speedup on average over baseline formats using JPEG compression. Our results hold across deep learning architectures for a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ.
reject
Main content: Introduces Progressive Compressed Records (PCR), a new storage format for image datasets for machine learning training. Discussion: reviewer 4: interesting application of progressive compression to reduce the disk I/O overhead; the main concern is that the paper could be clearer about the setting. reviewer 5 (not knowledgeable about the area): well-written paper; the concern is that the related work could be better, including the state of the art on the topic. reviewer 2: likes the topic but discusses many areas for improvement (stronger experiments, better metrics reported, etc.); this is probably the most experienced reviewer, marking reject. reviewer 3: the paper is well written; the main issue is that experiments are limited to image classification tasks, and it is not clear how the method works at larger scale. Recommendation: interesting idea, but the experiments could be stronger. I lean to Reject.
train
[ "rygN9N_coB", "HkfjQucsr", "SylxdQO5or", "Bke1gmO5sB", "rkgdwz_qsB", "r1ehxrXZoB", "BkeXwO--sB", "SJluQZ2kor", "rkeMVzZJ5r" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their time and helpful feedback. After carefully considering reviewers’ comments, we made revisions to our paper to address concerns, and have uploaded an updated version of the paper. A summary of major changes follows:\n\n* We have run our experiments using the full 1000 class ImageNet...
[ -1, -1, -1, -1, -1, 3, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, 4, 1, 5, 3 ]
[ "iclr_2020_S1e0ZlHYDB", "BkeXwO--sB", "r1ehxrXZoB", "rkeMVzZJ5r", "SJluQZ2kor", "iclr_2020_S1e0ZlHYDB", "iclr_2020_S1e0ZlHYDB", "iclr_2020_S1e0ZlHYDB", "iclr_2020_S1e0ZlHYDB" ]
iclr_2020_BJxyzxrYPH
Deep geometric matrix completion: Are we doing it right?
We address the problem of reconstructing a matrix from a subset of its entries. Current methods, branded as geometric matrix completion, augment classical rank regularization techniques by incorporating geometric information into the solution. This information is usually provided as graphs encoding relations between rows/columns. In this work we propose a simple spectral approach for solving the matrix completion problem, via the framework of functional maps. We introduce the zoomout loss, a multiresolution spectral geometric loss inspired by recent advances in shape correspondence, whose minimization leads to state-of-the-art results on various recommender systems datasets. Surprisingly, for some datasets we were able to achieve comparable results even without incorporating geometric information. This puts into question both the quality of such information and current methods' ability to use it in a meaningful and efficient way.
reject
This paper proposes a multiresolution spectral geometric loss called the zoomout loss to help with matrix completion, and shows state-of-the-art results on several recommendation benchmarks, although experiments also show that the result improvements are not always dependent upon the geometric loss itself. Reviewers find the idea interesting and the results promising, but also have important concerns about the experiments not establishing how the approach truly works. The authors have clarified their explanations in the revisions and provided requested experiments (e.g., on the importance of the initialization size); however, important reservations regarding why the approach works are still not sufficiently addressed, and resolving them would require more iterations to fulfill the potential of this paper. Therefore, we recommend rejection.
train
[ "HJgg1pKRtB", "S1e0phdhoH", "Byg11OH_sH", "SkgW4BB_oB", "HkeYUUBdjr", "HJxIzuSOjB", "S1gT7LHdoH", "HJlk4jujYB", "SyxRPVxfqB", "HyxruY5QOH", "H1l2FMPGOB", "Hylfk4HzOB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "This paper proposes a new method for geometric matrix completion based on functional maps. The proposed algorithm is a simple shallow and fully linear network. Experimental results demonstrate the effectiveness of the proposed method. \n\nThe proposed method is new and has been shown good empirical results. The pa...
[ 3, -1, -1, -1, -1, -1, -1, 6, 3, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 5, -1, -1, -1 ]
[ "iclr_2020_BJxyzxrYPH", "iclr_2020_BJxyzxrYPH", "HJlk4jujYB", "SyxRPVxfqB", "HJgg1pKRtB", "HJlk4jujYB", "HJgg1pKRtB", "iclr_2020_BJxyzxrYPH", "iclr_2020_BJxyzxrYPH", "H1l2FMPGOB", "Hylfk4HzOB", "iclr_2020_BJxyzxrYPH" ]
iclr_2020_B1x1MerYPB
Putting Machine Translation in Context with the Noisy Channel Model
We show that Bayes' rule provides a compelling mechanism for controlling unconditional document language models, using the long-standing challenge of effectively leveraging document context in machine translation. In our formulation, we estimate the probability of a candidate translation as the product of the unconditional probability of the candidate output document and the ``reverse translation probability'' of translating the candidate output back into the input source language document---the so-called ``noisy channel'' decomposition. A particular advantage of our model is that it requires only parallel sentences to train, rather than parallel documents, which are not always available. Using a new beam search reranking approximation to solve the decoding problem, we find that document language models outperform language models that assume independence between sentences, and that using either a document or sentence language model outperform comparable models that directly estimate the translation probability. We obtain the best-published results on the NIST Chinese--English translation task, a standard task for evaluating document translation. Our model also outperforms the benchmark Transformer model by approximately 2.5 BLEU on the WMT19 Chinese--English translation task.
reject
The authors propose using a noisy channel formulation which allows them to combine a sentence level target-source translation model with a language model trained over target side document-level information. They use reranking of a 50-best list generated by a standard Transformer model for forward translation and show reasonably strong results. The reviewers were concerned about the efficiency of this approach and the limited novelty as compared to the sentence-level noisy channel research Yu et al. 2017. The authors responded in depth, adding results with another baseline which includes backtranslated data. I feel that although this paper is interesting, it is not compelling enough for inclusion in ICLR.
train
[ "rye7MN6qjH", "Skxn36afjr", "BJgcb06GsB", "BklPU1CfjH", "Hyxf1RTfjB", "r1lfMoQ_YH", "Sk97bk0tS", "HJlp8ls19B" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your clarifications.", "Thank you for your review.\n\nRegarding the question about circumventing the data problem with back-translated documents. While this is a good idea, and there is evidence that it can work well (Junczys-Dowmunt, 2019), it is challenging to train such models well, whereas our mod...
[ -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "BJgcb06GsB", "Sk97bk0tS", "HJlp8ls19B", "r1lfMoQ_YH", "Sk97bk0tS", "iclr_2020_B1x1MerYPB", "iclr_2020_B1x1MerYPB", "iclr_2020_B1x1MerYPB" ]
iclr_2020_BJg1fgBYwH
SAFE-DNN: A Deep Neural Network with Spike Assisted Feature Extraction for Noise Robust Inference
We present a Deep Neural Network with Spike Assisted Feature Extraction (SAFE-DNN) to improve robustness of classification under stochastic perturbation of inputs. The proposed network augments a DNN with unsupervised learning of low-level features using spiking neuron network (SNN) with Spike-Time-Dependent-Plasticity (STDP). The complete network learns to ignore local perturbation while performing global feature detection and classification. The experimental results on CIFAR-10 and ImageNet subset demonstrate improved noise robustness for multiple DNN architectures without sacrificing accuracy on clean images.
reject
The paper proposes to improve the noise robustness of the network's learned features by augmenting deep networks with Spike-Time-Dependent-Plasticity (STDP). The new network shows improved noise robustness, with better classification accuracy on CIFAR-10 and an ImageNet subset when the input data are noisy. While this paper is well written, a number of concerns are raised by the reviewers. They include that the proposed method would not be favored from a computer vision perspective, that it is not convincingly shown why spiking nets are more robust to random noise, and that the method fails to address work on adversarial perturbations and adversarial training. Also, Reviewer #2 pointed out the low level of methodological novelty. The authors provided responses to the questions, but these did not change the reviewers' ratings. Given the various concerns raised, the ACs recommend rejection.
train
[ "Byehu3VnsS", "rylXm642oH", "Bkef03NniS", "Hygo5iVniS", "rkgGEGy-5r", "rygtIBsQ5S", "ryew4lks5S", "rkxVKVvAwB", "SklLEylRDr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for your review. \n\n>>Innovation with respect to SNN Literature\n\nSpiking neural network (SNN) is an attracting idea of realizing biologically plausible neural networks and have been widely studied. However, SNN based on pure STDP learning have yet to show comparable performance as DNN, in particular f...
[ -1, -1, -1, -1, 3, 6, 3, -1, -1 ]
[ -1, -1, -1, -1, 3, 5, 1, -1, -1 ]
[ "ryew4lks5S", "rkgGEGy-5r", "rygtIBsQ5S", "iclr_2020_BJg1fgBYwH", "iclr_2020_BJg1fgBYwH", "iclr_2020_BJg1fgBYwH", "iclr_2020_BJg1fgBYwH", "SklLEylRDr", "iclr_2020_BJg1fgBYwH" ]
iclr_2020_BJgxzlSFvr
AN ATTENTION-BASED DEEP NET FOR LEARNING TO RANK
In information retrieval, learning to rank constructs a machine-based ranking model which, given a query, sorts the search results by their degree of relevance or importance to the query. Neural networks have been successfully applied to this problem, and in this paper, we propose an attention-based deep neural network which better incorporates different embeddings of the queries and search results with an attention-based mechanism. This model also applies a decoder mechanism to learn the ranks of the search results in a listwise fashion. The embeddings are trained with convolutional neural networks or the word2vec model. We demonstrate the performance of this model with image retrieval and text querying data sets.
reject
All three reviewers felt the paper should be rejected and no rebuttal was offered. So the paper is rejected.
train
[ "BJxrToa2YB", "rJedKG9y5B", "BygEsdrNqr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose to use attention to combine multiple input representations for both query and search results in the learning to rank task. When these representations are embeddings from differentiable functions, they can be jointly learned with the neural network which predicts rankings. A limit...
[ 1, 1, 1 ]
[ 3, 4, 4 ]
[ "iclr_2020_BJgxzlSFvr", "iclr_2020_BJgxzlSFvr", "iclr_2020_BJgxzlSFvr" ]
iclr_2020_rJxGGlSKwH
Sentence embedding with contrastive multi-views learning
In this work, we propose a self-supervised method to learn sentence representations with an injection of linguistic knowledge. Multiple linguistic frameworks propose diverse sentence structures from which semantic meaning might be expressed out of compositional word operations. We aim to take advantage of this linguistic diversity and learn to represent sentences by contrasting these diverse views. Formally, multiple views of the same sentence are mapped to close representations. In contrast, views from other sentences are mapped further apart. By contrasting different linguistic views, we aim at building embeddings which better capture semantics and which are less sensitive to the sentence's outward form.
reject
This paper proposes a method to learn sentence representations that incorporates linguistic knowledge in the form of dependency trees using contrastive learning. Experiments on SentEval and probing tasks show that the proposed method underperforms baseline methods. All reviewers agree that the results are not strong enough to support the claim of the paper and have some concerns about the scalability of the implementation. They also agree that the writing of the paper can be improved (details included in their reviews below). The authors acknowledged these concerns and mentioned that they will use them to improve the paper for future work, so I recommend rejecting this paper for ICLR.
train
[ "HJlsoiwNtB", "BygW3uM6KH", "BJejdUk3qr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new sentence embedding method. The novelty is to use dependency trees as examples in the self-supervised method based on contrastive learning. The idea to use linguistic knowledge in the design of sentence embeddings is attractive. The sentence representation is computed by a bi-LSTM and depen...
[ 1, 1, 3 ]
[ 4, 5, 4 ]
[ "iclr_2020_rJxGGlSKwH", "iclr_2020_rJxGGlSKwH", "iclr_2020_rJxGGlSKwH" ]
iclr_2020_H1gXzxHKvH
Deep Nonlinear Stochastic Optimal Control for Systems with Multiplicative Uncertainties
We present a deep recurrent neural network architecture to solve a class of stochastic optimal control problems described by fully nonlinear Hamilton Jacobi Bellman partial differential equations. Such PDEs arise when one considers stochastic dynamics characterized by uncertainties that are additive and control multiplicative. Stochastic models with the aforementioned characteristics have been used in computational neuroscience, biology, finance and aerospace systems and provide a more accurate representation of actuation than models with additive uncertainty. Previous literature has established the inadequacy of the linear HJB theory and instead rely on a non-linear Feynman-Kac lemma resulting in a second order forward-backward stochastic differential equations representation. However, the proposed solutions that use this representation suffer from compounding errors and computational complexity leading to lack of scalability. In this paper, we propose a deep learning based algorithm that leverages the second order Forward-Backward SDE representation and LSTM based recurrent neural networks to not only solve such Stochastic Optimal Control problems but also overcome the problems faced by previous approaches and scales well to high dimensional systems. The resulting control algorithm is tested on non-linear systems in robotics and biomechanics to demonstrate feasibility and out-performance against previous methods.
reject
A nice paper, but with quite a few unclarities; in particular, it is unclear whether the paper improves over the state of the art. Scaling is especially an issue here. The readability is also below par; more work could make this an acceptable submission.
train
[ "Skg1ZY6utH", "rJgTgCqhjr", "rygvaE5hsS", "BkxGBZehjS", "rkg6b0CjsB", "ByxO66AiiB", "S1xLIpRjsH", "rJeVbb-bKH", "BJeaEz_ptS" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n######### Rebuttal Response:\nThanks for the clarifications and especially for updating the formatting. The current state does not convince me to rate the paper as weak accept but I increased my rating to weak reject. \n\n\"Pereira et. al. has shown that a recurrent network architecture using LSTM outperforms th...
[ 3, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_H1gXzxHKvH", "rkg6b0CjsB", "BkxGBZehjS", "S1xLIpRjsH", "rJeVbb-bKH", "Skg1ZY6utH", "BJeaEz_ptS", "iclr_2020_H1gXzxHKvH", "iclr_2020_H1gXzxHKvH" ]
iclr_2020_HkxQzlHFPr
Robust Natural Language Representation Learning for Natural Language Inference by Projecting Superficial Words out
In natural language inference, the semantics of some words do not affect the inference. Such information is considered superficial and brings overfitting. How can we represent and discard such superficial information? In this paper, we use first order logic (FOL) – a classic technique from meaning representation language – to explain what information is superficial for a given sentence pair. Such an explanation also suggests two inductive biases according to its properties. We propose a neural network-based approach that utilizes the two inductive biases. We obtain substantial improvements in extensive experiments.
reject
This paper proposes using first order logic to rule out superficial information for improved natural language inference. While the topic is of interest, reviewers find that the paper misses much of the previous literature on semantics, which is highly relevant. I thank the authors for submitting this paper to ICLR. Please take the reviewers' comments, especially the recommended references, into account to improve the paper for a future submission.
train
[ "r1lxoVPRKB", "rke9JLOAYS", "Byx4K0PJqH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper uses first order logic (FOL) to help reduce so-called “superficial” information/semantics that is less relevant to the judgement of natural language inference relations. The submission misses the major literature of and comparison to previous work that uses FOL for natural language inference (aka. RTE),...
[ 1, 3, 1 ]
[ 5, 4, 3 ]
[ "iclr_2020_HkxQzlHFPr", "iclr_2020_HkxQzlHFPr", "iclr_2020_HkxQzlHFPr" ]
iclr_2020_SJxmfgSYDB
Representing Unordered Data Using Multiset Automata and Complex Numbers
Unordered, variable-sized inputs arise in many settings across multiple fields. The ability for set- and multiset- oriented neural networks to handle this type of input has been the focus of much work in recent years. We propose to represent multisets using complex-weighted multiset automata and show how the multiset representations of certain existing neural architectures can be viewed as special cases of ours. Namely, (1) we provide a new theoretical and intuitive justification for the Transformer model's representation of positions using sinusoidal functions, and (2) we extend the DeepSets model to use complex numbers, enabling it to outperform the existing model on an extension of one of their tasks.
reject
Main summary: The paper is about generating feature representations for set elements using weighted multiset automata. Discussion: reviewer 1: the paper is well written, but the experimental results are not convincing. reviewer 2: well written, but the motivation is weak. reviewer 3: well written, but the reviewer has some questions around the motivation for the weighted automata machinery. Recommendation: all the reviewers agree it is well written, but the paper could be stronger in motivation and experiments. I vote Reject.
val
[ "BJepYfMAYB", "rJg1UyAZoS", "SyekMJR-jB", "r1e1JyAWsH", "BJlEUApWoS", "BkxOG9FhKr", "BJlFL4VpFS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes generating feature representations for set elements using weighted multiset automata. Experiments show that this leads to better generalization performance in some tasks.\n\nI am leaning to reject this paper. The proposed algorithm for generating features seems relevant and correct, but there a...
[ 6, -1, -1, -1, -1, 3, 6 ]
[ 1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_SJxmfgSYDB", "BkxOG9FhKr", "BJlFL4VpFS", "BJlFL4VpFS", "BJepYfMAYB", "iclr_2020_SJxmfgSYDB", "iclr_2020_SJxmfgSYDB" ]
iclr_2020_SJxNzgSKvH
Selective sampling for accelerating training of deep neural networks
We present a selective sampling method designed to accelerate the training of deep neural networks. To this end, we introduce a novel measurement, the minimal margin score (MMS), which measures the minimal amount of displacement an input should take until its predicted classification is switched. For multi-class linear classification, the MMS measure is a natural generalization of the margin-based selection criterion, which was thoroughly studied in the binary classification setting. In addition, the MMS measure provides an interesting insight into the progress of the training process and can be useful for designing and monitoring new training regimes. Empirically, we demonstrate a substantial acceleration when training commonly used deep neural network architectures for popular image classification tasks. The efficiency of our method is compared against the standard training procedures, and against commonly used selective sampling alternatives: hard negative mining selection and entropy-based selection. Finally, we demonstrate an additional speedup when we adopt a more aggressive learning-drop regime while using the MMS selective sampling method.
reject
The paper proposes a method to speed up training of deep nets by re-weighting samples based on their distance to the decision boundary. However, the paper seems hastily written and the method is not backed by sufficient experimental evidence.
train
[ "B1lvRyksiB", "H1lCXxqqsB", "rJglmTn5tS", "BJeWmNn3KS", "HkgDJXI-5B" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewer #3:\n\nWe would like to thank you for the feedback and will answer the questions raised:\n\n1) We used CIFAR10 and CIFAR100 to prove our main concept which aims to reduce the number of training steps. We didn't have enough time to include other datasets for the deadline but we plan to add ImageNet to...
[ -1, -1, 1, 1, 3 ]
[ -1, -1, 4, 4, 3 ]
[ "HkgDJXI-5B", "rJglmTn5tS", "iclr_2020_SJxNzgSKvH", "iclr_2020_SJxNzgSKvH", "iclr_2020_SJxNzgSKvH" ]
iclr_2020_SJgSflHKDr
The Frechet Distance of training and test distribution predicts the generalization gap
Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets. However, when training and test distribution differ, this distribution shift can have a significant effect. With a novel perspective on function transfer learning, we are able to lower bound the change of performance when transferring from training to test set with the Wasserstein distance between the embedded training and test set distribution. We find that there is a trade-off affecting performance between how invariant a function is to changes in training and test distribution and how large this shift in distribution is. Empirically across several data domains, we substantiate this viewpoint by showing that test performance correlates strongly with the distance in data distributions between training and test set. Complementary to the popular belief that more data is always better, our results highlight the utility of also choosing a training data distribution that is close to the test data distribution when the learned function is not invariant to such changes.
reject
The authors discuss how to predict generalization gaps. Reviews are mixed, putting the submission in the lower half of this year's submissions. I also would have liked to see a comparison with other divergence metrics, for example, L1, MMD, H-distance, discrepancy distance, and learned representations (e.g., BERT, Laser, etc., for language). Without this, the empirical evaluation of FD is a bit weak. Also, the obvious next step would be trying to minimize FD in the context of domain adaptation, and the question is if this shouldn't already be part of your paper? Suggestions: The Amazon reviews are time-stamped, enabling you to run experiments with drift over time. See [0] for an example. [0] https://www.aclweb.org/anthology/W18-6210/
test
[ "H1euOzraKS", "ryx1DuaTKr", "B1goCQ9J9B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors consider the relation between Frechet distance of training and test distribution and the generalization gap. The authors derive the lower bound for the difference of loss function w.r.t. training and test set by the Wasserstein distance between embedding training and test set distribution. Empirically,...
[ 3, 3, 3 ]
[ 3, 4, 3 ]
[ "iclr_2020_SJgSflHKDr", "iclr_2020_SJgSflHKDr", "iclr_2020_SJgSflHKDr" ]
iclr_2020_S1lHfxBFDH
Gumbel-Matrix Routing for Flexible Multi-task Learning
This paper proposes a novel per-task routing method for multi-task applications. Multi-task neural networks can learn to transfer knowledge across different tasks by using parameter sharing. However, sharing parameters between unrelated tasks can hurt performance. To address this issue, routing networks can be applied to learn to share each group of parameters with a different subset of tasks to better leverage task relatedness. However, this use of routing methods requires addressing the challenge of learning the routing jointly with the parameters of a modular multi-task neural network. We propose Gumbel-Matrix routing, a novel multi-task routing method based on the Gumbel-Softmax that is designed to learn fine-grained parameter sharing. When applied to the Omniglot benchmark, the proposed method improves the state-of-the-art error rate by 17%.
reject
This paper proposes to use the Gumbel softmax to optimize the routing matrix of a routing network for multitask learning. All reviewers have a consensus on rejecting this paper. The paper did not clearly explain how and why the method works, and the experiments are not sufficient.
train
[ "B1ltq287sH", "B1lHxjIXir", "rJlVz_rXor", "SJlvY-bRYS", "Syl8rm7atB", "rker7VmRKr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for valuable comments and suggestions. Our responses to specific points are provided below.\n\n1) Extensiveness of experiments\n\nWhile our method is compared with the SotA only on Omniglot, we also included several other experiments (MNIST, synthetic data), which were aimed at better underst...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 5, 4, 5 ]
[ "Syl8rm7atB", "SJlvY-bRYS", "rker7VmRKr", "iclr_2020_S1lHfxBFDH", "iclr_2020_S1lHfxBFDH", "iclr_2020_S1lHfxBFDH" ]
iclr_2020_S1xHfxHtPr
Online Learned Continual Compression with Stacked Quantization Modules
We introduce and study the problem of Online Continual Compression, where one attempts to learn to compress and store a representative dataset from a non i.i.d data stream, while only observing each sample once. This problem is highly relevant for downstream online continual learning tasks, as well as standard learning methods under resource constrained data collection. We propose a new architecture which stacks Quantization Modules (SQM), consisting of a series of discrete autoencoders, each equipped with their own memory. Every added module is trained to reconstruct the latent space of the previous module using fewer bits, allowing the learned representation to become more compact as training progresses. This modularity has several advantages: 1) moderate compressions are quickly available early in training, which is crucial for remembering the early tasks, 2) as more data needs to be stored, earlier data becomes more compressed, freeing memory, 3) unlike previous methods, our approach does not require pretraining, even on challenging datasets. We show several potential applications of this method. We first replace the episodic memory used in Experience Replay with SQM, leading to significant gains on standard continual learning benchmarks using a fixed memory budget. We then apply our method to compressing larger images like those from Imagenet, and show that it is also effective with other modalities, such as LiDAR data.
reject
The paper proposes a new problem setup, "online continual compression". The proposed idea, a combination of existing techniques, is very simple, though interesting. Parts of the algorithm are not clear, and the hierarchy is not well motivated. Experimental results seem promising but not convincing enough: they cover only a very specific setting, the LiDAR experiment is missing a quantitative evaluation, and different tasks might introduce different difficulties in this online learning setting. The ablation study is well designed but not discussed enough.
train
[ "rclwaBjUar", "Byl22PIRnr", "Hygao_4niB", "S1pLOVhjB", "SkelgPwqsB", "H1xKiHDqiB", "rJlpS_iRYH", "SkgU6JAW5r", "H1g1GUrr9B", "BJlQThKDKH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "I am not familiar with the generative model and continual learning. Thus, I can only give my review based on the authors writing and other reviewers' comments. \n- The paper proposes a new problem setup as \"online continual compression\".\n- The paper gives a combination of many existing techniques to address the...
[ 3, 3, -1, -1, -1, -1, 6, 3, 6, -1 ]
[ 3, 3, -1, -1, -1, -1, 1, 1, 3, -1 ]
[ "iclr_2020_S1xHfxHtPr", "iclr_2020_S1xHfxHtPr", "iclr_2020_S1xHfxHtPr", "H1g1GUrr9B", "rJlpS_iRYH", "SkgU6JAW5r", "iclr_2020_S1xHfxHtPr", "iclr_2020_S1xHfxHtPr", "iclr_2020_S1xHfxHtPr", "iclr_2020_S1xHfxHtPr" ]
iclr_2020_SyeLGlHtPS
Learning vector representation of local content and matrix representation of local motion, with implications for V1
This paper proposes a representational model for image pairs, such as consecutive video frames, that are related by local pixel displacements, in the hope that the model may shed light on motion perception in the primary visual cortex (V1). The model couples the following two components: (1) vector representations of the local contents of images, and (2) matrix representations of the local pixel displacements caused by the relative motions between the agent and the objects in the 3D scene. When the image frame undergoes changes due to local pixel displacements, the vectors are multiplied by the matrices that represent the local displacements. Our experiments show that our model can learn to infer local motions. Moreover, the model can learn Gabor-like filter pairs of quadrature phases.
reject
The paper received mixed reviews. On one hand, there is interesting novelty in relation to biological vision systems. On the other hand, there are some serious experimental issues with the machine learning model. While reviewers initially raised concerns about the motivation of the work, the rebuttal addressed those concerns. However, concerns about experiments remained.
train
[ "ByxsYwZhoH", "H1xdTLbnsB", "r1ep7v-noB", "HJxNLIZ2oH", "S1e5NBZhiB", "HJecGU-oKr", "r1gWl4Ksqr", "HyghDmnsqH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nQ4: “the motivation of the proposed method”, “Or the authors simply take some ideas form V1 model and add a module to “explain” motion? “\n\nA4: One motivation is based on Fourier analysis as mentioned above. Please see our answer to Q2. Another motivation is from previous papers that use matrices to represent c...
[ -1, -1, -1, -1, -1, 3, 1, 6 ]
[ -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "r1ep7v-noB", "HJxNLIZ2oH", "HJecGU-oKr", "r1gWl4Ksqr", "HyghDmnsqH", "iclr_2020_SyeLGlHtPS", "iclr_2020_SyeLGlHtPS", "iclr_2020_SyeLGlHtPS" ]
iclr_2020_BylUMxSFwS
Disentangled Cumulants Help Successor Representations Transfer to New Tasks
Biological intelligence can learn to solve many diverse tasks in a data-efficient manner by re-using basic knowledge and skills from one task to another. Furthermore, many such skills are acquired through what is known as latent learning, where no explicit supervision for skill acquisition is provided. This is in contrast to state-of-the-art reinforcement learning agents, which typically start learning each new task from scratch and struggle with knowledge transfer. In this paper we propose a principled way to learn and recombine a basis set of policies, which comes with certain guarantees on the coverage of the final task space. In particular, we construct a learning pipeline where an agent invests time to learn to perform intrinsically generated, goal-based tasks, and subsequently leverages this experience to quickly achieve a high level of performance on externally specified, often significantly more complex tasks through generalised policy improvement. We demonstrate both theoretically and empirically that such goal-based intrinsic tasks produce more transferable policies when the goals are specified in a space that exhibits a form of disentanglement.
reject
The authors propose a method to first learn policies for intrinsically generated goal-based tasks, and then leverage the learned representations to improve the learning of a new task in a generalized policy iteration framework. The reviewers had significant issues about clarity of writing that were largely addressed in the rebuttal. However, there were also concerns about the magnitude of the contribution (especially whether it added anything significant to the existing literature on GPI, successor features, etc.), and about the simplicity (and small number) of the test domains. These concerns persisted after the rebuttal and discussion. Thus, I recommend rejection at this time.
train
[ "B1xqEuK0Kr", "H1x4DQURYS", "ByenD63fqH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper tackles the challenging problem of transfer learning and few shot learning in RL setting and provides some theoretical guarantees for the downstream task coverage. \n\nThe paper structure can be further improved by adding a background subsection on successor representation (SR) in RL; SR is not a very we...
[ 6, 6, 3 ]
[ 1, 4, 5 ]
[ "iclr_2020_BylUMxSFwS", "iclr_2020_BylUMxSFwS", "iclr_2020_BylUMxSFwS" ]
iclr_2020_HklPzxHFwB
Zero-Shot Policy Transfer with Disentangled Attention
Domain adaptation is an open problem in deep reinforcement learning (RL). Often, agents are asked to perform in environments where data is difficult to obtain. In such settings, agents are trained in similar environments, such as simulators, and are then transferred to the original environment. The gap between visual observations of the source and target environments often causes the agent to fail in the target environment. We present a new RL agent, SADALA (Soft Attention DisentAngled representation Learning Agent). SADALA first learns a compressed state representation. It then jointly learns to ignore distracting features and solve the task presented. SADALA's separation of important and unimportant visual features leads to robust domain transfer. SADALA outperforms both prior disentangled-representation based RL and domain randomization approaches across RL environments (Visual Cartpole and DeepMind Lab).
reject
This paper proposes a new method for zero-shot policy transfer in RL. The authors propose learning the policy over a disentangled representation that is augmented with attention. Hence, the paper is a simple modification of an existing approach (DARLA). The reviewers agreed that the novelty of the proposed approach and the experimental evaluation are limited. For this reason I recommend rejection.
train
[ "HJgQJGREsH", "SyxgXjaVir", "BJxFF8pEjS", "HyxtTsDjKH", "H1gu5-h6Yr", "S1xiYtYZ9S" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review. \n\nI have uploaded a revision which addresses your comments and also respond to them here.\n\nLimited applicability of the proposed methods:\n- I have now made my focus on this setting clear in the introduction. Other work has shown success in the problem of transferring to domains wit...
[ -1, -1, -1, 1, 1, 1 ]
[ -1, -1, -1, 3, 4, 5 ]
[ "HyxtTsDjKH", "H1gu5-h6Yr", "S1xiYtYZ9S", "iclr_2020_HklPzxHFwB", "iclr_2020_HklPzxHFwB", "iclr_2020_HklPzxHFwB" ]
iclr_2020_SklwGlHFvH
Learning Curves for Deep Neural Networks: A field theory perspective
A series of recent works established a rigorous correspondence between very wide deep neural networks (DNNs), trained in a particular manner, and noiseless Bayesian inference with a certain Gaussian Process (GP) known as the Neural Tangent Kernel (NTK). Here we extend a known field-theory formalism for GP inference to get a detailed understanding of learning curves in DNNs trained in the regime of this correspondence (NTK regime). In particular, a renormalization-group approach is used to show that noiseless GP inference using the NTK, which lacks a good analytical handle, can be well approximated by noisy GP inference on a related kernel we call the renormalized NTK. Following this, a perturbation-theory analysis is carried out in one over the dataset size, yielding analytical expressions for the (fixed-teacher/fixed-target) leading and sub-leading asymptotics of the learning curves. At least for uniform datasets, a coherent picture emerges wherein fully-connected DNNs have a strong implicit bias towards functions which are low-order polynomials of the input.
reject
This paper studies deep neural network (DNN) learning curves by leveraging recent connections of (wide) DNNs to kernel methods such as Gaussian processes. The bulk of the arguments contained in this paper are, thus, for the "kernel regime" rather than "the problem of non-linearity in DNNs", as one reviewer puts it. When it comes to scoring, this paper has been controversial. However, a lot of discussion has taken place. On the positive side, it seems that there are a lot of novel perspectives included in this paper. On the other hand, even after the revision, the paper is still very difficult to follow for non-physicists. Overall, it would be beneficial to perform a more careful revision of the paper such that it can be better appreciated by the targeted scientific community.
train
[ "ByxzbYLB5r", "HylTavYKor", "Bkx25PKFsr", "H1lEP8ttiS", "HyeI7HFKiS", "r1eXnS1RKH", "Ske_7mrk5S" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper used the field-theory formalism to derive two approximation formulas to the expected generalization error of kernel methods with n samples. Experiments showed that the sub-leading approximation formula approximates the generalization error well when $n$ is large. \n\n\nThis paper is poorly written. Many...
[ 1, -1, -1, -1, -1, 3, 8 ]
[ 4, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_SklwGlHFvH", "r1eXnS1RKH", "H1lEP8ttiS", "Ske_7mrk5S", "ByxzbYLB5r", "iclr_2020_SklwGlHFvH", "iclr_2020_SklwGlHFvH" ]
iclr_2020_SkgOzlrKvH
The Role of Embedding Complexity in Domain-invariant Representations
Unsupervised domain adaptation aims to generalize the hypothesis trained in a source domain to an unlabeled target domain. One popular approach to this problem is to learn domain-invariant embeddings for both domains. In this work, we study, theoretically and empirically, the effect of the embedding complexity on generalization to the target domain. In particular, this complexity affects an upper bound on the target risk; this is reflected in experiments, too. Next, we specify our theoretical framework to multilayer neural networks. As a result, we develop a strategy that mitigates sensitivity to the embedding complexity, and empirically achieves performance on par with or better than the best layer-dependent complexity tradeoff.
reject
This paper studies the impact of embedding complexity on domain-invariant representations by incorporating embedding complexity into the previous upper bound explicitly. The idea of embedding complexity is interesting, the exploration offers some useful insight, and the paper is well written. However, reviewers and the AC generally agree that the current version can be significantly improved in several ways: - The proposed upper bound has several limitations, such as being looser than existing ones. - The embedding complexity is only addressed implicitly, which shares a similar idea with previous works. - The claim of implicit regularization has not been explored in depth. - The proposed MDM method seems incremental and closely related to the embedding complexity. - There is no analysis of the generalization when estimating this upper bound from finite samples. There are important details requiring further elaboration, so I recommend rejection.
val
[ "B1xoBL8DiH", "rkljWLLPoS", "H1eK_SIPsS", "BJlNYB0RFH", "SkxxRsGe9H", "Byg2pWhqqr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your constructive comments. We would like to address your concerns as follows:\n\n(a) Our bound is tighter in some conditions. As we point out in definition 3, theFG\\DeltaG-divergence is smaller than the FG\\DeltaFG-divergence. Therefore, comparing to (4), if the lambda in (4) and (6) are small eno...
[ -1, -1, -1, 3, 1, 3 ]
[ -1, -1, -1, 4, 5, 4 ]
[ "BJlNYB0RFH", "SkxxRsGe9H", "Byg2pWhqqr", "iclr_2020_SkgOzlrKvH", "iclr_2020_SkgOzlrKvH", "iclr_2020_SkgOzlrKvH" ]
iclr_2020_SyeKGgStDB
Training a Constrained Natural Media Painting Agent using Reinforcement Learning
We present a novel approach to train a natural media painting agent using reinforcement learning. Given a reference image, our formulation is based on stroke-based rendering that imitates human drawing and can be learned from scratch without supervision. Our painting agent computes a sequence of actions that represent the primitive painting strokes. In order to ensure that the generated policy is predictable and controllable, we use a constrained learning method and train the painting agent using the environment model; the agent follows the commands encoded in an observation. We have applied our approach to many benchmarks and our results demonstrate that our constrained agent can handle different painting media and different constraints in the action space to collaborate with humans or other agents.
reject
Paper is withdrawn by authors.
train
[ "Byes_E3hYr", "Hyl4lLkRKr", "HkehXzoP9H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors present the results of training a natural media painting agent using reinforcement learning for different types of strokes. The agent seems to be capable of learning how to pain under different types of constraints and produce visually interesting images.\n\nComments:\n\n- Given that the authors give a...
[ 3, 1, 1 ]
[ 3, 4, 5 ]
[ "iclr_2020_SyeKGgStDB", "iclr_2020_SyeKGgStDB", "iclr_2020_SyeKGgStDB" ]
iclr_2020_B1ltfgSYwS
Few-Shot One-Class Classification via Meta-Learning
Although few-shot learning and one-class classification have been separately well studied, their intersection remains rather unexplored. Our work addresses the few-shot one-class classification problem and presents a meta-learning approach that requires only a few data examples from only one class to adapt to unseen tasks. The proposed method builds upon the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) and explicitly trains for few-shot class-imbalance learning, aiming to learn a model initialization that is particularly suited for learning one-class classification tasks after observing only a few examples of one class. Experimental results on datasets from the image domain and the time-series domain show that our model substantially outperforms the baselines, including MAML, and demonstrate the ability to learn new tasks from only a few majority-class samples. Moreover, we successfully learn anomaly detectors for a real-world application involving sensor readings recorded during industrial manufacturing of workpieces with a CNC milling machine, using only a few examples from the normal class.
reject
The authors address a combination of the few-shot learning and one-class classification problem settings. They use the existing MAML algorithm and build upon it to present a learning algorithm for the problem. As pointed out by the reviewers, the technical contributions of the paper are quite minimal, and after the author response period the reviewers have not changed their minds. However, the authors have significantly changed the paper from its initial submission, and as of now it would need to be reviewed again. I recommend that the authors resubmit their paper to another conference. As of now, I recommend rejection.
train
[ "SyxbAdwjoS", "S1lVyVDosr", "Byg7TQvssS", "BkecBmwjsr", "r1g2xUvooS", "HkgyCrvjsr", "r1lYrNPsoB", "rJevDIwjsH", "SkeA0UwssB", "SJephPPjjH", "SyeAidDssr", "HyelYQDjir", "r1l2zKwjiB", "H1eGv9N6Fr", "r1xMn6u6tB", "S1lWpY-x5H" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your detailed review and for recognizing that the few-shot one-classification is an under-studied problem and that the simplicity of the proposed method is a strength.\n\n\nWe summarize our additional contributions during the rebuttal phase in the following: \n\n\n-Theoretical analysis of why OC-MAM...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "H1eGv9N6Fr", "S1lWpY-x5H", "S1lWpY-x5H", "S1lWpY-x5H", "r1xMn6u6tB", "r1xMn6u6tB", "r1xMn6u6tB", "H1eGv9N6Fr", "H1eGv9N6Fr", "H1eGv9N6Fr", "H1eGv9N6Fr", "S1lWpY-x5H", "iclr_2020_B1ltfgSYwS", "iclr_2020_B1ltfgSYwS", "iclr_2020_B1ltfgSYwS", "iclr_2020_B1ltfgSYwS" ]
iclr_2020_Byg5flHFDr
EvoNet: A Neural Network for Predicting the Evolution of Dynamic Graphs
Neural networks for structured data like graphs have been studied extensively in recent years. To date, the bulk of research activity has focused mainly on static graphs. However, most real-world networks are dynamic since their topology tends to change over time. Predicting the evolution of dynamic graphs is a task of high significance in the area of graph mining. Despite its practical importance, the task has not been explored in depth so far, mainly due to its challenging nature. In this paper, we propose a model that predicts the evolution of dynamic graphs. Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs. Then, we employ a generative model which predicts the topology of the graph at the next time step and constructs a graph instance that corresponds to that topology. We evaluate the proposed model on several artificial datasets following common network evolving dynamics, as well as on real-world datasets. Results demonstrate the effectiveness of the proposed model.
reject
The paper proposes a combination of graph neural networks and a graph generation model (GraphRNN) to model the evolution of dynamic graphs, predicting the topology of the next graph given a sequence of graphs. The problem to be addressed seems interesting, but lacks strong motivation; therefore, it would be better if some important applications could be specified. The proposed approach lacks novelty. It would be better to point out why the specific combination of two existing models is the most appropriate approach to address the task. The experiments are not fully convincing. Bigger and more comprehensive datasets (with the right motivating applications) should be used to test the effectiveness of the proposed model. In short, the current version fails to raise excitement for the reasons above. A major revision addressing these issues could lead to a strong publication in the future.
train
[ "Bylnso_aKH", "rJl6gxxRYB", "HylezQvAKB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a system for predicting evolution of graphs. It makes use of three different known components - (a) Graph Neural Networks (GNN); (b) Recurrent Neural Networks (RNN); (c) Graph Generator. A significant portion of the paper is spent in explaining these known concepts. The contribution of the pape...
[ 3, 3, 1 ]
[ 3, 4, 5 ]
[ "iclr_2020_Byg5flHFDr", "iclr_2020_Byg5flHFDr", "iclr_2020_Byg5flHFDr" ]
iclr_2020_rJl5MeHKvB
Learning Through Limited Self-Supervision: Improving Time-Series Classification Without Additional Data via Auxiliary Tasks
Self-supervision, in which a target task is improved without external supervision, has primarily been explored in settings that assume the availability of additional data. However, in many cases, particularly in healthcare, one may not have access to additional data (labeled or otherwise). In such settings, we hypothesize that self-supervision based solely on the structure of the data at-hand can help. We explore a novel self-supervision framework for time-series data, in which multiple auxiliary tasks (e.g., forecasting) are included to improve overall performance on a sequence-level target task without additional training data. We call this approach limited self-supervision, as we limit ourselves to only the data at-hand. We demonstrate the utility of limited self-supervision on three sequence-level classification tasks, two pertaining to real clinical data and one using synthetic data. Within this framework, we introduce novel forms of self-supervision and demonstrate their utility in improving performance on the target task. Our results indicate that limited self-supervision leads to a consistent improvement over a supervised baseline, across a range of domains. In particular, for the task of identifying atrial fibrillation from small amounts of electrocardiogram data, we observe a nearly 13% improvement in the area under the receiver operating characteristics curve (AUC-ROC) relative to the baseline (AUC-ROC=0.55 vs. AUC-ROC=0.62). Limited self-supervision applied to sequential data can aid in learning intermediate representations, making it particularly applicable in settings where data collection is difficult.
reject
The paper addresses an important problem of self-supervised learning in the context of time-series classification. However, all reviewers raised major concerns regarding the novelty of the approach and the quality of the empirical evaluation, including insufficient comparison with the state-of-the-art and reproducibility issues. The reviewers agree that the paper, in its current state, does not pass the ICLR acceptance threshold, and encourage the authors to improve the paper based on the provided suggestions.
train
[ "B1xd7CqhoH", "BJlLNT5hjr", "BJehm2qhiB", "SJe7stcnjr", "H1llIYPjFB", "BklmVCchYr", "BklOzy2atr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your thorough review.\n\nIn response to your concerns:\n\n- Novelty of the proposed method compared with [1]:\nLimited self-supervision uses a multitask framework that, critically, requires no external labels to improve accuracy on a single task. Most applications of multitask learning require additi...
[ -1, -1, -1, -1, 1, 3, 1 ]
[ -1, -1, -1, -1, 5, 4, 3 ]
[ "H1llIYPjFB", "BklmVCchYr", "BklOzy2atr", "iclr_2020_rJl5MeHKvB", "iclr_2020_rJl5MeHKvB", "iclr_2020_rJl5MeHKvB", "iclr_2020_rJl5MeHKvB" ]