paper_id: stringlengths (19 to 21)
paper_title: stringlengths (8 to 170)
paper_abstract: stringlengths (8 to 5.01k)
paper_acceptance: stringclasses (18 values)
meta_review: stringlengths (29 to 10k)
label: stringclasses (3 values)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
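The column summary above can be checked mechanically. Below is a minimal, hypothetical Python sketch that validates one record against this schema; the field names come from the summary, while the index-alignment check on the six `review_*` lists is an assumption based on how the records below are laid out, not documented behavior.

```python
# Hypothetical validator for one record of this dataset.
# Field names are taken from the column summary; everything else is an assumption.

SCHEMA = {
    "paper_id": str,
    "paper_title": str,
    "paper_abstract": str,
    "paper_acceptance": str,
    "meta_review": str,
    "label": str,
    "review_ids": list,
    "review_writers": list,
    "review_contents": list,
    "review_ratings": list,
    "review_confidences": list,
    "review_reply_tos": list,
}

# Assumption: these six lists are parallel (index i describes the same comment).
PARALLEL_FIELDS = [
    "review_ids", "review_writers", "review_contents",
    "review_ratings", "review_confidences", "review_reply_tos",
]

def validate(record: dict) -> None:
    for field, typ in SCHEMA.items():
        assert isinstance(record.get(field), typ), f"bad or missing field: {field}"
    lengths = {len(record[f]) for f in PARALLEL_FIELDS}
    assert len(lengths) == 1, f"misaligned review lists: {lengths}"
```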
iclr_2021_xFYXLlpIyPQ
Guarantees for Tuning the Step Size using a Learning-to-Learn Approach
Learning-to-learn---using optimization algorithms to learn a new optimizer---has successfully trained efficient optimizers in practice. This approach relies on meta-gradient descent on a meta-objective based on the trajectory that the optimizer generates. However, there were few theoretical guarantees on how to avoid m...
withdrawn-rejected-submissions
The paper studies the problem of learning the step size of gradient descent for quadratic loss. Interesting theoretical results are presented, which formally support the empirically observed problems of exploding/vanishing gradients, as well as another result showing that if meta-learning is done based on the validatio...
train
[ "unTEKEbIX3C", "B4R0G-HAGR5", "FN1YSxyNcWq", "Sv_aO7nJE4", "4UNOEbatBWF", "cyIqI9IMTq", "P93LCjaF30", "nVEwoj7HeVx", "YqUiMEr5eCI" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your review and suggestions! \n\n“Proofs are not well spelled out within the text”\nDue to the space limit, the proof is left in the appendix. We will add a more detailed proof sketch in the main paper. \n\n“Theorem 4 is restricted to the quadratic inner-loop setting”\nNote that there was no previous th...
[ -1, -1, -1, -1, -1, 8, 4, 4, 4 ]
[ -1, -1, -1, -1, -1, 3, 3, 5, 4 ]
[ "YqUiMEr5eCI", "nVEwoj7HeVx", "P93LCjaF30", "cyIqI9IMTq", "Sv_aO7nJE4", "iclr_2021_xFYXLlpIyPQ", "iclr_2021_xFYXLlpIyPQ", "iclr_2021_xFYXLlpIyPQ", "iclr_2021_xFYXLlpIyPQ" ]
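Note the -1 sentinels in the rating and confidence lists of the record above. A small sketch of that convention, using the first record's metadata: in these records, a real rating appears only on top-level comments (reply_to equal to the paper_id) written by an official reviewer, while -1 marks "no rating". That invariant is an observation from the data shown here, not a documented part of the schema.

```python
# Parallel review metadata from the first record (contents omitted for brevity).
paper_id = "iclr_2021_xFYXLlpIyPQ"
writers = ["author"] * 4 + ["official_reviewer"] * 5
ratings = [-1, -1, -1, -1, -1, 8, 4, 4, 4]
reply_tos = [
    "YqUiMEr5eCI", "nVEwoj7HeVx", "P93LCjaF30", "cyIqI9IMTq", "Sv_aO7nJE4",
    paper_id, paper_id, paper_id, paper_id,
]

# Observed invariant: a non-sentinel rating implies a top-level comment
# posted by an official reviewer; -1 means the comment carries no score.
for writer, rating, reply_to in zip(writers, ratings, reply_tos):
    if rating != -1:
        assert writer == "official_reviewer"
        assert reply_to == paper_id
```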
iclr_2021_ww-7bdU6GA9
Near-Optimal Linear Regression under Distribution Shift
Transfer learning is an essential technique when sufficient data comes from the source domain, while no or scarce labeled data is from the target domain. We develop estimators that achieve minimax linear risk for linear regression problems under the distribution shift. Our algorithms cover different kinds of settings w...
withdrawn-rejected-submissions
This paper derives estimators and minimax guarantees for regression under additive Gaussian noise and distributional shift. Despite some merits raised, limitations such as too strong assumptions (knowing the entire marginal distributions, a sample complexity bound that could become meaningless if what is assumed known itse...
train
[ "ZJ_1Qi4LbUb", "xawwNaYVrWS", "gokS7YgtSxf", "fhSyoVGm7D", "4nCiRi1NDqr", "BNjgfo1FD8", "tp2MinlgJuu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "## Summary\nThe authors propose a near minimax optimal linear estimator under distribution shift. They have estimators for when there is a covariate shift (i.e. the underlying data distribution) or model shift (i.e. the distribution of the label given the features of the data).\n\n## Strengths\n* The authors provi...
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 3, 2, 4, -1, -1, -1, -1 ]
[ "iclr_2021_ww-7bdU6GA9", "iclr_2021_ww-7bdU6GA9", "iclr_2021_ww-7bdU6GA9", "iclr_2021_ww-7bdU6GA9", "xawwNaYVrWS", "ZJ_1Qi4LbUb", "gokS7YgtSxf" ]
iclr_2021_oVz-YWdiMjt
Single Layers of Attention Suffice to Predict Protein Contacts
The established approach to unsupervised protein contact prediction estimates coevolving positions using undirected graphical models. This approach trains a Potts model on a Multiple Sequence Alignment, then predicts that the edges with highest weight correspond to contacts in the 3D structure. On the other hand, incre...
withdrawn-rejected-submissions
The paper shows a connection between the Potts model and Transformers and uses this connection to propose a factored attention energy for use in an MRF. Results are shown using this energy based on factored attention. Also, pretrained BERT models are used to predict contact maps as a comparison. The reviewers found the pape...
train
[ "uBqGHjM1FQb", "AowgLbK53ne", "bJ5cx8ucEKl", "ZynkDfMTa3D", "4HNZ8zBrWOK", "BTIwWCHQhm", "BrabluwryDn", "TOpyRElKJ5O", "uy5Q9CRRBRy", "GSbuA-yEYY", "A7p-BK8Hhd", "35c7F7Aganm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This manuscript describes a connection between Potts models and attention as implemented in modern transformers. The authors then present an attention model in which positional encodings are defined as one-hot vectors indicating fixed positions in the multiple sequence alignment and train single layer attention mo...
[ 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_oVz-YWdiMjt", "iclr_2021_oVz-YWdiMjt", "ZynkDfMTa3D", "AowgLbK53ne", "BTIwWCHQhm", "A7p-BK8Hhd", "35c7F7Aganm", "uy5Q9CRRBRy", "uBqGHjM1FQb", "iclr_2021_oVz-YWdiMjt", "iclr_2021_oVz-YWdiMjt", "iclr_2021_oVz-YWdiMjt" ]
iclr_2021_NTP9OdaT6nm
Formal Language Constrained Markov Decision Processes
In order to satisfy safety conditions, an agent may be constrained from acting freely. A safe controller can be designed a priori if an environment is well understood, but not when learning is employed. In particular, reinforcement learned (RL) controllers require exploration, which can be hazardous in safety critical ...
withdrawn-rejected-submissions
The paper proposes formulating safety constraints as formal language constraints, as a step toward bridging the gap between ML and software engineering, and enabling safe exploration in RL. The authors responded and improved the paper significantly during the rebuttal period. Despite that, the reviewers raise the quest...
train
[ "LacrlxL88Hi", "DUXFRioCrG4", "8omNL6e1Jhf", "G7IP8Caz1VU", "vWGvXmhOsg2", "NVgWm9tEsT4", "vkCR2s1-aF-", "Oxw_UHm5sC0", "bfd9djH3Q9j", "NbEcKkiMBYF", "uyY0giE9VDT", "avSku0KpAM", "oSiG5pIO594" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "#### Summary\nThe paper proposes a constrained reinforcement learning (RL) formulation relying on constraints written in a formal language. The proposed formulation is based on constrained Markov decision processes where the constraint is represented as a deterministic finite automaton that rejects any trajectory ...
[ 5, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_NTP9OdaT6nm", "iclr_2021_NTP9OdaT6nm", "iclr_2021_NTP9OdaT6nm", "NbEcKkiMBYF", "Oxw_UHm5sC0", "iclr_2021_NTP9OdaT6nm", "DUXFRioCrG4", "LacrlxL88Hi", "oSiG5pIO594", "uyY0giE9VDT", "avSku0KpAM", "8omNL6e1Jhf", "iclr_2021_NTP9OdaT6nm" ]
iclr_2021_ohdw3t-8VCY
CTRLsum: Towards Generic Controllable Text Summarization
Current summarization systems yield generic summaries that are disconnected from users' preferences and expectations. To address this limitation, we present CTRLsum, a novel framework for controllable summarization. Our approach enables users to control multiple aspects of generated summaries by interacting with the su...
withdrawn-rejected-submissions

The paper attempts controllable summarization in two dimensions: length and content. The authors try to achieve this through a training-data generation approach, where they provide a standard BART model with additional keywords (extracted using a BERT model) in training. The paper's main motivation on controllable summ...
train
[ "5Mc64UkeQFv", "seX3Hn_4roa", "ltIhQFgxl1g", "U9aIBgniCpY", "UcgXeRXA8HK", "zQuAplqoaA2", "jUQy4cvD6hI", "dvio7K6Lxxn", "ouf74IHvwCZ", "Qo366HRkgq_", "gg7-miWPeLQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Paper Summary:\n* This paper proposes a framework for controllable summarization, CTRLsum. It is different from standard summarization models that CTRLsum uses a set of keywords extracted from the source text automatically or descriptive prompts to control the summary. Experiments with three domains of summariza...
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_ohdw3t-8VCY", "iclr_2021_ohdw3t-8VCY", "seX3Hn_4roa", "zQuAplqoaA2", "U9aIBgniCpY", "gg7-miWPeLQ", "Qo366HRkgq_", "iclr_2021_ohdw3t-8VCY", "5Mc64UkeQFv", "iclr_2021_ohdw3t-8VCY", "iclr_2021_ohdw3t-8VCY" ]
iclr_2021_WweBNiwWkZh
Skinning a Parameterization of Three-Dimensional Space for Neural Network Cloth
We present a novel learning framework for cloth deformation by embedding virtual cloth into a tetrahedral mesh that parametrizes the volumetric region of air surrounding the underlying body. In order to maintain this volumetric parameterization during character animation, the tetrahedral mesh is constrained to follow t...
withdrawn-rejected-submissions
Three of the four reviewers recommend rejection; one additional reviewer considers the paper to be marginally above threshold for acceptance but is very uncertain and this is taken into account. The AC is in consensus with the first three reviewers that this paper is not ready yet for publication. There is concern ...
train
[ "K30YcCcE7tO", "61TrjBFANIS", "ALGJfBkA-iQ", "25EfcODf-Bj", "ryumqDu15d4", "NXaOPSDP6Rb", "ODntZWxKWb", "UbcyurpcAY5", "h5VYRb0zc_0", "Ubkqy_k8oe", "Xr69viBcMfO" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I'm not very familiar with 3d cloth animation, so may not provide an entirely adequate evaluation. However it looks to me that this paper is more suited to the graphics conference like SIGGRAPH.\n\nThe paper present a new method for cloth deformation based on tetrahedral KDSM mesh, to make the deformation more nat...
[ 6, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ 1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_WweBNiwWkZh", "ALGJfBkA-iQ", "NXaOPSDP6Rb", "h5VYRb0zc_0", "Ubkqy_k8oe", "K30YcCcE7tO", "Xr69viBcMfO", "iclr_2021_WweBNiwWkZh", "iclr_2021_WweBNiwWkZh", "iclr_2021_WweBNiwWkZh", "iclr_2021_WweBNiwWkZh" ]
iclr_2021_8CCwiOHx_17
Adversarial Environment Generation for Learning to Navigate the Web
Learning to autonomously navigate the web is a difficult sequential decision making task. The state and action spaces are large and combinatorial in nature, and successful navigation may require traversing several partially-observed pages. One of the bottlenecks of training web navigation agents is providing a learnabl...
withdrawn-rejected-submissions
This paper considers the problem of agents learning to autonomously navigate the web, specifically by focusing on filling out forms. The focus is on using adversarial environment generation to form a curriculum of training tasks. Thank you for the revisions to the manuscript, which have particularly improved readabilit...
train
[ "i2ncbNEUlCo", "0EqiRWcAeMk", "WjhShqyFXBj", "Jq4-ZNhzXdi", "MRvT2PiK5y", "oak5kxVB6FA", "wzz8-AIR3MM", "OOk7hDt2pDy", "SoEawEi2joh", "JsVoR1DlVCq" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**\"The paper can benefit from a more general formulation of the task it solves....\"**\n\nThank you for the suggestion. We agree that the core ideas of proposing an adversarial environment decoder that creates compositional tasks made with primitives, and training it on the objective that ties the adversary rewar...
[ -1, -1, -1, -1, -1, -1, 7, 4, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 1 ]
[ "wzz8-AIR3MM", "SoEawEi2joh", "SoEawEi2joh", "OOk7hDt2pDy", "OOk7hDt2pDy", "JsVoR1DlVCq", "iclr_2021_8CCwiOHx_17", "iclr_2021_8CCwiOHx_17", "iclr_2021_8CCwiOHx_17", "iclr_2021_8CCwiOHx_17" ]
iclr_2021_9t0CV2iD5gE
Robust Learning Rate Selection for Stochastic Optimization via Splitting Diagnostic
This paper proposes SplitSGD, a new dynamic learning rate schedule for stochastic optimization. This method decreases the learning rate for better adaptation to the local geometry of the objective function whenever a stationary phase is detected, that is, the iterates are likely to bounce at around a vicinity of a loca...
withdrawn-rejected-submissions
This paper proposes to automatically determine when the SGD step-size should be decreased, by running two "threads" of SGD for a bunch of iterations, divide those into windows, and then look at the average inner-product of the gradients in the two threads in each window. If the inner-product tends to be high, that indi...
train
[ "OWzRIJgI1K7", "glOA8vIlDxV", "oDZynWd4l1d", "-Xj0Kb-oI2E", "xw66uvXLnOx", "_47AarLi4M2", "i5hQM9-Lv6", "CCHr9BBwDVc", "zbr3riF783t", "lLFCYBmjqcj", "w5coYaCSRAH" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers and Area Chair, we submitted a revised version of our paper, which has been improved according to the many good suggestions that we have received. We have done our best to accommodate as many of the reviewers' comments as possible. In particular:\n- We have improved the explanation of theorem 3.1 b...
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "iclr_2021_9t0CV2iD5gE", "-Xj0Kb-oI2E", "CCHr9BBwDVc", "oDZynWd4l1d", "zbr3riF783t", "w5coYaCSRAH", "lLFCYBmjqcj", "iclr_2021_9t0CV2iD5gE", "iclr_2021_9t0CV2iD5gE", "iclr_2021_9t0CV2iD5gE", "iclr_2021_9t0CV2iD5gE" ]
iclr_2021_m2ZxDprKYlO
Meta-Learning with Implicit Processes
This paper presents a novel implicit process-based meta-learning (IPML) algorithm that, in contrast to existing works, explicitly represents each task as a continuous latent vector and models its probabilistic belief within the highly expressive IP framework. Unfortunately, meta-training in IPML is computationally chal...
withdrawn-rejected-submissions
This paper sits at the borderline: the reviewers agree it is a well-written and interesting paper, but have concerns about efficiency as well as a comparison with the neural process (the authors did include a revision with this comparison, though the numbers they report are worse than in the original neural processes p...
train
[ "0Ls7Rwt4LR", "rIVfK21Tupq", "J6GZ5AyTRdh", "slW-8C_haIa", "cq9gdX2FSne", "soePrsd2he" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank you for providing valuable suggestions and feedback, which we have seriously considered when revising our paper. We would like to address your comments and questions below:\n\nRegarding IPML sometimes not outperforming prototypical net (PN) in Tables 2 and 3, we have provided a reason in the second paragr...
[ -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "slW-8C_haIa", "cq9gdX2FSne", "soePrsd2he", "iclr_2021_m2ZxDprKYlO", "iclr_2021_m2ZxDprKYlO", "iclr_2021_m2ZxDprKYlO" ]
iclr_2021_doeyA2PBjdy
An empirical study of a pruning mechanism
Many methods aim to prune neural networks to the maximum extent. However, there are few studies that investigate the pruning mechanism. In this work, we empirically investigate a standard framework for network pruning: pretraining a large network and then pruning and retraining it. The framework has been commonly used bas...
withdrawn-rejected-submissions
Pruning is an important problem in practice. The angle of this study is also interesting. The key concept proposed by this submission is called the "utility imbalance" of the weights. There are many concerns raised by the reviewers. Let us summarize some of them here: (1) hard to follow even for the domain experts; (2...
train
[ "FDVmqRmfEZE", "nwL0qw6KYNZ", "FMtByR5XIKV", "21c10zLX0bW", "qjvB0XPigk1", "wWkO7EfSO-q", "-EH7vpBJxZW", "Bgypd_S8Lg", "R4DZpKGT4ml", "IhHoT9AZjQF", "PDmhyQfHb79" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes to answer the question why \"a network with the same number of weights as that of the pruned network cannot achieve similar performance when trained from scratch\". Then it proposes an hypothesis that the small model \"does not utilize all of its weights either\". To prove this hypothesis, it go...
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ 4, 2, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_doeyA2PBjdy", "iclr_2021_doeyA2PBjdy", "21c10zLX0bW", "-EH7vpBJxZW", "FDVmqRmfEZE", "IhHoT9AZjQF", "PDmhyQfHb79", "PDmhyQfHb79", "nwL0qw6KYNZ", "iclr_2021_doeyA2PBjdy", "iclr_2021_doeyA2PBjdy" ]
iclr_2021_-u4j4dHeWQi
Explore with Dynamic Map: Graph Structured Reinforcement Learning
In reinforcement learning, a map with states and transitions built based on historical trajectories is often helpful in exploration and exploitation. Even so, learning and planning on such a map within a sparse environment remains a challenge. As a step towards this goal, we propose Graph Structured Reinforcement Learn...
withdrawn-rejected-submissions
This work presents an algorithm - graph-structured reinforcement learning (GSRL)- to address the problem of exploration in sparse reward settings. The core elements of this work are 1) to build a state-transition graph from experienced trajectories in the replay buffer; 2) learn an attention module that chooses a goal ...
train
[ "hALfsOI26Yf", "kMcsbR-gZin", "kUjRgNJbmD7", "H8bM9Mbfd7L", "a8nw8zm2LB6", "l7KBG4rbEUl", "00Oiu-4N_Qx", "DmqWrbT4qC", "r8PrBs0qS9j", "ZRstfoDF8BQ", "eP4ecXXLZVs" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n**Summary**\nThis paper proposes graph-structured reinforcement learning (GSRL), which consists of two key components: (1) goal generation, to choose what goals a goal-conditioned agent should follow during an exploration episode, and (2) value estimation, to prioritize experience from highly related trajectorie...
[ 6, -1, -1, -1, -1, -1, -1, -1, 4, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2021_-u4j4dHeWQi", "H8bM9Mbfd7L", "iclr_2021_-u4j4dHeWQi", "DmqWrbT4qC", "r8PrBs0qS9j", "ZRstfoDF8BQ", "eP4ecXXLZVs", "hALfsOI26Yf", "iclr_2021_-u4j4dHeWQi", "iclr_2021_-u4j4dHeWQi", "iclr_2021_-u4j4dHeWQi" ]
iclr_2021_PvVbsAmxdlZ
Causal Inference Q-Network: Toward Resilient Reinforcement Learning
Deep reinforcement learning (DRL) has demonstrated impressive performance in various gaming simulators and real-world applications. In practice, however, a DRL agent may receive faulty observation by abrupt interferences such as black-out, frozen-screen, and adversarial perturbation. How to design a resilient DRL algor...
withdrawn-rejected-submissions
This paper presents a deep reinforcement learning method that aims at ensuring resilience to observational interference. During training, labels that indicate the presence or absence of interference are available to the algorithm. The training objective is augmented to learn the prediction of interference that is used at te...
train
[ "T5Fv2YXrq0P", "DYMOtcuxkK", "eyLgxTthSH", "6UJtn-kpn7R", "rbllYObyog", "7yZFJp9HmuZ", "e_INLgGEjf", "oXl_yJTKqa", "FwtoERNSRhU", "-FPYFt4xqtm", "ARt_zij2ihu", "4w9yCCyZ9Gc", "zSPyYQWBZTm", "nYyx5_VqLeq", "Bqy8CCP8qY", "XrkBV-e__O", "B7lqOZsXKdr", "7XT48LBEokX", "qT8kzxTEc2", "...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "##########################################################################\n\nSummary:\n\nThe paper presents a framework for deep reinforcement learning that is motivated by causal inference and with the central objective of being resilient to observational interferences. The key idea is to use interference labels...
[ 4, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7 ]
[ 3, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_PvVbsAmxdlZ", "7yZFJp9HmuZ", "6UJtn-kpn7R", "rbllYObyog", "ARt_zij2ihu", "iclr_2021_PvVbsAmxdlZ", "oXl_yJTKqa", "-FPYFt4xqtm", "iclr_2021_PvVbsAmxdlZ", "DYMOtcuxkK", "4w9yCCyZ9Gc", "nYyx5_VqLeq", "nYyx5_VqLeq", "qT8kzxTEc2", "T5Fv2YXrq0P", "Rx_p9MiDGdL", "7XT48LBEokX", "...
iclr_2021_Y0MgRifqikY
Visual Explanation using Attention Mechanism in Actor-Critic-based Deep Reinforcement Learning
Deep reinforcement learning (DRL) has great potential for acquiring the optimal action in complex environments such as games and robot control. However, it is difficult to analyze the decision-making of the agent, i.e., the reasons it selects the action acquired by learning. In this work, we propose Mask-Attention A3C ...
withdrawn-rejected-submissions
The paper proposes a method to generate attention masks to interpret the performance of RL agents. Results are presented on a few ATARI games. Reviewers unanimously vote for rejecting the paper. R1, R3 give a score of 5, whereas R4, R5 give a score of 4. Their concerns are best explained in their own words: R1 says,...
train
[ "6JGB9iyBuIx", "yH6O-Mrr6Z2", "vGDydNCIVkm", "YM9PEyoiUCA", "ABYVWteADF5", "PMsz3QcRJT", "jlJX7ZkhPv", "bqU3eKQO1t", "vkxrafABFPW", "k5gHgydGlj6", "QU3WngbGyv", "rJ2ZKwt86HQ" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- Summary\n - This paper proposes an interpretable RL agent architecture that uses attention masks to produce visual explanations of the action selected by the policy and output of the value function \n - The authors demonstrate their method on 3 Atari games and use A3C as the training algorithm\n- Strengths...
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_Y0MgRifqikY", "vGDydNCIVkm", "YM9PEyoiUCA", "PMsz3QcRJT", "rJ2ZKwt86HQ", "jlJX7ZkhPv", "6JGB9iyBuIx", "QU3WngbGyv", "k5gHgydGlj6", "iclr_2021_Y0MgRifqikY", "iclr_2021_Y0MgRifqikY", "iclr_2021_Y0MgRifqikY" ]
iclr_2021_fESskTMMSv
Practical Marginalized Importance Sampling with the Successor Representation
Marginalized importance sampling (MIS), which measures the density ratio between the state-action occupancy of a target policy and that of a sampling distribution, is a promising approach for off-policy evaluation. However, current state-of-the-art MIS methods rely on complex optimization tricks and succeed mostly on s...
withdrawn-rejected-submissions
The paper is about an approach that combines successor representation with marginalized importance sampling. Although the reviewers acknowledge that the paper has some merits (interesting idea, good discussion, extensive experimental analysis) and the authors' responses have solved most of the reviewers' issues, the pa...
train
[ "kp8u_ouNeL2", "77vO1ILdDV", "ICAD7TFu9AW", "yqrUChuZNW8", "NPtDY4xjc4b", "shZkilh_bUv", "A2JEEn_mewT", "fh3f7AqvP2G", "ThifDeB15oz", "ccW3OKyMSg8", "5JTYYimhTY8", "MZ9hqqSEAgi", "igZRZZ2uKzx" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "I thank the authors for the detailed feedback. I appreciate the adjustments to the experimental evaluation part. Thus, I am raising my score to 6.", "***Summary***\nThe paper proposes an approach to employ successor representation combined with marginalized importance sampling. The basic idea exploited in the pa...
[ -1, 6, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, 5 ]
[ -1, 3, 5, -1, -1, -1, -1, -1, -1, 4, -1, -1, 4 ]
[ "shZkilh_bUv", "iclr_2021_fESskTMMSv", "iclr_2021_fESskTMMSv", "A2JEEn_mewT", "ICAD7TFu9AW", "77vO1ILdDV", "iclr_2021_fESskTMMSv", "ThifDeB15oz", "5JTYYimhTY8", "iclr_2021_fESskTMMSv", "ccW3OKyMSg8", "igZRZZ2uKzx", "iclr_2021_fESskTMMSv" ]
iclr_2021_4ADnf1HqIw
Recovering Geometric Information with Learned Texture Perturbations
Regularization is used to avoid overfitting when training a neural network; unfortunately, this reduces the attainable level of detail hindering the ability to capture high-frequency information present in the training data. Even though various approaches may be used to re-introduce high-frequency detail, it typically ...
withdrawn-rejected-submissions
The 4 reviewers all had a consistent view of this paper: concern that the scope of the work was overstated (paper claims, without evidence, to apply in more generality than the 1 example scenario shown); concern about the difficulty of implementing this approach (1 TSNN required for each rendered viewpoint); and lack ...
train
[ "_-ap46_rIR4", "rh5VQohVAa3", "W50qwfMMeJD", "xNFqIYZPIQ7", "AThsvqqAgEe", "u-XEJ1wsCcM", "4vClXTcljUR", "b5RJzG5gFGY", "k2OQZz6En6A" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes an approach to encoding high frequency wrinkles into lower frequency texture coordinates, dubbed texture sliding. \n\nAt its core, is the idea of perturbing a predicted texture so that the rendered cloth mesh appears to more closely match the ground truth from a camera view point. \nThis is cont...
[ 5, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ 4, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "iclr_2021_4ADnf1HqIw", "4vClXTcljUR", "_-ap46_rIR4", "b5RJzG5gFGY", "k2OQZz6En6A", "iclr_2021_4ADnf1HqIw", "iclr_2021_4ADnf1HqIw", "iclr_2021_4ADnf1HqIw", "iclr_2021_4ADnf1HqIw" ]
iclr_2021_6KZ_kUVCfTa
Non-Markovian Predictive Coding For Planning In Latent Space
High-dimensional observations are a major challenge in the application of model-based reinforcement learning (MBRL) to real-world environments. In order to handle high-dimensional sensory inputs, existing MBRL approaches use representation learning to map high-dimensional observations into a lower-dimensional latent sp...
withdrawn-rejected-submissions
This paper presents Non-Markovian Predictive Coding (NPMC), a method for learning state representations in visual RL domains that can be used for planning. This work builds on recent work on PC3 (Shu et al. 2020) and PlaNet (Hafner et al. 2020). Concretely, NPMC replaces the image reconstruction objective in PlaNet wit...
train
[ "XxE5d9Isotw", "PCmD6cMTTXp", "C7VoDvO8XkC", "1FVtUS2_P2K", "nM3ha0VBYJM", "-C37TgP32T", "mbRuwDdyuIy", "moxKhtMSzwi", "O1TLwQGf7e3", "UQQMjJ-_f10", "8VDlUUKK8gn", "RCXPwwy5aMt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Problem Setup: The paper proposes a mutual information objective to learn a latent representation which\ncan be used for planning. The paper note that most of the existing model based RL methods learn a \nmodel of the world via reconstruction objective, which requires to predict each and every detail of the visual...
[ 5, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_6KZ_kUVCfTa", "iclr_2021_6KZ_kUVCfTa", "iclr_2021_6KZ_kUVCfTa", "iclr_2021_6KZ_kUVCfTa", "C7VoDvO8XkC", "XxE5d9Isotw", "PCmD6cMTTXp", "nM3ha0VBYJM", "mbRuwDdyuIy", "8VDlUUKK8gn", "RCXPwwy5aMt", "iclr_2021_6KZ_kUVCfTa" ]
iclr_2021_uFk038O5wZ
Improving Abstractive Dialogue Summarization with Conversational Structure and Factual Knowledge
Recently, people have been paying more attention to the abstractive dialogue summarization task. Compared with news text, the information flows of the dialogue exchange between at least two interlocutors, which leads to the necessity of capturing long-distance cross-sentence relations. In addition, the generated summar...
withdrawn-rejected-submissions
The authors address the important task of improving dialogue summarization using conversation structure and factual knowledge. Pros: 1) Clearly written and well motivated (as acknowledged by all reviewers) 2) Technically sound (the proposed architecture is clearly in line with the problem that the authors are trying to...
test
[ "AT7jJ1BlQzN", "qnWue5IpXqr", "sv4mnGZKmAk", "7g6fCoMZqdC", "Gy2i8qUDLEi", "6xifSH-mkeB", "n6KdF4NgFR_", "2ay9m7PByns" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper proposes a knowledge graph enhanced network to improve abstractive dialog summarization with graphs constructed from the dialog structure and factual knowledge. The dialog graph is composed of utterances as nodes and 3 heuristic types of edges (such as utterances of the same speaker, adjacent u...
[ 5, 6, -1, -1, -1, -1, 6, 6 ]
[ 4, 4, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_uFk038O5wZ", "iclr_2021_uFk038O5wZ", "2ay9m7PByns", "n6KdF4NgFR_", "AT7jJ1BlQzN", "qnWue5IpXqr", "iclr_2021_uFk038O5wZ", "iclr_2021_uFk038O5wZ" ]
iclr_2021_wXoHN-Zoel
There is no trade-off: enforcing fairness can improve accuracy
One of the main barriers to the broader adoption of algorithmic fairness in machine learning is the trade-off between fairness and performance of ML models: many practitioners are unwilling to sacrifice the performance of their ML model for fairness. In this paper, we show that this trade-off may not be necessary. If t...
withdrawn-rejected-submissions
The problem as formalized in this paper is essentially a domain adaptation problem. There is a training distribution P and a test distribution P*. The learner gets training data generated by P and aims to minimize the loss of its hypothesis w.r.t. P*. How is it related to fairness? The authors add the assumption "we as...
train
[ "7EeGR0uHQcJ", "pBKLk15aX-6", "aXgMU5DzlEQ", "rsdEABa814d", "tQbVJjHj9Ui", "aJfYcWTP0mA", "lOkqwViGAQY", "s84WMiNHgIG" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "---------------------After reading the author response and the feedback from the others ------------------------\n\nThank the authors for their response. \n\nI still think the assumptions made in the paper are strong. For example, the assumption in equation 2.3 and 3.3 is hardly true. Even in the revised example...
[ 6, -1, -1, -1, -1, -1, 4, 6 ]
[ 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_wXoHN-Zoel", "lOkqwViGAQY", "s84WMiNHgIG", "tQbVJjHj9Ui", "7EeGR0uHQcJ", "iclr_2021_wXoHN-Zoel", "iclr_2021_wXoHN-Zoel", "iclr_2021_wXoHN-Zoel" ]
iclr_2021_rLj5jTcCUpp
Distribution Embedding Network for Meta-Learning with Variable-Length Input
We propose Distribution Embedding Network (DEN) for meta-learning, which is designed for applications where both the distribution and the number of features could vary across tasks. DEN first transforms features using a learned piecewise linear function, then learns an embedding of the underlying data distribution afte...
withdrawn-rejected-submissions
This paper addresses a meta-learning method which works for cases where both the distribution and the number of features may vary across tasks. The method is referred to as 'distribution embedding network (DEN)', which consists of three building blocks. While the method seems to be interesting and contains some new ideas...
train
[ "Ia0RvmZX41Q", "qJ6nidpWqX2", "sj3pP7UdjB3", "nRd5bR79WSD", "cwOAXB2Tc2" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for their insightful and concrete comments. We address their main comments below.\n\nR1. How is the PLF trained on meta-test tasks?\n\nWe directly train the PLF on the support set without splitting it into training and validation sets. Note that we did not tune hyperparameters in this step s...
[ -1, 5, 4, 4, 4 ]
[ -1, 2, 3, 4, 3 ]
[ "iclr_2021_rLj5jTcCUpp", "iclr_2021_rLj5jTcCUpp", "iclr_2021_rLj5jTcCUpp", "iclr_2021_rLj5jTcCUpp", "iclr_2021_rLj5jTcCUpp" ]
iclr_2021_c1xAGI3nYST
NCP-VAE: Variational Autoencoders with Noise Contrastive Priors
Variational autoencoders (VAEs) are one of the powerful likelihood-based generative models with applications in various domains. However, they struggle to generate high-quality images, especially when samples are obtained from the prior without any tempering. One explanation for VAEs’ poor generative quality is the pri...
withdrawn-rejected-submissions
This paper is rejected. All of the reviewers found the empirical results strong. However, R3 and R4 pointed out concerns with the positioning of the work relative to prior work and that their approach is conceptually similar to previous work. The authors have tried to address these concerns in their rebuttal. Both rev...
train
[ "YqFM6Qn5Ok", "9c6qSywGeH6", "XwhaqeYiPAx", "8Ze94YlmWY3", "kl8TKq2YLTH", "gc-LEjk-z2r", "BtzBqxsp7Kh", "yJOk_JQDhgV", "nch4z9MDVv", "kMJbeHWJTcQ", "UjTceeDgMek", "fpd0NP9H-JB", "nSb5kZRJQ2", "lw-sHMeMBU", "jSQ_GJeMTur" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe authors highlight an important problem in VAE - the prior-hole problem - which is that the approximate posterior and the simple gaussian prior do not match in spite of the KL term in the ELBO which makes sampling an issue - leading to the prior putting probability mass on latents that are not decod...
[ 6, 5, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_c1xAGI3nYST", "iclr_2021_c1xAGI3nYST", "iclr_2021_c1xAGI3nYST", "UjTceeDgMek", "iclr_2021_c1xAGI3nYST", "9c6qSywGeH6", "lw-sHMeMBU", "XwhaqeYiPAx", "YqFM6Qn5Ok", "jSQ_GJeMTur", "fpd0NP9H-JB", "nSb5kZRJQ2", "XwhaqeYiPAx", "iclr_2021_c1xAGI3nYST", "iclr_2021_c1xAGI3nYST" ]
iclr_2021_4CqesJ7GO7Q
Intriguing class-wise properties of adversarial training
Adversarial training is one of the most effective approaches to improve model robustness against adversarial examples. However, previous works mainly focus on the overall robustness of the model, and the in-depth analysis on the role of each class involved in adversarial training is still missing. In this paper, we pro...
withdrawn-rejected-submissions
Analyzing class-wise adversarial vulnerability of models is an interesting direction to pursue. However, the authors should consult the references pointed out in the reviews to put their contributions in the right perspective. Overall, the lines of inquiry explored in this paper are of interest but, as some of the revi...
train
[ "almCEh0y9E", "4mR5jaKW5gp", "7wnqQNOSogA", "r3jp9q1yc5F", "jdJIkU6So1f", "wUs7yi9rHEA", "y4ROmre9zXB", "YfPm2oYv2kO", "yn_W1R4rox9", "sMHWEm3Tuwq" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper empirically studies the class-wise properties of classification models produced by existing adversarial training methods on benchmark image datasets. In particular, it demonstrates the following observations: 1) robustness varies for different seed-target class pairs; 2) for certain class, retraining th...
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, 5, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_4CqesJ7GO7Q", "iclr_2021_4CqesJ7GO7Q", "iclr_2021_4CqesJ7GO7Q", "7wnqQNOSogA", "almCEh0y9E", "iclr_2021_4CqesJ7GO7Q", "YfPm2oYv2kO", "4mR5jaKW5gp", "sMHWEm3Tuwq", "iclr_2021_4CqesJ7GO7Q" ]
iclr_2021_4rsTcjH7co
Autoencoder Image Interpolation by Shaping the Latent Space
One of the fascinating properties of deep learning is the ability of the network to reveal the underlying factors characterizing elements in datasets of different types. Autoencoders represent an effective approach for computing these factors. Autoencoders have been studied in the context of enabling interpolation betw...
withdrawn-rejected-submissions
The authors propose a technique called Autoencoder Adversarial Interpolation (AEAI). The key idea is to train autoencoder architectures that explicitly "shapes" trajectories in the encoder (latent) space to correspond to smooth geodesics between data points. This is achieved by a combination of several loss terms that ...
train
[ "QZbLhNAmTX", "mQvGWbZMOQU", "J1QVqTh9twY", "puOvCRZ5Gem", "hEegDGhO7n", "z7kVCzLvLHy", "1jxX_eW5T4v", "3zM_LBI_aLp", "i_SPGZ1kER5", "eE4IIJeaYCa" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "## Summary\nThe paper presents a method of regularising the latent space of an Autoencoder in a way that pressures the data manifold to be convex. This allows interpolation within the latent space which does not leave the data manifold and results in realistic reconstructions as one moves from one point to another...
[ 7, 6, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_4rsTcjH7co", "iclr_2021_4rsTcjH7co", "eE4IIJeaYCa", "i_SPGZ1kER5", "QZbLhNAmTX", "1jxX_eW5T4v", "mQvGWbZMOQU", "iclr_2021_4rsTcjH7co", "iclr_2021_4rsTcjH7co", "iclr_2021_4rsTcjH7co" ]
iclr_2021_0Su7gvitc1H
ARMCMC: Online Model Parameters full probability Estimation in Bayesian Paradigm
Although the Bayesian paradigm provides a rigorous framework to estimate the full probability distribution over unknown parameters, its online implementation can be challenging due to heavy computational costs. This paper proposes Adaptive Recursive Markov Chain Monte Carlo (ARMCMC) which estimates full probability d...
withdrawn-rejected-submissions
Pros: - The authors propose a novel method to perform MCMC in the condition where there is a distribution over models describing the data, rather than just a distribution over the parameters of a single consistent model (ie, in the theme of reversible jump MCMC). Sampling from the posterior of mixtures of parametric mo...
train
[ "XmyPi4rrOe", "wAoBkndOIfI", "YGLBodRx0NS", "eEI0j2CF3qu", "bsedsh35gir", "jQrRDdrAO6c" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "**The changes of the review after rebuttal are indicated in bold text.**\n\n### Summary of the contribution\nThis article proposes an algorithm called ARMCMC (Adaptive Recursive Markov Chain Monte Carlo\" in the context of contact dynamic, for example when detecting the presence/absence of contact over time betwe...
[ 6, -1, -1, -1, 5, 7 ]
[ 4, -1, -1, -1, 4, 4 ]
[ "iclr_2021_0Su7gvitc1H", "XmyPi4rrOe", "bsedsh35gir", "jQrRDdrAO6c", "iclr_2021_0Su7gvitc1H", "iclr_2021_0Su7gvitc1H" ]
iclr_2021_mPmCP2CXc7p
Dynamic Feature Selection for Efficient and Interpretable Human Activity Recognition
In many machine learning tasks, input features with varying degrees of predictive capability are usually acquired at some cost. For example, in human activity recognition (HAR) and mobile health (mHealth) applications, monitoring performance should be achieved with a low cost to gather different sensory features, as ma...
withdrawn-rejected-submissions
The authors propose a methodology for dynamic feature selection. They use differentiable gates with an RNN architecture to select different subsets of features at each time point, thus resulting in dynamic selection. The reviewers agree that the idea is interesting and the method could be useful and I share their opini...
train
[ "JB558U0yuIo", "_ih8FZgwtX0", "Sqk8BHX8QRC", "-B96ryrnoCT", "ayaFJyVKKFQ", "Tt_lnX1n0s", "sWn6eSaVmhC", "hQzFUXMiCs5", "9JSBD42c20Y", "pWmxCwPcsRK", "lidPvR6nuMN", "Z0SPy3Cbv90", "Jrf1dulzrls", "caIqTnXIjD", "CIGP18JiNz", "aEyOoDp3IsO" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for spending significant time on reading our submission and responses multiple times. We also apologize if we have offended the reviewer somehow either in our original submission or our first response. We are glad to come to a mutual understanding.\n\nWe would, however, now draw the reviewer’...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 5 ]
[ "_ih8FZgwtX0", "ayaFJyVKKFQ", "-B96ryrnoCT", "hQzFUXMiCs5", "sWn6eSaVmhC", "aEyOoDp3IsO", "pWmxCwPcsRK", "lidPvR6nuMN", "pWmxCwPcsRK", "caIqTnXIjD", "Jrf1dulzrls", "CIGP18JiNz", "iclr_2021_mPmCP2CXc7p", "iclr_2021_mPmCP2CXc7p", "iclr_2021_mPmCP2CXc7p", "iclr_2021_mPmCP2CXc7p" ]
iclr_2021_nhIsVl2UoMt
Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Stochastic Processes
We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in stochastic processes using lower dimensional projections. Our model combines the techniques in information geometry to model higher-order interactions on a statistical manif...
withdrawn-rejected-submissions
This paper proposes a method for modeling higher-order interactions in Poisson processes. Unfortunately, the reviewers do not feel that the paper, in its current state, meets the bar for ICLR. In particular, reviewers found the descriptions unclear and the justifications lacking. While the responses did aid the reviewe...
test
[ "o3BsNZZ568E", "Pf7UPs20dw", "BaAHjG41Pfw", "EsaiNlfalo5", "oZiJ-0IqYHi", "v8pbpRzocBO", "bGN3uwrEQP" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "## Summary\n\nThis work claims to address the problem of learning the structure of interactions between multiple point processes. This can, e.g., be used to predict co-occurrences between different types of events. The authors develop a method that (a) is theoretically principled and (b) scales to large datasets.\...
[ 4, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_nhIsVl2UoMt", "oZiJ-0IqYHi", "bGN3uwrEQP", "v8pbpRzocBO", "o3BsNZZ568E", "iclr_2021_nhIsVl2UoMt", "iclr_2021_nhIsVl2UoMt" ]
iclr_2021_luGQiBeRMxd
CorrAttack: Black-box Adversarial Attack with Structured Search
We present a new method for score-based adversarial attack, where the attacker queries the loss-oracle of the target model. Our method employs a parameterized search space with a structure that captures the relationship of the gradient of the loss function. We show that searching over the structured space can be ...
withdrawn-rejected-submissions
The paper presents a new Bayesian optimization method based on the Gaussian process bandits framework for black-box adversarial attacks. The method achieves good performance in the experiments, which was appreciated by all the reviewers. At the same time, the presentation of the method is quite confusing, which curren...
train
[ "rte8C2Nc1qa", "MZduXl4sgZm", "XU362yyEDuP", "oA3Wrefukyy", "fXnutM1gwl", "QkPMVV-TQrH", "ikOma9kjFI-", "vIBzSgcOEyH", "qPkGNaRHWmW", "wAF4ZAJJzWh" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the detailed answer.\n\n1. **Square Attack and SignHunter**: the comparison to Square Attack and SignHunter is quite convincing in the sense that at least on some models the proposed method outperforms the existing methods. I agree that a specific initialization is important for boosting the query effic...
[ -1, 6, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, 5, 2, 3 ]
[ "QkPMVV-TQrH", "iclr_2021_luGQiBeRMxd", "oA3Wrefukyy", "vIBzSgcOEyH", "qPkGNaRHWmW", "MZduXl4sgZm", "wAF4ZAJJzWh", "iclr_2021_luGQiBeRMxd", "iclr_2021_luGQiBeRMxd", "iclr_2021_luGQiBeRMxd" ]
iclr_2021_V1N4GEWki_E
Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
Sparse Neural Networks (NNs) can match the generalization of dense NNs using a fraction of the compute/storage for inference, and also have the potential to enable efficient training. However, naively training unstructured sparse NNs from random initialization results in significantly worse generalization, with the not...
withdrawn-rejected-submissions
The paper shows empirically that training unstructured sparse networks from random initialization performs poorly as sparse NNs have poor gradient flow at initialization. Besides, the authors argue that sparse NNs have poor gradient flow during training. They show that DST based methods achieving the best generalizatio...
train
[ "Iynm0T7BFVZ", "opSEqwF5WmS", "BXZszVI_NZf", "8vVVZ9Qy7hU", "7rts-uwGrJI", "D0nK4rlNncf", "9hcLhoM4wtd", "Z2w02DVcVqz", "RxXIw-01GSZ", "ehP_ZwJZuMU", "-pZuChbkVqt", "mqlycatelq5" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Overview:\n\nSummary:\nThis paper tries to answer the following two questions: i) why training unstructured sparse networks from random initiation perform poorly? 2) what makes LTs and DST the exception? The authors show the following findings:\n1. Sparse NNs have poor gradient flow at initialization. They show th...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_V1N4GEWki_E", "Z2w02DVcVqz", "Iynm0T7BFVZ", "Iynm0T7BFVZ", "ehP_ZwJZuMU", "ehP_ZwJZuMU", "-pZuChbkVqt", "iclr_2021_V1N4GEWki_E", "mqlycatelq5", "iclr_2021_V1N4GEWki_E", "iclr_2021_V1N4GEWki_E", "iclr_2021_V1N4GEWki_E" ]
iclr_2021_WJfIKDt8d2f
Driving through the Lens: Improving Generalization of Learning-based Steering using Simulated Adversarial Examples
To ensure the wide adoption and safety of autonomous driving, the vehicles need to be able to drive under various lighting, weather, and visibility conditions in different environments. These external and environmental factors, along with internal factors associated with sensors, can pose significant challenges to perc...
withdrawn-rejected-submissions
This meta-review is written after considering the reviews, the authors’ responses, the discussion, and the paper itself. The paper has 2 main contributions: 1) analysis of the sensitivity of a deep network predicting steering angle from images w.r.t. different synthetic image perturbations, 2) A training method, based...
train
[ "JhSQa-BbsYA", "GAKr4_Ra1sl", "zu5L-dSYWPd", "hKeiK4dbKT0", "CJjuuyEYDZf", "W0YIGCrbdGh", "TTAJX-rWew2", "dVYi58eWDjv", "llwR16E8xvB", "a4ZzyH_zzpx", "8_uw_IbdzS3", "wzutYsGrlPV", "jZ09ycNxGjp" ]
[ "public", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper makes use of Sully Chen's Driving dataset https://github.com/sullychen/driving-datasets , which is a dataset of continuous driving image sequences recorded from a car dashcam with labeled steering angles. Images depict the street, looking out from the car onto the road.\n\nThe authors apply perturbation...
[ -1, 4, 6, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ -1, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_WJfIKDt8d2f", "iclr_2021_WJfIKDt8d2f", "iclr_2021_WJfIKDt8d2f", "CJjuuyEYDZf", "W0YIGCrbdGh", "dVYi58eWDjv", "GAKr4_Ra1sl", "zu5L-dSYWPd", "jZ09ycNxGjp", "wzutYsGrlPV", "iclr_2021_WJfIKDt8d2f", "iclr_2021_WJfIKDt8d2f", "iclr_2021_WJfIKDt8d2f" ]
iclr_2021_4artD3N3xB0
Bayesian Learning to Optimize: Quantifying the Optimizer Uncertainty
Optimizing an objective function with uncertainty awareness is well-known to improve the accuracy and confidence of optimization solutions. Meanwhile, another relevant but very different question remains yet open: how to model and quantify the uncertainty of an optimization algorithm itself? To close such a gap, the pr...
withdrawn-rejected-submissions
The reviewers felt that the idea of learning a posterior distribution on optimization algorithms is very novel. However, the negative flip side of this novelty was that it was not clear how the prior and likelihood were defined so that Bayes rule could be approximated. The three reviewers appeared to find the paper som...
train
[ "xD7mayFdceR", "qKTUEZNQAA", "w3ljtQ4q7CW", "3TvCooULuOT", "jwsimfhYPar", "Z9ODXJUON3d", "n3JwBQWHZWO" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\nThis paper proposes to account for an additional source of uncertainty during inference, i.e., the uncertainty introduced by the optimization algorithm. The paper uses a neural network to parameterize the optimization algorithm, and proposes intuitive and heuristic techniques for inferring its posterior ...
[ 4, 5, -1, -1, -1, -1, 6 ]
[ 4, 2, -1, -1, -1, -1, 3 ]
[ "iclr_2021_4artD3N3xB0", "iclr_2021_4artD3N3xB0", "qKTUEZNQAA", "xD7mayFdceR", "Z9ODXJUON3d", "n3JwBQWHZWO", "iclr_2021_4artD3N3xB0" ]
iclr_2021_6gZJ6f6pU6h
Multi-EPL: Accurate Multi-source Domain Adaptation
Given multiple source datasets with labels, how can we train a target model with no labeled data? Multi-source domain adaptation (MSDA) aims to train a model using multiple source datasets different from a target dataset in the absence of target data labels. MSDA is a crucial problem applicable to many practical cases ...
withdrawn-rejected-submissions
In this paper, the authors proposed a solution to the problem of multi-source domain adaptation. All the reviewers have two concerns: 1) the technical contribution/novelty is limited, and 2) the experimental results are not convincing. Therefore, this paper does not meet the standard of being published in ICLR. The aut...
train
[ "-v-k12R4v0", "Qxc6YB9xX0r", "av5S_VCHqrS", "dZajlqppd0n", "v5Eu9hZjEAe", "4JQ_-3WEpzq", "dahCP3_Z75s", "zDtV_sSUpq1" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose a MULTI-EPL for multi-source domain adaptation. The key idea includes two folds: (1) to align label-wise moment, and (2) to ensemble multiple feature extractor. Experimental studies on 3 datasets are done to verify the proposed MUTL-EPL.\n\nOverall, the paper is well-written. The...
[ 4, 5, -1, -1, -1, -1, 4, 4 ]
[ 5, 3, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_6gZJ6f6pU6h", "iclr_2021_6gZJ6f6pU6h", "-v-k12R4v0", "dahCP3_Z75s", "zDtV_sSUpq1", "Qxc6YB9xX0r", "iclr_2021_6gZJ6f6pU6h", "iclr_2021_6gZJ6f6pU6h" ]
iclr_2021_KTS3QeWxRQq
Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding
Variational autoencoder (VAE) estimates the posterior parameters (mean and variance) of latent variables corresponding to each input data. While it is used for many tasks, the transparency of the model is still an underlying issue. This paper provides a quantitative understanding of VAE property by interpreting VAE as...
withdrawn-rejected-submissions
This submission analyses the VAE objective from the perspective of non-linearly scaled isometric embeddings, with the aim of improving our information-theoretic understanding of the variational objective. Reviewers are in consensus that this submission in its current form is very difficult to read, even after revisio...
train
[ "l7bPgpb_95t", "NRYiF9N7JT", "Q1c9CRCifpc", "IvyNLnb_nvO", "SYA1J-aIoue", "pVRL5WyWBnD", "a7eaDeFZa7R", "YRf2Q_BrG-T", "kWhh1Rjs7XJ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: The paper proposes a methodology to understand quantitively the VAE model, based on the Rate-distortion theory.\n\nComments: I think that the writing of the paper is rather confusing. Unfortunately, I am not able to judge the proposed idea (which might be interesting) and provide a reasonable review, beca...
[ 4, 4, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, 4, -1, -1, -1, -1, -1, 2, 1 ]
[ "iclr_2021_KTS3QeWxRQq", "iclr_2021_KTS3QeWxRQq", "kWhh1Rjs7XJ", "iclr_2021_KTS3QeWxRQq", "YRf2Q_BrG-T", "NRYiF9N7JT", "l7bPgpb_95t", "iclr_2021_KTS3QeWxRQq", "iclr_2021_KTS3QeWxRQq" ]
iclr_2021_xJFxgRLx79J
Learning Two-Time-Scale Representations For Large Scale Recommendations
We propose a surprisingly simple but effective two-time-scale (2TS) model for learning user representations for recommendation. In our approach, we will partition users into two sets, active users with many observed interactions and inactive or new users with few observed interactions, and we will use two RNNs to model...
withdrawn-rejected-submissions
This paper received divergent scores (one strong negative and three positives). The positive reviews praise the clear intuition/motivation and strong empirical performance, while the negative review considers the proposed approach ad-hoc with limited novelty. I read the paper myself and found myself leaning more toward...
val
[ "utaqoS0dfE0", "qTX7y5DIlPM", "nu_ZELyVv9", "FruqOzVGMrQ", "XJ6qFDP9cfc", "lgJR8S2Hpg7", "vXK_WgiSQZD", "isg7EtuCQkX", "NxRsp3QLzo2" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The main idea in this work is to separately model users with a lot of activity in the systems and users with little activity and data. \nTo this end, two RNN's are trained on the items that the users have interacted with one for active users and one for less active users. User and item embeddings are generated wit...
[ 6, 3, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_xJFxgRLx79J", "iclr_2021_xJFxgRLx79J", "iclr_2021_xJFxgRLx79J", "qTX7y5DIlPM", "utaqoS0dfE0", "isg7EtuCQkX", "NxRsp3QLzo2", "iclr_2021_xJFxgRLx79J", "iclr_2021_xJFxgRLx79J" ]
iclr_2021_JzG0n48hRf
Uncertainty for deep image classifiers on out of distribution data.
In addition to achieving high accuracy, in many applications, it is important to estimate the probability that a model prediction is correct. Predictive uncertainty is particularly important on out of distribution (OOD) data where accuracy degrades. However, models are typically overconfident, and model calibration ...
withdrawn-rejected-submissions
This paper presents a method to improve the calibration of neural networks on out-of-distribution (OOD) data. The authors show that their method can be applied post-hoc to existing methods and that it improves calibration under distribution shift using the benchmark in Ovadia et al. 2019. However, reviewers felt that...
val
[ "wQOb6VeBda", "fkn1w5z6TnF", "7eC_4j7tYVE", "P_phbTML3Bk", "tFVyHMc0Lt", "VAtQRROc7Sr", "y-1q0sduTp", "_2aO9lgIgv4", "QastWpMlJ7P", "1AQnDQ8H7pF", "Kppxu78l1q" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the problem of providing calibrated predictions for out-of-distribution data. They propose algorithms for both calibrating predictions given a single image from the unknown distribution as well as given multiple images from the unknown distribution. They propose an algorithm that estimates which...
[ 5, 6, -1, 6, -1, -1, -1, -1, -1, -1, 4 ]
[ 4, 3, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_JzG0n48hRf", "iclr_2021_JzG0n48hRf", "iclr_2021_JzG0n48hRf", "iclr_2021_JzG0n48hRf", "iclr_2021_JzG0n48hRf", "P_phbTML3Bk", "P_phbTML3Bk", "Kppxu78l1q", "wQOb6VeBda", "fkn1w5z6TnF", "iclr_2021_JzG0n48hRf" ]
iclr_2021_nzLFm097HI
How to Design Sample and Computationally Efficient VQA Models
In multi-modal reasoning tasks, such as visual question answering (VQA), there have been many modeling and training paradigms tested. Previous models propose different methods for the vision and language tasks, but which ones perform the best while being sample and computationally efficient? Based on our experiments, w...
withdrawn-rejected-submissions
The reviewers are in agreement that this paper could benefit from further improvement. There are several areas: novelty of the proposed approach and evaluation on real-world datasets (beyond just CLEVR).
train
[ "wbhmMu7HwXc", "pHz4UC6taRu", "UvNec4wcLhd", "o8U-HC9ximY", "UVysiqRqc2j", "xNhlcqCGdNw", "mW0MLgq153h", "tPVKmIR-xV-" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "**PAPER SUMMARY**\n\nThe paper presents a method for visual question answering (VQA) that makes use of a differentiable program executor that softly approximates the execution of neural modules that read and write to a stack. Experiments on CLEVR demonstrate that the model can achieve high performance using small ...
[ 5, -1, -1, -1, -1, -1, 3, 5 ]
[ 5, -1, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2021_nzLFm097HI", "iclr_2021_nzLFm097HI", "wbhmMu7HwXc", "tPVKmIR-xV-", "mW0MLgq153h", "iclr_2021_nzLFm097HI", "iclr_2021_nzLFm097HI", "iclr_2021_nzLFm097HI" ]
iclr_2021_xH251EA80go
A Simple Sparse Denoising Layer for Robust Deep Learning
Deep models have achieved great success in many applications. However, vanilla deep models are not well-designed against the input perturbation. In this work, we take an initial step to designing a simple robust layer as a lightweight plug-in for vanilla deep models. To achieve this goal, we first propose a fast spa...
withdrawn-rejected-submissions
The paper motivates the need for robustness, citing a paper on adversarial attacks. The type of perturbations are quite different (and of greater concern) than those originally included in the work, namely additive Gaussian or Laplace noise. This was raised by reviewers 1, 2 and 3. The authors provided a detailed rebu...
train
[ "srnKoMtplhI", "7hN_idIz_zP", "pg8H72Q_x6", "Lq6_CZzyVQ", "magBziRxp0c", "JEM-GFcqx1i", "wXbF4iYZflu", "EZ5m7jLOmb8", "BmWCj-5owu6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper is generally well presented. However, a main issue is that the optimization algorithms for the l0-norm regularized problems (Section 3.1.2 and Section 3.2) are not correctly presented. Specifically, in the algorithm development to solve the \"Fix $\\boldsymbol{R}$, optimize $\\boldsymbol{Y}$\" subproblem...
[ 5, 4, 5, -1, -1, -1, -1, -1, 5 ]
[ 3, 4, 3, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_xH251EA80go", "iclr_2021_xH251EA80go", "iclr_2021_xH251EA80go", "srnKoMtplhI", "pg8H72Q_x6", "BmWCj-5owu6", "7hN_idIz_zP", "iclr_2021_xH251EA80go", "iclr_2021_xH251EA80go" ]
iclr_2021_jwgZh4Y4U7
Temporal and Object Quantification Nets
We aim to learn generalizable representations for complex activities by quantifying over both entities and time, as in “the kicker is behind all the other players,” or “the player controls the ball until it moves toward the goal.” Such a structural inductive bias of object relations, object quantification, and tempora...
withdrawn-rejected-submissions
This paper presents work on temporal logic representations in neural networks. The paper builds on work on Neural Logic Machines (Dong et al.), adding temporal quantification. The main positives to the method are this contribution of the temporal reasoning layers (e.g. iii in Fig. 2). This layer provides an interest...
train
[ "r92j6btvNAN", "KR0bsQ3MTGo", "CKyddpJbld", "YZ47MIfT5TM", "kAt0dyliuIA", "rD9VEdZ88TF", "AjrAR3tHI0J", "LVaJ2Ryf6CH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Strengths:\n- The authors address a well formulated and an important problem for lots of practical scenarios.\n- The paper is well written.\n- The experiments demonstrate that the proposed method outperforms several baselines.\n\nWeaknesses:\n- I personally found the proposed approach to be very cumbersome and com...
[ 3, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_jwgZh4Y4U7", "iclr_2021_jwgZh4Y4U7", "r92j6btvNAN", "AjrAR3tHI0J", "LVaJ2Ryf6CH", "iclr_2021_jwgZh4Y4U7", "iclr_2021_jwgZh4Y4U7", "iclr_2021_jwgZh4Y4U7" ]
iclr_2021_zbEupOtJFF
On interaction between augmentations and corruptions in natural corruption robustness
Invariance to a broad array of image corruptions, such as warping, noise, or color shifts, is an important aspect of building robust models in computer vision. Recently, several new data augmentations have been proposed that significantly improve performance on ImageNet-C, a benchmark of such corruptions. However, ther...
withdrawn-rejected-submissions
The paper investigates the relationship between data augmentations used during training and their effect on the accuracy when evaluated on unseen corruptions at test-time. The paper proposes a metric called minimal sample distance (MSD) to measure the similarity between augmentations during training time and corruption...
train
[ "2OMZK3oJOhh", "GECfZ-aPmhQ", "qIGKxk2Q14W", "u926VQgaoci", "GXn9C7eLS5S", "HgSJMx36pvW", "Myv5jMQsdbR", "tuIsmY6uheU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary \nThe paper studies the importance of similarity between augmentations and corruptions for improving performance on those corruptions. To measure the distance between the augmentation and corruption distributions, the paper proposes a new metric, Minimal Sample Distance (MSD), which is the perceptual simil...
[ 5, 6, 7, 5, -1, -1, -1, -1 ]
[ 4, 4, 4, 5, -1, -1, -1, -1 ]
[ "iclr_2021_zbEupOtJFF", "iclr_2021_zbEupOtJFF", "iclr_2021_zbEupOtJFF", "iclr_2021_zbEupOtJFF", "u926VQgaoci", "2OMZK3oJOhh", "GECfZ-aPmhQ", "qIGKxk2Q14W" ]
iclr_2021_KwgQn_Aws3_
Interpretable Sequence Classification Via Prototype Trajectory
We propose a novel interpretable recurrent neural network (RNN) model, called ProtoryNet, in which we introduce a new concept of prototype trajectories. Motivated by the prototype theory in modern linguistics, ProtoryNet makes a prediction by finding the most similar prototype for each sentence in a text sequence and f...
withdrawn-rejected-submissions
The authors introduce an RNN model, ProtoryNet, which uses trajectories of sentence prototypes to illuminate the semantics of text data. Good points were brought up and addressed in discussion, which have improved the paper - including a helpful suggestion from Rev 3 to fine-tune BERT sentence embeddings in ProtoryNet,...
train
[ "oK5nuwj43Qe", "4JFWVhHweNO", "0ol1CA23m5i", "vxSsHTDwWws", "1tx92mHIOMH", "SOQB9z7LrIW", "qXds3M3ukbK", "mhuQCG44tn3", "QHrQrnjJep", "vpx0SChGmqn", "sRDQZkv8lrF", "Kugq0Xi-aox", "e3vYOWSJAYe", "BePgOXpjjQ7", "gpcAHqkKncy", "aCRT1J3G0v2", "D8Ndb_ooZZM", "uDj-7anPwR", "Zank7f9HNvl...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", ...
[ "The authors propose ProtoryNet, a prototype-based model for paragraph classification that associates each sentence in the paragraph with a relevant prototypical sentence from the training data. The idea is interesting and the ability to decompose sentiment scores over each sentence + find prototypes for each help...
[ 5, 6, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_KwgQn_Aws3_", "iclr_2021_KwgQn_Aws3_", "2NISF1_eKy", "8wHRS_FmDX", "QHrQrnjJep", "qXds3M3ukbK", "1tx92mHIOMH", "1tx92mHIOMH", "vpx0SChGmqn", "iclr_2021_KwgQn_Aws3_", "e3vYOWSJAYe", "gpcAHqkKncy", "BePgOXpjjQ7", "aCRT1J3G0v2", "D8Ndb_ooZZM", "uDj-7anPwR", "uDj-7anPwR", "Z...
iclr_2021_mhEd8uOyNTI
Representational correlates of hierarchical phrase structure in deep language models
While contextual representations from Transformer-based architectures have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings. In particular, it is not entirely clear what aspects of sentence-level syntax are captured by these representations, nor how (if at all) they ...
withdrawn-rejected-submissions
The paper presents a significant body of seemingly solid work, but its contribution nevertheless feels limited: It evaluates a single MLM on a single dataset, and results are largely unsurprising. Note: The authors added experiments on other LMs in the rebuttal. The idea of using perturbations is related in spirit to m...
train
[ "a3Ecq4lxDQ1", "hcZqDoj8dXE", "3mbptpAS9WU", "430TW3LvHE", "cOdlwiZiCX", "PlpRq9WiGPS", "WDOZdzxq6xj", "x-MlO8prs--", "vEP3cI5QCTx", "Uoi5qYXu1X", "K1aY-JyCqjY" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your review! Please find our response below:\n- **“In experiments, it is not clear whether the randomness of BERT itself has been deducted. The randomness could be caused by the dropout operation which may lead to the discrepancy on output even using the same sentence.”** Thanks for pointing this out. W...
[ -1, -1, -1, -1, -1, -1, 6, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4, 5, 4 ]
[ "WDOZdzxq6xj", "Uoi5qYXu1X", "K1aY-JyCqjY", "iclr_2021_mhEd8uOyNTI", "x-MlO8prs--", "vEP3cI5QCTx", "iclr_2021_mhEd8uOyNTI", "iclr_2021_mhEd8uOyNTI", "iclr_2021_mhEd8uOyNTI", "iclr_2021_mhEd8uOyNTI", "iclr_2021_mhEd8uOyNTI" ]
iclr_2021_hKps4HGGGx
Improving robustness of softmax cross-entropy loss via inference information
Adversarial examples easily mislead the vision systems based on deep neural networks (DNNs) trained with the softmax cross entropy (SCE) loss. Such a vulnerability of DNNs comes from the fact that SCE drives DNNs to fit the training samples, whereas the resultant feature distributions between the training and adversa...
withdrawn-rejected-submissions
This paper presents an inference-softmax cross entropy (I-SCE) loss, a modification to the widely adopted "Softmax Cross Entropy" (SCE) loss, to achieve better robustness against adversarial attacks. The original submission had critical issues on motivation, theoretical analysis and experiments. Although the authors pr...
val
[ "PtHnrdH1s1A", "8SEjKiMLQLI", "zBiVb4raAz9", "3p9Dn3QueJz", "mlcHbOslOJj", "PTFB2E4PWSs", "75r-dn29CaR", "zLDm2sBZEco" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a loss function that is robust to the adversarial samples and claims the training with this loss function makes the model achieve better generalization ability.\nHowever, there are a number of problems in this claim.\n\nThe method does not ground on a solid theoretical analysis\\\nI tried to in...
[ 5, -1, -1, -1, -1, 5, 4, 4 ]
[ 4, -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2021_hKps4HGGGx", "PTFB2E4PWSs", "75r-dn29CaR", "zLDm2sBZEco", "PtHnrdH1s1A", "iclr_2021_hKps4HGGGx", "iclr_2021_hKps4HGGGx", "iclr_2021_hKps4HGGGx" ]
iclr_2021_Fn5wiAq2SR
Adversarial Training using Contrastive Divergence
To protect the security of machine learning models against adversarial examples, adversarial training becomes the most popular and powerful strategy against various adversarial attacks by injecting adversarial examples into training data. However, it is time-consuming and requires high computation complexity to generat...
withdrawn-rejected-submissions
The authors propose adversarial training using contrastive divergence based on ideas from Hybrid Monte Carlo methods. On the positive side, the experimental results shown are promising both in terms of robustness and efficiency. On the negative side, the paper seems to have been written in a hurry. At several places terms are n...
train
[ "5RuS1v53iuQ", "arX05CkXHiL", "XGT1MRSEpTj", "FNdhzvGM49o", "iZc7pGkPuZJ", "IuLdKSKM0eA" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you so much for taking the time to review the paper and we appreciate the comments.\n\nQ1. Conduct ablation studies to find out what actually causes the performance improvement. \n\nA1. Thanks for your valuable suggestion. We have added the relevant ablation study in Appendix A.3.3. In short, both the genera...
[ -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, 3, 4, 2 ]
[ "iZc7pGkPuZJ", "IuLdKSKM0eA", "FNdhzvGM49o", "iclr_2021_Fn5wiAq2SR", "iclr_2021_Fn5wiAq2SR", "iclr_2021_Fn5wiAq2SR" ]
iclr_2021_u4WfreuXxnk
Single-Node Attack for Fooling Graph Neural Networks
Graph neural networks (GNNs) have shown broad applicability in a variety of domains. Some of these domains, such as social networks and product recommendations, are fertile ground for malicious users and behavior. In this paper, we show that GNNs are vulnerable to the extremely limited scenario of a single-...
withdrawn-rejected-submissions
The paper deals with adversarial attacks on graph neural networks, a new and promising field in graph representation learning. The paper analyzes a new extreme setting of attack for a single node, and presents important insights, albeit not new algorithms. The reviewers were not particularly enthusiastic and complaine...
train
[ "B30wPJg1Qa4", "XXFcUE-F8A", "O3qPpecCUoJ", "tvnbzxXxU9q", "BLW8viZaPRB", "cZkrlSnVTj", "2zf4qbq2wrd", "bTibz9gEHnm", "7LRhtra6IK", "LkdFHggFvYh", "iiva3UzBcl1", "dSPeh3gzzBp", "K9BVKpAvG-i", "2Bk2Nul-meV", "pPogDc5POm", "JVikAF-wTG" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies the problem of adversarial attacks in graph neural networks. It proposes a new attack strategy called single-node attack where only one node is perturbed. A gradient-based attack algorithm is proposed to modify features of the attack node and achieve the single-node attack. Experiments have demo...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 3, 5 ]
[ "iclr_2021_u4WfreuXxnk", "iclr_2021_u4WfreuXxnk", "tvnbzxXxU9q", "dSPeh3gzzBp", "iclr_2021_u4WfreuXxnk", "2Bk2Nul-meV", "pPogDc5POm", "JVikAF-wTG", "LkdFHggFvYh", "K9BVKpAvG-i", "B30wPJg1Qa4", "iiva3UzBcl1", "iclr_2021_u4WfreuXxnk", "iclr_2021_u4WfreuXxnk", "iclr_2021_u4WfreuXxnk", "ic...
iclr_2021_xrLrpG3Ep1X
Domain-Free Adversarial Splitting for Domain Generalization
Domain generalization is an approach that utilizes several source domains to train a learner that generalizes to an unseen target domain, tackling the domain shift issue. It has drawn much attention in the machine learning community. This paper aims to learn to generalize well to an unseen target domain without relying on the k...
withdrawn-rejected-submissions
The paper proposes a domain generalization method based on the intuition that an invariant model would work for any train/val split. Hence, the method uses adversarial train/val splits during training. The paper was reviewed by three expert reviewers, and none of them championed the paper for acceptance. I carefully...
val
[ "iaQUmPu090w", "c-BE-OIml_", "-I03bNalhot", "CZTEFWBGx8", "frRfljRKvAe", "snkzrqOwaQY", "ti4gcM5n2x4", "uHd0PtAGtP", "_lIIoXFNVu4", "q3cAHf3Vxcm", "p7cC7dqky3x", "Ccecr0x1VeZ", "kAQhEcf1NQ5" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "$Paper$ $summary$\n\nThis paper focuses on domain generalization, targeting the challenging scenario where the training set might not include different sources; even under the presence of different sources, the problem formulation does not takes into account domain labels. The proposed solution is based on meta-le...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2021_xrLrpG3Ep1X", "iclr_2021_xrLrpG3Ep1X", "CZTEFWBGx8", "p7cC7dqky3x", "c-BE-OIml_", "c-BE-OIml_", "Ccecr0x1VeZ", "iaQUmPu090w", "kAQhEcf1NQ5", "kAQhEcf1NQ5", "iaQUmPu090w", "iclr_2021_xrLrpG3Ep1X", "iclr_2021_xrLrpG3Ep1X" ]
iclr_2021__ptUyYP19mP
BeBold: Exploration Beyond the Boundary of Explored Regions
Efficient exploration under sparse rewards remains a key challenge in deep reinforcement learning. To guide exploration, previous work makes extensive use of intrinsic reward (IR). There are many heuristics for IR, including visitation counts, curiosity, and state-difference. In this paper, we analyze the pros and cons...
withdrawn-rejected-submissions
The reviewers have mixed views about this paper. However, it seems to me that the paper is missing some important related work on near-optimal exploration and it is only picking a couple of superficially similar approaches to look at. In particular, the standard benchmarks of Rmax, UCRL and Posterior Sampling do are ...
train
[ "CCDpWO2mVv", "pQkE-Uw2aIn", "UHp9O-1X4k5", "knkOTSLu92o", "IVLhNgN6Nt", "Gk6C6HitGBA", "wnZeMngmq5v", "kbUFKKggH3", "ErxCcxUP1_J", "N75wgt6Wn7X", "PhmkCbQnhdP", "LC5XPKwekkF", "uowzCcPDTwP", "Fo29Zb5ehe-", "Jqm3sTsraiq", "m2MaqW_3B_C", "7Sq9efum4J", "FJdzhbR4yOE" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Summary**\nThis paper is a presentation of BeBold, a new method using an intrinsic reward for exploration, meant for procedurally generated, episodic environments. The method includes two major components: the first being intrinsically rewarding the agent for entering states that are less visited than the curren...
[ 4, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9, 7, 5 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021__ptUyYP19mP", "iclr_2021__ptUyYP19mP", "IVLhNgN6Nt", "pQkE-Uw2aIn", "wnZeMngmq5v", "PhmkCbQnhdP", "kbUFKKggH3", "knkOTSLu92o", "iclr_2021__ptUyYP19mP", "m2MaqW_3B_C", "7Sq9efum4J", "CCDpWO2mVv", "CCDpWO2mVv", "CCDpWO2mVv", "FJdzhbR4yOE", "iclr_2021__ptUyYP19mP", "iclr_2021...
iclr_2021_7qmQNB6Wn_B
Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration
Policy entropy regularization is commonly used for better exploration in deep reinforcement learning (RL). However, policy entropy regularization is sample-inefficient in off-policy learning since it does not take the distribution of previous samples stored in the replay buffer into account. In order to take advantage ...
withdrawn-rejected-submissions
First, I'd like to thank both the authors and the reviewers for extensive and constructive discussion. The paper proposes a generalization of SAC, which considers the entropy of both the current policy and the action samples in the replay pool. The method is motivated by better sample complexity, as it avoids retaking ...
test
[ "hEe7fmcWT3y", "k4QaQQz9Cdn", "SGtzm6Shvve", "LShbDPeWkOC", "bc065YqWOlK", "1kGxcjzkh_", "qFVWYwDJ3-7", "BJpkbPiUV8O", "RTjWKxBActi", "KDq0ro3EsT3", "fLTXybpsmF", "BSxgLmfsAAU", "PkYm4XJdTSt", "-4kuWf96jy", "IXFJcbejoak", "nIsMbMpQPVu", "VD7S4E556no", "kivh0klHpHw", "D4G6YpLnlf",...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_r...
[ "Summary\n\nThis paper proposes a novel exploration method in off-policy learning. Compared to previous methods which do not take care into account the distribution of the samples in the replay buffer, the proposed method maximizes the entropy of the mixture of the policy distribution and the distribution of the sa...
[ 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 5 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "iclr_2021_7qmQNB6Wn_B", "iclr_2021_7qmQNB6Wn_B", "iclr_2021_7qmQNB6Wn_B", "RTjWKxBActi", "1kGxcjzkh_", "-4kuWf96jy", "BJpkbPiUV8O", "BSxgLmfsAAU", "KDq0ro3EsT3", "fLTXybpsmF", "D4G6YpLnlf", "nIsMbMpQPVu", "-4kuWf96jy", "IXFJcbejoak", "iclr_2021_7qmQNB6Wn_B", "o2VsOTAJEOC", "hEe7fmcW...
iclr_2021_T1EMbxGNEJC
RankingMatch: Delving into Semi-Supervised Learning with Consistency Regularization and Ranking Loss
Semi-supervised learning (SSL) has played an important role in leveraging unlabeled data when labeled data is limited. One of the most successful SSL approaches is based on consistency regularization, which encourages the model to produce unchanged predictions under perturbed inputs. However, less attention has been paid to i...
withdrawn-rejected-submissions
Despite the performance gains of RankingMatch over the benchmarks used in the paper, the reviewers remained concerned about how the paper compares to state of the art in several respects.
train
[ "35ADZa7bqBk", "1A7wARY3sl", "dFBccMqlQ_j", "0LoSuyuZp9C", "RfgRxWVUUDy", "grWHi1FaEz", "dYLgCGen1c", "1z4No_paX56", "kA-lNyc2lKT", "phj4MsJ0x5Q", "fJnKhQN0_U", "JSd27vszBPO", "LasXGSnSFuP", "65zhsCeEq0n", "JJXd0Dk1ppq" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe paper presents an SSL method extending FixMatch by introducing an auxiliary loss motivated from the metric learning literature. For example, the triplet loss is utilized to define the loss, where any triplet of anchor, positive and negative examples, either using ground-truth labels for labeled data ...
[ 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_T1EMbxGNEJC", "iclr_2021_T1EMbxGNEJC", "35ADZa7bqBk", "iclr_2021_T1EMbxGNEJC", "65zhsCeEq0n", "65zhsCeEq0n", "1z4No_paX56", "35ADZa7bqBk", "1A7wARY3sl", "JJXd0Dk1ppq", "RfgRxWVUUDy", "1A7wARY3sl", "JJXd0Dk1ppq", "iclr_2021_T1EMbxGNEJC", "iclr_2021_T1EMbxGNEJC" ]
iclr_2021_le9LIliDOG
Efficient Long-Range Convolutions for Point Clouds
The efficient treatment of long-range interactions for point clouds is a challenging problem in many scientific machine learning applications. To extract global information, one usually needs a large window size, a large number of layers, and/or a large number of channels. This can often significantly increase the comp...
withdrawn-rejected-submissions
This work proposes an efficient method for modelling long-range connections in point-cloud data. Reviewers found the paper to be generally well-written. On the less positive side, reviewers felt that the novelty of the work was marginal, and that the experimentation, limited to synthetic data in one domain, was too lim...
train
[ "rr9e3-tD-55", "79A_NK33Fgy", "n4H6UhkOQz-", "WYzq0_SRSt3", "sfgv0xrau5R", "bTR8mYhdojA", "KthYYqQls1a", "HDPRGpoHEZp", "zm2EgYTfCcp", "K6oGTndTwG", "0WQeS4qeZCk" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "-------\n\n*A bit more of commenting on the experiments' results would be appreciated. For instance, it seems that the 2-scale training strategy is especially efficient (not needing many samples) when the LRIs are sufficiently strong (figure 3, right). This is probably an effect of the \"screening\" of LRI by shor...
[ -1, -1, -1, -1, -1, -1, 6, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, 3, 4, 4 ]
[ "79A_NK33Fgy", "KthYYqQls1a", "zm2EgYTfCcp", "K6oGTndTwG", "bTR8mYhdojA", "0WQeS4qeZCk", "iclr_2021_le9LIliDOG", "KthYYqQls1a", "iclr_2021_le9LIliDOG", "iclr_2021_le9LIliDOG", "iclr_2021_le9LIliDOG" ]
iclr_2021_s0Chrsstpv2
Better sampling in explanation methods can prevent dieselgate-like deception
Machine learning models are used in many sensitive areas where besides predictive accuracy their comprehensibility is also important. Interpretability of prediction models is necessary to determine their biases and causes of errors, and is a necessary prerequisite for users' confidence. For complex state-of-the-art bla...
withdrawn-rejected-submissions
The overall impression of the paper is rather positive; however, even after the rebuttal, it still seems that the paper requires further work, and definitely a second review round, before being ready for publication. Thus, I encourage the authors to continue the work started during the rebuttal to address the reviewers' ...
val
[ "R85WAl7Ay39", "i3q7Ro74Glm", "V7avAHpJ5gW", "TJ9UgddT2wQ", "eiC7pkRmqr6", "vDQRQ1dDEot", "JZFhzaFmFMU", "sytKzhZ_Wd_" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "ICLR 2021 Review - Better Sampling in Explanation Dieselgate\n\nSummary: The paper suggests to replace the perturbations part for the existing post-hoc explanation methods like LIME and SHAP with on-data manifold sampling methods. \n\nSHAP and LIME use perturbations or randomly generated points to explain the deci...
[ 4, -1, -1, -1, -1, 4, 4, 7 ]
[ 5, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_s0Chrsstpv2", "vDQRQ1dDEot", "R85WAl7Ay39", "JZFhzaFmFMU", "sytKzhZ_Wd_", "iclr_2021_s0Chrsstpv2", "iclr_2021_s0Chrsstpv2", "iclr_2021_s0Chrsstpv2" ]
iclr_2021_LcPefbNSwx_
Factor Normalization for Deep Neural Network Models
Deep neural network (DNN) models often involve features of high dimensions. In most cases, the high-dimensional features can be decomposed into two parts. The first part is a low-dimensional factor. The second part is the residual feature, with much-reduced variability and inter-feature correlation. This leads to a num...
withdrawn-rejected-submissions
The authors proposed to pre-process the original input features into a low-dimensional term and its corresponding residual term via SVD. The paper empirically demonstrated that neural networks trained on such factorized features exhibit faster convergence in training. Several issues of clarity were addressed during the rebuttal ...
train
[ "5_kAUE_L0ak", "A_OdbWQgcUu", "CYJQSkshC2", "-mCRwD6Kdem" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Review of \"Factor normalization for DNNs\". \n\nThe paper makes an observation that datasets used for training many deep neural nets exhibit a strong factor structure, i.e. have a small number of dominant principal components explaining most of the variance. If we were to remove the dominant factors, the residua...
[ 5, 4, 4, 4 ]
[ 4, 3, 3, 3 ]
[ "iclr_2021_LcPefbNSwx_", "iclr_2021_LcPefbNSwx_", "iclr_2021_LcPefbNSwx_", "iclr_2021_LcPefbNSwx_" ]
iclr_2021_ueiBFzt7CiK
A Framework For Differentiable Discovery Of Graph Algorithms
Recently there has been a surge of interest in using graph neural networks (GNNs) to learn algorithms. However, these works focus mostly on imitating existing algorithms and are limited in two important aspects: the search space for algorithms is too small, and the learned GNN models are not interpretable. To address these is...
withdrawn-rejected-submissions
This paper proposes a method for automatically discovering graph algorithms using GNNs. In general, the reviewers find the paper well-written, and the problem and the approach interesting. However, there is a concern about the practical usefulness of the proposed method, as shown in the following comments: “My main concerns ...
train
[ "mhgXH9UbPgo", "klQAnPwonZ", "juV85oGTdqx", "3Hn4V29kCPi", "mT25JdmiDrW", "6nixCaBqsTI", "eWHLT_C0EYS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors proposed a framework for differentiable graph algorithm discovery (DAD). The framework is developed by improving two the discovery processes, i.e., designing a larger search space, and an effective explainer model. To enlarge the search space, the proposed DAD augments GNNs with cheap gl...
[ 7, 4, -1, -1, -1, -1, 6 ]
[ 3, 3, -1, -1, -1, -1, 3 ]
[ "iclr_2021_ueiBFzt7CiK", "iclr_2021_ueiBFzt7CiK", "klQAnPwonZ", "eWHLT_C0EYS", "mhgXH9UbPgo", "klQAnPwonZ", "iclr_2021_ueiBFzt7CiK" ]
iclr_2021_784_F-WCW46
Rethinking Sampling in 3D Point Cloud Generative Adversarial Networks
In this paper, we examine the long-neglected yet important effects of point sampling patterns in point cloud GANs. Through extensive experiments, we show that sampling-insensitive discriminators (e.g. PointNet-Max) produce shape point clouds with point clustering artifacts while sampling-oversensitive discriminators ...
withdrawn-rejected-submissions
The paper provides empirical evidence that the sampling strategy used in point cloud GANs can drastically impact the generation quality of the network. Specifically, the authors show that discriminators that are not sensitive to sampling have clustering artifact errors, while those that are sensitive to sampling do not...
train
[ "5XYuEn24yor", "jdkQLcTbHbA", "PpdKAZW_M65", "vaK9q1bmrt", "fqmcnbiIRC0", "uXlk_EkyP5S", "GK69EOMlL3", "0eDS_YSZLk", "hicXQzDc0zx", "W2_uulUkgW", "OaWXw9-p1P6", "O7yJ0341KsE", "2eQcoIRGaXy", "TG2dxMPnPMb" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper considers the task of 3d shape generation. More precisely, it embraces the point cloud GAN strategy for generation and offers an extensive comparative analysis of the existing architectures of point cloud generators and discriminators with a focus on the variations of the latter component.\n\nA series o...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "iclr_2021_784_F-WCW46", "TG2dxMPnPMb", "fqmcnbiIRC0", "2eQcoIRGaXy", "hicXQzDc0zx", "vaK9q1bmrt", "OaWXw9-p1P6", "jdkQLcTbHbA", "5XYuEn24yor", "O7yJ0341KsE", "iclr_2021_784_F-WCW46", "iclr_2021_784_F-WCW46", "iclr_2021_784_F-WCW46", "iclr_2021_784_F-WCW46" ]
iclr_2021_NNd0J677PN
Voting-based Approaches For Differentially Private Federated Learning
While federated learning (FL) enables distributed agents to collaboratively train a centralized model without sharing data with each other, it fails to protect users against inference attacks that mine private information from the centralized model. Thus, facilitating federated learning methods with differential privac...
withdrawn-rejected-submissions
This paper adapts voting-based semi-supervised DP learning methods, specifically PATE and private-kNN, to FL. The adaptation is fairly straightforward, as those methods rely on averaging of votes, a primitive that is a standard part of FL. The framework assumes that unlabeled data from the same distribution is ava...
train
[ "-xEbLI3yp1U", "vAf2IILpKUA", "LThgqPFdIA", "hzSntzoTEw", "jVTzKFY0uGv", "Jps-2YVC7je", "hzVZNcMXBUT", "WZxDK3dur9O", "gyqU8BSrGXM" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe paper proposes two approaches (i.e., PATE-FL and Private-kNN-FL) to train a differentially private global model in a federated setting based on [1] and [2]. In PATE-FL, each client first trains a teacher model using their local dataset. The teacher models are used to make noisy predictions on a publi...
[ 5, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ 4, -1, -1, -1, -1, -1, 2, 4, 2 ]
[ "iclr_2021_NNd0J677PN", "gyqU8BSrGXM", "iclr_2021_NNd0J677PN", "WZxDK3dur9O", "hzVZNcMXBUT", "-xEbLI3yp1U", "iclr_2021_NNd0J677PN", "iclr_2021_NNd0J677PN", "iclr_2021_NNd0J677PN" ]
iclr_2021_kVZ6WBYazFq
Constraint-Driven Explanations of Black-Box ML Models
Modern machine learning techniques have enjoyed widespread success, but are plagued by a lack of transparency in their decision making, which has led to the emergence of the field of explainable AI. One popular approach, called LIME, seeks to explain an opaque model's behavior by training a surrogate interpretable model...
withdrawn-rejected-submissions
The authors present CLIME, a variant of LIME which samples from user-defined subspaces specified by Boolean constraints. One motivation is to address the OOD sampling issue in regular LIME. They introduce a metric to quantify the severity of this issue and demonstrate empirically that CLIME helps to address it. In orde...
train
[ "7_YceSvrexx", "FSUv3W3N8eM", "7s69qshQ2Z", "hOPGBCt1JGE", "ZCPdOn2oRXG", "bZ6MvLimamJ", "33axQPwNywr", "cFUmEZyXWAC", "fpNwb9RuP7U", "M6LETC2SFq", "VRABe58x19t", "CugVpU4UUvo", "Q3IfF48IZdD", "MmJ-7_qv_L", "W44OlXSihpN", "KrhpxuU6iMU" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper proposes a new sampling method for LIME based on user-defined Boolean subspaces. They show that using these subspaces rather than the default sampling settings of LIME can lead to robustness against adversarial attacks and allow users to better undercover bugs and biases in subspaces relevant t...
[ 5, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_kVZ6WBYazFq", "fpNwb9RuP7U", "cFUmEZyXWAC", "iclr_2021_kVZ6WBYazFq", "bZ6MvLimamJ", "M6LETC2SFq", "iclr_2021_kVZ6WBYazFq", "VRABe58x19t", "MmJ-7_qv_L", "7_YceSvrexx", "33axQPwNywr", "33axQPwNywr", "W44OlXSihpN", "KrhpxuU6iMU", "iclr_2021_kVZ6WBYazFq", "iclr_2021_kVZ6WBYazFq"...
iclr_2021_kmBFHJ5pr0o
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
Current deep neural networks are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification. To defend against such attacks, an effective and popular approach, known as adversarial training, has been shown to mitigate the negative impact of adversarial attac...
withdrawn-rejected-submissions
In this paper, the authors propose a distributed large-batch adversarial training framework to robustify DNNs. Although the authors made efforts to clarify the reviewers' concerns, it is clear that they still could not convince some reviewers on several points after several rounds of discussion between reviewe...
test
[ "kqHMI0j7Aoi", "_sYdb2pOVww", "98SycISKYek", "u6lf3ew4uUi", "VArQNhdmjH3", "f-CLvaRmD2b", "tcsk76PEyzc", "7DeXvph25Y-", "3sG0lYQ6udK", "uxRhj6f3bZW", "o6mrVSK1Ol1", "s7ZfjYDQK5I", "Z_q8hfFyP8y", "rvetg1kdY8s", "MJlrtiKTylW", "lFu0SgAzR5", "iyXsrCD-cK3", "GtRbR3exdUc", "o7vLLk0iyS...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_...
[ "This paper proposed distributed adversarial training (DAT) for robust models. The method is a combination of PGD-like adversarial training, LARS-like large batch training, and quantizing gradients for communication efficiency in distributed training. The authors show convergence of adversarial training with LARS-l...
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_kmBFHJ5pr0o", "iclr_2021_kmBFHJ5pr0o", "u6lf3ew4uUi", "o6mrVSK1Ol1", "iclr_2021_kmBFHJ5pr0o", "tcsk76PEyzc", "s7ZfjYDQK5I", "rvetg1kdY8s", "Z_q8hfFyP8y", "_sYdb2pOVww", "kqHMI0j7Aoi", "_sYdb2pOVww", "lFu0SgAzR5", "iyXsrCD-cK3", "kqHMI0j7Aoi", "HbyhbtXxabY", "HbyhbtXxabY", ...
iclr_2021__cadenVdKzF
Self-supervised Contrastive Zero to Few-shot Learning from Small, Long-tailed Text data
For natural language processing (NLP) ‘text-to-text’ tasks, prevailing approaches rely heavily on pretraining large self-supervised models on massive external data sources. However, this methodology has been critiqued for: exceptional compute and pretraining data requirements; diminishing returns on both large and smal...
withdrawn-rejected-submissions
The paper presents a self-supervised model based on a contrastive autoencoder that can make use of a small training set for upstream multi-label/class tasks. Reviewers have several concerns, including the lack of comparisons and justification for the setting, as well as the potentially narrow setting. Overall, I found ...
train
[ "yvc_OKfwnG-", "a9I6bRnQk_r", "VTtZfRiaXP", "c6j6onntfd9", "K4alqNzafxU", "FjHrcA6BwFe", "w3f1Tfb_kE8", "Hv3j1KP2248" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a contrastive autoencoder approach that only requires small data to perform a multi-label classification on the long-tail problem. They introduce a matching network to compare text and label embeddings and calculate the probabilities of the label given the input. The proposed idea is very strai...
[ 4, -1, -1, -1, -1, -1, 5, 5 ]
[ 5, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021__cadenVdKzF", "iclr_2021__cadenVdKzF", "w3f1Tfb_kE8", "K4alqNzafxU", "yvc_OKfwnG-", "Hv3j1KP2248", "iclr_2021__cadenVdKzF", "iclr_2021__cadenVdKzF" ]
iclr_2021_GCXq4UHH7h4
Selective Sensing: A Data-driven Nonuniform Subsampling Approach for Computation-free On-Sensor Data Dimensionality Reduction
Designing an on-sensor data dimensionality reduction scheme for efficient signal sensing has always been a challenging task. Compressive sensing is a state-of-the-art sensing technique used for on-sensor data dimensionality reduction. However, the undesired computational complexity involved in the sensing stage of comp...
withdrawn-rejected-submissions
The reviewers all agree that the problems studied in this paper are interesting, and the solutions provided are reasonable. However, qualitative and quantitative comparisons to state-of-the-art methods are missing, and the sensing model assumed by the paper needs to be better motivated.
train
[ "n5t8HgeFs0", "1brsL8rvMzM", "-GJGnN3hQdc", "9rKtPTFSY-", "ki2H43sWSb4", "Hz2527BsOHv", "vMSwjQp7Iqe", "i-oCWRymu1y", "WokzP87r6n0", "jeHLZF1pzbM", "mw6U9lMb6m5", "DmxXopoAnQu" ]
[ "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Reviewer's comment: Autoencoders can also be trained to obtain compressed representations that preserve information. This is a well-studied area that the paper fails to discuss.\n\nResponse: Co-trained signal sensing and reconstruction frameworks can be viewed as a specific type of autoencoders[10]. The main diffe...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 5 ]
[ "WokzP87r6n0", "WokzP87r6n0", "jeHLZF1pzbM", "jeHLZF1pzbM", "mw6U9lMb6m5", "DmxXopoAnQu", "i-oCWRymu1y", "iclr_2021_GCXq4UHH7h4", "iclr_2021_GCXq4UHH7h4", "iclr_2021_GCXq4UHH7h4", "iclr_2021_GCXq4UHH7h4", "iclr_2021_GCXq4UHH7h4" ]
iclr_2021_SUyxNGzUsH
VilNMN: A Neural Module Network approach to Video-Grounded Language Tasks
Neural module networks (NMNs) have achieved success in image-grounded tasks such as question answering (QA) on synthetic images. However, NMNs have received very limited study in video-grounded language tasks. These tasks extend the complexity of traditional visual tasks with the additional visual temporal varia...
withdrawn-rejected-submissions
The authors propose a neural-module-based approach for reasoning about video grounding. The goal is to provide performance and interpretability. Unfortunately, the reviewers found the paper opaque and the results confusing, and expressed repeated concerns about the novelty, fairness of comparisons and concerns that the ...
train
[ "D_mav22ZWd-", "MuS5HYVnPwx", "W1x0DGRphM0", "OZ3Uh8fUkO", "vAJMbdqxcmo", "BhbSlmT2nzQ", "gIovmw9vbz", "HkG_DfE0r24" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Description:\n\nThis paper introduces the Visio-Linguistic Neural Module Network (VilNMN) consisting of a pipeline of dialogue and video understanding neural modules. Motivated by Hu et al. (2017), Kottur et al (2017), this paper extends the NMNs on video tasks for interpretable neural models. The model explicitly...
[ 5, 5, -1, -1, -1, -1, 5, 4 ]
[ 3, 2, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_SUyxNGzUsH", "iclr_2021_SUyxNGzUsH", "gIovmw9vbz", "D_mav22ZWd-", "HkG_DfE0r24", "MuS5HYVnPwx", "iclr_2021_SUyxNGzUsH", "iclr_2021_SUyxNGzUsH" ]
iclr_2021_b_7OR0Fo_iN
A Unifying Perspective on Neighbor Embeddings along the Attraction-Repulsion Spectrum
Neighbor embeddings are a family of methods for visualizing complex high-dimensional datasets using kNN graphs. To find the low-dimensional embedding, these algorithms combine an attractive force between neighboring pairs of points with a repulsive force between all points. One of the most popular examples of such algo...
withdrawn-rejected-submissions
This paper analyzes several neighbor embedding methods-- t-SNE, UMAP, and ForceAtlas2-- by considering their objectives as consisting of attractive and repulsive terms. The main hypothesis is that stronger repulsive terms contribute towards learning discrete structures, while stronger attractive terms contribute towar...
train
[ "hpF4F3Ru7Qs", "6qKb7Ie_y-_", "ipWhMnq0fgR", "NiBL4Q6vDuh", "YVORy9sVYH", "xQYzteBOFvc", "Q_a-7dX1lQT", "T9zev874nMz", "qHy4n57zvDc", "svSwb7Fyo1t", "uu_SYe8BzP2", "jkBujmHyn5", "Ca1UUOJ07L", "WtN4Vw7aT02", "fs3rsMxolh8" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: the authors study a number of neighbor embedding methods in terms of attraction-repulsion forces. The authors show that t-SNE, UMAP, FA2, and LE can be (approximately) unified as a common approach that use different levels of tradeoff between these two terms. They also discuss the increased attraction in ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 5 ]
[ "iclr_2021_b_7OR0Fo_iN", "jkBujmHyn5", "hpF4F3Ru7Qs", "fs3rsMxolh8", "Ca1UUOJ07L", "WtN4Vw7aT02", "T9zev874nMz", "uu_SYe8BzP2", "Ca1UUOJ07L", "WtN4Vw7aT02", "fs3rsMxolh8", "hpF4F3Ru7Qs", "iclr_2021_b_7OR0Fo_iN", "iclr_2021_b_7OR0Fo_iN", "iclr_2021_b_7OR0Fo_iN" ]
iclr_2021_UwOMufsTqCy
RRL: A Scalable Classifier for Interpretable Rule-Based Representation Learning
Rule-based models, e.g., decision trees, are widely used in scenarios demanding high model interpretability for their transparent inner structures and good model expressivity. However, rule-based models are hard to optimize, especially on large data sets, due to their discrete parameters and structures. Ensemble method...
withdrawn-rejected-submissions
This paper falls in the borderline area and there are still some concerns (for instance by AnonReviewer5 and AnonReviewer2) that deserve further treatment. Given that most ideas can only be validated in experiments (as the results are not theoretical), some points that remain are the comparison with other approaches (t...
train
[ "zOStCiKn5xz", "J0-RYnWo04H", "Fa95bfv4hEU", "TwcFhdZxBLA", "Gs6gDtg_Q81", "usu30-wM-nm", "x_tedVl2BAm", "xsB4rCDzsNS", "sfbKV0zFh9d", "xdga0vhHiR", "_Hu6bCPDGEV", "Yq6DO9S_lf" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "## Summary\nThe authors propose a new approach for training interpretable discrete models via gradient descent. They claim three contributions: 1) incorporation of layers implementing logical conjunction and disjunction operations; 2) the gradient grafting technique for performing gradient descent on discrete stru...
[ 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "iclr_2021_UwOMufsTqCy", "iclr_2021_UwOMufsTqCy", "iclr_2021_UwOMufsTqCy", "J0-RYnWo04H", "zOStCiKn5xz", "J0-RYnWo04H", "_Hu6bCPDGEV", "zOStCiKn5xz", "zOStCiKn5xz", "Yq6DO9S_lf", "iclr_2021_UwOMufsTqCy", "iclr_2021_UwOMufsTqCy" ]
iclr_2021_3YdNZD5dMxI
Unconditional Synthesis of Complex Scenes Using a Semantic Bottleneck
Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes. We assume pixel-wise segmentation labels are available during training and us...
withdrawn-rejected-submissions
The paper receives a mixed rating, with R3 rating the paper above the bar, R1 and R2 rating it marginally above the bar, and R4 recommending rejection. The cited positive points include 1) decomposing image generation into first synthesizing segmentation masks and then converting segmentation masks to images, and 2) good resu...
train
[ "uQMb6SIkHI", "4ZEcE-aCGbF", "HYedVVomwlB", "enxGIjDE7K3", "6cSN0JQweg8", "QHz75c_RBj5", "aF-dixAE0H", "OdQgoGEcBw1" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks very much for your time and feedback.\n\n**Run on more common datasets:**\nThanks for the suggestion. As you mentioned, the focus of this work is more on complex datasets with limited size but this is an interesting experiment to be explored. \n\n**StyleGAN as a baseline:**\nAs you suggested, we have traine...
[ -1, -1, -1, -1, 6, 8, 4, 6 ]
[ -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "6cSN0JQweg8", "QHz75c_RBj5", "OdQgoGEcBw1", "aF-dixAE0H", "iclr_2021_3YdNZD5dMxI", "iclr_2021_3YdNZD5dMxI", "iclr_2021_3YdNZD5dMxI", "iclr_2021_3YdNZD5dMxI" ]
iclr_2021_Mwuc0Plt_x2
RG-Flow: A hierarchical and explainable flow model based on renormalization group and sparse prior
Flow-based generative models have become an important class of unsupervised learning approaches. In this work, we incorporate the key idea of renormalization group (RG) and sparse prior distribution to design a hierarchical flow-based generative model, called RG-Flow, which can separate different scale information of i...
withdrawn-rejected-submissions
This paper proposes a hierarchical flow-based generative model to learn disentangled features at different levels of abstractions. The key technical contribution is a combination of renormalization group and flow-based models. The reviewers do find the idea interesting. However, the merit of the work with respect to S...
test
[ "67X7r-ufl1", "ZcBHIOU991", "GXjntF-8R3f", "mXWrD2SlHl6", "8DDesv4Put", "0UcEy69Tfzc", "nmJ_NrCEa8B", "N6B0VtZR3oR", "yj4q2wBrVw8" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new architecture for flow based generative models. The model imposes a hierarchical structure over information at different scales. The paper shows that the hierarchical structure results in disentangled features at different levels of abstractions. The paper claims that the approach is based ...
[ 6, 6, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_Mwuc0Plt_x2", "iclr_2021_Mwuc0Plt_x2", "ZcBHIOU991", "N6B0VtZR3oR", "yj4q2wBrVw8", "67X7r-ufl1", "iclr_2021_Mwuc0Plt_x2", "iclr_2021_Mwuc0Plt_x2", "iclr_2021_Mwuc0Plt_x2" ]
iclr_2021_SO73JUgks8
AUBER: Automated BERT Regularization
How can we effectively regularize BERT? Although BERT proves its effectiveness in various downstream natural language processing tasks, it often overfits when there are only a small number of training instances. A promising direction to regularize BERT is based on pruning its attention heads based on a proxy score for h...
withdrawn-rejected-submissions
This paper proposes to use RL to learn how to prune attention heads in BERT to achieve regularization for tasks with small dataset size. Specifically, the authors use DQN to learn a policy to prune heads layer by layer. This paper receives 4 reject recommendations with an average score of 4.5. Though the idea in thi...
train
[ "gIgu-RONZbS", "uO4pXbHdxuL", "R6TSJ_iG6IA", "labHckEYxBt", "gOq9tAX8pP", "ILSc7hfCj0", "t5bx0EOFKe4", "XQiycnUDBPL" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your kind and thorough review on our paper.\n\n1.\tTraining routine seems very time consuming as the network is fine-tuned after removing every single attention head. The authors can try to remove the heads in a batch in order to reduce the training time.\n- As pointed out, the time consumption was g...
[ -1, -1, -1, -1, 5, 4, 4, 5 ]
[ -1, -1, -1, -1, 5, 5, 4, 4 ]
[ "gOq9tAX8pP", "ILSc7hfCj0", "t5bx0EOFKe4", "XQiycnUDBPL", "iclr_2021_SO73JUgks8", "iclr_2021_SO73JUgks8", "iclr_2021_SO73JUgks8", "iclr_2021_SO73JUgks8" ]
iclr_2021_xW9zZm9qK0_
Class2Simi: A New Perspective on Learning with Label Noise
Label noise is ubiquitous in the era of big data. Deep learning algorithms can easily fit the noise and thus cannot generalize well without properly modeling the noise. In this paper, we propose a new perspective on dealing with label noise called ``\textit{Class2Simi}''. Specifically, we transform the training exampl...
withdrawn-rejected-submissions
The paper's stated contributions are: (1) a new perspective on learning with label noise, which reduces the problem to a similarity learning (i.e., pairwise classification) task; (2) a technique leveraging the above to learn from noisy similarity labels, and a theoretical analysis of the same; (3) empirical demonstratio...
train
[ "e84yZrkWRXU", "Irq4_pA_LED", "qnFWEqnwKF", "Z42qJkw9o3I", "33-j8mwSz_", "X1mpuLuHqTU", "3P2DENAaFV8", "bYbN1RKw1EZ", "ghhA1E515U", "9A-ERFBMzvN", "pjaa-UOdk7s", "jUBm0xAfKYh", "DNzEk49mle", "TlrcaL0ViPY", "dsvJz6bd7qy", "x8ZK_IEWrIa", "6sIKqkLBSkS", "T42ewZaRQU6", "vofyaD2Y6R8",...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "...
[ "This paper proposes a new perspective on dealing with label noise, called Class2Simi, by transforming the training examples with noisy labels into pairs of examples with noisy similarity labels and then learning a deep model with the noisy similarity labels. Experimental results on real datasets show that Class2Si...
[ 6, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -...
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -...
[ "iclr_2021_xW9zZm9qK0_", "iclr_2021_xW9zZm9qK0_", "iclr_2021_xW9zZm9qK0_", "iclr_2021_xW9zZm9qK0_", "bYbN1RKw1EZ", "bYbN1RKw1EZ", "pjaa-UOdk7s", "8-M3il3gxpT", "9A-ERFBMzvN", "V5DK-KdROjf", "5y8CLyF3PS", "5y8CLyF3PS", "5y8CLyF3PS", "5y8CLyF3PS", "5y8CLyF3PS", "5y8CLyF3PS", "5y8CLyF3P...
iclr_2021_I6QHpMdZD5k
Learning to Solve Nonlinear Partial Differential Equation Systems To Accelerate MOSFET Simulation
Semiconductor device simulation uses numerical analysis, where a set of coupled nonlinear partial differential equations is solved with the iterative Newton-Raphson method. Since an appropriate initial guess to start the Newton-Raphson method is not available, a solution of practical importance with desired boundary co...
withdrawn-rejected-submissions
This paper proposes using a neural network to learn an approximate solution for desired boundary conditions to accelerate the semiconductor device simulation. The work shows that speed-up simulation is increased significantly. However, the major concern about this work is the limited contribution to the machine learnin...
train
[ "9qPgD4glxC", "UOjjbhScivE", "uVfHZ0Z1n6a", "9koLLmLqEJb", "RyrtYRt36Zv", "L8UPMWXyfX", "1eVU81n4x_q", "xUF5Ysrvi4", "Lcfh6QQ4Hzu", "eriGTsuime8", "W8K7MQ2qlmk", "1wpDb7rETZA", "ixG9uF4QfsF", "nlpra42YOWd", "dZ0PfsQnln7", "EfRXjfO58Lu", "WESViyUe2mC", "DBLJGsR6Ooa", "xJ-XsyTfrC9"...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ "Thank you for your feedback!\n\nRegarding your first comment, our approach can be applied to other fields, where a set of nonlinear equations must be solved in an iterative manner (the Newton-Raphson method).\nAlthough our examples are taken from a somewhat unfamiliar field (the semiconductor device simulation) fr...
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 4 ]
[ "9koLLmLqEJb", "uVfHZ0Z1n6a", "xJ-XsyTfrC9", "W8K7MQ2qlmk", "iclr_2021_I6QHpMdZD5k", "Lcfh6QQ4Hzu", "eR9pNNG2__M", "iclr_2021_I6QHpMdZD5k", "DBLJGsR6Ooa", "iclr_2021_I6QHpMdZD5k", "RyrtYRt36Zv", "eR9pNNG2__M", "eriGTsuime8", "ZHg9PaoU_YW", "iclr_2021_I6QHpMdZD5k", "RyrtYRt36Zv", "eR9...
iclr_2021_PRr_3HPakQ
Learning to Generate Questions by Recovering Answer-containing Sentences
To train a question answering model based on machine reading comprehension (MRC), significant effort is required to prepare annotated training data composed of questions and their answers from contexts. To mitigate this issue, recent research has focused on synthetically generating a question from a given context and a...
withdrawn-rejected-submissions
All reviewers appreciate the good quality of this submission with a good idea and solid execution (as said by R3). The paper is clearly written and the addition during the discussion have greatly improved it as acknowledged by all reviewers. However, a major weakness of the submission still needs to be addressed befor...
train
[ "vAPU0Tlk1Y2", "SA50RZ4IeHr", "_RRVdkX_i8", "FyLEoPupy2", "pfX-cDHFouN", "RAgPCC0ZBX", "uEi6PERbSci", "Dzpax4MZKqo", "RmHSQDuvaz", "kwo1DpHBvK_", "lMXSdHoPH6", "I2Ny1aS8S0", "e_2HdZPq4Pf", "F7Jj1UbYdP", "sAdEt4dUENC", "pxZSrxv4Lfx", "up--IA1trW", "fTCbZfEd7Db" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your response to our comments.\n\nAs mentioned in Appendix B, we tested our synthetic data with Electra (Large) MRC model which has official code, and its variant is currently in state-of-the-art on single model results for SQuAD (https://rajpurkar.github.io/SQuAD-explorer/). We will further evaluate...
[ -1, 6, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "_RRVdkX_i8", "iclr_2021_PRr_3HPakQ", "uEi6PERbSci", "iclr_2021_PRr_3HPakQ", "FyLEoPupy2", "sAdEt4dUENC", "pxZSrxv4Lfx", "kwo1DpHBvK_", "iclr_2021_PRr_3HPakQ", "F7Jj1UbYdP", "RmHSQDuvaz", "SA50RZ4IeHr", "up--IA1trW", "RmHSQDuvaz", "fTCbZfEd7Db", "SA50RZ4IeHr", "iclr_2021_PRr_3HPakQ",...
iclr_2021_Svfh1_hYEtF
Federated Continual Learning with Weighted Inter-client Transfer
There has been a surge of interest in continual learning and federated learning, both of which are important in deep neural networks in real-world scenarios. Yet little research has been done regarding the scenario where each client learns on a sequence of tasks from a private local data stream. This problem of federa...
withdrawn-rejected-submissions
This paper tackles an interesting problem (that of federated continual deep learning) and proposes an effective approach for it with good results. This is a good contribution. However, there are presentation issues in several aspects of the paper that require improvement before publication. The authors' claims of the ...
train
[ "TWjlKeYv0k_", "93zyu5Swt6G", "SDqJ2KkY9c-", "movGIIcvlJN", "IXyGahkNzE", "4XO9JuOX5T", "hR8SneUBHcg", "SVwi_vBrtzQ", "1CQv5bNnOcH", "nTKnKFXQrRR", "g5w49ozwcV", "1n12txCYWC2", "oh27EjKkJwI", "kLxFKesQfWT", "gDezEzfRlC9", "9-jqAt3GcNR", "zz3iNwq-QFk", "89IS6Myc17", "hRm7-2gm65Q",...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", ...
[ "In this paper, the authors present a federated continual learning framework. By decomposing the local client parameter, the method could alleviate the effect of negative transfer and improve efficiency. Empirical results partly show the effectiveness of the proposed algorithm. \n\n**Strength**\nTo my best knowledg...
[ 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_Svfh1_hYEtF", "iclr_2021_Svfh1_hYEtF", "4XO9JuOX5T", "4XO9JuOX5T", "4XO9JuOX5T", "1CQv5bNnOcH", "93zyu5Swt6G", "93zyu5Swt6G", "gDezEzfRlC9", "93zyu5Swt6G", "oh27EjKkJwI", "oh27EjKkJwI", "vMDiTB5it9t", "9-jqAt3GcNR", "9-jqAt3GcNR", "rLqOiXPgPJ8", "iclr_2021_Svfh1_hYEtF", ...
iclr_2021_Lnomatc-1s
Learning-Augmented Sketches for Hessians
We study learning-based sketching for Hessians, which is known to provide considerable speedups to second order optimization. A number of works have shown how to sketch or subsample the Hessian to speed up each iteration, but such sketches are usually specific to the matrix at hand, rather than being learned from a dis...
withdrawn-rejected-submissions
This paper considers convex optimization problems whose solutions involve the solution of linear systems defined in terms of the Hessian. It presents algorithms that reduce the runtime of standard iterative approaches to solving these problems by iteratively sketching the Hessian; the novelty lies in the fact that the ...
train
[ "dgR_plzbfIE", "WRPRfwKRQSO", "2Vhz0tDT4M3", "e9XUUxGfYz", "3pr__I2YIoe", "jwTUzSSNbTW", "Zb4vAKlhU1Q", "8n6vaMJ_Cqb" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper proposed a learned variant of the well-known iterative Hessian sketch (IHS) method of Pilanci and Wainwright, for efficiently solving least-squares regression. The proposed method is essentially a learned variant of the count-sketch, where the positions of the non-zero entries are random while the value ...
[ 4, -1, 6, -1, -1, -1, -1, 6 ]
[ 5, -1, 4, -1, -1, -1, -1, 3 ]
[ "iclr_2021_Lnomatc-1s", "3pr__I2YIoe", "iclr_2021_Lnomatc-1s", "jwTUzSSNbTW", "dgR_plzbfIE", "2Vhz0tDT4M3", "8n6vaMJ_Cqb", "iclr_2021_Lnomatc-1s" ]
iclr_2021_YjXnezbeCwG
Learning to Use Future Information in Simultaneous Translation
Simultaneous neural machine translation (briefly, NMT) has attracted much attention recently. In contrast to standard NMT, where the NMT system can access the full input sentence, simultaneous NMT is a prefix-to-prefix problem, where the system can only utilize the prefix of the input sentence and thus more uncertainty...
withdrawn-rejected-submissions
This paper improves the wait-k-based simultaneous NMT by training on an adaptive wait-m policy with a controller determining the lag for each sentence pair. The controller is trained with RL to minimize the loss on a validation set. The overall model is reasonable and well presented. However, I have the following two ...
val
[ "QVDftfLek_7", "bJvd70uEp-", "MoSIdgJdNOz", "k4yW5DLs0Z", "YN-JT1Yc0qI", "zng-7L28CR8", "l8khlnuz2_q", "E0QnlM00xUq", "BXmSWQDi7qu", "Xzgy6V12UU", "3pHCV0Rw_Z2" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your quick response!\n\n> Towards “whether the improvements are worth the effort”\n1.\tWe make consistent improvement on three IWSLT tasks and one WMT task over a series of baselines (including both heuristic and adaptive baselines). The results for IWSLT tasks are available at Figure 2, Table 1 Figure ...
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "bJvd70uEp-", "l8khlnuz2_q", "BXmSWQDi7qu", "3pHCV0Rw_Z2", "E0QnlM00xUq", "iclr_2021_YjXnezbeCwG", "Xzgy6V12UU", "iclr_2021_YjXnezbeCwG", "iclr_2021_YjXnezbeCwG", "iclr_2021_YjXnezbeCwG", "iclr_2021_YjXnezbeCwG" ]
iclr_2021_1MJPtHogkwX
A Multi-Modal and Multitask Benchmark in the Clinical Domain
Healthcare represents one of the most promising application areas for machine learning algorithms, including modern methods based on deep learning. Modern deep learning algorithms perform best on large datasets and on unstructured modalities such as text or image data; advances in deep learning have often been drive...
withdrawn-rejected-submissions
The paper has two contributions: a novel benchmark for clinical multi-modal multi-task learning based on the already released MIMIC III, and a multi-modal multi-task machine learning model. While the paper does show value in providing a curated benchmark and combining/unifying existing approaches to a timely problem, th...
train
[ "QB0Ii9keuhB", "XhjEmcssLlw", "MHdEp9S69uo", "eFDNKCEZq3f", "eb3T_pyAqF", "VgPPN9KGOj" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the feedback on the presentation of table 4. We apologize for the confusion and reorganized our results in comparison to baselines. As mentioned in the review, there are several works that utilized a single modality for some subset of the six tasks. For example, Sheikhalishahi et al. used Bi-LSTM to ...
[ -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, 5, 4, 4 ]
[ "eFDNKCEZq3f", "eb3T_pyAqF", "VgPPN9KGOj", "iclr_2021_1MJPtHogkwX", "iclr_2021_1MJPtHogkwX", "iclr_2021_1MJPtHogkwX" ]
iclr_2021_EXkD6ZjvJQQ
Provable More Data Hurt in High Dimensional Least Squares Estimator
This paper investigates the finite-sample prediction risk of the high-dimensional least squares estimator. We derive the central limit theorem for the prediction risk when both the sample size and the number of features tend to infinity. Furthermore, the finite-sample distribution and the confidence interval of the pre...
withdrawn-rejected-submissions
This paper derives CLT-type results for the minimum $\ell_2$-norm least squares estimator, allowing both n and p to grow. Pros: As one reviewer puts it: Asymptotic confidence intervals for different prediction risks are derived. These results seem new. Cons: It's not clear what has been gained by having these results...
train
[ "9StQWaACyPc", "QoGRsqgeR9i", "BtDsHD6y--5", "qDK6GczDF0V", "sfJ5Sdg0d4n", "--xAtBz02yP", "8_Mrnhrtoo", "wiWgUIkr7Mj" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "**Summary**: In this article, the authors characterized the second-order fluctuation of the prediction risk of the (min-norm) least square estimator, by assuming an underlying noisy teacher model $y_i = \\beta^T x_i + \\epsilon_i$, in the regime where the data dimension $p$ and the number of training samples $n$ g...
[ 7, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_EXkD6ZjvJQQ", "BtDsHD6y--5", "sfJ5Sdg0d4n", "9StQWaACyPc", "wiWgUIkr7Mj", "8_Mrnhrtoo", "iclr_2021_EXkD6ZjvJQQ", "iclr_2021_EXkD6ZjvJQQ" ]
iclr_2021_2234Pp-9ikZ
Don't be picky, all students in the right family can learn from good teachers
State-of-the-art results in deep learning have been improving steadily, in good part due to the use of larger models. However, widespread use is constrained by device hardware limitations, resulting in a substantial performance gap between state-of-the-art models and those that can be effectively deployed on small devi...
withdrawn-rejected-submissions
The paper proposes a new approach to knowledge distillation by searching for a family of student models instead of a specific model. The key idea is that given an optimal family of student models, any model sampled from this family is expected to perform well when trained using knowledge distillation. Overall this is a...
train
[ "mQL2xviB-0b", "BKAPqLqBYHx", "vfLKZ9vsNP1", "xQWYg11m-M2", "g2MnTJyr8v3", "nAV971oXTIf", "mlFcxuLEWx2", "DQNOcb4dc9Y", "PIobn1-RlyN", "VOjy_Z33oS" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "A new version of the paper has been uploaded. The following changes were made, following the reviewer's comments:\n\n**\"No results on ImageNet\".**\nWe have added ImageNet results to Section 4.2:\n\n\"ImageNet. The improved results on smaller datasets extend to large datasets as well. On ImageNet, AutoKD reaches7...
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nAV971oXTIf", "iclr_2021_2234Pp-9ikZ", "g2MnTJyr8v3", "DQNOcb4dc9Y", "PIobn1-RlyN", "VOjy_Z33oS", "iclr_2021_2234Pp-9ikZ", "iclr_2021_2234Pp-9ikZ", "iclr_2021_2234Pp-9ikZ", "iclr_2021_2234Pp-9ikZ" ]
iclr_2021_IJxaSrLIbkx
On Relating "Why?" and "Why Not?" Explanations
Explanations of Machine Learning (ML) models often address a ‘Why?’ question. Such explanations can be related with selecting feature-value pairs which are sufficient for the prediction. Recent work has investigated explanations that address a ‘Why Not?’ question, i.e. finding a change of feature values that guar...
withdrawn-rejected-submissions
The authors consider local 'why' or 'abductive' explanations for a model and a given class, which identify a minimal subset of features such that they're sufficient to imply that the model predicts the class; and 'why not' or 'contrastive' explanations, which identify a minimal subset s.t. they're sufficient to imply t...
train
[ "Kg9Gdwg7F-S", "EFykK6N5TzW", "GrYpxvLXisZ", "CVXo8LA4uJI", "RZ2_MZP0FP", "vUrQcSFhoL5", "hRr9lqH0LBI", "1ZW-z_u0fpm" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose to extract two types of explanations: abductive and contrastive explanations to address a gap in the literature of explainable AI. Indeed, that's a great point and often explainable models address the \"why\" and rarely the \"why not\" that can help identify the features guiding the change in t...
[ 5, -1, -1, -1, -1, 5, 6, 8 ]
[ 5, -1, -1, -1, -1, 2, 2, 3 ]
[ "iclr_2021_IJxaSrLIbkx", "vUrQcSFhoL5", "hRr9lqH0LBI", "Kg9Gdwg7F-S", "1ZW-z_u0fpm", "iclr_2021_IJxaSrLIbkx", "iclr_2021_IJxaSrLIbkx", "iclr_2021_IJxaSrLIbkx" ]
iclr_2021_9sF3n8eAco
All-You-Can-Fit 8-Bit Flexible Floating-Point Format for Accurate and Memory-Efficient Inference of Deep Neural Networks
Modern deep neural network (DNN) models generally require a huge amount of weight and activation values to achieve good inference outcomes. Those data inevitably demand a massive off-chip memory capacity/bandwidth, and the situation gets even worse if they are represented in high-precision floating-point formats. Effor...
withdrawn-rejected-submissions
After reading the paper, reviews, and authors’ feedback, the meta-reviewer agrees with the reviewers that the paper has limited novelty as there are already previous studies on setting floating point configurations. Additionally, the particular hardware setting that the authors provide seems to rely on an fp32 FMA, which...
train
[ "aKRg2Cwqzph", "HNKYeYVoIan", "NksSRZdzwvy", "aQF-FVrRYJ", "ZLJnyMKbgA", "FvR3xWLkg7X", "KI1RYzbIOe", "_VOdSrQTkWA", "aYkeJ-w9ILV", "BuPDcYXImy", "lNbu8u5AsKO", "wNn5UIC_-29" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Q1:\nIt is not clear to me the objective of this paper. This paper introduces an 8-bit quantized inference framework. However, it is well-known that, 8-bit precision can be applied to popular DNN models to accelerate inference while maintaining model accuracies and many such systems have already been put into prod...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ "BuPDcYXImy", "BuPDcYXImy", "BuPDcYXImy", "aYkeJ-w9ILV", "aYkeJ-w9ILV", "aYkeJ-w9ILV", "lNbu8u5AsKO", "wNn5UIC_-29", "iclr_2021_9sF3n8eAco", "iclr_2021_9sF3n8eAco", "iclr_2021_9sF3n8eAco", "iclr_2021_9sF3n8eAco" ]
iclr_2021_aGmEDl1NWJ-
Luring of transferable adversarial perturbations in the black-box paradigm
The growing interest for adversarial examples, i.e. maliciously modified examples which fool a classifier, has resulted in many defenses intended to detect them, render them inoffensive or make the model more robust against them. In this paper, we pave the way towards a new approach to improve the robustness of a model...
withdrawn-rejected-submissions
The paper proposes to augment the original model to introduce the "luring effect", which can be used for detection and black-box defense. Despite the interesting setup, there are several weaknesses in the threat model (whether it is practical) and the evaluation (lack of adaptive attacks). Those concerns remain a...
train
[ "ohLz6N-Hhx", "9OK1d7An-z8", "ak0TIqy59Fg", "9WLW0SFA5Vi", "nCyGMQZ18qM", "M5BbHp2sqkj", "2DyLa9S2J-e", "Tg4yqByKPp", "AaYlGsbhDz7", "laJJD4GNzhp", "YSb8qM7UdO", "aZ2mbprq4Lv", "wUJRquyJTtH" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "**Update:**\n\nThanks to the authors for their detailed response to my review. Unfortunately after reading the response, I don't understand how it addresses some significant concerns I have about this paper and therefore I can't increase my score. In particular:\n\n- Author response says \"Measuring the transferab...
[ 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, 5 ]
[ 2, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_aGmEDl1NWJ-", "nCyGMQZ18qM", "iclr_2021_aGmEDl1NWJ-", "iclr_2021_aGmEDl1NWJ-", "M5BbHp2sqkj", "9WLW0SFA5Vi", "wUJRquyJTtH", "aZ2mbprq4Lv", "ohLz6N-Hhx", "YSb8qM7UdO", "iclr_2021_aGmEDl1NWJ-", "iclr_2021_aGmEDl1NWJ-", "iclr_2021_aGmEDl1NWJ-" ]
iclr_2021_Oz_4sa7hKhl
Cluster & Tune: Enhance BERT Performance in Low Resource Text Classification
In data-constrained cases, the common practice of fine-tuning BERT for a target text classification task is prone to producing poor performance. In such low resources scenarios, we suggest performing an unsupervised classification task prior to fine-tuning on the target task. Specifically, as such an intermediat...
withdrawn-rejected-submissions
The paper suggests a simple variant for BERT training that improves classification for smaller training samples. So it has very specific applicability, unlike other published variants, which generally improve a broad range of tasks. The variant adds a self-supervision classification task based on clustering. Experim...
train
[ "PwL1kwPLh8r", "ShF90w5wnCn", "yVMFd-Z9CwF", "U_opCD3wYkj", "RSDmp6J7hi", "5AfAlFvToVR", "WLR1hHoO3tz", "jbGR7Ji_JOF" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a novel domain/task adaptation procedure for BERT-style language models (LMs). Inspired by computer vision, the authors propose to specialize LMs to a particular domain and task with an intermediate clustering task. A large unlabeled dataset is clustered to generate pseudo-labels then a BERT mo...
[ 6, -1, -1, -1, -1, 6, 8, 3 ]
[ 4, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2021_Oz_4sa7hKhl", "5AfAlFvToVR", "PwL1kwPLh8r", "jbGR7Ji_JOF", "WLR1hHoO3tz", "iclr_2021_Oz_4sa7hKhl", "iclr_2021_Oz_4sa7hKhl", "iclr_2021_Oz_4sa7hKhl" ]
iclr_2021_iqmOTi9J7E8
Private Split Inference of Deep Networks
Splitting network computations between the edge device and the cloud server is a promising approach for enabling low edge-compute and private inference of neural networks. Current methods for providing the privacy train the model to minimize information leakage for a given set of private attributes. In practice, howeve...
withdrawn-rejected-submissions
While reviewers believe that the motivation of the paper is strong and the idea is interesting the ultimate execution of the paper is not up to the standards of ICLR. I believe the biggest concern is the precise privacy guarantee of the method. As pointed out, it is an extremely strong assumption that the model structu...
train
[ "7vVnTIIhIr-", "KEW7mSTzBVo", "yz5rrjqrVf", "o8C90u2BJcJ", "Qp4pild5SB4", "DfVvXj7CZtn", "9auAYixxgx2" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. We provide our responses in the following.\n\n* About orthogonality of public and private information: As you mentioned, the public and private information could be indeed highly correlated. Our proposed methods do not assume that the private information is orthogonal to the public inf...
[ -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, 3, 5, 4 ]
[ "9auAYixxgx2", "iclr_2021_iqmOTi9J7E8", "Qp4pild5SB4", "DfVvXj7CZtn", "iclr_2021_iqmOTi9J7E8", "iclr_2021_iqmOTi9J7E8", "iclr_2021_iqmOTi9J7E8" ]
iclr_2021_a2rFihIU7i
Model-based Asynchronous Hyperparameter and Neural Architecture Search
We introduce a model-based asynchronous multi-fidelity method for hyperparameter and neural architecture search that combines the strengths of asynchronous Successive Halving and Gaussian process-based Bayesian optimization. At the heart of our method is a probabilistic model that can simultaneously reason across hyper...
withdrawn-rejected-submissions
This paper is solid. It is correct, the text and author response demonstrate good knowledge of the area, the results are significant and solid, the experiments are strengthened by many independent runs (refreshing to see), the ablation study is well done, and the proposed distributed hyper-parameter and NAS alg is simp...
test
[ "pbZnorcNFiW", "8Xkn8gKSrGe", "zMx4bJLK-EW", "WjLwDnlh-zv", "YBVPLLmlzn0", "taKZBVIoanJ", "B91vnS9vaIE", "9PEWjfr9V3r", "sJ-vaDmN1ht", "k4F7QmPSNsH", "SEblVqF7ut", "ObDqc5acxHZ", "5K4mQRYOpn0", "0QosHVhpiP6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "After rebuttal: First of all, I would like to thank the authors for all their effort on the rebuttal and the revised paper and I really appreciate that. After carefull discussion with AC and other reviewers, I would, however, have to decrease my score to 6 due the lack of significant technical novelty. \n\n-------...
[ 6, 6, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 3, 2, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_a2rFihIU7i", "iclr_2021_a2rFihIU7i", "iclr_2021_a2rFihIU7i", "ObDqc5acxHZ", "iclr_2021_a2rFihIU7i", "iclr_2021_a2rFihIU7i", "5K4mQRYOpn0", "pbZnorcNFiW", "0QosHVhpiP6", "8Xkn8gKSrGe", "taKZBVIoanJ", "zMx4bJLK-EW", "iclr_2021_a2rFihIU7i", "iclr_2021_a2rFihIU7i" ]
iclr_2021_5slGDu_bVc6
Learning from deep model via exploring local targets
Deep neural networks often have a huge number of parameters, which poses challenges for deployment in application scenarios with limited memory and computation capacity. Knowledge distillation is one approach to derive compact models from bigger ones. However, it has been observed that a converged heavy teacher mod...
withdrawn-rejected-submissions
This paper proposed a new variant of knowledge distillation. The basic idea is interesting although similar ideas have more or less appeared in the literature as pointed out by the reviewers. Our main concern on this work is that the real empirical improvements are too limited such that it is hard to conclude that the ...
train
[ "nJ6JuczOdB1", "ol-UgnIGof", "VzZpZ5S-iF0", "ql6U_cw1CM1", "DxZDA7rAaeA", "yE7vs3cbFIn", "rUjhkaW900", "OvZFUDriAZL" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper follows the work of RCO[1], where knowledge distillation is conducted by learning from the optimization trajectories of the teacher rather than the converged teacher solely. The main difference is that in the proposed method ProKT, the student model learns from the teacher model step by step, while RCO ...
[ 4, 4, -1, -1, -1, -1, 3, 5 ]
[ 5, 5, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2021_5slGDu_bVc6", "iclr_2021_5slGDu_bVc6", "ol-UgnIGof", "nJ6JuczOdB1", "rUjhkaW900", "OvZFUDriAZL", "iclr_2021_5slGDu_bVc6", "iclr_2021_5slGDu_bVc6" ]
iclr_2021_zcOJOUjUcyF
Better Optimization can Reduce Sample Complexity: Active Semi-Supervised Learning via Convergence Rate Control
Reducing the sample complexity associated with deep learning (DL) remains one of the most important problems in both theory and practice since its advent. Semi-supervised learning (SSL) tackles this task by leveraging unlabeled instances which are usually more accessible than their labeled counterparts. Active learning...
withdrawn-rejected-submissions
The paper investigates an active learning strategy for speeding up the convergence for SSL deep learning algorithms. When the SSL objective could learn a good approximation of the optimal model, the proposed method efficiently converges to the result with a few queries. The main idea is that when the eigenvalues of the...
val
[ "ouY9OB8_tcw", "9PQrfBf6R99", "eZUi0uRWs7U", "qIpaWwJgpBc", "HeVtx0c_e2y", "EF1wGUKADtv", "vEGJxQEx6wA", "3ufSCa2UfOt", "FdHp9-K2nt", "IcAwYAb_413", "N0jrPE61t_d", "8pGZOnwTm19" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper makes an attempt at combining semi-supervised and active learning. The authors note that in restricted settings, SSL and AL can achieve exponential improvements over standard supervised learning with random sampling. Instead, this work attempts to use active learning to speed up the convergence to the a...
[ 5, -1, 5, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_zcOJOUjUcyF", "3ufSCa2UfOt", "iclr_2021_zcOJOUjUcyF", "HeVtx0c_e2y", "eZUi0uRWs7U", "N0jrPE61t_d", "N0jrPE61t_d", "8pGZOnwTm19", "ouY9OB8_tcw", "iclr_2021_zcOJOUjUcyF", "iclr_2021_zcOJOUjUcyF", "iclr_2021_zcOJOUjUcyF" ]
iclr_2021_Fblk4_Fd7ao
Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations
Effective communication is an important skill for enabling information exchange and cooperation in multi-agent settings. Indeed, emergent communication is now a vibrant field of research, with common settings involving discrete cheap-talk channels. One limitation of this setting is that it does not allow for the emerge...
withdrawn-rejected-submissions
This paper received borderline scores, R1, R3, R4 gave a score of 6 and recommended a borderline acceptance. R2 provided by far the most detailed review and recommended a score of 5 (i.e., borderline reject). After the rebuttal, R2 comments, "I believe that the paper is still below the acceptance threshold, although on...
train
[ "4LmOLgxI-_P", "9bA2ptmMasO", "mIahx7Wnpl", "RQ99Enwgx7", "PnJvr6vPa-", "7LJ40OK_mUU", "Cjywd5aN-Qw", "xrfTN8v_moG", "GmeiePmBaS8", "vmD3HxBQ8q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "*** Summary ***\n\nThe paper investigates emergent gesture-based communication in Embodied Multi-Agent Populations. A noticeable feature of the paper is that it investigates emergent communication in the case of non-uniform distribution of intents and costly communication (i.e. agents are penalized for effort). Th...
[ 5, 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, 1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_Fblk4_Fd7ao", "iclr_2021_Fblk4_Fd7ao", "iclr_2021_Fblk4_Fd7ao", "9bA2ptmMasO", "GmeiePmBaS8", "4LmOLgxI-_P", "vmD3HxBQ8q", "4LmOLgxI-_P", "iclr_2021_Fblk4_Fd7ao", "iclr_2021_Fblk4_Fd7ao" ]
iclr_2021_aFvG-DNPNB9
Self-Reflective Variational Autoencoder
The Variational Autoencoder (VAE) is a powerful framework for learning probabilistic latent variable generative models. However, typical assumptions on the approximate posterior distributions can substantially restrict its capacity for inference and generative modeling. Variational inference based on neural autoregress...
withdrawn-rejected-submissions
The paper proposes a variant of the hierarchical VAE architectures. All reviewers felt that the paper's clarity was lacking. While the authors made very significant improvements during the feedback phase, which were recognized by reviewers, the paper could use a revision that takes clarity into account from the ground ...
train
[ "WsLZjg_lVKi", "Py72OEaC5Ai", "rRZ8dd4ya6m", "b7l2l0kKwy", "FUEyAzzhgcb", "etzqVnZv1AT", "PeFkkuNxmo5", "ZOPb8yirORl", "qxy9p7R5WzF", "yBCBlwr0SEL", "K0RK1jsjY1D", "M0acz10e6UK" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for the updated version.\n\nIn my opinion it improved the clarity quite a lot - especially for the residual distributional layers, which were in the initial version more difficult to understand, both in its construction and utility.", "\"The recurrent refinement (or autoregressive dependencies) is orth...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 7, 3 ]
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 3, 4 ]
[ "rRZ8dd4ya6m", "PeFkkuNxmo5", "iclr_2021_aFvG-DNPNB9", "etzqVnZv1AT", "iclr_2021_aFvG-DNPNB9", "yBCBlwr0SEL", "ZOPb8yirORl", "M0acz10e6UK", "K0RK1jsjY1D", "FUEyAzzhgcb", "iclr_2021_aFvG-DNPNB9", "iclr_2021_aFvG-DNPNB9" ]
iclr_2021_XEw5Onu69uu
Self-Labeling of Fully Mediating Representations by Graph Alignment
To be able to predict a molecular graph structure (W) given a 2D image of a chemical compound (U) is a challenging problem in machine learning. We are interested in learning f:U→W, where we have a fully mediating representation V such that f factors into U→V→W. However, observing V requires detailed and expensive labels. ...
withdrawn-rejected-submissions
The paper proposes a graph aligning approach generating rich and detailed labels given normal labels. Authors cast the problem in a domain adaptation setting, considering a source domain where "expensive" labels are available, and a target domain where only normal labels are available. The application scenario is the p...
train
[ "LssN8aD1_P", "NU5LBP9Vvp", "aUTZRTdEgLY", "kwZFlxt8vrf", "fbZmIZbBCf", "DqMjtQ0A9vr", "oip8fx2MU05", "seOkrlNmZn-", "XKrLwTGbHxg", "QYCxs3VLiPd" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers, Area Chair,\n\nWe have made our final rebuttal revision now available which should address the comments that were still pending during discussion phase.\nReviewers: We appreciate all of your valuable feedback which enabled us to make the paper stronger.\n\nThanks again and best regards,\n\nICLR 202...
[ -1, -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "iclr_2021_XEw5Onu69uu", "seOkrlNmZn-", "QYCxs3VLiPd", "iclr_2021_XEw5Onu69uu", "XKrLwTGbHxg", "oip8fx2MU05", "iclr_2021_XEw5Onu69uu", "iclr_2021_XEw5Onu69uu", "iclr_2021_XEw5Onu69uu", "iclr_2021_XEw5Onu69uu" ]
iclr_2021_rEaz5uTcL6Q
Neural spatio-temporal reasoning with object-centric self-supervised learning
Transformer-based language models have proved capable of rudimentary symbolic reasoning, underlining the effectiveness of applying self-attention computations to sets of discrete entities. In this work, we apply this lesson to videos of physical interaction between objects. We show that self-attention-based models ope...
withdrawn-rejected-submissions
This paper presents an approach to tackle visual reasoning by combining MONET and transformers. All reviewers agree that there is some performance improvement shown. But there are several concerns including clarity/writing (multiple reviewers point it), experiments (baselines) and most importantly missing insights from...
train
[ "lrisITSb-c", "xrRzKHUvwdG", "FymGN7DIL9H", "srp44MNWkcS", "Z0arxPMUDu", "yFV4P2bMe5i", "lqq22iNJwQX", "sshN2GdaamD", "LW6S4W9JWPq" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the temporal and spatial reasoning in videos. Specifically, the authors propose to combine unsupervised object representation learning MONet with self-attention transformer and introduce self-supervised learning through masked representation prediction. Experiments are conducted on CLEVRER and C...
[ 5, 4, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_rEaz5uTcL6Q", "iclr_2021_rEaz5uTcL6Q", "iclr_2021_rEaz5uTcL6Q", "sshN2GdaamD", "LW6S4W9JWPq", "lrisITSb-c", "xrRzKHUvwdG", "iclr_2021_rEaz5uTcL6Q", "iclr_2021_rEaz5uTcL6Q" ]
iclr_2021_1P2KAvsE59b
Robustness to Pruning Predicts Generalization in Deep Neural Networks
Why over-parameterized neural networks generalize as well as they do is a central concern of theoretical analysis in machine learning today. Following Occam's razor, it has long been suggested that simpler networks generalize better than more complex ones. Successfully quantifying this principle has proved difficult gi...
withdrawn-rejected-submissions
Summary: The authors propose to predict a neural network classifier's generalization performance by measuring the proportion of parameters that can be pruned to produce an equivalent network (in terms of training error). Experimental and theoretical evaluations are provided. Discussion: The overall opinion in reviews...
train
[ "GTL93KJdg_y", "leRSh2_60Nk", "zU2Y5GK5N1K", "iljRiYSZOg", "8sm3oGpNGuB", "AE-po61X3en", "j1IPfFNabpC", "m-N4Gc2tfI9", "jQffr-73GYN", "T80MQ_y-98P", "6Y3S5UFEKs4", "9OoJTTq-7u", "jBtVsLYk-_Q", "BMB5_bN5Qm7", "7KHC3NsCiL8", "ZXvr2DKKPj4", "ZDbmqBI0qcq", "hFd_ZzW8e0k" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a novel generalisation measure, i.e., measurement that indicates how well the network generalises, based on pruning. The idea is to measure the fraction of the weights that can be pruned (either randomly, or based on the norms) without hurting the training loss of the model. The paper provides t...
[ 5, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_1P2KAvsE59b", "iclr_2021_1P2KAvsE59b", "iclr_2021_1P2KAvsE59b", "8sm3oGpNGuB", "ZXvr2DKKPj4", "ZDbmqBI0qcq", "jQffr-73GYN", "T80MQ_y-98P", "jBtVsLYk-_Q", "7KHC3NsCiL8", "9OoJTTq-7u", "leRSh2_60Nk", "BMB5_bN5Qm7", "GTL93KJdg_y", "zU2Y5GK5N1K", "hFd_ZzW8e0k", "leRSh2_60Nk", ...
iclr_2021_gkOYZpeGEK
Uniform Manifold Approximation with Two-phase Optimization
We present a dimensionality reduction algorithm called Uniform Manifold Approximation with Two-phase Optimization (UMATO) which produces less biased global structures in the embedding results and is robust over diverse initialization methods than previous methods such as t-SNE and UMAP. We divide the optimization into ...
withdrawn-rejected-submissions
Reviewers generally agree that the proposed method UMATO, a two-phase optimization dimensionality reduction algorithm based on UMAP, is interesting and has potential, and that the paper is well-written. However, there are several concerns with the current paper. In particular, R1 is not convinced by the performance of ...
train
[ "d-eMbVE1L_B", "YL1tMP2Sryb", "MvL3_JLHIS", "fto0WdDI5yL", "nHt6YoCJJPW", "-0etz1Xwwee", "gPUWIkQkuzB", "qzfb5KlYXcc", "ih50tBUXvl", "hQ13L4J9jyN" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes UMATO, a low-dimensional embedding technique based on a two-phase optimization, the first phase embeds representative points (called hubs) for global structure preservation, and the second phase embeds the rest of the points for local structure preservation.\n\nQuality, clarity, originality and ...
[ 6, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2021_gkOYZpeGEK", "qzfb5KlYXcc", "qzfb5KlYXcc", "hQ13L4J9jyN", "ih50tBUXvl", "d-eMbVE1L_B", "d-eMbVE1L_B", "iclr_2021_gkOYZpeGEK", "iclr_2021_gkOYZpeGEK", "iclr_2021_gkOYZpeGEK" ]
iclr_2021_uELnyih9gqb
WAVEQ: GRADIENT-BASED DEEP QUANTIZATION OF NEURAL NETWORKS THROUGH SINUSOIDAL REGULARIZATION
Deep quantization of neural networks below eight bits can lead to superlinear benefits in storage and compute efficiency. However, homogeneously quantizing all the layers to the same level does not account for the distinction of the layers and their individual properties. Heterogenous assignment of bitwidths to individ...
withdrawn-rejected-submissions
This paper proposes a regularization term to control the bit-width and encourage the DNN weights to move toward the quantization intervals. The paper is well-written and the idea of using the sinusoidal period as a continuous representation is novel. However, the theoretical analysis provided is not consistent with the pro...
train
[ "-AW9SIzdoDD", "NlyQOxMxXI", "-01hKevJKc", "oGWZlFs2Pn7", "hJkbL-ihhRI", "DnXIqb4p1S2", "P5FD1Wdg5e", "gnKffpHu_rY", "5j07Sf3fUeB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThank you for the insightful and stimulating comments.\n\n===== (1) The theoretical analysis ===========\n\nIt seems that there is a misunderstanding in interpreting our theoretical and experimental results, accordingly we provide more details to clarify this.\n\nFirst, we would like to clarify that while the th...
[ -1, -1, -1, -1, -1, 4, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "DnXIqb4p1S2", "gnKffpHu_rY", "iclr_2021_uELnyih9gqb", "P5FD1Wdg5e", "5j07Sf3fUeB", "iclr_2021_uELnyih9gqb", "iclr_2021_uELnyih9gqb", "iclr_2021_uELnyih9gqb", "iclr_2021_uELnyih9gqb" ]
iclr_2021_25OSRH9H0Gi
Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms
Most of existing deep learning models rely on excessive amounts of labeled training data in order to achieve state-of-the-art results, even though these data can be hard or costly to get in practice. One attractive alternative is to learn with little supervision, commonly referred to as few-shot learning (FSL), and, in...
withdrawn-rejected-submissions
This paper is a systematic study of how assumptions that are present recent theoretical meta-learning bounds are satisfied in practical methods, and whether promoting these assumptions (by adding appropriate regularization terms) can improve performance of existing methods. The authors review common themes in theoretic...
train
[ "HgK2PPdTfq", "Ekyj5MyjaFL", "oMrAu15po5", "CCIRrRq-B4", "v8WK3lobTSL", "Mk2y1t9J9ld", "J28pXiIoFu6", "DQYVztNHIyL", "xJ6wWTQcOB", "rFEvcC0xEkH", "w8T96h1n7jE" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. \"The proposed regularization is not novel and is similar to weight decay and spectral normalization.\" It is important to think about the regularization terms as a whole, and to not take the terms separately because satisfying both assumptions is crucial and only one of them is not enough to ensure efficient f...
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "oMrAu15po5", "v8WK3lobTSL", "iclr_2021_25OSRH9H0Gi", "DQYVztNHIyL", "xJ6wWTQcOB", "w8T96h1n7jE", "rFEvcC0xEkH", "iclr_2021_25OSRH9H0Gi", "iclr_2021_25OSRH9H0Gi", "iclr_2021_25OSRH9H0Gi", "iclr_2021_25OSRH9H0Gi" ]
iclr_2021_hbzCPZEIUU
Connecting Sphere Manifolds Hierarchically for Regularization
This paper considers classification problems with hierarchically organized classes. We force the classifier (hyperplane) of each class to belong to a sphere manifold, whose center is the classifier of its super-class. Then, individual sphere manifolds are connected based on their hierarchical relations. Our technique r...
withdrawn-rejected-submissions
This paper introduces a method for hierarchical classification with deep networks. The idea is interesting, and as far as I know novel: namely, the authors add a regularizer to the last layer in order to enforce a hierarchical structure onto the classifiers. The idea of placing spheres (with a fixed radius) around each...
test
[ "ITMCTrwF2s8", "Yyp3YRmxAY1", "e-kfRohuAkw", "L1eSVpZzbKk", "wBTzL9kHwMa", "xL4MDtrefDE", "mS9kbsYdJVA", "6IrD26kEdrM", "nrEla74jOMk", "RETrq3vRt1I", "lrfxt72LwEy" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors proposed a novel reparameterization framework of the last network layer that takes semantic hierarchy into account. Specifically, the authors assume a predefined hierarchy graph, and model the classifier of child classes as a parent classifier plus offsets $\\delta$ recursively. The auth...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_hbzCPZEIUU", "iclr_2021_hbzCPZEIUU", "Yyp3YRmxAY1", "ITMCTrwF2s8", "mS9kbsYdJVA", "lrfxt72LwEy", "iclr_2021_hbzCPZEIUU", "Yyp3YRmxAY1", "RETrq3vRt1I", "iclr_2021_hbzCPZEIUU", "iclr_2021_hbzCPZEIUU" ]
iclr_2021_w8iCTOJvyD
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
From CNNs to attention mechanisms, encoding inductive biases into neural networks has been a fruitful source of improvement in machine learning. Auxiliary losses are a general way of encoding biases in order to help networks learn better representations by adding extra terms to the loss function. However, since they ar...
withdrawn-rejected-submissions
This work proposes new learning algorithms that fine-tune ("tailor") a model at test-time using unsupervised objectives. This formulation allows for introducing an inductive bias into the model that might improve generalization on unseen data. The proposed algorithm is demonstrated on two example tasks. The reviewers ...
train
[ "REvI6CLYi_C", "lawgTT7wpPJ", "-aA78DI4AMl", "tDR4UCG5Wfg", "dET5CBWAD05", "tzXCvboDF-", "1AO51IDpSR", "Y921m80PMc", "epS5O7i3Zeq" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your review.\n\n**“What is the formulation of the affine parameters? Is there any study about adapting the entire model versus the affine parameters?”**\n\n- Inspired by conditional normalization, our method CNGrad adds affine layers $\\vec{y} = \\vec{\\gamma}\\cdot \\vec{x} + \\vec{\\beta}$ and only op...
[ -1, -1, -1, -1, -1, 7, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, 3, 2, 3, 3 ]
[ "epS5O7i3Zeq", "Y921m80PMc", "1AO51IDpSR", "iclr_2021_w8iCTOJvyD", "tzXCvboDF-", "iclr_2021_w8iCTOJvyD", "iclr_2021_w8iCTOJvyD", "iclr_2021_w8iCTOJvyD", "iclr_2021_w8iCTOJvyD" ]
iclr_2021_76M3pxkqRl
Status-Quo Policy Gradient in Multi-agent Reinforcement Learning
Individual rationality, which involves maximizing expected individual return, does not always lead to optimal individual or group outcomes in multi-agent problems. For instance, in social dilemma situations, Reinforcement Learning (RL) agents trained to maximize individual rewards converge to mutual defection that is i...
withdrawn-rejected-submissions
In general there is agreement under reviewers that the ideas/method presented are somewhat interesting/promising but also that the paper lacks a lot of clarity. Reviewers agree that the paper needs more work (on the method) and more extensive experiments to be convincing, and that in its current form it is not mature e...
train
[ "TmfBH-Hlzp", "70Wr8K0h07J", "5sCDG0JO6W", "LcIdZrEGU0t", "-BpzofT9Sa5", "vyk8uQrate5", "IUtKA_DgVob", "xcbxBP4w51w", "rlDFqQEsLk", "7VZsgnQhjuz", "sgj36teIP7U", "iFcnJJNygCb", "Y1OY1CMZHK_" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Post-rebuttal update:** Thanks to the authors for engaging in the discussion and for the responses. The authors have provided a satisfying argument that with an appropriate choice of hyper parameters, the SQLoss does promote cooperative behaviors whenever all utilities are negative -- this addresses my main conc...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 2 ]
[ "iclr_2021_76M3pxkqRl", "-BpzofT9Sa5", "LcIdZrEGU0t", "vyk8uQrate5", "xcbxBP4w51w", "IUtKA_DgVob", "TmfBH-Hlzp", "sgj36teIP7U", "iFcnJJNygCb", "Y1OY1CMZHK_", "iclr_2021_76M3pxkqRl", "iclr_2021_76M3pxkqRl", "iclr_2021_76M3pxkqRl" ]
iclr_2021_N6SmiyDrkR5
What's in the Box? Exploring the Inner Life of Neural Networks with Robust Rules
We propose a novel method for exploring how neurons within a neural network interact. In particular, we consider activation values of a network for given data, and propose to mine noise-robust rules of the form X→Y , where X and Y are sets of neurons in different layers. To ensure we obtain a small and non-redundant se...
withdrawn-rejected-submissions
This paper proposes a method to explore neuron interactions within a neural network by deriving rules for the activations of units at different layers. The rules can presumably help interpret the inner workings of the neural network. The reviewers have very different opinions on the paper and the views did not converg...
train
[ "MkD-jIg_DFr", "e4rz_Skyvea", "64OxHsg6c9u", "TzigeKNTuVR", "P9y2BxDm4tg", "rpxYYGVMom2", "SWRUO3VzSl", "IMhojpniJ4l", "SnhPdXxMjy9", "Q9N7lNhypDh", "hICkJ2uoI9p", "d-k6qTuaLyF", "EUOZOpJx8i", "ECCIcwudWW4", "RBUcnLQ05Yf" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an approach to explainable supervised learning by extracting sets of rules for two individual layers within a neural network. The authors build their work on recent published work for patttern-based rule mining [0] to efficently find so-called robust rules. The authors evaluate the approach for ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2021_N6SmiyDrkR5", "64OxHsg6c9u", "rpxYYGVMom2", "P9y2BxDm4tg", "hICkJ2uoI9p", "SnhPdXxMjy9", "IMhojpniJ4l", "Q9N7lNhypDh", "EUOZOpJx8i", "ECCIcwudWW4", "MkD-jIg_DFr", "RBUcnLQ05Yf", "iclr_2021_N6SmiyDrkR5", "iclr_2021_N6SmiyDrkR5", "iclr_2021_N6SmiyDrkR5" ]
iclr_2021_IjIzIOkK2D6
Efficient Graph Neural Architecture Search
Recently, graph neural networks (GNN) have been demonstrated to be effective in various graph-based tasks. To obtain state-of-the-art (SOTA) data-specific GNN architectures, researchers turn to neural architecture search (NAS) methods. However, it remains a challenging problem to conduct efficient ar...
withdrawn-rejected-submissions
This paper presents a differentiable neural architecture search method for GNNs using Gumbel softmax-based gating for fast search. It also introduces a transfer technique to search architectures on smaller graphs with similar properties as the target graph dataset. The paper further introduces a search space based on G...
train
[ "hvOAe9je5Yl", "he_p37FzoZE", "HlKyj_yzwiq", "ucsFBmyWYOt", "k8PZew9hho", "X7z8IdeKgaT", "VrzATwkKyM", "_pcYhWdNTT2", "QisyeDWpDk", "DkLzaRbXwz2" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents a differentiable NAS method named EGAN for automatically designing GNN architectures. The main contribution is searching GNN architectures efficiently with an one-shot framework based on stochastic relaxation and natural gradient method. Extensive experiments conducted on node-level and graph-l...
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, 3 ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_IjIzIOkK2D6", "iclr_2021_IjIzIOkK2D6", "iclr_2021_IjIzIOkK2D6", "HlKyj_yzwiq", "DkLzaRbXwz2", "hvOAe9je5Yl", "_pcYhWdNTT2", "he_p37FzoZE", "iclr_2021_IjIzIOkK2D6", "iclr_2021_IjIzIOkK2D6" ]
iclr_2021_DHSNrGhAY7W
The Lipschitz Constant of Self-Attention
Lipschitz constants of neural networks have been explored in various contexts in deep learning, such as provable adversarial robustness, estimating Wasserstein distance, stabilising training of GANs, and formulating invertible neural networks. Such works have focused on bounding the Lipschitz constant of fully connecte...
withdrawn-rejected-submissions
This paper shows that L2 self-attention is Lipschitz and presents a new method for computing the Lipschitz constant. All reviewers are positive about the technical part of the paper. However, the major concern comes from the significance of the computed Lipschitz constant. The paper only presents some numerical results...
train
[ "CrJvQnKP9l", "KwyJ6PBRgBN", "J9zQVI7Tgv", "9QqbISxpFJ", "dpUyxUnlRhx", "Q8LsRXmrOj8", "Dtiava_FF9a", "fgYXwuF9KEp", "E8QK4yERZ2R", "qbbHAO79m2" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThis paper theoretically analyzes the Lipschitz constant of the self-attention module. In particular, the authors prove that the vanilla dot-product self-attention is not Lipschitz and propose a Lipschitz L2 self-attention whose Lipschitz constant is upper-bounded. The theoretical results and the asympt...
[ 7, -1, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "iclr_2021_DHSNrGhAY7W", "Q8LsRXmrOj8", "qbbHAO79m2", "E8QK4yERZ2R", "fgYXwuF9KEp", "Dtiava_FF9a", "CrJvQnKP9l", "iclr_2021_DHSNrGhAY7W", "iclr_2021_DHSNrGhAY7W", "iclr_2021_DHSNrGhAY7W" ]
iclr_2021_HZcDljfUljt
Filter pre-pruning for improved fine-tuning of quantized deep neural networks
Deep Neural Networks (DNNs) have many parameters and activation data, both of which are expensive to implement. One method to reduce the size of the DNN is to quantize the pre-trained model by using a low-bit expression for weights and activations, using fine-tuning to recover the drop in accuracy. However, it is gener...
withdrawn-rejected-submissions
Four reviewers rate this article borderline. R3 finds the paper clearly presented and the method effective, but misses quantitative analysis of the dynamic range problem as well as novelty. Following the discussion and revision, she/he considers the paper improved and updated the score to 5, still being concerned about...
train
[ "2q3M5RTvEY4", "Ks2tW-5vOZ9", "QKNhB_nddwr", "mEAfbVuxm1C", "3Ut_aYznT8l", "9FLSS66FBv7", "VPsI1EE5QG6", "rh8jxSAvhjf", "uFbhq-nk9V7", "m9Sl3h07sFk", "_WswB4aE17N", "q37Kk4RpcGL", "A9PzPA_joMn" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "### Overall\nThis work presents a pruning mechanism for the quantization scenario. Due to low-bit effects, the quantized network is hard to train properly. Therefore, the authors provide a new method called Pruning for Quantization (PfQ) and a workflow to solve the model compression problem practically. Compared to som...
[ 6, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_HZcDljfUljt", "iclr_2021_HZcDljfUljt", "iclr_2021_HZcDljfUljt", "uFbhq-nk9V7", "m9Sl3h07sFk", "_WswB4aE17N", "q37Kk4RpcGL", "iclr_2021_HZcDljfUljt", "QKNhB_nddwr", "2q3M5RTvEY4", "A9PzPA_joMn", "Ks2tW-5vOZ9", "iclr_2021_HZcDljfUljt" ]
iclr_2021_jQ0XleVhYuT
Double Generative Adversarial Networks for Conditional Independence Testing
In this article, we consider the problem of high-dimensional conditional independence testing, which is a key building block in statistics and machine learning. We propose a double generative adversarial networks (GANs)-based inference procedure. We first introduce a double GANs framework to learn two generators, and in...
withdrawn-rejected-submissions
This paper discusses the conditional independence test using GANs. In the same way as GCIT (Bellot & van der Schaar, 2019), they realize sampling under the null hypothesis by approximately generating samples from P(X|Z) with a GAN. They propose to use a test statistic defined by the maximum of generalized covariance mea...
train
[ "gcRKQSxAcqp", "oEPEGggMrx", "kBqriudb7WW", "xPzut6MVsXT", "cm3tOYLbUdH", "d0rbzZe4LEp", "fjbRTyURCE0", "ACAqZkstpCr", "r__JZ4zYnEZ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThe paper proposes a novel simulation-based testing procedure for conditional independence: X\perp Y | Z. The testing procedure incorporates GAN techniques, which are especially useful for dealing with high-dimensional data. The testing procedure first learns a generative adversarial network that is able ...
[ 5, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "iclr_2021_jQ0XleVhYuT", "r__JZ4zYnEZ", "fjbRTyURCE0", "ACAqZkstpCr", "gcRKQSxAcqp", "gcRKQSxAcqp", "iclr_2021_jQ0XleVhYuT", "iclr_2021_jQ0XleVhYuT", "iclr_2021_jQ0XleVhYuT" ]
iclr_2021_8SP2-AiWttb
Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness
Evaluating the robustness of a defense model is a challenging task in adversarial robustness research. Obfuscated gradients, a type of gradient masking, have previously been found to exist in many defense methods and cause a false signal of robustness. In this paper, we identify a more subtle situation called \emph{Imb...
withdrawn-rejected-submissions
The paper identifies a subtle gradient problem in adversarial robustness -- imbalanced gradients -- which can create a false sense of adversarial robustness. The paper provides insights into this problem and proposes a margin-decomposition-based solution for the PGD attack. Pros: - Novel insights into why some adve...
train
[ "ClWDvhypd6k", "S0OLMF2CWn", "znSLqvX2oQ8", "mzbkw9SNzey", "4UMPdNXWzsv", "QsL2tX47Wr4", "vtm6xD0cBJ2", "1w6a9shoSFP", "CQsrGZa0Ik", "M0UbJbncWIZ", "GGnitctB4Ee" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper identifies the issue of imbalanced gradients, verified through some recent defense methods. Motivated by this issue, a margin decomposition (MD) attack is proposed to offer a stronger robustness measure. In general, the paper is well written, and the studied problem is interesting. The MD perspectiv...
[ 6, 4, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, 2, 5 ]
[ "iclr_2021_8SP2-AiWttb", "iclr_2021_8SP2-AiWttb", "iclr_2021_8SP2-AiWttb", "GGnitctB4Ee", "ClWDvhypd6k", "M0UbJbncWIZ", "S0OLMF2CWn", "CQsrGZa0Ik", "GGnitctB4Ee", "iclr_2021_8SP2-AiWttb", "iclr_2021_8SP2-AiWttb" ]
iclr_2021_QnzSSoqmAvB
Playing Nondeterministic Games through Planning with a Learned Model
The MuZero algorithm is known for achieving high-level performance on traditional zero-sum two-player games of perfect information such as chess, Go, and shogi, as well as visual, non-zero sum, single-player environments such as the Atari suite. Despite lacking a perfect simulator and employing a learned model of envir...
withdrawn-rejected-submissions
There is a pretty good consensus that this paper should not be accepted at ICLR. The reviewers do not seem to think that extending MuZero to non-deterministic MuZero constitutes a significant advance. Three reviewers give clear rejects with scores (3, 4, 5), all with good confidence (4). A fourth reviewer gave a score of...
val
[ "OZsykQ7G1gM", "O-Y_RGuVgp", "OXzOhOlfXgb", "ltv6DKQo2L", "Mamw87s19I4", "SFFjaaZ30Of", "Q_EAkaGLa2G", "RDnIOc8CGx5", "NvD3b4cPkZ5", "wKE8jZRintM" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary\nThe paper extends MuZero to nondeterministic domains (NDMZ). Compared to MuZero, NDMZ also learns a function that determines who is to act (player 1, 2, or chance) and a distribution over chance outcomes. This makes it possible to employ MCTS search adjusted to handle nondeterministic nodes on top of a tree c...
[ 5, -1, -1, -1, -1, -1, 7, 6, 4, 3 ]
[ 4, -1, -1, -1, -1, -1, 1, 4, 4, 4 ]
[ "iclr_2021_QnzSSoqmAvB", "Q_EAkaGLa2G", "OZsykQ7G1gM", "RDnIOc8CGx5", "NvD3b4cPkZ5", "wKE8jZRintM", "iclr_2021_QnzSSoqmAvB", "iclr_2021_QnzSSoqmAvB", "iclr_2021_QnzSSoqmAvB", "iclr_2021_QnzSSoqmAvB" ]
iclr_2021_VCAXR34cp59
On Disentangled Representations Extracted from Pretrained GANs
Constructing disentangled representations is known to be a difficult task, especially in the unsupervised scenario. The dominating paradigm of unsupervised disentanglement is currently to train a generative model that separates different factors of variation in its latent space. This separation is typically enforced by...
withdrawn-rejected-submissions
This paper evaluates the extent to which disentangled representations can be recovered from pre-trained GANs with style-based generators by finding an orthogonal basis in the space of style vectors, and then training an encoder to map images to coordinates in the resulting latent space. To construct the orthogonal basi...
test
[ "PcgyaVdEX7g", "tbHfSI_mCIB", "054vCQdjL2x", "7RzEWJ72K7O", "pVxRdfEAFq", "07i14y9YeYa", "ut55ugwtFsA" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "[Summary]\nThis paper proposes a new approach to learn disentangled representations from generative models that were trained without a loss that explicitly enforces disentanglement. The motivation behind this work is to address the computational time cost of the hyperparameter tuning that is typically required fo...
[ 7, 4, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, 4 ]
[ "iclr_2021_VCAXR34cp59", "iclr_2021_VCAXR34cp59", "tbHfSI_mCIB", "PcgyaVdEX7g", "ut55ugwtFsA", "iclr_2021_VCAXR34cp59", "iclr_2021_VCAXR34cp59" ]
iclr_2021_zmgJIjyWSOw
UserBERT: Self-supervised User Representation Learning
This paper extends the BERT model to user data for pretraining user representations in a self-supervised way. By viewing actions (e.g., purchases and clicks) in behavior sequences (i.e., usage history) in an analogous way to words in sentences, we propose methods for the tokenization, the generation of input representa...
withdrawn-rejected-submissions
The paper discusses an extension of BERT for learning user representations based on activity patterns in a self-supervised setting. All reviewers have concerns about the validity of the claims and the significance of the experimental results. Overall, I agree with the reviewers that the paper needs more work to be publ...
train
[ "WZKoggSn-fA", "1-nzHMQ-24C", "eMcmj0LgXoJ", "RLUHKhZL1U5", "AgOQ9wRSqJU", "jIIGrV_HYxe", "ms4CLZz5kTb", "mCtQ5UTOou0", "z0udscpw8wO" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "##\nI have read the author response and I still think the paper is limited in terms of novelty, significance, and experiments. I would like to keep my current score. \n##\n\nThis work proposes a self-supervised pre-training approach to model users from user behavior time-series data. Given time-series data of user ...
[ 4, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ 3, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "iclr_2021_zmgJIjyWSOw", "mCtQ5UTOou0", "ms4CLZz5kTb", "WZKoggSn-fA", "z0udscpw8wO", "iclr_2021_zmgJIjyWSOw", "iclr_2021_zmgJIjyWSOw", "iclr_2021_zmgJIjyWSOw", "iclr_2021_zmgJIjyWSOw" ]
iclr_2021_UEtNMTl6yN
Neural Pooling for Graph Neural Networks
Tasks such as graph classification require graph pooling to learn graph-level representations from constituent node representations. In this work, we propose two novel methods using fully connected neural network layers for graph pooling, namely Neural Pooling Methods 1 and 2. Our proposed methods have the ability to h...
withdrawn-rejected-submissions
All four knowledgeable referees have indicated rejection due to many concerns. In particular, reviewers pointed out that the novelty of this paper is not clear because the difference from related work is very limited (i.e., the difference from Z. Wang and S. Ji is not clear, other than using one additional layer), and th...
train
[ "GlDtq-iEbwT", "sXsn_HqXRrK", "1pKZmI7UkCx", "gpRZcwyj6Fd", "hvJrBGY_keL", "-HH_TtSshvs", "22CQLYPWbRD", "zJYI4MkySro" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer 2, thank you for your valuable comments. We attempt to address your concerns as follows:\n\n1) Do both methods capture second-order statistics: Yes, only Method 2 captures second-order statistics, by computing Flatten(HTH).\n\n2) In this work we propose two novel techniques for graph pooling, Neural Poolin...
[ -1, -1, -1, -1, 3, 4, 3, 2 ]
[ -1, -1, -1, -1, 4, 5, 5, 5 ]
[ "hvJrBGY_keL", "zJYI4MkySro", "-HH_TtSshvs", "22CQLYPWbRD", "iclr_2021_UEtNMTl6yN", "iclr_2021_UEtNMTl6yN", "iclr_2021_UEtNMTl6yN", "iclr_2021_UEtNMTl6yN" ]