Dataset schema (one record per paper; the six review_* lists are index-aligned, one entry per comment):

paper_id            string, length 19–21
paper_title         string, length 8–170
paper_abstract      string, length 8–5.01k
paper_acceptance    string, 18 distinct values
meta_review         string, length 29–10k
label               string, 3 distinct values (train / val / test)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
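To make the flattened rows below easier to read, here is a minimal sketch of the first record as a plain Python dict, following the schema above. Long text fields are abbreviated with "..."; the values are copied from the first record, not invented.

```python
# Sketch of the first record below as a plain Python dict, following the
# schema above. Long text fields are abbreviated with "..." here.
record = {
    "paper_id": "iclr_2021_DegtqJSbxo",
    "paper_title": "Adversarial and Natural Perturbations for General Robustness",
    "paper_abstract": "In this paper we aim to explore ...",
    "paper_acceptance": "withdrawn-rejected-submissions",
    "meta_review": "The reviewers indicated a number of concerns ...",
    "label": "train",  # dataset split: train / val / test
    "review_ids": ["s9OSlXigbc5", "PvrknKRASrB", "CvfxhWk84E"],
    "review_writers": ["official_reviewer", "official_reviewer", "official_reviewer"],
    "review_contents": ["#### Summary ...", "...", "..."],
    "review_ratings": [4, 4, 4],
    "review_confidences": [5, 4, 4],
    "review_reply_tos": ["iclr_2021_DegtqJSbxo", "iclr_2021_DegtqJSbxo", "iclr_2021_DegtqJSbxo"],
}

# The six review_* lists are index-aligned: entry i of each list
# describes the same comment.
n = len(record["review_ids"])
assert all(
    len(record[k]) == n
    for k in ("review_writers", "review_contents", "review_ratings",
              "review_confidences", "review_reply_tos")
)
```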
iclr_2021_DegtqJSbxo
Adversarial and Natural Perturbations for General Robustness
In this paper we aim to explore the general robustness of neural network classifiers by utilizing adversarial as well as natural perturbations. Different from previous works which mainly focus on studying the robustness of neural networks against adversarial perturbations, we also evaluate their robustness on natural p...
withdrawn-rejected-submissions
The reviewers indicated a number of concerns (which I agree with) which have not been addressed by the authors as they have not provided any response. Indeed, the paper would be significantly improved once these issues are addressed.
train
[ "s9OSlXigbc5", "PvrknKRASrB", "CvfxhWk84E" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "#### Summary\n\nIn this paper, the authors evaluate the performance of classifiers trained and then later tested on both adversarially generated perturbations as well as more natural perturbations. By considering six different natural perturbations, they show empirically that natural perturbations can improve per...
[ 4, 4, 4 ]
[ 5, 4, 4 ]
[ "iclr_2021_DegtqJSbxo", "iclr_2021_DegtqJSbxo", "iclr_2021_DegtqJSbxo" ]
iclr_2021_HfnQjEN_ZC
Ballroom Dance Movement Recognition Using a Smart Watch and Representation Learning
Smart watches are being increasingly used to detect human gestures and movements. Using a single smart watch, whole body movement recognition remains a hard problem because movements may not be adequately captured by the sensors in the watch. In this paper, we present a whole body movement detection study using a singl...
withdrawn-rejected-submissions
This paper initially received three negative reviews: 4, 4, 4. The main concerns of the reviewers included limited methodological novelty and an oversimplistic experimental setup. The authors did not submit a response. As a result, the final recommendation is reject.
val
[ "4ac3H-dj8LB", "QeF9s_iwavE", "3vOHlZO_UxK" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Authors propose an approach to perform classification of ballroom dance movements (called figures) captured by the sensing mechanism of a smartwatch and discriminated via different ANN architectures. The sequence of figures are modelled as a Marlov chain, which work in a generative+discriminative fashion to outpu...
[ 4, 4, 4 ]
[ 5, 5, 3 ]
[ "iclr_2021_HfnQjEN_ZC", "iclr_2021_HfnQjEN_ZC", "iclr_2021_HfnQjEN_ZC" ]
iclr_2021_FN7_BUOG78e
Computing Preimages of Deep Neural Networks with Applications to Safety
To apply an algorithm in a sensitive domain it is important to understand the set of input values that result in specific decisions. Deep neural networks suffer from an inherent instability that makes this difficult: different outputs can arise from very similar inputs. We present a method to check that t...
withdrawn-rejected-submissions
Thank you for your submission to ICLR. The reviewers and I unanimously felt, even after some of the clarifications provided, that while there was some interesting element to this work, ultimately there were substantial issues with both the presentation and content of the paper. Specifically, the reviewers largely fel...
train
[ "7NTeW23Grct", "d14theNdzw", "KueV94mOyBj", "9z7mn0eA1F2", "SyO8OwOL5em", "pqtnIbPPq1N", "ZxXY6zONJnk", "YIgezrKe4y_", "jc6Zkbt-3q7", "NoE09WESWRZ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your reply!\n\nWe think that there are some insights about neural networks that DNN researchers would find profound or interesting that most users could not really appreciate (for example because it requires some subtle mathematical concept). \n\nOur point with the quote above can be more simply stated ...
[ -1, -1, -1, -1, -1, -1, 3, 4, 3, 2 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "d14theNdzw", "pqtnIbPPq1N", "NoE09WESWRZ", "ZxXY6zONJnk", "YIgezrKe4y_", "jc6Zkbt-3q7", "iclr_2021_FN7_BUOG78e", "iclr_2021_FN7_BUOG78e", "iclr_2021_FN7_BUOG78e", "iclr_2021_FN7_BUOG78e" ]
iclr_2021_3FkrodAXdk
Deep Ensembles with Hierarchical Diversity Pruning
Diverse deep ensembles hold the potential for improving accuracy and robustness of deep learning models. Both pairwise and non-pairwise ensemble diversity metrics have been proposed over the past two decades. However, it is also challenging to find the right metrics that can effectively prune those deep ensembles with ...
withdrawn-rejected-submissions
This work studies statistics of ensemble models that capture the prediction diversity between ensemble members. The goal of the work is to identify or construct a metric which is predictive of the holdout accuracy achieved by the ensemble prediction. Pros: * Studies empirically how measures of ensemble diversity rela...
test
[ "6zOQXYe5PJz", "gQkmOatsPBJ", "wYTSIU0m2rs", "NIjUFVrewdO" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe manuscript studies the problem of ensemble selection (pruning) with the ensemble consists of deep neural network models. The authors compare different diversity metrics, which they named collectively as Q-metric, visualize the accuracies of different ensembles on CIFAR-10 dataset where the ensembles ...
[ 4, 3, 3, 4 ]
[ 4, 4, 5, 4 ]
[ "iclr_2021_3FkrodAXdk", "iclr_2021_3FkrodAXdk", "iclr_2021_3FkrodAXdk", "iclr_2021_3FkrodAXdk" ]
iclr_2021_PkqwRo2wjuW
Learning Axioms to Compute Verifiable Symbolic Expression Equivalence Proofs Using Graph-to-Sequence Networks
We target the problem of proving the semantic equivalence between two complex expressions represented as typed trees, and demonstrate our system on expressions from a rich multi-type symbolic language for linear algebra. We propose the first graph-to-sequence deep learning system to generate axiomatic proofs of equival...
withdrawn-rejected-submissions
I think this is a very promising paper, but the work is not ready for publication. The most significant concern shared by several reviewers is the insufficient evaluation. For example, the work is not compared with more traditional approaches to equivalence checking or any other baselines beyond ablations of the prop...
train
[ "7wkB9AwuZNG", "x0sJ1038FNY", "UE0pb2qWgpT", "64Jjddd9B0l" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThis paper proposes a model for verifying semantic equivalence between symbolic linear algebra expressions. Expressions are represented by trees and equivalence is proven by a sequence of axioms applied to the first expression. The proposed model encodes the expression/program trees as nodes on a graph...
[ 4, 4, 5, 6 ]
[ 4, 3, 5, 3 ]
[ "iclr_2021_PkqwRo2wjuW", "iclr_2021_PkqwRo2wjuW", "iclr_2021_PkqwRo2wjuW", "iclr_2021_PkqwRo2wjuW" ]
iclr_2021_WUTkGqErZ9
Convolutional Neural Networks are not invariant to translation, but they can learn to be
When seeing a new object, humans can immediately recognize it across different retinal locations: we say that the internal object representation is invariant to translation. It is commonly believed that Convolutional Neural Networks (CNNs) are architecturally invariant to translation thanks to the convolution and/or po...
withdrawn-rejected-submissions
This paper received three initial reject ratings. No rebuttal was submitted by the authors. There is no basis for overturning the reviewers' assessments; this paper should be rejected.
train
[ "UE_HQ5hvq2C", "bTn2iBdTLRs", "2jXZSPp1_W" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper analysis and studies translation invariance in convolution neural networks. It argues that typically it is claimed that CNNs are translation invariant due to the convolution function, and that actually convolution are equivariant. While pooling is the actual function that gives local invariance (or glob...
[ 5, 4, 4 ]
[ 5, 3, 4 ]
[ "iclr_2021_WUTkGqErZ9", "iclr_2021_WUTkGqErZ9", "iclr_2021_WUTkGqErZ9" ]
iclr_2021_HMqNjkBEqP4
Bayesian Meta-Learning for Few-Shot 3D Shape Completion
Estimating the 3D shape of real-world objects is a key perceptual challenge. It requires going from partial observations, which are often too sparse and incomprehensible for the human eye, to detailed shape representations that vary significantly across categories and instances. We propose to cast shape completion as a...
withdrawn-rejected-submissions
This submission generated a lot of discussion. The main strengths of the paper: * It is an interesting application of meta-learning (to 3D shape completion), and a novel one. * It appears to work well: it is remarkable that the proposed model can reconstruct shapes as well as it does given a point cloud with only 50 i...
val
[ "2UvmIuzcdp", "ut_Z_QHDWVP", "PNKUUwS5grn", "Devys_lA8N", "YagcNcuUWoY", "SSdKHpHnHMZ", "7LEB7hiffCC", "b1Vd19hVzjU", "-cCpKHZfCLh" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a way of reconstructing a surface from sparse point clouds via a \"meta learning\" approach. Specifically, the authors view each shape in a collection as a \"domain\", and predicting the SDF values of points in R^3 to reconstruct a given shape (the reconstructed surface is the isosurface of the...
[ 4, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_HMqNjkBEqP4", "SSdKHpHnHMZ", "Devys_lA8N", "SSdKHpHnHMZ", "b1Vd19hVzjU", "2UvmIuzcdp", "-cCpKHZfCLh", "iclr_2021_HMqNjkBEqP4", "iclr_2021_HMqNjkBEqP4" ]
iclr_2021_ep81NLpHeos
Momentum Contrastive Autoencoder
Wasserstein autoencoder (WAE) shows that matching two distributions is equivalent to minimizing a simple autoencoder (AE) loss under the constraint that the latent space of this AE matches a pre-specified prior distribution. This latent space distribution matching is a core component in WAE, and is in itself a challeng...
withdrawn-rejected-submissions
This paper presents an interesting approach for training generative autoencoders with a latent space that lies on a hyperspherical subspace. However, the reviewers have raised concerns regarding the similarity of this work with several prior works and have questioned the experimental setup. Without the authors' respons...
train
[ "BoAASPJLR1I", "aZhxHe6EbGU", "1leGDR4fvdS", "AT3VdMjjCt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThe authors aim at constructing another generative autoencoder: reconstruction loss plus matching-aggregate-posterior-with-prior penalty. For this they use a constrastive loss in latent space.\n\nWithout paying attention to writing style and clarity, etc., the paper should be rejected to be published at ICLR alr...
[ 4, 4, 5, 3 ]
[ 4, 4, 4, 5 ]
[ "iclr_2021_ep81NLpHeos", "iclr_2021_ep81NLpHeos", "iclr_2021_ep81NLpHeos", "iclr_2021_ep81NLpHeos" ]
iclr_2021_INXUNEmgbnx
Neural Bayes: A Generic Parameterization Method for Unsupervised Learning
We introduce a parameterization method called Neural Bayes which allows computing statistical quantities that are in general difficult to compute and opens avenues for formulating new objectives for unsupervised representation learning. Specifically, given an observed random variable x and a latent discrete variable z,...
withdrawn-rejected-submissions
Summary: The authors propose a method for representing a posterior over discrete latent variables in representation learning problems using a neural network. Two applications are discussed: One are certain clustering problems, in which clusters are sufficiently separated. Another is the computation of mutual informatio...
train
[ "vAo_93nMVfL", "nTCPqXUjFvH", "7sJhlggOniA", "HfykrDJjS_8" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "# Summary \nThe paper introduces a new function $L(x)$ so that, when optimised under certain objectives defined over continuous observation $x$ and discrete latent $z$, learns the correct clustering probability $p(z|x)$. The loss functions considered are the Jensen-Fisher divergence and muture information. The aut...
[ 4, 5, 4, 5 ]
[ 3, 4, 4, 5 ]
[ "iclr_2021_INXUNEmgbnx", "iclr_2021_INXUNEmgbnx", "iclr_2021_INXUNEmgbnx", "iclr_2021_INXUNEmgbnx" ]
iclr_2021_jpDaS6jQvcr
Unsupervised Anomaly Detection by Robust Collaborative Autoencoders
Unsupervised anomaly detection plays a crucial role in many critical applications. Driven by the success of deep learning, recent years have witnessed growing interests in applying deep neural networks (DNNs) to anomaly detection problems. A common approach is to use autoencoders to learn a feature representation for t...
withdrawn-rejected-submissions
The paper describes an autoencoder-based approach to anomaly detection. The main weakness, not untypical for papers in this application area, is the experimental section. The problem itself may not be well-defined, and of course that makes practical comparison difficult. Perhaps different measures, e.g., remaining life...
train
[ "0oYmljBNuiV", "GSCSO9fyf_d", "LyNpr3Wr3HH", "wjmrKnKzgz2", "s5uvr7rFE1k", "neQ-XdbGoO7", "Q9s2S1Cdnoh", "134RZfNhEQQ", "3mKqL-Afy5M", "Zz6F3fUwfRK" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "a) I don't see how your response in part (c) for reviewer 4 helps in addressing my concern. Within the large space of methods in the AD space, I feel your proposal is a bit ad-hoc, and for it to be useful practically, this insight on how the framework performs with multiple AE's is needed and not only for scalabil...
[ -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "wjmrKnKzgz2", "s5uvr7rFE1k", "Q9s2S1Cdnoh", "134RZfNhEQQ", "3mKqL-Afy5M", "Zz6F3fUwfRK", "iclr_2021_jpDaS6jQvcr", "iclr_2021_jpDaS6jQvcr", "iclr_2021_jpDaS6jQvcr", "iclr_2021_jpDaS6jQvcr" ]
iclr_2021_BvrKnFq_454
Expectigrad: Fast Stochastic Optimization with Robust Convergence Properties
Many popular adaptive gradient methods such as Adam and RMSProp rely on an exponential moving average (EMA) to normalize their stepsizes. While the EMA makes these methods highly responsive to new gradient information, recent research has shown that it also causes divergence on at least one convex optimization problem....
withdrawn-rejected-submissions
The paper proposes a new adaptive optimization algorithm which is claimed to have better convergence properties and lower susceptibility to gradient variance. Reviewers found the idea of normalizing on the fly to be interesting, but raised some important concerns. Although similar to AdaGrad, Expectigrad has a very imp...
train
[ "x4yIHNXCfgI", "OrJGybNrTc5", "VR13ttdlPGE", "o1SR_VBigE8", "jkMGOAK9D_0", "j1RdSbrCGy7", "lrsQYnokN2A", "I07Ms4UK0e3" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary: The paper proposes a new adaptive method for stochastic optimization with convergence guarantees. Experimental results on two tasks viz., image classification and language translation are provided to show the benefit of the proposed algorithm.\n\nStrength:\n\n+ The paper proposes an adaptive stochastic op...
[ 5, 3, 4, -1, -1, -1, -1, 5 ]
[ 5, 4, 3, -1, -1, -1, -1, 3 ]
[ "iclr_2021_BvrKnFq_454", "iclr_2021_BvrKnFq_454", "iclr_2021_BvrKnFq_454", "x4yIHNXCfgI", "OrJGybNrTc5", "VR13ttdlPGE", "I07Ms4UK0e3", "iclr_2021_BvrKnFq_454" ]
iclr_2021_26WnoE4hjS
Measuring and mitigating interference in reinforcement learning
Catastrophic interference is common in many network-based learning systems, and many proposals exist for mitigating it. But, before we overcome interference we must understand it better. In this work, we first provide a definition and novel measure of interference for value-based control methods such as Fitted Q Iterat...
withdrawn-rejected-submissions
The paper investigates interference in reinforcement learning and introduces a novel measure that can be used in value-based methods. Although the reviewers acknowledge that the paper has merits (the topic is relevant and the paper is well written), they feel that the contribution is not sufficiently supported by eithe...
train
[ "7bxwste9euB", "NuMclkgiSnM", "2PcP9E_IGYV", "joVH9YoD9wT", "_KQJcZmBRE_", "wW3Hj5HFoAw", "xuVBmtDeb1G", "xGVpH98YYJP", "N6aqgFESdjH", "xjV-sP0ouSt", "zeh6EzlSmIA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the reason for interference, aka catastrophic forgetting, when using parametric models for Reinforcement Learning. The authors draw the connection with previous methods and introduce some reasonable measure of interference. Then, they introduce a method to explicitly address the problem of inter...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2021_26WnoE4hjS", "iclr_2021_26WnoE4hjS", "xGVpH98YYJP", "_KQJcZmBRE_", "7bxwste9euB", "xjV-sP0ouSt", "zeh6EzlSmIA", "N6aqgFESdjH", "NuMclkgiSnM", "iclr_2021_26WnoE4hjS", "iclr_2021_26WnoE4hjS" ]
iclr_2021_jNTeYscgSw8
Demystifying Loss Functions for Classification
It is common to use the softmax cross-entropy loss to train neural networks on classification datasets where a single class label is assigned to each example. However, it has been shown that modifying softmax cross-entropy with label smoothing or regularizers such as dropout can lead to higher performance. In this pape...
withdrawn-rejected-submissions
The paper presents an extensive empirical evaluation of several loss functions and regularization techniques used in deep networks. The authors conclude that the classical softmax is significantly outperformed by the other approaches, but there is no clear winner among them. Moreover, the authors have noticed two inter...
train
[ "5BgCdPWZeGt", "5XaJYMvxrz4", "h_tqszhdmYT", "iVNzD1qANtz", "4rIQleWCKk", "a9rV-4JgeM", "A7RZgS4TDD4", "lx_jopuSfZN", "xl9SqmH0b0", "7-HHpm6Ojaw" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their comments.\n\n> “It seems it is hard to make a conclusion to decide which single objective performs better than others…It seems to have little insight for future works that using novel methods or optimization frameworks”\n\nWe agree that, in Section 3.1, there is no single loss funct...
[ -1, -1, -1, -1, -1, -1, 5, 3, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "A7RZgS4TDD4", "lx_jopuSfZN", "xl9SqmH0b0", "4rIQleWCKk", "7-HHpm6Ojaw", "iclr_2021_jNTeYscgSw8", "iclr_2021_jNTeYscgSw8", "iclr_2021_jNTeYscgSw8", "iclr_2021_jNTeYscgSw8", "iclr_2021_jNTeYscgSw8" ]
iclr_2021_sfgcqgOm2F_
Natural Compression for Distributed Deep Learning
Modern deep learning models are often trained in parallel over a collection of distributed machines to reduce training time. In such settings, communication of model updates among machines becomes a significant performance bottleneck and various lossy update compression techniques have been proposed to alleviate this p...
withdrawn-rejected-submissions
Three reviewers provided negative reviews and the authors wrote detailed feedback. During the later discussion stages, the reviewers acknowledged that some concerns are alleviated (e.g. R1 raised score from 4 to 5), but two concerns still remain: i) the novelty is less clear to the reviewers; ii) the advantage over exi...
train
[ "Jn0-JW5uMZK", "FYJjqBMuL55", "hwVoHp_fQrU", "eq3WZP8M-L2", "C77J3MDCeZB", "yyQnbtreH89", "laHtEl67s2c", "anoh47sWMIn", "BtC_NZhee7l", "5nVVE2xq8EN" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a data compression method named natural compression. This method can be implemented very fast and could be added to other compression methods for additional compression. Then this is generalized to natural dithering for a more aggressive compression. In addition, it compares different compressi...
[ 5, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_sfgcqgOm2F_", "anoh47sWMIn", "Jn0-JW5uMZK", "FYJjqBMuL55", "5nVVE2xq8EN", "iclr_2021_sfgcqgOm2F_", "BtC_NZhee7l", "iclr_2021_sfgcqgOm2F_", "iclr_2021_sfgcqgOm2F_", "iclr_2021_sfgcqgOm2F_" ]
iclr_2021_dzZaIeG9-fW
Learning to Infer Run-Time Invariants from Source code
Source code is notably different from natural language in that it is meant to be executed. Experienced developers infer complex "invariants" about run-time state while reading code, which helps them to constrain and predict program behavior. Knowing these invariants can be helpful; yet developers rarely encode these ex...
withdrawn-rejected-submissions
The paper gives a way of constructing a dataset of programs aligned with invariants that the programs satisfy at runtime, and training a model to predict invariants for a given program. While the overall idea behind the paper is reasonable, the execution (in particular, the experimental evaluation) is problematic. As ...
train
[ "af5b2EyUI1y", "gU4Epd_Vxg", "o0bFk1Jdus-", "w9OcVw1PQR", "KotmMUQKhyV" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their feedback. At this point, we are not asking you to change your scores, but would like to respond to general concerns and clarify some comments here. We will take into account the feedback received while revising this work.\n\nFirst, we underscore that the goal of our approach is qui...
[ -1, 5, 5, 5, 3 ]
[ -1, 5, 3, 4, 4 ]
[ "iclr_2021_dzZaIeG9-fW", "iclr_2021_dzZaIeG9-fW", "iclr_2021_dzZaIeG9-fW", "iclr_2021_dzZaIeG9-fW", "iclr_2021_dzZaIeG9-fW" ]
iclr_2021_99M-4QlinPr
Efficient Competitive Self-Play Policy Optimization
Reinforcement learning from self-play has recently reported many successes. Self-play, where the agents compete with themselves, is often used to generate training data for iterative policy improvement. In previous work, heuristic rules are designed to choose an opponent for the current learner. Typical rules include c...
withdrawn-rejected-submissions
This paper investigates an interesting problem of multi-agent RL with self-play. We agree with the reviewers that the paper requires more work before it can be presented at a top conference. We would encourage the authors to use the reviewers' feedback to improve the paper and resubmit to one of the upcoming conferenc...
test
[ "SWZ6QotRgkq", "1oETMjfNMvr", "NsA2U2MdX1c", "D8RuTeBT1H_" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors present a rule for selecting opponents for self-play training in zero-sum games: Train each agent i against the agent j that is \"hardest\" for i (in the sense that i's payoff is least among all candidate opponents when playing against j). This principle is justified by appeal to \"pert...
[ 7, 5, 3, 5 ]
[ 3, 3, 4, 4 ]
[ "iclr_2021_99M-4QlinPr", "iclr_2021_99M-4QlinPr", "iclr_2021_99M-4QlinPr", "iclr_2021_99M-4QlinPr" ]
iclr_2021_X5ivSy4AHx
Enhanced First and Zeroth Order Variance Reduced Algorithms for Min-Max Optimization
Min-max optimization captures many important machine learning problems such as robust adversarial learning and inverse reinforcement learning, and nonconvex-strongly-concave min-max optimization has been an active line of research. Specifically, a novel variance reduction algorithm SREDA was proposed recently by (Luo e...
withdrawn-rejected-submissions
The paper introduces a new variant (SREDA-Boost) of the variance-reduced method SREDA for nonconvex-strongly-concave min-max optimization. Given that SREDA is already optimal in the worst case, the proposed modification is intended to improve the practical performance of the method by relaxing conditions needed at initializ...
train
[ "IpQDbo0YV-T", "HSaTxilmEDl", "Qm-l7b31qv3", "BrYJ5b4xwse" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an enhanced variant of the SREDA algorithm (Lou et al 2020), called SREDA-Boost, that improves SREDA on two aspects: the initial complexity and the step-size. The algorithm achieves the same complexity as the original SREDA scheme.\n\nThe main contribution of this paper is perhaps the following...
[ 4, 6, 5, 6 ]
[ 5, 5, 3, 3 ]
[ "iclr_2021_X5ivSy4AHx", "iclr_2021_X5ivSy4AHx", "iclr_2021_X5ivSy4AHx", "iclr_2021_X5ivSy4AHx" ]
iclr_2021__Ea-ECV6Vkm
Consistent Instance Classification for Unsupervised Representation Learning
In this paper, we address the problem of learning the representations from images without human annotations. We study the instance classification solution, which regards each instance as a category, and improve the optimization and feature quality. The proposed consistent instance classification (ConIC) approach simult...
withdrawn-rejected-submissions
This paper introduces a consistency loss for instance discrimination by adding a term to maximize the squared dot product between two views of the same image. The impact of the proposed approach is evaluated on a variety of settings with mixed improvements. While reviewers generally found the proposed method to be inte...
train
[ "SufIunNG2s", "DI1ZH8ecRQh", "tKyaMD8bMRl", "tG9Kf8DBfjU", "HknMklB9oN6", "j4u0GkOiGvb" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a loss function for unsupervised representation learning. It has two terms: an instance classification loss and a consistency loss. The novel part seems to be the consistency loss. It explicitly penalizes the dissimilarity between different views of the same instance.\n\nGood paper, accept\n\n+T...
[ 5, -1, -1, -1, 5, 5 ]
[ 3, -1, -1, -1, 4, 4 ]
[ "iclr_2021__Ea-ECV6Vkm", "HknMklB9oN6", "j4u0GkOiGvb", "SufIunNG2s", "iclr_2021__Ea-ECV6Vkm", "iclr_2021__Ea-ECV6Vkm" ]
iclr_2021_JHx9ZDCQEA
PolyRetro: Few-shot Polymer Retrosynthesis via Domain Adaptation
Polymers appear everywhere in our daily lives -- fabrics, plastics, rubbers, etc. -- and we could hardly live without them. To make polymers, chemists develop processes that combine smaller building blocks (monomers) to form long chains or complex networks (polymers). These processes are called polymerizations and wil...
withdrawn-rejected-submissions
This paper proposes a novel problem of polymer retrosynthesis, and a method to solve it. The authors formally define the polymer retrosynthesis optimization problem as a constrained problem to identify the monomers and the unit polymer, with the recursive and stability constraints. Further, since the main challenge wit...
train
[ "MPoN0CyMLOL", "O9zlfOU-T7N", "0A7Vc9-fFaR", "l3nuLyHmidd", "EySLbnS9QbG", "imE2uQrTXdR", "ucRPRi5vYH", "3jR_lMSZDCC", "jwLjJFyN7Z-", "fhDB0XsK_Wo", "H1Ag0JHQ6M_", "YX4OyPWmnO", "kmfIdi7vV9L", "mrWiPp8KdqZ", "ZEA6s9titLI" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for adding the new baselines. I think it makes the paper stronger.", "Thanks for your kindly reply! We appreciate it and respect your thoughts. However, we still want to reiterate that,\n- Although the equations are simple, our proposed approach is still data-driven. The polymerization template distrib...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "jwLjJFyN7Z-", "0A7Vc9-fFaR", "ucRPRi5vYH", "3jR_lMSZDCC", "YX4OyPWmnO", "iclr_2021_JHx9ZDCQEA", "YX4OyPWmnO", "kmfIdi7vV9L", "mrWiPp8KdqZ", "ZEA6s9titLI", "iclr_2021_JHx9ZDCQEA", "iclr_2021_JHx9ZDCQEA", "iclr_2021_JHx9ZDCQEA", "iclr_2021_JHx9ZDCQEA", "iclr_2021_JHx9ZDCQEA" ]
iclr_2021_4q8qGBf4Zxb
Network Architecture Search for Domain Adaptation
Deep networks have been used to learn transferable representations for domain adaptation. Existing deep domain adaptation methods systematically employ popular hand-crafted networks designed specifically for image-classification tasks, leading to sub-optimal domain adaptation performance. In this paper, we present Neur...
withdrawn-rejected-submissions
This work studies an intriguing problem of searching optimal architectures for unsupervised domain adaptation. It is based on a two-stage approach: (1) transferable architecture search via DARTS + MK-MMD; (2) transferable feature learning via Backbone + MCD. The reviews for this paper are very insightful, constructive...
train
[ "D0bF1PERwHm", "qSANRlqHhjy", "vGmbMiXfyk3", "rgIQuNLVKkN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces an approach to search for the best network architecture for a domain adaptation task. This is achieved by following a differentiable architecture search strategy in which an additional loss function is included to account for the domain shift. Specifically, the loss function aims to minimize ...
[ 4, 4, 6, 4 ]
[ 4, 5, 3, 5 ]
[ "iclr_2021_4q8qGBf4Zxb", "iclr_2021_4q8qGBf4Zxb", "iclr_2021_4q8qGBf4Zxb", "iclr_2021_4q8qGBf4Zxb" ]
iclr_2021_VlRqY4sV9FO
Human-interpretable model explainability on high-dimensional data
The importance of explainability in machine learning continues to grow, as both neural-network architectures and the data they model become increasingly complex. Unique challenges arise when a model's input features become high-dimensional: on one hand, principled model-agnostic approaches to explainability become too ...
withdrawn-rejected-submissions
This paper introduces an approach to model explainability on high-dimensional data by: (1) first mapping inputs to a smaller set of intelligible latent features, and then (2) applying the Shapley method to this set of latent features. Several methods are considered for (1), and empirical results are examined across sev...
train
[ "tF0nAU-Qns7", "Fr_NKfSzAFh", "pLNbv-gxPH9", "FzN-JDSL68r", "PhajJcTm0aM", "aVf_y2Zo3S", "HAuPemRQtig", "AbYdGY91jZT" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe paper proposes an approach to generate semantic explanations for high-dimensional data. The proposed approach consists of two modules -- the first module transforms the high-dimensional raw data into lower-dimensional semantic latent space and the second module applies Shapely explainability to thi...
[ 5, 3, -1, -1, -1, -1, 4, 7 ]
[ 4, 4, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_VlRqY4sV9FO", "iclr_2021_VlRqY4sV9FO", "HAuPemRQtig", "AbYdGY91jZT", "Fr_NKfSzAFh", "tF0nAU-Qns7", "iclr_2021_VlRqY4sV9FO", "iclr_2021_VlRqY4sV9FO" ]
iclr_2021_UJRFjuJDsIO
Why Convolutional Networks Learn Oriented Bandpass Filters: Theory and Empirical Support
It has been repeatedly observed that convolutional architectures when applied to image understanding tasks learn oriented bandpass filters. A standard explanation of this result is that these filters reflect the structure of the images that they have been exposed to during training: Natural images typically are locally...
withdrawn-rejected-submissions
The reviewers raised a number of concerns, but the authors provided no rebuttal to the reviewers' comments. One reviewer felt the experimental fitting was not thorough enough. If one used layers of oriented bandpass filters, separated by non-linearities, would that perform well on the task convnets are trained o...
train
[ "lznvMxmNww2", "yXV8kyo7UNF", "jq3ccfFx8k7", "gt8hTszN71c" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The continuous domain formalism is not adapted to what the authors would like to prove because convnet discrete kernels make inadequate and questionable, any inference through a continuous formalism: one can say anything we want, giving that one can choose the intrinsic underlying continuous kernel as we want. \nL...
[ 3, 6, 5, 3 ]
[ 5, 4, 4, 5 ]
[ "iclr_2021_UJRFjuJDsIO", "iclr_2021_UJRFjuJDsIO", "iclr_2021_UJRFjuJDsIO", "iclr_2021_UJRFjuJDsIO" ]
iclr_2021_OAdGsaptOXy
Pretrain Knowledge-Aware Language Models
How much knowledge do pretrained language models hold? Recent research observed that pretrained transformers are adept at modeling semantics but it is unclear to what degree they grasp human knowledge, or how to ensure they do so. In this paper we incorporate knowledge-awareness in language model pretraining without ch...
withdrawn-rejected-submissions
The authors propose to improve the LM's ability to model entities by signalling the existence of entities and allowing the model to also represent entities as units. The embeddings of the surface form and the entity unit are then added and passed through a layer to predict the next word. The paper evaluates on...
val
[ "IrN81COH3lw", "_jkKFgW2pa", "RbnRkTxq2Rh", "K79tP3CzQY", "uR5HsUlI2l", "acz3skdPJF", "px8P4-eHTvj", "a8jGXC5BNc4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "- Summary\n\nThis paper presents a knowledge-aware language model pretraining method without changing model architecture. Specifically, they add entity prediction task along with language modeling task to make the model aware of knowledge. Experiments show improved results on the LAMA knowledge probing task compar...
[ 5, 7, 4, -1, -1, -1, -1, 6 ]
[ 2, 3, 4, -1, -1, -1, -1, 2 ]
[ "iclr_2021_OAdGsaptOXy", "iclr_2021_OAdGsaptOXy", "iclr_2021_OAdGsaptOXy", "IrN81COH3lw", "_jkKFgW2pa", "RbnRkTxq2Rh", "a8jGXC5BNc4", "iclr_2021_OAdGsaptOXy" ]
iclr_2021_I-VfjSBzi36
EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets
Deep, heavily overparameterized language models such as BERT, XLNet and T5 have achieved impressive success in many NLP tasks. However, their high model complexity requires enormous computation resources and extremely long training time for both pre-training and fine-tuning. Many works have studied model compression on...
withdrawn-rejected-submissions
It is important to develop efficient training methods for BERT-like models since they have been widely used in real-world natural language processing tasks. The proposed approach is interesting. It speeds up BERT training by identifying lottery tickets in the early stage of training. We agree with the authors' rebutt...
train
[ "LjymxTRiccT", "hfD5lvu5Wp9", "r5yAWvGo_kf", "h_PnmVzKc-Q", "wWTVvhne1Ab", "78w7pSec4Sh", "NNcBt-Courh", "W_3BQltKKKT", "qQ1_4d8nyNP", "56zqxe4mAJV", "pYWYnwS55pz", "JUZn29zrvyP", "sAX5fneTI_2", "LxOVxNXxUNX", "WJCsET4sSBq", "_QHZA65mG4K", "DfnIh_mRH-" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper tackles a very important and under-studied problem: reducing the cost of training NLP models. The authors present a method that builds on the lottery ticket hypothesis (LTH). The authors first identify redundant structures early during training, then prune these structures, which leads to faster trainin...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "iclr_2021_I-VfjSBzi36", "WJCsET4sSBq", "pYWYnwS55pz", "wWTVvhne1Ab", "DfnIh_mRH-", "LjymxTRiccT", "LjymxTRiccT", "LxOVxNXxUNX", "WJCsET4sSBq", "_QHZA65mG4K", "_QHZA65mG4K", "DfnIh_mRH-", "DfnIh_mRH-", "iclr_2021_I-VfjSBzi36", "iclr_2021_I-VfjSBzi36", "iclr_2021_I-VfjSBzi36", "iclr_2...
iclr_2021_paUVOwaXTAR
Compositional Models: Multi-Task Learning and Knowledge Transfer with Modular Networks
Conditional computation and modular networks have been recently proposed for multitask learning and other problems as a way to decompose problem solving into multiple reusable computational blocks. We propose a novel fully-differentiable approach for learning modular networks. In our method, the modules can be invoked r...
withdrawn-rejected-submissions
This paper presents an approach for modular multi-task learning. All the reviewers believe the goals are appealing and the idea is reasonable. However, R2 and R4 raise concerns with respect to novelty. There are also strong concerns regarding experiments. The concerns vary from reproducibility to small improvements and...
train
[ "XSaGYwfkqlU", "ysUNIyTm4s", "cRreBvlF08N", "2A42KNeR4TM", "ciqKYsh5em", "8gekuNA9vA1", "XLoG_7KB95E", "P6cxbAri23Y" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank Reviewer 3 for their thorough and thoughtful comments. Here we would like to address some of the questions raised by the reviewer:\n\n* **Multitask learning: more comparisons ... needed.** Indeed, we agree that comparison with other baseline multitask learning methods are necessary to draw c...
[ -1, -1, -1, -1, 4, 5, 4, 4 ]
[ -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "8gekuNA9vA1", "ciqKYsh5em", "XLoG_7KB95E", "P6cxbAri23Y", "iclr_2021_paUVOwaXTAR", "iclr_2021_paUVOwaXTAR", "iclr_2021_paUVOwaXTAR", "iclr_2021_paUVOwaXTAR" ]
iclr_2021_UHGbeVORAAf
Multi-Representation Ensemble in Few-Shot Learning
Deep neural networks (DNNs) compute representations in a layer by layer fashion, producing a final representation at the top layer of the pipeline, and classification or regression is made using the final representation. A number of DNNs (e.g., ResNet, DenseNet) have shown that representations from the earlier layers c...
withdrawn-rejected-submissions
This paper introduces an ensemble method for few-shot learning. Although the introduced method yields competitive results, it is fair to say it is considerably more complicated than simpler algorithms and does not necessarily perform better. Given that ensembling for few-shot learning has been around for a while, it is not cle...
train
[ "n37cJxHydtl", "ozcq0lKqJYU", "njBfDFRtbfo", "WINm80EVQR5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a deeply supervised few-shot learning model via ensemble achieving state-of-the-art performance on mini-ImageNet and tiredImageNet. The authors first studied the classification accuracy on mini-Image across convolutional layers and found the network could perform well even in the middle layer. ...
[ 4, 4, 4, 5 ]
[ 4, 4, 5, 3 ]
[ "iclr_2021_UHGbeVORAAf", "iclr_2021_UHGbeVORAAf", "iclr_2021_UHGbeVORAAf", "iclr_2021_UHGbeVORAAf" ]
iclr_2021_mb2L9vL-MjI
The Quenching-Activation Behavior of the Gradient Descent Dynamics for Two-layer Neural Network Models
A numerical and phenomenological study of the gradient descent (GD) algorithm for training two-layer neural network models is carried out for different parameter regimes. It is found that there are two distinctive phases in the GD dynamics in the under-parameterized regime: An early phase in which the GD dynamics foll...
withdrawn-rejected-submissions
This paper empirically investigates the gradient dynamics of two-layer neural nets with ReLU activations on synthetic datasets under the $L^2$ loss. The empirical results show that for a specific type of initialization and less overparametrized neural nets, the gradient dynamics experience two phases: a phase that follows ...
train
[ "7qY6jOeyfcT", "Lp_RdmZQb4", "0sDvv8iVbBE", "yOeMIj_-Ld6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the gradient dynamics of two-layers neural networks. It is empirically shown that, for a specific type of initialization, for less over-parameterized neural networks, the gradient dynamics follows two phases: a phase that follows the random features model where all the neurons are \"quenched\", ...
[ 5, 5, 5, 5 ]
[ 4, 4, 4, 4 ]
[ "iclr_2021_mb2L9vL-MjI", "iclr_2021_mb2L9vL-MjI", "iclr_2021_mb2L9vL-MjI", "iclr_2021_mb2L9vL-MjI" ]
iclr_2021_jcN7a3yZeQc
Decorrelated Double Q-learning
Q-learning with value function approximation may perform poorly because of overestimation bias and imprecise estimates. Specifically, overestimation bias comes from the maximum operator over noisy estimates, which is exaggerated using the estimate of a subsequent state. Inspired by the recent advance of deep rein...
withdrawn-rejected-submissions
This paper investigates some variants of the double Q-learning algorithm and develops theoretical guarantees. In particular, it focuses on how to reduce the correlation between the two trajectories employed in the double Q-learning strategy, in the hope of rigorously addressing the overestimation bias issue that arises...
train
[ "YaZoTQj2J9n", "qAUWHDfCDQa", "rwAmhWey89L", "fftMr1r4arM", "iLayvWD1I87" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Your idea looks good. However, the comparison experiments lack similar methods which reduce the estimation error.\nRecommend the author compare with the following method: [1]Li Z, Hou X. Mixing Update Q-value for Deep Reinforcement Learning[C]//2019 International Joint Conference on Neural Networks (IJCNN). IEEE,...
[ -1, 4, 3, 5, 3 ]
[ -1, 3, 3, 4, 4 ]
[ "iclr_2021_jcN7a3yZeQc", "iclr_2021_jcN7a3yZeQc", "iclr_2021_jcN7a3yZeQc", "iclr_2021_jcN7a3yZeQc", "iclr_2021_jcN7a3yZeQc" ]
iclr_2021_rQYyXqHPgZR
Success-Rate Targeted Reinforcement Learning by Disorientation Penalty
Current reinforcement learning generally uses discounted return as its learning objective. However, real-world tasks may often demand a high success rate, which can be quite different from optimizing rewards. In this paper, we explicitly formulate the success rate as an undiscounted form of return with {0, 1}-binary re...
withdrawn-rejected-submissions
Despite the fact that some of the reviewers found the idea interesting, none of them believes that the paper is ready to be published at this stage. For example, better comparisons with existing/similar work and more solid arguments for why the idea is better than alternatives are mentioned. All considered, unfortunately ...
train
[ "kPn6zkx0bwg", "ZA9z3XI1PTg", "UMFfRZr9nFJ", "HdKPAc6WHJ", "oE3wdOXxn5I", "AYdOLv-YW9", "8fjmwZfS18l", "h0sSxWlxZMU", "qZqSBM09Hk" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "------------------------------------------\nPOST-REBUTTAL COMMENTS\n\nThanks for your comments.\n\nRe: A2, time augmentation in finite-horizon settings increases the size of the state space you need to keep in memory by at most a factor of 2... But in any case, further discussion of this issue will have to wait un...
[ 2, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ 5, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_rQYyXqHPgZR", "AYdOLv-YW9", "kPn6zkx0bwg", "8fjmwZfS18l", "h0sSxWlxZMU", "qZqSBM09Hk", "iclr_2021_rQYyXqHPgZR", "iclr_2021_rQYyXqHPgZR", "iclr_2021_rQYyXqHPgZR" ]
iclr_2021_D4QFCXGe_z2
R-LAtte: Attention Module for Visual Control via Reinforcement Learning
Attention mechanisms are generic inductive biases that have played a critical role in improving the state-of-the-art in supervised learning, unsupervised pre-training and generative modeling for multiple domains including vision, language and speech. However, they remain relatively under-explored for neural network arc...
withdrawn-rejected-submissions
This paper proposes an attention-endowed architecture for deep image-based RL. While some positive points were raised by the reviewers, most comments were on the negative side. The reviewers noted marginal/incremental advances in terms of empirical results and low novelty and significance. Moreover, the provided baseli...
train
[ "-X8KIezBG_y", "DFEoxoG6o9S", "lJD8mnmStsE", "W_wQrh3K9Dd", "Bg60I4S2Cbm", "bXEi01yVAd", "5DmwuZcKrNh" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ ">In Section 4.1 (Page 4), it should be Figure 6 rather than Figure 5 to support the claim that using a shared encoder gives better performance. The structure of the writing is a bit off. Shouldn't section 6 be a subsection for section 5.4?\n\nThank you for pointing out the room for improvement in our writing struc...
[ -1, -1, -1, -1, 4, 4, 5 ]
[ -1, -1, -1, -1, 4, 5, 3 ]
[ "Bg60I4S2Cbm", "bXEi01yVAd", "5DmwuZcKrNh", "iclr_2021_D4QFCXGe_z2", "iclr_2021_D4QFCXGe_z2", "iclr_2021_D4QFCXGe_z2", "iclr_2021_D4QFCXGe_z2" ]
iclr_2021_loe6h28yoq
Certified Robustness of Nearest Neighbors against Data Poisoning Attacks
Data poisoning attacks aim to corrupt a machine learning model via modifying, adding, and/or removing some carefully selected training examples, such that the corrupted model predicts any or attacker-chosen incorrect labels for testing examples. The key idea of state-of-the-art certified defenses against data poisoning...
withdrawn-rejected-submissions
Some reviewers expressed concerns about the soundness of the theory in the paper. Specifically, Theorem 3 does not seem to be correct. There are other concerns, such as the significance of the theoretical contributions, little empirical value, and the existence of much stronger results. Unfortunately, the authors did not provide re...
val
[ "MlWoMfTIJg8", "UK6HHtyxGdc", "l9EXtJhbJYF", "BeP6Z_DBeL", "fN-3VkcNXZh" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Summary:**\nFirst, the paper identifies k-Nearest Neighbor (kNN) and radius Nearest Neighbor (rNN) to be naturally effective baseline certified defenses against data poisoning attack. It is easy to see that kNN and rNN are resistant to poison attacks, since to flip the prediction of a test example, one would nee...
[ 4, 3, 5, 5, 4 ]
[ 3, 4, 4, 2, 4 ]
[ "iclr_2021_loe6h28yoq", "iclr_2021_loe6h28yoq", "iclr_2021_loe6h28yoq", "iclr_2021_loe6h28yoq", "iclr_2021_loe6h28yoq" ]
iclr_2021_J4XaMT9OcZ
Mitigating Deep Double Descent by Concatenating Inputs
The double descent curve is one of the most intriguing properties of deep neural networks. It contrasts the classical bias-variance curve with the behavior of modern neural networks, occurring where the number of samples nears the number of parameters. In this work, we explore the connection between the double descent ...
withdrawn-rejected-submissions
This paper proposes an augmentation construction to mitigate double descent. For any pair of data points, the constructed input is simply the concatenation of the two inputs and the constructed label is the average of their corresponding labels. The authors further empirically show that this would mitigate double descent....
train
[ "f_luMyGHZHE", "a0eGWMj8EOZ", "lzsT0KJj67", "npnC80YwuTh" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Summary**: In this article, the authors proposed a data augmentation procedure by concatenating the input data to produce an augmented dataset of size $O(n^2)$ from an original dataset of size $O(n)$, so as to mitigate the double descent curve. The authors showed experimentally that such construction does not im...
[ 4, 3, 5, 2 ]
[ 3, 3, 4, 4 ]
[ "iclr_2021_J4XaMT9OcZ", "iclr_2021_J4XaMT9OcZ", "iclr_2021_J4XaMT9OcZ", "iclr_2021_J4XaMT9OcZ" ]
iclr_2021_Rw_vo-wIAa
Multi-agent Policy Optimization with Approximatively Synchronous Advantage Estimation
Cooperative multi-agent tasks require agents to deduce their own contributions from shared global rewards, known as the challenge of credit assignment. General methods for policy-based multi-agent reinforcement learning to solve this challenge introduce differentiated value functions or advantage functions for individual...
withdrawn-rejected-submissions
All reviewers expressed interest in this promising approach, but raised questions that were not addressed by the authors during the discussion period. As concerns raised included insufficient repeats of empirical experiments to draw conclusions and the paper appearing to be in an early draft format, we cannot support a...
train
[ "7Kp8Kg_Fm8v", "KnW5S5EUtr2", "C4EgCKLnA_", "XizmkzrM0Yy" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThe paper deals with the problem of credit assignment and synchronous estimation in cooperative multi-agent reinforcement learning problems. The authors introduce marginal advantage functions and use them for the estimation of the counterfactual advantage function. These functions permit to decompose th...
[ 5, 5, 3, 4 ]
[ 2, 4, 4, 4 ]
[ "iclr_2021_Rw_vo-wIAa", "iclr_2021_Rw_vo-wIAa", "iclr_2021_Rw_vo-wIAa", "iclr_2021_Rw_vo-wIAa" ]
iclr_2021_QcqsxI6rKDs
Meta Gradient Boosting Neural Networks
Meta-optimization is an effective approach that learns a shared set of parameters across tasks for parameter initialization in meta-learning. A key challenge for meta-optimization based approaches is to determine whether an initialization condition can be generalized to tasks with diverse distributions to acceler...
withdrawn-rejected-submissions
The paper proposes a meta-gradient boosting framework to tackle the model-agnostic meta-learning problem. The idea is to use a base learner that learns shared information across tasks, and gradient-boosted modules to capture task-specific information. The experiments show that the proposed meta-gradient boosting framework (w...
train
[ "3DhIID3GIlO", "CD7G1JUE_MF", "VKPBwzs9Z0r", "7V3I9F5a247", "A6rsDF8cFR", "dgdgiZjtZZH", "RtheyDjxFrf", "fMtoqy3yqt" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Paper summary\nThis paper addresses a problem of model-agnostic meta-learning (MAML) and most of its variations/extensions - learning only a single parameter initialization for the entire task distribution, which might not be effective when the task distribution is too diverse. Inspired by gradient boosting, w...
[ 4, -1, -1, -1, -1, 4, 6, 5 ]
[ 5, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_QcqsxI6rKDs", "dgdgiZjtZZH", "fMtoqy3yqt", "3DhIID3GIlO", "RtheyDjxFrf", "iclr_2021_QcqsxI6rKDs", "iclr_2021_QcqsxI6rKDs", "iclr_2021_QcqsxI6rKDs" ]
iclr_2021_pBDwTjmdDo
Dynamic Graph Representation Learning with Fourier Temporal State Embedding
Static graph representation learning has been applied in many tasks over the years thanks to the invention of unsupervised graph embedding methods and, more recently, graph neural networks (GNNs). However, in many cases, we have to handle dynamic graphs where the structures of the graphs and the labels of the nodes are evolving ...
withdrawn-rejected-submissions
The paper proposes Fourier temporal state embedding, a new technique for embedding dynamic graphs. However, the paper needs improvement in its writing, its computational complexity analysis, and the thoroughness of its baseline comparisons.
train
[ "nluTxYPX1lQ", "_dvIW42EE6S", "Tbc4c02ax-N", "wCezuAQnrsu", "CKsVn87LYYi", "XMHZSU0fP7X", "_vXELcpherB", "RzSMZn-AFrm", "tmL3qrHYaB8" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I want to thank the authors for the response. I decide to maintain the original score.", "Thank you very much for your review of our paper. Your review gave good advice on this work, which is intuitive and helpful for future revision. \n\nThe original motivation for using TSE is to see the learning of temporal g...
[ -1, -1, -1, -1, -1, 5, 4, 4, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, 5, 5 ]
[ "_dvIW42EE6S", "RzSMZn-AFrm", "_vXELcpherB", "XMHZSU0fP7X", "tmL3qrHYaB8", "iclr_2021_pBDwTjmdDo", "iclr_2021_pBDwTjmdDo", "iclr_2021_pBDwTjmdDo", "iclr_2021_pBDwTjmdDo" ]
iclr_2021_FOR2VqgJXb
Evaluating representations by the complexity of learning low-loss predictors
We consider the problem of evaluating representations of data for use in solving a downstream task. We propose to measure the quality of a representation by the complexity of learning a predictor on top of the representation that achieves low loss on a task of interest. To this end, we introduce two measures: surplus d...
withdrawn-rejected-submissions
The paper studies the problem of evaluating representations and proposes two new metrics: surplus description length and epsilon sample complexity. Pros: - A good overview of existing methods and their corresponding weaknesses (i.e. sensitivity to dataset size and insensitivity to representation quality and computatio...
train
[ "BlGXAu_ezx0", "f_TpOYGr-2E", "HLk0G8-y4II", "AQCkrWrrtc8", "fZlgfBFBzPm", "RSRyFi7TPoH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for a thorough and insightful review. We especially appreciate that the reviewer recognizes the importance of the topic and the practicality of our proposed measures. We address some questions below.\n\n**“Although very plausible, the assumption might not hold...”**\n\nWe agree that the assum...
[ -1, -1, -1, 7, 4, 4 ]
[ -1, -1, -1, 4, 3, 3 ]
[ "AQCkrWrrtc8", "fZlgfBFBzPm", "RSRyFi7TPoH", "iclr_2021_FOR2VqgJXb", "iclr_2021_FOR2VqgJXb", "iclr_2021_FOR2VqgJXb" ]
iclr_2021_okT7QRhSYBw
Anti-Distillation: Improving Reproducibility of Deep Networks
Deep networks have been revolutionary in improving the performance of machine learning and artificial intelligence systems. Their high prediction accuracy, however, comes at the price of very high levels of model irreproducibility that do not occur with classical linear models. Two models, even if they are supposedly i...
withdrawn-rejected-submissions
The paper tries to argue the value of making ensembles more reproducible through the use of a correlation loss to try to make components as different as possible. The paper is tough to follow and the high level motivation is unclear. As one of the reviewers points out, don't ensembles provide an estimate of uncertainty...
train
[ "lF6qxjgftSs", "h-_YbptgpZK", "hmy359yEghf", "eUD5VOHWyT" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThe author proposes a method to train ensembles to have different prediction by using a correlation loss between the model's predictions. The authors show that their loss decreases the relative prediction difference between the models in the ensemble.\n\nStrong points:\nThe correlation loss is a great i...
[ 3, 3, 3, 3 ]
[ 3, 3, 4, 4 ]
[ "iclr_2021_okT7QRhSYBw", "iclr_2021_okT7QRhSYBw", "iclr_2021_okT7QRhSYBw", "iclr_2021_okT7QRhSYBw" ]
iclr_2021_uKZsVyFKbaj
It's Hard for Neural Networks to Learn the Game of Life
Efforts to improve the learning abilities of neural networks have focused mostly on the role of optimization methods rather than on weight initializations. Recent findings, however, suggest that neural networks rely on lucky random initial weights of subnetworks called "lottery tickets" that converge quickly to a solut...
withdrawn-rejected-submissions
Unfortunately, the authors did not submit a response during the rebuttal phase.
train
[ "POVvnyV5QEI", "xxVobZ2nvo6", "vrizSK6F15j", "0ITyQrQHcLb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work presents a numerical study of how well can deep learning frameworks learn the underlying rules of a discrete dynamical system when the training set is composed of pairs of configurations separated by some time interval. Specifically, they focus on the Game of Life and study the success rates of learning ...
[ 6, 5, 3, 5 ]
[ 4, 3, 5, 4 ]
[ "iclr_2021_uKZsVyFKbaj", "iclr_2021_uKZsVyFKbaj", "iclr_2021_uKZsVyFKbaj", "iclr_2021_uKZsVyFKbaj" ]
iclr_2021_KcLlh3Qe7KU
Ensembles of Generative Adversarial Networks for Disconnected Data
Most computer vision datasets are composed of disconnected sets, such as images of different objects. We prove that distributions of this type of data cannot be represented with a continuous generative network without error, independent of the learning algorithm used. Disconnected datasets can be represented in two wa...
withdrawn-rejected-submissions
I am recommending rejection for this paper for the following reasons: I agree that the main claim is an obvious consequence of the structure of the GAN generator and the prior. I'm also not sure why the authors restricted their analysis to GANs, but that's not a super important point to me. More important is that the...
train
[ "atEo6HF_cD4", "dcZsea4YJE", "gVtlX1VM_g_", "RECQxTo641l" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis work proposes that most relevant datasets to the machine learning community today have support on a mixture of disconnected components. They argue that popular GAN models cannot fit distributions of this kind and provide a number of proofs to convince the reader of this claim. The authors discuss ...
[ 4, 5, 7, 4 ]
[ 3, 3, 4, 4 ]
[ "iclr_2021_KcLlh3Qe7KU", "iclr_2021_KcLlh3Qe7KU", "iclr_2021_KcLlh3Qe7KU", "iclr_2021_KcLlh3Qe7KU" ]
iclr_2021_Twm9LnWK-zt
Searching towards Class-Aware Generators for Conditional Generative Adversarial Networks
Conditional Generative Adversarial Networks (cGAN) were designed to generate images based on the provided conditions, e.g., class-level distributions. However, existing methods have used the same generating architecture for all classes. This paper presents a novel idea that adopts NAS to find a distinct architecture fo...
withdrawn-rejected-submissions
This paper investigates the use of class-conditional architectures in GANs. It achieves this by employing neural architecture search (NAS) on top of reinforcement learning. Their main contribution is a “flexible and safe” search space; experiments are carried out on CIFAR-10 and -100. Standard performance results are a...
val
[ "Z0nmOqoSiVV", "e0v_bwr7-j4", "qgW51VZDfuN", "qIpcrwhW4_H", "GXFc7GrO4Mh", "LrYbBNvzVSg", "q9CbfbJ7tft", "mg2Fpsc7Oun", "VPt_l9eCeut", "6YWwvVDgTj", "me6tDJs9UK8", "x-pL7mEIRe4", "L8SETnSWTs3" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**\"SPADE [2] used an unconditional GAN\"**\n\nThank you for your reply.\n\nWhat I mean is that SPADE does not explicitly use a classification-based discriminator. For example, the SPADE‘s discriminator only judges whether the input image is real or fake. Besides, I think simply concatenating semantic mask with in...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3, 3 ]
[ "e0v_bwr7-j4", "LrYbBNvzVSg", "VPt_l9eCeut", "6YWwvVDgTj", "me6tDJs9UK8", "x-pL7mEIRe4", "L8SETnSWTs3", "iclr_2021_Twm9LnWK-zt", "iclr_2021_Twm9LnWK-zt", "iclr_2021_Twm9LnWK-zt", "iclr_2021_Twm9LnWK-zt", "iclr_2021_Twm9LnWK-zt", "iclr_2021_Twm9LnWK-zt" ]
iclr_2021_hx0D7wn6qIy
Semi-supervised learning by selective training with pseudo labels via confidence estimation
We propose a novel semi-supervised learning (SSL) method that adopts selective training with pseudo labels. In our method, we generate hard pseudo-labels and also estimate their confidence, which represents how likely each pseudo-label is to be correct. Then, we explicitly select which pseudo-labeled data should be use...
withdrawn-rejected-submissions
The paper proposes an approach that generates pseudo-labels along with confidence to help semi-supervised learning. Then, selected pseudo-labels are used to update the model. Moreover, the authors include a variation of mixup for data augmentation to train a more calibrated model. Experimental results justify the valid...
train
[ "rT-ICD3xv-F", "y4aD850Lba_", "v4VOBKMGsO", "Cua4yVjsqUw" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a pseudo-labeling-based method for semi-supervised leanring. The proposed method consists of (1) pseudo-labeled data selection based on prediction confidence for efficient training and (2) a data augmentation method named mixconf, which is a modification of mixup. \n\nOverall, the paper is well...
[ 4, 5, 5, 6 ]
[ 3, 2, 4, 3 ]
[ "iclr_2021_hx0D7wn6qIy", "iclr_2021_hx0D7wn6qIy", "iclr_2021_hx0D7wn6qIy", "iclr_2021_hx0D7wn6qIy" ]
iclr_2021_-RQVWPX73VP
Interpretable Meta-Reinforcement Learning with Actor-Critic Method
Meta-reinforcement learning (meta-RL) algorithms have successfully trained agent systems to perform well on different tasks within only a few updates. However, in gradient-based meta-RL algorithms, the Q-function at the adaptation step is mainly estimated from the returns of a few trajectories, which can lead to high variance in ...
withdrawn-rejected-submissions
A meta-RL algorithm that aims to improve meta-policy interpretability by reducing the variance and bias of the meta-gradient estimation. The method is evaluated on exploration in 2D navigation and on meta-RL benchmarks. Despite the importance of the research topic, the reviewers are unanimous that the paper is an early version an...
train
[ "z0u9pYP6aro", "whfj3opi86_", "tPfDdRCR_gX", "wFytLGTzGrA", "s1YkYEeSYYG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### **Summary and Contributions of Paper** \nThis paper proposes to improve the K-shot RL meta-learning problem by using an LSTM, whose repeated inputs are (s,a,s') state-action transitions, and whose step-wise outputs are context vectors. The policy, during the K-shot phase, then additionally observes the current...
[ 4, 3, 4, 2, 3 ]
[ 3, 4, 4, 5, 3 ]
[ "iclr_2021_-RQVWPX73VP", "iclr_2021_-RQVWPX73VP", "iclr_2021_-RQVWPX73VP", "iclr_2021_-RQVWPX73VP", "iclr_2021_-RQVWPX73VP" ]
iclr_2021_e3bhF_p0T7c
Analysis of Alignment Phenomenon in Simple Teacher-student Networks with Finite Width
Recent theoretical analysis suggests that ultra-wide neural networks always converge to global minima near the initialization under first order methods. However, the convergence property of neural networks with finite width could be very different. The simplest experiment with two-layer teacher-student networks shows t...
withdrawn-rejected-submissions
The reviewers seem to have reached a consensus that the paper is not ready for publication at ICLR. One of the major issues seems to be that the paper only analyzes the case of $d=2$. (In the AC's opinion, $d>2$ might be fundamentally more difficult to analyze than $d=2$.)
train
[ "d0bYdDc2ekk", "tG8knaiH18K", "8BCE0cliWzQ", "o-aoGlZP2-3", "GaF-16SbAv5", "1hLxxx2-4GL", "VfaW484GAuR", "WcNiNJjNCSd", "6-0mu0kqvvT" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "[Summary]\n\nThis paper studies the optimization landscape of one-hidden-layer neural networks in the teacher-student setting, where the ground truth teacher network is one relu and the learner network is m>=2 relus. The paper proves that (1) when m=2, any stationary point (of the student network) has to be aligne...
[ 3, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ 4, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_e3bhF_p0T7c", "GaF-16SbAv5", "d0bYdDc2ekk", "VfaW484GAuR", "WcNiNJjNCSd", "6-0mu0kqvvT", "iclr_2021_e3bhF_p0T7c", "iclr_2021_e3bhF_p0T7c", "iclr_2021_e3bhF_p0T7c" ]
iclr_2021_e3KNSdWFOfT
Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent
Many recent AI architectures are inspired by zero-sum games; however, the behavior of their dynamics is still not well understood. Inspired by this, we study standard gradient descent ascent (GDA) dynamics in a specific class of non-convex non-concave zero-sum games that we call hidden zero-sum games. In this class, p...
withdrawn-rejected-submissions
This paper studies the convergence of gradient descent ascent (GDA) dynamics in a specific class of non-convex non-concave zero-sum games that the authors call "hidden zero-sum games". Unlike general min-max games, these games have a well-defined notion of a "von Neumann solution". The authors show that if the hidden g...
train
[ "_kcY_wgGOF4", "MQEFB2F0BHU", "T8adNVo_n3c", "vuEkyZLiB-N", "5w-kYqyv_wQ", "CGKgC3peNDr", "ET5m9hwijWk", "9Ubgp4PeiR", "nW8aSilGDTW", "GdyGLqLT7gx", "2tDXKPsnByF", "QdWcodpQ1iP", "Avtg_wz0dtB", "3tluY7uc4PJ", "yzvuqHEe8QU", "fMLXlIbwzX4", "dTpjJswBCu", "R1-vTid89oW", "xx6nfbpv6Rs...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", ...
[ "Given the restrictions of maximum 8 pages, in the main paper we focused our discussion on simple GAN applications that our framework can capture. However, our paper is not written with a sole focus on GANs (no explicit mention of GANs on the title and abstract) but on this new class non-convex non-concave zero sum...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "T8adNVo_n3c", "iclr_2021_e3KNSdWFOfT", "vuEkyZLiB-N", "ET5m9hwijWk", "9Ubgp4PeiR", "yzvuqHEe8QU", "5w-kYqyv_wQ", "CGKgC3peNDr", "3tluY7uc4PJ", "fMLXlIbwzX4", "uDmFioIsY_G", "Avtg_wz0dtB", "xx6nfbpv6Rs", "R1-vTid89oW", "ykMg_0584Bl", "2tDXKPsnByF", "EvdW-pqg8_", "EvdW-pqg8_", "cF...
iclr_2021_jQUf0TmN-oT
SACoD: Sensor Algorithm Co-Design Towards Efficient CNN-powered Intelligent PhlatCam
There has been a booming demand for integrating Convolutional Neural Network (CNN)-powered functionalities into Internet-of-Things (IoT) devices to enable ubiquitous intelligent "IoT cameras". However, more extensive applications of such IoT systems are still limited by two challenges. First, some applications, espec...
withdrawn-rejected-submissions
This paper attempts to jointly search for the sensor and the neural network architecture. More specifically, the proposed approach jointly optimizes the parameters governing the PhlatCam sensor and the backend CNN model. In terms of the approach, the paper follows a well known DARTS formulation for the differentiable a...
train
[ "60SpRWrROA", "7dHJ8h3ES2u", "juxNlyrLVGs", "ul05W1TKHIs", "zCMKCzpFJ1E", "PAj5acPzOnj", "kbhy32PIto", "LF5INkkF36", "TIPG2MxCR5" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "#################################\n\nSummary:\n\nThe paper proposed to adopt differentiable network architecture search (DARTS) for the co-design of the sensor (a lensless camera) and the deep model for visual recognition tasks, so as to maximize the accuracy and minimize the energy consumption. The key idea is to...
[ 6, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, 3, 1, 4 ]
[ "iclr_2021_jQUf0TmN-oT", "60SpRWrROA", "kbhy32PIto", "kbhy32PIto", "LF5INkkF36", "TIPG2MxCR5", "iclr_2021_jQUf0TmN-oT", "iclr_2021_jQUf0TmN-oT", "iclr_2021_jQUf0TmN-oT" ]
iclr_2021_Rq31tXaqXq
VideoFlow: A Framework for Building Visual Analysis Pipelines
The past years have witnessed an explosion of deep learning frameworks like PyTorch and TensorFlow since the success of deep neural networks. These frameworks have significantly facilitated algorithm development in multimedia research and production. However, how to easily and efficiently build an end-to-end visual ana...
withdrawn-rejected-submissions
All reviewers appreciate the framework described in the paper and say it is a "useful tool", a "flexible, efficient, extensible, and secure visual analysis framework" and "in full fledged form may help the productivity while building visual analysis applications." However, the reviewers also point to significant shor...
train
[ "BfSI5ZHhobA", "wYvd3_7Xb6j", "ErXNDYINUT0", "9f_CIX4_5l", "2IabjocZRYY" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a tutorial to a video analysis platform software, i.e., VideoFlow, which represents a video analysis task as a computation graph, provides common functions like video decoding and database storage, integrates deep learning frameworks, e.g. Caffe/Pytorch/MXNet as built-in inference engines, and s...
[ 3, 3, 4, 3, 3 ]
[ 4, 4, 3, 4, 4 ]
[ "iclr_2021_Rq31tXaqXq", "iclr_2021_Rq31tXaqXq", "iclr_2021_Rq31tXaqXq", "iclr_2021_Rq31tXaqXq", "iclr_2021_Rq31tXaqXq" ]
iclr_2021_xcd5iTC6J-W
Hidden Markov models are recurrent neural networks: A disease progression modeling application
Hidden Markov models (HMMs) are commonly used for disease progression modeling when the true state of a patient is not fully known. Since HMMs may have multiple local optima, performance can be improved by incorporating additional patient covariates to inform parameter estimation. To allow for this, we formulate a spec...
withdrawn-rejected-submissions
There is consensus that the submission is not yet ready for publication. The reviews contain multiple comments and suggestions and I hope they can be useful for the authors.
train
[ "SS6RKvIU_gM", "ynBnG5Wc_to", "tCmSKdGzfKF", "pr8ls-ZoiK" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Summary\nAuthors demonstrated that one can encode the data likelihood function of an HMM using a specialized RNN architecture. Unlike previous work where neurons from different layers were multiplied together, the new encoding strictly followed the classical architecture restrictions of a neural network , i.e....
[ 5, 5, 3, 4 ]
[ 4, 3, 5, 4 ]
[ "iclr_2021_xcd5iTC6J-W", "iclr_2021_xcd5iTC6J-W", "iclr_2021_xcd5iTC6J-W", "iclr_2021_xcd5iTC6J-W" ]
iclr_2021_bVzUDC_4ls
Exploiting Verified Neural Networks via Floating Point Numerical Error
Motivated by the need to reliably characterize the robustness of deep neural networks, researchers have developed verification algorithms for deep neural networks. Given a neural network, the verifiers aim to answer whether certain properties are guaranteed with respect to all inputs in a space. However, little attenti...
withdrawn-rejected-submissions
There are many recent methods for the formal verification of neural networks. However, most of these methods do not soundly model the floating-point representation of real numbers. This paper shows that this unsoundness can be exploited to construct adversarial examples for supposedly verified networks. The takeaway is...
train
[ "qCHP5SNxrMi", "DbTa59vtkhO", "bKAWgRoLfq", "c7H2xOrTR5Z", "BPaPdAU0M-a", "P3E3o0FKJhD", "cKB_2PF4O0e", "iMZfgaEIlLv", "7emg9f8BWi7", "4NIpHlIh5AT", "ZnkLSqs2wmy", "zd3H4EhfIGP", "NDpPlRFSWA3" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for clearly expressing your concerns. However, we still believe that our paper is significant in the context of NN verification research.\n\n> The fact that a program may be certified sound under the assumption of reals may indeed be unsound is folklore in verification (hence the push for FP soundness in th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 4, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3, 5 ]
[ "DbTa59vtkhO", "iMZfgaEIlLv", "BPaPdAU0M-a", "NDpPlRFSWA3", "cKB_2PF4O0e", "iclr_2021_bVzUDC_4ls", "ZnkLSqs2wmy", "zd3H4EhfIGP", "4NIpHlIh5AT", "iclr_2021_bVzUDC_4ls", "iclr_2021_bVzUDC_4ls", "iclr_2021_bVzUDC_4ls", "iclr_2021_bVzUDC_4ls" ]
iclr_2021_x2ywTOFM4xt
Variational saliency maps for explaining model's behavior
Saliency maps have been widely used to explain the behavior of an image classifier. We introduce a new interpretability method which considers a saliency map as a random variable and aims to calculate the posterior distribution over the saliency map. The likelihood function is designed to measure the distance between t...
withdrawn-rejected-submissions
Overall the reviewers had various positive things to say about the paper, including that it was well written and easy to understand, topical, that the method was sensible, novel and interesting and that the computational efficiency (i.e. real time) was appealing. However, all the reviewers thought it wasn't quite read...
train
[ "x94qEbAnE_D", "S0dzD4Dsut", "8g1Q9Zuyx9d", "sLLxw2yBU8f", "fPnfM5v9Gh", "gb7Los6KRuv", "D8EOXOqDyr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe paper presents a new saliency map interpretability method for the task of image classification. It considers the saliency map as a random variable and computes the posterior distribution over it. The likelihood measures the predictions of the classifier for an image and its perturbed counterpart. T...
[ 4, -1, -1, -1, -1, 4, 5 ]
[ 3, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_x2ywTOFM4xt", "x94qEbAnE_D", "x94qEbAnE_D", "D8EOXOqDyr", "gb7Los6KRuv", "iclr_2021_x2ywTOFM4xt", "iclr_2021_x2ywTOFM4xt" ]
iclr_2021_rUVFU1oyAoy
Nonconvex Continual Learning with Episodic Memory
Continual learning aims to prevent catastrophic forgetting while learning a new task without accessing data of previously learned tasks. The memory for such learning scenarios builds a small subset of the data for previous tasks and is used in various ways such as quadratic programming and sample selection. ...
withdrawn-rejected-submissions
This work proposes to analyse convergence of episodic memory-based continual learning methods by looking at this problem through the lense of nonconvex optimisation. Based on the analysis a method is proposed to scale learning rates such that the bounds on the convergence rate are improved. Pros: - I agree with the re...
train
[ "WMUXQh87kP2", "mIB18bDk8Jv", "bvYymw5lZQc", "kJWBlPQDLB", "eWvXUtvk2xJ", "EcNt0KP9i2j", "SPluu2PQARs", "zvHDCg_Rlvm", "FBA-7swiXWN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "In this paper, the authors provide theoretical justifications for memory-based continual learning (CL) methods and provide a scaling learning rate method NCCL to improve the practical performance. The results look quite exciting (there is quite scant theoretical paper for CL), however, after looking into the detai...
[ 4, 5, 3, 4, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2021_rUVFU1oyAoy", "iclr_2021_rUVFU1oyAoy", "iclr_2021_rUVFU1oyAoy", "iclr_2021_rUVFU1oyAoy", "WMUXQh87kP2", "bvYymw5lZQc", "kJWBlPQDLB", "mIB18bDk8Jv", "iclr_2021_rUVFU1oyAoy" ]
iclr_2021_8_7yhptEWD
On the Neural Tangent Kernel of Equilibrium Models
Existing analyses of the neural tangent kernel (NTK) for infinite-depth networks show that the kernel typically becomes degenerate as the number of layers grows. This raises the question of how to apply such methods to practical "infinite depth" architectures such as the recently-proposed deep equilibrium (DEQ) model,...
withdrawn-rejected-submissions
This paper combines recently emerging NTK theory and kernels with DEQ models. In particular the authors use the root-finding capability of DEQ models to compute the corresponding NTK of DEQ models for fully connected and convolutional variants. The reviewers raised various concerns including lack of experimental detail...
train
[ "Hvk3IQNmFP9", "nru8XzPldKG", "_iBLLEj8Wb", "970hEgtM6F", "gkRs5ZcZNYf", "0lkQHux4YQ5", "mpxa544gMyn", "msfYlcZaJFs" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your review and suggestions, we appreciate your feedback.\n\nThe proofs are indeed incremental, but they are not the focus of this paper. Instead, the main focus is the concept of what happens to DEQ in the infinite width limit, and how can we calculate the limits using root-finding. The choice of para...
[ -1, -1, -1, -1, 6, 4, 3, 4 ]
[ -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "0lkQHux4YQ5", "gkRs5ZcZNYf", "mpxa544gMyn", "msfYlcZaJFs", "iclr_2021_8_7yhptEWD", "iclr_2021_8_7yhptEWD", "iclr_2021_8_7yhptEWD", "iclr_2021_8_7yhptEWD" ]
iclr_2021_pD9x3TmLONE
XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-Domain Mixup
Transferring knowledge from large source datasets is an effective way to fine-tune the deep neural networks of the target task with a small sample size. A great number of algorithms have been proposed to facilitate deep transfer learning, and these techniques could be generally categorized into two groups – Regularized...
withdrawn-rejected-submissions
All reviewers recommend rejection due to limited novelty and insufficient experimental analysis. The author’s response has addressed several other questions raised by the reviewers, but it was not sufficient to eliminate the main concerns about novelty (as the method is a combination of existing techniques) and missing...
train
[ "FfjHw8XfP-x", "NgeudA76v7", "_Q0fAbAkIhK", "VTnuWo_TCn", "Gc2eXLbeptj", "xy_hv7w2eb", "nxdk47DhYQX", "Mx3PuH0o6h" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes XMixup, a strategy for improving transfer learning in neural networks. Specifically, XMixup consists of mixup applied between target samples and source samples from the class pre-determined to be closest to target sample’s class. Experiments conducting transfer learning from pre-trained ImageNe...
[ 5, -1, -1, -1, -1, 4, 4, 4 ]
[ 4, -1, -1, -1, -1, 5, 4, 5 ]
[ "iclr_2021_pD9x3TmLONE", "FfjHw8XfP-x", "xy_hv7w2eb", "nxdk47DhYQX", "Mx3PuH0o6h", "iclr_2021_pD9x3TmLONE", "iclr_2021_pD9x3TmLONE", "iclr_2021_pD9x3TmLONE" ]
iclr_2021_ptbb7olhGHd
On the Robustness of Sentiment Analysis for Stock Price Forecasting
Machine learning (ML) models are known to be vulnerable to attacks both at training and test time. Despite the extensive literature on adversarial ML, prior efforts focus primarily on applications of computer vision to object recognition or sentiment analysis to movie reviews. In these settings, the incentives for adve...
withdrawn-rejected-submissions
One reviewer is positive, but that review is not of high quality. The other reviewers agree that this paper is interesting, but has too many limitations to be accepted by a highly competitive venue such as ICLR.
val
[ "t5i5WJnLXrC", "8K42v4OjTd8", "Lg79HSNt9Ld", "-mQzFgE3fS", "JlDtGQkgOV2", "dKj-JBD3QXa", "R7dGc2Rm0jc", "NFfoYVJFxjn" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the author for their comments and insight.\n\nThe goal of our paper was to develop a stock price forecasting pipeline and showcase the capabilities of an adversarial ML attack to the pipeline. We decided to investigate a single company as a proof of concept. We decided to work with Tesla in large part bec...
[ -1, -1, -1, -1, 5, 5, 4, 7 ]
[ -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "JlDtGQkgOV2", "NFfoYVJFxjn", "dKj-JBD3QXa", "R7dGc2Rm0jc", "iclr_2021_ptbb7olhGHd", "iclr_2021_ptbb7olhGHd", "iclr_2021_ptbb7olhGHd", "iclr_2021_ptbb7olhGHd" ]
iclr_2021_ascdLuNQY4J
Searching for Convolutions and a More Ambitious NAS
An important goal of neural architecture search (NAS) is to automate-away the design of neural networks on new tasks in under-explored domains, thus helping to democratize machine learning. However, current NAS research largely focuses on search spaces consisting of existing operations---such as different types of conv...
withdrawn-rejected-submissions
Motivated by the possibility of Neural Architecture Search on domains beyond computer vision, this paper introduces a new search space and search method to improve neural operators. It applies the technique to problems in vision and text. Reviewer 1 found the paper interesting and liked the motivation of considering d...
train
[ "2Uvl1rz_txE", "Z4thgyD67Xl", "d0lIh4dupl", "LK2SSB1IAdz", "nY9Ty4OVuWU", "opghoGzL8rg", "JW00y6fAhQ6", "DNt-1xWMW3b", "beNlUvJHBej", "EauVzx4Nz65", "YMGeABT7Fqi" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "# Summary\n\nThe paper proposes to search neural network operations that outperform some human-designed ones, e.g. convolution. Specifically, it proposes to extend the Kaleidoscope paper to formulate a new searchable operation. Combining with differently architecture search method, \n\n# Strength\n\nThe proposed K...
[ 5, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2021_ascdLuNQY4J", "beNlUvJHBej", "2Uvl1rz_txE", "2Uvl1rz_txE", "YMGeABT7Fqi", "YMGeABT7Fqi", "EauVzx4Nz65", "iclr_2021_ascdLuNQY4J", "iclr_2021_ascdLuNQY4J", "iclr_2021_ascdLuNQY4J", "iclr_2021_ascdLuNQY4J" ]
iclr_2021_i7aDkDEXJQU
Demystifying Learning of Unsupervised Neural Machine Translation
Unsupervised Neural Machine Translation or UNMT has received great attention in recent years. Though tremendous empirical improvements have been achieved, there is still a lack of theory-oriented investigation, and thus some fundamental questions like \textit{why} certain training protocol can work or not und...
withdrawn-rejected-submissions
This paper attempts to explain why popular UNMT training objective components (back-translation and denoising autoencoding) are effective. The paper provides experimental analysis and draws connections with ELBO and mutual information. Reviewers generally agree that the paper's goal is worthy: trying to form a better t...
train
[ "ckdHnUfy6eR", "CzMkblyw_K8", "lvDWD8jp426", "-gm-kf0o1FE", "hsO0ufI2ECm", "vbMoVwMHiW", "L28mnwOgfHv", "hByCy33pwqN" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper takes a closer look at the inner workings of unsupervised MT training.\n\nThe authors provide two alternate views on the backtranslation+DAE objectives used in unsupervised-MT. This interpretation sheds more light on the relationship between the two: for example, it appears that the DAE loss is critical...
[ 5, -1, -1, -1, -1, 6, 4, 5 ]
[ 3, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2021_i7aDkDEXJQU", "ckdHnUfy6eR", "vbMoVwMHiW", "L28mnwOgfHv", "hByCy33pwqN", "iclr_2021_i7aDkDEXJQU", "iclr_2021_i7aDkDEXJQU", "iclr_2021_i7aDkDEXJQU" ]
iclr_2021_Shjmp-QK8Y-
Prior Knowledge Representation for Self-Attention Networks
Self-attention networks (SANs) have shown promising empirical results in various natural language processing tasks. Typically, they gradually learn language knowledge over the whole training dataset in parallel and stacked ways, thereby modeling language representations. In this paper, we propose a simple and general rep...
withdrawn-rejected-submissions
This paper proposes to incorporate additional prior knowledge into transformer architectures for machine translation tasks. The definition of the problem is reasonable, despite the fact that there is a long thread of work on adding knowledge of different types into neural architectures for NMT. The proposed model, however,...
train
[ "ElA6SOtyMnF", "l5KFz5kL7Dd", "UZgoDCRtMzh", "onnodFZ8t1v" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your valuable and insightful suggestions! As you said, there are indeed many places to be improved:\n1) Unclear how exactly these two matrices are produced from M;\n2) The advantage of the proposed method seems to be unclear compare to the existing works;\n3) The motivation needs to be further refined a...
[ -1, 3, 5, 4 ]
[ -1, 5, 4, 4 ]
[ "iclr_2021_Shjmp-QK8Y-", "iclr_2021_Shjmp-QK8Y-", "iclr_2021_Shjmp-QK8Y-", "iclr_2021_Shjmp-QK8Y-" ]
iclr_2021_jk1094_ZiN
Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search
Neural Architecture Search (NAS) explores a large space of architectural motifs -- a compute-intensive process that often involves ground-truth evaluation of each motif by instantiating it within a large network, and training and evaluating the network with thousands or more data samples. Inspired by how biolog...
withdrawn-rejected-submissions
The reviewers overall appreciated the efforts of the authors in making NAS more computationally efficient. The paper could greatly benefit from further editing/restructuring with the goal of improving clarity, as it’s currently hard to navigate and understand in places. Future submissions of this work would benefit fro...
train
[ "ww06XfsqkDK", "ozucI0m11Rh", "nzL156adSwV", "tumAR6O112E", "Vp9MK17_M6S", "qZfhb4RA50b", "PmghpBeirh", "no4FtOCeCPW", "oP_Pb8Pcp1r", "T5xJy2r8-s7" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all reviewers for their insightful comments! We recognize that reviewing is time-consuming work, and we are deeply appreciative. We are glad that the reviewers found 'Synthetic Petri Dish' to be a novel and well motivated method with a potential of real world impact. Below, we’ve written res...
[ -1, -1, -1, -1, -1, -1, 4, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 1, 3 ]
[ "iclr_2021_jk1094_ZiN", "nzL156adSwV", "PmghpBeirh", "oP_Pb8Pcp1r", "no4FtOCeCPW", "T5xJy2r8-s7", "iclr_2021_jk1094_ZiN", "iclr_2021_jk1094_ZiN", "iclr_2021_jk1094_ZiN", "iclr_2021_jk1094_ZiN" ]
iclr_2021_P3WG6p6Jnb
Offline Policy Optimization with Variance Regularization
Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to mismatch between dataset and the target policy, leading to high variance and over-es...
withdrawn-rejected-submissions
The reviewers are unanimous that the submission does not clear the bar for ICLR.
train
[ "-KC8p8WZa7", "KWbEiP0Hvf4", "Qz-FiRH8FWx", "359SqokvoX", "SKLJmVK1BI", "tXpGkh9rbJF", "8cFaf6Tat1R", "nxe8zwmpHJ", "1AzapEGTDLa", "0D35vqc5NS_" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "```\"The entire Appendix B.2 is wrong, where the 3rd equality (aka, line 2 of Eq. 39) does NOT necessarily hold. The term d_\\pi(\\theta)’s gradient is not computed at all, and therefore, any results afterward are not correct\"\n\nWe hope the above comment helps clarify this confusion. The gradient of the term $d_...
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "Qz-FiRH8FWx", "nxe8zwmpHJ", "nxe8zwmpHJ", "nxe8zwmpHJ", "1AzapEGTDLa", "1AzapEGTDLa", "0D35vqc5NS_", "iclr_2021_P3WG6p6Jnb", "iclr_2021_P3WG6p6Jnb", "iclr_2021_P3WG6p6Jnb" ]
iclr_2021_ml1LSu49FLZ
Topic-aware Contextualized Transformers
Training on disjoint fixed-length segments, Transformers successfully transform static word embeddings into contextualized word representations. However, they often restrict the context of a token to the segment it resides in and hence neglect the flow of contextual information across segments, failing to capture longe...
withdrawn-rejected-submissions
This paper proposes enhancing contextualized word embeddings learned by Transformers by modeling long-range dependencies via a deep topic model, using a Poisson Gamma Belief Network (PGBN). The experimental results show incorporating topic information can further improve the performance of Transformers. While this is a...
train
[ "OIqI3GXNBSZ", "FYReshArlB", "b_LgHFqdXwD" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper introduces a novel LM architecture that combines a Transformer with a PGBN topic model, enabling the transformer model to make use of addtional context topic information. The PGBN extracts topic information from the input, which is then used to enrich the information that is available to the transformer....
[ 4, 7, 4 ]
[ 3, 4, 4 ]
[ "iclr_2021_ml1LSu49FLZ", "iclr_2021_ml1LSu49FLZ", "iclr_2021_ml1LSu49FLZ" ]
iclr_2021_6IVdytR2W90
MSFM: Multi-Scale Fusion Module for Object Detection
Feature fusion is beneficial to object detection tasks in two ways. On one hand, detail and position information can be combined with semantic information when high- and low-resolution features from shallow and deep layers are fused. On the other hand, objects can be detected at different scales, which improves the rob...
withdrawn-rejected-submissions
This submission proposes an approach for fusing representations at multiple scales to improve object detection systems. Reviewers thought the paper was well-written and showed positive results on COCO, a common object detection benchmark. However, reviewers agreed that there was not sufficient methodological novelty or...
test
[ "bYfKpqYXuRD", "DsPfTyOB54_", "4fDxYerty5e", "veRSu22GcL", "B6qoZ23WB1s", "yHGFg6cjMt8", "n2YSiKuodII", "csoADX-1jKc" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors study the problem of scale-friendly feature fusion for object detection. Specifically, the authors propose to process features at each layer of a feature pyramid network at multiple scales and fuse them back into a single scale. To be specific, they resize features at a layer into multip...
[ 4, -1, -1, -1, -1, 3, 3, 3 ]
[ 5, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2021_6IVdytR2W90", "yHGFg6cjMt8", "bYfKpqYXuRD", "n2YSiKuodII", "csoADX-1jKc", "iclr_2021_6IVdytR2W90", "iclr_2021_6IVdytR2W90", "iclr_2021_6IVdytR2W90" ]
iclr_2021_gp5Uzbl-9C-
Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning
Inducing causal relationships from observations is a classic problem in machine learning. Most work in causality starts from the premise that the causal variables themselves have known semantics or are observed. However, for AI agents such as robots trying to make sense of their environment, the only observables are lo...
withdrawn-rejected-submissions
This paper proposes a suite of benchmark visual model-based RL tasks to evaluate causal discovery approaches under systematically varying causal graphs. Despite some disagreement on this point among reviewers, I would come down on the side of saying that a better-executed version of this paper would have been a good fi...
train
[ "gFeXm2wjuy", "STIcPRr_Q0s", "EOeFsKSF70T", "tpAjAjAxgVv", "uqJ3jRk01YQ", "GjB2cBsHt8o", "-FifBaoRx1o", "RrcwhJ8uHid", "3H1WsetLr6L", "DwofMlu2zIu", "7xKVC8puAJt", "HzQAu1lyMCr", "7XGr0p1Hmk6", "lBHuVyKdf1", "C2M1hAEl2Kc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "=== Summary\n\nThis paper proposes a benchmark that aims to systematically evaluate models' ability in learning representations of high-level variables as well as causal structures among them. The authors introduce two benchmarking RL environments:\n- One is in a physical domain where an agent is pushing blocks of...
[ 5, 6, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_gp5Uzbl-9C-", "iclr_2021_gp5Uzbl-9C-", "iclr_2021_gp5Uzbl-9C-", "iclr_2021_gp5Uzbl-9C-", "iclr_2021_gp5Uzbl-9C-", "7xKVC8puAJt", "7XGr0p1Hmk6", "C2M1hAEl2Kc", "tpAjAjAxgVv", "EOeFsKSF70T", "HzQAu1lyMCr", "gFeXm2wjuy", "DwofMlu2zIu", "STIcPRr_Q0s", "tpAjAjAxgVv" ]
iclr_2021_3FAl0W6gZ_e
Three Dimensional Reconstruction of Botanical Trees with Simulatable Geometry
We tackle the challenging problem of creating full and accurate three dimensional reconstructions of botanical trees with the topological and geometric accuracy required for subsequent physical simulation, e.g. in response to wind forces. Although certain aspects of our approach would benefit from various improvements,...
withdrawn-rejected-submissions
The paper proposes and demonstrates a method to reconstruct the 3D shape of a tree from drone data. While the reviewers all appreciated the work, all felt there were many shortcomings of the paper with respect to an ICLR audience: (a) no machine learning novelty, (b) a highly interactive data processing method, (c) only one ex...
train
[ "bWGXZe8yGzR", "9QgYM0VzYG8", "z5_VVOE1Xlg", "RJsLIn4PCyN", "3U_ohdrva-2", "tcL0clZKBJu", "TQ9Oc95RA6d", "42s6luywKnK" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper considers an interesting and challenging problem, namely reconstructing 3D structures with very fine details from image data captured \"in-the-wild\", i.e., taken in uncontrolled environments. More concretely, the paper considers the problem of obtaining a detailed 3D reconstruction of trees that results...
[ 4, -1, -1, -1, -1, 4, 3, 6 ]
[ 4, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2021_3FAl0W6gZ_e", "42s6luywKnK", "bWGXZe8yGzR", "tcL0clZKBJu", "TQ9Oc95RA6d", "iclr_2021_3FAl0W6gZ_e", "iclr_2021_3FAl0W6gZ_e", "iclr_2021_3FAl0W6gZ_e" ]
iclr_2021_vY0bnzBBvtr
Provably More Efficient Q-Learning in the One-Sided-Feedback/Full-Feedback Settings
Motivated by the episodic version of the classical inventory control problem, we propose a new Q-learning-based algorithm, Elimination-Based Half-Q-Learning (HQL), that enjoys improved efficiency over existing algorithms for a wide variety of problems in the one-sided-feedback setting. We also provide a simpler variant...
withdrawn-rejected-submissions
This paper explores the performance of Q-learning in the presence of either one-sided feedback or full feedback. Such feedbacks play an important role in improving the resulting regret bounds, which are (almost) not affected by the dimension of the state and action space. The motivation of such feedback settings stems ...
test
[ "4oy8fg55ypa", "b50Eie_Gwck", "lTLC0jgtOUR", "MSZ6g28FOGD", "VAKQkzl9MiT", "pCrX_TCzYsL", "bZFihBqOOUi", "gk99AyMi7z" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "After rebuttal:\n\nMy main concerns are addressed, and I changed my score to 5 accordingly.\n\n------\nMotivated by OR problems, this paper extends Q-learning algorithm to one-sided-feedback and full-feedback settings. With additional assumptions, this paper proves a $\\sqrt{T}$-regret bound with no dependence on ...
[ 5, -1, -1, -1, -1, 4, 6, 5 ]
[ 4, -1, -1, -1, -1, 2, 3, 4 ]
[ "iclr_2021_vY0bnzBBvtr", "4oy8fg55ypa", "pCrX_TCzYsL", "gk99AyMi7z", "bZFihBqOOUi", "iclr_2021_vY0bnzBBvtr", "iclr_2021_vY0bnzBBvtr", "iclr_2021_vY0bnzBBvtr" ]
iclr_2021_GJkTaYTmzVS
Play to Grade: Grading Interactive Coding Games as Classifying Markov Decision Process
Contemporary coding education often presents students with the task of developing programs that have user interaction and complex dynamic systems, such as mouse-based games. While pedagogically compelling, grading such student programs requires dynamic user inputs; therefore, they are difficult to grade with unit tests. In...
withdrawn-rejected-submissions
The paper studies a novel problem setting of automatically grading interactive programming exercises. Grading such interactive programs is challenging because they require dynamic user inputs. The paper's main strengths lie in formally introducing this problem, proposing an initial solution using reinforcement learning...
train
[ "RMcUhlh-vaW", "0QYKY3Hlnke", "jJDwfMGUyF9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- Quality : Okay\n- Clarity : Poor\n- Originality : Good at problem formulation, but then becomes bad at solution\n- Significance : Could be very significant if done right (not quite there yet).\n\nList of Cons :\n\nA. Needs improvement on writing:\n\n1) \nThe task of the students was confusing to understand. Spec...
[ 4, 3, 5 ]
[ 4, 4, 4 ]
[ "iclr_2021_GJkTaYTmzVS", "iclr_2021_GJkTaYTmzVS", "iclr_2021_GJkTaYTmzVS" ]
iclr_2021_RcJHy18g1M
Outlier Preserving Distribution Mapping Autoencoders
State-of-the-art deep outlier detection methods map data into a latent space with the aim of having outliers far away from inliers in this space. Unfortunately, this often fails as the divergence penalty they adopt pushes outliers into the same high-probability regions as inliers. We propose a novel method, OP-DMA, t...
withdrawn-rejected-submissions
In this paper, a data mapping method to a latent space designed for outlier detection is proposed. Outlier detection by latent space mapping has been extensively studied in the literature. Unfortunately, this paper does not fully discuss the relation of the proposed method with a large amount of existing literature and...
val
[ "zT9rmPCiFWH", "uGIvjQjydE4", "VB74HqoXrLT", "PQvzyW7W5dl" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe authors proposed a WAE-based algorithm for outlier detection, aiming at mapping outliers to a low probability region and inliers to a high probability region. The training objective is based on WAE by replacing the reconstruction error with the prior weighted loss. Experiments were performed to sho...
[ 3, 4, 6, 5 ]
[ 5, 3, 4, 3 ]
[ "iclr_2021_RcJHy18g1M", "iclr_2021_RcJHy18g1M", "iclr_2021_RcJHy18g1M", "iclr_2021_RcJHy18g1M" ]
iclr_2021_iy3xVojOhV
GraphCGAN: Convolutional Graph Neural Network with Generative Adversarial Networks
Graph convolutional networks (GCNs) have achieved superior performance in graph-based semi-supervised learning (SSL) tasks. Generative adversarial networks (GANs) also show the ability to increase performance in SSL. However, there is still no good way to combine GANs and GCNs in graph-based SSL tasks. ...
withdrawn-rejected-submissions
This paper presents a method to combine graph convolutional neural networks (GCNs) with generative adversarial networks (GANs) for graph-based semi-supervised learning. **Strengths:** * It is a reasonable attempt to combine GCN with GAN for semi-supervised node classification. * The proposed method is general in t...
train
[ "397IGU6zj24", "iV7y2LhAQv2", "1_WJPVtNRMI", "YD5XKVzr7xv", "MElkMy8Rznu", "qz85LtkiAmY", "pQpGI3zXN3", "mb7vS5ZO8V0" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for all valuable comments.\n\nRegarding to “Motivation and significance are not clear”:\n\nCompared to previous method, proposed method generates nodes other than hidden vectors. The generated nodes help boost the performance in state-of-the-art classifiers (GCN and its variants). The intuition can be su...
[ -1, -1, -1, -1, 5, 5, 5, 4 ]
[ -1, -1, -1, -1, 4, 5, 2, 4 ]
[ "MElkMy8Rznu", "qz85LtkiAmY", "pQpGI3zXN3", "mb7vS5ZO8V0", "iclr_2021_iy3xVojOhV", "iclr_2021_iy3xVojOhV", "iclr_2021_iy3xVojOhV", "iclr_2021_iy3xVojOhV" ]
iclr_2021_cYr2OPNyTz7
Improving Self-supervised Pre-training via a Fully-Explored Masked Language Model
Masked Language Model (MLM) framework has been widely adopted for self-supervised language pre-training. In this paper, we argue that randomly sampled masks in MLM would lead to undesirably large gradient variance. Thus, we theoretically quantify the gradient variance via correlating the gradient covariance with the H...
withdrawn-rejected-submissions
This work proposes a fully-explored masking strategy, segmenting the input text, which maximizes the Hamming distance between any two sampled masks on a fixed text sequence. The hope is to reduce the large variance of the MLM objective, based on the hypothesis that randomly sampled masks in MLM lead to undesirably large gr...
train
[ "etuEUUNuMQr", "b-kmDuN1cRk", "28jIK8rqr9", "Uee9uPi_YZU", "A-9B93rYcJT" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "To reduce the variance due to the sampling of masks, the authors propose a fully-explored masking strategy, where a text sequence is divided into a certain number of non-overlapping segments. And they show this technique improves accuracy in downstream tasks.\n\nThis idea is novel and interesting to me, and the de...
[ 5, 4, 5, 6, 6 ]
[ 3, 5, 3, 4, 4 ]
[ "iclr_2021_cYr2OPNyTz7", "iclr_2021_cYr2OPNyTz7", "iclr_2021_cYr2OPNyTz7", "iclr_2021_cYr2OPNyTz7", "iclr_2021_cYr2OPNyTz7" ]
iclr_2021_b-7nwWHFtw
Privacy-preserving Learning via Deep Net Pruning
Neural network pruning has demonstrated its success in significantly improving the computational efficiency of deep models while only introducing a small reduction in final accuracy. In this paper, we explore an extra bonus of neural network pruning in terms of enhancing privacy. Specifically, we show a novel connectio...
withdrawn-rejected-submissions
The paper claims to draw a connection between pruning and differential privacy. There seem to be conceptual issues with the paper, highlighted by all reviewers (see particularly Reviewer 3's review), which the authors had no response to. For example, a function approximating another does not imply any transfer of diffe...
train
[ "eBW7rUs-I-", "pULG4Ykvabq", "W8YN7ihYWnj", "6unKJVJoiX7", "boVCjhbrfD5", "yXux4U8IKki", "Dz34tIDeCdo", "qU7BfwWtQfa" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Overview: This paper aims to establish a theoretical connection between differential privacy and magnitude based pruning. The authors show theoretically that outputs of pruned single layer neural networks have some similarity to outputs of the same network with differential privacy added. The paper then empiricall...
[ 4, -1, -1, -1, -1, 5, 4, 2 ]
[ 4, -1, -1, -1, -1, 5, 4, 5 ]
[ "iclr_2021_b-7nwWHFtw", "Dz34tIDeCdo", "yXux4U8IKki", "eBW7rUs-I-", "qU7BfwWtQfa", "iclr_2021_b-7nwWHFtw", "iclr_2021_b-7nwWHFtw", "iclr_2021_b-7nwWHFtw" ]
iclr_2021_yfKOB5CO5dY
Localized Meta-Learning: A PAC-Bayes Analysis for Meta-Learning Beyond Global Prior
Meta-learning methods learn the meta-knowledge among various training tasks and aim to promote the learning of new tasks under the task similarity assumption. Such meta-knowledge is often represented as a fixed distribution; this, however, may be too restrictive to capture various specific task information because the ...
withdrawn-rejected-submissions
The paper presents a PAC-Bayesian approach for meta-learning that utilizes information of the task distribution in the prior. The presented localized approach allows the authors to derive an algorithm directly from the bound - this is a worthwhile contribution. Nevertheless there are several concerns that were raised b...
test
[ "C2r23uj7e2t", "5eiSF3WID3j", "Qt-GM9w5WWV", "1PPye_35DBf", "AyuZGn9NG0r", "yNv8Sz6NVAR", "bPzJ3N9H6Yu", "QWBV_TOaXXC" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update: I appreciate the response to address the major concerns. The proposed approach doesn't follow the episodic training, so there exists a clear difference from advanced MAML approaches, which update the task-specific parameters in the episodic training. I still believe that more empirical justifications shoul...
[ 5, 5, -1, -1, -1, -1, 6, 5 ]
[ 3, 4, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_yfKOB5CO5dY", "iclr_2021_yfKOB5CO5dY", "QWBV_TOaXXC", "5eiSF3WID3j", "bPzJ3N9H6Yu", "C2r23uj7e2t", "iclr_2021_yfKOB5CO5dY", "iclr_2021_yfKOB5CO5dY" ]
iclr_2021_AT7jak63NNK
Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling
Reinforcement learning algorithms can acquire policies for complex tasks autonomously. However, the number of samples required to learn a diverse set of skills can be prohibitively large. While meta-reinforcement learning methods have enabled agents to leverage prior experience to adapt quickly to new tasks, their perf...
withdrawn-rejected-submissions
The paper was evaluated by 4 knowledgeable reviewers and got mixed scores. While most reviewers appreciated the new intuitive approach to meta RL, there were severe concerns about algorithmic choices and the evaluations that led to a poor score from some reviewers. These concerns are summarized below: - The motivation ...
val
[ "DMorMguf8wn", "_5N5zNGwmzv", "4aqG9SHGxBa", "0401fRehC4v", "WucXDwd1l1B", "1_yNsG265oe", "MK-yMPy7I4n", "RGrNHYL-LcX", "W3AbNBzHQLZ", "5WEifGhq8s" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a new meta-RL algorithm containing two main components: 1. Model Identification 2. Experience Relabeling. Model Identification models the next state and reward function condition on the previous state, action, and context. Experience Relabeling module uses the model learned in the previous ste...
[ 5, 4, 5, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_AT7jak63NNK", "iclr_2021_AT7jak63NNK", "iclr_2021_AT7jak63NNK", "MK-yMPy7I4n", "5WEifGhq8s", "4aqG9SHGxBa", "RGrNHYL-LcX", "_5N5zNGwmzv", "DMorMguf8wn", "iclr_2021_AT7jak63NNK" ]
iclr_2021_Uf_WNt41tUA
CorDial: Coarse-to-fine Abstractive Dialogue Summarization with Controllable Granularity
Dialogue summarization is challenging due to its multi-speaker standpoints, casual spoken language, and limited labeled data. In this paper, we propose CorDial, aiming to improve the abstractive dialogue summarization quality and at the same time enable granularity controllability. We propose 1) a coarse-to-fine genera...
withdrawn-rejected-submissions
The paper proposes a method for the interesting task of dialog summarisation which is slowly getting attention from the research community. In particular, they propose a method which first generates a summary draft and then a final draft. Pros: 1) The paper is well written 2) Addresses an interesting problem 3) SOTA r...
train
[ "s4xlUs4AbPT", "VAutVl813CW", "GWPoRQOFYyp", "uqdUiTF_cVz", "6S5mrLlUSyx", "ACZNsV6ul1p", "QOnlVerBmC1", "lvs6FUDF8lj" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewer,\n\nPlease let us answer your concerns or questions in the following:\n\n**[largely ad-hoc and engineering intensive]**\nOur work is more like an empirical work and the main contribution and goal is not to propose a new generative model or a new optimization algorithm, instead, we are trying to make ...
[ -1, -1, -1, -1, 4, 5, 5, 6 ]
[ -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "6S5mrLlUSyx", "ACZNsV6ul1p", "QOnlVerBmC1", "lvs6FUDF8lj", "iclr_2021_Uf_WNt41tUA", "iclr_2021_Uf_WNt41tUA", "iclr_2021_Uf_WNt41tUA", "iclr_2021_Uf_WNt41tUA" ]
iclr_2021_CzRSsOG6JDw
The impacts of known and unknown demonstrator irrationality on reward inference
Algorithms inferring rewards from human behavior typically assume that people are (approximately) rational. In reality, people exhibit a wide array of irrationalities. Motivated by understanding the benefits of modeling these irrationalities, we analyze the effects that demonstrator irrationality has on reward inferenc...
withdrawn-rejected-submissions
Two very confident and fairly confident reviewers rate this paper as okay but not good enough, and two other fairly confident reviewers rate the article below the acceptance threshold. Therefore I must reject the article. The reviewers provided encouraging comments and suggestions on how the manuscript could be improved, wh...
train
[ "FPB-UldBqmu", "P41iahsqq2K", "EpUcAHZeVxc", "gvzU5p6hwBb", "W0TCPTWMEqq", "NyEz1sun9nw", "iOMY1UgTjHg", "5BuqjaGj_fS", "Wnn90NnmiCH", "ElExdznAsJq" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your detailed review! We’re happy that you found our paper well written and easy to follow, agree that an exhaustive list of irrationality models has been tested, and also that the problem we are trying to answer is interesting. \n\nWe agree that our theoretical results are not necessarily surprising or...
[ -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 5 ]
[ "iOMY1UgTjHg", "ElExdznAsJq", "Wnn90NnmiCH", "5BuqjaGj_fS", "Wnn90NnmiCH", "iclr_2021_CzRSsOG6JDw", "iclr_2021_CzRSsOG6JDw", "iclr_2021_CzRSsOG6JDw", "iclr_2021_CzRSsOG6JDw", "iclr_2021_CzRSsOG6JDw" ]
iclr_2021_IU8QxEiG4hR
SBEVNet: End-to-End Deep Stereo Layout Estimation
Accurate layout estimation is crucial for planning and navigation, for robotics applications such as self driving. In this paper, we introduce stereo bird's eye view network SBEVNet, a novel supervised end-to-end framework for estimation of bird's eye view layout from a pair of stereo images. Although our network reuse...
withdrawn-rejected-submissions
This paper addresses the problem of estimating a “bird's-eye-view” overhead semantic layout of a scene given an input pair of stereo images of the scene. The authors present an end-to-end trainable deep network that fuses features derived from the stereo images and projects these features into an overhead coor...
train
[ "jyh-d33SwOA", "b0OVQgJaUM5", "hiY94BWBKdB", "nAyKAWJRmMM", "gplSDF7xo9w", "uq8vwuxeKuO", "8mo0cvbXIpn", "HYETgkIjh0u" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n## Contributions\n\nThis paper presents SBEVNet, a neural network architecture to estimate the bird's-eye view (BEV) layout of an urban driving scene. Given an image captured by a stereo camera, SBEVNet performs an inverse perspective mapping (IPM) to obtain an initial feature volume, which is further processed ...
[ 5, 6, -1, -1, -1, -1, 5, 5 ]
[ 5, 4, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_IU8QxEiG4hR", "iclr_2021_IU8QxEiG4hR", "8mo0cvbXIpn", "b0OVQgJaUM5", "HYETgkIjh0u", "jyh-d33SwOA", "iclr_2021_IU8QxEiG4hR", "iclr_2021_IU8QxEiG4hR" ]
iclr_2021_butEPeLARP_
Predicting the impact of dataset composition on model performance
Real-world machine learning systems are often trained using a mix of data sources with varying cost and quality. Understanding how the size and composition of a training dataset affect model performance is critical for advancing our understanding of generalization, as well as designing more effective data collect...
withdrawn-rejected-submissions
This paper studies the following broad question: How can we predict model performance when the data comes from different sources? The reviewers agreed that the direction studied is very interesting. While the results presented in this work are promising, several reviewers pointed out some weaknesses in the paper, inclu...
train
[ "trDVHXbg-Ib", "-IoBo1Lpf0", "0ymkiR73NvM", "fGGohJJCXmS", "zB3ZWK8lXir", "bQ89rm6gOct", "zSgOcStAZjc", "YflQO3vEM5m", "QaNX-CfJiKS", "wGUAhqGyXUj", "hyI3VGpbMdJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This work studies the problem of predicting model performance with more training data when the data are collected from different sources. The predictor is a function of the number training examples, and the ratio of examples from each source. The predictor needs to be built from a small number of training examples...
[ 5, 4, 7, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_butEPeLARP_", "iclr_2021_butEPeLARP_", "iclr_2021_butEPeLARP_", "zB3ZWK8lXir", "bQ89rm6gOct", "trDVHXbg-Ib", "0ymkiR73NvM", "hyI3VGpbMdJ", "-IoBo1Lpf0", "iclr_2021_butEPeLARP_", "iclr_2021_butEPeLARP_" ]
iclr_2021_TDDZxmr6851
The large learning rate phase of deep learning
The choice of initial learning rate can have a profound effect on the performance of deep networks. We present empirical evidence that networks exhibit sharply distinct behaviors at small and large learning rates. In the small learning rate phase, training can be understood using the existing theory of infinitely wide ...
withdrawn-rejected-submissions
I agree with the reviewers who said that this paper has valuable insights. However, all reviewers ultimately recommended rejection. I think the main reason was that the reviewers did not feel these insights accumulate into a message that would justify a paper. I hope the authors can address these concerns...
train
[ "qKGtK8i7MHg", "MKQFm2jaJC", "g2KxE-M24hH", "KND_9Jzf0ta", "H5lsQgDLMX", "L7gebsreHY" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies the effect of the learning rate's magnitude when training neural networks. I believe this to be an extremely relevant problem since large learning rates are widely adopted in practice due to the their positive impact on the model's generalization, even though we don't understand the reason behind...
[ 5, -1, -1, -1, 3, 4 ]
[ 4, -1, -1, -1, 4, 4 ]
[ "iclr_2021_TDDZxmr6851", "H5lsQgDLMX", "L7gebsreHY", "qKGtK8i7MHg", "iclr_2021_TDDZxmr6851", "iclr_2021_TDDZxmr6851" ]
iclr_2021_H8hgu4XsTXi
Estimating Treatment Effects via Orthogonal Regularization
Decision-making often requires accurate estimation of causal effects from observational data. This is challenging as outcomes of alternative decisions are not observed and have to be estimated. Previous methods estimate outcomes based on unconfoundedness but neglect any constraints that unconfoundedness imposes on the ...
withdrawn-rejected-submissions
This paper proposes a regularization term that enforces the orthogonality between (i) a residual between the observed outcome and its estimator and (ii) the treatment and propensity score. The method empirically performs competitively. However, there seems to exist a gap between the proposed method and the assumptions ...
train
[ "tc7uaEcMQkY", "FQeO73zBvqh", "cKSWFSMoUn", "p-V-n9Tl1UH", "XGxEX0Im4Dh", "JJtdDcYlGK", "GFPLkgVMHf1", "TH1DIhkIWH", "8MtcuFJeHU9", "igTbZru8MpX", "YvVHV4BaVLn" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe present paper introduces a new approach, deep orthogonal networks for unconfounded treatments (DONUT), that allows to estimate (average) treatment effects exploiting an orthogonality property implied by the classical unconfoundedness assumption. The authors propose a regularization framework based on...
[ 7, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_H8hgu4XsTXi", "cKSWFSMoUn", "JJtdDcYlGK", "tc7uaEcMQkY", "8MtcuFJeHU9", "igTbZru8MpX", "TH1DIhkIWH", "YvVHV4BaVLn", "iclr_2021_H8hgu4XsTXi", "iclr_2021_H8hgu4XsTXi", "iclr_2021_H8hgu4XsTXi" ]
iclr_2021_6FsCHsZ66Fp
Towards certifying ℓ∞ robustness using Neural networks with ℓ∞-dist Neurons
It is well-known that standard neural networks, even with a high classification accuracy, are vulnerable to small ℓ∞ perturbations. Many attempts have been made to learn a network that can resist such adversarial attacks. However, most previous works either can only provide empirical verification of the defense to a p...
withdrawn-rejected-submissions
In this paper, the authors propose a theoretically principled neural network that inherently resists ℓ∞ perturbations without the help of adversarial training. Although the authors emphasize the novel design with comprehensive theoretical support, the reviewers remain concerned about the insufficient empirical evaluat...
test
[ "EO0vjFJzjVq", "bNDpkThZLbo", "3MunlCNu9NB", "RDoJyEBi16f", "gAXMhXGcSv3", "sj9QIWTOpqM", "AB5_Xp59Ky", "lLRXhjVkmi6", "GrAy4kPakQo", "49etAdFtzAd" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new kind neural network based on a new kind of activation function, the L_\\infty-Dist neuron, which they then demonstrate how to train, and show is both experimentally and certifiably robust. Furthermore, they provide a theoretical result demonstrating that the network can approximate any d...
[ 4, -1, -1, -1, -1, -1, -1, 4, 6, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2021_6FsCHsZ66Fp", "RDoJyEBi16f", "gAXMhXGcSv3", "EO0vjFJzjVq", "49etAdFtzAd", "lLRXhjVkmi6", "GrAy4kPakQo", "iclr_2021_6FsCHsZ66Fp", "iclr_2021_6FsCHsZ66Fp", "iclr_2021_6FsCHsZ66Fp" ]
iclr_2021_pAj7zLJK05U
AttackDist: Characterizing Zero-day Adversarial Samples by Counter Attack
Deep Neural Networks (DNNs) have been shown vulnerable to adversarial attacks, which can produce adversarial samples that easily fool the state-of-the-art DNNs. The harmfulness of adversarial attacks calls for defense mechanisms that hold up under fire. However, the relationship between adversarial attacks and defenses is lik...
withdrawn-rejected-submissions
Reviewers liked the concept of the zero-day attack, yet raised various concerns about other parts of the paper. In general, reviewers wanted to see more thorough experimental evaluations (e.g., against black-box and adaptive attacks) and improved clarity of the theoretical analyses. AC encourages authors ...
train
[ "euiuHQe4h16", "v4-wHTr_aZW", "CvENlRiR5Ve", "XVgLpE5Sb35", "nv8uxq46l1A", "MIAbWkKNoi" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for pointing out the mistake in our current version. We make a mistake in plotting Table 2 and Table 6. The results in these tables are for detecting $l_{\\inf}$ attacks. As we state in the experimental setup, the $l_{\\inf}$ attacks considered are PGD, BIM, and FGSM rather than BB, CW, DF. (We use the s...
[ -1, -1, 3, 3, 5, 5 ]
[ -1, -1, 5, 5, 3, 4 ]
[ "v4-wHTr_aZW", "iclr_2021_pAj7zLJK05U", "iclr_2021_pAj7zLJK05U", "iclr_2021_pAj7zLJK05U", "iclr_2021_pAj7zLJK05U", "iclr_2021_pAj7zLJK05U" ]
iclr_2021_biH_IISPxYA
Multi-Level Generative Models for Partial Label Learning with Non-random Label Noise
Partial label (PL) learning tackles the problem where each training instance is associated with a set of candidate labels that include both the true label and irrelevant noise labels. In this paper, we propose a novel multi-level generative model for partial label learning (MGPLL), which tackles the PL problem by learn...
withdrawn-rejected-submissions
Dear Authors, Thank you very much for your detailed feedback to the reviewers in the rebuttal phase. This certainly clarified some of the concerns raised by the reviewers and contributed greatly to deepening their understanding of your work. We positively evaluated the novelty and the superior empirical performance of th...
train
[ "qCVdGfq2TIJ", "uXskmJ-G1LL", "2JXf5FnvDWg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Overall, I like the idea of this paper and it is well-written. In this paper, the authors propose multi-level generative models for partial label learning with non-random label noise. It consists of five components: the conditional noise label generator which models the noise labels conditioning on the ground-trut...
[ 7, 6, 5 ]
[ 4, 3, 4 ]
[ "iclr_2021_biH_IISPxYA", "iclr_2021_biH_IISPxYA", "iclr_2021_biH_IISPxYA" ]
iclr_2021__adSMszz_g9
Memformer: The Memory-Augmented Transformer
Transformer models have obtained remarkable accomplishments in various NLP tasks. However, these models have efficiency issues on long sequences, as the complexity of their self-attention module scales quadratically with the sequence length. To remedy the limitation, we present Memformer, a novel language model that ut...
withdrawn-rejected-submissions
This paper introduces a new model, called Memformer, that combines the strength of transformer networks and recurrent neural networks. While the reviewers found the idea interesting, they also raised issues regarding the experimental section. In particular, they found the results unconvincing, because of weak baselines...
test
[ "ENqdn6YyCuE", "GeTpL5JTC2M", "Edui7mMOke", "qiYfVIF2u2O", "9MaZbQmwx_j", "Uzk9wYK2iwM", "_p6C97oMAhU", "SOzd6X-Q0eP", "oto51gg1Uog" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Q: I think that section 2.1 requires an additional description: (e.g., in (a), what is the meaning of W_rE_{x_j}?, in (c), how W_k E_{x_j} can be the global content bias?)\n\nSorry for the confusion. (c) W_k E_{x_j} is the global content bias because it projects local token representations to the attention scores ...
[ -1, -1, -1, -1, -1, 6, 5, 4, 3 ]
[ -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "Uzk9wYK2iwM", "_p6C97oMAhU", "SOzd6X-Q0eP", "oto51gg1Uog", "iclr_2021__adSMszz_g9", "iclr_2021__adSMszz_g9", "iclr_2021__adSMszz_g9", "iclr_2021__adSMszz_g9", "iclr_2021__adSMszz_g9" ]
iclr_2021_HNytlGv1VjG
What are effective labels for augmented data? Improving robustness with AutoLabel
A broad body of research has devised data augmentation approaches that can improve both accuracy and generalization performance for neural networks. However, augmented data can end up being far from the clean data, and the appropriate label is then less clear. Despite this, most existing work simply reuses the orig...
withdrawn-rejected-submissions
The paper introduces a simple and interesting method that adaptively smoothes the labels of augmented data based on a distance to the “clean” training data. The reviewers have raised concerns about limited novelty, minor improvement over baselines, and insufficient experiments. The author’s response was not sufficient ...
train
[ "0XID18rceYw", "4jwpsqelds", "oFC61gOiDlj", "Kmm6PL7poWN", "Z9AWnQmcgs5", "ff9LwmT5LQ", "OD3TfSFUgA3", "zzHjuRJquD8" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We propose AutoLabel as a generic algorithm, which can be easily applied to existing data augmentation methods. This is supported by the experimental results when we apply AutoLabel to three representative data augmentations. We want to emphasize that the major benefit of AutoLabel is that it is a generic framewor...
[ -1, -1, -1, -1, 4, 4, 4, 5 ]
[ -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "iclr_2021_HNytlGv1VjG", "Z9AWnQmcgs5", "ff9LwmT5LQ", "OD3TfSFUgA3", "iclr_2021_HNytlGv1VjG", "iclr_2021_HNytlGv1VjG", "iclr_2021_HNytlGv1VjG", "iclr_2021_HNytlGv1VjG" ]
iclr_2021_6_FjMpi_ebO
Redesigning the Classification Layer by Randomizing the Class Representation Vectors
Neural image classification models typically consist of two components. The first is an image encoder, which is responsible for encoding a given raw image into a representative vector. The second is the classification component, which is often implemented by projecting the representative vector onto target class vector...
withdrawn-rejected-submissions
The reviewers are in consensus that this paper is not ready for publication: cited concerns include ideas that are simple (though interesting) but need to be carefully analyzed empirically, contextualized (other similar studies exist), and supported by convincing empirical evidence. The AC recommends Reject.
train
[ "MyLlwmcQ5Hk", "Q8PYICvAcY", "EttQ6lkqPd", "iT7FtOo8mb", "cSy5GJcEuz", "VSBrq1TLl_S", "4bvhmpEWirY", "_4X2KRBAW2n" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper explores deeper into the specific classification layer of a standard supervised learning system. The core idea of the paper is to randomly initialize and then fix the classification layer weights and train the network leading improved discrimination.\nThe writing is satisfactory and the paper develops th...
[ 5, -1, -1, -1, -1, 5, 4, 4 ]
[ 4, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_6_FjMpi_ebO", "_4X2KRBAW2n", "MyLlwmcQ5Hk", "VSBrq1TLl_S", "4bvhmpEWirY", "iclr_2021_6_FjMpi_ebO", "iclr_2021_6_FjMpi_ebO", "iclr_2021_6_FjMpi_ebO" ]
iclr_2021_q_Q9MMGwSQu
A Simple and Effective Baseline for Out-of-Distribution Detection using Abstention
Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems. While simple to state, this has been a particularly challenging problem in deep learning, where models often end up making...
withdrawn-rejected-submissions
This paper proposes a method for out-of-distribution (OOD) detection by introducing a K+1 abstention class for outliers, in addition to the in-distribution classes. While the method has shown promising performance compared to the Outlier Exposure (OE), the novelty is limited given the idea is almost identical to an AAA...
train
[ "sfZV3fxO9xU", "DVjgjYOxwRv", "bP-l3dSp0nc", "SEHQiBbSBj", "TIc4_o8hiOV", "V4Zk6Or2Nk8", "X7iY0LXoP7c", "kcAe9GiRFA", "Um30h4-Uu86", "pqbxN-cH7u2", "2ZthW8rxYOL", "lvVaECNtrgA", "JFNRnhmRiv", "KzJGhOEnC00" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "- Summary:\nThis paper shows that introducing an abstention class for out-of-distribution (OOD) works well for detecting it when the in-distribution dataset is CIFAR and TinyImageNet is available during training as an OOD dataset.\n\n- Reasons for score:\n1. The proposed setting with a large OOD dataset has alread...
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2021_q_Q9MMGwSQu", "iclr_2021_q_Q9MMGwSQu", "TIc4_o8hiOV", "KzJGhOEnC00", "sfZV3fxO9xU", "kcAe9GiRFA", "Um30h4-Uu86", "2ZthW8rxYOL", "JFNRnhmRiv", "JFNRnhmRiv", "lvVaECNtrgA", "DVjgjYOxwRv", "iclr_2021_q_Q9MMGwSQu", "iclr_2021_q_Q9MMGwSQu" ]
iclr_2021_JVs1OrQgR3A
Time Series Counterfactual Inference with Hidden Confounders
We present augmented counterfactual ordinary differential equations (ACODEs), a new approach to counterfactual inference on time series data with a focus on healthcare applications. ACODEs model interventions in continuous time with differential equations, augmented by auxiliary confounding variables to reduce inferenc...
withdrawn-rejected-submissions
There are some interesting ideas raised on continuous-time models with latent variables in machine learning. However, the reviewers argue, and I agree, that the connection to causal models, as typically required in applications concerning the effects of interventions, is not addressed with as much care as it might have been n...
train
[ "iBogl4Yetkw", "UjVZaRthubG", "uo--LjItJu", "esRvuQBUglL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed to solve an interesting problem: how do we perform counterfactual inference for time series data? The paper follows a study of the problem in the static setting: in the first step, the paper fit an augmented time series $u_t$ as additional confounders, and then perform inference based on the au...
[ 5, 5, 4, 5 ]
[ 3, 3, 4, 4 ]
[ "iclr_2021_JVs1OrQgR3A", "iclr_2021_JVs1OrQgR3A", "iclr_2021_JVs1OrQgR3A", "iclr_2021_JVs1OrQgR3A" ]
iclr_2021_uHNEe2aR4qJ
The Negative Pretraining Effect in Sequential Deep Learning and Three Ways to Fix It
Negative pretraining is a prominent sequential learning effect of neural networks where a pretrained model obtains a worse generalization performance than a model that is trained from scratch when either is trained on a target task. We conceptualize the ingredients of this problem setting and examine the negative pret...
withdrawn-rejected-submissions
Taking all reviews and the work into consideration, unfortunately the work does not present the breadth it needs to sustain the claims it makes. In particular, the work requires analysing more architectures/variations of datasets with different properties and providing more careful ablation studies that show the eff...
train
[ "E9DyIP1jevS", "DghCIDYrsF7", "gaeMs7gLkdb", "g-NZLIsNBgM", "Mz4W59bZ6d" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. Summarize what the paper claims to contribute. Be positive and generous.\nThe paper claims to contribute three ways of mitigating negative pretraining:\n (1) altering the learning rate after pretraining,\n (2) increasing the discretization of data distribution changes from start to target task instead of \"ju...
[ 5, 4, 4, 6, 4 ]
[ 2, 3, 4, 4, 4 ]
[ "iclr_2021_uHNEe2aR4qJ", "iclr_2021_uHNEe2aR4qJ", "iclr_2021_uHNEe2aR4qJ", "iclr_2021_uHNEe2aR4qJ", "iclr_2021_uHNEe2aR4qJ" ]
iclr_2021_XKgo1UfNRx8
Recycling sub-optimial Hyperparameter Optimization models to generate efficient Ensemble Deep Learning
Ensemble Deep Learning improves accuracy over a single model by combining predictions from multiple models. It has established itself as the core strategy for tackling the most difficult problems, like winning Kaggle challenges. Due to the lack of consensus on how to design a successful deep learning ensemble, we introduc...
withdrawn-rejected-submissions
Following a strong consensus across the reviewers, the paper is recommended for rejection. They have all acknowledged some weaknesses of the paper, for instance * Inadequate reference to prior work * Unsatisfactory level of polishing * Too limited evaluation, with more comparisons to baselines required * The proposed ...
train
[ "XD_9hmU84Zr", "2KsNtSfb9wR", "zC2P2AXUoqV", "6suwpJcQd3g" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "After rebuttal: no rebuttal, so I will keep my score.\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nOverview: This paper proposes to apply ensemble methods to make use of sub-optima...
[ 3, 3, 3, 4 ]
[ 5, 4, 5, 4 ]
[ "iclr_2021_XKgo1UfNRx8", "iclr_2021_XKgo1UfNRx8", "iclr_2021_XKgo1UfNRx8", "iclr_2021_XKgo1UfNRx8" ]
iclr_2021_RSn0s-T-qoy
Multi-View Disentangled Representation
Learning effective representations for data with multiple views is crucial in machine learning and pattern recognition. Recently great efforts have focused on learning unified or latent representations to integrate information from different views for specific tasks. These approaches generally assume simple or implicit...
withdrawn-rejected-submissions
This paper focuses on disentangled representation learning from multi-view data, which is an interesting and hot topic. However, there are several papers published in the last couple of years (especially in NeurIPS2020 and ECCV2020) solving very similar problems with closely related contributions to this paper. The con...
train
[ "0-RFIitiUj4", "GQOcYwOSdh_", "ZZJmoN-pMSt", "uFGjDx745c_", "610yQZnEEsY", "eIQH-FMqcz-", "f4ktmyz_ihL", "eSBijstRWry" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for valuable and positive comments. We try to reply each point as follows.\n\n**Q1**: Extension to supervised representation disentanglement: I am wondering how to extend the proposed model to the supervised setting so that the semantic of the disentangled feature can be learned for further m...
[ -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, 5, 4, 3, 3 ]
[ "eSBijstRWry", "f4ktmyz_ihL", "eIQH-FMqcz-", "610yQZnEEsY", "iclr_2021_RSn0s-T-qoy", "iclr_2021_RSn0s-T-qoy", "iclr_2021_RSn0s-T-qoy", "iclr_2021_RSn0s-T-qoy" ]
iclr_2021_53WS781RzT9
The Impact of the Mini-batch Size on the Dynamics of SGD: Variance and Beyond
We study mini-batch stochastic gradient descent (SGD) dynamics under linear regression and deep linear networks by focusing on the variance of the gradients only given the initial weights and mini-batch size, which is the first study of this nature. In the linear regression case, we show that in each iteration the norm...
withdrawn-rejected-submissions
This work analyses the impact of mini-batch size on the variance of the gradients during SGD, in the context of linear models. It shows an inverse relationship between the variance of the gradient and the batch size for such models, under certain assumptions. Reviewers generally agree that the work is theoretically sou...
val
[ "AwBvjI-Bpd", "zQ_6D64nW6Y", "uQcJMnDD6I", "mxizK4t0imj", "R4iCM2SYzPk", "VisPfiSJeNZ", "bROrBmlE0yH", "0__c71QYewk", "3BoKGuFN-lD", "XcppNu1oxDU", "I-xDU_Hn5lW", "skA84bWPS3" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your response. I strongly agree with the authors that deep linear networks are interesting models to work on and the results can provide insight into the training landscape of deep non-linear networks.\n\nHowever, I still feel a theorem stating an application of the current results will help strengthen ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "R4iCM2SYzPk", "I-xDU_Hn5lW", "mxizK4t0imj", "3BoKGuFN-lD", "skA84bWPS3", "XcppNu1oxDU", "0__c71QYewk", "iclr_2021_53WS781RzT9", "iclr_2021_53WS781RzT9", "iclr_2021_53WS781RzT9", "iclr_2021_53WS781RzT9", "iclr_2021_53WS781RzT9" ]
iclr_2021_Twf5rUVeU-I
Convergence Analysis of Homotopy-SGD for Non-Convex Optimization
First-order stochastic methods for solving large-scale non-convex optimization problems are widely used in many big-data applications, e.g. training deep neural networks as well as other complex and potentially non-convex machine learning models. Their inexpensive iterations generally come together with slow glob...
withdrawn-rejected-submissions
The authors provide a homotopy framework for SGD in order to exploit structures that arise by construction, such as PL. I very much liked the delineated homotopy analysis which is general (i.e., as opposed to simply adding a quadratic, the authors consider a homotopy mapping). While the algorithm should not be consider...
train
[ "BEgjKkahj_", "DR9VsHYzqlE", "ftul47wO8H7", "umBpbSwGhvD", "nPRVnbicHpf", "B13lTrqNBu0", "BcsFqOn3c_r", "czOXLczwtjT", "i3qj2oz4vwa", "AGqSNDYzB2A" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes homotopy SGD (H-SGD) which solves a sequence of unconstrained problems with a homotopy map and homotopy parameter. The authors analyze the algorithm for solving nonconvex problems satisfying PL condition. The analysis works with a generic homotopy map and homotopy parameter satisfying certain c...
[ 5, 5, -1, 5, -1, -1, -1, -1, -1, 4 ]
[ 4, 4, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_Twf5rUVeU-I", "iclr_2021_Twf5rUVeU-I", "nPRVnbicHpf", "iclr_2021_Twf5rUVeU-I", "B13lTrqNBu0", "umBpbSwGhvD", "DR9VsHYzqlE", "AGqSNDYzB2A", "BEgjKkahj_", "iclr_2021_Twf5rUVeU-I" ]
iclr_2021_54-QTuqSLyn
Mitigating Mode Collapse by Sidestepping Catastrophic Forgetting
Generative Adversarial Networks (GANs) are a class of generative models used for various applications, but they have been known to suffer from the mode collapse problem, in which some modes of the target distribution are ignored by the generator. Investigative study using a new data generation procedure indicates that...
withdrawn-rejected-submissions
The authors propose an approach to mitigate mode collapse phenomena in GANs. Motivated by the intuition that mode collapse stems from catastrophic forgetting of the discriminator, the authors propose a solution inspired by recent research in continual learning and dynamically add new discriminators during training. The...
train
[ "_DUpWBth_Gc", "bVhebp-YmiB", "XXrmp5xdFhn", "qYgWer5-JcD", "dVebc34o6ZC" ]
[ "official_reviewer", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper addresses the well-known phenomenon of model collapse in generative adversarial networks (GANs). In particular, this paper identifies catastrophic forgetting of the discriminator as a potential source of mode collapse and proposes a multi-discriminator framework called DMAT as a solution. DMAT takes ins...
[ 4, -1, 6, 7, 5 ]
[ 5, -1, 4, 3, 3 ]
[ "iclr_2021_54-QTuqSLyn", "iclr_2021_54-QTuqSLyn", "iclr_2021_54-QTuqSLyn", "iclr_2021_54-QTuqSLyn", "iclr_2021_54-QTuqSLyn" ]
iclr_2021_xoPj3G-OKNM
Stochastic Normalized Gradient Descent with Momentum for Large Batch Training
Stochastic gradient descent (SGD) and its variants have been the dominating optimization methods in machine learning. Compared with small batch training, SGD with large batch training can better utilize the computational power of current multi-core systems like GPUs and can reduce the number of communication rounds in ...
withdrawn-rejected-submissions
All reviewers agree that the contributions of this paper are not significant, and the paper does not compare well with many of the existing works. Authors did not respond.
train
[ "w6gZIi_lV8", "ja7avO-MVbx", "9HsLbWGDpwN", "0pHl9kr9TI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "###################################################################\n\nSummary:\n\nThis paper proposes a new stochastic normalized gradient descent method with momentum (SNGM) for large batch training. They prove that unlike mometum SGD (MSGD), SNGM can adopt larger batch size to converge to the epsilon-stationary...
[ 4, 4, 4, 3 ]
[ 4, 4, 5, 3 ]
[ "iclr_2021_xoPj3G-OKNM", "iclr_2021_xoPj3G-OKNM", "iclr_2021_xoPj3G-OKNM", "iclr_2021_xoPj3G-OKNM" ]
iclr_2021_2rcgRSAa1A3
Fighting Filterbubbles with Adversarial BERT-Training for News-Recommendation
Recommender engines play a role in the emergence and reinforcement of filter bubbles. When these systems learn that a user prefers content from a particular site, the user will be less likely to be exposed to different sources or opinions and, ultimately, is more likely to develop extremist tendencies. We trace t...
withdrawn-rejected-submissions
All reviewers agree that this paper is not ready for publication. In addition to the technical comments, the authors should pay attention to the comments by Reviewer 3 about the naivete of the motivation provided for the work. Filter bubbles (to the extent that they really exist; there is controversy about this) have m...
train
[ "Vjz9VS0KfYg", "5N4Gd1kBkj0", "2HhLeVS6iW", "FPocKAuhupI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed an adversarial training framework for reducing the predictive ability of the new outlet from the news recommendation system. While deep learning-based news recommendation systems work very well, these systems tend to recommend contents of the same site, and are likely to develop extremist tende...
[ 3, 3, 4, 5 ]
[ 2, 4, 4, 4 ]
[ "iclr_2021_2rcgRSAa1A3", "iclr_2021_2rcgRSAa1A3", "iclr_2021_2rcgRSAa1A3", "iclr_2021_2rcgRSAa1A3" ]
iclr_2021_19drPzGV691
Distributional Reinforcement Learning for Risk-Sensitive Policies
We address the problem of learning a risk-sensitive policy based on the CVaR risk measure using distributional reinforcement learning. In particular, we show that applying the distributional Bellman optimality operator with respect to a risk-based action-selection strategy overestimates the dynamic, Markovian CVaR. The...
withdrawn-rejected-submissions
The reviewers found the paper well motivated and well written, but they found both the theoretical contributions limited in novelty and the experiments too rudimentary to be insightful.
train
[ "nBGlP5KOlU0", "KYMCWi9_ME", "VY_EsTSbV-", "oDeYivcozru", "yXbo1LMuSmm", "VJ1fsYrqONl", "fpJ9pXJFiKQ", "q1IBUUxuS0" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your thoughtful comments.\n\n1. We did start this work by trying to solve the augmented MDP directly but the results were pretty bad -- nowhere close to what we obtained here. We decided not to show these. Our code can be modified to run this as well so we welcome any interested reader to reproduce o...
[ -1, -1, -1, -1, 5, 7, 5, 5 ]
[ -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "VJ1fsYrqONl", "yXbo1LMuSmm", "fpJ9pXJFiKQ", "q1IBUUxuS0", "iclr_2021_19drPzGV691", "iclr_2021_19drPzGV691", "iclr_2021_19drPzGV691", "iclr_2021_19drPzGV691" ]
iclr_2021_CJmMqnXthgX
An Empirical Study of the Expressiveness of Graph Kernels and Graph Neural Networks
Graph neural networks and graph kernels have achieved great success in solving machine learning problems on graphs. Recently, there has been considerable interest in determining the expressive power mainly of graph neural networks and of graph kernels, to a lesser extent. Most studies have focused on the ability of th...
withdrawn-rejected-submissions
The reviewers liked the direction of the paper but unanimously agree that, in its current version, it is not strong enough to justify publication at ICLR. There was no rebuttal from the authors to consider.
train
[ "TDOBXcFjlsu", "58pQniYfI-9", "qW37RKVPV-Y", "NNEc3gfl7Ak" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper deals with supervised graph classification comparing GNN and graph kernel approaches. The authors define an intractable graph similarity functions, which boils down to a normalized graph edit distance. The authors then empirically study how well well-known graph kernel and GNN architectures align within ...
[ 4, 4, 3, 4 ]
[ 5, 4, 4, 4 ]
[ "iclr_2021_CJmMqnXthgX", "iclr_2021_CJmMqnXthgX", "iclr_2021_CJmMqnXthgX", "iclr_2021_CJmMqnXthgX" ]
iclr_2021_B9t708KMr9d
Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification
Graph neural network (GNN) and label propagation algorithm (LPA) are both message passing algorithms, which have achieved superior performance in semi-supervised classification. GNN performs \emph{feature propagation} by a neural network to make predictions, while LPA uses \emph{label propagation} across graph adjacenc...
withdrawn-rejected-submissions
This paper proposes a semi-supervised graph classification technique that unifies feature and label propagation techniques. The resulting algorithm is a simple extension that attains strong performance. Reviewers were divided on this submission. Some reviewers felt the proposed algorithm did not constitute a sufficient...
test
[ "MH4IOO5cYrw", "jEQ52pEN4Ba", "jGthZ4l_R0S", "TtW6-sG5y7R", "NAgJNBwXg6I", "6N7lC1yr6l", "X9Gxe-xTdT", "ZyChtxDdnz6", "2IAElEmekRi" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for their thorough and very helpful feedback. To summarize, all the reviewers agree on the effectiveness of our method, which proposes a novel framework to more effectively and explicitly utilize the label information by GNN in the semi-supervised scenario, achieving three SOTA results o...
[ -1, -1, -1, -1, -1, 7, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "iclr_2021_B9t708KMr9d", "6N7lC1yr6l", "X9Gxe-xTdT", "ZyChtxDdnz6", "2IAElEmekRi", "iclr_2021_B9t708KMr9d", "iclr_2021_B9t708KMr9d", "iclr_2021_B9t708KMr9d", "iclr_2021_B9t708KMr9d" ]
iclr_2021_5PiSFHhRe2C
Meta Auxiliary Labels with Constituent-based Transformer for Aspect-based Sentiment Analysis
Aspect based sentiment analysis (ABSA) is a challenging natural language processing task that could benefit from syntactic information. Previous work exploits dependency parses to improve performance on the task, but this requires the existence of good dependency parsers. In this paper, we build a constituent-based tra...
withdrawn-rejected-submissions
The paper proposes a constituent-based transformer for aspect-based sentiment analysis. The approach allows aspect-based sentiment analysis to leverage syntactic information without pre-specified dependency parse trees. Overall, the idea is interesting. However, all the reviewers shared the following c...
train
[ "wmp9BxX3dEV", "JVNtqd0JFI", "bb8KJ3rwHb", "hnhVtVVro2L", "uJBRu69Qpa", "uDLoSKFBej_" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a Transformer-based model for aspect-based sentiment analysis, intended to support the unsupervised induction of constituents within the Transformer forward pass. Their evaluations demonstrate that their model can match (and in some cases improve upon) models which depend upon explicit dependen...
[ 2, -1, -1, -1, 4, 3 ]
[ 4, -1, -1, -1, 4, 5 ]
[ "iclr_2021_5PiSFHhRe2C", "uDLoSKFBej_", "uJBRu69Qpa", "wmp9BxX3dEV", "iclr_2021_5PiSFHhRe2C", "iclr_2021_5PiSFHhRe2C" ]
iclr_2021_O1pkU_4yWEt
Distantly supervised end-to-end medical entity extraction from electronic health records with human-level quality
Medical entity extraction (EE) is a standard procedure used as a first stage in medical texts processing. Usually Medical EE is a two-step process: named entity recognition (NER) and named entity normalization (NEN). We propose a novel method of doing medical EE from electronic health records (EHR) as a singl...
withdrawn-rejected-submissions
This paper tackles an important problem and includes experiments on a new domain (Russian documents vs English documents). Unfortunately, all reviewers agree that this paper lacks novelty for publication in its current state. Additional details and clarifications to the proposed approach, notably through a more thoroug...
test
[ "WhRQiVmxos", "WkzcvpLiNU7", "pi5i_d-iEKD", "Infq_MSTVYr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces an end-to-end task that identifies medical\nentities and links them to UMLS concepts for Russian biomedical\ntext. The paper uses distant supervision to identify the entities with\ntheir corresponding concepts and uses a Russian pre-trained BERT model\nto perform the task. The most frequent 1...
[ 5, 4, 4, 3 ]
[ 5, 4, 4, 4 ]
[ "iclr_2021_O1pkU_4yWEt", "iclr_2021_O1pkU_4yWEt", "iclr_2021_O1pkU_4yWEt", "iclr_2021_O1pkU_4yWEt" ]
iclr_2021_24-DxeAe2af
Accurate and fast detection of copy number variations from short-read whole-genome sequencing with deep convolutional neural network
A copy number variant (CNV) is a type of genetic mutation where a stretch of DNA is lost or duplicated once or multiple times. CNVs play important roles in the development of diseases and complex traits. CNV detection with short-read DNA sequencing technology is challenging because CNVs significantly vary in size and a...
withdrawn-rejected-submissions
Four knowledgeable referees have indicated reject. I agree with the most critical reviewer R4 that the model design lacks a clear and transparent motivation and that the experimental setup is not convincing, and so must reject.
train
[ "lG8CleiuM2D", "hYTP2pyQfUb", "bV13DskJMfC", "3eVcbqOxWPe" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "##########################################################################\n\nSummary:\nThe authors proposed CNV-Net, a deep learning-based approach for copy number variation identification. They encoded mapped DNA sequences into a pileup image that captures reference sequence, sequencing coverage, and mapped read...
[ 3, 2, 2, 5 ]
[ 4, 4, 5, 3 ]
[ "iclr_2021_24-DxeAe2af", "iclr_2021_24-DxeAe2af", "iclr_2021_24-DxeAe2af", "iclr_2021_24-DxeAe2af" ]
iclr_2021_jN8TTVCgOqf
Local Clustering Graph Neural Networks
Graph Neural Networks (GNNs), which benefit various real-world problems and applications, have emerged as a powerful technique for learning graph representations. The depth of a GNN model, denoted by K, restricts the receptive field of a node to its K-hop neighbors and plays a subtle role in the performance of GNNs. Re...
withdrawn-rejected-submissions
The paper considers using local spectral graph clustering methods such as the PPR-Nibble method for graph neural networks. These local spectral methods are widely used in social networks, and understanding neural networks from them is interesting. In many ways, the results are interesting and novel, and they deserve ...
test
[ "e2n0SqFs1Sz", "_C7ctsqxqT6", "tvXNPcpzg0_", "cRtBP3OlaCE", "vKmg1NmLAR1", "5Ky7GL1uR0H", "dP5KCOFiBa_", "_tX2xrxgwG" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. We would also like to clarify a couple of points.\n\n**[Regarding Short Random Walks]** We believe there are some misunderstandings. We NEVER claim that short random walks are sufficient to extract topology information from graphs. The motivation of this work is the FACT that many exis...
[ -1, -1, -1, -1, 4, 5, 6, 5 ]
[ -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "vKmg1NmLAR1", "5Ky7GL1uR0H", "dP5KCOFiBa_", "_tX2xrxgwG", "iclr_2021_jN8TTVCgOqf", "iclr_2021_jN8TTVCgOqf", "iclr_2021_jN8TTVCgOqf", "iclr_2021_jN8TTVCgOqf" ]
iclr_2021_lXoWPoi_40
Wide-minima Density Hypothesis and the Explore-Exploit Learning Rate Schedule
Several papers argue that wide minima generalize better than narrow minima. In this paper, through detailed experiments that not only corroborate the generalization properties of wide minima, we also provide empirical evidence for a new hypothesis that the density of wide minima is likely lower than the density of narr...
withdrawn-rejected-submissions
The reviewers are concerned about the novelty of the proposed learning rate schedule, the rigor of the empirical validation, and the relationship between the results and the discussion of sharp vs. local minima. I invite the authors to incorporate reviewers' comments and resubmit to other ML venues.
train
[ "sSRjZq41mGt", "YdW-XaJFCP2", "6XjWSc60w6v", "jCdmisvdTgM", "yrXJQIiSDsH", "NA5C-W-K4PE", "NaY-1hcLp-9", "wrv0Pu-dDBx", "EX_5Mm61zy7", "m6f-H0hyLxD", "LBfSNOzOZ2Q", "MCVDXhAetf", "XL_eVUnzZ--", "yV0Kxzm0rCp", "fyxp7Jdn6Ww" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. We will compute the delta-w and add it to the paper but our past experience with computing delta-w across runs suggest that i) there is little correlation between delta-w and accuracy and ii) running longer generally increases delta-w, most likely due to the high dimensional landscape. \n\n\n2. In table-2 the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 2, 3, 3 ]
[ "YdW-XaJFCP2", "wrv0Pu-dDBx", "yrXJQIiSDsH", "NaY-1hcLp-9", "m6f-H0hyLxD", "MCVDXhAetf", "MCVDXhAetf", "XL_eVUnzZ--", "yV0Kxzm0rCp", "iclr_2021_lXoWPoi_40", "fyxp7Jdn6Ww", "iclr_2021_lXoWPoi_40", "iclr_2021_lXoWPoi_40", "iclr_2021_lXoWPoi_40", "iclr_2021_lXoWPoi_40" ]