| paper_id (string, len 19–21) | paper_title (string, len 8–170) | paper_abstract (string, len 8–5.01k) | paper_acceptance (string, 18 classes) | meta_review (string, len 29–10k) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2021_4K_NaDAHc0d | Unsupervised Task Clustering for Multi-Task Reinforcement Learning | Meta-learning, transfer learning and multi-task learning have recently laid a path towards more generally applicable reinforcement learning agents that are not limited to a single task. However, most existing approaches implicitly assume a uniform similarity between tasks. We argue that this assumption is limiting in s... | withdrawn-rejected-submissions | This paper considers multi-task RL from the perspective of an unsupervised clustering of different tasks with an EM-like algorithm. The idea is evaluated on several simple and ATARI domains.
We thank the reviewers for their detailed responses and revision. This work still seems a little preliminary in its current form.... | test | [
"EtBaM9IV1X",
"ZGjGfeFAAXx",
"-aN-5jKGES4",
"miC9K0ypXCG",
"ARFhE97wyZT",
"5HeySPzjnRC",
"KJbX-XPFiIY",
"VnU9epS5X5Q",
"Z8s4307WVpl",
"MnSxHIBa9hz"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors present a new multi-task reinforcement learning (RL) algorithm. Since In general, the relationships between tasks is unknown a-priori, directly applying classical multi-task learning approaches that assume all tasks are related, could suffer from negative transfer. The authors propose to... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"iclr_2021_4K_NaDAHc0d",
"iclr_2021_4K_NaDAHc0d",
"Z8s4307WVpl",
"EtBaM9IV1X",
"KJbX-XPFiIY",
"MnSxHIBa9hz",
"VnU9epS5X5Q",
"iclr_2021_4K_NaDAHc0d",
"iclr_2021_4K_NaDAHc0d",
"iclr_2021_4K_NaDAHc0d"
] |
iclr_2021_ox8wgFpoyHc | Targeted VAE: Structured Inference and Targeted Learning for Causal Parameter Estimation | Undertaking causal inference with observational data is extremely useful across a wide range of domains including the development of medical treatments, advertisements and marketing, and policy making. There are two main challenges associated with undertaking causal inference using observational data: treatment assignm... | withdrawn-rejected-submissions | The paper introduces some good ideas, but I don't think it is quite there in terms of a method to be recommended for publications. I think it is mostly reasonably written (I do not agree with the comment of a 'complete rewrite') but there are indeed some passages for improvement (for instance, an equation as y = σ−1[Q0... | val | [
"I5dkc-yzW-",
"opr_k3JTFWg",
"Ga8IrGJivzX",
"gNEsh2XmulP",
"3UZY0eJSTEf",
"VPXzujAmV4D",
"CMezdu2qyoO",
"K1jvRWaJrNw",
"3LasS-5xXPz"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank AnonReviewer2 for their constructive feedback, and their acknowledgement that the 'ideas are interesting'. We seek to address their concerns in the same order in which they were presented. Based on your initial feedback we have revised and uploaded an updated version of the manuscript, regarding which we ... | [
-1,
-1,
-1,
-1,
-1,
6,
3,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"CMezdu2qyoO",
"iclr_2021_ox8wgFpoyHc",
"VPXzujAmV4D",
"3LasS-5xXPz",
"K1jvRWaJrNw",
"iclr_2021_ox8wgFpoyHc",
"iclr_2021_ox8wgFpoyHc",
"iclr_2021_ox8wgFpoyHc",
"iclr_2021_ox8wgFpoyHc"
] |
iclr_2021_uRuGNovS11 | Bayesian Metric Learning for Robust Training of Deep Models under Noisy Labels | Label noise is a natural event of data collection and annotation and has been shown to have significant impact on the performance of deep learning models regarding accuracy reduction and sample complexity increase. This paper aims to develop a novel theoretically sound Bayesian deep metric learning that is robust again... | withdrawn-rejected-submissions | Reviewers have commented on the lack of novelty of the paper as it reads only as applying the variational inference framework of Blundell et al. (2015) to deep metric learning (R2 and R4). Furthermore, the paper has not properly positioned itself when compared to previous works on "Deep variational metric learning" and... | test | [
"rkyi9LCRtlv",
"tgprnckgPYj",
"MUVUQ8edQHh",
"Qzb4_0lRY2",
"mCI7jEicWQf",
"oizq-foVD1X",
"jyWgStwaVS2",
"rEL8AqSt5Z"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"R3: The method is not evaluated against the state of the art on the evaluated datasets, albeit the authors give a convincing reason for this, namely that the methodological novelty of the proposed approach warrants proof-of-concept results with a relatively simple neural network architecture\n\nResponse: our main ... | [
-1,
-1,
-1,
-1,
7,
3,
4,
5
] | [
-1,
-1,
-1,
-1,
2,
4,
5,
4
] | [
"mCI7jEicWQf",
"oizq-foVD1X",
"jyWgStwaVS2",
"rEL8AqSt5Z",
"iclr_2021_uRuGNovS11",
"iclr_2021_uRuGNovS11",
"iclr_2021_uRuGNovS11",
"iclr_2021_uRuGNovS11"
] |
iclr_2021_8W7LTo_zxdE | Variational Deterministic Uncertainty Quantification | Building on recent advances in uncertainty quantification using a single deep deterministic model (DUQ), we introduce variational Deterministic Uncertainty Quantification (vDUQ). We overcome several shortcomings of DUQ by recasting it as a Gaussian process (GP) approximation. Our principled approximation is based on an... | withdrawn-rejected-submissions | The reviewers all agreed that the paper represent thorough work but also is closely related to existing literature. (All referees point to other non-overlapping literature so it is a crowded field the authors have entered.) The amount of novelty (needed) can always be discussed but given the referees unanimous opinion ... | train | [
"EdUrRBJMYFl",
"nSBM9tmba1t",
"AYNJQik-rl",
"wLy0tDBt_2-",
"P11Vz3FxrvD",
"x_hlB7Yve2f",
"FBdM7Jcr3pG",
"8FF6w4XXhkD",
"aNeCIM5eszA"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Variational deterministic uncertainty quantification\n\nSummary:\nThe paper proposes a method for out-of-distribution detection by combining deep kernel learning and Gaussian processes. Using neural networks as a kernel for the GP as well as inducing point approximation alleviates the scalability issues of GP. The... | [
5,
5,
-1,
-1,
-1,
-1,
-1,
5,
2
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_8W7LTo_zxdE",
"iclr_2021_8W7LTo_zxdE",
"aNeCIM5eszA",
"iclr_2021_8W7LTo_zxdE",
"EdUrRBJMYFl",
"nSBM9tmba1t",
"8FF6w4XXhkD",
"iclr_2021_8W7LTo_zxdE",
"iclr_2021_8W7LTo_zxdE"
] |
iclr_2021_w5bNwUzj33 | Cross-Domain Few-Shot Learning by Representation Fusion | In order to quickly adapt to new data, few-shot learning aims at learning from few examples, often by using already acquired knowledge. The new data often differs from the previously seen data due to a domain shift, that is, a change of the input-target distribution. While several methods perform well on small domain s... | withdrawn-rejected-submissions | The paper deals with cross-domain few-shot learning in the case of large source-target domain shifts.
The paper received mostly below-threshold reviews, with one exception (R3) whose review is addressing more general aspects, but still with some concern, especially in relation to the experimental part (to which author... | train | [
"7YggefBKGf",
"rIxNTMziUXy",
"o17PIfz_ZP",
"lXwBw7hJVF",
"UiNAf4z2qOG",
"Ewe7OFZIAcy",
"i_A3NwdP4R",
"n6eoj_fmoE",
"-tbYnK-19bh",
"TZ492T9APT2",
"-n9Q0hm_Y19",
"RPOjACdR9qv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: This paper proposes a domain-shift problem using fewer training examples. It suggests representation fusion as the concept of unifying and merging information from different layers of abstraction. \nCross-domain Hebbian Ensemble Few-shot learning (CHEF) is introduced for extracting features using represe... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
6,
4
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"iclr_2021_w5bNwUzj33",
"7YggefBKGf",
"iclr_2021_w5bNwUzj33",
"iclr_2021_w5bNwUzj33",
"7YggefBKGf",
"-tbYnK-19bh",
"RPOjACdR9qv",
"-n9Q0hm_Y19",
"iclr_2021_w5bNwUzj33",
"iclr_2021_w5bNwUzj33",
"iclr_2021_w5bNwUzj33",
"iclr_2021_w5bNwUzj33"
] |
iclr_2021_D5Wt3FtvCF | PURE: An Uncertainty-aware Recommendation Framework for Maximizing Expected Posterior Utility of Platform | Commercial recommendation can be regarded as an interactive process between the recommendation platform and its target users. One crucial problem for the platform is how to make full use of its advantages so as to maximize its utility, i.e., the commercial benefits from recommendation. In this paper, we propose a novel... | withdrawn-rejected-submissions | This paper received borderline negative scores. The reviewers all agree that the proposed approach is interesting. However, there are also common concerns around the clarity of the paper, as well as lacking sufficient empirical evaluation. One reviewer also argues that technical contribution is relatively limited. The ... | train | [
"5v6q7CQ6Rf",
"yCSjYzvvtV",
"5VXircYlENe",
"wK0bGoKIHqt",
"Yv8HjVH-Hmh",
"MNEeQt5lIC",
"5vMP8du5xiN",
"N2S8J7j9Vd",
"tB8WETxh21q",
"t0bRkp7ItcC"
] | [
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Decision: Concerns raised (can publish with adjustment)\n\nSummary: Reviewers and the Area Chair note the paper's emphasis on exploiting the \"information asymmetry\" between a recommender system and its users is cause for an ethical review. The idea in the paper is that a recommender system can utilize the uncer... | [
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
4,
1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2021_D5Wt3FtvCF",
"iclr_2021_D5Wt3FtvCF",
"iclr_2021_D5Wt3FtvCF",
"iclr_2021_D5Wt3FtvCF",
"tB8WETxh21q",
"yCSjYzvvtV",
"5VXircYlENe",
"t0bRkp7ItcC",
"iclr_2021_D5Wt3FtvCF",
"iclr_2021_D5Wt3FtvCF"
] |
iclr_2021_LpSGtq6F5xN | A Mixture of Variational Autoencoders for Deep Clustering | In this study, we propose a deep clustering algorithm that utilizes a variational autoencoder (VAE) framework with a multi encoder-decoder neural architecture. This setup enforces a complementary structure that guides the learned latent representations towards a more meaningful space arrangement. It differs from previo... | withdrawn-rejected-submissions | The submitted paper proposes a novel model/approach for deep clustering which shows good empirical performance on a set of standard benchmark datasets as compared to state of the art baseline algorithms. While I believe that this paper can be turned into a good ICLR paper, it doesn’t meet the standard of ICLR in its cu... | train | [
"T-D076YJk0",
"TPtLdLIeoB6",
"Bd0ZMeXihhA",
"1hcYZ1p-2Un",
"teMn7rFxml",
"FABnsRApn37",
"AsSyCJhb2UU",
"BUH0m9IBhvE",
"-BA_FUH2INA",
"-K7y2GEO8YE"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to cluster a data-set into k groups using k-VAE.\nThe model is at the intersection of k-DAE (Opochinsky et al., 2020), and VaDE (Jiang et al., 2016).\n\nThe paper is straightforward and goes directly to the point: VAEs improve AEs.\nThe reason is however not discussed.\n\nSome sentences are a b... | [
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_LpSGtq6F5xN",
"iclr_2021_LpSGtq6F5xN",
"teMn7rFxml",
"iclr_2021_LpSGtq6F5xN",
"TPtLdLIeoB6",
"-BA_FUH2INA",
"-K7y2GEO8YE",
"T-D076YJk0",
"iclr_2021_LpSGtq6F5xN",
"iclr_2021_LpSGtq6F5xN"
] |
iclr_2021_9wHe4F-lpp | FTBNN: Rethinking Non-linearity for 1-bit CNNs and Going Beyond | Binary neural networks (BNNs), where both weights and activations are binarized into 1 bit, have been widely studied in recent years due to its great benefit of highly accelerated computation and substantially reduced memory footprint that appeal to the development of resource constrained devices. In contrast to previo... | withdrawn-rejected-submissions | ## Description
The paper proposes an improvement to binary neural networks with real-valued skip connections between pre-activations, by introducing more flexible learnable non-linearities on the real-valued connections. The parametric non-linearity is actually linear at initialization, which makes the training easie... | test | [
"YowNrENxbK",
"Ko7-1hKNRmM",
"DmJmsf--vZS",
"FrRo99vL-4Y",
"pDjOChx-dnd",
"a1eY0LfjIyY",
"d34oPIiXv0c",
"rXEN3-UyRhI"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We really appreciate your review and detailed feedbacks. Please see the replies of your concerns as follows:\n\nQ1: The accuracy improvement may come from the fact that both [0, 1] and [-1, 1] activations are used in a network.\n\nR1: Thanks for pointing out this important question. Because in the [0, 1] activatio... | [
-1,
-1,
-1,
-1,
5,
6,
3,
4
] | [
-1,
-1,
-1,
-1,
4,
2,
5,
5
] | [
"pDjOChx-dnd",
"rXEN3-UyRhI",
"d34oPIiXv0c",
"a1eY0LfjIyY",
"iclr_2021_9wHe4F-lpp",
"iclr_2021_9wHe4F-lpp",
"iclr_2021_9wHe4F-lpp",
"iclr_2021_9wHe4F-lpp"
] |
iclr_2021_MG8Zde0ip6u | A Siamese Neural Network for Behavioral Biometrics Authentication | The raise in popularity of personalized web and mobile applications brings about a need of robust authentication systems. Although password authentication is the most popular authentication mechanism, it has also several drawbacks. Behavioral Biometrics Authentication has emerged as a complementary risk-based authentic... | withdrawn-rejected-submissions | This paper received 3 reviews with mixed initial ratings: 9,4,5. The main concerns of R1 and R3, who gave unfavorable scores, included lack of novelty and hence limited value of this work for the ML community. At the same time, R5 strongly advocates for acceptance and mentions meaningful contributions in the context of... | train | [
"tkck6hNilRW",
"u0zIeuJAtl3",
"HK4dJj2wKkG",
"Btz2F3C3Rv9",
"6I6WqBN-oNq",
"0vTcET67eH",
"KEtnHt07bi-"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the valuable comments and insights. In the following we address specific concerns of the reviewer. \n\n**“That said, originality in terms of research contribution is limited. Siamese NNs have long been in use and recently revisited for variety of vision tasks such as face recognition [1, ... | [
-1,
-1,
-1,
-1,
5,
4,
9
] | [
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"6I6WqBN-oNq",
"0vTcET67eH",
"KEtnHt07bi-",
"iclr_2021_MG8Zde0ip6u",
"iclr_2021_MG8Zde0ip6u",
"iclr_2021_MG8Zde0ip6u",
"iclr_2021_MG8Zde0ip6u"
] |
iclr_2021_F8lXvXpZdrL | Reintroducing Straight-Through Estimators as Principled Methods for Stochastic Binary Networks | Training neural networks with binary weights and activations is a challenging problem due to the lack of gradients and difficulty of optimization over discrete weights.
Many successful experimental results have been achieved with empirical straight-through (ST) approaches, proposing a variety of ad-hoc rules for ... | withdrawn-rejected-submissions | This paper collects a variety of results that cast straight-through estimators as arising as principled methods that make a linearization assumption on the loss for functions with binary arguments. R1 & R3 recommended against acceptance, citing clarity concerns and a lack of novelty. R2 & R4 recommended acceptance, but... | train | [
"3-Gu-7KGd3l",
"kIXvgn1wS_",
"jg9hUF_tfSj",
"ZkMruLA10os",
"B9u8SdlirSW",
"-YfiXKzFmbj",
"XkLXrZx9GsV",
"PgvhqHw4H7R",
"FJz94Av1o_Y",
"ZrKjrv0ADBh",
"qtt33UKWTgF",
"pf--6UYltq",
"Y-OrkOz4Egv",
"R5awjTpq2J"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper reintroduces the straight-through estimator with bias-variance analysis. It further discusses its relationship with some constrained optimization methods in convex optimization and \n\nIn general, the novelty of the paper on the methodology side is not high. Its value may lie in the theoretical analysis ... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
3
] | [
"iclr_2021_F8lXvXpZdrL",
"jg9hUF_tfSj",
"qtt33UKWTgF",
"qtt33UKWTgF",
"3-Gu-7KGd3l",
"XkLXrZx9GsV",
"pf--6UYltq",
"Y-OrkOz4Egv",
"iclr_2021_F8lXvXpZdrL",
"3-Gu-7KGd3l",
"R5awjTpq2J",
"iclr_2021_F8lXvXpZdrL",
"iclr_2021_F8lXvXpZdrL",
"iclr_2021_F8lXvXpZdrL"
] |
iclr_2021_1-j4VLSHApJ | Learn2Weight: Weights Transfer Defense against Similar-domain Adversarial Attacks | Recent work in black-box adversarial attacks for NLP systems has attracted attention. Prior black-box attacks assume that attackers can observe output labels from target models based on selected inputs. In this work, inspired by adversarial transferability, we propose a new type of black-box NLP adversarial attack tha... | withdrawn-rejected-submissions | The submission considers a new attack model for adversarial perturbation in a framework where the attacker has neither access to the trained model nor the data used for training the model. The submission suggests a"domain adaptation inspired attack": learn a different model on a similar domain and generate the adversar... | train | [
"NucAUxOGgRN",
"-HEWODswilw",
"bFuoorhlYuk",
"RKs9Uesyqaj",
"nmLHmnjUcj",
"HLdyLLMmK2o"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper, the authors propose a learn2weight framework to defend against similar-domain adversarial attacks. Experimental studies on Amazon dataset are done to verify the proposed learn2 weight. \n\nThe paper is not easy to follow. The presentation and organization should be further improved. Here are the det... | [
3,
5,
-1,
-1,
-1,
4
] | [
3,
5,
-1,
-1,
-1,
3
] | [
"iclr_2021_1-j4VLSHApJ",
"iclr_2021_1-j4VLSHApJ",
"-HEWODswilw",
"NucAUxOGgRN",
"HLdyLLMmK2o",
"iclr_2021_1-j4VLSHApJ"
] |
iclr_2021_8-sxWOto_iI | Introducing Sample Robustness | Choosing the right data and model for a pre-defined task is one of the critical competencies in machine learning. Investigating what features of a dataset and its underlying distribution a model decodes may enlighten the mysterious "black box" and guide us to a deeper and more profound understanding of the ongoing proc... | withdrawn-rejected-submissions | The paper proposes sample robustness (a data-dependent measure) which is essentially a point-wise Lipschitz constant of the label map. The measure is used to choose a subset of training data for training and it measures how small of a perturbation is required to cause a label change w.r.t. label map.
initially, the pap... | train | [
"y7-Tb_X6YI2",
"83RvmkdcW3B",
"KofreZAqXt",
"cT__nY9uMGZ",
"jPnjQbopFPt",
"AR7_AK-H1Y8",
"zblfXUpg262",
"8uA9pstvjAj",
"o5r4cnxfk0z"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: \n\nThis work introduces the concept of sample robustness – based on computing the pointwise Lipschitz constant of a data point – and use it to empirically analyze the effects of training on least and most robust training subsets on the performance for different models. This is done for both classificat... | [
4,
-1,
-1,
-1,
-1,
-1,
4,
2,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2021_8-sxWOto_iI",
"zblfXUpg262",
"o5r4cnxfk0z",
"8uA9pstvjAj",
"y7-Tb_X6YI2",
"iclr_2021_8-sxWOto_iI",
"iclr_2021_8-sxWOto_iI",
"iclr_2021_8-sxWOto_iI",
"iclr_2021_8-sxWOto_iI"
] |
iclr_2021_IVwXaHpiO0 | SyncTwin: Transparent Treatment Effect Estimation under Temporal Confounding | Estimating causal treatment effects using observational data is a problem with few solutions when the confounder has a temporal structure, e.g. the history of disease progression might impact both treatment decisions and clinical outcomes. For such a challenging problem, it is desirable for the method to be transparent... | withdrawn-rejected-submissions |
The paper developed a method that estimates treatment effect with
longitudinal observational data under temporal confounding. It extends
the idea of the synthetic control method and offers flexibility and
ease of estimation. However, some major concerns remain after the
discussion among the reviewers. In particular, t... | train | [
"RszXAePFYc6",
"LJAFzvESgGR",
"d0Ywjq5aZT8",
"ejV8jDumtVb",
"9ypKgm9AgZ",
"dv4e8-5wi7v",
"CM4uvn0okQc",
"JW_1NuUSOHq",
"rELmav_Y1pr",
"akcq2qx_EOy",
"PxVcHxBM9V",
"actt5qKGlVz",
"e7g_4760sC5",
"YbMd6HQ0SM9",
"Yi26qTuCX0L"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"- pros\n - The problem is well motivated. Estimating the causal effect with time-series data is a practical and important problem. Transparency is a desirable property. \n - The idea connects works in econometrics to modern ML. \n- cons\n - High level points\n - The paper proposes to use the number... | [
4,
7,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
3
] | [
3,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_IVwXaHpiO0",
"iclr_2021_IVwXaHpiO0",
"rELmav_Y1pr",
"iclr_2021_IVwXaHpiO0",
"RszXAePFYc6",
"LJAFzvESgGR",
"RszXAePFYc6",
"YbMd6HQ0SM9",
"ejV8jDumtVb",
"Yi26qTuCX0L",
"Yi26qTuCX0L",
"Yi26qTuCX0L",
"iclr_2021_IVwXaHpiO0",
"iclr_2021_IVwXaHpiO0",
"iclr_2021_IVwXaHpiO0"
] |
iclr_2021__QnwcbR-GG | On the Effectiveness of Weight-Encoded Neural Implicit 3D Shapes | A neural implicit outputs a number indicating whether the given query point in space is inside, outside, or on a surface. Many prior works have focused on _latent-encoded_ neural implicits, where a latent vector encoding of a specific shape is also fed as input. While affording latent-space interpolation, this comes at... | withdrawn-rejected-submissions | The paper proposes how weight-encoded neural implicit can be strong 3D shape representations. A neural network is trained such that it overfits over a single shape, and the weights of such network is a great representation for the 3D shape. Results are shown on signed distance field (SDF) generation from meshes.
Stren... | train | [
"6eN_xyQnwWw",
"_8WyQznKocM",
"floeDGtwWEX",
"n0uzrE-ikz4",
"FUsSgVgj4uw",
"OS-6mLf7SGO",
"zN-Lcbc6IUl"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their comments and interest in our work. We too are excited about the potential future work on our neural implicit format and are pleased to release our code and datasets for the community to explore.",
"***“I find 2.2 and 2.3 of potential interest, but they are not much developed nor e... | [
-1,
-1,
-1,
-1,
8,
4,
7
] | [
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"FUsSgVgj4uw",
"OS-6mLf7SGO",
"OS-6mLf7SGO",
"zN-Lcbc6IUl",
"iclr_2021__QnwcbR-GG",
"iclr_2021__QnwcbR-GG",
"iclr_2021__QnwcbR-GG"
] |
iclr_2021_EsA9Nr9JHvy | The Heavy-Tail Phenomenon in SGD | In recent years, various notions of capacity and complexity have been proposed for characterizing the generalization properties of stochastic gradient descent (SGD) in deep learning. Some of the popular notions that correlate well with the performance on unseen data are (i) the 'flatness' of the local minimum found by ... | withdrawn-rejected-submissions | This work seeks to describe the heavy-tail phenomenon observed for deep networks learned with SGD. The work presents proof of a relationship between curvature, step size, batch size, and a heavy-tail weight distribution. The proofs assume a quadratic optimization problem and the authors speculate that the results may a... | train | [
"uaGz5M7KT-",
"_oEtkjwJU7N",
"glIMBiMzNeG",
"sHWepYbjdc-",
"BqUaZjuax2H",
"ai2tIuMd2gA",
"_vOaFJawdUZ",
"wFToM9QUE--",
"S2_rqkUZWUc",
"KgWY2lcpI9I"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the relations between the heavy tail phenomenon of SGD and the ‘flatness’ of the local minimum found by SGD and the ratio of the step size $\\eta$ to the batch size $b$ for the quadratic and convex problem. They show that depending on the curvature, the step size, and the batch size, the iterate... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2021_EsA9Nr9JHvy",
"wFToM9QUE--",
"_vOaFJawdUZ",
"uaGz5M7KT-",
"KgWY2lcpI9I",
"S2_rqkUZWUc",
"iclr_2021_EsA9Nr9JHvy",
"iclr_2021_EsA9Nr9JHvy",
"iclr_2021_EsA9Nr9JHvy",
"iclr_2021_EsA9Nr9JHvy"
] |
iclr_2021_6y3-wzlGHkb | Non-robust Features through the Lens of Universal Perturbations | Recent work ties adversarial examples to existence of non-robust features: features which are susceptible to small perturbations and believed to be unintelligible to humans, but still useful for prediction. We study universal adversarial perturbations and demonstrate that the above picture is more nuanced. Specifically... | withdrawn-rejected-submissions | The reviews were a bit mixed, and there was some concern on the usefulness and actual novelty of this work. On one hand, the authors did a nice job in visualizing their findings and conducting a wealth of interesting experiments. On the other hand, the submission suffers severely from hand-waving definitions and argume... | train | [
"Q603bVQjx8f",
"cU06Kgil4i9",
"R9YPvDjODry",
"MpvTv1YfiNp",
"Fs7bzy5WunI",
"9o_tug-3GsZ",
"H7b1acljb9f",
"Og7DaEtjjME",
"qyC4f2Y4-fG",
"lbmEbAwhkoX",
"_Gg0jA_MzH",
"TVyjeByKWMg",
"su8TNUCBYXG"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper analyzes the existence of non-robust features through the lens of universal perturbations. The concept of non-robust features and universal perturbations is not new, but it is interesting to study the two concepts together. \n\nStrength: the paper gives a detailed study of the property of non-robust fea... | [
5,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"iclr_2021_6y3-wzlGHkb",
"MpvTv1YfiNp",
"Fs7bzy5WunI",
"lbmEbAwhkoX",
"H7b1acljb9f",
"iclr_2021_6y3-wzlGHkb",
"9o_tug-3GsZ",
"Q603bVQjx8f",
"iclr_2021_6y3-wzlGHkb",
"su8TNUCBYXG",
"TVyjeByKWMg",
"iclr_2021_6y3-wzlGHkb",
"iclr_2021_6y3-wzlGHkb"
] |
iclr_2021_1Q-CqRjUzf | On the Reproducibility of Neural Network Predictions | Standard training techniques for neural networks involve multiple sources of randomness, e.g., initialization, mini-batch ordering and in some cases data augmentation. Given that neural networks are heavily over-parameterized in practice, such randomness can cause {\em churn} -- disagreements between predictions of the... | withdrawn-rejected-submissions | Fitting a neural net is a stochastic process, with many sources of stochasticity, including initialization, batch presentation, data augmentation, non-deterministic low-level operations and the non-associativity of rounding errors in multi-threads systems such as GPUs and TPUs. In this paper, the authors aim to allevia... | train | [
"4HOnX-eOiUn",
"4nkEwJT40qv",
"pG4-qsoGxZ6",
"gaJwJqSHeL5",
"KtpxpxT4vio",
"Sh942wQpsy9",
"tc_jTy4F7oN"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work studies the ‘churn’ (disagreement between predictions of two replicates) caused by different sources of variation in the training procedure and proposes solutions to reduce it. One solution is to use minimum entropy regularization to increase prediction confidences and the second solution is to force mod... | [
5,
-1,
-1,
-1,
-1,
4,
5
] | [
4,
-1,
-1,
-1,
-1,
3,
2
] | [
"iclr_2021_1Q-CqRjUzf",
"Sh942wQpsy9",
"tc_jTy4F7oN",
"4HOnX-eOiUn",
"iclr_2021_1Q-CqRjUzf",
"iclr_2021_1Q-CqRjUzf",
"iclr_2021_1Q-CqRjUzf"
] |
iclr_2021_t5lNr0Lw84H | Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms | We benchmark commonly used multi-agent deep reinforcement learning (MARL) algorithms on a variety of cooperative multi-agent games. While there has been significant innovation in MARL algorithms, algorithms tend to be tested and tuned on a single domain and their average performance across multiple domains is less char... | withdrawn-rejected-submissions | The paper presents a thorough comparison of different algorithms for multi-agent Deep RL methods. The conclusions of the paper is that across a variety of envionment and hyperparameter tuning, multi-agent PPO seems to peform well relatively to the competitors.
The reviewers agreed that the paper fills a gap in the lit... | train | [
"LwVVAxtCY67",
"ryUkD0veg7u",
"yyVDsYwKoec",
"U24FRYcSZUm",
"e2h1tmbSsp0",
"Z2Yx3i_Lfwe",
"n7-dlu9OCSw",
"JhfGV9EKQn",
"Bs8rH5w0G7C"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The major contribution of this paper is benchmarking 5 MARL algorithms on 4 cooperative multi-agent environments. Also, this paper found that under constrained hyperparameter search budgets, the multi-agent PPO algorithm has more consistent performance over the other algorithms across different tested multi-agent ... | [
6,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_t5lNr0Lw84H",
"iclr_2021_t5lNr0Lw84H",
"iclr_2021_t5lNr0Lw84H",
"JhfGV9EKQn",
"ryUkD0veg7u",
"Bs8rH5w0G7C",
"LwVVAxtCY67",
"iclr_2021_t5lNr0Lw84H",
"iclr_2021_t5lNr0Lw84H"
] |
iclr_2021_pwwVuSICBgt | Enabling Binary Neural Network Training on the Edge | The ever-growing computational demands of increasingly complex machine learning models frequently necessitate the use of powerful cloud-based infrastructure for their training. Binary neural networks are known to be promising candidates for on-device inference due to their extreme compute and memory savings over higher... | withdrawn-rejected-submissions | After reading the paper, reviews and authors’ feedback. The meta-reviewer agrees that this paper addresses an important topic. However, as the reviewers pointed out. The paper mainly builds the technique on simulated setting, and it is unclear how the method will translate to real world speedups. Past work(e.g. [1]) ha... | train | [
"kCBAi_0l7TX",
"B4kvJu-01oE",
"AMfZReX8924",
"hzax3orV6h",
"5Ucajy1JbJO",
"-n9jdfNsfqF",
"XWf3ptGW1oC",
"SB03xhDUg_a",
"nwvrc0YLcUI"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
">Although there are certain aspects that could be improved, such as including a table outlining in a clearer manner the contributions of the authors in this context.\n\nThank you for this suggestion. We now compare more directly against prior low-cost training works targeting non-binary networks in the newly added... | [
-1,
-1,
-1,
-1,
-1,
8,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
4
] | [
"-n9jdfNsfqF",
"XWf3ptGW1oC",
"nwvrc0YLcUI",
"SB03xhDUg_a",
"iclr_2021_pwwVuSICBgt",
"iclr_2021_pwwVuSICBgt",
"iclr_2021_pwwVuSICBgt",
"iclr_2021_pwwVuSICBgt",
"iclr_2021_pwwVuSICBgt"
] |
iclr_2021_mWnfMrd9JLr | On the Latent Space of Flow-based Models | Flow-based generative models typically define a latent space with dimensionality identical to the observational space. In many problems, however, the data does not populate the full ambient data-space that they natively reside in, but rather inhabit a lower-dimensional manifold. In such scenarios, flow-based models ar... | withdrawn-rejected-submissions | The paper proposes an algorithm for training flow models by minimizing the KL divergence in the latent space Z. The paper addresses an important problem in training flow models. However, some major concerns remain after the discussion among the reviewers. The scale of the experiments and the scalability of the approach... | train | [
"PcAtp6pWgdp",
"1RMcrDZpYE",
"h-CMqiI7GHQ",
"1kW8-PlmbPU",
"di_oRuiBC5h",
"F3h064tcXpy",
"CGvPI-O4e-q",
"bThvka2WwFH",
"5I-AHiXrR9",
"oyXFucOT_eg",
"dc96UxRoBqJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new method to train flow models on data from low dimensional manifolds embedded in high dimensional ambient spaces. The basic idea is based on minimizing the KL divergence in the latent space, which is equivalent to maximizing expected log-likelihood over the data distribution. Since the KL b... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"iclr_2021_mWnfMrd9JLr",
"iclr_2021_mWnfMrd9JLr",
"iclr_2021_mWnfMrd9JLr",
"5I-AHiXrR9",
"dc96UxRoBqJ",
"1RMcrDZpYE",
"oyXFucOT_eg",
"PcAtp6pWgdp",
"iclr_2021_mWnfMrd9JLr",
"iclr_2021_mWnfMrd9JLr",
"iclr_2021_mWnfMrd9JLr"
] |
iclr_2021_IG3jEGLN0jd | Contrastive estimation reveals topic posterior information to linear models | Contrastive learning is an approach to representation learning that utilizes naturally occurring similar and dissimilar pairs of data points to find useful embeddings of data. In the context of document classification under topic modeling assumptions, we prove that contrastive learning is capable of recovering a repres... | withdrawn-rejected-submissions | This promising work proves that the proposed contrastive learning approach to representation learning can recover the underlying topic posterior information given standard topic modelling assumptions. The work provides detailed proof and detailed experiments. The analysis is interesting and yields interesting insights.... | train | [
"vbge2M9Qwt6",
"ELUlO4ddw6B",
"2rpJ-VjoSVL",
"v6AxUMV4iFr",
"SEPSNhoO8E_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nContrastive learning is applied in a semi-supervised setting with few training examples to provide features for a linear classifier. A theoretical analysis is provided to show that contrastive embeddings allow to predict using a linear transformation. The model allows to recover the topic structure of a ... | [
5,
-1,
7,
6,
6
] | [
4,
-1,
4,
2,
3
] | [
"iclr_2021_IG3jEGLN0jd",
"2rpJ-VjoSVL",
"iclr_2021_IG3jEGLN0jd",
"iclr_2021_IG3jEGLN0jd",
"iclr_2021_IG3jEGLN0jd"
] |
iclr_2021_H8VDvtm1ij8 | Normalizing Flows for Calibration and Recalibration | In machine learning, due to model misspecification and overfitting, estimates of the aleatoric uncertainty are often inaccurate.
One approach to fix this is isotonic regression, in which a monotonic function is fit on a validation set to map the model's CDF to an optimally calibrated CDF. However, this makes it i... | withdrawn-rejected-submissions | The paper proposes to recalibrate predictive models by fitting a
normalizing flow on top of the predictive model on a held out validation
set using side information. At a high level this idea has some potential,
especially in the multivariate setting, but there are several directions for
improvement:
- Comparison with... | train | [
"tNqIsxNTWX",
"pjPdE6-8svh",
"zadZ1cHaWU",
"Z3XghORrp22",
"e5WV6dGM--6",
"hAW7_O5VkoR",
"M7SWXscDUil",
"ah_sfcB04GN",
"57qyvZSVdv",
"OOyYBo8xKRO"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the response. The role of the proposed method -- and how it differs with that introduced by Kuleshov et al -- is more clear to me now. That being said, the experimental results don't offer a very compelling story for improvement. My advice for the next revision would be to find tasks where it is vita... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
4,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
4,
5
] | [
"zadZ1cHaWU",
"M7SWXscDUil",
"ah_sfcB04GN",
"57qyvZSVdv",
"OOyYBo8xKRO",
"iclr_2021_H8VDvtm1ij8",
"iclr_2021_H8VDvtm1ij8",
"iclr_2021_H8VDvtm1ij8",
"iclr_2021_H8VDvtm1ij8",
"iclr_2021_H8VDvtm1ij8"
] |
iclr_2021_tf8a4jDRFCv | Learning Aggregation Functions | Learning on sets is increasingly gaining attention in the machine learning community, due to its widespread applicability. Typically, representations over sets are computed by using fixed aggregation functions such as sum or maximum. However, recent results showed that universal function representation by sum- (or max... | withdrawn-rejected-submissions | The authors propose a novel and elegant way for learning parameterized aggregation functions and show that their approach can achieve good performance on several datasets (in many cases outperforming other state-of-the-art methods). This is also appreciated by most of the reviewers. However, there have been several iss... | train | [
"OzEdwXfTWO_",
"S_GbWX5Chsr",
"kXfw73Rka1N",
"qpPTbYfkaN",
"ukmCfRiCNgL",
"fwm_mZaQpTI",
"CFIfcS-0F2",
"iZ7Xy--PhFF"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" - One of the problems with learnable rational approximations is the\n potential of finding a pole, e.g., x/0, you do not mention how you\n avoid/use? such a situation in your approach, whether this causes\n instabilities during learning, etc. Could you mention something\n about this? \n\n This is... | [
-1,
-1,
-1,
-1,
6,
5,
3,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"fwm_mZaQpTI",
"ukmCfRiCNgL",
"CFIfcS-0F2",
"iZ7Xy--PhFF",
"iclr_2021_tf8a4jDRFCv",
"iclr_2021_tf8a4jDRFCv",
"iclr_2021_tf8a4jDRFCv",
"iclr_2021_tf8a4jDRFCv"
] |
iclr_2021_aLtty4sUo0o | Quickest change detection for multi-task problems under unknown parameters | We consider the quickest change detection problem where both the parameters of pre- and post- change distributions are unknown, which prevent the use of classical simple hypothesis testing. Without additional assumptions, optimal solutions are not tractable as they rely on some minimax and robust variant of the objecti... | withdrawn-rejected-submissions | The paper treats a relevant and challenging problem in sequential learning scenarios -- how to detect distributional change over time when the pre- and post-change distributions are not known up to certainty. All reviewers more or less acknowledge that the paper presents a new approach towards solving this inference pr... | train | [
"q4BTjZuT-3l",
"i0U03TKMV3",
"ahuNr9lMl8k",
"1Bj-N3QdfI8",
"M-3FimTEc6",
"uuvfn4Hwyiu",
"r0ZVnmAq7mu",
"Sj_UkoO-SRM",
"2lQdfFNlxfJ",
"2R6ORWKTMC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"***** Paper's Summary *****\n\nThis paper considers the quickest change detection (QCD) problem where pre-change and post-change distributions are unknown. For such problems, the authors proposed approximate algorithms in MIN-MAX and Bayesian settings. The algorithms run in O(1) and have near-optimal performance... | [
7,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6
] | [
2,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_aLtty4sUo0o",
"iclr_2021_aLtty4sUo0o",
"1Bj-N3QdfI8",
"r0ZVnmAq7mu",
"iclr_2021_aLtty4sUo0o",
"i0U03TKMV3",
"M-3FimTEc6",
"q4BTjZuT-3l",
"2R6ORWKTMC",
"iclr_2021_aLtty4sUo0o"
] |
iclr_2021_o2N6AYOp31 | Augmentation-Interpolative AutoEncoders for Unsupervised Few-Shot Image Generation | We aim to build image generation models that generalize to new domains from few examples. To this end, we first investigate the generalization properties of classic image generators, and discover that autoencoders generalize extremely well to new domains, even when trained on highly constrained data. We leverage this i... | withdrawn-rejected-submissions | This meta-review is written after considering the reviews, the authors’ responses, the discussion, and the paper itself.
The paper proposes a training scheme for autoencoders, involving data augmentation and interpolation, that results in autoencoders for which interpolations in the latent space lead to meaningful int... | test | [
"xGePHk3XOWe",
"Bg639sw0R39",
"FdSJVF4Tbji",
"ChASrY-ZwQi",
"Xc0Hj7Ed1i",
"ZRmhj5MhwtC"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The general idea of the paper is interesting: when using an AE one can use the constraint that \"interpolated images\" should also correspond to \"interpolated latent codes\". While the idea is interesting, the experimental results are not really that compelling. \n\nThe proposed approach in section 4 is an intere... | [
5,
-1,
-1,
-1,
4,
4
] | [
4,
-1,
-1,
-1,
4,
5
] | [
"iclr_2021_o2N6AYOp31",
"ZRmhj5MhwtC",
"Xc0Hj7Ed1i",
"xGePHk3XOWe",
"iclr_2021_o2N6AYOp31",
"iclr_2021_o2N6AYOp31"
] |
iclr_2021_20qC5K2ICZL | Robust Learning via Golden Symmetric Loss of (un)Trusted Labels | Learning robust deep models against noisy labels becomes ever critical when today's data is commonly collected from open platforms and subject to adversarial corruption. The information on the label corruption process, i.e., corruption matrix, can greatly enhance the robustness of deep models but still fall behind in c... | withdrawn-rejected-submissions | The paper studies robust learning in the presence of noisy labels and proposes a new loss function called the golden symmetric loss (GSL) combining both regular cross-entropy and reverse cross entropy and leveraging an estimate of the corruption matrix. The paper appears to be well-written.
Pros:
- Good range of appli... | train | [
"xcGqGNUGlwd",
"n3gpwU7EIQ",
"p3i8PvEKrOh",
"U7MP-jsJ4_o",
"dGHBWZQ1n05",
"GKVsemf5Irs",
"htGw4dKVtGZ",
"zUKJ2oQwgcR",
"FApfIBcaDeb"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer"
] | [
"##### Summary\n\nThe paper proposes a golden symmetric loss method that combines cross entropy loss and reverse cross entropy loss, and at the same time performs forward loss correction for cross entropy loss, under the problem where we have noisy training labels and clean training labels. It shows theoretical res... | [
5,
-1,
-1,
-1,
-1,
3,
-1,
4,
4
] | [
4,
-1,
-1,
-1,
-1,
5,
-1,
4,
3
] | [
"iclr_2021_20qC5K2ICZL",
"zUKJ2oQwgcR",
"GKVsemf5Irs",
"xcGqGNUGlwd",
"FApfIBcaDeb",
"iclr_2021_20qC5K2ICZL",
"iclr_2021_20qC5K2ICZL",
"iclr_2021_20qC5K2ICZL",
"iclr_2021_20qC5K2ICZL"
] |
iclr_2021_OCm0rwa1lx1 | Addressing Some Limitations of Transformers with Feedback Memory | Transformers have been successfully applied to sequential tasks despite being feedforward networks. Unlike recurrent neural networks, Transformers use attention to capture temporal relations while processing input tokens in parallel. While this parallelization makes them computationally efficient, it restricts the mod... | withdrawn-rejected-submissions | This paper focuses on the limitations of the transformer architecture as an autoregressive model. The paper is relatively easy to follow. Though most reviewers find the paper interesting, the idea is not very novel. The introduction of sequential-ness to Transformer is good, though it also slow things down especially a... | train | [
"iW8K2nhM7r",
"8UHbH-Fq4IK",
"K4SqbKvbnP1",
"LI19fZGqts",
"DYdPERtnqpy",
"ezLeNQuQQ05",
"v-ki4Ebwlh",
"mrVNNvmBHK_",
"GkpG1rprUeV",
"C3a90inGuOM",
"JWEUZwNHl6_",
"Y3QldkPvuFs",
"AQX92e7BBc",
"CahuumFe_i9",
"3wpl50jL9py",
"QjLXU6Jre1O",
"YGb02ErA3Ka",
"KXrzXY-86jP",
"ddFJhYoHzyZ",... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"... | [
"We thank the reviewer for the detailed review and discussion, we appreciate it. Below, we provide additional details to help clarify our contribution, and we will make sure that these clarifications (as well as the others above) will be clear in the final version of our paper. Thanks for the consideration, and tha... | [
-1,
5,
-1,
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
5,
-1,
-1,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"K4SqbKvbnP1",
"iclr_2021_OCm0rwa1lx1",
"KXrzXY-86jP",
"AXItnFX4W9N",
"iclr_2021_OCm0rwa1lx1",
"iclr_2021_OCm0rwa1lx1",
"ezLeNQuQQ05",
"GkpG1rprUeV",
"C3a90inGuOM",
"JWEUZwNHl6_",
"AQX92e7BBc",
"We7Uggc4Ggr",
"YGb02ErA3Ka",
"D37yg-s2MVS",
"DYdPERtnqpy",
"iclr_2021_OCm0rwa1lx1",
"We7U... |
iclr_2021_sFDJNhwz7S | Semantic Hashing with Locality Sensitive Embeddings | Semantic hashing methods have been explored for learning transformations into binary vector spaces. These learned binary representations may then be used in hashing based retrieval methods, typically by retrieving all neighboring elements in the Hamming ball with radius 1 or 2. Prior studies focus on tasks with a few ... | withdrawn-rejected-submissions | Thanks for your submission to ICLR.
This paper presents an extension to Deep Hashing Networks that utilizes angular similarity, and show improved results using the proposed method. The reviewers were somewhat mixed on this paper, with two of three reviewers on the negative side. Some reviewers appreciated that the p... | train | [
"RIcmxqgcP9n",
"V521Tvd8wxu",
"rwz177xDRfb",
"6dwcsY1a2-",
"A7w0XKRsgpU",
"vCPNc9ZuHat",
"BkwA-r3Zq7G",
"uyJIEIlys1",
"tvQA_5UB8v6",
"f-OHYZXPKO4",
"vMvqFO74ypT"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Please see sections 3.3 and A.2 for theoretical justifications for angular similarity.",
"Please see Section A.1 in the appendix for a study on the impact of hard negatives. We note that the DHN model improves in the scenario where hard negatives are removed.",
"Please see Sections 3.3, A.2 for updates on the theoreti... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"tvQA_5UB8v6",
"f-OHYZXPKO4",
"vMvqFO74ypT",
"uyJIEIlys1",
"tvQA_5UB8v6",
"f-OHYZXPKO4",
"vMvqFO74ypT",
"iclr_2021_sFDJNhwz7S",
"iclr_2021_sFDJNhwz7S",
"iclr_2021_sFDJNhwz7S",
"iclr_2021_sFDJNhwz7S"
] |
iclr_2021_rI3RMgDkZqJ | A Primal Approach to Constrained Policy Optimization: Global Optimality and Finite-Time Analysis | Safe reinforcement learning (SRL) problems are typically modeled as constrained Markov Decision Process (CMDP), in which an agent explores the environment to maximize the expected total reward and meanwhile avoids violating certain constraints on a number of expected total costs. In general, such SRL problems have nonc... | withdrawn-rejected-submissions | All reviewers appreciated the main result in the paper, which gives global optimality guarantee for constrained policy optimization for both tabular setting and NTK setting. However, there were a number of unclear parts of the paper reported by several reviewers (assumptions, hyperparameter tuning, complexity dependen... | test | [
"82rePtBSzD",
"XlrlASqLLnK",
"20jLyo6n_Ig",
"_UrcaDyXkMX",
"LzU23EKwuOi",
"CqyKfUdPHlO",
"kKVHhbx2cgI",
"Wg-sT2wlEAG"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers safe RL through solving a constrained MDP in (1). The authors proposed a primal method which alternates between maximizing the reward and minimizing the constraint violation. The convergence rate of the algorithm is also provided under standard settings, e.g., bounded reward, iid samples, etc.... | [
6,
-1,
-1,
-1,
-1,
7,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"iclr_2021_rI3RMgDkZqJ",
"82rePtBSzD",
"kKVHhbx2cgI",
"CqyKfUdPHlO",
"Wg-sT2wlEAG",
"iclr_2021_rI3RMgDkZqJ",
"iclr_2021_rI3RMgDkZqJ",
"iclr_2021_rI3RMgDkZqJ"
] |
iclr_2021_Ip195saXqIX | Knowledge Distillation By Sparse Representation Matching | Knowledge Distillation refers to a class of methods that transfers the knowledge from a teacher network to a student network. In this paper, we propose Sparse Representation Matching (SRM), a method to transfer intermediate knowledge obtained from one Convolutional Neural Network (CNN) to another by utilizing sparse re... | withdrawn-rejected-submissions | The paper received four negative reviews. The overall idea was found to be interesting, but several concerns were raised. There is a general consensus that the experimental part and the results are not convincing. Several comments have also been made regarding the clarity and motivation, which needs to be strengthened.... | test | [
"czPCeOf8Np",
"H9bPRGc1oFD",
"ltiVojyW_Ya",
"d_8e7YDOPtg",
"8qoCpadpujq",
"Dl2ebss-IXr",
"ovSOzvH3EVG",
"O1pUKEU5iAz",
"ZN3nlS0tThi",
"39dbgUF8EAw",
"hk8XVfIeWN",
"L_1zmld6T3X",
"fyZFJwIICLP",
"kbgejaU9gV",
"vBQ8mbQXWs8",
"E1QfDE1aQAq",
"YDrIDnDu_N2",
"in-swVyEztP",
"EkDilRQvn3r"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We do not agree with the Reviewer on the notion of conceptual simplicity as this is subjective and we believe the concept of sparse representation is as easy to understand and simple as a clustering concept. But more importantly, there is no argument against the usefulness of our method. There is an alternative su... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
5
] | [
"H9bPRGc1oFD",
"8qoCpadpujq",
"d_8e7YDOPtg",
"39dbgUF8EAw",
"Dl2ebss-IXr",
"ovSOzvH3EVG",
"O1pUKEU5iAz",
"YDrIDnDu_N2",
"E1QfDE1aQAq",
"E1QfDE1aQAq",
"in-swVyEztP",
"in-swVyEztP",
"YDrIDnDu_N2",
"YDrIDnDu_N2",
"EkDilRQvn3r",
"iclr_2021_Ip195saXqIX",
"iclr_2021_Ip195saXqIX",
"iclr_2... |
iclr_2021_YbDGyviJkrL | Transformers for Modeling Physical Systems | Transformers are widely used in neural language processing due to their ability to model longer-term dependencies in text. Although these models achieve state-of-the-art performance for many language related tasks, their applicability outside of the neural language processing field has been minimal. In this work, we pr... | withdrawn-rejected-submissions | The paper introduces a framework for learning dynamical system models from observations consisting of discrete spatio-temporal series. It is composed of two components trained sequentially. A first one learns embedding from observations using a seq2seq approach, where the embeddings are constrained to follow linear dy... | train | [
"V_OY67Q-8Qf",
"nWBOn60ppzu",
"U8QlURjFMgY",
"bT_3ram1CSH",
"L9AgiBqm2_l",
"Ydsx8WcrHxJ",
"F7YUzovD8sR",
"Xn2TGxNFK41",
"Q_P7UUEqL7o",
"w3ksSwEmEV-",
"MZGiUX9adty",
"3ink3-dD-w_",
"mYUIMKI32uA"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Summary:\nThe paper proposes to use transformers for modelling dynamical systems. The transformer is combined with a linear dynamical system to enforce Koopman features and is trained using the reconstruction and prediction loss. Finally, the proposed algorithm is applied to the different tasks with 1, 2 & 3 d... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"iclr_2021_YbDGyviJkrL",
"bT_3ram1CSH",
"Ydsx8WcrHxJ",
"L9AgiBqm2_l",
"Ydsx8WcrHxJ",
"F7YUzovD8sR",
"V_OY67Q-8Qf",
"MZGiUX9adty",
"mYUIMKI32uA",
"3ink3-dD-w_",
"iclr_2021_YbDGyviJkrL",
"iclr_2021_YbDGyviJkrL",
"iclr_2021_YbDGyviJkrL"
] |
iclr_2021_dhQHk8ShEmF | Informative Outlier Matters: Robustifying Out-of-distribution Detection Using Outlier Mining | Detecting out-of-distribution (OOD) inputs is critical for safely deploying deep learning models in an open-world setting. However, existing OOD detection solutions can be brittle in the open world, facing various types of adversarial OOD inputs. While methods leveraging auxiliary OOD data have emerged, our analysis re... | withdrawn-rejected-submissions | In this paper, the authors studied a robust method for detecting out-of-distribution (OOD) instances. OOD instance detection is an important practical problem, and multiple reviewers recognized the proposed approach is interesting. However, it was the common opinion of several reviewers that the main theoretical analys... | train | [
"81WVNyiYyXt",
"mCXOmLDluo",
"CxLdU1NbZfz",
"3V-3Lgf00IP",
"jy5DN3HI0Zr",
"EgRBP_wkp98",
"64NT1edz9iQ",
"ByM1dn0KsV",
"lMWbl8YAzI0",
"SJIxxcmLQhU",
"zo1l-bL764i",
"Ho0g1c4KlMr",
"FgxHrr3QhfA",
"PrPuLoLpeM",
"p72j6tzOzfQ",
"AdntSU1ATJ5",
"zk2oI3QeajG",
"8xJIn6Nc-dn"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"In this paper the authors propose a method for training a classifier to be more effective at OOD (out of distribution) detection. Many OOD detection methods work by utilizing an auxiliary dataset as examples of OOD-ness. This is the approach taken in this paper and OOD is trained as being a k+1 classification clas... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
3
] | [
"iclr_2021_dhQHk8ShEmF",
"3V-3Lgf00IP",
"iclr_2021_dhQHk8ShEmF",
"64NT1edz9iQ",
"81WVNyiYyXt",
"81WVNyiYyXt",
"AdntSU1ATJ5",
"SJIxxcmLQhU",
"iclr_2021_dhQHk8ShEmF",
"lMWbl8YAzI0",
"lMWbl8YAzI0",
"p72j6tzOzfQ",
"lMWbl8YAzI0",
"8xJIn6Nc-dn",
"iclr_2021_dhQHk8ShEmF",
"zk2oI3QeajG",
"p72... |
iclr_2021_2ioNazs6lvw | Learning to generate Wasserstein barycenters | Optimal transport is a notoriously difficult problem to solve numerically, with current approaches often remaining intractable for very large scale applications such as those encountered in machine learning. Wasserstein barycenters -- the problem of finding measures in-between given input measures in the optimal transp... | withdrawn-rejected-submissions | Given the reviewer's exchange with the authors, and my own examination of the paper, I don't think that it can be accepted in the present form.
First, since this paper aims at solving an optimization problem (for which existing methods exist, with theoretical guarantees) via a NN, it is important to compare appropriat... | train | [
"khrURN-aHwe",
"MwhDXC20-df",
"KtKEGiwnQFU",
"wrmkbEuxm9m",
"pVu1uO1LcpN",
"nAWHJe14ScA"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their constructive feedback, and detail our answer below.\n1. While we did not consider non-image settings in our paper, we expect our model to be trivially extended to voxel grids. However, the unstructured data case is much more complicated since it cannot benefit from the fast convoluti... | [
-1,
-1,
-1,
3,
7,
6
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"wrmkbEuxm9m",
"pVu1uO1LcpN",
"nAWHJe14ScA",
"iclr_2021_2ioNazs6lvw",
"iclr_2021_2ioNazs6lvw",
"iclr_2021_2ioNazs6lvw"
] |
iclr_2021_wbQXW1XTq_y | Sim2SG: Sim-to-Real Scene Graph Generation for Transfer Learning | Scene graph (SG) generation has been gaining a lot of traction recently. Current SG generation techniques, however, rely on the availability of expensive and limited number of labeled datasets. Synthetic data offers a viable alternative as labels are essentially free. However, neural network models trained on synthetic... | withdrawn-rejected-submissions | Paper addresses the problem of sim2real (training with synthetic data and then applying the learned model on real data) in the context of scene graph generation. The paper was reviewed by four expert reviewers who identified the following pros and cons of the method.
> Pros:
- Paper addresses relevant and importa... | train | [
"fHo92lcWkW",
"XLGIQSFIMC8",
"6Tg3MHDXM2-",
"muITaEJ6Q6r",
"PfBzZMNESyl",
"4X1VdkWXj_I",
"6j7F3EyObRU",
"oawATQTXnm3",
"xSZ77KjExK",
"CYjHG8PYp0P"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"* **Analysis Experiments**. We have added Figure 3 to Section 4.3 (second last paragraph) that illustrates the evolution of the generated data through different epochs of the training process using label alignment.\n",
"* **Applicability to Visual Genome type of Scenarios**. We assume access to simulators for ea... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"muITaEJ6Q6r",
"6j7F3EyObRU",
"oawATQTXnm3",
"CYjHG8PYp0P",
"xSZ77KjExK",
"iclr_2021_wbQXW1XTq_y",
"iclr_2021_wbQXW1XTq_y",
"iclr_2021_wbQXW1XTq_y",
"iclr_2021_wbQXW1XTq_y",
"iclr_2021_wbQXW1XTq_y"
] |
iclr_2021_gBpYGXH9J7F | Online Learning under Adversarial Corruptions | We study the design of efficient online learning algorithms tolerant to adversarially corrupted rewards. In particular, we study settings where an online algorithm makes a prediction at each time step, and receives a stochastic reward from the environment that can be arbitrarily corrupted with probability ϵ∈[0,12). Her... | withdrawn-rejected-submissions | The discussion with the expert reviewers reached the consensus that the paper lacks in novel *technical* contributions, and as such it does not meet the bar for a theory-oriented paper at ICLR. | train | [
"cIgCsqf7AEt",
"2PsXdF4IYda",
"4m6MYm8OCS6",
"f8L96-ySvSv",
"2b5bYF1LnGW"
] | [
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The work studies three online learning problems with corrupted rewards as the feedback. The three problems are the stochastic multi-armed bandit, the linear contextual bandit, and the reinforcement learning of the Markov Decision Process optimization.\n\nThe major contributions are three improved regret bounds for... | [
5,
-1,
7,
5,
5
] | [
4,
-1,
4,
3,
4
] | [
"iclr_2021_gBpYGXH9J7F",
"iclr_2021_gBpYGXH9J7F",
"iclr_2021_gBpYGXH9J7F",
"iclr_2021_gBpYGXH9J7F",
"iclr_2021_gBpYGXH9J7F"
] |
iclr_2021_8nXkyH2_s6 | Neural networks behave as hash encoders: An empirical study | The input space of a neural network with ReLU-like activations is partitioned into multiple linear regions, each corresponding to a specific activation pattern of the included ReLU-like activations. We demonstrate that this partition exhibits the following encoding properties across a variety of deep learning models: (... | withdrawn-rejected-submissions | The paper studies the behavior of the intermediate ReLU-like activations of trained neural networks and show empirically that the intermediate activation can be used as a hashing function for the examples with some key advantages, including almost no collisions and that there are desirable geometric properties (i.e. ca... | val | [
"W30ojOnalj",
"SDqOQqU29av",
"UXdEFurD4yE",
"hxZNTWaR2m",
"uEHJukFnsL2",
"AoBXyCQYEXf",
"xZ9fIR_K9J",
"OW2hRwS1YqX",
"hLIoeqqXiBF",
"Ph2wPoCAdG7",
"R4hx8GyZQR2",
"OmpV7j0f_3M",
"phLQfsZVUga",
"bIOWQHLFaJN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"**Update after rebuttal:** I thank the authors for their detailed responses and the additional experiments. The responses addressed most of my concerns. I noticed that I had the wrong notion of redundancy ratio in my mind (I'm glad the authors now give a more formal definition of this concept as I think this would... | [
6,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
2,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_8nXkyH2_s6",
"iclr_2021_8nXkyH2_s6",
"iclr_2021_8nXkyH2_s6",
"Ph2wPoCAdG7",
"W30ojOnalj",
"W30ojOnalj",
"UXdEFurD4yE",
"W30ojOnalj",
"bIOWQHLFaJN",
"iclr_2021_8nXkyH2_s6",
"UXdEFurD4yE",
"SDqOQqU29av",
"UXdEFurD4yE",
"iclr_2021_8nXkyH2_s6"
] |
iclr_2021_-WwaX9vKKt | Ask Question with Double Hints: Visual Question Generation with Answer-awareness and Region-reference | The task of visual question generation~(VQG) aims to generate human-like questions from an image and potentially other side information (e.g. answer type or the answer itself). Despite promising results have been achieved, previous works on VQG either i) suffer from one image to many questions mapping problem rendering... | withdrawn-rejected-submissions |
The paper proposes to generate a human-like question for an image by using additional information (hints) such as the textual answer and visual regions of interest (ROIs). The visual regions are used to guide the question generation so that the model can generate relevant and informative questions. The question generati... | train | [
"6b6HzTr-aC9",
"_wFdBUamdE",
"1vXKQxwqhOQ",
"gGQrkwS9Iq1",
"0NIJ_t4BWma",
"8VmdChThbgp",
"LRDH-9abuLj",
"0RuBZRf5Qpe",
"4VocQO9F9qZ",
"pVBpYmrp1tt",
"ns5GkElATzr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"The paper introduces a new model on the task of Visual Question Generation. The model uses cross-modal alignment between the object features, position features and answer hints to find the right subset of relevant visual hints to be used to generate the relevant question. The model also ensures that the latent spa... | [
6,
6,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
5,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_-WwaX9vKKt",
"iclr_2021_-WwaX9vKKt",
"iclr_2021_-WwaX9vKKt",
"iclr_2021_-WwaX9vKKt",
"iclr_2021_-WwaX9vKKt",
"6b6HzTr-aC9",
"gGQrkwS9Iq1",
"gGQrkwS9Iq1",
"1vXKQxwqhOQ",
"_wFdBUamdE",
"1vXKQxwqhOQ"
] |
iclr_2021_sgJJjd3-Y3 | Semi-supervised regression with skewed data via adversarially forcing the distribution of predicted values | Advances in scientific fields including drug discovery or material design are accompanied by numerous trials and errors. However, generally only representative experimental results are reported. Because of this reporting bias, the distribution of labeled result data can deviate from their true distribution. A regressio... | withdrawn-rejected-submissions | This paper addresses the real-world problem of semi-supervised learning where the distribution from which the labeled examples are drawn is different from the distribution from which the unlabeled examples are drawn. The task is motivated by structure-activity prediction for drug design (quantitative structure activit... | train | [
"0LGRF2nSW7b",
"KOUMikXcbOC",
"BfgHwR0asdI",
"wMxMtn58YY0",
"uCCP-mlZuij",
"i5o-4si1Bn",
"sSirPsTTKyE",
"2xwAUSC2hCP",
"_-nQr2fU2Wi",
"1qxP012V_AZ",
"L7kdTa0N0bg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"##########################################################################\n\nSummary:\n\nThe paper presents a novel approach to improve the accuracy of regression models that are learned from a skewed dataset. The proposed approach consists of two parts, namely, (i) adversarial network for forcing output distributi... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
2,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_sgJJjd3-Y3",
"iclr_2021_sgJJjd3-Y3",
"iclr_2021_sgJJjd3-Y3",
"KOUMikXcbOC",
"KOUMikXcbOC",
"1qxP012V_AZ",
"L7kdTa0N0bg",
"0LGRF2nSW7b",
"0LGRF2nSW7b",
"iclr_2021_sgJJjd3-Y3",
"iclr_2021_sgJJjd3-Y3"
] |
iclr_2021_RdhjoXl-SDG | Multiscale Invertible Generative Networks for High-Dimensional Bayesian Inference | High-dimensional Bayesian inference problems cast a long-standing challenge in generating samples, especially when the posterior has multiple modes. For a wide class of Bayesian inference problems equipped with the multiscale structure that low-dimensional (coarse-scale) surrogate can approximate the original high-dime... | withdrawn-rejected-submissions | The paper considers sample generation in high-dimensional bayesian inference and proposes a multi scale procedure that performs coarse-to-fine multi-stage training and enables interpretability of intermediate activations at coarse scales. The method is simple, elegant and addresses a very important bottleneck of high-... | train | [
"rDTTm8hs41d",
"IwJedFV5AAH",
"nG468HsvcRC",
"CfCuRU2WJf",
"HgeOdAvvkgf",
"U1xFEv5p8T",
"y-15D5Yklf0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"# Paper Summary\nThis paper presents a model and a corresponding training approach for multi-scale invertible models. The presented model is defined on multiple scales with information on finer scales being conditioned on coarser scales. Data generation is hence done sequentially from a coarser to finer scale. The... | [
6,
-1,
-1,
-1,
-1,
5,
6
] | [
3,
-1,
-1,
-1,
-1,
1,
3
] | [
"iclr_2021_RdhjoXl-SDG",
"iclr_2021_RdhjoXl-SDG",
"U1xFEv5p8T",
"rDTTm8hs41d",
"y-15D5Yklf0",
"iclr_2021_RdhjoXl-SDG",
"iclr_2021_RdhjoXl-SDG"
] |
iclr_2021_ClZ4IcqnFXB | Active Feature Acquisition with Generative Surrogate Models | Many real-world situations allow for the acquisition of additional relevant information when making an assessment with limited or uncertain data. However, traditional ML approaches either require all features to be acquired beforehand or regard part of them as missing data that cannot be acquired. In this work, we prop... | withdrawn-rejected-submissions | The paper studied an interesting and important problem in active learning/information acquisition (AFA), and provided an RL-based active learning scheme for a broad spectrum of AFA tasks, in both supervised (active classification/regression) and unsupervised (feature completion/recovery) domains. The reviewers generall... | train | [
"xBaCrn6HGLp",
"qUo0-a7nA5Q",
"W6RTq5c-9Uk",
"0ZGuhf7p8N",
"bciSGlNAO30",
"81Dp3Gyp9fF",
"BHl1toG2LMD",
"ScfUbIlLmnO",
"MJxoW_vccV"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper operates in the setting where there is a possibility to adaptively acquire features for the prediction on each datapoint. The authors study classification, regression, time series and feature completion problems. The proposed solution relies on RL and introduces additional hand crafted rewards and featu... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
4,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_ClZ4IcqnFXB",
"iclr_2021_ClZ4IcqnFXB",
"iclr_2021_ClZ4IcqnFXB",
"qUo0-a7nA5Q",
"xBaCrn6HGLp",
"ScfUbIlLmnO",
"MJxoW_vccV",
"iclr_2021_ClZ4IcqnFXB",
"iclr_2021_ClZ4IcqnFXB"
] |
iclr_2021_5L8XMh667qz | Encoded Prior Sliced Wasserstein AutoEncoder for learning latent manifold representations | While variational autoencoders have been successful in a variety of tasks, the use of conventional Gaussian or Gaussian mixture priors is limited in its ability to encode the underlying structure of data in the latent representation.
In this work, we introduce an Encoded Prior Sliced Wasserstein AutoEncoder (EPSWA... | withdrawn-rejected-submissions | All three referees have provided detailed comments, both before and after the author response period. While the authors have carefully revised the paper and provided detailed responses, leading to clearly improved clarity and quality, there remain clear concerns on novelty (at least not sufficiently supported with abla... | train | [
"Iinm5DjqIzK",
"bIENVj0QHo",
"POuXoPrMjAK",
"AnSy0NGx0hV",
"qY1Jo0jHuyL",
"dEQ9UQoNYm3",
"SqadPK_CR-",
"b3VnJugOUTl",
"tGyqQCY1z9U",
"kpIlY7i8cuG",
"WvzjOwnuxm5",
"aV-8sQlgUon",
"4lNRRWfYS53",
"ljI-4PoGOki",
"bSTYmIa5A5M"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"# Paper Summary\n\nThe paper extends the variational autoencoder framework with a richer prior distribution to model more complex correlations in the latent variable distribution. They start with a Gaussian mixture distribution as the prior for the latent variables, and add an encoder network to allow richer corre... | [
7,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_5L8XMh667qz",
"iclr_2021_5L8XMh667qz",
"iclr_2021_5L8XMh667qz",
"dEQ9UQoNYm3",
"b3VnJugOUTl",
"tGyqQCY1z9U",
"iclr_2021_5L8XMh667qz",
"bSTYmIa5A5M",
"kpIlY7i8cuG",
"POuXoPrMjAK",
"aV-8sQlgUon",
"bIENVj0QHo",
"ljI-4PoGOki",
"bSTYmIa5A5M",
"Iinm5DjqIzK"
] |
iclr_2021_MY5iHZ0IZXl | ABSTRACTING INFLUENCE PATHS FOR EXPLAINING (CONTEXTUALIZATION OF) BERT MODELS | While “attention is all you need” may be proving true, we do not yet know why: attention-based transformer models such as BERT are superior but how they contextualize information even for simple grammatical rules such as subject-verb number agreement (SVA) is uncertain. We introduce multi-partite patterns, abstractions ... | withdrawn-rejected-submissions | This paper proposes a method to explain the contextualization of BERT by identifying a set of influence paths from the input to the output. Although all reviewers give overall score 6, their comments are pointing to the negative direction. The following excerpts summarize the general sentiment of the reviews:
R2: Over... | val | [
"SbOZnrFMN5X",
"bDqqD2WIomb",
"RKei0M_bnpk",
"SjpOU6e9wlC",
"91JPqzxxIc1",
"aJtdQVOHkm3",
"hXF_afJ1pv",
"4jA3dBNQSrE",
"BYpMIvcn_H",
"1w6TrJ7nHmf",
"N5oNwj_K0mB",
"AQ571UKrj_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"*Summary*\n\nThis paper investigates how BERT uses attention to contextualise information. To find an answer to this question, the authors use abstractions of sets of paths through a neural network model, to quantify and localise the effect of specific inputs to the output. They describe these paths -- multi-parti... | [
6,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_MY5iHZ0IZXl",
"1w6TrJ7nHmf",
"iclr_2021_MY5iHZ0IZXl",
"hXF_afJ1pv",
"iclr_2021_MY5iHZ0IZXl",
"BYpMIvcn_H",
"AQ571UKrj_",
"iclr_2021_MY5iHZ0IZXl",
"91JPqzxxIc1",
"RKei0M_bnpk",
"SbOZnrFMN5X",
"iclr_2021_MY5iHZ0IZXl"
] |
iclr_2021_w6p7UMtf-0S | Improving Few-Shot Visual Classification with Unlabelled Examples | We propose a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance. Our approach combines a regularized Mahalanobis-distance-based soft k-means clustering procedure with a modified state of the art neural adaptive feature extractor to achieve improved test... | withdrawn-rejected-submissions | The submission proposes a transductive few-shot classification method on the basis of the simple Conditional Neural Adaptive Processes (CNAPS) introduced by Bateni et al. The paper received two borderline accept and two borderline reject reviews, indicating that the paper may not be yet ready for a publication. The met... | train | [
"8GLtI-3hOQV",
"RDSGpy0bB3N",
"ptcVM4PXFaR",
"MIYrK21JwvP",
"F5mGTkRlcaB",
"yHErIZXNnDO",
"swCqNYwQ2En",
"7QLI7zqGG3i",
"ihE5yAV07Fe",
"bXCFs9RNEmW",
"u_jfklGi4Nb",
"kdgRWoJTnoV"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In order to improve few shot visual classification, the authors propose a transductive meta-learning method using unlabelled examples. The authors have introduced a two-step transductive encoder as well as soft k-means clustering procedure on the existing simple CNAPS architecture.\n\nPros:\nThe paper is well writ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2021_w6p7UMtf-0S",
"u_jfklGi4Nb",
"u_jfklGi4Nb",
"8GLtI-3hOQV",
"8GLtI-3hOQV",
"bXCFs9RNEmW",
"bXCFs9RNEmW",
"kdgRWoJTnoV",
"kdgRWoJTnoV",
"iclr_2021_w6p7UMtf-0S",
"iclr_2021_w6p7UMtf-0S",
"iclr_2021_w6p7UMtf-0S"
] |
iclr_2021_XoF2fGAvXO6 | Weakly Supervised Neuro-Symbolic Module Networks for Numerical Reasoning | Neural Module Networks (NMNs) have been quite successful in incorporating explicit reasoning as learnable modules in various question answering tasks, including the most generic form of numerical reasoning over text in Machine Reading Comprehension (MRC). However, to achieve this, contemporary NMNs need strong supervi... | withdrawn-rejected-submissions | This paper proposes a weakly supervised model for numerical reasoning. After discussion with the reviewers it seems that it is already known that training NMNs directly on DROP is not successful and requires taking additional measures. Past work (NERD) has resorted to using data augmentation, and this work encodes it d... | val | [
"w09HqwowTJu",
"g63c55_6Ilv",
"nghgS5HzFdx",
"KP4aQajTJNW",
"eH8D8D0PRq",
"rp5Q8kpqFAA",
"IfNOX6dQ-3w",
"_SLBLCZJ4Qm",
"YRFxxYF542I",
"f8PLHPHPa38",
"1onfOub2uZe",
"TMGmxPAnOYn",
"VHtXLx8adyV",
"jYCEgv7LSK4"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your response. Please find ours below.\n\nQuestion: Assumption that the sampled actions are extensive enough to contain the gold answer\n\nResponse: For the Hard-EM and REINFORCE to work, at least one of the sampled actions needs to have positive reward in at least some of the training instances. This ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
3
] | [
"g63c55_6Ilv",
"rp5Q8kpqFAA",
"KP4aQajTJNW",
"1onfOub2uZe",
"VHtXLx8adyV",
"TMGmxPAnOYn",
"rp5Q8kpqFAA",
"jYCEgv7LSK4",
"_SLBLCZJ4Qm",
"eH8D8D0PRq",
"iclr_2021_XoF2fGAvXO6",
"iclr_2021_XoF2fGAvXO6",
"iclr_2021_XoF2fGAvXO6",
"iclr_2021_XoF2fGAvXO6"
] |
iclr_2021_RepN5K31PT3 | On the Dynamic Regret of Online Multiple Mirror Descent | We study the problem of online convex optimization, where a learner makes sequential decisions to minimize an accumulation of strongly convex costs over time. The quality of decisions is given in terms of the dynamic regret, which measures the performance of the learner relative to a sequence of dynamic minimizers. Pri... | withdrawn-rejected-submissions | All the reviewers questioned the significance of the result, in the sense that the qualitatively it is not clear how much of an improvement it is to replace "min(S_T,C_T) with Lipschitz assumption" by "min(S_T,C_T,G_T)". The authors' response on this point did not convince the reviewers. If the authors were to resubmit... | train | [
"31Ew_uNiVqp",
"xs3z9zbTtP",
"SQ9nfSp7wGM",
"JiZsKVmix2N",
"kpF6k2VOfXs",
"0i9mdwY4I_Q",
"-rLEpBpsOR1",
"wsS232r33Tz",
"te-w8Ry5B6_"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThis paper studies the dynamic regret of online multiple mirror descent, which is online mirror descent with M repeated steps on each of T sequential loss functions. The authors show three bounds for the dynamic regret of OMMD, which generalizes OMGD [Zhang et al. '17]: C_T (the path length of the minimi... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_RepN5K31PT3",
"SQ9nfSp7wGM",
"wsS232r33Tz",
"31Ew_uNiVqp",
"JiZsKVmix2N",
"-rLEpBpsOR1",
"te-w8Ry5B6_",
"iclr_2021_RepN5K31PT3",
"iclr_2021_RepN5K31PT3"
] |
iclr_2021_ufS1zWbRCEa | Parallel Training of Deep Networks with Local Updates | Deep learning models trained on large data sets have been widely successful in both vision and language domains. As state-of-the-art deep learning architectures have continued to grow in parameter count so have the compute budgets and times required to train them, increasing the need for compute-efficient methods that ... | withdrawn-rejected-submissions | After reading the paper, reviews and authors’ feedback. The meta-reviewer agrees with the reviewers that the paper touches an important topic(scale up training). However, as some of the reviewers pointed out, the paper could be further improved by clarifying the novelty and more thorough evaluation justification of the... | train | [
"FuM8Dt9LEi2",
"WjQQyQUZ-_N",
"Js3rJxPTctC",
"YyoJj_J5N-t",
"M98Ftg2nHBS",
"f8use8nPgQ8",
"GKziyHZ0fWX",
"g-UkrUCSgq"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper explores and compares several methods for parallel training of deep nets. It presents the results on multiple datasets for image classification and language modelling.\n\n# Quality\n\nThis work provides many experiments with neat visualizations. \n\nPros:\n- The paper provides many rigorous experiments f... | [
9,
-1,
-1,
-1,
-1,
6,
3,
4
] | [
3,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"iclr_2021_ufS1zWbRCEa",
"GKziyHZ0fWX",
"f8use8nPgQ8",
"FuM8Dt9LEi2",
"g-UkrUCSgq",
"iclr_2021_ufS1zWbRCEa",
"iclr_2021_ufS1zWbRCEa",
"iclr_2021_ufS1zWbRCEa"
] |
iclr_2021_FTit3PiAw4 | Training Federated GANs with Theoretical Guarantees: A Universal Aggregation Approach | Recently, Generative Adversarial Networks (GANs) have demonstrated their potential in federated learning, i.e., learning a centralized model from data privately hosted by multiple sites. A federated GAN jointly trains a centralized generator and multiple private discriminators hosted at different sites. A major theoret... | withdrawn-rejected-submissions | The paper presents a provable correct framework, namely Universal Aggregation, for training GANs in federated learning scenarios. It aims to address an important problem. The proposed solution is well-grounded with theoretical analysis and promising empirical results.
The paper receives mixed ratings and therefore th... | train | [
"FT4rZYRlmz",
"sbw2modEWTO",
"FAztdrLQMD0",
"lSYTEU2H8D",
"HiNFL_F2sA5",
"4rjiTaw5Zt",
"uHWVYdCzat",
"O--dozDicNN",
"QGN9ClnHyO3"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n\nThe paper proposes a new method, UA-GAN to train GANs in a federated learning setup. The method simulates a central discriminator D_ua such that the odds values of the central discriminator is equivalent to the weighted sum of the local discriminators. The central generator is then trained based on the... | [
6,
-1,
-1,
-1,
-1,
-1,
5,
6,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2021_FTit3PiAw4",
"uHWVYdCzat",
"O--dozDicNN",
"FT4rZYRlmz",
"4rjiTaw5Zt",
"QGN9ClnHyO3",
"iclr_2021_FTit3PiAw4",
"iclr_2021_FTit3PiAw4",
"iclr_2021_FTit3PiAw4"
] |
iclr_2021_6M4c3WegNtX | Neural Ensemble Search for Uncertainty Estimation and Dataset Shift | Ensembles of neural networks achieve superior performance compared to stand-alone networks not only in terms of predictive performance, but also uncertainty calibration and robustness to dataset shift. Diversity among networks is believed to be key for building strong ensembles, but typical approaches, such as \emph{de... | withdrawn-rejected-submissions | This paper proposes a new method to perform uncertainty estimation based on ensembles with diverse network architecture.
The reviewers raised a few concerns:
- Although it is ok not to compare with (Tao, 2019), an active analytical comparison with baselines for ensemble diversification should not be overlooked e.g. (... | train | [
"DhNzkTAPqhx",
"xLFjvlnHzfs",
"yr0WUlZl2ar",
"YS3Dmkchpti",
"ML0WgF_XsS",
"AQlAjscXtzH",
"fFm88ug-8AQ",
"lBtSKfdXXQw",
"aWTFLqT-zdY",
"cC1VfAedGT",
"yp0frOsRYG",
"PhusmWZoujU",
"nzfVaztAQVX",
"Vu6k8QOjWkK",
"3qMs3dBfyrG"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper suggests a new approach to the construction of ensembles of deep neural networks (DNN). Unlike previous methods, which usually deal with multiple DNNs of the same structure, the authors propose to form an ensemble of networks with different architectures. The main claim is that using diverse architectures increases... | [
5,
4,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
4,
5,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2021_6M4c3WegNtX",
"iclr_2021_6M4c3WegNtX",
"AQlAjscXtzH",
"iclr_2021_6M4c3WegNtX",
"fFm88ug-8AQ",
"cC1VfAedGT",
"yp0frOsRYG",
"aWTFLqT-zdY",
"iclr_2021_6M4c3WegNtX",
"3qMs3dBfyrG",
"DhNzkTAPqhx",
"xLFjvlnHzfs",
"Vu6k8QOjWkK",
"YS3Dmkchpti",
"iclr_2021_6M4c3WegNtX"
] |
iclr_2021_wXBt-7VM2JE | On Single-environment Extrapolations in Graph Classification and Regression Tasks | Extrapolation in graph classification/regression remains an underexplored area of an otherwise rapidly developing field. Our work contributes to a growing literature by providing the first systematic counterfactual modeling framework for extrapolations in graph classification/regression tasks. To show that extrapolatio... | withdrawn-rejected-submissions | The paper introduces a new extrapolation problem for graph representation learning (they refer to it as 'counterfactual modeling').
While the problem set-up is intriguing and the work likely has merit, two reviewers (R2 and R4), found the writing highly problematic and we share their opinion. Even though some of th... | train | [
"W0ELKsWrcXJ",
"0plgZx1ZPS",
"MToNomnVt3-",
"kYv-z7ZEUQ",
"6Ej_jMM_z6t",
"kTwO-NpRi2E",
"UGDdepY7X9D",
"1j3mkcANcL5",
"wAwnFhm_yJ1",
"v2SqvPZyr0T",
"_dItrVoHJHI",
"JvJM6LUfvyf",
"yugM_O1zSz",
"jV-BdSVWaz",
"yXJLeP8W6TR",
"fVwoKer8rRp",
"ySpcK1G8eQo"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors have violated the following ICLR code of ethics:\n\nBe Honest, Trustworthy and Transparent\nBe Fair and Take Action not to Discriminate\nRespect the Work Required to Produce New Ideas and Artefacts\n\nIn the author response, the authors threaten the reviewer with offensive phrases such as \"we were jus... | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2
] | [
"kYv-z7ZEUQ",
"MToNomnVt3-",
"kYv-z7ZEUQ",
"iclr_2021_wXBt-7VM2JE",
"kYv-z7ZEUQ",
"fVwoKer8rRp",
"fVwoKer8rRp",
"ySpcK1G8eQo",
"fVwoKer8rRp",
"kYv-z7ZEUQ",
"kYv-z7ZEUQ",
"kYv-z7ZEUQ",
"kYv-z7ZEUQ",
"kYv-z7ZEUQ",
"kYv-z7ZEUQ",
"iclr_2021_wXBt-7VM2JE",
"iclr_2021_wXBt-7VM2JE"
] |
iclr_2021_a5KvtsZ14ev | SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks | Graph neural networks (GNNs) work well when the graph structure is provided. However, this structure may not always be available in real-world applications. One solution to this problem is to infer the latent structure and then apply a GNN to the inferred graph. Unfortunately, the space of possible graph structures gro... | withdrawn-rejected-submissions | The paper received 5 reviews, one of which had positive feedback. Although there are merits associated with the paper, several concerns raised in the reviews and the discussion period that prevents the paper to be accepted. It appears that experiments on noisy graphs are not properly done and competitive baselines are ... | test | [
"lMUOsZZRAVn",
"fBU2bPMGpTP",
"F19BDa4A22T",
"i_SL1PbslIq",
"En3OH46iwmv",
"g7eZkCT0bnM",
"NxZV5ECcKE_",
"50USISVFLWm",
"IKoQ7YYlLgK",
"takH8LzZJnJ",
"Bd1ZPlDbPy",
"GlOs9xwewkz",
"caadRfEb027",
"KX2uDF2HrLn",
"_0ltaipY67",
"lPpVONlB-V"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I thank the authors for their detailed responses. I believe the paper has improved its quality after the revision. But I still have one follow-up question regarding the learned graph structure (previous Q5). \n\nAs agreed by another reviewer, I think it is important to analyze the learned graph structure, since ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4,
3
] | [
"50USISVFLWm",
"F19BDa4A22T",
"g7eZkCT0bnM",
"En3OH46iwmv",
"IKoQ7YYlLgK",
"NxZV5ECcKE_",
"GlOs9xwewkz",
"caadRfEb027",
"lPpVONlB-V",
"_0ltaipY67",
"KX2uDF2HrLn",
"iclr_2021_a5KvtsZ14ev",
"iclr_2021_a5KvtsZ14ev",
"iclr_2021_a5KvtsZ14ev",
"iclr_2021_a5KvtsZ14ev",
"iclr_2021_a5KvtsZ14ev"... |
iclr_2021_ABZSAe9gNeg | Differentially Private Synthetic Data: Applied Evaluations and Enhancements | Machine learning practitioners frequently seek to leverage the most informative available data, without violating the data owner's privacy, when building predictive models. Differentially private data synthesis protects personal details from exposure, and allows for the training of differentially private machine learni... | withdrawn-rejected-submissions | The paper surveys existing differentially private data synthesis
methods, and introduces an algorithm that learns both a generator and
a classifier in a differentially private mode.
The problem is highly timely and important. Results are promising.
Main remaining concerns after discussion between the reviewers and th... | train | [
"OvfhXQdVXjT",
"_pfZlDfGDBn",
"p9lKhca24bb",
"U432pN7D1R",
"BVtiob0wwAg",
"VyFJM9sTpk",
"umUTphXx83H",
"VJP8uDw3Ako"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\"My first comment still stands...\"\n\nPoint taken, we appreciate the advice about how we could structure the paper or papers for more impact!\n\n\n\"I do agree that DP methods can SOMETIME outperform non-DP methods...\"\n\nIt is true that there are sometimes cases where naïve use of holdouts, or improper regular... | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"_pfZlDfGDBn",
"p9lKhca24bb",
"VyFJM9sTpk",
"umUTphXx83H",
"VJP8uDw3Ako",
"iclr_2021_ABZSAe9gNeg",
"iclr_2021_ABZSAe9gNeg",
"iclr_2021_ABZSAe9gNeg"
] |
iclr_2021_BfayGoTV4iQ | SketchEmbedNet: Learning Novel Concepts by Imitating Drawings | Sketch drawings are an intuitive visual domain that appeals to human instinct. Previous work has shown that recurrent neural networks are capable of producing sketch drawings of a single or few classes at a time. In this work we investigate representations developed by training a generative model to produce sketches fr... | withdrawn-rejected-submissions |
Description:
The paper presents a generative model, SketchEmbedNet, for class-agnostic generation of sketch drawings from images. They leverage sequential data in hand-drawn sketches. Results show this outperforms SOTA on few-shot classification tasks, and the model can generate sketches from new classes after one s... | train | [
"sB-PJrmpz_g",
"tjz8CxSx8iZ",
"SAeskpsdeX",
"dFtlq0Ep9Qd",
"WTqnvuZBX7",
"6_s-wFjHaOF",
"FkOKqos-1RD",
"IExUDr67piz",
"4fyQQcA-7_",
"nHQY5W9L4om",
"Ir0EOkagpdo",
"nERNyKhcwhM"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a generalized sketch drawing model named SketchEmbedNet for producing sketches and visual summaries of open-domain natural images. The idea is interesting and the experimental results show SketchEmbedNet is able to do not only few-shot classification but also one-shot generation. \n\nOverall, I... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
9
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_BfayGoTV4iQ",
"iclr_2021_BfayGoTV4iQ",
"dFtlq0Ep9Qd",
"6_s-wFjHaOF",
"sB-PJrmpz_g",
"tjz8CxSx8iZ",
"IExUDr67piz",
"Ir0EOkagpdo",
"nERNyKhcwhM",
"iclr_2021_BfayGoTV4iQ",
"iclr_2021_BfayGoTV4iQ",
"iclr_2021_BfayGoTV4iQ"
] |
iclr_2021_zdrls6LIX4W | A Policy Gradient Algorithm for Learning to Learn in Multiagent Reinforcement Learning | A fundamental challenge in multiagent reinforcement learning is to learn beneficial behaviors in a shared environment with other agents that are also simultaneously learning. In particular, each agent perceives the environment as effectively non-stationary due to the changing policies of other agents. Moreover, each ag... | withdrawn-rejected-submissions | This paper studies the problem of multi-agent meta-learning. It can be viewed as extending Al-Shedivat et al. (2018) by incorporating the dynamics of other agents. The reviewers praised clear writing and theory. There were two main concerns. The first concern is the novelty when compared to Al-Shedivat et al. (2018). T... | train | [
"x3DMpk4zHB5",
"pxx8bpoaKH5",
"ELfpw9moLF",
"sL-Ca_uLgR3",
"KdwlHQsfpSy",
"vzLjL5b8jJf",
"XUkCCQTf5yq",
"WLEjjkNI9h",
"43J4Xef9sIB",
"4He0guKRGZP",
"Pnmdmj--je7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies meta-learning in multi-agent reinforcement learning. It proposes a meta multi-agent policy gradient method that considers the learning processes of other agents in the environment for fast adaptation. This method can be seen as a unified framework of previous methods (Al-Shedivat et al. (2018) a... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"iclr_2021_zdrls6LIX4W",
"iclr_2021_zdrls6LIX4W",
"Pnmdmj--je7",
"4He0guKRGZP",
"vzLjL5b8jJf",
"x3DMpk4zHB5",
"43J4Xef9sIB",
"iclr_2021_zdrls6LIX4W",
"iclr_2021_zdrls6LIX4W",
"iclr_2021_zdrls6LIX4W",
"iclr_2021_zdrls6LIX4W"
] |
iclr_2021_szUsQ3NcQwV | Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning | Real world multi-agent tasks often involve varying types and quantities of agents and non-agent entities; however, agents within these tasks rarely need to consider all others at all times in order to act effectively. Factored value function approaches have historically leveraged such independences to improve learning ... | withdrawn-rejected-submissions | This paper proposes an attention based technique to focus on relevant entities in multi-agent reinforcement learning. While the effectiveness of the proposed method is demonstrated on some tasks, there remain major concerns including the following:
1. It is not sufficiently convincing that the proposed method performs... | val | [
"p03KIuTcrmH",
"iAPGxOv9f2F",
"4v2a42-3aDO",
"8_lpWfov2U5",
"Y5-MGTWVxly",
"cbp54d6bfUq",
"oW-6pS6IGRq",
"81o2hcEfXDE",
"RgP1vHeKUDw",
"7znBztGeoJg"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank all reviewers for their useful and insightful feedback. We have made extensive additions to the experimental section of the paper based on reviewer suggestions, and we will summarize these additions here:\n\n* Additional comparisons\n * Added Table 1 which provides a comparison of the att... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"iclr_2021_szUsQ3NcQwV",
"oW-6pS6IGRq",
"8_lpWfov2U5",
"81o2hcEfXDE",
"RgP1vHeKUDw",
"7znBztGeoJg",
"iclr_2021_szUsQ3NcQwV",
"iclr_2021_szUsQ3NcQwV",
"iclr_2021_szUsQ3NcQwV",
"iclr_2021_szUsQ3NcQwV"
] |
iclr_2021_NqWY3s0SILo | Differentiable Graph Optimization for Neural Architecture Search | In this paper, we propose Graph Optimized Neural Architecture Learning (GOAL), a novel gradient-based method for Neural Architecture Search (NAS), to find better architectures with fewer evaluated samples. Popular NAS methods usually employ black-box optimization based approaches like reinforcement learning, evolution ... | withdrawn-rejected-submissions | This work was deemed interesting by the reviewers, but they highlighted the following weaknesses in this version of the paper:
- Lack of comparison to other methods.
- Lack of novelty compared to previous work.
- Fundamental problem with training only on one dataset (MNIST), issue with possible overfitting. | train | [
"S590I7eXAJ",
"_1G2OjAD1VG",
"eXeZ-aQ8K65",
"O61hSt0wp9",
"OcGDojF6myE",
"8Xy7qcOjBBE"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewer for your careful reading and providing a lot of valuable comments! Below we address the concerns mentioned in the review:\n\nResponse to the concerns:\n\nQ1. Differences and comparison to NAO\n\nNAO [1] aims to map neural architecture to latent space to perform continuous optimi... | [
-1,
-1,
-1,
6,
5,
4
] | [
-1,
-1,
-1,
3,
4,
4
] | [
"OcGDojF6myE",
"O61hSt0wp9",
"8Xy7qcOjBBE",
"iclr_2021_NqWY3s0SILo",
"iclr_2021_NqWY3s0SILo",
"iclr_2021_NqWY3s0SILo"
] |
iclr_2021_0h9cYBqucS6 | Communication-Computation Efficient Secure Aggregation for Federated Learning | Federated learning has been spotlighted as a way to train neural network models using data distributed over multiple clients without a need to share private data. Unfortunately, however, it has been shown that data privacy could not be fully guaranteed as adversaries may be able to extract certain information on local ... | withdrawn-rejected-submissions | This paper presents an efficient secure aggregation algorithm in federated learning scenarios, which employs sparse random secure-sharing clients. Four experienced reviewers left valuable comments on this paper, and three of them are unfortunately negative to this work (4, 4, 3) while one reviewer is slightly on the po... | train | [
"1qmzpO-bhx_",
"xk2PATMrYip",
"Zze7ZWQjyH",
"vTwKXV43hdd",
"Rw4wb1AqIiK",
"5AH40brleV4",
"EAiwZ9i1fjx",
"v-D8VRGthFe",
"a_RlVX1mFxq"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the problem of secure aggregation for federated learning, where the goal is to design a protocol that allows the server to aggregate models from clients without learning anything about any individual model. The paper builds up on the secure aggregation scheme of (Bonawitz et al., 2017), and pr... | [
3,
-1,
-1,
-1,
-1,
-1,
4,
6,
4
] | [
5,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2021_0h9cYBqucS6",
"EAiwZ9i1fjx",
"v-D8VRGthFe",
"1qmzpO-bhx_",
"a_RlVX1mFxq",
"iclr_2021_0h9cYBqucS6",
"iclr_2021_0h9cYBqucS6",
"iclr_2021_0h9cYBqucS6",
"iclr_2021_0h9cYBqucS6"
] |
iclr_2021_kuqBCnJuD4Z | FedMes: Speeding Up Federated Learning with Multiple Edge Servers | We consider federated learning with multiple wireless edge servers having their own local coverages. We focus on speeding up training in this increasingly practical setup. Our key idea is to utilize the devices located in the overlapping areas between the coverage of edge servers; in the model-downloading stage, the de... | withdrawn-rejected-submissions | The paper studies the benefit of having multiple servers (with partial coverage) in increase the training speed and latency in Federated Learning. Of course optimization/learning in the multi-server setting comes with a number of challenges which the authors seek to address via novel algorithmic procedures (e.g. FedMe... | train | [
"yV000WDjpeJ",
"8G_c9zooDa",
"RDgQ6GXXuY1",
"XQ_vEffqTY",
"3hPJHSOlU5H",
"WTgTyhYibQ7",
"hrZouYcF1jp",
"lFT1Yz2yuac",
"thpTs7S4mEU",
"jnyiRfUt8s",
"zEOEGS8HVpW"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"## Summary\nIn previous federated learning literature, people usually assume there is only one cloud server communicating with all edge nodes/clients. However, since each server has its own coverage in practice, the latency between the server and clients out of the coverage can be pretty long. This paper focuses o... | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_kuqBCnJuD4Z",
"iclr_2021_kuqBCnJuD4Z",
"XQ_vEffqTY",
"thpTs7S4mEU",
"yV000WDjpeJ",
"lFT1Yz2yuac",
"zEOEGS8HVpW",
"8G_c9zooDa",
"hrZouYcF1jp",
"iclr_2021_kuqBCnJuD4Z",
"iclr_2021_kuqBCnJuD4Z"
] |
iclr_2021_wMIdpzTmnct | Hard-label Manifolds: Unexpected advantages of query efficiency for finding on-manifold adversarial examples | Designing deep networks robust to adversarial examples remains an open problem. Likewise, recent zeroth order hard-label attacks on image classification tasks have shown comparable performance to their first-order alternatives. It is well known that in this setting, the adversary must search for the nearest decision bo... | withdrawn-rejected-submissions | The paper investigates several properties of adversarial examples obtained by hard-label attacks. There are some interesting findings in this paper, such as the connection between query efficiency and distance to the image manifold. However, all the reviewers think the paper is below the acceptance threshold due to sev... | test | [
"wijZvDa4SL",
"NEJbzIVdLYu",
"mljjdYqJ5S",
"QzAu0o8ldFi",
"etbMpPteuY",
"kgyH1Y6cQb4",
"FFn2ko02RuP",
"kILbOOd36k-",
"Zqbk17dPDOe",
"Cr2QTVCbYQG",
"aYfYRRSO8J"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary\n-------------\nThe paper investigates methods that perform hard-label adversarial attacks at the zeroth order and analyzes them from the perspective of generating on-manifold adversarial examples, down-scaling of the input space, and query-efficiency on two datasets, cifar and imagenet.\n\n\nPros\n------\n... | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
2,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_wMIdpzTmnct",
"iclr_2021_wMIdpzTmnct",
"aYfYRRSO8J",
"NEJbzIVdLYu",
"wijZvDa4SL",
"wijZvDa4SL",
"aYfYRRSO8J",
"NEJbzIVdLYu",
"iclr_2021_wMIdpzTmnct",
"iclr_2021_wMIdpzTmnct",
"iclr_2021_wMIdpzTmnct"
] |
iclr_2021_5B8YAz6W3eX | Apollo: An Adaptive Parameter-wised Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization | In this paper, we introduce Apollo, a quasi-Newton method for nonconvex stochastic optimization, which dynamically incorporates the curvature of the loss function by approximating the Hessian via a diagonal matrix. Algorithmically, Apollo requires only first-order gradients and updates the approximation of the Hessian d... | withdrawn-rejected-submissions | Dear authors,
I took your concerns into account, and I also understand the whole crazy situation around the COVID-19. Many of the reviewers have families (e.g., in US, many kids are now homeschooled, and there are no good daycare solutions as well). I do not plan to list all the good parts of the paper and list weakne... | train | [
"BFU17FoqZ9O",
"TNub_MjNUxB",
"9z926ZPs18D",
"sal3hhQ5HiT",
"yCL3yvbv3LP",
"W1VgR2gNlEZ",
"Z6hZlukT5Z",
"DrNCDdkYZp",
"A42HNI_VrSb",
"mKYPr3fHvnH",
"LwKvsrK7R-A"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This work considers a layer-wise weak secant equation to update the Hessian approximation and train deep learning models through a stochastic quasi-Newton-like update. The major idea is to construct diagonal approximations so the computational cost is low, and the idea is to some extent similar to adagrad in modi... | [
4,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
4,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_5B8YAz6W3eX",
"iclr_2021_5B8YAz6W3eX",
"sal3hhQ5HiT",
"iclr_2021_5B8YAz6W3eX",
"TNub_MjNUxB",
"iclr_2021_5B8YAz6W3eX",
"sal3hhQ5HiT",
"BFU17FoqZ9O",
"LwKvsrK7R-A",
"iclr_2021_5B8YAz6W3eX",
"iclr_2021_5B8YAz6W3eX"
] |
iclr_2021_UoAFJMzCNM | Multi-agent Deep FBSDE Representation For Large Scale Stochastic Differential Games | In this paper we present a deep learning framework for solving large-scale multi-agent non-cooperative stochastic games using fictitious play. The Hamilton-Jacobi-Bellman (HJB) PDE associated with each agent is reformulated into a set of Forward-Backward Stochastic Differential Equations (FBSDEs) and solved via for... | withdrawn-rejected-submissions | This paper introduces a scalable method for FSP based on FBSDE. The method is theoretically derived then applied on two problems, one simple but with many (1000) agents, and one with only 2 agents but partial observability.
The main strength of this paper lies in the scalability and the time complexity of the proposed... | val | [
"UMEOt-XZ1j3",
"V4rOyZuiYxb",
"yw4U_0IduxI",
"pYZir_7o_YR",
"CsE51kOor4J",
"c1AntnfEgoe",
"ZMv_laijUcg",
"vAR77G0R1IX",
"6Cd6XNlI996"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I hereby acknowledge that I have read the author response.",
"Summary\n\nThe paper introduces improved deep learning architecture for solving stochastic differential games by fictitious play. Compared to previous best model it uses LSTM instead of MLP to capture forward dynamics of the system, importance samplin... | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"pYZir_7o_YR",
"iclr_2021_UoAFJMzCNM",
"CsE51kOor4J",
"vAR77G0R1IX",
"V4rOyZuiYxb",
"6Cd6XNlI996",
"iclr_2021_UoAFJMzCNM",
"iclr_2021_UoAFJMzCNM",
"iclr_2021_UoAFJMzCNM"
] |
iclr_2021_4kWGWoFGA_H | Beyond the Pixels: Exploring the Effects of Bit-Level Network and File Corruptions on Video Model Robustness | We investigate the robustness of video machine learning models to bit-level network and file corruptions, which can arise from network transmission failures or hardware errors, and explore defenses against such corruptions. We simulate network and file corruptions at multiple corruption levels, and find that bit-level ... | withdrawn-rejected-submissions | This paper investigates robustness of the neural networks under bit-level network and file corruptions, and proposes corruption-agnostic and corruption-aware defense approaches. The Bit-corruption Augmented Training is introduced, which is about applying the data augmentation at a bit level.
The majority of the review... | train | [
"1biktfSjxZ-",
"Irf2mqioNRz",
"MYV28wCu2Hv",
"gdCwKVYbFgq",
"lydaJirxzM3",
"h2l8nxrBD7f",
"po2sF3_YA16",
"NDLE6jzkVaS",
"icrC4nTbJYS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors explored the robustness of video machine learning models to bit-level corruption. They investigated previous methods such as Out-Of-Distribution (OOD) detection and adversarial training and found that they are not effective enough to defend against the bit-level corruption. Accordingly, this paper pr... | [
4,
4,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
2,
3,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_4kWGWoFGA_H",
"iclr_2021_4kWGWoFGA_H",
"iclr_2021_4kWGWoFGA_H",
"NDLE6jzkVaS",
"Irf2mqioNRz",
"icrC4nTbJYS",
"1biktfSjxZ-",
"iclr_2021_4kWGWoFGA_H",
"iclr_2021_4kWGWoFGA_H"
] |
iclr_2021_Rld-9OxQ6HU | MC-LSTM: Mass-conserving LSTM | The success of Convolutional Neural Networks (CNNs) in computer vision is mainly driven by their strong inductive bias, which is strong enough to allow CNNs to solve vision-related tasks with random weights, meaning without learning. Similarly, Long Short-Term Memory (LSTM) has a strong inductive bias towards storing ... | withdrawn-rejected-submissions | The paper proposes a variant of recurrent neural networks based on Long Short-Term Memory. Unlike the standard LSTM, the proposed mass-conserving LSTM subtracts the output hidden state of the LSTM from its current cell state, thus preserving the "mass" stored in the cell states at each step. A left-stochastic recurrent... | train | [
"Qab9cJgkvZa",
"He8rYglDaY7",
"s18NnymVhw",
"FLz8bovl8SD",
"z1JBhXk7ggy",
"P4DhBF4rkf7",
"7rtq8sNEBQn",
"r1ng5Qesv-_",
"Nb_DnMAQGgu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper the authors propose a novel architecture, called Mass-Conserving LSTM (MC-LSTM), based on LSTM. The authors base their work on the hypothesis that the real world is governed by conservation laws related to mass, energy, etc. Thus, they propose that the quantities involved in deep learning models... | [
7,
6,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
3,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_Rld-9OxQ6HU",
"iclr_2021_Rld-9OxQ6HU",
"iclr_2021_Rld-9OxQ6HU",
"s18NnymVhw",
"He8rYglDaY7",
"Nb_DnMAQGgu",
"Qab9cJgkvZa",
"iclr_2021_Rld-9OxQ6HU",
"iclr_2021_Rld-9OxQ6HU"
] |
iclr_2021_LnVNgfvrQjC | CAFENet: Class-Agnostic Few-Shot Edge Detection Network | We tackle a novel few-shot learning challenge, few-shot semantic edge detection, aiming to localize boundaries of novel categories using only a few labeled samples. Reliable boundary information has been shown to boost the performance of semantic segmentation and localization, while also playing a key role in its own r... | withdrawn-rejected-submissions | The paper introduces the new task of few-shot semantic edge detection by adapting existing datasets. It proposes a new method which is compared to a baseline.
Pros:
- Clear writing.
- Extensive ablation experiments.
- Good architectural choices.
Mixed:
- The value of the new task raises a mix of opinions. For exampl... | train | [
"2t6VfpFIfY",
"4w7c5YIxXHy",
"lN0xzGqauCN",
"X_At-nwMt8Z",
"Nn7zQYkbTN",
"llmkKwlfhuo",
"ZYNGTl4rM3T",
"2ZyHmuDF06J"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Response to concerns about PANet+Sobel baseline: Indeed, we did measure the performance of PANet+Sobel on various Sobel filter sizes. As the reviewer mentioned, we first set the Sobel filter size to 1 and observed unsatisfactory results. We empirically found that the optimal kernel size is 3 and conducted the comparison accordingly... | [
-1,
-1,
-1,
-1,
4,
6,
4,
4
] | [
-1,
-1,
-1,
-1,
5,
3,
5,
5
] | [
"Nn7zQYkbTN",
"llmkKwlfhuo",
"ZYNGTl4rM3T",
"2ZyHmuDF06J",
"iclr_2021_LnVNgfvrQjC",
"iclr_2021_LnVNgfvrQjC",
"iclr_2021_LnVNgfvrQjC",
"iclr_2021_LnVNgfvrQjC"
] |
iclr_2021_p7OewL0RRIH | Sself: Robust Federated Learning against Stragglers and Adversaries | While federated learning allows efficient model training with local data at edge devices, two major issues that need to be resolved are: slow devices known as stragglers and malicious attacks launched by adversaries. While the presence of both stragglers and adversaries raises serious concerns for the deployment of pra... | withdrawn-rejected-submissions | This paper proposes a combined method to address stragglers and adversaries in federated learning. Stragglers are overcome by allowing staleness in model aggregation. Adversaries are handled by using a public dataset to identify poisoned devices and adjusting their weights when doing model aggregation. However, the rev... | train | [
"Cukvmjczi5",
"4phT3v2QR78",
"SaBM0aSMPyi",
"A3r1hev4DGF",
"7qHi2tlqzyS",
"SVA_iAdd5r4",
"qVe2XIFAMJR",
"IhnXSmixIlh",
"Cr0YTA_vAA"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers federated learning with straggling and adversarial devices. To tackle stragglers, the paper proposes semi-synchronous averaging wherein models with the same staleness are first averaged together, and then a weighted average of the results with different staleness is computed. To mitigate adver... | [
4,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"iclr_2021_p7OewL0RRIH",
"qVe2XIFAMJR",
"Cr0YTA_vAA",
"Cukvmjczi5",
"IhnXSmixIlh",
"iclr_2021_p7OewL0RRIH",
"iclr_2021_p7OewL0RRIH",
"iclr_2021_p7OewL0RRIH",
"iclr_2021_p7OewL0RRIH"
] |
iclr_2021_qn_gk5j3PJ | PIVEN: A Deep Neural Network for Prediction Intervals with Specific Value Prediction | Improving the robustness of neural nets in regression tasks is key to their application in multiple domains. Deep learning-based approaches aim to achieve this goal either by improving their prediction of specific values (i.e., point prediction), or by producing prediction intervals (PIs) that quantify uncertainty. We ... | withdrawn-rejected-submissions | The paper presents PIVEN, a deep neural network that produces a prediction interval in addition to a specific point prediction. PIVEN is distribution free and does not assume symmetric intervals.
All the reviewers agree that the paper investigates an important problem and the paper is well-written. The reviewers al... | train | [
"xlcx_WvrTay",
"zYJ4ji3OiTQ",
"RBHVr2OvQxD",
"eAYOe9e8gu",
"P9DL6uh4_is",
"YivIvYQg6b",
"2wFu6QXgiOq",
"VGGStxXFgrF",
"g7lB8dF7uk",
"DDxv5gayt63"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**Quality and Clarity**\n\nWhile the overall message of the paper is clear, the explanation of the method in Section 4 is a bit hard to follow. Specifically, it is a bit hard to keep track of the meaning of the multiple terms in the loss function and the associated hyperparameters (see Queries and Suggestions be... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2021_qn_gk5j3PJ",
"xlcx_WvrTay",
"iclr_2021_qn_gk5j3PJ",
"VGGStxXFgrF",
"DDxv5gayt63",
"VGGStxXFgrF",
"g7lB8dF7uk",
"iclr_2021_qn_gk5j3PJ",
"iclr_2021_qn_gk5j3PJ",
"iclr_2021_qn_gk5j3PJ"
] |
iclr_2021_MD3D5UbTcb1 | A Unified View on Graph Neural Networks as Graph Signal Denoising | Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data. A single GNN layer typically consists of a feature transformation and a feature aggregation operation. The former normally uses feed-forward networks to transform features, while the latter aggregates the transf... | withdrawn-rejected-submissions | The paper argues that GNNs can be understood as a graph signal denoising. While this interpretation is not surprising and not novel, the unified view does seem insightful according to some reviewers. Yet, it is not clear how much insight can be drawn from the presented theory, as no significantly better architecture or... | train | [
"DHuogjL6dBH",
"_MX7WoYdt7_",
"bqZZliMLqMa",
"17l2Z2KFw-a",
"ld82oh5ajlL",
"t8_nr7IyGD8",
"tdkcV3HjeM2",
"C3voM1NWL_",
"BGPvplhpdtL",
"m9cAqc9oINu",
"jFVg2yzwcu",
"ELLhXmYN_cn",
"LPLsZcheSlg",
"hLoDZODGAlP",
"RmjogvLUbVb"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**Post-discussion update:**\n\nI would like to thank the authors for addressing (albeit partially) my comments, as well as the comments from other reviewers. While I understand that some connections can be made between the proposed approach and other approaches or aspects that go beyond local smoothing or oversmoo... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
3,
6,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"iclr_2021_MD3D5UbTcb1",
"DHuogjL6dBH",
"C3voM1NWL_",
"tdkcV3HjeM2",
"ELLhXmYN_cn",
"bqZZliMLqMa",
"RmjogvLUbVb",
"hLoDZODGAlP",
"jFVg2yzwcu",
"LPLsZcheSlg",
"m9cAqc9oINu",
"iclr_2021_MD3D5UbTcb1",
"iclr_2021_MD3D5UbTcb1",
"iclr_2021_MD3D5UbTcb1",
"iclr_2021_MD3D5UbTcb1"
] |
iclr_2021_5UY7aZ_h37 | Transferring Inductive Biases through Knowledge Distillation | Having the right inductive biases can be crucial in many tasks or scenarios where data or computing resources are a limiting factor, or where training data is not perfectly representative of the conditions at test time. However, defining, designing, and efficiently adapting inductive biases is not necessarily straightf... | withdrawn-rejected-submissions | This paper studies through empirical analysis an interesting problem: distilling the (strong) inductive bias of a teacher model to the student model (of weak inductive bias). The main claim/finding is that not only the "dark knowledge" in the logits can be transferred, but also the inductive bias (e.g. recurrence in RN... | train | [
"zp3LWFlt1n0",
"otOBf7O1ht1",
"CnA7bwYG27n",
"_H3LZcx2Q1Q",
"u40B-qQH55L",
"8x7mxzJboFB",
"MibIkgnOCvO",
"kr5K7Lxrfz",
"sPwOg6-co1D",
"gLpC_loyduo",
"aOXUwXDB7pj",
"EIi1Wgidpgi",
"aNP5R0VuYN",
"pA_cpVZfiRM",
"iUqWAlXhkaa",
"25Uy2JIAUBO",
"7pWWj2os9Ny",
"70eJGPC1SpE",
"XRONzHtVQpx... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_revi... | [
"The paper investigates the oft-overlooked aspect of knowledge distillation (KD) -- why it works. The paper highlights the ability of KD for transferring not just the soft labels, but the inductive bias (assumptions inherent in the method, e.g. LSTM's notion of sequentiality, and CNN's translational invariance/equi... | [
7,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_5UY7aZ_h37",
"25Uy2JIAUBO",
"pA_cpVZfiRM",
"u40B-qQH55L",
"kr5K7Lxrfz",
"iclr_2021_5UY7aZ_h37",
"EIi1Wgidpgi",
"aNP5R0VuYN",
"gLpC_loyduo",
"iUqWAlXhkaa",
"ARXdgVSYd_B",
"Y6fCoYywCqq",
"bTFUogGlQw4",
"nXWUiSqsrun",
"vcagYhysBD2",
"7Z0At-bDXwA",
"gswW1VUJmvA",
"w_A1fQkG6z... |
iclr_2021_WdOCkf4aCM | CDT: Cascading Decision Trees for Explainable Reinforcement Learning | Deep Reinforcement Learning (DRL) has recently achieved significant advances in various domains. However, explaining the policy of RL agents still remains an open problem due to several factors, one being the complexity of explaining neural networks decisions. Recently, a group of works have used decision-tree-based mo... | withdrawn-rejected-submissions | The reviewers and authors have had a significant and healthy discussion around this manuscript. The reviewers remain concerned about the some of the central claims in this manuscript. While they have appreciated the clear communication and willingness of the authors to clarify most of their concerns, this central issue... | train | [
"TpXh9HUDS6S",
"ih3OXjn8Ipc",
"3KKCgyLBG03",
"TOKYKb_SY9",
"D_MgCBHE3yT",
"gqJNO5TO53",
"RmIRE3z0nN2",
"kHTnNb34DlX",
"64N9YpGpOSm",
"Ognu1tWzqi",
"QNXpQMQQXo",
"yWCvWXbpzHC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"## After Rebuttal and Discussion Period\nI want to first say that I really appreciated the opportunity to review this paper. It was awesome to see the authors willing to respond to the various suggestions and questions that the reviewers provided. The paper has improved over time but some significant areas of impr... | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"iclr_2021_WdOCkf4aCM",
"iclr_2021_WdOCkf4aCM",
"TOKYKb_SY9",
"RmIRE3z0nN2",
"iclr_2021_WdOCkf4aCM",
"TpXh9HUDS6S",
"ih3OXjn8Ipc",
"gqJNO5TO53",
"QNXpQMQQXo",
"yWCvWXbpzHC",
"iclr_2021_WdOCkf4aCM",
"iclr_2021_WdOCkf4aCM"
] |
iclr_2021_8bZC3CyF-f7 | Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution | Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards.
Complex tasks are often hierarchically composed of sub-tasks.
A step in the Q-function indicates solving a sub-task, where the expectation of the return increases.
RUDDER iden... | withdrawn-rejected-submissions | The reviewers appreciated that the paper was clear and well written. They also appreciated that the paper has been largely improved during the discussions. The results seem to support the claim and the experiments on Minecraft are convincing.
Yet, the reviewers had some important concerns. First the focus on RUDDER s... | train | [
"ctr3MHUOLCA",
"eVpJ-MvuURV",
"XnVBG6eN3Nx",
"WoBowjjcJjT",
"brpqTSonoOq",
"-2ygEKnOaB6",
"ns6Gu3T2aK",
"7dgujWxXCIo",
"zoXcYcB8klh",
"0g9JpeIYNoZ",
"AX_HK8fCoCp",
"_ldh_cB-MkD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"[Summary]\n\nPaper proposes to attack the challenging problem of RL with sparse feedback by leveraging a few demonstrations and learnable reward redistribution. The redistributed reward is computed by aligning the key events (a set of clustered symbols) to the demonstrations via PSSM-based seq matching. Experiment... | [
6,
5,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_8bZC3CyF-f7",
"iclr_2021_8bZC3CyF-f7",
"0g9JpeIYNoZ",
"zoXcYcB8klh",
"iclr_2021_8bZC3CyF-f7",
"eVpJ-MvuURV",
"ctr3MHUOLCA",
"brpqTSonoOq",
"brpqTSonoOq",
"_ldh_cB-MkD",
"iclr_2021_8bZC3CyF-f7",
"iclr_2021_8bZC3CyF-f7"
] |
iclr_2021_eZllW0F5aM_ | Don't stack layers in graph neural networks, wire them randomly | Graph neural networks have become a staple in problems addressing learning and analysis of data defined over graphs. However, several results suggest an inherent difficulty in extracting better performance by increasing the number of layers. Besides the classic vanishing gradient issues, recent works attribute this to ... | withdrawn-rejected-submissions | This paper proposes to use randomly wired architectures [1] in the context of GNNs and introduces a method for sampling random architectures based on the Erdős–Rényi model. The authors further include a theoretical analysis and two methodological contributions: sequential path embeddings and DropPath, a regularizer. Re... | train | [
"5wfNmF-sWYm",
"9q8J98wcmro",
"lepHU9hYuZb",
"8scwm_k8LV-",
"xlaUtHX4R2P",
"49LhqXheA5q",
"u8Fk0QxZui_",
"lykgNNx03Fq",
"V6HUWhwVz_Y",
"rkb4l4qfn_p",
"2MK61Mnvmso",
"Ym2n3qGjPb_",
"8Z42yY-HTcB",
"cAhrsl-IKr1",
"dh2yuetVfS",
"rZ9Ge6d_kWd",
"HvvZdOnlUWb",
"aCSZomdQCg2",
"3Myl_vRDDQ... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"... | [
"Summary:\n\nThis paper extends the technique of randomly wired neural nets from [1] to Graph Neural Networks and shows that they perform better than traditional GNN architectures. They demonstrate the improved capacity of this architecture via a number of experiments on the benchmark in [2] and ablation studies.\n\n... | [
5,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_eZllW0F5aM_",
"iclr_2021_eZllW0F5aM_",
"iclr_2021_eZllW0F5aM_",
"lykgNNx03Fq",
"Ym2n3qGjPb_",
"HvvZdOnlUWb",
"aCSZomdQCg2",
"8Z42yY-HTcB",
"5wfNmF-sWYm",
"vf_YylV5YO2",
"9q8J98wcmro",
"dh2yuetVfS",
"cAhrsl-IKr1",
"TE6o2_FKaH",
"lepHU9hYuZb",
"5wfNmF-sWYm",
"vf_YylV5YO2",
... |
iclr_2021_CLnj31GZ4cI | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters | We study the problem of injecting knowledge into large pre-trained models like BERT and RoBERTa. Existing methods typically update the original parameters of pre-trained models when injecting knowledge. However, when multiple kinds of knowledge are injected, they may suffer from catastrophic forgetting. To address thi... | withdrawn-rejected-submissions | The paper augments pre-trained language models by introducing “adapter”, where each adapter is another language model pre-trained for a specific knowledge source (e.g., Wikidata) and an objective (e.g., relation classification). The representation from each adapter is concatenated to the representation from the generic... | train | [
"sUcCphNUtAA",
"YpGtMnah35N",
"3xNNl4wd1J",
"EqBRFDMAPo3",
"V8evmofwb3E",
"AidjK6WiAq7",
"nkP70vQ9olt",
"oJYENq2UhyT",
"GsHAZBOZmnN",
"WsrwLAqK-Y6",
"sv5rbVdpwk"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your thorough reading and positive comments! We hope to be able to address your concern below.\n\nAs illustrated in our paper, BERT performs much better than RoBERTa on probing experiments. Sun et al., (2020) observe the same phenomenon in their paper. We conjecture that it is mainly because BERT uses a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"sv5rbVdpwk",
"oJYENq2UhyT",
"GsHAZBOZmnN",
"WsrwLAqK-Y6",
"WsrwLAqK-Y6",
"nkP70vQ9olt",
"iclr_2021_CLnj31GZ4cI",
"iclr_2021_CLnj31GZ4cI",
"iclr_2021_CLnj31GZ4cI",
"iclr_2021_CLnj31GZ4cI",
"iclr_2021_CLnj31GZ4cI"
] |
iclr_2021_RtNpzLdHUAW | Stochastic Subset Selection for Efficient Training and Inference of Neural Networks | Current machine learning algorithms are designed to work with huge volumes of high dimensional data such as images. However, these algorithms are being increasingly deployed to resource constrained systems such as mobile devices and embedded systems. Even in cases where large computing infrastructure is available, the ... | withdrawn-rejected-submissions | The paper proposed a two-stage method to select instances from a set, involving candidate selection (learning a function to determine a Bernoulli probability for each input) and AutoRegressive subset selection (learning a function to generate probabilities for sampling elements from a reduced set); both stages use th... | train | [
"WIhRln6n2G",
"dHi4RWijfJZ",
"3-o1omhU_76",
"o7qnoXHYqJF",
"wbmLZDX47_B",
"Qu1I67lYPw",
"8uMGPMOIv6X",
"MG7pbuc3Uqw",
"awCiY_zPFme",
"0DSP5-RGg21",
"2sO7mIJeyu0",
"nAJuWlK_J4",
"x2QzqU__9ka",
"LZtBZR421YV",
"i4-6-_J4DZP",
"4NoB1pHQv8m",
"-CIqHPdVjq-",
"VZIchvGq__-",
"eArw3IE1qQ",... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"### Summary\n\nThe authors present a stochastic algorithm for selecting a subset of a large dataset, while trying to preserve statistics from the original dataset. The algorithm is a two-step process, first selecting a set of \"candidates\" based on individual features, then filtering to a final subset which may a... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_RtNpzLdHUAW",
"iclr_2021_RtNpzLdHUAW",
"WIhRln6n2G",
"wbmLZDX47_B",
"8uMGPMOIv6X",
"2sO7mIJeyu0",
"x2QzqU__9ka",
"WIhRln6n2G",
"WIhRln6n2G",
"iclr_2021_RtNpzLdHUAW",
"i4-6-_J4DZP",
"iclr_2021_RtNpzLdHUAW",
"4NoB1pHQv8m",
"eArw3IE1qQ",
"0DSP5-RGg21",
"dHi4RWijfJZ",
"J7iMmLQ... |
iclr_2021_BW5PuV4V-rL | Gradient-based training of Gaussian Mixture Models for High-Dimensional Streaming Data | We present an approach for efficiently training Gaussian Mixture Models by SGD on non-stationary, high-dimensional streaming data.
Our training scheme does not require data-driven parameter initialization (e.g., k-means) and has the ability to process high-dimensional samples without numerical problems.
Fur... | withdrawn-rejected-submissions | This paper proposes training Gaussian mixture models using SGD, creating an algorithm appropriate for streaming data. However, we feel that the current manuscript does not sufficiently support the proposed method, and lacks insight into its workings. The reviewers believed the method lacked justification (while the aut... | train | [
"klGYaZZ_SBx",
"uj6CJXFgkD5",
"rrchfQVnfda",
"7nTkNmO6k1",
"rQeDtQDJDEt",
"QXMhJ9K9874",
"m6aeLE-0Be8",
"ZMWM5Diy8h2",
"BKluMxDCrK",
"a_qbJG_-RDN"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the constructive review! As this is an open discussion phase, we would value your feedback on our replies to better understand how we can improve the paper. We will incorporate the obvious improvements right away, and the rest as a function of the feedback in the open discussion phase.\n\nConcerning ... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
3,
4
] | [
"m6aeLE-0Be8",
"BKluMxDCrK",
"ZMWM5Diy8h2",
"a_qbJG_-RDN",
"QXMhJ9K9874",
"iclr_2021_BW5PuV4V-rL",
"iclr_2021_BW5PuV4V-rL",
"iclr_2021_BW5PuV4V-rL",
"iclr_2021_BW5PuV4V-rL",
"iclr_2021_BW5PuV4V-rL"
] |
iclr_2021_gMRZ4wLqlkJ | Few-Round Learning for Federated Learning | Federated learning (FL) presents an appealing opportunity for individuals who are willing to make their private data available for building a communal model without revealing their data contents to anyone else. Of central issues that may limit a widespread adoption of FL is the significant communication resources requi... | withdrawn-rejected-submissions | This paper proposes a meta-learning based few-shot federated learning approach to reduce the communication overhead incurred in aggregating model updates. The use of meta-learning also gives some generalization benefits. The reviewers think that the paper has the following main issues (see reviews for more details):
* ... | train | [
"bttQv-2-lq",
"oAD9a8mmDe0",
"2volpR82s4L",
"A0lcnpKbB8",
"rqnbLZzqky5",
"a_954AxecUc",
"1zu8GME6sJ",
"lKGyn0Fua9",
"63VRykLX_TC",
"yMS551gAQQJ",
"M38KbGZ4WZw"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"1. Strength:\n\nTargeting an important problem of FL: reducing the communication cost.\n\n\n2. Weakness:\n\nThis work simply applies the meta-learning method into the federated learning setting. I can’t see any technical contribution, either in the meta-learning perspective or the federated perspective. The experi... | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2021_gMRZ4wLqlkJ",
"iclr_2021_gMRZ4wLqlkJ",
"oAD9a8mmDe0",
"M38KbGZ4WZw",
"yMS551gAQQJ",
"rqnbLZzqky5",
"lKGyn0Fua9",
"bttQv-2-lq",
"iclr_2021_gMRZ4wLqlkJ",
"iclr_2021_gMRZ4wLqlkJ",
"iclr_2021_gMRZ4wLqlkJ"
] |
iclr_2021_HWqv5Pm3E3 | Source-free Domain Adaptation via Distributional Alignment by Matching Batch Normalization Statistics | In this paper, we propose a novel domain adaptation method for the source-free setting. In this setting, we cannot access source data during adaptation, while unlabeled target data and a model pretrained with source data are given. Due to lack of source data, we cannot directly match the data distributions between doma... | withdrawn-rejected-submissions | This submission develops a novel technique for domain adaptation for the setup where only a trained model (but no data) from the source task is available. The authors propose to fine-tune the feature encoder using batch norm statistics of the features extracted. Additionally their criterion also promotes increasing the... | test | [
"xPdeUw59khO",
"NHQeiafkCgy",
"uiitIIeaKX0",
"LdpqJ_-0AcA",
"7oxfGORkDUx",
"xkANcA9MxS",
"tflsx1oQLy0",
"6yWC4oa4iVt"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"### Summary\nThis paper proposes a domain adaptation technique when source data is not available. The exponentially weighted average of BN statistics from source training along with the trained model is utilized to align source and target distributions. Source model is divided into feature encoder and classifier c... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
4
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_HWqv5Pm3E3",
"iclr_2021_HWqv5Pm3E3",
"LdpqJ_-0AcA",
"7oxfGORkDUx",
"NHQeiafkCgy",
"6yWC4oa4iVt",
"xPdeUw59khO",
"iclr_2021_HWqv5Pm3E3"
] |
iclr_2021_czv8Ac3Kg7l | Sparse Gaussian Process Variational Autoencoders | Large, multi-dimensional spatio-temporal datasets are omnipresent in modern science and engineering. An effective framework for handling such data are Gaussian process deep generative models (GP-DGMs), which employ GP priors over the latent variables of DGMs. Existing approaches for performing inference in GP-DGMs do n... | withdrawn-rejected-submissions | The paper proposes a method for inference in models with GP priors and neural network likelihoods for multi-output modelling, dealing with the problem of scalability and missing data. The paper builds upon previous work on inducing variables for scalability on GP models and inference networks for amortization (reducing... | train | [
"_8gd1mt-sTr",
"voAcUf6GUOD",
"XETBanKK3U",
"Tt1yR-t6iey",
"VzpIMXzn7tf",
"qSwJQ2IuBqV",
"4-v0RSBuBmB",
"mI3riFNkufu",
"wvQC_xEi0z8"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this work generative models using a GP as prior and a deep network as likelihood (GP-DGMs) are considered. In the VAE formalism for inference, the novelty of this paper is located in the encoder: It is sparse and the posterior can be computed even when part of the observations are missing. Sparsity is obtained ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_czv8Ac3Kg7l",
"XETBanKK3U",
"mI3riFNkufu",
"wvQC_xEi0z8",
"qSwJQ2IuBqV",
"_8gd1mt-sTr",
"iclr_2021_czv8Ac3Kg7l",
"iclr_2021_czv8Ac3Kg7l",
"iclr_2021_czv8Ac3Kg7l"
] |
iclr_2021_7ZJPhriEdRQ | AR-ELBO: Preventing Posterior Collapse Induced by Oversmoothing in Gaussian VAE | Variational autoencoders (VAEs) often suffer from posterior collapse, which is a phenomenon that the learned latent space becomes uninformative. This is related to local optima introduced by a fixed hyperparameter resembling the data variance in the objective function. We suggest that this variance parameter regularize... | withdrawn-rejected-submissions | This paper is about learning the output noise variance of a VAE and its effect on the generated image quality as measured by FID. The paper argues that the output variance parameter plays an important role and proposes a simple procedure, where a maximum likelihood estimate of the noise variance is estimated. Experimen... | test | [
"pdUcyXpUqON",
"tWM4XNMLYZ7",
"5LlAphGG5y",
"xuGReG2YdKG",
"AhEXiPfro3p",
"PMBd9rQtHQk",
"k23FZ7PHn2t",
"l858E7WN7Xx",
"hagi4lRhM4",
"4_eJn3fvhX7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers Gaussian VAEs and their tendency to suffer from posterior collapse. In particular, the authors analyse the impact of the usually fixed covariance $\\sigma_x$ of the decoder Gaussian on the learned encoder variance. They show that the former can be seen as a regulariser for the latter and theref... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2021_7ZJPhriEdRQ",
"AhEXiPfro3p",
"iclr_2021_7ZJPhriEdRQ",
"pdUcyXpUqON",
"4_eJn3fvhX7",
"l858E7WN7Xx",
"hagi4lRhM4",
"iclr_2021_7ZJPhriEdRQ",
"iclr_2021_7ZJPhriEdRQ",
"iclr_2021_7ZJPhriEdRQ"
] |
iclr_2021_0vO-u0sucRF | Information Theoretic Meta Learning with Gaussian Processes | We formulate meta learning using information theoretic concepts such as mutual information and the information bottleneck. The idea is to learn a stochastic representation or encoding of the task description, given by a training or support set, that is highly informative about predicting the validation set. By making u... | withdrawn-rejected-submissions | Information bottleneck is a well-known principle that is used for clustering, dimensionality reduction, and recently deep learning. It finds a compressed representation of input X while retaining most information on the response Y. This paper addresses an attempt to interpret the meta-learning using the information bot... | train | [
"4S-Ag-rNZl1",
"zIjnLsiBtLR",
"18uoy0CjRU",
"vikicQIIJAS",
"CngCP8Y8nVi",
"KNPt__knvzr",
"VHWtzAG0Q5x",
"Pqf44PumjU1"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n\nThe paper proposed variational approximations to the information bottleneck objective functions for meta-learning.\nThe authors then provided three different settings using their variational loss functions, namely SMAML, GP, and GP + MAML.\nThe authors' motivations for these three settings were to stud... | [
5,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
3,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2021_0vO-u0sucRF",
"Pqf44PumjU1",
"VHWtzAG0Q5x",
"4S-Ag-rNZl1",
"KNPt__knvzr",
"iclr_2021_0vO-u0sucRF",
"iclr_2021_0vO-u0sucRF",
"iclr_2021_0vO-u0sucRF"
] |
iclr_2021_GwjkaD3g-V1 | Semi-Supervised Learning of Multi-Object 3D Scene Representations | Representing scenes at the granularity of objects is a prerequisite for scene understanding and decision making. We propose a novel approach for learning multi-object 3D scene representations from images. A recurrent encoder regresses a latent representation of 3D shapes, poses and texture of each object from an input ... | withdrawn-rejected-submissions | The paper proposes learning of 3D object representation from images. The pretraining used assumes it can generate implicit 3D models for the objects, and then objects are detected in multi-object scenes without further supervision. Reviewers raised concerns regarding experiments being conducted only on synthetic data. ... | train | [
"tOIJ-Q0-CD",
"K8C1LQugs6u",
"SVGqcmmYi1B",
"NQc9I5foE_",
"eLg-ygL_w7C",
"q5HBipoEyba",
"sSLepV5uMIG"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"### Weaknesses\n\n- All experiments are synthetic. Although synthetic experiments are common in the field, I see this as a weakness because the method heavily relies on image reconstruction loss on rendered objects. Real scene are very different due to both high level (more clutter, occlusion, number of objects) a... | [
6,
6,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_GwjkaD3g-V1",
"iclr_2021_GwjkaD3g-V1",
"tOIJ-Q0-CD",
"K8C1LQugs6u",
"sSLepV5uMIG",
"iclr_2021_GwjkaD3g-V1",
"iclr_2021_GwjkaD3g-V1"
] |
iclr_2021_0xdQXkz69x9 | Attacking Few-Shot Classifiers with Adversarial Support Sets | Few-shot learning systems, especially those based on meta-learning, have recently made significant advances, and are now being considered for real world problems in healthcare, personalization, and science. In this paper, we examine the robustness of such deployed few-shot learning systems when they are fed an impercep... | withdrawn-rejected-submissions | This paper presents a method for attacking few-shot learners with poisoning a subset of support set. I believe this might be the first work to address adversarial examples for meta-learners (or few-shot learners), which is a timely issue. A common concern raised by most of reviewers is in the novelty of this work, in t... | test | [
"ScsPs453Q4L",
"SgegDOJmcNF",
"U2hBxImfrUq",
"r4Lj52exjZq",
"S1HTbOfMZws",
"tflLKyjHnd",
"7fybPVcor5u",
"Zaay0CkOLL5",
"uihFTDo9-N6",
"_yPXh5HmnUV",
"BtCt_BDg9vP"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**Pros:**\n+ The paper considers the construction of adversarial examples for a new learning paradigm which has practical relevance.\n+ A number of possible threat models under the few-shot learning paradigm are considered.\n+ The considered attack (a simple variation on PGD) is found to be effective against a var... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2021_0xdQXkz69x9",
"iclr_2021_0xdQXkz69x9",
"ScsPs453Q4L",
"S1HTbOfMZws",
"uihFTDo9-N6",
"_yPXh5HmnUV",
"BtCt_BDg9vP",
"U2hBxImfrUq",
"iclr_2021_0xdQXkz69x9",
"iclr_2021_0xdQXkz69x9",
"iclr_2021_0xdQXkz69x9"
] |
iclr_2021_bQtejwuIqB | With False Friends Like These, Who Can Have Self-Knowledge? | Adversarial examples arise from excessive sensitivity of a model. Commonly studied adversarial examples are malicious inputs, crafted by an adversary from correctly classified examples, to induce misclassification. This paper studies an intriguing, yet far overlooked consequence of the excessive sensitivity, that is, a... | withdrawn-rejected-submissions | This paper the flip-side of an adversarial "attack" in that data may be perturbed to make it look like a model was performing well rather than the standard notion of adversarial attacks. The reviewers found this notion interesting and potentially worthy of investigation. However as it stands, the proposed applications ... | train | [
"bvMARVU4LEE",
"xUg7w9ytjfK",
"7r0qwqCPsVr",
"KhGyaw_MPy",
"ZRS52pOTPD",
"iTybCfrobCI",
"7g-Yp_ziJPX",
"oOi45btN8b9",
"NCFzaPvWmWC",
"cF8B70aJV-D",
"jf4rPjEG16a",
"GJB6Cq0p2aL",
"wUT5NFpxre"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers attacks that flip the label for those data points that are incorrectly classified. It proposes a risk metric to capture this and uses prior attacks for this type of attack.\n\nPositive points:\nThe main idea in the paper is, at first look, interesting but then on more thinking I have doubts (s... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2021_bQtejwuIqB",
"7r0qwqCPsVr",
"oOi45btN8b9",
"bvMARVU4LEE",
"wUT5NFpxre",
"iclr_2021_bQtejwuIqB",
"bvMARVU4LEE",
"jf4rPjEG16a",
"GJB6Cq0p2aL",
"jf4rPjEG16a",
"iclr_2021_bQtejwuIqB",
"iclr_2021_bQtejwuIqB",
"iclr_2021_bQtejwuIqB"
] |
iclr_2021_r1d-lFmO-cM | Pointwise Binary Classification with Pairwise Confidence Comparisons | Ordinary (pointwise) binary classification aims to learn a binary classifier from pointwise labeled data. However, such pointwise labels may not be directly accessible due to privacy, confidentiality, or security considerations. In this case, can we still learn an accurate binary classifier? This paper proposes a novel... | withdrawn-rejected-submissions | This paper has been evaluated by three expert reviewers, two of whom recommended rejection and one acceptance. Two of the three reviews are particularly detailed and thorough. Both point out a few points of conceptual issues that leave the reader confused. These key issues have not been addressed sufficiently in the r... | val | [
"nyBnb5P7Ss",
"4EctlViNUM",
"zrPj_K2t_YK",
"t1IPQxD8aBz",
"fEBhiFeirky",
"GoonPkMeHij",
"iquh89P9Pja",
"4MLe5eCNRN",
"GRFDMUsYrnY"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"A1. Thank you for this question.\n\nThe generation model works in the following way:\n\nFor a pair of data $(x,x')$ with labels $(+1,+1)$ or $(+1,-1)$ or $(-1,-1)$, $(x,x')$ will be taken as a pairwise comparison example for Pcomp classification.\n\nFor a pair of data $(x,x')$ with labels $(-1,+1)$, $(x',x)$ (orde... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"zrPj_K2t_YK",
"iquh89P9Pja",
"iquh89P9Pja",
"GRFDMUsYrnY",
"iclr_2021_r1d-lFmO-cM",
"4MLe5eCNRN",
"iclr_2021_r1d-lFmO-cM",
"iclr_2021_r1d-lFmO-cM",
"iclr_2021_r1d-lFmO-cM"
] |
iclr_2021_3u3ny6UYmjy | RetCL: A Selection-based Approach for Retrosynthesis via Contrastive Learning | Retrosynthesis, of which the goal is to find a set of reactants for synthesizing a target product, is an emerging research area of deep learning. While the existing approaches have shown promising results, they currently lack the ability to consider availability (e.g., stability or purchasability) of the reactants or g... | withdrawn-rejected-submissions | While the authors appreciated the proposed contrastive training scheme and the strong related work summary, all authors agreed that the approach was severeley limited by being a pure selection-based method. Without the help of another model that proposes molecules, the approach can only select reactants from an existin... | train | [
"ELIs_D7cg4W",
"87N1_30BajG",
"_Zz2afzgy42",
"B-wpXR_mpl9",
"96KnamspzfS",
"Zc--FnaTu3",
"7TO6mAC9cA0",
"XCPuF0Zaii-",
"xpW0x_Ocsvm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Summary of the paper\nThis paper proposes a sequential reactant selection scheme for retrosynthesis. In each step, the model gives a ranking of reactants based on previously chosen reactants $R_{given}$. After all the reactants are selected, the model checks whether the chosen reactants result in desired produ... | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"iclr_2021_3u3ny6UYmjy",
"7TO6mAC9cA0",
"iclr_2021_3u3ny6UYmjy",
"xpW0x_Ocsvm",
"XCPuF0Zaii-",
"ELIs_D7cg4W",
"iclr_2021_3u3ny6UYmjy",
"iclr_2021_3u3ny6UYmjy",
"iclr_2021_3u3ny6UYmjy"
] |
iclr_2021_L3iGqaCTWS9 | Hybrid and Non-Uniform DNN quantization methods using Retro Synthesis data for efficient inference | Existing post-training quantization methods attempt to compensate for the quantization loss by determining the quantized weights and activation ranges with the help of training data. Quantization aware training methods, on the other hand, achieve accuracy near to FP32 models by training the quantized model which consum... | withdrawn-rejected-submissions | Four knowledgeable referees reviewed this paper; one reviewer (weakly) supports accept and other three indicate reject. Even with the rebuttal, all negative reviewers have concerns on the limited novelty and marginal performance improvement, and agree that the paper is not well qualified for the high standard of ICLR. | test | [
"8-8sXqDjdHD",
"_6sDODKv6ts",
"Q9IZZx4duwo",
"uc0Xxbopa1k",
"T-b3eILkx8Z",
"tqZvRSn_ba",
"zQuwp6oFtlK",
"wpm2uS6GvL-",
"bsRLt_n0VWE",
"3QctDb4V5FS",
"qD0YFASBjg7",
"RZFnmHmlFLO"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work uses post-training quantization without access to training data for privacy concerns. Instead, useful statistics are estimated using a retro-synthesis data obtained from the FP baseline. I have a few comments, some concerns and some suggestions I think can be used to improve this work.\n\nOverview of pri... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iclr_2021_L3iGqaCTWS9",
"Q9IZZx4duwo",
"3QctDb4V5FS",
"iclr_2021_L3iGqaCTWS9",
"tqZvRSn_ba",
"qD0YFASBjg7",
"RZFnmHmlFLO",
"zQuwp6oFtlK",
"8-8sXqDjdHD",
"iclr_2021_L3iGqaCTWS9",
"iclr_2021_L3iGqaCTWS9",
"iclr_2021_L3iGqaCTWS9"
] |
iclr_2021_7eD88byszZ | A Unified Spectral Sparsification Framework for Directed Graphs | Recent spectral graph sparsification research allows constructing nearly-linear-sized subgraphs that can well preserve the spectral (structural) properties of the original graph, such as the first few eigenvalues and eigenvectors of the graph Laplacian, leading to the development of a variety of nearly-linear time nume... | withdrawn-rejected-submissions | The paper proposes a fast, nearly-linear time, algorithm for finding a sparsifier for general directed and undirected graphs that approximately preserves the spectral properties of the original graph. The reviewers appreciated the main contribution of the paper, but they were concerned about the correctness and clarity... | train | [
"JFkP19mi0Gj",
"70Y3PiVOlN3",
"9kQl1hucc7K",
"QrlQ7W-qzc3",
"q61G9hyPhsP",
"5f120RHTTxi",
"KfckmThNvSw",
"6HJaEfSQR55",
"FQgB0lTDaBp",
"ozmDMfI_rf",
"PJd1UmXFIbm",
"GGknSfjO4O",
"NskMaRlksK5"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary\n========\nThe paper studies a certain notion of spectral sparsification of directed graphs. It claims the existence of nearly linear sized sparsifiers under this notion, and suggests empirical methods to produce such sparsifiers in nearly linear time.\n\nComments\n=========\n\nSection 4: I am not sure wha... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"iclr_2021_7eD88byszZ",
"9kQl1hucc7K",
"5f120RHTTxi",
"q61G9hyPhsP",
"5f120RHTTxi",
"JFkP19mi0Gj",
"PJd1UmXFIbm",
"GGknSfjO4O",
"NskMaRlksK5",
"iclr_2021_7eD88byszZ",
"iclr_2021_7eD88byszZ",
"iclr_2021_7eD88byszZ",
"iclr_2021_7eD88byszZ"
] |
iclr_2021_apiI1ySCSSR | Meta-learning Transferable Representations with a Single Target Domain | Recent works found that fine-tuning and joint training---two popular approaches for transfer learning---do not always improve accuracy on downstream tasks. First, we aim to understand more about when and why fine-tuning and joint training can be suboptimal or even harmful for transfer learning. We design semi-synthetic... | withdrawn-rejected-submissions | The paper compares transfer learning with fine-tuning and joint training and then proposes a new approach (Merlin). Reviewers have pointed to the fact that Merlin works in a setting that is different from normal transfer learning settings (it assumes some target domain data is available during training). The authors ac... | val | [
"rLcEs6LFbWE",
"in-GKTBCIw7",
"Evr2Pa2Hmr",
"89CXgj67ew0",
"Q6cwp8oRzJi",
"kvFuP9hd--K",
"swar0EN-6K",
"dD1E8Tu_Jci",
"F0w-9EIUaeZ",
"KJlCpX-MTb",
"Fmi7POqmE20",
"AcdBd-k61XP",
"dPjF2FfGR6h"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We also add results of BSS on ResNet-18 to compare with our method in the following table.\n\n|Method|CUB-200|Caltech-256|Stanford Cars|\n| --- | :---: | :---: | :---: |\n|Fine-tuning|72.52 ± 0.51|81.12 ± 0.27 |81.59 ± 0.49|\n|BSS|73.43 ± 0.21|82.21 ± 0.18|81.84 ± 0.25|\n|MeRLin|75.42 ± 0.47 | 82.45 ± 0.26|83.68 ±... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"Evr2Pa2Hmr",
"Evr2Pa2Hmr",
"89CXgj67ew0",
"swar0EN-6K",
"kvFuP9hd--K",
"F0w-9EIUaeZ",
"Fmi7POqmE20",
"AcdBd-k61XP",
"dPjF2FfGR6h",
"dPjF2FfGR6h",
"iclr_2021_apiI1ySCSSR",
"iclr_2021_apiI1ySCSSR",
"iclr_2021_apiI1ySCSSR"
] |
iclr_2021_3JI45wPuReY | Neural Network Surgery: Combining Training with Topology Optimization | With ever increasing computational capacities, neural networks become more and more proficient at solving complex tasks. However, picking a sufficiently good network topology usually relies on expert human knowledge. Neural architecture search aims to reduce the extent of expertise that is needed. Modern architecture s... | withdrawn-rejected-submissions | This work proposes a framework to search for the topology of an artificial neural network jointly with the network training, via a genetic algorithm that can decide structural actions, such as addition or removal of neurons and layers. An extra heuristic based on Bayesian information criterion helps the optimization pr... | test | [
"R2246-NIOSf",
"bQptJQO1kVT",
"5knkbiyADxo",
"0v6kpi4hV1d",
"cLPFQSLgaM0",
"DgjgMZvsBke"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Answering the authors' question, Regarding mainstream NAS algorithms, there have been many including DARTS, ProxylessNAS. I believe if this paper can be interweaved with these works, it would be interesting. In fact, presenting ImageNet results in relation to such direction of work could be a better application of... | [
-1,
-1,
4,
4,
5,
4
] | [
-1,
-1,
4,
4,
2,
4
] | [
"0v6kpi4hV1d",
"iclr_2021_3JI45wPuReY",
"iclr_2021_3JI45wPuReY",
"iclr_2021_3JI45wPuReY",
"iclr_2021_3JI45wPuReY",
"iclr_2021_3JI45wPuReY"
] |
iclr_2021_oKWmzgO7bfl | Detection Booster Training: A detection booster training method for improving the accuracy of classifiers. | Deep learning models owe their success at large, to the availability of a large amount of annotated data. They try to extract features from the data that contain useful information needed to improve their performance on target applications. Most works focus on directly optimizing the target loss functions to improve th... | withdrawn-rejected-submissions | In this paper the authors propose an approach to improving the accuracy of the classification problem based on deep neural networks by detecting the in-domain data from background/noise. The strategy is designed in such a way that the detector and the classifier share the bottom layers of the network. Theoretical pro... | train | [
"IaqTHbvVkL",
"o_VSnwiNXw",
"XbBIU5Ptcd9",
"GCAXAIo1d4X",
"sdqqq-54Kl",
"KyUG7gdNNdK",
"hUDJrjl8ve",
"8-PLtgkXG18",
"WiIvy1uzJ3Q"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Sorry, if the reviewer misunderstood the statement. This issue is not data imbalance. For both Tables 10 and 11, we use the same number of examples for the background class as others (eg., 5000/class for CIFAR-10 and 500 /class for CIFAR-100) to conduct our experiments. \n\nIn the pointed part of the text (\"the c... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"o_VSnwiNXw",
"XbBIU5Ptcd9",
"WiIvy1uzJ3Q",
"hUDJrjl8ve",
"8-PLtgkXG18",
"iclr_2021_oKWmzgO7bfl",
"iclr_2021_oKWmzgO7bfl",
"iclr_2021_oKWmzgO7bfl",
"iclr_2021_oKWmzgO7bfl"
] |
iclr_2021_tFPAIXpb13 | Graph Deformer Network | Convolution learning on graphs draws increasing attention recently due to its potential applications to a large amount of irregular data. Most graph convolution methods leverage the plain summation/average aggregation to avoid the discrepancy of responses from isomorphic graphs. However, such an extreme collapsing way ... | withdrawn-rejected-submissions | Three of the reviewers are significantly concerned about this submission, while R1 is positive. Meanwhile, R1 is also concerned about some details in the paper, including space and time complexity etc. The authors provided detailed feedback to these comments, but R1 does not provide support to this work during discussi... | train | [
"0AQxpPsbK0I",
"dIOMJAhWOMi",
"4J3woGWTBbS",
"320Lm8eHAg3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In order to perform anisotropic convolution on graphs, this paper proposes to project a local neighborhood into a unified virtual space by introducing anchor nodes. A theoretical analysis is provided to show the expressive power of the proposed graph deformer operation on graph isomorphism test. Extensive experime... | [
5,
4,
7,
5
] | [
4,
4,
5,
5
] | [
"iclr_2021_tFPAIXpb13",
"iclr_2021_tFPAIXpb13",
"iclr_2021_tFPAIXpb13",
"iclr_2021_tFPAIXpb13"
] |
iclr_2021_3eNrIs9I78x | SALR: Sharpness-aware Learning Rates for Improved Generalization | In an effort to improve generalization in deep learning, we propose SALR: a sharpness-aware learning rate update technique designed to recover flat minimizers. Our method dynamically updates the learning rate of gradient-based optimizers based on the local sharpness of the loss function. This allows optimizers to autom... | withdrawn-rejected-submissions | This paper proposes a method to update the learning rate dynamically by increasing it in areas with higher sharpness and decreasing it otherwise. This would the hopefully leads to escaping sharp valleys and better generalization. Authors further provide some related theoretical results and several experiments to show e... | val | [
"qywSDgUKPR_",
"bDR6cRUT_-b",
"C6saSZfgcV_",
"zE6-YW1r5qC",
"HotIyKkJicT",
"_bKs5rIBlHq",
"3F_Lq4nCSXm",
"gTi1FmA9nwf",
"o1C5et8uRlE",
"4KttAUE1U7N"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes an algorithm that aims at finding a flat minimizer. The high-level strategy in the design of the proposed algorithm is increasing the learning rate when the iterate is in the region of a sharp minimizer. The authors claim that by increasing the learning rate, the iterate can get out of the undes... | [
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
5,
4
] | [
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
5,
4
] | [
"iclr_2021_3eNrIs9I78x",
"4KttAUE1U7N",
"qywSDgUKPR_",
"HotIyKkJicT",
"iclr_2021_3eNrIs9I78x",
"HotIyKkJicT",
"o1C5et8uRlE",
"iclr_2021_3eNrIs9I78x",
"iclr_2021_3eNrIs9I78x",
"iclr_2021_3eNrIs9I78x"
] |
iclr_2021_Io8oYQb4LRK | Non-greedy Gradient-based Hyperparameter Optimization Over Long Horizons | Gradient-based meta-learning has earned a widespread popularity in few-shot learning, but remains broadly impractical for tasks with long horizons (many gradient steps), due to memory scaling and gradient degradation issues. A common workaround is to learn meta-parameters online, but this introduces greediness which co... | withdrawn-rejected-submissions | This paper investigates methods for gradient-based tuning of optimization hyperparameters. This is an interesting area, and the paper isn't bad. The examination of hypervariance seems relatively novel and useful. I also appreciate the point about Bayesopt sometimes working well simply due to small ranges.
However, ... | train | [
"Nmwpd0fp24m",
"QXKHRJK84sB",
"w8APF2m_lBW",
"jFkq47-Ivzt",
"Bkb7BvI8lMD",
"HbhhbPIHEr",
"Ky0h1zY-psH",
"KJxsMXXyKx",
"YUfkoOYJ8C",
"6mBOyff5_h-",
"6HQMY4zt-_"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for following through. We've had time to run BOHB experiments, and hope to expend on our last comment below.\n\nThe range of alpha stated previously was taken from the tables in the appendices of the BOHB paper (Table 1,3,4). Note that the range for the CIFAR-10 experiment doesn't seem to be given, so we... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
2,
4
] | [
"QXKHRJK84sB",
"Bkb7BvI8lMD",
"6HQMY4zt-_",
"6mBOyff5_h-",
"KJxsMXXyKx",
"YUfkoOYJ8C",
"iclr_2021_Io8oYQb4LRK",
"iclr_2021_Io8oYQb4LRK",
"iclr_2021_Io8oYQb4LRK",
"iclr_2021_Io8oYQb4LRK",
"iclr_2021_Io8oYQb4LRK"
] |
iclr_2021_ZJGnFbd6vW | PCPs: Patient Cardiac Prototypes | Existing deep learning methodologies within the medical domain are typically population-based and difficult to interpret. This limits their clinical utility as population-based findings may not generalize to the individual patient. To overcome these obstacles, we propose to learn patient-specific representations, entit... | withdrawn-rejected-submissions | The paper proposes the use of contrastive learning to learn patient specific representations from medical data. The authors show how their method can be used to find similar patients within and across datasets.
The paper has some issues, as indicated by the reviewers:
- similarity to past work; in the response to R1,... | train | [
"qZsvNcPbGAx",
"qJN70pIdxp",
"EVckcsRScpN",
"6lzFNOWhxQQ",
"cBaCeUg-kz8",
"Mf8dfe66T3s",
"_jEaBoYqM1I",
"uUQX7E42qc3"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the efforts in addressing some of my minor concerns. However, the most critical issues remain unaddressed. They are listed below.\n\n1. Lack of novelty and lack of comparison with state-of-the-art methods\n \nThe authors claimed “perform contrastive and supervised learning on the actual downstream ta... | [
-1,
-1,
-1,
-1,
-1,
2,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"Mf8dfe66T3s",
"EVckcsRScpN",
"Mf8dfe66T3s",
"uUQX7E42qc3",
"_jEaBoYqM1I",
"iclr_2021_ZJGnFbd6vW",
"iclr_2021_ZJGnFbd6vW",
"iclr_2021_ZJGnFbd6vW"
] |
iclr_2021_Y3pk2JxYmO | Fast Training of Contrastive Learning with Intermediate Contrastive Loss | Recently, representations learned by self-supervised approaches have significantly reduced the gap with their supervised counterparts in many different computer vision tasks. However, these self-supervised methods are computationally challenging. In this work, we focus on accelerating contrastive learning algorithms wi... | withdrawn-rejected-submissions | This paper introduces modifications that allow to make the training of contrastive-learning-based models practical. The goal of the paper is very interesting, and the motivation clear. This paper tackles a very important issue with recent unsupervised feature learning methods.
However, while the goal is great, the pres... | train | [
"0flDkgqFyuS",
"2qTHy2Z8-r",
"97T_Bll_sda",
"lFyuj-9atA8",
"S4PWTQ8Qpa",
"vkvm8cTl0a",
"xNp0euf_-s",
"edEYQunYam9",
"8l_F4N5eqVe",
"BCyF5JJW5Wk"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer1, thank you for your new comment.\nHere, we would like to clarify some background.\n1) For the first problem (hyper-parameter), as shown in the appendix, the self-supervised learning methods that we built our algorithms upon (MoCo, SimCLR, SwAV) are sensitive to some hyper-parameters. (**For example,... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"2qTHy2Z8-r",
"vkvm8cTl0a",
"BCyF5JJW5Wk",
"8l_F4N5eqVe",
"xNp0euf_-s",
"edEYQunYam9",
"iclr_2021_Y3pk2JxYmO",
"iclr_2021_Y3pk2JxYmO",
"iclr_2021_Y3pk2JxYmO",
"iclr_2021_Y3pk2JxYmO"
] |
iclr_2021_5rc0K0ezhqI | Unpacking Information Bottlenecks: Surrogate Objectives for Deep Learning | The Information Bottleneck principle offers both a mechanism to explain how deep neural networks train and generalize, as well as a regularized objective with which to train models. However, multiple competing objectives are proposed in the literature, and the information-theoretic quantities used in these objectives a... | withdrawn-rejected-submissions | We have a very well informed reviewer who strongly feels that this paper is insufficiently novel and significant further discussion on how the paper might be raised to a publishable level with more empirical results. I will have to side with the more engaged reviewers who feel that the paper should be rejected. | train | [
"ynGuKwN76Ll",
"TqB5y1Wo2E0",
"v5wzj92lbNH",
"6SbuUinc_Rd",
"aGbXQeul5bc",
"FWihY1eehoI",
"QvNCkdJBX1x",
"enxXSvwfX5u",
"2qbcspcSnbS",
"bZw86dHiMs0",
"OHiT4xE5n29",
"JgTHb0uTE5E"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary of paper:\nThe authors review the information bottleneck (IB) in the context of deep learning. They discuss the obstacles to applying the IB (and a deterministic variant, the DIB) to modern datasets, review approaches to doing so, and introduce their own scalable approach. Their approach introduces practic... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2021_5rc0K0ezhqI",
"enxXSvwfX5u",
"2qbcspcSnbS",
"OHiT4xE5n29",
"iclr_2021_5rc0K0ezhqI",
"6SbuUinc_Rd",
"JgTHb0uTE5E",
"ynGuKwN76Ll",
"bZw86dHiMs0",
"iclr_2021_5rc0K0ezhqI",
"iclr_2021_5rc0K0ezhqI",
"iclr_2021_5rc0K0ezhqI"
] |
iclr_2021_QQzomPbSV7q | Reducing Class Collapse in Metric Learning with Easy Positive Sampling | Metric learning seeks perceptual embeddings where visually similar instances are close and dissimilar instances are apart, but learned representation can be sub-optimal when the distribution of intra-class samples is diverse and distinct sub-clusters are present. We theoretically prove and empirically show that under r... | withdrawn-rejected-submissions | This paper is truly borderline. On one hand, the theoretical contribution seems novel and interesting, however, there appears to be somewhat of a gap between theory and practice.
There is unfortunately another problem. According to the authors, the main contribution of this publication is arguably the introduction of... | train | [
"gN1R94iJ9jP",
"kUgMfFk5WpQ",
"9j3TIx8mD02",
"f87kq_8pfAC",
"LHQEwE7IhF",
"prj6AOxIkV_",
"Lamsa4rf62j",
"otggLWTgbPt",
"38RwnwiLlqQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Post-rebuttal: The rebuttal partly addresses my concerns, so I would like to change my score to 4.\n------------------------------------------------------\nThis paper proposes an easy positive sampling method for deep metric learning which aims to reduce the class collapse problem which is found to harm the perfor... | [
4,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"iclr_2021_QQzomPbSV7q",
"LHQEwE7IhF",
"gN1R94iJ9jP",
"Lamsa4rf62j",
"otggLWTgbPt",
"38RwnwiLlqQ",
"iclr_2021_QQzomPbSV7q",
"iclr_2021_QQzomPbSV7q",
"iclr_2021_QQzomPbSV7q"
] |
iclr_2021_4D4Rjrwaw3q | Black-Box Optimization Revisited: Improving Algorithm Selection Wizards through Massive Benchmarking | Existing studies in black-box optimization for machine learning suffer from low
generalizability, caused by a typically selective choice of problem instances used
for training and testing different optimization algorithms. Among other issues,
this practice promotes overfitting and poor-performing user... | withdrawn-rejected-submissions | This paper presents a benchmarking suite, primarily targeting the domain of evolutionary style optimization algorithms, and an effective heuristic algorithm selection procedure ABBO. The reviewers seemed quite split in their reviews with significant variance, particularly with one outlier review (9) lifting up the ave... | train | [
"mp8Dm6QIXJm",
"GsrEPQOdcez",
"sxoPhq8lF18",
"9mP_gZbz6_6",
"C2Jj2y8aHbn",
"lNkh8wBHazu",
"lDUQL2DGyEc",
"snK4Y9jiOlT",
"wXicg_zQdzS",
"3hrRt9yayW",
"-W8QcIPkUkj",
"OICU4vkbgk",
"SZ7XIsX3BFK",
"LdfbE-ezdMm",
"sKNvGHU33rt",
"cNsoXs6ygv",
"8g6h-yQIChy",
"8_L_0MpvGUY",
"0h_qcbaNb7y"... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"o... | [
"The paper proposes a benchmark suite for black-box optimization that covers more\ndifferent types of problems than existing benchmarks. They derive an algorithm\nselection system for black-box optimization from it and evaluate its performance\nempirically, comparing to other black-box optimization solvers.\n\nThe ... | [
6,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
9,
-1,
-1,
-1,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_4D4Rjrwaw3q",
"-W8QcIPkUkj",
"iclr_2021_4D4Rjrwaw3q",
"lNkh8wBHazu",
"snK4Y9jiOlT",
"OICU4vkbgk",
"iclr_2021_4D4Rjrwaw3q",
"3hrRt9yayW",
"8_L_0MpvGUY",
"SZ7XIsX3BFK",
"sKNvGHU33rt",
"LdfbE-ezdMm",
"3oFlrQ3x3E_",
"buXkNUtbREb",
"T8bsDoCRP5m",
"iclr_2021_4D4Rjrwaw3q",
"iclr_... |
iclr_2021_TVbDOOr6hL | Variational Auto-Encoder Architectures that Excel at Causal Inference | This paper provides a generative approach for causal inference using data from observational studies. Inspired by the work of Kingma et al. (2014), we propose a sequence of three architectures (namely Series, Parallel, and Hybrid) that each incorporate their M1 and M2 models as building blocks. Each architecture is an ... | withdrawn-rejected-submissions | The authors suggest a VAE model for causal inference. The approach is motivated by CEVAE (Louizos et al., 2017) which uses a VAE to learn a latent representation of confounding between the treatment, target, and covariates. This paper goes beyond this approach and tries to design generative model architectures that enc... | val | [
"1IUHSnQWqOh",
"MqBU0zHp7Nh",
"yMHx0dcCbZ",
"n6PxveiNg_U",
"PQRkUQvPxzX",
"G4dRcxLfiWr",
"Tf8nQ5s5YNY",
"CHpTzO-uqo",
"RGkMGg5W3wr",
"_nf3URHCD5",
"zb7eDZnvFkr",
"-ErrVRQgK7p",
"tkZp4fZEQAW",
"WS66kQG4oHJ",
"TDknr1R3Hf-",
"1heHh-sxYd"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary:\n\nThis paper introduces a new VAE architecture for performing\ncausal inference. It shows superior precision on estimating\nthe heterogeneous effect and lower bias in estimating the\ntreatment effect.\n\nClarity:\n\nThe paper was fairly clearly written and straightforward to follow.\nI think a small amou... | [
7,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7
] | [
2,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_TVbDOOr6hL",
"zb7eDZnvFkr",
"n6PxveiNg_U",
"tkZp4fZEQAW",
"G4dRcxLfiWr",
"-ErrVRQgK7p",
"iclr_2021_TVbDOOr6hL",
"RGkMGg5W3wr",
"TDknr1R3Hf-",
"iclr_2021_TVbDOOr6hL",
"1IUHSnQWqOh",
"Tf8nQ5s5YNY",
"1heHh-sxYd",
"_nf3URHCD5",
"WS66kQG4oHJ",
"iclr_2021_TVbDOOr6hL"
] |
iclr_2021_1cEEqSp9kXV | Constructing Multiple High-Quality Deep Neural Networks: A TRUST-TECH Based Approach | The success of deep neural networks relied heavily on efficient stochastic gradient descent-like training methods. However, these methods are sensitive to initialization and hyper-parameters.
In this paper, a systematical method for finding multiple high-quality local optimal deep neural networks from a single t... | withdrawn-rejected-submissions | This work proposes a method to discover neighboring local optima around an existing one. Reviewers all found the idea interesting but argued that the paper needed more work. In particular, some of the claims are too informal or not sufficiently supported and the reviewers found the key section were difficult to follow.... | train | [
"4pkewgbl6up",
"qIdnaGNyr1",
"lmYIgFso-7o",
"T6mzz23SY8",
"JjZV43QMvs",
"3mc9gwHldq2",
"jRHzlXy2whB",
"iM04DNbW11f",
"pQ_gTVuAtUO",
"eeBWynzxuto",
"isuc3f_nW4k"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"##########################################################################\n\nSummary:\n \nThe paper describes a technique based on the modified generalized gradient descent for finding multiple high-quality local optima of deep neural networks. The search method does not require re-initialization of the model par... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_1cEEqSp9kXV",
"iclr_2021_1cEEqSp9kXV",
"qIdnaGNyr1",
"iclr_2021_1cEEqSp9kXV",
"4pkewgbl6up",
"4pkewgbl6up",
"eeBWynzxuto",
"isuc3f_nW4k",
"iclr_2021_1cEEqSp9kXV",
"iclr_2021_1cEEqSp9kXV",
"iclr_2021_1cEEqSp9kXV"
] |
iclr_2021_VbCVU10R7K | Offline policy selection under Uncertainty | The presence of uncertainty in policy evaluation significantly complicates the process of policy ranking and selection in real-world settings. We formally consider offline policy selection as learning preferences over a set of policy prospects given a fixed experience dataset. While one can select or rank policies bas... | withdrawn-rejected-submissions | The review team appreciated the new Bayesian perspective offered by the submission, which lends itself well to selection and ranking, though some of them were still not convinced by the motivation (including in the private post-rebuttal discussion, R3). The reviewers also identified many points for improvement. The pap... | train | [
"0UoDJeBJvxC",
"8B2e1HLLupe",
"77r3LMVmNOd",
"iE4_OfgjvVU",
"5o91WwbyL6T",
"9Bmv9QjvWh",
"7E7iM8RDv4"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"## Review\n\nGiven as set of pre-specified policies, this paper proposes a Bayesian method to estimate the posterior distribution of their average values, by estimating posterior distributions of their discounted stationary distribution ratios. These posterior distributions are used for off-policy evaluation in va... | [
5,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_VbCVU10R7K",
"0UoDJeBJvxC",
"iclr_2021_VbCVU10R7K",
"9Bmv9QjvWh",
"7E7iM8RDv4",
"iclr_2021_VbCVU10R7K",
"iclr_2021_VbCVU10R7K"
] |
iclr_2021_D51irFX8UOG | HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving | Humans learn compositional and causal abstraction, \ie, knowledge, in response to the structure of naturalistic tasks. When presented with a problem-solving task involving some objects, toddlers would first interact with these objects to reckon what they are and what can be done with them. Leveraging these concepts, th... | withdrawn-rejected-submissions | This paper proposes a new task domain for learning-based AI agents, HALMA, a game that is designed to bring together multiple areas of research in AI. Perception, in the form of recognition of MNIST digits, learning mathematics - in the form of arithmetic operations on the natural numbers, and navigation and planning. ... | train | [
"tR_0QATSCCd",
"EfH8MJ9zJYo",
"W4nG-r29Lww",
"YtTOXCJ2wjd",
"nFW-iqPwtwv",
"lyI1zf_wG7F",
"7rvCDtDqa2Z",
"LS6fu6Kz1im",
"UkVykLiDsYz",
"EeNqakOkvvJ",
"cL-P47EMmsV",
"gKoYGvXAhPw",
"QoG29Xhmjta",
"1hXNtUrhPzQ",
"smGYZL1Mvl",
"jS9tykHuQj1"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"---Post rebuttal---\n\nThank you for the detailed response. Overall, I think the proposed work provides a valuable benchmark for testing generalization ability of RL agents. However, I agree with R3 regarding the writing being dense/difficult to follow. I keep my rating unchanged (Weak Accept).\n\n----\n\nThis wor... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
2
] | [
"iclr_2021_D51irFX8UOG",
"nFW-iqPwtwv",
"EeNqakOkvvJ",
"jS9tykHuQj1",
"YtTOXCJ2wjd",
"gKoYGvXAhPw",
"tR_0QATSCCd",
"1hXNtUrhPzQ",
"iclr_2021_D51irFX8UOG",
"LS6fu6Kz1im",
"QoG29Xhmjta",
"smGYZL1Mvl",
"iclr_2021_D51irFX8UOG",
"iclr_2021_D51irFX8UOG",
"iclr_2021_D51irFX8UOG",
"iclr_2021_D... |